\section{Introduction} Second order elliptic and parabolic operators with unbounded coefficients have received a great deal of attention because of their analytical interest as well as their applications to stochastic analysis, both in the autonomous and, more recently, in the nonautonomous case. Due to these applications, much of the work has been done in spaces of continuous and bounded functions and in the $L^p$-spaces with respect to {\em the invariant measure}, in the autonomous case, and to {\em evolution systems of measures}, in the nonautonomous one. The existence of a unique classical solution for homogeneous parabolic Cauchy problems associated with operators with unbounded coefficients in spaces of continuous and bounded functions, or equivalently the existence of a {\em semigroup} $T(t)$ or an {\em evolution operator} $G(t,s)$, respectively, can be shown under mild assumptions on the growth of the coefficients. We refer the reader to \cite{MetPalWac1,BerLorbook,LorSurvey} and their bibliographies for more information. On the other hand, the analysis in the $L^p$ setting with respect to the Lebesgue measure has an independent analytical interest and turns out to be much more difficult than the analysis in the space of continuous and bounded functions or in $L^p$-spaces with respect to the invariant measure (resp. evolution system of measures). Even in the autonomous case, the Cauchy problem may not be well posed in $L^p(\Rd, dx)$ if the coefficients are unbounded, unless they satisfy very restrictive assumptions. For instance, in the $1$-dimensional case very simple operators, such as $D^2-|x|^\varepsilon x D$, with $\varepsilon>0$, do not generate any semigroup in $L^p(\R, dx)$; in this situation, the lack of a potential term plays a crucial role. See \cite{AreMetPal06Scr} for further examples and comments. 
Since most of the results obtained so far concern single equations, the aim of this paper is to study parabolic systems with unbounded coefficients, coupled in the zero and first order terms, in the Lebesgue space $L^p(\Rd,\R^m)$. We consider the Cauchy problem \begin{equation} \left\{ \begin{array}{lll} D_t\uu(t,x)=(\boldsymbol{\mathcal A}(t)\uu)(t,x),\quad\quad & t>s\in I, &x\in\R^d,\\[1mm] \uu(s,x)=\f(x), &&x\in\R^d \end{array} \right. \label{eq:cauchy_problem_system} \end{equation} where $I$ is an open right-halfline or the whole $\R$ and the elliptic operators \begin{equation} \boldsymbol{\mathcal A}{\bf v}=\sum_{i,j=1}^dD_i(q_{ij}D_{j}{\bf v})+\sum_{i=1}^d B_iD_i{\bf v}+C{\bf v} \label{operat-A} \end{equation} have unbounded coefficients $q_{ij}:I\times\Rd\to \R$ and $B_i, C: I\times\Rd \to \R^{m^2}$ ($m \ge 1$). Second order elliptic and parabolic systems have already been studied in the simplest case of {\em zero order coupling}, i.e., when $B_i=b_iI_m$ (see \cite{HieLorPruRhaSch09Glo,DelLor11OnA}). The more general framework of {\em first order coupling}, i.e., uncoupled diffusion and coupled drift and potential, has been studied very recently in the space of continuous and bounded functions in \cite{AddAngLor15Cou}, where the existence of an {\em evolution operator} ${\bf G}(t,s)$ associated with $\boldsymbol{\mathcal A}(t)$ in $C_b(\Rd; \Rm)$ has been shown. Here, we take advantage of this construction and of a pointwise estimate proved in \cite{AddAngLor15Cou} to start our investigation of the properties of ${\bf G}(t,s)$ in the $L^p$ context. We refer to \cite{LunardiBook,CL} for the abstract theory of evolution operators. 
We assume that the coefficients are regular enough, namely locally $C^{\alpha/2, \alpha}$, for some $\alpha \in (0,1)$, together with the first order spatial derivatives of $q_{ij}$ and of the entries of $B_i$, for any $i,j=1, \ldots, d$, and that the matrix $Q(t,x)=[q_{ij}(t,x)]_{i,j=1, \ldots,d}$ is uniformly positive definite, see Hypotheses \ref{hyp_base}. The $L^p$ analysis is carried out under two different sets of assumptions, Hypotheses \ref{uni1} and \ref{uni2}, which we compare in Remark \ref{hypocomparison}. The two approaches give slightly different results. Indeed, under Hypotheses \ref{uni1} we deal directly with the vectorial problem. Using the pointwise estimate proved in \cite{AddAngLor15Cou} (and recalled in the Appendix), an interpolation argument and a balance condition between the growth of the potential matrix $C$ and that of the derivatives of the drift matrices $B_i$ ($i=1, \ldots,d$), we prove that the evolution operator ${\bf G}(t,s)$ extends to a bounded and strongly continuous operator in $L^p(\Rd; \Rm)$ for any $p \in [1,+\infty)$. On the other hand, when Hypotheses \ref{uni2} are satisfied, we estimate $|{\bf G}(t,s)\f|^p$ in terms of $G(t,s)|\f|^p$ for any $t>s \in I$, $p \in [p_0, +\infty)$ and some $p_0>1$. Here, $G(t,s)$ is the evolution operator which governs an auxiliary scalar problem. As a consequence of this comparison result, the boundedness of ${\bf G}(t,s)$ in $\mathcal{L}(L^p(\Rd;\Rm))$ for $p\in [p_0,+\infty)$ can be obtained as a byproduct of the boundedness of $G(t,s)$ in $\mathcal{L}(L^1(\Rd))$. Sufficient conditions for $G(t,s)$ to be bounded in $L^p$ for any $p \in [1,+\infty)$ can be found in \cite{AngLor10Com}. Notice, however, that by slightly strengthening Hypothesis \ref{uni2}(ii) we can deal with the whole scale $1<p<\infty$ rather than $p\geq p_0$, see Remark \ref{alpha}. Going further, we find conditions for the hypercontractivity of ${\bf G}(t,s)$. 
More precisely, under suitable assumptions, we prove that \begin{equation}\label{iper_intro} \|{\bf G}(t,s)\f\|_{L^q(\Rd;\Rm)}\le c\|\f\|_{L^p(\Rd;\Rm)}, \end{equation} for any $t\in (s, T]$, $T>s \in I$, $\f \in L^p(\Rd;\Rm)$, $q\geq p$ and some positive constant $c$ depending on $p,q,s$ and $T$. Actually, whenever Hypotheses \ref{uni1} are satisfied, under the same assumptions which guarantee that $L^p(\Rd,\Rm)$ is preserved by the action of ${\bf G}(t,s)$, we prove \eqref{iper_intro} for any $2 \le p \le q$. Then, arguing by duality, we establish \eqref{iper_intro} also when $1 \le p \le q \le 2$. Applying this hypercontractivity result to the scalar evolution operator $G(t,s)$ and using the pointwise estimate of $|{\bf G}(t,s)\f|^p$ in terms of $G(t,s)|\f|^p$, we provide conditions for \eqref{iper_intro} to hold for $p_0 \le p \le q$, when Hypotheses \ref{uni2} are satisfied. The hypercontractivity estimate \eqref{iper_intro}, in this generality, seems to be new even in the autonomous scalar case. Some $L^p$-$L^q$ estimates have recently been proved in \cite{IoMeSoSp14LpLq} for a special class of homogeneous operators with unbounded diffusion. Next, we prove some pointwise estimates for the spatial derivatives of ${\bf G}(t,s)\f$. Under additional assumptions, which are essentially growth conditions on the coefficients of the operator $\boldsymbol{\mathcal A}(t)$ and their derivatives, we show that there exist positive constants $c_1, c_2$ such that \begin{equation}\label{stima_grad1_intro} |D_x {\bf G}(t,s)\f|^p \le c_1 G(t,s)(|\f|^p+|D\f|^p) \end{equation} and, under more restrictive conditions, that \begin{equation}\label{lp-w1p-intro} |D_x {\bf G}(t,s)\f|^p\le c_2 (t-s)^{-\frac{p}{2}}G(t,s)|\f|^p, \end{equation} for any $t\in (s, T]$, $T>s \in I$, $\f \in C^1_c(\Rd;\Rm)$ and $p \in [p_1,+\infty)$ for some $p_1>1$. 
Now, if the scalar evolution operator $G(t,s)$ preserves $L^1(\Rd)$, estimates \eqref{stima_grad1_intro} and \eqref{lp-w1p-intro} yield that the evolution operator ${\bf G}(t,s)$ belongs to $\mathcal{L}(W^{1,p}(\Rd;\Rm))$ and to $\mathcal{L}(L^p(\Rd;\Rm),W^{1,p}(\Rd;\Rm))$, respectively. As a consequence of this fact, we show that ${\bf G}(t,s)$ is bounded from $W^{\theta_1,p}(\Rd; \Rm)$ into $W^{\theta_2,p}(\Rd;\Rm)$ for any $0 \le \theta_1 \le \theta_2 \le 1$ and any $p \ge p_1$. We believe that estimates \eqref{stima_grad1_intro} and \eqref{lp-w1p-intro} could represent a helpful tool to study the evolution operator ${\bf G}(t,s)$ in $L^p$-spaces with respect to {\em a natural extension to the vector case of evolution systems of measures}, whose definition and analysis are deferred to a future paper. Indeed, already in the scalar case (see \cite{AngLorLun12Asy, AngLor12OnI}), pointwise gradient estimates have been a key tool to study the asymptotic behaviour of the evolution operator associated with the problem and to establish some summability-improving results for such an operator in the $L^p$ spaces with respect to the tight time-dependent family of invariant measures. The last section of the paper is devoted to exhibiting some classes of operators which satisfy our assumptions. \subsection*{Acknowledgements} The authors have been partially supported by the PRIN 2010 MIUR project ``Problemi differenziali di evoluzione: approcci deterministici e stocastici e loro interazioni" and are members of G.N.A.M.P.A. of the Italian Istituto Nazionale di Alta Matematica (INdAM). L.A. and L.L. have also been supported by ``Progetto GNAMPA 2014: Equazioni ellittiche e paraboliche''. \subsection*{Notations} Functions with values in $\R^m$ are displayed in bold style. Given a function $\f$ (resp. a sequence $(\f_n)$) as above, we denote by $f_i$ (resp. $f_{n,i}$) its $i$-th component (resp. the $i$-th component of the function $\f_n$). 
By $B_b(\Rd;\Rm)$ we denote the set of all the bounded Borel measurable functions $\f:\Rd\to\Rm$. For any $k\ge 0$, $C^k_b(\R^d;\R^m)$ is the space of all the functions whose components belong to $C^k_b(\R^d)$, where the notation $C^k(\R^d)$ ($k\ge 0$) is standard and we use the subscripts ``$c$'' and ``$b$'' for spaces of compactly supported and bounded functions, respectively. Similarly, when $k\in (0,1)$, we use the subscript ``loc'' to denote the space of all $f\in C(\Rd)$ which are H\"older continuous on any compact subset of $\Rd$. We assume that the reader is familiar also with the parabolic spaces $C^{\alpha/2,\alpha}(I\times \Rd)$ ($\alpha\in (0,1)$) and $C^{1,2}(I\times \Rd)$, and we use the subscript ``loc'' with the same meaning as above. The Euclidean inner product of the vectors $x,y\in\R^d$ is denoted by $\langle x,y\rangle$. For any square matrix $M$, we denote by $M_{ij}$, ${\rm Tr}(M)$ and $M^*$ the $ij$-th entry of $M$, the trace of $M$ and the transpose of $M$, respectively. Finally, $\lambda_M$ and $\Lambda_M$ denote the minimum and the maximum eigenvalue of the (symmetric) matrix $M$. For any $k \in \N$, by $I_k$ we denote the identity matrix of size $k$. Square matrices of size $m$ are thought of as elements of $\R^{m^2}$. By $\chi_A$, $\one$ and ${\bf e}_j$ we denote the characteristic function of the set $A\subset\R^d$, the function which is identically equal to 1 in $\R^d$ and the $j$-th vector of the Euclidean basis of $\R^m$, respectively. Finally, the Euclidean open ball with centre $x_0$ and radius $R>0$ and its closure are denoted by $B_{R}(x_0)$ and $\overline B_R(x_0)$; when $x_0=0$ we simply write $B_R$ and $\overline B_R$. For any interval $J\subset \R$ we denote by $\Sigma_J$ the set $\{(t,s)\in J\times J:\;\, t>s\}$. 
\section{Preliminary results} Let $I$ be an open right-halfline (possibly $I=\R$) and $\{\boldsymbol{\mathcal A}(t)\}_{t \in I}$ be the family of second order uniformly elliptic operators defined in \eqref{operat-A}. In this paper we study the Cauchy problem \eqref{eq:cauchy_problem_system} when $\f \in L^p(\Rd;\Rm)$ and $s\in I$, under the following standing assumptions. \noindent \begin{hyp} \label{hyp_base} \begin{enumerate}[\rm (i)] \item The matrices $Q=[q_{ij}]_{i,j=1, \ldots,d}$, $B_i$ $(i=1,\ldots,d)$ and $C$ are symmetric. Further, $q_{ij}, (B_i)_{lk}\in C^{\alpha/2,1+\alpha}_{\rm loc}(I\times \Rd)$ and $C_{lk}\in C^{\alpha/2,\alpha}_{\rm loc}(I\times\R^d)$ for any $i,j=1,\ldots,d$ and $l,k =1,\ldots,m$; \item the matrix $Q$ is uniformly elliptic, i.e., $\nu_0:=\inf_{I\times \Rd}\lambda_Q (t,x)>0$ where \begin{equation*} \lambda_Q(t,x):=\min\{\langle Q(t,x)\xi,\xi\rangle:\, \xi \in \Rd,\ |\xi|=1\},\qquad\;\,t\in I, \;\, x\in\R^d \end{equation*} is the minimum eigenvalue of $Q(t,x)$. \end{enumerate} \end{hyp} Besides Hypotheses \ref{hyp_base} we consider one of the following two sets of assumptions. 
\noindent \begin{hyp}\label{uni1} \begin{enumerate}[\rm(i)] \item The function ${\mathcal K}_\eta: I\times\R^d\to \R$, defined by \begin{equation} {\mathcal K}_\eta=\sum_{i,j=1}^d(Q^{-1})_{ij}[\langle B_i\eta,\eta\rangle\langle B_j\eta,\eta\rangle-\langle B_i\eta,B_j\eta\rangle]-4\langle C\eta,\eta\rangle, \label{eq:positive condition} \end{equation} is nonnegative in $I \times \Rd$, for any $\eta \in \partial B_1$; \item for any bounded interval $J\subset I$ there exist a constant $\lambda_J$ and a positive ({\em Lyapunov}) function $\varphi_J\in C^2(\R^d)$, blowing up as $|x|\to +\infty$, such that \begin{equation*} \sup_{\eta \in \partial B_1}\sup_{(t,x)\in J\times\R^d}\bigl((\A_{\eta}(t)\varphi_J)(x)-\lambda_J\varphi_J(x)\bigr)<+\infty, \end{equation*} where \begin{equation}\label{defAeta} \A_{\eta}={\rm div}(Q D_x)+\langle b_{\eta},D_x\rangle,\qquad (b_{\eta})_i=\langle B_i\eta,\eta\rangle. \end{equation} \end{enumerate} \end{hyp} Condition \ref{uni1}(i) has already been used in \cite{KreMaz12Max} in the case of bounded coefficients. 
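A simple model case may help to illustrate Hypothesis \ref{uni1}(ii) (the choices of $Q$, $B_i$ and $\varphi_J$ below are made only for the sake of this illustration). Take $Q=I_d$ and $B_i=-x_i|x|^2I_m$ for $i=1,\ldots,d$, so that $(b_\eta)_i(x)=-x_i|x|^2$ for every $\eta\in\partial B_1$. Then, the function $\varphi_J(x)=1+|x|^2$ is positive, blows up as $|x|\to+\infty$ and satisfies
\begin{equation*}
(\A_{\eta}(t)\varphi_J)(x)=2d-2|x|^4\le 2d,\qquad\;\,(t,x)\in J\times\R^d,
\end{equation*}
so that Hypothesis \ref{uni1}(ii) holds with $\lambda_J=0$, uniformly with respect to $\eta\in\partial B_1$. If, in addition, $C\le 0$, then ${\mathcal K}_\eta=-4\langle C\eta,\eta\rangle\ge 0$ (the terms in square brackets in \eqref{eq:positive condition} vanish, since each $B_i$ is a scalar multiple of $I_m$) and Hypothesis \ref{uni1}(i) is satisfied as well.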
\noindent \begin{hyp}\label{uni2} \begin{enumerate}[\rm (i)] \item There exist functions $b_i:I\times \Rd \to \R$ and $\tilde{B}_i:I \times \R^d\to \R^{m^2}$ such that $B_i=b_iI_m+\tilde B_i$ for any $i=1, \ldots,d$, a constant $\sigma>0$ and a function $\xi:I \to (0,+\infty)$ such that \[ |(\tilde{B}_{i})_{jk}(t,x)|\leq \xi(t)\lambda_Q^{\sigma}(t,x), \qquad\;\, (t,x)\in I\times \Rd, \] for any $j,k=1,\ldots,m$ and $i=1,\ldots,d$; \item for any bounded interval $J\subset I$ there exists $\beta \geq 1/4$ such that \begin{equation}\label{HpJ} H_{\beta,J}:=\sup_{J\times\Rd}(\Lambda_C+\beta dm^2\xi^2\lambda_Q^{2\sigma-1})<+\infty; \end{equation} \item for any bounded interval $J\subset I$ there exist $\lambda_J>0$ and a positive function $\varphi_J\in C^2(\R^d)$ blowing up as $|x|\to +\infty$ such that $\sup_{J\times\R^d}(\A\varphi_J-\lambda_J\varphi_J)<+\infty$, where \begin{equation}\label{defA} \A={\rm div}(Q D_x)+\langle b,D_x\rangle,\qquad b=(b_1,\ldots,b_d). \end{equation} \end{enumerate} \end{hyp} \begin{rmk}\label{luftansa}{\rm Hypothesis $\ref{uni1}$(i) can be replaced with the weaker condition \begin{equation}\label{weak} \inf_{\eta\in \partial B_1}\inf_{J \times \Rd}\mathcal K_\eta >-\infty \end{equation} for any bounded interval $J \subset I$. Indeed, in this latter case, for any bounded interval $J \subset I$ there exists a positive constant $c_J$ such that $\mathcal K_\eta\geq -c_J$ in $J\times\R^d$ for any $\eta \in\partial B_1$. Notice that $\uu$ is a classical solution of the Cauchy problem \eqref{eq:cauchy_problem_system} if and only if the function $\vv$, defined by $\vv(t,x):=e^{-c_J(t-s)/4}\uu(t,x)$ for any $(t,x)\in (s,+\infty)\times \Rd$, is a classical solution of the problem \begin{equation}\label{weak_pb} \left\{ \begin{array}{ll} D_t\vv(t,x)=\displaystyle\left (\boldsymbol{\mathcal A}(t)-\frac{c_J}{4}\right )\vv(t,x), \qquad\,&(t,x)\in (s,+\infty)\times \Rd \\[2mm] \vv(s,x)=\f(x), \qquad &x \in \Rd. \end{array} \right. 
\end{equation} The elliptic operator in problem \eqref{weak_pb} satisfies Hypothesis $\ref{uni1}$(i) and, clearly, the uniqueness of $\vv$ is equivalent to the uniqueness of $\uu$.} \end{rmk} \begin{rmk}\label{hypocomparison} {\rm A comparison between Hypotheses \ref{uni1} and \ref{uni2} is in order. First of all, notice that, writing the matrices $B_i$ as in \ref{uni2}(i), the function ${\mathcal K}_\eta$ depends only upon the matrices $\tilde{B}_i$, because the diagonal part cancels: indeed, if $B_i=b_iI_m+\tilde B_i$ and $|\eta|=1$, then \[ \langle B_i\eta,\eta\rangle\langle B_j\eta,\eta\rangle-\langle B_i\eta,B_j\eta\rangle=\langle \tilde B_i\eta,\eta\rangle\langle \tilde B_j\eta,\eta\rangle-\langle \tilde B_i\eta,\tilde B_j\eta\rangle. \] The two sets of hypotheses are independent in general: \ref{uni2}(i) and (ii) imply \ref{uni1}(i), whereas \ref{uni1}(ii) is stronger than \ref{uni2}(iii). Indeed, assuming \ref{uni2}(i), it is easily seen that \[ \sum_{i,j=1}^d(Q^{-1})_{ij}[\langle B_i\eta,\eta\rangle\langle B_j\eta,\eta\rangle-\langle B_i\eta,B_j\eta\rangle] \] is nonpositive and of order $\lambda_Q^{2\sigma-1}$. This fact together with \ref{uni2}(ii) implies \ref{uni1}(i) (taking Remark \ref{luftansa} into account). On the other hand, assuming \ref{uni2}(i), the function ${\mathcal K}_\eta$ can be of order less than $\lambda_Q^{1-2\sigma}$. For instance, assume $d=m=2$, $Q={\rm diag}(\lambda_Q,\Lambda_Q)$, $B_1=b_1I_2$ diagonal and $\tilde{B}_2\neq 0$. Then, we have \[ {\mathcal K}_\eta = \Lambda_Q^{-1}(\langle\tilde{B}_2\eta,\eta\rangle^2-|\tilde{B}_2\eta|^2)-4\langle C\eta,\eta\rangle \geq 0 \quad {\rm if} \quad \Lambda_C+2\xi^2 \lambda_Q^{2\sigma}\Lambda_Q^{-1}<+\infty , \] which is weaker than \eqref{HpJ} if $\lambda_Q=o(\Lambda_Q)$. Concerning \ref{uni1}(ii) and \ref{uni2}(iii), the latter requires the existence of a Lyapunov function for {\em one} decomposition of each drift matrix, while the former requires the existence of a Lyapunov function for {\em any} decomposition $B_i = b_\eta I_m + \tilde{B}_{\eta,i}$, $\eta \in \partial B_1$. }\end{rmk} We start by recalling some known results used in the sequel and proved in \cite{AddAngLor15Cou}. 
The evolution operator on $C_b(\Rd;\Rm)$ which gives a solution of problem \eqref{eq:cauchy_problem_system} is obtained as the limit of the sequence of the evolution operators related to the following Cauchy-Dirichlet problem in $I\times B_n$: \begin{equation}\label{prob_approx} \left\{ \begin{array}{lll} D_t\uu_n(t,x)=(\boldsymbol{\mathcal A}(t)\uu_n)(t,x), \qquad\quad& t>s, \,x\in B_n,\\ \uu_n(t,x)={\bf 0}, & t>s,\,x\in\partial B_n,\\ \uu_n(s,x)=\f(x), & x\in\overline{B_n}. \end{array} \right. \end{equation} We shall also be concerned with the Cauchy-Neumann problem in $I\times B_n$: \begin{equation}\label{prob_approx_Neu} \left\{ \begin{array}{lll} D_t\uu_n(t,x)=(\boldsymbol{\mathcal A}(t)\uu_n)(t,x), \qquad\quad& t>s, \,x\in B_n,\\[1mm] \displaystyle\frac{\partial\uu_n}{\partial\nu}(t,x)={ \bf 0}, & t>s,\,x\in\partial B_n,\\[1mm] \uu_n(s,x)=\f(x), & x\in\overline{B_n}, \end{array} \right. \end{equation} where $\nu$ denotes the unit exterior normal vector to $\partial B_n$. Throughout the paper, we denote by $\G_n^{\mathcal D}(t,s)$ and $\G_n^{\mathcal N}(t,s)$ the Dirichlet and Neumann evolution operators associated with problems \eqref{prob_approx}, \eqref{prob_approx_Neu} in $C_b(B_n;\Rm)$. \begin{prop}\label{recall} Under Hypotheses $\ref{hyp_base}$ and $\ref{uni1}$ $($resp. $\ref{uni2})$, for any $\f \in C_b(\Rd;\Rm)$, problem \eqref{eq:cauchy_problem_system} admits a unique classical solution $\uu$ which is bounded in the strip $[s,T]\times\R^d$ for any $T>s \in I$. Setting ${\bf G}(t,s)\f:= \uu(t, \cdot)$ for any $t>s$ and $\f \in C_b(\Rd;\Rm)$, $\G(t,s)$ is a bounded linear operator in $C_b(\Rd;\Rm)$ and \begin{equation} \|\G(t,s)\f\|_{\infty}\leq \gamma(t-s)\|\f\|_{\infty}, \qquad\;\,t\in (s,T), \label{eq:norma_condition_classical solution} \end{equation} where $\gamma(r)=1$ $($resp.\footnote{Here $H_{1/4,[s,T]}$ is the constant in \eqref{HpJ}.} $\gamma(r)=e^{H_{1/4,[s,T]}r})$ for any $r>0$. 
Moreover, for any $s \in I$ and $\f\in C_b(\Rd;\Rm)$, both $\G_n^{\mathcal N}(\cdot,s)\f$ and $\G_n^{\mathcal D}(\cdot,s)\f$ converge to $\G(\cdot,s)\f$ in $C_{\rm loc}^{1,2}((s,+\infty)\times \Rd;\R^m)$. \end{prop} The uniqueness of the solution of problem \eqref{eq:cauchy_problem_system} shows that the family $\{{\bf G}(t,s)\}_{t\geq s\in I}$ is an evolution operator in $C_b(\Rd;\R^m)$. \begin{rmk}\label{alpha}{\rm Notice that working in $L^p$ is allowed provided that Hypothesis \ref{uni2}(ii) holds for some $\beta\geq [4(p-1)]^{-1}$, as we shall see in the proof of Proposition \ref{stimapuntLp} below. We assume $\beta\geq 1/4$ in order to encompass the case $p=2$: indeed, estimate \eqref{eq:norma_condition_classical solution} has been obtained as a consequence of a pointwise estimate for $|\uu|^2$ in terms of the solution of a suitable scalar problem. Moreover, we point out that if \eqref{HpJ} holds with $\lambda_Q^\alpha$ in place of $\lambda_Q^{2\sigma -1}$ for some $\alpha<2\sigma-1$, then every $\beta>0$ is allowed and we can extend our results to the whole scale of $p>1$. We shall not mention this extension any further. }\end{rmk} Since in this paper we are interested in studying the evolution operator $\G(t,s)$ in the $L^p(\Rd;\Rm)$ setting under Hypotheses $\ref{uni2}$, we extend the pointwise estimate just mentioned to $|\uu|^p$ for any $p\in [1+\frac{1}{4\beta},+\infty)$. \begin{prop}\label{stimapuntLp} Assume that Hypotheses $\ref{uni2}$ hold true; then, for every bounded interval $J \subset I$ and $p\geq 1+\frac{1}{4\beta}$, there exists a positive constant $K_{J}$ such that \begin{equation}\label{pointwise} |(\G(t,s)\f)(x)|^p\leq e^{pK_{J}(t-s)} (G(t,s)|\f|^p)(x), \end{equation} for any $(t,s)\in \Sigma_J$, $x \in \Rd$ and $\f \in C_b(\Rd;\R^m)$, where $G(t,s)$ denotes the evolution operator in $C_b(\Rd)$ associated with the operator $\mathcal A$ defined in \eqref{defA}. 
Here, $K_{J}=H_{1/4,J}$ if $p \ge 2$, whereas $K_{J}= H_{\beta,J}$ if $p \in [1+\frac{1}{4\beta},2)$. \end{prop} \begin{proof} Estimate \eqref{pointwise} has already been proved when $p=2$ in \cite[Prop. 2.8]{AddAngLor15Cou} with $K_{J}=H_{1/4,J}$; for a general $p$, the proof is similar, so we limit ourselves to sketching it. Moreover, it suffices to prove \eqref{pointwise} only for $p\in [1+\frac{1}{4\beta},2)$. Indeed, if $p>2$, the integral representation formula of $G(t,s)|\f|^2$ in terms of the transition kernels associated with $\mathcal A$ in $C_b(\Rd)$ (see \cite[Prop. 2.4]{KunLorLun09Non}) and the Jensen inequality yield \[ |\G(t,s)\f|^p\le (e^{2H_{{1/4},J}(t-s)}G(t,s)|\f|^2)^{p/2}\le e^{pH_{1/4,J}(t-s)}G(t,s)|\f|^p \] for any $(t,s) \in \Sigma_J$. Hence, \eqref{pointwise} follows. Now, let $J \subset I$ be a bounded interval. Fix $p\in [1+\frac{1}{4\beta},2)$, $\varepsilon>0$ and, for brevity, let $H= H_{\beta,J}$ be the constant in Hypothesis $\ref{uni2}$(ii) and $\uu=\G(\cdot,s)\f$. We set $w_\varepsilon=(|\uu|^2+\varepsilon)^{p/2}$ and \[ u_\varepsilon(t,\cdot)=e^{-pH(t-s)}w_\varepsilon(t,\cdot)- G(t,s)(|\f|^2+\varepsilon)^{p/2},\qquad\;\, t>s \in I . \] The function $u_\varepsilon$ belongs to $C^{1,2}((s,+\infty)\times \Rd)\cap C_b([s,+\infty)\times \Rd)$ and satisfies \begin{align*} D_t u_\varepsilon- \A u_\varepsilon=p e^{-pH(t-s)} w_\varepsilon^{1-2/p} \bigg [&\sum_{i=1}^d\langle\uu,\tilde{B}_i D_i \uu \rangle+\langle\uu, C \uu \rangle -\sum_{i,j=1}^d q_{ij}\langle D_i\uu,D_j\uu\rangle\\ &+(2-p)(|\uu|^2+\varepsilon)^{-1}\sum_{i,j=1}^d q_{ij}\langle\uu, D_i\uu\rangle \langle \uu, D_j \uu\rangle -H (|\uu|^2+\varepsilon)\bigg ] \end{align*} in $(s,\infty) \times \Rd$. 
Since \begin{align} \sum_{i,j=1}^dq_{ij}\langle\uu,D_i\uu\rangle\langle\uu,D_j\uu\rangle \le &\sum_{h,k=1}^m|u_h||u_k||\langle QD_xu_h,D_x u_k\rangle|\le \sum_{h,k=1}^m|u_h||u_k||Q^{1/2}D_x u_h||Q^{1/2}D_x u_k| \nonumber \\ \label{star} =&\left (\sum_{h=1}^m|u_h||Q^{1/2}D_x u_h|\right )^2 \\ \nonumber \le & \left (\sum_{h=1}^m|u_h|^2\right )\left (\sum_{h=1}^m|Q^{1/2}D_x u_h|^2\right )=|\uu|^2\sum_{i,j=1}^dq_{ij}\langle D_i\uu,D_j\uu\rangle, \end{align} by the assumptions it follows that \begin{align}\label{form1} D_t u_\varepsilon- \A(t) u_\varepsilon\le & p e^{-pH(t-s)} w_\varepsilon^{1-\frac{2}{p}}\left[\sum_{i=1}^d\langle\uu,\tilde{B}_i D_i \uu\rangle+(1-p)\lambda_Q |D_x \uu|^2+(\Lambda_C-H)|\uu|^2\right] \end{align} in $(s, \infty) \times \Rd$. The Young and the Cauchy-Schwarz inequalities and Hypotheses $\ref{uni2}$(i) show that \begin{align} \sum_{i=1}^d\langle\uu, \tilde{B}_i D_i \uu \rangle+(1-p)\lambda_Q |D_x \uu|^2&\le m\xi\lambda_Q^{\sigma}|\uu|\sum_{i=1}^d|D_i\uu|+(1-p)\lambda_Q |D_x \uu|^2\notag\\ &\le (a dm^2\xi^2+1-p)\lambda_Q|D_x\uu|^2+\frac{\lambda_Q^{2\sigma-1}}{4a}|\uu|^2 \label{form2} \end{align} in $J \times \Rd$, where $a=a(t)$ is an arbitrary positive function. Putting together \eqref{form1}, \eqref{form2} and choosing $a=(p-1)(dm^2\xi^2)^{-1}$ yields that \[ D_t u_\varepsilon- \A u_\varepsilon\le p e^{-pH(t-s)} w_\varepsilon^{1-2/p} \left[\frac{dm^2\xi^2}{4(p-1)}\lambda_Q^{2\sigma-1}+\Lambda_C-H\right]|\uu|^2\le 0 \] in $((s, \infty)\cap J)\times \Rd$. The maximum principle in \cite[Prop. 2.1]{KunLorLun09Non} yields that $u_\varepsilon \le 0$ in $((s, \infty)\cap J)\times \Rd$, i.e., \[ (|\uu(t,\cdot)|^2+\varepsilon)^{p/2} \le e^{pH(t-s)}G(t,s)(|\f|^2+\varepsilon)^{p/2},\quad\;\, (t,s) \in \Sigma_J. \] Letting $\varepsilon \to 0^+$ we get \eqref{pointwise} with $K_{J}= H_{\beta, J}$. 
\end{proof} \section{The evolution operator ${\bf G}(t,s)$ in $L^p(\Rd;\Rm)$} As already stressed in the introduction, even in the autonomous scalar case, the Cauchy problem \eqref{eq:cauchy_problem_system} is not well posed in $L^p(\Rd, dx)$ if the coefficients of $\boldsymbol{\mathcal A}$ are unbounded, unless they satisfy suitable assumptions. Actually, in some cases the Lebesgue space $L^p(\Rd, dx)$ is not preserved by the action of the evolution operator associated with $\boldsymbol{\mathcal A}$. For example, compactness of the evolution operator in $C_b(\Rd)$ implies that $L^p(\Rd, dx)$ is not preserved by its action (see, e.g., \cite{MetPalWac2, AngLor10Com}). Here, we are interested in studying properties of the evolution operator $\G(t,s)$ in $L^p(\Rd;\Rm)$ when this space is preserved by its action and when an estimate like \begin{equation}\label{est_p} \|\G(t,s)\f\|_{L^p(\Rd;\Rm)}\le c_p(t-s) \|\f\|_{L^p(\Rd;\Rm)} \end{equation} holds true for some function $c_p:[0,+\infty)\to (0,+\infty)$. In what follows we consider alternatively Hypotheses $\ref{uni1}$ and $\ref{uni2}$, under additional assumptions. See also Remark \ref{alpha} in connection with Theorem \ref{th2} and Proposition \ref{def_Lp}. We begin by considering the case when Hypotheses \ref{uni1} are satisfied. Here, in order to use a duality argument we introduce the following conditions. 
\noindent \begin{hyp}\label{uni11} There exists a function $\kappa:I\times\Rd\to\R$, bounded from above by a constant $\kappa_0$, such that: \begin{enumerate}[\rm(i)] \item the function $\tilde{{\mathcal K}}_\eta:I\times \Rd\to \R$ defined by \begin{equation*} \tilde{{\mathcal K}}_\eta={\mathcal K}_\eta+4\sum_{k=1}^d\langle D_k B_k\eta,\eta\rangle+4\kappa , \end{equation*} where ${\mathcal K}_\eta$ is defined in \eqref{eq:positive condition}, is nonnegative in $I \times \Rd$ for any $\eta \in \partial B_1$; \item for any bounded interval $J\subset I$ there exist a constant $\lambda_J$ and a positive function $\varphi_J\in C^2(\R^d)$, blowing up as $|x|\to +\infty$, such that \begin{equation*} \sup_{\eta \in \partial B_1}\sup_{(t,x)\in J\times\R^d}\Bigl((\tilde{\A}_{\eta}(t) \varphi_J)(x)-\lambda_J\varphi_J(x)\Bigr)<+\infty, \end{equation*} where \begin{equation*} \tilde{\A}_{\eta}={\rm div}(Q D_x)-\langle b_{\eta},D_x\rangle+2\kappa \end{equation*} and $b_\eta$ is defined in \eqref{defAeta}. \end{enumerate} \end{hyp} \begin{rmk} {\rm The same arguments as in Remark \ref{luftansa} show that the condition $\tilde{\mathcal K}_{\eta}\ge 0$ in $J\times\Rd$ can be replaced with the weaker condition $\inf_{\eta\in \partial B_1}\inf_{J \times \Rd}\tilde{\mathcal K}_{\eta} >-\infty$ for any bounded interval $J \subset I$. } \end{rmk} \begin{thm}\label{th1} Assume that Hypotheses $\ref{uni1}$ hold true. If for some interval $J\subset I$ there exists a positive constant $L_J$ such that \begin{equation}\label{p-2-infty} \Lambda_{2C- \sum_{i=1}^d D_iB_i}(t,x) \le L_J, \quad\;\,(t,x)\in J \times \Rd, \end{equation} then estimate \eqref{est_p} is satisfied for any $(t,s)\in \Sigma_J$, $\f \in C_c(\Rd;\Rm)$ and $p \in [2,+\infty)$ with $c_p(r)=e^{rL_J/p}$. In addition, if Hypotheses $\ref{uni11}$ are satisfied, then estimate \eqref{est_p} holds also for $p \in [1,2)$ with $c_p(r)=e^{r(L_J+ \kappa_0(p'-2))/p'}$, $r \ge 0$ and $p'=p/(p-1)$. 
\end{thm} \begin{proof} Let us fix $s \in J$, $\f \in C_c(\Rd;\Rm)$ and for any $n\in\N$ consider the classical solution $\uu_n:=\G_n(\cdot,s)\f=\G_n^{\mathcal D}(\cdot,s)\f$ of the Cauchy-Dirichlet problem \eqref{prob_approx}. From Proposition \ref{recall}, $\G_n(\cdot,s)\f$ converges pointwise to $\G(\cdot,s)\f$ as $n \to +\infty$ and \begin{equation}\label{infty1} \|\G_n(t,s)\f\|_\infty \le \|\f\|_\infty,\qquad\,\, t \in (s,+\infty). \end{equation} Let us prove that estimate \eqref{est_p} holds true for $p=2$ with $\G(t,s)$ replaced by $\G_n(t,s)$ and some positive function $c$ independent of $n$. To ease the notation, we use $\|\cdot\|_p$ (resp. $\|\cdot\|_{p,n}$) in place of $\|\cdot\|_{L^p(\Rd;\Rm)}$ (resp. $\|\cdot\|_{L^p(B_n;\Rm)}$). To this aim, first observe that from the symmetry of $B_i$ it follows that $2\langle \vv, B_i D_i \vv\rangle={\rm Tr}(B_i D_i (\vv\otimes \vv))$ for any smooth function $\vv: \Rd\to \Rm$ and $i=1,\ldots,d$. Then, multiplying the differential equation $D_t \uu_n= \boldsymbol{\mathcal A}(t)\uu_n$ by $\uu_n$ and integrating by parts in $B_n$, we get \begin{align*} D_t\|\uu_n(t,\cdot)\|_{2,n}^2 =&\, 2\int_{B_n}\langle \uu_n(t,\cdot),(\boldsymbol{\mathcal A}(t)\uu_n)(t,\cdot) \rangle dx\\ =&-2 \int_{B_n}\langle Q(t,\cdot) D_x\uu_n(t,\cdot),D_x\uu_n(t,\cdot) \rangle dx -\sum_{i=1}^d\int_{B_n}\langle (D_i B_i)(t,\cdot) \uu_n(t,\cdot),\uu_n(t,\cdot)\rangle dx \\ &+ 2\int_{B_n} \langle C(t,\cdot)\uu_n(t,\cdot),\uu_n(t,\cdot)\rangle dx. \end{align*} Thus, from Hypotheses $\ref{hyp_base}$(ii) and \eqref{p-2-infty} we deduce that \begin{align*} D_t\|\uu_n(t,\cdot)\|_{2,n}^2 \le& L_J \|\uu_n(t,\cdot)\|_{2,n}^2, \end{align*} whence $\|\uu_n(t,\cdot)\|_{2,n}^2=\|\G_n(t,s)\f\|_{2,n}^2\le e^{L_J(t-s)}\|\f\|^2_2$, for any $(t,s)\in \Sigma_J$ and any $n \in \N$. 
This latter inequality together with estimate \eqref{infty1} and the Riesz-Thorin interpolation theorem yields \begin{equation*} \|\G_n(t,s)\f\|_{p,k}\le e^{p^{-1}L_J(t-s)}\|\f\|_p \end{equation*} for any $(t,s)\in \Sigma_J$, $p \in [2,+\infty)$ and $k, n \in \N$ with $k\le n$. Since $\G_n(t,s)\f$ converges pointwise to $\G(t,s)\f$ in $\Rd$ as $n \to +\infty$, Fatou's lemma yields that $\|\G(t,s)\f\|_{p,k}\le e^{p^{-1}L_J(t-s)}\|\f\|_p$, for any $k \in \N$. Letting $k \to +\infty$ in the previous inequality and using Fatou's lemma again we get the first part of the claim. Now, let us suppose that Hypotheses $\ref{uni11}$ are satisfied, too. Multiplying the differential equation $(D_r-\boldsymbol{\mathcal A}(r))\G_n(r,s)\f={\bf 0}$ by $\g \in C^2_c([s,t]\times B_n;\Rm)$ and integrating by parts with respect to $r$ and $x$ in $[s,t]\times B_n$, we easily deduce that, for any $\f\in C^\infty_c(B_n;\R^m)$, the function $\vv_n(s,\cdot)=\G_n^*(t,s)\f$ is a weak solution of the backward Cauchy-Dirichlet problem \begin{equation}\label{prob_approx_dual} \left\{ \begin{array}{lll} D_s\vv_n(s,x)=-(\boldsymbol{\mathcal A}^*(s)\vv_n)(s,x), &\qquad t>s, \,x\in B_n,\\ \vv_n(s,x)={\bf 0}, &\qquad t>s,\,x\in\partial B_n,\\ \vv_n(t,x)=\f(x), &\qquad x\in\overline{B_n}, \end{array} \right. \end{equation} where \begin{equation*} \boldsymbol{\mathcal A}^*{\bf v}=\sum_{i,j=1}^dD_i(q_{ij}D_j{\bf v}) -\sum_{i=1}^d B_iD_i{\bf v}+\bigg (C- \sum_{k=1}^d D_k B_k\bigg ){\bf v} \end{equation*} for any smooth function ${\bf v}:\Rd\to \Rm$. Actually, by the duality theory developed in \cite{friedman} (see, in particular, Theorem 9.5.5), $\vv_n$ is the unique classical solution of problem \eqref{prob_approx_dual} and from Hypotheses $\ref{uni11}$ it follows that $\|\G_n^*(t,s)\f\|_\infty \le e^{\kappa_0(t-s)}\|\f\|_\infty$, for any $t>s$ and $\f$ as above (see \cite{AddAngLor15Cou} and the Appendix). 
We can then apply the arguments above to $\G^*_n(t,s)$, showing that \eqref{est_p} holds true with $\G(t,s)\f$ replaced by $\G^*(t,s)\f$ for any $p \ge 2$. Indeed, multiplying the differential equation in \eqref{prob_approx_dual} by $\vv_n$ and integrating by parts in $B_n$, we get \begin{align*} D_s\|\vv_n(s,\cdot)\|_{2,n}^2=& -2\int_{B_n}\langle \vv_n(s,\cdot),(\boldsymbol{\mathcal A}^*(s)\vv_n)(s,\cdot)\rangle dx\\ =&\,2\int_{B_n} \langle Q(s,\cdot) D_x\vv_n(s,\cdot),D_x\vv_n(s,\cdot) \rangle dx +\sum_{i=1}^d\int_{B_n}\langle (D_i B_i)(s,\cdot) \vv_n(s,\cdot),\vv_n(s,\cdot)\rangle dx\\ &- 2\int_{B_n} \langle C(s,\cdot)\vv_n(s,\cdot),\vv_n(s,\cdot)\rangle dx\\ \ge&\int_{B_n}\lambda_{\sum_{i=1}^dD_iB_i-2C}(s,\cdot)|\vv_n(s,\cdot)|^2dx . \end{align*} Since $-\lambda_A=\Lambda_{-A}$ for any symmetric matrix $A$, from \eqref{p-2-infty} it follows that \begin{equation}\label{dual} D_r\|\vv_n(r,\cdot)\|_{2,n}^2\ge -L_J\|\vv_n(r,\cdot)\|_{2,n}^2 \end{equation} for any $r \in (s,t)$ and $n \in \N$. Integrating \eqref{dual} with respect to $r$ from $s$ to $t$ and taking the final condition in \eqref{prob_approx_dual} into account, we get \[ \|\G_n^*(t,s)\f\|_{2,n}^2\le e^{L_J(t-s)}\|\f\|_2^2. \] Again, by the Riesz-Thorin theorem and the uniform estimate $\|\G_n^*(t,s)\f\|_\infty \le e^{\kappa_0(t-s)}\|\f\|_\infty$, we obtain \[ \|\G^*_n(t,s)\f\|_{p,n}\le e^{\frac{1}{p}(L_J+\kappa_0(p-2))(t-s)}\|\f\|_p, \] for any $(t,s)\in \Sigma_J$ and $p \in [2,+\infty)$. Arguing as above and letting $n \to +\infty$ in the previous inequality we get \begin{equation}\label{est_p_n_dual} \|\G^*(t,s)\f\|_{p}\le e^{\frac{1}{p}(L_J+\kappa_0(p-2))(t-s)}\|\f\|_p \end{equation} for the same values of $t,s$ and $p$. Now, fix $p \in [1,2)$ and $\f \in C_c(\Rd;\Rm)$. 
Then, from \eqref{est_p_n_dual} \begin{align*} \|\G(t,s)\f\|_{p}=& \sup\left\{\int_{\Rd}\langle\G(t,s)\f,\g\rangle dx: \g \in C^\infty_c(\Rd;\Rm),\,\,\|\g\|_{p'}\le 1\right\}\\[1mm] \le & \|\f\|_p \sup\{\|\G^*(t,s)\g\|_{p'}: \g \in C^\infty_c(\Rd;\Rm),\,\,\|\g\|_{p'}\le 1\}\\[1mm] \le & e^{\frac{1}{p'}(L_J+ \kappa_0(p'-2))(t-s)}\|\f\|_p \end{align*} for any $(t,s)\in\Sigma_J$, which completes the proof. \end{proof} The case when the pointwise estimate \eqref{pointwise} holds is much simpler. Indeed, estimate \eqref{est_p} can be obtained just requiring conditions on the scalar evolution operator $G(t,s)$. As an immediate consequence of estimate \eqref{pointwise} we get the following \begin{thm}\label{th2} Assume that Hypotheses $\ref{uni2}$ hold true and fix $p \in [1+\frac{1}{4\beta},+\infty)$. If $G(t,s)$ preserves $L^1(\Rd)$ and satisfies \eqref{est_p} with $p=m=1$ and $c_1=\tilde{c}_1$, then estimate \eqref{est_p} holds true for any $(t,s)\in \Sigma_J$ and $\f\in C_c(\Rd;\Rm)$ with $c_p(r)=e^{K_J r}\tilde{c}_1(r)$. \end{thm} \begin{rmk}{\rm Sufficient conditions for the scalar evolution operator $G(t,s)$ to satisfy \eqref{est_p} with $p\in [1,+\infty)$ can be found in \cite[Thms. 5.3 \& 5.4]{AngLor10Com} when $\A$ is not in divergence form. Adapting the cited theorems to our case, one can show that estimate \eqref{est_p} is satisfied with $p=1$ if there exists an interval $J\subset I$ and a positive constant $\Gamma_J$ such that either ${\rm{div}}_x b\ge- \Gamma_J$ or $ |b|^2\le \Gamma_{J}\lambda_Q$ in $J \times \Rd$. }\end{rmk} \begin{prop} \label{def_Lp} Let the assumptions of Theorem \ref{th1} (resp. Theorem \ref{th2}) be satisfied. Then, the evolution operator $\G(t,s)$ associated with $\boldsymbol{\mathcal A}(t)$ in $C_c(\Rd;\Rm)$ admits a continuous extension to $L^p(\Rd;\Rm)$ for any $p \in [1,+\infty)$ $($resp. $p \in [1+\frac{1}{4\beta},+\infty))$. 
Moreover, $\G(t,s)\f$ tends to $\f$ in $L^p(\Rd;\Rm)$ as $t \to s^+$, for any $s \in I$, $\f\in L^p(\Rd;\Rm)$ and $p \in [1,+\infty)$ $($resp. $p \in [1+\frac{1}{4\beta},+\infty))$. \end{prop} \begin{proof} The first part of the claim is an easy consequence of estimate \eqref{est_p}. Indeed, fix $(t,s)\in \Sigma_J$, $\f\in L^p(\Rd;\Rm)$ and let $(\f_n)$ be a sequence in $C_c(\Rd;\Rm)$ converging to $\f$ in $L^p(\Rd; \Rm)$, as $n \to +\infty$. Then, from \eqref{est_p} it follows that \begin{equation}\label{Cau_seq} \|\G(t,s)(\f_n-\f_k)\|_{L^p(\Rd;\Rm)}\le c_p(t-s)\|\f_n-\f_k\|_{L^p(\Rd;\Rm)} \end{equation} for any $n, k\in \N$ and, consequently, $(\G(t,s)\f_n)$ is a Cauchy sequence in $L^p(\Rd; \Rm)$. We can then define $\G(t,s)\f$ as the $L^p(\Rd; \Rm)$-limit of $\G(t,s)\f_n$ as $n \to +\infty$. Moreover, from \eqref{Cau_seq} it follows that $\|\G(t,s)\f\|_{L^p(\Rd;\Rm)}\le c_p(t-s)\|\f\|_{L^p(\Rd;\Rm)}$ for any $\f\in L^p(\Rd;\Rm)$. To prove the remaining part of the claim it suffices to show that, for any $t>s \in I$, any $x\in\Rd$ and any $\f \in C^2_c(\Rd;\Rm)$, \begin{equation}\label{for} (\G(t,s)\f)(x)- \f(x)= -\int_{s}^t (\G(t,r)\boldsymbol{\mathcal A}(r)\f)(x)dr. \end{equation} Indeed, fix $[a,b]\subset I$; from formula \eqref{for} and estimate \eqref{est_p} we deduce that \begin{align*} \|\G(t,s)\f-\f\|_{L^p(\Rd;\Rm)}\le \sup_{r\in[a,b]}\|\boldsymbol{\mathcal A}(r)\f\|_{L^p(\Rd;\Rm)}\int_s^t c_p(r-s)dr \end{align*} for any $s \in [a,b]$ and $t \ge s$. Since, in our assumptions, the last integral vanishes as $t \to s^+$, $\G(t,s)\f$ tends to $\f$ in $L^p(\Rd;\Rm)$ as $t \to s^+$, uniformly with respect to $s \in [a,b]$. A standard density argument and the arbitrariness of $[a,b]$ allow us to get the same result for $\f \in L^p(\Rd;\Rm)$ and any $s \in I$. Let us show formula \eqref{for}. From \cite[Thm 2.3 (ix)]{Acq88Evo} (see also \cite[Thm.
A.1]{AngLor14Non}), we know that, for any $n$ such that ${\rm supp}(\f)\subset B_n$, \begin{equation}\label{for_n} (\G_n^{\mathcal D}(t,s_1)\f)(x)- (\G_n^{\mathcal D}(t,s_0)\f)(x)= \int_{s_0}^{s_1} (\G_n^{\mathcal D}(t,r)\boldsymbol{\mathcal A}(r)\f)(x)dr \end{equation} for any $s_0 \le s_1 \le t$, $x \in \Rd$. Since the function $\boldsymbol{\mathcal A}(r)\f$ belongs to $C_b(\Rd;\Rm)$, by Proposition \ref{recall} $\G_n^{\mathcal D}(\cdot, r)\boldsymbol{\mathcal A}(r)\f$ converges to $\G(\cdot, r)\boldsymbol{\mathcal A}(r)\f$ in $C_{\rm loc}^{1,2}((r,+\infty)\times \Rd; \Rm)$. Thus, letting $n \to +\infty$ in \eqref{for_n} and choosing $s_1=t$ we get \eqref{for}. \end{proof} \section{Hypercontractivity estimates} The aim of this section is to prove that, under suitable assumptions, the evolution operator ${\bf G}(t,s)$ maps $L^p(\Rd;\Rm)$ into $L^q(\Rd;\Rm)$ for any $t>s$ and $1 \le p \le q \le +\infty$ and that \begin{equation}\label{iper} \|{\bf G}(t,s)\f\|_{L^q(\Rd;\Rm)} \le c_{p,q}(t-s)\|\f\|_{L^p(\Rd;\Rm)}, \qquad\,\, t >s,\, \f \in L^p(\Rd; \Rm), \end{equation} for suitable functions $c_{p,q}:(0,+\infty)\to (0,+\infty)$. \begin{thm}\label{thm_hyper} Assume that Hypotheses $\ref{uni1}$ hold true and that, for some interval $J \subset I$, estimate \eqref{p-2-infty} is satisfied for any $(t,s)\in \Sigma_J$. Then, the following properties are satisfied. \begin{enumerate}[\rm(i)] \item Estimate \eqref{iper} holds true for any $2 \le p \le q\le +\infty$, $(t,s)\in \Sigma_J$ and $\f \in L^p(\Rd;\Rm)$. Moreover, $c_{2,\infty}(r)\le k_1 e^{k_2r}$ for some positive $k_1$, $k_2$ depending on $m$, $d$, $\inf_{J\times \Rd}\lambda_Q$, $L_J$, and \footnote{Here and below $c_p,\ 1<p<\infty$, is the constant in Theorem \ref{th2}.} $c_{p,q}(r)=(c_p(r))^{p/q}(c_{2,\infty}(r))^{2(q-p)/pq}$, for any $r>0$ and $(p,q)\neq (2,\infty)$.
\item If, in addition, Hypotheses $\ref{uni11}$ are satisfied, then estimate \eqref{iper} holds true for any $1 \le p \le q\le +\infty$, $t,s$ and $\f$ as in $($i$)$. Moreover, $c_{1,2}(r)\le k_1 e^{k_2r}$ for some positive $k_1$, $k_2$ as in (i) and \[ c_{p,q}(r)=(c_p(r))^{\frac{p(2-q)}{q(2-p)}}(c_{1,2}(r))^{\frac{2(q-p)}{pq}}c_2^{4\frac{(q-p)(p-1)}{pq(2-p)}} \] for any $r>0$, if $q\le 2$, and $c_{p,q}(r)=c_{p,2}(r/2)c_{2,q}(r/2)$ for any $r>0$, if $p<2<q$. \end{enumerate} \end{thm} \begin{proof} Taking the result of Proposition \ref{def_Lp} into account, we confine ourselves to proving \eqref{iper} for functions belonging to $C_c(\Rd;\Rm)$. (i) Fix $\f \in C_c(\Rd;\Rm)$ and let $J$ be as in the assumptions. Note that it suffices to prove that \begin{equation}\label{aim} \|{\bf G}(t,s)\f\|_\infty\le c_{2,\infty}(t-s)\|\f\|_{L^2(\Rd;\Rm)},\qquad\;\, (t,s)\in \Sigma_J \end{equation} for some positive function $c_{2,\infty}:(0,+\infty)\to (0,+\infty)$. Indeed, once \eqref{aim} is proved, using the estimate $\|{\bf G}(t,s)\f\|_\infty\le \|\f\|_\infty$, which holds for any $t>s \in I$, and the Riesz-Thorin theorem, we deduce that $\|{\bf G}(t,s)\f\|_\infty\le c_{p,\infty}(t-s)\|\f\|_{L^p(\Rd;\Rm)}$ for any $p \in [2,+\infty]$, $(t,s)\in \Sigma_J$, where $c_{p,\infty}(t-s)=[c_{2,\infty}(t-s)]^{\frac{2}{p}}$ for any $p>2$. On the other hand, Theorem \ref{th1} shows that $\|{\bf G}(t,s)\f\|_{L^p(\Rd;\Rm)}\le c_p(t-s)\|\f\|_{L^p(\Rd;\Rm)}$, for any $(t,s)\in \Sigma_J$ and $p \ge 2$. Hence, again by interpolation we deduce that \[ \|{\bf G}(t,s)\f\|_{L^q(\Rd;\Rm)}\le c_{p,q}(t-s)\|\f\|_{L^p(\Rd;\Rm)}, \qquad\;\, (t,s)\in \Sigma_J \] for any $2\le p \le q< +\infty$, where $c_{p,q}(t-s)=[c_p(t-s)]^{\frac{p}{q}}[c_{p,\infty}(t-s)]^{1-\frac{p}{q}}$. So, let us prove \eqref{aim}.
First, observe that for any $n\in\N$, any $\h\in C^2(\overline{B_n};\Rm)$, which vanishes on $\partial B_n$, and $\lambda>0$, it holds that \begin{align*} \int_{B_n} \langle \lambda \h-\boldsymbol{\mathcal A}^*(s)\h, \h\rangle dx= &\sum_{i=1}^m\int_{B_n}\langle QD_xh_i,D_xh_i\rangle dx +\lambda \|\h\|_2^2 +2^{-1} \sum_{i=1}^d\int_{B_n}{\rm Tr}(B_i D_i (\h\otimes \h)) dx \\ &- \int_{B_n}\bigg\langle\bigg (C-\sum_{i=1}^d D_iB_i\bigg ) \h,\h\bigg\rangle dx\\ &\ge \nu_0\|D_x \h\|_{L^2(B_n;\Rm)}^2+ \lambda\|\h\|_{L^2(B_n;\Rm)}^2 - \int_{B_n}\bigg\langle \bigg (C-\frac{1}{2}\sum_{i=1}^d D_iB_i\bigg ) \h,\h\bigg\rangle dx \\ &\ge \nu_0\|D_x \h\|_{L^2(B_n;\Rm)}^2+ (\lambda-L_J/2)\|\h\|_{L^2(B_n;\Rm)}^2 \end{align*} for any $s \in J$, with $L_J$ as in \eqref{p-2-infty}, where $\nu_0$ is the ellipticity bound in Hypotheses \ref{hyp_base}(ii). Nash's inequality (see \cite[Thm. 2.4.6]{Dav89Hea}) together with the latter estimate yields \begin{equation} \label{nash} \int_{B_n} \langle (\lambda -\boldsymbol{\mathcal A}^*(s))\h, \h\rangle dx \ge c_1 \|\h\|_{W^{1,2}(B_n;\Rm)}^2\ge c_2\|\h\|_{L^2(B_n;\Rm)}^{2+4/d}\|\h\|_{L^1(B_n;\Rm)}^{-4/d} \end{equation} for any $\lambda >L_J/2$, $s \in J$ and some positive constants $c_1, c_2$ depending on $d$, $m$, $\nu_0$ and $L_J$. Now, fix $\g \in C^\infty_c(\Rd;\Rm)$ and $\lambda >L_J/2$. For any $n \in \N$ such that ${\rm supp}(\g)\subset B_n$, we set \[ v_n(s)=\|e^{-\lambda(t-s)}\G_n^*(t,s)\g\|_{L^2(B_n;\Rm)}^2, \qquad\;\, (t,s)\in \Sigma_J, \] where, as in the proof of Theorem \ref{th1}, $\G_n^*(t,s)\g$ denotes the unique classical solution of \eqref{prob_approx_dual}.
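Before differentiating $v_n$, observe that, since $s\mapsto \G_n^*(t,s)\g$ solves the backward equation in \eqref{prob_approx_dual}, i.e., $D_s\G_n^*(t,s)\g=-\boldsymbol{\mathcal A}^*(s)\G_n^*(t,s)\g$, a direct computation gives \begin{align*} v_n'(s)=2\lambda e^{-2\lambda(t-s)}\|\G_n^*(t,s)\g\|_{L^2(B_n;\Rm)}^2-2e^{-2\lambda(t-s)}\int_{B_n}\langle \boldsymbol{\mathcal A}^*(s)\G_n^*(t,s)\g,\G_n^*(t,s)\g\rangle dx.\end{align*}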
Estimate \eqref{nash} implies \begin{align} v_n'(s)= & 2 e^{-2\lambda(t-s)}\int_{B_n}\langle (\lambda -\boldsymbol{\mathcal A}^*(s))\G_n^*(t,s)\g,\G_n^*(t,s)\g\rangle dx\notag \\ \ge & 2 c_2 \|e^{-\lambda(t-s)}\G_n^*(t,s)\g\|_{L^2(B_n;\Rm)}^{2+4/d} \|e^{-\lambda(t-s)}\G_n^*(t,s)\g\|_{L^1(B_n;\Rm)}^{-4/d}\notag\\ \ge & 2 c_2 e^{\frac{4}{d}\lambda(t-s)} \|e^{-\lambda(t-s)}\G_n^*(t,s)\g\|_{L^2(B_n;\Rm)}^{2+4/d}\|\g\|_{L^1(B_n;\Rm)}^{-4/d}, \label{dom-10} \end{align} where in the last inequality we have used the estimate $\|{\bf G}^*_n(t,s)\g\|_{L^1(B_n;\Rm)} \le \|\g\|_{L^1(B_n;\Rm)}$ which holds true for any $\g \in C^\infty_c(\Rd;\Rm)$. Indeed, the function ${\bf G}^*_n(t,s)\g$ belongs to $L^1(B_n;\R^m)$ and \begin{align*} \Big|\int_{B_n}\langle {\bf G}^*_n(t,s)\g,\f\rangle dx \Big| &= \Big|\int_{B_n}\langle \g,{\bf G}_n(t,s)\f\rangle dx \Big| \\ &\le \|\g\|_{L^1(B_n;\R^m)}\|{\bf G}_n(t,s)\f\|_{L^{\infty}(B_n;\R^m)} \\ &\le \|\g\|_{L^1(B_n;\R^m)}\|\f\|_{L^{\infty}(B_n;\R^m)} \end{align*} for any $\f\in C_b(B_n;\R^m)$, since the proof of Proposition \ref{prop-appendix} shows that $\|{\bf G}_n(t,s)\f\|_{L^{\infty}(B_n;\R^m)}\le\|\f\|_{L^{\infty}(B_n;\R^m)}$ for any $t\ge s$. By approximating any $\f\in L^{\infty}(B_n;\R^m)$ by a bounded sequence $(\f_n)\subset C_b(B_n;\R^m)$ converging to $\f$ in a dominated way, we conclude that \[ \Big|\int_{B_n}\langle {\bf G}^*_n(t,s)\g,\f\rangle dx\Big|\leq \|\g\|_{L^1(B_n;\R^m)}\|\f\|_{L^{\infty}(B_n;\R^m)} \] for any such $\f$. This estimate shows that $\|{\bf G}^*_n(t,s)\g\|_{L^1(B_n;\R^m)}\le \|\g\|_{L^1(B_n;\R^m)}$, as claimed. {}From \eqref{dom-10} it thus follows that \[ \frac{d}{ds}[(v_n(s))^{-2/d}]\le - \frac{4c_2}{d} e^{\frac{4}{d}\lambda(t-s)} \|\g\|_{L^1(B_n;\Rm)}^{-4/d},\qquad\,\, (t,s)\in \Sigma_J, \] whence, integrating from $s$ to $t$ and estimating $\int_s^te^{\frac{4}{d}\lambda(t-r)} dr$ from below by $1$, we get \[ (v_n(t))^{-2/d}-(v_n(s))^{-2/d}\le -\frac{4c_2}{d} \|\g\|_{L^1(B_n;\Rm)}^{-4/d}.
\] Consequently, $v_n(s)=\|e^{-\lambda(t-s)}{\bf G}^*_n(t,s)\g\|_{L^2(B_n;\Rm)}^2\le d^{d/2}(4c_2)^{-d/2}\|\g\|_{L^1(B_n;\Rm)}^2$ for any $(t,s)\in \Sigma_J$. Thus, we have established that \[ \|{\bf G}^*_n(t,s)\g\|_{L^2(B_n;\Rm)} \le c_0 e^{\lambda(t-s)}\|\g\|_{L^1(B_n;\Rm)}, \] for any $\g \in C_c(\Rd;\Rm)$, $(t,s)\in \Sigma_J$, $\lambda \ge L_J/2 $ and $c_0:=d^{d/4}(4c_2)^{-d/4}$. By duality, the latter inequality leads to \begin{align} \label{app} \|{\bf G}_n(t,s)\f\|_{\infty} &= \sup\left\{\int_{\Rd}\langle \f , {\bf G}^*_n(t,s)\g\rangle dx: \g \in C^\infty_c(B_n;\Rm),\ \|\g\|_{L^1(B_n;\Rm)} \le 1\right\} \\ \nonumber & \le c_0 e^{\lambda(t-s)}\|\f\|_{L^2(B_n;\Rm)} \end{align} for any $(t,s)\in \Sigma_J$. Letting $n \to +\infty$ in \eqref{app} yields estimate \eqref{aim} with $c_{2,\infty}(t-s)=c_0 e^{\lambda(t-s)}$. \\ (ii) The second part of the statement can be easily obtained arguing again by interpolation as in (i). In this case, since $\|{\bf G}(t,s)\f\|_{L^p(\Rd;\Rm)}\le c_p(t-s)\|\f\|_{L^p(\Rd;\Rm)}$, for any $(t,s)\in \Sigma_J$ and $p \in [1,2]$, it is enough to prove that \begin{equation} \label{aim_1-2} \|{\bf G}(t,s)\f\|_{L^2(\Rd;\Rm)}\le c_{1,2}(t-s)\|\f\|_{L^1(\Rd;\Rm)}, \qquad\;\, (t,s)\in \Sigma_J. \end{equation} Once \eqref{aim_1-2} is proved, using the Riesz-Thorin theorem and interpolating between \eqref{est_p}, with $p=2$, and \eqref{aim_1-2}, we get \eqref{iper} with $q=2$. Next, interpolating between this latter estimate and, again, \eqref{est_p}, we get \eqref{iper} for any $1\le p<q\le 2$, with $c_{p,q}(r)=(c_p(r))^{\frac{2-q}{q(2-p)}}(c_{1,2}(r))^{\frac{2(q-p)}{pq}}$. Finally, splitting ${\bf G}(t,s)={\bf G}(t,(t+s)/2){\bf G}((t+s)/2,s)$, we get \eqref{iper} with $p<2<q$ and $c_{p,q}(r)=c_{p,2}(r/2)c_{2,q}(r/2)$. The proof of \eqref{aim_1-2} can be obtained arguing as in (i), replacing the function $v_n$ defined there by the function $u_n(t)=\|e^{-\lambda(t-s)}{\bf G}_n(t,s)\g\|_{L^2(B_n;\R^m)}^2$ for any $(t,s)\in \Sigma_J$.
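For the reader's convenience, we sketch this last step: since $D_t\G_n(t,s)\g=\boldsymbol{\mathcal A}(t)\G_n(t,s)\g$ and the computation leading to \eqref{nash} works, with the same constants, for $\boldsymbol{\mathcal A}(t)$ in place of $\boldsymbol{\mathcal A}^*(t)$ (the first-order terms contribute through $\frac{1}{2}\sum_{i=1}^dD_iB_i$ in both cases), the function $u_n$ satisfies \begin{align*} u_n'(t)=-2e^{-2\lambda(t-s)}\int_{B_n}\langle(\lambda-\boldsymbol{\mathcal A}(t))\G_n(t,s)\g,\G_n(t,s)\g\rangle dx\le -2c_2(u_n(t))^{1+\frac{2}{d}}\|e^{-\lambda(t-s)}\G_n(t,s)\g\|_{L^1(B_n;\Rm)}^{-\frac{4}{d}}. \end{align*} Since $\|\G_n(t,s)\g\|_{L^1(B_n;\Rm)}\le e^{\kappa_0(t-s)}\|\g\|_{L^1(B_n;\Rm)}$, by duality with the sup-norm estimate for $\G_n^*(t,s)$, integrating the resulting differential inequality for $(u_n)^{-2/d}$ as above yields \eqref{aim_1-2}.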
\end{proof} Theorem \ref{thm_hyper} can now be used to prove that the hypercontractivity estimate \eqref{iper} holds true also when Hypotheses \ref{uni2} are satisfied, see also Remark \ref{alpha}. \begin{thm} Let us assume that Hypotheses $\ref{uni2}$ hold true and that for some interval $J\subset I$ there exist a positive constant $\lambda_J$ and two functions $\kappa_J:J\times\Rd\to\R$, bounded from above, and $\varphi_J\in C^2(\R^d)$, blowing up as $|x|\to +\infty$, such that ${\rm div}_x b+\kappa_J\ge 0$ in $J \times \Rd$ and $\sup_{J\times\R^d}(\tilde{\A}\varphi_J-\lambda_J\varphi_J)<+\infty$, where $\tilde{\A}={\rm div}(Q D_x)- \langle b,D_x\rangle+2\kappa_J$. Then, ${\bf G}(t,s)$ maps $L^p(\Rd;\Rm)$ into $L^q(\Rd;\Rm)$ for any $1+ \frac{1}{4\beta} \le p \le q \le +\infty$. Moreover, $\|{\bf G}(t,s)\f\|_{L^q(\Rd;\Rm)}\le \tilde{c}_{p,q}(t-s)\|\f\|_{L^p(\Rd;\Rm)}$ for any $(t,s)\in \Sigma_J$, $1+\frac{1}{4\beta} \le p \le q \le +\infty$ and some function ${\tilde c}_{p,q}:(0,+\infty)\to (0,+\infty)$. \end{thm} \begin{proof} Note that all the assumptions of Theorem \ref{thm_hyper}(ii) are satisfied by the scalar operator $\A$ in \eqref{defA}. As a consequence, the evolution operator $G(t,s)$ associated with $\A$ satisfies \eqref{iper} for any $p,q$ as in the statement. In particular, $G(t,s)$ maps $L^1(\Rd)$ into $L^{q/p}(\Rd)$ and \begin{equation}\label{iper_scalar} \|G(t,s)\psi\|_{L^{q/p}(\Rd)}\le c_{1,q/p}(t-s)\|\psi\|_{L^1(\Rd)}, \qquad\,\; (t,s)\in \Sigma_J,\,\psi \in L^1(\Rd). \end{equation} Therefore, from \eqref{pointwise} and \eqref{iper_scalar} it follows that \begin{align*} \|{\bf G}(t,s)\f\|_{L^q(\Rd;\Rm)}^q &=\int_{\Rd}|{\bf G}(t,s)\f|^q dx \le e^{qK_p(t-s)/p}\int_{\Rd}(G(t,s)|\f|^p)^{q/p} dx \\ & \le e^{qK_p(t-s)/p} [c_{1,q/p}(t-s)]^{q/p}\||\f|^p\|_{L^1(\Rd;\Rm)}^{q/p} \\ &=e^{qK_p(t-s)/p}[c_{1,q/p}(t-s)]^{q/p} \|\f\|_{L^p(\Rd;\Rm)}^q \end{align*} for any $\f\in C_c(\Rd;\Rm)$ and $(t,s)\in \Sigma_J$.
The density of $C_c(\Rd; \Rm)$ in $L^p(\Rd; \Rm)$ allows us to obtain the claim with $\tilde{c}_{p,q}(r)= e^{K_pr/p}[c_{1,q/p}(r)]^{1/p}$, $r \ge 0$. \end{proof} \section{Pointwise gradient estimates} In this section we prove some pointwise gradient estimates satisfied by the function ${\bf G}(t,s)\f$, for $\f \in C^\infty_c(\Rd;\Rm)$, when Hypotheses \ref{uni2} are satisfied. Notice that any $p>1$ could be allowed in all the results if $\beta$ is arbitrary in \eqref{HpJ}, according to Remark \ref{alpha}. We also add the following assumptions. \noindent \begin{hyp}\label{gra_est} There exist $\gamma\ge 1/4$ and a function $k$ such that $|D_xq_{ij}|\le k\lambda_Q$ in $I\times \Rd$ for any $i,j=1, \ldots,d$ and \begin{equation}\label{fin1} \sup_{J \times \Rd} \bigg[\sqrt{d}m\xi\lambda_Q+\bigg(\sum_{i=1}^d |D_i C|^2\bigg)^{\frac{1}{2}} +2\Lambda_C\bigg] < +\infty \end{equation} and \begin{equation}\label{fin2} \sup_{J \times \Rd} \bigg [\sqrt{d}\bigg (\sum_{i,j,l=1}^d|D_{il}q_{ij}|^2\bigg )^{\frac{1}{2}} +\bigg (\sum_{i,j=1}^d |D_j\tilde{B}_i|^2\bigg )^{\frac{1}{2}}+\Lambda_{D_x b} + \Lambda_C + M_{\gamma}\lambda_Q+\frac{1}{2}\bigg (\sum_{i=1}^d |D_i C|^2\bigg )^{\frac{1}{2}}\bigg ]<+\infty \end{equation} where $M_{\gamma}:=\gamma(\sqrt{d}m\xi+dk)^2+\frac{1}{2}\sqrt{d}m\xi+\frac{1}{4\gamma}$ (see Hypotheses $\ref{uni2}$). \end{hyp} \begin{thm} \label{thm-avvvooooccato} Assume that Hypotheses $\ref{uni2}$ (with $\sigma=1$) and Hypotheses $\ref{gra_est}$ are satisfied. Then, for any $p\geq 1+\frac{1}{4(\beta\wedge\gamma)}$, \begin{equation}\label{stima_grad1} |D_x {\bf G}(t,s)\f|^p \le c_p\, e^{C_{p,J}(t-s)}G(t,s)(|\f|^p+|D\f|^p) \end{equation} for any $(t,s) \in \Sigma_J$, $\f \in C^\infty_c(\Rd;\Rm)$ and some positive constants $c_p$ and $C_{p,J}$, where $G(t,s)$ is the evolution operator associated with $\A(t)$ in $C_b(\Rd)$. \end{thm} \begin{proof} From \cite[Prop. 2.4]{KunLorLun09Non} it follows that $|G(t,s)\psi|^p\le G(t,s)|\psi|^p$, for any $\psi\in C_b(\Rd)$, $t \ge s \in I$ and $p\in [1,+\infty)$. Thus, it suffices to prove the claim for $p \in [1+\frac{1}{4(\beta\wedge\gamma)},2]$. Let $J$ be as in Hypotheses \ref{gra_est} and fix $\f \in C^\infty_c(\Rd;\Rm)$; for large $n\in\N$, we consider the classical solution $\uu_n=\G_n^{\mathcal N}(\cdot,s)\f$ of the Cauchy-Neumann problem \eqref{prob_approx_Neu}. The core of the proof consists in showing that \begin{equation}\label{stima_grad1_appr} |D_x \uu_n(t,\cdot)|^p \le e^{C_{p, J}(t-s)}G_n^{\mathcal N}(t,s)(|\f|^2+|D\f|^2)^{\frac{p}{2}} \end{equation} for any $(t,s)\in \Sigma_J$, $\f \in C^\infty_c(\Rd;\Rm)$, $p \in [1+\frac{1}{4(\beta\wedge\gamma)},2]$ and some positive constant $C_{p,J}$. Here, $G_n^{\mathcal N}(t,s)$ denotes the evolution operator associated with the restriction of $\A(t)$ (see \eqref{defA}) to $B_n$, with homogeneous Neumann boundary conditions. Indeed, once \eqref{stima_grad1_appr} is proved, estimate \eqref{stima_grad1} follows, from Proposition \ref{recall}, with $c_p= 2^{(p/2-1)\vee 0}$. So, let us prove \eqref{stima_grad1_appr}. For any $\varepsilon >0$, let us consider the function $v_n= (|\uu_n|^2+|D_x \uu_n|^2+\varepsilon)^{\frac{p}{2}}$. From \cite[Thm. IV.5.5]{LadSolUra68Lin} it follows that $v_n \in C^{1,2}([s,+\infty)\times \overline{B_n})\cap C_b([s,T]\times \overline{B_n})$ for any $T>s$. Moreover, $v_n$ solves the problem \begin{equation}\label{pro_scal} \left\{\begin{array}{ll} D_t v_n- \A(t)v_n=pv_n^{1-2/p}\bigg (\displaystyle\sum_{i=1}^5\psi_i+(2-p)v_n^{-2/p}\psi_6\bigg ), \qquad\,\, &(s,+\infty)\times B_n,\\ \displaystyle\frac{\partial v_n}{\partial \nu} \le 0\qquad\,\, &(s,+\infty)\times \partial B_n,\\[1.6mm] v_n(s)= (|\f|^2+|D_x \f|^2+\varepsilon)^{p/2} \qquad\,\, & B_n, \end{array} \right.
\end{equation} where \begin{align*} \psi_1= &\sum_{i,j,l=1}^d\sum_{k=1}^m D_{li}q_{ij}D_lu_{n,k}D_j u_{n,k}+\sum_{i,l=1}^d\sum_{k,j=1}^m D_l(\tilde{B}_i)_{kj}D_lu_{n,k}D_i u_{n,j} \\ &+\sum_{j=1}^m\langle D_x bD_x u_{n,j},D_xu_{n,j}\rangle+ \sum_{i=1}^d\langle CD_i \uu_n,D_i \uu_n \rangle, \\[1mm] \psi_2= &\sum_{i,j,l=1}^d\sum_{k=1}^m D_lq_{ij}D_{ij} u_{n,k}D_lu_{n,k}+\sum_{i,l=1}^d\sum_{k,j=1}^m (\tilde{B}_i)_{kj}D_{li} u_{n,j}D_lu_{n,k}, \\[1mm] \psi_3= &\sum_{i=1}^d\langle \uu_n, \tilde{B}_i D_i \uu_n\rangle +\sum_{l=1}^d\sum_{k,j=1}^m D_l C_{kj}u_{n,j}D_l u_{n,k}, \\[1mm] \psi_4= &\langle C\uu_n,\uu_n \rangle, \\[1mm] \psi_5= &-\sum_{k=1}^m \langle Q D_x u_{n,k}, D_x u_{n,k}\rangle - \sum_{i=1}^d\sum_{k=1}^m\langle Q D_x D_i u_{n,k}, D_x D_i u_{n,k}\rangle, \\[1mm] \psi_6=&\sum_{i,j=1}^dq_{ij}\bigg (\langle\uu_n,D_i\uu_n\rangle+\sum_{l=1}^d\langle D_{il}\uu_n,D_l\uu_n\rangle\bigg ) \bigg (\langle\uu_n,D_j\uu_n\rangle+\sum_{r=1}^d\langle D_{jr}\uu_n,D_r\uu_n\rangle\bigg ) \end{align*} and the boundary condition in \eqref{pro_scal} follows since the normal derivative of $|D_x u_{n,k}|^2$ is nonpositive in $(s,+\infty)\times\partial B_n$ for any $k=1, \ldots,m$ (see e.g., \cite{BerFor04Gra,BerForLor07Gra}). Using Hypotheses $\ref{uni2}$(i)-(ii) and the inequality $|D_xq_{ij}|\le k\lambda_Q$, we get the following estimates for the functions $\psi_i$, for $i=1,2,3$: \begin{align*} \psi_1 \le& \bigg [\sqrt{d}\bigg(\sum_{i,j,l=1}^d |D_{li}q_{ij}|^2\bigg)^{1/2} +\bigg(\sum_{i,l=1}^d |D_l\tilde{B}_i|^2\bigg)^{1/2} +\Lambda_{D_x b}+ \Lambda_C\bigg ]|D_x \uu_n|^2\\[1mm] \psi_2 \le& \bigg [\bigg (\sum_{i=1}^d |D_iQ|^2\bigg )^{1/2} +\bigg (\sum_{i=1}^d |\tilde{B}_i|^2\bigg )^{1/2}\bigg ] |D_x \uu_n||D_x^2 \uu_n| \le a(dk +\sqrt{d}m\xi)^2 \lambda_Q |D_x^2 \uu_n|^2+ \frac{1}{4a}\lambda_Q |D_x \uu_n|^2, \\[1mm] \psi_3 \le &\frac{1}{2}\bigg [\sqrt{d} m \xi\lambda_Q + \bigg (\sum_{i=1}^d |D_i C|^2\bigg )^{1/2}\bigg ](|\uu_n|^2+|D_x \uu_n|^2) \end{align*} in $J \times \Rd$.
To estimate $\psi_6$, we observe that \begin{align*} \psi_6=&\sum_{h,k=1}^m\sum_{i,j=1}^dq_{ij}\bigg (u_{n,h}D_iu_{n,h}+\sum_{l=1}^dD_{il}u_{n,h}D_lu_{n,h}\bigg ) \bigg (u_{n,k}D_ju_{n,k}+\sum_{r=1}^dD_{jr}u_{n,k}D_ru_{n,k}\bigg )\\ =&\sum_{h,k=1}^mu_{n,h}u_{n,k}\langle QD_xu_{n,h},D_xu_{n,k}\rangle+2\sum_{h,k=1}^mu_{n,h} \sum_{l=1}^dD_lu_{n,k}\langle QD_xu_{n,h},D_xD_lu_{n,k}\rangle\\ &+\sum_{h,k=1}^m\sum_{l,r=1}^dD_lu_{n,h}D_ru_{n,k}\langle QD_xD_lu_{n,h},D_xD_ru_{n,k}\rangle. \end{align*} It thus follows that \begin{align*} \psi_6\le &\bigg (\sum_{h=1}^m|u_{n,h}||Q^{1/2}D_xu_{n,h}|\bigg )^2+ 2\sum_{h,k=1}^m|u_{n,h}||Q^{1/2}D_xu_{n,h}|\sum_{l=1}^d|D_lu_{n,k}||Q^{1/2}D_xD_lu_{n,k}| \\ &+\sum_{h,k=1}^m\sum_{l,r=1}^d|D_lu_{n,h}||D_ru_{n,k}||Q^{1/2}D_xD_lu_{n,h}||Q^{1/2}D_xD_ru_{n,k}| \\ \le & |\uu_n|^2\sum_{k=1}^m\langle QD_xu_{n,k},D_xu_{n,k}\rangle \\ &+2|\uu_n||D_x\uu_n| \bigg (\sum_{k=1}^m\langle QD_xu_{n,k},D_xu_{n,k}\rangle\bigg )^{\frac{1}{2}} \bigg (\sum_{i=1}^d\sum_{k=1}^m\langle QD_xD_iu_{n,k},D_xD_iu_{n,k}\rangle\bigg )^{\frac{1}{2}} \\ &+|D_x\uu_n|^2\sum_{i=1}^d\sum_{k=1}^m\langle QD_xD_iu_{n,k},D_xD_iu_{n,k}\rangle \\ =& \bigg [|\uu_n|\bigg (\sum_{k=1}^m\langle QD_xu_{n,k},D_xu_{n,k}\rangle\bigg )^{\frac{1}{2}}+ |D_x\uu_n|\bigg (\sum_{i=1}^d\sum_{k=1}^m\langle QD_xD_iu_{n,k},D_xD_iu_{n,k}\rangle\bigg )^{\frac{1}{2}}\bigg ]^2 \\ \le & (|\uu_n|^2+|D_x\uu_n|^2)\bigg (\sum_{k=1}^m\langle QD_xu_{n,k},D_xu_{n,k}\rangle+ \sum_{i=1}^d\sum_{k=1}^m\langle QD_xD_iu_{n,k},D_xD_iu_{n,k}\rangle\bigg ) \\ \le &v^{\frac{2}{p}}_n\bigg (\sum_{k=1}^m\langle QD_xu_{n,k},D_xu_{n,k}\rangle+ \sum_{i=1}^d\sum_{k=1}^m\langle QD_xD_iu_{n,k},D_xD_iu_{n,k}\rangle\bigg ).
\end{align*} Putting everything together, we get \begin{align*} \sum_{i=1}^5\psi_i+(2-p)\psi_6v_n^{-2/p}\le &\bigg [\sqrt{d}\bigg (\sum_{i,j,l=1}^d|D_{il}q_{ij}|^2\bigg )^{1/2} +\bigg(\sum_{i,j=1}^d |D_j\tilde{B}_i|^2\bigg )^{1/2}+\Lambda_{D_xb}+\Lambda_C \\ &+\bigg (\frac{1}{4a}+1-p+\frac{1}{2}\sqrt{d}m\xi\bigg )\lambda_Q +\frac{1}{2}\bigg(\sum_{i=1}^d |D_i C|^2\bigg)^{1/2}\bigg ]|D_x\uu_n|^2\\ &+[a(dk+\sqrt{d}m\xi)^2-(p-1)]\lambda_Q|D_x^2\uu_n|^2 \\ &+\bigg \{\Lambda_C+\frac{1}{2}\bigg [\sqrt{d} m \xi\lambda_Q + \bigg(\sum_{i=1}^d |D_i C|^2\bigg )^{1/2}\bigg ]\bigg\}|\uu_n|^2 \end{align*} for any positive function $a=a(t)$; choosing $a =(p-1)(dk +\sqrt{d}m\xi)^{-2}$, we conclude that \begin{align*} \sum_{i=1}^5\psi_i+(2-p)\psi_6v_n^{-2/p}\le & \bigg [\sqrt{d}\bigg (\sum_{i,j,l=1}^d|D_{il}q_{ij}|^2\bigg )^{1/2} +\bigg(\sum_{i,j=1}^d |D_j\tilde{B}_i|^2\bigg )^{1/2}+\Lambda_{D_x b}+\Lambda_C +M_{\gamma}\lambda_Q \\ &+ \frac{1}{2}\bigg (\sum_{i=1}^d |D_i C|^2\bigg )^{1/2} \bigg ]|D_x \uu_n|^2 \\ &+\bigg [\frac{1}{2}\sqrt{d}m \xi\lambda_Q+\frac{1}{2}\bigg(\sum_{i=1}^d |D_i C|^2\bigg )^{1/2} +\Lambda_C\bigg]|\uu_n|^2 \end{align*} in $J\times \Rd$. Using estimates \eqref{fin1} and \eqref{fin2} we conclude that $D_t v_n- \A(t)v_n\le C_{p,J}v_n$ in $J\times \Rd$ for some positive constant $C_{p,J}$. Hence, the function $w_n(t,\cdot)= v_n(t,\cdot)- e^{C_{p,J}(t-s)}G_n^{\mathcal N}(t,s)(|\f|^2+|D\f|^2+\varepsilon)^{p/2}$ solves the problem \begin{equation*} \left\{\begin{array}{ll} D_t w_n-( \A(t)+C_{p,J})w_n \le 0, \qquad\,\, &(s,T]\times B_n,\\[1mm] \displaystyle\frac{\partial w_n}{\partial \nu} \le 0,\qquad\,\, &(s,T]\times \partial B_n,\\[1.5mm] w_n(s)=0, \qquad\,\, & B_n. \end{array} \right. \end{equation*} The classical maximum principle yields that $w_n\le 0$ in $(s,T)\times B_n$, whence, letting $\varepsilon \to 0^+$, estimate \eqref{stima_grad1_appr} follows at once.
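Here we have used that, setting $\zeta_\varepsilon=(|\f|^2+|D_x\f|^2+\varepsilon)^{p/2}$, the function $z_n(t,\cdot)=e^{C_{p,J}(t-s)}G_n^{\mathcal N}(t,s)\zeta_\varepsilon$ satisfies \begin{align*} D_tz_n-\A(t)z_n=C_{p,J}z_n\,\,\textrm{ in }(s,T]\times B_n,\qquad\;\, \frac{\partial z_n}{\partial\nu}=0\,\,\textrm{ on }(s,T]\times\partial B_n,\qquad\;\, z_n(s,\cdot)=\zeta_\varepsilon\,\,\textrm{ in }B_n, \end{align*} $G_n^{\mathcal N}(t,s)$ being the evolution operator associated with $\A(t)$ in $\overline{B_n}$ with homogeneous Neumann boundary conditions, so that $w_n=v_n-z_n$ solves the problem above.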
\end{proof} \begin{thm}\label{Lp-w1p-thm} Assume that Hypotheses \ref{uni2} (with $\sigma=1$) and Hypotheses \ref{gra_est} are satisfied with $J=I$. If $\Lambda_C \le -2\gamma dm^2 \xi^2\lambda_Q$ in $I \times \Rd$, where $\gamma$ is as in Hypotheses $\ref{gra_est}$, then the estimate \begin{equation}\label{lp-w1p} |D_x {\bf G}(t,s)\f|^p\le k_pe^{h_p(t-s)}(t-s)^{-\frac{p}{2}}G(t,s)|\f|^p, \end{equation} holds in $\Sigma_I\times\Rd$ for any $p \in [1+\frac{1}{4(\beta \wedge \gamma)},+\infty)$, $\f\in C_c^\infty(\Rd,\Rm)$ and some positive constants $k_p$ and $h_p$. \end{thm} \begin{proof} Using the same arguments as in the proof of Theorem \ref{thm-avvvooooccato} we can limit ourselves to proving \eqref{lp-w1p} when $p\in [1+\frac{1}{4(\beta\wedge\gamma)},2]$. Note that, under our assumptions, the estimates \eqref{pointwise} and \eqref{stima_grad1} hold true for any $p \in [1+\frac{1}{4(\beta\wedge\gamma)},2]$, $\f\in C_c^\infty(\Rd,\Rm)$ and $t>s\in I$, with constants $K_J$ in \eqref{pointwise} and $C_p$ in \eqref{stima_grad1} independent of $J$. Moreover, after a rescaling argument we can assume that $K_J<0$. Thus, for any fixed $p \in [1+\frac{1}{4(\beta\wedge\gamma)},2]$, $\f\in C_c^\infty(\Rd,\Rm)$, from \eqref{stima_grad1} and the evolution law it follows that \begin{align*} |D_x {\bf G}(t,s)\f|^p&=|D_x {\bf G}(t,\sigma){\bf G}(\sigma,s)\f|^p \\ &\le c_pe^{C_p(t-\sigma)}G(t,\sigma)[|{\bf G}(\sigma,s)\f|^p+|D_x{\bf G}(\sigma,s)\f|^p] \\ & \le c_pe^{C_p(t-\sigma)} \left[G(t,s)|\f|^p+G(t,\sigma)|D_x{\bf G}(\sigma,s)\f|^p\right] \end{align*} for any $\sigma \in (s,t)$. Since the transition kernel $p_{t,s}(x,y)$ associated with the evolution operator $G(t,s)$ is a positive $L^1$-function with respect to the variable $y$ with $L^1$-norm equal to one (see \cite[Prop.
2.4]{KunLorLun09Non}), using the H\"older inequality we can estimate \begin{align*} G(t,\sigma)|D_x{\bf G}(\sigma,s)\f|^p =&G(t,\sigma)\left[|D_x{\bf G}(\sigma,s)\f|^p(|{\bf G}(\sigma,s)\f|^2+\delta)^{\frac{p(p-2)}{4}} (|{\bf G}(\sigma,s)\f|^2+\delta)^{\frac{p(2-p)}{4}}\right] \\ \le&\Big (G(t,\sigma)(|D_x{\bf G}(\sigma,s)\f|^2(|{\bf G}(\sigma,s)\f|^2+\delta)^{\frac{p-2}{2}})\Big)^{\frac{p}{2}} \left(G(t,\sigma)(|{\bf G}(\sigma,s)\f|^2+\delta)^{\frac{p}{2}}\right)^{\frac{2-p}{2}} \\ \le& \varepsilon^{\frac{2}{p}}\frac{p}{2}G(t,\sigma)\Big (|D_x{\bf G}(\sigma,s)\f|^2(|{\bf G}(\sigma,s)\f|^2+\delta)^{\frac{p-2}{2}}\Big ) \\ &+\bigg (1-\frac{p}{2}\bigg )\varepsilon^{\frac{2}{p-2}}G(t,\sigma)(|{\bf G}(\sigma,s)\f|^2+\delta)^{\frac{p}{2}} \end{align*} for any $\varepsilon, \delta >0$, whence \begin{align*} e^{-C_p(t-\sigma)}|D_x {\bf G}(t,s)\f|^p \le & c_pG(t,s)|\f|^p+c_p\left(1-\frac{p}{2}\right)\varepsilon^{\frac{2}{p-2}}G(t,\sigma)(|{\bf G}(\sigma,s)\f|^2+\delta)^{\frac{p}{2}}\nonumber \\ &+\frac{p}{2}c_p\varepsilon^{\frac{2}{p}}G(t,\sigma)\left(|D_x{\bf G}(\sigma,s)\f|^2(|{\bf G}(\sigma,s)\f|^2+\delta)^{\frac{p-2}{2}}\right). \end{align*} Integrating the previous estimate with respect to $\sigma \in (s,t)$, we deduce \begin{align}\label{pre} |D_x {\bf G}(t,s)\f|^p \le \frac{C_pc_p}{1-e^{-C_p(t-s)}} \bigg\{&(t-s) G(t,s)|\f|^p+\bigg(1-\frac{p}{2}\bigg )\varepsilon^{\frac{2}{p-2}} \int_s^tG(t,\sigma)(|{\bf G}(\sigma,s)\f|^2+\delta)^{\frac{p}{2}}d\sigma \nonumber\\ &+\frac{p}{2}\varepsilon^{\frac{2}{p}}\int_s^tG(t,\sigma) \left(|D_x{\bf G}(\sigma,s)\f|^2(|{\bf G}(\sigma,s)\f|^2+\delta)^{\frac{p-2}{2}}\right) d\sigma\bigg\}. \end{align} The claim reduces to proving that there exists a positive constant $k_p$ such that \begin{equation}\label{aim_int} \int_s^t G(t,\sigma)\left(|D_x{\bf G}(\sigma,s)\f|^2(|{\bf G}(\sigma,s)\f|^2+\delta)^{\frac{p-2}{2}}\right)d \sigma \le k_p G(t,s)(|\f|^2+\delta)^{\frac{p}{2}} \end{equation} for any $(t,s)\in \Sigma_I$. 
Indeed, once \eqref{aim_int} is proved, we plug \eqref{aim_int} into \eqref{pre} and, using \cite[Prop. 3.1]{KunLorLun09Non}, we let $\delta \to 0^+$. Finally, using again \eqref{pointwise} to estimate $G(t,\sigma)|{\bf G}(\sigma,s)\f|^p\le G(t,\sigma)G(\sigma,s)|\f|^p=G(t,s)|\f|^p$, we get \begin{align*} |D_x {\bf G}(t,s)\f|^p \le &\frac{C_pc_p}{1-e^{-C_p(t-s)}}\left\{\left[1+\bigg(1-\frac{p}{2}\bigg) \varepsilon^{\frac{2}{p-2}}\right](t-s)+ \frac{p}{2}\varepsilon^{\frac{2}{p}}k_p\right\} G(t,s)|\f|^p \end{align*} and, minimising over $\varepsilon$, \begin{align*} |D_x {\bf G}(t,s)\f|^p \le \frac{C_pc_p}{1-e^{-C_p(t-s)}}\left[(t-s)+k_p^{\frac{p}{2}}(t-s)^{1-\frac{p}{2}}\right] G(t,s)|\f|^p \end{align*} whence the claim follows. Therefore, to conclude we prove \eqref{aim_int}. To this aim, we set \[ \psi_n(\sigma)= G_n^{\mathcal N}(t,\sigma)\left(|{\bf G}_n^{\mathcal N}(\sigma,s)\f|^2+\delta\right)^{\frac{p}{2}} =G_n^{\mathcal N}(t,\sigma)\left(|{\bf u}_n(\sigma, \cdot)|^2+\delta\right)^{\frac{p}{2}}= G_n^{\mathcal N}(t,\sigma)(v_n(\sigma, \cdot)) \] for any $\sigma \in [s,t]$ and $n \in \N$, where $G_n^{\mathcal N}(t,\sigma)$ and ${\bf G}_n^{\mathcal N}(t,\sigma)$ are the same evolution operators considered in the proof of Theorem \ref{thm-avvvooooccato}. Since the normal derivative of the function $v_n(\sigma,\cdot)$ vanishes on $\partial B_n$ for any $\sigma\in (s,t)$, classical results on evolution operators show that the function $\psi_n$ is differentiable in $(s,t)$ and a straightforward computation yields \begin{align*} \psi'_n(\sigma)&= G_n^{\mathcal N}(t,\sigma)\left[D_\sigma v_n(\sigma, \cdot)-\mathcal{A}(\sigma) v_n(\sigma, \cdot)\right] \\ &=pG_n^{\mathcal N}(t,\sigma)\left[(v_n(\sigma))^{1-\frac{2}{p}} \left(\sum_{i=1}^d\langle \uu_n, \tilde{B}_i D_i \uu_n\rangle+ \langle \uu_n, C\uu_n\rangle-\sum_{i,j=1}^dq_{ij}\langle D_i\uu_n,D_j\uu_n\rangle\right)\right. \\ &\left.
\qquad\qquad\quad\;\,+ (2-p)(v_n(\sigma))^{1-\frac{4}{p}}\sum_{i,j=1}^d q_{ij}\langle \uu_n, D_i \uu_n\rangle \langle \uu_n, D_j \uu_n\rangle\right]. \end{align*} Using \eqref{star}, we get \begin{align*} \psi'_n(\sigma) & \le pG_n^{\mathcal N}(t,\sigma) \left[(v_n(\sigma))^{1-\frac{2}{p}} \left(\sum_{i=1}^d\langle \uu_n, \tilde{B}_i D_i \uu_n\rangle+ \langle \uu_n, C\uu_n\rangle+(1-p)\lambda_Q|D_x \uu_{n}|^2\right)\right]. \end{align*} Thus, taking Hypotheses \ref{uni2}(i) into account, we deduce \begin{align*} \sum_{i=1}^d\langle \uu_n, \tilde{B}_i D_i \uu_n\rangle+ \langle \uu_n, C\uu_n\rangle\le& m\xi\lambda_Q|\uu_n| \sum_{i=1}^d|D_i\uu_n|+ \Lambda_C|\uu_n|^2 \\ \le &(\varepsilon dm^2\xi^2)\lambda_Q|D_x\uu_n|^2+\bigg (\frac{\lambda_Q}{4\varepsilon}+\Lambda_C\bigg )|\uu_n|^2 \end{align*} for any positive function $\varepsilon=\varepsilon(t)$. Consequently, \begin{align*} \psi'_n(\sigma) & \le pG_n^{\mathcal N}(t,\sigma)\left[(v_n(\sigma))^{1-\frac{2}{p}} \left((\varepsilon dm^2\xi^2+1-p)\lambda_Q|D_x\uu_n|^2+\bigg (\frac{\lambda_Q}{4\varepsilon}+\Lambda_C\bigg )|\uu_n|^2\right)\right]. \end{align*} Choosing $\varepsilon= (p-1)(2dm^2\xi^2)^{-1}$ implies \begin{equation}\label{final} \psi'_n(\sigma) \le 2^{-1}p(1-p)\nu_0 G_n^{\mathcal N}(t,\sigma)\left[(v_n(\sigma))^{1-\frac{2}{p}}|D_x\uu_n|^2\right]. \end{equation} Integrating both sides of \eqref{final} with respect to $\sigma$ in $[s+h,t-h]$ and then letting $n\to +\infty$ and $h\to 0^+$, we get \eqref{aim_int} with $k_p=2[p(p-1)\nu_0]^{-1}$. The proof is complete. \end{proof} \begin{coro} Under the same hypotheses as in Theorem $\ref{Lp-w1p-thm}$ and assuming that $G(t,s)$ satisfies estimate \eqref{est_p} with $p=1$, the evolution operator $\G(t,s)$ is bounded from $W^{\theta_1,p}(\Rd;\R^m)$ into $W^{\theta_2,p}(\Rd;\R^m)$, for any $p\in [1+\frac{1}{4(\beta\wedge \gamma)},+\infty)$, $0 \le \theta_1\le \theta_2\le 1$ and $(t,s)\in \Sigma_I$.
\end{coro} \begin{proof} >From Theorem \ref{th2} it follows that $\|{\bf G}(t,s)\f\|_p \le c_p(t-s)\|\f\|_p$ for any $t>s \in I$, $\f \in C^\infty_c(\Rd; \Rm)$ and some positive function $c_p:(0,+\infty)\to (0,+\infty)$. Moreover, integrating the estimates \eqref{stima_grad1} and \eqref{lp-w1p} in $\Rd$, writing \eqref{est_p} with $p=1$ and $G(t,s)$ instead of $\G(t,s)$ and using the above estimate for $\|{\bf G}(t,s)\f\|_p$, it follows that \begin{equation}\label{fragole} \|{\bf G}(t,s)\f\|_{W^{1,p}(\Rd;\R^m)}\le c_p^1(t-s)\|\f\|_{W^{1,p}(\Rd;\R^m)},\qquad\;\,\|{\bf G}(t,s)\f\|_{W^{1,p}(\Rd;\R^m)}\le c_p^2(t-s)\|\f\|_{L^p(\Rd;\R^m)}, \end{equation} for any $t>s \in I$, $p\in [1+\frac{1}{4(\beta\wedge \gamma)},+\infty)$, $\f \in C^\infty_c(\Rd; \Rm)$ and some positive functions $c_p^i:(0,+\infty)\to (0,+\infty)$, $i=1,2$. By density, the first estimate in \eqref{fragole} can be extended to any $\f \in W^{1,p}(\Rd;\Rm)$ and the second to $\f\in L^p(\Rd;\Rm)$. Thus, the claim is proved for $\theta_2=1$ and $\theta_1=0,1$. The remaining cases follows by interpolation, taking into account that for any $\theta \in (0,1)$ and $p \in [1, +\infty)$, $W^{\theta,p}(\Rd;\Rm)$ equals the real interpolation space $(L^p(\Rd;\Rm); W^{1,p}(\Rd;\Rm))_{\theta,p}$ with equivalence of the respective norms (see \cite[Thm. 2.4.1(a)]{triebel}). \end{proof} \section{Examples} Here we exhibit some classes of elliptic operators to which Theorem \ref{th1} can be applied. Indeed examples of operators which satisfy the hypotheses of Theorem \ref{th2} can be found in \cite{AngLor10Com}. \begin{example}{\rm Let $\boldsymbol{\mathcal A}$ be as in \eqref{operat-A} with $Q=I_m$, $B_i(x)=-x_i(1+|x|^2)^a\hat{B}_i$ and $C(x)=-|x|^2(1+|x|^2)^b\hat{C}$ for any $x \in \Rd$, $i=1,\ldots,d$. Here, $\hat B_i$ ($i=1,\ldots,d$) and $\hat C$ are constant, symmetric and positive definite matrices and $b>2a\ge 0$. 
It is easy to check that \begin{equation*} \mathcal{K}_\eta(x)\ge -(1+|x|^2)^{2a}\sum_{i=1}^d x_i^2|\hat B_i|^2 +4|x|^2(1+|x|^2)^b \lambda_{\hat{C}} \end{equation*} for any $x\in\Rd$. Moreover, choosing $\kappa(x)=-|x|^c$ with $c\in (2+2a,2+2b)$, we get \begin{align*} \tilde{\mathcal{K}}_\eta(x)\ge& -(1+|x|^2)^{2a}\sum_{i=1}^d x_i^2|\hat B_i|^2+4|x|^2(1+|x|^2)^b\lambda_{\hat{C}}-4(1+|x|^2)^a\sum_{i=1}^d \Lambda_{\hat{B}_i}\\ &-8a(1+|x|^2)^{a-1}\sum_{i=1}^d \Lambda_{\hat{B}_i}x_i^2-4|x|^c \end{align*} for any $x\in\Rd$. Since $b>2a$ and $c <2+ 2b$, the functions $\mathcal{K}_\eta$ and $\tilde{\mathcal{K}}_\eta$ blow up as $|x|\to \infty$, uniformly with respect to $\eta \in \partial B_1$. Therefore, assumption \eqref{weak} is satisfied both by $\mathcal{K}_\eta$ and $\tilde{\mathcal{K}}_\eta$. On the other hand, taking into account that $c >2+2a$, the function $\varphi(x)=1+|x|^2$, $x \in \Rd$, satisfies Hypotheses \ref{uni1}(ii) and \ref{uni11}(ii) for any $\lambda>0$. Finally, a straightforward computation shows that \begin{align*} \Lambda_{2C-\sum_{i=1}^d D_i B_i}(x)\le -2 |x|^2(1+|x|^2)^b \lambda_{\hat C}+(1+|x|^2)^a \sum_{i=1}^d \Lambda_{\hat{B}_i}+2a(1+|x|^2)^{a-1}\sum_{i=1}^d x_i^2 \Lambda_{\hat{B}_i} \end{align*} for any $x\in\Rd$. The choice of $a$ and $b$ yields that estimate \eqref{p-2-infty} is satisfied, too. Since all the assumptions in Theorem \ref{th1} are satisfied, the evolution operator ${\bf G}(t,s)$ associated with $\boldsymbol{\mathcal A}$ is well-defined in $L^p(\Rd; \Rm)$ for any $p \ge 1$. Moreover, estimate \eqref{est_p} holds true, where $c_p(t-s)$ is defined in Theorem \ref{th1}. }\end{example} In the following example we consider the operator $\boldsymbol{\mathcal A}$ with $B_i$, $C$ as above, but allow the diffusion coefficients $q_{ij}$ to be unbounded as well.
\begin{example}{\rm Let $\boldsymbol{\mathcal A}$ be as in \eqref{operat-A} with $Q(x)=(1+|x|^2)^\delta I_m$, $B_i(x)=-x_i(1+|x|^2)^a I_m+(1+|x|^2)^b\hat{B}_i$ $(i=1,\ldots,d)$ and $C(x)=-(1+|x|^2)^c\hat{C}$ for any $x \in \Rd$. We assume that $\hat{B}_i$ ($i=1,\ldots,d$) and $\hat{C}$ are constant, symmetric and positive definite matrices. Finally, $\delta, a, b \in [0,+\infty)$ satisfy $2b \le \delta <a+1$ and $c > 2a\vee (a+1)$. We have that \begin{align*} \mathcal{K}_\eta(x) = (1+|x|^2)^{-\delta+2b}\sum_{i=1}^d\left[\langle \hat{B}_i \eta, \eta\rangle^2 -|\hat{B}_i\eta|^2\right] +4(1+|x|^2)^c \langle \hat{C} \eta, \eta \rangle, \end{align*} for any $x\in \Rd$ and $\eta \in \partial B_1$. Since $\delta \ge 2b$, the first term in the previous formula is bounded in $\Rd$; therefore, \eqref{weak} is clearly satisfied by $\mathcal K_\eta$ and also by $\tilde{\mathcal{K}}_\eta$, where $\kappa(x)=-|x|^s$ and $s\in(2+2a,2c)$. Indeed, \begin{align*} \tilde{\mathcal{K}}_\eta(x) \ge\mathcal{K}_\eta(x)- 4(1+|x|^2)^a - 8b|x|^2(1+|x|^2)^{a-1}+8b(1+|x|^2)^{b-1}\sum_{i=1}^d x_i \langle \hat{B}_i \eta,\eta\rangle-|x|^s \end{align*} for any $x\in\Rd$. The choice of $\delta$, $a$, $b$ and $s$ yields that the function $\varphi$, defined in (i), is a Lyapunov function in $\R^d$ for both $\mathcal{A}$ and $\tilde{\mathcal A}$. Moreover, \begin{align*} \Lambda_{2C-\sum_{i=1}^d D_i B_i}(x)\le - 2(1+|x|^2)^c \lambda_{\hat C}+ (1+|x|^2)^b+ 2b(1+|x|^2)^{b-1}|x|^2+ 2c(1+|x|^2)^{c-1}\sum_{i=1}^d |x_i|\Lambda_{\hat{B}_i}, \end{align*} and, since the leading term in the previous estimate is the first term on the right-hand side, estimate \eqref{p-2-infty} is clearly satisfied. Thus, Theorem \ref{th1} can be applied. Moreover, since $c > \delta$, $2c> 2b-1$ and $b \le \delta$, the assumptions of Theorems \ref{thm-avvvooooccato} and \ref{Lp-w1p-thm} are satisfied and estimates \eqref{stima_grad1} and \eqref{lp-w1p} hold true in $\Rd$ for any $(t,s)\in \Sigma_I$.
}\end{example} \begin{rmk}{\rm In the previous examples we can replace the constant matrices $I_m$, $\hat{B}_j$ ($j=1, \ldots,d$) and $\hat C$ by matrices of the same type, i.e., by ${\rm diag}(q_i(t))$, $\hat{B}_j(t)$ ($i=1, \ldots,m$, $j=1, \ldots,d$) and $\hat C(t)$ respectively, whose entries are functions which belong to $C^{\alpha/2}_{ \rm loc}(I)\cap C_b(I)$ and such that $q_i$, $\lambda_{\hat{B}_j}$ ($i=1, \ldots,m$, $j=1, \ldots,d$) and $\lambda_{\hat{C}}$ have positive infima on $I$. }\end{rmk}
\section{Introduction} The mechanism of solar flares is a puzzle in solar physics that has remained unsolved for more than a century. They originate from the stored magnetic energy around sunspots, emerge from the inner atmosphere, and the impulsive release of the energy produces flares \citep[e.g.,][]{pri02, shi11}. The physical process of sunspot formation and flare eruption originating from dynamo action has been intensely studied by observation and theory \citep[e.g.,][]{fle11, mag13, tor13, tak15, tor17}. In particular, the amount of solar observation data, which is available in near real time, has markedly increased. However, it is still difficult to predict flares occurring even within the following 24 h by human forecasting. The occurrence of flares has been studied for a long time. The keys to flare occurrence are the energy storage and triggering processes, which are driven by the emerging flux in the photosphere \citep{kus12, ino14, kli14, kan16}. It is empirically known that larger sunspots with a large number of umbrae and a more complicated magnetic flux structure tend to produce larger flares \citep[e.g.,][]{sam00, lek03, blo12, mcc16}. The energy is globally stored in an active region for 1-2 days, where the amount of energy determines the maximum class of flares. When a large flare occurs, large amounts of magnetic shear and magnetic free energy and the appearance of emerging flux are observed along magnetic neutral lines. However, the occurrence of magnetic shear, free energy, and emerging flux does not necessarily foretell a large flare \citep{geo07, mas10, fal14}. On the other hand, trigger mechanisms of flares are locally observed on a shorter time scale, 2-3 h before a flare occurs, as a rapid change in the magnetic field or accumulation near magnetic neutral lines \citep{moo04, sai07, nis09, bam13, wan17}.
These are sometimes associated with pre-flare events, such as ultraviolet (UV) brightening in the 1600 \,\AA\ continuum and coronal brightening. UV 1600 \,\AA\ brightening represents upper photospheric heating by a small-scale energy release driven by emerging flux. The repetition of flares is frequently observed \citep[e.g.,][]{zir88, zir91, whe04}, and pre-flare brightening in soft X-ray and radio emissions has also been reported \citep{asa06, chi06, chi07, sia09}, as well as turbulence and reconnection outflow before the release of impulsive energy \citep{wal10, mck13, har13}. To deal with the recent large amount of solar observation data, a new approach has been developed using machine-learning algorithms including a neural network \citep{qah07, col09, hig11, ahm13}, a regression model \citep{lee07, son09}, a k-nearest-neighbor algorithm \citep[k-NN;][]{li08, hua13, win15, nis17}, a support vector machine \citep[SVM;][]{qah07, bob15, mur15, alg15, bou15, rab17, sad17}, the least absolute shrinkage and selection operator \citep[LASSO;][]{ben18, jon18}, a random forest \citep{liu17}, extremely randomized trees \citep[ERT;][]{nis17}, unsupervised fuzzy clustering \citep{ben18}, and an ensemble of four prediction models \citep{gue15}. Machine learning can clarify which feature is most effective for predicting flares. However, it is still not clear which model is best for prediction in an operational setting, i.e., the chronological splitting of the dataset into training and testing datasets. Nishizuka et al. (2017) compared three machine-learning algorithms for flare prediction: the k-NN, the SVM, and the ERT algorithm. They found that the k-NN shows the best performance in the case of random shuffling and dividing the dataset. However, it was also found that the performance of the models changes with differences in the splitting of the dataset into training and testing datasets.
In the case of random shuffling and dividing, samples are separated into training and testing datasets within 24 h. Because these datasets, especially those of magnetograms, are similar to each other, the simple criterion of the distance between two data points can give us better prediction, which is an advantage of the k-NN algorithm. In contrast, chronological splitting of the dataset into training and testing datasets makes the prediction of flares more difficult, because the training and testing datasets become completely separated. Because we found that our machine-learning models in Nishizuka et al. (2017) using k-NN, SVM, and ERT cannot predict flares with significant precision in an operational setting, we focused on deep neural network (DNN) \citep{hin06, lec15} algorithms, which can generally maximize the prediction accuracy or minimize the cost function. Thus, we developed a flare prediction model using the DNN, named the Deep Flare Net (DeFN) model. In the training datasets, we included new features related to the trend of data, i.e., the maximum intensities of soft X-ray and EUV 131 \,\AA\ emissions 1 and 2 h before an image. In section 2, we briefly explain neural networks. In section 3, we give an overview of our prediction model, which is explained in detail in section 4. The prediction results are described in section 5 and a discussion and conclusion are given in section 6. \clearpage \section{Basic Architecture of Neural Networks} A neural network (NN) is a set of linear and nonlinear conversions of input data. Nonlinear conversion represented by an activation function enables representations that cannot be made by linear conversions, for example, the consideration of curves as the separators or the distortion/curvature of the space of data. In other words, a NN is a classifier consisting of several layers, which repeat linear (affine) and nonlinear conversions to search for the most suitable mapping of the original dataset into a higher dimension to be linearly separated.
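As a concrete illustration, the repeated affine-plus-activation structure described above can be sketched in a few lines of NumPy; the layer sizes, weights, and input below are purely illustrative and do not correspond to the actual DeFN configuration.

```python
import numpy as np

def relu(x):
    # Rectified linear unit: elementwise max(x, 0).
    return np.maximum(x, 0.0)

def softmax(x):
    # Normalized exponential; subtracting the max is a standard trick
    # for numerical stability and does not change the result.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def layer(x, W, b, activation):
    # One layer: affine map followed by an activation, y = f(Wx + b).
    return activation(W @ x + b)

# Illustrative two-layer network: 4 input features -> 3 hidden units -> 2 classes.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((3, 4)), np.zeros(3)
W2, b2 = rng.standard_normal((2, 3)), np.zeros(2)

x = np.array([0.5, -1.0, 2.0, 0.0])
h = layer(x, W1, b1, relu)      # hidden representation (ReLU)
p = layer(h, W2, b2, softmax)   # class probabilities, which sum to 1
```

Stacking more such calls gives a deeper network; only the last layer uses the softmax so that the output can be read as class probabilities.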
One layer converts the input ${\bf x}$ into the output ${\bf y}$ as follows, \begin{equation} {\bf y} = f({\bf x}; W, {\bf b}) = f(W{\bf x} + {\bf b}). \end{equation} Here, $W$ is a weight matrix and ${\bf b}$ is a bias vector. This conversion is illustrated in Figure 1. The function $f$($\cdot$) is called the activation function in the field of machine-learning \citep{hah00, bis06, glo11}, whereas its inverse function $f^{-1}$ is the link function in the statistical field. The symbol notations are summarized in Table 1. As the activation function for the first to the second last layers, we adopted the rectified linear function or rectified linear units \citep[$ReLU$;][]{nai10}, \begin{equation} ReLU(x) = \max (x, 0) \simeq \log (1 + \exp (x)). \end{equation} This function is commonly used as an activation function; it outputs zero for negative inputs, resulting in a sparse distribution of activations, and allows a faster calculation because the derivative is one for positive inputs. To give a binary classification of flares (e.g., $\geq$M class versus $<$M class or $\geq$C class versus $<$C class), we calculate the probability of the two classes as the output. For this purpose, we used a softmax function or a normalized exponential function for the last layer, \begin{equation} Softmax( x_i ) = \frac{ \exp (x_i) }{ \sum_{j=1}^K \exp (x_j) }. \end{equation} Figure 1(a) shows a normal NN with one hidden layer and Figure 1(b) shows a simplified representation of Figure 1(a). In Figure 1(b), the hidden layer is still represented by a square, but the input and output layers are simplified to $x$ and $y$, respectively. The biases of the nodes, $b_1$ and $b_2$, are omitted. The arrows indicate multiple connections between each unit through linear conversion. The description of the DeFN model later in Figure 4 follows this simplified style. \begin{figure}[hbtp] \epsscale{.70} \plotone{astroph_f1.eps} \caption{Input and output of two layers. $W_1$ and $W_2$ are weight matrices, and $b_1$ and $b_2$ are bias vectors.
The many links are complicated and are often represented in a simpler style, as on the right-hand side. \label{fig1}} \end{figure} \begin{deluxetable}{llllrl} \tabletypesize{\normalsize} \tablecaption{Symbol notations. \label{tbl1}} \tablewidth{0pt} \startdata \tableline \colhead{x, {\bf x}} & \colhead{Arbitrary input parameters} \\[+0.1cm] \colhead{y, {\bf y}} & \colhead{Arbitrary output parameters (discrete or continuous)} \\[+0.1cm] \colhead{N} & \colhead{The number of training samples} \\[+0.1cm] \colhead{K} & \colhead{The number of classes/categories} \\[+0.1cm] \colhead{${\bf y}_k^*$ = $\{$ $y_{nk}^*|$ $n$=1,\ldots,N $\}$ } & \colhead{Correct label of n-th training sample} \\[+0.1cm] \colhead{$p$(${\bf y}_k$) = $\{$ $p$($y_{nk}$)$|$ $n$=1,\ldots,N $\}$ } & \colhead{Estimated value of probability of ${\bf y}_k$} \\[+0.1cm] \enddata \end{deluxetable} \section{Overview of Deep Flare Net (DeFN)} We introduce the procedures of our flare prediction model as follows (see also Fig. 2). (i) First, observation data are downloaded from the web archives of the Solar Dynamics Observatory \citep[SDO;][]{pes12} and the Geostationary Operational Environmental Satellite (GOES), such as the line-of-sight magnetogram, vector magnetogram, 1600 \,\AA\ and 131 \,\AA\ filter images, and the light curves of the soft X-ray emission. (ii) Second, active regions (ARs) are detected from full-disk images of the line-of-sight magnetogram, and the ARs are tracked using their time evolution. (iii) For each AR, features are calculated from multiple wavelength observations, and flare labels are attached to the solar feature database if an X/M/C-class flare occurs within 24 h after an image. (iv) Supervised machine learning by a DNN is carried out with a 1 h cadence to predict the maximum class of flares occurring in the following 24 h because the features are extracted at a cadence of 1 h.
When the corresponding data was missing, the nearest data within 30 min was first searched for, and if this search failed, the prediction was skipped. Our observation data are from June 2010 to December 2015, which were taken by SDO, launched in February 2010. During this period, 26 X-class, 383 M-class, and 4054 C-class flares were observed on the disk, accounting for 90\% of the flares observed during the period. We used the line-of-sight magnetogram taken by the Helioseismic and Magnetic Imager \citep[HMI;][]{sche12, scho12} on board SDO, as well as the vector magnetogram. The UV continuum of the upper photosphere and the transition region were taken by the 1600 \,\AA\ filter of the Atmospheric Imaging Assembly \citep[AIA;][]{lem12} on board SDO, and the hot coronal brightening in the flaring region was taken by the 131 \,\AA\ filter on board SDO. The full-disk integrated X-ray emission over the range of 1-8 \,\AA\ was observed by GOES. The time cadence of the line-of-sight magnetogram is 45 s, that of the vector magnetogram is 12 min, those of the 1600 \,\AA\ and 131 \,\AA\ filters are both 12 s, and that of GOES is less than 1 min. Thus, the total size of the observation dataset is so large that we reduced the cadence to 1 h, in accordance with the forecast operation every hour from 00:00 UT. \begin{figure}[hbtp] \epsscale{.85} \plotone{astroph_f2.eps} \caption{Flow chart of our DeFN model of solar flare prediction. \label{fig2}} \end{figure} \clearpage \section{Details of Deep Flare Net (DeFN) model} \subsection{Detection of ARs} First, we detected ARs to extract solar features from the images of the downloaded observation database. We used 3$\times$10$^5$ full-disk images of the line-of-sight magnetogram for detection with a reduced cadence of 1 hr. The line-of-sight magnetogram was selected for AR detection because it is less noisy than the vector magnetogram and more suitable for the processing carried out for detection. 
After determining the ARs in magnetogram images using a threshold of 140 G, the frame coordinates of the ARs were applied to other images with different wavelengths (Fig. 3). The detection rules are the same as in Nishizuka et al. (2017). We neglected ARs whose frames cross the limb, which is determined by an intensity threshold in photospheric images. By tracking the same ARs, we numbered them for identification; these numbers are different from the NOAA AR numbers. \begin{figure}[hbtp] \epsscale{.55} \plotone{astroph_f3.eps} \caption{Full-disk images of (a) the white light intensity taken by HMI/SDO with detected active regions framed in yellow or red, (b) the line-of-sight magnetogram taken by HMI/SDO, (c) the UV continuum taken with the 1600 \,\AA\ filter of AIA/SDO, and (d) the coronal brightening taken by the 131 \,\AA\ filter of AIA/SDO. The region with a red frame produced an X5.4 flare 3 h after this image was taken. \label{fig3}} \end{figure} \subsection{Extraction of solar features} Using the database of detected ARs, we next extracted solar features from each AR. We adopted solar features used in previous papers, which were extracted from the line-of-sight magnetogram \citep[e.g.,][]{ste11, ahm13}, the vector magnetogram \citep{lek03, bob15}, the UV continuum taken by the 1600 \,\AA\ filter \citep{nis17}, and GOES X-ray data in the range of 1-8 \AA. These extracted features are the same as those in Table 1 of Nishizuka et al. (2017). Furthermore, in this study, we extracted the feature of hot coronal brightening observed by the 131 \,\AA\ filter of AIA/SDO, showing the emissions of Fe$_{XX}$ and Fe$_{XXIII}$ ions at temperatures greater than 10$^7$ K, for the first time. We also added the data of 131 \,\AA\ and GOES X-ray emissions 1 and 2 h before an image, because they are expected to be useful for the operational prediction.
Since the GOES X-ray data are integrated over the full disk, the X-ray intensity of an individual AR is not available; only the integrated value can be used. Instead of the individual X-ray intensity, we calculated the maximum intensity of 131 \,\AA\ emission for each AR, so that every AR has its own dataset of 131 \,\AA\ emission, which is more efficient for prediction. \subsection{Classification by DeFN model} Figure 4 shows our model, which we name Deep Flare Net (DeFN). This model is based on deep-learning techniques and consists of multiple layers. The input is our solar feature database, i.e., 79-dimensional vectors of standardized features, and the output is the prediction probability of each class of flares, $p(y)$. Here $y$ = ($y_1$, $y_2$) is the class of flares: ($y_1$, $y_2$) = (0, 1) for $\geq$M-class flare events and ($y_1$, $y_2$) = (1, 0) for $<$M-class or non-flare events. We calculate two probabilities for $\geq$M-class flare events and for $<$M-class or non-flare events, and finally we simply select the category with the larger probability. Each layer of the neural network in Figure 4 represents a map from the input to the output with a linear combination and an activation function (generally nonlinear). In this model, we used ReLU \citep{nai10} as the activation function and the softmax function only for the last layer. To increase the number of layers, we used a simple skip connection \citep{he15}, which has the role of increasing the precision of the model and avoiding divergence in the case of multiple layers (for more detail, see Appendix A.1). The notation ${\bf BN}$ represents batch normalization \citep{iof15}, which standardizes the input parameters of each layer to stabilize the training and to improve the precision (see Appendix A.2 for more details). \newcommand{\argmax}{\mathop{\rm argmax}\limits} To maximize the prediction accuracy, the training was performed to minimize a cost function $J$.
For a classification problem, parameters are optimized to minimize the cross entropy. However, since the flare occurrence ratio is imbalanced, we instead optimized the parameters by minimizing the weighted cross entropy, \begin{equation} J = -\sum_{n=1}^N \sum_{k=1}^K w_k y_{nk}^* \log p(y_{nk}). \end{equation} The parameters used here are summarized in Tables 1 and 2. In Table 2, we used the recommended default values of Adam\footnote{Adam (Adaptive moment estimation) is a method for stochastic optimization, which is extended from AdaGrad, RMSprop, and AdaDelta \citep{kin14}.}. $w_k$ is the weight of each class, which is the inverse of the class occurrence ratio. The number of nodes and the batch size were investigated in the range of 50-200. The architecture of the DeFN model with 5-9 layers was surveyed by attaching or detaching skip connections. The output of the flare prediction for a dataset is determined by the value of $k$ that gives the maximum probability: \begin{equation} \hat{y} = \argmax_{k} p(y_k). \end{equation} In the two-class classification ($\geq$M-class or $<$M-class), we simply select the category with the larger probability. \begin{figure}[hbtp] \epsscale{.90} \plotone{astroph_f4.eps} \caption{Overview of Deep Flare Net (DeFN) model.\label{fig4}} \end{figure} \begin{deluxetable}{llllrl} \tabletypesize{\normalsize} \tablecaption{Parameter settings of the DeFN model. \label{tbl2}} \tablewidth{0pt} \startdata \tableline \colhead{Optimization method} & \colhead{Adam (Learning rate = 0.001, $\beta_1$ = 0.9, $\beta_2$ = 0.999)} \\[+0.1cm] \colhead{$w_k$ =} & \colhead{(1, 50) for $\geq$M-class flares, (1, 4) for $\geq$C-class flares} \\[+0.1cm] \colhead{Num.
of nodes =} & \colhead{79 (input), 200, 200, 79, 200, 200, 79, 200, 2 (output)} \\[+0.1cm] \colhead{Batch size =} & \colhead{150} \\[+0.1cm] \enddata \end{deluxetable} \clearpage \section{Results} Using the DeFN model and solar feature database, we performed solar flare prediction within the following 24 h in an operational setting. We divided the database of 2010-2015 into two: the dataset in 2010-2014 for training and validation, and the one in 2015 for testing (Fig. 5). These chronological datasets for training and testing make it more difficult to predict flares than the randomly shuffled and divided datasets \citep[e.g.,][]{bob15, mur15, nis17, nis18}. Using mini-batches of the training datasets, the optimization was repeated many times. Here a mini-batch corresponds to the number of training samples in one forward and backward pass \citep{goo16}. A mini-batch is randomly selected from all the training samples, avoiding overlapping. The update of the weight parameters in an epoch, i.e., one forward and backward pass, is stabilized to converge faster by using mini-batches. We performed two-category predictions: (i) $\geq$M-class flare events or $<$M-class/non-flare events, (ii) $\geq$C-class flare events or $<$C-class/non-flare events. The number of X-class flares was small during 2010-2015; thus, X-class flares were not predicted separately by the DeFN model because the training sample was insufficient. We evaluated models using test datasets in each epoch, i.e., one forward pass and one backward pass of all the training examples, and we selected the model giving the maximum value of a skill score named the true skill statistic (TSS). The prediction results obtained by the DeFN model are summarized in Figure 5. The contingency tables show the number of true positive (TP), true negative (TN), false positive (FP), and false negative (FN) events. TN is very large because flare prediction is an imbalanced problem.
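The class imbalance noted above is also why the training minimizes the weighted cross entropy rather than the plain cross entropy. A minimal sketch of the weighted cost and the argmax prediction rule follows; the labels, predicted probabilities, and weights below are illustrative only, not the actual training data, though the weight $w = (1, 50)$ mirrors the $\geq$M-class setting of Table 2.

```python
import numpy as np

def weighted_cross_entropy(y_true, p_pred, w):
    # Weighted cross entropy over N samples and K classes:
    # J = -sum_n sum_k w_k * y*_{nk} * log p(y_{nk});
    # w_k up-weights errors on the rare class.
    return -np.sum(w * y_true * np.log(p_pred))

# Illustrative one-hot labels (K = 2: non-flare vs >=M-class flare)
# and predicted probabilities for three samples.
y_true = np.array([[1, 0], [1, 0], [0, 1]])
p_pred = np.array([[0.9, 0.1], [0.8, 0.2], [0.3, 0.7]])
w = np.array([1.0, 50.0])   # inverse class occurrence ratio (illustrative)

J = weighted_cross_entropy(y_true, p_pred, w)
y_hat = np.argmax(p_pred, axis=1)   # predict the most probable class
```

With $w_1 = 50$, a mistake on the rare flare class costs far more than one on the majority class, which counteracts the dominance of non-flare samples.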
TP for $\geq$C-class flares is much larger than for $\geq$M-class flares. This is a result of the underlying distribution of the data (more flares for $\geq$C than $\geq$M). The occurrence of flares is overpredicted, resulting in a large FP. Here we note that TSS is defined by two terms: $TSS$ = TP/(TP+FN) - FP/(FP+TN). Thus, if we overpredict the flare occurrence, the first term increases and the number of missed flares decreases. At the same time, the second term increases, but TN is so large that the second term changes only slightly. Therefore, the net value of TSS tends to increase when overpredicting flares. From the contingency tables, we calculated six skill scores: the probability of detection (POD), the critical success index (CSI), the false alarm ratio (FAR), Heidke skill score (HSS), TSS, and accuracy (for the determination of the skill scores, see Appendix). We show the results in Table 3. We achieved TSS=0.80 for $\geq$M-class and TSS=0.63 for $\geq$C-class flare prediction in an operational setting. These results are better than those of human forecasts \citep{cro12, dev14, kub17} and of other baseline algorithms such as SVM, kNN and ERT (see Table 3). \begin{figure}[hbtp] \epsscale{.75} \plotone{astroph_f5.eps} \caption{Prediction results of $\geq$M-class and $\geq$C-class flares occurring in the following 24 hours obtained by DeFN model for the operational chronological training/testing datasets. \label{fig5}} \end{figure} \begin{deluxetable}{lll|lrl} \tabletypesize{\normalsize} \tablecaption{Skill scores of prediction results in an operational setting (the dataset in 2010-2014 for training, and the dataset in 2015 for testing), by DeFN and other baseline algorithms such as SVM, kNN, and ERT with default parameter settings: Probability of Detection (POD), Critical Success Index (CSI), False Alarm Ratio (FAR), Heidke skill score (HSS), True skill statistic (TSS), and accuracy.
\label{tbl3}} \tablewidth{0pt} \startdata \tableline & \multicolumn{2}{c}{DeFN} & \colhead{SVM} & \colhead{kNN} & \colhead{ERT} \\ \colhead{} & \colhead{$\geq$M-class} & \colhead{$\geq$C-class} & \colhead{$\geq$M-class} & \colhead{$\geq$M-class} & \colhead{$\geq$M-class} \\[+0.1cm] \hline \colhead{POD} & \colhead{0.95} & \colhead{0.81} & \colhead{0.36} & \colhead{0.19} & \colhead{0.17} \\[+0.1cm] \colhead{CSI} & \colhead{0.18} & \colhead{0.47} & \colhead{0.16} & \colhead{0.16} & \colhead{0.16} \\[+0.1cm] \colhead{FAR} & \colhead{0.82} & \colhead{0.47} & \colhead{0.75} & \colhead{0.53} & \colhead{0.36} \\[+0.1cm] \colhead{HSS} & \colhead{0.26} & \colhead{0.53} & \colhead{0.27} & \colhead{0.26} & \colhead{0.26} \\[+0.1cm] \colhead{TSS} & \colhead{0.80} & \colhead{0.63} & \colhead{0.33} & \colhead{0.19} & \colhead{0.17} \\[+0.1cm] \colhead{Accuracy} & \colhead{0.86} & \colhead{0.82} & \colhead{0.94} & \colhead{0.97} & \colhead{0.97} \\[+0.1cm] \enddata \end{deluxetable} \clearpage \section{Summary and Discussion} We developed a flare prediction model, named Deep Flare Net (DeFN), with supervised machine-learning techniques, particularly a DNN. We used solar observation images of a vector magnetogram, UV 1600 \,\AA\ brightening in the photosphere, and coronal brightening in soft X-ray and EUV 131 \,\AA\ emissions. By detecting ARs as shown in Nishizuka et al. (2017), we extracted the features used in previous work together with novel ones, namely, the histories of GOES X-ray and AIA 131 \,\AA\ emission intensities. Then, we attached flare labels to the feature database. We divided the dataset into two: the dataset in 2010-2014 for training and the dataset in 2015 for testing. Then, we ran the DeFN model to predict the maximum class of flares occurring in the 24 h following each observation image by calculating the probabilities of flares in each region with a binary classification (i.e., $\geq$M-class versus $<$M-class or $\geq$C-class versus $<$C-class; Fig. 6).
In an operational setting, our DeFN model achieved skill scores of TSS=0.80 for $\geq$M-class flares and TSS=0.63 for $\geq$C-class flares. The number of X-class flares in the testing dataset was insufficient for a DNN; thus, the results of X-class flare prediction are not shown here. Generally speaking, a DNN is constructed to maximize the accuracy of prediction, making it highly suitable for flare prediction. In fact, we achieved good skill scores using fully shuffled and divided datasets with other machine-learning methods, such as SVM, k-NN, and ERT, in Nishizuka et al. (2017). However, we found that in an operational setting, the models do not have the same performance, because the operational setting is more difficult than the setting with the shuffled and divided datasets (see Table 3). Note that in DNN models, cross-validation is not used. Instead, parameters are updated every epoch in the DNN models. An epoch corresponds to one forward/backward pass of all the training examples. The parameters in the first epoch, second epoch, and n-th epoch are different. In each epoch, the test dataset is evaluated using the model with updated parameters. Through the iterations in epochs, the cost function decreases and the generalization error, TSS, the updated parameters and the model also change. Finally, from all the models, we select the best model with the highest test-set TSS. The easiest evaluation is to only use the test datasets, while, for greater precision, both test and validation datasets are prepared for the evaluation. In this study, to deal with the sample datasets effectively, we selected the best model only using the test datasets. The advantage of this DeFN model is that the features are manually selected and they can be analyzed to search for the most effective features for flare prediction.
This is different from other DNN models, where a convolutional network extracts imaging features that humans cannot understand, making it impossible to explain the high precision obtained using a feature database \citep{nag17, par17, yi17, had17, hua18}. Using the DeFN model, the ranking of the features can be shown in principle, although this requires huge computational resources, so it is not shown here\footnote{The calculation time using a GPU is estimated as 2-3 h $\times$ 80 features = 160-240 h = 7-10 days. This is 8-36 times that of other machine-learning models (e.g., SVM, ERT, kNN) run on a CPU.}. The ranking is not derived from the weights of the features. By removing features one by one from the original feature database, the ranking of features can be investigated on the basis of the skill score variation. In the daily forecast operations at the NICT Space Weather Forecast Center, which use the knowledge of experts, TSS was 0.21 for X-class flares and 0.50 for $\geq$M-class flares during the period 2000-2015 \citep{kub17}. At the Solar Influences Data Center of the Royal Observatory of Belgium, TSS was 0.34 for $\geq$M-class flares during the period 2004-2012 \citep{dev14}. At the NOAA Space Weather Prediction Center, TSS was 0.49 for X-class flares, 0.53 for $\geq$M-class flares, and 0.57 for $\geq$C-class flares \citep{cro12}. The Met Office in the UK also reported their prediction results \citep{murr17}, although TSS was not derived. Therefore, by comparing TSS values, our DeFN prediction model appears to achieve better performance than human operations. For the verification of forecasting skill, TSS is recommended in the space weather forecasting community \citep{blo12} because it is a base-rate-independent measure. However, there is still discussion on which measure is more suitable for verifying rare-event forecasts \citep{bar16}.
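The skill scores discussed throughout can all be derived from a single 2$\times$2 contingency table; the sketch below uses the standard forecast-verification definitions (which the paper's appendix is assumed to follow), and the counts are made up for illustration, not taken from Table 3.

```python
def skill_scores(TP, FN, FP, TN):
    """Standard forecast-verification scores from a 2x2 contingency table."""
    n = TP + FN + FP + TN
    POD = TP / (TP + FN)                   # probability of detection
    CSI = TP / (TP + FN + FP)              # critical success index
    FAR = FP / (TP + FP)                   # false alarm ratio
    TSS = TP / (TP + FN) - FP / (FP + TN)  # true skill statistic
    ACC = (TP + TN) / n                    # accuracy
    # Heidke skill score: improvement of the number of correct
    # forecasts over that expected by random chance.
    chance = ((TP + FN) * (TP + FP) + (TN + FN) * (TN + FP)) / n
    HSS = (TP + TN - chance) / (n - chance)
    return {"POD": POD, "CSI": CSI, "FAR": FAR,
            "HSS": HSS, "TSS": TSS, "ACC": ACC}

# Illustrative counts for a rare-event forecast: overprediction inflates FP,
# but the large TN keeps the penalty term FP/(FP+TN) small, so TSS stays high.
scores = skill_scores(TP=50, FN=10, FP=200, TN=5000)
```

Because TN dominates the denominator of the second TSS term, these counts give a high TSS despite a sizable FAR, which is exactly the base-rate behavior discussed above.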
\begin{figure}[hbtp] \epsscale{.55} \plotone{astroph_f6.eps} \caption{Four SDO images observed at different wavelengths, i.e., white light (HMI), the photospheric magnetogram (HMI), and the 1600 and 131 \,\AA\ filters (AIA). The probabilities of predicted flares with a binary classification on each AR are attached to the image of the magnetogram. \label{fig9}} \end{figure} \clearpage
\section{Related work} \textbf{Adversarial patch attacks and defenses.} It is widely known that standard deep network models suffer when presented with occlusion~\cite{zhu2019robustness, compnet_wacv, korty2020compnets_cvpr}. When a fraction of the image is occluded, CNNs perform poorly, even when human subjects are able to handle such occlusion~\cite{zhu2008unsupervised}. Deep networks' fragility under occlusion is not limited to ``natural'' examples: their accuracy drops to near zero when presented with carefully crafted adversarial patches. The first work on adversarial patches showed that monochromatic patches with randomized locations can fool networks~\cite{fawzi2016measuring}. This attack was further extended to the white-box setting by using model gradients to optimize the patch texture~\cite{brown2017adversarial}. Newer black-box adversarial patch attacks improve upon these works by adapting the texture of the patch to improve the success rate with a smaller area~\cite{croce2020sparsers, yang2020patchattack}, and refine the location search strategy using reinforcement learning rather than random search~\cite{yang2020patchattack}. These algorithmic improvements reduce the patch area and number of queries needed to induce a misclassification, and they are able to circumvent even state-of-the-art defenses against perturbation-based attacks~\cite{yang2020patchattack}. Many defenses against perturbation-based adversarial attacks have been proposed~\cite{xie2019denoising, kannan2018adversarial, goodfellow2014explaining, madry2017towards_resistant}; however, defenses against patch-based attacks are less well studied. Two recent works have adapted adversarial training to the patch attack setting. Chiang \etal~\cite{certified_patch_defenses} show that adversarial training can lead to certified robustness against attacks based on small patches (less than 1\% of image area).
Moreover, Rao \etal~\cite{rao2020adversarial_patch_training} use adversarial training to defend against black-box patch attacks. Both of these approaches rely on adversarial training to improve the robustness of standard CNN architectures to patch-based attacks. In contrast, we are the first to show that black-box patch-based adversarial attacks can be defended against \emph{without adversarial training} by \emph{using an architecture that is innately robust to occlusion}. This is important from a theoretical standpoint, as we approach network architectures that are inherently robust to attacks. This is also important from a practical standpoint, as adversarial training is expensive, especially for sophisticated threat models~\cite{xie2019denoising}. \textbf{Robustness to partial occlusion.} Occlusion robustness has been widely studied in the computer vision literature. Like adversarial training, some approaches attempt to make models more robust by augmenting the training data with occluded examples~\cite{yun2019cutmix, devries2017improved}. Others have proposed architectural improvements that allow the model to detect and ignore occluders~\cite{xiao2019tdapnet}. Compositional models~\cite{geman2002,jin2006,fidler2007,zhu2008,wu2010,kortylewski2019greedy} have been studied as one possible architecture that can naturally detect and ignore occlusion~\cite{kortylewski2017model}. Liao \etal~\cite{liao2016learning} integrate compositionality in CNN models by regularizing their features to represent part-like detectors. Zhang \etal~\cite{zhang2018interpretable} also use part detectors with a localized spatial distribution. Compositional Convolutional Neural Networks extend this approach to use a generative model of part activations~\cite{korty2020compnets_cvpr, compnet_wacv}, allowing the model to explain away occlusions.
Kortylewski \etal~\cite{compnet_wacv} propose a method to combine the output of standard deep networks with compositional networks to improve fine-grained recognition. However, we are the first to consider an explicit approach that improves CompNets' fine-grained recognition accuracy, and to study their robustness to adversarial patch attacks. \begin{figure*}[t] \centering \begin{subfigure}{0.49\linewidth} \centering \begin{subfigure}{0.32\linewidth} \centering \includegraphics[width=\linewidth]{vcs/no_finetuning/1.jpg} \end{subfigure} \begin{subfigure}{0.32\linewidth} \centering \includegraphics[width=\linewidth]{vcs/no_finetuning/2.jpg} \end{subfigure} \begin{subfigure}{0.32\linewidth} \centering \includegraphics[width=\linewidth]{vcs/no_finetuning/3.jpg} \end{subfigure} \begin{subfigure}{0.32\linewidth} \centering \includegraphics[width=\linewidth]{vcs/no_finetuning/4.jpg} \end{subfigure} \begin{subfigure}{0.32\linewidth} \centering \includegraphics[width=\linewidth]{vcs/no_finetuning/5.jpg} \end{subfigure} \begin{subfigure}{0.32\linewidth} \centering \includegraphics[width=\linewidth]{vcs/no_finetuning/6.jpg} \end{subfigure} \begin{subfigure}{0.32\linewidth} \centering \includegraphics[width=\linewidth]{vcs/no_finetuning/7.jpg} \end{subfigure} \begin{subfigure}{0.32\linewidth} \centering \includegraphics[width=\linewidth]{vcs/no_finetuning/8.jpg} \end{subfigure} \begin{subfigure}{0.32\linewidth} \centering \includegraphics[width=\linewidth]{vcs/no_finetuning/9.jpg} \end{subfigure} \caption{Without part-based finetuning} \end{subfigure} \begin{subfigure}{0.49\linewidth} \centering \begin{subfigure}{0.32\linewidth} \centering \includegraphics[width=\linewidth]{vcs/finetuning/1.jpg} \end{subfigure} \begin{subfigure}{0.32\linewidth} \centering \includegraphics[width=\linewidth]{vcs/finetuning/2.jpg} \end{subfigure} \begin{subfigure}{0.32\linewidth} \centering \includegraphics[width=\linewidth]{vcs/finetuning/3.jpg} \end{subfigure} \begin{subfigure}{0.32\linewidth} 
\centering \includegraphics[width=\linewidth]{vcs/finetuning/4.jpg} \end{subfigure} \begin{subfigure}{0.32\linewidth} \centering \includegraphics[width=\linewidth]{vcs/finetuning/5.jpg} \end{subfigure} \begin{subfigure}{0.32\linewidth} \centering \includegraphics[width=\linewidth]{vcs/finetuning/6.jpg} \end{subfigure} \begin{subfigure}{0.32\linewidth} \centering \includegraphics[width=\linewidth]{vcs/finetuning/7.jpg} \end{subfigure} \begin{subfigure}{0.32\linewidth} \centering \includegraphics[width=\linewidth]{vcs/finetuning/8.jpg} \end{subfigure} \begin{subfigure}{0.32\linewidth} \centering \includegraphics[width=\linewidth]{vcs/finetuning/9.jpg} \end{subfigure} \caption{With part-based finetuning} \end{subfigure} \caption{Part-based finetuning learns more class-specific and discriminative vMF clusters, increasing CompNets' fine-grained classification performance.} \label{fig:part_finetuning} \end{figure*} \section{Methods} \label{sec:methods} In this section, we explain the prior formulation of CompNets and describe part-based finetuning, our improvement of CompNets for fine-grained classification. \subsection{Prior work} \label{sec:prior} \textbf{Notation.} The output of the layer $l$ in the DCNN is referred to as \textit{feature map} $F^l = \psi(I,\Omega) \in \mathbb{R}^{H \times W \times D}$, where $I$ and $\Omega$ are the input image and the parameters of the feature extractor, respectively. \textit{Feature vectors} are vectors in the feature map, $f_i^l \in \mathbb{R}^D$ at position $i$, where $i$ is defined on the 2D lattice of $F^l$ with $D$ being the number of channels in the layer $l$ . We omit subscript $l$ in the following for clarity since the layer $l$ is fixed \emph{a priori} in the experiments. \textbf{CompNets.} Compositional Convolutional Neural Networks (CompNets) \cite{korty2020compnets_cvpr} are deep neural network architectures in which the fully connected classification head is replaced with a differentiable compositional model. 
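For concreteness, the feature-map notation above can be illustrated with a short sketch; the shapes below are assumptions for illustration only:

```python
# Illustrative sketch of the notation: a layer output F of shape (H, W, D)
# viewed as H*W feature vectors f_i, each normalized to unit length as the
# vMF model below requires. Shapes are hypothetical, not from the paper.
import numpy as np

H, W, D = 7, 7, 512                              # hypothetical spatial size / channels
F = np.random.default_rng(0).normal(size=(H, W, D))

f = F.reshape(H * W, D)                          # one feature vector per lattice position i
f /= np.linalg.norm(f, axis=1, keepdims=True)    # enforce ||f_i|| = 1
```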
In particular, the classification head defines a probabilistic generative model $p(F|y)$ of the features $F$ for an object class $y$: \begin{equation} \label{eq:vmf1} p(F|\Theta_y) = \sum_m \nu_m p(F|\theta^m_y), \hspace{.1cm}\nu_m \in\{0,1\}, \sum_{m=1}^M \nu_m = 1. \end{equation} Here, $M$ is the number of mixtures of compositional models per object class and $\nu_m$ is a binary assignment variable that indicates which mixture component is active. $\Theta_y= \{\theta^m_y = \{\mathcal{A}^m_y,\Lambda\}|m=1,\dots,M\}$ are the overall compositional model parameters for the class $y$. The individual mixture components are defined as: \begin{equation} \label{eq:vmf2} p(F|\theta^m_y) = \prod_{i} p(f_i|\mathcal{A}_{i,y}^m,\Lambda). \end{equation} Note how the distribution \textit{decomposes} the feature map $F$ into a set of individual feature vectors $f_i$. $\mathcal{A}^m_y=\{\mathcal{A}^m_{i,y}|i \in [H, W] \}$ are the parameters of the mixture components at every position $i$ on the 2D lattice of the feature map $F$. In particular, $\mathcal{A}^m_{i,y} = \{\alpha^m_{i,k,y}|k=1,\dots,K\}$ are mixture coefficients and $\Lambda = \{\lambda_k = \{\sigma_k,\mu_k \} | k=1,\dots,K \}$ are the parameters of von Mises-Fisher (vMF) distributions: \begin{equation} \label{eq:vmf3} p(f_i|\mathcal{A}_{i,y}^m,\Lambda) = \sum_k \alpha_{i,k,y}^m p(f_i|\lambda_k), \end{equation} \begin{equation} \label{eq:vmfprob} p(f_i|\lambda_k) = \frac{e^{\sigma_k \mu_k^T f_i}}{Z(\sigma_k)}, ||f_i|| = 1, ||\mu_k|| = 1. \end{equation} Note that $K$ is the number of components in the vMF mixture distributions and $\sum_{k=1}^K \alpha^m_{i,k,y} = 1$. $Z(\sigma_k)$ is the normalization constant. The model parameters $\{\Omega,\{\Theta_y\}\}$ can be trained end-to-end as discussed in \cite{korty2020compnets_cvpr}. \textbf{Partial occlusion.} Compositional networks can be augmented with an outlier model to enhance their robustness to partial occlusion and patch attacks.
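The per-position vMF mixture likelihood above can be sketched as follows; all parameter values here are illustrative assumptions, and the normalizer $Z(\sigma_k)$ is omitted, so the result is an unnormalized score:

```python
# Minimal sketch of the per-position vMF mixture likelihood
# p(f_i | A, Lambda) = sum_k alpha_k * exp(sigma_k * mu_k^T f_i) / Z(sigma_k).
# Z(sigma_k) is omitted (unnormalized score); parameters are synthetic.
import numpy as np

rng = np.random.default_rng(0)
D, K = 64, 8                                      # channels, vMF components

mu = rng.normal(size=(K, D))
mu /= np.linalg.norm(mu, axis=1, keepdims=True)   # ||mu_k|| = 1
sigma = np.full(K, 20.0)                          # concentration parameters
alpha = np.full(K, 1.0 / K)                       # mixture coefficients at position i

f = rng.normal(size=D)
f /= np.linalg.norm(f)                            # ||f_i|| = 1

vmf = np.exp(sigma * (mu @ f))                    # exp(sigma_k mu_k^T f_i), Z omitted
p_f = float(alpha @ vmf)                          # sum_k alpha_k p(f_i | lambda_k)
```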
The intuition is that at each position $i$ in the image, either the object model $p(f_i|\mathcal{A}^m_{i,y},\Lambda)$ or an outlier model $p(f_i|\beta,\Lambda)$ is active: \begin{align} &p(F|\theta^m_y,\beta)\hspace{-0.075cm} =\hspace{-0.075cm} \prod_{i} p(f_i|\beta,\Lambda)^{1-z^m_i} p(f_i|\mathcal{A}^m_{i,y},\Lambda)^{z^m_i}.\label{eq:occ} \end{align} The binary variables $\mathcal{Z}^m=\{z^m_i \in \{0,1\} | i \in \mathcal{P}\}$ indicate if the object is occluded at position $i$ for mixture component $m$. The outlier model is defined as: \begin{align} p(f_i|\beta,\Lambda) = \sum_{k} \beta_{n,k} p(f_i|\sigma_k,\mu_k). \end{align} Note that the model parameters $\beta$ are independent of the position $i$ in the feature map and thus the model has no spatial structure. The parameters of the occluder models $\beta$ are learned from clustered features of random natural images that do not contain any object of interest \cite{kortylewski2020compnets_ijcv}. When the occlusion model assigns a higher probability (or weight) to a particular region than the class-specific mixture model, the CompNet recognizes an occluder and ignores this region when making a classification decision. This allows one to visualize a spatial map of ``occlusion scores,'' which provides the basis for CompNets' interpretability (see the end of Section~\ref{sec:experiments}). This occlusion model can also fire for the background, as in Figure~\ref{fig:stopsign}. \subsection{Part-based finetuning} \label{sec:part_based_finetuning} When training on the German Traffic Sign dataset, CompNets have trouble differentiating speed limit signs, which share the overall shape and color but differ in their digits. To deal with this difficulty, we propose a principled way of improving CompNets' fine-grained classification accuracy. To make feature vectors more class-specific, we finetune them to be \emph{predictive of the image class}.
This causes the CompNet to learn to more directly associate specific features with certain classes, improving fine-grained classification performance. In particular, rather than using a fully-connected layer to synthesize features across the image, we predict the class \emph{directly} from each image feature location using a simple linear classifier: \begin{equation} p(y|f_i) = \text{softmax}(W f_i), \end{equation} where $i$ indexes over different local features in the image. However, since CNN features are local, not all regions can be predictive of the class. Rather, our backbone should use the \emph{most predictive} features to determine the final class output. This justifies a max-pooling operation over classification scores: \begin{align} p(y|F) &= \max_i p(y|f_i)\\ &= \max_i \text{softmax}(W f_i). \end{align} Thus, if a feature tends to be associated with many different classes, its class probability will be exceeded by a feature that is more predictive of a particular class. Part-based finetuning leads to class-specific vMF clusters. In Figure~\ref{fig:part_finetuning}, vMF clusters corresponding to specific classes arise: \eg, for the ``70'' speed limit and for the ``30'' speed limit. In contrast, without part-based finetuning, we observe generic clusters that are shared between the different speed limits. It is also worth noting that adding part-based finetuning makes vMF clusters less redundant---specificity leads to fewer duplicated clusters. As we show in Section~\ref{sec:results_main}, part-based finetuning is one key improvement needed to achieve parity in classification accuracy on the fine-grained GTSRB dataset. \subsection{Combining CNNs with CompNets} \label{sec:combining_cnns_compnets} To improve fine-grained recognition accuracy further, we use the combination approach proposed in~\cite{compnet_wacv}. In this method, we first classify the image with a CNN, and only predict with a CompNet if the classifier's confidence drops below a certain threshold.
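The two-stage combination just described can be sketched as follows, under the assumption that both models expose class-probability vectors; all names here are hypothetical:

```python
# Sketch of the two-stage CNN/CompNet combination: keep the CNN prediction
# when it is confident, otherwise fall back to the occlusion-robust CompNet.
# `cnn_probs` and `compnet_probs` are assumed class-probability vectors.
import numpy as np

def two_stage_predict(cnn_probs, compnet_probs, threshold=0.95):
    """Return the CNN's class unless its confidence is below `threshold`."""
    if cnn_probs.max() >= threshold:
        return int(np.argmax(cnn_probs))
    return int(np.argmax(compnet_probs))

# Confident CNN: its prediction is kept.
print(two_stage_predict(np.array([0.98, 0.01, 0.01]),
                        np.array([0.10, 0.80, 0.10])))   # -> 0
# Unconfident CNN (e.g. under occlusion): defer to the CompNet.
print(two_stage_predict(np.array([0.50, 0.30, 0.20]),
                        np.array([0.10, 0.80, 0.10])))   # -> 1
```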
For most of our experiments, we use a confidence threshold of 0.95. \section{Experiments} \label{sec:experiments} \input{experiments/01_setup} \input{experiments/02_main} \input{experiments/03_ablations} \input{experiments/04_occlusion_localization} \section{Conclusion} In this paper, we show that compositional representations are robust to patch attacks out of the box. Without expensive adversarial training, CompNets are able to detect, locate, and ignore adversarial patches. We confirm earlier findings that adversarial training of standard CNNs improves robustness to patch attacks, but show that adversarial training helps less than compositional representations. We also introduce part-based finetuning, a novel improvement to CompNets' training that boosts their fine-grained classification performance. Finally, we show that CompNets' adversarial robustness is uniquely interpretable: their generative model can explain why a defense succeeds or fails. We are the first to demonstrate an architecture that can naturally resist patch-based attacks without adversarial training. \section{Introduction} \begin{figure} \centering \begin{subfigure}{\linewidth} \centering \includegraphics[width=.9\linewidth]{occlusion_detection/gtsrb/attack_failure/cropped_712_14_good.jpg} \end{subfigure}\\ \begin{subfigure}{\linewidth} \centering \includegraphics[width=.9\linewidth]{occlusion_detection/gtsrb/attack_failure/cropped_712_14_adv.jpg} \end{subfigure} \caption{An attempted Texture Patch Attack~\cite{yang2020patchattack} that fails to induce misclassification. The CompNet has detected the adversarial patch and ignored it. In the right figures, brighter color intensity corresponds to higher \emph{occlusion scores}---regions that the CompNet recognizes as occluders or background.} \label{fig:stopsign} \end{figure} Patch-based adversarial examples are a powerful class of adversarial attacks, introduced in \cite{brown2017adversarial}. 
In contrast with gradient-based adversarial examples, these attacks modify the input in a perceptible way but only in a localized region. They are general, and they can be performed in a black-box manner~\cite{yang2020patchattack, croce2020sparsers}, \ie, they do not require access to the parameters of the model. Patch-based adversarial attacks have been shown to fool state-of-the-art defenses, even in the black-box setting~\cite{yang2020patchattack}. Reliable defenses against black-box patch attacks remain elusive, and little is known about why deep networks fail to resist adversarial patches. In this work, we study two orthogonal perspectives for defending against black-box patch attacks: \emph{adversarial training} and \emph{deep compositional architectures}. The first perspective---that adversarial training \cite{madry2017towards_resistant} is a potential solution for protecting against patch attacks---is based on the assumption that deep networks can learn to be invariant to adversarial examples, if those examples are included in the training data. Adversarial training has been shown to be highly effective against imperceptible attacks \cite{madry2017towards_resistant,xie2019denoising}. Moreover, a recent study showed promising results for defending against patch attacks using adversarial training~\cite{rao2020adversarial_patch_training}. Our extensive experiments confirm that adversarial training, as suggested in \cite{rao2020adversarial_patch_training}, improves the robustness of deep networks to state-of-the-art patch attacks, reducing the success rate to about 80\% for PatchAttack \cite{yang2020patchattack} and 75\% for Sparse-RS \cite{croce2020sparsers} on the PASCAL3D+ dataset. However, in general, the attack success rates remain high: deep networks have difficulties in learning to be invariant to patch-based attacks. The challenging combinatorial variability of patch position and texture remains.
Similar limitations of deep networks have also been observed in recent work on training with data augmentation to induce robustness to occlusion~\cite{kortylewski2020compnets_ijcv}. From a computer vision perspective, patch attacks can be interpreted as a form of partial occlusion. In natural images, objects are frequently occluded by other objects, and robustness to partial occlusion is a long-standing problem in computer vision that has received significant attention \cite{wang2017detecting, zhang2018deepvoting, george2017generative}. The difference between patch attacks and natural occlusion is that the occluder's position and texture are optimized to fool the model; as such, it is a ``maximally difficult'' occluder. In this work, we investigate whether models with state-of-the-art robustness to natural occlusion also have enhanced robustness to patch attacks. This paper builds on recent work that introduces compositional deep networks (CompNets). CompNets have been shown to be highly robust to partial occlusion~\cite{compnet_wacv,korty2020compnets_cvpr,wang2020robust,kortylewski2020compnets_ijcv}, but little is known about whether these models can resist occlusions whose textures and locations are adversarially selected. This is important, since perceptible but localized patches are a primary way that adversarial examples can be manifested in the real world~\cite{brown2017adversarial}. Our experiments demonstrate that CompNets have a strong natural robustness to patch-based adversarial attacks. As these models are robust to occlusion by design, we find that CompNets \textbf{do not need to be adversarially trained to be robust} to adversarial patches. Moreover, we find that CompNets are \textbf{significantly more robust than comparable CNN models that are trained with adversarial patches}~\cite{rao2020adversarial_patch_training}. We are the first to defend against black-box adversarial patches without adversarial training.
As part of our empirical studies, we observe that CompNets have trouble differentiating classes that are visually similar, \eg, speed limit signs with different numbers. To improve CompNets' ability to differentiate similar classes, we propose a novel finetuning technique, \emph{part-based finetuning}, which makes CompNets' part features more relevant and class-specific. We find that this method, along with other techniques found in the literature~\cite{compnet_wacv}, improves CompNets' classification accuracy on a fine-grained classification dataset (the German Traffic Sign Recognition Benchmark~\cite{Stallkamp2012gtsrb}). This brings CompNets' accuracy to parity with CNNs trained normally and adversarially, while maintaining superior robustness to patch attacks. One of the unique properties of CompNets is their interpretability. Because CompNets build a generative model of each class, they can locate and ignore occluders~\cite{compnet_wacv, korty2020compnets_cvpr}. This allows one to visualize a semantic ``occlusion score'' corresponding to blocked regions in the image. We find that this interpretability extends to the adversarial case: \textbf{CompNets can detect adversarial patches and ignore them. As such, their robustness is highly interpretable}. We verify these results both quantitatively and qualitatively. In summary, we make several important contributions in this work. We show: \begin{enumerate} \item Compositional architectures are robust out of the box. \item Adversarial training of standard networks does improve robustness, but is not as effective as compositional architectures at defending against patch attacks. \item Combining the outputs of standard architectures and compositional architectures leads to the best accuracy-robustness trade-off. These models achieve a high accuracy for non-attacked images and have enhanced robustness to patch attacks compared to adversarially trained models.
\item Enhancing compositional representations with part-based finetuning leads to even better performance due to improved fine-grained recognition. \end{enumerate} \subsection{Results} \label{sec:results_main} \begin{table} \centering \begin{tabular}{@{}llcccc@{}} \toprule & \multicolumn{5}{c}{Attack success rates: PASCAL3D+} \\ \midrule & & \multicolumn{1}{c|}{Acc.} & \begin{tabular}[c]{@{}c@{}}TPA\\ $(n=1)$\end{tabular} & \begin{tabular}[c]{@{}c@{}}TPA\\ $(n=4)$\end{tabular} & \begin{tabular}[c]{@{}c@{}}Sparse-RS\\ $(n=1)$\end{tabular} \\ \midrule \parbox[t]{1mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{untargeted}}} & VGG16 & \multicolumn{1}{c|}{\textbf{98.8}} & 91.6 & 95.4 & 99.6 \\ & \begin{tabular}[c]{@{}l@{}}VGG16\\\hspace{2mm}\small{+ adv. train~\cite{rao2020adversarial_patch_training}}\end{tabular} & \multicolumn{1}{c|}{96.0} & 34.2 & 79.5 & 75.4 \\ & CompNet & \multicolumn{1}{c|}{98.2} & \textbf{7.8} & \textbf{24.9} & \textbf{18.0} \\ \midrule \parbox[t]{1mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{targeted}}} & VGG16 & \multicolumn{1}{c|}{\textbf{98.8}} & 52.6 & 88.0 & 84.7 \\ & \begin{tabular}[c]{@{}l@{}}VGG16\\\hspace{2mm}\small{+ adv. 
train~\cite{rao2020adversarial_patch_training}}\end{tabular} & \multicolumn{1}{c|}{96.0} & 8.6 & 53.3 & 33.5 \\ & CompNet & \multicolumn{1}{c|}{98.2} & \textbf{2.4} & \textbf{8.2} & \textbf{5.8} \\ \bottomrule \end{tabular} \caption{CompNets are significantly more robust than normal and adversarially trained CNNs under targeted and untargeted Texture Patch Attacks~\cite{yang2020patchattack} and Sparse-RS attacks~\cite{croce2020sparsers}.} \label{table:pascal_all} \end{table} \begin{table} \centering \begin{tabular*}{\linewidth}{@{}lccc@{}} \toprule \multicolumn{4}{c}{Untargeted attack success rates: GTSRB} \\ \midrule & \multicolumn{1}{c|}{Acc.} & \begin{tabular}[c]{@{}c@{}}TPA\\ $(n=1)$\end{tabular} & \begin{tabular}[c]{@{}c@{}}Sparse-RS\\ $(n=1)$\end{tabular} \\ \midrule VGG16 & \multicolumn{1}{c|}{95.2} & 90.1 & 92.6 \\ \begin{tabular}[c]{@{}l@{}}VGG16\\\hspace{2mm}\small{+ adv. train~\cite{rao2020adversarial_patch_training}}\end{tabular} & \multicolumn{1}{c|}{95.5} & 79.9 & 79.4 \\ CompNet & \multicolumn{1}{c|}{61.7} & \textbf{23.7} & \textbf{43.9} \\ \begin{tabular}[c]{@{}l@{}}CompNet\\\hspace{2mm}\small{+ part-based finetuning}\end{tabular} & \multicolumn{1}{c|}{75.9} & 40.6 & 64.2 \\ \begin{tabular}[c]{@{}l@{}}CompNet\\\hspace{2mm}\small{+ part-based finetuning}\\\hspace{2mm}\small{+ two-stage combination~\cite{compnet_wacv}}\\\hspace{2mm}\small{threshold=0.99, temperature 2}\end{tabular} & \multicolumn{1}{c|}{85.1} & 36.4 & 66.0 \\ \begin{tabular}[c]{@{}l@{}}CompNet\\\hspace{2mm}\small{+ part-based finetuning}\\\hspace{2mm}\small{+ two-stage combination~\cite{compnet_wacv}}\\\hspace{2mm}\small{threshold=0.95, temperature 1}\end{tabular} & \multicolumn{1}{c|}{\textbf{93.0}} & 64.4 & 75.8 \\ \bottomrule \end{tabular*} \caption{CompNets are significantly more robust than normal and adversarially trained CNNs on the GTSRB dataset. 
Adding our part-based finetuning (Section~\ref{sec:part_based_finetuning}) and two-stage combination~\cite{compnet_wacv} improves accuracy while maintaining CompNets' robustness advantage.} \label{table:gtsrb_untargeted} \end{table} \begin{table} \centering \begin{tabular}{llll} \toprule \multicolumn{4}{c}{Untargeted TPA success rate: PASCAL3D+} \\ \midrule \# patches: & $n=1$ & $n=4$ & $n=8$ \\ \midrule VGG16 & 91.6 & 95.4 & 94.1 \\ VGG16 (+adv. train) & 34.2 & 79.5 & 95.7 \\ CompNet & \textbf{7.8} & \textbf{24.9} & \textbf{49.2} \\ \bottomrule \end{tabular} \caption{CompNets are more robust than adversarially trained CNNs, even with more patches.} \label{table:num_patch_ablation} \end{table} \begin{table} \centering \begin{tabular}{llll} \toprule \multicolumn{4}{c}{Untargeted TPA success rate: PASCAL3D+} \\ \midrule occlusion area: & 1\% & 10\% & 50\% \\ \midrule VGG16 & 33.6 & 91.6 & 100.0 \\ VGG16 (+adv. train) & 13.1 & 34.2 & 96.1 \\ CompNet & \textbf{2.7} & \textbf{7.8} & \textbf{71.8} \\ \bottomrule \end{tabular} \caption{CompNets are more robust than adversarially trained CNNs, even with larger occlusion area.} \label{table:area_ablation} \end{table} \textbf{CompNets are robust to patch attacks.} Tables~\ref{table:pascal_all} and~\ref{table:gtsrb_untargeted} show that CompNets are robust to patch attacks. When only one patch is used, CompNets are able to defend against more than 90\% of TPA attacks and 80\% of Sparse-RS attacks on PASCAL3D+, and more than 50\% of attacks on GTSRB. Generally, we find that CompNets are more robust on PASCAL3D+ than on GTSRB, which has a larger number of similar classes. This shows that CompNets are the first architecture that is naturally robust to black-box patch attacks.
\textbf{CompNets are more robust than adversarially trained architectures.} Our results show that CompNets are significantly more robust than normal and adversarially trained CNNs on both PASCAL3D+ (Table~\ref{table:pascal_all}) and GTSRB (Table~\ref{table:gtsrb_untargeted}). For example, CompNets are up to 4x more robust than a comparable adversarially trained CNN on PASCAL3D+, and up to 3x more robust on GTSRB. Remarkably, training the CompNet comes at negligible computational cost compared to adversarial training, and it has superior robustness. This result has not been shown in prior work. \textbf{Part-based finetuning (Section~\ref{sec:part_based_finetuning}) improves fine-grained recognition.} On the GTSRB traffic sign dataset, the accuracy of a normal CompNet is unsatisfactory. This is due to failures in fine-grained classification. Adding part-based finetuning improves accuracy by nearly 15 points, and using the ensembling approach presented in~\cite{compnet_wacv} improves the accuracy to 93\%, on par with a standard deep network backbone. The best of these models is still more robust than an adversarially trained standard network under these attacks. \textbf{Trading accuracy for robustness.} It is worth noting that the improvements in fine-grained recognition come at the expense of slightly worse robustness. Adding part-based finetuning decreases robustness slightly, and combining the output of the model with a standard deep network classifier (as in Section~\ref{sec:combining_cnns_compnets}) harms robustness further. This trade-off between accuracy and robustness has been studied before~\cite{zhang2019theoretically}. Interestingly, the adversarially trained model seems to perform approximately as well as the non-adversarially trained model, suggesting that this trade-off is not at play here. This observation is supported by other works~\cite{xie2020adversarial}.
Fine-grained recognition accuracy with part-based finetuning is still lower than ideal, and combining the output of a CompNet with the output of a standard deep network trades accuracy for robustness. We leave open the question of how to improve CompNets' fine-grained recognition performance to future work. \subsection{Interpretable robustness: CompNets can detect adversarial occluders} \textbf{CompNets' robustness is interpretable.} Improved robustness to patch attacks is not the only advantage of CompNets over adversarially trained CNNs. CompNets' robustness is highly interpretable: we can visualize which image regions the CompNet recognizes as occluded and explain why the CompNet is able to resist attacks. To locate occluders, we measure the \emph{occlusion scores}: the score (or unnormalized log-probability) that the CompNet assigns to the occlusion model, if the probability of occlusion exceeds some threshold (see Section~\ref{sec:prior}). \begin{figure} \centering \includegraphics[width=\linewidth]{roc_fig} \caption{ROC curve for adversarial patch localization on PASCAL3D+.} \label{fig:roc} \end{figure} \begin{figure*} \centering \begin{subfigure}[b]{0.352\linewidth} \begin{subfigure}[b]{\linewidth} \centering \includegraphics[width=\linewidth]{occlusion_detection/pascal/attack_success/cropped_436_motorbike_good.jpg} \end{subfigure}\\ \begin{subfigure}[b]{\linewidth} \centering \includegraphics[width=\linewidth]{occlusion_detection/pascal/attack_success/cropped_436_motorbike.jpg} \end{subfigure} \caption{} \label{fig:defense_failure_1} \end{subfigure} \begin{subfigure}[b]{0.54\linewidth} \begin{subfigure}[b]{\linewidth} \centering \includegraphics[width=\linewidth]{occlusion_detection/pascal/attack_success/cropped_568_car_good.jpg} \end{subfigure}\\ \begin{subfigure}[b]{\linewidth} \centering \includegraphics[width=\linewidth]{occlusion_detection/pascal/attack_success/cropped_568_bus_adv.jpg} \end{subfigure} \caption{} \label{fig:defense_failure_2}
\end{subfigure} \begin{subfigure}[b]{0.22\linewidth} \begin{subfigure}[b]{\linewidth} \centering \includegraphics[width=\linewidth]{occlusion_detection/gtsrb/attack_success/cropped_211_15_good.jpg} \end{subfigure}\\ \begin{subfigure}[b]{\linewidth} \centering \includegraphics[width=\linewidth]{occlusion_detection/gtsrb/attack_success/cropped_211_16_adv.jpg} \end{subfigure} \caption{} \label{fig:defense_failure_3} \end{subfigure} \begin{subfigure}[b]{0.22\linewidth} \begin{subfigure}[b]{\linewidth} \centering \includegraphics[width=\linewidth]{occlusion_detection/gtsrb/attack_success/cropped_58_25_good.jpg} \end{subfigure}\\ \begin{subfigure}[b]{\linewidth} \centering \includegraphics[width=\linewidth]{occlusion_detection/gtsrb/attack_success/cropped_58_25_adv.jpg} \end{subfigure} \caption{} \label{fig:defense_failure_4} \end{subfigure} \begin{subfigure}[b]{0.22\linewidth} \begin{subfigure}[b]{\linewidth} \centering \includegraphics[width=\linewidth]{occlusion_detection/gtsrb/attack_success/cropped_349_2_good.jpg} \end{subfigure}\\ \begin{subfigure}[b]{\linewidth} \centering \includegraphics[width=\linewidth]{occlusion_detection/gtsrb/attack_success/cropped_349_5_adv.jpg} \end{subfigure} \caption{} \label{fig:defense_failure_5} \end{subfigure} \caption{Failed TPA defenses and occlusion maps. Figures~(a,b) are from PASCAL3D+ and~(c-e) are from GTSRB.} \label{fig:failed_defenses} \end{figure*} As shown in Figure~\ref{fig:roc}, these occlusion scores can be used to locate patches generated by either the TPA or Sparse-RS attacks. Note that this occlusion score does yield some false positives (if the model sees something in the image that does not conform to the ``object prototype"). False positives may arise because the model segments out the background, as in Figure~\ref{fig:stopsign}. Nevertheless, the occlusion scores achieve high recall at low thresholds, demonstrating that CompNets can successfully locate adversarial patches. 
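The occlusion-score localization described above can be sketched as follows; the per-position log-scores here are synthetic stand-ins for the actual model likelihoods, and the threshold value is an assumption:

```python
# Sketch of an occlusion-score map: at each position i, compare the (log)
# score of the outlier model against the object model; positions where the
# outlier model wins by more than a threshold are flagged as occluded.
import numpy as np

def occlusion_map(log_p_object, log_p_outlier, threshold=0.0):
    """log_p_object, log_p_outlier: (H, W) arrays of per-position log-scores."""
    scores = log_p_outlier - log_p_object        # "occlusion score" per position
    return scores, scores > threshold            # score map + binary occluder mask

H, W = 4, 4
log_obj = np.zeros((H, W))                       # synthetic object-model scores
log_out = np.zeros((H, W))
log_out[1:3, 1:3] = 5.0                          # a synthetic 2x2 "patch" region

scores, mask = occlusion_map(log_obj, log_out)
print(int(mask.sum()))                           # -> 4 flagged positions
```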
Occlusion maps also provide useful visualizations that explain how and why CompNets defend against patch attacks. In Figure~\ref{fig:successful_defenses}, we can see some cases where the CompNet detects the occluder and successfully defends against a patch attack for a variety of object classes. In Figure~\ref{fig:bus}, we see that the model is able to detect and ignore patches even when they blend in. In Figure~\ref{fig:train}, the attack attempts to fool the model with semantically meaningful patches---wheels---but the CompNet recognizes them as occluders and ignores them. Moreover, the model can detect more than one patch well (Figures~\ref{fig:plane} and~\ref{fig:train}), backing up the quantitative results in Table~\ref{table:num_patch_ablation}. Even when the CompNet misclassifies the attacked image, the results are explainable. For instance, in Figure~\ref{fig:defense_failure_1}, we can see that the model has trouble determining the correct class of car because there is already significant occlusion present in the image (due to the signs on the car door and window). This is also seen in Figure~\ref{fig:defense_failure_2}, where it is hard to make out the main object, due to lack of contrast. Finally, the model may misclassify the image when salient and discriminative parts of the image are occluded: in Figure~\ref{fig:defense_failure_2}, the wheels are occluded, which may contribute to a misclassification. Moreover, in Figures~\ref{fig:defense_failure_3},~\ref{fig:defense_failure_4}, and~\ref{fig:defense_failure_5}, occluding a salient part of the traffic sign causes the model to confuse it for a similar class (even though it still detects the patches). One practical advantage of this interpretability is that it allows us to better gauge our uncertainty about the model's predictions. In a real-world system, we could allow the model to predict an alternative ``uncertain'' class whenever there is significant occlusion in the image.
This could allow for graceful handling of adversarial patches and natural occluders, perhaps by deferring to other sensors or acting more cautiously. We hope that this advantage of compositional models is explored more thoroughly in future work.
\section{Introduction} To solve dense generalized eigenvalue problems, the most popular algorithm is the QZ algorithm \cite{moler1973algorithm}. It consists of a direct reduction to Hessenberg-triangular form, an iterative reduction to generalized Schur form, and finally a direct step to calculate the eigenvectors. This text focuses on the iterative reduction to generalized Schur form. Consider a pencil $(H,T)$ in Hessenberg-triangular form: \begin{equation} (H,T) = \left( \begin{bmatrix} h_{11} & \dots & \dots & h_{1,n} \\ h_{21} & \ddots & & \vdots \\ & \ddots & \ddots & \vdots \\ 0 & & h_{n,n-1} & h_{nn} \\ \end{bmatrix}, \begin{bmatrix} t_{11} & \dots & \dots & t_{1,n} \\ & \ddots & & \vdots \\ & & \ddots & \vdots \\ 0 & & & t_{nn} \\ \end{bmatrix} \right). \label{eq:pencil} \end{equation} On a high level, the iterative reduction of this pencil to Schur form consists of the following steps: \begin{enumerate} \item Select shifts. Often, Wilkinson shifts are used: the eigenvalues of the $2\times 2$ subpencil in the bottom right. \item Chase a bulge through the pencil using these shifts. \item If a subdiagonal entry in $H$ is small enough, it is set to zero and the eigenvalue problem is split into separate subproblems. \item If a diagonal entry in $T$ is small enough, it is set to zero, and the pencil is transformed to move this entry to the top or bottom of the pencil. Once at the edge, a single infinite eigenvalue can be split off from the pencil. \item Repeat steps 1 through 4 until every subproblem is of size $1\times 1$ (or in the case of complex eigenvalues in real pencils: $2\times 2$). \end{enumerate} More detailed explanations of the QZ algorithm can be found in \cite{moler1973algorithm, kressnerbook}. This paper focuses on steps 3 and 4, and more precisely on the criterion to determine whether the elements are small enough that they can be set to zero. 
If this criterion is too strict, we may perform extra iterations even though the computed eigenvalues are already accurate enough. If the criterion is not strict enough, we may get inaccurate eigenvalues. In the original paper by Moler and Stewart \cite{moler1973algorithm}, a subdiagonal entry is considered small enough if $|h_{i+1,i}| \le u\|H\|_F$ and a diagonal entry is considered small enough if $|t_{ii}| \le u\|T\|_F$. Here and throughout the rest of this text, $u$ will refer to the machine precision\footnote{There are different definitions of the machine precision in literature. We define $u$ as the distance between 1 and the smallest floating-point number larger than 1.}. We will refer to this as the \emph{normwise criterion}. If we want a backward stable decomposition, we cannot set any entries that do not satisfy this criterion to zero. In the QR algorithm, it is known that a similar normwise criterion leads to suboptimal results for graded matrices \cite{stewart1990eigenvalues}. Accuracy for such matrices is improved by using a criterion we will refer to as \emph{elementwise}. This criterion was also adapted to the QZ algorithm \cite{adlerborn2006parallel, kaagstrom2007multishift, steel2021multishift}, although we are not aware of detailed research supporting it. With this criterion, a subdiagonal entry is considered small enough if $|h_{i+1,i}| \le u(|h_{i i}| + |h_{i+1,i+1}|)$ and a diagonal entry is considered small enough if $|t_{ii}| \le u(|t_{i-1,i}| + |t_{i,i+1}|)$. Note that $|h_{i i}| + |h_{i+1,i+1}| \le \sqrt{2}\sqrt{h_{i i}^2 + h_{i+1,i+1}^2} \le \sqrt{2}\|H\|_F$, so an entry that satisfies the elementwise criterion will also satisfy the normwise criterion up to a constant factor. Additionally, $|h_{i i}| + |h_{i+1,i+1}|$ is usually much smaller than $\|H\|_F$ because only two entries are considered. In the rest of this text, we will ignore this constant factor and say that the elementwise criterion is stricter than the normwise criterion.
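Both subdiagonal tests are simple to state in code. The sketch below (not taken from any library; the graded matrix is a hypothetical example of ours) shows a case where the two tests disagree:

```python
import numpy as np

u = np.finfo(np.float64).eps  # machine precision in the sense defined above

def normwise_ok(H, i):
    """Normwise test for the subdiagonal entry h_{i+1,i} (0-based indices)."""
    return abs(H[i + 1, i]) <= u * np.linalg.norm(H, 'fro')

def elementwise_ok(H, i):
    """Elementwise test: compare against the two neighbouring diagonal entries."""
    return abs(H[i + 1, i]) <= u * (abs(H[i, i]) + abs(H[i + 1, i + 1]))

# A graded Hessenberg matrix: the trailing diagonal entries are far smaller
# than ||H||_F, so the two tests disagree on the entry H[2, 1].
H = np.array([[1.0,  1e-2,  0.0],
              [1e-9, 1e-4,  1e-5],
              [0.0,  1e-17, 1e-8]])
```

Here `normwise_ok(H, 1)` holds while `elementwise_ok(H, 1)` does not: the normwise test would discard a subdiagonal entry that still carries information about the small trailing eigenvalues.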
The criterion in the QR algorithm has received more attention in the work of Ahues and Tisseur \cite{ahues1997new}. Their paper provides a theoretical basis to support a different deflation criterion. They approximate the sensitivity of the eigenvalues w.r.t. the subdiagonal elements and use that approximation to argue that a subdiagonal element is small enough if it satisfies: \begin{equation*} |h_{i,i-1}| |h_{i-1,i}| \le u | h_{ii} - h_{i-1,i-1} | \text{ and } |h_{i+1,i}| \le u(|h_{i i}| + |h_{i+1,i+1}|). \end{equation*} This paper aims to extend their work to the QZ algorithm. \section{Deflation of finite eigenvalues} In this section, we assume that the pencil only has finite eigenvalues. \subsection{Derivation} To derive our new criterion, we will focus our attention on a $2\times 2$ subpencil of Equation \eqref{eq:pencil}: \begin{equation} (H_i(\epsilon), T_i) = \left(\begin{bmatrix} h_{i-1,i-1} & h_{i-1,i} \\ \epsilon & h_{i,i} \\ \end{bmatrix}, \begin{bmatrix} t_{i-1,i-1} & t_{i-1,i} \\ 0 & t_{i,i} \\ \end{bmatrix} \right). \end{equation} Let $\lambda(\epsilon)$ be the eigenvalue of the subpencil closest to $h_{i,i}/t_{i,i}$. It is important to note that $\lambda(\epsilon)$ is not necessarily an eigenvalue of the full pencil. In the rest of this analysis, we will assume that the influence of entries outside this subpencil is negligible. At this point, we would like to stress that this is not a detailed error analysis. We are designing a heuristic and significant terms will be approximated. We want to know when $\epsilon$ is small enough to be considered zero. This is the case when \begin{equation}\label{eq:forwarderrorcheck} |\lambda(\epsilon) - \lambda(0)| \le u |\lambda(0)|, \end{equation} where $u$ is the machine precision. We do not know $\lambda(\epsilon)$, so we must approximate it.
If the function $\lambda(\epsilon)$ is analytic, then a first-order expansion gives the bound: \begin{equation}\label{eq:forwarderrorbound} |\lambda(\epsilon) - \lambda(0)| \le |\lambda'(0)\epsilon| + O(|\epsilon|^2). \end{equation} To calculate $|\lambda'(0)|$, we first calculate the characteristic polynomial: \begin{equation*} \begin{split} p(\epsilon, x) =& \det( H_i(\epsilon) - x T_i ) \\ =& (h_{i-1,i-1} - xt_{i-1,i-1})(h_{i,i}-xt_{i,i}) - (h_{i-1,i} - xt_{i-1,i})\epsilon. \\ \end{split} \end{equation*} The characteristic polynomial evaluated at an eigenvalue is always zero, so $\frac{d}{d\epsilon}p(\epsilon,\lambda(\epsilon)) = 0$ and we can extract an expression for $\lambda'(\epsilon)$: \begin{equation}\label{eq:dpexpansion} \begin{split} \frac{d}{d\epsilon}p(\epsilon,\lambda(\epsilon)) &= 0\\ \Rightarrow \left. \frac{\partial p}{\partial \epsilon}(\epsilon,x)\right\vert_{x = \lambda(\epsilon)} + \lambda'(\epsilon)\left.\frac{\partial p}{\partial x}(\epsilon,x)\right\vert_{x = \lambda(\epsilon)} &= 0\\ \Rightarrow \lambda'(\epsilon) &= -\dfrac{\left. \frac{\partial p}{\partial \epsilon}(\epsilon,x)\right\vert_{x = \lambda(\epsilon)}}{\left.\frac{\partial p}{\partial x}(\epsilon,x)\right\vert_{x = \lambda(\epsilon)}}. \end{split} \end{equation} Next, we extract an expression for $\frac{\partial p}{\partial x}(\epsilon,x)$. \begin{equation*} \begin{split} \frac{\partial p}{\partial x}(\epsilon,x) =& -t_{i-1,i-1}(h_{i,i}-xt_{i,i})\\ & - t_{i,i}(h_{i-1,i-1} - xt_{i-1,i-1})\\ & + t_{i-1,i}\epsilon\\ \end{split} \end{equation*} Luckily, almost all terms in this expression are zero when evaluated for $\epsilon = 0$ and $x = h_{ii}t_{ii}^{-1}$.
\begin{equation}\label{eq:dpdx} \begin{split} \frac{\partial p}{\partial x}(0,h_{ii}t_{ii}^{-1}) =& - (h_{i-1,i-1}t_{ii} - h_{ii}t_{i-1,i-1}).\\ \end{split} \end{equation} Next, we extract an expression for $\frac{\partial p}{\partial \epsilon}(\epsilon,x)$: \begin{equation*} \begin{split} \frac{\partial p}{\partial \epsilon}(\epsilon,x) =& - (h_{i-1,i} - xt_{i-1,i}).\\ \end{split} \end{equation*} Evaluated for $\epsilon = 0$ and $x = h_{ii}t_{ii}^{-1}$, this becomes: \begin{equation}\label{eq:dpdeps} \begin{split} \frac{\partial p}{\partial \epsilon}(0,h_{ii}t_{ii}^{-1}) =& - t_{ii}^{-1}(h_{i-1,i}t_{ii} - h_{ii}t_{i-1,i}).\\ \end{split} \end{equation} Combining Equations \eqref{eq:dpexpansion}, \eqref{eq:dpdx} and \eqref{eq:dpdeps} leads to: \begin{equation} \lambda'(0) = -\frac{h_{i-1,i}t_{ii} - h_{ii}t_{i-1,i}}{t_{ii}(h_{i-1,i-1}t_{ii} - h_{ii}t_{i-1,i-1})}. \label{eq:lambdaderivative} \end{equation} We can follow the same derivation for $\bar{\lambda}$, the eigenvalue closest to $\frac{h_{i-1,i-1}}{t_{i-1,i-1}}$, to get the similar expression: \begin{equation} \bar{\lambda}'(0) = -\frac{h_{i-1,i-1}t_{i-1,i} - h_{i-1,i}t_{i-1,i-1}}{t_{i-1,i-1}(h_{i-1,i-1}t_{ii} - h_{ii}t_{i-1,i-1})}. \label{eq:lambdaderivative2} \end{equation} To design a deflation criterion using these equations, we combine Equations \eqref{eq:lambdaderivative}, \eqref{eq:forwarderrorcheck} and \eqref{eq:forwarderrorbound}; only the magnitude $|\lambda'(0)|$ matters here, so the sign of the derivative plays no role. This leads to: \begin{equation} |h_{i-1,i}t_{ii} - h_{ii}t_{i-1,i}||h_{i,i-1}| \le u |h_{ii}||h_{i-1,i-1}t_{ii} - h_{ii}t_{i-1,i-1}|. \label{eq:finitedeflation} \end{equation} Finally, imagine a pencil where $h_{i-1,i} = t_{i-1,i} = 0$. The eigenvalues of the subpencil will now be independent of $\epsilon$, so the deflation criterion will be satisfied, but setting $\epsilon$ to zero could lead to a large backward error.
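The derivative formula is easy to sanity-check against a finite difference; since only the magnitude $|\lambda'(0)|$ enters the criterion, the sketch below (with hypothetical subpencil entries of our choosing, not from the text) compares absolute values:

```python
import numpy as np

# Hypothetical 2x2 subpencil entries (h_{i-1,i-1}, h_{i-1,i}, h_{i,i}, ...)
h11, h12, h22 = 2.0, 1.0, 1.0
t11, t12, t22 = 1.0, 0.5, 1.0

def eig_closest(eps):
    """Eigenvalue of the subpencil (H_i(eps), T_i) closest to h22/t22."""
    H = np.array([[h11, h12], [eps, h22]])
    T = np.array([[t11, t12], [0.0, t22]])
    # T is upper triangular and nonsingular, so reduce to a standard problem.
    w = np.linalg.eigvals(np.linalg.solve(T, H))
    return w[np.argmin(np.abs(w - h22 / t22))]

# |lambda'(0)| from the derivative formula derived above
formula = abs(h12 * t22 - h22 * t12) / abs(t22 * (h11 * t22 - h22 * t11))

eps = 1e-7
fd = (eig_closest(eps) - eig_closest(0.0)) / eps  # finite-difference slope
```

For these entries, $|\texttt{fd}|$ and `formula` both come out as $0.5$ up to $O(\epsilon)$.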
To guarantee backward stability, the criterion will also check that: \begin{equation*} |h_{i,i-1}| \le u (|h_{ii}| + |h_{i-1,i-1}|). \end{equation*} \subsection{Experiments} In these experiments, we will evaluate the accuracy of the QZ algorithm while using the following criteria: \begin{itemize} \item normwise: $|h_{i,i-1}| \le u \|H\|_F$ \item elementwise: $|h_{i,i-1}| \le u (|h_{i-1,i-1}| + |h_{i,i}|)$ \item strict (our new criterion): $|h_{i,i-1}| \le u (|h_{i-1,i-1}| + |h_{i,i}|)$ and $|h_{i-1,i}t_{ii} - h_{ii}t_{i-1,i}||h_{i,i-1}| \le u |h_{ii}||h_{i-1,i-1}t_{ii} - h_{ii}t_{i-1,i-1}|$ \end{itemize} Before describing the different test cases, we will describe a procedure to generate a pencil with predetermined eigenvalues and a predetermined condition number of the eigenvector matrix. \begin{algorithm} \caption{Generation of test pencil}\label{alg:testpencil} \KwData{size $n$, eigenvalues $(\alpha_i,\beta_i)$, eigenvector matrix condition number $\kappa$} \KwResult{$(A,B)$} Make $D_1$ and $D_2$ diagonal matrices, setting the diagonal entries to the desired eigenvalue pairs $(\alpha_i,\beta_i)$. (Generalized eigenvalues are often described as pairs, with $\lambda_i = \frac{\alpha_i}{\beta_i}$)\; Generate diagonal matrices $V$ and $W$ with diagonal entries logarithmically spaced between 1 and $\kappa^{-1}$\; Multiply $V$ and $W$ from the left and right with random unitary matrices\; Generate the final pencil as $(A,B) = (VD_1W, VD_2W)$\; \end{algorithm} To show that the new deflation criterion can improve the accuracy of the computed eigenvalues significantly, we propose a small example similar to the one by Ahues and Tisseur \cite{ahues1997new}. Consider the pencil: \begin{equation} (H,T) = \left( \begin{bmatrix} 1 & c & 0 \\ \eta & (1 + d) & 1 \\ 0 & \eta & (1+ 2d)c^{-1} \end{bmatrix}, \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & c^{-1} \end{bmatrix} \right) \end{equation} with $\eta = 1.1 \cdot 10^{-8}$, $c = 1.1 \cdot 10^5$ and $d=10^{-2}$.
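This pencil is small enough to examine directly; its spectrum can be checked with a few lines of NumPy (a sketch, independent of any QZ implementation):

```python
import numpy as np

eta, c, d = 1.1e-8, 1.1e5, 1e-2
H = np.array([[1.0, c,       0.0              ],
              [eta, 1.0 + d, 1.0              ],
              [0.0, eta,     (1.0 + 2 * d) / c]])
T = np.diag([1.0, 1.0, 1.0 / c])

# Eigenvalues of the pencil (H, T); T is nonsingular here, so we may
# compute the eigenvalues of T^{-1} H instead.
w = np.linalg.eigvals(np.linalg.solve(T, H))
```

All three eigenvalues cluster near $1$, and their sum equals the trace of $T^{-1}H$, which is $3.03$; this clustering is what makes them sensitive to the small entries the deflation criteria act on.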
The eigenvalues of this pencil are approximately: $0.95371503$, $1.0261424$ and $1.0501426$. This pencil shows that a good criterion needs to take both $H$ and $T$ into account. Criteria applied only to $H$ (the normwise criterion, the elementwise criterion or even the criterion by Ahues and Tisseur \cite{ahues1997new}) would consider $h_{3,2}$ to be small enough in IEEE single precision. The returned eigenvalues are then $1$, $1.02$ and $1.04$, resulting in less than 2 digits of accuracy. When using our new criterion, the QZ algorithm performs 2 more iterations, resulting in smaller subdiagonal entries. The returned eigenvalues are: $0.95371503$, $1.0261424$ and $1.0501425$, which are almost accurate to the working precision. From this example, we can already conclude that there are cases where our new criterion significantly improves the accuracy of the eigenvalues. It is also very unlikely that the other criteria would lead to better results because our new criterion (if we include the elementwise criterion for backward stability) is stricter than the other criteria. We now test the criteria on matrices of certain classes with randomness in them: \begin{itemize} \item Pencils with well-conditioned eigenvalues. For these pencils, we use Algorithm \ref{alg:testpencil} with the $\alpha_i$ randomly generated, $\beta_i = 1$ and the eigenvector condition is 1. \item Pencils with ill-conditioned eigenvalues. For these pencils, we use Algorithm \ref{alg:testpencil} with the $\alpha_i$ randomly generated, $\beta_i = 1$ and the eigenvector condition is 1000. \item Graded matrices. These pencils are generated by first generating two full random matrices $A$ and $B$ and then scaling the rows and columns with values varying between $1$ and $10^{-3}$. \end{itemize} To perform the test, the pencils are reduced to Hessenberg-triangular form, and then the eigenvalues are computed in double precision and compared with the eigenvalues computed in single precision. 
For each class, 10000 pencils were generated. Histograms of the maximum relative error of the eigenvalues for the three classes of matrices with the three different types of deflation criteria are shown in Figures \ref{fig:expnormal}, \ref{fig:expnonnormal} and \ref{fig:expgraded}. The averages of the distributions are shown in Table \ref{table: averages}. \begin{table} \centering \begin{tabular}{ c | c c c } & normwise & elementwise & strict \\ \hline well-conditioned & 6.28 (8.9) & 6.28 (9.1) & 6.28 (9.1) \\ ill-conditioned & 3.20 (9.4) & 3.27 (11.2) & 3.27 (11.4) \\ graded & 2.70 (5.5) & 3.56 (7.1) & 3.57 (7.4) \\ \end{tabular} \caption{Average number of accurate digits in the eigenvalues for different classes of pencils and different deflation criteria. The average number of required iterations is shown between brackets.} \label{table: averages} \end{table} Moving from the normwise to the elementwise and strict criteria, we notice an increase in accuracy for the ill-conditioned pencils and an even larger increase for the graded pencils, while the well-conditioned pencils are essentially unaffected. If we look at the histograms, there is a slight difference in accuracy when using the strict criterion over the elementwise criterion. However, performing a t-test on the logarithm of the accuracy of the eigenvalues of the ill-conditioned pencils reveals that we cannot conclude that the strict criterion results in higher accuracy on average $(p = 0.704)$. No convergence failures occurred. These experiments lead us to conclude that while there are certainly pencils where the strict criterion leads to a significant increase in accuracy, the average accuracy of the two criteria is very similar. We cannot rule out that a class of generalized eigenvalue problems for which there would be an increase in the average accuracy exists. For a general-purpose library like LAPACK, we believe changing to the strict criterion is warranted so that it can handle all corner cases.
\begin{figure} \centering \begin{subfigure}[b]{0.49\textwidth} \resizebox{\textwidth}{!}{% \begin{tikzpicture} \begin{axis}[ ybar, bar width=2mm, area style, xmin = 3.5, xtick={4, 5, 6, 7}, xticklabels={$10^{-4}$,$10^{-5}$,$10^{-6}$,$10^{-7}$}, xlabel=Relative error, ylabel=Bin count, legend style={ at={(0,1)}, anchor=north west, }, ] \addplot+[ybar,mark=no,color=black,bar shift=-3mm,fill=black!10,postaction={pattern=dots}] table [x=bins, y=normwise, col sep=comma] {Figures/accuracy_well_conditioned.csv}; \addplot+[ybar,mark=no,color=black,bar shift=-1mm,fill=black!50,postaction={pattern=north east lines}] table [x=bins, y=elementwise, col sep=comma] {Figures/accuracy_well_conditioned.csv}; \addplot+[ybar,mark=no,color=black,bar shift=1mm, fill=black!50] table [x=bins, y=strict, col sep=comma] {Figures/accuracy_well_conditioned.csv}; \legend{normwise,elementwise,strict} \end{axis} \end{tikzpicture} } \end{subfigure} \hfill \begin{subfigure}[b]{0.49\textwidth} \resizebox{\textwidth}{!}{% \begin{tikzpicture} \begin{axis}[ ybar, bar width=2mm, area style, xlabel=Number of iterations, ylabel=Bin count ] \addplot+[ybar,mark=no,color=black,bar shift=-3mm,fill=black!10,postaction={pattern=dots}] table [x=bins, y=normwise, col sep=comma] {Figures/iterations_well_conditioned.csv}; \addplot+[ybar,mark=no,color=black,bar shift=-1mm,fill=black!50,postaction={pattern=north east lines}] table [x=bins, y=elementwise, col sep=comma] {Figures/iterations_well_conditioned.csv}; \addplot+[ybar,mark=no,color=black,bar shift=1mm, fill=black!50] table [x=bins, y=strict, col sep=comma] {Figures/iterations_well_conditioned.csv}; \end{axis} \end{tikzpicture} } \end{subfigure} \caption{Histogram of the maximum relative errors (left) and the number of QZ iterations (right) for the eigenvalues of randomly generated pencils with good eigenvector condition for different deflation criteria.} \label{fig:expnormal} \end{figure} \begin{figure} \centering \begin{subfigure}[b]{0.49\textwidth} 
\resizebox{\textwidth}{!}{% \begin{tikzpicture} \begin{axis}[ ybar, bar width=2mm, area style, xmin = -0.5, xtick={0, 1, 2, 3, 4, 5, 6, 7}, xticklabels={$10^{0}$,$10^{-1}$,$10^{-2}$,$10^{-3}$,$10^{-4}$,$10^{-5}$,$10^{-6}$,$10^{-7}$}, xlabel=Relative error, ylabel=Bin count, legend style={ at={(0,1)}, anchor=north west, }, ] \addplot+[ybar,mark=no,color=black,bar shift=-3mm,fill=black!10,postaction={pattern=dots}] table [x=bins, y=normwise, col sep=comma] {Figures/accuracy_ill_conditioned.csv}; \addplot+[ybar,mark=no,color=black,bar shift=-1mm,fill=black!50,postaction={pattern=north east lines}] table [x=bins, y=elementwise, col sep=comma] {Figures/accuracy_ill_conditioned.csv}; \addplot+[ybar,mark=no,color=black,bar shift=1mm, fill=black!50] table [x=bins, y=strict, col sep=comma] {Figures/accuracy_ill_conditioned.csv}; \legend{normwise,elementwise,strict} \end{axis} \end{tikzpicture} } \end{subfigure} \hfill \begin{subfigure}[b]{0.49\textwidth} \resizebox{\textwidth}{!}{% \begin{tikzpicture} \begin{axis}[ ybar, bar width=2mm, area style, xlabel=Number of iterations, ylabel=Bin count ] \addplot+[ybar,mark=no,color=black,bar shift=-3mm,fill=black!10,postaction={pattern=dots}] table [x=bins, y=normwise, col sep=comma] {Figures/iterations_ill_conditioned.csv}; \addplot+[ybar,mark=no,color=black,bar shift=-1mm,fill=black!50,postaction={pattern=north east lines}] table [x=bins, y=elementwise, col sep=comma] {Figures/iterations_ill_conditioned.csv}; \addplot+[ybar,mark=no,color=black,bar shift=1mm, fill=black!50] table [x=bins, y=strict, col sep=comma] {Figures/iterations_ill_conditioned.csv}; \end{axis} \end{tikzpicture} } \end{subfigure} \caption{Histogram of the maximum relative errors (left) and the number of QZ iterations (right) for the eigenvalues of randomly generated pencils with ill-conditioned eigenvectors for different deflation criteria.} \label{fig:expnonnormal} \end{figure} \begin{figure} \centering \begin{subfigure}[b]{0.49\textwidth} 
\resizebox{\textwidth}{!}{% \begin{tikzpicture} \begin{axis}[ ybar, bar width=2mm, area style, xmin=-1.5, xtick={-1,0, 1, 2, 3, 4, 5, 6, 7}, xticklabels={$10^{1}$,$10^{0}$,$10^{-1}$,$10^{-2}$,$10^{-3}$,$10^{-4}$,$10^{-5}$,$10^{-6}$,$10^{-7}$}, xlabel=Relative error, ylabel=Bin count, legend style={ at={(0,1)}, anchor=north west, }, ] \addplot+[ybar,mark=no,color=black,bar shift=-3mm,fill=black!10,postaction={pattern=dots}] table [x=bins, y=normwise, col sep=comma] {Figures/accuracy_graded.csv}; \addplot+[ybar,mark=no,color=black,bar shift=-1mm,fill=black!50,postaction={pattern=north east lines}] table [x=bins, y=elementwise, col sep=comma] {Figures/accuracy_graded.csv}; \addplot+[ybar,mark=no,color=black,bar shift=1mm, fill=black!50] table [x=bins, y=strict, col sep=comma] {Figures/accuracy_graded.csv}; \legend{normwise,elementwise,strict} \end{axis} \end{tikzpicture} } \end{subfigure} \hfill \begin{subfigure}[b]{0.49\textwidth} \resizebox{\textwidth}{!}{% \begin{tikzpicture} \begin{axis}[ ybar, bar width=2mm, area style, xlabel=Number of iterations, ylabel=Bin count ] \addplot+[ybar,mark=no,color=black,bar shift=-3mm,fill=black!10,postaction={pattern=dots}] table [x=bins, y=normwise, col sep=comma] {Figures/iterations_graded.csv}; \addplot+[ybar,mark=no,color=black,bar shift=-1mm,fill=black!50,postaction={pattern=north east lines}] table [x=bins, y=elementwise, col sep=comma] {Figures/iterations_graded.csv}; \addplot+[ybar,mark=no,color=black,bar shift=1mm, fill=black!50] table [x=bins, y=strict, col sep=comma] {Figures/iterations_graded.csv}; \end{axis} \end{tikzpicture} } \end{subfigure} \caption{Histogram of the maximum relative errors (left) and the number of QZ iterations (right) for the eigenvalues of randomly generated graded pencils for different deflation criteria.} \label{fig:expgraded} \end{figure} \section{Infinite eigenvalues} In a generalized eigenvalue problem, some of the eigenvalues can be infinite. 
To avoid working with infinity, we will use the pair $(\alpha,\beta)$ instead of $\lambda$ in this section. For such a pair, we have that $\det( \beta H - \alpha T ) = 0$. If $\beta \neq 0$, we can simply use the old definition and use $\lambda = \frac{\alpha}{\beta}$. If $\beta = 0$, we say that the eigenvalue is infinite. In subsection \ref{subsection:numericallyinfinitedefinition}, we will define what it means for a pencil to have a numerically infinite eigenvalue. In subsection \ref{subsection:heuristics}, we will evaluate some commonly used criteria for infinite eigenvalues using this definition. \subsection{Numerically infinite eigenvalues}\label{subsection:numericallyinfinitedefinition} An eigenvalue is infinite if $\beta = 0$. This implies that the pencil has an infinite eigenvalue if and only if $T$ is singular. In practice, it is often the case that a matrix is very close to, but not exactly, singular. We call such a matrix numerically singular. For our purposes, we will consider a matrix numerically singular if $\kappa_2(A) \ge u^{-1}$. This criterion is commonly used in practice. $\kappa_2(A) \ge u^{-1}$ if and only if there exists a perturbation $E$ so that $A + E$ is exactly singular and $\|E\|_2 \le u\|A\|_2$ \cite{demmel1997applied}, i.e. $A$ is only a small perturbation away from being singular. In a similar vein, we will say that a pencil $(H, T)$ has a numerically infinite eigenvalue if and only if there exist matrices $E_1$ and $E_2$, with $\|E_1\|_2 \le u\|H\|$ and $\|E_2\|_2 \le u\|T\|$, so that $(H+E_1, T+E_2)$ has an exactly infinite eigenvalue. It is easy to check that this is equivalent to defining that a pencil has a numerically infinite eigenvalue if and only if $T$ is numerically singular. For finite eigenvalues, an essential property of our new deflation criterion is that it considers values in both $H$ and $T$ and is thus invariant to diagonal scaling. If we consider an eigenvalue infinite if and only if $T$ is numerically singular, then values in $H$ are ignored.
This would seem to be a valid argument against our definition, but this subsection will argue that invariance to diagonal scaling is not required. Consider the two pencils: \begin{equation*} (H_1, T_1) = \left( \begin{bmatrix} 1 & 0 \\ 0 & 10 \end{bmatrix}, \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \right) \end{equation*} and \begin{equation*} (H_2, T_2) = \left( \begin{bmatrix} 1 & 0 \\ 0 & 10^{-7} \end{bmatrix}, \begin{bmatrix} 1 & 0 \\ 0 & 10^{-8} \end{bmatrix} \right) \end{equation*} In exact arithmetic, these pencils can be transformed into each other through row or column scaling and they both have one eigenvalue $1$ and one eigenvalue $10$. In single precision arithmetic, $T_2$ can be considered numerically singular, whereas $T_1$ is unitary and is as far from singular as possible. According to our definition, $(H_1, T_1)$ has only finite eigenvalues, whereas $(H_2, T_2)$ has a numerically infinite eigenvalue. It is common to perform row and column scalings to improve the conditioning of the eigenvalue problem \cite{ward1981balancing}, so it would seem that a good criterion to identify infinite eigenvalues should be invariant to such scalings. To be invariant to scaling, we need to consider the ratio of $h_{ii}$ and $t_{ii}$, and for the pencil $(H_2, T_2)$ this seems like a good idea. However, consider the following pencils: \begin{equation*} (H_3, T_3) = \left( \begin{bmatrix} 1 & 0 \\ 0 & 10^8 \end{bmatrix}, \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \right) \end{equation*} and \begin{equation*} (H_4, T_4) = \left( \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \begin{bmatrix} 1 & 0 \\ 0 & 10^{-8} \end{bmatrix} \right) \end{equation*} Just like before, these pencils can be transformed into each other through diagonal scaling. They both have eigenvalues 1 and $10^8$. In single precision, $(H_3, T_3)$ has finite eigenvalues and $(H_4, T_4)$ has a numerically infinite eigenvalue.
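These claims are straightforward to verify in the sense of our definition (a sketch; $\kappa_2$ is computed from the singular values and compared with $u^{-1}$ for single precision):

```python
import numpy as np

u = np.finfo(np.float32).eps  # single precision, about 1.19e-7

def numerically_singular(M):
    """kappa_2(M) >= u^{-1}, with kappa_2 from the singular values."""
    s = np.linalg.svd(M, compute_uv=False)
    return s[0] / s[-1] >= 1.0 / u

T1 = np.diag([1.0, 1.0])   # from (H1, T1); also equals T3
T2 = np.diag([1.0, 1e-8])  # from (H2, T2)
T4 = np.diag([1.0, 1e-8])  # from (H4, T4): same T as T2, different H,
                           # which the definition deliberately ignores
```

As expected, `numerically_singular` is false for $T_1$ (and $T_3$) but true for $T_2$ and $T_4$, since $\kappa_2 = 10^8 \ge u^{-1} \approx 8.4\cdot 10^6$.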
In the first example, it may seem clear that the pencil is badly scaled, but here it is more difficult. If we decide that $(H_3, T_3)$ is the `correct' one, then any pencil with infinite eigenvalues should be converted to a pencil with large eigenvalues so long as $t_{ii}$ is not exactly zero. If we decide that $(H_4, T_4)$ is the `correct' one, then any pencil with large eigenvalues should be converted to a pencil with infinite eigenvalues. We can only solve this paradox by positing that $(H_3, T_3)$ and $(H_4, T_4)$ are fundamentally different. In exact arithmetic, equivalence transformations preserve the eigenvalues, but there can be a meaningful difference in finite precision. \subsection{Heuristics}\label{subsection:heuristics} We have defined that a pencil has an infinite eigenvalue if and only if $T$ is numerically singular. However, exactly determining the condition number of $T$ in each iteration would be prohibitively expensive. The normwise criterion: \begin{equation} |t_{ii}| \le u \|T\|_F \end{equation} and the elementwise criterion: \begin{equation} |t_{ii}| \le u (|t_{i-1,i}| + |t_{i,i+1}|) \end{equation} can be interpreted as cheap heuristics to determine whether the upper triangular $T$ is numerically singular. We will study two types of errors the heuristics might make: they could identify a finite eigenvalue as numerically infinite or they could identify a numerically infinite eigenvalue as finite. A lower bound on the condition number of a triangular matrix is \cite{higham1987survey}: \begin{equation}\label{eq:2normconditionbound} \kappa_2(T) := \|T\|_2 \|T^{-1}\|_2 \ge \|T\|_2 (\min |t_{ii}|)^{-1}. \end{equation} The normwise criterion does not use the 2-norm, but the Frobenius norm (because it is cheaper to compute). The Frobenius norm and the 2-norm of any matrix $A$ are related via: \begin{equation}\label{eq:relation2andfrobnorm} \|A\|_2 \le \|A\|_F \le \sqrt{n}\|A\|_2.
\end{equation} Combining Equations \eqref{eq:2normconditionbound} and \eqref{eq:relation2andfrobnorm}, we get that: \begin{equation} \begin{split} |t_{ii}| &\le u \|T\|_F\\ \Rightarrow |t_{ii}| &\le \sqrt{n} u \|T\|_2\\ \Rightarrow \kappa_2(T) &\ge u^{-1}n^{-\frac{1}{2}}.\\ \end{split} \end{equation} So even if the normwise criterion misclassifies a finite eigenvalue as infinite, the matrix $T$ is at least within a factor $\sqrt{n}$ of being numerically singular. However, the normwise criterion may misclassify a numerically infinite eigenvalue as finite. The elementwise criterion, which is stricter than the normwise criterion, is then also guaranteed to only identify eigenvalues that are at least close to being infinite. Because it is stricter, it is also more likely to falsely identify numerically infinite eigenvalues as finite. In the standard eigenvalue problem, an elementwise criterion was relevant to accurately determine small eigenvalues in graded matrices. However, graded matrices are often numerically singular and therefore (by our definition) lead to infinite eigenvalues, so that argument is not valid here. We now present some experiments that test both types of errors the criteria can make. They will show that (at least for these tests) neither of the criteria falsely identifies a finite eigenvalue as infinite, but sometimes they fail to identify numerically infinite eigenvalues. To perform the tests, the pencils are always reduced to Hessenberg-triangular form and then the QZ algorithm is run in double precision. First, we consider a pencil $(A,B)$ where the sparsity pattern of $B$ is such that it has a number of infinite eigenvalues. \begin{equation} B = \begin{bmatrix} B_1 & 0\\ 0 & B_2 \end{bmatrix}, \end{equation} with $B_1$ an $m_1 \times m_2$ matrix with randomly drawn entries and $B_2$ an $m_2 \times m_1$ matrix with randomly drawn entries. $A$ is a full matrix with randomly drawn entries. The pencil has $|m_2 - m_1|$ infinite eigenvalues.
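The count $|m_2 - m_1|$ is simply the nullity of $B$, which we can confirm for a small instance (a sketch with small hypothetical sizes):

```python
import numpy as np

rng = np.random.default_rng(1)
m1, m2 = 3, 5
n = m1 + m2

# B has an m1 x m2 block and an m2 x m1 block on its "diagonal".
B = np.zeros((n, n))
B[:m1, :m2] = rng.standard_normal((m1, m2))
B[m1:, m2:] = rng.standard_normal((m2, m1))

# rank(B) = 2 * min(m1, m2) with probability one for random blocks, so
# the nullity, i.e. the number of infinite eigenvalues, is |m2 - m1|.
nullity = n - np.linalg.matrix_rank(B)
```

Here `nullity` comes out as $|m_2 - m_1| = 2$, matching the claim above.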
We test 1000 of these pencils, with $m_1 = 22$ and $m_2 = 28$, so we expect to find 6 infinite eigenvalues. Using the normwise criterion, we always find 6 infinite eigenvalues. Using the elementwise criterion, we only find 4.41 infinite eigenvalues on average. Next, we generate pencils using the procedure described in the previous section, with $\alpha_i = 1$, $\beta_i = 10^{-16i/n}$ and the eigenvector condition equal to 1. In double precision, for matrices of size 50, the last two singular values are $1.0\cdot 10^{-16}$ and $2.089\cdot 10^{-16}$, and $u = 2.2204\cdot 10^{-16}$, so we expect to find 2 numerically infinite eigenvalues. The numerically infinite eigenvalues are very close to being finite, and several finite eigenvalues are very close to being numerically infinite, so this is a good stress test of the criterion. Just like before, we test 1000 of these. On average, we find 1.94 and 0 infinite eigenvalues using the normwise and elementwise criteria, respectively. In both experiments, the normwise criterion performed better. Combining the theoretical arguments and the experimental results, we conclude that the normwise criterion is a better estimate of our definition. \section{Conclusion} We derived a new deflation criterion for finite eigenvalues in the QZ algorithm. It has a firmer mathematical foundation, and both theoretical arguments and numerical experiments indicate that it is at least as accurate as, and sometimes more accurate than, other commonly used criteria. We also evaluated commonly used criteria for infinite eigenvalues and concluded that the normwise criterion performs best. \section{Acknowledgement} We are grateful to the MathWorks team for reporting the issues related to deflations of infinite eigenvalues in LAPACK version 3.10 that formed the inspiration for this paper. \clearpage \bibliographystyle{plain}
\section{Introduction} Liquid crystals, whose discovery is attributed to Reinitzer in 1888 \cite{Reinitzer1}, are substances that possess mesophases with properties intermediate between liquids and crystals. The mesophases exist at different temperatures or solvent concentrations. In recent years, research on the novel properties of liquid crystals has rapidly expanded. Modern applications include nanoparticle organization, liquid crystal-functionalized polymer fibers \cite{Lagerwall1}, and liquid crystal elastomers designed to produce effective actuator devices such as light-driven motors \cite{Yamada1} and artificial muscles \cite{Thomsen1}. The focus of this paper is on nematic liquid crystal phases, which are formed by rod-like molecules that self-assemble into an ordered structure, such that the molecules tend to align along a preferred orientation. The preferred average direction at any point in a domain, $\Omega$, is known as the director, denoted $\vec{n}(x,y,z) = (n_1, n_2, n_3)^T$. The director is taken to be of unit length at every point and headless, that is, $\vec{n}$ and $-\vec{n}$ are indistinguishable, reflecting the observed experimental symmetry of the phase. In addition to their self-structuring properties, the orientation of a nematic liquid crystal may be affected by applied electric fields. Moreover, since these materials are birefringent, that is, their refractive indices depend on the polarization of light, they can be used to control the propagation of light through a nematic structure. These traits have led, and continue to lead, to important discoveries in display technologies and beyond \cite{Lagerwall1}. Thorough overviews of liquid crystal physics and properties are found in \cite{Stewart1, deGennes1, Chandrasekhar1}. Many mathematical and computational models of liquid crystal continuum theory lead to complicated systems involving unit-length constrained vector fields.
Currently, the complexity of such systems has restricted the existence of known analytical solutions to simplified geometries in one (1-D) or two-dimensions (2-D), often under strong simplifying assumptions. When coupled with electric fields and other effects, far fewer analytical solutions exist, even in 1-D \cite{Stewart1}. In addition, associated systems of partial differential equations, such as the equilibrium equations \cite{Stewart1, Ericksen4}, suffer from non-unique solutions, which must be distinguished via energy arguments. Due to such difficulties, efficient, theoretically supported, numerical approaches to the modeling of nematic liquid crystals under free elastic and augmented electric effects are of great importance. This paper discusses the modeling of free elastic effects. The addition of electric field effects will be the subject of future work. A number of computational techniques for liquid crystal equilibrium and dynamics problems exist \cite{Liu1, Liu2, Liu3, Stewart1}, including least-squares finite-element methods \cite{Atherton1} and discrete Lagrange multiplier approaches \cite{Ramage1, Ramage2}. In this paper, we propose a method that directly targets energy minimization in the continuum, via Lagrange multiplier theory on Banach spaces. The approach is derived absent the often used one-constant approximations \cite{Ramage1, Liu1, Liu2, Liu3, Ramage2, Stewart1, Cohen1}; that is, the method described here and the accompanying theory are applied for a wide range of physical parameters. This allows for significantly improved modeling of physical phenomena not captured in many models. Furthermore, most models and analytical approaches rely on assumptions to reduce the dimensionality of the problem. Here, the method and theory are suitable for use on 2-D and 3-D domains and are easily combined with additional energy effects. After defining the energy functional to be minimized, first-order optimality conditions are computed. 
These first-order conditions contain highly nonlinear terms and are, therefore, linearized with a generalized Newton's method. The resulting Newton linearization resembles a typical mixed finite-element method formulation \cite{Brenner1, Braess1, Boffi1}. However, these forms present unique difficulties not found, for instance, in the Stokes problem. In particular, the forms related to the nonlinear unit-length constraint for $\vec{n}$ require novel treatment. Additionally, the proofs of continuity and coercivity differ significantly from many standard approaches due to the inherent complexity of the bilinear forms. In the continuum, it is possible to demonstrate coercivity for the relevant bilinear form with moderate simplifying assumptions. With auxiliary regularity assumptions, continuity of the involved bilinear forms is also established. On the other hand, for a pair of discrete spaces, continuity, coercivity, and weak coercivity for the relevant bilinear forms are proved. The main result of this paper proves the existence and uniqueness of solutions to each discrete Newton iteration. Error analysis is also performed to establish the convergence order of the approximations. The method is implemented and run for a number of configurations, including those relevant to ongoing research. This paper is organized as follows. We first introduce the liquid crystal model under consideration, derive the method, and discuss Dirichlet boundary condition simplifications in Section \ref{energymodels}. In Section \ref{existuniquesection}, well-posedness of the Newton iterations for a pair of discrete spaces is proved, and an error analysis is performed. The numerical methodology and numerical experiments are detailed in Section \ref{nummethodology}. Finally, Section \ref{conclusion} gives some concluding remarks, and future work is discussed.
\section{Energy Model} \label{energymodels} At equilibrium, absent any external forces, fields, or boundary conditions, the free elastic energy present in a liquid crystal sample is given by an integral functional, $\mathcal{F}$, which depends on the state variables of the system. A liquid crystal sample tends to the state of lowest free energy. While a number of free-energy models exist, cf.~\cite{Davis1}, this paper considers the Frank-Oseen free elastic model \cite{Stewart1, Virga1, Frank1}. The Frank-Oseen equations represent the free elastic energy density, $w_F$, in a sample as \begin{equation*} \label{FrankOseenFree} w_F = \frac{1}{2}K_1(\vec{\nabla}\cdot \vec{n})^2+ \frac{1}{2}K_2(\vec{n} \cdot \vec{\nabla}\times \vec{n})^2+ \frac{1}{2}K_3\vert \vec{n} \times \vec{\nabla}\times \vec{n} \vert^2 + \frac{1}{2}(K_2+K_4)\vec{\nabla}\cdot[(\vec{n} \cdot \vec{\nabla}) \vec{n} - (\vec{\nabla}\cdot \vec{n}) \vec{n}]. \end{equation*} Throughout this paper, the standard Euclidean inner product and norm are denoted $(\cdot, \cdot)$ and $\vert \cdot \vert$, respectively. The $K_i$, $i=1,2,3,4$, are known as the Frank elastic constants \cite{Frank1}, which vary depending on temperature and liquid crystal type. By Ericksen's inequalities \cite{Ericksen2}, $K_j \geq 0$ for $j=1,2,3$. Each term represents an energy penalty for the presence of splay, twist, bend, and saddle-splay, respectively. It can be shown that \begin{equation} \label{SaddleSplayIdentity} \vec{\nabla}\cdot[(\vec{n} \cdot \vec{\nabla}) \vec{n} - (\vec{\nabla}\cdot \vec{n}) \vec{n}] = \nabla n_1 \cdot \frac{\partial \vec{n}}{\partial x}+\nabla n_2 \cdot \frac{\partial \vec{n}}{\partial y}+\nabla n_3 \cdot \frac{\partial \vec{n}}{\partial z}-(\vec{\nabla}\cdot \vec{n})^2.
\end{equation} Additionally, let \begin{equation} \label{matrixD} \vec{Z} = \kappa \vec{n} \otimes \vec{n} + (\vec{I} - \vec{n} \otimes \vec{n}) = \vec{I} - (1-\kappa) \vec{n} \otimes \vec{n}, \end{equation} where $\kappa = K_2/K_3$; in general, we consider the case that $K_2, K_3 > 0$. Denote the classical $\Ltwo{\Omega}$ inner product and norm as $\Ltwoinner{\cdot}{\cdot}{\Omega}$ and $\Ltwonorm{\cdot}{\Omega}$, respectively. Employing \eqref{SaddleSplayIdentity}, \eqref{matrixD}, and the fact that $\vec{n}$ has unit length, the total free energy for a domain, $\Omega$, is \begin{align} \int_{\Omega} w_F \diff{V} &=\frac{1}{2}(K_1-K_2-K_4) \Ltwonorm{\vec{\nabla}\cdot \vec{n}}{\Omega}^2 + \frac{1}{2}K_3\Ltwoinnerndim{\vec{Z} \vec{\nabla}\times \vec{n}}{\vec{\nabla}\times \vec{n}}{\Omega}{3} \nonumber \\ & \qquad + \frac{1}{2}(K_2+K_4) \big(\Ltwoinnerndim{\nabla n_1}{\frac{\partial \vec{n}}{\partial x}}{\Omega}{3}+ \Ltwoinnerndim{\nabla n_2}{\frac{\partial \vec{n}}{\partial y}}{\Omega}{3}+ \Ltwoinnerndim{\nabla n_3}{\frac{\partial \vec{n}}{\partial z}}{\Omega}{3} \big). \label{SystemEnergy} \end{align} For the special case of full Dirichlet boundary conditions, we consider a fixed director $\vec{n}$ at each point on the boundary of $\Omega$. Considering the integration carried out on the terms in \eqref{SaddleSplayIdentity}, \begin{align} \label{stronganchoringdivthm} &\frac{1}{2}(K_2+K_4) \int_{\Omega} \vec{\nabla}\cdot[(\vec{n} \cdot \vec{\nabla}) \vec{n} - (\vec{\nabla}\cdot \vec{n}) \vec{n}] \diff{V} \nonumber \\ &\qquad \qquad \qquad = \frac{1}{2}(K_2+K_4)\int_{\partial \Omega} [(\vec{n} \cdot \vec{\nabla}) \vec{n} - (\vec{\nabla}\cdot \vec{n}) \vec{n}] \cdot \mathbf{\nu} \diff{S}, \end{align} by the divergence theorem. Further, since $\vec{n}$ is fixed along $\partial \Omega$, the energy contributed by $\vec{n}$ on the boundary is constant regardless of the configuration of $\vec{n}$ on the interior of $\Omega$. 
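For later sanity checks it is convenient to evaluate \eqref{SystemEnergy} numerically. The sketch below is purely illustrative (the helper name \texttt{frank\_oseen\_energy} and the finite-difference grid are our own choices, not the discretization developed in this paper); it samples the splay, twist, and bend densities of a director field on a uniform grid.

```python
import numpy as np

def frank_oseen_energy(n, h, K1, K2, K3):
    """Approximate Frank-Oseen free energy of a director field on a grid.

    n : array of shape (3, Nx, Ny, Nz), assumed unit length pointwise.
    h : uniform grid spacing; derivatives use second-order finite differences.
    """
    # partial derivatives d n_i / d x_j, shape (3, 3, Nx, Ny, Nz)
    dn = np.stack([np.stack(np.gradient(n[i], h, edge_order=2))
                   for i in range(3)])
    div_n = dn[0, 0] + dn[1, 1] + dn[2, 2]
    curl_n = np.stack([dn[2, 1] - dn[1, 2],
                       dn[0, 2] - dn[2, 0],
                       dn[1, 0] - dn[0, 1]])
    twist = np.einsum('ixyz,ixyz->xyz', n, curl_n)   # n . curl n
    bend = np.cross(n, curl_n, axis=0)               # n x curl n
    w = (0.5 * K1 * div_n**2
         + 0.5 * K2 * twist**2
         + 0.5 * K3 * np.einsum('ixyz,ixyz->xyz', bend, bend))
    # crude Riemann sum over the grid
    return np.sum(w) * h**3
```

For a pure twist field $\vec{n} = (\cos qz, \sin qz, 0)$ the density reduces to $\frac{1}{2}K_2q^2$, which the sketch reproduces to discretization accuracy; the $(K_2+K_4)$ saddle-splay term is omitted here because, for fixed boundary data, it contributes only the constant boundary integral in \eqref{stronganchoringdivthm}.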
Thus, in the minimization to follow, the energy contribution from this term is ignored. For this reason, \eqref{stronganchoringdivthm} is often referred to as a null Lagrangian \cite{Virga1}. A number of methods involving computation of liquid crystal equilibria or dynamics utilize the so called one-constant approximation that $K_1=K_2=K_3$ and $K_4 = 0$ \cite{Ramage1, Liu1, Stewart1, Cohen1}, in order to significantly simplify the free elastic energy density to \begin{equation*} \hat{w}_F = \frac{1}{2}K_1 \vert \nabla \vec{n} \vert^2, \qquad \text{where } \vert \nabla \vec{n} \vert^2 = \sum_{i,j=1}^3 \left ( \pd{n_i}{x_j} \right)^2. \end{equation*} This expression for the free energy density is more amenable to theoretical development but ignores significant physical characteristics of the nematic \cite{Lee1, Atherton2}. The following method is derived without such an assumption. \subsection{Free Elastic Energy Minimization} \label{freeenergymin} In this section, a general approach for computing the free elastic equilibrium state for $\vec{n}$ is derived. This equilibrium state corresponds to the configuration which minimizes the system free energy subject to the local constraint that $\vec{n}$ is of unit length throughout the sample volume, $\Omega$. That is, the minimizer must satisfy $\vec{n} \cdot \vec{n} = 1$ pointwise throughout the volume. To compute this state, define the functional, equivalent to \eqref{SystemEnergy}, \begin{align} \label{functional2} \mathcal{F}_1(\vec{n}) &= (K_1-K_2-K_4) \Ltwonorm{\vec{\nabla}\cdot \vec{n}}{\Omega}^2 + K_3\Ltwoinnerndim{\vec{Z} \vec{\nabla}\times \vec{n}}{\vec{\nabla}\times \vec{n}}{\Omega}{3} \nonumber \\ & \qquad + (K_2+K_4) \big(\Ltwoinnerndim{\nabla n_1}{\frac{\partial \vec{n}}{\partial x}}{\Omega}{3}+ \Ltwoinnerndim{\nabla n_2}{\frac{\partial \vec{n}}{\partial y}}{\Omega}{3}+ \Ltwoinnerndim{\nabla n_3}{\frac{\partial \vec{n}}{\partial z}}{\Omega}{3} \big). 
\end{align} Define \begin{align*} \Hdiv{\Omega} &= \{\vec{v} \in L^2(\Omega)^3 : \vec{\nabla}\cdot \vec{v} \in L^2(\Omega) \},\\ \Hcurl{\Omega} &= \{ \vec{v} \in L^2(\Omega)^3 : \vec{\nabla}\times \vec{v} \in L^2(\Omega)^3 \}. \end{align*} Further, let \begin{align*} \Hdivnot{\Omega} &= \{\vec{v} \in \Hdiv{\Omega} : \mathbf{\nu} \cdot \vec{v} = 0 \text{ on } \partial \Omega\}, \\ \Hcurlnot{\Omega} &= \{\vec{v} \in \Hcurl{\Omega} : \mathbf{\nu} \times \vec{v} = \vec{0} \text{ on } \partial \Omega\}, \end{align*} where $\mathbf{\nu}$ is the outward unit normal for $\partial \Omega$. Define \begin{equation*} \mathcal{H}^{DC}{(\Omega)}= \{ \vec{v} \in \Hdiv{\Omega} \cap \Hcurl{\Omega} : B(\vec{v}) = g \}, \end{equation*} with norm $\Hdcnorm{\vec{v}}{\Omega}^2 = \Ltwonormndim{\vec{v}}{\Omega}{3}^2 + \Ltwonorm{\vec{\nabla}\cdot \vec{v}}{\Omega}^2 + \Ltwonormndim{\vec{\nabla}\times \vec{v}}{\Omega}{3}^2$ and appropriate boundary conditions $B(\vec{v})=g$. Further, let $\mathcal{H}^{DC}_0{(\Omega)} = \{ \vec{v} \in \Hdiv{\Omega} \cap \Hcurl{\Omega} : B(\vec{v}) = \vec{0} \}$. Finally, denote the unit sphere as $\mathcal{S}^2$. The desired minimization becomes \begin{equation*} \label{minioversphere} \vec{n}_{*} = \argmin_{\vec{n} \in \mathcal{S}^2 \cap \mathcal{H}^{DC}{(\Omega)}} \mathcal{F}_1(\vec{n}). \end{equation*} In the presence of full Dirichlet boundary conditions, the functional to be minimized is significantly simplified as \begin{equation} \label{functional3} \mathcal{F}_2(\vec{n}) = K_1 \Ltwonorm{\vec{\nabla}\cdot \vec{n}}{\Omega}^2 + K_3\Ltwoinnerndim{\vec{Z} \vec{\nabla}\times \vec{n}}{\vec{\nabla}\times \vec{n}}{\Omega}{3}, \end{equation} by the application of \eqref{stronganchoringdivthm}. However, the functional still contains nonlinear terms introduced by the presence of $\vec{Z} = \vec{Z}(\vec{n})$. Note that this simplification is also applicable to a rectangular domain with mixed Dirichlet and periodic boundary conditions. 
Such a domain is considered in the numerical experiments presented here. We proceed with the functional in \eqref{functional2} in building a framework for minimization under general boundary conditions. However, in the treatment of existence and uniqueness theory, we assume the application of full Dirichlet or mixed Dirichlet and periodic boundary conditions and, therefore, utilize the simplified form in \eqref{functional3}. \subsection{First-Order Optimality Conditions and Newton Linearization} \label{newtonstepssection} Since $\vec{n}$ must be of unit length, it is natural to employ a Lagrange multiplier approach. This length requirement represents a pointwise equality constraint, such that $\ltwoinner{\vec{n}}{\vec{n}} - 1 = 0$. Thus, following general constrained optimization theory \cite{Luenberger1}, define the Lagrangian \begin{align*} \mathcal{L}(\vec{n}, \lambda) &= \mathcal{F}_1(\vec{n}) + \int_{\Omega} \lambda(\vec{x})(\ltwoinner{\vec{n}}{\vec{n}}-1) \diff{V}, \end{align*} where $\lambda \in \Ltwo{\Omega}$. In order to minimize \eqref{functional2}, we compute the G\^{a}teaux derivatives of $\mathcal{L}$ with respect to $\vec{n}$ and $\lambda$ in the directions $\vec{v} \in \mathcal{H}^{DC}_0{(\Omega)}$ and $\gamma \in L^2(\Omega)$, respectively. Hence, the necessary continuum first-order optimality conditions are \begin{align} \mathcal{L}_{\director}[\vec{v}] &= \frac{\partial}{\partial \vec{n}} \mathcal{L}(\vec{n}, \lambda) [\vec{v}] =0, & & \forall \vec{v} \in \mathcal{H}^{DC}_0{(\Omega)}, \label{lagrangeweakforminitial} \\ \mathcal{L}_{\lambda}[\gamma] &= \frac{\partial}{\partial \lambda} \mathcal{L}(\vec{n}, \lambda) [\gamma] =0,& & \forall \gamma \in L^2(\Omega). 
\label{constraintinitial} \end{align} Computing these derivatives yields \begin{align*} \mathcal{L}_{\director}[\vec{v}] = &2(K_1-K_2-K_4)\Ltwoinner{\vec{\nabla}\cdot \vec{n}}{\vec{\nabla}\cdot \vec{v}}{\Omega} + 2K_3\Ltwoinnerndim{\vec{Z}(\vec{n}) \vec{\nabla}\times \vec{n}}{\vec{\nabla}\times \vec{v}}{\Omega}{3} \nonumber \\ &+ 2(K_2-K_3)\Ltwoinner{\vec{n} \cdot \vec{\nabla}\times \vec{n}}{\vec{v} \cdot \vec{\nabla}\times \vec{n}}{\Omega} + 2(K_2+K_4)\big(\Ltwoinnerndim{\nabla n_1}{\pd{\vec{v}}{x}}{\Omega}{3} \nonumber \\ &+\Ltwoinnerndim{\nabla n_2}{\pd{\vec{v}}{y}}{\Omega}{3} +\Ltwoinnerndim{\nabla n_3}{\pd{\vec{v}}{z}}{\Omega}{3} \big) +2 \int_{\Omega} \lambda \ltwoinner{\vec{n}}{\vec{v}} \diff{V}, \end{align*} and \begin{align*} \mathcal{L}_{\lambda}[\gamma]&= \int_{\Omega} \gamma (\ltwoinner{\vec{n}}{\vec{n}} -1) \diff{V}. \end{align*} The variational system contains nonlinearities in both \eqref{lagrangeweakforminitial} and \eqref{constraintinitial}. Therefore, Newton iterations are employed by computing a generalized first-order Taylor series expansion, requiring computation of the Hessian \cite{Benzi1, Nocedal1}. Let $\vec{n}_k$ and $\lambda_k$ be the current approximations for $\vec{n}$ and $\lambda$, respectively. Additionally, let $\delta \director= \vec{n}_{k+1} - \vec{n}_k$ and $\delta \lambda = \lambda_{k+1}-\lambda_k$ be updates to these approximations. Then, the Newton iterations are denoted \begin{equation} \label{newtonhessian} \left [ \begin{array}{c c} \mathcal{L}_{\vec{n} \vec{n}} & \mathcal{L}_{\vec{n} \lambda} \\ \mathcal{L}_{\lambda \vec{n}} & \mathcal{L}_{\lambda \lambda} \end{array} \right ] \left [ \begin{array}{c} \delta \director \\ \delta \lambda \end{array} \right] = - \left[ \begin{array}{c} \mathcal{L}_{\vec{n}} \\ \mathcal{L}_{\lambda} \end{array} \right], \end{equation} where each of the system components are evaluated at $\vec{n}_k$ and $\lambda_k$. 
The matrix-vector multiplication indicates the direction that the derivatives in the Hessian are taken. That is, \begin{align*} \mathcal{L}_{\vec{n} \vec{n}}[\vec{v}] \cdot \delta \director = \pd{ }{\vec{n}} \left ( \mathcal{L}_{\vec{n}} (\vec{n}_k, \lambda_k)[\vec{v}]\right)[\delta \director], & & \mathcal{L}_{\vec{n} \lambda}[\vec{v}] \cdot \delta \lambda = \pd{ }{\lambda} \left( \mathcal{L}_{\vec{n}} (\vec{n}_k, \lambda_k)[\vec{v}] \right ) [\delta \lambda], \\ \mathcal{L}_{\lambda \vec{n}}[\gamma] \cdot \delta \director = \pd{ }{\vec{n}} \left( \mathcal{L}_{\lambda} (\vec{n}_k, \lambda_k)[\gamma] \right)[\delta \director], & & \mathcal{L}_{\lambda \lambda}[\gamma] \cdot \delta \lambda = \pd{ }{\lambda} \left( \mathcal{L}_{\lambda} (\vec{n}_k, \lambda_k)[\gamma] \right)[\delta \lambda], \end{align*} where the partials denote G\^{a}teaux derivatives in the respective variables. Since $\mathcal{L}(\vec{n}, \lambda)$ is linear in $\lambda$, $\mathcal{L}_{\lambda \lambda}[\gamma] \cdot \delta \lambda = 0$. Hence, the Hessian in $\eqref{newtonhessian}$ simplifies to a saddle-point structure, \begin{equation*} \left [ \begin{array}{c c} \mathcal{L}_{\vec{n} \vec{n}} & \mathcal{L}_{\vec{n} \lambda} \\ \mathcal{L}_{\lambda \vec{n}} & \vec{0} \end{array} \right ]. \end{equation*} The discrete form of this Hessian leads to a saddle-point matrix, which poses unique difficulties for the efficient computation of the solution to the resulting linear system. Such structures commonly appear in constrained optimization and other settings; for a comprehensive overview of discrete saddle-point problems see \cite{Benzi2}. Here, we focus only on the linearization step rather than the underlying linear solvers, which will be investigated in future work. 
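As an illustration of this saddle-point structure (a toy finite-dimensional sketch of our own, not the paper's discretization), consider minimizing $\frac{1}{2}\vec{n}^T A \vec{n}$ over $\vec{n} \in \mathbb{R}^3$ subject to $\vec{n} \cdot \vec{n} = 1$. The Lagrange-Newton system has exactly the block form above, with a zero $(2,2)$ block because the Lagrangian is linear in $\lambda$.

```python
import numpy as np

def lagrange_newton(A, n, lam, iters=20):
    """Newton iterations for L(n, lam) = 0.5*n^T A n + lam*(n.n - 1).

    Each step solves the saddle-point system
        [[L_nn, L_nlam], [L_lamn, 0]] [dn; dlam] = -[L_n; L_lam],
    with L_nn = A + 2*lam*I, L_nlam = 2*n, L_lamn = 2*n^T.
    """
    for _ in range(iters):
        grad_n = A @ n + 2.0 * lam * n            # L_n
        grad_lam = n @ n - 1.0                     # L_lam
        H = np.zeros((4, 4))
        H[:3, :3] = A + 2.0 * lam * np.eye(3)      # L_nn block
        H[:3, 3] = 2.0 * n                         # L_nlam block
        H[3, :3] = 2.0 * n                         # L_lamn block; (2,2) block is 0
        step = np.linalg.solve(H, -np.concatenate([grad_n, [grad_lam]]))
        n = n + step[:3]
        lam = lam + step[3]
    return n, lam
```

Stationary points of this toy problem are unit eigenvectors of $A$ with $\lambda = -\mu/2$; started near such a point, the iteration converges rapidly, mirroring the behavior one hopes for from \eqref{newtonhessian}.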
Computing the remaining G\^{a}teaux derivatives yields \begin{align} \mathcal{L}_{\vec{n} \lambda}[\vec{v}] \cdot \delta \lambda &= 2 \int_{\Omega} \delta \lambda \ltwoinner{\vec{n}_k}{\vec{v}} \diff{V} \label{freeElasticLagnlam},\\ \mathcal{L}_{\lambda \vec{n}}[\gamma] \cdot \delta \director &= 2 \int_{\Omega} \gamma \ltwoinner{\vec{n}_k}{\delta \director} \diff{V} \label{freeElasticLaglamn}, \end{align} and \begin{align} \mathcal{L}_{\vec{n} \vec{n}}[\vec{v}] \cdot \delta \director =&2(K_1 - K_2 - K_4)\Ltwoinner{\vec{\nabla}\cdot \delta \director}{\vec{\nabla}\cdot \vec{v}}{\Omega} + 2K_3 \Ltwoinnerndim{\vec{Z}(\vec{n}_k) \vec{\nabla}\times \delta \director}{\vec{\nabla}\times \vec{v}}{\Omega}{3} \nonumber \\ & + 2(K_2 - K_3) \Big(\Ltwoinner{\delta \director \cdot \vec{\nabla}\times \vec{v}}{\vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k}{\Omega}+\Ltwoinner{\vec{n}_k \cdot \vec{\nabla}\times \vec{v}}{\delta \director \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} \nonumber \\ & + \Ltwoinner{\vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k}{\vec{v} \cdot \vec{\nabla}\times \delta \director}{\Omega} + \Ltwoinner{\vec{n}_k \cdot \vec{\nabla}\times \delta \director}{\vec{v} \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} \nonumber \\ & + \Ltwoinner{\delta \director \cdot \vec{\nabla}\times \vec{n}_k}{\vec{v} \cdot \vec{\nabla}\times \vec{n}_k}{\Omega}\Big) + 2(K_2+K_4) \big( \Ltwoinnerndim{\nabla \delta n_1}{\pd{\vec{v}}{x}}{\Omega}{3} \nonumber \\ & +\Ltwoinnerndim{\nabla \delta n_2}{\pd{\vec{v}}{y}}{\Omega}{3} + \Ltwoinnerndim{\nabla \delta n_3}{\pd{\vec{v}}{z}}{\Omega}{3} \big) + 2\int_{\Omega} \lambda_k \ltwoinner{\delta \director}{\vec{v}} \diff{V}\label{freeElasticLagnn}. \end{align} Constructing \eqref{newtonhessian} using \eqref{freeElasticLagnlam}-\eqref{freeElasticLagnn} yields a linearized variational system. 
For these iterations, we compute $\delta \director$ and $\delta \lambda$ satisfying this system for all $\vec{v} \in \mathcal{H}^{DC}_0{(\Omega)}$ and $\gamma \in L^2(\Omega)$ with the current approximations $\vec{n}_k$ and $\lambda_k$. The current approximations are then corrected with the solutions $\delta \director$ and $\delta \lambda$ to yield $\vec{n}_{k+1}$ and $\lambda_{k+1}$. While they typically improve robustness and efficiency, we do not consider the use of line searches or trust regions \cite{Nocedal1} in the work presented here, leaving this for future work. If we are considering a system with Dirichlet or mixed periodic and Dirichlet boundary conditions, as described above, we eliminate the $(K_2 + K_4)$ terms from \eqref{newtonhessian}, simplifying the linearization. \subsection{Uniform Symmetric Positive Definiteness of \vec{Z}} In subsequent sections, theory establishing the existence and uniqueness of solutions to the Newton linearizations is developed. A key property exploited in these proofs is that $\vec{Z}$ is uniformly symmetric positive definite (USPD) under reasonable assumptions. It is relatively routine to show that $\vec{Z}$ is symmetric, self-adjoint in $\Ltwondim{\Omega}{3}$, and has, at each point in $\Omega$, eigenvalues $\mu = 1,1, 1+(\kappa-1)(n_1^2 + n_2^2+n_3^2)$. Ericksen's inequalities \cite{Ericksen2} guarantee that $K_2, K_3 \geq 0$. Throughout this paper, we consider the case where the inequality is strict; thus, $\kappa > 0$. We also assume that, in the Newton iterations, control has been maintained over the director length such that \begin{equation} \label{limitsonnlength} \alpha \leq n_1^2 + n_2^2+n_3^2 \leq \beta, \qquad \forall \vec{x} \in \Omega, \end{equation} with constants $0 < \alpha \leq 1 \leq \beta$. \begin{lemma} \label{USPDlemma} Assume that $\alpha \leq \ltwoinner{\vec{n}}{\vec{n}} \leq \beta \text{ for all } \vec{x} \in \Omega$. If $\kappa>1$, then $\vec{Z}$ is USPD on $\Omega$. 
For $0<\kappa<1$, if $\beta<\frac{1}{1-\kappa}$, then $\vec{Z}$ is USPD on $\Omega$. \end{lemma} \begin{proof} For a fixed $\vec{x} \in \Omega$, note that \begin{align*} \ltwonorm{\vec{Z}(\vec{x})} &=\max_{1 \leq i \leq 3} \mu_i(\vec{Z}(\vec{x})) \label{spectralnormeq}, \end{align*} where $\mu_i(\vec{Z}(\vec{x}))$ denotes the $i^{\text{th}}$ eigenvalue of $\vec{Z}$. In order to keep the eigenvalues of $\vec{Z}$ positive, it is necessary that $\mu_3 = (1+(\kappa-1)(n_1^2 + n_2^2+n_3^2)) > 0$. We consider two cases. \begin{caseof} \case{$\kappa >1$.}{ If $\kappa >1$, then \begin{equation} 0<1+(\kappa-1) \alpha \leq \mu_3 \leq 1+(\kappa-1)\beta, \qquad \forall \vec{x} \in \Omega. \label{kappalargeeigenineq} \end{equation} Thus, \eqref{kappalargeeigenineq} implies that the eigenvalues of $\vec{Z}(\vec{x})$ are bounded by \begin{equation} 1 \leq \mu_i \leq 1+(\kappa-1)\beta \label{bigkapbound}, \qquad \forall \vec{x} \in \Omega. \end{equation} Using standard functional analysis arguments \cite{Griffel1}, Inequality \eqref{bigkapbound} implies that \begin{equation} \label{bigkappaUSPD} 1 \leq \frac{\vec{\xi}^T \vec{Z}(\vec{x}) \vec{\xi}}{\vec{\xi}^T \vec{\xi}} \leq 1+(\kappa-1)\beta, \qquad \forall \vec{x} \in \Omega, \vec{\xi} \in \mathbb{R}^3. \end{equation} } \case{$0< \kappa <1$.}{ For this case, \begin{equation*} 1+(\kappa-1)\beta \leq \mu_3 \leq 1+(\kappa-1) \alpha. \end{equation*} However, the assumption that $\beta < \frac{1}{1-\kappa}$ implies that \begin{equation*} 0 < 1+(\kappa-1)\beta \leq \mu_3 \leq 1+(\kappa-1) \alpha \label{kappasmalleigenineq}. \end{equation*} Hence, the eigenvalues of $\vec{Z}(\vec{x})$ are bounded by \begin{equation} 0 < 1+(\kappa -1)\beta \leq \mu_i \leq 1 \label{smallkapbound}, \qquad \forall \vec{x} \in \Omega.
\end{equation} As in the previous case, \eqref{smallkapbound} implies \begin{equation} 1+(\kappa-1)\beta \leq \frac{\vec{\xi}^T \vec{Z}(\vec{x}) \vec{\xi}}{\vec{\xi}^T \vec{\xi}} \leq 1, \qquad \forall \vec{x} \in \Omega, \vec{\xi} \in \mathbb{R}^3.\label{smallkappaUSPD} \end{equation} } \end{caseof} \noindent Thus, $\vec{Z}$ is USPD for any $\kappa>0$, as long as sufficient control is maintained on the length of $\vec{n}$. \end{proof} The USPD property of $\vec{Z}$ plays an important role in the proofs of existence and uniqueness of solutions to the linearization undertaken in the next section. \section{Existence and Uniqueness for the Newton Linearizations} \label{existuniquesection} Here and in the following subsections, we will routinely make use of the following set of assumptions. \begin{assumption} \label{secass} Consider an open bounded domain, $\Omega$, which is a convex polyhedron or has a $C^{1,1}$ boundary. Note that this implies that the boundary is also Lipschitz continuous. Further, assume that $\alpha \leq \ltwonorm{\vec{n}_k}^2 \leq \beta$, such that $\vec{Z}(\vec{n}_k(\vec{x}))$ remains USPD with lower and upper bounds, $\eta$ and $\Lambda$, respectively. Finally, Dirichlet boundary conditions are applied. Therefore, both $\delta \director$ and $\vec{v}$ are in $\Hdivnot{\Omega} \cap \Hcurlnot{\Omega}$. \end{assumption} In the continuum, the above Newton systems are written in a general form as \begin{align} a(\delta \director, \vec{v}) + b(\vec{v}, \delta \lambda) &= F(\vec{v}),& & \forall \vec{v} \in \mathcal{H}^{DC}_0{(\Omega)}, \label{contgeneralizedNewtoniterationweakform1} \\ b(\delta \director, \gamma) &= G(\gamma),& & \forall \gamma \in \Ltwo{\Omega}, \label{contgeneralizedNewtoniterationweakform2} \end{align} where $a(\cdot, \cdot)$ is a symmetric bilinear form, $b(\cdot, \cdot)$ is a bilinear form, and $F$ and $G$ are linear functionals. 
For simplicity, throughout this section, we drop the notation of $\delta \director$, $\delta \lambda$. Thus, \begin{align} a(\vec{u}, \vec{v}) = &K_1\Ltwoinner{\vec{\nabla}\cdot \vec{u}}{\vec{\nabla}\cdot \vec{v}}{\Omega} + K_3 \Ltwoinnerndim{\vec{Z}(\vec{n}_k) \vec{\nabla}\times \vec{u}}{\vec{\nabla}\times \vec{v}}{\Omega}{3} \nonumber \\ & \qquad + (K_2-K_3) \Big(\Ltwoinner{\vec{u} \cdot \vec{\nabla}\times \vec{v}}{\vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k}{\Omega}+\Ltwoinner{\vec{n}_k \cdot \vec{\nabla}\times \vec{v}}{\vec{u} \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} \nonumber \\ &\qquad + \Ltwoinner{\vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k}{\vec{v} \cdot \vec{\nabla}\times \vec{u}}{\Omega} + \Ltwoinner{\vec{n}_k \cdot \vec{\nabla}\times \vec{u}}{\vec{v} \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} \nonumber \\ & \qquad + \Ltwoinner{\vec{u} \cdot \vec{\nabla}\times \vec{n}_k}{\vec{v} \cdot \vec{\nabla}\times \vec{n}_k}{\Omega}\Big) + \int_{\Omega} \lambda_k \ltwoinner{\vec{u}}{\vec{v}} \diff{V} \label{auvform}, \end{align} and \begin{align*} b(\vec{v}, \gamma) = \int_{\Omega} \gamma \ltwoinner{\vec{n}_k}{\vec{v}} \diff{V}. \end{align*} Moreover, \begin{align*} F(\vec{v}) &= - \Big(K_1\Ltwoinner{\vec{\nabla}\cdot \vec{n}_k}{\vec{\nabla}\cdot \vec{v}}{\Omega} + K_3\Ltwoinnerndim{\vec{Z}(\vec{n}_k) \vec{\nabla}\times \vec{n}_k}{\vec{\nabla}\times \vec{v}}{\Omega}{3} \nonumber \\ &\qquad + (K_2-K_3)\Ltwoinner{\vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k}{\vec{v} \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} + \int_{\Omega} \lambda_k \ltwoinner{\vec{n}_k}{\vec{v}} \diff{V} \Big), \end{align*} and \begin{align*} G(\gamma) = -\frac{1}{2} \int_{\Omega} \gamma (\ltwoinner{\vec{n}_k}{\vec{n}_k} -1) \diff{V}. \end{align*} In this section, we aim to show that the system in \eqref{newtonhessian} is well-posed. Therefore, continuity, coercivity, and weak coercivity results are desired for the bilinear forms $a(\cdot, \cdot)$ and $b(\cdot, \cdot)$. 
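Several of the bounds below rely on the USPD eigenvalue bounds $\eta$ and $\Lambda$ for $\vec{Z}(\vec{n}_k)$ from Lemma \ref{USPDlemma}. These are easy to sanity-check numerically; the following sketch (illustrative only, with helper names of our own) encodes the outer bounds from the lemma.

```python
import numpy as np

def Z_matrix(n, kappa):
    """Z = I - (1 - kappa) n n^T; eigenvalues are 1, 1, 1 + (kappa - 1)|n|^2."""
    return np.eye(3) - (1.0 - kappa) * np.outer(n, n)

def uspd_bounds(kappa, beta):
    """Outer eigenvalue bounds (eta, Lambda) as in Lemma USPDlemma.

    For kappa > 1 they are [1, 1 + (kappa-1)*beta]; for 0 < kappa < 1
    (with beta < 1/(1-kappa)) they are [1 + (kappa-1)*beta, 1].
    """
    lo = min(1.0, 1.0 + (kappa - 1.0) * beta)
    hi = max(1.0, 1.0 + (kappa - 1.0) * beta)
    if lo <= 0.0:
        raise ValueError("Z is not uniformly positive definite for these parameters")
    return lo, hi
```

Sampling random directors with $\alpha \leq \vert\vec{n}\vert^2 \leq \beta$ and checking the spectrum of $\vec{Z}$ against these bounds reproduces \eqref{bigkappaUSPD} and \eqref{smallkappaUSPD}.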
Due to the complexity of the bilinear forms, deriving theoretical results in the continuum is challenging. However, the following lemmas hold. \begin{lemma} \label{contcoercivityauv} Under Assumption \ref{secass} and the assumption that $\lambda_k$ is pointwise non-negative, if $\kappa =1$, there exists an $\alpha_0 >0$ such that $\alpha_0 \Hdcnorm{\vec{v}}{\Omega}^2 \leq a(\vec{v}, \vec{v})$ for all $\vec{v} \in \mathcal{H}^{DC}_0{(\Omega)}$. \end{lemma} \begin{proof} The proof of this lemma is identical to that of Lemma \ref{coercivityauv} below. \end{proof} If additional regularity is asserted, such that $\delta \director$ and $\vec{v}$ are elements of $\mathcal{H}^{DC^1}_0 {(\Omega)} = \{ \vec{w} \in \mathcal{H}^{DC}_0{(\Omega)} : \vec{\nabla}\times \vec{w} \in \Hone{\Omega}^3 \}$ with norm $\Hdconenorm{\vec{w}}{\Omega}^2 = \Ltwonormndim{\vec{w}}{\Omega}{3}^2 + \Ltwonorm{\vec{\nabla}\cdot \vec{w}}{\Omega}^2 + \Honenorm{\vec{\nabla}\times \vec{w}}{\Omega}^2$, where $\Honenorm{\cdot}{\Omega}$ denotes the standard norm on $\Hone{\Omega}$, then the next two lemmas hold for arbitrary $\kappa$. \begin{lemma} \label{contboundedlinearforms} Under Assumption \ref{secass}, $F$ and $G$ are bounded linear functionals on $\mathcal{H}^{DC^1}_0 {(\Omega)}$ and $\Ltwo{\Omega}$, respectively. \end{lemma} \begin{proof} A simple application of the Cauchy-Schwarz inequality shows that $G(\gamma)$ is a bounded linear functional.
For $F(\vec{v})$, observe that \begin{align} \vert F(\vec{v}) \vert &\leq K_1 \vert \Ltwoinner{\vec{\nabla}\cdot \vec{n}_k}{\vec{\nabla}\cdot \vec{v}}{\Omega} \vert + K_3 \vert \Ltwoinnerndim{\vec{Z}(\vec{n}_k) \vec{\nabla}\times \vec{n}_k}{\vec{\nabla}\times \vec{v}}{\Omega}{3} \vert \nonumber \\ &\qquad + \vert K_2-K_3 \vert \vert \Ltwoinner{\vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k}{\vec{v} \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} \vert + \vert\int_{\Omega} \lambda_k \ltwoinner{\vec{n}_k}{\vec{v}} \diff{V} \vert \label{Ftriangleinequalcont}, \end{align} by the triangle inequality. Applying Cauchy-Schwarz inequalities to \eqref{Ftriangleinequalcont}, one obtains \begin{align} \vert F(\vec{v}) \vert & \leq K_1 \Ltwonorm{\vec{\nabla}\cdot \vec{n}_k}{\Omega} \Ltwonorm{\vec{\nabla}\cdot \vec{v}}{\Omega} + K_3 \Ltwonormndim{\vec{Z}(\vec{n}_k) \vec{\nabla}\times \vec{n}_k}{\Omega}{3} \Ltwonormndim{\vec{\nabla}\times \vec{v}}{\Omega}{3} \nonumber \\ &\qquad + \vert K_2-K_3 \vert \vert \Ltwoinner{\vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k}{\vec{v} \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} \vert + \Ltwonormndim{\lambda_k \vec{n}_k}{\Omega}{3}\Ltwonormndim{\vec{v}}{\Omega}{3} \nonumber \\ &\leq K_1 \Ltwonorm{\vec{\nabla}\cdot \vec{n}_k}{\Omega} \Hdconenorm{\vec{v}}{\Omega} + K_3 \Ltwonormndim{\vec{Z}(\vec{n}_k) \vec{\nabla}\times \vec{n}_k}{\Omega}{3} \Hdconenorm{\vec{v}}{\Omega} \nonumber \\ &\qquad + \vert K_2-K_3 \vert \vert \Ltwoinner{\vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k}{\vec{v} \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} \vert + \Ltwonormndim{\lambda_k \vec{n}_k}{\Omega}{3}\Hdconenorm{\vec{v}}{\Omega}. \label{partialFinequalitycont} \end{align} In order to bound $\vert F(\vec{v}) \vert$, consider the final three summands separately.
Note that since $\ltwonorm{\vec{Z}(\vec{n}_k)} \leq \Lambda$, where $\Lambda$ is the relevant upper bound from Lemma \ref{USPDlemma}, it is evident that \begin{equation} \Ltwonormndim{\vec{Z}(\vec{n}_k)\vec{\nabla}\times \vec{n}_k}{\Omega}{3} \leq \Lambda \Ltwonormndim{\vec{\nabla}\times \vec{n}_k}{\Omega}{3}, \label{matrixcurltermcont} \end{equation} and that \begin{align} \label{klambkdirtermcont} \Ltwonormndim{\lambda_k \vec{n}_k}{\Omega}{3}^2 &\leq \beta \int_{\Omega} \lambda_k^2 \diff{V} = C_{\lambda \vec{n}}^2, \end{align} where $\beta$ is the upper bound for \eqref{limitsonnlength}. Finally, consider \begin{equation*} \vert \Ltwoinner{\vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k}{\vec{v} \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} \vert = \vert \Ltwoinnerndim{(\vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k) \vec{\nabla}\times \vec{n}_k}{\vec{v}}{\Omega}{3} \vert. \end{equation*} Applying the Cauchy-Schwarz inequality, \begin{align} \vert \Ltwoinnerndim{(\vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k) \vec{\nabla}\times \vec{n}_k}{\vec{v}}{\Omega}{3} \vert &\leq \Ltwonormndim{(\vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k) \vec{\nabla}\times \vec{n}_k}{\Omega}{3} \Ltwonormndim{\vec{v}}{\Omega}{3} \nonumber \\ &\leq \Ltwonormndim{(\vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k) \vec{\nabla}\times \vec{n}_k}{\Omega}{3} \Hdconenorm{\vec{v}}{\Omega} \label{lastboundedfunctionalsummandcont}. \end{align} By Corollary 1.1 in \cite{Girault1}, since $\vec{\nabla}\times \vec{n}_k \in \Hone{\Omega}^3$, $\vec{\nabla}\times \vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k \in \Ltwo{\Omega}$.
Note that \begin{align*} (\vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k) \vec{\nabla}\times \vec{n}_k \cdot (\vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k) \vec{\nabla}\times \vec{n}_k &= (\vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k)^2 (\vec{\nabla}\times \vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k) \nonumber \\ &\leq (\vert \vec{n}_k \vert \cdot \vert \vec{\nabla}\times \vec{n}_k \vert)^2 (\vec{\nabla}\times \vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k) \\ &\leq \beta \cdot (\vec{\nabla}\times \vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k)^2. \end{align*} Employing this in \eqref{lastboundedfunctionalsummandcont} and letting $\Ltwonorm{\vec{\nabla}\times \vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} = C_{\vec{n} \vec{n}}$, \begin{align} \Ltwonormndim{(\vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k) \vec{\nabla}\times \vec{n}_k}{\Omega}{3} &\leq \sqrt{\beta} \big( \int_{\Omega}(\vec{\nabla}\times \vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k)^2 \diff{V} \big)^{1/2} \nonumber \\ &\leq \sqrt{\beta} C_{\vec{n} \vec{n}} \label{regularcurlcurltermcont}. \end{align} Therefore, using \eqref{partialFinequalitycont}-\eqref{klambkdirtermcont}, and \eqref{regularcurlcurltermcont}, \begin{align*} \vert F(\vec{v}) \vert \leq& K_1 \Ltwonorm{\vec{\nabla}\cdot \vec{n}_k}{\Omega} \Hdconenorm{\vec{v}}{\Omega} + K_3 \Lambda \Ltwonormndim{\vec{\nabla}\times \vec{n}_k}{\Omega}{3}\Hdconenorm{\vec{v}}{\Omega} \\ \nonumber & + \vert K_2-K_3 \vert \sqrt{\beta} C_{\vec{n} \vec{n}}\Hdconenorm{\vec{v}}{\Omega} + C_{\lambda \vec{n}}\Hdconenorm{\vec{v}}{\Omega}. \end{align*} \end{proof} \begin{lemma} \label{contbilinearformcontinuity} Under Assumption \ref{secass}, $a(\vec{u}, \vec{v})$ and $b(\vec{v}, \gamma)$ are continuous for the norms $\Hdconenorm{\cdot}{\Omega}$ and $\Ltwonorm{\cdot}{\Omega}$. 
\end{lemma} \begin{proof} First consider \begin{align*} \vert b(\vec{v}, \gamma) \vert &= \vert \int_{\Omega} \gamma \ltwoinner{\vec{v}}{\vec{n}_k} \diff{V} \vert \nonumber \\ &\leq \Ltwonorm{\gamma}{\Omega} \Ltwonorm{\vec{v} \cdot \vec{n}_k}{\Omega} \nonumber \\ &\leq \Ltwonorm{\gamma}{\Omega} \sqrt{\beta} \Ltwonorm{\vec{v}}{\Omega}, \end{align*} by H\"{o}lder's inequality and \eqref{limitsonnlength}. Therefore, $b(\vec{v}, \gamma)$ is a continuous bilinear form. For the continuity of $a(\vec{u}, \vec{v})$, observe that \begin{align} \vert a(\vec{u}, \vec{v}) \vert \leq & K_1 \vert \Ltwoinner{\vec{\nabla}\cdot \vec{u}}{\vec{\nabla}\cdot \vec{v}}{\Omega} \vert + K_3 \vert \Ltwoinnerndim{\vec{A}(\vec{n}_k) \vec{\nabla}\times \vec{u}}{\vec{\nabla}\times \vec{v}}{\Omega}{3} \vert \nonumber \\ & + \vert K_2-K_3 \vert \Big( \vert \Ltwoinner{\vec{u} \cdot \vec{\nabla}\times \vec{v}}{\vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} \vert+\vert\Ltwoinner{\vec{n}_k \cdot \vec{\nabla}\times \vec{v}}{\vec{u} \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} \vert \nonumber \\ & + \vert \Ltwoinner{\vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k}{\vec{v} \cdot \vec{\nabla}\times \vec{u}}{\Omega} \vert + \vert \Ltwoinner{\vec{n}_k \cdot \vec{\nabla}\times \vec{u}}{\vec{v} \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} \vert \nonumber \\ & + \vert \Ltwoinner{\vec{u} \cdot \vec{\nabla}\times \vec{n}_k}{\vec{v} \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} \vert \Big) + \vert \int_{\Omega} \lambda_k \ltwoinner{\vec{u}}{\vec{v}} \diff{V} \vert \label{triangleauvinequalitycont}, \end{align} by the triangle inequality. For simplicity, consider the components of the sum above. Note that \begin{equation} \vert \Ltwoinner{\vec{\nabla}\cdot \vec{u}}{\vec{\nabla}\cdot \vec{v}}{\Omega} \vert \leq \Ltwonorm{\vec{\nabla}\cdot \vec{u}}{\Omega} \Ltwonorm{\vec{\nabla}\cdot \vec{v}}{\Omega} \leq \Hdconenorm{ \vec{u}}{\Omega} \Hdconenorm{ \vec{v}}{\Omega}. 
\label{divergenceauvcontineqcont} \end{equation} Considering $\vert \Ltwoinnerndim{\vec{A}(\vec{n}_k) \vec{\nabla}\times \vec{u}}{\vec{\nabla}\times \vec{v}}{\Omega}{3} \vert$, using \eqref{matrixcurltermcont} implies that \begin{align} \vert \Ltwoinnerndim{\vec{A}(\vec{n}_k) \vec{\nabla}\times \vec{u}}{\vec{\nabla}\times \vec{v}}{\Omega}{3} \vert &\leq \Ltwonormndim{\vec{\nabla}\times \vec{v}}{\Omega}{3} \Ltwonormndim{\vec{A}(\vec{n}_k)\vec{\nabla}\times \vec{u}}{\Omega}{3} \nonumber \\ &\leq \Lambda \Hdconenorm{\vec{v}}{\Omega} \Ltwonormndim{\vec{\nabla}\times \vec{u}}{\Omega}{3} \nonumber \\ & \leq \Lambda \Hdconenorm{\vec{v}}{\Omega} \Hdconenorm{\vec{u}}{\Omega}. \label{Amatrixauvcontineqcont} \end{align} From the imbedding in Lemma 2.5 of \cite{Girault1}, if $\Omega$ is a convex polyhedron or has a $C^{1,1}$ boundary, then for any $\vec{w} \in \Hdivnot{\Omega} \cap \Hcurlnot{\Omega}$ there exists a $C_{\text{imb}}>0$ such that \begin{equation*} \Honenorm{\vec{w}}{\Omega}^2 \leq C_{\text{imb}} \Hdconenorm{\vec{w}}{\Omega}^2. \end{equation*} Furthermore, $\vec{w} \in \Honenot{\Omega}^3$ by \cite[Lemma 2.5]{Girault1}. Consider $\vert \Ltwoinner{\vec{u} \cdot \vec{\nabla}\times \vec{v}}{\vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} \vert$ from \eqref{triangleauvinequalitycont}. By Corollary 1.1 in \cite{Girault1}, the map $\vec{u} \cdot \vec{\nabla}\times \vec{v}$ is a \emph{continuous} bilinear map, $\Hone{\Omega}^3 \times \Hone{\Omega}^3 \to \Ltwo{\Omega}$. Therefore, there exists a $C_{\text{con}} >0$ such that \begin{equation*} \Ltwonorm{\vec{u} \cdot \vec{\nabla}\times \vec{v}}{\Omega} \leq C_{\text{con}} \Honenorm{\vec{u}}{\Omega} \Honenorm{\vec{\nabla}\times \vec{v}}{\Omega}. 
\end{equation*} By the Cauchy-Schwarz inequality \begin{equation} \vert \Ltwoinner{\vec{u} \cdot \vec{\nabla}\times \vec{v}}{\vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} \vert \leq \Ltwonorm{\vec{u} \cdot \vec{\nabla}\times \vec{v}}{\Omega} \Ltwonorm{\vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} \label{csk3kappafirstcont}. \end{equation} Let $C' = C_{\text{con}} C_{\text{imb}}$ and note that \begin{align} \Ltwonorm{\vec{u} \cdot \vec{\nabla}\times \vec{v}}{\Omega} &\leq C_{\text{con}} \Honenorm{\vec{u}}{\Omega} \Honenorm{\vec{\nabla}\times \vec{v}}{\Omega} \label{ucurlvcontinuitycont} \\ &\leq C' \Hdconenorm{\vec{u}}{\Omega} \Hdconenorm{\vec{v}}{\Omega} \label{ucurlvembiddingcont}, \end{align} where \eqref{ucurlvcontinuitycont} is given by continuity and \eqref{ucurlvembiddingcont} is given by the imbedding. Hence, \begin{align} \vert \Ltwoinner{\vec{u} \cdot \vec{\nabla}\times \vec{v}}{\vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} \vert &\leq C' \Hdconenorm{\vec{u}}{\Omega} \Hdconenorm{\vec{v}}{\Omega} \Ltwonorm{\vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} \nonumber \\ & \leq C' \sqrt{\beta} \Ltwonormndim{\vec{\nabla}\times \vec{n}_k}{\Omega}{3} \Hdconenorm{\vec{u}}{\Omega} \Hdconenorm{\vec{v}}{\Omega} \label{ucurlvinftyinequalcont}. \end{align} The next summand from \eqref{triangleauvinequalitycont} is \begin{equation*} \vert\Ltwoinner{\vec{n}_k \cdot \vec{\nabla}\times \vec{v}}{\vec{u} \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} \vert \leq \Ltwonorm{\vec{n}_k \cdot \vec{\nabla}\times \vec{v}}{\Omega} \Ltwonorm{\vec{u} \cdot \vec{\nabla}\times \vec{n}_k}{\Omega}. \end{equation*} Again bound \begin{align*} \Ltwonorm{\vec{n}_k \cdot \vec{\nabla}\times \vec{v}}{\Omega}& \leq \sqrt{\beta} \Hdconenorm{\vec{v}}{\Omega}. 
\end{align*} Since $\vec{u} \in \Honenot{\Omega}^3$ and $\vec{\nabla}\times \vec{n}_k \in \Hone{\Omega}^3$, \begin{align*} \Ltwonorm{\vec{u} \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} &\leq C_{\text{con}}\Honenorm{\vec{u}}{\Omega} \Honenorm{\vec{\nabla}\times \vec{n}_k}{\Omega} \nonumber \\ & \leq C' \Hdconenorm{\vec{u}}{\Omega}\Honenorm{\vec{\nabla}\times \vec{n}_k}{\Omega}. \end{align*} Therefore, \begin{equation} \label{secondk3kappacontineqcont} \vert\Ltwoinner{\vec{n}_k \cdot \vec{\nabla}\times \vec{v}}{\vec{u} \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} \vert \leq \sqrt{\beta} C' \Honenorm{\vec{\nabla}\times \vec{n}_k}{\Omega} \Hdconenorm{\vec{u}}{\Omega}\Hdconenorm{\vec{v}}{\Omega}. \end{equation} Now consider $\vert \Ltwoinner{\vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k}{\vec{v} \cdot \vec{\nabla}\times \vec{u}}{\Omega} \vert$ and note that this inner product is the same as that in \eqref{csk3kappafirstcont} with the roles of $\vec{u}$ and $\vec{v}$ reversed. Since $\vec{u}$ and $\vec{v}$ are from the same space, the steps for deriving \eqref{ucurlvinftyinequalcont} are equally valid. Thus, \begin{equation} \vert \Ltwoinner{\vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k}{\vec{v} \cdot \vec{\nabla}\times \vec{u}}{\Omega} \vert\leq C' \sqrt{\beta} \Ltwonormndim{\vec{\nabla}\times \vec{n}_k}{\Omega}{3} \Hdconenorm{\vec{u}}{\Omega} \Hdconenorm{\vec{v}}{\Omega} \label{thirdk3kappacontineqcont}. \end{equation} Similarly, the inequality for $\vert \Ltwoinner{\vec{n}_k \cdot \vec{\nabla}\times \vec{u}}{\vec{v} \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} \vert$ is derived in an analogous manner to that of \eqref{secondk3kappacontineqcont}. Thus, \begin{equation} \vert \Ltwoinner{\vec{n}_k \cdot \vec{\nabla}\times \vec{u}}{\vec{v} \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} \vert \leq \sqrt{\beta} C' \Honenorm{\vec{\nabla}\times \vec{n}_k}{\Omega} \Hdconenorm{\vec{u}}{\Omega}\Hdconenorm{\vec{v}}{\Omega} \label{forthk3kappacontineqcont}. 
\end{equation} Next, examine \begin{equation*} \vert \Ltwoinner{\vec{u} \cdot \vec{\nabla}\times \vec{n}_k}{\vec{v} \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} \vert \leq \Ltwonorm{\vec{u} \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} \Ltwonorm{\vec{v} \cdot \vec{\nabla}\times \vec{n}_k}{\Omega}. \end{equation*} Since $\vec{\nabla}\times \vec{n}_k \in \Hone{\Omega}^3$, \begin{align*} \Ltwonorm{\vec{u} \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} \leq C_{\text{con}} \Honenorm{\vec{u}}{\Omega} \Honenorm{\vec{\nabla}\times \vec{n}_k}{\Omega} \leq C' \Hdconenorm{\vec{u}}{\Omega} \Honenorm{\vec{\nabla}\times \vec{n}_k}{\Omega} ,\\ \Ltwonorm{\vec{v} \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} \leq C_{\text{con}} \Honenorm{\vec{v}}{\Omega} \Honenorm{\vec{\nabla}\times \vec{n}_k}{\Omega} \leq C' \Hdconenorm{\vec{v}}{\Omega} \Honenorm{\vec{\nabla}\times \vec{n}_k}{\Omega}. \end{align*} Thus, \begin{equation} \vert \Ltwoinner{\vec{u} \cdot \vec{\nabla}\times \vec{n}_k}{\vec{v} \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} \vert \leq (C')^2 \Honenorm{\vec{\nabla}\times \vec{n}_k}{\Omega}^2 \Hdconenorm{\vec{u}}{\Omega}\Hdconenorm{\vec{v}}{\Omega} \label{k3kappafiftcontineqcont}. \end{equation} Finally, \begin{align} \vert \int_{\Omega} \lambda_k \ltwoinner{\vec{u}}{\vec{v}} \diff{V} \vert &\leq \Ltwonorm{\lambda_k}{\Omega} \Ltwonorm{\vec{u} \cdot \vec{v}}{\Omega} \nonumber \\ &\leq \Ltwonorm{\lambda_k}{\Omega} C_{\text{con}} \Honenorm{\vec{u}}{\Omega} \Honenorm{\vec{v}}{\Omega}\nonumber \\ & \leq \Ltwonorm{\lambda_k}{\Omega} C'C_{\text{imb}} \Hdconenorm{\vec{u}}{\Omega} \Hdconenorm{\vec{v}}{\Omega} \label{lambdaimbedineqcont}. 
\end{align} Combining \eqref{divergenceauvcontineqcont}, \eqref{Amatrixauvcontineqcont}, and \eqref{ucurlvinftyinequalcont}-\eqref{lambdaimbedineqcont}, \begin{align*} \vert a(\vec{u},\vec{v}) \vert &\leq \Big( K_1 + K_3 \Lambda + \vert K_2-K_3 \vert \big( 2C' \sqrt{\beta} \Ltwonormndim{\vec{\nabla}\times \vec{n}_k}{\Omega}{3}+ 2\sqrt{\beta} C' \Honenorm{\vec{\nabla}\times \vec{n}_k}{\Omega} \nonumber \\ &\qquad + (C')^2 \Honenorm{\vec{\nabla}\times \vec{n}_k}{\Omega}^2\big) + \Ltwonorm{\lambda_k}{\Omega} C'C_{\text{imb}} \Big) \Hdconenorm{\vec{u}}{\Omega} \Hdconenorm{\vec{v}}{\Omega}. \end{align*} \end{proof} The auxiliary regularity above poses a number of theoretical problems. For the well-posedness of the continuum system, coercivity and weak coercivity must be shown in the more intricate $\mathcal{H}^{DC^1} {(\Omega)}$ norm. Moreover, conforming finite elements for this space, such as Bogner-Fox-Schmit elements \cite{Bogner1}, are undesirably cumbersome and present notable difficulties in demonstrating stability for this linearization system. However, in the discrete setting, results guaranteeing the existence and uniqueness of solutions to the discrete Newton systems at each step are attained under less strict assumptions. \subsection{Discrete System Preliminaries} Performing the outlined Newton iterations for free elastic effects necessitates solving the above Newton systems for the update functions $\delta \director$ and $\delta \lambda$. Thus, finite elements are used to numerically approximate the updates. Finite dimensional spaces $V_h \subset \mathcal{H}^{DC}_0{(\Omega)}$ and $\Pi_h \subset \Ltwo{\Omega}$ are considered, yielding the discrete variational problem \begin{align} \vec{a}(\delta \director_h, \vec{v}_h) + b(\vec{v}_h, \delta \lambda_h) &= \vec{F}(\vec{v}_h),& &\forall \vec{v}_h \in V_h, \label{generalizedNewtoniterationweakform1} \\ b(\delta \director_h, \gamma_h) &= G(\gamma_h),& &\forall \gamma_h \in \Pi_h.
\label{generalizedNewtoniterationweakform2} \end{align} Note that Assumption \ref{secass} implies that $\delta \director_h$ and $\vec{v}_h$ are also elements of $\Hdivnot{\Omega} \cap \Hcurlnot{\Omega}$. Throughout the rest of this section, the developed theory applies exclusively to discrete spaces. Therefore, except when necessary for clarity, we drop the subscript $h$ along with the notation $\delta \director$ and $\delta \lambda$. For instance, we write $a(\vec{u}, \vec{v})$ to indicate the bilinear form in \eqref{auvform} operating on the discrete space $V_h \times V_h$. The existence and uniqueness theory in the following subsections is explicitly developed in the presence of full Dirichlet boundary conditions. However, the theory is equally applicable for a rectangular domain with mixed Dirichlet and periodic boundary conditions. Such a domain is considered in the numerical experiments presented herein. Let $\{\mathcal{T}_h \}$, $0< h \leq 1$, be a family of quadrilateral subdivisions of $\Omega$, such that \begin{equation} \max \{\text{diam } T : T \in \mathcal{T}_h \} \leq h\ \text{diam } \Omega. \label{maxuniformity} \end{equation} Further, assume that $\{\mathcal{T}_h \}$ is quasi-uniform so that there exists a $\rho > 0$, such that \begin{equation} \label{quasiuniform} \min \{\text{diam } B_T : T \in \mathcal{T}_h \} \geq \rho\ h\ \text{diam } \Omega, \end{equation} for all $h \in (0, 1]$, where $B_T$ is the largest ball contained in $T$, such that $T$ is star-shaped with respect to $B_T$ \cite{Brenner1}. Denote the measure of $T \in \mathcal{T}_h$ as $\vert T \vert$. Furthermore, let $Q_p$ denote piecewise $C^0$ polynomials of degree $p \geq 1$ on $\mathcal{T}_h$ and $P_0$ denote the space of piecewise constants on $\mathcal{T}_h$. 
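As an illustration, the shape-regularity conditions \eqref{maxuniformity} and \eqref{quasiuniform} are straightforward to verify for a concrete subdivision. The following minimal sketch (with a hypothetical uniform $4 \times 4$ partition of the unit square; the function name and data are illustrative only) checks both bounds.

```python
import math

def check_mesh(diams, ball_diams, domain_diam, h, rho):
    """Check the shape-regularity conditions: every element diameter is
    bounded by h*diam(Omega), and every inscribed-ball diameter is
    bounded below by rho*h*diam(Omega)."""
    max_ok = max(diams) <= h * domain_diam
    min_ok = min(ball_diams) >= rho * h * domain_diam
    return max_ok and min_ok

# Hypothetical uniform 4x4 subdivision of the unit square: each cell has
# diameter sqrt(2)/4 and inscribed-ball diameter 1/4.
regular = check_mesh([math.sqrt(2) / 4] * 16, [0.25] * 16,
                     domain_diam=math.sqrt(2), h=0.3, rho=0.5)
```

For a genuinely quasi-uniform family, the same $\rho$ must of course work for all $h \in (0,1]$, not merely a single mesh as in this sketch.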
Next, define a bubble space \begin{equation*} V_h^b = \{\vec{v} \in C_c(\Omega)^3 : \vec{v}|_{T} =a_T b_T \vec{n}_k |_T, \forall T \in \mathcal{T}_h \}, \end{equation*} where $C_c(\Omega)$ denotes the space of compactly supported continuous functions on $\Omega$, $b_T$ is the quadratic bubble function \cite{Mourad1} that vanishes on $\partial T \in \mathcal{T}_h$, and $a_T$ is a constant coefficient associated with $b_T$. The bubble functions are constructed \cite{Pierre1}, such that \begin{align} \int_T b_T \diff{V} &= 1,& &\forall T \in \mathcal{T}_h, \label{bubbleprop1} \\ b_T &> 0,& &\forall \vec{x} \in T \label{bubbleprop2}. \end{align} Then, we consider the pair of spaces \begin{align} \Pi_h &= P_0, \label{pispace} \\ V_h &= \{ \vec{v} \in Q_m \times Q_m \times Q_m \oplus V_h^b : \vec{v} = \vec{0} \text{ on } \partial \Omega \}. \label{vspace} \end{align} In the following sections, to demonstrate the existence and uniqueness of solutions to the system given by \eqref{generalizedNewtoniterationweakform1} and \eqref{generalizedNewtoniterationweakform2}, we show that $a(\vec{u}, \vec{v})$ is a coercive and continuous bilinear form and that $b(\vec{v}, \gamma)$ is a continuous and weakly coercive bilinear form \cite{Brenner1, Boffi1, Braess1, Babuska1} for the above spaces, $V_h$ and $\Pi_h$. Throughout the remainder of this section, we further assume that $\vec{n}_k \in Q_p$, for some $p \geq 1$, so that $V_h \subset Q_l \times Q_l \times Q_l$ for $l = \max (m, p+2)$. \subsection{Discrete Continuity} In this section, we show that the right hand sides of \eqref{generalizedNewtoniterationweakform1} and \eqref{generalizedNewtoniterationweakform2} are continuous linear functionals and that the bilinear forms $a(\vec{u}, \vec{v})$ and $b(\vec{v}, \gamma)$ are continuous for the assumptions discussed above. \begin{lemma} \label{boundedlinearforms} Under Assumption \ref{secass}, $F$ and $G$ are bounded linear functionals on $V_h$ and $\Pi_h$, respectively. 
\end{lemma} \begin{proof} A simple application of the Cauchy-Schwarz inequality shows that $G(\gamma)$ is a bounded linear functional. For $F(\vec{v})$, observe that \begin{align} \vert F(\vec{v}) \vert &\leq K_1 \vert \Ltwoinner{\vec{\nabla}\cdot \vec{n}_k}{\vec{\nabla}\cdot \vec{v}}{\Omega} \vert + K_3 \vert \Ltwoinnerndim{\vec{Z}(\vec{n}_k) \vec{\nabla}\times \vec{n}_k}{\vec{\nabla}\times \vec{v}}{\Omega}{3} \vert \nonumber \\ &\qquad + \vert K_2-K_3 \vert \vert \Ltwoinner{\vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k}{\vec{v} \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} \vert + \left \vert \int_{\Omega} \lambda_k \ltwoinner{\vec{n}_k}{\vec{v}} \diff{V} \right \vert \label{Ftriangleinequal}, \end{align} by the triangle inequality. Applying Cauchy-Schwarz inequalities to \eqref{Ftriangleinequal}, one obtains \begin{align} \vert F(\vec{v}) \vert & \leq K_1 \Ltwonorm{\vec{\nabla}\cdot \vec{n}_k}{\Omega} \Ltwonorm{\vec{\nabla}\cdot \vec{v}}{\Omega} + K_3 \Ltwonormndim{\vec{Z}(\vec{n}_k) \vec{\nabla}\times \vec{n}_k}{\Omega}{3} \Ltwonormndim{\vec{\nabla}\times \vec{v}}{\Omega}{3} \nonumber \\ &\qquad + \vert K_2-K_3 \vert \vert \Ltwoinner{\vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k}{\vec{v} \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} \vert + \Ltwonormndim{\lambda_k \vec{n}_k}{\Omega}{3}\Ltwonormndim{\vec{v}}{\Omega}{3} \nonumber \\ &\leq K_1 \Ltwonorm{\vec{\nabla}\cdot \vec{n}_k}{\Omega} \Hdcnorm{\vec{v}}{\Omega} + K_3 \Ltwonormndim{\vec{Z}(\vec{n}_k) \vec{\nabla}\times \vec{n}_k}{\Omega}{3} \Hdcnorm{\vec{v}}{\Omega} \nonumber \\ &\qquad + \vert K_2-K_3 \vert \vert \Ltwoinner{\vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k}{\vec{v} \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} \vert + \Ltwonormndim{\lambda_k \vec{n}_k}{\Omega}{3}\Hdcnorm{\vec{v}}{\Omega}. \label{partialFinequality} \end{align} In order to bound $\vert F(\vec{v}) \vert$, consider the final three summands separately. 
Note that since $\ltwonorm{\vec{Z}(\vec{n}_k)} \leq \Lambda$, where $\Lambda$ is the relevant upper bound from Lemma \ref{USPDlemma}, it is evident that \begin{equation} \Ltwonormndim{\vec{Z}(\vec{n}_k)\vec{\nabla}\times \vec{n}_k}{\Omega}{3} \leq \Lambda \Ltwonormndim{\vec{\nabla}\times \vec{n}_k}{\Omega}{3}, \label{matrixcurlterm} \end{equation} and that \begin{align} \Ltwonormndim{\lambda_k \vec{n}_k}{\Omega}{3}^2 &\leq \beta \int_{\Omega} \lambda_k^2 \diff{V} = C_1^2 \label{klambkdirterm}, \end{align} where $\beta$ is the upper bound in \eqref{limitsonnlength}. Finally, consider \begin{equation*} \vert \Ltwoinner{\vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k}{\vec{v} \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} \vert = \vert \Ltwoinnerndim{(\vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k) \vec{\nabla}\times \vec{n}_k}{\vec{v}}{\Omega}{3} \vert. \end{equation*} Applying the Cauchy-Schwarz inequality, \begin{align} \vert \Ltwoinnerndim{(\vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k) \vec{\nabla}\times \vec{n}_k}{\vec{v}}{\Omega}{3} \vert &\leq \Ltwonormndim{(\vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k) \vec{\nabla}\times \vec{n}_k}{\Omega}{3} \Ltwonormndim{\vec{v}}{\Omega}{3} \nonumber \\ &\leq \Ltwonormndim{(\vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k) \vec{\nabla}\times \vec{n}_k}{\Omega}{3} \Hdcnorm{\vec{v}}{\Omega} \label{lastboundedfunctionalsummand}. \end{align} Next, note that \begin{align} (\vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k) \vec{\nabla}\times \vec{n}_k \cdot (\vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k) \vec{\nabla}\times \vec{n}_k &= (\vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k)^2 (\vec{\nabla}\times \vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k) \nonumber \\ &\leq (\vert \vec{n}_k \vert \cdot \vert \vec{\nabla}\times \vec{n}_k \vert)^2 (\vec{\nabla}\times \vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k) \nonumber \\ &\leq \beta \cdot (\vec{\nabla}\times \vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k)^2 \label{curlcurlinequal}. 
\end{align} Furthermore, $\vec{\nabla}\times \vec{n}_k$ is a vector of piecewise polynomials. Therefore, $\vec{\nabla}\times \vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k \in \Ltwo{\Omega}$. Employing \eqref{curlcurlinequal} and letting $\Ltwonorm{\vec{\nabla}\times \vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} = C_2$, \begin{align} \Ltwonormndim{(\vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k) \vec{\nabla}\times \vec{n}_k}{\Omega}{3} &\leq \sqrt{\beta} \big( \int_{\Omega}(\vec{\nabla}\times \vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k)^2 \diff{V} \big)^{1/2} \nonumber \\ &\leq \sqrt{\beta} C_2 \label{regularcurlcurlterm}. \end{align} Therefore, using \eqref{partialFinequality}-\eqref{lastboundedfunctionalsummand}, and \eqref{regularcurlcurlterm}, \begin{align*} \vert F(\vec{v}) \vert \leq& K_1 \Ltwonorm{\vec{\nabla}\cdot \vec{n}_k}{\Omega} \Hdcnorm{\vec{v}}{\Omega} + K_3 \Lambda \Ltwonormndim{\vec{\nabla}\times \vec{n}_k}{\Omega}{3}\Hdcnorm{\vec{v}}{\Omega} \\ \nonumber & + \vert K_2-K_3 \vert \sqrt{\beta} C_2\Hdcnorm{\vec{v}}{\Omega} + C_1\Hdcnorm{\vec{v}}{\Omega}, \end{align*} implying $F(\vec{v})$ is a bounded linear functional on $V_h$. \end{proof} \begin{lemma} \label{bilinearformcontinuity} Under Assumption \ref{secass}, $a(\vec{u}, \vec{v})$ and $b(\vec{v}, \gamma)$ are continuous. \end{lemma} \begin{proof} First consider \begin{align*} \vert b(\vec{v}, \gamma) \vert &= \left \vert \int_{\Omega} \gamma \ltwoinner{\vec{v}}{\vec{n}_k} \diff{V} \right \vert \nonumber \\ &\leq \Ltwonorm{\gamma}{\Omega} \Ltwonorm{\vec{v} \cdot \vec{n}_k}{\Omega} \nonumber \\ &\leq \Ltwonorm{\gamma}{\Omega} \sqrt{\beta} \Ltwonorm{\vec{v}}{\Omega}, \end{align*} by H\"{o}lder's inequality and \eqref{limitsonnlength}. Therefore, $b(\vec{v}, \gamma)$ is a continuous bilinear form. 
For the continuity of $a(\vec{u}, \vec{v})$, observe that \begin{align} \vert a(\vec{u}, \vec{v}) \vert \leq & K_1 \vert \Ltwoinner{\vec{\nabla}\cdot \vec{u}}{\vec{\nabla}\cdot \vec{v}}{\Omega} \vert + K_3 \vert \Ltwoinnerndim{\vec{Z}(\vec{n}_k) \vec{\nabla}\times \vec{u}}{\vec{\nabla}\times \vec{v}}{\Omega}{3} \vert \nonumber \\ & + \vert K_2-K_3 \vert \Big( \vert \Ltwoinner{\vec{u} \cdot \vec{\nabla}\times \vec{v}}{\vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} \vert+\vert\Ltwoinner{\vec{n}_k \cdot \vec{\nabla}\times \vec{v}}{\vec{u} \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} \vert \nonumber \\ & + \vert \Ltwoinner{\vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k}{\vec{v} \cdot \vec{\nabla}\times \vec{u}}{\Omega} \vert + \vert \Ltwoinner{\vec{n}_k \cdot \vec{\nabla}\times \vec{u}}{\vec{v} \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} \vert \nonumber \\ & + \vert \Ltwoinner{\vec{u} \cdot \vec{\nabla}\times \vec{n}_k}{\vec{v} \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} \vert \Big) + \left \vert \int_{\Omega} \lambda_k \ltwoinner{\vec{u}}{\vec{v}} \diff{V} \right \vert \label{triangleauvinequality}, \end{align} by the triangle inequality. For simplicity, consider the components of the sum above. Note that \begin{equation} \vert \Ltwoinner{\vec{\nabla}\cdot \vec{u}}{\vec{\nabla}\cdot \vec{v}}{\Omega} \vert \leq \Ltwonorm{\vec{\nabla}\cdot \vec{u}}{\Omega} \Ltwonorm{\vec{\nabla}\cdot \vec{v}}{\Omega} \leq \Hdcnorm{ \vec{u}}{\Omega} \Hdcnorm{ \vec{v}}{\Omega}. 
\label{divergenceauvcontineq} \end{equation} Considering $\vert \Ltwoinnerndim{\vec{Z}(\vec{n}_k) \vec{\nabla}\times \vec{u}}{\vec{\nabla}\times \vec{v}}{\Omega}{3} \vert$, using \eqref{matrixcurlterm} implies that \begin{align} \vert \Ltwoinnerndim{\vec{Z}(\vec{n}_k) \vec{\nabla}\times \vec{u}}{\vec{\nabla}\times \vec{v}}{\Omega}{3} \vert &\leq \Ltwonormndim{\vec{\nabla}\times \vec{v}}{\Omega}{3} \Ltwonormndim{\vec{Z}(\vec{n}_k)\vec{\nabla}\times \vec{u}}{\Omega}{3} \nonumber \\ &\leq \Lambda \Hdcnorm{\vec{v}}{\Omega} \Ltwonormndim{\vec{\nabla}\times \vec{u}}{\Omega}{3} \nonumber \\ & \leq \Lambda \Hdcnorm{\vec{v}}{\Omega} \Hdcnorm{\vec{u}}{\Omega}. \label{Amatrixauvcontineq} \end{align} By the Cauchy-Schwarz inequality, \begin{align} \vert \Ltwoinner{\vec{u} \cdot \vec{\nabla}\times \vec{v}}{\vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} \vert &= \vert \Ltwoinner{(\vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k)\vec{u}}{\vec{\nabla}\times \vec{v}}{\Omega} \vert \nonumber \\ &\leq \Ltwonorm{(\vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k)\vec{u}}{\Omega} \Ltwonorm{\vec{\nabla}\times \vec{v}}{\Omega} \label{csk3kappafirst}. \end{align} Note that \begin{equation*} (\vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k)^2 \leq \ltwonorm{\vec{n}_k}^2 \ltwonorm{\vec{\nabla}\times \vec{n}_k}^2 \leq \beta \ltwonorm{\vec{\nabla}\times \vec{n}_k}^2. \end{equation*} Furthermore, since $\vec{\nabla}\times \vec{n}_k$ is a vector of piecewise polynomials, $\ltwonorm{\vec{\nabla}\times \vec{n}_k}^2$ is bounded. 
Letting $\displaystyle{C_{\text{sup}} = \sup_{\vec{x} \in \Omega} \ltwonorm{\vec{\nabla}\times \vec{n}_k}^2}$, \begin{align*} \Ltwonormndim{(\vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k)\vec{u}}{\Omega}{3} &= \left ( \int_{\Omega} (\vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k)^2 (\vec{u} \cdot \vec{u}) \diff{V} \right)^{1/2} \nonumber \\ & \leq \sqrt{\beta} \left ( \int_{\Omega} \ltwonorm{\vec{\nabla}\times \vec{n}_k}^2 (\vec{u} \cdot \vec{u}) \diff{V} \right)^{1/2} \nonumber \\ & \leq \sqrt{\beta C_{\text{sup}}} \Ltwonormndim{\vec{u}}{\Omega}{3}. \end{align*} Hence, \begin{equation} \vert \Ltwoinner{\vec{u} \cdot \vec{\nabla}\times \vec{v}}{\vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} \vert \leq \sqrt{\beta C_{\text{sup}}} \Hdcnorm{\vec{u}}{\Omega}\Hdcnorm{\vec{v}}{\Omega} \label{ucurlvinftyinequal}. \end{equation} The next summand from \eqref{triangleauvinequality} is \begin{equation*} \label{thirdk3kappainner} \vert\Ltwoinner{\vec{n}_k \cdot \vec{\nabla}\times \vec{v}}{\vec{u} \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} \vert \leq \Ltwonorm{\vec{n}_k \cdot \vec{\nabla}\times \vec{v}}{\Omega} \Ltwonorm{\vec{u} \cdot \vec{\nabla}\times \vec{n}_k}{\Omega}, \end{equation*} with \begin{align*} \Ltwonorm{\vec{n}_k \cdot \vec{\nabla}\times \vec{v}}{\Omega}& \leq \sqrt{\beta} \Hdcnorm{\vec{v}}{\Omega}. \end{align*} Furthermore, \begin{equation*} \Ltwonorm{\vec{u} \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} \leq \sqrt{C_{\sup}} \Ltwonormndim{\vec{u}}{\Omega}{3}. \end{equation*} Therefore, \begin{equation} \label{secondk3kappacontineq} \vert\Ltwoinner{\vec{n}_k \cdot \vec{\nabla}\times \vec{v}}{\vec{u} \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} \vert \leq \sqrt{\beta C_{\text{sup}}} \Hdcnorm{\vec{v}}{\Omega} \Hdcnorm{\vec{u}}{\Omega}. 
\end{equation} Now consider $\vert \Ltwoinner{\vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k}{\vec{v} \cdot \vec{\nabla}\times \vec{u}}{\Omega} \vert$ and note that this inner product is the same as that in \eqref{csk3kappafirst} with the roles of $\vec{u}$ and $\vec{v}$ reversed. Since $\vec{u}$ and $\vec{v}$ are from the same space, the steps for deriving \eqref{ucurlvinftyinequal} are equally valid. Thus, \begin{equation} \vert \Ltwoinner{\vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k}{\vec{v} \cdot \vec{\nabla}\times \vec{u}}{\Omega} \vert \leq \sqrt{\beta C_{\text{sup}}} \Hdcnorm{\vec{u}}{\Omega}\Hdcnorm{\vec{v}}{\Omega} \label{thirdk3kappacontineq}. \end{equation} Similarly, the inequality for $\vert \Ltwoinner{\vec{n}_k \cdot \vec{\nabla}\times \vec{u}}{\vec{v} \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} \vert$ is derived in an analogous manner to that of \eqref{secondk3kappacontineq}. Thus, \begin{equation} \vert \Ltwoinner{\vec{n}_k \cdot \vec{\nabla}\times \vec{u}}{\vec{v} \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} \vert \leq \sqrt{\beta C_{\text{sup}}} \Hdcnorm{\vec{v}}{\Omega} \Hdcnorm{\vec{u}}{\Omega}. \label{forthk3kappacontineq} \end{equation} Next, examine \begin{equation*} \vert \Ltwoinner{\vec{u} \cdot \vec{\nabla}\times \vec{n}_k}{\vec{v} \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} \vert \leq \Ltwonorm{\vec{u} \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} \Ltwonorm{\vec{v} \cdot \vec{\nabla}\times \vec{n}_k}{\Omega}. \end{equation*} Since $\vec{\nabla}\times \vec{n}_k$ is a vector of piecewise polynomials, \begin{align*} \Ltwonorm{\vec{u} \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} \leq \sqrt{C_{\sup}} \Ltwonormndim{\vec{u}}{\Omega}{3},\\ \Ltwonorm{\vec{v} \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} \leq \sqrt{C_{\sup}} \Ltwonormndim{\vec{v}}{\Omega}{3}. 
\end{align*} Thus, \begin{equation} \vert \Ltwoinner{\vec{u} \cdot \vec{\nabla}\times \vec{n}_k}{\vec{v} \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} \vert \leq C_{\text{sup}} \Hdcnorm{\vec{u}}{\Omega}\Hdcnorm{\vec{v}}{\Omega} \label{k3kappafiftcontineq}. \end{equation} Finally, since $\lambda_k$ is piecewise constant, $\lambda_k^2$ is bounded. Letting $\displaystyle{C_{\lambda} = \sup_{\vec{x} \in \Omega} \lambda_k^2}$, \begin{align} \left \vert \int_{\Omega} \lambda_k \ltwoinner{\vec{u}}{\vec{v}} \diff{V} \right \vert &\leq \Ltwonormndim{\lambda_k \vec{u}}{\Omega}{3} \Ltwonorm{\vec{v}}{\Omega} \nonumber \\ & \leq \sqrt{C_{\lambda}} \Ltwonormndim{\vec{u}}{\Omega}{3} \Hdcnorm{\vec{v}}{\Omega} \nonumber \\ &\leq \sqrt{C_{\lambda}} \Hdcnorm{\vec{u}}{\Omega} \Hdcnorm{\vec{v}}{\Omega} \label{lambdaimbedineq}. \end{align} Combining \eqref{divergenceauvcontineq}, \eqref{Amatrixauvcontineq}, and \eqref{ucurlvinftyinequal}-\eqref{lambdaimbedineq}, \begin{align*} \vert a(\vec{u},\vec{v}) \vert &\leq \Big( K_1 + K_3 \Lambda + \vert K_2-K_3 \vert \big( 4 \sqrt{\beta C_{\text{sup}}} + C_{\text{sup}} \big) + \sqrt{C_{\lambda}} \Big) \Hdcnorm{\vec{u}}{\Omega} \Hdcnorm{\vec{v}}{\Omega}. \end{align*} \end{proof} \subsection{Discrete Coercivity} In this section, two proofs of the coercivity of $a(\vec{u}, \vec{v})$ are given. The first is for the case when $\kappa=1$. The second addresses coercivity when $\kappa$ lies in a neighborhood of unity. For both proofs, we use the additional assumption that the approximation is close enough to the solution that the Lagrange multiplier, $\lambda_k$, is pointwise non-negative. This assumption is reasonable since, at the solution $\vec{n}_{*}$, $\lambda_{*}$ may be chosen arbitrarily.
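Before turning to coercivity, note that the continuity constant assembled at the end of Lemma \ref{bilinearformcontinuity} is fully explicit in the data. The following sketch (with hypothetical values for the Frank constants and for the bounds $\Lambda$, $\beta$, $C_{\text{sup}}$, and $C_{\lambda}$) simply evaluates it.

```python
import math

def continuity_constant(K1, K2, K3, Lam, beta, C_sup, C_lam):
    """Constant M in the bound |a(u,v)| <= M ||u||_DC ||v||_DC:
    M = K1 + K3*Lam + |K2 - K3|*(4*sqrt(beta*C_sup) + C_sup) + sqrt(C_lam)."""
    return (K1 + K3 * Lam
            + abs(K2 - K3) * (4 * math.sqrt(beta * C_sup) + C_sup)
            + math.sqrt(C_lam))

# Hypothetical data with K2/K3 = 1.2, so the |K2 - K3| terms are active.
M = continuity_constant(K1=1.0, K2=1.2, K3=1.0, Lam=1.2,
                        beta=1.0, C_sup=4.0, C_lam=0.25)
```

Such an evaluation is only a sanity check on the algebra; the bounds $\Lambda$, $C_{\text{sup}}$, and $C_{\lambda}$ depend on the current iterate $\vec{n}_k$, $\lambda_k$ and must be estimated per iteration.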
\begin{lemma} \label{coercivityauv} Under Assumption \ref{secass} and the assumption that $\lambda_k$ is pointwise non-negative, if $\kappa =1$, there exists an $\alpha_0 >0$ such that $\alpha_0 \Hdcnorm{\vec{v}}{\Omega}^2 \leq a(\vec{v}, \vec{v})$ for all $\vec{v} \in V_h$. \end{lemma} \begin{proof} Note that since $\kappa=1$, $(K_2-K_3) = 0$, and \begin{align*} a(\vec{v}, \vec{v}) = &K_1\Ltwoinner{\vec{\nabla}\cdot \vec{v}}{\vec{\nabla}\cdot \vec{v}}{\Omega} + K_3 \Ltwoinnerndim{\vec{\nabla}\times \vec{v}}{\vec{\nabla}\times \vec{v}}{\Omega}{3} + \int_{\Omega} \lambda_k \ltwoinner{\vec{v}}{\vec{v}} \diff{V}. \end{align*} Thus, it remains to show that there exists $\alpha_0 >0$ such that \begin{align*} \alpha_0 \Hdcnorm{\vec{v}}{\Omega}^2 \leq &K_1\Ltwoinner{\vec{\nabla}\cdot \vec{v}}{\vec{\nabla}\cdot \vec{v}}{\Omega} + K_3 \Ltwoinnerndim{\vec{\nabla}\times \vec{v}}{\vec{\nabla}\times \vec{v}}{\Omega}{3} + \int_{\Omega} \lambda_k \ltwoinner{\vec{v}}{\vec{v}} \diff{V}. \end{align*} From Remark 2.7 in \cite{Girault1}, there exists $C_3 >0$ such that \begin{equation*} \Ltwonormndim{\nabla \vec{v}}{\Omega}{3}^2 \leq C_3^2 \big(\Ltwonorm{\vec{\nabla}\cdot \vec{v}}{\Omega}^2 + \Ltwonormndim{\vec{\nabla}\times \vec{v}}{\Omega}{3}^2 \big). \end{equation*} Moreover, recall that $\Ltwonormndim{\vec{v}}{\Omega}{3}^2 \leq C_4 \Ltwonormndim{\nabla \vec{v}}{\Omega}{3}^2$ by the classical Poincar\'{e}-Friedrichs' inequality. Hence, for $C = C_4C_3^2>0$, \begin{equation} \label{PFineqforDCuse} \Ltwonormndim{\vec{v}}{\Omega}{3}^2 \leq C \big(\Ltwonorm{\vec{\nabla}\cdot \vec{v}}{\Omega}^2 + \Ltwonormndim{\vec{\nabla}\times \vec{v}}{\Omega}{3}^2 \big). 
\end{equation} Since $\Hdcnorm{\vec{v}}{\Omega}^2 = \Ltwonormndim{\vec{v}}{\Omega}{3}^2 + \Ltwonorm{\vec{\nabla}\cdot \vec{v}}{\Omega}^2 + \Ltwonormndim{\vec{\nabla}\times \vec{v}}{\Omega}{3}^2$, we have \begin{equation*} \Hdcnorm{\vec{v}}{\Omega}^2 \leq (C+1) \big(\Ltwonorm{\vec{\nabla}\cdot \vec{v}}{\Omega}^2 + \Ltwonormndim{\vec{\nabla}\times \vec{v}}{\Omega}{3}^2 \big). \end{equation*} Letting $K = \min(K_1, K_3) >0$ and $\alpha_0 = K/(C+1)$, it follows that \begin{equation} \alpha_0 \Hdcnorm{\vec{v}}{\Omega}^2 \leq K \big(\Ltwonorm{\vec{\nabla}\cdot \vec{v}}{\Omega}^2 + \Ltwonormndim{\vec{\nabla}\times \vec{v}}{\Omega}{3}^2 \big) \leq K_1 \Ltwonorm{\vec{\nabla}\cdot \vec{v}}{\Omega}^2 + K_3\Ltwonormndim{\vec{\nabla}\times \vec{v}}{\Omega}{3}^2. \label{divcurlcoercivity} \end{equation} Finally, it was assumed that $\lambda_k$ is pointwise non-negative, implying \begin{equation*} \int_{\Omega} \lambda_k \ltwoinner{\vec{v}}{\vec{v}} \diff{V} \geq 0. \end{equation*} Therefore, \eqref{divcurlcoercivity} implies that \begin{equation*} \alpha_0 \Hdcnorm{\vec{v}}{\Omega}^2 \leq K_1\Ltwoinner{\vec{\nabla}\cdot \vec{v}}{\vec{\nabla}\cdot \vec{v}}{\Omega} + K_3 \Ltwoinnerndim{\vec{\nabla}\times \vec{v}}{\vec{\nabla}\times \vec{v}}{\Omega}{3} + \int_{\Omega} \lambda_k \ltwoinner{\vec{v}}{\vec{v}} \diff{V}. \end{equation*} \end{proof} The assumption that $\kappa=1$ is a common modeling approach. In fact, this supposition represents a weaker constraint than the one-constant approximation utilized in many models, cf. \cite{Ramage1, Liu1, Stewart1, Cohen1}. However, it is possible to loosen the restriction that $\kappa = 1$ and still maintain the coercivity of $a(\vec{u},\vec{v})$ via a small-data assumption on $\kappa$. That is, we assume that $\kappa$ varies within a certain, possibly small, range of unity.
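The coercivity constant produced in the proof of Lemma \ref{coercivityauv} is likewise explicit. A brief sketch follows (the values of $K_1$, $K_3$ and of the div-curl and Poincar\'{e}-Friedrichs constants $C_3$, $C_4$ are hypothetical).

```python
def coercivity_constant(K1, K3, C3, C4):
    """alpha_0 = min(K1, K3) / (C + 1), with C = C4 * C3**2, as in the
    kappa = 1 coercivity argument."""
    C = C4 * C3 ** 2
    return min(K1, K3) / (C + 1)

# Hypothetical constants: C = 1.0 * 2.0**2 = 4, so alpha_0 = 0.5 / 5.
alpha0 = coercivity_constant(K1=1.0, K3=0.5, C3=2.0, C4=1.0)
```

The ratio of the continuity constant to $\alpha_0$ controls, for instance, the quasi-optimality constant in C\'{e}a-type estimates, so it is useful to know both explicitly.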
Small data assumptions are common, for instance, in the study of solutions to the Navier-Stokes' equations \cite{Fujita1, Leray1, Marusic-Paloka1}, where bounds are imposed on certain norms of the initial data in order to demonstrate existence and uniqueness of solutions. \begin{lemma}[Small Data] \label{coercivitysmalldata} Under Assumption \ref{secass} and the assumption that $\lambda_k$ is pointwise non-negative, there exist $\epsilon_1, \epsilon_2 > 0$, dependent on $\beta=\max \ltwonorm{\vec{n}}^2$, such that if $\kappa \in (1-\epsilon_2, 1+\epsilon_1)$, then $a(\vec{u},\vec{v})$ is coercive. \end{lemma} \begin{proof} Since $\vec{Z}(\vec{n}_k)$ is USPD by assumption, \begin{align*} \eta K_3 \Ltwoinnerndim{\vec{\nabla}\times \vec{v}}{\vec{\nabla}\times \vec{v}}{\Omega}{3} \leq K_3\Ltwoinnerndim{\vec{Z}(\vec{n}_k) \vec{\nabla}\times \vec{v}}{\vec{\nabla}\times \vec{v}}{\Omega}{3},\label{USPDinnerineq} \end{align*} where $\eta$ is the relevant lower bound from Lemma \ref{USPDlemma}. Defining $K' = \min (K_1, \eta K_3)>0$ and $\alpha_1 = K'/(C+1)$, where $C = C_4C_3^2$ is the constant defined in \eqref{PFineqforDCuse}, we obtain \begin{equation*} \alpha_1 \Hdcnorm{\vec{v}}{\Omega}^2 \leq K_1 \Ltwoinner{\vec{\nabla}\cdot \vec{v}}{\vec{\nabla}\cdot \vec{v}}{\Omega} + \eta K_3 \Ltwoinnerndim{\vec{\nabla}\times \vec{v}}{\vec{\nabla}\times \vec{v}}{\Omega}{3}. \end{equation*} Thus, using the assumption that $\lambda_k$ is pointwise non-negative, \begin{equation} \alpha_1 \Hdcnorm{\vec{v}}{\Omega}^2 \leq K_1 \Ltwoinner{\vec{\nabla}\cdot \vec{v}}{\vec{\nabla}\cdot \vec{v}}{\Omega} + K_3 \Ltwoinnerndim{\vec{Z}(\vec{n}_k) \vec{\nabla}\times \vec{v}}{\vec{\nabla}\times \vec{v}}{\Omega}{3}+\int_{\Omega} \lambda_k \ltwoinner{\vec{v}}{\vec{v}} \diff{V}. \label{partialcoercivitysmalldata} \end{equation} It should be noted that the constant $\eta$ may depend on $\kappa$. Thus, the following three cases are considered.
\begin{caseof} \case{$\kappa=1+\epsilon_1$, for $\epsilon_1 >0$.}{If this case holds, then $\eta=1$. Hence, $\alpha_1$, defined for \eqref{partialcoercivitysmalldata}, is independent of $\kappa$. Since $K_2-K_3 = K_3(\kappa-1)$, the discrete bilinear form of \eqref{auvform} becomes \begin{align} a(\vec{v}, \vec{v}) =& K_1 \Ltwoinner{\vec{\nabla}\cdot \vec{v}}{\vec{\nabla}\cdot \vec{v}}{\Omega} + K_3 \Ltwoinnerndim{\vec{Z}(\vec{n}_k) \vec{\nabla}\times \vec{v}}{\vec{\nabla}\times \vec{v}}{\Omega}{3} \nonumber \\ &+ \epsilon_1 K_3 \Big(2 \Ltwoinner{\vec{v} \cdot \vec{\nabla}\times \vec{v}}{\vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} + 2\Ltwoinner{\vec{n}_k \cdot \vec{\nabla}\times \vec{v}}{\vec{v} \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} \nonumber \\ &+\Ltwoinner{\vec{v} \cdot \vec{\nabla}\times \vec{n}_k}{\vec{v} \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} \Big) + \int_{\Omega} \lambda_k \ltwoinner{\vec{v}}{\vec{v}} \diff{V} \label{avvsmalldata1}. \end{align} Observe that from \eqref{partialcoercivitysmalldata}, \begin{align} \alpha_1 \Hdcnorm{\vec{v}}{\Omega}^2 \leq& K_1 \Ltwoinner{\vec{\nabla}\cdot \vec{v}}{\vec{\nabla}\cdot \vec{v}}{\Omega} + K_3 \Ltwoinnerndim{\vec{Z}(\vec{n}_k) \vec{\nabla}\times \vec{v}}{\vec{\nabla}\times \vec{v}}{\Omega}{3}+\int_{\Omega} \lambda_k \ltwoinner{\vec{v}}{\vec{v}} \diff{V} \nonumber \\ &+\epsilon_1 K_3 \Ltwoinner{\vec{v} \cdot \vec{\nabla}\times \vec{n}_k}{\vec{v} \cdot \vec{\nabla}\times \vec{n}_k}{\Omega}. 
\label{partialcoercivitysmalldatacase1} \end{align} Consider the magnitude of the terms in \eqref{avvsmalldata1} not bounded from below in \eqref{partialcoercivitysmalldatacase1}, denoted as $\mathcal{G}(\vec{v}, \vec{v})$, \begin{align*} \vert \mathcal{G}(\vec{v}, \vec{v}) \vert &= \vert 2\epsilon_1 K_3 \big(\Ltwoinner{\vec{v} \cdot \vec{\nabla}\times \vec{v}}{\vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k}{\Omega}+\Ltwoinner{\vec{n}_k \cdot \vec{\nabla}\times \vec{v}}{\vec{v} \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} \big) \vert \nonumber \\ &\leq 2\epsilon_1 K_3 \big( \vert \Ltwoinner{\vec{v} \cdot \vec{\nabla}\times \vec{v}}{\vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k}{\Omega}\vert + \Ltwonorm{\vec{n}_k \cdot \vec{\nabla}\times \vec{v}}{\Omega} \Ltwonorm{\vec{v} \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} \big). \end{align*} Using bounds derived in the proof of Lemma \ref{bilinearformcontinuity}, \begin{align*} \vert \mathcal{G}(\vec{v}, \vec{v}) \vert \leq& 4\epsilon_1 K_3\sqrt{\beta C_{\text{sup}}} \Hdcnorm{\vec{v}}{\Omega}^2. \end{align*} Denoting $\alpha_3 =4 K_3\sqrt{\beta C_{\text{sup}}}$, then \begin{equation*} \vert \mathcal{G}(\vec{v}, \vec{v}) \vert \leq \epsilon_1 \alpha_3 \Hdcnorm{\vec{v}}{\Omega}^2. \end{equation*} Utilizing \eqref{partialcoercivitysmalldatacase1}, \begin{equation*} a(\vec{v}, \vec{v}) \geq \alpha_1 \Hdcnorm{\vec{v}}{\Omega}^2 - \epsilon_1 \alpha_3 \Hdcnorm{\vec{v}}{\Omega}^2 = (\alpha_1-\epsilon_1 \alpha_3) \Hdcnorm{\vec{v}}{\Omega}^2. \end{equation*} It is, thus, sufficient to have $\epsilon_1 < \alpha_1/\alpha_3$, guaranteeing that $(\alpha_1-\epsilon_1 \alpha_3)>0$.} \case{$\kappa=1-\epsilon_2>0$, for $\epsilon_2>0$, and $K_1<K_3$.} {Since $\kappa<1$, $\eta = 1+(\kappa-1)\beta=(1-\epsilon_2 \beta)$. For $K_1 < K_3$, there exists an $\epsilon_2$ small enough, such that $K_1 < (1-\epsilon_2 \beta)K_3$. 
This implies that, for small enough $\epsilon_2$, \begin{equation*} \alpha_1 = \frac{\min(K_1, (1-\epsilon_2 \beta)K_3)}{(C+1)} = \frac{K_1}{(C+1)}. \end{equation*} Therefore, $\alpha_1$ is again independent of $\kappa$. Since $K_2-K_3 = K_3 (\kappa-1)$, the discrete bilinear form of \eqref{auvform} becomes \begin{align} a(\vec{v}, \vec{v}) =& K_1 \Ltwoinner{\vec{\nabla}\cdot \vec{v}}{\vec{\nabla}\cdot \vec{v}}{\Omega} + K_3 \Ltwoinnerndim{\vec{Z}(\vec{n}_k) \vec{\nabla}\times \vec{v}}{\vec{\nabla}\times \vec{v}}{\Omega}{3} \nonumber \\ &- \epsilon_2 K_3 \Big(2 \Ltwoinner{\vec{v} \cdot \vec{\nabla}\times \vec{v}}{\vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k}{\Omega}+2\Ltwoinner{\vec{n}_k \cdot \vec{\nabla}\times \vec{v}}{\vec{v} \cdot \vec{\nabla}\times \vec{n}_k}{\Omega}\nonumber \\ &+\Ltwoinner{\vec{v} \cdot \vec{\nabla}\times \vec{n}_k}{\vec{v} \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} \Big) + \int_{\Omega} \lambda_k \ltwoinner{\vec{v}}{\vec{v}} \diff{V} \label{avvsmalldatacase2}. \end{align} The terms of \eqref{avvsmalldatacase2}, not already bounded from below in \eqref{partialcoercivitysmalldata}, are bounded as \begin{align*} \vert \mathcal{G}(\vec{v}, \vec{v}) \vert &= \vert \epsilon_2 K_3 \big(2 \Ltwoinner{\vec{v} \cdot \vec{\nabla}\times \vec{v}}{\vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} \nonumber \\ &\qquad +2\Ltwoinner{\vec{n}_k \cdot \vec{\nabla}\times \vec{v}}{\vec{v} \cdot \vec{\nabla}\times \vec{n}_k}{\Omega}+\Ltwoinner{\vec{v} \cdot \vec{\nabla}\times \vec{n}_k}{\vec{v} \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} \big) \vert& \nonumber \\ &\leq \epsilon_2 K_3 \big( 2 \vert \Ltwoinner{\vec{v} \cdot \vec{\nabla}\times \vec{v}}{\vec{n}_k \cdot \vec{\nabla}\times \vec{n}_k}{\Omega}\vert \nonumber \\ &\qquad + 2 \Ltwonorm{\vec{n}_k \cdot \vec{\nabla}\times \vec{v}}{\Omega} \Ltwonorm{\vec{v} \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} + \Ltwonorm{\vec{v} \cdot \vec{\nabla}\times \vec{n}_k}{\Omega} \Ltwonorm{\vec{v} \cdot \vec{\nabla}\times 
\vec{n}_k}{\Omega} \big). \end{align*} Again using the bounds derived in the proof of Lemma \ref{bilinearformcontinuity}, \begin{align*} \vert \mathcal{G}(\vec{v}, \vec{v}) \vert \leq \epsilon_2 K_3 \big(4 \sqrt{\beta C_{\text{sup}}} + C_{\text{sup}} \big) \Hdcnorm{\vec{v}}{\Omega}^2. \end{align*} Denoting $\alpha_4 =K_3 \big(4 \sqrt{\beta C_{\text{sup}}} + C_{\text{sup}} \big)$, then \begin{equation*} \vert \mathcal{G}(\vec{v}, \vec{v}) \vert \leq \epsilon_2 \alpha_4 \Hdcnorm{\vec{v}}{\Omega}^2. \end{equation*} Using \eqref{partialcoercivitysmalldata}, \begin{equation*} a(\vec{v}, \vec{v}) \geq \alpha_1 \Hdcnorm{\vec{v}}{\Omega}^2 - \epsilon_2 \alpha_4 \Hdcnorm{\vec{v}}{\Omega}^2 = (\alpha_1-\epsilon_2 \alpha_4) \Hdcnorm{\vec{v}}{\Omega}^2. \end{equation*} Thus, $\epsilon_2$ may need to be even smaller: requiring $\epsilon_2 < \alpha_1/\alpha_4$ guarantees that $(\alpha_1-\epsilon_2 \alpha_4)>0$. In the case that $\kappa<1$, the additional restriction $\beta < \frac{1}{1-\kappa}$ is necessary for $\vec{Z}$ to be USPD, which requires that $\epsilon_2\beta < 1$. Therefore, for any fixed choice of $\beta$, $\epsilon_2$ must also be taken small enough to satisfy this condition. Hence, \begin{equation*} \epsilon_2 < \min \left (\frac{\alpha_1}{\alpha_4}, \frac{K_3 - K_1}{\beta K_3}, \frac{1}{\beta} \right). \end{equation*} } \case{$\kappa=1-\epsilon_2>0$, for $\epsilon_2>0$, and $K_3 \leq K_1$.}{Here, again, $\eta = (1-\epsilon_2 \beta)$. For this case, it is clear that $(1-\epsilon_2 \beta)K_3 < K_1$. Thus, \begin{equation*} \alpha_1 = \frac{(1-\epsilon_2 \beta) K_3}{(C+1)}. \end{equation*} Using the same $\alpha_4$ as in the previous case and similar arguments, \begin{equation*} a(\vec{v}, \vec{v}) \geq \alpha_1 \Hdcnorm{\vec{v}}{\Omega}^2 - \epsilon_2 \alpha_4 \Hdcnorm{\vec{v}}{\Omega}^2 = (\alpha_1-\epsilon_2 \alpha_4) \Hdcnorm{\vec{v}}{\Omega}^2.
\end{equation*} Hence, in order for $(\alpha_1-\epsilon_2 \alpha_4)>0$ to hold, it is necessary that \begin{equation*} \epsilon_2 < \frac{K_3}{K_3 \beta + \alpha_4 (C+1)}. \end{equation*} Finally, $\epsilon_2$ must still be chosen sufficiently small with respect to $\beta$ such that $\epsilon_2 \beta < 1$, as in Case 2. Therefore, \begin{equation*} \epsilon_2 < \min \left (\frac{K_3}{K_3 \beta + \alpha_4 (C+1)}, \frac{1}{\beta} \right). \end{equation*} } \end{caseof} Thus, if $\epsilon_1$, $\epsilon_2>0$ satisfy the applicable conditions in the cases above, then at each Newton iteration, $a(\vec{u}, \vec{v})$ is coercive for $\kappa \in (1 - \epsilon_2, 1+ \epsilon_1)$. \end{proof} \subsection{Discrete Weak Coercivity} \label{DiscreteInfSup} For this section, we consider the weak coercivity of $b(\cdot, \cdot)$, under Assumption \ref{secass}, with the restriction that $\Omega$ is a polyhedral domain. That is, we show that there exists a $\zeta > 0$ such that \begin{equation} \zeta \Ltwonorm{\gamma}{\Omega} \leq \sup_{\vec{v} \in V_h} \frac{\vert b(\vec{v}, \gamma) \vert}{\Hdcnorm{\vec{v}}{\Omega}}, \qquad \forall \gamma \in \Pi_h. \label{discreteLBBcondition} \end{equation} Before proving the weak coercivity result for $V_h$ and $\Pi_h$, we prove two critical lemmas. Let $N=2,3$ denote the dimension of $\Omega$. \begin{lemma} \label{maxbubblelemma} For the bubble functions, $b_T$, satisfying \eqref{bubbleprop1} and \eqref{bubbleprop2} on a rectangle $T$, $\displaystyle{\sup_{\vec{x} \in T} b_T = C_d / \vert T \vert}$, where $C_d=(\frac{3}{2})^N$. \end{lemma} \begin{proof} For $N=2$, without loss of generality, assume that $T$ is a rectangle at the origin given by $[0,a] \times [0, b]$. Let $\bar{b}_T = xy(a-x)(b-y)$ on $T$ and zero elsewhere. Note that $\bar{b}_T$ is the bubble function on $T$ that has not been normalized such that \eqref{bubbleprop1} holds. 
Integrating over $T$ yields \begin{align} \int_T \bar{b}_T \diff{V} =\frac{\vert T \vert^3}{36} \label{bubbleintegral}. \end{align} Computing the maximum value of $\bar{b}_T$ shows that $\displaystyle{\sup_{\vec{x} \in T} \bar{b}_T = \frac{\vert T \vert^2}{16}}$. Normalizing $\bar{b}_T$, using \eqref{bubbleintegral}, to define $b_T$ implies that \begin{equation*} \sup_{\vec{x} \in T} b_T = \frac{\vert T \vert^2 / 16}{\vert T \vert^3 / 36} = \frac{9}{4 \vert T \vert}. \end{equation*} The case for $N=3$ is derived analogously for $T$, the rectangular box $[0,a]\times[0,b]\times[0,c]$, and $\bar{b}_T = xyz(a-x)(b-y)(c-z)$. The corresponding $b_T$ satisfies \begin{equation*} \sup_{\vec{x} \in T} b_T = \frac{\vert T \vert^2 / 64}{\vert T \vert^3 / 216} = \frac{27}{8 \vert T \vert}. \end{equation*} \end{proof} Following the notation in \cite{Brenner1}, consider two finite elements $(T, \mathcal{P}, \mathcal{N})$ and $(\hat{T}, \hat{\mathcal{P}}, \hat{\mathcal{N}})$, where $T$ and $\hat{T}$ are element domains, $\mathcal{P}$ and $\hat{\mathcal{P}}$ are the respective sets of basis functions, and $\mathcal{N}$ and $\hat{\mathcal{N}}$ are the associated dual bases. We say that $(\hat{T}, \hat{\mathcal{P}}, \hat{\mathcal{N}})$ is affine equivalent to $(T, \mathcal{P}, \mathcal{N})$ if there exists an affine mapping, $G: T \to \hat{T}$, such that for $\vec{x} \in T$ \begin{equation*} G\vec{x} = \vec{x}_0 + M\vec{x}, \end{equation*} with non-singular matrix $M$, satisfying \begin{itemize} \item $G(T) = \hat{T}$ \item $G^* \hat{\mathcal{P}} = \mathcal{P}$ and \item $G_{*} \mathcal{N} = \hat{\mathcal{N}}$. \end{itemize} Here, the pullback $G^*$ is defined by $G^*(\hat{f}) := \hat{f} \circ G$, and the push-forward $G_{*}$ is defined by $(G_{*} N)(\hat{f}) := N(G^*(\hat{f}))$. 
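Before proceeding, note that the $N=2$ constants of Lemma \ref{maxbubblelemma} are easy to confirm numerically. The sketch below, on a hypothetical rectangle $T = [0,2]\times[0,3]$ with a simple composite midpoint rule (both the rectangle and the quadrature resolution are illustrative choices), reproduces $\int_T \bar{b}_T \diff{V} = \vert T \vert^3/36$, the peak value $\vert T \vert^2/16$, and $\sup_{T} b_T = 9/(4\vert T \vert)$.

```python
def midpoint_2d(f, a, b, n=400):
    """Composite midpoint quadrature on the rectangle [0, a] x [0, b]."""
    ha, hb = a / n, b / n
    return sum(f((i + 0.5) * ha, (j + 0.5) * hb)
               for i in range(n) for j in range(n)) * ha * hb

a, b = 2.0, 3.0                                  # hypothetical rectangle T
area = a * b                                     # |T|
bbar = lambda x, y: x * y * (a - x) * (b - y)    # non-normalized bubble

integral = midpoint_2d(bbar, a, b)               # approximately |T|^3 / 36 = 6
peak = bbar(a / 2, b / 2)                        # |T|^2 / 16 = 2.25
sup_bT = peak / integral                         # approximately 9 / (4 |T|) = 0.375
```

The same check applies verbatim to the $N=3$ box with a triple product of midpoint rules.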
\begin{lemma} \label{affinebubblelemma} Consider a rectangular reference element $(T, \mathcal{P}, \mathcal{N})$, where $\mathcal{P}$ is the basis of shape functions for $T$ associated with $V_h \times \Pi_h$, defined above. If, for all $\hat{T} \in \mathcal{T}_h$, $(\hat{T}, \hat{\mathcal{P}}, \hat{\mathcal{N}})$ is affine equivalent to $(T, \mathcal{P}, \mathcal{N})$, then $\displaystyle{\sup_{\hat{\vec{x}} \in \hat{T}} b_{\hat{T}} = C_d/ \vert \hat{T} \vert}$, where $b_{\hat{T}}$ is the normalized bubble function satisfying \eqref{bubbleprop1} and \eqref{bubbleprop2} on $\hat{T}$. \end{lemma} \begin{proof} Note that the non-normalized bubble function on $\hat{T}$, $\bar{b}_{\hat{T}}$, is given by \begin{equation*} \bar{b}_{\hat{T}} = b_T \circ G^{-1}, \end{equation*} where $b_T$ is the normalized bubble function on $T$. Therefore, the maximum value for $\bar{b}_{\hat{T}}$ corresponds to the maximum value for $b_T$, which, as shown in Lemma \ref{maxbubblelemma}, is $C_d/\vert T \vert$. Observe that \begin{align*} \int_{\hat{T}} \bar{b}_{\hat{T}} \diff{V} &= \int_T b_T \vert \det M \vert \diff{V} \nonumber \\ &= \vert \det M \vert, \end{align*} where $\det M$ denotes the determinant of the matrix $M$. Thus, $b_{\hat{T}}$ is given by dividing $\bar{b}_{\hat{T}}$ by $\vert \det M \vert$. Therefore, \begin{align*} \sup_{\hat{\vec{x}} \in \hat{T}} b_{\hat{T}} &= \frac{1}{\vert \det M \vert} \sup_{\vec{x} \in T} b_T \nonumber \\ &=\frac{C_d}{\vert \det M \vert \vert T \vert} \nonumber \\ &= \frac{C_d}{\vert \hat{T} \vert}. \end{align*} \end{proof} Where necessary, we will make use of the following second set of assumptions. \begin{assumption} \label{secass2} Let $\{ \mathcal{T}_h \}$ be a family of quadrilateral subdivisions of a polyhedral domain $\Omega$ satisfying \eqref{maxuniformity} and \eqref{quasiuniform}.
Moreover, assume that for each $T \in \mathcal{T}_h$, the element $(T, \mathcal{P}_T, \mathcal{N}_T)$ is affine equivalent to a rectangular reference element for all $h$. \end{assumption} Prior to considering the following lemma, recall that $\alpha$ and $\beta$ are the bounds on the length of $\vec{n}$ in \eqref{limitsonnlength}, $\rho$ is the quasi-uniform mesh parameter defined in \eqref{quasiuniform}, and $C_d$ is the constant derived in Lemma \ref{maxbubblelemma} depending on $N$, the dimension of $\Omega$. \begin{lemma} \label{bubblespacelemma} Under Assumptions \ref{secass} and \ref{secass2}, $V_h$ and $\Pi_h$ constitute a pair satisfying \eqref{discreteLBBcondition} with constant $\zeta = h \left[\frac{2 \alpha \rho^N}{9C_f C_{*} \sqrt{\beta C_d}} \right]$, for $C_f$ and $C_{*}$ defined below. \end{lemma} \begin{proof} Since $V_h \subset Q_l \times Q_l \times Q_l$, by \cite[Theorem 4.5.11]{Brenner1} there exists $C_{*}>0$ depending only on $\rho$ such that \begin{equation*} \Honenorm{\vec{v}}{\Omega} \leq C_{*} h^{-1} \Ltwonormndim{\vec{v}}{\Omega}{3}. \end{equation*} Furthermore, using the fact that $\Hdcnorm{\vec{v}}{\Omega} \leq C_f \Honenorm{\vec{v}}{\Omega}$, \begin{equation} \label{Cfreference} \sup_{\vec{v} \in V_h} \frac{\vert b(\vec{v}, \gamma) \vert}{\Hdcnorm{\vec{v}}{\Omega}} \geq \sup_{\vec{v} \in V_h} \frac{\vert b(\vec{v}, \gamma) \vert}{C_f \Honenorm{\vec{v}}{\Omega}} \geq \sup_{\vec{v} \in V_h} \frac{\vert b(\vec{v}, \gamma) \vert}{C_f C_{*} h^{-1} \Ltwonormndim{\vec{v}}{\Omega}{3}}. \end{equation} Therefore, \eqref{discreteLBBcondition} is reduced to finding $\zeta>0$ such that \begin{align*} \zeta \Ltwonorm{\gamma}{\Omega} \leq \sup_{\vec{v} \in V_h} \frac{\vert b(\vec{v}, \gamma) \vert}{C_f C_{*} h^{-1} \Ltwonormndim{\vec{v}}{\Omega}{3}}, \qquad \forall \gamma \in \Pi_h. 
\end{align*} Now consider constructing $\vec{v}_0$ on each $T \in \mathcal{T}_h$ by letting $a_T = \gamma |_T$, where this denotes the restriction of $\gamma$ to the element $T$, and defining \begin{equation*} \vec{v}_0 |_T = a_T b_T \vec{n}_k |_T. \end{equation*} Observe that, as defined, $\vec{v}_0 \in V_h$. Let $C_m = \max_{T \in \mathcal{T}_h} \vert T \vert$. Then, \begin{align} b(\vec{v}_0, \gamma) = \sum_{T \in \mathcal{T}_h} \int_T \gamma \ltwoinner{\vec{v}_0}{\vec{n}_k} &\geq \alpha \sum_{T \in \mathcal{T}_h} \gamma^2 \int_T b_T \diff{V} \nonumber \\ &= \alpha \sum_{T \in \mathcal{T}_h} \gamma^2 \geq \frac{\alpha}{C_m} \Ltwonorm{\gamma}{\Omega}^2. \label{discreteinfsupnumerator} \end{align} It is also the case that \begin{align*} \Ltwonormndim{\vec{v}_0}{\Omega}{3}^2 &= \sum_{T \in \mathcal{T}_h} \int_T a_T^2 b_T^2 \ltwoinner{\vec{n}_k}{\vec{n}_k} \diff{V} \leq \beta \sum_{T \in \mathcal{T}_h} \gamma^2 \int_T b_T^2 \diff{V}. \end{align*} Since the bubble functions are fixed, let \begin{align*} C_b = \max_{T \in \mathcal{T}_h} \int_T b_T^2 \diff{V}, \qquad C_T = \min_{T \in \mathcal{T}_h} \vert T \vert. \end{align*} Thus, \begin{align} \Ltwonormndim{\vec{v}_0}{\Omega}{3}^2 &\leq \beta C_b \sum_{T \in \mathcal{T}_h} \gamma^2 \leq \frac{\beta C_b}{C_T} \Ltwonorm{\gamma}{\Omega}^2.\label{discreteinfsupdenominator} \end{align} Therefore, combining \eqref{discreteinfsupnumerator} and \eqref{discreteinfsupdenominator}, \begin{align} \sup_{\vec{v} \in V_h} \frac{\int_{\Omega} \gamma \ltwoinner{\vec{v}}{\vec{n}_k} \diff{V}}{\Ltwonorm{\vec{v}}{\Omega}} &\geq \frac{\int_{\Omega} \gamma \ltwoinner{\vec{v}_0}{\vec{n}_k} \diff{V}}{\Ltwonorm{\vec{v}_0}{\Omega}}\nonumber \\ &\geq \frac{\frac{\alpha}{C_m} \Ltwonorm{\gamma}{\Omega}^2}{\sqrt{\frac{\beta C_b}{C_T}} \Ltwonorm{\gamma}{\Omega}} = \frac{\alpha \sqrt{C_T}}{C_m \sqrt{\beta C_b}} \Ltwonorm{\gamma}{\Omega} \label{meshdepinfsupconstant}. 
\end{align} Note that the final constant in \eqref{meshdepinfsupconstant} is mesh dependent. Let $N=2,3$ denote the dimension of $\Omega$. Observe that \begin{equation*} C_b \leq \max_{T \in \mathcal{T}_h} \sup_{\vec{x} \in T} b_T \int_T b_T \diff{V} = \max_{T \in \mathcal{T}_h} \sup_{\vec{x} \in T} b_T. \end{equation*} From Lemma \ref{affinebubblelemma}, for arbitrary $T \in \mathcal{T}_h$, \begin{equation*} \sup_{\vec{x} \in T} b_T = C_d/ \vert T \vert, \end{equation*} where $C_d$ depends only on the dimension of $\Omega$. Therefore, \begin{equation*} \max_{T \in \mathcal{T}_h} \sup_{\vec{x} \in T} b_T = \frac{C_d}{C_T}. \end{equation*} Hence, \begin{equation} \label{CBinequal} \frac{\sqrt{C_T}}{C_m \sqrt{C_b}} \geq \frac{C_T}{C_m \sqrt{C_d}}. \end{equation} Define the constants \begin{align*} &C_{2,1} = \frac{\pi}{4},& &C_{2,2} = \pi, & &\text{for } N = 2, \\ &C_{3,1} = \frac{\pi}{6}, & &C_{3,2} = \frac{3\pi}{4},& &\text{for } N=3. \end{align*} Using Properties \eqref{maxuniformity} and \eqref{quasiuniform} with the constants above, it is straightforward to show that \begin{align*} C_T &\geq C_{N,1} (\min \{ \text{diam } B_T : T \in \mathcal{T}_h \})^N \geq C_{N,1} \rho^N (h \text{diam } \Omega)^N, \\ C_m &\leq C_{N,2} (\max \{ \text{diam } T : T \in \mathcal{T}_h \})^N \leq C_{N,2} (h \text{diam } \Omega)^N. \end{align*} Therefore, \begin{align} \label{CTCMInequal} \frac{C_T}{C_m} \geq \frac{C_{N,1} \rho^N}{C_{N,2}}. \end{align} Utilizing \eqref{CBinequal} and \eqref{CTCMInequal} \begin{equation*} \frac{\alpha \sqrt{C_T}}{C_m \sqrt{\beta C_b}} \Ltwonorm{\gamma}{\Omega} \geq \frac{\alpha C_{N,1}\rho^N}{C_{N,2} \sqrt{\beta C_d}} \Ltwonorm{\gamma}{\Omega} \geq \frac{2\alpha \rho^N}{9\sqrt{\beta C_d}} \Ltwonorm{\gamma}{\Omega}, \end{equation*} where $C_d$ depends only on the dimension of $\Omega$. Hence, \eqref{discreteLBBcondition} is satisfied with constant $\zeta = h \left[\frac{2 \alpha \rho^N}{9C_f C_{*} \sqrt{\beta C_d}} \right]$. 
Thus, $V_h$ and $\Pi_h$ represent a pair of spaces on which $b(\cdot, \cdot)$ is weakly coercive. \end{proof} For $\vec{n}_k \in Q_p$, with $V_h \subset Q_m \times Q_m \times Q_m \oplus V_h^b$, as in \eqref{vspace}, and $l = \max (m, p+2)$, the above lemma yields an immediate corollary. \begin{corollary} \label{stabilitycorollary} Under Assumptions \ref{secass} and \ref{secass2}, $\vec{n}_k \in Q_p$ implies that $b(\cdot, \cdot)$ is weakly coercive for the pair $Q_l$--$P_0$. The special case that $\vec{n}_k \in P_0$ implies that $b(\cdot, \cdot)$ is weakly coercive on the pair $Q_{\max(m, 2)}$--$P_0$. \end{corollary} \begin{proof} Note that if $\vec{n}_k \in Q_p$, the bubble space defined above satisfies $V_h^b \subset Q_{p+2} \times Q_{p+2} \times Q_{p+2}$, since $b_T \in Q_2$. This implies that $V_h \subset Q_l \times Q_l \times Q_l$. Therefore, since $b(\cdot, \cdot)$ is weakly coercive for the pair $V_h$--$P_0$, weak coercivity must also hold for the pair $Q_l$--$P_0$. If $\vec{n}_k \in P_0$, then $V_h^b \subset Q_2 \times Q_2 \times Q_2$. Hence, $V_h \subset Q_{\max(m, 2)} \times Q_{\max(m, 2)} \times Q_{\max(m, 2)}$. The lemma above is equally valid for $\vec{n}_k \in P_0$. Therefore, $b(\cdot, \cdot)$ is weakly coercive on the pair $Q_{\max(m, 2)}$--$P_0$ for the given $\vec{n}_k$. \end{proof} In light of the lemmas discussed above, verification of weak coercivity allows for the formulation and proof of this paper's main theorem. \begin{theorem} \label{existuniquetheorem} Under Assumptions \ref{secass} and \ref{secass2}, existence of discrete solutions $(\delta \director_h, \delta \lambda_h)$ for each Newton linearization is guaranteed for the pair $V_h$--$\Pi_h$. In the case that $\kappa=1$ or that $\kappa$ satisfies the small data conditions of Lemma \ref{coercivitysmalldata}, such solutions are unique.
\end{theorem} \begin{proof} Following a mixed formulation approach based on \cite{Brenner1, Braess1, Boffi1}, Lemmas \ref{boundedlinearforms} and \ref{bilinearformcontinuity} guarantee the existence of a solution to the system given by \eqref{generalizedNewtoniterationweakform1} and \eqref{generalizedNewtoniterationweakform2}. In the event that $\kappa =1$ or that $\kappa$ satisfies the small data assumptions, Lemma \ref{coercivityauv} or \ref{coercivitysmalldata} coupled with Lemma \ref{bubblespacelemma} implies that the solution is also unique. \end{proof} \subsection{Error Analysis} In the previous section, the derived weak coercivity constant depends on the mesh parameter $h$. Therefore, as $h$ approaches zero so too does the weak coercivity constant for the pair $V_h$ and $\Pi_h$. However, the convergence of the scheme for the enriched Lagrangian finite-element spaces composing $V_h$ is only slightly compromised. In this section, we derive approximation error bounds for the discrete solution. Throughout this section, it is assumed that Assumptions \ref{secass} and \ref{secass2} apply. Let $(\vec{u}, q)$ represent a solution to the continuum variational system given by \eqref{contgeneralizedNewtoniterationweakform1} and \eqref{contgeneralizedNewtoniterationweakform2} and $(\vec{u}_h, q_h)$ be the unique solution to the discrete system in \eqref{generalizedNewtoniterationweakform1} and \eqref{generalizedNewtoniterationweakform2}. As above, denote the dimension of $\Omega$ by $N=2, 3$. \begin{lemma} Let $\Pi_h$ and $V_h$ be defined as in \eqref{pispace} and \eqref{vspace} with $m=2$. Under Assumptions \ref{secass} and \ref{secass2}, for $\vec{u} \in \Hn{3}{\Omega}^3$ and $q \in \Hone{\Omega}$ there exists $C_a>0$ such that \begin{equation} \label{errorapprox} \Hdcnorm{\vec{u}-\vec{u}_h}{\Omega} \leq C_a h \big (\Hnnorm{\vec{u}}{3} + \Honenorm{q}{\Omega} \big ). 
\end{equation} \end{lemma} \begin{proof} Let $\alpha_0$ denote the coercivity constant from either Lemma \ref{coercivityauv} or \ref{coercivitysmalldata}. Furthermore, let $\zeta$ denote the $h$-dependent weak coercivity constant derived in Lemma \ref{bubblespacelemma}. By Theorem 5.2.2 in \cite{Boffi1}, \begin{equation} \label{boffiinequality} \Hdcnorm{\vec{u} - \vec{u}_h}{\Omega} \leq \frac{4 C_A C_B}{\alpha_0 \zeta} E_u + \frac{C_B}{\alpha_0} E_{q}, \end{equation} where $C_A$ and $C_B$ are the continuity constants associated with $a(\cdot, \cdot)$ and $b(\cdot, \cdot)$, respectively, and \begin{align*} E_u = \inf_{\vec{v}_h \in V_h} \Hdcnorm{\vec{u} - \vec{v}_h}{\Omega}, & & E_{q} = \inf_{\gamma_h \in \Pi_h} \Ltwonorm{q - \gamma_h}{\Omega}. \end{align*} Note that \begin{equation*} \inf_{\vec{v}_h \in V_h} \Hdcnorm{\vec{u} - \vec{v}_h}{\Omega} \leq C_f \inf_{\vec{v}_h \in V_h} \Honenorm{\vec{u}-\vec{v}_h}{\Omega}, \end{equation*} where $C_f$ is the constant used in \eqref{Cfreference}. Let $\mathcal{I}^h f$ denote the global interpolant of $f$ over the appropriate finite-element space. Since $\{\mathcal{T}_h\}$ is quasi-uniform, it is, in particular, non-degenerate. Therefore, applying \cite[Theorem 4.4.24]{Brenner1} to the discrete space $V_h$, there exists a $C_5>0$ such that \begin{align*} \left ( \sum_{T \in \mathcal{T}_h} \Vert \vec{v} - \mathcal{I}^h \vec{v} \Vert^2_{H^1(T)} \right)^{1/2} = \Honenorm{\vec{v} - \mathcal{I}^h \vec{v}}{\Omega} \leq C_5 h^2 \Hnnorm{\vec{v}}{3},& &\forall \vec{v} \in \Hn{3}{\Omega}^3. \end{align*} This implies that if $\vec{u} \in \Hn{3}{\Omega}^3$, then \begin{equation} \label{Vhbound} \inf_{\vec{v}_h \in V_h} \Hdcnorm{\vec{u} - \vec{v}_h}{\Omega} \leq C_f C_5 h^2 \Hnnorm{\vec{u}}{3}. \end{equation} For $\Pi_h$, Theorem 3.1.6 in \cite{Ciarlet1} implies that there exists a $C_6>0$ such that \begin{align*} \Ltwonorm{\gamma-\mathcal{I}^h \gamma}{\Omega} \leq C_6 h \Honenorm{\gamma}{\Omega}, & &\forall \gamma \in \Hone{\Omega}.
\end{align*} Hence, if $q \in \Hone{\Omega}$, \begin{equation} \label{Pihbound} \inf_{\gamma_h \in \Pi_h} \Ltwonorm{q-\gamma_h}{\Omega} \leq C_6 h \Honenorm{q}{\Omega}. \end{equation} Combining \eqref{Vhbound} and \eqref{Pihbound} with \eqref{boffiinequality} yields the error estimate \begin{align*} \Hdcnorm{\vec{u}-\vec{u}_h}{\Omega} &\leq \frac{4 C_A C_B}{\alpha_0 \zeta} C_f C_5 h^2 \Hnnorm{\vec{u}}{3} + \frac{C_B}{\alpha_0} C_6 h \Honenorm{q}{\Omega} \nonumber \\ &= \frac{18C_A C_B C_f^2 C_{*} \sqrt{\beta C_d} C_5}{\alpha \rho^N \alpha_0} h \Hnnorm{\vec{u}}{3} + \frac{C_B C_6}{\alpha_0} h\Honenorm{q}{\Omega}. \end{align*} Taking $C_a = \max \left ( \frac{18C_A C_B C_f^2 C_{*} \sqrt{\beta C_d} C_5}{\alpha \rho^N \alpha_0}, \frac{C_B C_6}{\alpha_0} \right)$, \eqref{errorapprox} is obtained. \end{proof} Thus, the approximation is convergent for $V_h$--$\Pi_h$ but with an order of sub-optimality, due to the weak coercivity constant's dependence on the mesh parameter. However, use of a discrete $\Hn{-1}{\Omega}$ norm for the space $\Pi_h$ is currently being considered as a means of eliminating this mesh dependence. \subsection{Practical Choice of Finite Elements} The bubble enrichment discussed above is non-standard in its incorporation of $\vec{n}_k$ in the construction of the bubbles. Therefore, during numerical implementation, it was desirable to find an experimentally stable, standard, finite-element pair closely related to the spaces discussed above. It was observed that $Q_1$--$Q_1$ finite-element discretizations resulted in singular matrices. This implies that $Q_1$--$Q_1$ is not a pair for which $b(\cdot, \cdot)$ is weakly coercive. Such a phenomenon is not unique. For example, instabilities arise for equal order elements in Galerkin approaches to both the Stokes' equations \cite{Bochev1} and the Navier-Stokes' equations \cite{Franca1}.
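The singular matrices observed for $Q_1$--$Q_1$ admit a small linear-algebra caricature. For block systems of the saddle-point form arising here, with a symmetric positive definite $a(\cdot,\cdot)$ block, nonsingularity is equivalent to the constraint block having full column rank, which is precisely what discrete weak coercivity provides. The sketch below uses small randomly generated blocks (purely illustrative; not assembled from any finite-element discretization) to exhibit the rank drop caused by linearly dependent constraints.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 3

# Hypothetical SPD block (stand-in for a(., .)) and constraint block (for b(., .)).
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)

B_full = rng.standard_normal((n, k))     # full column rank: discrete inf-sup holds
B_def = B_full.copy()
B_def[:, 2] = B_def[:, 0] + B_def[:, 1]  # linearly dependent constraints

def saddle(A, B):
    """Assemble the block matrix [[A, B], [B^T, 0]]."""
    kk = B.shape[1]
    return np.block([[A, B], [B.T, np.zeros((kk, kk))]])

rank_ok = np.linalg.matrix_rank(saddle(A, B_full))   # n + k: nonsingular
rank_bad = np.linalg.matrix_rank(saddle(A, B_def))   # n + k - 1: singular
```

The deficient system is singular regardless of how well conditioned the $A$ block is, mirroring the observed failure of the equal-order pair.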
On the other hand, in the numerical experiments to be discussed below, mixed finite-element approaches, such as $Q_2$--$P_0$ discretizations, experimentally appear to admit weak coercivity without the need for rising order finite-element implementations or bubble enrichments. Corollary \ref{stabilitycorollary} implies that, for a piecewise constant initial iterate, the update element space $Q_2$--$P_0$ yields weak coercivity for the first Newton iteration. With this assurance, coupled with the empirical weak coercivity evidence for $Q_2$--$P_0$, we employ $Q_2$--$P_0$ spaces for all iterations in the experiments below. In the event that singular matrices occur for the $Q_2$--$P_0$ discretization of a particular problem, the bubble enriched finite-element pair $V_h$--$\Pi_h$, defined in \eqref{pispace} and \eqref{vspace}, may be implemented and is particularly attractive because the rising order of the bubble functions, $b_T \vec{n}_k \vert_T$, on each element does not increase the number of unknowns at each Newton iteration. \section{Numerical Methodology} \label{nummethodology} The algorithm to perform the minimization discussed in previous sections has three stages; see Algorithm \ref{algo}. The outermost phase is nested iteration (NI) \cite{McCormick1, Starke1}, which begins on a specified coarsest grid level. Newton iterations are performed on each grid, updating the current approximation after each step. The stopping criterion for the Newton iterations at each level is based on a specified tolerance for the current approximation's conformance to the first-order optimality conditions in the standard Euclidean $l_2$ norm. In the numerical experiments to follow, this tolerance was always $10^{-3}$. The resulting approximation is then interpolated to a finer grid. The current implementation performs uniform grid refinement after each set of Newton iterations. The Newton iteration systems are constructed by applying finite-element discretizations on each grid.
The resulting, relatively sparse, matrix has the anticipated saddle-point block structure \begin{equation*} \left [ \begin{array}{c c} \vec{A} & \vec{B} \\ \vec{B}^{T} & \vec{0} \end{array} \right ] . \end{equation*} The matrix is inverted using LU decomposition in order to solve for the discrete updates $\delta \director_h$ and $\delta \lambda_h$. Finally, an incomplete Newton correction is performed. That is, the new iterates are given by \begin{equation} \label{corrections} \left [ \begin{array}{c} \vec{n}_{k+1} \\ \lambda_{k+1} \end{array} \right ] = \left [ \begin{array}{c} \vec{n}_k \\ \lambda_k \\ \end{array} \right ] + \omega \left [ \begin{array}{c} \delta \director_h \\ \delta \lambda_h \end{array} \right ], \end{equation} where $\omega \leq 1$. This is to ensure relatively strict adherence to the constraint manifold, which is necessary for the well-posedness discussed above. For this algorithm, $\omega$ is chosen to begin at $0.2$ on the coarsest grid and increases by $0.2$, to a maximum of $1$, after each grid refinement, so that as the approximation converges, larger Newton steps are taken. For complicated boundary conditions, such damped Newton steps are important in preventing method divergence. The grid management and discretizations are implemented using the deal.II finite-element library, which is an aggressively optimized and parallelized open-source library widely used in scientific computing \cite{BangerthHartmannKanschat2007, DealIIReference}. In practice, as discussed above, $Q_2$--$P_0$ discretizations were observed to experimentally admit weak coercivity. Therefore, $Q_2$--$P_0$ elements were used to approximate $\delta \director_h$ and $\delta \lambda_h$ on each grid for the numerical tests. \vspace{.3cm} \begin{algorithm}[H] \label{algo} \SetAlgoLined ~\\ 0. Initialize $(\vec{n}_0, \lambda_0)$ on coarse grid. ~\\ \While{Refinement limit not reached} { \While{First-order optimality conformance threshold not satisfied} { 1. 
Set up discrete linear system \eqref{newtonhessian} on current grid, $H$. ~\\ 2. Solve for $\delta \director_{H}$ and $\delta \lambda_{H}$. ~\\ 3. Compute $\vec{n}_{k+1}$ and $\lambda_{k+1}$ as in \eqref{corrections}. ~\\ } 4. Uniformly refine the grid. ~\\ 5. Interpolate $\vec{n}_{H} \to \vec{n}_{h}$ and $\lambda_{H} \to \lambda_h$. } \caption{Newton's method minimization algorithm with NI} \end{algorithm} \vspace{.3cm} \subsection{Free Elastic Numerical Results}\label{FreeElasticResults} The general test problem in this section considers a classical domain with two parallel substrates placed at distance $d=1$ apart. The substrates run parallel to the $xz$-plane and perpendicular to the $y$-axis. It is assumed that this domain represents a uniform slab in the $xy$-plane. That is, $\vec{n}$ may have a non-zero $z$ component but $\pd{\vec{n}}{z} = \vec{0}$. Hence, we consider the 2-D domain $\Omega = \{ (x,y) \text{ } \vert \text{ } 0 \leq x,y \leq 1 \}$. The problem assumes periodic boundary conditions at the edges $x=0$ and $x=1$. Dirichlet boundary conditions are enforced on the $y$-boundaries. As discussed above, the simplification outlined in \eqref{stronganchoringdivthm} is relevant for this domain and boundary conditions. \begin{figure}[h!] \centering \begin{subfigure}{.49 \textwidth} \centering \includegraphics[scale=.30]{FESimpleBCInitial4x4.png} \end{subfigure} \begin{subfigure}{.49 \textwidth} \raggedright \includegraphics[scale=.30]{FESimpleBC.png} \end{subfigure} \caption{\small{Initial guess (left) on $4 \times 4$ mesh with initial free energy of $5.467$ and resolved solution (right) on $128 \times 128$ mesh (restricted for visualization) with final free energy of 0 for a uniformly aligned boundary.}} \label{FEsimpleBC} \end{figure} The first numerical experiment is run on one of the simplest configurations of this type. Along each of the substrates the liquid crystal rods are uniformly aligned parallel to the $x$-axis. 
The relevant Frank constants are $K_1=K_2=K_3=1$. The problem is solved on a $4 \times 4$ coarse grid with five successive uniform refinements resulting in a $128 \times 128$ fine grid. The initial guess and computed, converged solution are displayed in Figure \ref{FEsimpleBC}. The final minimized functional energy is $\mathcal{F}_1 = 0$, compared to the initial guess energy of $5.467$. In Table \ref{gridprogFEsimpleBC}, the number of Newton iterations per grid is detailed as well as the conformance of the solution to the first-order optimality conditions after the first and final Newton steps, respectively, on each grid. Assuming the presence of solvers that scale linearly with the number of non-zeros in the matrix, the work required in these iterations is roughly $1.34$ times that of assembling and solving a single linearization step on the finest grid. In contrast, without nested iteration, the algorithm requires $21$ damped Newton steps on the $128 \times 128$ finest grid alone, to satisfy the tolerance limit. The application of damped Newton steps becomes even more important when beginning on finer grids with a rough initial guess, as divergence can be more prevalent. Table \ref{gridprogFEsimpleBC} also reveals the performance of the algorithm with respect to the pointwise constraint, presenting the increasingly tighter minimum and maximum director deviations from unit length at the quadrature nodes. The computed equilibrium solution behaves as expected with the rods uniformly aligning parallel to the $x$-axis. \begin{table}[h!] \centering {\small \begin{tabular}{|c|c|c|c|c|c|} \hline Grid Dim. &Newton Iter.&Init. 
Res.&Final Res.& Deviation in $\ltwonorm{\vec{n}}^2$ & Final Energy\\ \hline $4 \times 4$ & 18 & 4.35e-00 & 4.39e-04 &6.17e-06, 5.54e-05 & 4.941e-08\\ \hline $8 \times 8$ & 1 & 2.44e-04 & 9.74e-05 & 1.25e-06, 2.26e-05 & 7.905e-09\\ \hline $16 \times 16$ & 1 & 5.48e-05 & 1.10e-05 & 1.26e-07, 4.55e-06 & 3.162e-10\\ \hline $32 \times 32$ & 1 & 6.42e-06 & 1.35e-11 & 4.20e-14, 4.30e-11 & 7.932e-21\\ \hline $64 \times 64$ & 1 & 6.77e-12 & 6.37e-14 & -4.00e-16, 0 & 0\\ \hline $128 \times 128$ & 1 & 1.30e-13 & 1.14e-13 & -4.00e-16, 0 & 0\\ \hline \end{tabular} } \caption{\small{Grid and solution progression for uniform free elastic boundary conditions with initial and final residuals for the first-order optimality conditions, minimum and maximum director deviations from unit length at the quadrature nodes, and final functional energy on each grid.}} \label{gridprogFEsimpleBC} \end{table} The second test, run for the free elastic slab problem, incorporates twist boundary conditions and unequal Frank constants. On the lower slab, along $y=0$, the nematic rods are aligned parallel to the $x$-axis. For the upper slab, the rods are uniformly aligned along the $z$-axis. The relevant constants for this run are $K_1 = 1$, $K_2 = 1.2$, and $K_3 = 1$. This implies that $\kappa = K_2/K_3 > 1$. The solves are again performed on a $4 \times 4$ coarse grid, uniformly ascending to a $128 \times 128$ fine grid. The expected configuration for such boundary conditions is a twisted equilibrium solution along the $y$-axis. Indeed, the numerically resolved solution in Figure \ref{FETwistedBC}, displayed alongside the initial guess, demonstrates such a twist. The final minimized functional energy is $\mathcal{F}_1 = 1.480$, compared to the initial guess energy of $12.534$. Table \ref{gridprogFETwistedBC} enumerates the algorithm run attributes. \begin{figure}[h!] 
\centering \begin{subfigure}{.49 \textwidth} \centering \includegraphics[scale=.30]{FETwistBCInitial4x4.png} \end{subfigure} \begin{subfigure}{.49 \textwidth} \raggedright \includegraphics[scale=.30]{FETwistBC.png} \end{subfigure} \caption{\small{Initial guess (left) on $4 \times 4$ mesh with initial free energy of $12.534$ and resolved solution (right) on $128 \times 128$ (mesh restricted for visualization) with final free energy of 1.480 for a twist boundary.}} \label{FETwistedBC} \end{figure} As in Table \ref{gridprogFEsimpleBC} above, a sizable majority of the Newton iteration computations are isolated to the coarsest grids, with the finest grids requiring only one Newton iteration to reach the residual tolerance limit. Therefore, most of the computational cost is also isolated to the cheaper coarse grids rather than the finer levels. Here, the total work required is approximately $1.43$ times that of assembling and solving a single linearization step on the finest grid. Without nested iteration, $22$ damped Newton steps are required on the finest grid to compute the equilibrium solution. \begin{table}[h!] \centering {\small \begin{tabular}{|c|c|c|c|c|c|} \hline Grid Dim. &Newton Iter.&Init. 
Res.&Final Res.& Deviation in $\ltwonorm{\vec{n}}^2$ & Final Energy\\ \hline $4 \times 4$ & 19 & 6.71e-00 & 3.97e-04 & -5.69e-05, 1.50e-04 & 1.481\\ \hline $8 \times 8$ & 5 & 1.80e-02 & 1.84e-04 & -4.10e-06, 2.57e-06 & 1.480\\ \hline $16 \times 16$ & 2 & 4.51e-03 & 1.80e-04 & -3.27e-07, 1.51e-07 & 1.480 \\ \hline $32 \times 32$ & 2 & 1.13e-03 & 2.09e-14 & -1.47e-08, 6.88e-09 & 1.480\\ \hline $64 \times 64$ & 1 & 2.82e-04 & 4.31e-11 & -9.21e-10, 4.31e-10 & 1.480\\ \hline $128 \times 128$ & 1 & 7.05e-05 & 1.36e-12 & -5.75e-11, 2.69e-11 & 1.480\\ \hline \end{tabular} } \caption{\small{Grid and solution progression for the free elastic problem with twist boundary conditions with initial and final residuals for the first-order optimality conditions, minimum and maximum director deviations from unit length at the quadrature nodes, and final functional energy on each grid.}} \label{gridprogFETwistedBC} \end{table} In the final numerical run, letting $r = 0.25$ and $s = 0.95$, the boundary conditions considered are \begin{align*} n_1 &= 0,\\ n_2 &= \cos\big(r(\pi + 2 \tan^{-1}(X_m) -2 \tan^{-1}(X_p))\big), \\ n_3 &= \sin\big(r(\pi + 2 \tan^{-1}(X_m) -2 \tan^{-1}(X_p))\big), \end{align*} where $X_m=\frac{-s\sin(2\pi(x+r))}{-s\cos(2\pi(x+r))-1}$ and $X_p = \frac{-s\sin(2\pi(x+r))}{-s\cos(2\pi(x+r))+1}$. Such boundary conditions are meant to simulate nano-patterned surfaces important in current research \cite{Atherton1, Atherton2}. Even in the absence of electric fields, such patterned surfaces result in complicated director configurations throughout the interior of $\Omega$. A similar grid progression to the cases above is applied. The Frank elastic constants for the experiment are $K_1=1$, $K_2=.62903$, and $K_3=1.32258$. This results in $\kappa < 1$. The final solution, as well as the initial guess, are displayed in Figure \ref{FENanoBC}. Table \ref{gridprogNanoBC}, again, details the relevant output data. 
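The patterned boundary data above can be evaluated directly. The following Python sketch (the function name is ours, not from the implementation) transcribes the formulas for $n_2$ and $n_3$; note that, since $s = 0.95 < 1$, the denominators in $X_m$ and $X_p$ never vanish, and the boundary director has exact unit length because $n_1 = 0$ and $(n_2, n_3) = (\cos\theta, \sin\theta)$.

```python
import math

def patterned_director(x, r=0.25, s=0.95):
    """Nano-patterned Dirichlet boundary director (n1 = 0); name hypothetical."""
    sin2 = -s * math.sin(2.0 * math.pi * (x + r))
    cos2 = -s * math.cos(2.0 * math.pi * (x + r))
    # |cos2| <= s < 1, so both denominators below are bounded away from zero.
    Xm = sin2 / (cos2 - 1.0)
    Xp = sin2 / (cos2 + 1.0)
    theta = r * (math.pi + 2.0 * math.atan(Xm) - 2.0 * math.atan(Xp))
    return (0.0, math.cos(theta), math.sin(theta))
```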
The computed equilibrium configuration demonstrates the expected alignment and symmetries given the patterned surfaces. The minimized functional energy is $\mathcal{F}_1 = 3.890$, compared to the initial guess energy of $13.242$. The work required is approximately $3.06$ times that of assembling and solving a single linearization step on the finest grid. On the other hand, without nested iterations, $22$ damped Newton steps are required on the finest grid. Therefore, in all cases discussed, nested iteration is successful in significantly reducing the computational work necessary to compute an equilibrium solution. \begin{figure}[h!] \centering \begin{subfigure}{.49 \textwidth} \centering \includegraphics[scale=.30]{FENanoBCInitial4x4.png} \end{subfigure} \begin{subfigure}{.49 \textwidth} \raggedright \includegraphics[scale=.30]{FENanoBC.png} \end{subfigure} \caption{\small{Initial guess (left) on $4 \times 4$ mesh with initial free energy of $13.242$ and resolved solution (right) on $128 \times 128$ mesh (restricted for visualization) with final free energy of 3.890 for a nano-patterned boundary.}} \label{FENanoBC} \end{figure} \begin{table}[h!] \centering {\small \begin{tabular}{|c|c|c|c|c|c|} \hline Grid Dim. &Newton Iter.&Init. 
Res.&Final Res.& Deviation in $\ltwonorm{\vec{n}}^2$ & Final Energy\\ \hline $4 \times 4$ & 19 & 7.04e-00 & 4.72e-04 & -9.07e-02, 4.67e-02 & 2.521\\ \hline $8 \times 8$ & 9 & 1.20e-00 & 3.14e-04 & -8.20e-02, 4.58e-02 & 3.194 \\ \hline $16 \times 16$ & 6 & 1.06e-00 & 6.71e-05 & -6.69e-02, 3.96e-02 & 3.674\\ \hline $32 \times 32$ & 3 & 8.22e-01 & 3.42e-12 & -4.31e-02, 2.78e-02 & 3.885\\ \hline $64 \times 64$ & 3 & 5.04e-01 & 4.75e-14 & -1.73e-02, 1.26e-02 & 3.900\\ \hline $128 \times 128$ & 2 & 2.24e-01 & 3.00e-09 & -3.51e-03, 2.81e-03 & 3.890\\ \hline \end{tabular} } \caption{\small{Grid and solution progression for patterned boundary conditions with initial and final residuals for the first-order optimality conditions, minimum and maximum director deviations from unit length at the quadrature nodes, and final functional energy on each grid.}} \label{gridprogNanoBC} \end{table} \section{Summary and Future Work} \label{conclusion} We have discussed a constrained minimization approach for liquid crystal equilibrium configurations in the presence of free elastic effects. Such minimization is founded upon the Frank-Oseen model for liquid crystal free energy. Due to the nonlinearity of the continuum first-order optimality conditions, Newton linearizations were derived. The resulting discrete systems were analyzed, and it was shown that solutions to the discretized Newton iterations exist. If $\kappa=1$ or $\kappa$ satisfies the conditions of the small data assumption in Lemma \ref{coercivitysmalldata} and the assumptions of Lemma \ref{bubblespacelemma} hold, then unique solutions to the discrete Newton iterations are guaranteed for the prescribed discrete spaces. Error analysis was conducted to demonstrate discrete convergence results for the method. Numerical results demonstrate the accuracy and efficiency of the algorithm in resolving some difficult features for free elastic effects. 
The experiments address problems that include unequal Frank constants and nano-patterned boundary conditions. The experiments also reveal the necessity for a mixed finite-element approach. Such a requirement exposes an interesting parallel to other problems with similar instabilities, such as the Stokes and Navier--Stokes equations. The minimization approach overcomes some difficulties inherent to the liquid crystal equilibrium problem, such as the nonlinear unit length director constraint, and effectively deals with heterogeneous Frank constants. The algorithm also productively utilizes nested iteration to reduce computational costs by isolating much of the computational work to the coarsest grids. Such computational work allocation significantly reduces the effective number of Newton iterations on the finest grid, even for the nano-patterned boundary conditions example. The above method is currently being extended to include electric and flexoelectric effects in order to more accurately capture physical phenomena important to many applications, such as the study of bistable devices \cite{Davidson1}. The added complexity involved in these extensions presents interesting challenges, such as the appearance of more complicated saddle-point structures. Development and implementation of specifically tailored solvers for the systems encountered above, as well as those anticipated in future problems, is a priority. Additionally, investigation into the use of $\Hn{-1}{\Omega}$ norms for the Lagrange multiplier, to achieve discrete inf-sup stability independent of the mesh parameter, $h$, is being pursued. Furthermore, analysis of the Newton linearizations for the electric and flexoelectric augmentations will be undertaken. Future work will also include study of effective adaptive refinement and linearization tolerance schemes.
Because the energy minimization formulation does not yield an obvious a posteriori error estimator, new techniques will be explored to flag cells for refinement and determine when grid refinement should occur. \section*{Acknowledgments} The authors would like to thank Professors Thomas Manteuffel, Johnny Guzm\'{a}n, and Ludmil Zikatanov for their useful contributions and suggestions. \bibliographystyle{plain} \nocite{*}
\section{Introduction} Differential graded algebras (DGAs for short) were introduced by Cartan in \cite{cartan1954groupes} and occur naturally in a number of different areas of geometry and topology. We are here interested in those that appear in the context of Legendrian contact homology, which is a powerful contact topological invariant due to Chekanov \cite{Chekanov_DGA_Legendrian} and Eliashberg, Givental and Hofer \cite{Eliashberg_&_SFT}. In its basic setup, this theory associates a differential graded algebra, called the {\em Chekanov-Eliashberg DGA}, to a given Legendrian submanifold of a contact manifold. The DGA homotopy type (or even, stable tame isomorphism type) of the Chekanov-Eliashberg DGA is independent of the choices made in the construction and invariant under isotopy through Legendrian submanifolds. Because of some serious analytical difficulties, Legendrian contact homology has been rigorously defined only for Legendrian submanifolds of contactisations of Liouville manifolds \cite{LCHgeneral} and in a few other sporadic cases \cite{Chekanov_DGA_Legendrian, 1024.57014, Sabloff_thesis, Licata_Sabloff, Ekholm_Ng_subcritical}. Since the Chekanov-Eliashberg DGA is semifree and fully noncommutative, it can be difficult to extract invariants from it. In fact, as an algebra, it is isomorphic to a tensor algebra (and therefore is typically of infinite rank) and its differential is nonlinear with respect to the generators. To circumvent these difficulties, Chekanov introduced his linearisation procedure in \cite{Chekanov_DGA_Legendrian}: to a differential graded algebra equipped with an augmentation he associates a chain complex which is generated, {\em as a module}, by the generators of the DGA {\em as an algebra}. The differential then becomes linear at the price of losing the information which is contained in the multiplicative structure of the DGA, but at least the homology of the linearised complex is computable.
It is well known that the set of isomorphism classes of linearised homologies is invariant under DGA homotopy; see e.g.~\cite[Theorem 2.8]{Bourgeois_Survey}. Thus, linearised Legendrian contact homology provides us with a computable Legendrian isotopy invariant. In order to recover at least part of the nonlinear information lost in the linearisation, one can study products in the linearised Legendrian contact homology groups induced by the product structure of the Chekanov-Eliashberg DGA. Civan, Koprowski, Etnyre, Sabloff and Walker in \cite{Productstructure} endowed Chekanov's linearised chain complex with an $A_\infty$-structure. This construction was generalised in \cite{augcat} by the first author and Bourgeois, who showed that a differential graded algebra naturally produces an $A_\infty$-category whose objects are its augmentations. In dimension three, the $A_\infty$-category constructed by the first author and Bourgeois admits a unital refinement defined by Ng, Rutherford, Shende, Sivek and Zaslow in~\cite{NRSSZ}. The latter article also establishes an equivalence between this unital $A_\infty$-category and one defined in terms of derived sheaves of microlocal rank one with microsupport given by a fixed Legendrian knot. Our expectation is that the $A_\infty$-structures constructed here correspond to such sheaves being of arbitrary microlocal rank. $A_\infty$-algebras are by now classical structures which were first introduced by Stasheff in \cite{HomotopyAssociativity} as a tool in the study of `group-like' topological spaces. Fukaya was the first to upgrade the notion of an $A_\infty$-algebra to that of an $A_\infty$-category. In \cite{MorseHomotopy} he associated an $A_\infty$-category, which now goes under the name of the \emph{Fukaya category}, to a symplectic manifold. See \cite{Seidel_Fukaya} for a good introduction. 
Inspired by Fukaya's work \cite{MorseHomotopy}, Kontsevich in \cite{kontsevich1995homological} formulated the {\em homological mirror symmetry conjecture} relating the derived Fukaya category of a symplectic manifold to the derived category of coherent sheaves on a ``mirror'' manifold. The construction in \cite{Productstructure} and \cite{augcat} defines $A_\infty$-operations only when the coefficient ring of the DGA is commutative. The goal of this paper is to extend that construction to noncommutative coefficient rings in the following two cases: \begin{enumerate} \item[(I)] the coefficients of the DGA as well as the augmentations are taken in a unital noncommutative algebra, or \item[(II)] the coefficients of the DGA as well as the augmentations are taken in a noncommutative {\em Hermitian algebra}. (See Definition~\ref{defn: hermitian algebra}.) This case includes both finite-dimensional algebras over a field and group rings. \end{enumerate} Case (II) is obviously included in Case (I), but we will see that there is a particularly nice alternative construction of an $A_\infty$-structure in case (II) which gives a different result. We refer to Subsections \ref{subsec:case1} and \ref{subsec:case2} for the respective constructions. Both generalisations above are sensible to study when having Legendrian isotopy invariants in mind, albeit for different reasons. Case (I) occurs because there are Legendrian submanifolds whose Chekanov-Eliashberg DGA does not admit augmentations in any unital algebra of finite rank over a commutative ring, but admits an augmentation in a unital noncommutative infinite-dimensional one (for example, in their characteristic algebras). The first such examples were Legendrian knots constructed by Sivek in \cite{TheContHomofLegKNotswithMAXTBI} building on examples found by Shonkwiler and Shea Vela-Vick in \cite{Shink_Vela}. 
From them, the second and fourth authors constructed higher dimensional examples in \cite{EstimNumbrReebChordLinReprCharAlg}. Observe that any differential graded algebra has an augmentation in its ``characteristic algebra'', introduced by Ng in \cite{Ngcomputable}, which is the quotient of the DGA by the two-sided ideal generated by its boundaries. This algebra is in general noncommutative and infinite-dimensional, and any augmentation factors through it. It is of course possible that the characteristic algebra vanishes, but it does so if and only if the DGA is acyclic \cite{Characteristic}. The \emph{complex} that we will define in case (I) (but not the higher order operations) was used in \cite{Caps} by the second author in order to deduce that a Legendrian submanifold with a non-acyclic Chekanov-Eliashberg DGA does not admit a displaceable Lagrangian cap. Finally, we note that the construction we give in Case (I) is closely related to the $A_\infty$-structures and bounding cochains with noncommutative coefficients as introduced by Cho, Hong and Sui-Cheong in their recent work \cite{Cho}. Namely, the (uncurved) $A_\infty$-structures that we produce from a DGA and its augmentations can be seen to coincide with the (uncurved) $A_\infty$-structures produced by their bounding cochains. Case (II) also occurs naturally in the context of Legendrian contact homology. For example, in \cite{Satellites} Ng and Rutherford show that augmentations of certain satellites of Legendrian knots induce augmentations in matrix algebras for the Chekanov-Eliashberg DGA of the underlying knot. Moreover, coefficients in a group ring appear naturally if one considers the Chekanov-Eliashberg DGA with coefficients ``twisted'' by the fundamental group of the Legendrian submanifold. We learned this construction from Eriksson-\"{O}stman, who makes use of it in his upcoming work \cite{Albin}. 
This version of Legendrian contact homology can be seen as a natural generalisation of Morse homology and Floer homology with coefficients twisted by the fundamental group; see the work \cite{KFloer} by Sullivan and \cite{Damian_Lifted} by Damian. In the setting of Legendrian contact homology with twisted coefficients, an exact Lagrangian filling gives rise to an augmentation taking values in the group ring of the fundamental group of the filling. See the work \cite{Cthulhu} by the authors for more details, where Legendrian contact homology with twisted coefficients is used to study the topology of Lagrangian fillings and cobordisms. In Section \ref{sec:computation} we outline how our construction can be used as an efficient computational tool for distinguishing a Legendrian knot from its Legendrian mirror in the case when there are no augmentations in a commutative algebra. Note that, in general, it is much easier to extract invariants from the $A_\infty$-algebra compared to the DGA. Finally, we recall that Legendrian contact homology is not the only place where noncommutative graded algebras appear in symplectic geometry. Another source is cluster homology, a proposed generalisation of Lagrangian Floer homology due to Cornea and Lalonde \cite{Cluster}, which is supposed to provide an alternative approach to the $A_\infty$-structures in Floer homology introduced by Fukaya, Oh, Ohta and Ono \cite{fooo}. \subsection*{Acknowledgements} We would like to thank the organisers of the twenty-second G\"okova Geometry-Topology conference for the wonderful mathematical, as well as natural, environment, and the Institut Mittag-Leffler for hospitality during the program ``Symplectic Geometry and Topology'', when part of this article was written. The first author has also benefited from the hospitality of CIRGET in Montr\'eal and IAS in Princeton. Last but not least, we thank Lenny Ng and Steven Sivek for useful discussions.
The example in Section~\ref{sec: Lenny's example}, suggested to us by Ng, was a major source of inspiration. \section{Algebraic preliminaries} \label{sec:preliminaries} In this section, we fix some notation and recall some basic definitions from the theory of modules over (possibly noncommutative) algebras. For more details of this theory, we refer to \cite{dummit2004abstract}. We also introduce some notation that will simplify the various formulas for the $A_\infty$-structures that we will define. \subsection{Bimodules and tensor products} In this paper $R$ will always denote a commutative ring and $A$ will denote a unital algebra over $R$ which is not necessarily commutative. Important examples will be the matrix algebra $M_{n}(R)$ corresponding to the endomorphisms of the free $R$-module $R^n$, and the group ring $R[G]$ of an arbitrary group $G$. For $R$-modules $M$, $N$ we denote by \[ M \otimes N := M \otimes_R N \] their tensor product as $R$-modules. Moreover, if $M$, $N$ are $A$--$A$-bimodules, their (balanced) tensor product is denoted by \[ M \boxtimes N := M\otimes_A N.\] We recall that the balanced tensor product is the quotient of $M \otimes N$ by the relations $ma\otimes n = m\otimes an$ for all $a \in A$, $m\in M$ and $n\in N$. We also recall that a free $A$--$A$-bimodule $M$ on generating set $B$ is an $A$--$A$-bimodule $M$ together with a map $i:B\to M$ of sets such that, for any $A$--$A$-bimodule $N$ and any map $f:B\to N$ of sets, there is a unique $A$--$A$-bimodule morphism $\overline{f}:M\to N$ such that $\overline{f}\circ i=f$. The elements of ${\mathcal B}= i(B)$ in $M$ are a basis for $M$. The free $A$--$A$-bimodule with basis ${\mathcal B}$ will often be identified with $\oplus_{c \in {\mathcal B}} (A \otimes_{R} A)$, where the action of $A$ from the left (resp. right) acts by multiplication from the left (resp. right) on the left (resp. right) factor.
Elements of $M$ will also be written as linear combinations of elements of the form $ a_+ c a_-$ with $a_\pm \in A$ and $c \in {\mathcal B}$. A \emph{grading} of an $A$--$A$-bimodule $M$ in the group $\Z/2\mu\Z$ is a direct sum decomposition $M = \oplus_{g \in \Z/2\mu\Z} M_g$. If $m \in M_g$, we write $|m| =g$. The tensor product of graded bimodules is graded by the usual rule. \subsection{Tensor algebras} Given an $A$--$A$-bimodule $M$, we define the {\em tensor algebra} of $M$ as the algebra \[ \mathcal{T}_A(M):=\bigoplus_{n=0}^\infty M^{\boxtimes n}\] with the multiplication \[ \mathfrak{m} \colon \mathcal{T}_A(M) \boxtimes \mathcal{T}_A(M) \to \mathcal{T}_A(M), \] \[ \mathfrak{m}((m_1 \boxtimes \ldots \boxtimes m_i) \boxtimes (n_1 \boxtimes \ldots \boxtimes n_j)) = m_1 \boxtimes \ldots \boxtimes m_i \boxtimes n_1 \boxtimes \ldots \boxtimes n_j. \] Here we have used the notation \begin{align*} M^{\boxtimes 0} &:= A,\\ M^{\boxtimes n} & := \underbrace{M \boxtimes \hdots \boxtimes M}_n. \end{align*} We will call $M^{\boxtimes 0}$ the {\em zero-length part} of $\mathcal{T}_A(M)$. If $M$ is a graded bimodule, then the tensor algebra $\mathcal{T}_A(M)$ inherits a grading \[\mathcal{T}_A(M) = \bigoplus_{g \in \Z/2\mu\Z} \mathcal{T}_A(M)_g \] by requiring that the zero-length part lives in degree zero, i.e. $M^{\boxtimes 0} \subset \mathcal{T}_A(M)_0$, and that \[\mathfrak{m} \colon \mathcal{T}_A(M)_{g_1} \boxtimes \mathcal{T}_A(M)_{g_2} \to \mathcal{T}_A(M)_{g_1+g_2}.\] In this article, algebra maps will always be unital. Algebra maps $\mathcal{T}_A(M) \to \mathcal{T}_A(N)$ between tensor algebras over $A$ will always be morphisms of $A$--$A$-bimodules, and in particular they will restrict to the identity $A=M^{\boxtimes 0} \to N^{\boxtimes 0}=A$ on the zero-length parts.
On the other hand, algebra maps $\mathcal{T}_A(M) \to B$ for a general $R$-algebra $B$ will be $R$--$R$-bimodule morphisms, and their restriction to $M^{\boxtimes 0}=A$ induces a unital $R$-algebra morphism $A \to B$. Algebra maps defined on $\mathcal{T}_A(M)$ are determined by their restrictions to $M^{\boxtimes 0}=A$ and $M^{\boxtimes 1} = M$. Note that, as in the case when $A$ is commutative, there is a notion of ``free product'' of tensor algebras defined by \[ \mathcal{T}_A(M) \star \mathcal{T}_A(N):=\mathcal{T}_A(M \oplus N),\] which is again a tensor algebra. Moreover, for algebra maps $f_i \colon \mathcal{A}_i \to \mathcal{B}_i$, $i=1,\hdots,n$, between tensor algebras $\mathcal{A}_1, \ldots, \mathcal{A}_n, \mathcal{B}_1, \ldots, \mathcal{B}_n$, there is a naturally induced algebra map \[f_1 \star \hdots \star f_n \colon \mathcal{A}_1 \star \hdots \star \mathcal{A}_n \to \mathcal{B}_1 \star \hdots \star \mathcal{B}_n\] between the corresponding free products. In all our applications, $M$ will be a free $A$--$A$-bimodule. In this case, if the elements $c_1, \ldots, c_m,\ldots$ freely generate the $A$--$A$-bimodule $M$, they also generate the algebra $\mathcal{T}_A(M)$ in the following sense: every element in $M^{\boxtimes n}$ can be written as a linear combination of elements of the form $a_0c_{i_1}a_1 \ldots c_{i_n}a_n$, with $a_0, \ldots, a_n \in A$. \subsection{Duals} \label{sec:bimoduledual} For an $R$-module $M$, we denote by $M^*$ the dual module $\Hom_R(M,R)$. If $M$ is free with a given finite basis ${\mathcal B}$, then $M^*$ is again free with a dual basis ${\mathcal B}^*$ induced by ${\mathcal B}$. We recall that, for any $c \in {\mathcal B}$, the dual basis element $c^*$ is the element of $M^*$ which maps $c$ to $1 \in R$ and any other element of ${\mathcal B}$ to $0$. Hence, when the basis is part of the data, we will identify $M$ with $M^*$ by identifying $c$ with $c^*$ for all $c \in {\mathcal B}$.
If ${\mathcal B}$ is not finite, the above construction only gives an injection $M \to M^*$. Given an $R$-module map $f:M\rightarrow N$, we denote the adjoint morphism by $f^*:N^*\rightarrow M^*$. Again, if $M$ and $N$ are free with given finite bases, then, under the above identifications, we write $f^* \colon N\rightarrow M$. Let $A$ be any $R$-algebra. We can regard $A$ as a nonfree $A$--$A$-bimodule over itself. For an $A$--$A$-bimodule $M$ we will define $M^\vee := \Hom_{A-A}(M,A)$ in the sense of bimodules. Observe that in general $M^\vee$ only has the structure of an $R$-module. If $M$ is a free and finitely generated $A$--$A$-bimodule with a preferred basis ${\mathcal B}$, then $M^\vee$ can be identified with a free $A$-module with the same basis. The correspondence is given by \begin{equation}\label{pippo} M^\vee \ni \varphi \mapsto \sum \limits_{c \in {\mathcal B}} \varphi(c)c. \end{equation} Again, any morphism $f \colon M \to N$ of bimodules gives rise to an adjoint morphism \[ f^\vee \colon N^\vee \to M^\vee \] which, typically, is only a morphism of $R$-modules. We define morphisms of $R$-modules $\psi_n \colon (M^\vee)^{\otimes n} \to (M^{\boxtimes n})^\vee$ by \begin{equation}\label{pluto} \psi_n(\beta_1 \otimes \hdots \otimes \beta_n)(m_1 \boxtimes \hdots \boxtimes m_n)= \beta_1(m_1) \hdots \beta_n(m_n) \end{equation} for $\beta_i \in M^\vee$ and $m_i \in M$. Note that $\psi_n(\beta_1 \otimes \hdots \otimes \beta_n )$ is well defined on the balanced tensor product since the $\beta_i$ are bimodule morphisms. The maps $\psi_n$ cannot be seen as morphisms of bimodules in any sensible way. If $M$ is graded, then the dual modules $M^*=\Hom_R(M,R)$ and $M^\vee =\Hom_{A-A}(M,A)$ (when defined) are also graded with gradings $(M^\vee)_g:=(M_{g-1})^\vee$ and $(M^*)_g:=(M_{g-1})^*$, i.e.~the \emph{suspension} of the dual gradings.
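To make \eqref{pluto} concrete in the simplest case, suppose that $M$ is free with a single basis element $c$, and identify each $\beta_i \in M^\vee$ with $b_i := \beta_i(c) \in A$ as in \eqref{pippo}. Writing an element of $M^{\boxtimes 2}$ as $a_0ca_1ca_2 = (a_0 c a_1) \boxtimes (c a_2)$, we obtain \[ \psi_2(\beta_1 \otimes \beta_2)(a_0 c a_1 c a_2) = \beta_1(a_0 c a_1)\, \beta_2(c a_2) = a_0 b_1 a_1 b_2 a_2, \] since the $\beta_i$ are bimodule morphisms; the coefficients $a_i$ thus interleave with the values $b_i$.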
\subsection{Hermitian algebras} We are interested in duals $\Hom_R(M,R)$ of $A$--$A$-bimodules $M$ for algebras $A$ which are not necessarily finitely generated free $R$-modules. For that reason, in order to have a better behaving theory, we will introduce some additional structure on the algebra $A$. A commutative ring $R$ is an {\em involutive ring} if it is endowed with an involution $r \mapsto \overline{r}$ (called {\em conjugation}), which is also a ring homomorphism. The prototypical example to keep in mind is the field of complex numbers, but we will also allow involutive rings where the conjugation is the identity. From now on every ring will be tacitly considered involutive, possibly with a trivial involution. \begin{defn}\label{defn: hermitian algebra} A {\em Hermitian algebra} $(A, \star, \mathfrak{t})$ over an involutive ring $R$ consists of: \begin{itemize} \item an $R$-algebra $A$, \item a map \begin{gather*} \star \colon A \to A, \\ a \mapsto a^\star, \end{gather*} satisfying \begin{enumerate} \item $(r a + s b)^\star = \overline{r}a^\star + \overline{s} b^\star$ for all $r,s \in R$ and all $a,b \in A$, \item $(ab)^\star=b^\star a^\star$, and \item $(a^\star)^\star=a$, and \end{enumerate} \item a Hermitian form $$\mathfrak{t} \colon A \times A \to R$$ such that $$\mathfrak{t}(ba, c)=\mathfrak{t}(a, b^\star c) = \mathfrak{t}(b, ca^\star)$$ for all $a,b,c \in A$, and which is non-degenerate in the following strong sense. For any $n \ge 1$, the morphism \begin{gather*} \iota \colon \underbrace{A \otimes_R \hdots \otimes_R A}_n \to (\underbrace{A \otimes_R \hdots \otimes_R A}_n)^*,\\ x \mapsto \iota_x, \end{gather*} determined by \[\iota_{a_1 \otimes \hdots \otimes a_n}(a_1' \otimes \hdots \otimes a_n') = \mathfrak{t}(a_1',a_1)\cdot \hdots \cdot \mathfrak{t}(a_n',a_n) \in R \] is injective. \end{itemize} \end{defn} From (2) and (3) it follows that $1^\star = 1$. 
In fact $a = (a^\star)^\star = (1 a^\star)^\star = a 1^\star$, and similarly $a= 1^\star a$ for all $a \in A$, so that $1^\star$ is a two-sided unit and hence $1^\star = 1$. \begin{Rem} There are two cases in which the above non-degeneracy for $n > 1$ follows from the case $n=1$: \begin{enumerate} \item $A$ is free (possibly infinitely generated) as an $R$-module and $\mathfrak{t}$ is induced by the canonical pairing of its basis elements, or \item $R$ is a domain. \end{enumerate} \end{Rem} \noindent Note that, if the conjugation on $R$ is trivial, $\mathfrak{t}$ is a symmetric bilinear form. Our main examples of Hermitian algebras will be the group ring $R[G]$ over an arbitrary group $G$ and the matrix algebras $M_{n}(R)$; in both cases $R$ is an arbitrary commutative ring. On the group ring the involution is induced by the inverse in $G$, i.e. $g^\star = g^{-1}$ on the basis elements $g \in G$, and $\mathfrak{t}$ is the scalar product for which the group elements $g \in G$ form an orthonormal basis. On the matrix algebra $M_{n}(R)$ we distinguish whether the conjugation on $R$ is trivial or not. In the first case, the involution in $M_{n}(R)$ is the transposition, and in the second case it is the adjoint (i.e. the transposition followed by entrywise conjugation). In both types of matrix algebras, $$\mathfrak{t}(a,b):=\OP{tr}(b^\star a), \:\: a,b \in M_{n}(R)$$ is given by the trace. For simplicity, from now on we will consider only Hermitian algebras over commutative rings whose conjugation is trivial. \subsection{Bimodules over Hermitian algebras and their duals} Let $M$ be an arbitrary $A$--$A$-bimodule over a Hermitian algebra $A$. The involution on $A$ allows us to define an $A$--$A$-bimodule structure on $M^*=\Hom_R(M,R)$ by \begin{equation}\label{bimodule via star} (a_1\varphi a_2)(m) := \varphi(a_1^\star m a_2^\star) \end{equation} for any $a_1, a_2 \in A$, $m \in M$ and $\varphi \in \Hom_R(M,R)$.
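To illustrate the compatibility condition in Definition \ref{defn: hermitian algebra} in the group ring case, note that for basis elements $a=g$, $b=h$ and $c=k$ of $R[G]$ we have
\[ \mathfrak{t}(ba,c)=\mathfrak{t}(hg,k)=\delta_{hg,k}=\delta_{g,h^{-1}k}=\mathfrak{t}(a,b^\star c), \]
and similarly $\delta_{hg,k}=\delta_{h,kg^{-1}}=\mathfrak{t}(b,ca^\star)$; the general case follows by bilinearity.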
\begin{Lem}\label{adjoint is a bimodule map} Let $A$ be a Hermitian algebra and let $f \colon M \to N$ be a morphism between $A$--$A$-bimodules $M$ and $N$. Then the adjoint map $f^* \colon N^* \to M^*$ is also a morphism of $A$--$A$-bimodules. \end{Lem} \begin{proof} The proof is a simple computation: Let $\varphi \in N^*$, $m \in M$ and $a_+, a_- \in A$. Then \[f^*(a_+ \varphi a_- ) (m)= (a_+ \varphi a_-)(f(m)) = \varphi (a_+^\star f(m) a_-^\star)= \varphi (f(a_+^\star m a_-^\star))= (a_+ f^*(\varphi) a_-)(m). \] \end{proof} Now consider a free $A$--$A$-bimodule $M$ with a preferred basis ${\mathcal B}$. The bilinear pairing $\mathfrak{t}$ on $A$ and the basis ${\mathcal B}$ induce an $R$-bilinear pairing on each $A$--$A$-bimodule $M^{\boxtimes n}$, $n \ge 0$, which, on elements of the form $a_0d_1a_1\ldots d_na_n$ with $a_i \in A$ and $d_i \in {\mathcal B}$ is defined by \[ \langle a_0d_1a_1\ldots d_na_n, a_0'd_1'a_1'\ldots d_n'a_n' \rangle = \begin{cases} \mathfrak{t}(a_0, a_0')\cdot \ldots \cdot \mathfrak{t}(a_n, a_n') & \text{if } d_i=d_i', \: i=1,\hdots,n, \\ 0 & \text{otherwise}, \end{cases} \] where $d_i' \in {\mathcal B}$ as well. It can be checked explicitly that \[\langle a_1ma_2,m' \rangle = \langle m,a_1^\star m'a_2^\star \rangle \] holds for all $m,m' \in M^{\boxtimes n}$ and $a_1,a_2 \in A$. By the assumption of nondegeneracy of $\mathfrak{t}$, this pairing then induces an injection \begin{gather} \iota^{(n)} \colon M^{\boxtimes n} \hookrightarrow (M^{\boxtimes n})^*,\nonumber\\ m \mapsto \langle m, \x \rangle \in \Hom_R(M^{\boxtimes n},R), \label{inclusion in dual} \end{gather} for each $n \ge 0$. The identifications $\iota^{(n)}$ also satisfy the following property. \begin{Lem} Let $A$ be a Hermitian algebra and let $M$ be a free $A$--$A$-bimodule with a preferred basis. Then the inclusion $\iota^{(n)}$ for $n \ge 1$ is a morphism of $A$--$A$-bimodules for the bimodule structure on $(M^{\boxtimes n})^*$ described in Equation~\eqref{bimodule via star}.
\end{Lem} \begin{proof} For any $m \in M^{\boxtimes n}$ and $a_1,a_2 \in A$ we compute \[a_1\langle m, \x \rangle a_2 = \langle m,a_1^\star \x a_2^\star \rangle = \langle a_1ma_2,\x \rangle,\] which shows the claim. \end{proof} We will often tacitly identify $M^{\boxtimes n}$ with its image in $(M^{\boxtimes n})^*$ under the inclusion $\iota^{(n)}$. It is not in general the case that an $R$-module morphism $f \colon M \to N$ between free infinitely generated $R$-modules has an adjoint morphism $f^* \colon N^* \to M^*$ that restricts to a morphism $f^* \colon N \to M \subset M^*$ for these submodules $N \subset N^*$ and $M \subset M^*$. However, this turns out to be the case for a large class of maps that we are interested in here. First we give the following useful formula. \begin{Lem} \label{lem:mainformula} Let $A$ be a Hermitian algebra, and let $M,N$ be free finitely generated $A$--$A$-bimodules with preferred bases. Consider a morphism $g \colon M \to N^{\boxtimes n}$, $n \ge 1$, of $A$--$A$-bimodules which vanishes on all basis elements except a single basis element $c$ of $M$, on which it takes the form \[ g(c)= a_0d_1a_1 \ldots d_n a_n,\] where the $d_i$ again are basis elements of $N$ and $a_i \in A$. Then \begin{eqnarray*} \lefteqn{g^*( \langle a_0'd_1'a_1'\ldots a_{n-1}'d_n'a_n', \x \rangle)=}\\ &=& \begin{cases} \langle (\mathfrak{t}(a_1,a_1')\cdot\hdots\cdot \mathfrak{t}(a_{n-1},a_{n-1}')) \cdot a_0' a_0^\star \cdot c \cdot a_n^\star a_n',\x \rangle, & \text{if } d_i=d_i', \: i=1,\hdots,n,\\ 0, & \text{otherwise} \end{cases} \end{eqnarray*} for arbitrary basis elements $d_i' \in N$ and $a_i' \in A$.
\end{Lem} \begin{proof} This is a simple verification; for a generator $d$ we have $$ g^*( \langle a_0'd_1'a_1'\ldots a_{n-1}'d_n'a_n', \x \rangle)(d)=\langle a_0'd_1'a_1' \ldots a_{n-1}'d_n'a_n', g(d) \rangle.$$ This can be nonzero only if $d=c$, in which case we get \begin{eqnarray*} \lefteqn{\langle a_0'd_1'a_1'\ldots a_{n-1}'d_n'a_n', g(c) \rangle=}\\ &=& \begin{cases} \mathfrak{t}(a_0,a_0')\cdot\mathfrak{t}(a_1,a_1')\cdots \mathfrak{t}(a_n,a_n'), & \text{if } d_i=d_i', \: i=1,\hdots,n,\\ 0, & \text{otherwise.} \end{cases} \end{eqnarray*} On the other hand the expression $$\langle (\mathfrak{t}(a_1,a_1')\cdot\hdots\cdot \mathfrak{t}(a_{n-1},a_{n-1}'))\cdot a_0' a_0^\star \cdot c \cdot a_n^\star a_n',d \rangle$$ is nonzero only when $ d=c$. Moreover, $$\langle (\mathfrak{t}(a_1,a_1')\cdot\hdots\cdot \mathfrak{t}(a_{n-1},a_{n-1}')) \cdot a_0' a_0^\star \cdot c \cdot a_n^\star a_n', c \rangle=\mathfrak{t}(a_0,a_0')\cdot\mathfrak{t}(a_1,a_1')\cdots \mathfrak{t}(a_n,a_n')$$ holds since $\mathfrak{t}(a,b)=\mathfrak{t}(a b^\star,1)$. This concludes the proof. \end{proof} We similarly compute the following relation. \begin{Lem} \label{lem:mainformula2} Let $A$ be a Hermitian algebra, and let $M,N$ be free finitely generated $A$--$A$-bimodules with preferred bases. Consider a morphism $g \colon M \to N^{\boxtimes n}$, $n \ge 1$, of $A$--$A$-bimodules as in Lemma \ref{lem:mainformula}.
For the morphism \[ G=\id_M^{\boxtimes k} \boxtimes g \boxtimes \id_M^{\boxtimes l} \colon M^{\boxtimes (k+1+l)} \to M^{\boxtimes k} \boxtimes N^{\boxtimes n} \boxtimes M^{\boxtimes l}\] we then have \begin{eqnarray*} \lefteqn{G^*( \langle x \boxtimes a_0'd_1'a_1'\ldots a_{n-1}'d_n'a_n' \boxtimes y, \x \rangle)=}\\ &=& \begin{cases} \langle (\mathfrak{t}(a_1,a_1')\cdot\hdots\cdot \mathfrak{t}(a_{n-1},a_{n-1}')) \cdot x \boxtimes a_0' a_0^\star \cdot c \cdot a_n^\star a_n' \boxtimes y,\x \rangle, & \text{if } d_i=d_i', \: i=1,\hdots,n,\\ 0, & \text{otherwise} \end{cases} \end{eqnarray*} for arbitrary basis elements $d_i' \in N$ and $a_i' \in A$, and any $x \in M^{\boxtimes k}$, $y \in M^{\boxtimes l}$. \end{Lem} \begin{proof} Again it is a matter of checking that $(\mathfrak{t}(a_1,a_1')\cdot\hdots\cdot \mathfrak{t}(a_{n-1},a_{n-1}'))\cdot x \boxtimes a_0' a_0^\star \cdot c \cdot a_n^\star a_n' \boxtimes y$ represents the dual $G^*( \langle x \boxtimes a_0'd_1'a_1'\ldots a_{n-1}'d_n'a_n' \boxtimes y, \x \rangle)$. This follows by a straightforward computation similar to the proof of Lemma \ref{lem:mainformula}. \end{proof} \begin{Prop} \label{prp:mainprop} Let $A$ be a Hermitian algebra and let $M,N$ be free finitely generated $A$--$A$-bimodules with preferred bases.
Given a morphism $f \colon M \to N^{\boxtimes n}$ of bimodules where $n>0$, for any $k,l\geq 0$ we consider the induced bimodule morphism \[ F=\id_M^{\boxtimes k} \boxtimes f \boxtimes \id_M^{\boxtimes l} \colon M^{\boxtimes (k+1+l)} \to M^{\boxtimes k} \boxtimes N^{\boxtimes n} \boxtimes M^{\boxtimes l}.\] Then the adjoint \[F^* \colon ( M^{\boxtimes k} \boxtimes N^{\boxtimes n} \boxtimes M^{\boxtimes l} )^* \to (M^{\boxtimes (k+l+1)})^*\] is a morphism of $A$--$A$-bimodules restricting to a morphism of the form \begin{gather*} F^* \colon M^{\boxtimes k} \boxtimes N^{\boxtimes n} \boxtimes M^{\boxtimes l} \to M^{\boxtimes (k+l+1)},\\ F^* =\id_M^{\boxtimes k} \boxtimes f^* \boxtimes \id_M^{\boxtimes l}, \end{gather*} on the submodules defined by the inclusion in Equation~\eqref{inclusion in dual}. \end{Prop} \begin{proof} The fact that $F^{\ast}$ is a morphism of $A$--$A$-bimodules follows from Lemma~\ref{adjoint is a bimodule map}. The latter statement follows directly from Lemma~\ref{lem:mainformula2}. Namely, the morphism $F$ considered here can be written as a finite sum of morphisms $\id_M^{\boxtimes k} \boxtimes g \boxtimes \id_M^{\boxtimes l}$ satisfying the assumptions of Lemma~\ref{lem:mainformula2}. \end{proof} \begin{Rem} Note that in order for the restriction property above to hold, it is crucial that $n>0$. For instance, the property is not satisfied for the adjoint $m^*$ of the multiplication $m: R[G] \otimes R[G] \to R[G]$ for the group ring of an infinite group. \end{Rem} \section{Differential graded algebras over noncommutative rings} In this section we recall some facts about differential graded algebras which are well known for commutative coefficient rings. \subsection{Definitions}\label{sec:definitions} Let $R$ be a unital commutative ring and $A$ a (not necessarily commutative) unital algebra over $R$.
\begin{defn} A {\em differential graded algebra} $({\mathcal A}, \partial)$ over $A$ is a unital $\Z / 2 \mu \Z$-graded algebra $\mathcal{A}$ over $A$, for some fixed $\mu \in \Z_{\geq 0}$, whose differential $\partial \colon {\mathcal A} \to {\mathcal A}$ is a morphism of $A$--$A$-bimodules satisfying the following properties: \begin{enumerate} \item $\partial \circ \partial =0$, \item $\partial$ has degree $-1$, and \item $\partial(xy)= \partial(x)y + (-1)^{|x|} x \partial(y)$ for all homogeneous elements $x,y \in {\mathcal A}$. \label{leibniz} \end{enumerate} \end{defn} The last equality is known as the \emph{graded Leibniz rule}, and tells us that $\partial$ is a derivation. Above, $|x| \in \Z/2 \mu \Z$ denotes the degree of $x$. The graded Leibniz rule (and the fact that $1$ is homogeneous of degree $0$) implies that $\partial (1)=0$. In fact, $\partial (1)= \partial(1 \cdot 1)=\partial(1) \cdot 1 + 1 \cdot \partial(1)= \partial(1) + \partial(1)$, and hence $\partial(1)=0$. Since $\partial$ is a morphism of $A$--$A$-bimodules, this implies that $\partial(a \cdot 1)=0$ for all $a \in A$. In this article we will consider only ``semi-free'' differential graded algebras with finitely many generators, i.e.~those whose underlying algebra is the tensor algebra $${\mathcal A}= \mathcal{T}_A(M)=\bigoplus_{n=0}^\infty M^{\boxtimes n}$$ where $M$ is a finitely generated graded free $A$--$A$-bimodule. Moreover we will always assume that $M$ comes with a specified finite basis ${\mathcal B} = \{c_1, \ldots, c_k \}$ over $A$ consisting of homogeneous elements. The differential is determined by its action on $M$, where it decomposes as \begin{equation}\label{decomposition of differential} \partial|_M = \partial_0 + \partial_1 + \partial_2 + \ldots, \end{equation} where $\partial_n \colon M \to M^{\boxtimes n}$. Clearly $\partial_i =0$ for $i$ sufficiently large because $M$ is finitely generated. We refer to $\partial_0\colon M = M^{\boxtimes 1} \rightarrow M^{\boxtimes 0} = A$ as the \textit{constant part} of $\partial$.
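To make the decomposition \eqref{decomposition of differential} concrete, consider a (hypothetical) generator $c_1$ with
\[ \partial c_1 = a + bc_2 + c_2ac_2, \quad a,b \in A; \]
then $\partial_0 c_1=a$, $\partial_1 c_1=bc_2$, $\partial_2 c_1=c_2ac_2$, and $\partial_n c_1=0$ for all $n \ge 3$.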
From the Leibniz rule it follows that the differential of an element $x= a_0c_{i_1}a_1 \ldots c_{i_n}a_n \in M^{\boxtimes n}$ is \begin{align} \partial x&=\sum_{j=1}^{n}(-1)^{|c_{i_1}|+\cdots+|c_{i_{j-1}}|}a_0c_{i_1}a_1\cdots a_{j-1} \partial(c_{i_j})a_j\cdots c_{i_n}a_n \nonumber \\ &=\sum_{k=0}^\infty\sum_{j=1}^{n}(-1)^{|c_{i_1}|+\cdots+|c_{i_{j-1}}|}a_0c_{i_1}a_1 \cdots a_{j-1} \partial_k(c_{i_j})a_j\cdots c_{i_n}a_n. \label{eq:2} \end{align} Combining this with the fact that $\partial\circ\partial=0$, we get the following relation for the maps $\partial_i$: \begin{equation} \label{eq:leibnizpartial} \sum_{k+l-1=n \atop k>0, \: l \ge 0 } \sum_{i=0}^{k-1} (\sigma^{\boxtimes i}\boxtimes\partial_l\boxtimes \id^{\boxtimes (k-1-i)})\circ \partial_k=0, \end{equation} for any fixed $n \ge 0$. Here $\sigma$ is the automorphism of $M$ which maps a homogeneous element $m$ to $(-1)^{|m|}m$, and $\sigma^{\boxtimes 0}:=\id$. \subsection{Changing the coefficients} \label{sec:change} Assume that we are given a differential graded algebra with coefficients in $A$ and a morphism $A \to B$ of unital $R$-algebras. In certain situations it will be useful to consider a change of coefficients from $A$ to $B$. We recall that $B$ has an induced structure of an $A$--$A$-bimodule and that $M_B:=B \boxtimes M \boxtimes B$ is a free $B$--$B$-bimodule for any free $A$--$A$-bimodule $M$. \begin{Lem}\label{change of ring} Let $(\mathcal{A}, \partial)$ be a semi-free differential graded algebra over $A$ such that ${\mathcal A}$ is isomorphic to $\mathcal{T}_A(M)$ as an algebra, and let $f: A\rightarrow B$ be a unital algebra morphism.
Then there exist: \begin{itemize} \item a unique semi-free differential graded algebra $(\mathcal{A}_B, \partial_B)$ over $B$ such that $\mathcal{A}_B$ is isomorphic to $\mathcal{T}_B( M_B)$ as an algebra, and \item a unique morphism $\hat{f} \colon \mathcal{A} \to \mathcal{A}_B$ of unital graded algebras \end{itemize} satisfying the following properties: \begin{enumerate} \item $\hat{f}$ is the natural morphism of unital $R$-algebras defined uniquely by the requirements that it restricts to $f$ on $M^{\boxtimes 0} = A$, and induces a graded bijection between the generators of $\mathcal{A}$ and $\mathcal{A}_B$, and \item $\hat{f}\circ \partial = \partial_B \circ \hat{f}$, \end{enumerate} i.e.~$\hat{f}$ is a unital DGA morphism. \end{Lem} \begin{proof} The existence of the algebra morphism $\hat{f}$ is immediate. The differential $\partial_B$ on $\mathcal{A}_B$ is defined on the image $\hat{f}(m) \in M_B$ of $m \in M$ to take the value \[ \partial_B( \hat{f}(m))=\hat{f}(\partial(m)).\] Since $\hat{f}$ is surjective on the generators of $\mathcal{A}$ this determines $\partial_B$ uniquely after extending $\partial_B$ using the graded Leibniz rule. Using the Leibniz rule, the fact that $\hat{f}$ is an algebra morphism implies that \[\hat{f}\circ \partial = \partial_B \circ \hat{f}\] is satisfied on all of $\mathcal{A}$. It remains to check that $\partial_B^2=0$. Since we clearly have \[\partial_B^2 \circ \hat{f}=\hat{f}\circ \partial^2=0,\] the fact that $\hat{f}$ is a bijection on the generators implies that $\partial_B^2=0$ is satisfied on all of $\mathcal{A}_B$. \end{proof} The following change of coefficients was used in the construction of the augmentation category in \cite{augcat}, and will also be relevant in this article. Consider the unital $R$-algebra $A_n:=A \otimes_R (\bigoplus_{i=1}^n R e_i)$, where $\bigoplus_{i=1}^n R e_i$ has the ring structure induced by termwise multiplication, i.e.~$e_i\cdot e_j=\delta_{ij}e_i$ and $\sum e_i=1$.
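Concretely, $A_n$ is isomorphic to the direct sum $A^{\oplus n}$ of $n$ copies of $A$ with componentwise multiplication, via $a \otimes e_i \mapsto (0,\ldots,0,a,0,\ldots,0)$ with $a$ placed in the $i$-th slot.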
The morphism $f \colon A \to A_n$ will be the canonical morphism induced by the above tensor product with $\bigoplus_{i=1}^n R e_i$, i.e.~for which $f(1) = e_1+e_2+\hdots + e_n$. \subsection{Augmentations and linearisations} \label{ssec: augmentations and linearisations} The graded Leibniz rule is invariant under conjugation by degree-preserving unital algebra automorphisms. More precisely, let $\phi: \mathcal{A}\rightarrow\mathcal{A}$ be such an automorphism; then $\partial_\phi:=\phi^{-1} \circ \partial \circ \phi$ is a differential and $(\mathcal{A},\partial_\phi)$ is again a differential graded algebra. We denote by $\Pi_0$ the projection of $\mathcal{A}$ to the zero length part $M^{\boxtimes 0} =A$. The constant part of $\partial_\phi$ is given by $\Pi_0 \circ \phi^{-1} \circ \partial \circ \phi$. In particular, if the map $\varepsilon:=\Pi_0 \circ \phi^{-1}$ satisfies $\varepsilon\circ\partial =0$, this constant term vanishes. This motivates the definition of an augmentation: \begin{defn} Let $B$ be a unital $R$-algebra together with a unital morphism of $R$-algebras $f \colon A \to B$. An \emph{augmentation} of $({\mathcal A}, \partial)$ into $B$ is a unital DGA morphism $\varepsilon \colon {\mathcal A} \to B$ for which $\varepsilon|_{M^{\boxtimes 0}}=f$. Here $B$ is regarded as a differential graded algebra with trivial differential and concentrated in degree zero. (Therefore $\varepsilon \circ \partial =0$.) \end{defn} \noindent Note that, in particular, $\varepsilon$ is a morphism of $A$--$A$-bimodules for the $A$--$A$-bimodule structure on $B$ induced by $f$. In \cite{Chekanov_DGA_Legendrian}, Chekanov described a linearisation procedure which uses an augmentation to produce a differential on the graded $A$-module $M$. While Chekanov originally defined linearisation for differential graded algebras over a commutative ring, it is known that his construction works equally well for differential graded algebras with noncommutative coefficients.
We now recall this construction. From a differential graded algebra $({\mathcal A}, \partial)$ together with an augmentation $\varepsilon \colon {\mathcal A} \to B$ we produce a new differential graded algebra as follows. By applying Lemma~\ref{change of ring} to $\varepsilon|_{M^{\boxtimes 0}}=f \colon A \to B$ we obtain a differential graded algebra $({\mathcal A}_B,\partial_B)$ with coefficients in $B$, and using the unital DGA morphism $\hat{f} \colon {\mathcal A} \to {\mathcal A}_B$ we define an augmentation $\varepsilon_B \colon {\mathcal A}_B \to B$ by the requirement that $\varepsilon_B \circ \hat{f}=\varepsilon$ holds on the generators. Using $\varepsilon_B$ we define a unital algebra automorphism $\Phi_\varepsilon \colon {\mathcal A}_B \to {\mathcal A}_B$ determined by \[ \Phi_\varepsilon(m)=m+\varepsilon_B(m), \quad m \in M_B.\] We obtain a differential via the conjugation \[\partial^\varepsilon := \Phi_\varepsilon \circ \partial_B \circ \Phi_\varepsilon^{-1}.\] Let $\Pi_0 \colon {\mathcal A}_B \to M_B^{\boxtimes 0} = B$ be the natural projection; then it follows that $\Pi_0 \circ \partial^\varepsilon = \varepsilon_B \circ \partial_B =0$. The differential graded algebra $(\mathcal{A}_B,\partial^\varepsilon)$ will be said to be obtained from $(\mathcal{A},\partial)$ by \emph{developing with respect to the augmentation $\varepsilon$}. The fact that $(\partial^\varepsilon)_0=0$ will be important in the next section. Using this, Equation \eqref{eq:leibnizpartial} can be rewritten as \begin{equation} \label{eq:leibnizaugmented} \sum_{k+l-1=n \atop k,l>0 } \sum_{i=0}^{k-1} (\sigma^{\boxtimes i}\boxtimes(\partial^\varepsilon)_l\boxtimes \id^{\boxtimes (k-1-i)})\circ (\partial^\varepsilon)_k=0, \end{equation} for any fixed $n>0$. \subsection{The free $n$-copy DGA} \label{sec:semisimple} Let $(\mathcal{A},\partial)$ be a differential graded algebra.
We consider algebras $\mathcal{A}_{ij} = \mathcal{A}$ for $0 \le i,j \le n$ and form a differential graded algebra $(\mathfrak{A}_n, \mathfrak{d})$ where $\mathfrak{A}_n$ is, as a graded algebra, the free product \[ \mathfrak{A}_n:=\bigstar_{0 \le i ,j \le n}\mathcal{A}_{ij},\] and the differential $\mathfrak{d}$ is induced by $\partial$ as follows. If $c$ is a generator of ${\mathcal A}$, we denote by $c^{ij}$ the generator in ${\mathfrak A}_n$ which corresponds to the copy of $c$ in ${\mathcal A}_{ij}$. Then \begin{itemize} \item $\mathfrak{d}_0(c^{ij})= \partial_0(c) \in A$ if $i=j$, and $\mathfrak{d}_0(c^{ij})= 0$ if $i \ne j$, and \item the coefficient of $ a_0d_1^{i_1j_1}a_1d_2^{i_2j_2} \ldots d^{i_nj_n}_na_n$ in $\mathfrak{d}(c^{ij})$ is equal to the coefficient of\\ $a_0d_1a_1d_2 \ldots d_na_n$ in $\partial(c)$, where $d_1, \ldots, d_n$ are generators, provided that $i_1=i$, $j_n=j$, and $j_{k-1} = i_k$ for $2 \le k \le n$ are satisfied; otherwise this coefficient vanishes. \end{itemize} As usual, we extend $\mathfrak{d}$ to the whole algebra $\mathfrak{A}_n$ via the graded Leibniz rule. (Since we have not proved yet that $\mathfrak{d}^2=0$, strictly speaking, $(\mathfrak{A}_n, \mathfrak{d})$ is only a graded algebra with a derivation so far.) A generator $c^{ij}$ will be called \emph{mixed} if $i \neq j$ and \emph{pure} if $i=j$. Observe that $\mathfrak{d}$ preserves the filtration of $\mathfrak{A}_n$ given by the $R$-submodules spanned by those words containing at least a number $m \ge 0$ of mixed generators. A word $a_0d_1^{i_1j_1}a_1d_2^{i_2j_2} \ldots d^{i_nj_n}_na_n$ will be called {\em composable} if $j_{k-1} = i_k$ for $2 \le k \le n$. Words of length zero and one are automatically composable. We define $\mathfrak{A}_n^c \subset \mathfrak{A}_n$ as the sub-$A$--$A$-bimodule generated by composable words. It is immediate to verify that $\mathfrak{d}$ restricts to an endomorphism of $\mathfrak{A}_n^c$.
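For instance, if a (hypothetical) generator $c$ satisfies $\partial c = a_0da_1d'a_2$ for generators $d,d'$ of $\mathcal{A}$, then the above rules give
\[ \mathfrak{d}(c^{ij})=\sum_{k=0}^{n}a_0d^{ik}a_1d'^{kj}a_2, \]
so that, in particular, every word appearing in $\mathfrak{d}(c^{ij})$ is composable.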
To prove that $\mathfrak{d}^2=0$ we use an alternative definition via the change of coefficients. Recall the ring $A_n=A \otimes_R (\bigoplus_{i=1}^n Re_i)$ from the end of Section \ref{sec:change}. For a semi-free differential graded algebra $(\mathcal{A},\partial)$ we denote by $ ( {\mathcal A}_{A_n}, \partial_{A_n})$ the differential graded algebra obtained by the change of coefficients from $A$ to $A_n$ using Lemma \ref{change of ring}. Recall that there is a grading-preserving bijection between the sets of generators of the respective algebras $\mathcal{A}$ and $ {\mathcal A}_{A_n}$. For a generator $c$ of $\mathcal{A}$ we denote $e_ice_j \in {\mathcal A}_{A_n}$ by $c^{ij}$. Note that the $c^{ij}$'s generate $ {\mathcal A}_{A_n}$, albeit not freely. The differential $ \partial_{A_n}$ can now be expressed as follows on any generator $c$. Given that a term $a_0d_1a_1d_2 \ldots d_na_n$ appears in $\partial c$, the sum $$\sum_{i_1, j_n} \sum_{j_{k-1}=i_k}a_0d_1^{i_1j_1}a_1d_2^{i_2j_2} \cdots d_{n-1}^{i_{n-1}j_{n-1}}a_{n-1}d_n^{i_nj_n} a_n$$ appears in the expression $\partial_{A_n} c$. (Recall here that the $e_i$ are in the centre of $A_n$.) This means that the sum $$\sum_{j_{k-1}=i_k}e_{i_1}a_0d_1^{i_1j_1}a_1d_2^{i_2j_2}\cdots d_{n-1}^{i_{n-1}j_{n-1}}a_{n-1}d_n^{i_nj_n}e_{j_n}a_n$$ appears in the expression $\partial_{A_n}(c^{i_1j_n})$. More generally, we have $\partial_{A_n}(c^{ij}) \in e_{i} {\mathcal A}_{A_n}e_{j}$. Since $e_ie_j=\delta_{ij}e_i$, in particular it follows that $\partial_{A_n}c^{ij}$ has no constant term for $i\not=j$. \begin{Lem} We have $\mathfrak{d}^2=0$. \end{Lem} \begin{proof} The first observation is that it is enough to prove that $\mathfrak{d}^2(c^{ij})=0$ for every generator $c^{ij}$ of $\mathfrak{A}_n$. There is an algebra morphism $\pi \colon \mathfrak{A}_n \to {\mathcal A}_{A_n}$ such that $\pi(c^{ij})= e_i c e_j$ and $\pi(1)= e_1 + \ldots + e_n$.
It is easy to check that $\pi \circ \mathfrak{d} = \partial_{A_n} \circ \pi$, and moreover $\pi$ is injective on the sub-bimodule $\mathfrak{A}_n^c$ generated by composable words. Since $\mathfrak{d}^2(c^{ij}) \in \mathfrak{A}_n^c$ for every generator $c^{ij}$, from $ \partial_{A_n}^2=0$ it follows that $\mathfrak{d}^2=0$. \end{proof} From a sequence $\boldsymbol{\varepsilon}=(\varepsilon_0,\ldots,\varepsilon_n)$ of augmentations $\varepsilon_i \colon {\mathcal A} \to A$ we define an algebra morphism $\mathfrak{e} \colon \mathfrak{A}_n \to A$ such that, on a generator $c^{ij}$ of ${\mathcal A}_{ij}$, \[\mathfrak{e} (c^{ij})= \begin{cases} { \varepsilon_i(c)} & \text{if } i=j, \text{ and }\\ 0 & \text{if } i \ne j.\end{cases}\] \begin{Lem} The algebra morphism $\mathfrak{e} \colon \mathfrak{A}_n \to A$ is an augmentation. \end{Lem} \begin{proof} It suffices to check that $\mathfrak{e}(\mathfrak{d}(c^{ij}))=0$ holds on the generators. By construction, we have \[\mathfrak{e} (\mathfrak{d}(c^{ij}))= \begin{cases} \varepsilon_i(\partial(c)) & \text{if } i = j, \text{ and } \\ 0 & \text{if } i \neq j, \end{cases}\] which establishes the claim. \end{proof} We can use the augmentation $\mathfrak{e}$ to produce a differential graded algebra $(\mathfrak{A}_n, \mathfrak{d}^{\mathfrak{e}})$ whose differential has vanishing constant term by applying the procedure described in the previous subsection. \section{$A_\infty$-operations} \subsection{Case I: coefficients in a general noncommutative algebra} \label{subsec:case1} Let $(\mathcal{A},\partial)$ be a differential graded algebra with coefficients in a noncommutative algebra $A$ over a commutative ring $R$. We further assume that ${\mathcal A} =\mathcal{T}_A(M)$ is a tensor algebra over a free $A$--$A$-bimodule $M$ with a preferred basis $\{c_1,\hdots, c_k\}$. Recall that we decompose $\partial|_{M} = \partial_0 + \partial_1 + \hdots $, where $\partial_n \colon M \to M^{\boxtimes n}$. 
We start without the assumption that $\partial_0=0$. There are induced adjoints $(\partial_n)^\vee \colon (M^{\boxtimes n})^\vee \to M^\vee$, and we define \begin{equation} \label{definition of mu_n} \mu_n := (\partial_n)^{\vee} \circ \psi_n \colon (M^\vee)^{\otimes n} \to M^\vee, \quad n \ge 1, \end{equation} where $\psi_n$ is as defined in Section \ref{sec:bimoduledual} above. See Diagram \eqref{mubar1}. \begin{gather} \xymatrix{ M^\vee & (M^{\boxtimes n})^\vee \ar[l]_{(\partial_n)^\vee} \\ M^\vee \ar@{=}[u]^{\psi_1=\id_{M^\vee}} & \ar[l]^{\mu_n} \ar[u]_{\psi_n} (M^\vee)^{\otimes n} }\label{mubar1} \end{gather} Given elements $m_1, \ldots, m_n \in M^\vee$, we will write $\mu_n(m_1, \ldots, m_n)$ and $\mu_n(m_1 \otimes \ldots \otimes m_n)$ interchangeably. The operations $\mu_n$ can be expressed more concretely as follows. \begin{Lem}\label{explicit computation I} If, for every element $c_i$ in the basis of $M$, \[\partial_nc_i=\sum_I \sum_{j=1}^{m_{i,I}}a^{i,I}_{j,0}c_{i_1}a^{i,I}_{j,1} \ldots c_{i_n}a^{i,I}_{j,n}\] with $a^{i,I}_{j,l} \in A$, and $I=(i_1,\hdots,i_n)$ denoting a multi-index with $1 \le i_l \le k$, then \[ \mu_n(b_1c_{i_1},\hdots,b_nc_{i_n})=\sum_{i=1}^k \sum_{j=1}^{m_{i,I}} (a^{i,I}_{j,0}b_1a^{i,I}_{j,1}\ldots a^{i,I}_{j,n-1}b_na^{i,I}_{j,n})\cdot c_i\] for each $n \ge 1$ and any elements $b_i \in A$. \end{Lem} \begin{proof} From Equations \eqref{pippo} and \eqref{pluto} it follows that \begin{align*} &\psi_n(b_1c_{i_1}\otimes \hdots \otimes b_nc_{i_n}) (a_0 c_{j_1}a_1 \ldots a_{n-1} c_{j_n} a_n )=\\&=\begin{cases} a_0 b_1 a_1 \ldots a_{n-1}b_n a_n, & \text{if } c_{i_1}= c_{j_1}, \ldots, c_{i_n}=c_{j_n}, \\ 0, & \text{otherwise}. \end{cases} \end{align*} Then from $$\mu_n(b_1c_{i_1},\hdots,b_nc_{i_n})(c_i)= \psi_n(b_1c_{i_1}\otimes \hdots \otimes b_nc_{i_n})(\partial_n c_i)$$ for all basis elements $c_i$, and from Equation \eqref{pippo} again, the lemma follows. 
\end{proof} \begin{Rem} The maps $\mu_n$ are $R$-multilinear, but do not satisfy any form of $A$-linearity in general. \end{Rem} The maps $(\mu_i)_{i\geq 1}$ do not necessarily satisfy the $A_\infty$ relations because the curvature term $\mu_0:=(\partial_0)^\vee \colon A \to M^\vee$ might be non-vanishing. Given augmentations of $\mathcal{A}$, this can be amended. To an augmentation $\varepsilon \colon {\mathcal A} \to A$ we associate the element \[\varepsilon^\vee:=\varepsilon(c_1)c_1+\hdots+\varepsilon(c_k)c_k \in M^\vee,\] i.e.~the adjoint of $\varepsilon|_M$. \begin{Rem} We should think of $\varepsilon^\vee$ as giving rise to a ``bounding cochain'' in the sense of \cite{fooo} via the infinite sum \[ \sum_{i=0}^\infty (\varepsilon^\vee)^{\otimes i} \] living in the completion $\prod_{i=0}^\infty (M^\vee)^{\otimes i}$ of $\bigoplus_{i=0}^\infty (M^\vee)^{\otimes i}$, where $(\varepsilon^\vee)^{\otimes 0}:=1 \in R$. \end{Rem} \begin{defn}\label{defn: A_infty operations case I} For a sequence $\boldsymbol{\varepsilon}=(\varepsilon_0,\varepsilon_1,\hdots,\varepsilon_n)$ of augmentations $ \varepsilon_i \colon {\mathcal A} \to A$, we define the operations \[\mu^{\boldsymbol{\varepsilon}}_n \colon (M^\vee)^{\otimes n} \to M^\vee\] via the formulas \begin{eqnarray*} \lefteqn{\mu^{\boldsymbol{\varepsilon}}_n(m_1, \ldots, m_n) =}\\ &=&\sum_{i=1}^\infty \sum_{i_0+\hdots+i_n+n=i \atop i_j \ge 0} \mu_i( (\varepsilon_0^\vee)^{\otimes i_0} \otimes m_1 \otimes (\varepsilon_1^\vee)^{\otimes i_1} \otimes \hdots\otimes (\varepsilon_{n-1}^\vee)^{\otimes i_{n-1}} \otimes m_n\otimes (\varepsilon_{n}^\vee)^{\otimes i_{n}}), \end{eqnarray*} where $m_i \in M^{\vee}$ (recall that $(\varepsilon_0^\vee)^{ \otimes 0}=1$ is considered as an element in $R$). \end{defn} See Figure \ref{fig:curve} for a geometric explanation of the terms appearing in $\mu^{\boldsymbol{\varepsilon}}_n(m_1, \ldots, m_n)$.
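Unwinding Definition \ref{defn: A_infty operations case I} in the case $n=1$, the operation $\mu^{(\varepsilon_0,\varepsilon_1)}_1$ applied to $m \in M^\vee$ reads
\begin{align*} \mu^{(\varepsilon_0,\varepsilon_1)}_1(m)=\mu_1(m)&+\mu_2(\varepsilon_0^\vee \otimes m)+\mu_2(m \otimes \varepsilon_1^\vee)\\ &+\mu_3(\varepsilon_0^\vee \otimes \varepsilon_0^\vee \otimes m)+\mu_3(\varepsilon_0^\vee \otimes m \otimes \varepsilon_1^\vee)+\mu_3(m \otimes \varepsilon_1^\vee \otimes \varepsilon_1^\vee)+\cdots, \end{align*}
which is a finite sum since $\mu_i=0$ whenever $\partial_i=0$.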
Given a tuple of augmentations $\boldsymbol{\varepsilon}=(\varepsilon_0,\ldots,\varepsilon_n)$ we denote by $\boldsymbol{\varepsilon}_{ij}=(\varepsilon_i, \ldots, \varepsilon_j)$ the corresponding subtuple, and by $\hat{\boldsymbol{\varepsilon}}_{ij}=(\varepsilon_0, \ldots, \varepsilon_i, \varepsilon_j, \ldots, \varepsilon_n)$ the tuple obtained by omitting the entries strictly between $\varepsilon_i$ and $\varepsilon_j$. \begin{figure}[ht] \labellist \pinlabel $d_0$ at 100 122 \pinlabel $d_1$ at 10 -6 \pinlabel $d_2$ at 55 -6 \pinlabel $d_3$ at 101 -6 \pinlabel $d_4$ at 145 -6 \pinlabel $d_5$ at 190 -6 \pinlabel $a_0$ at 33 65 \pinlabel $a_1$ at 32 14 \pinlabel $a_2$ at 78 14 \pinlabel $a_3$ at 122 14 \pinlabel $a_4$ at 168 14 \pinlabel $a_5$ at 167 65 \endlabellist \centering \includegraphics{courbe} \vspace{5mm} \caption{The ``pseudoholomorphic disc with punctures'' shown here is supposed to give a contribution of $+1$ to the coefficient in front of $a_0d_1a_1d_2a_2d_3a_3d_4a_4d_5a_5$ in the expression $\partial_5(d_0)$. If the generators $d_1,\hdots,d_5$ are all distinct, this pseudoholomorphic disc gives a contribution of $a_0\varepsilon_0(d_1)a_1xa_2\varepsilon_1(d_3) a_3ya_4\varepsilon_2(d_5)a_5 \in A$ to the coefficient in front of $d_0$ in the expression of $\mu^{(\varepsilon_0,\varepsilon_1,\varepsilon_2)}_2(xd_2,yd_4)$.} \label{fig:curve} \end{figure} \begin{Thm} \label{thm:case1} For any $n \ge 1$ and fixed sequence $\boldsymbol{\varepsilon}=(\varepsilon_0,\varepsilon_1,\hdots,\varepsilon_n)$ of augmentations, the above $R$-module morphisms satisfy the relations \begin{eqnarray*} \lefteqn{0=\sum_{n=k-1+l \atop k,l > 0}\sum_{i=1}^k (-1)^\dagger \mu^{\hat{\boldsymbol{\varepsilon}}_{i-1,i+l-1}}_k (m_1,\hdots,m_{i-1},}\\ & & \mu^{\boldsymbol{\varepsilon}_{i-1,i+l-1}}_l(m_i,\hdots,m_{i+l-1}), m_{i+l}, \hdots,m_n), \end{eqnarray*} where $\dagger=|m_1|+\cdots+|m_{i-1}|+(i-1)$, and $m_1,\hdots,m_n \in M^\vee$ are homogeneous elements.
In other words, the above operations form the morphisms and higher operations of an $A_\infty$-category over $R$ whose objects consist of the augmentations $\varepsilon \colon \mathcal{A} \to A$. \end{Thm} \begin{proof} First we handle the case when all augmentations are equal and trivial (i.e.~sending all generators to zero). In this case the $A_\infty$-relations readily follow from Formula \eqref{eq:leibnizaugmented}, which is satisfied when $\partial_0=0$. Also, see the following diagram. \begin{gather} \xymatrix@C=10pc{ (M^{\boxtimes k})^\vee & (M^{\boxtimes (k+l)})^\vee \ar[l]_{(\sum \sigma\boxtimes \partial_l\boxtimes \id)^\vee} \\ (M^\vee)^{\otimes k} \ar[u]^{\psi_k} & \ar[l]^{\sum \sigma\otimes \mu_l\otimes \id} \ar[u]_{\psi_{k+l}} (M^\vee)^{\otimes (k+l)}. }\label{mubar2} \end{gather} The general case can now be reduced to the above case in the following manner. First, given a sequence $\boldsymbol{\varepsilon}=(\varepsilon,\hdots,\varepsilon)$ consisting of a single augmentation, we compute that $(\mu^{\boldsymbol{\varepsilon}}_i)_{i\geq 1}$ associated to $(\mathcal{A},\partial)$ are equal to the morphisms $(\mu_i)_{i\geq 1}$ associated to the DGA $(\mathcal{A},\partial^\varepsilon)$ defined in Subsection \ref{ssec: augmentations and linearisations}. This case thus follows from the above. Finally, for an arbitrary sequence $\boldsymbol{\varepsilon}=(\varepsilon_0,\hdots, \varepsilon_n)$ of augmentations we apply the construction in Section \ref{sec:semisimple}. Namely, we produce the auxiliary ``semisimple'' differential graded algebra $(\mathfrak{A}_n,\mathfrak{d})$ and the auxiliary augmentation $\mathfrak{e} \colon \mathfrak{A}_n \to A$ induced by $\boldsymbol{\varepsilon}$. 
Using the notation in Section \ref{sec:semisimple}, it can be seen that \[ \mu_n^{\boldsymbol{\varepsilon}}(a_1d_1,\ldots, a_nd_n)= \mu_n^{\mathfrak{e}} (a_1d_1^{i_1j_1},\ldots, a_nd_n^{i_nj_n}) \] (after identifying the output with an element of $\mathcal{A}_{i_1j_n} = \mathcal{A}$), where $i_k < j_k$ and $j_k=i_{k+1}$ holds for all indices $k$, and the $d_i$ form a sequence of basis elements. We have thus managed to reduce the general case to the first case. For the sign $\dagger$ it suffices to notice that $(\sigma)^\vee= - \sigma$ according to the sign convention of Section \ref{sec:bimoduledual}. \end{proof} \subsection{Case II: coefficients in a Hermitian algebra} \label{subsec:case2} Let $(\mathcal{A}, \partial)$ be a differential graded algebra with coefficients in a noncommutative algebra $A$ over a commutative ring $R$. As in the previous subsection, we assume that ${\mathcal A}=\mathcal{T}_A(M)$ is a tensor algebra over a free $A$--$A$-bimodule $M$ with a preferred basis $\{c_1,\hdots, c_k\}$. In this subsection we make the assumption that $A$ is a Hermitian algebra. Recall that there are inclusions $\iota^{(n)} \colon M^{\boxtimes n} \to (M^{\boxtimes n})^*$ for each $n\ge 0$, induced by the basis on $M$ and by the bilinear form $\mathfrak{t}$ on $A$. We define the $A$--$A$-bimodule morphisms \[ \mu_n := (\partial_n)^* \colon (M^{\boxtimes n})^* \to M^*\] for each $n \ge 1$. In view of Proposition \ref{prp:mainprop} these morphisms restrict to morphisms $\mu_n := (\partial_n)^* \colon M^{\boxtimes n} \to M$ under the above inclusions. However, since $\partial_0$ is not assumed to be zero, these operations might not give rise to an $A_\infty$ structure in the strict sense. We now proceed to amend this. Using Lemma \ref{lem:mainformula}, the operations $\mu_n$ can in this case be expressed more concretely as follows.
\begin{Lem}\label{explicit computation II} If, for every element $c_i$ in the basis of $M$, \[\partial_nc_i=\sum_I \sum_{j=1}^{m_{i,I}}a^{i,I}_{j,0}c_{i_1}a^{i,I}_{j,1} \ldots c_{i_n}a^{i,I}_{j,n}\] with $a^{i,I}_{j,l} \in A$, and $I=(i_1,\hdots,i_n)$ denoting a multi-index with $1 \le i_l \le k$, then \begin{eqnarray*} \lefteqn{ \mu_n(b_0c_{i_1}b_1\hdots b_{n-1}c_{i_n}b_n)=}\\ &=& \sum_{i=1}^k \sum_{j=1}^{m_{i,I}} \langle c_{i_1} b_1\hdots b_{n-1}c_{i_n}, c_{i_1}a^{i,I}_{j,1} \ldots a^{i,I}_{j,n-1} c_{i_n} \rangle b_0(a^{i,I}_{j,0})^\star c_i(a^{i,I}_{j,n})^\star b_n \end{eqnarray*} for each $n \ge 1$ and any elements $b_i \in A$. \end{Lem} Given an augmentation $\varepsilon \colon \mathcal{A} \to A$, we define the adjoints \[\varepsilon_{(n)}^* \co A^* \to (M^{\boxtimes n})^*\] for each $n \ge 0$, where $\varepsilon_{(0)}^*=\id_{A^*}$. Again these maps are related to the notion of a ``bounding cochain''. As a side remark, we note that \begin{equation} \label{poneron} \varepsilon_{(i)}^*(a)=a \cdot \varepsilon_{(i)}^*(1)=\varepsilon_{(i)}^*(1) \cdot a \end{equation} holds for the $A$--$A$-bimodule structure defined by (\ref{bimodule via star}). \begin{Rem} When the algebra $A$ is free as an $R$-module and the pairing $\mathfrak{t}$ is induced by an orthonormal basis, the ``bounding cochains'' \[\varepsilon_{(n)}^* \co A \to (M^{\boxtimes n})^*\] can be expressed as \begin{equation}\label{epsilon star explicit} \varepsilon^*_{(n)}(a) = \sum_{a_0 d_1 a_1 \ldots a_{n-1}d_n a_n} \mathfrak{t}(a,\varepsilon(a_0 d_1 a_1 \ldots a_{n-1} d_n a_n))a_0 d_1 a_1 \ldots a_{n-1}d_n a_n, \end{equation} where the sum is taken over all words $a_0 d_1 a_1 \ldots a_{n-1}d_n a_n$ such that $a_0, \ldots, a_n$ are elements of the orthonormal basis of $A$ and $d_1, \ldots, d_n$ are elements of the prescribed basis of $M$ (both allowing repetitions).
As a double check we verify Equation \eqref{poneron} for $n=1$ using Equation \eqref{epsilon star explicit}. If $\{ c_1, \ldots, c_k \}$ is the basis of $M$, we denote $\varepsilon_i = \varepsilon(c_i)$. Then Equation \eqref{epsilon star explicit} for $n=1$ can be rewritten as $$\varepsilon^*_{(1)}(a) = \sum_{i=1}^k \sum_{a_+, a_-} \mathfrak{t}(a, a_+ \varepsilon_i a_-) a_+ c_i a_-,$$ where $a_+$ and $a_-$ run through the orthonormal basis of $A$. Now we observe that $$\sum_{a_+} \mathfrak{t}(a, a_+ \varepsilon_i a_-) a_+ = \sum_{a_+} \mathfrak{t}(a a_-^\star \varepsilon^\star_i, a_+)a_+ = a a_-^\star \varepsilon^\star_i$$ and therefore we can rewrite $$\varepsilon^*_{(1)}(a) = \sum_{i=1}^k \sum_{a_-} a a_-^\star \varepsilon^\star_i c_i a_-.$$ On the other hand we have \begin{align*} &a \varepsilon^*_{(1)}(1) = \sum_{i=1}^k \sum_{a_+, a_-} \mathfrak{t}(1, a_+ \varepsilon_i a_-) aa_+c_ia_- =\\&= \sum_{i=1}^k \sum_{a_+, a_-} \mathfrak{t} (a_-^\star \varepsilon_i^\star, a_+) aa_+c_ia_- = \sum_{i=1}^k \sum_{a_-} aa_-^\star \varepsilon_i^\star c_i a_-. \end{align*} This verifies half of Equation \eqref{poneron}; the other half is similar.
\end{Rem} \begin{defn}\label{defn: A-infty operation case II} Given a sequence $\boldsymbol{\varepsilon}=(\varepsilon_0,\varepsilon_1,\hdots, \varepsilon_n)$ of augmentations of ${\mathcal A}$, we define the operations \[\mu^{\boldsymbol{\varepsilon}}_n \colon M^{\boxtimes n} \to M^*, \quad n \ge 1,\] via the formulas \begin{eqnarray} \lefteqn{\mu^{\boldsymbol{\varepsilon}}_n(a_0m_1a_1 \ldots a_{n-1}m_na_n) =} \nonumber \\ &=& \sum \limits_{i=1}^\infty \sum_{i_0+\hdots +i_n+n=i \atop i_j \ge 0} \mu_i( (\varepsilon_0)_{(i_0)}^*(a_0)\boxtimes m_1 \boxtimes (\varepsilon_1)_{(i_1)}^*(a_1) \boxtimes \ldots \boxtimes m_n \boxtimes (\varepsilon_n)_{(i_{n})}^*(a_{n})),\label{ainfinity} \end{eqnarray} where $m_1, \ldots, m_n \in M$ and $a_0, \ldots, a_n \in A$, and $a_0m_1a_1 \ldots a_{n-1}m_na_n \in M^{\boxtimes n}$ is identified with an element in $(M^{\boxtimes n})^*$ by the inclusion $\iota^{(n)} \colon M^{\boxtimes n} \to (M^{\boxtimes n})^*$ (see Equation \eqref{inclusion in dual}). \end{defn} \begin{Lem} The compositions in Formula \eqref{ainfinity} give rise to a well-defined map $$\mu^{\boldsymbol{\varepsilon}}_n \colon M^{\boxtimes n} \to M \subset M^*.$$ \end{Lem} \begin{proof} Since $$\mu^{\boldsymbol{\varepsilon}}_n :=\sum \limits_{i=1}^\infty \sum_{i_0+\hdots +i_n+n=i \atop i_j \ge 0}\big(\big((\varepsilon_0)^{\boxtimes i_0} \boxtimes \id_M \boxtimes (\varepsilon_1)^{\boxtimes i_1} \boxtimes \hdots \boxtimes\id_M \boxtimes (\varepsilon_n)^{\boxtimes i_n} \big)\circ \partial_i \big)^*,$$ the statement follows from Proposition \ref{prp:mainprop}. \end{proof} \begin{Rem} The operations $\mu^{\boldsymbol{\varepsilon}}_n$ are morphisms of $A$--$A$-bimodules by Lemma \ref{adjoint is a bimodule map} and Equation \eqref{poneron}. \end{Rem} The main result of this section is that these operations define an $A_\infty$-category.
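For instance, for $n=1$ and $a_0=a_1=1$, Formula \eqref{ainfinity} specialises to \[ \mu^{(\varepsilon_0,\varepsilon_1)}_1(m)=\sum_{i_0,i_1 \ge 0}\mu_{i_0+i_1+1}\big( (\varepsilon_0)^*_{(i_0)}(1)\boxtimes m\boxtimes (\varepsilon_1)^*_{(i_1)}(1)\big), \quad m \in M, \] which can be viewed as the differential $\mu_1$ deformed by inserting the ``bounding cochains'' $(\varepsilon_j)^*_{(i_j)}(1)$ on either side of the input, in analogy with the bilinearised differential.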
\begin{Thm}\label{thm:caseII} For any $n \ge 1$ and fixed sequence $\boldsymbol{\varepsilon}= (\varepsilon_0,\varepsilon_1,\hdots,\varepsilon_n)$ of augmentations, the operations in Definition \ref{defn: A-infty operation case II} satisfy the following $A_\infty$ relations: \begin{equation*} \sum_{n=k-1+l \atop k,l > 0}\sum_{i=1}^k (-1)^\dagger \mu^{\hat{\boldsymbol{\varepsilon}}_{i-1,i+l-1}}_k(m_1,\hdots,m_{i-1}, \mu^{\boldsymbol{\varepsilon}_{i-1,i+l-1}}_l(m_i,\hdots,m_{i+l-1}), m_{i+l},\hdots,m_n) =0, \end{equation*} where $\dagger=|m_1|+\cdots+|m_{i-1}|+(i-1)$, and $m_1,\hdots,m_n \in M$ are of homogeneous degree. In other words, the above operations form the morphisms and higher operations of an $A_\infty$-category over $A$ whose objects consist of the augmentations $\varepsilon \colon (\mathcal{A},\partial) \to (A,0)$. \end{Thm} \begin{proof} The proof follows mutatis mutandis from the proof of Theorem \ref{thm:case1}. To that end, we just have to check that the equality \[ \id^{\boxtimes i-1} \boxtimes \mu_l \boxtimes \id^{\boxtimes (k-i)} = (\id^{\boxtimes i-1} \boxtimes \partial_l \boxtimes \id^{\boxtimes (k-i)})^*\] is satisfied. \end{proof} \section{A toy example of a DGA}\label{sec: Lenny's example} In this section we discuss a toy example which illustrates the two different $A_\infty$-structures defined. This example was suggested by Lenny Ng and was an inspiration for this paper. Let $A$ be an algebra over $R= \Z/2\Z$ and let $g_1, g_2$ be two elements of $A$ which do not (necessarily) commute. We will consider the differential graded algebra $({\mathcal A}, \partial)$ over $A$ generated by $c_1, \ldots,c_5$ and with differential \begin{eqnarray*} & & \partial c_1 = c_2 g_1 c_4 +c_3,\\ & & \partial c_2 = c_5 g_2,\\ & & \partial c_3 = c_5 g_2 g_1 c_4,\\ & & \partial c_4 = \partial c_5 =0. \end{eqnarray*} It is easily checked that $\partial^2=0$.
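This check can also be carried out mechanically. The following Python sketch (our own illustration, not part of the construction above) models $A$ as the free noncommutative algebra $\Z/2\langle g_1,g_2\rangle$, represents an element as a set of words (a sum mod $2$), and verifies $\partial^2=0$ on the generators:

```python
# Mechanical check of d^2 = 0 for the toy DGA above, over A = Z/2<g1, g2>.
# An element is a set of words (a sum mod 2); a word is a tuple of letters
# from {c1, ..., c5, g1, g2}.

D = {
    'c1': {('c2', 'g1', 'c4'), ('c3',)},   # d(c1) = c2 g1 c4 + c3
    'c2': {('c5', 'g2')},                  # d(c2) = c5 g2
    'c3': {('c5', 'g2', 'g1', 'c4')},      # d(c3) = c5 g2 g1 c4
    'c4': set(),                           # d(c4) = 0
    'c5': set(),                           # d(c5) = 0
}

def d_word(word):
    """Leibniz rule (no signs mod 2): differentiate each c-letter in turn."""
    out = set()
    for i, letter in enumerate(word):
        for term in D.get(letter, set()):
            out ^= {word[:i] + term + word[i + 1:]}  # symmetric difference = + mod 2
    return out

def d(element):
    """Extend the differential additively (mod 2) to a sum of words."""
    out = set()
    for word in element:
        out ^= d_word(word)
    return out

# d^2 vanishes on every generator, hence on all of the DGA:
assert all(d(d({(c,)})) == set() for c in D)
```

For example, $d(c_1)=c_2g_1c_4+c_3$ and applying $d$ once more produces $c_5g_2g_1c_4$ twice, which cancels mod $2$.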
Moreover, $\partial_0$ vanishes and therefore there is a canonical augmentation which sends every generator to zero. \subsection{The $A_\infty$-structure defined in Subsection \ref{subsec:case1} (Case I)} The construction of Subsection \ref{subsec:case1}, performed on the trivial augmentation, gives rise to an $A_\infty$-algebra structure on $M^\vee=\oplus_{i=1}^5 A c_i$. Let $a, a' \in A$. Using Lemma \ref{explicit computation I} we compute the first order operations \begin{eqnarray*} & & \mu_1(ac_1)=\mu_1(ac_2)=\mu_1(ac_4)=0,\\ & & \mu_1(ac_3)=ac_1,\\ & & \mu_1(ac_5)=ag_2\cdot c_2, \end{eqnarray*} and the second order operations \begin{eqnarray*} & & \mu_2(ac_2,a'c_4)=ag_1a' \cdot c_1, \\ & & \mu_2(ac_5,a'c_4)=ag_2g_1a' \cdot c_3, \end{eqnarray*} while $\mu_2(ac_i,a'c_j)=0$ whenever $(i,j) \notin \{(2,4),(5,4)\}$. Finally, $\mu_n \equiv 0$ for all $n \ge 3$. We check that these operations satisfy the $A_\infty$ relations. The only nontrivial relation to verify (i.e. the only one where not all terms vanish) is \begin{eqnarray*} \lefteqn{\mu_1(\mu_2(a_1c_5, a_2c_4))+ \mu_2(\mu_1(a_1c_5), a_2c_4)+ \mu_2(a_1c_5, \mu_1(a_2c_4)) =} \\ &=& \mu_1(a_1g_2g_1a_2c_3) + \mu_2(a_1g_2c_2, a_2c_4) + 0 \\ &=& a_1g_2g_1a_2c_1 + a_1g_2g_1a_2c_1\\ &=& 0. \end{eqnarray*} \subsection{The $A_\infty$-structure defined in Subsection \ref{subsec:case2} (Case II)} In this subsection we assume that $A$ is a Hermitian algebra. The construction of Subsection \ref{subsec:case2}, performed on the trivial augmentation, gives rise to an $A_\infty$-algebra structure on $M$, which we identify with a submodule of $M^*$ by \eqref{inclusion in dual}. Elements in $M^{\boxtimes n}$ can be written, as usual, as linear combinations of terms of the form $a_0c_{i_1}a_1 \ldots a_{n-1}c_{i_n}a_n$, where $a_0, \ldots, a_n \in A$ and $c_{i_1}, \ldots, c_{i_n}$ are elements of the prescribed basis of $M$.
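The Case II computation below relies on the adjunction identity $\mathfrak{t}(h, g_2g_1) = \mathfrak{t}(g_2^\star h, g_1)$. As a hedged sanity check, the following Python sketch verifies this identity exhaustively for one concrete Hermitian structure (our assumption, chosen for illustration): $A=M_2(\Z/2)$ with $a^\star=a^\top$ and pairing $\mathfrak{t}(a,b)=\operatorname{tr}(a^\top b)$ mod $2$.

```python
# Exhaustive check of t(h, g2 g1) = t(g2* h, g1) over A = M_2(Z/2),
# for the assumed Hermitian structure: star = transpose, t(a, b) = tr(a^T b) mod 2.
from itertools import product

def mul(a, b):
    """2x2 matrix product over Z/2."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) % 2 for j in range(2)]
            for i in range(2)]

def star(a):
    """Transpose, playing the role of the star involution."""
    return [[a[j][i] for j in range(2)] for i in range(2)]

def t(a, b):
    """Pairing t(a, b) = tr(a^T b) mod 2."""
    m = mul(star(a), b)
    return (m[0][0] + m[1][1]) % 2

# All 16 matrices in M_2(Z/2); check the identity over all 16^3 triples.
mats = [[[w, x], [y, z]] for w, x, y, z in product(range(2), repeat=4)]
assert all(t(h, mul(g2, g1)) == t(mul(star(g2), h), g1)
           for g1, g2, h in product(mats, repeat=3))
```

The identity holds here simply because $\operatorname{tr}((g_2^\top h)^\top g_1)=\operatorname{tr}(h^\top g_2 g_1)$.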
Since in case II the operations $\mu_n$ are morphisms of $A$--$A$-bimodules, we will give their values only on elements of the form $c_{i_1}a_1 \ldots a_{n-1}c_{i_n}$. Using Lemma \ref{explicit computation II} we compute the first order operations \begin{eqnarray*} & & \mu_1(c_1)=\mu_1(c_2)=\mu_1(c_4)=0,\\ & & \mu_1(c_3)=c_1,\\ & & \mu_1(c_5)=c_2 g_2^\star \end{eqnarray*} and the second order operations, for all $h \in A$, \begin{eqnarray*} & & \mu_2(c_2hc_4)= \mathfrak{t}(h, g_1)c_1, \\ & & \mu_2(c_5hc_4)= \mathfrak{t}(h, g_2g_1) c_3, \end{eqnarray*} while $\mu_2(c_ihc_j)=0$ in all other cases. Finally, $\mu_n \equiv 0$ for all $n \ge 3$. The only nontrivial $A_\infty$ relation to check is \begin{eqnarray*} \lefteqn{\mu_1(\mu_2(c_5hc_4))+ \mu_2(\mu_1(c_5) \boxtimes hc_4) + \mu_2(c_5h \boxtimes \mu_1(c_4)) = } \\ & = & \mathfrak{t}(h, g_2g_1)\mu_1(c_3) + \mu_2(c_2g_2^\star hc_4) + 0 \\ & = & \mathfrak{t}(h, g_2g_1) c_1 + \mathfrak{t}(g_2^\star h, g_1)c_1 \\ & = & 0 \end{eqnarray*} because $\mathfrak{t}(h, g_2g_1) = \mathfrak{t}(g_2^\star h, g_1)$ by the properties of the adjoint. \section{Potential examples of knots distinguished by the constructed $A_\infty$-structures} \label{sec:computation} For computational purposes, the $A_\infty$-algebra is much easier to use for extracting invariants than the full DGA. For instance, the products and higher order Massey products in linearised Legendrian cohomology introduced in \cite{Productstructure} were shown in the same article to be efficient tools for distinguishing a Legendrian knot from its mirror (in cases where the underlying homologies are isomorphic). The latter construction considered an $A_\infty$-structure for coefficients in $\Z_2$.
Assume that there exists a Legendrian knot $\Lambda_n \subset (\R^3,dz-ydx)$ which satisfies the following for some $n>1$: \begin{enumerate}[label=(\roman*)] \item The rotation number of $\Lambda_n$ is zero; \item The bound on the Thurston-Bennequin invariant of $\Lambda_n$ in terms of the Kauffman polynomial of the underlying smooth knot is not sharp. In particular, this means that the Chekanov-Eliashberg algebra of $\Lambda_n$ has no augmentation in $\Z_2$ (see \cite{RutherfordKauffman} for more details); and \item The Chekanov-Eliashberg algebra of $\Lambda_n$ admits a (0-graded) augmentation in $M_n(\Z_2)$. \end{enumerate} The authors expect that such knots can be constructed using the methods from \cite[Theorem 4.8]{Satellites} but, unfortunately, as of now they are not aware of an explicit example. In any case, performing cusp connected sums (see \cite{Etnyre_&_Connected_Sums}) between the Legendrian knots considered in \cite{Productstructure} and the hypothetical Legendrian knot $\Lambda_n$, we obtain Legendrian knots for which our construction can be used as an efficient computational tool; see Proposition \ref{prp:computation} below. First we recall some details concerning the examples from \cite{Productstructure}. For a Legendrian knot $\Lambda$ we denote by $\overline{\Lambda}$ its \emph{Legendrian mirror}, i.e.~its image under the contactomorphism $(x,y,z) \mapsto (x,-y,-z)$. Consider the examples $\Lambda_{k,l,m}$ constructed in the proof of \cite[Theorem 1.1]{Productstructure} (the first part) which satisfy the following. For any triple $k,l,m \ge 1$ it is the case that \begin{itemize} \item the rotation number of $\Lambda_{k,l,m}$ is zero, and the DGA is hence graded in the integers, and \item given that the three numbers $l-m-1$, $m-k+1$, $l-k+1$ are distinct, there is a unique graded augmentation $\varepsilon$ in $\Z_2$. \end{itemize} It follows that the same properties are satisfied for its Legendrian mirror $\overline{\Lambda_{k,l,m}}$.
Given $k,l,m$ satisfying the second property, the knot $\Lambda_{k,l,m}$ is distinguished from its mirror up to Legendrian isotopy by a computation showing that \begin{enumerate} \item the product $$ \mu^{\varepsilon,\varepsilon,\varepsilon}_2 \colon LCH^{l-m-1}_\varepsilon(\Lambda_{k,l,m}) \oplus LCH^{m-k+1}_\varepsilon(\Lambda_{k,l,m}) \to LCH^{l-k}_\varepsilon(\Lambda_{k,l,m})$$ does not vanish identically, while \item the product $$ \mu^{\varepsilon,\varepsilon,\varepsilon}_2 \colon LCH^{l-m-1}_\varepsilon(\overline{\Lambda_{k,l,m}}) \oplus LCH^{m-k+1}_\varepsilon(\overline{\Lambda_{k,l,m}}) \to LCH^{l-k}_\varepsilon(\overline{\Lambda_{k,l,m}})$$ vanishes identically. \end{enumerate} We can use these examples to show the following. \begin{Prop} \label{prp:computation} Assume the existence of a Legendrian knot $\Lambda_n$ for some $n>1$ satisfying conditions (i)--(iii) above. The cusp connected sum $\Lambda_{k,l,m} \# \Lambda_n \subset (\R^3,dz-ydx)$ is a Legendrian knot admitting a (0-graded) augmentation in $M_n(\Z_2)$ but not in $\Z_2$. For suitable $k,l,m > 0$ (depending on the knot $\Lambda_n$) the Legendrian knot $\Lambda_{k,l,m} \# \Lambda_n$ can moreover be distinguished from both $\overline{\Lambda_{k,l,m}} \# \Lambda_n$ and $\overline{\Lambda_{k,l,m} \# \Lambda_n}$ using the $A_\infty$-structure in linearised Legendrian contact homology with coefficients in $M_n(\Z_2)$ as constructed in Section \ref{subsec:case2} (i.e.~Case II). \end{Prop} \begin{proof}[Sketch of proof] The fact that the connected sum has augmentations in $M_n(\Z_2)$ but not in $\Z_2$ was shown in \cite[Lemma 4.3]{EstimNumbrReebChordLinReprCharAlg}. The existence part uses the following explicit construction that we now outline. Given augmentations $\varepsilon_i \colon \mathcal{A}(\Lambda_i) \to A_i$, $i=1,2$, into the unital algebras $A_i$, there is an induced augmentation $(\varepsilon_1 \# \varepsilon_2) \colon \mathcal{A}(\Lambda_1 \# \Lambda_2) \to A_1 \otimes A_2$ determined as follows.
Recall that $$\mathcal{A}(\Lambda_1 \# \Lambda_2)=\mathcal{A}(\Lambda_1) \star \mathcal{A}(\Lambda_2) \star \langle c_0 \rangle$$ holds on the level of generators, where $|c_0|=0$. The induced augmentation is determined uniquely by the requirements that $(\varepsilon_1 \# \varepsilon_2)(c)=\varepsilon_i(c)$ holds on the old generators (using the canonical algebra maps $a \mapsto a \otimes 1_{A_2} \in A_1 \otimes A_2$ and $b \mapsto 1_{A_1} \otimes b \in A_1 \otimes A_2$) while $(\varepsilon_1 \# \varepsilon_2)(c_0)=1=1_{A_1} \otimes 1_{A_2} \in A_1 \otimes A_2$ holds on the new generator. The computations of the DGA of $\Lambda_{k,l,m}$ performed in \cite{Productstructure} can readily be seen to give the following. Consider the construction of the $A_\infty$-structure with coefficients in $M_n(\Z_2)$ as defined in Section \ref{subsec:case2} (i.e.~Case II). The non-vanishing of the product as in (1) again holds for $\Lambda_{k,l,m} \# \Lambda_n$ when using the augmentation $\varepsilon \# \varepsilon_2$ taking values in $M_n(\Z_2)$. It can moreover be seen that (2) is satisfied for \emph{any} pair of graded augmentations in $M_n(\Z_2)$ for the same coefficients, provided that $k,l,m >0$ were chosen appropriately. E.g.~we can choose $k,l,m>0$ so that $l-m-1$, $m-k+1$, $l-k+1$ all are distinct and sufficiently large (depending on the degrees of the Reeb chords of $\Lambda_n$). \end{proof} The following result shows the relation between the linearised Legendrian contact cohomology of a Legendrian knot and its Legendrian mirror. \begin{Lem} Let $\Lambda \subset (\R^3,dz-ydx)$ be a Legendrian knot.
For any pair of augmentations $\varepsilon_i \colon (\mathcal{A}(\Lambda),\partial) \to M_n(R)$, $i=0,1$, there are induced augmentations $\overline{\varepsilon}_i \colon (\mathcal{A}(\overline{\Lambda}),\partial') \to M_n(R)$ for which there is a canonical isomorphism $$ (LCC^\bullet(\Lambda),d^{\varepsilon_0,\varepsilon_1}) \simeq (LCC^\bullet(\overline{\Lambda}),d^{\overline{\varepsilon}_1,\overline{\varepsilon}_0})$$ of graded $R$-bimodules. (This can even be made into an isomorphism of free $M_n(R)$-bimodules, but in this case we must use a non-standard free bimodule structure on the latter where left and right multiplication have been interchanged, while utilising the transpose of a matrix.) \end{Lem} \begin{proof} Recall that there is a canonical grading-preserving bijection between the sets of generators of $\Lambda$ and $\overline{\Lambda}$. Under the corresponding identification $\mathcal{A}(\Lambda) \simeq \mathcal{A}(\overline{\Lambda})$ the differential of the latter takes the form $\partial'(c)=\iota \circ \partial(c)$ on the generators, where $\iota$ is the involution which reverses the letters in each word (this is an isomorphism from a free algebra to its opposite). The statement readily follows if we take $\overline{\varepsilon}_i$ to be defined by \[ \overline{\varepsilon}_i(c):=\left(\varepsilon_i(c)\right)^t,\] the latter denoting the transpose of a matrix in $M_n(R)$ (this is also an involution inducing an isomorphism from the ring of matrices to its opposite ring). \end{proof} \section{Directed systems and consistent sequences of DGAs} Both directed systems and consistent sequences of differential graded algebras appear naturally in applications. In this section we discuss briefly how our constructions can be carried over to these cases. \subsection{The infinitely generated case: a directed system of DGAs} The differential graded algebras considered up to this point have all been finitely generated.
For many of the applications that we have in mind this is also sufficient. Namely, the Chekanov-Eliashberg DGA of a Legendrian submanifold $\Lambda$ is generated by the Reeb chords of $\Lambda$, and a generic Legendrian submanifold has finitely many Reeb chords in most contact manifolds for which the Chekanov-Eliashberg DGA is rigorously defined. For example, this is the case for closed Legendrian submanifolds of the standard contact $\R^{2n+1}$. Nonetheless, for a general contact manifold there may be infinitely many Reeb chords on a generic closed Legendrian submanifold. In this case the Chekanov-Eliashberg DGA is infinitely generated, and hence $M$ is a free $A$--$A$-bimodule with an infinite preferred basis. However, note that to every Reeb chord we can associate an \emph{action} $\ell \in \R_{>0}$, and generically all Reeb chords below a certain action still comprise a finite subset. We write $M^{\ell} \subset M$ for the free and finitely generated $A$--$A$-bimodule spanned by the Reeb chords of action less than $\ell>0$. We write $\mathcal{A}^\ell:= \mathcal{T}_A(M^{\ell})$, and the action-decreasing property of the differential in the Chekanov-Eliashberg DGA implies that each ${\mathcal A}^\ell$ is a sub-DGA of ${\mathcal A}$, and therefore there is an induced directed system \[i_{\ell_1,\ell_2} \colon (\mathcal{A}^{\ell_1},\partial) \hookrightarrow (\mathcal{A}^{\ell_2},\partial), \quad \ell_1 \le \ell_2,\] of finitely generated differential graded algebras. The direct limit of this directed system is the infinitely generated differential graded algebra $(\mathcal{A}, \partial)$, and therefore we can reduce the study of an infinitely generated differential graded algebra endowed with an ``action filtration'' as above to the study of directed systems of finitely generated DGAs.
In this setting the $A_\infty$-categories obtained by applying the constructions in Subsection \ref{subsec:case1} and \ref{subsec:case2} to the direct system $({\mathcal A}^\ell, \partial)$ form an inverse system; namely we have morphisms \begin{gather*} i_{\ell_1,\ell_2}^\vee \colon (M^{\ell_2})^\vee \to (M^{\ell_1})^\vee, \quad \ell_1 < \ell_2, \\ i_{\ell_1,\ell_2}^* \colon (M^{\ell_2})^* \to (M^{\ell_1})^*, \quad \ell_1 < \ell_2. \end{gather*} Using the given choice of basis of $M$, the adjoint morphisms $i_{\ell_1,\ell_2}^\vee$ and $i_{\ell_1,\ell_2}^*$ both correspond to canonical projections onto the submodules spanned by the generators having action less than $\ell_1$. The linearised coboundary maps $\mu^{\varepsilon_0,\varepsilon_1}$ (defined using either of the constructions) make the above inverse systems into inverse systems of complexes; i.e.~the above projection maps are chain maps. The Mittag-Leffler property can be seen to hold for the corresponding inverse system of boundaries, and hence the inverse limit of the homologies is equal to the homology of the inverse limit complex. The respective $A_\infty$-structures constructed for the above inverse system of complexes can then be seen to satisfy $i_{\ell_1,\ell_2}^\vee \circ \mu_n=\mu_n \circ ((i_{\ell_1,\ell_2}^\vee)^{\otimes n})$ and $i_{\ell_1,\ell_2}^* \circ \mu_n=\mu_n \circ ((i_{\ell_1,\ell_2}^*)^{\boxtimes n})$. This gives rise to an $A_\infty$-structure on the inverse limits of $(M^{\ell})^\vee$ and $(M^{\ell})^*$. \subsection{Consistent sequences of DGAs} \label{sec:cons-sequ-dga-1} The construction of $\mathfrak{A}_n$ and $\mathcal{A}_{A_n}$ in Section \ref{sec:semisimple} out of $\mathcal{A}$ leads to families of differential graded algebras with an increasing number of generators. Such sequences were used in \cite{augcat} to upgrade the $A_\infty$ algebra structure from \cite{Productstructure} to an $A_\infty$ category whose objects are the augmentations of $\mathcal{A}$.
This idea was later generalised in \cite{NRSSZ}, where the notion of a \textit{consistent} family of differential graded algebras was introduced. Here we briefly describe this notion and show how it also gives rise to $A_\infty$-categories with noncommutative coefficients. The geometrical construction underlying this algebraic definition will be sketched in Appendix \ref{sec:cons-sequ-dga}. Let $(\mathcal{A},\partial)$ be a semifree differential graded algebra over the noncommutative algebra $A$. Its underlying algebra is thus the tensor algebra $\mathcal{T}_A(M)$ over $A$ of a free $A$--$A$-bimodule $M$ with basis ${\mathcal B}$. An \textit{($m$-component) link grading} (as introduced in \cite{Mis_grading}) on $\mathcal{A}$ is a pair of maps $b,e \colon {\mathcal B} \rightarrow \{1,\hdots, m\}$ such that: \begin{itemize} \item If $c \in {\mathcal B}$ is such that $b(c)\not = e(c)$ then $\partial(c)$ has no constant term, and \item For any $c \in {\mathcal B}$ and any word $a_0c_1a_1\cdots a_{n-1}c_na_n$ appearing in an expression of $\partial(c)$, we have $b(c_{i-1})=e(c_{i})$ for all $i=2,\hdots,n$. \end{itemize} A generator $c \in {\mathcal B}$ is called \textit{pure} if $b(c)=e(c)$ and \textit{mixed} otherwise. On $\mathfrak{A}_n$ and ${\mathcal A}_{A_n}$ there is a link grading defined by $b(c^{ij})=i$ and $e(c^{ij})=j$. Moreover, words $a_0c_1a_1\cdots a_{n-1}c_na_n$ such that $b(c_{i-1})=e(c_{i})$ (i.e. of the type appearing in the differential of a basis element) are called \textit{composable} in \cite{EffectLegendrian}, \cite{augcat} and \cite{NRSSZ}. This terminology comes from the Chekanov-Eliashberg algebra of an $m$-component Legendrian link: the components are labeled by $\{1,\hdots,m\}$, and the maps $b$ and $e$ give the label of the component of the starting point and endpoint of a Reeb chord of the link. Composable words are those which can appear as negative asymptotics of a holomorphic disc with boundary on the cylinder over the link.
Let $(\mathcal{A},\partial)$ be a differential graded algebra equipped with a link grading $(b,e)$ and let $I$ be a subset of $\{1,\hdots,m\}$. We denote by $\mathcal{A}_I$ the subalgebra of $\mathcal{A}$ generated by basis elements $c$ for which $b(c),e(c)\in I$. There is a projection $\pi \colon {\mathcal A} \to {\mathcal A}_I$ such that, for every basis element, $\pi(c) = c$ if $(b(c), e(c)) \in I \times I$, and $\pi(c)=0$ otherwise. It follows from the definition of a link grading that $\partial$ descends to a differential $\partial_I = \pi \circ \partial$ on $\mathcal{A}_I$. For $m$-component Legendrian links this corresponds to taking chords of the sub-link whose components are labeled by $I$ and defining a differential which counts only holomorphic discs which are asymptotic to chords in this sublink. Note that the differential graded algebra $\mathcal{A}_I$ is equipped with a link grading once we identify $I$ with $\{1,\hdots, |I|\}$ by an order-preserving identification. When $I=\{i\}$ we denote $\mathcal{A}_I$ simply by $\mathcal{A}_i$. We now give the definition of a consistent family of differential graded algebras following \cite{NRSSZ}. \begin{defn} A sequence $(\mathcal{A}^{(i)},\partial^{(i)})$ of semifree differential graded algebras with generating sets ${\mathcal B}^{(i)}$ and link gradings $(b^{(i)},e^{(i)})$ taking values in $\{1, \ldots, i \}$ is \emph{consistent} if the following properties are satisfied. \begin{enumerate} \item For every increasing map $f \colon \{1,\hdots, i\}\rightarrow \{1,\hdots,j\}$ there is an induced map $h_f \colon {\mathcal B}^{(i)}\rightarrow {\mathcal B}^{(j)}$ such that, for any generator $c \in {\mathcal B}^{(i)}$, we have $$(b^{(j)}(h_f(c)),e^{(j)}(h_f(c)))=(f(b^{(i)}(c)),f(e^{(i)}(c))).$$ \item For any two composable increasing maps $f$ and $g$ between finite sets, we have $h_{f\circ g}=h_f\circ h_g$.
\item For any $f$ as above, the algebra morphism $h_f \colon \mathcal{A}^{(i)} \rightarrow \mathcal{A}^{(j)}$ coinciding with $h_f$ on generators satisfies the property that $\pi\circ h_f:\mathcal{A}^{(i)}\rightarrow \mathcal{A}^{(j)}_{f(\{1,\hdots i\})}$ is an isomorphism of differential graded algebras. \end{enumerate} \end{defn} Note that increasing maps from $\{1,\hdots, i\}$ to $\{1,\hdots, j\}$ are in one-to-one correspondence with subsets of $\{1,\hdots, j\}$ of cardinality $i$. Figure \ref{fig:3-6copy} shows the geometric meaning of the maps $h_f$ when ${\mathcal A}^{(i)}$ is the Chekanov-Eliashberg algebra of the $i$-copy link of a Legendrian submanifold. See Appendix \ref{sec:posit-augm-repr}. The upshot of this definition is that, since $\mathcal{A}^{(1)}$ is isomorphic to $\mathcal{A}^{(i+1)}_{k}$ for any $k\in \{1,\hdots,i+1\}$, any $(i+1)$-tuple of augmentations $(\varepsilon_0, \ldots, \varepsilon_i)$ of $\mathcal{A}^{(1)}$ gives rise to an augmentation $\boldsymbol{\varepsilon}$ of $\mathcal{A}^{(i+1)}$ which vanishes on the mixed generators and satisfies $\boldsymbol{\varepsilon}(c) = \varepsilon_k(c)$ for any $c \in {\mathcal A}^{(i+1)}_k \cong {\mathcal A}^{(1)}$. \begin{figure}[t] \labellist \pinlabel $d_{2,1}$ at 140 278 \pinlabel $h_{\{2,5,6\}}$ at 370 294 \pinlabel $d_{5,2}$ at 579 278 \endlabellist \centering \includegraphics[width=\textwidth]{3-6copies} \caption{The $3$-copies link as a sublink of the $6$-copy link.} \label{fig:3-6copy} \end{figure} We denote by $M^{(i+1)}$ the free bimodule generated by ${\mathcal B}^{(i+1)}$. Also, for any subset $I$ of $\{1,\hdots,j\}$ of cardinality $i+1$, we denote by $M^{I}$ the corresponding submodule of $M^{(j)}$ (which is identified with $M^{(i+1)}$). We decompose each differential $\partial^{(i+1)}$ restricted to $M^{(i+1)}$ into a finite sum $$\partial^{(i+1)}|_{M^{(i+1)}}= \partial_0^{(i+1)}+\cdots +\partial_k^{(i+1)},$$ where $\partial_l^{(i+1)}$ takes values in $(M^{(i+1)})^{\boxtimes l}$.
Now, given an $(i+1)$-tuple of augmentations of $\mathcal{A}^{(1)}$, inducing an augmentation $\boldsymbol{\varepsilon}$ of $\mathcal{A}^{(i+1)}$, we define the operation $\mu^{\varepsilon_0, \ldots, \varepsilon_i}_{i}$ as follows. We consider the map \begin{equation} \label{eq:3} \xymatrixrowsep{0.2in} \xymatrixcolsep{0.5in} \xymatrix{ M^{(2)}\ar[r]^{h_{\{1,i+1\}}} & M^{(i+1)} \ar[r]^{(\partial^{(i+1)})^{\boldsymbol{\varepsilon}}_i} & (M^{(i+1)})^{\boxtimes i} \ar[d]^{\pi} \\ & & M^{(i+1)}_{\{i,i+1\}}\boxtimes M^{(i+1)}_{\{i-1,i\}}\boxtimes \ldots \boxtimes M^{(i+1)}_{\{1,2\}} \ar[d]^{\simeq} \\ & & (M^{(2)})^{\boxtimes i}.} \end{equation} The map $\pi \colon (M^{(i+1)})^{\boxtimes i} \to M^{(i+1)}_{\{i,i+1\}} \boxtimes \ldots \boxtimes M^{(i+1)}_{\{1,2\}}$ is the restriction of the canonical projection $\pi \colon \mathcal{A}^{(i+1)} \to \mathcal{A}^{(i+1)}_{\{i,i+1\}}\star\ldots\star\mathcal{A}^{(i+1)}_{\{1,2\}}$. This allows us to define $A_\infty$-categories whose objects are the augmentations of $\mathcal{A}^{(1)}$, whose morphism space between any pair of augmentations is a copy of $M^{(2)}$, and whose compositions are defined by taking adjoints of the maps $M^{(2)} \to (M^{(2)})^{\boxtimes i}$ defined in \eqref{eq:3}, using the construction from either Subsection \ref{subsec:case1} or \ref{subsec:case2}. Note that the procedure described in Section \ref{sec:semisimple} which associates to $\mathcal{A}$ and $n$ the differential graded algebra $\mathfrak{A}_n$ produces a consistent sequence of differential graded algebras whose link grading is $(b(c^{ij}),e(c^{ij}))=(i,j)$. This sequence has the property that $M^{(2)} \cong M^{(1)} = M$, and therefore the augmentation category defined from it contains the same information as the differential graded algebra ${\mathcal A}$. However, there exist consistent sequences containing strictly more information than simply that contained in $\mathcal{A}=\mathcal{A}^{(1)}$.
For instance, even in the case when ${\mathcal A}$ is finitely generated, an infinite consistent sequence may still give rise to an $A_\infty$-category with nontrivial operations of arbitrarily high order. In Appendix \ref{sec:cons-sequ-dga} we sketch the geometric construction of \cite{NRSSZ}, which illustrates such a phenomenon.
\section{Introduction} One of the interesting properties of several real-world complex networks --- such as scientific collaborations, brain networks, social and economic networks --- concerns their modular structure. Modularity is important from both topological and dynamical points of view~\cite{fortunato2010community}. Topologically, communities correspond to the partitioning of the network into major groups of reference, revealing much about the possible origin of the communities as well as the behavior of different dynamics in the network. Indeed, modularity influences dynamics, because it tends to constrain dissemination of activation inside each module (e.g.~\cite{costa2009beyond}). It was also shown that the all-important issue regarding the interplay between topology and dynamics is heavily influenced by modularity, in the sense that different types of such a relationship can be observed within communities of the \emph{same} network~\cite{comin2014random}. For all its importance and promise, modularity remains a challenge as a consequence of the difficulty, given a network, of identifying its respective communities~\cite{fortunato2010community}. Indeed, many approaches proposing new methods of community detection have been reported (e.g.~\cite{fortunato2010community}). Part of the difficulty in finding communities is better understood by taking into account the direct analogy between this task and the problem of \emph{clustering}, or \emph{unsupervised classification}, in the research area known as pattern recognition, which is backed by decades of investigations (e.g.~\cite{nasrabadi2007pattern}). The main problem in clustering concerns the diversity of manners in which a cluster can appear or be defined. Basically, a cluster is a subset of the objects such that its elements are more similar (closer) to one another than to objects in the remainder of the set.
The multiplicity of ways to define a cluster can start to be appreciated by observing that there is an infinite number of possible distance types in a geometric space (e.g.\ Euclidean, Mahalanobis, cityblock, etc.). Distinct metrics will possibly be more compatible with specific types of data. Another problem in defining clustering is the criterion to be adopted for deciding whether each of the given objects belongs to a given cluster. This issue can be effectively treated by mapping the objects into a density space, so that clusters become associated with density extremes. However, as a consequence of real-world complexity, these extremes will appear along a whole set of spatial scales. These two main types of problems, the choice of metrics and cluster identification, are directly inherited by community detection approaches. The current work addresses the problem of community detection through an analogy with prototype-based supervised classification. More specifically, a prototype node is assigned to each community and used as a reference from which respective distances are calculated. \section{Materials and methods} \subsection{Our method} Our method requires the definition of reference nodes. These central nodes can be defined using a priori information about the network or according to measurements such as the degree, clustering coefficient, accessibility~\cite{costa2007characterization}, and/or geographic characteristics of the network. Yet another possibility is to have these reference nodes provided by the user. Given the reference nodes, a distance vector $\vec{D}_i$ is defined for each node $i$ in the network. The $j$-th element of $\vec{D}_i$ contains the distance between node $i$ and the reference node $c_j$ times a weight factor $\alpha_{c_j}$. That is \begin{equation} \vec{D}_i = (\alpha_{c_1}d_{ic_1}, \alpha_{c_2}d_{ic_2},...,\alpha_{c_M}d_{ic_M}), \end{equation} where $M$ is the number of reference nodes. The distance $d_{ic_j}$ can be topological or spatial. 
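The distance vectors, and the nearest-reference assignment they feed into, can be sketched in a few lines of NumPy (a minimal sketch; the array names and toy distances are illustrative, and the neighbor-based tie-breaking is omitted here):

```python
import numpy as np

def assign_communities(dist, alphas):
    """Assign each node to the community of its nearest reference node.

    dist   : (n_nodes, n_refs) array of distances d_{i c_j}, topological
             or spatial, between node i and reference node c_j.
    alphas : (n_refs,) weight factors alpha_{c_j}.
    Returns an array of community labels (indices into the reference list).
    """
    D = dist * np.asarray(alphas)   # weighted distance vectors D_i
    return np.argmin(D, axis=1)     # smallest weighted distance wins

# toy example: 3 nodes, 2 reference nodes, unit weights
dist = np.array([[0.5, 2.0],
                 [1.5, 0.2],
                 [3.0, 1.0]])
labels = assign_communities(dist, [1.0, 1.0])
print(labels)  # [0 1 1]
```

With topological distances, `dist` would simply hold the shortest-path lengths from each node to each reference node.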
Each of the reference nodes can be understood as a kind of coordinate axis in a multidimensional space $\Omega$. The consideration of the several distances, taking into account their multiplicity, provides a robust indication of the position of each of the network nodes with respect to the overall network structure, typically inducing respective well-separated clusters in $\Omega$. Each of these clusters tends to correspond to one of the original communities. The assignment of each node to one of the communities is done by taking into account the minimum topological or spatial distance to each of the reference nodes. Ties are broken by comparing the sum of distances of the neighbors of the node to each center. \subsection{Benchmark networks} \label{s:benchmark} A benchmark network model was defined in order to verify whether the method proposed in this work can correctly identify the community structure of spatial networks. The model works as follows: first, a set of reference points $S$ is defined. For each reference point $p$, a Poisson point process~\cite{chiu2013stochastic} with density $\lambda_p$ is performed inside a circle of radius $R_p$ centered on the point, defining the positions of the network nodes. Nodes are then connected according to the Waxman~\cite{WX} algorithm, that is, node $i$ is connected to node $j$ with probability \begin{equation} p_{ij} = e^{-\beta d_{ij}}, \end{equation} where $d_{ij}$ is the Euclidean distance between the nodes and $\beta$ is a constant that controls the network average degree. \section{Preliminary Results} A network with two communities was created using the model described in Section~\ref{s:benchmark}. The parameters used in the model were $R_1=3$, $R_2=1$ and $\lambda_1=\lambda_2=65$. The distance between the two reference points for each community was set to $R_1+R_2$. The created network was partitioned by the proposed method assuming Euclidean distance between the nodes. 
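The benchmark construction just described (Poisson points plus Waxman connections) can be sketched as follows; the value of $\beta$ and the exact placement of the two centers are our choices for illustration, since the text fixes only $R_p$ and $\lambda_p$:

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_disk(center, radius, density, rng):
    """Homogeneous Poisson point process inside a circle."""
    n = rng.poisson(density * np.pi * radius**2)
    r = radius * np.sqrt(rng.random(n))          # sqrt -> uniform in area
    theta = 2 * np.pi * rng.random(n)
    return np.c_[r * np.cos(theta), r * np.sin(theta)] + center

def waxman_edges(pos, beta, rng):
    """Connect nodes i, j with probability exp(-beta * d_ij)."""
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    coin = rng.random(d.shape)
    return np.argwhere(np.triu(coin < np.exp(-beta * d), k=1))

# two communities with R_1 = 3, R_2 = 1, lambda_1 = lambda_2 = 65,
# centers separated by R_1 + R_2; beta = 2.0 is an assumed value
pos = np.vstack([poisson_disk(np.array([0.0, 0.0]), 3.0, 65, rng),
                 poisson_disk(np.array([4.0, 0.0]), 1.0, 65, rng)])
edges = waxman_edges(pos, beta=2.0, rng=rng)
```

Partitioning such a network with the assignment rule above, using Euclidean distances to the two centers and $\alpha_{c_i}=R_i$, should recover the two planted communities.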
The weight parameter was set to $\alpha_{c_i}=R_i$. The result is shown in Figure~\ref{fig:2_wx_a}. The method correctly identified all the nodes in the two communities. \begin{figure*}[ht] \centering \subfigure[]{ \includegraphics[width=0.7\columnwidth]{2_wx.pdf} \label{fig:2_wx_a} } \quad \subfigure[]{ \includegraphics[width=0.9\columnwidth]{2_wx_pca.pdf} \label{fig:2_wx_b} } \caption{Communities detected in the benchmark network. (a) Original network, showing communities 1 and 2 colored in, respectively, green and red. (b) PCA of the node distance vectors.} \label{fig:2_wx} \end{figure*} In order to better visualize the association of the nodes with each community, Principal Component Analysis (PCA) was applied to the node distance vectors. The result is shown in Figure~\ref{fig:2_wx_b}. Two well-separated clusters are observed in the PCA space. The proposed method was also applied to Zachary's karate club network~\cite{zachary1977information}. The two nodes with the largest degree, $v_{i=0}$ and $v_{i=33}$, were set as references for the two communities. The topological distance, instead of a geometric distance, was used and the weights were set to $\alpha_{c_1}=\alpha_{c_2}=1$. The detected communities are shown in Figure~\ref{fig:kara}. \begin{figure} \begin{center} \includegraphics[width=0.9\columnwidth]{karate.pdf} \end{center} \caption{Detected communities in Zachary's karate club network.} \label{fig:kara} \end{figure} \section{Concluding Remarks} Community finding remains a challenging problem in network science. In this note, we propose a distance-based method that takes into account the topological or geometric distances to pre-defined reference points. The membership of each node is determined by taking the community identifier corresponding to the smallest of the obtained distances for that node. The method has been found to perform surprisingly well for several spatial networks. 
Future developments should include the evaluation of the methodology for other spatial networks, as well as its extension to non-spatial structures. \acknowledgments Paulo Paulino thanks CNPq-USP (grant no. 157326/2017-9) for support. C. H. Comin thanks FAPESP (grant no. 15/18942-8) for financial support. L. da F. Costa thanks CNPq (grant no. 307333/2013-2) and NAP-PRP-USP for sponsorship. This work has been supported also by FAPESP grants 11/50761-2 and 2015/22308-2. \bibliographystyle{apsrev}
\section{Introduction} What would be the fate of a small classical object as it falls through the horizon of a large black hole (BH)? Given that the BH is large enough, classical relativity predicts that the object will only suffer a small tidal force. As Hawking's quantum-mechanical model of BH evaporation makes essentially the same prediction \cite{Hawk,info}, this was long thought to be a settled matter. Nonetheless, the tables have since been turned on account of the recent ``firewall'' proposal, which suggests that the object will rather be obliterated due to interactions with high-energy quanta \cite{AMPS}. Also see \cite{Sunny1,info4,Braun} for earlier, related discussions, \cite{MP,Bousso2,Mathur} for important clarifications and \cite{fw1,fw2,fw3,fw4,Sunny2,avery,lowe,vv,pap,Giddings,jac,HH,Avery2,AMPSS,lowefw2,SMfw,pagefw,MT,VRfw,newfw3,newhawk,Harlow} for what is just a sampling of the firewall literature. We have recently developed a new semiclassical model of BH evaporation and would now like to find out what our model predicts for the fate of a falling object. We have so far studied the model both from the perspective of the emitted Hawking radiation as observed from outside the BH \cite{slowleak,slowburn,endgame,density} and from the perspective of pair production near the BH horizon \cite{flameoff,noburn,schwing}. The premise has been to extend Hawking's original models --- respectively, the collapsing matter shell \cite{Hawk} and the eternal BH \cite{info} --- in a way that incorporates the quantum fluctuations of the background geometry and the back-reaction effects of the produced pairs. The main idea of our semiclassical model is that one has to treat the BH as a quantum state \cite{RB} rather than a fixed classical geometry. This induces corrections that are non-perturbative from the perspective of an effective theory of quantum fields on a fixed curved background but can, nevertheless, significantly alter the outcomes. 
The analysis is carried out by introducing a Gaussian wavefunction for the horizon of the (incipient) BH, as motivated in \cite{RJ,RM,RB,flucyou}, and reevaluating all relevant quantities as expectation values. In effect, we take into account that the BH is of finite size and monotonically decaying throughout the process. Here, we are mostly interested in the pair-production point of view. As discussed previously in \cite{flameoff,noburn}, we have realized a picture in which the produced pairs remain in the near-horizon zone for a time that is parametrically short in comparison to the Page time \cite{page} ({\em i.e.}, the midpoint of evaporation in units of entropy). After this shorter interval of time --- which we have called the coherence time $t_{coh}$ --- the negative-energy modes should be viewed as having been subsumed by the BH interior and their positive-energy partners, as having transitioned to the external Hawking radiation. Then, as a consequence of this continual depletion of modes from the near-horizon zone, the number of pairs in this region is of the order of the square root of the BH entropy. This last outcome should, in a qualitative sense, really be regarded as generic. After all, given any model for which the BH mass is finite and decreasing in time due to the emission of particles, the number of pairs in the near-horizon region should be parametrically smaller than the BH entropy. For instance, by the Page time, about half of the particles that will ever be emitted by the BH have already moved far away from the horizon and transitioned into ``real'' Hawking particles. Clearly, then, the number of pairs in the zone cannot be any larger than the remaining number of would-be Hawking particles that are still waiting to be emitted. We will show that this estimate for the number of pairs implies that they induce a parametrically small force on free-falling objects crossing the horizon. This is in direct contrast to the aforementioned firewall proposal. 
The basic idea underlying the firewall proposal is as follows: The standard properties of quantum mechanics, such as unitary evolution and the strong subadditivity of entropy, prohibit the positive-energy modes in the zone from being concurrently entangled with both their negative-energy partners and the older, outgoing Hawking particles. The former is necessary to ensure that the horizon is free of drama, while the latter is needed for the eventual purification of the radiation. This conclusion is indeed correct and requires the produced pairs in our model to deviate significantly from maximal entanglement at times later than the Page time \cite{schwing}. However, this observation by itself does not determine the number of disentangled pairs near the horizon and, hence, the amount of excitation in the near-horizon region above the Hartle-Hawking vacuum. We have previously studied the plight of an in-falling shell of matter (quantum or classical) as it passes through the horizon \cite{noburn}. Our findings revealed that the shell ``sees''~\footnote{More accurately, what an external observer perceives the shell to see.} excitations of the vacuum that are parametrically suppressed relative to the Planckian energy scales that are normally attributed to a firewall. However, this analysis was limited in the following three ways: First, our calculation was based simply on estimating the magnitude of the energy density near the horizon. What is really needed is a physical result that can be directly compared to the situation when no excited modes are present. Second, we would like to re-express the situation as much as possible from the direct perspective of the falling object. This is a non-trivial extension because our framework --- just like Hawking's --- is formulated from the perspective of an external, stationary observer. 
Third, we had not yet accounted for the possibility of strong deviations from maximal entanglement and the resulting properties of the state of the matter fields in the near-horizon zone. As it turns out, we only need some limited information about the near-horizon state and do not need to know the state of the interior of the BH. \subsection{A thought experiment and its outcome} We are proposing a thought experiment that consists of dropping a cylindrical ``stick'' radially towards the BH horizon and asking how its journey is influenced by the disentangled modes within the near-horizon region. We can calculate the total number of pairs in this region and the degree of disentanglement amongst them. These inputs enable us to determine the curvature that is induced by the disentangled modes and then, by way of the geodesic deviation equation, the corresponding force on the stick in terms of a dimensionless parameter, the mechanical strain. Then we can discuss whether the induced strain can be used to detect the position of the horizon and to what extent, if at all, the falling stick is consumed as it falls through this surface. The current approach allows us to discuss the geodesic deviation equation from the perspective of the falling stick. Hence, there is no longer any need to speculate as to what is the precise definition of the firewall, which remains elusive. We can compare the gravitational force delivered by the disentangled modes to that delivered by the background Riemann curvature and discuss the implications. This can be done for Hawking's model, for our semiclassical model and for the Page model \cite{page} as implicitly interpreted in the context of the firewall discussions. As for the force induced by additional interactions, such as those of electromagnetism, it is likely to be subdominant, but this issue should probably be considered in more detail. 
We find that the exerted force is proportional to the number of disentangled modes in the vicinity of the horizon and to the amount of disentanglement. After the Page time, the amount of disentanglement per mode is of order unity and, consequently, the force is proportional to the number of Hawking pairs in the near-horizon region. We find for our model that the force delivered by the modes is parametrically larger than that of the background. On the other hand, the strain on the stick is still parametrically small --- it is suppressed by the ratio of the length of the stick to the BH radius. This smallness can be attributed, once again, to the bounded number of pairs in the near-horizon region. In the Page model \cite{page}, as implicitly interpreted in the firewall discussions, the number of pairs becomes of the order of the BH entropy at the Page time and the degree of disentanglement per mode grows to order unity by the same time \cite{MP}. As a result, the force on the stick becomes Planckian, inducing a strain which is so large that the stick indeed breaks up. In this case, then, we find a phenomenon that leads to the disintegration of the stick and could certainly be interpreted as a ``firewall''. In the Hawking model, on the other hand, the modes are always maximally entangled up to small corrections, and so their impact on the stick is much smaller than that of the background curvature. In summary, the arguments for a firewall in \cite{AMPS} are basically substantiated by our results, since the near-horizon state has to be different than that which is predicted by an effective theory of fields on a fixed BH background. But, at least for our semiclassical model, the degree of deviation from the standard vacuum is much smaller than claimed. 
\subsection{Comparison with a previous analysis} The current treatment was motivated in part by that of Itzhaki \cite{Sunny1}, which can be viewed as the first realization of what only later was dubbed a BH firewall. Itzhaki posed the following question: What is the effect of a gravitational shock wave due to an outgoing Hawking mode on an ingoing test particle? This was computed and the answer was summed over all such shock waves that the test particle encounters on its way to the horizon. Itzhaki found that the net effect is to displace the particle so far from its original (null) trajectory that it never even has the opportunity to cross the horizon --- the BH had already evaporated before the particle ever got there. This is a remarkable finding and obviously a much different one than ours. But we believe that there is no contradiction. Itzhaki's conclusion is based on the exponential squeezing of the modes in the vicinity of the horizon; in other words, the exponentially large near-horizon redshift in Hawking's model. The result is that, before the test particle ever reaches the horizon, it crosses the path of {\em all} the emitted Hawking particles. We, however, view this infinite redshift as an approximation of treating the background geometry as a strictly classical entity \cite{flucyou} (which is tantamount to assuming an infinitely massive BH) and the test particle as strictly point like. In our framework, the quantum fluctuations of the BH regulate this would-be infinite redshift. In fact, as will be shown, the redshift is a ``red herring'' --- the piling-up of modes near the horizon is mitigated by the continual depletion of incipient Hawking particles from the near-horizon zone, insofar as the redshift has been suitably regulated. 
As will also be discussed later, our proposal for the energy density of the disentangled modes is parametrically larger than in Itzhaki's model but, in spite of this difference, the induced gravitational interactions on a finite object are still small. \subsection*{Contents} The rest of the paper is organized as follows: The next section contains a brief explanation of our semiclassical model of BH evaporation. Then, in Section~3, we use a novel physical argument to affirm our previous description of the pair-produced modes in \cite{flameoff}; namely, that the would-be Hawking particles ``escape'' from the near-horizon region after an interval of order $t_{coh}$, which is parametrically shorter than the BH lifetime. The quantitative analysis of the induced strain is found in Section~4, where we give a detailed account of the plight of the stick. Section~5 contains a brief summary. \section{The semiclassical model} In the following, fundamental constants, besides the Planck length $\;l_P=\sqrt{\hbar G}\;$, are usually set to unity except when needed for clarity. We are mostly interested in parametric dependence and so typically neglect numerical factors. We assume, for concreteness, a four-dimensional Schwarzschild BH with metric $\;ds^2=-F(r)dudv+ r^2d\Omega^2\;$, where $\;F(r)=1-R_S/r\;$ and $\;R_S=2l_P^2M\;$ is the Schwarzschild radius. Also, $u,v$ are the retarded and advanced null coordinates, $\;u,v=t\mp r_{\ast}\;$, such that $\;r_{\ast}=\int \frac{dr}{F(r)}\;$ is the Tortoise coordinate. The BH entropy is $\;S_{BH}=\frac{\pi R_S^2}{l_P^2}\;$ and the BH is semiclassical, $\;S_{BH}\gg 1\;$. We use $N$ to denote either the cumulative number of particles emitted from the BH or the cumulative number of pairs produced (these are parametrically the same number) and $N_{pairs}$ to denote the number of pairs in the near-horizon zone at some given time. Our semiclassical model is similar in many respects to the Hawking model of BH evaporation. 
However, there is a significant difference: The BH is treated as a quantum state and its quantum fluctuations are not neglected. In practice, we achieve this goal by assigning the (incipient) BH a Gaussian wavefunction \cite{RM,RB,flucyou} \begin{equation} \label{solutionfin} \Psi_{BH}(R)\;=\;{\cal N}^{-1/2} e^{\hbox{$-\frac{\pi}{2 l_P^2} (R-R_S)^2$}}\;, \end{equation} where $R$ parametrizes the fluctuating position of the quantum horizon and ${\cal N}$ is a normalization constant. We then calculate quantum expectation values rather than work directly with the classical metric. For an observable $\widehat O$, this means calculating \begin{equation} \langle \Psi_{BH}| \widehat{O} |\Psi_{BH}\rangle \;=\; 4\pi \int\limits^{\infty}_{0} dR\; R^2 O(R) \Psi^2_{BH}(R)\;. \label{expval} \end{equation} The small parameter in our model is the ``classicality'' parameter, $C_{BH}=1/S_{BH}$. What is essentially the same parameter also appears in \cite{Dvali1,Dvali2}. Technically, it is introduced by the width of $\Psi_{BH}$. The fact that the classicality parameter does not vanish --- it is rather small but finite --- can result in modifications to physical quantities. The differences are most pronounced for quantities that are either vanishing or divergent in the classical limit $C_{BH}=0$. 
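A quick consistency check of the prescription~(\ref{expval}) in the narrow-width regime: for the Gaussian~(\ref{solutionfin}), the $4\pi R^2$ weight is essentially constant across the support of $\Psi^2_{BH}$, so expectation values reduce to Gaussian moments (this flat-measure approximation is ours). A sympy sketch:

```python
import sympy as sp

R, R_S, l_P = sp.symbols('R R_S l_P', positive=True)
psi2 = sp.exp(-sp.pi * (R - R_S)**2 / l_P**2)      # |Psi_BH|^2, unnormalized

# second moment of the horizon fluctuation, <(R - R_S)^2>
norm = sp.integrate(psi2, (R, -sp.oo, sp.oo))
var = sp.simplify(sp.integrate((R - R_S)**2 * psi2, (R, -sp.oo, sp.oo)) / norm)

# near the horizon, <F^2> ~ <(r - R)^2>/R_S^2, and S_BH = pi R_S^2 / l_P^2
F2 = var / R_S**2
S_BH = sp.pi * R_S**2 / l_P**2
print(sp.simplify(F2 * S_BH))   # 1/2, i.e. <F^2> is of order 1/S_BH
```

The negative powers $\langle F^{-2}\rangle$, $\langle F^{-4}\rangle$ require the regularization discussed in the text and are not reproduced by this naive moment calculation.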
In anticipation of the upcoming sections, we list here several relevant results: \begin{eqnarray} \langle \Psi_{BH} |F^2| \Psi_{BH} \rangle &\equiv & \lim\limits_{r\to R_S} \langle \Psi_{BH} |\left(\frac{r-R}{R}\right)^2 | \Psi_{BH} \rangle \;\simeq\; S^{-1}_{BH} \;, \label{2F} \\ \langle \Psi_{BH} |F^{-2}| \Psi_{BH} \rangle &\equiv & \lim\limits_{r\to R_S} \langle \Psi_{BH} |\left(\frac{R}{r-R}\right)^2| \Psi_{BH} \rangle \;\simeq\; S_{BH}\;, \label{F2} \\ \langle \Psi_{BH} |F^{-4}| \Psi_{BH} \rangle &\equiv & \lim\limits_{r\to R_S} \langle \Psi_{BH} |\left(\frac{R}{r-R}\right)^4| \Psi_{BH} \rangle \;\simeq\; S_{BH}^2\;, \label{4F} \end{eqnarray} where the latter two follow from the use of \begin{equation} \int dx \frac{1}{x^{2n}} e^{-S_{BH} x^2} \;\simeq\; \Gamma\left(\frac{-n+1}{2}\right) S_{BH}^n\;. \end{equation} An important time scale that accounts for the difference between an infinitely massive BH and one with a finite but large mass is the so-called coherence time. This time scale is typically $\;N_{coh}\simeq\sqrt{S_{BH}}\;$ (which reads in Schwarzschild units as $\; t_{coh}\simeq \frac{R^2_S}{l_P}$) and has a dual meaning: First, from the perspective of the external radiation, $N_{coh}$ is the scale of temporal extent for the matter correlations. This comes about because wavefunctions for the BH at different times are orthogonal when this time difference exceeds the coherence scale. Second, from the perspective of the produced pairs, $N_{coh}$ is the time that a mode stays in the near-horizon zone; after which, the negative-energy modes are subsumed by the interior matter and the positive-energy modes escape to become Hawking particles. It is the latter meaning that is significant to the current work and will be substantiated in the section to follow. This is the bare minimum that a reader needs to know about our model for BH evaporation. Our earlier, cited papers can be consulted for more comprehensive discussions. 
\section{The Hawking modes near the horizon} Here, we will reconsider the pair-production picture of BH evaporation. Many of the aspects are common to the Hawking and semiclassical models. We will emphasize these aspects as well as the differences as they turn up in the discussion. If quantities are averaged over sufficiently long time periods, it should be clear that the average number of produced pairs must match the average number of emitted particles. On average, a Hawking particle is emitted once every Schwarzschild time $\;t\sim R_S\;$, and so pairs are produced at the same rate. Then, since $\;R_S\ll t_{coh}=R_S \frac{R_S}{l_P}\;$, we can treat both processes as acting continuously when looking at intervals of coherence time. In this way, the process of BH evaporation entails the continuous production of pairs and the continuous absorption of negative-energy modes. Meanwhile, positive-energy modes are transitioning into the outgoing Hawking radiation as their subsumed negative-energy partners are being absorbed into the interior matter, continually reducing the BH mass. All of these rates are determined, on average, by the BH's thermal rate of emission. Let us begin the analysis from the perspective of a local, free-falling observer. We are interested only in the massless modes with low angular momenta, which eventually do escape from the near-horizon zone. These are the modes that are constrained by the arguments of strong subadditivity while the rest are in their vacuum states. Each of the massless modes will have a momentum of magnitude $E$, where $\;E\sim 1/R_S\;$, as $R_S$ sets the size of the wavelength. 
It then follows, from momentum conservation and from Hawking's realization that the positive- and negative-energy partners are created (respectively) just outside and just inside of the horizon, that their momenta are initially of the form \begin{equation} \vec{p}\;=\; \cos{\alpha} E \hat{Y} + \sin{\alpha} E \hat{U}\;, \end{equation} \begin{equation} \vec{q}\;=\; - \cos{\alpha} E \hat{Y}- \sin {\alpha} E \hat{U}\;, \end{equation} for the positive- and negative-energy mode respectively. Here, $\hat {Y}$ defines a direction along or on the horizon surface (it could be lightlike or spacelike), $\hat {U}$ is the lightlike Kruskal direction off the horizon and we are using the conventions that $U$ increases towards large values of $r$ and $\;0\leq \alpha \leq \pi\;$. The unit vector $\hat {Y}$ is a linear combination of the unit vectors $\hat{\theta}$, $\hat{\phi}$ and the null Kruskal direction along the horizon $\hat {V}$. The exact form of this linear combination is not relevant to the current considerations. In most cases, $\;\sin{\alpha}$, $|\cos{\alpha}|$ are of order unity and will be dropped from now on. The above relations are from a local, free-falling perspective. A stationary observer at large $r$ still detects modes with energy $\;E\sim 1/R_S\;$ but sees the momentum in the $U$ direction as being red-shifted according to \begin{equation} \vec{p}\;=\; E\hat{Y} + e^{-u/2R_S} E \hat{U}\;, \end{equation} \begin{equation} \vec{q}\;=\; - E \hat{Y} - e^{-u/2R_S} E \hat{U}\;, \end{equation} where $u$ is the retarded time coordinate and the redshift factors are meant to account for both the energy and the velocity of the mode. Now, given the standard classical geometry of Hawking's model, $\;u\simeq -2R_S\ln{\left(\frac{r-R_S}{R_S}\right)}\;$~\footnote{The factor of 2 is because $\;t\sim -r_{\ast}\;$ at the future horizon.} and then $\;e^{-u/2R_S}\simeq \frac{r-R_S}{R_S}\to 0\;$ as $\;r\to R_S\;$. 
This makes it clear that, from a stationary observer's viewpoint, the $U$ component of the momentum, $\vec{p}_{U}$, vanishes and so the partners are forever trapped on the horizon. This is consistent with Hawking's description of the pair-production process in \cite{info}, as there an eternal BH spacetime is assumed. But our semiclassical model leads to a different result. The average $\langle \Psi_{BH}| \vec p_U |\Psi_{BH}\rangle$ is still exponentially small; however, the quantum fluctuations of the BH itself will lead to a small but finite variance \cite{flucyou}. Indeed, using the prescription~(\ref{expval}) and the result~(\ref{2F}), we find that \begin{eqnarray} \langle \Psi_{BH}| \vec p_U\cdot\vec p_U |\Psi_{BH}\rangle &=& E^2 \langle \Psi_{BH} |F^2| \Psi_{BH} \rangle \nonumber \\ &=& \frac{E^2}{S_{BH}} +{\cal O}[S_{BH}^{-2}]\;. \label{approx} \end{eqnarray} Now, since the average value of $\vec p_U$ is exponentially small and these modes are outgoing, so that $\vec p_U$ is positive by definition, we can use $\;\sqrt{\langle | \vec p_U\cdot\vec p_U |\rangle}\simeq E\frac{l_P}{R_S}\;$ as an estimate for the velocity of a mode in the direction orthogonal to the horizon, \begin{equation} v_{U}\;=\;\frac{\sqrt{\langle | \vec p_U\cdot\vec p_U |\rangle}}{E}\;\simeq\; \frac{l_P}{R_S} \;. \end{equation} We can then quantify the time of escape by using the above estimate. It follows that the time a Hawking mode takes to reach a distance $R_S$ away from the horizon is given by \begin{equation} t_{\rm escape}\;\simeq\; \frac{R_S}{v_U}\;\simeq\; R_S \frac{R_S}{l_P}\;=\;t_{coh}\;. \end{equation} In effect, the normally divergent factor in the escape-time estimate has been replaced by the large but finite factor $R_S/l_P$. To summarize, the effect of the quantum fluctuations is to make the redshift finite --- it takes the outgoing mode a finite time to escape the near-horizon region. 
The number of actively entangled pairs $N_{pairs}$ is then of the same order as the number of pairs that are produced by the BH over a time period $t_{coh}$; {\em i.e.}, $\;N_{pairs}\sim N_{coh}\;$. As commented upon earlier and detailed elsewhere \cite{flameoff,noburn}, this truncation in the number of partnered modes --- from order $S_{BH}$ to order $\;N_{coh}=S^{1/2}_{BH}\;$ --- is a critical part of our argument for resolving the aforementioned firewall problem. Another important quantity that we would like to introduce is $N_{dis}$, the number of active pairs times the degree of disentanglement per pair ${\cal D}_{dis}$. This is a model-dependent outcome, as it requires a specification of the state of the pairs or, at the very least, some means of quantifying how much this state deviates from the Hartle--Hawking state of maximal entanglement. Let us first recall that, for our model, $N_{pairs}$ is bounded from above by $\;N_{coh}=\sqrt{S_{BH}}\;$ and so $\;N_{dis}\lesssim \sqrt{S_{BH}}\;$. In addition, we have recently \cite{schwing} found a means for estimating ${\cal D}_{dis}$ in our framework. By partitioning the system of Hawking modes into three subsystems --- the already emitted Hawking particles (or early radiation) $A$, the positive-energy modes in the zone (or late radiation) $B$ and their negative-energy partners $C$ --- we have evaluated the entanglement between $A$ and $B$~\footnote{We used Renyi entropies for the analysis, with this choice justified in \cite{schwing}.}. The condition of strong subadditivity of entropy then enforces a lower bound on the degree of disentanglement between a pair of modes in $B$ and $C$. The need for such a bound is quite natural, given that ``monogamy of entanglement'' is in play and that any positive-energy mode in the zone must have some degree of entanglement with subset $A$ if the state of the radiation is to eventually purify. 
What the analysis in \cite{schwing} does is put this idea on a more quantitative level. And so, taking this lower bound as an estimate for the degree of disentanglement per pair, we have \begin{equation} {\cal D}_{dis}\; \simeq \;\frac{ N C_{BH}-1}{1+ N C_{BH}}\;. \label{Fdis} \end{equation} One can notice that ${\cal D}_{dis}$ depends solely on the product $NC_{BH}=N(t)/S_{BH}(t)$, which happens to be the effective perturbative parameter for our framework \cite{slowleak}. One can also see that, after the Page time when $\;NC_{BH}\geq 1\;$, the amount of disentanglement is of order unity, $\;{\cal D}_{dis}\sim 1\;$. As a lower bound, this estimate is not useful at times before the Page time, for which ${\cal D}_{dis}$ is negative. However, when the BH is still young, ${\cal D}_{dis}$ must be a parametrically small number, as any model of BH evaporation should reduce to Hawking's model plus perturbatively small corrections at such early times. We will always be assuming that the BH is older than the Page time, as this is the regime of interest as far as the prospects for a firewall are concerned \cite{AMPS}. \section{The fate of a falling stick} \subsection{Setup of the thought experiment} We will next consider the consequences of our framework for a stick falling through the horizon of a semiclassical BH. By stick, we mean a classical, cylindrical object of length $\ell$ and radius $s$, such that $\;l_P\ll s \lesssim \ell \ll R_S\;$. The local frame of the stick will be denoted by $T$, $X$, $Y$ and $Z$. It is assumed to be falling toward the horizon on a radial trajectory with its long side aligned parallel to the direction of motion --- decreasing $r$ or, locally for the stick, decreasing $X$. In Fig.~1, the falling stick is depicted in a space-time diagram. Before proceeding, we need to make sure that the stick does not disintegrate due to any tidal forces arising from the gravitational background. 
This requirement imposes a constraint on the ratio $\ell/R_S\;$; {\em i.e.}, the ratio of the size of the stick to the Schwarzschild radius of the BH. This comes about because the stick will experience a relative longitudinal acceleration between its ends, for which the magnitude near the horizon is given by $\;\Delta a \sim \frac{GM}{R_S^2}\frac{\ell}{R_S}\;$. To continue with this idea, let us assume that the stick is made of some elastic material; then $\;\Delta a \sim \frac{K}{\rho} \Delta \ell\;$, where $\rho$ is the mass density of the stick and $K$ is its bulk modulus. It follows that, near the horizon, $\;\frac{K}{\rho} \Delta \ell \sim \frac{GM}{R_S^2} ({\ell}/ {R_S})\;$ or $\;{\Delta \ell}/ {\ell} \sim ({\rho}/ {K}) ({c^2}/ {R_S^2})\;$. Here, we have reinstated the speed of light $c$ and used that $R_S=2GM/c^2$. But $\;({K}/ {\rho}) \sim \omega_{\rm stick}^2\sim ({c_{\rm sound}}/ {\ell})^2 \;$, and so the result is $\;{\Delta \ell}/ {\ell} \sim ({c}/ {c_{\rm sound}})^2 ({\ell}/ {R_S})^2\;$. This means that, if we insist upon $\;{\Delta \ell}/ {\ell} < 1\;$, then $\;{\ell}/ {R_S} < {c_{\rm sound}}/ {c}\;$. For known materials, this ratio is no larger than about $10^{-4}$. We can then conclude that, to keep the stick from breaking apart, the ratio ${\ell}/ {R_S}$ has to be parametrically small. 
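The chain of estimates in this paragraph can be condensed into a single line (a schematic consolidation of the relations already given, using $K/\rho \sim \omega_{\rm stick}^2 \sim (c_{\rm sound}/\ell)^2$):

```latex
\Delta a \;\sim\; \frac{GM}{R_S^2}\,\frac{\ell}{R_S}
\;\sim\; \omega_{\rm stick}^2\, \Delta \ell
\quad\Longrightarrow\quad
\frac{\Delta \ell}{\ell} \;\sim\;
\left( \frac{c}{c_{\rm sound}} \right)^2 \left( \frac{\ell}{R_S} \right)^2\;,
```

so that requiring $\Delta\ell/\ell < 1$ reproduces the stated bound $\ell/R_S < c_{\rm sound}/c$.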
\begin{figure} [t] \scalebox{.75} {\includegraphics{pairs.eps}} \caption{Spacetime diagram showing the Hawking pairs near the BH horizon with the falling stick away from the horizon.} \end{figure} \begin{figure} [t] \scalebox{0.7} {\includegraphics{stickfig2.eps}} \caption{Spacetime diagram showing the deformation of the stick induced by the disentangled Hawking modes as it crosses the horizon.} \end{figure} \begin{figure} [t] \scalebox{0.8} {\includegraphics{stickfig3.eps}} \caption{The deformation of the stick induced by the disentangled Hawking modes across the surface $r=R_S$.} \end{figure} For this setup, the acceleration of the stick in the $Y$ (or $Z$) direction can be determined from the geodesic deviation equation for Einstein's gravity. What we want to know, in particular, is the induced gravitational force which is delivered to the stick by the disentangled Hawking modes near the horizon. This force will eventually be parametrized in terms of a dimensionless scalar quantity, the deformation per unit length or the strain $\gamma$. Let us pause to comment briefly on the physical picture. Far away from the horizon --- where the Hawking modes are dilute and their induced force is weak --- the net force exerted on the stick will be negligible. Near the horizon, the situation is different. The effect of induced gravity is still relatively weak, but the part of the stick that is closer to the horizon will feel a stronger force than that which is farther. This is because the Hawking modes become both denser in number and more energetic as the horizon is approached. It is this gradient and the accompanying tidal force that could cause the stick to endure harm. What we first need to know is the stress--energy tensor for the disentangled modes, from which Einstein's equation will give us the associated curvature and then the geodesic deviation equation will yield the tidal force. 
Alternatively, one can deduce the induced change to the metric by treating the Hawking modes as shock waves \cite{Sunny1}. \subsection{The stress-energy tensor of the disentangled modes} Knowledge of the stress tensor requires one to know about the number of disentangled modes $N_{dis}$ and the energy density of each of these. A detailed discussion about $N_{dis}$ will be deferred until later. We will determine the stress--energy tensor with respect to an external (stationary) observer's perspective, but the associated Riemann tensor will be extracted from a manifestly scalar quantity. In this way, the remainder of the calculation can proceed from the stick's own point of view. A simple dimensional analysis suggests that the energy density of a fully disentangled mode is \begin{equation} \varepsilon \;\simeq \;\frac{1}{R_S^4 F}\;. \end{equation} This expression formally diverges near the horizon; however, the fluctuations of the background regulate the divergence in a similar manner to the way in which the stretched horizon does. The reasoning for the estimate for $\varepsilon$ is as follows: According to an external observer, the modes are, up to red-shifting effects, delocalized over a spherical shell of radius $R_S$ and width $R_S$, while the energy scale of any given mode is set by the Hawking temperature, $1/R_S$. Hence, the energy density per mode when the redshift is disregarded is $1/R^4_S$. The Tolman redshift introduces a factor of $F(r)$ into the denominator, as both the inverse of the energy scale and the width of the shell are suppressed by a factor of $\sqrt{F(r)}$. It then follows that \begin{equation} \delta T_{uu}\;\sim \;\delta T_{vv}\;\sim\; \frac{N_{dis}}{R_S^4 F}\; \label{stress} \end{equation} describes the energy flux for the disentangled modes as far as an external observer is concerned. 
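The bookkeeping behind the estimate for $\varepsilon$ can be spelled out. With the Tolman-shifted energy per mode $E \sim 1/(R_S \sqrt{F})$ and the contracted proper width of the shell $\Delta R \sim R_S \sqrt{F}$ (both implicit in the discussion above), one has

```latex
\varepsilon \;\sim\; \frac{E}{R_S^2\, \Delta R}
\;\sim\; \frac{1}{R_S \sqrt{F}} \cdot \frac{1}{R_S^2 \cdot R_S \sqrt{F}}
\;=\; \frac{1}{R_S^4\, F}\;,
```

with one factor of $\sqrt{F}$ supplied by the blueshifted energy and the other by the contracted width.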
We also need to know the background stress tensor as would be measured by the same external observer and, for this, employ the standard Schwarzschild result from \cite{Davies}. Then, {\em very near} to the horizon, $\;T_{uu}\sim T_{uv}\sim 0\;$ whereas $T_{vv}\sim -1/R^4_S\;$. As each of these is parametrically much smaller than the estimate in Eq.~(\ref{stress}) for any $\;N_{dis}>l_P/R_S\;$, the background tensor can be disregarded in the subsequent analysis. The estimate in Eq.~(\ref{stress}) can be compared to that of Itzhaki \cite{Sunny1}. Working with Kruskal coordinates (for which $\;U\sim -R_S e^{-u/2R_S}\;$ and $\;V\sim R_Se^{v/2R_S}$), he proposed that $\;T_{UU}\sim \frac{l_p^2}{U^2}\;$ and $\;T_{VV}\sim 0\;$ near the horizon. But, recalling that $\;T_{uu}\sim U^2 T_{UU}\;$ and $\;T_{VV} \sim T_{vv}\;$, we see that our proposals for $T_{uu}$ and $T_{vv}$ are both a factor of $1/F$ more divergent at the horizon than their counterparts in \cite{Sunny1}, as well as those of the standard Hartle--Hawking and Unruh states. This can be attributed to the disentangled modes for our picture being highly concentrated in the proximity of the horizon, as per the previous section. From our point of view, the modes further removed from the horizon have already ``escaped''. \subsection{The curvature induced by the disentangled modes} \subsubsection{Curvature from the stress-energy tensor} We next want to convert Equation~(\ref{stress}) into a statement about curvature and, as already stated, work with a scalar quantity. The simplest choice of scalar is \begin{equation} G^{ab} G_{ab} \;=\; l_P^4 T^{ab} T_{ab} \;\simeq\; l_P^4 g^{uv}g^{uv}\delta T_{vv}\delta T_{uu}\; \label{above} \end{equation} where $G_{ab}$ is the Einstein tensor. The equality on the left follows from Einstein's equation and the relation on the right follows from the disentangled modes being the dominant source. 
In our semiclassical framework, the expression on the far right should be regarded as an expectation value with respect to the BH wavefunction; this being the context in which the disentangled modes are revealed. What we are then calculating is the expectation value of the scalar $G^{ab}G_{ab}$ with respect to the same wavefunction, and so it is more appropriate to write \begin{eqnarray} \langle\Psi_{BH}| G^{ab} G_{ab}|\Psi_{BH}\rangle &=& l_P^4\langle\Psi_{BH}| g^{uv}g^{uv}\delta T_{vv}\delta T_{uu}|\Psi_{BH}\rangle \nonumber \\ &\simeq& \;N_{dis}^2\frac{l_P^4}{R_S^8} \langle\Psi_{BH}| \frac{1}{F^4}|\Psi_{BH}\rangle \nonumber \\ &\simeq & \frac{N^2_{dis}}{R_S^4}\;, \label{Riemsq} \end{eqnarray} where the middle line follows from Eq.~(\ref{stress}) and the last line follows from taking the near-horizon limit along with Eq.~(\ref{4F}). One can now get a first hint about the fate of the falling stick by looking at various possibilities for $N_{dis}$. Clearly, the largest possible value for $N_{dis}$ is $S_{BH}$. This assumes that a finite fraction of all the modes that were ever emitted by the BH remain in the vicinity of the horizon. In this case, we find from Eq.~(\ref{Riemsq}) that $\;\langle G^{ab} G_{ab} \rangle_{\Psi_{BH}} \sim \frac{1}{l_P^4}\;$, and so it is likely that the stick disintegrates before it reaches the horizon. But, for our semiclassical model and for an old-enough BH, $\;N_{dis}\sim N_{pairs} \sim \sqrt{S_{BH}}\;$, for which $\;\langle G^{ab} G_{ab} \rangle_{\Psi_{BH}} \sim \frac{1}{R_S^2 l_P^2}\;$. Of course, if $N_{dis}$ is of order unity, then $\;\langle G^{ab} G_{ab} \rangle_{\Psi_{BH}} \sim \frac{1}{R_S^4}\;$, which cannot be distinguished from the background curvature. 
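Using $S_{BH} \sim R_S^2/l_P^2$, the three possibilities just surveyed can be collected as follows (a restatement of the estimates above, not a new computation):

```latex
\langle G^{ab} G_{ab} \rangle_{\Psi_{BH}}
\;\sim\; \frac{N_{dis}^2}{R_S^4}
\;\sim\;
\begin{cases}
1/l_P^4\;, & N_{dis}\sim S_{BH}\;, \\[1mm]
1/\left(R_S^2\, l_P^2\right)\;, & N_{dis}\sim \sqrt{S_{BH}}\;, \\[1mm]
1/R_S^4\;, & N_{dis}\sim 1\;.
\end{cases}
```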
Let us next observe that, because of the Ricci scalar contribution in the Einstein tensor $\;G^{a}_{\;\;b}=R^{a}_{\;\;b}-\frac{1}{2}Rg^{a}_{\;\;b}\;$, all diagonal components of the (induced) Riemann tensor will be of roughly the same magnitude; meaning that the diagonal components of $R^{a}_{\;\;b}$ will scale with $N_{dis}/R_S^2$. It can then be deduced that, in the stick's own frame where the metric is regular, the root-mean-square (RMS) value of the Ricci curvature is given by \begin{equation} R^{A}_{\;\;\;B} \;\simeq\; \frac{N_{dis}}{R_S^2}\delta^{A}_{\;\;B}\;, \label{Riem} \end{equation} for $\;A,B=\left\{T,X,Y,Z\right\}\;$. One can immediately see that $R^{A}_{\;\;\;B}$ vanishes for the Hawking model (up to the implied background contribution) since $\;N_{dis}=0\;$ must be true in this case. \subsubsection{Curvature from the shock wave approximation} There is another way to quantify the effect of the disentangled modes acting on the stick. This would be, following Itzhaki \cite{Sunny1}, to treat the modes as shock waves and estimate the change in the metric and the curvature due to the waves crossing the stick. It is appropriate to use Kruskal coordinates for this calculation if it is to be from the stick's own perspective. A shock wave of energy $\;E\sim 1/R_S\;$ propagating outwards along the ray $U=U_0$ will change $g_{UU}$ by an amount $\;\delta g_{UU}\sim G E \delta (U-U_0)\sim l_P^2 /R_S\;\delta(U-U_0)\;$. The large Tolman blueshift for the modes in our model can be incorporated by the estimate $\;\delta(U-U_0)\sim 1/U\;$, with $U$ meant to be within a few Planck lengths from the horizon where $\;U\sim 0\;$. Then the total displacement will go as $\;\delta g_{UU}\sim N_{dis} l_P^2/R_S U\;$ (and a similar contribution to $\delta g_{VV}$ for the inward-moving modes). 
It can be verified that the deformed metric induces a near-horizon curvature of order \begin{equation} R^{V}_{\;\;U}\;\sim\; -\frac{1}{R_S}\frac{\partial \delta g_{UU}}{\partial U} \;\sim\; N_{dis} \frac{l_P^2}{R_S^2} \frac{1}{U^2}\;, \end{equation} and similarly for $R^{U}_{\;\;V}$. Then, since $\;U\sim R_S F\;$, we may use Eq.~(\ref{F2}) to conclude that the RMS value of the Ricci tensor is given by \begin{equation} R^{V}_{\;\;U}\;\sim\; \frac{N_{dis}}{R_S^2}\;, \end{equation} which is in perfect agreement with the estimate from Eq.~(\ref{Riem}). \subsection{The induced strain} Let us next recall the geodesic deviation equation and apply it to the current setup, \begin{equation} \frac{d^2(\Delta x)^{A}}{d\tau^2}\;=\; R^{AB}_{\;\;\;\;\;CD}V_B V^C (\Delta x)^D \;, \end{equation} where $({\Delta x})^{D}$ describes the spatial extent of the stick --- so that $\;(\Delta x)^{X}=\ell\;$, $\;(\Delta x)^{Y}=(\Delta x)^{Z}= s\;$ --- and $V^A$ is the velocity vector for the stick in terms of proper time $\tau$. Using that the velocity vector for the stick is $\;V^{A}=-\beta\delta^{A}_{\;\;X}\;$ for some $\;\beta < 1\;$, we have \begin{eqnarray} \frac{d^2(\Delta x)^{A}}{d\tau^2} &=& \beta^2 R^{AX}_{\;\;\;\;\;XD} (\Delta x)^D \nonumber \\ &\simeq& \beta^2 \left. R^{A}_{\;\;\;D} (\Delta x)^D \right|_{A,D\neq X}\;, \end{eqnarray} where the second line follows from $R^{AX}_{\;\;\;\;\;BX}$ being the same order as $R^{A}_{\;\;\;B}$. Next, substituting Eq.~(\ref{Riem}) for the Ricci tensor, we obtain \begin{equation} \frac{d^2(\Delta x)^{A}}{d\tau^2}\; \simeq\; \beta^2 \left.\frac{N_{dis}}{R_S^2}(\Delta x)^A\right|_{A\neq X}\; \end{equation} or, after integrating twice, \begin{equation} \frac{\delta (\Delta Y)} {\Delta Y} \; \simeq\;(\Delta\tau)^2 \beta^2 \frac{N_{dis}}{R_S^2}\; \label{geo} \end{equation} and similarly for $Z$. Here, $\delta(\Delta Y)$ means the RMS deformation of the stick in the $Y$ direction, so that the magnitude of the left-hand side is the strain $\gamma$. 
At any given time, only a small (about Planck-sized) segment of the stick is exposed to the potentially dangerous near-horizon modes. For this reason, it is appropriate to start with the force acting on a segment of length $\;\Delta X\sim \Delta\;$ with $\Delta \gtrsim l_P$. The induced deformation on the stick is depicted in a spacetime diagram in Fig.~2 and at a fixed time in Fig.~3. The proper time that it takes this segment to pass through the near-horizon zone is then $\;\Delta \tau \sim \Delta/\beta\;$. Given these inputs, Eq.~(\ref{geo}) translates into \begin{equation} \gamma_{\Delta} \;\simeq\; N_{dis}\frac{\Delta^2}{R_S^2}\;. \end{equation} To estimate the total strain endured by the stick, we will assume the ``worst-case scenario'', in which the individual deformations add coherently. Then the total strain is simply $\;\ell/\Delta\gtrsim \ell/l_P\;$ times the previous result, \begin{equation} \gamma_{stick} \;\lesssim\; N_{dis}\frac{l_P\ell}{R_S^2}\;. \end{equation} The actual strain will depend on whether or not the stick oscillates, its speed of sound and so forth. For instance, a more realistic estimate might rather be to add in quadrature the strains on each part of the stick. Then the result would be the RMS value $\;\gamma_{stick} \sim \sqrt{\frac{\ell}{\Delta}\left(\gamma_{\Delta}\right)^2}\;$, but this (or any other) modification would only weaken the previous estimate. \subsection{The induced strain in different models} Let us start with the case which is implicitly based on our previous attempt \cite{noburn} at quantifying the effects of a firewall. There, we incorrectly estimated the disentanglement per mode ${\cal D}_{dis}$ as being equal to the product $\;N_{coh}C_{BH}\ll 1\;$. Then $\;N_{dis}< 1\;$ and the strain on the stick is \begin{equation} \gamma_{stick} \;\simeq \frac {\ell l_P}{R_S^2}\ll 1\;\;\;\;\;\;[{ N_{dis}< 1}]\;, \end{equation} which is vanishing in the classical limit. 
This outcome explains the underestimation in our previous study. Let us now consider what happens in our semiclassical model when the more accurate estimate of ${\cal D}_{dis}$ in Eq.~(\ref{Fdis}) is utilized. Then, after the Page time, $\;{\cal D}_{dis}\simeq 1\;$ and so $\;N_{dis}\sim N_{pairs}\sim\sqrt{S_{BH}}\sim \frac{R_S}{l_P}\;$. It follows that \begin{equation} \gamma_{stick} \;\simeq \frac {\ell}{R_S} \ll 1\;\;\;\;\;\;[ N_{dis}\sim \sqrt{S_{BH}}]\;. \label{track} \end{equation} This is clearly a small number, but how small? As we have seen before, if this experiment is to be conducted in a reasonable way, then this ratio is constrained by $\;{\ell}/ {R_S}<{c_{\rm sound}}/ {c}\lesssim 10^{-4}\;$. So, in this case, despite the fact that the energy density is parametrically larger than $1/R_S^4$, the physical effect on the stick is still remarkably small. In other words, the equivalence principle is preserved. Finally, let us consider the Page model, as interpreted in the context of the firewall problem and, in particular, after the Page time. The usual interpretation of the Page model is that the number of pairs near the horizon is limited only by the original BH entropy and each of these has order-one disentanglement \cite{MP}. Hence, $\;N_{dis} \sim S_{BH}\;$ and one then obtains \begin{equation} \gamma_{stick} \;\simeq \frac {\ell}{l_P}\gg 1\;\;\;\;\;\;[{ N_{dis}\sim S_{BH}}]\;. \end{equation} Such a large strain indicates that the stick is obliterated on its journey through the near-horizon region. This outcome can best be viewed as further evidence that, given the assumptions of \cite{AMPS}, a firewall is indeed an inevitable consequence. \section{Summary} Using a simple thought experiment, we have investigated how the fate of an in-falling classical object passing through the horizon depends on the state of the near-horizon Hawking radiation. 
We verified that our semiclassical framework for BH evaporation and pair production does not lead to a conflict with the equivalence principle of general relativity (while being consistent with standard quantum theory \cite{schwing}). In particular, it was shown that, as long as the experiment of dropping an object through the near-horizon region can be safely carried out, the disentangled Hawking modes will do nothing further to jeopardize the serenity of the journey. This is true in spite of the disentanglement per mode being of order unity, as required for information to escape from the BH, and can be attributed to the Hawking pairs having an effective lifetime that is parametrically smaller than the Page time. On the other hand, the Page model, as normally interpreted in the firewall literature, does lead to a conflict with the equivalence principle, thus substantiating the arguments of \cite{AMPS} and others. \section*{Acknowledgments} We thank Sunny Itzhaki for many useful discussions and insights. The research of RB was supported by the Israel Science Foundation grant no. 239/10. The research of AJMM received support from an NRF Incentive Funding Grant 85353, an NRF Competitive Programme Grant 93595 and Rhodes Research Discretionary Grants. AJMM thanks Ben Gurion University for their hospitality during his visit.
\section{Introduction} \label{sec:intro} Axions are some of the best-motivated extensions to the Standard Model (SM). The simplest such extension, the QCD axion, was originally proposed to solve the strong CP problem~\cite{Peccei:1977hh,Peccei:1977ur,Weinberg:1977ma,Wilczek:1977pj}, but it has since been realized that axions are common in many theories beyond the SM (BSM)~\cite{Witten:1984dg,Banks:1996ss,Svrcek:2006yi}. One particularly important example is string theory, which generically predicts a large number of light axions coupled weakly to the SM~\cite{Arvanitaki:2009fg}. The possibility of such a ``string axiverse'' is of particular interest because it offers a potential low-energy window into extremely high-energy physics. The simplest nonthermal production mechanism for a cosmological abundance of axions is the misalignment mechanism~\cite{Preskill:1982cy,Abbott:1982af,Dine:1982ah,Turner:1983he,Marsh:2015xka}. Any axion with a mass lighter than the Hubble scale during inflation would be seeded in an approximately homogeneous state displaced from the vacuum. It would then remain frozen at this ``misaligned'' field value until the expansion rate drops below its mass, at which point it begins to coherently oscillate about the minimum of its potential. Barring substantial sources of isocurvature, axions have large-scale density perturbations that track the adiabatic fluctuations also seeded during inflation and are thus a viable candidate for the observed dark matter (DM) or a subcomponent thereof. An axion's potential is generically nonlinear, but at late times all axions with a mass $m$ much larger than the present-day Hubble rate ($m \gg H_0$) oscillate near the bottom of their potential and may be treated as free, massive fields. 
This is not, however, a valid assumption at early times, and it has become increasingly apparent that nonlinearities in an axion's potential can have an outsized impact on many late-time observables (see, e.g., Refs.~\cite{Daido:2015bva,Kitajima:2018zco,Arvanitaki:2019rax,Co:2019jts,Cyncynates:2021xzw,Co:2020dya, Eroncel:2022vjg,Eroncel:2022efc}). If the dark matter comprises a single axion, these early-time dynamics can strongly enhance structure on scales that enter the horizon when the Hubble rate $H$ is approximately the axion mass $m$~\cite{Arvanitaki:2019rax}. More generally, a string axiverse may consist of many axions interacting with each other through a joint potential, and recent work has shown that when any two of these have similar masses (within a factor of roughly 2) a new type of efficient, resonant energy transfer is possible~\cite{Cyncynates:2021xzw}. This mechanism, dubbed ``friendship'' due to the necessary mild coincidence of masses, transfers energy from an axion with a high decay constant to one with a lower decay constant. Since an axion's couplings to the SM are generically inversely proportional to its decay constant, the mechanism boosts the abundance of the more strongly coupled axion. In other words, friendly axion dark matter can be significantly more visible to direct detection experiments than would be expected for either axion individually. In this paper, we follow up on the work of Ref.~\cite{Cyncynates:2021xzw} with a suite of $3+1$ dimensional numerical simulations, corroborating its findings and extending the results to the strongly nonlinear regime. As anticipated in that work, large spatial inhomogeneities significantly modify the results of a homogeneous analysis. 
Nonperturbative fluctuations collapse into dense \textit{oscillons}, nontopological field configurations bound by self-interactions~\cite{kudryavtsev1975solitonlike,Makhankov:1978rg,Gleiser:1993pt,Kolb:1993hw,Salmi:2012ta,Amin:2011hj,Kawasaki:2019czd,Olle:2020qqy,Zhang:2020bec,Cyncynates:2021rtf}. The oscillons quench the resonant amplification and mediate energy transfer between the friendly pair, leading to approximate energy density equipartition over a broad range of parameters. In contrast to expectations from a homogeneous analysis, the enhanced visibility of one axion therefore does \textit{not} come at the expense of the other's detectability. In sum, nonlinear dynamics make the friendly axion model both more predictive (by being less parameter-dependent) and uniquely identifiable (because both axions would be detectable). This paper is divided as follows. In \cref{sec:review} we review the friendly axion model and results within the spatially homogeneous approximation. \cref{sec:results} presents the extension of these results into the nonlinear regime using numerical simulations, with a primary focus on the late-time abundance as relevant to direct detection experiments. We also investigate gravitational wave signatures in these scenarios, which, while not promising if the friendly axions make up all of the dark matter, are relevant for hyperlight subcomponents. Finally, we study a novel driving effect in which oscillons resonantly siphon energy from the axion background, parametrically enhancing their lifetime. We conclude in \cref{sec:discussion}, putting this work into the broader context of the landscape of nonlinear axion models. For completeness and ease of readability, we relegate an extended discussion of methodology and additional results to the appendices. 
\cref{app:numerical-details} enumerates the system of evolution equations and the details of our numerical implementation, and \cref{app:oscillons} expands upon our discussion of bound axion states. \section{Review of friendly axions} \label{sec:review} As a concrete and illustrative model, Ref.~\cite{Cyncynates:2021xzw} focuses on a simple two-axion potential with two instanton contributions:\footnote{ Throughout, we work in units where $\hbar = c = 1$. We also define the reduced Planck mass $M_\mathrm{pl} = 1 / \sqrt{8 \pi G} \approx 2.44 \times 10^{18} \, \mathrm{GeV}$. } \begin{align} \label{eq:twoAxionPotentialPhi} \begin{split} V(\phi_S,\phi_L) &= \Lambda_1^4 \left[ 1 - \cos \left( \frac{\phi_S}{f_S} + \frac{\phi_L}{f_L} \right) \right] \\ &\hphantom{{}={}} + \Lambda_2^4 \left( 1 - \cos \frac{\phi_L}{f_L} \right). \end{split} \end{align} The canonically normalized axion field variables $\phi_S$ and $\phi_L$ are naturally recast as angular variables via the definition $\theta_S \equiv \phi_S / f_S$ and $\theta_L \equiv \phi_L / f_L$. Redefining $\Lambda_1^4 \equiv m^2 f^2$ and $\Lambda_2^4 \equiv \mu^2 m^2 \mathcal{F}^2 f^2$, the axion masses are\footnote{ The interaction-basis axions $\phi_S$ and $\phi_L$ are not exact mass eigenstates, making this definition ambiguous. For $\mathcal{F} \gg 1$, the distinction between the two bases is small, and so we often neglect the distinction in our heuristic discussions. The effect is not, however, quantitatively negligible for all parts of the parameter space we consider, and it is always included in our results. } $m_S = m$ and $m_L = \mu m_S$ and their decay constants are $f_S = f$ and $f_L = \mathcal{F} f$, respectively. 
In terms of these variables, \cref{eq:twoAxionPotentialPhi} takes the form \begin{align} \label{eq:twoAxionPotential} \begin{split} V( \theta_L , \theta_S ) &= m^2 f^2 \Big[ \left( 1 - \cos \left( \theta_S + \theta_L \right) \right) \\ &\hphantom{{}={} m^2 f^2 \Big[} + \mu^2 \mathcal{F}^2 \left( 1 - \cos \theta_L \right) \Big]. \end{split} \end{align} We focus on the range $\mathcal{F} > 1$ where $f_S < f_L$, and we refer to $\phi_S$ and $\phi_L$ as the ``short'' and ``long'' axion respectively in reference to the size of their decay constants. (The regime with $f_S > f_L$ does not exhibit nonlinear resonances.) The short and long axions then form a ``friendly pair'' when $0.7 \lesssim \mu < 1$, corresponding to an $\mathcal{O}(1)$ coincidence in their masses. While \cref{eq:twoAxionPotentialPhi} might represent a subsector of a much larger axiverse, the dynamics of the friendly pair of interest are insensitive to possible couplings to other axions barring additional coincidences in mass. Namely, only the relative frequency of coupled oscillators determines the efficiency of energy transfer between them, so the actual instanton scales $\Lambda_i$ and decay constants $f_i$ matter only insofar as they (together) determine the axion masses. In the early Universe, the misalignment mechanism initializes each axion at an approximately spatially homogeneous value away from the late-time minimum; a natural assumption, barring anthropic and other considerations, is that $\theta_I(t_\text{initial}) = \mathcal{O}(1)$, where the capital index $I$ runs over axion flavors. The axions remain frozen at their misaligned values until the Hubble rate $H$ drops below their masses. Since the two axion masses are comparable, the long axion initially has $\mathcal{O}(\mathcal{F}^2)$ times more energy than the short axion. In the absence of couplings between the axions, this imbalance would persist to their present-day abundance. 
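The relation between the interaction basis and the mass basis mentioned in the footnote above can be made explicit. Expanding \cref{eq:twoAxionPotential} to quadratic order in the canonical fields $\phi_S = f \theta_S$ and $\phi_L = \mathcal{F} f \theta_L$ gives (a standard small-amplitude expansion; the matrix below is our own bookkeeping rather than an equation taken from Ref.~\cite{Cyncynates:2021xzw}):

```latex
V \;\approx\; \frac{m^2}{2}
\left[ \left( \phi_S + \frac{\phi_L}{\mathcal{F}} \right)^2 + \mu^2 \phi_L^2 \right],
\qquad
\mathcal{M}^2 \;=\; m^2
\begin{pmatrix}
1 & 1/\mathcal{F} \\
1/\mathcal{F} & \mu^2 + 1/\mathcal{F}^2
\end{pmatrix}.
```

Since $\det \mathcal{M}^2 = \mu^2 m^4$ and $\operatorname{tr} \mathcal{M}^2 = m^2 (1 + \mu^2 + \mathcal{F}^{-2})$, the eigenvalues reduce to $m^2$ and $\mu^2 m^2$ up to $\mathcal{O}(\mathcal{F}^{-2})$ corrections, with mixing between the two bases of order $1/\mathcal{F}$.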
The same conclusion holds for coupled axions as well, so long as the masses of the axions are well separated. At large field values, however, interactions can substantially shift the axion oscillation frequency from its ground state value. Ref.~\cite{Cyncynates:2021xzw} showed that coupled axions with a decay constant hierarchy $\mathcal{F} \gtrsim 3$ and sufficiently close masses $0.75 \lesssim \mu < 1$ tend to align their frequencies in a process called autoresonance, illustrated in \cref{fig:homogeneousExpectation}. \begin{figure} \centering \includegraphics[width = \columnwidth]{homogeneous-illustration.pdf} \caption{ Homogeneous dynamics (i.e., ignoring the effect of spatial fluctuations) of friendly axions with mass ratio $\mu = 0.75$ and decay constant ratio $\mathcal{F} = 20$. Here $t$ and $\tau$ are cosmic and conformal time coordinates, respectively. The top panel depicts the evolution of the energy density in the short (pink) and long (blue) axions, while the bottom panel displays the field values $\phi_I / f_I$. Dashed black lines indicate standard cosmological redshifting for uncoupled axions as labeled. } \label{fig:homogeneousExpectation} \end{figure} Specifically, interactions drive the short axion (with the smaller decay constant) to dynamically adjust its oscillation amplitude to a fixed value in order to match its frequency to the long axion's, as evident in the lower panel of \cref{fig:homogeneousExpectation}. Consequently, the short axion energy density does not dilute like cold matter but instead remains fixed (as in the top panel of \cref{fig:homogeneousExpectation}) by siphoning energy from the long axion. If the fields remain spatially homogeneous, this energy transfer runs until backreaction disrupts the precise phase locking of the two fields. 
Autoresonance then ends when $\bar{\rho}_S / \bar{\rho}_L \simeq 2\mathcal{F}^2(1-\mu)^2 $ for $\mathcal{F}^2 \gg (1-\mu^2)^{-1}$, representing a near-complete transfer of the available energy density to the short axion. In other words, when autoresonance runs to completion, the energy density at late times in the dark sector is virtually entirely in the short axion---an outcome opposite to what one would expect from free evolution. The boost to the late-time energy density of the short axion relative to the scenario of independent axions is of great importance for direct detection experiments. Laboratory haloscopes probe the couplings of axion DM to SM states, which are typically higher-dimension operators suppressed by the axion decay constant $f_a$. For example, axions are expected to couple to SM photons via an interaction of the form: \begin{equation} \mathcal{L} \supset - \frac{g_{a \gamma \gamma}}{4} \phi F_{\mu \nu} \tilde{F}^{\mu \nu} \end{equation} where $g_{a \gamma \gamma} \simeq \alpha_\text{QED} / 2 \pi f_a$ is the axion-photon coupling~\cite{Marsh:2015xka,ParticleDataGroup:2020ssz}. As discussed in Ref.~\cite{Cyncynates:2021xzw}, when all axions evolve independently from $\mathcal O(1)$ initial misalignment angles, the final energy density $\rho_{a,0}$ of each axion is proportional to $f_a^2$. In this case, the signal strength $\rho_{a,0}g_{a\gamma\gamma}^2$ is roughly independent of $f_a$; as such, at a given mass any axion produced by the standard misalignment mechanism would be similarly hard to see. In a scenario with friendship, however, the boosted late-time energy density of the short axion is $\bar\rho_{S,0} \propto f_L^2$ when autoresonance completes, but the coupling to the SM photon is $g_{S\gamma\gamma} \simeq \alpha / 2 \pi f_S$. Thus the signal strength $\bar \rho_{S,0} g_{S\gamma\gamma}^2$ of the short axion is enhanced by $\mathcal{F}^2$, making it much more accessible to axion haloscopes. 
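The scaling just described can be collected in one line (simply combining the proportionalities stated above):

```latex
\text{uncoupled axions:}\quad
\rho_{a,0}\, g_{a\gamma\gamma}^2 \;\propto\; f_a^2 \cdot f_a^{-2} \;=\; \mathrm{const}\,,
\qquad
\text{friendly pair:}\quad
\bar{\rho}_{S,0}\, g_{S\gamma\gamma}^2 \;\propto\; \mathcal{F}^2 f_S^2 \cdot f_S^{-2}
\;=\; \mathcal{F}^2 \times \mathrm{const}\,.
```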
The effect, however, would be reversed for the long axion: its energy density is suppressed by $\sim \mathcal{F}^2$ compared to standard misalignment scenarios. In this picture, seeing \textit{both} friendly axions would therefore be difficult. The description of autoresonance given so far assumes the fields remain approximately spatially homogeneous, but large spatial fluctuations in the axions can prevent the completion of the energy transfer. The coherent oscillations of the short axion induce a time-dependent effective mass that resonantly amplifies fluctuations of the short axion, much like that which characterizes preheating after inflation~\cite{Traschen:1990sw, Kofman:1994rk, Kofman:1997yn} (see Refs.~\cite{Bassett:2005xm, Allahverdi:2010xz, Amin:2014eta, Lozanov:2019jxc} for reviews) and large misalignment~\cite{Arvanitaki:2019rax}. Large-amplitude fluctuations of the short axion can collapse under attractive self-interactions into oscillons---finite-lifetime, nontopological bound structures with densities of $\mathcal{O}(m^2 f^2)$ and radii of $\mathcal{O}(1/m)$. Such oscillons explore large field values for the short axion and thus continue to experience large interactions, but, being nonperturbative objects, are difficult to treat analytically. Ref.~\cite{Cyncynates:2021xzw} presented preliminary evidence that oscillon nucleation occurs for $\mathcal{F} \gtrsim 6$ and that oscillons quench autoresonance if they form early enough, setting a limit on the energy density transfer for $\mathcal{F}\gtrsim 20$. The remainder of this paper investigates the impact of the nonlinear dynamics of autoresonance and oscillon formation on the predictions of friendly axion scenarios through the use of $3+1$ dimensional numerical simulations. \section{Results} \label{sec:results} We now present numerical solutions for the fully nonlinear, friendly axion system. 
We implement numerical simulations of the axions' classical equations of motion with \textsf{pystella}~\cite{Adshead:2019lbr,Adshead:2019igv,pystella}, discretizing these equations onto a 3D, periodic, regularly spaced grid, computing spatial derivatives via fourth-order centered differencing, and utilizing a fourth-order Runge-Kutta method for time integration. Further details are provided in \cref{app:numerical-details}. Except where otherwise stated, all results use grids with $N^3 = 1024^3$ points, a comoving side length $L = 1.5 \, \pi / m$, and conformal timestep $\Delta \tau = \Delta x / 10 = L / 10 N$. The simulations begin with a numerical solution to the linearized system of equations starting at a time when the Hubble rate $H \ll m$ (see \cref{app:numerical-details} for further details). The 3D evolution begins when $H = m$, corresponding to a conformal time $m \tau_m = 1$ and cosmic time $m t_m = 1 / 2$. The scale factor is normalized relative to $a_m \equiv a(t_m)$. Of the free parameters in the model, the decay constant ratio $\mathcal{F}$ has the strongest effect on the dynamics. The mass ratio and initial misalignments mainly determine whether or not autoresonance occurs, whereas the decay constant ratio determines the size of the nonlinear backreaction and even whether fluctuations are appreciably enhanced at all. Therefore, for most simulations we pick fiducial values $\mu = 0.75$, $\theta_{L}(0, \mathbf{x}) = 0.8 \, \pi$, and $\theta_{S}(0, \mathbf{x}) = 0$, and run simulations for varying values of $\mathcal{F}$.\footnote{ So long as we choose $\theta_L(0, \mathbf{x})$ large enough that the axions experience autoresonance, the initial misalignment angles are essentially inconsequential~\cite{Cyncynates:2021xzw}.
On the other hand, the choice of the relatively detuned mass ratio $\mu = 0.75$ is made to reduce the runtime of the simulations, as smaller $\mu$ causes perturbations to grow faster (see Appendix~C of Ref.~\cite{Cyncynates:2021xzw}) and shortens the oscillon lifetime (explained in \cref{sec:drivenOscillons} below). } \begin{figure*}[t!] \centering \includegraphics[width=\textwidth]{delta-S-mu-80-F-50-8-panel.pdf} \caption{ Projected density contrast for the short axion, \cref{eqn:def-projected-density-contrast}, at eight snapshots illustrating key moments during the course of a simulation with mass ratio $\mu = 0.8$ and decay constant ratio $\mathcal{F} = 50$. The onset of strong nonlinearity is observed between the third and fourth panels, followed by the collapse of large overdensities into spherically symmetric oscillon configurations. At this point, autoresonance ceases at the homogeneous level, and energy stops flowing from the long axion to the short axion. The oscillons persist, driven by the long axion in a form of localized autoresonance (see Sec.~\ref{sec:drivenOscillons}). Between the sixth, seventh, and eighth panels, the oscillons dissipate, leaving remnant overdensities that would eventually seed small-scale galactic substructure. } \label{fig:delta-S-over-time} \end{figure*} For $\mathcal{F} \lesssim 6$, spatial perturbations do not grow large enough to form oscillons and the results of the simulations are described completely by Ref.~\cite{Cyncynates:2021xzw}. For larger $\mathcal{F}$, fluctuations of $\phi_S$ indeed collapse into oscillons as anticipated by Ref.~\cite{Cyncynates:2021xzw}.
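As a minimal illustration of the discretization used in the simulations (fourth-order centered differences on a periodic grid with fourth-order Runge-Kutta time stepping), the following is a schematic toy for a single free scalar --- not the actual \textsf{pystella} implementation, which handles the coupled axion system, cosmic expansion, and performance concerns:

```python
import numpy as np

# Schematic of the spatial and temporal discretization described above:
# fourth-order centered differences on a periodic grid plus a classical
# fourth-order Runge-Kutta step. Toy single-field sketch only (no expansion,
# no axion couplings, units m = 1).

def laplacian_4th(f, dx):
    """Fourth-order centered Laplacian on a periodic grid (any dimension)."""
    lap = np.zeros_like(f)
    for axis in range(f.ndim):
        lap += (-np.roll(f, 2, axis) + 16 * np.roll(f, 1, axis) - 30 * f
                + 16 * np.roll(f, -1, axis) - np.roll(f, -2, axis)) / (12 * dx**2)
    return lap

def rk4_step(phi, pi, dt, dx, m=1.0):
    """One RK4 step for the toy system phi' = pi, pi' = lap(phi) - m^2 phi."""
    def deriv(p, q):
        return q, laplacian_4th(p, dx) - m**2 * p
    k1p, k1q = deriv(phi, pi)
    k2p, k2q = deriv(phi + 0.5 * dt * k1p, pi + 0.5 * dt * k1q)
    k3p, k3q = deriv(phi + 0.5 * dt * k2p, pi + 0.5 * dt * k2q)
    k4p, k4q = deriv(phi + dt * k3p, pi + dt * k3q)
    phi_new = phi + dt * (k1p + 2 * k2p + 2 * k3p + k4p) / 6
    pi_new = pi + dt * (k1q + 2 * k2q + 2 * k3q + k4q) / 6
    return phi_new, pi_new

# Sanity check: a single Fourier mode oscillates at omega^2 = k^2 + m^2.
N, L = 16, 2 * np.pi
dx = L / N
x = np.arange(N) * dx
phi = np.sin(x)[:, None, None] * np.ones((N, N, N))
pi = np.zeros_like(phi)
dt = dx / 10  # mirrors the Delta-tau = Delta-x / 10 choice above
for _ in range(10):
    phi, pi = rk4_step(phi, pi, dt, dx)
expected = np.cos(np.sqrt(2.0) * 10 * dt) * np.sin(x)[:, None, None]
print(np.max(np.abs(phi - expected)))  # small: O(dx^4) discretization error
```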
We present a broad overview of the dynamics of oscillon formation in \cref{fig:delta-S-over-time}, plotting two dimensional projections of the energy density in the short axion at various times over the course of a simulation.\footnote{ To be explicit, we display the energy density projected (averaged) along one axis of the simulation volume, e.g., \begin{align}\label{eqn:def-projected-density-contrast} \rho_{S}(x, y) &= \frac{1}{\left\langle \rho_S(x, y, z) \right\rangle} \frac{1}{L} \int_{0}^{L} \mathrm{d} z \, \rho_{S}(x, y, z). \end{align} Such a projected quantity presents more information about the full volume than a single two dimensional slice but also underestimates the magnitude of overdensities (since, e.g., any given oscillon occupies only a small fraction of space along the $z$ axis). } The field begins in a nearly homogeneous state in the first panel, in which the initial adiabatic fluctuations are too small to be seen. The second and third panels depict the linear enhancement of fluctuations by parametric resonance, visible as the amplification of local overdensities. Fluctuations become nonlinear at a time $m t_\mathrm{nl} \sim 100$, resulting in large overdensities that quickly collapse under attractive self-interactions into the oscillons apparent in the fifth panel. These oscillons radiate energy and begin to dissipate one by one at $mt \gtrsim 2000$. Eventually, no bound objects remain and nonlinear interactions cease to dominate the dynamics, although significant density fluctuations remain. The interplay between the persistence of homogeneous autoresonance and the onset of nonlinearity has important consequences for the final distribution of energy between the two axions. We discuss these dynamics in \cref{sec:energyDensity}, comparing to the results of Ref.~\cite{Cyncynates:2021xzw}.
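The projection in the preceding footnote is straightforward to implement. The sketch below (with a synthetic field standing in for simulation output) also demonstrates the dilution of localized overdensities along the projection axis noted there:

```python
import numpy as np

# Sketch of the line-of-sight-averaged density of the preceding footnote:
# average rho_S along z, normalized by the volume mean. The field here is a
# synthetic stand-in for simulation output.

def project(rho):
    """Projected, normalized density: mean over z divided by the volume mean."""
    return rho.mean(axis=2) / rho.mean()

rng = np.random.default_rng(0)
rho_S = rng.lognormal(sigma=1.0, size=(64, 64, 64))
assert abs(project(rho_S).mean() - 1.0) < 1e-10  # normalized to unit mean

# Projection underestimates overdensities: an "oscillon" occupying only a few
# cells of a z column is diluted by the average along z.
rho = np.ones((64, 64, 64))
rho[32, 32, 30:34] = 100.0            # a 3D overdensity of 100x the background
print(rho.max(), project(rho).max())  # 100.0 vs only ~7.2 after projection
```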
In \cref{sec:gravitationalWaves} we compute the gravitational wave production from friendly axions, finding possible signatures for hyperlight subcomponents in the CMB $B$-mode polarization. Finally, in \cref{sec:drivenOscillons} we demonstrate that the oscillons that form continue to experience autoresonance long after the spatially averaged fields cease to resonate, and we discuss the implications for oscillon lifetimes. \subsection{Evolution of energy densities} \label{sec:energyDensity} Having established the importance of nonlinear dynamics for a large portion of parameter space, we now investigate how nonlinear density fluctuations impact the final distribution of energy between the two axions (and, as a consequence, their relic abundances today). We first study the evolution of each axion's energy density in \cref{fig:rho-evolution} for three representative values of $\mathcal{F}$, comparing the result of simulations to that of a homogeneous analysis. To avoid ambiguities in the final partition of energy densities we work in the mass basis \begin{subequations}\label{eqn:mass-basis-def} \begin{align} \nu_h &\equiv \phi_S\cos\eta + \phi_L\sin\eta, \\ \nu_l &\equiv -\phi_S\sin\eta + \phi_L\cos\eta, \\ \cos2\eta &\equiv\f{1 - \mu^2-\mathcal{F}^{-2}}{\sqrt{4\mathcal{F}^{-2} + (1 - \mu^2 - \mathcal{F}^{-2})^2}}, \end{align} \end{subequations} where the heavy state $\nu_h$ is composed mostly of the short axion, and the light state $\nu_l$ is composed mostly of the long axion in the limit $\mathcal{F}\gg 1$ (see Appendix~A of Ref.~\cite{Cyncynates:2021xzw} for a complete discussion). \begin{figure} \centering \includegraphics[width=\columnwidth]{rho-vs-t-mass-basis-mu-75-adiabatic.pdf} \caption{ Evolution of energy density in the mass eigenstates (\cref{eqn:mass-basis-def}) for three simulations with mass ratio $\mu = 0.75$ and decay constant ratios $\mathcal{F} = 10$, $20$, and $40$ by row.
Each panel depicts the simulation result for the heavy and light states ($\nu_h$ and $\nu_l$) in pink and blue, respectively, as well as the corresponding results for a purely homogeneous calculation in thin black and grey. The shaded region denotes the time when order-one density fluctuations are present, which we define as the time when more than 5\% of the energy in the short axion resides in overdensities $\rho_S(\mathbf{x}) / \bar{\rho}_S > 10$. Shortly after these large overdensities form, they either dissipate or coalesce into oscillons, so the gray regions are decent proxies for the presence of oscillons. Note that in contrast to \cref{fig:homogeneousExpectation} we here plot the energy density of the mass-basis fields rather than the interaction-basis ones. } \label{fig:rho-evolution} \end{figure} Each panel exhibits an initial phase of homogeneous, autoresonant energy transfer and the onset of nonlinearity that quenches autoresonance, at which point the energy density departs from the trend of the homogeneous result. From analytic estimates of the growth rate of the fastest-growing mode~\cite{Cyncynates:2021xzw}, nonlinearity occurs at approximately\footnote{This result accounts for both Hubble friction and the slight decay of the initial metric perturbation before the fastest-growing mode starts growing (see Eqs.~C17 and C18 in Ref.~\cite{Cyncynates:2021xzw} and the surrounding discussion, fixing $\delta\omega = \mu - 1$).} \begin{align}\label{eqn:tnl-approximation} m t_\mathrm{nl} &\approx 17.6 \frac{1 - 0.1 \log(1 - \mu)}{1 - \mu}, \end{align} in good agreement with $m t_\mathrm{nl} \approx 80$ observed in \cref{fig:rho-evolution}. The ultimate partitioning of energy depends primarily on the precise timing of nonlinearity and oscillon formation relative to the (would-be) completion of autoresonance, a point which we detail below.
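As a quick numerical check of \cref{eqn:tnl-approximation} (a simple evaluation, taking $\log$ to be the natural logarithm):

```python
import math

# Evaluating the estimate m t_nl ~ 17.6 (1 - 0.1 log(1 - mu)) / (1 - mu)
# for the fiducial mass ratio mu = 0.75 (log taken as the natural logarithm).

def m_t_nl(mu):
    return 17.6 * (1.0 - 0.1 * math.log(1.0 - mu)) / (1.0 - mu)

print(round(m_t_nl(0.75), 1))  # → 80.2, consistent with m t_nl ~ 80 above
print(round(m_t_nl(0.80), 1))  # → 102.2: less detuning (mu closer to 1) delays nonlinearity
```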
We now describe these two regimes of $\mathcal{F}$ in detail.\footnote{ The precise $\mathcal{F}$ where these regimes meet depends on the exact value of $t_\mathrm{nl}$, which varies with $\mu$ via \cref{eqn:tnl-approximation}. \label{footnote:tnl-mu-dependence} } For $6 \lesssim \mathcal{F} \lesssim 20$, oscillons nucleate \textit{after} the short axion's energy density first exceeds the long axion's. At roughly the same time, autoresonance ends and $\bar{\rho}_h$ ceases to be roughly constant, instead decaying approximately as $a^{-3}$ like nonrelativistic matter. Contrary to the homogeneous analysis of Ref.~\cite{Cyncynates:2021xzw}, however, we observe in this range that nonperturbative dynamics in fact enable energy transfer from the short axion back to the long axion, resulting in late-time \mbox{(near-)equilibration} of the two axion energy densities. This phenomenon is most evident in the top panel of \cref{fig:rho-evolution} ($\mathcal{F} = 10$), where the heavy and light axions' energy densities asymptote toward a common value. Interactions between the two axions are strongest where the field values are largest, suggesting that oscillons play a key role in reversing energy transfer. Inside an oscillon, the field amplitude oscillates at a frequency $\omega < m$ due to its binding energy. Since the long axion's natural frequency is $\mu m < m$, an oscillon can provide a locus for more efficient energy transfer from the short axion back to the long axion.\footnote{ In fact, during autoresonance the short axion is driven at exactly the frequency $\mu m$. When fluctuations grow nonperturbative the oscillon frequencies will thus remain close to $\mu m$. } Indeed, for most decay constant ratios $6 \lesssim \mathcal{F} \lesssim 20$, the end of autoresonance and the formation of oscillons are associated with a substantial transfer of energy to the light axion.
For $6 \lesssim \mathcal{F} \lesssim 10$, the final stage of energy transfer to the light axion occurs in discrete jumps that appear to coincide with the death of individual oscillons. In all cases, we observe that most of the radiation from the heavy axion into the light axion is into semirelativistic modes, as one would expect if oscillons are responsible for equilibration. However, at larger $\mathcal{F}$ equipartition is nearly achieved by the time oscillons form anyway; the subsequent evolution is more continuous, obfuscating any association between oscillon death and energy transfer. While nonlinear effects are evidently crucial, identifying the specific mechanism for energy flow in general is challenging. The middle panel with $\mathcal{F} = 20$ represents the marginal case where oscillons form at nearly the exact time that the heavy axion's energy density first reaches that of the light axion. For larger values $\mathcal{F} \gtrsim 20$, $t_\mathrm{nl}$ and oscillon formation occur before the heavy axion dominates the sector's energy density, terminating autoresonant energy transfer to the heavy axion. As shown in the bottom panel of \cref{fig:rho-evolution} ($\mathcal{F} = 40$), the energy density in both axions then decays as approximately $a^{-3}$. In this case the backreaction effects at play for smaller $\mathcal{F}$ are too suppressed to enable substantial energy transfer by the oscillons. The trends for yet larger $\mathcal{F}$ are qualitatively similar: parametric resonance proceeds at the same rate and oscillons form at a similar time. The final ratio of energy densities $\bar{\rho}_h / \bar{\rho}_l$ thus receives a constant boost due to the period of autoresonance but still decreases as $1 / \mathcal{F}^2$. 
Having discussed the dynamics that control the distribution of energy between the two axions, we now summarize the full $\mathcal{F}$-dependence of the late-time energy fractions, \begin{align}\label{eqn:energy-partition-def} \Xi_{I} &\equiv \left. \frac{\bar{\rho}_I}{\bar{\rho}_h + \bar{\rho}_l} \right\vert_\text{late time}, \end{align} where $I = h, l$. The final partitioning changes qualitatively at a critical decay constant ratio $\mathcal{F}_\star$ for which nonlinearities become important (at $t_\mathrm{nl}$) just as the heavy axion's energy density first matches the light one's (via autoresonant energy transfer). From our simulations we find $\mathcal{F}_\star \approx 20$ for $\mu = 0.75$; this value depends on the mass ratio in the same manner as $t_\mathrm{nl}$ (c.f. \cref{eqn:tnl-approximation}). This timing separates two distinct regimes: one of near-equilibration due to nonlinear effects at $\mathcal{F} < \mathcal{F}_\star$ and a $1/\mathcal{F}^2$ suppression of the heavy-axion abundance via the early end of autoresonance at larger $\mathcal{F}$. Both regimes are well captured by \begin{subequations}\label{eqn:energy-partition-estimate} \begin{align} \label{eq:frach} \Xi_h &\sim \begin{dcases} \frac{1}{2} & 6 \lesssim \mathcal{F} \lesssim \mathcal{F}_\star \\ \frac{1}{1 + 1.3 (\mathcal{F} / \mathcal{F}_\star)^2} \hphantom{1 - } & \mathcal{F} \gtrsim \mathcal{F}_\star \end{dcases} \\ \label{eq:fracl} \Xi_l &\sim \begin{dcases} \frac{1}{2} & 6 \lesssim \mathcal{F} \lesssim \mathcal{F}_\star \\ 1 - \frac{1}{1 + 1.3 (\mathcal{F} / \mathcal{F}_\star)^2} & \mathcal{F} \gtrsim \mathcal{F}_\star \end{dcases}, \end{align} \end{subequations} including an empirical factor of $1.3$ that best fits the results from simulations. We display the corresponding quantities computed directly from simulations in \cref{fig:final-rho-vs-F}. \begin{figure}[t!] 
\centering \includegraphics[width=\columnwidth]{final-partition-vs-F-mass-basis-mu-75-adiabatic.pdf} \caption{ Late-time partition of energy between the friendly axions ($\Xi_I$, \cref{eqn:energy-partition-def}) as a function of the decay constant ratio $\mathcal{F}$, evaluated at the end of the simulations (long after energy transfer ceases). Results for the heavy and light axions are respectively in pink and blue for the simulations and black and grey for the homogeneous computation. The dashed, dark pink and blue curves depict the empirical fit of \cref{eqn:energy-partition-estimate}. The shaded region indicates the range of $\mathcal{F}$ for which oscillons form after the heavy axion's energy overtakes the light one's (in which case nonlinear effects return a nonnegligible amount of energy to the light axion). These results are for a fixed mass ratio $\mu = 0.75$; the precise thresholds in $\mathcal{F}$ for these regimes would vary mildly for different values of $\mu$, but the qualitative behavior would not change (see \cref{footnote:tnl-mu-dependence}). } \label{fig:final-rho-vs-F} \end{figure} For $\mathcal{F} \lesssim 6$, the energy densities indeed match those predicted by the homogeneous theory, which are computed in full in Ref.~\cite{Cyncynates:2021xzw}. For $6 \lesssim \mathcal{F} \lesssim 20$, the energy density of the light axion, instead of being entirely depleted, remains within a factor of 1 to 4 of the heavy axion's energy density with a precise dependence on $\mathcal{F}$ beyond the sophistication of \cref{eqn:energy-partition-estimate}. At larger $\mathcal{F} \gtrsim 20$, the $1 / \mathcal{F}^2$ scaling takes over, parametrically suppressing the heavy axion's abundance relative to the light axion's. Nonetheless, in this range the heavy axion carries approximately $\mathcal{F}_\star^2/1.3 \sim 310$ times more energy density than it would have had in the absence of autoresonance.
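The piecewise fit of \cref{eqn:energy-partition-estimate} is simple to evaluate; the sketch below uses $\mathcal{F}_\star = 20$, the value found from our simulations for $\mu = 0.75$:

```python
# Evaluating the empirical energy-partition fit with F_star = 20 (the value
# found in simulations for mu = 0.75) and the empirical fit factor 1.3.

F_STAR = 20.0

def xi_h(F):
    """Late-time energy fraction of the heavy axion."""
    return 0.5 if F <= F_STAR else 1.0 / (1.0 + 1.3 * (F / F_STAR) ** 2)

def xi_l(F):
    """Late-time energy fraction of the light axion."""
    return 0.5 if F <= F_STAR else 1.0 - xi_h(F)

for F in (10.0, 40.0, 100.0):
    print(F, round(xi_h(F), 3), round(xi_l(F), 3))
# 10.0  0.5    0.5    -- near-equilibration regime
# 40.0  0.161  0.839  -- suppressed, but still ~ F_star^2 / 1.3 ~ 310 times
#                        larger than the no-autoresonance expectation ~ 1/F^2
print(round(F_STAR**2 / 1.3))  # → 308
```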
The light axion's enhanced abundance has important consequences for direct detection experiments. Although the near-even partitioning of energy for $6 \lesssim \mathcal{F} \lesssim \mathcal{F}_\star$ implies that the light axion is slightly harder to detect than predicted by Ref.~\cite{Cyncynates:2021xzw}, it also implies that detecting the long axion requires only twice the experimental sensitivity needed for an independent axion of that mass. This is in contrast to the homogeneous expectation, which would be that the light axion requires $\mathcal{F}^2$ times the experimental sensitivity. The nonperturbative equalization of energy density in the sector thus serves to make the sector \emph{as a whole} more visible to direct detection experiments. Observing two axions with similar masses---one substantially more visible than expectations for single-axion misalignment, the other comparably so---is a unique signature of friendly dynamics of the form described here. For the mass range $10^{-10} \; \mathrm{eV} \lesssim m \lesssim 10^{-3} \; \mathrm{eV}$, many near-future experimental efforts will probe relevant parameter space (see, e.g., Refs.~\cite{Brouwer:2022bwo,Alesini:2017ifp,Stern:2016bbw,BRASS,Lasenby:2019prg,Berlin:2019ahk,Berlin:2020vrk,DMRadio:2022pkf,Beurthey:2020yuq,McAllister:2017lkb,Nagano:2019rbw,Liu:2018icu}). To close, we connect the partitioning of \cref{eqn:energy-partition-estimate} to present-day abundances by estimating the net present-day energy density in the sector. The energy density at horizon crossing is dominated by the light axion, i.e., $\bar{\rho}_{\mathrm{tot}} ( t_m ) \sim \mu^2 m^2 \mathcal{F}^2 f^2 \Theta_{L,0}^2$, which subsequently redshifts as $a^{-3}$.\footnote{ The $a^{-3}$ redshifting assumes that the axions are always non-interacting and nonrelativistic.
In reality, at early times the axion interactions during autoresonance cause $\bar{\rho}_{\mathrm{tot}}$ to redshift slightly slower than $a^{-3}$, and at later times oscillons radiate mildly relativistic axions such that $\bar{\rho}_{\mathrm{tot}}$ redshifts slightly faster than $a^{-3}$ (until all axions become nonrelativistic). Together, these effects amount to only an $\mathcal{O}(1)$ factor which we neglect for simplicity. } Combined with \cref{eqn:energy-partition-estimate}, the present-day abundance of each axion is \begin{align}\label{eqn:final-relic-abundance} \begin{split} \frac{\Omega_{I, 0}}{0.13} &\approx \Xi_I \left( \frac{m}{10^{-19} \, \mathrm{eV}} \right)^{1/2} \left( \frac{\mathcal{F} f}{10^{16} \, \mathrm{GeV}} \right)^2 \left( \mu \Theta_{L, 0} \right)^2, \end{split} \end{align} with $\Xi_I$ set by \cref{eqn:energy-partition-estimate}. Factors accounting for the thermal history of the SM (i.e., the number of effective relativistic degrees of freedom in the SM entropy and energy density) change the above result by only an $\mathcal{O}(1)$ factor over the mass range of interest and are omitted for simplicity. \subsection{Gravitational waves} \label{sec:gravitationalWaves} Rapidly growing fluctuations during and after autoresonance and the resulting oscillons can both source gravitational waves, again much like the parametric resonance and oscillon formation that can occur during preheating~\cite{Khlebnikov:1997di, Easther:2006gt, Easther:2006vd, Easther:2007vj, Garcia-Bellido:2007nns, Dufaux:2007pt, Zhou:2013tsa, Lozanov:2019ylm, Amin:2018xfe, Hiramatsu:2020obh} and single-axion misalignment~\cite{Arvanitaki:2019rax}. In this section we compute the signal strength generated by friendly axions and discuss the corresponding constraining power of existing and future observations.
\Cref{app:gravitational-waves} briefly reviews stochastic gravitational wave backgrounds and the transfer functions required to relate their spectra at emission to that at the present day. We begin by estimating the scaling of the peak amplitude of the gravitational wave spectrum with the short-axion mass $m$ and long axion's decay constant $\mathcal{F} f$. Gravitational waves are sourced by the anisotropic part of the stress tensor (via \cref{eqn:gw-eom}), whose components scale like the energy density of the source. The time (relative to $t_m$) and wavenumber (relative to $m$) of peak emission vary weakly with model parameters (and are entirely independent of $f$). The spectral abundance of gravitational waves (\cref{eqn:omega-gw-spectrum-ito-power-spectrum}) therefore scales with two powers of the fractional energy density of the source at the time of emission, $\bar{\rho}_\mathrm{source}(t) / \bar{\rho}(t)$. The short axion is the dominant source of anisotropic stress, which we expect to peak near the time when the system becomes nonlinear, $t_\mathrm{nl}$, when axion gradients are largest and power is scattered to smaller scales. To proceed, we follow the model-independent heuristics of Refs.~\cite{Giblin:2014gra, Amin:2014eta}. Approximating the gravitational wave source as a Gaussian peaking at momentum $k_\star$ with width $\sigma$, one may estimate the peak amplitude to be~\cite{Giblin:2014gra}: \begin{align}\label{eqn:rule-of-thumb-1} \Omega_{\mathrm{GW}}(k_\star) &= \frac{27 \gamma^2 \nu^2}{\sqrt{\pi}} \frac{k_\star}{\sigma} \left( \frac{a H_p}{k_\star} \right)^2, \end{align} at the time the source is maximized. Here $\gamma$ is the fraction of the Universe's energy in the source, $\nu$ measures how anisotropic the source is, and $H_p$ is the Hubble parameter, all evaluated at the time of the process.\footnote{ Note that our $\nu^2$ corresponds to $\beta w^2$ in terms of the parameters of Ref.~\cite{Giblin:2014gra}.
} The peak wavenumber $k_\star$ and width $\sigma$ are straightforward to approximate (or read off of simulation results), but the anisotropy coefficient $\nu$ is harder to estimate; Ref.~\cite{Giblin:2014gra} motivates $\nu \sim 10^{-2}$ to $10^{-1}$ for typical processes. Evaluating \cref{eqn:rule-of-thumb-1} at $t_\mathrm{nl}$ and plugging in $a H = 1 / \sqrt{2 t_\mathrm{nl}}$, \begin{align}\label{eqn:rule-of-thumb-2} \Omega_{\mathrm{GW}}(k_\star) &\approx \frac{27 \nu^2}{\sqrt{\pi}} \frac{k_\star}{\sigma} \left( \frac{m}{k_\star} \right)^{2} \frac{\left( \mu \mathcal{F} \Theta_{L, 0} f / M_\mathrm{pl} \right)^4 }{ \left[ 1 + 1.3 \left( \mathcal{F} / \mathcal{F}_\star \right)^2 \right]^2 }, \end{align} where we have used that the energy density in the short axion at $t_{\mathrm{nl}}$ is approximately $1 / [1 + 1.3 (\mathcal{F} / \mathcal{F}_\star)^2]$ of the total axion energy density, and the total axion energy density is given by $\bar{\rho}_{\mathrm{tot}}(t_m) (t_m / t_{\mathrm{nl}} )^{3/2}$. Notice that the suppression from how far inside the horizon the peak is (the factor of $[a H_p / k_\star]^2$ in \cref{eqn:rule-of-thumb-1}) is exactly compensated by the growth in time of $\gamma$ (since the homogeneous energy available to source the short axion redshifts with one fewer power of the scale factor than the SM radiation). We therefore expect gravitational wave signals from friendly axions to be only weakly sensitive to the time of nonlinearity $t_\mathrm{nl}$. 
By comparing \cref{eqn:rule-of-thumb-2} with the relic abundance (\cref{eqn:final-relic-abundance}) we may estimate the peak of the gravitational wave signal as a function of the mass $m$ and relic abundance of the heavy axion $\Omega_h(t_0)$: \begin{align} \begin{split} \Omega_{\mathrm{GW}, 0} h^2 &= \frac{27 \nu^2}{\sqrt{\pi}} \frac{k_\star}{\sigma} \left( \frac{m}{k_\star} \right)^{2} \left[ \Omega_{\mathrm{rad}}(t_0) h^2 \right]^{-1/2} \\ &\hphantom{ {}={} } \times \left[ \Omega_h(t_0) h^2 \right]^2 \left( \frac{m}{H_{100}} \right)^{-1} \end{split} \end{align} where we have dropped factors of the relativistic degrees of freedom, which reduce the amplitude by at most a factor of two at early enough times $t_m$ such that all SM species are in thermal equilibrium. Here, $H_{100} = 100 \, \mathrm{km} / \mathrm{s} / \mathrm{Mpc} = 2.13 \times 10^{-33} \, \mathrm{eV}$ and $h = H_0/H_{100}$. From our simulations we observe that $k_\star / m \approx 9$ and $\sigma \approx k_\star / 3$. Taking $\nu = 1/20$ (for which the estimates agree well with the simulation results) and considering the regime $\mathcal{F} \lesssim 20$ for which the signal is not suppressed, the peak amplitude is \begin{align}\label{eqn:omega-gw-scaling} \Omega_{\mathrm{GW}, 0}(k_\star) h^2 &\approx 10^{-15} \left( \frac{\Omega_S(t_0) h^2}{0.06} \right)^{2} \left( \frac{m}{10^{-21} \, \mathrm{eV}} \right)^{-1}, \end{align} at a present-day frequency of the peak of \begin{align}\label{eqn:gw-frequency-scaling} \frac{f_{\mathrm{GW}, \star}}{\mathrm{Hz}} &\approx 2.8 \times 10^{-14} \frac{k_\star}{m} \left( \frac{m}{10^{-21} \, \mathrm{eV}} \right)^{1/2}. \end{align} The amplitude estimate \cref{eqn:omega-gw-scaling} agrees quite well---within a factor of a few---with the spectra from simulations when evaluated at $t_\mathrm{nl}$. 
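The two scalings just quoted can be evaluated directly; the sketch below uses the simulation value $k_\star / m \approx 9$, and the example mass is purely illustrative:

```python
# Evaluating the peak-amplitude and peak-frequency scalings above, using the
# simulation value k_star / m ~ 9. The example mass is illustrative only.

def omega_gw_peak_h2(omega_S_h2, m_eV):
    """Peak gravitational wave abundance today, Omega_GW h^2."""
    return 1e-15 * (omega_S_h2 / 0.06) ** 2 / (m_eV / 1e-21)

def f_gw_peak_Hz(m_eV, k_star_over_m=9.0):
    """Present-day frequency of the spectral peak in Hz."""
    return 2.8e-14 * k_star_over_m * (m_eV / 1e-21) ** 0.5

m = 1e-27  # eV: a hyperlight mass relevant for CMB B-mode constraints
print(f"{omega_gw_peak_h2(0.06, m):.1e}")  # → 1.0e-09
print(f"{f_gw_peak_Hz(m):.1e}")            # → 2.5e-16 (Hz)
```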
However, the spectra evaluated at the end of the simulation (after all gravitational wave production has concluded) are about an order of magnitude larger due to factors not captured by these simplistic estimates such as the time evolution of the source after $t_\mathrm{nl}$. From \cref{eqn:omega-gw-scaling,eqn:gw-frequency-scaling} we see that adjusting the mass $m$ to change the peak frequency by some factor modulates the gravitational wave power spectrum by two powers of that factor. Signals that could be visible at pulsar timing arrays (with frequencies of order $10^{-9} \, \mathrm{Hz}$) could therefore not exceed amplitudes of $10^{-23}$ (about eight orders of magnitude below the projected sensitivity of the Square Kilometer Array~\cite{Janssen:2014dka,Schmitz:2020syl}) without requiring an axion abundance that would overclose the Universe. This is an unfortunate consequence of the source both being short-lived and redshifting more slowly than the SM plasma from the time of gravitational wave production to the present day.\footnote{ Despite the dearth of direct gravitational-wave probes at frequencies between $10^{-15}$ and $10^{-9}$ Hz, two \textit{indirect} constraints apply around this range. Precision CMB measurements of the energy density in radiation in the Universe provide an upper bound on the present-day gravitational wave spectrum (gravitational waves themselves contributing to expansion like radiation)~\cite{Maggiore:1999vm, Dvorkin:2022jyg}, currently of order $\Omega_{\mathrm{GW}, 0} h^2 \sim 10^{-6}$~\cite{Planck:2018vyg,Pagano:2015hma}. In addition, recent work has argued that spectral distortions of the CMB blackbody also probe gravitational waves in this frequency range~\cite{Kite:2020uix}. While far-future experiments could provide tighter constraints than $N_\mathrm{eff}$ measurements, these still are unlikely to be useful probes of friendly axions. 
} Note that the stochastic background from single-field inflation, if detectable by future CMB experiments, is nearly scale invariant and of order $10^{-16}$~\cite{Smith:2005mm,Caprini:2018mtu}, far larger than those possible from friendly axion DM. On the other hand, existing measurements of the $B$-mode polarization of the CMB already constrain gravitational waves at frequencies between $10^{-18}$ and $10^{-16}$ Hz, with the most stringent upper limit of $\Omega_{\mathrm{GW}, 0} h^2 \sim 10^{-16}$ at $f_\mathrm{GW} \sim 10^{-17} \, \mathrm{Hz}$~\cite{Clarke:2020bil}. Importantly, the polarization is sourced almost exclusively at recombination, when the photon visibility function spikes. As a result, CMB constraints are only relevant for scenarios where gravitational waves are sourced before this time, i.e., when the Hubble scale is $H \gtrsim H_\mathrm{rec} \sim 3 \times 10^{-29} \, \mathrm{eV}$~\cite{Planck:2018vyg}. In the scenarios we consider here, the anisotropic stress maximizes after about ten field oscillations, so the smallest relevant mass is of order $10^{-27} \, \mathrm{eV}$. Consulting \cref{eqn:gw-frequency-scaling}, the peak of the signal at such a mass corresponds to a frequency of order $2 \times 10^{-16} \, \mathrm{Hz}$. Such hyperlight axion dark matter is ruled out by fuzzy dark matter constraints, but could make up some subcomponent of the total dark matter (depending on the mass)~\cite{Irsic:2017yje,Armengaud:2017nkf,Dentler:2021zij,Bozek:2014uqa,Hlozek:2014lca,Kobayashi:2017jcf,DES:2018zzu,Lague:2021frh,Hlozek:2017zzf,Dalal:2022rmp,Flitter:2022pzf}. By the preceding argument, CMB measurements mainly probe the infrared tail of the gravitational wave background from friendly axions. Simulating a large enough volume to capture these modes while also resolving the nonlinear dynamics at small scales requires orders of magnitude more computational resources than are available to us.
Fortunately, on causal grounds the gravitational wave spectrum on scales much larger than the relevant dynamical scales (i.e., $k / a \ll m$) follows a simple power law, independent of the underlying dynamics~\cite{Hook:2020phx}. We therefore extrapolate the signals computed in simulations as decaying with $f_\mathrm{GW}^3$ at smaller frequencies, appropriate for infrared, ``causal'' modes generated inside the horizon and those generated while superhorizon that reenter during the radiation-dominated era. The lowest-frequency modes in the simulation do nearly follow an $f_\mathrm{GW}^3$ scaling, so we expect this approximation to be at worst conservative. Constraints were similarly derived on the infrared tail of gravitational waves from a model of early dark energy in Ref.~\cite{Weiner:2020sxn}. However, note that recombination occurs shortly after matter-radiation equality, and superhorizon causal modes that reenter the horizon during the matter era instead grow with one power of $f_\mathrm{GW}$.\footnote{ See Ref.~\cite{Hook:2020phx} for a thorough presentation of the imprint of the expansion history and presence of free-streaming radiation on causal gravitational waves. } Since the CMB becomes increasingly less sensitive to $\Omega_\mathrm{GW}$ on scales larger than the horizon at equality, we do not expect the break in the power law to improve constraints. (On the other hand, the break would likely be observable for causal gravitational wave backgrounds that are indeed observable in the CMB.) Our simulations themselves also do not account for the transition to the matter-dominated era, instead assuming a radiation-dominated Universe; we expect this to be entirely sufficient for our estimates here. We investigate the possibility of CMB constraints in \cref{fig:gws-at-cmb}, which displays the possible signals from friendly axion subcomponents with $\mathcal{F} = 20$. \begin{figure}[t!]
\centering \includegraphics[width=\columnwidth]{gws-hyperlight-F-20.pdf} \caption{ Gravitational wave backgrounds from friendly axions with fixed decay constant ratio $\mathcal{F} = 20$ and mass ratio $\mu = 0.75$. Each curve depicts the signal for a short axion mass making up a fraction of the total dark matter $F_\mathrm{DM}$ as allowed by CMB and large scale structure data~\cite{Lague:2021frh}, indicated by the legend. Dashed lines indicate the extrapolation of the signal computed in the simulations (solid lines) to smaller frequencies as an $f_\mathrm{GW}^3$ power law as justified in the text. The transparent curves, with masses for which the aforementioned probes do not provide constraints, take the axions to make up all of dark matter. Such light dark matter is ruled out by fuzzy dark matter constraints, but we include these curves to illustrate the mass dependence of the signals. CMB constraints from Ref.~\cite{Clarke:2020bil} are superimposed in light green. } \label{fig:gws-at-cmb} \end{figure} To illustrate the constraining power on the present abundance of friendly axions, we consider masses varying from $10^{-27}$ to $10^{-20} \, \mathrm{eV}$, each taking the fraction of dark matter in friendly axions to saturate limits on ultralight axions from CMB and large scale structure data~\cite{Lague:2021frh}. Because the CMB is highly sensitive to low-frequency tensor modes (in terms of their effective energy density)~\cite{Clarke:2020bil}, the CMB can place tight constraints on the fraction of dark matter in hyperlight friendly axions. For $m = 10^{-27} \, \mathrm{eV}$, Planck and BICEP2/\textit{Keck}~\cite{Planck:2018jri,BICEP2:2018kqh} allow only a $0.1 \%$ friendly subcomponent.\footnote{ Note that the most recent BICEP/\textit{Keck} data release further improved constraints on the tensor-to-scalar ratio by a factor of $\sim 1.6$, and that CMB-S4 projects to provide upwards of a further factor of $\sim 30$~\cite{CMB-S4:2020lpa}. 
} We may obtain a heuristic bound as a function of mass by extrapolating the $f_\mathrm{GW}^3$ tail to the frequency corresponding to the horizon size at matter-radiation equality, $f_\mathrm{eq} = 1.54 \times 10^{-15} \, \mathrm{Hz}$, where Ref.~\cite{Clarke:2020bil} reports an upper limit of $\Omega_{\mathrm{GW}, 0}^\mathrm{bound} h^2 = 3.2 \times 10^{-16}$. Applying the scalings of \cref{eqn:omega-gw-scaling,eqn:gw-frequency-scaling}, the CMB probes \begin{align} \frac{\Omega_h + \Omega_l}{\Omega_\mathrm{DM}} \lesssim 10^{-2} \left( \frac{m}{5 \times 10^{-27} \, \mathrm{eV}} \right)^{5/4} \sqrt{ \frac{\Omega_{\mathrm{GW}, 0}^\mathrm{bound}(f_\mathrm{eq}) h^2}{ 3.2 \times 10^{-16} } }, \end{align} which is only relevant for masses $m \gtrsim 10^{-27} \, \mathrm{eV}$, for which the signals are produced before recombination. With the bound of Ref.~\cite{Clarke:2020bil}, the limits become irrelevant (i.e., order unity) for masses $m \gtrsim 10^{-25} \, \mathrm{eV}$, well below the present lower limits on the mass of fuzzy dark matter. \subsection{Driven oscillons} \label{sec:drivenOscillons} Beyond their important role in the nonperturbative dynamics of friendly axions, oscillons have long been a subject of interest~\cite{kudryavtsev1975solitonlike,Makhankov:1978rg,Kolb:1993hw,Salmi:2012ta,Amin:2011hj,Kawasaki:2019czd}. As nontopological excitations, oscillons generically decay by radiating semirelativistic modes, making their lifetime an interesting object of study~\cite{Olle:2020qqy,Zhang:2020bec,Cyncynates:2021rtf}. In models with friendly axions, the lifetime of an oscillon can be extended beyond its in-vacuum expectation because of energy transfer from the long axion~\cite{Cyncynates:2021xzw}.
In this section, we show that short axion oscillons are driven by the long axion via autoresonance, obeying equations analogous to those for the homogeneous fields.\footnote{ Other related nonlinear wave equations exhibit oscillons/solitons driven by autoresonance, e.g., Refs.~\cite{friedland2005excitation,friedland2006emergence,maslov2007breather,batalov2011control,karachalios2019excitation}. } We then provide analytic results to quantify the oscillon lifetime enhancement and numerical results to support our analysis. Because of the importance of nonlinear dynamics, it is more convenient to discuss oscillons in the interaction basis of the short and long axion, which we adopt consistently throughout this section. An oscillon is a quasi-periodic, quasi-localized excitation of the axion field, and its fundamental frequency $\omega$ is smaller than the rest mass of the axion $m$. After its initial formation, the binding energy per particle inside an isolated oscillon, \begin{align} E_\te{bind} = m - \omega, \end{align} is a decreasing function of time, and consequently the characteristic size of the oscillon \begin{align} r_\te{osc} \approx \f{\pi}{\sqrt{2 m E_\te{bind}}}, \end{align} is an increasing function of time. Over time its rate of radiation falls off approximately exponentially with this growing separation of scales~\cite{Cyncynates:2021rtf}: \begin{align} P_\te{rad}/f^2\sim \exp\ps{-\f{r_\te{osc}}{\lambda_\te{rad}}}\sim \exp\ps{-\f{3\omega}{\sqrt{2 m (m - \omega)}}}, \end{align} where $\lambda_\te{rad} = \pi/(3\omega)$ is the wavelength of the radiation emitted at frequency $3\omega$ by $3\to1$ processes, which dominate the decay for most $\omega$ in potentials with parity symmetry. Thus, as oscillons age and $\omega$ increases toward $m$, the oscillon radiates more and more slowly.
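To make these scalings concrete, the approximate expressions above can be evaluated numerically; the following is a minimal sketch in units $m = 1$ (an illustration of the formulas quoted here, not the full computation of Ref.~\cite{Cyncynates:2021rtf}).

```python
import math

def r_osc(omega, m=1.0):
    """Characteristic oscillon size, r_osc ~ pi / sqrt(2 m E_bind)."""
    return math.pi / math.sqrt(2.0 * m * (m - omega))

def rad_suppression(omega, m=1.0):
    """Exponential suppression of the radiated power,
    P_rad / f^2 ~ exp(-3 omega / sqrt(2 m (m - omega)))."""
    return math.exp(-3.0 * omega / math.sqrt(2.0 * m * (m - omega)))

# As omega -> m, the oscillon grows and its radiation is increasingly suppressed.
for omega in (0.80, 0.85, 0.90, 0.95):
    print(f"omega = {omega:.2f}: r_osc = {r_osc(omega):5.2f}/m,"
          f" suppression ~ {rad_suppression(omega):.1e}")
```

The monotonic trends (growing $r_\te{osc}$, shrinking radiated power) are the essential ingredients behind the long tail of the oscillon lifetime.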
However, the oscillon is not infinitely long-lived: a geometrical consequence of living in three dimensions is that an oscillon must die at a frequency $\omega_\te{crit} < m$ (see e.g., Refs.~\cite{Fodor:2009kf,Fodor:2019ftc,Cyncynates:2021rtf}). An oscillon therefore typically spends the majority of its lifetime at a frequency close to $\omega_\te{crit}$. The mechanism of radiation (which is discussed in detail in Refs.~\cite{Fodor:2019ftc,Zhang:2020bec,Cyncynates:2021rtf}) does not play an important role in our analysis, so we may simply characterize the decay rate of the oscillon by its instantaneous lifetime: \begin{align} T_\te{inst}(\omega)&= \f{E_\te{osc}(\omega)}{P_\te{rad}(\omega)}\equiv \f{1}{\Gamma_\te{inst}(\omega)}, \end{align} where $E_\te{osc}$ is the total bound energy in the oscillon and $P_\te{rad}$ is the power radiated by the oscillon, all measured while the oscillon is at a frequency $\omega$. [The precise values of $E_\mathrm{osc}(\omega)$ and $P_\mathrm{rad}(\omega)$ may be calculated using the software package \href{https://github.com/SimpleOscillon/Code}{\faGithub\hspace{0.1cm}\textsf{Simple Oscillon}}.] As shown in earlier sections, friendly axions admit oscillon solutions too, although only the short axion is likely to form them. To study this scenario, we work in the limit of a homogeneous long axion, $\phi_L = \Phi_L(t)$, and model the short axion as a spherically symmetric configuration with a fixed radial profile $\phi_S(r, t) = \Phi_S(t) R_S(r)$. To be self-consistent, we work in the limit where the short axion does not backreact onto the long axion, i.e., $f_S \ll f_L$. The fixed-profile approximation neglects radiative losses, which we reintroduce via an artificial damping term with coefficient $\Gamma_\mathrm{inst}$.
In the small $\Phi_L/f_L$ limit, keeping only linear terms, \begin{subequations}\label{eq:oscillonEffectiveEOM} \begin{align} \ddot{\Phi}_L + \f{3}{2t}\dot{\Phi}_L &= - m^2\mu^2\Phi_L, \\ \ddot{\Phi}_S + \p{\f{3}{2t} + \Gamma_\te{inst}} \dot{\Phi}_S &= - V_0'(\Phi_S) - V_1'(\Phi_S)\f{\Phi_L}{f_L}. \end{align} \end{subequations} Here $V_0$ and $V_1$ are effective potentials derived by integrating out $R_S$. Ultimately, the precise form of these potentials is unimportant, their salient feature being that they possess attractive nonlinearities. These equations are thus of precisely the same form as those for the homogeneous system studied in Ref.~\cite{Cyncynates:2021xzw}, and autoresonance between $\Phi_S$ and $\Phi_L$ therefore occurs for a broad range of initial conditions (although the likelihood that any given patch of space leads to oscillon formation and autoresonance is most easily assessed with simulations). \Cref{fig:phaseLock} demonstrates that autoresonance between the short oscillons and the long axion occurs in our full $3+1D$ simulations. \begin{figure}[t!] \centering \includegraphics[width = \columnwidth]{driven-oscillon-amp-and-driver-broken.pdf} \caption{ Central amplitude of a short-axion oscillon (left vertical axis, blue curves) and average amplitude of the long axion (right vertical axis, red curves), measured in a cosmological simulation with $N^3 = 512^3$ gridpoints, box length $L = \pi / m$, mass ratio $\mu = 0.75$, and decay constant ratio $\mathcal{F} = 50$. Note the break in the horizontal axis between $520 \lesssim m t \lesssim 850$. } \label{fig:phaseLock} \end{figure} The short-axion oscillon oscillates at the same frequency $\omega = \mu m$ as the driver $\left\langle \phi_L \right\rangle$, conclusively demonstrating local nonlinear resonance inside the oscillon.
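As a sanity check on the driver dynamics, the long-axion equation in \cref{eq:oscillonEffectiveEOM} can be integrated directly; the sketch below (a simplification with assumed initial conditions and units $m\mu = 1$, not our production simulation code) verifies the expected WKB behavior $\Phi_L \propto t^{-3/4}\cos(\mu m t + \varphi)$.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Long-axion driver of eq. (oscillonEffectiveEOM), in units where m*mu = 1:
#   Phi'' + (3/(2t)) Phi' = -Phi.
def rhs(t, y):
    phi, dphi = y
    return [dphi, -phi - 1.5 / t * dphi]

t0, t1 = 20.0, 500.0
sol = solve_ivp(rhs, (t0, t1), [1.0, 0.0], rtol=1e-10, atol=1e-12,
                t_eval=np.arange(t0, t1, 0.02))

# Compare peak amplitudes in two late-time windows (each wider than one
# oscillation period) against the WKB envelope Phi_L ~ t^(-3/4).
def peak(tmin, tmax):
    mask = (sol.t >= tmin) & (sol.t <= tmax)
    i = np.argmax(np.abs(sol.y[0][mask]))
    return sol.t[mask][i], np.abs(sol.y[0][mask][i])

ta, pa = peak(95.0, 110.0)
tb, pb = peak(395.0, 410.0)
print(f"measured ratio {pb / pa:.4f} vs WKB (tb/ta)^(-3/4) = {(tb / ta) ** -0.75:.4f}")
```

The measured envelope ratio matches the $t^{-3/4}$ redshift of the driver amplitude to well below the percent level at these late times.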
The phase offset between the short and long axion evolves from roughly $\pi/3$ at early times (left side of \cref{fig:phaseLock}) to about $\pi/2$ just before oscillon death (right side), indicating an increasingly efficient energy transfer from the long axion to the short oscillon. The energy transfer rate peaks when the phase shift reaches $\pi / 2$, and subsequently decreases as the long axion's amplitude redshifts. In analogy to homogeneous autoresonance, the long axion can then no longer support the short axion against its own radiation. Using the equations of motion \cref{eq:oscillonEffectiveEOM} for the spherically symmetric system, we can solve for the dynamics of the short oscillon and the long driver to arrive at the driven oscillon lifetime $t_\te{death}$ in terms of its instantaneous vacuum lifetime $T_\te{inst}(\mu)$. The details of this calculation are described in \cref{app:oscillons}, and we summarize the results here. First, the long driver amplitude must be large enough that it supplies sufficient energy to the oscillon, leading to \begin{align} \label{eq:driveLimit} m \mu t_\te{death}\approx \left[ m \mu T_\te{inst}(\mu) \right]^{4/3}. \end{align} We compare this analytic scaling to simulations in \cref{fig:lifetime}, verifying the $(\mu T_\mathrm{inst}(\mu))^{4/3}$ dependence in the range $0.73 \lesssim \mu \leq 0.83$; spherically symmetric simulations further verify the scaling of \cref{eq:driveLimit} out to $\mu = 0.89$. \begin{figure}[t!] \centering \includegraphics[width = \columnwidth]{driven-oscillon-lifetime.pdf} \caption{ Driven oscillon lifetime measured in $3+1D$ cosmological simulations (red dots) versus the analytic prediction of \cref{eq:driveLimit} (blue line) at a decay constant ratio $\mathcal{F} = 1000$ (large enough that backreaction is never important). The analytic curve only contains one free parameter corresponding to an order-1 multiplicative constant in \cref{eq:driveLimit} (see \cref{app:oscillons}).
Note that the driver frequency $\omega_\mathrm{driver}$ simply corresponds to the mass of the long axion, $\mu m$. } \label{fig:lifetime} \end{figure} At lower frequencies the driven oscillon lifetime is shorter than the vacuum lifetime: once the oscillon falls off autoresonance, it rapidly dumps its energy into the long axion field, cutting its life short. Larger values of $\mu$ require longer ($3+1D$) simulation runtimes than our computational resources permit. The oscillon also backreacts on the driver, inducing spatial perturbations. These fluctuations remain small only until \begin{align} \label{eq:BRLimit} m \mu t_\te{death} \lesssim \mathcal{F}^{8/3}. \end{align} This scaling is demonstrated in \cref{fig:Backreaction}: the spherically symmetric solutions to the full coupled nonlinear wave equations (see \cref{eq:OscillonEOM}) exhibit $\mathcal{F}^{8/3}$ behavior for $\mathcal{F} \lesssim 40$, at which point the lifetime saturates the driving limit, \cref{eq:driveLimit}. \begin{figure}[t!] \centering \includegraphics[width = \columnwidth]{driven-oscillon-backreaction.pdf} \caption{Lifetime of the driven oscillon versus decay constant ratio $\mathcal{F}$ in a spherically symmetric simulation with $\mu = 0.85$ (red) with superposed $\mathcal{F}^{8/3}$ scaling predicted by \cref{eq:BRLimit} (blue). The wiggles in the red curve are due to the precise phase relationship between the oscillon and the driver, a feature that would be washed out in any realistic simulation with randomly sampled initial phases. } \label{fig:Backreaction} \end{figure} Finally, the oscillon siphons energy from the long axion, depleting the latter's local energy density. Nearby regions of space then resupply this region with long axion; requiring that this resupply rate exceeds the depletion rate due to the oscillon leads to the final constraint \begin{align} \label{eq:DepletionLimit} t_\te{death}\lesssim \mathcal{F}^2 T_\te{inst}(\mu).
\end{align} One can check that there is no region of $\mathcal{F}-T_\mathrm{inst}$ parameter space where \cref{eq:DepletionLimit} dominates the lifetime. Taken together, the lifetime is \begin{align} \label{eq:lifetime} m \mu t_\te{death} &\approx \min\left( \left[ m \mu T_\te{inst}(\mu) \right]^{4/3}, \mathcal{F}^{8/3} \right). \end{align} While the lifetimes of these driven oscillons are parametrically enhanced relative to their in-vacuum expected lifespans, they are still far too short-lived to be of any cosmological relevance. Nonetheless, these novel dynamics---interesting in their own right---potentially broaden the class of scalar field theories that admit oscillons surviving into the present day. We discuss this possibility and associated challenges in App.~\ref{app:generalPotentialLongevity}. \section{Discussion} \label{sec:discussion} Nonlinear effects in the early Universe can have a drastic impact on the late-time distribution of energy in dark sectors. The ``friendly'' axion system of Ref.~\cite{Cyncynates:2021xzw} provides a concrete example model, where nonlinearities dominate the dynamics of both the background and fluctuations and have important consequences for direct detection experiments. In this paper, we numerically evolved the full system, showing that large axion fluctuations---in particular the nucleation of oscillons---work to equilibrate the relic densities of the two axions for a moderate ratio of the heavy axion's decay constant to that of the light one ($6 \lesssim \mathcal{F} \lesssim 20$). For smaller decay constant ratios $\mathcal{F} \lesssim 6$, spatial fluctuations have a negligible effect on the dynamics, and we recover the results of homogeneous computations from Ref.~\cite{Cyncynates:2021xzw}. At larger ratios $\mathcal{F} \gg 20$, oscillon nucleation prevents the heavy axion from ever attaining a substantial abundance. 
The novel dynamics in the intermediate-$\mathcal{F}$ regime position friendly axions to be positively identified as two-component dark matter by direct detection experiments. The lighter axion's abundance is reduced by no more than a factor of about two, in sharp contrast to expectations based on homogeneous approximations in which its abundance would be parametrically depleted~\cite{Cyncynates:2021xzw}. The heavier axion's abundance (and therefore detection prospects) is still parametrically enhanced (by a factor of $\approx \mathcal{F}^2 / 2$), but only at a moderate cost to the visibility of the lighter axion. Many upcoming axion direct detection experiments~\cite{Brouwer:2022bwo,Alesini:2017ifp,Stern:2016bbw,DMRadio:2022pkf} would potentially be sensitive to \textit{both} axions in a friendly pair having masses within the experiment's sensitivity band. Direct detection of axion dark matter with a decay constant substantially smaller than that expected in standard misalignment scenarios should prompt a search for a second, more weakly coupled axion at a nearby mass. We also computed the stochastic gravitational wave background produced by oscillon nucleation in a friendly axion sector. If friendly axions compose all of the dark matter, the present-day strain is well out of reach of near-future gravitational wave experiments, but the cosmic microwave background polarization does constrain (and in the future may probe) hyperlight friendly pairs making up a subcomponent of dark matter. Density and vector perturbations are also produced in these scenarios; their effect on the CMB (and other cosmological observables) is less straightforward to evaluate, but they may well provide even more stringent constraints than just the (as-yet unobserved) primordial $B$-mode polarization. 
Finally, for $\mathcal{F} \gtrsim 20$, although autoresonance is quenched by oscillon production (preventing the axions' energies from equalizing), our simulations demonstrate that short-axion oscillons produced in the early Universe are driven by the long axion background, parametrically extending their lifetimes. For the specific friendly axion potential studied here (\cref{eq:twoAxionPotential}), driven oscillons can live one to two orders of magnitude longer than their undriven counterparts. Though they are still not long-lived enough to be astrophysically relevant, even at the lightest possible axion masses, this may not be the case for other scalar potentials. Similar dynamics in other coupled axion theories may lead to driven oscillons that could naturally live until the present day, with numerous possible observational signatures including gravitational lensing~\cite{VanTilburg:2018ykj}, optical lensing~\cite{Prabhu:2020pzm}, and electromagnetic bursts~\cite{Prabhu:2020yif,Buckley:2020fmh,Amin:2020vja,Amin:2021tnq}. Our results are qualitatively insensitive to the amplitude of the initial primordial curvature perturbations. However, it does determine the precise minimum and maximum decay constant ratios for which the two axion energy densities equalize. For simplicity, we used a scale-invariant initial power spectrum with magnitude set by the \textit{Planck} measurements at CMB scales, but in reality adiabatic fluctuations are red-tilted on large scales and are much less constrained on smaller scales. If there is less initial power at small scales, the time to nonlinearity and oscillon formation increases, allowing friendly pairs with larger decay constant ratios $\mathcal{F} \gg 20$ to achieve equipartition.
We also note that whether the initial axion perturbations are adiabatic as above or seeded directly in the axion field (i.e., isocurvature perturbations) does not measurably affect any of our results, as verified by a set of simulations with purely isocurvature initial conditions. The string axiverse is a rare example of a low-energy signature of quantum gravity, most of whose novel predictions reside at the grand-unified or string scales, far outside experimental reach. In general, an axiverse can comprise a multitude of light, coupled axions; this work provides further evidence of the outsized impact nonlinear effects and interactions have on the phenomenology of the axiverse. The friendly model considered here, for which nonperturbative dynamics revise predictions by multiple orders of magnitude, is only a prototypical example; further work is necessary to understand the phenomenology of fully realistic axiverses and the critical role played by nonlinear dynamics. \begin{acknowledgments} We thank Masha Baryakhtar and Davide Racco for thoughtful feedback on this manuscript, Savas Dimopoulos for useful discussions on ``oscillon longevity centers,'' and Dmitriy Zhigunov for helpful conversations about strongly nonlinear fields. D.C.\ is grateful for the support of the Stanford Institute for Theoretical Physics (SITP), the National Science Foundation under Grant No.\ PHY-2014215, and the Gordon and Betty Moore Foundation under Grant No.\ GBMF7946. O.S.\ is supported by a DARE fellowship from the Vice Provost for Graduate Education at Stanford University. J.O.T.\ is supported by the ARCS Foundation, and is thankful to the Perimeter Institute for its hospitality during the final stages of completing this manuscript. 
This work used the Extreme Science and Engineering Discovery Environment (XSEDE)~\cite{xsede}, which is supported by National Science Foundation grant number ACI-1548562; simulations were run through allocation TG-PHY200037 on the Anvil cluster at Purdue University, Bridges-2 at the Pittsburgh Supercomputing Center which is supported by NSF award number ACI-1928147, and Expanse at the San Diego Supercomputer Center. This research was supported in part by Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported by the Government of Canada through the Department of Innovation, Science and Economic Development and by the Province of Ontario through the Ministry of Research, Innovation and Science. Simulations in this work were implemented with \textsf{pystella}~\cite{pystella}, which is available at \href{https://github.com/zachjweiner/pystella}{github.com/zachjweiner/pystella} and makes use of the Python packages \textsf{PyOpenCL}~\cite{kloeckner_pycuda_2012}, \textsf{Loopy}~\cite{kloeckner_loopy_2014}, \textsf{mpi4py}~\cite{DALCIN2008655,DALCIN20051108}, \textsf{mpi4py-fft}~\cite{jpdc_fft}, and \textsf{NumPy}~\cite{Harris:2020xlr}. This work also made use of the packages \textsf{SciPy}~\cite{Virtanen:2019joe}, \textsf{matplotlib}~\cite{Hunter:2007ouj}, \textsf{SymPy}~\cite{Meurer:2017yhf}, and \textsf{CMasher}~\cite{cmasher}. \end{acknowledgments}
\section{Introduction} \textbf{Background.} Network slicing is a concept introduced in next-generation networks to meet the requirements of the different services they provide. By applying this concept, multiple logical networks can be deployed on a common infrastructure network to provide services with possibly conflicting requirements; it encompasses Radio Access Network (RAN), transport network, and Core Network (CN) virtualization. A network slice consists of Virtual Network Functions (VNFs) and Virtual Links (VLs). Network resources are limited; therefore, they should be optimally allocated to the slices~\cite{MSU-CSE-06-2, alliance2016description, MSU-CSE-06-1}. There exist many studies on the efficient allocation of network resources. These studies assume that the resource volumes requested for VNFs and VLs are fixed and known~\cite{ebrahimi2019joint,fendt2018network,farkiani2019fast,chen2020network,bhamare2017optimal,halabian2019distributed}, while in real-world problems these volumes change dynamically and are usually unknown~\cite{marotta2017energy,bauschert2014network}; that is, the resource requirements of each slice's VNFs and VLs are uncertain for various reasons (e.g., changes in the number of users of each slice). If this uncertainty is ignored in resource allocation, even a slight change in demand may lead to slice violation. In network slicing, slice requests arrive continuously, known as online slice requests; for each newly arrived slice, the network must be checked for sufficient resources and, if they are available, the requested resources must be allocated. Each slice has a lifespan and is active in the network either temporarily or permanently.
At the end of a slice's lifespan, its resources are reclaimed.\\ \textbf{Contributions.} The focus of this article is on allocating resources to online slice requests when the resource volumes required by VNFs and VLs are uncertain. There are two approaches to managing this uncertainty: 1) reactive and 2) proactive. The reactive approach leads to unpredictable and often significant delays in slice operation, since resources must be resupplied to active slices on demand, for instance through Virtual Machine (VM) migrations. In the proactive approach, information about the uncertain demand is exploited in advance to avoid on-the-spot resource reallocation. This article adopts the proactive approach. The objective is to increase the number of accepted slices and decrease the servers' and switches' power consumption. The problem is cast as Energy-Aware Online Network Slicing under Uncertainties (EA-ONSU), and a system model is proposed for it. In this system model, each slice is considered a Virtual Network (VN) comprising VMs and VLs. The tenants, i.e., the slice owners, know their requirements and are allowed to manage the VNFs needed on their VNs and serve their customers accordingly~\cite{ebrahimi2019joint}. Admission control and proactive resource allocation in the proposed system model are formulated as a Binary Linear Program (BLP), where the concept of $\Gamma$-Robustness is applied to handle the uncertainties. The contributions of this article are summarized as follows: \begin{itemize} \item Due to practical limitations such as fluctuations in resource demand (e.g., unusual changes caused by growth in a slice's number of customers), we model the uncertainty in the resources requested by each slice's VNFs and VLs.
\item The joint admission control and proactive resource allocation optimization problem for slice requests under demand uncertainty is formulated as a BLP by applying the $\Gamma$-Robustness concept, named the Robust Infrastructure Network Slicing (ROBINS) formulation. The objective is to increase the number of accepted slices and decrease the physical servers' and switches' power consumption. \item A new optimal algorithm, named Optimal Energy-Aware Online Network Slicing under Uncertainties (OEA-ONSU), is proposed for accepting and allocating resources to online slice requests by applying ROBINS. OEA-ONSU is a three-step algorithm: 1) previously admitted slices are checked for expiration, 2) admission control and resource allocation are performed for newly arrived slices, and 3) the status of the infrastructure network is updated. To apply the proposed method to real-world, large-scale networks and find a near-optimal solution in reasonable time, a novel heuristic algorithm, named Near-optimal Energy-Aware Online Network Slicing under Uncertainties (NEA-ONSU), is developed. \item The performance results of the OEA-ONSU and NEA-ONSU algorithms on the Abilene network \cite{orlowski2010sndlib} are presented. The results reveal the efficiency of the proposed algorithms in accepting requests, allocating resources in an energy-efficient manner, and providing adjustable tolerance against uncertainties in VNF workloads and VL traffic separately. Considering the acceptance ratio and power consumption, the two important components of the objective function, NEA-ONSU has optimality gaps of about 7\% and 10\%, respectively, while being about 30X faster than OEA-ONSU. \end{itemize} \textbf{Paper organization.} The rest of this article is organized as follows: the related works are reviewed in Sec.
\ref{Related works}; the system is modeled in Sec.~\ref{System Model}; the mathematical formulation of the online robust admission control and resource allocation is discussed in Sec.~\ref{online robust admission control and resource allocation: Mathematical Formulation}; its solution methodology is introduced in Sec.~\ref{online robust admission control and resource allocation: Solution Methodology}; the numerical results are presented in Sec.~\ref{Numerical Results}; and the article is concluded in Sec.~\ref{Conclusion}. \newcolumntype{P}[1]{>{\centering\arraybackslash}p{#1}} \begin{table*}[htpb] \centering \caption{Summary of Related Research} \label{Related-Research} \small \resizebox{\textwidth}{!}{ \begin{tabular}{||c|c|c|c|c|c|c|c|c|c|c|c|c||} \hline \textbf{ref.}& \begin{tabular}{c}\textbf{CPU} \\\textbf{Robustness} \end{tabular}& \begin{tabular}{c}\textbf{RAM} \\\textbf{Robustness} \end{tabular}& \begin{tabular}{c}\textbf{Storage} \\\textbf{Robustness}\end{tabular}& \begin{tabular}{c}\textbf{Bandwidth}\\ \textbf{Robustness}\end{tabular}& \begin{tabular}{c}\textbf{VNF} \\\textbf{Placement}\end{tabular}& \begin{tabular}{c}\textbf{VLE}\end{tabular}& \begin{tabular}{c}\textbf{Delay}\end{tabular}& \textbf{Online} & \begin{tabular}{c}\textbf{Applicable} \\\textbf{on Large}\\ \textbf{Networks}\end{tabular}& \begin{tabular}{c}\textbf{Decreasing} \\\textbf{Power} \\\textbf{Consumption} \\\textbf{of Servers}\end{tabular}& \begin{tabular}{c}\textbf{Decreasing}\\ \textbf{Power}\\ \textbf{Consumption}\\ \textbf{of Switches}\end{tabular}& \begin{tabular}{c}\textbf{Admission} \\\textbf{Control}\end{tabular} \\ \hline \cite{ebrahimi2019joint} & & & & & \checkmark & \checkmark & \checkmark & & & \checkmark & \checkmark & \checkmark\\ \hline \cite{farkiani2019fast} & & & & & \checkmark & \checkmark & & & \checkmark & \checkmark & \checkmark & \checkmark\\ \hline \cite{chen2020network} & & & & & \checkmark & \checkmark &
\checkmark & & & \checkmark & & \\ \hline \cite{bhamare2017optimal} & & & & & \checkmark & \checkmark & \checkmark & & \checkmark & & & \checkmark\\ \hline \cite{sun2019energy} & & & & & \checkmark & \checkmark & & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark\\ \hline \cite{soualah2019online} & & & & & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & & \checkmark \\ \hline \cite{ghazizadeh2019joint} & & & & & \checkmark & \checkmark & \checkmark & & \checkmark & & & \\ \hline \cite{varasteh2021holu} & & & & & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark\\ \hline \cite{chen2021optimal} & & & & & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark\\ \hline \cite{marotta2017energy} & \checkmark & \checkmark & \checkmark & & \checkmark & \checkmark & \checkmark & & \checkmark & \checkmark & \checkmark & \\ \hline \cite{hosseini2019probabilistic} & \checkmark & & & \checkmark & \checkmark & \checkmark & & \checkmark & \checkmark & & & \\ \hline \cite{marotta2017fast} & \checkmark & \checkmark & & & \checkmark & \checkmark & \checkmark & & \checkmark & \checkmark & \checkmark & \\ \hline \cite{nguyen2019proactive} & \checkmark & & & \checkmark & \checkmark & \checkmark & \checkmark & & \checkmark & \checkmark & \checkmark & \\ \hline \cite{wen2018robustness} & & & & \checkmark & \checkmark & \checkmark & & \checkmark & & & & \\ \hline \cite{reddy2016robust} & \checkmark & & & \checkmark & \checkmark & \checkmark & \checkmark & & \checkmark & \checkmark & \checkmark & \checkmark \\ \hline \cite{baumgartner2017network} & \checkmark & & & \checkmark & \checkmark & \checkmark & \checkmark & & \checkmark & \checkmark & \checkmark & \\ \hline \cite{wen2017robust} & & & & \checkmark & \checkmark & \checkmark & & \checkmark & & & & \\ \hline \cite{bauschert2020fast} & & & & \checkmark & \checkmark & \checkmark & & \checkmark & 
\checkmark & \checkmark & \checkmark & \checkmark\\ \hline \cite{nguyen2020deadline} & \checkmark & & & \checkmark & \checkmark & \checkmark & \checkmark & & \checkmark & \checkmark & \checkmark & \\ \hline \cite{luu2021uncertainty} & \checkmark & \checkmark & & \checkmark & \checkmark & \checkmark & \checkmark & & \checkmark & \checkmark & \checkmark & \\ \hline \cite{luu2022admission} & \checkmark & \checkmark & & \checkmark & \checkmark & \checkmark & & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark\\ \hline This work & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark\\ \hline \end{tabular} } \end{table*} \section{Related works} \label{Related works} The related works are categorized in terms of uncertainty: 1) resource allocation with fixed resource demand \cite{ebrahimi2019joint,farkiani2019fast,sun2019energy,soualah2019online,chen2020network,bhamare2017optimal,ghazizadeh2019joint,varasteh2021holu,chen2021optimal}, and 2) resource allocation under demand uncertainties \cite{hosseini2019probabilistic,marotta2017energy,marotta2017fast,nguyen2019proactive,wen2018robustness,reddy2016robust,baumgartner2017network,wen2017robust,bauschert2020fast,nguyen2020deadline,luu2021uncertainty,luu2022admission}. \subsection{Resource allocation with fixed resource demand} The energy consumption of cloud nodes and the cost of bandwidth consumption are reduced in \cite{ebrahimi2019joint}, where the provided framework allows tenants to manage their slices and serve their customers. For this purpose, an Integer Linear Programming (ILP) formulation is designed for resource allocation and another for admission control. The issue of energy-aware service deployment is studied in \cite{farkiani2019fast}, where an ILP is formulated by considering limited VNF traffic processing capacity and management issues.
By applying the Benders decomposition, feasibility pump, and primal-dual algorithms, a fast and scalable algorithm with polynomial execution time is devised to compute a near-optimal solution. The problem of resource allocation in network slicing is studied in \cite{chen2020network}, where flexible routing, End-to-End (E2E) latency, and coordination overhead are of concern. The problem is formulated as a mixed binary linear program, with the objective of reducing the energy consumption of cloud nodes. The problem of placing VNFs to form Service Function Chains (SFCs) on geographically distributed clouds is addressed in \cite{bhamare2017optimal}. This problem is formulated as an ILP to reduce inter-cloud traffic and response time in a multi-cloud scenario. The total implementation cost and Service Level Agreements (SLAs) are of concern. The energy efficiency optimization for orchestrating online SFC requests in multi-domain networks is assessed in \cite{sun2019energy}, where the problem is formulated as an ILP, followed by a heuristic algorithm, which, in addition to meeting the needs of online SFCs, efficiently preserves the privacy of each cloud. An ILP formulation is presented in \cite{soualah2019online} to solve the Virtualized Network Function Forwarding Graph (VNF-FG) placement and chaining problem. VNFs are shared between tenants to optimize resource consumption and increase the infrastructure provider revenue. A mathematical formulation for reliability-aware VNF placement and routing considering Quality of Service (QoS) parameters is presented in \cite{ghazizadeh2019joint}, where a resource allocation algorithm, applying a shared protection scheme with Active-Standby redundancy, is proposed to optimize the redundant VNFs without affecting the QoS parameters. The problem is formulated as a Mixed Integer Linear Program (MILP), and a meta-heuristic algorithm is proposed to make the solution scalable in large-scale networks.
In \cite{varasteh2021holu}, the power-aware and delay-constrained joint VNF placement and routing (PD-VPR) problem is formulated as an ILP. Then, a fast online heuristic named Holu is developed to overcome the high computational complexity of the ILP. In \cite{chen2021optimal}, the problem of allocating resources to SFCs is considered, taking into account the E2E latency constraints of all services together with the cloud and communication resource budgets and energy efficiency constraints. The problem is formulated as a Mixed Binary Linear Program (MBLP). Then, an alternative MBLP formulation is developed that has the same optimal solution and is more computationally efficient when the dimension of the corresponding network becomes large. \subsection{Resource allocation under demand uncertainties} The way robust strategies for placing VNFs in virtual data centers impact energy saving and the level of protection against demand uncertainties is assessed in \cite{marotta2017energy}. Accordingly, a robust optimization model with a heuristic algorithm is proposed for reducing the energy consumption of the computational and network infrastructure while remaining robust to fluctuations in resource demands. To reduce energy consumption, the unused servers and switches' ports are turned off. The issue of mapping VLs into physical paths, known as Virtual Link Embedding (VLE), where the bandwidth requirements of VLs are uncertain, is assessed in \cite{hosseini2019probabilistic}. To have VLs with predictable performance, the mapping must assure the required E2E congestion probability of physical paths with no dependency on the characteristics of the paths onto which the VLs are mapped. Accordingly, a general uncertainty model is proposed in which the bandwidth requirements of VLs are uncertain, together with a model for uncertainty in the VNFs' demand. The VLE problem is formulated as a nonlinear optimization problem. Then, a model for large-scale networks is provided by applying an approximate formulation.
The robust network embedding problem under resource demand uncertainties is assessed in \cite{marotta2017fast}, where the delay limitations are considered by applying robust mixed integer optimization techniques. A model for proactive SFC is proposed in \cite{nguyen2019proactive} with the objective of preventing resource reallocation to SFCs when demands fluctuate; most earlier models are reactive, reallocating resources during demand fluctuations, which negatively affects the performance of delay-sensitive SFCs. Consequently, an SFC orchestration problem with a predefined deadline limitation is first formulated as a Mixed Integer Non-Linear Programming (MINLP) problem by applying the $\Gamma$-Robustness concept, and then an approximate algorithm is devised to solve large-scale instances. The problem of robust and E2E network slicing, where the slices are considered as a set of VNFs and the links between them, is discussed in \cite{wen2018robustness}. Bugs may occur in some VNFs in a random manner, causing some slices to lose their validity and hence triggering a slice recovery process. Because the traffic demand in each slice is stochastic, drastic changes in traffic demand can lead to slice reconfiguration. As to slice recovery in network slicing under bandwidth uncertainty, a solution is presented in \cite{wen2018robustness}, and a heuristic algorithm based on Variable Neighborhood Search (VNS) is devised to accelerate the problem solving time in large-scale networks. Researchers in \cite{reddy2016robust} applied the $\Gamma$-Robustness concept to resource allocation for VNF chains by considering their tolerable delays and the uncertainty in traffic demand, which leads to uncertainty in the resource requirements of VNFs; the problem is formulated as an MILP. An admission control method is adopted, and the objective function is to reduce the energy consumption of servers and links as well as the penalty due to the admission control process.
To improve the scalability of the model, an MILP-based VNS algorithm is presented as well. To overcome the scalability issues in \cite{reddy2016robust}, a model applying the concept of light robustness is presented in \cite{baumgartner2017network}. The problem of failure recovery in network slicing under uncertainty in traffic demand is assessed in \cite{wen2017robust}, where the problem is first formulated as an MILP and then robust optimization is applied to fulfill the stochastic traffic requests. A Mixed Integer Programming (MIP) formulation is presented in \cite{bauschert2020fast} for the network slicing problem under traffic uncertainty, and to reduce the computational complexity of the mathematical optimization, a meta-heuristic algorithm based on ant colony optimization is devised for the robust network slice design problem. To avoid frequent resource re-provisioning, the deadline-aware, co-located, and geo-distributed SFC orchestration problem with demand uncertainties is formulated as a robust optimization problem in \cite{nguyen2020deadline}, where exact and approximate algorithms are devised to solve it. Uncertain demand knowledge is used in computing a proactive SFC orchestration that can withstand fluctuations in dynamic service demand. A resource allocation approach in network slicing that is robust to a partly unknown number of users with random usage of slice resources is proposed in \cite{luu2021uncertainty}, where the objective is to increase the total earnings of the infrastructure provider (IP). Resources are allocated to slices so as to limit the impact on low-priority background services, which may coexist next to the slices in the infrastructure network. In this context, the probabilistic assurance is that the volume of network resources allocated to a slice will meet its requirements.
In \cite{luu2022admission}, a prioritized admission control mechanism for concurrent slices based on an infrastructure resource reservation approach is proposed. The reservation accounts for the dynamic nature of slice requests while being robust to uncertainties in slice resource demands. Adopting the perspective of an IP, reservation schemes are proposed that maximize the number of slices for which infrastructure resources can be granted while minimizing the costs charged to the Mobile Network Operators (MNOs). The findings of this article are compared with those of the reviewed articles in Table~\ref{Related-Research}. In practice, there exist different types of resources in the infrastructure network: the CPU, RAM, storage, and bandwidth that a slice needs to operate. To confront the demand uncertainties, the uncertainty of each resource demand must be taken into account. In this article, a comprehensive uncertainty formulation is devised and, following the business model introduced by 5GPPP \cite{queseth20175g} for network slicing, online slice requests with their requirements, including the propagation delay of VLs, are considered. Joint admission control and resource allocation is addressed, with the main objectives of increasing the accepted slices' count and decreasing the power consumption of the infrastructure network. This comprehensive formulation yields a large-scale problem that must be solved very fast to be applicable to large-scale networks, so a new and rapid near-optimal algorithm is devised. As observed, none of the available studies meets all the conditions addressed in this article.
\begin{table}[!ht] \renewcommand{\arraystretch}{1.5} \centering \scriptsize \caption{{Main Notations}} \label{table-notation} \resizebox{\columnwidth}{!}{ \begin{tabular}{>{\color{black}}c |>{\color{black}}c} \hline \textbf{Notation}& \textbf{Definition}\\\hline \multicolumn{2}{c}{\textbf{Input Parameters}}\\\hline $\mathcal{N},\mathcal{L}$ &Sets of IP's physical nodes and links\\\hline $\mathcal{N}_\text{Used},\mathcal{N}_\text{Unused}$ &Sets of previously used and unused nodes\\\hline ${R}_{n}$, ${R'}_{n}$ &\begin{tabular}{c}Vector of node $n$'s total and available resources\\ that includes CPU, RAM, and storage\end{tabular}\\ \hline ${P}^\text{Max}_{n}$ , ${P}^\text{Idle}_{n}$ & Maximum and idle power consumption of node $n$\\ \hline ${S_n}, {S^\text{Port}_n}$& Power consumption of switch $n$ and of one of its ports\\ \hline ${S'_n}$& Number of ports of switch $n$ connected to other switches\\ \hline $B_{l_{n,n'}}$, $B'_{l_{n,n'}}$ &Total and available bandwidth of the physical link $l_{n,n'}$\\\hline ${B}_\text{Total}$& Sum of all links' bandwidths\\ \hline $\tau_{l_{n,n'}}$ &Propagation delay of the physical link $l_{n,n'}$\\\hline ${\zeta}_{l_{n,n'}}$&\begin{tabular}{c} Includes switch $n$ and $n'$ idle power and power of ports used in link ${l}_{n,n'}$\end{tabular}\\ \hline ${\gamma}_{l_{n,n'}}$&\begin{tabular}{c} To specify used link $l_{n,n'}$, it is 1 if link $l_{n,n'}$ was used\end{tabular}\\ \hline $I^{l_{u,u'}}_{p_{n,n'}^b}$&\begin{tabular}{c}Indicator that determines physical link $l_{u,u'}$\\contributes to the $b^{\text{th}}$ path between $n$ and $n'$, if it has value 1\end{tabular}\\\hline $\mathcal{L}_{p_{n,n'}^b}$&\begin{tabular}{c}Set of all physical links $l_{u,u'}$ contributing to the $b^{\text{th}}$ path between $n$ and $n'$\end{tabular}\\\hline $\mathcal{P}_{n,n'}$ &Set of paths between $n$ and $n'$\\\hline $\mathcal{T}, \mathcal{T}_\text{c}$&Sets of all tenants whose slices are accepted, and of current tenants\\ \hline $\mathcal{D}_{t}, 
\mathcal{D}_{t\_\text{c}}$& \begin{tabular}{c} Sets of tenant $t$'s slices and current tenant $t$'s arrived slices \end{tabular} \\ \hline $\mathcal{S}, \mathcal{S}_\text{c}$ &Sets of total accepted slices and current time slot arrived slices\\ \hline $\phi_{t,d}$ &Lifespan of the $d^\text{th}$ slice of tenant $t$\\ \hline $\mathcal{M}_{t,d}, \mathcal{E}_{t,d}$ &Sets of $s_{t,d}$'s requested VMs and VLs\\ \hline $\nu_{m_{t,d}}$ &\begin{tabular}{c}Vector of the requested capacities for VM $m_{t,d}$ that\\ includes $\nu^\text{CPU}_{m_{t,d}}$, $\nu^\text{RAM}_{m_{t,d}}$, and $\nu^\text{STOR}_{m_{t,d}}$\end{tabular}\\ \hline $\Upsilon_{e_ {m, m'}}$ &\begin{tabular}{c}Vector of specifications of requested VL between the two VMs $m$ and $m'$ \\in slice $s_{t, d}$ that includes $\omega_ {e_ {m, m'}}$ and $\tau_\text{max}^{e_ {m, m'}}$ \end{tabular}\\\hline $\omega_{e_{m,m'}}$ &Requested data rate between two VMs $m_{t,d}$ and $m'_{t,d}$\\ \hline $\tau^{e_{m,m'}}_{\max}$ &\begin{tabular}{c}Maximum tolerable propagation delay between two VMs $m_{t,d}$ and $m'_{t,d}$\end{tabular}\\ \hline $\alpha_n$&To specify previously turned-on node $n$, it is 1 if node $n$ is turned-on\\\hline ${N}_\text{Total}$& Sum of all nodes' power consumption\\ \hline ${N}_\text{Used}$& Nodes' power consumption in previous time slots\\ \hline ${S}_\text{Total}$& Sum of all switches' power consumption\\ \hline ${S}_\text{Used}$& Switches' power consumption in previous time slots\\ \hline $\Gamma_1$,$\Gamma_2$& To specify protection levels for VMs and VLs\\ \hline ${\Delta}_1,{\Delta}_2$& To specify relative deviations for requested resources of VMs and VLs\\ \hline \multicolumn{2}{c}{\textbf{Decision Variables}}\\\hline ${\beta}_{n}$& 1 if node $n$ is turned on in current time slot\\\hline ${\delta}_{t,d}$& 1 if slice $d$ of tenant $t$ is accepted\\\hline $\pi_{n}^m$&\begin{tabular}{c} 1 if VM $m_{t,d}$ is placed on node $n$\end{tabular}\\\hline $\xi_{p_{n,n'}^b}^{e_{m,m'}}$&\begin{tabular}{c}1 if VL 
$e_{m,m'}$ is mapped on the $b^{\text{th}}$ path between $n$ and $n'$\end{tabular}\\\hline ${\vartheta}_{l_{n,n'}}$&\begin{tabular}{c} 1 if link $l_{n,n'}$ is used in current time slot\end{tabular}\\ \hline \multicolumn{2}{c}{\textbf{Auxiliary Variables}}\\\hline ${\eta}$& Number of rejected slices in current time slot\\\hline ${N}_\text{c}$&\begin{tabular}{c} Sum of used nodes' power consumption with respect to\\ arrived slices in current time slot\end{tabular}\\ \hline ${U}_{n}$&\begin{tabular}{c} Amount of node $n$'s used resources (CPU,RAM,Storage)\\ in current time slot \end{tabular}\\ \hline ${O}^{t}_{n}$&\begin{tabular}{c} Amount of node $n$'s used resources (CPU,RAM,Storage) for robustness\\ in time slot $t$ \end{tabular}\\ \hline ${S}_\text{c}$& Sum of the used switches' power consumption in current time slot\\ \hline ${U'}_{l_{n,n'}}$ &Amount of link $l_{n,n'}$'s used bandwidth in current time slot\\\hline ${O'}^{t}_{l_{n,n'}}$ &Amount of link $l_{n,n'}$'s used bandwidth for robustness in time slot $t$\\\hline \begin{tabular}{c}${\rho_{1}}^{m}_{n}$, ${z_{1}}_{n}$,\\ ${\rho_2}^{e_{m,m'}}_{p^b_{n,n'}}$, ${z_2}_{l_{u,u'}}$\end{tabular} & Robustness variables\\\hline \end{tabular} } \end{table} \section{System Model}\label{System Model} This system includes: 1) Infrastructure network model and 2) Slice requests model. The main notations are summarized in Table \ref{table-notation}. \begin{figure}[t] \center{\includegraphics[width=\columnwidth] {images/Network_Architecture.pdf}} \caption{\label{Architecture} Infrastructure Provider's Network Architecture} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{images/Slice-request-processing2.png} \caption{ The Process of Processing Each Tenant's Slice Request }\label{Slice-request-processing} \end{figure} \subsection{Infrastructure Network Model} An IP is assumed to provide network slices for several tenants on a shared network.
The infrastructure network is considered as an undirected graph $G(\mathcal{N}; \mathcal{L})$, where $\mathcal{N}$ is the set of cloud nodes\footnote{Each cloud node includes a forwarding device and a cluster of servers.} and base stations (BS), and $\mathcal{L} = [l_{n,n'} ]$ is the set of links which connect the nodes\footnote{We use the terms node and server interchangeably.} (the cloud nodes and BSs) of this graph; thus, if $l_{n,n'}=1$, the two nodes $n$ and $n'$ are connected, and otherwise they are not. As observed in Fig. (\ref{Architecture}), the distributed cloud nodes are considered in both the core and transport layers, and the abstract RAN \cite{wang2019reconfiguration} layer includes only BSs; together, these nodes form a network. It is worth noting that the abstract RAN implies that the RAN details are not considered. Only the computation and storage capacities of BSs and their forwarding devices \cite{wang2019reconfiguration}, which connect them to the transport layer, are considered. It is assumed that all nodes support Network Function Virtualization (NFV) and, as observed in Fig. (\ref{Slice-request-processing}), there is an orchestrator, including an NFV and Software Defined Networking (SDN) manager, which first receives the tenants' slice requests and then, if they are acceptable, creates the slices by allocating the required resources. The resources of each node include the CPU, RAM, and storage, considered as a vector $R_{n} = (R_{n}^\text{CPU}, R_{n}^\text{RAM}, R_{n}^\text{STOR})$ for each node $n\in \mathcal{N}$. The physical links of the network have fixed bandwidths; therefore, each link $l_{n,n'}\in \mathcal{L}$ has a limited bandwidth of $B_{l_{n,n'}}$. Because the nodes are distributed, there exists a considerable propagation delay; that is, $\tau_{l_{n,n'}}$ is the propagation delay between nodes $n$ and $n'\in \mathcal{N}$. \subsection{Slice Requests Model} There exist some tenants who request slices.
In this article, it is assumed that $\mathcal{T} = \{1,\ldots,T\}$ is the set of all tenants and $\mathcal{D}_{t} = \{1,\ldots,d_{t}\}$ is the set of slices requested by tenant $t$. Because each tenant can request different slices, $d_{t}$ in this set is the number of the slices requested by tenant $t$. In this process, an online resource allocation is of concern: at any given time, tenants who may or may not already be in the set $\mathcal{T}$ can transmit new slice requests to the IP. Consequently, another set, named $\mathcal{T}_\text{c}$, is considered, which includes the tenants whose current slice requests the IP must decide whether to accept and allocate resources to. The set $\mathcal{D}_{t\_\text{c}}$ is considered as well, which includes the current slice requests of tenant $t\in \mathcal{T}_\text{c}$. A set named $\mathcal{S}_\text{c}$ is considered to include the current slice requests; each member of $\mathcal{S}_\text{c}$ is represented by $s_{t,d}$, the $d^{\text{th}}$ slice request of tenant $t$ at the current time. The set named $\mathcal{S}$ is considered to include all the accepted slices. As observed in Fig. (\ref{Slice-request-processing}), when a tenant transmits a slice request to the orchestrator, first the possibility of accepting it is checked, and if the slice is accepted, the required resources are allocated and the slice is activated. When the tenant no longer needs the slice or its lifespan expires, the slice termination process is run and its resources are taken back; consequently, the three sets $\mathcal{T}$, $\mathcal{D}_{t}$, and $\mathcal{S}$ are updated after both slice resource allocation and expiration. Each slice request specifies the set of VMs and their associated VLs, represented by an undirected graph $s_{t, d} = (\mathcal{M}_{t, d},\mathcal{E}_{t, d})$, where $\mathcal{M}_{t, d}$ and $\mathcal{E}_{t, d}$ are the sets of the VMs and VLs requested by tenant $t$ for slice $d$, respectively.
For each VM $m_{t,d}\in \mathcal{M}_{t,d}$, a vector $\nu_{m_{t,d}} = [\nu_{m_{t,d}}^\text{CPU}, \nu_{m_{t,d}}^\text{RAM}, \nu_{m_{t,d}}^\text{STOR}]$ indicates the volume of the resources needed by VM $m_{t,d}$. From now on, the indices $t$ and $d$ are dropped from $m$ and $m'$ for readability. For each VL $e_{m,m'} \in \mathcal{E}_{t,d}$, $m$ and $m'\in \mathcal{M}_{t,d}$, the specifications are expressed by $\Upsilon_{e_{m,m'}} = [\omega_{e_{m,m'}}, \tau_\text{max}^{e_{m,m'}}]$, where $\omega_{e_{m,m'}}$ is the link data rate and $\tau_\text{max}^{e_{m,m'}}$ is the maximum tolerable delay. \section{Online Robust Admission Control and Resource Allocation: Mathematical Formulation} \label{online robust admission control and resource allocation: Mathematical Formulation} Based on the business model introduced by 5GPPP \cite{queseth20175g} for the relationship between the IP and the tenants, after a tenant transmits its slice request to the IP orchestrator, the resources will be allocated if the available resources are sufficient to accept the slice. In next-generation network slicing, slice requests with different lifespans arrive at the orchestrator at different times, and the IP must be able to allocate resources to new slices in an online manner and release the resources of expired slices. The high power consumption of IT infrastructures has become a major issue among researchers for environmental and economic reasons \cite{farkiani2019fast}. The total IT infrastructure power consumption consists of the power consumed by the turned-on switches with their applied ports and the power consumed by the turned-on servers. A robust optimization model is proposed for admission control and allocation of network resources to the online slice requests of tenants, to reduce the IT infrastructure's total power consumption while increasing the accepted slices' count.
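To make the slice-request model of the previous section concrete, it can be sketched as a small data structure. This is an illustration only; the class and field names below are hypothetical and not part of the formal model.

```python
from dataclasses import dataclass, field

@dataclass
class VM:
    """Requested VM capacities, mirroring nu_m = [CPU, RAM, storage]."""
    cpu: float
    ram: float
    stor: float

@dataclass
class VL:
    """Requested virtual link between VMs m and m': data rate and max tolerable delay."""
    m: str
    m2: str
    rate: float       # omega_{e_{m,m'}}
    max_delay: float  # tau_max^{e_{m,m'}}

@dataclass
class SliceRequest:
    """Slice s_{t,d} as an undirected graph (M_{t,d}, E_{t,d}) with lifespan phi_{t,d}."""
    tenant: int
    index: int
    vms: dict = field(default_factory=dict)   # VM name -> VM
    vls: list = field(default_factory=list)   # list of VL
    lifespan: object = None                   # None marks a permanent slice
```

A request is then built by filling `vms` and `vls`, after which the orchestrator's admission control can inspect the aggregated demands.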
Because the formation of each slice takes time, time slots are considered in this study: at the beginning of each time slot, the slices that arrived during the previous time slot are processed and the resources of slices with expired lifespans are released. In this process, $\phi_{t,d}$ determines the slice lifespan. An example of the admission control and resource allocation process for 5 slices that arrive at the network during 4 time slots is drawn in Fig. (\ref{Time-slot-operations}), where, at the beginning of each time slot, the process of checking for expired slices and allocating resources to newly accepted slices is evident. A predefined unit of lifespan is considered. Slice 1 with lifespan 1 is accepted in time slot 1; therefore, at the beginning of time slot 2, when slices 2 and 3 are accepted, slice 1 has expired and its resources are taken back. Because the lifespans of slices 2 and 3 are equal to 2, they expire in the middle of time slot 3. Slice 4, which arrives at the network in time slot 2, is rejected because its requested resources are not available in the infrastructure network. Slice 5, which arrives at the network in time slot 1, is accepted and, because it is a permanent slice, it never expires. Due to the online allocation, when a slice request arrives, the admission control and the resource allocation are made with respect to the remaining resources of the network. Because the admission control and resource allocation operations are made at the beginning of each time slot, the following model makes the optimal allocation only within each time slot; there is no global optimization.
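The per-time-slot procedure described above can be sketched as follows. This is a simplification for illustration: `admit` stands in for solving the optimization model of this section and is a hypothetical callback, not part of the paper's formulation.

```python
def process_time_slot(active, arrivals, now, admit):
    """One time-slot step: release expired slices, then process new arrivals."""
    # Release resources of slices whose lifespan has expired
    # (expires is None for a permanent slice that never expires).
    for s in [s for s in active if s["expires"] is not None and s["expires"] <= now]:
        active.remove(s)

    accepted, rejected = [], []
    for s in arrivals:
        # Admission control is made against the remaining resources,
        # represented here by the currently active slices.
        if admit(s, active):
            s["expires"] = None if s["lifespan"] is None else now + s["lifespan"]
            active.append(s)
            accepted.append(s)
        else:
            rejected.append(s)
    return accepted, rejected
```

Running this loop once per time slot reproduces the behaviour of Fig. (\ref{Time-slot-operations}): expirations are handled first, then arrivals are accepted or rejected against the remaining capacity.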
\begin{figure} \centering \includegraphics[width=\columnwidth]{images/Time-slot-operations.pdf} \caption{Online Slice Requests and Each Time Slot's Operations }\label{Time-slot-operations} \end{figure} \subsection{Slice Acceptance} To separate the accepted slices from the rejected ones during admission control, the variable $\delta_{t,d}$ is defined: if slice $d$ of tenant $t$ is accepted, its value is set to 1; if it is not possible to meet the requirements of the slice in terms of the resources required by its VMs and VLs and the tolerable delays of its VLs, its value is set to 0. The following constraint is defined to specify the rejected slices' count, which should be minimized in the objective function. \begin{align}\label{Slice Acceptance} &\text{C1: }\sum\limits_{t\in \mathcal{T}_\text{c}} \sum\limits_{d\in\mathcal{\mathcal{D}_\text{$t\_\text{c}$}}}(1-\mathcal{\delta}_{t,d}) = {\eta}, \end{align} where ${\eta}$ is an integer variable that specifies the count of rejected slices in the current time slot. \subsection{VNF Placement} Each slice is considered as a virtual network (VN) that includes VMs and VLs, and after the slice is accepted, the tenant, as the slice owner, manages the required VNFs on the VN. In this article, VNF placement means allocating resources to a VM. The binary variable $\pi^m_n$ is set to 1 if VM $m$ is placed on node $n$. To assure that a VM $m$ is placed on a single node and to prevent a rejected slice's VMs from being placed on the nodes, the following constraint is defined: \begin{align}\label{VNF Placement} &\text{C2: }\sum\limits_{n \in \mathcal{N}} {\pi}_{n}^{m} = {\delta}_{t,d}, \forall m \in \mathcal{M}_{t,d}, \forall t \in \mathcal{T}_\text{c}, \forall d \in \mathcal{D}_{t\_\text{c}}. \end{align} In this constraint, ${\delta}_{t,d}$ is applied on the right side to assure that decisions are made only for the placement of the accepted slices' VMs.
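As an illustrative sanity check on constraint C2 (not part of the formal model), a candidate placement can be validated by asserting that every VM of an accepted slice sits on exactly one node and that VMs of rejected slices sit on none; the function name and dictionary encoding below are hypothetical:

```python
def check_c2(pi, delta, vms, nodes):
    """C2: for each VM m of slice (t,d), sum_n pi[(n,m)] must equal delta[(t,d)].

    pi:    dict mapping (node, vm) -> 0/1 placement decision
    delta: dict mapping (tenant, slice) -> 0/1 acceptance decision
    vms:   dict mapping (tenant, slice) -> list of its VM names
    """
    for (t, d), accepted in delta.items():
        for m in vms[(t, d)]:
            placed = sum(pi.get((n, m), 0) for n in nodes)
            if placed != (1 if accepted else 0):
                return False
    return True
```

For example, a placement that puts a rejected slice's VM on some node, or splits one VM across two nodes, fails this check.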
\subsection{VL Embedding} There may be several paths between the two nodes $n$ and $n'$, denoted $\mathcal{P}_{n, n'} = \{1,\ldots,b,\ldots,P_{n, n'}\}$, where $P_{n, n'}$ is the count of physical paths between the two nodes $n$ and $n'$; consequently, the $b^\text{th}$ path is denoted by $p_{n, n'}^{b}$. The binary decision variable $\xi_{p^{b}_{n, n'}}^{e_{m, m'}}$ is defined to determine whether VL $e_{m, m'}$ is embedded over path $p_{n, n'}^{b}$; if so, $\xi_{p^{b}_{n, n'}}^{e_{m, m'}}$ is set to 1, otherwise 0. The following constraints assure that the VLs of the accepted slices are located on only one path between the nodes that contain the associated VMs of the VLs: \begin{align}\label{VL Embedding}\nonumber &\text{C3: }\sum\limits_{n\in \mathcal{N}} \sum\limits_{n' \in \mathcal{N}} \sum\limits_{b\in\mathcal{\text{$\mathcal{P}$}}_{n,n'}} \xi^{e_{m,m'}}_{p^b_{n,n'}} = \delta_{t,d},\,\forall e_{m,m'} \in \mathcal{E}_{t,d},\\ &\forall{t}\in \mathcal{T}_\text{c},\, \forall {d} \in \mathcal{D}_{t\_\text{c}}.\\\nonumber &\text{C4: }\xi_{p^{b}_{n,n'}}^{e_{m,m'}} = \pi_n^{m}\times \pi_{n'}^{m'}, \forall n,n' \in \mathcal{N},\forall b\in\mathcal{\text{$\mathcal{P}$}}_{n,n'}, \\ &\forall m,m' \in \mathcal{M}_{t,d}, \forall e_{m,m'} \in \mathcal{E}_{t,d},\forall t \in {\mathcal{T}_\text{c}},\forall{d}\in \mathcal{D}_{t\_\text{c}}.
\end{align} By applying the nonlinear constraint C4, we assure that if $\xi_{p^{b}_{n,n'}}^{e_{m,m'}}$ is 1, then $\pi_n^{m}$ and $\pi_{n'}^{m'}$ must be 1; consequently, to obtain a linear (BLP) model, the following transformation is applied: \begin{align} &\text{C4-1: }\xi^{e_{m,m'}}_{p^b_{n,n'}} \le \pi_{n}^{m} + 1 - \pi_{n'}^{m'},\\ &\text{C4-2: }\pi_{n}^{m} \le \xi^{e_{m,m'}}_{p^b_{n,n'}} + 1 - \pi_{n'}^{m'},\\\nonumber &\text{C4-3: }\xi^{e_{m,m'}}_{p^b_{n,n'}} \le \pi_{n'}^{m'},\,\forall n,n' \in \mathcal{N},\forall b\in\mathcal{\text{$\mathcal{P}$}}_{n,n'}, \\ &\forall m,m' \in \mathcal{M}_{t,d},\forall e_{m,m'} \in \mathcal{E}_{t,d},\forall{t}\in \mathcal{T}_\text{c},\forall{d}\in {\mathcal{D}_{t\_\text{c}}}. \end{align} Therefore, constraints (C4-1 to C4-3) should be applied instead of C4. \subsection{Node Resources Limitations} The sum of the resources allocated to the VMs placed on a node $n$ must not exceed its available capacity (${R'}_n$); thus, the following constraints prevail: \begin{align}\label{Node Resources Limitations} &\text{C5: }\sum\limits_{t\in \mathcal{T}_\text{c}}\sum\limits_{d\in{\mathcal{D}_\text{$t\_\text{c}$}}}\sum\limits_{m \in \mathcal{M}_{t,d}} \pi_{n}^m\times{\nu_m} = {U_n},\forall n \in \mathcal{N}.\\ &\text{C6: }{U_n} \le {{R'}_n}, \forall n\in \mathcal{N}. \end{align} Because each node and each VM have three types of resources (CPU, RAM, and storage), the new integer variable $U_n$ is defined to specify the sum of the resources required by the VMs placed on node $n$; $U_n$ is a vector containing the CPU, RAM, and storage required by those VMs. \subsection{Link Bandwidth Limitation} To determine whether the physical link $l_{u, u'}$ is a member of the path $p_{n, n'}^{b}$, the binary indicator $I_{p_{n, n'}^b}^{l_{u, u'}}$ is defined, where $\mathcal{L}_{p_{n, n'}^{b}}$ is the set of all physical links in path $p_{n, n'}^{b}$.
The following constraints specify the bandwidth of each link $l_{u,u'}$ used by the VLs mapped onto it, considering the available bandwidth ($B'_{l_{u,u'}}$). The variable $U'_{l_{u,u'}}$ is defined to specify the sum of the bandwidths required by the VLs on the physical link $l_{u,u'}$, and the variable ${\vartheta}_{l_{u,u'}}$ is defined to specify whether link $l_{u,u'}$ is used in the current time slot; if so, this variable is set to 1, otherwise 0. \begin{align}\label{Link Bandwidth Limitations}\nonumber &\text{C7: }\sum\limits_{t\in \mathcal{T}_\text{c}}\sum\limits_{d\in \mathcal{D}_{t\_\text{c}}} \sum\limits_{e_{m,m'} \in \mathcal{E}_{t,d}} \sum\limits_{n\in \mathcal{N}} \sum\limits_{n'\in \mathcal{N}} \sum\limits_{b\in\mathcal{\mathcal{P}}_{n,n'}} I^{l_{u,u'}}_{p_{n,n'}^b}\times\\ &\xi^{e_{m,m'}}_{p^b_{n,n'}}\times\omega_{e_{m,m'}} = {U'}_{l_{u,u'}} ,\,\forall l_{u,u'}\in \mathcal{L}_{p_{n,n'}^b}.\\ &\text{C8: }{{U'}_{l_{u,u'}}} \le {\vartheta}_{l_{u,u'}}\times{{B'}_{l_{u,u'}}},\forall l_{u,u'}\in \mathcal{L}_{p_{n,n'}^b}. \end{align} \subsection{Delay Model} To enforce the VLs' maximum tolerable propagation delays, the following constraint is defined: \begin{align}\nonumber &\text{C9: }\sum\limits_{l_{u,u'} \in \mathcal{L}_{p_{n,n'}^b}} I^{l_{u,u'}}_{p_{n,n'}^b}\times \xi^{e_{m,m'}}_{p^b_{n,n'}} \times\tau_{l_{u,u'}} \le \tau^{e_{m,m'}}_{\max} ,\,\\\nonumber &\forall n,n' \in \mathcal{N},\forall b \in \mathcal{P}_{n,n'}, \forall e_{m,m'}\in\mathcal{E}_{t,d},\, \forall{t}\in \mathcal{T}_\text{c},\, \\ &\forall{d}\in \mathcal{D}_{t\_\text{c}}. \end{align} Because the tenant, as the slice owner, is responsible for slice management, the IP does not have any information about the VNFs that will be executed on the VMs; therefore, the execution delay cannot be computed. Also, the transmission delay is not considered, for simplicity \cite{ebrahimi2019joint}.
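Returning to the VL-embedding constraints, the linearization of the product in C4 through C4-1 to C4-3 can be verified exhaustively over the binary domain. The following standalone check (an illustration, not taken from the paper) confirms that the three linear constraints admit exactly the assignments with $\xi = \pi_n^m \cdot \pi_{n'}^{m'}$:

```python
from itertools import product

def satisfies_linearization(xi, pm, pm2):
    """C4-1: xi <= pm + 1 - pm2;  C4-2: pm <= xi + 1 - pm2;  C4-3: xi <= pm2."""
    return xi <= pm + 1 - pm2 and pm <= xi + 1 - pm2 and xi <= pm2

def matches_product(xi, pm, pm2):
    """The nonlinear constraint C4: xi = pm * pm2."""
    return xi == pm * pm2

# Over all binary triples, the feasible set of (C4-1)-(C4-3)
# coincides exactly with the feasible set of C4.
assert all(
    satisfies_linearization(xi, pm, pm2) == matches_product(xi, pm, pm2)
    for xi, pm, pm2 in product((0, 1), repeat=3)
)
```

For instance, $(\xi,\pi_n^m,\pi_{n'}^{m'})=(0,1,1)$ violates C4-2, exactly as it violates C4, while $(1,1,1)$ satisfies all three.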
\subsection{Node Power Consumption} The linear node power consumption model proposed in \cite{dayarathna2015data} is modified as follows: \begin{align}\label{Node Power Consumption}\nonumber &\text{C10: }\sum_{n\in \mathcal{N}}(P^{\text{Max}}_n - P^{\text{Idle}}_n)\times \frac{{U}^{\text{CPU}}_n}{R^\text{CPU}_n} + (1-\alpha_n)\times \beta_{n}\times P^{\text{Idle}}_n \\ &={N}_{\text{c}}.\\\nonumber &\text{C11: }\pi_{n}^{m} \le \alpha_n,\forall n\in \mathcal{N}_\text{Used},\forall m \in \mathcal{M}_{t,d},\forall{t}\in\mathcal{T}_\text{c},\\ &\forall {d} \in \mathcal{D}_{t\_\text{c}}.\\\nonumber &\text{C12: }\pi_{n}^{m} \le \beta_{n},\forall n\in \mathcal{N}_\text{Unused},\forall m \in \mathcal{M}_{t,d},\forall{t}\in\mathcal{T}_\text{c},\\ &\forall{d}\in \mathcal{D}_{t\_\text{c}}. \end{align} In constraint C10, $P_{n} ^ \text{Idle}$ and $P_{n} ^ \text{Max}$ are the power consumption of node $n$ at idle and at maximum usage, respectively. The ratio $\frac{U_{n} ^ \text{CPU}}{R_{n}^\text{CPU}}$ specifies the CPU utilization of node $n$. To propose an online admission control and resource allocation scheme aimed at power consumption minimization, the nodes (set $\mathcal{N}$) are divided into the two sets $\mathcal{N}_\text{Used}$ and $\mathcal{N}_\text{Unused}$, so as to reuse nodes that have already been turned on as much as possible. The variable $\beta_{n}$ is set to 1 if node $n \in \mathcal{N}_\text{Unused}$ is turned on in the current time slot, while $\alpha_n$ specifies a previously turned-on node $n \in \mathcal{N}_\text{Used}$; $\alpha_n$ is 0 if node $n$ was not previously used. That nodes are turned on and VMs are placed only on turned-on nodes is enforced through constraints C11 and C12. By applying these constraints, a continuous variable, $N_{\text{c}}$, specifies the total nodes' power consumption in the current time slot. \subsection{Switch Power Consumption} The switch power consumption is computed through constraint C13.
The switch power consumption model is adopted from \cite{farkiani2019fast}. For simplicity, a parameter $\zeta_{l_{u,u'}}$ is defined for each link $l_{u,u'}$ as the sum of: 1) the idle power consumption of switch $u$, 2) the idle power consumption of switch $u'$, 3) the power consumption of switch $u$'s port used for link $l_{u,u'}$, and 4) the power consumption of switch $u'$'s port used for link $l_{u,u'}$. The formulation for computing the cumulative power of each link is as follows: \begin{align}\label{link_power_calculation} &\zeta_{l_{u,u'}} = (2\times S^\text{Port}_u)+\frac{S_{u}}{S'_{u}} + \frac{S_{u'}}{S'_{u'}},\forall l_{u,u'} \in \mathcal{L}, u \neq u'. \\ & \label{link_power_calculation2} \zeta_{l_{u,u'}} = (S^\text{Port}_u)+\frac{S_{u}}{S'_{u}},\forall l_{u,u'} \in \mathcal{L}, u = u'. \end{align} In these two equations, $S_u$ and $S^\text{Port}_u$ are the power consumption of switch $u$ and of one of its ports, respectively. Also, $S'_u$ specifies the number of ports of switch $u$ connected to other switches. When a previously unused link $l_{u,u'}$ is used in the current time slot for embedding the VLs of newly accepted slices, first $\vartheta_{l_{u,u'}}$ is set to 1, and then the sum of $\zeta_{l_{u,u'}}$ over all links newly activated in the current time slot specifies the switches' power consumption. Here, a continuous variable, $S_{\text{c}}$, specifies the total switches' power consumption in the current time slot. \begin{align}\label{Switch Power Consumption} &\text{C13: }\sum\limits_{u\in\mathcal{N}}\sum\limits_{u'\in\mathcal{N}}\vartheta_{l_{u,u'}}\times{\zeta}_{l_{u,u'}} = {S}_{\text{c}}, \vartheta_{l_{u,u'}} \neq \gamma_{l_{u,u'}}.
\end{align} \subsection{The Admission Control and Resource Allocation Base Model} \label{The Admission Control and Resource Allocation Base Model} The nodes' and switches' power consumption and the accepted slices' count are considered as the main components of the objective function in this article, so as to reduce the power consumption and the rejected slices' count. Here, the objective function, named $C$, is expressed as follows: \begin{align}\label{objective function}\nonumber &C = (\frac{{N}_{\text{c}}}{{N}_{\text{Total}}}) + (\frac{{S}_{\text{c}}}{{S}_{\text{Total}}}) + \eta+\sum\limits_{n\in\mathcal{N}}(\frac{U^\text{RAM}_n}{R^\text{RAM}_n}) + \\ &\sum\limits_{n\in\mathcal{N}}(\frac{U^\text{STOR}_n}{R^\text{STOR}_n}) + \sum\limits_{u\in\mathcal{N}}\sum\limits_{u'\in\mathcal{N}} (\frac{{U'}_{l_{u,u'}}}{{B}_\text{Total}}). \end{align} In the objective function $C$, all components are normalized in order to give priority to minimizing $\eta$, the component that specifies the rejected slices' count; therefore, first $\eta$ and then the rest of the components are minimized. The nodes' and switches' power consumption are computed through $N_\text{c}$ and $S_\text{c}$, respectively. Because of the online approach considered in this article, the last three components of the objective function are defined to avoid wasting resources. The base model of admission control and resource allocation is presented as follows: \begin{align}\label{joint_optimization}\nonumber &\mathop{\min}~~C\\\nonumber &\text{subject to:}\\\nonumber &\text{(C1)-(C3),(C4-1)-(C4-3),(C5)-(C13)}\\\nonumber &\text{C14: }\beta_n \in \{0,1\}, \forall n.\\\nonumber &\text{C15: }\delta_{t,d} \in \{0,1\}, \forall t,d.\\\nonumber &\text{C16: }\pi^m_n \in \{0,1\}, \forall t,d,m,n.\\ &\text{C17: }\xi^{e_{m,m'}}_{p^b_{n,n'}} \in \{0,1\}, \forall e_{m,m'},b,n,n',t,d.\\\nonumber &\text{C18: }\vartheta_{l_{n,n'}} \in \{0,1\}, \forall l_{n,n'}.\\\nonumber \end{align} The rejected slices' count is computed through constraint C1.
Constraint C2 governs the placement of the accepted slices' VMs on the nodes. Constraints C3 to C4-3 ensure that each VL of an accepted slice is mapped onto exactly one path between the nodes hosting its endpoint VMs. Constraints C5 and C6 relate the resources consumed on each node $n$ by the VMs placed on it to its available resources. Constraints C7 and C8 relate the bandwidth consumed on each link $l_{u,u'}$ by the VLs placed on it to its available bandwidth. Constraint C9 enforces the delay limit of each VL. The nodes' power consumption is computed through constraints C10, C11 and C12, and the switches' power consumption through constraint C13. The binary decision variables of the problem are specified through constraints C14-C18. \subsection{The Admission Control and Resource Allocation Robust Model} Assuming constant resources for each VM (CPU, RAM, and storage) and a constant data rate for each slice's link is an idealized and unrealistic assumption. Under this assumption, if demand fluctuates unexpectedly, the VLs and VMs become congested, leading to slice violations. To overcome this and formulate the problem under conditions closer to reality, the uncertainties in the data rates of VLs and the workloads of VMs are modeled as stochastic variables.
For this purpose, the requested resources of a slice are modeled so that, for slice $s_ {t, d}$, ${\nu _ {m}}=[\nu _ {m}^\text{CPU},\nu_ {m} ^ \text{RAM}, \nu _ {m} ^ \text{STOR}]$ and the data rate $\omega_ {e_ {m, m'}}$ are treated as ${\widetilde{\nu}_{m}} = [{\widetilde{\nu} _ {m}^ \text{CPU}}, {\widetilde{\nu} _ {m} ^ \text{RAM}}, {\widetilde{\nu} _ {m} ^ \text{STOR}}]$ and ${\widetilde{\omega}_ {e_ {m, m'}}}$ rather than being fixed, where ${\widetilde{\nu} _ {m}^ \text{CPU}}$, ${\widetilde{\nu} _ {m} ^ \text{RAM}}$, ${\widetilde{\nu} _ {m} ^ \text{STOR}}$, and ${\widetilde{\omega}_ {e_ {m, m'}}}$ are uniformly distributed \cite{wen2017robust} in the intervals $[{ \overline{\nu} _ {m} ^ \text{CPU}}- {\widehat{\nu} _ {m} ^ \text{CPU}}, {\overline{\nu} _ {m} ^ \text{CPU}} + {\widehat{\nu} _ {m} ^ \text{CPU}}]$, $[{\overline{\nu} _ {m} ^ \text{RAM}}- {\widehat{\nu} _ {m} ^ \text{RAM}}, { \overline{\nu} _ {m} ^ \text{RAM}} + {\widehat{\nu} _ {m} ^ \text{RAM}}]$, $[{\overline{\nu} _ {m} ^ \text{STOR}}- {\widehat{\nu} _ {m} ^ \text{STOR}}, {\overline{\nu} _ {m} ^ \text{STOR}} + {\widehat{\nu} _ {m} ^ \text{STOR}}]$, and $[{\overline{\omega}_ {e_ {m, m'}}}- {\widehat{\omega}_ {e_ {m, m'}}}, {\overline{\omega}_ {e_ {m, m'}}}+ {\widehat{\omega}_ {e_ {m, m'}}}]$, respectively. In these intervals, ${\overline{\nu} _ {m} ^ \text{CPU}}, {\overline{\nu} _ {m} ^ \text{RAM}}, {\overline{\nu} _ {m} ^ \text{STOR}}$, and ${\overline{\omega}_ {e_ {m, m'}}}$ are the interval centers, i.e., the nominal values, and ${\widehat{\nu} _ {m} ^ \text{CPU}} , {\widehat{\nu} _ {m} ^ \text{RAM}}, {\widehat{\nu} _ {m} ^ \text{STOR}}$, and ${\widehat{\omega}_ {e_ {m, m'}}}$ are the maximum deviations of demand.
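As a small illustration of this uncertainty model, the realized demands can be sampled uniformly from the stated intervals. The sketch below is in Python (the language used for the simulations in this article); the nominal values and the 10\% deviations are hypothetical, not figures taken from the model:

```python
import random

def sample_uncertain_demand(nominal, deviation):
    """Draw a realized demand uniformly from [nominal - deviation, nominal + deviation]."""
    return random.uniform(nominal - deviation, nominal + deviation)

# Hypothetical nominal VM resources [CPU cores, RAM GB, storage GB] with 10% deviations.
nominal_vm = [2.0, 4.0, 120.0]
dev_vm = [0.1 * v for v in nominal_vm]
realized_vm = [sample_uncertain_demand(n, d) for n, d in zip(nominal_vm, dev_vm)]

# Hypothetical nominal VL data rate (Mbps) with a 10% deviation.
realized_rate = sample_uncertain_demand(800.0, 80.0)
```

By construction, every realized value stays within its declared interval, which is exactly the property the robust counterpart protects against in the worst case.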
These intervals can be defined by the tenant: for a given slice, the tenant declares the minimum and maximum required data rates and, likewise, an interval for the requested resource volumes, which accommodates fluctuations in traffic and workloads. An optimization model of admission control and resource allocation with the defined stochastic variables is presented, where the changes considered in the slice requests affect constraints C5 and C7 of the base model (\ref{joint_optimization}). These constraints become: \begin{align}\label{constraints_ with_stochastic_variables} &\sum\limits_{t\in \mathcal{T}_\text{c}}\sum\limits_{d\in{\mathcal{D}_\text{$t\_\text{c}$}}}\sum\limits_{m \in \mathcal{M}_{t,d}} \pi_{n}^m\times{{\widetilde{\nu_m}}} = {U_n}, \forall n \in \mathcal{N}.\\\nonumber &\sum\limits_{t\in \mathcal{T}_\text{c}}\sum\limits_{d\in \mathcal{D}_{t\_\text{c}}} \sum\limits_{e_{m,m'} \in \mathcal{E}_{t,k}} \sum\limits_{n\in\mathcal{N}} \sum\limits_{n'\in\mathcal{N}} \sum\limits_{b\in\mathcal{\mathcal{P}}_{n,n'}} I^{l_{u,u'}}_{p_{n,n'}^b}\times\text{${\xi}$}^{e_{m,m'}}_{p^b_{n,n'}}\\ &\times{\widetilde{\omega}_{e_{m,m'}}} = {U'}_{l_{u,u'}} ,\,\forall l_{u,u'}\in \mathcal{L}_{p_{n,n'}^b}. \end{align} Because the formulation with these modified constraints, which includes stochastic variables, cannot be solved directly, $\Gamma$-Robustness is applied to obtain its worst-case robust counterpart and handle the uncertainties. In this robust formulation, the stochastic variables are converted into deterministic ones using the conversion process presented in \cite{marotta2017energy}.
Therefore, the linear $\Gamma$-Robustness based model with two parameters $\Gamma_1$ and $\Gamma_2$, named \textbf{ROBINS}, is as follows:\\ \\ \textbf{(ROBINS)} \begin{align}\label{robust_joint_optimization}\nonumber &\mathop{\min}~~C\\\nonumber &\text{subject to:}\\\nonumber &\text{(C1)-(C3),(C4-1)-(C4-3)}\\\nonumber &\text{C5-1: }\sum\limits_{t}\sum\limits_{d}\sum\limits_{m} {\pi}_{n}^m\times{\overline{\nu}_m} + \sum\limits_{t}\sum\limits_{d}\sum\limits_{m}{\rho_{1}}^{m}_{n} + \Gamma_1\times{z_{1}}_{n}\\\nonumber &= {U_n},\forall n \in \mathcal{N}.\\\nonumber &\text{C5-2: }\pi_{n}^m\times{\widehat{\nu_m}} \le {\rho_{1}}^{m}_{n} + {z_{1}}_{n},\quad \forall n,m,t,d.\\\nonumber &\text{(C6)}\\\nonumber &\text{C7-1: }\sum\limits_{t}\sum\limits_{d} \sum\limits_{e_{m,m'}} \sum\limits_{n} \sum\limits_{n'} \sum\limits_{b} I^{l_{u,u'}}_{p_{n,n'}^b}\times \xi^{e_{m,m'}}_{p^b_{n,n'}}\times\overline{\omega}_{e_{m,m'}}+\\\nonumber &\sum\limits_{t}\sum\limits_{d}\sum\limits_{e_{m,m'}}\sum\limits_{n}\sum\limits_{n'} \sum\limits_{b}I^{l_{u,u'}}_{p_{n,n'}^b}\times{\rho}^{e_{m,m'}}_{p^b_{n,n'}} + \Gamma_2\times{z_2}_{l_{u,u'}} =\\ &{U'}_{l_{u,u'}} ,\forall l_{u,u'}.\\\nonumber &\text{C7-2: }I^{l_{u,u'}}_{p_{n,n'}^b} \times\xi^{e_{m,m'}}_{p^b_{n,n'}}\times{\widehat{\omega}_{e_{m,m'}}} \le {\rho_2}^{e_{m,m'}}_{p^b_{n,n'}} + {z_2}_{l_{u,u'}},\\\nonumber &\forall e_{m,m'},b,n,n',l_{u,u'}.\\\nonumber &\text{(C8)-(C18)}\\\nonumber \end{align} In ROBINS, the robustness auxiliary variables ${\rho_{1}}^{m}_{n}$, ${z_{1}}_{n}$, ${\rho_2}^{e_{m,m'}}_{p^b_{n,n'}}$, and ${z_2}_{l_{u,u'}}$ are integers. The relative deviations are specified as ${\widehat{\nu} _ {m}} = \Delta_1 \times {\overline{\nu} _ {m}}$ and ${\widehat{\omega}_ {e_ {m, m'}}} = \Delta_2 \times {\overline{\omega}_ {e_ {m, m'}}}$, where $\Delta_1$ and $\Delta_2$ lie in the range $[0,1]$.
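To give intuition for the protection terms in a $\Gamma$-robust constraint: with an integer protection level $\Gamma$, at most $\Gamma$ of the uncertain coefficients are assumed to deviate to their maxima, so the worst-case extra demand equals the sum of the $\Gamma$ largest deviations. The following minimal Python sketch illustrates this; the deviation values are hypothetical, not taken from the model:

```python
def protection_term(deviations, gamma):
    """Worst-case extra demand when at most `gamma` of the uncertain
    coefficients deviate to their maxima: the sum of the `gamma`
    largest maximum deviations."""
    return sum(sorted(deviations, reverse=True)[:gamma])

# Hypothetical maximum CPU deviations of the VMs placed on one node.
devs = [0.4, 0.1, 0.3, 0.2]
reserved = [protection_term(devs, g) for g in range(5)]  # gamma = 0..4
```

Raising $\Gamma$ from 0 to the number of uncertain coefficients moves the reserved capacity from zero (the nominal, non-robust case) up to the full worst case, which is exactly how the IP trades off robustness against resource usage.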
The parameters $\Gamma_1$ and $\Gamma_2$ specify the number of VMs and VLs, respectively, whose demand may fluctuate. It is worth noting that this worst-case model, which considers resource requests under the worst conditions, can be applied to any resource-request distribution \cite{jiang2016data}, including uniform, normal, etc. On the other hand, the advantage of $\Gamma$-Robustness over the general worst-case form \cite{wen2017robust} is that the parameter $\Gamma$ can be used to control the number of random variables whose values may fluctuate. Therefore, the IP can use this parameter to control the robustness alongside the $\Delta$ parameter that specifies the fluctuation interval. \begin{center} \begin{figure}[t!h] \centering \includegraphics[width=\columnwidth]{images/Flow_chart.pdf} \caption{Online Admission Control and Resource Allocation Process}\label{flowchart} \end{figure} \end{center} \section{online robust admission control and resource allocation: Solution Methodology} \label{online robust admission control and resource allocation: Solution Methodology} An admission control and resource allocation algorithm is proposed through ROBINS. ROBINS is a BLP that can be solved with any commercial optimization solver, such as IBM ILOG CPLEX. However, even an advanced state-of-the-art solver like CPLEX may struggle to find the optimal solution in a reasonable amount of time as the problem size increases (e.g., real-world and large-scale networks) \cite{marotta2017energy}. Therefore, to find a near-optimal solution in a reasonable amount of time, a heuristic algorithm is presented. The general process of admission control and resource allocation of the newly arrived slices applied in this article is shown in Fig.
(\ref{flowchart}): first, in each time slot except the first, the resources allocated to expired slices are reclaimed; next, the acceptance of the arrived slices is checked, slices whose requirements cannot be met are rejected, and the requested resources are then allocated to the accepted slices. \subsection{Optimal Admission Control and Resource Allocation Algorithm}\label{Optimal Admission control and Resource Allocation Algorithm} An algorithm named OEA-ONSU is proposed for optimal admission control and resource allocation, according to the process shown in Fig. (\ref{flowchart}) and ROBINS. Algorithm \ref{algorithm_OEA_ONSU} shows the OEA-ONSU. In this algorithm, each time slot consists of three steps: 1) the existence of expired slices is checked (line 2). If a slice's lifespan has expired, the resources allocated to its VMs (lines 4-8) and VLs (lines 9-15) are taken back and the sets $\mathcal{T}$, $\mathcal{D}_t$, and $\mathcal{S}$ are updated; otherwise, $\phi_{t,d}$ is only reduced by one unit (lines 19-21). If no VMs or VLs remain on a previously used server or link, respectively, it is turned OFF (lines 24-25 and 34-35); otherwise, if the lifespans of the slices of a previous time slot have expired, the resources reserved for robustness are returned to the servers and links (lines 26-31 and 36-41). Once the status of the expired slices and of the available network resources is determined, admission control and resource allocation are applied to the newly arrived slices: in step 2), the ROBINS model is solved (line 43), and in step 3), the sets $\mathcal{T}$, $\mathcal{D}_t$, and $\mathcal{S}$, the volume of applied resources, and the consumed power are updated.
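The resource-reclaiming bookkeeping of step 1 can be sketched as follows; this is a simplified Python illustration with hypothetical data structures, not the exact variables of Algorithm \ref{algorithm_OEA_ONSU}:

```python
def reclaim_expired(slices, node_free, link_free):
    """Step-1 sketch: return the resources of expired slices and age the rest.

    `slices` maps a slice id to a dict with 'lifespan' (remaining slots),
    'vm_placement' (node -> cpu) and 'vl_placement' (link -> bandwidth);
    `node_free` / `link_free` hold the free capacity of each node / link.
    """
    expired = [s for s, info in slices.items() if info["lifespan"] == 0]
    for s in expired:
        for node, cpu in slices[s]["vm_placement"].items():
            node_free[node] += cpu      # give the CPU back to the server
        for link, bw in slices[s]["vl_placement"].items():
            link_free[link] += bw       # give the bandwidth back to the link
        del slices[s]
    for info in slices.values():
        info["lifespan"] -= 1           # remaining slices age by one slot
    return expired
```

In the actual algorithm, the same pass also turns off servers and links that end up empty and returns the capacity reserved for robustness.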
\\ \begin{algorithm}[t] \caption{OEA-ONSU} \footnotesize \tiny \SetAlgoLined \label{algorithm_OEA_ONSU} \KwInput{$G,\mathcal{S},\mathcal{S}_\text{c},\mathcal{T},\mathcal{D}_t,\mathcal{T}_\text{c},\mathcal{D}_{t\_\text{c}},N_\text{Used},S_\text{Used},O^{t'}_n,O'^{t'}_{l_{n,n'}}, \Gamma_1,\Gamma_2,\Delta_1,\Delta_2$} \KwOutput{Rejected slices of $\mathcal{S}_\text{c}$, Servers and paths for embedding accepted slices of $\mathcal{S}_\text{c}$} \For {each time slot} { \tcc{STEP 1: Get resources of expired slices} \For {each tenant $t$ in $\mathcal{T}$ and each slice $d$ in $\mathcal{D}_t$} { \eIf {$\phi_{t,d} == 0$} { \For{each $m$ in $\mathcal{M}_{t,d}$ and each $n$ in $\mathcal{N}_\text{Used}$, if $\pi^m_n == 1$}{ $\pi^m_n = 0$\; ${R'}_n += \nu_m$\; Update $N_\text{Used}$\; } \For{each $e_{m,m'}$ in $\mathcal{E}_{t,d}$ and each Path $b$ in $\mathcal{P}_{n,n'}$, if $\xi^{e_{m,m'}}_{p^b_{n,n'}} == 1$} { $\xi^{e_{m,m'}}_{p^b_{n,n'}} = 0$\; \For{each used link $l_{u,u'}$, if $l_{u,u'}$ in $p^b_{n,n'}$} { ${B'}_{l_{u,u'}} += \omega_{e_{m,m'}}$\; Update $S_\text{Used}$\; } } Update $\mathcal{T}$\; Update $\mathcal{D}_t$\; Update $\mathcal{S}$\; }{ $\phi_{t,d} -= 1$\; } } \For {each $n$ in $\mathcal{N}_\text{Used}$} { \eIf {number of VMs on $n == 0$} { $\mathcal{N}_\text{Used}$.remove$(n)$\; }{ \For {each time slot $t'$ in previous time slots, if in current time slot,$\phi$ of all slices in $t' == 0$} { $R'_n += O^{t'}_{n}$\; Update $N_\text{Used}$\; } } } \For{each used link $l_{u,u'}$} { \eIf {number of VLs on $l_{u,u'} == 0$} { $\gamma_{l_{u,u'}} = 0$\; }{ \For{each time slot $t'$ in previous time slots, if in current time slot, $\phi$ of all slices in $t' == 0$} { ${B'}_{l_{u,u'}} + = O'^{t'}_{l_{u,u'}}$\; Update $S_\text{Used}$\; } } } \tcc{STEP 2: Admission control and Resource allocation} \textbf{Solve ROBINS}\; \tcc{STEP 3: Update $\mathcal{T}$, $\mathcal{D}_t$, $\mathcal{S}$, and the volume of available resources, applied resources for robustness, and the power 
consumption} \For {each accepted slice in current time slot} { Update $\mathcal{T}$\; Update $\mathcal{D}_t$\; Update $\mathcal{S}$\; } \For {each $n$ in $\mathcal{N}_\text{Used}$} { Update ${R'}_n$\; Update $O^t_n$ for current time slot $t$\; } \For{each used link ${l_{u,u'}}$} { Update ${B'}_{l_{u,u'}}$\; Update $O'^t_{l_{u,u'}}$ for current time slot $t$\; } Update $N_\text{Used}$\; Update $S_\text{Used}$\; } \end{algorithm} \subsection{Near Optimal Admission control and Resource Allocation Algorithm}\label{Near Optimal Admission control and Resource Allocation Algorithm} To find a near-optimal solution in a reasonable amount of time in step 2 of Algorithm \ref{algorithm_OEA_ONSU}, a greedy algorithm is devised to replace ROBINS. The resulting three-step algorithm, named NEA-ONSU, is detailed in Algorithm \ref{algorithm_NEA_ONSU}. Step 1 of this algorithm is identical to that of Algorithm \ref{algorithm_OEA_ONSU}. In step 2, first, all servers are sorted in descending order of their available capacities (line 3); next, all slices are sorted in descending order of the total capacity required by their VMs (line 4), and for each slice, the requested VMs are sorted in descending order of their capacities (line 6). For each VM, the first suitable server is found according to the resources required for that VM plus robustness; then, for each VM connected to it, the same process is run, taking into account the required bandwidth capacity plus robustness and the maximum tolerable propagation delay of the VL between the two VMs. To this end, the existing paths between the two candidate servers are sorted in ascending order of propagation delay, and the first path that meets the requirements is selected (lines 7-30).
If the required servers and links cannot be found for a slice, that slice is rejected (lines 31-32); otherwise, the sets $\mathcal{T}$, $\mathcal{D}_t$, and $\mathcal{S}$ are updated (lines 33-37). In step 3, the used servers and links are specified, and the volume of resources applied for robustness and the power consumption are computed. \begin{itemize} \item \textbf{Computational complexity:} Consider $t$ as the current time slot index; $|\mathcal{T}|$ and $|\mathcal{D}_t|$ as the total numbers of accepted tenants and slices up to the current time slot, respectively; $|\mathcal{T}_\text{c}|$ and $|\mathcal{D}_{t\_\text{c}}|$ as the numbers of tenants and slices in the current time slot, respectively; $\mathcal{|N|}$ as the number of nodes; $|\mathcal{M}|$ as the number of VMs; $|\mathcal{E}|$ as the cardinality of the VL set; $b$ as the number of paths between two nodes; and $|\mathcal{L}|$ as the cardinality of the physical link set. The computational complexity of step 1 is O($|\mathcal{T}|.|\mathcal{D}_t|.({\mathcal{|N|}}.|\mathcal{M}|+{\mathcal{|N|}}.t+|\mathcal{E}|.\mathcal{|N|}^2b.|\mathcal{L}|)$), of step 2 is O($(|\mathcal{T}_\text{c}|.|\mathcal{D}_{t\_\text{c}}|).\log(|\mathcal{T}_\text{c}|.|\mathcal{D}_{t\_\text{c}}|)+\mathcal{|N|}.\log \mathcal{|N|} + (|\mathcal{T}_\text{c}|.|\mathcal{D}_{t\_\text{c}}|).(|\mathcal{M}|^2.\mathcal{|N|}^4.b.\log(\mathcal{|N|}^2.b) + \mathcal{|N|}^4.b^2.|\mathcal{L}|^2.|\mathcal{E}| + |\mathcal{M}|.\mathcal{|N|}^3.b.|\mathcal{L}|)$), and of step 3 is O($|\mathcal{T}_\text{c}|.|\mathcal{D}_{t\_\text{c}}|.({\mathcal{|N|}}.|\mathcal{M}|+|\mathcal{E}|.\mathcal{|N|}^2b.|\mathcal{L}|)$). Therefore, the total computational complexity of Algorithm \ref{algorithm_NEA_ONSU} is the sum of the step 1 and step 2 complexities, because the complexity of step 3 is negligible compared to them.
\end{itemize} \begin{algorithm}[t] \caption{NEA-ONSU} \footnotesize \tiny \SetAlgoLined \label{algorithm_NEA_ONSU} \KwInput{$G,\mathcal{S},\mathcal{S}_\text{c},\mathcal{T},\mathcal{D}_t,\mathcal{T}_\text{c},\mathcal{D}_{t\_\text{c}},N_\text{Used},S_\text{Used},O^{t'}_n,O'^{t'}_{l_{n,n'}},\Gamma_1,\Gamma_2,\Delta_1,\Delta_2$} \KwOutput{Rejected slices of $\mathcal{S}_\text{c}$, Servers and paths for embedding accepted slices of $\mathcal{S}_\text{c}$} \For {each time slot} { \tcc{STEP 1: Get resources of expired slices} Same~as~OEA-ONSU \\ \tcc{STEP 2: Admission control and Resource allocation} $\textbf{sorted\_N}$: sort~$\mathcal{N}$~descending~according~to~available~capacity~of~each~node\; $\textbf{sorted\_slices}$: sort~$\mathcal{S}_\text{c}$~descending~according~to~requested~VMs'~resources~of~slices\; \For {each slice $s$ in sorted\_slices} { $\textbf{sorted\_VMs}$: sort~VMs~descending according to their requested capacities\; \For{each $m$ in sorted\_VMs and each node $n$ in sorted\_N, if node~$n$~is~proper~for~$m$~by~considering~robustness} { \For{each $m'!=m$ in sorted\_VMs, if ${e}_{m,m'}$~exists} { \For{each node $n'$ in sorted\_N, if node $n'$ is proper for $m'$ by considering robustness} { $\textbf{sorted\_paths}$: sort~paths~between~$n$ ,~$n'$~ascending according~to~propagation~delay\; \For{each $p$ in ${{sorted\_paths}}$, if $p.prop\_delay$ $\le$ ${{\tau}^{e_{m,m'}}_{max}}$} { \For{each link ${l_{u,u'}}\in p$, if ${{B'}_{l_{u,u'}}}$ is enough for ${{e}_{m,m'}}$ by considering robustness} { \textbf{ }\\ ${\pi}^m_n = 1$\; \textbf{ }\\ ${R'}_n -= \nu_m$\; \textbf{ }\\ ${\pi}^{m'}_{n'} = 1$\; \textbf{ }\\ ${R'}_{n'} -= \nu_{m'}$\; \textbf{ }\\ ${\xi}^{e_{m,m'}}_{p} = 1$\; \For{each link $l_{u,u'} \in p$} { ${B'}_{l_{u,u'}}-=\omega_{e_{m,m'}}$\; } } } } } } \eIf{suitable servers and paths not found} { $\delta_{t,d} = 0$\; }{ Update $\mathcal{T}$\; Update $\mathcal{D}_t$\; Update $\mathcal{S}$\; } } \tcc{STEP 3: Specify the applied servers and links, and 
compute the volume of applied resources for robustness and the power consumption} Update $\mathcal{N}_{Used}$\; \For{each $l_{n,n'}$} { Update $\vartheta_{l_{n,n'}}$\; } \For {each $n$ in ${\mathcal{N}_\text{Used}}$} { Update $O^t_n$ for current time slot $t$\; ${R'}_n -= O^t_n$\; } \For{each used link ${l_{u,u'}}$} { Update $O'^t_{l_{u,u'}}$ for current time slot $t$\; ${B'}_{l_{u,u'}} -= O'^t_{l_{u,u'}}$\; } Update $N_\text{Used}$\; Update $S_\text{Used}$\; } \end{algorithm} \section{Numerical Results} \label{Numerical Results}\subsection{Simulation Environment} A simulation is run to evaluate the efficiency of the proposed algorithms. To implement the infrastructure network, the Abilene network \cite{orlowski2010sndlib} with 12 nodes and 27 links is used (Fig. \ref{fig:Abilene-Network}). The original Abilene network has 15 links, but here each node is given a link to itself so that VMs on the same server can interconnect. The capacity of every link interconnecting the switches is set to $BW(i,j)=10$ Gbps, while the capacity of each node's local link is set to 40 Gbps, so that no bottleneck forms in local communications. \begin{figure} \centering \includegraphics[width=0.65\columnwidth]{images/Abilene.pdf} \caption{Abilene Network Topology} \label{fig:Abilene-Network} \end{figure} The algorithms are implemented in Python 3.6. The docplex Python library 2.4.61 and IBM CPLEX Optimizer 12.8 are used to implement and solve ROBINS. Simulations are run on a machine with a 20-core 2.3 GHz Intel Xeon E5-2695 v3 CPU and 128 GB of RAM. New slice request arrivals follow a Poisson distribution with mean arrival rate $\lambda = 2$, capped at 5 arrivals per time slot. The slice lifespan ($\mu$) is drawn from an exponential distribution with a mean of 10 time slots (40 time slots of new and expired slice requests are simulated). The simulation parameters are tabulated in Table~\ref{simulation-parameters}.
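The slice-request workload described above can be generated as in the following sketch, which assumes the stated Poisson arrivals (mean $\lambda = 2$, capped at 5 per slot) and exponential lifespans (mean 10 slots); the seed and the Poisson sampler are illustrative choices:

```python
import math
import random

def generate_workload(num_slots=40, lam=2.0, max_arrivals=5,
                      mean_lifespan=10.0, seed=42):
    """Per time slot, draw the number of new slice requests from a
    Poisson(lam) distribution capped at `max_arrivals`, and assign each
    request an exponential lifespan rounded to whole slots (at least 1)."""
    rng = random.Random(seed)

    def poisson(l):
        # Knuth's method: multiply uniforms until the product drops below e^{-l}.
        p, k, target = 1.0, 0, math.exp(-l)
        while True:
            p *= rng.random()
            if p <= target:
                return k
            k += 1

    workload = []
    for _ in range(num_slots):
        arrivals = min(poisson(lam), max_arrivals)
        lifespans = [max(1, round(rng.expovariate(1.0 / mean_lifespan)))
                     for _ in range(arrivals)]
        workload.append(lifespans)
    return workload
```

Each inner list holds the lifespans (in time slots) of the slices arriving in that slot, which is the input the online admission control consumes slot by slot.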
It is assumed that each tenant has only one slice request. The specifications of the physical servers proposed in \cite{farkiani2019fast} are modified and tabulated in Table \ref{servers-specifications}. The VMs' specifications are tabulated in Table \ref{VMs-specifications}. The Barabasi-Albert model \cite{hosseini2019probabilistic,barabasi1999emergence} is used to generate the VN topologies. As shown in Table \ref{servers-specifications}, two server types are considered; in each simulation iteration, the specifications of every server in the infrastructure network are randomly selected from these two types. As shown in Table \ref{VMs-specifications}, three VM types are considered, and within each VM type the resources are designed to be compatible. The switch power data are extracted from \cite{marotta2017fast,bari2019esso} and modified as follows: each switch consumes 184 Watts, each 10 Gbps port 4.3 Watts, and each 40 Gbps port 13.6 Watts. The power of each link is then computed through Eqs. (\ref{link_power_calculation}) and (\ref{link_power_calculation2}). The source code of the simulation is available in \cite{ecv7-nz24-21}.
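As a rough illustration of these power figures, the sketch below computes per-link and per-switch power under the simplifying assumption that an active link draws the power of its two endpoint ports; the paper's actual computation follows Eqs. (\ref{link_power_calculation}) and (\ref{link_power_calculation2}):

```python
# Switch power figures used in the simulation (Watts).
SWITCH_BASE_W = 184.0
PORT_W = {10: 4.3, 40: 13.6}  # per-port power, keyed by port speed in Gbps

def link_power(speed_gbps):
    """Power drawn by one active link, assumed here to be the power of
    its two endpoint ports (illustrative assumption)."""
    return 2 * PORT_W[speed_gbps]

def switch_power(active_port_speeds):
    """Power of one powered-on switch: base chassis power plus the power
    of its currently active ports."""
    return SWITCH_BASE_W + sum(PORT_W[s] for s in active_port_speeds)
```

For example, a switch with two active 10 Gbps ports and one active 40 Gbps port would draw the 184 W chassis power plus the three port powers.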
\begin{table}[htpb] \centering \caption{Simulation Parameters} \label{simulation-parameters} \small \begin{center} \begin{tabular}{||m{5cm}|m{3cm}||} \hline {Number of time slots}& {40} \\ \hline {$\Gamma_1,\Gamma_2$} & {$[0,4],[0,4]$} \\ \hline {$\Delta_1,\Delta_2$} & {$(0\%,10\%,30\%)$}\\ \hline {Max number of $|\mathcal{T}_\text{c}|$} & 5 \\ \hline {$|\mathcal{D}_{t_\text{c}}|$} & 1 \\ \hline {Number of VMs of each slice request} & U(2-4) \\ \hline {Bandwidth of each VL (Mbps)} & U(100-1500) \\ \hline {Propagation delay of each VL (ms)} & U(4-13) \\ \hline \end{tabular} \end{center} \end{table} \begin{table} \centering \caption{Servers Specifications} \label{servers-specifications} \small \begin{center} \begin{tabular}{||m{2cm}|m{2cm}|m{2cm}||} \hline Type & 1 & 2\\ \hline {CPU (core)}& 32 & 48\\ \hline {RAM (GB)} & 192 & 768 \\ \hline {Storage (GB)} & 4000 & 4000\\ \hline {$P^\text{Max}$ (Watts)} & 540 & 700 \\ \hline {$P^\text{Idle}$ (Watts)} & 170 & 180 \\ \hline \end{tabular} \end{center} \end{table} \begin{table} \centering \caption{VMs Specifications} \label{VMs-specifications} \small \begin{center} \begin{tabular}{||m{2cm}|m{1cm}|m{1cm}|m{1cm}||} \hline {Type}& {1} & {2}& {3} \\ \hline {CPU (core)} & 1 & 2 & 4 \\ \hline {RAM (GB)} & 2 & 4 & 16\\ \hline {Storage (GB)} & 120 & 120 & 120 \\ \hline \end{tabular} \end{center} \end{table} \begin{figure} \centering \begin{subfigure}[b]{0.48\columnwidth} \centering \includegraphics[width=\columnwidth]{images/Compare-AccRatio-fig-different-protection-levels.pdf} \caption{Different Protection Levels ($\Gamma_1$=$\Gamma_2$, $\Delta_1$=$\Delta_2$ and Their Values Are Equal to 10\%)} \label{fig:comp-AccRatio-Different-protection-levels} \end{subfigure} \hfill \begin{subfigure}[b]{0.48\columnwidth} \centering \includegraphics[width=\columnwidth]{images/Compare-AccRatio-fig-different-intervals.pdf} \caption{Different Relative Deviations ($\Delta_1$=$\Delta_2$, $\Gamma_1$=$\Gamma_2$ and Their Values Are Equal to 1)} 
\label{fig:comp-AccRatio-Different-relative-deviations} \end{subfigure} \hfill \caption{Acceptance Ratio} \label{fig:comp-AccRatio} \end{figure} \subsection{Performance Metrics and Results} The results are presented for two scenarios: in the first, the relative deviations $\Delta_1$=$\Delta_2$\footnote{Setting $\Delta_1$=$\Delta_2$ is for simplicity; the algorithms can take different values for these two parameters as input.} for all requested VM resources and VL data rates are set to $10\%$, and results are presented for different protection levels, specified by $\Gamma_1$=$\Gamma_2$\footnote{Setting $\Gamma_1$=$\Gamma_2$ is for simplicity; the algorithms can take different values for these two parameters as input.} with values from 0 to 4; in the second, $\Gamma_1$=$\Gamma_2$ is set to 1 and the results are presented for different values of $\Delta_1$=$\Delta_2$, namely 0\%, 10\%, and 30\%. These two scenarios allow comparing the robustness obtained through $\Gamma$ and through $\Delta$, the two general parameters for providing robustness. The baseline is resource allocation without robustness \cite{marotta2017fast,wen2017robust}. Therefore, resource allocation without robustness ($\Delta$=0\% or $\Gamma$=0) is taken as the baseline, and in the two scenarios the results of our algorithms for 1) $\Delta$=0\% or $\Gamma$=0 (the baseline) are compared with 2) nonzero values of $\Delta$ or $\Gamma$, based on the following performance metrics. The reported results are averaged over 20 simulation runs.
\begin{figure*} \centering \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{images/Compare-Node-Power-Consumption-fig-different-protection-levels.pdf} \caption{Server Power Consumption in Different Protection Levels ($\Gamma_1$=$\Gamma_2$, $\Delta_1$=$\Delta_2$ and Their Values Are Equal to 10\%)} \label{fig:comp-NodePower-timeslot-Different-protection-levels} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{images/Compare-Switch-Power-Consumption-fig-different-protection-levels.pdf} \caption{Switch Power Consumption in Different Protection Levels ($\Gamma_1$=$\Gamma_2$, $\Delta_1$=$\Delta_2$ and Their Values Are Equal to 10\%)} \label{fig:comp-SwitchPower-timeslot-Different-protection-levels} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{images/Compare-Total-Power-Consumption-fig-different-protection-levels.pdf} \caption{Total Power Consumption in Different Protection Levels ($\Gamma_1$=$\Gamma_2$, $\Delta_1$=$\Delta_2$ and Their Values Are Equal to 10\%)} \label{fig:comp-TotalPower-timeslot-Different-protection-levels} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{images/Compare-Node-Power-Consumption-fig-different-intervals.pdf} \caption{Server Power Consumption in Different Relative Deviations ($\Delta_1$=$\Delta_2$, $\Gamma_1$=$\Gamma_2$ and Their Values Are Equal to 1)} \label{fig:comp-NodePower-timeslot-Different-intervals} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{images/Compare-Switch-Power-Consumption-fig-different-intervals.pdf} \caption{Switch Power Consumption in Different Relative Deviations ($\Delta_1$=$\Delta_2$, $\Gamma_1$=$\Gamma_2$ and Their Values Are Equal to 1)} \label{fig:comp-SwitchPower-timeslot-Different-intervals} \end{subfigure} \hfill 
\begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{images/Compare-Total-Power-Consumption-fig-different-intervals.pdf} \caption{Total Power Consumption in Different Relative Deviations ($\Delta_1$=$\Delta_2$, $\Gamma_1$=$\Gamma_2$ and Their Values Are Equal to 1)} \label{fig:comp-TotalPower-timeslot-Different-intervals} \end{subfigure} \caption{Power Consumption Changes in Time Slots} \label{fig:comp-PowerConsTimeslots} \end{figure*} \begin{figure*} \centering \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{images/Average-Compare-Node-Power-Consumption-fig-different-protection-levels.pdf} \caption{Server Power Consumption in Different Protection Levels ($\Gamma_1$=$\Gamma_2$, $\Delta_1$=$\Delta_2$ and Their Values Are Equal to 10\%)} \label{fig:comp-NodePower-Different-protection-levels} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{images/Average-Compare-Switch-Power-Consumption-fig-different-protection-levels.pdf} \caption{Switch Power Consumption in Different Protection Levels ($\Gamma_1$=$\Gamma_2$, $\Delta_1$=$\Delta_2$ and Their Values Are Equal to 10\%)} \label{fig:comp-SwitchPower-Different-protection-levels} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{images/Average-Compare-Total-Power-Consumption-fig-different-protection-levels.pdf} \caption{Total Power Consumption in Different Protection Levels ($\Gamma_1$=$\Gamma_2$, $\Delta_1$=$\Delta_2$ and Their Values Are Equal to 10\%)} \label{fig:comp-TotalPower-Different-protection-levels} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{images/Average-Compare-Node-Power-Consumption-fig-Relative-deviation.pdf} \caption{Server Power Consumption in Different Relative Deviations ($\Delta_1$=$\Delta_2$, $\Gamma_1$=$\Gamma_2$ and Their Values Are Equal to 1)} 
\label{fig:comp-NodePower-Different-intervals} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{images/Average-Compare-Switch-Power-Consumption-fig-Relative-deviation.pdf} \caption{Switch Power Consumption in Different Relative Deviations ($\Delta_1$=$\Delta_2$, $\Gamma_1$=$\Gamma_2$ and Their Values Are Equal to 1)} \label{fig:comp-SwitchPower-Different-intervals} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{images/Average-Compare-Total-Power-Consumption-fig-Relative-deviation.pdf} \caption{Total Power Consumption in Different Relative Deviations ($\Delta_1$=$\Delta_2$, $\Gamma_1$=$\Gamma_2$ and Their Values Are Equal to 1)} \label{fig:comp-TotalPower-Different-intervals} \end{subfigure} \caption{Average Power Consumption Over Time Slots} \label{fig:comp-AvgPowerCons} \end{figure*} \begin{enumerate} \begin{figure*} \centering \begin{subfigure}[b]{0.49\columnwidth} \centering \includegraphics[width=\columnwidth]{images/Compare-Active-servers-fig-different-protection-levels.pdf} \caption{Active Servers in Different Protection Levels ($\Gamma_1$=$\Gamma_2$, $\Delta_1$=$\Delta_2$ and Their Values Are Equal to 10\%)} \label{fig:comp-ActiveServers-Different-protection-levels} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\columnwidth} \centering \includegraphics[width=\columnwidth]{images/Compare-Active-servers-fig-different-intervals.pdf} \caption{Active Servers in Different Relative Deviations ($\Delta_1$=$\Delta_2$, $\Gamma_1$=$\Gamma_2$ and Their Values Are Equal to 1)} \label{fig:comp-ActiveServers-Different-relative-deviations} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\columnwidth} \centering \includegraphics[width=\columnwidth]{images/Compare-Active-links-fig-different-protection-levels.pdf} \caption{Active Links in Different Protection Levels ($\Gamma_1$=$\Gamma_2$, $\Delta_1$=$\Delta_2$ and Their Values Are Equal to 10\%)} 
\label{fig:comp-ActiveLinks-Different-protection-levels} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\columnwidth} \centering \includegraphics[width=\columnwidth]{images/Compare-Active-links-fig-different-intervals.pdf} \caption{Active Links in Different Relative Deviations ($\Delta_1$=$\Delta_2$, $\Gamma_1$=$\Gamma_2$ and Their Values Are Equal to 1)} \label{fig:comp-ActiveLinks-Different-relative-deviations} \end{subfigure} \hfill \caption{Average Active Servers and Links} \label{fig:comp-ActiveServersLinks} \end{figure*} \item \textbf{Acceptance ratio:} The first metric is the ratio of the number of accepted slices to the total number of arrived slices, expressed by Eq. (\ref{Acc_ratio_calculation}): \begin{align} \label{Acc_ratio_calculation} &{Acceptance\_ratio} = {\frac{\#Accepted\_slices}{\#Arrived\_slices}}\times100. \end{align} The acceptance ratios of OEA-ONSU and NEA-ONSU are compared in the two scenarios (Fig. \ref{fig:comp-AccRatio}). The average number of arrived slices over the 20 simulations is 100. As observed in scenario 1 (Fig. \ref{fig:comp-AccRatio-Different-protection-levels}), increasing $\Gamma_1$ and $\Gamma_2$ decreases the acceptance ratio attained by OEA-ONSU, because the resource capacities are limited and this algorithm cannot accept all slices with their requirements; in scenario 2 (Fig. \ref{fig:comp-AccRatio-Different-relative-deviations}), by contrast, the OEA-ONSU acceptance ratio is almost constant and usually higher than in scenario 1. As for NEA-ONSU, its acceptance ratios in scenario 1 are closer to the optimal ones than in scenario 2, where they drop by about 10\%. Unlike OEA-ONSU's behavior in scenario 1, at $\Gamma$=0 the acceptance ratio of NEA-ONSU is lower than at $\Gamma$=1 and 2, which is due to the greedy behavior of NEA-ONSU in choosing nodes to place VMs.
On average, in terms of acceptance ratio, OEA-ONSU has a gap of about 2\% with the baseline and NEA-ONSU has an optimality gap of about 7\% with OEA-ONSU. \item \textbf{Power consumption:} Another metric applied in assessing OEA-ONSU and NEA-ONSU is power consumption, consisting of the servers' power consumption, the switches' power consumption, and the total power consumption, i.e., the sum of the two. The changes in this metric during time slots are shown in two batches: 1) Figs. (\ref{fig:comp-NodePower-timeslot-Different-protection-levels}, \ref{fig:comp-SwitchPower-timeslot-Different-protection-levels}, and \ref{fig:comp-TotalPower-timeslot-Different-protection-levels}) for scenario 1, and 2) Figs. (\ref{fig:comp-NodePower-timeslot-Different-intervals}, \ref{fig:comp-SwitchPower-timeslot-Different-intervals}, and \ref{fig:comp-TotalPower-timeslot-Different-intervals}) for scenario 2. The average total power of servers and switches, when turned on under maximum load, is 7450 and 2500 Watts, respectively. The average power consumption over time slots likewise includes two batches of figures: 1) Figs. (\ref{fig:comp-NodePower-Different-protection-levels}, \ref{fig:comp-SwitchPower-Different-protection-levels}, and \ref{fig:comp-TotalPower-Different-protection-levels}) for scenario 1, and 2) Figs. (\ref{fig:comp-NodePower-Different-intervals}, \ref{fig:comp-SwitchPower-Different-intervals}, and \ref{fig:comp-TotalPower-Different-intervals}) for scenario 2. As observed in Figs.
(\ref{fig:comp-NodePower-timeslot-Different-protection-levels} and \ref{fig:comp-NodePower-timeslot-Different-intervals}), and considering the comparison of OEA-ONSU and NEA-ONSU in terms of acceptance ratio, the servers' power consumption approaches the average total power of the servers under maximum load and then begins to decrease, from time slot 23 onwards in scenario 1 as $\Gamma$ increases, and from time slot 19 onwards in scenario 2 as $\Delta$ increases; the decrease is due to the inability to accept new slice requests while expired slices exit. The reduction in server power consumption in scenario 2 is higher than that of scenario 1, owing to the higher reduction in acceptance ratio in scenario 2. The same conditions are evident in the figures for switch and total power consumption, respectively. Because the servers' power consumption contributes more to the total power consumption than the switches' power consumption, the changes in total power consumption mainly follow the servers' power consumption. As observed in Fig. (\ref{fig:comp-AvgPowerCons}), in general, increasing $\Gamma$ and $\Delta$ increases the average power consumption. As observed in Fig. (\ref{fig:comp-NodePower-Different-protection-levels}), although the acceptance ratio at $\Gamma$=0, Fig. (\ref{fig:comp-AccRatio-Different-protection-levels}), is less than at $\Gamma$=1 in NEA-ONSU, so a lower power consumption would be expected, the slice requests need more capacity for robustness and the order of choosing servers in NEA-ONSU for $\Gamma$=1 may vary from that for $\Gamma$=0; as a result, the server power consumption is decreased and, because the applied links' count increases, the switch power consumption at $\Gamma$=1 becomes more than at $\Gamma$=0. On average, as to power consumption, OEA-ONSU has a gap of about 4\% with the baseline and NEA-ONSU has an optimality gap of about 10\% with OEA-ONSU.
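To illustrate how the per-device figures above combine into the reported totals, the following sketch assumes the common linear server power model, an idle floor plus a load-proportional share between $P^\text{Idle}_n$ and $P^\text{Max}_n$, together with a per-port switch model; the helper names and numeric values are illustrative, not the exact model of the formulation.

```python
# Hedged sketch: a common linear power model for servers plus a per-port
# switch model. Values and helper names are illustrative, not the paper's.

def server_power(p_idle, p_max, utilization, on=True):
    """Power of one server: idle floor plus a load-proportional share."""
    if not on:
        return 0.0
    return p_idle + (p_max - p_idle) * utilization

def switch_power(p_idle, p_port, active_ports, on=True):
    """Power of one switch: idle floor plus a per-active-port cost."""
    if not on:
        return 0.0
    return p_idle + p_port * active_ports

# Total power of a hypothetical snapshot in one time slot:
# one loaded server, one turned-off server, one switch with 8 active ports.
servers = [(105.0, 245.0, 0.6, True), (105.0, 245.0, 0.0, False)]
switches = [(60.0, 5.0, 8, True)]

total = (sum(server_power(*s) for s in servers)
         + sum(switch_power(*w) for w in switches))
print(total)  # 189.0 + 0.0 + 100.0 = 289.0
```

Under this model, turning off unused servers and switch ports, as the objective function rewards, removes the idle floor entirely rather than only the load-proportional part.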
\item \textbf{Active servers and links:} The third evaluation metric includes the active servers' and links' counts, separately. As observed in Fig. (\ref{fig:comp-ActiveServersLinks}), by increasing $\Gamma$, the active servers' count, Fig. (\ref{fig:comp-ActiveServers-Different-protection-levels}), remains almost fixed, while the active links' count, Fig. (\ref{fig:comp-ActiveLinks-Different-protection-levels}), increases. In scenario 1, the increase in the servers' power consumption, Fig. (\ref{fig:comp-NodePower-Different-protection-levels}), is due to an increase in the volume of applied resources rather than in the applied servers' count. In Figs. (\ref{fig:comp-ActiveServers-Different-relative-deviations} and \ref{fig:comp-ActiveLinks-Different-relative-deviations}), by increasing $\Delta$, the active servers' count in OEA-ONSU increases more than the active links' count, and, as to NEA-ONSU, the active servers' count increases. In Fig. (\ref{fig:comp-ActiveLinks-Different-relative-deviations}), for NEA-ONSU, the active links' count at $\Delta$=10\% is less than at $\Delta$=0\%, due to the low acceptance ratio at $\Delta$=10\%. At $\Delta$=30\%, although the acceptance ratio is less than at $\Delta$=0\%, the active links' count is high because the slices' requests require more resources; this active links' count in turn affects the switch power consumption, Fig. (\ref{fig:comp-SwitchPower-Different-intervals}). In general, the percentages of active servers and links in scenario 1 are higher than those of scenario 2. For this metric, computing the gap of OEA-ONSU with the baseline and the optimality gap of NEA-ONSU is not essential because, according to the objective function, the gaps matter only for the two previous metrics.
\item \textbf{Execution time:} Because finding the optimal solution of the ROBINS BLP problem may be unreachable in a reasonable amount of time for real-world and large-scale networks, and a near-optimal solution in a short time is sought, the algorithm NEA-ONSU is devised as an alternative to OEA-ONSU. In this context, execution time is a proper metric for comparing these two algorithms. The execution time of each algorithm is determined by the execution time of the three steps therein. The average execution time of these algorithms in each time slot is shown in Fig. (\ref{fig:comp-Executiontime}), where, as observed, NEA-ONSU is about 30X faster than OEA-ONSU. \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{images/Compare-Time.pdf} \caption{Average Execution Time} \label{fig:comp-Executiontime} \end{figure} \end{enumerate} Generally, the exact gain of the optimal algorithm compared to the stated baseline is providing reliable slices with only a small gap from the baseline. \section{Conclusion} \label{Conclusion} We proposed two online admission control and resource allocation algorithms for network slicing under bandwidth and workload uncertainties. These provide robustness against high fluctuations in both the requested VLs' bandwidths and VMs' resources. Besides, because the slices arrive at the network at different times and their lifespans vary, the solution dynamically reacts to the online slice requests. The joint problem of online admission control and resource allocation considering the energy consumption is formulated mathematically. The formulation is a BLP, where the $\Gamma$-Robustness concept is exploited to overcome the uncertainties in VLs' bandwidths and VNFs' workloads. Then, an optimal algorithm, named OEA-ONSU, that adopts this mathematical model is proposed. To find a near-optimal solution in a reasonable amount of time, a new heuristic algorithm, named NEA-ONSU, is proposed.
The assessments' results indicate the efficiency of NEA-ONSU in increasing the accepted requests' count, decreasing power consumption, and providing adjustable tolerance against the VNFs' workloads' and VLs' traffics' uncertainties, separately. Considering the acceptance ratio and power consumption, which constitute the two important components of the objective function, NEA-ONSU has about 7\% and 10\% optimality gaps, respectively, while being about 30X faster than OEA-ONSU. A prospective extension to this work is to let the IP allocate more resources to the slices dynamically if they need more resources than those provisioned for robustness. In other words, a resource reallocation process should be added to this work, which allows a trade-off between the proactive and reactive approaches. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran} \small{ \section{Introduction} \textbf{Background.} Next generation network slicing is a concept introduced to meet the requirements of the different services provided in such networks. By applying this concept, it is possible to deploy multiple logical networks on a common infrastructure network to provide different services that may have conflicting requirements; this includes Radio Access Network (RAN), transport network, and Core Network (CN) virtualization. A network slice consists of Virtual Network Functions (VNF) and Virtual Links (VL). Network resources are limited; therefore, they should be optimally allocated to the slices~\cite{MSU-CSE-06-2, alliance2016description, MSU-CSE-06-1}. There exist many studies on the efficient allocation of network resources.
In these studies, it is assumed that the requested resources' volumes for VNFs and VLs are fixed and known~\cite{ebrahimi2019joint,fendt2018network,farkiani2019fast,chen2020network,bhamare2017optimal,halabian2019distributed}, while, in real-world problems, these volumes change dynamically and are usually unknown~\cite{marotta2017energy,bauschert2014network}; that is, the resource requirements of the VNFs and VLs of each slice are uncertain for different reasons (e.g., changes in the users' count of each slice). If the uncertainty conditions are not considered in resource allocation, a slight change in demand may lead to slice violation. In network slicing, the slice requests arrive at the network in a continuous manner (online slice requests); it should be checked whether the network has enough resources for newly arrived slices and, if so, their requested resources should be allocated. In this case, each slice has a lifespan and is active temporarily or permanently in the network. At the end of a slice's lifespan, its resources are taken back.\\ \textbf{Contributions.} The focus of this article is on allocating resources to online slice requests where there exist uncertainties in the resources' volumes required by VNFs and VLs. The two approaches to managing the uncertainty are: 1) reactive, and 2) proactive. The reactive approach leads to unpredictable and often significant delays in the operation of slices, as it needs to resupply resources to active slices on demand, for instance, through Virtual Machine (VM) migrations. In the proactive approach to resource allocation, the uncertain demand information is applied to avoid the need for resource reallocation on the spot. In this article, the proactive approach is of concern. The objective is to increase the accepted slices' count and decrease the servers' and switches' power consumption.
In this article, the problem is considered as Energy-Aware Online Network Slicing under Uncertainties (EA-ONSU) and a system model is proposed for it. In this system model, each slice is considered as a Virtual Network (VN) which includes VMs and VLs. The tenants, the slice owners, know their requirements and are allowed to manage the VNFs needed on their VNs and serve their customers thereupon \cite{ebrahimi2019joint}. The admission control and proactive resource allocation in this proposed system model are formulated as a Binary Linear Programming (BLP) problem, where the concept of $\Gamma$-Robustness is applied to overcome the uncertainties. The contributions of this article are summarized as follows: \begin{itemize} \item Due to practical limitations like fluctuations in resource demand (e.g., unusual changes in resource demand due to an increase in a slice's customers' count), we model the uncertainties of the slices' VNFs' and VLs' requested resources. \item The joint admission control and proactive resource allocation optimization problem for slice requests under demand uncertainties is formulated as a BLP by applying the $\Gamma$-Robustness concept, named the Robust Infrastructure Network Slicing (ROBINS) problem. The objective is to increase the accepted slices' count and decrease the physical servers' and switches' power consumption. \item A new optimal algorithm named Optimal Energy-Aware Online Network Slicing under Uncertainties (OEA-ONSU) is proposed for accepting and allocating resources to online slice requests by applying the ROBINS. The OEA-ONSU is a three-step algorithm: 1) the previous slices' expiration is checked, 2) admission control and resource allocation for newly arrived slices are performed, and 3) the status of the infrastructure network is updated.
In order to apply the proposed method to real-world and large-scale networks and to find a near-optimal solution in a reasonable time, a novel heuristic algorithm, named Near-optimal Energy-Aware Online Network Slicing under Uncertainties (NEA-ONSU), is developed. \item The performance results of the OEA-ONSU and NEA-ONSU algorithms on the Abilene network \cite{orlowski2010sndlib} are presented, where the results reveal the efficiency of these proposed algorithms in accepting requests, allocating resources while considering energy efficiency, and providing adjustable tolerance against the VNFs' workloads' and VLs' traffics' uncertainties, separately. Considering the acceptance ratio and power consumption, the two important components of the objective function, NEA-ONSU has about 7\% and 10\% optimality gaps, respectively, while being about 30X faster than OEA-ONSU. \end{itemize} \textbf{Paper organization.} The rest of this article is organized as follows: the related works are reviewed in Sec. \ref{Related works}; the system is modeled in Sec.~\ref{System Model}; the mathematical formulation of the online robust admission control and resource allocation is discussed in Sec.~\ref{online robust admission control and resource allocation: Mathematical Formulation}; the corresponding solution methodology is introduced in Sec.~\ref{online robust admission control and resource allocation: Solution Methodology}; the numerical results are presented in Sec.~\ref{Numerical Results}; and the article is concluded in Sec.~\ref{Conclusion}.
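The $\Gamma$-Robustness concept used above protects a capacity constraint against the worst case in which at most $\Gamma$ of the uncertain demands deviate from their nominal values by their maximum deviation. The following is a minimal sketch of this worst-case protection term for an integer protection level; the function and variable names are illustrative, and the actual robust constraints (with their dualized form) appear in the mathematical formulation.

```python
# Hedged sketch of the Gamma-robust protection term: worst-case usage when
# at most `gamma` demands hit their maximum deviation. Names and numbers
# are illustrative, assuming an integer protection level.

def protected_usage(nominal, deviations, gamma):
    """Nominal usage plus the `gamma` largest deviations (worst case)."""
    extra = sorted(deviations, reverse=True)[:gamma]
    return sum(nominal) + sum(extra)

# Three VMs placed on one server; CPU demand with a 10% relative deviation.
cpu_nominal = [10.0, 20.0, 40.0]
cpu_dev = [0.1 * d for d in cpu_nominal]   # e.g., Delta_1 = 10%

print(protected_usage(cpu_nominal, cpu_dev, gamma=0))  # 70.0 (no protection)
print(protected_usage(cpu_nominal, cpu_dev, gamma=2))  # 70.0 + 4.0 + 2.0 = 76.0

# A placement is robust-feasible on a node of capacity R if
# protected_usage(...) <= R holds for every resource type (CPU, RAM, storage).
```

Increasing $\Gamma$ thus reserves more headroom per constraint, which explains the trade-off observed later between the protection level and the acceptance ratio.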
\newcolumntype{P}[1]{>{\centering\arraybackslash}p{#1}} \begin{table*}[htpb] \centering \caption{Summary of Related Researches} \label{Related-Research} \small \resizebox{\textwidth}{!}{ \begin{tabular}{||c|c|c|c|c|c|c|c|c|c|c|c|c||} \hline \textbf{ref.}& \begin{tabular}{c}\textbf{CPU} \\\textbf{Robustness} \end{tabular}& \begin{tabular}{c}\textbf{RAM} \\\textbf{Robustness} \end{tabular}& \begin{tabular}{c}\textbf{Storage} \\\textbf{Robustness}\end{tabular}& \begin{tabular}{c}\textbf{Bandwidth}\\ \textbf{Robustness}\end{tabular}& \begin{tabular}{c}\textbf{VNF} \\\textbf{Placement}\end{tabular}& \begin{tabular}{c}\textbf{VLE}\end{tabular}& \begin{tabular}{c}\textbf{Delay}\end{tabular}& \textbf{Online} & \begin{tabular}{c}\textbf{Applicable} \\\textbf{on Large}\\ \textbf{Networks}\end{tabular}& \begin{tabular}{c}\textbf{Decreasing} \\\textbf{Power} \\\textbf{Consumption} \\\textbf{of Servers}\end{tabular}& \begin{tabular}{c}\textbf{Decreasing}\\ \textbf{Power}\\ \textbf{Consumption}\\ \textbf{of Switches}\end{tabular}& \begin{tabular}{c}\textbf{Admission} \\\textbf{Control}\end{tabular} \\ \hline \cite{ebrahimi2019joint} & & & & & \checkmark & \checkmark & \checkmark & & & \checkmark & \checkmark & \checkmark\\ \hline \cite{farkiani2019fast} & & & & & \checkmark & \checkmark & & & \checkmark & \checkmark & \checkmark & \checkmark\\ \hline \cite{chen2020network} & & & & & \checkmark & \checkmark & \checkmark & & & \checkmark & & \\ \hline \cite{bhamare2017optimal} & & & & & \checkmark & \checkmark & \checkmark & & \checkmark & & & \checkmark\\ \hline \cite{sun2019energy} & & & & & \checkmark & \checkmark & & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark\\ \hline \cite{soualah2019online} & & & & & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & & \checkmark \\ \hline \cite{ghazizadeh2019joint} & & & & & \checkmark & \checkmark & \checkmark & & \checkmark & & & \\ \hline \cite{varasteh2021holu} & & & & & \checkmark & 
\checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark\\ \hline \cite{chen2021optimal} & & & & & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark\\ \hline \cite{marotta2017energy} & \checkmark & \checkmark & \checkmark & & \checkmark & \checkmark & \checkmark & & \checkmark & \checkmark & \checkmark & \\ \hline \cite{hosseini2019probabilistic} & \checkmark & & & \checkmark & \checkmark & \checkmark & & \checkmark & \checkmark & & & \\ \hline \cite{marotta2017fast} & \checkmark & \checkmark & & & \checkmark & \checkmark & \checkmark & & \checkmark & \checkmark & \checkmark & \\ \hline \cite{nguyen2019proactive} & \checkmark & & & \checkmark & \checkmark & \checkmark & \checkmark & & \checkmark & \checkmark & \checkmark & \\ \hline \cite{wen2018robustness} & & & & \checkmark & \checkmark & \checkmark & & \checkmark & & & & \\ \hline \cite{reddy2016robust} & \checkmark & & & \checkmark & \checkmark & \checkmark & \checkmark & & \checkmark & \checkmark & \checkmark & \checkmark \\ \hline \cite{baumgartner2017network} & \checkmark & & & \checkmark & \checkmark & \checkmark & \checkmark & & \checkmark & \checkmark & \checkmark & \\ \hline \cite{wen2017robust} & & & & \checkmark & \checkmark & \checkmark & & \checkmark & & & & \\ \hline \cite{bauschert2020fast} & & & & \checkmark & \checkmark & \checkmark & & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark\\ \hline \cite{nguyen2020deadline} & \checkmark & & & \checkmark & \checkmark & \checkmark & \checkmark & & \checkmark & \checkmark & \checkmark & \\ \hline \cite{luu2021uncertainty} & \checkmark & \checkmark & & \checkmark & \checkmark & \checkmark & \checkmark & & \checkmark & \checkmark & \checkmark & \\ \hline \cite{luu2022admission} & \checkmark & \checkmark & & \checkmark & \checkmark & \checkmark & & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark\\ \hline This work & \checkmark & \checkmark & 
\checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark\\ \hline \end{tabular} } \end{table*} \section{Related works} \label{Related works} The related works are categorized in terms of uncertainties: 1) resource allocation with fixed resource demand \cite{ebrahimi2019joint,farkiani2019fast,sun2019energy,soualah2019online,chen2020network,bhamare2017optimal,ghazizadeh2019joint,varasteh2021holu,chen2021optimal}, and 2) resource allocation under demand uncertainties \cite{hosseini2019probabilistic,marotta2017energy,marotta2017fast,nguyen2019proactive,wen2018robustness,reddy2016robust,baumgartner2017network,wen2017robust,bauschert2020fast,nguyen2020deadline,luu2021uncertainty,luu2022admission}. \subsection{Resource allocation with fixed resource demand} The energy consumption of cloud nodes and the cost of bandwidth consumption are decreased in \cite{ebrahimi2019joint}, where the provided framework allows tenants to manage their slices and serve their customers. For this purpose, one Integer Linear Programming (ILP) formulation is designed for resource allocation and another for admission control. The issue of energy-aware service deployment is studied in \cite{farkiani2019fast}, where an ILP is formulated by considering limited VNF traffic processing capacity and management issues. By applying the Benders decomposition, feasibility pump, and primal-dual algorithms, a fast and scalable algorithm with polynomial execution time is devised to compute a near-optimal solution. The problem of resource allocation in network slicing is studied in \cite{chen2020network}, where flexible routing, End-to-End (E2E) latency, and coordination overhead are of concern. The problem is formulated as a mixed binary linear program, with the objective of reducing the energy consumption of cloud nodes.
The problem of placing VNFs to form Service Function Chains (SFCs) on geographically distributed clouds is addressed in \cite{bhamare2017optimal}. This problem is formulated as an ILP to reduce inter-cloud traffic and response time in a multi-cloud scenario; the total implementation cost and Service Level Agreements (SLAs) are of concern. The energy efficiency optimization for orchestrating online SFC requests in multi-domain networks is assessed in \cite{sun2019energy}, where the problem is formulated as an ILP, followed by a heuristic algorithm which, next to meeting the needs of online SFCs, efficiently assures the privacy of each cloud. An ILP formulation is presented in \cite{soualah2019online} to solve the Virtualized Network Function Forwarding Graph (VNF-FG) placement and chaining problem; VNFs are shared between tenants to optimize resource consumption and increase the infrastructure provider's revenue. A mathematical formulation for reliability-aware VNF placement and routing considering Quality of Service (QoS) parameters is presented in \cite{ghazizadeh2019joint}, where a resource allocation algorithm, applying a shared protection scheme with Active-Standby redundancy, is proposed to optimize the redundant VNFs without affecting the QoS parameters. The problem is formulated as a Mixed Integer Linear Programming (MILP) problem and a meta-heuristic algorithm is proposed to make the solution scalable in large-scale networks. In \cite{varasteh2021holu}, the power-aware and delay-constrained joint VNF placement and routing (PD-VPR) problem is formulated as an ILP. Then, a fast online heuristic named Holu is developed to overcome the high computational complexity of the ILP. In \cite{chen2021optimal}, the problem of resource allocation to SFCs is considered, taking into account the E2E latency constraints of all services as well as cloud and communication resource budget and energy efficiency constraints. The problem is formulated as a Mixed Binary Linear Program (MBLP).
Then, an alternative MBLP formulation is developed which yields the same optimal solution and is more computationally efficient when the dimension of the corresponding network becomes large. \subsection{Resource allocation under demand uncertainties} The manner in which robust strategies for placing VNFs in virtual data centers impact energy saving and the level of protection against demand uncertainties is assessed in \cite{marotta2017energy}. There, a robust optimization model with a heuristic algorithm is proposed for reducing the energy consumption of the computational and network infrastructure that is robust to fluctuations in resource demand; to reduce energy consumption, the unused servers and switches' ports are turned off. The issue of mapping VLs into physical paths, known as Virtual Link Embedding (VLE), where the bandwidth requirements of VLs are uncertain, is assessed in \cite{hosseini2019probabilistic}. To have VLs with predictable performance, the mapping must assure the required E2E congestion probability of physical paths with no dependency on the characteristics of the paths onto which the VLs are mapped. Accordingly, a general uncertainty model is proposed in which the bandwidth requirements of VLs, as well as the VNFs' demands, are uncertain. The VLE problem is formulated as a nonlinear optimization problem; then, a model for large-scale networks is provided by applying an approximate formulation. The robust network embedding problem under resource demand uncertainties is assessed in \cite{marotta2017fast}, where delay limitations are considered by applying robust mixed integer optimization techniques.
A model for proactive SFC orchestration is proposed in \cite{nguyen2019proactive} with the objective of preventing resource reallocation to SFCs when demands fluctuate; most previously proposed models are reactive, reallocating resources during demand fluctuations, which has a negative effect on the performance of delay-sensitive SFCs. Consequently, first, the SFC orchestration with a predefined deadline limitation is formulated as a Mixed Integer Non-Linear Programming (MINLP) problem by applying the $\Gamma$-Robustness concept and, next, an approximate algorithm is devised to solve the large-scale problems. The problem of robust and E2E network slicing, where the slices are considered as a set of VNFs and the links between them, is discussed in \cite{wen2018robustness}. Bugs may occur in some VNFs in a random manner, causing some slices to lose their validity and triggering a slice recovery process. Because traffic demand in each slice is stochastic, drastic changes in traffic demand can lead to slice reconfiguration. As to slice recovery in network slicing under bandwidth uncertainty, a solution is presented in \cite{wen2018robustness} and a heuristic algorithm is devised based on Variable Neighborhood Search (VNS) to accelerate the problem solving time in large-scale networks. Researchers in \cite{reddy2016robust} applied the $\Gamma$-Robustness concept in resource allocation to VNF chains by considering their tolerable delays and the uncertain state of traffic demand, which leads to uncertainty in the VNFs' resource requirements; the problem is formulated as a MILP. An admission control method is adopted and the objective function is to reduce the energy consumption of servers and links next to reducing the penalty due to the admission control process. To improve the scalability of the model, a MILP-based VNS algorithm is presented as well.
To overcome the scalability issues in \cite{reddy2016robust}, a model applying the concept of light robustness is presented in \cite{baumgartner2017network}. The problem of failure recovery in network slicing under uncertainty in traffic demand is assessed in \cite{wen2017robust}, where, first, the problem is formulated as a MILP and, next, robust optimization is applied to fulfill the stochastic traffic requests. A Mixed Integer Programming (MIP) formulation is presented in \cite{bauschert2020fast} for the network slicing problem under traffic uncertainty and, to reduce the computational complexity of the mathematical optimization, a meta-heuristic algorithm based on ant colony optimization is devised for the robust network slice design problem. To avoid frequent resource re-provisioning, the deadline-aware, co-located, and geo-distributed SFC orchestration with demand uncertainties is formulated as a robust optimization problem in \cite{nguyen2020deadline}, where exact and approximate algorithms are devised to solve it; uncertain demand knowledge is used in computing a proactive SFC orchestration that can withstand fluctuations in dynamic service demand. A resource allocation approach in network slicing that is robust to a partly unknown users' count with random usage of slice resources is proposed in \cite{luu2021uncertainty}, where the objective is to increase the total earnings of the Infrastructure Provider (IP). The resource allocation to slices is made so as to limit its impact on low-priority background services, which may coexist next to the slices in the infrastructure network. In this context, the probabilistic assurance is that the volume of network resources allocated to a slice will meet its requirements. In \cite{luu2022admission}, a prioritized admission control mechanism for concurrent slices based on an infrastructure resource reservation approach is proposed.
The reservation accounts for the dynamic nature of slice requests while being robust to slice resource demand uncertainties. Adopting the perspective of an IP, reservation schemes are proposed that maximize the number of slices for which infrastructure resources can be granted while minimizing the costs charged to the Mobile Network Operators (MNOs). The findings of this article are compared with those of the reviewed articles in Table~\ref{Related-Research}. In practice, there exist different types of resources in the infrastructure network: the CPU, RAM, storage, and bandwidth that a slice needs to operate. To confront the demand uncertainties, the uncertainty of each resource demand must be of concern. In this article, a comprehensive uncertainty formulation is devised and, due to the business model introduced by 5GPPP \cite{queseth20175g} for network slicing, the online slice requests with their requirements, including the propagation delay of VLs, are of concern. The joint admission control and resource allocation with the main objective of increasing the accepted slices' count and decreasing the power consumption of the infrastructure network is of concern. Because this comprehensive formulation yields a large-scale problem that must be solved very fast to be applicable to large-scale networks, a new and rapid near-optimal algorithm is devised. As observed, none of the available studies meets all the conditions addressed in this article.
\begin{table}[!ht] \renewcommand{\arraystretch}{1.5} \centering \scriptsize \caption{{Main Notations}} \label{table-notation} \resizebox{\columnwidth}{!}{ \begin{tabular}{>{\color{black}}c |>{\color{black}}c} \hline \textbf{Notation}& \textbf{Definition}\\\hline \multicolumn{2}{c}{\textbf{Input Parameters}}\\\hline $\mathcal{N},\mathcal{L}$ &Sets of IP's physical nodes and links\\\hline $\mathcal{N}_\text{Used},\mathcal{N}_\text{Unused}$ &Sets of previously used and unused nodes\\\hline ${R}_{n}$, ${R'}_{n}$ &\begin{tabular}{c}Vector of node $n$'s total and available resources\\ that includes CPU, RAM, and storage\end{tabular}\\ \hline ${P}^\text{Max}_{n}$ , ${P}^\text{Idle}_{n}$ & Maximum and idle power consumption of node $n$\\ \hline ${S_n}, {S^\text{Port}_n}$& Switch $n$ and its one port power consumption\\ \hline ${S'_n}$& Number of connected ports of switch $n$ to other switches\\ \hline $B_{l_{n,n'}}$, $B'_{l_{n,n'}}$ &Total and available bandwidth of the physical link $l_{n,n'}$\\\hline ${B}_\text{Total}$& Sum of all links' bandwidths\\ \hline $\tau_{l_{n,n'}}$ &Propagation delay of the physical link $l_{n,n'}$\\\hline ${\zeta}_{l_{n,n'}}$&\begin{tabular}{c} Includes switch $n$ and $n'$ idle power and power of ports used in link ${l}_{n,n'}$\end{tabular}\\ \hline ${\gamma}_{l_{n,n'}}$&\begin{tabular}{c} To specify used link $l_{n,n'}$, it is 1 if link $l_{n,n'}$ was used\end{tabular}\\ \hline $I^{l_{u,u'}}_{p_{n,n'}^b}$&\begin{tabular}{c}Indicator that determines physical link $l_{u,u'}$\\contributes in the $b^{\text{th}}$ path between $n$ and $n'$, if it has value 1\end{tabular}\\\hline $\mathcal{L}_{p_{n,n'}^b}$&\begin{tabular}{c}Set of all physical links $l_{u,u'}$ contribute in the $b^{\text{th}}$ path between $n$ and $n'$\end{tabular}\\\hline $\mathcal{P}_{n,n'}$ &Set of paths between $n$ and $n'$\\\hline $\mathcal{T}, \mathcal{T}_\text{c}$&Sets of all tenants that their slices are accepted and current tenants\\ \hline $\mathcal{D}_{t}, 
\mathcal{D}_{t\_\text{c}}$& \begin{tabular}{c} Sets of tenant $t$'s slices and current tenant $t$'s arrived slices \end{tabular} \\ \hline $\mathcal{S}, \mathcal{S}_\text{c}$ &Sets of total accepted slices and current time slot arrived slices\\ \hline $\phi_{t,d}$ &lifespan of the $d^\text{th}$ slice of tenant $t$\\ \hline $\mathcal{M}_{t,d}, \mathcal{E}_{t,d}$ &Sets of $s_{t,d}$'s requested VMs and VLs\\ \hline $\nu_{m_{t,d}}$ &\begin{tabular}{c}Vector of the requested capacities for VM $m_{t,d}$ that\\ includes $\nu^\text{CPU}_{m_{t,d}}$, $\nu^\text{RAM}_{m_{t,d}}$ and, $\nu^\text{STOR}_{m_{t,d}}$\end{tabular}\\ \hline $\Upsilon_{e_ {m, m'}}$ &\begin{tabular}{c}Vector of specifications of requested VL between the two VMs $m$ and $m'$ \\in slice $s_{t, d}$ that includes $\omega_ {e_ {m, m'}}$ and $\tau_\text{max}^{e_ {m, m'}}$ \end{tabular}\\\hline $\omega_{e_{m,m'}}$ &Requested data rate between two VMs $m_{t,d}$ and $m'_{t,d}$\\ \hline $\tau^{e_{m,m'}}_{\max}$ &\begin{tabular}{c}Maximum tolerable propagation delay between two VMs $m_{t,d}$ and $m'_{t,d}$\end{tabular}\\ \hline $\alpha_n$&To specify previously turned-on node $n$, it is 1 if node $n$ is turned-on\\\hline ${N}_\text{Total}$& Sum of All nodes power consumption\\ \hline ${N}_\text{Used}$& Nodes power consumption in previous time slots\\ \hline ${S}_\text{Total}$& Sum of All switches power consumption\\ \hline ${S}_\text{Used}$& Switches power consumption in previous time slots\\ \hline $\Gamma_1$,$\Gamma_2$& To specify protection levels for VMs and VLs\\ \hline ${\Delta}_1,{\Delta}_2$& To specify relative deviations for requested resources of VMs and VLs\\ \hline \multicolumn{2}{c}{\textbf{Decision Variables}}\\\hline ${\beta}_{n}$& 1 if node $n$ is turned on in current time slot\\\hline ${\delta}_{t,d}$& 1 if slice $d$ of tenant $t$ is accepted\\\hline $\pi_{n}^m$&\begin{tabular}{c} 1 if VM $m_{t,d}$ is placed on node $n$\end{tabular}\\\hline $\xi_{p_{n,n'}^b}^{e_{m,m'}}$&\begin{tabular}{c}1 if VL 
$e_{m,m'}$ is mapped on the $b^{\text{th}}$ path between $n$ and $n'$\end{tabular}\\\hline ${\vartheta}_{l_{n,n'}}$&\begin{tabular}{c} 1 if link $l_{n,n'}$ used in current time slot\end{tabular}\\ \hline \multicolumn{2}{c}{\textbf{Auxiliary Variables}}\\\hline ${\eta}$& Number of rejected slices in current time slot\\\hline ${N}_\text{c}$&\begin{tabular}{c} Sum of used nodes power consumption with respect to\\ arrived slices in current time slot\end{tabular}\\ \hline ${U}_{n}$&\begin{tabular}{c} Amount of node $n$'s used resources (CPU,RAM,Storage)\\ in current time slot \end{tabular}\\ \hline ${O}^{t}_{n}$&\begin{tabular}{c} Amount of node $n$'s used resources (CPU,RAM,Storage) for robustness\\ in time slot $t$ \end{tabular}\\ \hline ${S}_\text{c}$& Sum of the used switches power consumption in current time slot\\ \hline ${U'}_{l_{n,n'}}$ &Amount of link $l_{n,n'}$'s used bandwidth in current time slot\\\hline ${O'}^{t}_{l_{n,n'}}$ &Amount of link $l_{n,n'}$'s used bandwidth for robustness in time slot $t$\\\hline \begin{tabular}{c}${\rho_{1}}^{m}_{n}$, ${z_{1}}_{n}$,\\ ${\rho_2}^{e_{m,m'}}_{p^b_{n,n'}}$, ${z_2}_{l_{u,u'}}$\end{tabular} & Robustness variables\\\hline \end{tabular} } \end{table} \section{System Model}\label{System Model} This system includes: 1) Infrastructure network model and 2) Slice requests model. The main notations are summarized in Table \ref{table-notation}. \begin{figure}[t] \center{\includegraphics[width=\columnwidth] {images/Network_Architecture.pdf}} \caption{\label{Architecture} Infrastructure Provider's Network Architecture} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{images/Slice-request-processing2.png} \caption{ The Process of Processing Each Tenant's Slice Request }\label{Slice-request-processing} \end{figure} \subsection{Infrastructure Network Model} An IP is assumed that provides network slices for several tenants on a shared network. 
The infrastructure network is modeled as an undirected graph $G(\mathcal{N}; \mathcal{L})$, where $\mathcal{N}$ is the set of cloud nodes\footnote{Each cloud node includes a forwarding device and a cluster of servers.} and base stations (BS), and $\mathcal{L} = [l_{n,n'} ]$ is the set of links which connect the nodes\footnote{We use the terms node and server interchangeably.} (the cloud nodes and BSs) of this graph; if $ l_{n,n'}=1$, the two nodes $n$ and $n'$ are connected, and otherwise they are not. As observed in Fig. (\ref{Architecture}), distributed cloud nodes are present in both the core and transport layers, while the abstract RAN \cite{wang2019reconfiguration} layer includes only BSs; together, these nodes form the network. It is worth noting that the abstract RAN means that RAN details are not considered: only the computation and storage capacities of the BSs and their forwarding devices \cite{wang2019reconfiguration}, which connect them to the transport layer, are modeled. It is assumed that all nodes support Network Function Virtualization (NFV) and, as observed in Fig. (\ref{Slice-request-processing}), there is an orchestrator including an NFV and Software Defined Networking (SDN) manager which first receives the tenants' slice requests and then, if they are acceptable, creates the slices by allocating the corresponding resources. The resources of each node comprise CPU, RAM, and storage, represented as a vector $ R_{n} = (R_{n}^\text{CPU}, R_{n}^\text{RAM}, R_{n}^\text{STOR})$ for each node $n\in \mathcal{N}$. The physical links of the network have fixed bandwidths; therefore, each link $l_{n,n'}\in \mathcal{L}$ has a limited bandwidth $B_{l_{n,n'}}$. Because the nodes are geographically distributed, there is a non-negligible propagation delay: $\tau_{l_{n,n'}}$ denotes the propagation delay between nodes $n$ and $n'\in \mathcal{N}$. \subsection{Slice Requests Model} There exist tenants who request slices.
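To fix ideas, the graph model above can be sketched as plain data structures; this is a minimal illustration, and all node and link names and figures are hypothetical, not part of the model.

```python
from dataclasses import dataclass

@dataclass
class Node:
    # Resource vector R_n = (CPU, RAM, storage); units are illustrative
    cpu: float
    ram: float
    stor: float

@dataclass
class Link:
    bandwidth: float  # B_{l_{n,n'}}
    delay: float      # propagation delay tau_{l_{n,n'}}

# A toy infrastructure: two cloud nodes and one BS (all names hypothetical)
nodes = {"cloud1": Node(64, 256, 2000),
         "cloud2": Node(32, 128, 1000),
         "bs1": Node(8, 16, 100)}
# Undirected links, keyed by a frozenset of their endpoints
links = {frozenset({"cloud1", "cloud2"}): Link(10_000, 2.0),
         frozenset({"cloud2", "bs1"}): Link(1_000, 0.5)}

def connected(n, n_prime):
    """l_{n,n'} = 1 iff an undirected edge exists between n and n'."""
    return frozenset({n, n_prime}) in links
```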
In this article, it is assumed that $\mathcal{T} = \{1,\ldots, T\}$ is the set of all tenants and $\mathcal{D}_{t} = \{1,\ldots, d_{t}\}$ is the set of slices requested by tenant $t$, where $d_{t}$ is the number of slices requested by tenant $t$, since each tenant can request several different slices. In this process, an online resource allocation is of concern: at any given time, tenants who may or may not already belong to the set $\mathcal{T}$ can transmit new slice requests to the IP. Consequently, another set, named $\mathcal{T}_\text{c}$, is considered to collect the tenants who are currently sending slice requests and for which the IP must decide whether to accept them and allocate the resources thereupon. The set $\mathcal{D}_{t\_\text{c}}$, containing the current slice requests of tenant $t\in \mathcal{T}_\text{c}$, is considered as well. A set named $\mathcal{S}_\text{c}$ collects the current slice requests; each member of $\mathcal{S}_\text{c}$ is represented by $s_{t,d}$, the $d^{\text{th}}$ slice request of tenant $t$ at the current time. The set named $\mathcal{S}$ contains all the accepted slices. As observed in Fig. (\ref{Slice-request-processing}), when a tenant transmits a slice request to the orchestrator, the possibility of acceptance is checked first, and if the slice is accepted, the required resources are allocated and the slice is activated. When the tenant no longer needs the slice, i.e., the slice lifespan has expired, the slice termination process is run and its resources are taken back; consequently, the three sets $\mathcal{T}$, $\mathcal{D}_{t}$, and $\mathcal{S}$ are updated after both slice resource allocation and expiration. Each slice request specifies a set of VMs and their associated VLs, represented by the undirected graph $s_{t, d} = (\mathcal{M}_{t, d},\mathcal{E}_{t, d})$, where $\mathcal{M}_{t, d}$ and $\mathcal{E}_{t, d}$ are the sets of VMs and VLs requested by tenant $t$ for slice $d$, respectively.
For each VM $m_{t,d}\in \mathcal{M}_{t,d}$, a vector $\nu_{m_{t,d}} = [\nu_{m_{t,d}}^\text{CPU}, \nu_{m_{t,d}}^\text{RAM}, \nu_{m_{t,d}}^\text{STOR}]$ indicates the amount of resources needed by VM $m_{t,d}$. From now on, the indices $t$ and $d$ are dropped from $m$ and $m'$ for readability. For each VL $e_{m,m'} \in \mathcal{E}_{t,d}$, $m$ and $m'\in \mathcal{M}_{t,d}$, the specifications are expressed by $\Upsilon_{e_{m,m'}} = [\omega_{e_{m,m'}}, \tau_\text{max}^{e_{m,m'}}]$, where $\omega_{e_{m,m'}}$ is the link data rate and $\tau_\text{max}^{e_{m,m'}}$ is the maximum tolerable delay. \section{Online Robust Admission Control and Resource Allocation: Mathematical Formulation} \label{online robust admission control and resource allocation: Mathematical Formulation} Based on the business model introduced by 5GPPP \cite{queseth20175g} for the relationship between the IP and the tenants, after a tenant transmits its slice request to the IP orchestrator, the resources are allocated if the available resources are sufficient to accept the slice. In next-generation network slicing, slice requests with different lifespans arrive at the orchestrator at different times, and the IP must be able to allocate resources to new slices in an online manner and release the resources of expired slices. The high power consumption of IT infrastructures has become a major concern among researchers for environmental and economic reasons \cite{farkiani2019fast}. The total IT infrastructure power consumption consists of the power consumed by the turned-on switches with their active ports and the power consumed by the turned-on servers. A robust optimization model is proposed for admission control and allocation of network resources to the online slice requests of tenants, aiming to reduce the IT infrastructure's total power consumption while increasing the number of accepted slices.
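The slice request model above can be sketched as simple records; all names and figures below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class VMRequest:
    # nu_m = [CPU, RAM, STOR] requested by one VM
    cpu: float
    ram: float
    stor: float

@dataclass
class VLRequest:
    # Upsilon_{e_{m,m'}} = [omega (data rate), tau_max (max propagation delay)]
    m: str
    m_prime: str
    rate: float
    tau_max: float

# A toy slice s_{t,d} with two VMs and one VL between them
vms = {"m1": VMRequest(4, 8, 50), "m2": VMRequest(2, 4, 20)}
vls = [VLRequest("m1", "m2", rate=100.0, tau_max=5.0)]

def total_demand(vms):
    """Aggregate resource demand of a slice (one criterion a greedy
    heuristic could use to order slices)."""
    return (sum(v.cpu for v in vms.values()),
            sum(v.ram for v in vms.values()),
            sum(v.stor for v in vms.values()))
```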
Because the formation of each slice takes time, in this study time is divided into slots: at the beginning of each time slot, the slices that arrived during the previous time slot are processed and the resources of slices with expired lifespans are released. In this process, $\phi_{t,d}$ denotes the slice lifespan. An example of the admission control and resource allocation process for 5 slices that arrive at the network during 4 time slots is drawn in Fig. (\ref{Time-slot-operations}), where, at the beginning of each time slot, the checking for expired slices and the resource allocation to newly accepted slices are evident. A predefined unit of lifespan is considered. Slice 1, with lifespan 1, is accepted in time slot 1; therefore, at the beginning of time slot 2, when slices 2 and 3 are accepted, slice 1 has expired and its resources are taken back. Because the lifespans of slices 2 and 3 are equal to 2, they expire in the middle of time slot 3. Slice 4, which arrives at the network in time slot 2, is rejected because its requested resources are not available in the infrastructure network. Slice 5, which arrives at the network in time slot 1, is accepted and, being a permanent slice, never expires. Due to the online allocation, when a slice request arrives, the admission control and the resource allocation are performed with respect to the remaining resources of the network. Because the admission control and resource allocation operations are carried out at the beginning of each time slot, the following model is optimal only within each time slot; it performs no global optimization across time slots.
\begin{figure} \centering \includegraphics[width=\columnwidth]{images/Time-slot-operations.pdf} \caption{Online Slice Requests and Each Time Slot's Operations }\label{Time-slot-operations} \end{figure} \subsection{Slice Acceptance} To separate the accepted slices from the rejected ones in the admission control, the variable $\delta_{t,d}$ is defined, whose value is set to 1 if slice $d$ of tenant $t$ is accepted; if it is not possible to meet the requirements of a slice in terms of the resources required by its VMs and VLs and the tolerable delays of its VLs, this value is set to 0. The following constraint specifies the number of rejected slices, which is to be decreased in the objective function. \begin{align}\label{Slice Acceptance} &\text{C1: }\sum\limits_{t\in \mathcal{T}_\text{c}} \sum\limits_{d\in\mathcal{\mathcal{D}_\text{$t\_\text{c}$}}}(1-\mathcal{\delta}_{t,d}) = {\eta}, \end{align} where ${\eta}$ is an integer variable that specifies the number of rejected slices in the current time slot. \subsection{VNF Placement} Each slice is considered as a VN that includes VMs and VLs; after the slice is accepted, the tenant, as the slice owner, manages the required VNFs on the VN. In this article, VNF placement means allocating resources to a VM. The binary variable $\pi^m_n$ is set to 1 if VM $m$ is placed on node $n$. To ensure that each VM $m$ is placed on a single node and to prevent a rejected slice's VMs from being placed on the nodes, the following constraint is defined: \begin{align}\label{VNF Placement} &\text{C2: }\sum\limits_{n \in \mathcal{N}} {\pi}_{n}^{m} = {\delta}_{t,d}, \forall m \in \mathcal{M}_{t,d}, \forall t \in \mathcal{T}_\text{c}, \forall d \in \mathcal{D}_{t\_\text{c}}. \end{align} In this constraint, ${\delta}_{t,d}$ appears on the right-hand side to ensure that placement decisions are made only for the VMs of the accepted slices.
\subsection{VL Embedding} There may be several paths between the two nodes $n$ and $n'$, as $\mathcal{P} _ {n, n'} = \{1, .., b, .., P _ {n, n'}\}$, such that $P _ {n, n'}$ is the physical paths’ count between the two nodes $n$ and $n'$, consequently, the $b^\text{th}$ path is symbolized as $p_ {n, n'} ^ {b}$. The binary decision variable $\xi _ {p^ {b}_{n, n'}} ^ {e_ {m, m'}}$ is defined to determine whether VL $e_ {m , m'}$ is embedded over path $p_ {n, n'} ^ {b}$ or not and if so, $\xi _ {p^ {b}_{n, n'}} ^ {e_ {m, m'}}$ is set to 1, otherwise, 0. The following constraints assure that the VLs of the accepted slices are located on only one path between the nodes that contain the associated VMs of the VLs: \begin{align}\label{VL Embedding}\nonumber &\text{C3: }\sum\limits_{n\in \mathcal{N}} \sum\limits_{n' \in \mathcal{N}} \sum\limits_{b\in\mathcal{\text{$\mathcal{P}$}}_{n,n'}} \xi^{e_{m,m'}}_{p^b_{n,n'}} = \delta_{t,d},\,\forall e_{m,m'} \in \mathcal{E}_{t,d},\\ &\forall{t}\in \mathcal{T}_\text{c},\, \forall {d} \in \mathcal{D}_{t\_\text{c}}.\\\nonumber &\text{C4: }\xi_{p^{b}_{n,n'}}^{e_{m,m'}} = \pi_n^{m}\times \pi_{n'}^{m'}, \forall n,n' \in \mathcal{N},\forall b\in\mathcal{\text{$\mathcal{P}$}}_{n,n'}, \\ &\forall m,m' \in \mathcal{M}_{t,d}, \forall e_{m,m'} \in \mathcal{E}_{t,d},\forall t \in {\mathcal{T}_\text{c}},\forall{d}\in \mathcal{D}_{t\_\text{c}}. 
\end{align} By applying the nonlinear constraint C4, we assure that if $\xi_{p^{b}_{n,n'}}^{e_{m,m'}}$ is 1, $\pi_n^{m}$ and $\pi_{n'}^{m'}$ should be 1, consequently, to have a BLP model, the following transformation is advised: \begin{align} &\text{C4-1: }\xi^{e_{m,m'}}_{p^b_{n,n'}} \le \pi_{n}^{m} + 1 - \pi_{n'}^{m'},\\ &\text{C4-2: }\pi_{n}^{m} \le \xi^{e_{m,m'}}_{p^b_{n,n'}} + 1 - \pi_{n'}^{m'},\\\nonumber &\text{C4-3: }\xi^{e_{m,m'}}_{p^b_{n,n'}} \le \pi_{n'}^{m'},\,\forall n,n' \in \mathcal{N},\forall b\in\mathcal{\text{$\mathcal{P}$}}_{n,n'}, \\ &\forall m,m' \in \mathcal{M}_{t,d},\forall e_{m,m'} \in \mathcal{E}_{t,d},\forall{t}\in \mathcal{T}_\text{c},\forall{d}\in {\mathcal{D}_{t\_\text{c}}}. \end{align} Therefore, constraints (C4-1 to C4-3) should be applied instead of C4. \subsection{Node Resources Limitations} The sum of the allocated resources to VMs, placed on a node $n$ must not exceed its capacity ($R'_n$), thus, the following constraints prevail: \begin{align}\label{Node Resources Limitations} &\text{C5: }\sum\limits_{t\in \mathcal{T}_\text{c}}\sum\limits_{d\in{\mathcal{D}_\text{$t\_\text{c}$}}}\sum\limits_{m \in \mathcal{M}_{t,d}} \pi_{n}^m\times{\nu_m} = {U_n},\forall n \in \mathcal{N}.\\ &\text{C6: }{U_n} \le {{R'}_n}, \forall n\in \mathcal{N}. \end{align} Because each node and VM have three types of resources CPU, RAM, and storage, the new integer variable $U_n$ is defined to specify the sum of the required resources by the VMs, placed on node $n$. The $U_n$ is a vector containing the required CPU, RAM, and storage by the VMs. \subsection{Link Bandwidth Limitation} To determine the physical link $l_ {u, u'}$ as a member of the path $p_ {n, n'} ^ {b}$, the binary indicator $I_ {p_ {n, n'} ^ b} ^ {l_ {u , u'}}$ is defined, where, $\mathcal{L}_ {p_ {n, n'} ^ {b}}$ is the set of all physical links in path $p_ {n, n'} ^ {b}$. 
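The path-membership indicator $I^{l_{u,u'}}_{p_{n,n'}^b}$ and the link set $\mathcal{L}_{p_{n,n'}^{b}}$ can be sketched as follows; the path and link names are hypothetical illustrations, not part of the model:

```python
# Each candidate path between two nodes is the list of physical links it uses,
# i.e. the set L_{p^b_{n,n'}}; indices b are 1-based as in the model.
paths = {("n1", "n2"): [
    [("n1", "u1"), ("u1", "n2")],                 # path b = 1
    [("n1", "u2"), ("u2", "u3"), ("u3", "n2")],   # path b = 2
]}

def indicator(link, n, n_prime, b):
    """I^{l}_{p^b_{n,n'}} = 1 iff `link` contributes to the b-th path
    between n and n'."""
    return 1 if link in paths[(n, n_prime)][b - 1] else 0
```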
The following constraints specify the bandwidth of each link $l_{n,n'}$ used by the VLs embedded on it, with respect to its available bandwidth ($B'_{l_{u,u'}}$). The variable $U'_{l_{u,u'}}$ is defined to specify the sum of the bandwidths required by the VLs on the physical link $l_{n,n'}$, and the variable ${\vartheta}_{l_{n,n'}}$ is defined to specify whether link $l_{n,n'}$ is used in the current time slot: if so, this variable is set to 1, otherwise 0. \begin{align}\label{Link Bandwidth Limitations}\nonumber &\text{C7: }\sum\limits_{t\in \mathcal{T}_\text{c}}\sum\limits_{d\in \mathcal{D}_{t\_\text{c}}} \sum\limits_{e_{m,m'} \in \mathcal{E}_{t,d}} \sum\limits_{n\in \mathcal{N}} \sum\limits_{n'\in \mathcal{N}} \sum\limits_{b\in\mathcal{\mathcal{P}}_{n,n'}} I^{l_{u,u'}}_{p_{n,n'}^b}\times\\ &\xi^{e_{m,m'}}_{p^b_{n,n'}}\times\omega_{e_{m,m'}} = {U'}_{l_{u,u'}} ,\,\forall l_{u,u'}\in \mathcal{L}_{p_{n,n'}^b}.\\ &\text{C8: }{{U'}_{l_{u,u'}}} \le {\vartheta}_{l_{u,u'}}\times{{B'}_{l_{u,u'}}},\forall l_{u,u'}\in \mathcal{L}_{p_{n,n'}^b}. \end{align} \subsection{Delay Model} To enforce the VLs' maximum tolerable propagation delays, the following constraint is defined: \begin{align}\nonumber &\text{C9: }\sum\limits_{l_{u,u'} \in \mathcal{L}_{p_{n,n'}^b}} I^{l_{u,u'}}_{p_{n,n'}^b}\times \xi^{e_{m,m'}}_{p^b_{n,n'}} \times\tau_{l_{u,u'}} \le \tau^{e_{m,m'}}_{\max} ,\,\\\nonumber &\forall n,n' \in \mathcal{N},\forall b \in \mathcal{P}_{n,n'}, \forall e_{m,m'}\in\mathcal{E}_{t,d},\, \forall{t}\in \mathcal{T}_\text{c},\, \\ &\forall{d}\in \mathcal{D}_{t\_\text{c}}. \end{align} Because the tenant, as the slice owner, is responsible for slice management, the IP does not have any information about the VNFs that will be executed on the VMs; therefore, the execution delay cannot be computed. Also, the transmission delay is not considered, for simplicity \cite{ebrahimi2019joint}.
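For a VL embedded on a single path ($\xi = 1$), the left-hand side of C9 reduces to the sum of the propagation delays of the path's links. A minimal sketch with hypothetical delays:

```python
def path_delay(path_links, link_delay):
    """Total propagation delay of one candidate path, i.e. the left-hand
    side of C9 when the VL is embedded on that path (xi = 1)."""
    return sum(link_delay[l] for l in path_links)

# Hypothetical per-link propagation delays (ms) along one candidate path
link_delay = {("u1", "u2"): 1.5, ("u2", "u3"): 2.0}
path = [("u1", "u2"), ("u2", "u3")]

tau_max = 5.0  # maximum tolerable delay tau_max of the VL
assert path_delay(path, link_delay) <= tau_max  # C9 holds for this path
```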
\subsection{Node Power Consumption} The linear node power consumption model proposed in \cite{dayarathna2015data} is modified as follows: \begin{align}\label{Node Power Consumption}\nonumber &\text{C10: }\sum_{n\in \mathcal{N}}(P^{\text{Max}}_n - P^{\text{Idle}}_n)\times \frac{{U}^{\text{CPU}}_n}{R^\text{CPU}_n} + (1-\alpha_n)\times \beta_{n}\times P^{\text{Idle}}_n \\ &={N}_{\text{c}}.\\\nonumber &\text{C11: }\pi_{n}^{m} \le \alpha_n,\forall n\in \mathcal{N}_\text{Used},\forall m \in \mathcal{M}_{t,d},\forall{t}\in\mathcal{T}_\text{c},\\ &\forall {d} \in \mathcal{D}_{t\_\text{c}}.\\\nonumber &\text{C12: }\pi_{n}^{m} \le \beta_{n},\forall n\in \mathcal{N}_\text{Unused},\forall m \in \mathcal{M}_{t,d},\forall{t}\in\mathcal{T}_\text{c},\\ &\forall{d}\in \mathcal{D}_{t\_\text{c}}. \end{align} In constraint C10, $P_{n} ^ \text{Idle}$ and $P_{n} ^ \text{Max}$ are the power consumption of node $n$ at idle and at maximum usage, respectively, and $\frac{U_{n} ^ \text{CPU}}{R_{n}^\text{CPU}}$ is the CPU utilization of node $n$. To propose an online admission control and resource allocation aiming at power consumption minimization, the nodes (set $\mathcal{N}$) are divided into two sets, $\mathcal{N}_\text{Used}$ and $\mathcal{N}_\text{Unused}$, so that nodes that have already been turned on are used as much as possible. The variable $\beta_{n}$ is set to 1 if node $n \in \mathcal{N}_\text{Unused}$ is turned on in the current time slot, while $\alpha_n$ identifies a previously turned-on node $n \in \mathcal{N}_\text{Used}$; $\alpha_n$ is 0 if node $n$ has not been used before. Constraints C11 and C12 ensure that nodes are turned on, and VMs are placed on them, consistently with this bookkeeping. With these constraints, a continuous variable, $N_{\text{c}}$, specifies the total nodes' power consumption in the current time slot. \subsection{Switch Power Consumption} The switches' power consumption is computed through constraint C13.
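The per-node term of C10 can be illustrated numerically; the wattages and core counts below are hypothetical:

```python
def node_power(p_max, p_idle, used_cpu, total_cpu, alpha, beta):
    """Per-node term of C10: utilization-proportional power plus idle
    power, the latter counted only for newly activated nodes
    (alpha = 0, beta = 1)."""
    return (p_max - p_idle) * used_cpu / total_cpu + (1 - alpha) * beta * p_idle

# A node already on (alpha = 1) contributes only the utilization-dependent part:
# (300 - 100) * 16/64 = 50 W
assert node_power(300.0, 100.0, 16, 64, alpha=1, beta=1) == 50.0
# A node newly turned on (alpha = 0, beta = 1) also pays its idle power:
assert node_power(300.0, 100.0, 16, 64, alpha=0, beta=1) == 150.0
```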
The switch power consumption model follows \cite{farkiani2019fast}. For simplicity, a parameter $\zeta_{l_{u,u'}}$ is defined for each link $l_{u,u'}$ as the sum of: 1) the idle power consumption of switch $u$, 2) the idle power consumption of switch $u'$, 3) the power consumption of switch $u$'s port used for link $l_{u,u'}$, and 4) the power consumption of switch $u'$'s port used for link $l_{u,u'}$. The cumulative power of each link is computed as follows: \begin{align}\label{link_power_calculation} &\zeta_{l_{u,u'}} = (2\times S^\text{Port}_u)+\frac{S_{u}}{S'_{u}} + \frac{S_{u'}}{S'_{u'}},\forall l_{u,u'} \in \mathcal{L}, u \neq u'. \\ & \label{link_power_calculation2} \zeta_{l_{u,u'}} = (S^\text{Port}_u)+\frac{S_{u}}{S'_{u}},\forall l_{u,u'} \in \mathcal{L}, u = u'. \end{align} In these two equations, $S_u$ and $S^\text{Port}_u$ are the power consumption of switch $u$ and of one of its ports, respectively. Also, $S'_u$ specifies the number of ports of switch $u$ connected to other switches. When a previously unused link $l_{u,u'}$ is used in the current time slot for embedding the VLs of newly accepted slices, $\vartheta_{l_{u,u'}}$ is first set to 1, and then the sum of $\zeta_{l_{u,u'}}$ over all links newly activated in the current time slot gives the switches' power consumption. Here, a continuous variable, $S_{\text{c}}$, specifies the total switches' power consumption in the current time slot. \begin{align}\label{Switch Power Consumption} &\text{C13: }\sum\limits_{u\in\mathcal{N}}\sum\limits_{u'\in\mathcal{N}}\vartheta_{l_{u,u'}}\times{\zeta}_{l_{u,u'}} = {S}_{\text{c}}, \vartheta_{l_{u,u'}} \neq \gamma_{l_{u,u'}}.
\end{align} \subsection{The Admission Control and Resource Allocation Base Model} \label{The Admission Control and Resource Allocation Base Model} The nodes' and switches' power consumption and the number of accepted slices are the main components of the objective function in this article, which aims to reduce both the power consumption and the number of rejected slices. The objective function, named C, is expressed as follows: \begin{align}\label{objective function}\nonumber &C = (\frac{{N}_{\text{c}}}{{N}_{\text{Total}}}) + (\frac{{S}_{\text{c}}}{{S}_{\text{Total}}}) + \eta+\sum\limits_{n\in\mathcal{N}}(\frac{U^\text{RAM}_n}{R^\text{RAM}_n}) + \\ &\sum\limits_{n\in\mathcal{N}}(\frac{U^\text{STOR}_n}{R^\text{STOR}_n}) + \sum\limits_{u\in\mathcal{N}}\sum\limits_{u'\in\mathcal{N}} (\frac{{U'}_{l_{u,u'}}}{{B}_\text{Total}}). \end{align} In the objective function C, all components are normalized so as to give priority to minimizing $\eta$, the component that specifies the number of rejected slices; therefore, $\eta$ is minimized first and then the remaining components. The nodes' and switches' power consumption are computed through $N_\text{c}$ and $S_\text{c}$, respectively. Because of the online approach considered in this article, the last three components of the objective function are defined to avoid wasting resources. The base model of admission control and resource allocation is presented as follows: \begin{align}\label{joint_optimization}\nonumber &\mathop{\min}~~C\\\nonumber &\text{subject to:}\\\nonumber &\text{(C1)-(C3),(C4-1)-(C4-3),(C5)-(C13)}\\\nonumber &\text{C14: }\beta_n \in \{0,1\}, \forall n.\\\nonumber &\text{C15: }\delta_{t,d} \in \{0,1\}, \forall t,d.\\\nonumber &\text{C16: }\pi^m_n \in \{0,1\}, \forall t,d,m,n.\\ &\text{C17: }\xi^{e_{m,m'}}_{p^b_{n,n'}} \in \{0,1\}, \forall e_{m,m'},b,n,n',t,d.\\\nonumber &\text{C18: }\vartheta_{l_{n,n'}} \in \{0,1\}, \forall l_{n,n'}.\\\nonumber \end{align} The number of rejected slices is computed through constraint C1.
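As a side illustration of the switch-power bookkeeping entering C13, the link parameter $\zeta_{l_{u,u'}}$ of Eqs. (\ref{link_power_calculation})--(\ref{link_power_calculation2}) can be computed as in the following sketch; all switch figures are hypothetical:

```python
def zeta(s_port, s_u, s_conn_u, s_up, s_conn_up, self_loop=False):
    """Link-power parameter zeta_{l_{u,u'}}: two port powers plus each
    endpoint switch's idle power split over its connected ports; the
    self_loop flag covers the u = u' case of the second equation."""
    if self_loop:  # u == u'
        return s_port + s_u / s_conn_u
    return 2 * s_port + s_u / s_conn_u + s_up / s_conn_up

# Hypothetical switches: idle powers 40 W and 60 W, 4 and 3 connected
# ports, 5 W per port -> 2*5 + 40/4 + 60/3 = 40 W for the link
assert zeta(5.0, 40.0, 4, 60.0, 3) == 40.0
```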
Constraint C2 governs the placement of the accepted slices' VMs on the nodes. By enforcing constraints C3 to C4-3, it is ensured that the VLs of the accepted slices are mapped onto exactly one path between the nodes hosting their associated VMs. The amount of resources used on each node $n$ by the VMs placed on it, with respect to the available resources, is specified by constraints C5 and C6. The bandwidth used on each link $l_{u,u'}$ by the VLs embedded on it, with respect to the available bandwidth, is specified by constraints C7 and C8. Constraint C9 enforces the delay limit of each VL. The nodes' power consumption is computed through constraints C10, C11, and C12, and the switches' power consumption through constraint C13. The binary decision variables of the problem are specified through constraints C14-C18. \subsection{The Admission Control and Resource Allocation Robust Model} Assuming that the resources of each VM (CPU, RAM, and storage) and the data rate of each slice's link are constant is an idealized and unrealistic assumption: if the demand undergoes unusual changes, the VLs and VMs become congested, leading to slice violations. To overcome this phenomenon and formulate the problem in conditions closer to reality, the uncertainties in the data rates of the VLs and in the workloads of the VMs are modeled as stochastic variables.
For this purpose, the requested resources of a slice are modeled in a sense that for slice $s_ {t, d}$, ${\nu _ {m}}=[\nu _ {m}^\text{CPU},\nu_ {m} ^ \text{RAM}, \nu _ {m} ^ \text{STOR}]$ and data rate $\omega_ {e_ {m, m'}}$ are considered as ${\widetilde{\nu}_{m}} = [{\widetilde{\nu} _ {m}^ \text{CPU}}, {\widetilde{\nu} _ {m} ^ \text{RAM}}, {\widetilde{\nu} _ {m} ^ \text{STOR}}]$ and ${\widetilde{\omega}_ {e_ {m, m'}}}$ rather than being fixed, where the volumes of ${\widetilde{\nu} _ {m}^ \text{CPU}}$ , ${\widetilde{\nu} _ {m} ^ \text{RAM}}$ , ${\widetilde{\nu} _ {m} ^ \text{STOR}}$, and ${\widetilde{\omega}_ {e_ {m, m'}}}$ have a uniform distribution \cite{wen2017robust} in the intervals $[{ \overline{\nu} _ {m} ^ \text{CPU}}- {\widehat{\nu} _ {m} ^ \text{CPU}}, {\overline{\nu} _ {m} ^ \text{CPU}} + {\widehat{\nu} _ {m} ^ \text{CPU}}]$, $[{\overline{\nu} _ {m} ^ \text{RAM}}- {\widehat{\nu} _ {m} ^ \text{RAM}}, { \overline{\nu} _ {m} ^ \text{RAM}} + {\widehat{\nu} _ {m} ^ \text{RAM}}]$, $[{\overline{\nu} _ {m} ^ \text{STOR}}- {\widehat{\nu} _ {m} ^ \text{STOR}}, {\overline{\nu} _ {m} ^ \text{STOR}} + {\widehat{\nu} _ {m} ^ \text{STOR}}]$, and $[{\overline{\omega}_ {e_ {m, m'}}}- {\widehat{\omega}_ {e_ {m, m'}}}, {\overline{\omega}_ {e_ {m, m'}}}+ {\widehat{\omega}_ {e_ {m, m'}}}]$, respectively. In the defined intervals, ${\overline{\nu} _ {m} ^ \text{CPU}}, {\overline{\nu} _ {m} ^ \text{RAM}}, {\overline{\nu} _ {m} ^ \text{STOR}}$, and ${\overline{\omega}_ {e_ {m, m'}}}$ are the centers of intervals or said otherwise, the nominal volumes, and ${\widehat{\nu} _ {m} ^ \text{CPU}} , {\widehat{\nu} _ {m} ^ \text{RAM}}, {\widehat{\nu} _ {m} ^ \text{STOR}}$, and ${\widehat{\omega}_ {e_ {m, m'}}}$ are the maximum deviation of demand. 
These intervals can be defined by the tenant: for a given slice, the tenant declares the least and the most required data rates; likewise, an interval for the amounts of resources requested by the tenant can be considered, which accommodates the fluctuations in traffic and workloads. An optimization model of admission control and resource allocation with the defined stochastic variables is presented, where the changes considered in the slice requests affect constraints C5 and C7 of the base model (\ref{joint_optimization}). The stated constraints are changed as follows: \begin{align}\label{constraints_ with_stochastic_variables} &\sum\limits_{t\in \mathcal{T}_\text{c}}\sum\limits_{d\in{\mathcal{D}_\text{$t\_\text{c}$}}}\sum\limits_{m \in \mathcal{M}_{t,d}} \pi_{n}^m\times{{\widetilde{\nu_m}}} = {U_n}, \forall n \in \mathcal{N}.\\\nonumber &\sum\limits_{t\in \mathcal{T}_\text{c}}\sum\limits_{d\in \mathcal{D}_{t\_\text{c}}} \sum\limits_{e_{m,m'} \in \mathcal{E}_{t,d}} \sum\limits_{n\in\mathcal{N}} \sum\limits_{n'\in\mathcal{N}} \sum\limits_{b\in\mathcal{\mathcal{P}}_{n,n'}} I^{l_{u,u'}}_{p_{n,n'}^b}\times\text{${\xi}$}^{e_{m,m'}}_{p^b_{n,n'}}\\ &\times{\widetilde{\omega}_{e_{m,m'}}} = {U'}_{l_{u,u'}} ,\,\forall l_{u,u'}\in \mathcal{L}_{p_{n,n'}^b}. \end{align} Because the formulation with these modified constraints, which includes stochastic variables, cannot be solved directly, $\Gamma$-Robustness is applied to obtain its worst-case robust counterpart and cope with the uncertainties. In this robust formulation, the stochastic variables are converted into deterministic ones using the conversion process presented in \cite{marotta2017energy}.
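The effect of the budget parameter $\Gamma$ can be illustrated directly: in the worst case at most $\Gamma$ uncertain coefficients deviate from their nominal values, so the protection added to the nominal demand is the sum of the $\Gamma$ largest deviations, which is the quantity the dualized constraints encode in linear form. A minimal sketch with hypothetical values:

```python
def gamma_protection(deviations, gamma):
    """Worst-case protection term of Gamma-robustness: the extra demand
    when at most `gamma` uncertain coefficients deviate to their bounds."""
    return sum(sorted(deviations, reverse=True)[:gamma])

# Hypothetical CPU deviations (hat-nu) of four VMs placed on one node
devs = [2.0, 0.5, 1.5, 1.0]
assert gamma_protection(devs, gamma=2) == 3.5  # the two largest: 2.0 + 1.5
assert gamma_protection(devs, gamma=0) == 0.0  # Gamma = 0 recovers nominal demand
```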
Therefore, the linear $\Gamma$-Robustness based model with the two parameters $\Gamma_1$ and $\Gamma_2$, named \textbf{ROBINS}, is as follows:\\ \\ \textbf{(ROBINS)} \begin{align}\label{robust_joint_optimization}\nonumber &\mathop{\min}~~C\\\nonumber &\text{subject to:}\\\nonumber &\text{(C1)-(C3),(C4-1)-(C4-3)}\\\nonumber &\text{C5-1: }\sum\limits_{t}\sum\limits_{d}\sum\limits_{m} {\pi}_{n}^m\times{\overline{\nu}_m} + \sum\limits_{t}\sum\limits_{d}\sum\limits_{m}{\rho_{1}}^{m}_{n} + \Gamma_1\times{z_{1}}_{n}\\\nonumber &= {U_n},\forall n \in \mathcal{N}.\\\nonumber &\text{C5-2: }\pi_{n}^m\times{\widehat{\nu_m}} \le {\rho_{1}}^{m}_{n} + {z_{1}}_{n},\quad \forall n,m,t,d.\\\nonumber &\text{(C6)}\\\nonumber &\text{C7-1: }\sum\limits_{t}\sum\limits_{d} \sum\limits_{e_{m,m'}} \sum\limits_{n} \sum\limits_{n'} \sum\limits_{b} I^{l_{u,u'}}_{p_{n,n'}^b}\times \xi^{e_{m,m'}}_{p^b_{n,n'}}\times\overline{\omega}_{e_{m,m'}}+\\\nonumber &\sum\limits_{t}\sum\limits_{d}\sum\limits_{e_{m,m'}}\sum\limits_{n}\sum\limits_{n'} \sum\limits_{b}I^{l_{u,u'}}_{p_{n,n'}^b}\times{\rho_2}^{e_{m,m'}}_{p^b_{n,n'}} + \Gamma_2\times{z_2}_{l_{u,u'}} =\\ &{U'}_{l_{u,u'}} ,\forall l_{u,u'}.\\\nonumber &\text{C7-2: }I^{l_{u,u'}}_{p_{n,n'}^b} \times\xi^{e_{m,m'}}_{p^b_{n,n'}}\times{\widehat{\omega}_{e_{m,m'}}} \le {\rho_2}^{e_{m,m'}}_{p^b_{n,n'}} + {z_2}_{l_{u,u'}},\\\nonumber &\forall e_{m,m'},b,n,n',l_{u,u'}.\\\nonumber &\text{(C8)-(C18)}\\\nonumber \end{align} In ROBINS, the robustness auxiliary variables ${\rho_{1}}^{m}_{n}$, ${z_{1}}_{n}$, ${\rho_2}^{e_{m,m'}}_{p^b_{n,n'}}$, and ${z_2}_{l_{u,u'}}$ are integers. The relative deviations are specified as ${\widehat{\nu}_{m}} = \Delta_1 \times {\overline{\nu}_{m}}$ and ${\widehat{\omega}_{e_{m,m'}}} = \Delta_2 \times {\overline{\omega}_{e_{m,m'}}}$, where $\Delta_1$ and $\Delta_2$ range between 0 and 1.
By means of the parameters $\Gamma_1$ and $\Gamma_2$, the numbers of VMs and VLs whose demands may fluctuate are specified, respectively. It is worth noting that this worst-case model, which considers the resource requests under the worst conditions, can be applied to any resource request distribution \cite{jiang2016data}, including uniform, normal, etc. On the other hand, the advantage of $\Gamma$-Robustness over the general worst-case form \cite{wen2017robust} is that the parameter $\Gamma$ can be used to control the number of random variables whose values may fluctuate. Therefore, the IP can use this parameter to control the robustness alongside the $\Delta$ parameter that specifies the fluctuation interval. \begin{center} \begin{figure}[t!h] \centering \includegraphics[width=\columnwidth]{images/Flow_chart.pdf} \caption{Online Admission Control and Resource Allocation Process}\label{flowchart} \end{figure} \end{center} \section{Online Robust Admission Control and Resource Allocation: Solution Methodology} \label{online robust admission control and resource allocation: Solution Methodology} An admission control and resource allocation algorithm is proposed based on ROBINS. ROBINS is a BLP that can be solved using any commercial optimization solver, such as IBM ILOG CPLEX. However, even an advanced state-of-the-art solver like CPLEX may have difficulty finding the optimal solution in a reasonable amount of time as the size of the problem increases (e.g., in real-world, large-scale networks) \cite{marotta2017energy}. Therefore, to find a near-optimal solution in a reasonable amount of time, a heuristic algorithm is also presented. The general process of admission control and resource allocation for the newly arrived slices applied in this article is shown in Fig.
(\ref{flowchart}): first, in each time slot except the first, the resources allocated to expired slices are taken back; next, the possibility of accepting the arrived slices is checked, the slices whose requirements cannot be met are rejected, and the requested resources are allocated to the accepted slices. \subsection{Optimal Admission Control and Resource Allocation Algorithm}\label{Optimal Admission control and Resource Allocation Algorithm} An algorithm named OEA-ONSU is proposed for optimal admission control and resource allocation, according to the process shown in Fig. (\ref{flowchart}) and to ROBINS; Algorithm \ref{algorithm_OEA_ONSU} shows OEA-ONSU. In this algorithm, each time slot consists of three steps: 1) the existence of expired slices is checked (line 2); if a slice's lifespan has expired, the resources allocated to its VMs (lines 4-8) and VLs (lines 9-15) are taken back and the sets $\mathcal{T}$, $\mathcal{D}_t$, and $\mathcal{S}$ are updated; otherwise, the value of $\phi_{t,d}$ is only decremented by one unit (lines 19-21). If no VMs or VLs remain on a previously used server or link, respectively, they are turned OFF (lines 24-25 and 34-35); otherwise, if the lifespans of the slices of a previous time slot have expired, the resources reserved for robustness are given back to the servers and links (lines 26-31 and 36-41). After the status of the expired slices and of the available network resources is determined, the admission control and resource allocation are applied to the newly arrived slices: in step 2), the ROBINS model is solved (line 43), and in step 3), the sets $\mathcal{T}$, $\mathcal{D}_t$, and $\mathcal{S}$, the amount of used resources, and the consumed power are updated.
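The lifespan decrement and resource reclamation of step 1 can be sketched as follows; the data layout and names are illustrative assumptions, not the paper's implementation:

```python
def expire_step(slices, node_free, vm_demand, placement):
    """Decrement lifespans, then return the CPU of the VMs of any slice
    whose lifespan has just reached zero (sketch of step 1)."""
    expired = []
    for sid, life in list(slices.items()):
        if life is not None:            # None models a permanent slice
            slices[sid] = life - 1
            if slices[sid] == 0:
                expired.append(sid)
    for sid in expired:
        for vm, node in placement.get(sid, {}).items():
            node_free[node] += vm_demand[vm]   # R'_n += nu_m
        del slices[sid]                        # update S, D_t, T
    return expired

slices = {"s1": 1, "s5": None}        # s5 is permanent
node_free = {"n1": 10.0}
vm_demand = {"m1": 4.0}
placement = {"s1": {"m1": "n1"}}
assert expire_step(slices, node_free, vm_demand, placement) == ["s1"]
assert node_free["n1"] == 14.0 and "s5" in slices
```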
\\ \begin{algorithm}[t] \caption{OEA-ONSU} \footnotesize \tiny \SetAlgoLined \label{algorithm_OEA_ONSU} \KwInput{$G,\mathcal{S},\mathcal{S}_\text{c},\mathcal{T},\mathcal{D}_t,\mathcal{T}_\text{c},\mathcal{D}_{t\_\text{c}},N_\text{Used},S_\text{Used},O^{t'}_n,O'^{t'}_{l_{n,n'}}, \Gamma_1,\Gamma_2,\Delta_1,\Delta_2$} \KwOutput{Rejected slices of $\mathcal{S}_\text{c}$, Servers and paths for embedding accepted slices of $\mathcal{S}_\text{c}$} \For {each time slot} { \tcc{STEP 1: Get resources of expired slices} \For {each tenant $t$ in $\mathcal{T}$ and each slice $d$ in $\mathcal{D}_t$} { \eIf {$\phi_{t,d} == 0$} { \For{each $m$ in $\mathcal{M}_{t,d}$ and each $n$ in $\mathcal{N}_\text{Used}$, if $\pi^m_n == 1$}{ $\pi^m_n = 0$\; ${R'}_n += \nu_m$\; Update $N_\text{Used}$\; } \For{each $e_{m,m'}$ in $\mathcal{E}_{t,d}$ and each Path $b$ in $\mathcal{P}_{n,n'}$, if $\xi^{e_{m,m'}}_{p^b_{n,n'}} == 1$} { $\xi^{e_{m,m'}}_{p^b_{n,n'}} = 0$\; \For{each used link $l_{u,u'}$, if $l_{u,u'}$ in $p^b_{n,n'}$} { ${B'}_{l_{u,u'}} += \omega_{e_{m,m'}}$\; Update $S_\text{Used}$\; } } Update $\mathcal{T}$\; Update $\mathcal{D}_t$\; Update $\mathcal{S}$\; }{ $\phi_{t,d} -= 1$\; } } \For {each $n$ in $\mathcal{N}_\text{Used}$} { \eIf {number of VMs on $n == 0$} { $\mathcal{N}_\text{Used}$.remove$(n)$\; }{ \For {each time slot $t'$ in previous time slots, if in current time slot,$\phi$ of all slices in $t' == 0$} { $R'_n += O^{t'}_{n}$\; Update $N_\text{Used}$\; } } } \For{each used link $l_{u,u'}$} { \eIf {number of VLs on $l_{u,u'} == 0$} { $\gamma_{l_{u,u'}} = 0$\; }{ \For{each time slot $t'$ in previous time slots, if in current time slot, $\phi$ of all slices in $t' == 0$} { ${B'}_{l_{u,u'}} + = O'^{t'}_{l_{u,u'}}$\; Update $S_\text{Used}$\; } } } \tcc{STEP 2: Admission control and Resource allocation} \textbf{Solve ROBINS}\; \tcc{STEP 3: Update $\mathcal{T}$, $\mathcal{D}_t$, $\mathcal{S}$, and the volume of available resources, applied resources for robustness, and the power 
consumption} \For {each accepted slice in current time slot} { Update $\mathcal{T}$\; Update $\mathcal{D}_t$\; Update $\mathcal{S}$\; } \For {each $n$ in $\mathcal{N}_\text{Used}$} { Update ${R'}_n$\; Update $O^t_n$ for current time slot $t$\; } \For{each used link ${l_{u,u'}}$} { Update ${B'}_{l_{u,u'}}$\; Update $O'^t_{l_{u,u'}}$ for current time slot $t$\; } Update $N_\text{Used}$\; Update $S_\text{Used}$\; } \end{algorithm} \subsection{Near Optimal Admission Control and Resource Allocation Algorithm}\label{Near Optimal Admission control and Resource Allocation Algorithm} To find a near-optimal solution in a reasonable amount of time in step 2 of Algorithm \ref{algorithm_OEA_ONSU}, a greedy algorithm is devised to be used instead of ROBINS. In this context, a new three-step algorithm, named NEA-ONSU, is devised, and Algorithm \ref{algorithm_NEA_ONSU} shows its details. Step 1 of this algorithm resembles that of Algorithm \ref{algorithm_OEA_ONSU}. In step 2, first, all the servers are sorted in descending order of their available capacities (line 3); next, all slices are sorted in descending order of the total capacity required for their VMs (line 4), and for each slice, the requested VMs are sorted in descending order of their capacities (line 6). For each VM, the first suitable server is found according to the resources required for that VM and for robustness; next, for each VM connected to it, the same process is run, where the required bandwidth capacity, the robustness margin, and the maximum tolerable propagation delay of the VL between the two current VMs are of concern. Consequently, the existing paths between the two candidate servers hosting the two VMs are first sorted in ascending order of their propagation delays, and then the first path that meets the requirements is selected (lines 7-30).
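The sorting and first-fit placement just described can be sketched as follows. This is a simplified, illustrative rendering of the server-selection part of step 2 (the data structures and function name are ours, path selection is omitted, and the robustness headroom is folded into a single relative reserve term):

```python
def greedy_place(servers, slices, reserve=0.0):
    """First-fit VM placement in the spirit of NEA-ONSU step 2.

    servers: dict server name -> free capacity (mutated on acceptance)
    slices:  dict slice name -> list of VM capacity demands
    reserve: extra fraction of each demand held back for robustness
    Returns (placement, rejected).
    """
    # Servers in descending order of available capacity (line 3).
    order = sorted(servers, key=servers.get, reverse=True)
    placement, rejected = {}, []
    # Slices in descending order of total requested capacity (line 4).
    for s in sorted(slices, key=lambda k: sum(slices[k]), reverse=True):
        trial, free = {}, dict(servers)
        # VMs of the slice in descending order of capacity (line 6).
        for vm, demand in enumerate(sorted(slices[s], reverse=True)):
            need = demand * (1.0 + reserve)
            host = next((n for n in order if free[n] >= need), None)
            if host is None:
                break
            free[host] -= need
            trial[vm] = host
        if len(trial) == len(slices[s]):   # all VMs placed: accept the slice
            servers.update(free)
            placement[s] = trial
        else:                              # otherwise reject it (lines 31-32)
            rejected.append(s)
    return placement, rejected
```

For instance, with `servers = {"a": 10.0, "b": 6.0}` and `slices = {"s1": [4, 4], "s2": [8]}`, slice `s1` is packed onto server `a` and `s2` is rejected once no single server can host its 8-unit VM.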
If the required servers and links cannot be found for a slice, that slice is rejected (lines 31-32); otherwise, the sets $\mathcal{T}$, $\mathcal{D}_t$, and $\mathcal{S}$ are updated (lines 33-37). In step 3, the used servers and links are specified, and the volume of resources applied for robustness and the power consumption are computed. \begin{itemize} \item \textbf{Computational complexity:} Consider $t$ as the current time slot count; $|\mathcal{T}|$ and $|\mathcal{D}_t|$ as the total accepted tenants' and slices' counts up to the current time slot, respectively; $|\mathcal{T}_\text{c}|$ and $|\mathcal{D}_{t\_\text{c}}|$ as the current time slot tenants' and slices' counts, respectively; $\mathcal{|N|}$ as the nodes' count; $|\mathcal{M}|$ as the VMs' count; $|\mathcal{E}|$ as the VLs' set length; $b$ as the count of the paths between two nodes; and $|\mathcal{L}|$ as the physical links' set length. The computational complexity of step 1 is O($|\mathcal{T}|.|\mathcal{D}_t|.({\mathcal{|N|}}.|\mathcal{M}|+{\mathcal{|N|}}.t+|\mathcal{E}|.\mathcal{|N|}^2.b.|\mathcal{L}|)$), of step 2 is O($(|\mathcal{T}_\text{c}|.|\mathcal{D}_{t\_\text{c}}|).\log(|\mathcal{T}_\text{c}|.|\mathcal{D}_{t\_\text{c}}|)+\mathcal{|N|}.\log \mathcal{|N|} + (|\mathcal{T}_\text{c}|.|\mathcal{D}_{t\_\text{c}}|).(|\mathcal{M}|^2.\mathcal{|N|}^4.b.\log(\mathcal{|N|}^2.b) + \mathcal{|N|}^4.b^2.|\mathcal{L}|^2.|\mathcal{E}| + |\mathcal{M}|.\mathcal{|N|}^3.b.|\mathcal{L}|)$), and of step 3 is O($|\mathcal{T}_\text{c}|.|\mathcal{D}_{t\_\text{c}}|.({\mathcal{|N|}}.|\mathcal{M}|+|\mathcal{E}|.\mathcal{|N|}^2.b.|\mathcal{L}|)$). Therefore, the total computational complexity of Algorithm \ref{algorithm_NEA_ONSU} is the sum of the step 1 and step 2 complexities, because the complexity of step 3 is negligible compared to steps 1 and 2.
\end{itemize} \begin{algorithm}[t] \caption{NEA-ONSU} \footnotesize \tiny \SetAlgoLined \label{algorithm_NEA_ONSU} \KwInput{$G,\mathcal{S},\mathcal{S}_\text{c},\mathcal{T},\mathcal{D}_t,\mathcal{T}_\text{c},\mathcal{D}_{t\_\text{c}},N_\text{Used},S_\text{Used},O^{t'}_n,O'^{t'}_{l_{n,n'}},\Gamma_1,\Gamma_2,\Delta_1,\Delta_2$} \KwOutput{Rejected slices of $\mathcal{S}_\text{c}$, Servers and paths for embedding accepted slices of $\mathcal{S}_\text{c}$} \For {each time slot} { \tcc{STEP 1: Get resources of expired slices} Same~as~OEA-ONSU \\ \tcc{STEP 2: Admission control and Resource allocation} $\textbf{sorted\_N}$: sort~$\mathcal{N}$~descending~according~to~available~capacity~of~each~node\; $\textbf{sorted\_slices}$: sort~$\mathcal{S}_\text{c}$~descending~according~to~requested~VMs'~resources~of~slices\; \For {each slice $s$ in sorted\_slices} { $\textbf{sorted\_VMs}$: sort~VMs~descending according to their requested capacities\; \For{each $m$ in sorted\_VMs and each node $n$ in sorted\_N, if node~$n$~is~proper~for~$m$~by~considering~robustness} { \For{each $m'!=m$ in sorted\_VMs, if ${e}_{m,m'}$~exists} { \For{each node $n'$ in sorted\_N, if node $n'$ is proper for $m'$ by considering robustness} { $\textbf{sorted\_paths}$: sort~paths~between~$n$,~$n'$~ascending according~to~propagation~delay\; \For{each $p$ in ${{sorted\_paths}}$, if $p.prop\_delay$ $\le$ ${{\tau}^{e_{m,m'}}_{max}}$} { \For{each link ${l_{u,u'}}\in p$, if ${{B'}_{l_{u,u'}}}$ is enough for ${{e}_{m,m'}}$ by considering robustness} { ${\pi}^m_n = 1$\; ${R'}_n -= \nu_m$\; ${\pi}^{m'}_{n'} = 1$\; ${R'}_{n'} -= \nu_{m'}$\; ${\xi}^{e_{m,m'}}_{p} = 1$\; \For{each link $l_{u,u'} \in p$} { ${B'}_{l_{u,u'}}-=\omega_{e_{m,m'}}$\; } } } } } } \eIf{suitable servers and paths not found} { $\delta_{t,d} = 0$\; }{ Update $\mathcal{T}$\; Update $\mathcal{D}_t$\; Update $\mathcal{S}$\; } } \tcc{STEP 3: Specify the applied servers and links, and
compute the volume of applied resources for robustness and the power consumption} Update $\mathcal{N}_{Used}$\; \For{each $l_{n,n'}$} { Update $\vartheta_{l_{n,n'}}$\; } \For {each $n$ in ${\mathcal{N}_\text{Used}}$} { Update $O^t_n$ for current time slot $t$\; ${R'}_n -= O^t_n$\; } \For{each used link ${l_{u,u'}}$} { Update $O'^t_{l_{u,u'}}$ for current time slot $t$\; ${B'}_{l_{u,u'}} -= O'^t_{l_{u,u'}}$\; } Update $N_\text{Used}$\; Update $S_\text{Used}$\; } \end{algorithm} \section{Numerical Results} \label{Numerical Results}\subsection{Simulation Environment} Simulations are run to evaluate the efficiency of the proposed algorithms. As the infrastructure network, the Abilene network \cite{orlowski2010sndlib} with 12 nodes and 27 links is used, Fig. (\ref{fig:Abilene-Network}). The original Abilene network has 15 links, but here each node is also given a link to itself to allow the VMs to interconnect on a common server. The capacity of all links that interconnect the switches is set to $BW(i,j)=10$ Gbps, while the capacity of each node's local link is set to 40 Gbps, so that no bottleneck is formed in the local communications. \begin{figure} \centering \includegraphics[width=0.65\columnwidth]{images/Abilene.pdf} \caption{Abilene Network Topology} \label{fig:Abilene-Network} \end{figure} The algorithms are implemented in Python 3.6. The docplex Python library 2.4.61 and the IBM CPLEX Optimizer 12.8 are applied to implement and solve ROBINS. Simulations are run on a machine with a 20-core 2.3 GHz Intel Xeon E5-2695 v3 CPU and 128 GB of RAM. New slice requests arrive according to a Poisson distribution with mean arrival rate $\lambda$ = 2 and a maximum arrival rate of 5 per time slot. The slice lifespan ($\mu$) is drawn from an exponential distribution with a mean of 10 time slots (40 time slots of new and expired slice requests are simulated). The simulation parameters are tabulated in Table~\ref{simulation-parameters}.
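The arrival process described above can be reproduced with a short generator. This is a sketch with our own function names (Poisson arrivals with $\lambda=2$ capped at 5 per slot, exponential lifespans with a mean of 10 slots, 40 slots in total); the Poisson sampler uses Knuth's classical algorithm so that only the standard library is needed:

```python
import math
import random


def poisson(lam, rng):
    """Knuth's algorithm for sampling a Poisson-distributed count."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1


def generate_requests(slots=40, lam=2, cap=5, mean_life=10, seed=0):
    """Per-slot slice arrivals and one lifespan per arriving slice."""
    rng = random.Random(seed)
    arrivals, lifespans = [], []
    for _ in range(slots):
        n = min(poisson(lam, rng), cap)       # at most `cap` arrivals per slot
        arrivals.append(n)
        # Exponential lifespan in slots, rounded and kept >= 1.
        lifespans.append([max(1, round(rng.expovariate(1 / mean_life)))
                          for _ in range(n)])
    return arrivals, lifespans
```

Seeding the generator makes the request trace reproducible across the 20 simulation runs averaged in the results.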
It is assumed that each tenant has only one slice request. The specifications of the physical servers proposed in \cite{farkiani2019fast} are modified and tabulated in Table \ref{servers-specifications}. The VMs' specifications are tabulated in Table \ref{VMs-specifications}. The Barabasi-Albert model \cite{hosseini2019probabilistic,barabasi1999emergence} is used to generate the VN topologies. As observed in Table \ref{servers-specifications}, two types of servers are considered, and in each simulation iteration, the specifications of each server of the infrastructure network are randomly selected from these two types. As observed in Table \ref{VMs-specifications}, three types of VMs are considered, and within each VM type the resources are designed to be compatible. The switch power data are extracted from \cite{marotta2017fast,bari2019esso} and modified as follows: the power of each switch is 184 Watts, the power of each 10 Gbps port is 4.3 Watts, and the power of each 40 Gbps port is 13.6 Watts. Consequently, the power of each link is computed through Eqs. (\ref{link_power_calculation}) and (\ref{link_power_calculation2}). The source code of the simulation is available in \cite{ecv7-nz24-21}.
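The power figures above can be assembled into a small model. This is a hedged sketch, not the paper's exact Eqs. (\ref{link_power_calculation}) and (\ref{link_power_calculation2}): here a link is simply charged the power of its two end ports, and a server is charged its idle power plus a load-proportional share up to $P^\text{Max}$, a common linear model consistent with the $P^\text{Idle}$/$P^\text{Max}$ columns of Table \ref{servers-specifications}:

```python
# Watts per active port, keyed by port rate in Gbps (values from the text).
PORT_W = {10: 4.3, 40: 13.6}


def link_power(rate_gbps):
    """Assumed port-based accounting: one active port at each link end."""
    return 2 * PORT_W[rate_gbps]


def server_power(p_idle, p_max, utilization):
    """Common linear model: idle power plus a load-proportional share."""
    return p_idle + (p_max - p_idle) * utilization
```

Under these assumptions, a 10 Gbps inter-switch link costs 8.6 W when active, and a type-1 server (170/540 W) at half load costs 355 W.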
\begin{table}[htpb] \centering \caption{Simulation Parameters} \label{simulation-parameters} \small \begin{center} \begin{tabular}{||m{5cm}|m{3cm}||} \hline {Number of time slots}& {40} \\ \hline {$\Gamma_1,\Gamma_2$} & {$[0,4],[0,4]$} \\ \hline {$\Delta_1,\Delta_2$} & {$(0\%,10\%,30\%)$}\\ \hline {Max number of $|\mathcal{T}_\text{c}|$} & 5 \\ \hline {$|\mathcal{D}_{t_\text{c}}|$} & 1 \\ \hline {Number of VMs of each slice request} & U(2-4) \\ \hline {Bandwidth of each VL (Mbps)} & U(100-1500) \\ \hline {Propagation delay of each VL (ms)} & U(4-13) \\ \hline \end{tabular} \end{center} \end{table} \begin{table} \centering \caption{Servers Specifications} \label{servers-specifications} \small \begin{center} \begin{tabular}{||m{2cm}|m{2cm}|m{2cm}||} \hline Type & 1 & 2\\ \hline {CPU (core)}& 32 & 48\\ \hline {RAM (GB)} & 192 & 768 \\ \hline {Storage (GB)} & 4000 & 4000\\ \hline {$P^\text{Max}$ (Watts)} & 540 & 700 \\ \hline {$P^\text{Idle}$ (Watts)} & 170 & 180 \\ \hline \end{tabular} \end{center} \end{table} \begin{table} \centering \caption{VMs Specifications} \label{VMs-specifications} \small \begin{center} \begin{tabular}{||m{2cm}|m{1cm}|m{1cm}|m{1cm}||} \hline {Type}& {1} & {2}& {3} \\ \hline {CPU (core)} & 1 & 2 & 4 \\ \hline {RAM (GB)} & 2 & 4 & 16\\ \hline {Storage (GB)} & 120 & 120 & 120 \\ \hline \end{tabular} \end{center} \end{table} \begin{figure} \centering \begin{subfigure}[b]{0.48\columnwidth} \centering \includegraphics[width=\columnwidth]{images/Compare-AccRatio-fig-different-protection-levels.pdf} \caption{Different Protection Levels ($\Gamma_1$=$\Gamma_2$, $\Delta_1$=$\Delta_2$ and Their Values Are Equal to 10\%)} \label{fig:comp-AccRatio-Different-protection-levels} \end{subfigure} \hfill \begin{subfigure}[b]{0.48\columnwidth} \centering \includegraphics[width=\columnwidth]{images/Compare-AccRatio-fig-different-intervals.pdf} \caption{Different Relative Deviations ($\Delta_1$=$\Delta_2$, $\Gamma_1$=$\Gamma_2$ and Their Values Are Equal to 1)} 
\label{fig:comp-AccRatio-Different-relative-deviations} \end{subfigure} \hfill \caption{Acceptance Ratio} \label{fig:comp-AccRatio} \end{figure} \subsection{Performance Metrics and Results} The results are presented for two scenarios: in the first, the relative deviations $\Delta_1$=$\Delta_2$\footnote{Considering $\Delta_1$=$\Delta_2$ is for simplicity; the algorithms can take different values for these two parameters as input.} for all requested VMs' resources and VLs' data rates are set to $10\%$, and results are presented for different protection levels specified by $\Gamma_1$=$\Gamma_2$\footnote{Considering $\Gamma_1$=$\Gamma_2$ is for simplicity; the algorithms can take different values for these two parameters as input.}, with values from 0 to 4; in the second, $\Gamma_1$=$\Gamma_2$ are set to 1 and results are presented for different $\Delta_1$=$\Delta_2$, with values of 0\%, 10\%, and 30\%. These two scenarios allow comparing the robustness applied through $\Gamma$ and through $\Delta$, the two general parameters for providing robustness. The baseline is resource allocation without robustness \cite{marotta2017fast,wen2017robust}. Therefore, we consider resource allocation without robustness ($\Delta$=0\% or $\Gamma$=0) as the baseline and compare the results of our algorithms for 1) $\Delta$=0\% or $\Gamma$=0 (the baseline) with 2) nonzero values of $\Delta$ or $\Gamma$, in the two stated scenarios, based on the following performance metrics. The reported results are averaged over 20 simulation runs.
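The two evaluation scenarios can be written down explicitly; the parameter values come from the text and Table \ref{simulation-parameters}, while the dictionary layout is ours:

```python
# Scenario 1: fixed relative deviation (10%), sweep the protection level.
scenario_1 = [{"gamma": g, "delta": 0.10} for g in range(5)]      # Gamma 0..4

# Scenario 2: fixed protection level (1), sweep the relative deviation.
scenario_2 = [{"gamma": 1, "delta": d} for d in (0.0, 0.10, 0.30)]

# Gamma = 0 or Delta = 0 disables robustness: the no-robustness baseline.
baseline = [cfg for cfg in scenario_1 + scenario_2
            if cfg["gamma"] == 0 or cfg["delta"] == 0.0]
```

Each configuration is then run 20 times and the metrics below are averaged over the runs.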
\begin{figure*} \centering \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{images/Compare-Node-Power-Consumption-fig-different-protection-levels.pdf} \caption{Server Power Consumption in Different Protection Levels ($\Gamma_1$=$\Gamma_2$, $\Delta_1$=$\Delta_2$ and Their Values Are Equal to 10\%)} \label{fig:comp-NodePower-timeslot-Different-protection-levels} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{images/Compare-Switch-Power-Consumption-fig-different-protection-levels.pdf} \caption{Switch Power Consumption in Different Protection Levels ($\Gamma_1$=$\Gamma_2$, $\Delta_1$=$\Delta_2$ and Their Values Are Equal to 10\%)} \label{fig:comp-SwitchPower-timeslot-Different-protection-levels} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{images/Compare-Total-Power-Consumption-fig-different-protection-levels.pdf} \caption{Total Power Consumption in Different Protection Levels ($\Gamma_1$=$\Gamma_2$, $\Delta_1$=$\Delta_2$ and Their Values Are Equal to 10\%)} \label{fig:comp-TotalPower-timeslot-Different-protection-levels} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{images/Compare-Node-Power-Consumption-fig-different-intervals.pdf} \caption{Server Power Consumption in Different Relative Deviations ($\Delta_1$=$\Delta_2$, $\Gamma_1$=$\Gamma_2$ and Their Values Are Equal to 1)} \label{fig:comp-NodePower-timeslot-Different-intervals} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{images/Compare-Switch-Power-Consumption-fig-different-intervals.pdf} \caption{Switch Power Consumption in Different Relative Deviations ($\Delta_1$=$\Delta_2$, $\Gamma_1$=$\Gamma_2$ and Their Values Are Equal to 1)} \label{fig:comp-SwitchPower-timeslot-Different-intervals} \end{subfigure} \hfill 
\begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{images/Compare-Total-Power-Consumption-fig-different-intervals.pdf} \caption{Total Power Consumption in Different Relative Deviations ($\Delta_1$=$\Delta_2$, $\Gamma_1$=$\Gamma_2$ and Their Values Are Equal to 1)} \label{fig:comp-TotalPower-timeslot-Different-intervals} \end{subfigure} \caption{Power Consumption Changes in Time Slots} \label{fig:comp-PowerConsTimeslots} \end{figure*} \begin{figure*} \centering \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{images/Average-Compare-Node-Power-Consumption-fig-different-protection-levels.pdf} \caption{Server Power Consumption in Different Protection Levels ($\Gamma_1$=$\Gamma_2$, $\Delta_1$=$\Delta_2$ and Their Values Are Equal to 10\%)} \label{fig:comp-NodePower-Different-protection-levels} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{images/Average-Compare-Switch-Power-Consumption-fig-different-protection-levels.pdf} \caption{Switch Power Consumption in Different Protection Levels ($\Gamma_1$=$\Gamma_2$, $\Delta_1$=$\Delta_2$ and Their Values Are Equal to 10\%)} \label{fig:comp-SwitchPower-Different-protection-levels} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{images/Average-Compare-Total-Power-Consumption-fig-different-protection-levels.pdf} \caption{Total Power Consumption in Different Protection Levels ($\Gamma_1$=$\Gamma_2$, $\Delta_1$=$\Delta_2$ and Their Values Are Equal to 10\%)} \label{fig:comp-TotalPower-Different-protection-levels} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{images/Average-Compare-Node-Power-Consumption-fig-Relative-deviation.pdf} \caption{Server Power Consumption in Different Relative Deviations ($\Delta_1$=$\Delta_2$, $\Gamma_1$=$\Gamma_2$ and Their Values Are Equal to 1)} 
\label{fig:comp-NodePower-Different-intervals} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{images/Average-Compare-Switch-Power-Consumption-fig-Relative-deviation.pdf} \caption{Switch Power Consumption in Different Relative Deviations ($\Delta_1$=$\Delta_2$, $\Gamma_1$=$\Gamma_2$ and Their Values Are Equal to 1)} \label{fig:comp-SwitchPower-Different-intervals} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{images/Average-Compare-Total-Power-Consumption-fig-Relative-deviation.pdf} \caption{Total Power Consumption in Different Relative Deviations ($\Delta_1$=$\Delta_2$, $\Gamma_1$=$\Gamma_2$ and Their Values Are Equal to 1)} \label{fig:comp-TotalPower-Different-intervals} \end{subfigure} \caption{Average Power Consumption Over Time Slots} \label{fig:comp-AvgPowerCons} \end{figure*} \begin{enumerate} \begin{figure*} \centering \begin{subfigure}[b]{0.49\columnwidth} \centering \includegraphics[width=\columnwidth]{images/Compare-Active-servers-fig-different-protection-levels.pdf} \caption{Active Servers in Different Protection Levels ($\Gamma_1$=$\Gamma_2$, $\Delta_1$=$\Delta_2$ and Their Values Are Equal to 10\%)} \label{fig:comp-ActiveServers-Different-protection-levels} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\columnwidth} \centering \includegraphics[width=\columnwidth]{images/Compare-Active-servers-fig-different-intervals.pdf} \caption{Active Servers in Different Relative Deviations ($\Delta_1$=$\Delta_2$, $\Gamma_1$=$\Gamma_2$ and Their Values Are Equal to 1)} \label{fig:comp-ActiveServers-Different-relative-deviations} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\columnwidth} \centering \includegraphics[width=\columnwidth]{images/Compare-Active-links-fig-different-protection-levels.pdf} \caption{Active Links in Different Protection Levels ($\Gamma_1$=$\Gamma_2$, $\Delta_1$=$\Delta_2$ and Their Values Are Equal to 10\%)} 
\label{fig:comp-ActiveLinks-Different-protection-levels} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\columnwidth} \centering \includegraphics[width=\columnwidth]{images/Compare-Active-links-fig-different-intervals.pdf} \caption{Active Links in Different Relative Deviations ($\Delta_1$=$\Delta_2$, $\Gamma_1$=$\Gamma_2$ and Their Values Are Equal to 1)} \label{fig:comp-ActiveLinks-Different-relative-deviations} \end{subfigure} \hfill \caption{Average Active Servers and Links} \label{fig:comp-ActiveServersLinks} \end{figure*} \item \textbf{Acceptance ratio:} The first metric is the ratio of the accepted slices' count to the total arrived slices' count, expressed through Eq. (\ref{Acc_ratio_calculation}): \begin{align} \label{Acc_ratio_calculation} &{Acceptance\_ratio} = {\frac{\#Accepted\_slices}{\#Arrived\_slices}}\times100. \end{align} The ratios of OEA-ONSU and NEA-ONSU are compared in the two stated scenarios, Fig. (\ref{fig:comp-AccRatio}). The average number of arrived slices over the 20 simulations is 100. As observed in scenario 1, Fig. (\ref{fig:comp-AccRatio-Different-protection-levels}), an increase in $\Gamma_1$ and $\Gamma_2$ decreases the acceptance ratio gained by OEA-ONSU, because the resource capacities are limited and the algorithm cannot accept all slices with their requirements; in scenario 2, Fig. (\ref{fig:comp-AccRatio-Different-relative-deviations}), the OEA-ONSU acceptance ratio is almost fixed and usually higher than in scenario 1. As for NEA-ONSU, its acceptance ratios in scenario 1 are closer to the optimal ones than in scenario 2, where they decrease by about 10\%. Unlike OEA-ONSU's behavior in scenario 1, at $\Gamma$=0 the NEA-ONSU acceptance ratio is less than at $\Gamma$=1 and 2, which is due to the greedy behavior of NEA-ONSU in choosing nodes to place VMs.
On average, in terms of acceptance ratio, OEA-ONSU has a gap of about 2\% with the baseline, and NEA-ONSU has an optimality gap of about 7\% with OEA-ONSU. \item \textbf{Power consumption:} Another metric applied in assessing OEA-ONSU and NEA-ONSU is power consumption, consisting of the servers' power consumption, the switches' power consumption, and the total power consumption, i.e., the sum of the two. The changes in this metric over the time slots are shown in two batches: 1) Figs. (\ref{fig:comp-NodePower-timeslot-Different-protection-levels}, \ref{fig:comp-SwitchPower-timeslot-Different-protection-levels}, and \ref{fig:comp-TotalPower-timeslot-Different-protection-levels}), scenario 1; 2) Figs. (\ref{fig:comp-NodePower-timeslot-Different-intervals}, \ref{fig:comp-SwitchPower-timeslot-Different-intervals}, and \ref{fig:comp-TotalPower-timeslot-Different-intervals}), scenario 2. The average total power of the servers and switches, when they are turned on under maximum load, is 7450 and 2500 Watts, respectively. The average power consumption over the time slots is likewise shown in two batches of figures: 1) Figs. (\ref{fig:comp-NodePower-Different-protection-levels}, \ref{fig:comp-SwitchPower-Different-protection-levels}, and \ref{fig:comp-TotalPower-Different-protection-levels}), scenario 1; 2) Figs. (\ref{fig:comp-NodePower-Different-intervals}, \ref{fig:comp-SwitchPower-Different-intervals}, and \ref{fig:comp-TotalPower-Different-intervals}), scenario 2. As observed in Figs.
(\ref{fig:comp-NodePower-timeslot-Different-protection-levels} and \ref{fig:comp-NodePower-timeslot-Different-intervals}), and considering the comparison of OEA-ONSU and NEA-ONSU in terms of acceptance ratio, the servers' power consumption approaches the average total server power under maximum load and then begins to decrease, in scenario 1 (with increasing $\Gamma$) from time slot 23 onwards and in scenario 2 (with increasing $\Delta$) from time slot 19 onwards, due to the inability to accept new slice requests while expired slices exit. The reduction in server power consumption in scenario 2 is higher than in scenario 1, owing to the larger reduction of the acceptance ratio in scenario 2. The same behavior is evident in the figures for the switch and total power consumption, respectively. Because the servers' power consumption contributes more to the total power consumption than the switches', the changes in total power consumption are mostly driven by the servers' power consumption. As observed in Fig. (\ref{fig:comp-AvgPowerCons}), in general, increasing $\Gamma$ and $\Delta$ increases the average power consumption. As observed in Fig. (\ref{fig:comp-NodePower-Different-protection-levels}), although the NEA-ONSU acceptance ratio at $\Gamma$=0, Fig. (\ref{fig:comp-AccRatio-Different-protection-levels}), is less than at $\Gamma$=1, so that a lower power consumption at $\Gamma$=0 would be expected, the slice requests at $\Gamma$=1 need more capacity for robustness and the order in which NEA-ONSU chooses servers may differ between $\Gamma$=1 and $\Gamma$=0; as a result, the server power consumption decreases, while, because the count of used links increases, the switch power consumption at $\Gamma$=1 becomes higher than at $\Gamma$=0. On average, in terms of power consumption, OEA-ONSU has a gap of about 4\% with the baseline, and NEA-ONSU has an optimality gap of about 10\% with OEA-ONSU.
\item \textbf{Active servers and links:} The third evaluation metric comprises the counts of active servers and links, separately. As observed in Fig. (\ref{fig:comp-ActiveServersLinks}), with increasing $\Gamma$, the active servers' count, Fig. (\ref{fig:comp-ActiveServers-Different-protection-levels}), remains almost fixed, while the active links' count, Fig. (\ref{fig:comp-ActiveLinks-Different-protection-levels}), increases. Hence, in scenario 1, the increase in the servers' power consumption, Fig. (\ref{fig:comp-NodePower-Different-protection-levels}), is due to an increase in the volume of used resources rather than in the count of used servers. In Figs. (\ref{fig:comp-ActiveServers-Different-relative-deviations} and \ref{fig:comp-ActiveLinks-Different-relative-deviations}), with increasing $\Delta$, the active servers' count in OEA-ONSU increases more than the active links' count, and for NEA-ONSU, the active servers' count increases as well. In Fig. (\ref{fig:comp-ActiveLinks-Different-relative-deviations}), for NEA-ONSU, the active links' count at $\Delta$=10\% is less than at $\Delta$=0\%, due to the lower acceptance ratio at $\Delta$=10\%. At $\Delta$=30\%, although the acceptance ratio is less than at $\Delta$=0\%, the active links' count is high because the slice requests require more resources; this higher active links' count, in turn, affects the switch power consumption, Fig. (\ref{fig:comp-SwitchPower-Different-intervals}). In general, the percentages of active servers and links in scenario 1 are higher than in scenario 2. For this metric, computing the gap of OEA-ONSU with the baseline and the optimality gap of NEA-ONSU is not essential, because, according to the objective function, the gaps matter only for the two previous metrics.
\item \textbf{Execution time:} Because finding the optimal solution of the ROBINS BLP may not be achievable in a reasonable amount of time for real-world and large-scale networks, and a near-optimal solution in a short time is sought, the algorithm NEA-ONSU is devised as an alternative to OEA-ONSU. In this context, the execution time is a proper metric for comparing these two algorithms. The execution time of each algorithm is the sum of the execution times of its three steps. The average execution time of the algorithms in each time slot is shown in Fig. (\ref{fig:comp-Executiontime}), where, as observed, NEA-ONSU is about 30X faster than OEA-ONSU. \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{images/Compare-Time.pdf} \caption{Average Execution Time} \label{fig:comp-Executiontime} \end{figure} \end{enumerate} Generally, the exact gain of the optimal algorithm compared to the stated baseline is providing reliable slices at only a small gap with the baseline. \section{Conclusion} \label{Conclusion} We proposed two online admission control and resource allocation algorithms for network slicing under bandwidth and workload uncertainties. They provide robustness against high fluctuations in both the requested VLs' bandwidths and the VMs' resources. Besides, because slices arrive to the network at different times and their lifespans vary, the solutions react dynamically to the online slice requests. The joint problem of online admission control and resource allocation considering the energy consumption is formulated mathematically. The formulation is a BLP, where the $\Gamma$-Robustness concept is exploited to overcome the uncertainties of the VLs' bandwidths and the VNFs' workloads. Then, an optimal algorithm, named OEA-ONSU, that adopts this mathematical model is proposed. To find a near-optimal solution in a reasonable amount of time, a new heuristic algorithm, named NEA-ONSU, is proposed.
The assessment results indicate that NEA-ONSU is effective in increasing the count of accepted requests, decreasing the power consumption, and providing adjustable tolerance against the uncertainties of the VNFs' workloads and the VLs' traffic, separately. Considering the acceptance ratio and the power consumption, the two important components of the objective function, NEA-ONSU has optimality gaps of about 7\% and 10\%, respectively, while being about 30X faster than OEA-ONSU. A prospective extension of this work is to let the IP dynamically allocate more resources to the slices if they need more than the resources provisioned for robustness. In other words, a resource reallocation process should be added to this work, which allows a trade-off between proactive and reactive approaches. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran} \small{
\section{Introduction} \label{sect:intro} \setcounter{equation}{0} The medium dependence of isospin-breaking effects belongs to one of the fundamental questions in nuclear physics \cite{Li:1997px,Baran:2004ih,Steiner:2004fi}. In particular, the neutron-proton mass difference in nuclear matter $\Delta m_{\rm np}^*$ is an interesting topic of nuclear astrophysics relevant to the evolution of the universe at an early stage~\cite{Steigman:2005uz,Cyburt:2004cq}. Furthermore, it is also important for the description of the properties of mirror nuclei~\cite{Nolen:1969ms}, the stability of drip-line nuclei~\cite{Woods:1997cs} and the transport in neutron-rich matter induced by heavy-ion collisions~\cite{Rizzo:2003if}. Although there exist various publications dealing directly with the density dependence of the neutron-proton mass difference \cite{H:90pj,H:90zj,Dr:04,A:91js,M:91if,F:91qf,S:93xt,S:94tq,Chr:96} and its implications for asymmetric nuclear matter and finite nuclei properties \cite{Bo91,Zu1,Zu5,Ch97,Le6,LQ88,vD5,vDC5,vD6,Ku97,Li01,Gr0,Ho0,Ts99,Ch7}, this quantity is still not well understood. Quantitatively and even qualitatively the predictions about the behavior of the neutron-proton mass difference in nuclear matter change from model to model. Skyrme-soliton models have the inherent advantage compared with other hadronic models that they are based on chiral input and that they treat the properties and interactions of the nucleons on an equal footing~\cite{Skyrme:1961vq,Skyrme:1962vh}. In this context we recently studied isospin-breaking effects in the baryonic sector of a medium-modified Skyrme model~\cite{Meissner:2006id}, by focusing on the single hadron properties in the nuclear environment rather than on the properties of the system as a whole. The approach predicted that the neutron-proton mass difference changes in an isospin-symmetric nuclear environment by a very small amount. 
The isospin-breaking leading to this result was only due to a modification of the mesonic sector of the Skyrme model, originally introduced to generate the strong neutron-proton mass splitting in free space (in addition to the electromagnetic one)~\cite{Rathske:1988qt}. However, when the nucleons are embedded in an isospin-asymmetric environment, {\em additional} medium effects can be expected. To evaluate the latter, in the present work we consider the nucleon properties in homogeneous, infinite and isospin-asymmetric nuclear matter. The Lagrangian of the present study is a generalization of the in-medium Skyrme-type Lagrangian of Ref.~\cite{Meissner:2006id}. In addition to the strong isospin-breaking in the mesonic sector, it {\em explicitly} takes into account the different influences of the isospin-asymmetric environment on the charged ($\pi^\pm$) and neutral ($\pi^0$) pion fields via a built-in energy dependence in the $s$-wave pion-nucleon ($\pi^\pm{\rm N}$) scattering lengths~\cite{Ber:93,Ber:95,Kaiser:2001bx,Kolomeitsev:2002gc,Friedman:2003ji} and the $p$-wave optical potential~\cite{Ericsonbook}. This additional energy dependence also alters the predictions for iso-sym\-metric matter considerably. The paper is organized as follows: In Sect.~\ref{sect:model} we formulate the model including the optical potential input and medium modifications. Section~\ref{sect:classical} discusses the classical Lagrangian and the pertinent equation of motion. In Sect.~\ref{sect:mass_difference} we present the quantization procedure and the final expressions for the strong and electromagnetic part of the in-medium neutron-proton mass difference. The results of the calculation are reported and discussed in Sect.~\ref{sect:results}. In Sect.~\ref{sect:summary} our conclusions are summarized and an outlook to future studies is given. Moreover, for clarification, two appendices are added. 
In the first one, we discuss in detail the setup of the classical equation of motion in the presence of isospin-breaking terms. The second appendix is devoted to peculiarities in the construction of the charges and magnetic moments. \section{Formulation of the problem} \label{sect:model} \subsection{Medium modification of the model} \label{subsect:modification} As in our previous work~\cite{Meissner:2006id}, we start with a generalized Skyrme-type Lagrangian which incorporates an explicit isospin-breaking term in the mesonic sector: \begin{eqnarray} {\cal L}&=&{\cal L}_2+{\cal L}_4+{\cal L}_{{\rm g}\chi {\rm SB}}\,, \label{lb}\\ {\cal L}_2&=&-\ds\frac{F_\pi^2}{16}\,\mbox{Tr}\left( L_\mu L^\mu\right)\,,\\ {\cal L}_4&=&\ds\frac{1}{32e^2}\,\mbox{Tr}\,[L_\mu,L_\nu]^2\,,\\ {\cal L}_{{\rm g}\chi {\rm SB}}&=&-\ds\frac{F_\pi^2}{16}\Bigl\{ \mbox{Tr}\left[(U-1){\cal M}_+^2(U^\dagger-1)\right]\quad\nonumber\\ &&\quad\ \ \mbox{}-\mbox{Tr}\left[(U-1) \tau_3{\cal M}_-^2 (U^\dagger-1)\tau_3\right]\Bigr\}, \label{LRathske} \end{eqnarray} where Einstein's summation convention is always assumed (if not specified otherwise). $L_\mu=U^\dagger \partial_\mu U$ is given in terms of the chiral $SU(2)$ matrix $U=\exp(2i \tau_a \pi_a/F_\pi)$, where $\pi_a$ ($a=1,2,3$) are the Cartesian isospin-components of the pion field. $F_\pi=2f_\pi$ is the pion-decay constant, while $e$ is the dimensionless Skyrme constant. Finally, ${\cal M}_\pm$ $\equiv$ $\sqrt{(m_{\pi^\pm}^2\pm m_{\pi^0}^2)/2}$ is defined in terms of the masses of the charged and neutral pions. As in Ref.~\cite{Meissner:2006id}, we insist on reproducing the empiri\-cal (isospin-averaged) masses of the nucleon and delta, $m_{\rm N} = 938$~MeV and $M_{\Delta} = 1232$~MeV, in free space (density $\rho=0$) and without the isospin-breaking term (${\cal M}_-=0$). Furthermore, as input for the free mass of the neutral pion we take the PDG-value~\cite{Yao:2006px}: $m_{\pi^0}=134.977$~MeV. 
These choices induce the values $F_{\pi}= 108.11$~MeV and $e = 4.826$. Following Ref.~\cite{Meissner:2006id} the mass of the charged pions $m_{\pi^\pm}$ is extracted as a variational parameter $m_{\pi^\pm}=135.015$~MeV from the fit to the empirical value $\Delta m_{\rm np}^{(exp)}=1.29$~MeV of the neutron-proton mass splitting in free space. Note that the dominant electromagnetic contribution to $m_{\pi^\pm}-m_{\pi^0}$ is beyond the scope of the model. The generalized pion mass term~\re{LRathske}, which was originally proposed by Rathske~\cite{Rathske:1988qt}, can be rewritten as \begin{eqnarray} {\cal L}_{{\rm g}\chi {\rm SB}}&=&-\ds\frac{F_\pi^2}{16} \bigg\{\mbox{Tr}\left[(U-1)m_{\pi^0}^2(U^\dagger-1)\right]\nonumber\\ &&\quad\ \ \mbox{}+\sum_{a=1}^2\mbox{Tr}(\tau_aU)\,{\cal M}_-^2\,\mbox{Tr}(\tau_aU^\dagger)\bigg\}\,, \label{gchiSB} \end{eqnarray} which is convenient for our up-coming modifications. When the pion fields are small, \begin{equation} U=\exp\left\{\ds\frac{2i\vec\tau\cdot\vec \pi}{F_\pi}\right\} \approx 1+\ds\frac{2i\vec\tau\cdot\vec\pi}{F_\pi} +\dots\,, \label{linapp} \end{equation} the Lagrangian~\re{lb} reduces to the Lagrangian for free pion fields \begin{eqnarray} {\cal L}_{\rm low}&=& \partial_\mu\pi^+\partial^\mu\pi^--\pi^+m_{\pi^\pm}^2\pi^-\nonumber\\ &&\mbox{}+\ds\frac{1}{2} \left(\partial_\mu\pi^0 \partial^\mu\pi^0-\pi^0 m_{\pi^0}^2\pi^0\right)\,, \label{llowfree}\\ \pi^\pm&=&\ds\frac{1}{\sqrt2}(\pi_1\mp i\pi_2)\,,\quad \pi^0=\pi_3\,.\end{eqnarray} In the medium the analog of the Lagrangian~\re{llowfree} reads \begin{eqnarray} {\cal L}_{\rm low}^*&=& \ds\frac{1}{2}\sum_{\lambda=\pm,0} \big\{\partial_\mu{\pi^\lambda}^\dagger \partial^\mu\pi^{\lambda}-{\pi^{\lambda}}^\dagger\big(m_{\pi^\lambda}^2+ \hat\Pi^{\lambda}\big)\pi^\lambda\big\}\nonumber\quad\\ &=&{\cal L}_{\rm low}-\ds\frac{1}{2}\Big\{\pi_a\hat\Pi^{0}\pi_a +i\varepsilon_{ab3}\pi_a\Delta\hat\Pi\pi_b\Big\}\,, \label{llowmed} \end{eqnarray} where $\hat\Pi^0$ and $\Delta\hat\Pi$ are 
linear combinations of the self energies of the charged pions: \begin{eqnarray} \hat\Pi^0&=&\ds\frac{1}{2}\big(\hat\Pi^{-}+\hat\Pi^{+}\big)\,,\nonumber\\ \Delta\hat\Pi&=&\ds\frac{1}{2}\big(\hat\Pi^{-}-\hat\Pi^{+}\big)\,. \label{Pi0DPi} \end{eqnarray} As in Ref.~\cite{Meissner:2006id} the medium-modified version (always mar\-ked by an asterisk) of the Lagrangian~\re{lb} can be defined as \begin{equation} {\cal L}\to{\cal L}^* ={\cal L}_2+{\cal L}_4+{\cal L}_{{\rm g}\chi{\rm SB}}^*\,, \label{lmed} \end{equation} where for the general case of asymmetric matter ${\cal L}_{{\rm g}\chi{\rm SB}}^*$ is given by the formula\footnote{{}From now on $m_\pi$ stands for the mass of the neutral pion, {\it i.e.} $m_\pi\equiv m_{\pi^0}$.} \begin{eqnarray} {\cal L}_{{\rm g}\chi{\rm SB}}^* &=&-\ds\frac{F_\pi^2}{16}\Bigl\{\mbox{Tr}\Bigl[(U-1) \big(m_{\pi}^2+\hat\Pi^{0}\big)(U^\dagger-1)\Bigr]\nonumber\\ &&\mbox{}+\sum_{a,b=1}^2 \mbox{Tr}(\tau_aU)\,\Bigl[\delta_{ab}{\cal M}_-^2\nonumber\\ &&\mbox{}\qquad+i\varepsilon_{ab3} \Delta\hat\Pi/2\Bigr]\,\mbox{Tr}(\tau_bU^\dagger)\Bigr\}\,. \label{gchiSBmed} \end{eqnarray} It is easy to check that the Lagrangian~\re{lmed} reduces to the Lagrangian~\re{llowmed} for the expansion~\re{linapp} as well as to the medium-modified Lagrangian of Ref.~\cite{Meissner:2006id} for the case of isospin-symmetric matter, $\hat\Pi^+=\hat\Pi^-=\hat\Pi^0$ (in the parameterization of Ref.~\cite{Meissner:2006id}). 
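Before proceeding, it is instructive to quantify the isospin-breaking scale set by the pion masses fixed above. The following short numerical sketch (the values of ${\cal M}_\pm$ themselves are not quoted in the text; they simply follow from $m_{\pi^\pm}=135.015$~MeV and $m_{\pi^0}=134.977$~MeV) shows that ${\cal M}_-$ is tiny compared with ${\cal M}_+\approx m_\pi$:

```python
import math

# Pion masses as fixed in the text (MeV): m_{pi^0} from the PDG,
# m_{pi^pm} from the fit to the free-space neutron-proton mass splitting.
m_pi_pm = 135.015
m_pi_0 = 134.977

# M_pm = sqrt((m_{pi^pm}^2 pm m_{pi^0}^2)/2), cf. the definition below Eq. (LRathske).
M_minus = math.sqrt((m_pi_pm**2 - m_pi_0**2) / 2.0)
M_plus = math.sqrt((m_pi_pm**2 + m_pi_0**2) / 2.0)

print(f"M_- = {M_minus:.3f} MeV, M_+ = {M_plus:.3f} MeV")
# The symmetry-breaking scale M_- comes out at about 2.3 MeV, i.e. tiny
# compared with M_+ ~ m_pi ~ 135 MeV.
```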
\subsection{Parameterization of the optical potentials} The polarization operators of the charged pions can be expressed in terms of energy-dependent pion-nucleus optical potentials~\cite{Ericsonbook} as follows: \begin{eqnarray} \hat\Pi^{\pm}_{s}(\omega,\vec r)&=&-4\pi b^\pm(\omega,\vec r)\equiv \chi^{\pm}_{s}(\omega,\vec r)\,, \label{chis}\\ \hat\Pi^{\pm}_{p}(\omega,\vec r)&=& \vec\nabla \ds\frac{4\pi c^\pm(\omega,\vec r)}{1\!+\!4\pi g^\prime c^\pm(\omega,\vec r)} \cdot\vec\nabla -\ds\frac{4\pi\omega}{2m_N}\left(\vec\nabla^2c^\pm(\omega,\vec r)\right)\nonumber\\ &\equiv&\vec\nabla\chi^{\pm}_{p,1}(\omega,\vec r)\cdot\vec\nabla -\ds\frac{\omega}{m_\pi}\chi^{\pm}_{p,2}(\omega,\vec r)\,, \label{chip}\\ b^\pm(\omega,\vec r) &\equiv&\Big(b_0^{\rm eff}(\omega)\rho(\vec r)\mp b_1(\omega) \delta\rho(\vec r)\Big)\eta\,,\\ c^\pm(\omega,\vec r)&\equiv&\Big(c_0(\omega)\rho(\vec r)\mp c_1(\omega)\delta\rho(\vec r)\Big)\eta^{-1},\\ \rho(\vec r)&=&\rho_n(\vec r)+\rho_p(\vec r)\,,\\ \delta\rho(\vec r)&=&\rho_n(\vec r)-\rho_p(\vec r)\,,\\ \eta&=&1+m_{\pi}/m_{\rm N}\,, \end{eqnarray} where $\rho_n$ and $\rho_p$ are the neutron and proton densities, respectively. Note that additional $\vec\nabla^2\rho$ and $\vec\nabla^2\delta\rho$ terms are included in the $p$-wave optical potential since they are needed for the description of realistic pion-nucleus scattering data~\cite{Ericsonbook}. The chiral expansion of the off-shell pion-nucleon scattering amplitudes at vanishing pion three-momentum leads to energy-dependent $s$-wave isoscalar and isovector scattering lengths, $b_0(\omega)$ and $b_1(\omega)$, respectively. The quantities $c_0(\omega)$ and $c_1(\omega)$ are the corresponding $p$-wave scattering volumes, whereas $b_0^{\rm eff}(\omega)$ is the {\em effective} isoscalar scattering length (see Eq.~\re{b0eff}). The correlation parameter $g'$, which renormalizes the pion dipole susceptibility, is fixed at $g^\prime=0.47$. 
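For orientation, the size of the Lorentz-Lorenz screening induced by $g'$ in the $p$-wave potential~\re{chip} can be estimated directly from the quoted parameters. A minimal numerical sketch, assuming symmetric matter at $\rho_0=0.5\,m_\pi^3$ (the normalization used in the figures below) and the threshold scattering volume $c_0(m_\pi)=0.21\,m_\pi^{-3}$ quoted further below:

```python
import math

MPI, MN = 134.977, 938.0   # pion and nucleon masses (MeV)
ETA = 1.0 + MPI / MN       # kinematic factor eta = 1 + m_pi/m_N
G_PRIME = 0.47             # correlation parameter, as fixed in the text
C0 = 0.21                  # c_0(m_pi) in units of mpi^-3 (threshold value quoted below)

rho = 0.5                  # symmetric matter at saturation, in units of mpi^3
bare = 4.0 * math.pi * C0 * rho / ETA        # unscreened 4*pi*c of Eq. (chip)
screened = bare / (1.0 + G_PRIME * bare)     # Lorentz-Lorenz form of Eq. (chip)

print(bare, screened)
# Without the g' screening the gradient coefficient (1 - chi_p) of the kinetic
# term would already turn negative at saturation density (bare > 1); with
# g' = 0.47 the screened value stays safely below one.
```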
Within the counting scheme of pion-nucleon chiral perturbation theory from Refs.~\cite{Ber:93,Ber:95} and based on input from these references $b_0(\omega)$ and $b_1(\omega)$ can be expressed at order ${\mathcal O}(m_\pi^3)$ as~\cite{Kaiser:2001bx,Kolomeitsev:2002gc} \begin{eqnarray} b_0(\omega)&\approx&\ds\frac{1}{4\pi\eta}\left(\ds\frac{\sigma_{\pi N}-\beta\omega^2} {f_{\pi,\rm ph}^2} +\ds\frac{3g_A^2m_{\pi}^3}{16\pi f_{\pi,\rm ph}^4}\right) \nonumber\\ &\approx& \ds\frac{1.206m_{\pi}^{-1}}{4\pi\eta}\big(1-m_{\pi}^{-2}\omega^2\big)\nonumber\\ &\equiv& -\ds\frac{\tilde b_0}{4\pi\eta}\big(1-m_{\pi}^{-2}\omega^2\big)\,, \label{b0omega} \\ b_1(\omega)&\approx&-\ds\frac{1}{4\pi\eta}\,\ds\frac{\omega}{2f_{\pi,ph}^2} \left(1+\ds\frac{\gamma\omega^2}{4\pi^2f_{\pi,\rm ph}^2}\right) \nonumber\\ &\approx& -\ds\frac{1.115m_{\pi}^{-1}}{4\pi\eta} \big(m_{\pi}^{-1}\omega+0.143m_{\pi}^{-3}\omega^3\big)\nonumber\\ &\equiv&\ds\frac{\tilde b_1}{4\pi\eta} \big(m_{\pi}^{-1}\omega+0.143m_{\pi}^{-3}\omega^3\big)\,. \label{b1omega} \end{eqnarray} Here $\sigma_{\pi N}=-4C_1m_{\pi}^2-9g_A^2m_{\pi}^3/64\pi f_{\pi,\rm ph}^2\approx 45$~MeV is the pion-nucleon sigma term, whereas the other parameters correspond to the ``range term'' \cite{Delorme:1992cn,Thorsson:1995rj,Meissner:2001gz} $\beta=g_A^2/4m_N-2C_2-2C_3\approx0.541m_{\pi}^{-1}$ and also $\gamma=(g_A\pi f_{\pi,\rm ph}/m_{\rm N})^2 + \ln(2\Lambda_{\rm c}/m_{\pi}) \approx2.523$. The axial-vector coupling constant $g_A=1.27$ and the pion decay constant $f_{\pi,\rm ph}=92.4$~MeV are fixed to their empirical values, since they refer to the parameterization of the nuclear (matter) background. 
On the other hand, the parameter $F_\pi=2f_\pi$ of the Skyrme Lagrangian is fixed to the value 108.11~MeV, since this parameter together with $e = 4.826$ and $m_{\pi}=134.977$~MeV refers to the soliton itself; {\it i.e.} the empiri\-cal (iso\-spin-ave\-raged) masses of the nucleon and delta are reproduced by this choice, as explained below Eq.~\re{LRathske}. The value of the cutoff-scale parameter $\Lambda_{\rm c}=737$~MeV is adjusted to the threshold value of the isospin-odd on-shell $\pi{\rm N}$ scattering amplitude. The dimension-two low-energy constants $C_{1,2,3}$ can be found {\it e.g.} in Refs.~\cite{Ber:93,Ber:95}. These values are consistent with the most recent pion-nucleus scattering data as summarized in Ref.~\cite{Meissner:2005ba} and lead to the threshold values $b_0(m_{\pi})\approx 0$ and $b_1(m_{\pi})\approx-0.0883m_{\pi}^{-1}$, respectively~\cite{Schroder:2001rc}. Within the errors these values of the scattering lengths are consistent with the more refined analysis of Ref.~\cite{Meissner:2005ne}. Furthermore, the incorporation of double scattering corrections in the $s$-wave pion polarization operator leads to the effective isoscalar scattering length \begin{equation} b_0^{\rm eff}(\omega) \approx b_0(\omega)-\ds\frac{3k_F}{2\pi}\big[b_0^2(\omega)+2b_1^2(\omega)\big]\,, \label{b0eff} \end{equation} where $k_F=[3\pi^2\rho/2]^{1/3}$ is the total Fermi momentum. The terms of higher order than $\omega^2$ can be neglected in $b_0^{\rm eff}$ and $b_1$, provided that the condition $\omega < m_{\pi}$ is met.\footnote{Within the framework of the Skyrme model, this situation corresponds to nucleons with $S=T=\frac{1}{2}\sim \omega\Lambda$, where $\Lambda\approx 1$\,fm is the moment of inertia of the skyrmion. 
In the case of $\Delta$-isobar states ($S=T=\frac{3}{2}\sim \omega\Lambda$) also $\omega^n$ terms with $n\ge 3$ have to be taken into account.\label{foot-omega}} For simplicity, we ignore the energy dependence in the $p$-wave scattering volumes and replace $c_0(\omega)$ and $c_1(\omega)$ by the constant threshold values $c_0(m_\pi)$ = $0.21m_\pi^{-3}$ and $c_1(m_\pi)$ = $0.165m_\pi^{-3}$ of the `current' SAID analysis~\cite{SAID}. This is compatible with the discussion in Ref.~\cite{Friedman:2003ji}. Furthermore, all terms proportional to odd $\omega$ powers in $\hat\Pi^0_{p}$ and even ones in $\Delta\hat\Pi_{p}$ are neglected. This is consistent with the remark in footnote~\ref{foot-omega} and the disregard of pion-absorption in this approach. In summary, one can write the polarization opera\-tors \re{Pi0DPi} as follows: \begin{eqnarray} \hat\Pi^0_{s}&=&\ds\frac{\chi_{s}^-(\omega)+\chi_{s}^+(\omega)}{2} \nonumber\\ &\approx& \left(\tilde b_{0}+\ds\frac{3k_F}{8\pi^2\eta}\tilde b_{0}^2\right)\rho -\left(\tilde b_0+\ds\frac{3k_F}{4\pi^2\eta}\left(\tilde b_0^2-\tilde b_1^2\right) \right)\rho\ds\frac{\omega^2}{m_\pi^2}\nonumber\\ &\equiv&\chi_{s}^{00}-\chi_{s}^{02}{m_\pi^{-2}}{\omega^2}\,, \label{polOpb} \\ \hat\Pi^0_{p}&=& \vec\nabla\ds\frac{\chi_{p,1}^-(m_\pi)+\chi_{p,1}^+(m_\pi)}{2}\cdot\vec\nabla \nonumber\\ &\equiv& \vec\nabla\chi_{p}^0\cdot\vec\nabla \approx \vec\nabla\frac{4\pi c_0(m_\pi)\rho/\eta} {1+4\pi g'c_0(m_\pi)\rho/\eta} \cdot\vec\nabla, \label{polOpb2}\\ \Delta\hat\Pi_s&=&- \tilde b_{1}\delta\rho\,{m_\pi^{-1}}\omega \equiv -\Delta\chi_{s}{m_\pi^{-1}}\omega\,, \label{polOpbe}\\ \Delta\hat\Pi_p&=&-\ds\frac{2\pi\omega}{m_N\eta}\, c_1(m_\pi) \left(\vec\nabla^2\delta\rho\right) \equiv -\Delta\chi_{p}{m_\pi^{-1}}\omega\,. 
\label{polOpe} \end{eqnarray} \subsection{Medium-modified Lagrangian} Evidently the explicit expressions of the self energies in configuration space follow from the standard rules $\hat\Pi^0(\omega)$ $\to$ $\hat\Pi^0(i\partial_0)$ and $\Delta\hat\Pi(\omega)$ $\to$ $\Delta\hat\Pi(i\partial_0)$. After inserting the polarization operators $\hat\Pi^0(i\partial_0)$ and $\Delta\hat\Pi(i\partial_0)$ from \re{polOpb}-\re{polOpe} into the Lagrangian~\re{lmed} and integrating by parts, in order to symmetrize in the time derivatives, one arrives at the final form of the medium-modified Lagrangian: \begin{eqnarray} {\cal L}^*&=& {\cal L}^*_{\rm sym}+{\cal L}^*_{\rm as}\,, \label{lmedbeg}\\ {\cal L}^*_{\rm sym}&=&{\cal L}_2^*+{\cal L}_4+{\cal L}_{\chi{\rm SB}}^*\,,\\ {\cal L}^*_{\rm as} &=& \Delta{\cal L}_{\rm mes}+\Delta{\cal L}_{\rm env}^*\,,\\ {\cal L}_2^* &=& \ds\frac{F_\pi^2}{16} \Big\{\big(1+m_\pi^{-2}\chi_{s}^{02}\big) \mbox{Tr}\left(\partial_0U\partial_0U^\dagger\right)\nonumber\\ &&\qquad\mbox{}-\left(1-\chi_{p}^0\right) \mbox{Tr}(\vec{\nabla} U\cdot\vec{\nabla} U^\dagger) \Big\}\,,\\ {\cal L}_{\chi {\rm SB}}^*&=&-\ds\frac{F_\pi^2 m_{\pi}^2}{16} \big(1+m_\pi^{-2}{\chi_{s}^{00}}\big) \nonumber\\&&\mbox{}\times \mbox{Tr}\left[(U-1)(U^\dagger-1)\right]\,,\\ \Delta{\cal L}_{\rm mes}&=&-\ds\frac{F_\pi^2}{16}\sum_{a=1}^2{\cal M}_-^2 \mbox{Tr}(\tau_aU)\mbox{Tr}(\tau_aU^\dagger),\qquad\\ \Delta{\cal L}_{\rm env}^*&=&-\ds\frac{F_\pi^2}{16}\sum_{a,b=1}^2 {\varepsilon_{ab3}(2m_\pi)^{-1}\left(\Delta\chi_{s}+\Delta\chi_{p}\right)}\nonumber\\ &&\qquad\qquad\mbox{}\times \mbox{Tr}(\tau_a U)\mbox{Tr}(\tau_b\partial_0 U^\dagger)\,. \label{lmedend} \end{eqnarray} Here $\Delta{\cal L}_{\rm mes}$ and $\Delta{\cal L}^*_{\rm env}$ are the isospin-breaking terms arising from the explicit symmetry breaking in the mesonic sector and the isospin asymmetry of the surrounding environment, respectively. 
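Before moving on, the scattering-length parameterizations~\re{b0omega}, \re{b1omega} and \re{b0eff} that feed into the above polarization operators can be cross-checked numerically at threshold, $\omega=m_\pi$. A minimal sketch in units of $m_\pi$, with $\rho_0=0.5\,m_\pi^3$; the small deviation of $b_1(m_\pi)$ from the quoted $-0.0883\,m_\pi^{-1}$ reflects the rounding of the fit coefficients:

```python
import math

MPI = 134.977          # neutral pion mass (MeV)
MN = 938.0             # nucleon mass (MeV)
ETA = 1.0 + MPI / MN   # kinematic factor eta = 1 + m_pi/m_N

def b0(w):
    """Isoscalar s-wave scattering length, Eq. (b0omega); w in units of mpi, result in mpi^-1."""
    return 1.206 / (4.0 * math.pi * ETA) * (1.0 - w**2)

def b1(w):
    """Isovector s-wave scattering length, Eq. (b1omega); units of mpi^-1."""
    return -1.115 / (4.0 * math.pi * ETA) * (w + 0.143 * w**3)

def b0_eff(w, rho):
    """Effective isoscalar length with double scattering, Eq. (b0eff); rho in units of mpi^3."""
    kf = (1.5 * math.pi**2 * rho) ** (1.0 / 3.0)  # Fermi momentum k_F in units of mpi
    return b0(w) - 1.5 * kf / math.pi * (b0(w)**2 + 2.0 * b1(w)**2)

rho0 = 0.5  # saturation density in units of mpi^3, as used in the figures below
print(b0(1.0), b1(1.0), b0_eff(1.0, rho0))
# b0 vanishes exactly at threshold by construction; b1 comes out near the
# quoted -0.0883/mpi; the double-scattering correction makes b0_eff slightly
# negative at saturation density.
```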
Note that both the temporal part of ${\cal L}_2^*$ and the chiral symmetry breaking term ${\cal L}_{\chi {\rm SB}}^*$, decrease -- at leading order linearly -- with increasing matter density, since $\chi_{s}^{02}$ and $\chi_{s}^{00}$ are negative, see Eqs.~\re{b0omega} and \re{polOpb}. However, as the same equations indicate, $\chi_{s}^{02}\approx \chi_{s}^{00}\approx \tilde b_0\rho$, such that the effective mass, determined by the mass pole of the in-medium propagator, is approximately density-independent in agreement with the findings about the in-medium Gell-Mann--Oakes--Renner relation of Refs.~\cite{Thorsson:1995rj,Meissner:2001gz}. Furthermore, one can see that the Lagrangian~\re{lmedend} contains the Weinberg-Tomozawa term, as the relation \begin{equation} \ds\frac{\Delta\chi_s}{4\pi\eta}=-\ds\frac{m_\pi\delta\rho}{8\pi\eta f_{\pi,\rm ph}^2} =b_1^{\rm l.o.}\delta\rho \label{WeinTomo} \end{equation} is based on the isovector $s$-wave scattering length in the chiral expansion to lowest order~\cite{Weinberg:1966kf,Tomozawa:1966jm}. The Lagrangian~\re{lmedbeg}-\re{lmedend} will be used in our studies of isospin breaking effects in asymmetric nuclear matter. In the next sections we will present and discuss the changes that emerge due to the isospin asymmetry of the surrounding nuclear environment. Specifically, we will concentrate on isospin-breaking effects in infinite nuclear matter with a constant density, so that the $p$-wave contribution proportional to $\Delta\chi_p\sim \vec\nabla^2\delta\rho$ vanishes. Note that in the case of finite nuclei this term may be essential for nucleons located near the surface of the nucleus. 
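Both statements made here, the negative signs with $\chi_s^{02}\approx\chi_s^{00}\approx\tilde b_0\rho$ and the Weinberg-Tomozawa normalization~\re{WeinTomo}, are easy to verify numerically from Eqs.~\re{polOpb}-\re{polOpbe}. A minimal sketch at $\rho=\rho_0=0.5\,m_\pi^3$, in units of $m_\pi$ (the signs of $\tilde b_0$, $\tilde b_1$ follow from the definitions in Eqs.~\re{b0omega} and \re{b1omega}):

```python
import math

MPI, MN, FPI_PH = 134.977, 938.0, 92.4   # MeV
ETA = 1.0 + MPI / MN
B0T, B1T = -1.206, -1.115                # tilde-b0, tilde-b1 in mpi^-1, Eqs. (b0omega)-(b1omega)

rho = 0.5                                # saturation density in units of mpi^3
kf = (1.5 * math.pi**2 * rho) ** (1.0 / 3.0)

# s-wave coefficients of Eq. (polOpb), in units of mpi^2:
chi_s00 = (B0T + 3.0 * kf / (8.0 * math.pi**2 * ETA) * B0T**2) * rho
chi_s02 = (B0T + 3.0 * kf / (4.0 * math.pi**2 * ETA) * (B0T**2 - B1T**2)) * rho

# Weinberg-Tomozawa check, Eq. (WeinTomo): the lowest-order isovector strength
# m_pi/(2 f_{pi,ph}^2), in units of mpi^-1, should lie close to |tilde-b1|
# (the residual difference is the higher-order gamma correction).
wt_lo = MPI**2 / (2.0 * FPI_PH**2)

print(chi_s00, chi_s02, wt_lo)
```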
\section{Classical solitonic solutions} \label{sect:classical} By following the two-stage method of Ref.~\cite{Meissner:2006id}\footnote{ An alternative, but equivalent way of introducing this method is presented in Appendix~\ref{app:ClassSol}.} of constrained and unconstrained collective isospin-rotations, applied to the hedgehog ansatz $U=\exp\left[i\tau\cdot (\vec{r}/r) F(r)\right]$, the time-dependent Lagrangian can be constructed from Eq.~\re{lmedbeg} in terms of the standard angular velocities $\omega_i$ of the collective modes and the constrained angular velocity $a^*$ (see below) as \begin{eqnarray} L^*&=&\int {\cal L}^* {\rm d}^3 {r} =-M_{\rm NP}^*-{\cal M}_-^2\Lambda_- + \ds\frac{\vec\omega^2}{2}\Lambda^*\nonumber\\ &&\mbox{}+\omega_3\big(a^*\Lambda^*+\Delta^*\big)+ a^*\left(\ds\frac{a^*}{2}\Lambda^*+\Delta^*\right). \label{lag1} \end{eqnarray} Here \begin{eqnarray} M_{\rm NP}^*&=&\pi\int\limits_0^\infty\bigg\{ \ds\frac{F_\pi^2}{2}\left(1-\chi_{p}^0\right)\left(F_r^2 +\frac{2\,S^2}{r^2}\right) \nonumber\\ &&\quad\mbox{}+ \ds\frac{2}{e^2}\left(2F_r^2+\frac{S^2}{r^2}\right)\frac{S^2}{r^2} \nonumber\\ &&\quad\mbox{}+ F_\pi^2\left(m_\pi^2+{\chi_{s}^{00}}\right) \left(1-\cos F\right)\bigg\}r^2\,{\rm d}r\qquad \label{clEm0} \end{eqnarray} is the in-medium mass of the soliton when it is not perturbed (NP) by any isospin breaking. The abbreviations $F_r\equiv {\rm d} F/{\rm d} r$ and $S\equiv\sin F$ have been used, where $F=F(r)$ is the chiral profile function of the hedgehog ansatz. 
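The mass functional~\re{clEm0} can be evaluated directly once a profile is supplied. The following sketch uses the free-space limit ($\chi_p^0=\chi_s^{00}=0$) and the simple trial profile $F(r)=2\arctan(R^2/r^2)$, which satisfies $F(0)=\pi$ and $F(\infty)=0$ but not the Yukawa tail of the true solution, minimized over the size parameter $R$; it is only meant to reproduce the expected scale of the classical soliton mass, not the self-consistent value:

```python
import math

FPI, E_SK, MPI = 108.11, 4.826, 134.977   # F_pi (MeV), Skyrme constant, m_pi (MeV)

def soliton_mass(R, rmax=0.15, n=30000):
    """Midpoint-rule evaluation of Eq. (clEm0) at rho = 0 (all chi's zero)
    for the trial profile F(r) = 2*arctan(R^2/r^2); r, R in MeV^-1, result in MeV."""
    h = rmax / n
    total = 0.0
    for i in range(1, n + 1):
        r = (i - 0.5) * h
        F = 2.0 * math.atan((R / r) ** 2)
        Fr = -4.0 * R**2 * r / (r**4 + R**4)          # dF/dr for this trial profile
        S = math.sin(F)
        dens = (0.5 * FPI**2 * (Fr**2 + 2.0 * S**2 / r**2)
                + (2.0 / E_SK**2) * (2.0 * Fr**2 + S**2 / r**2) * S**2 / r**2
                + FPI**2 * MPI**2 * (1.0 - math.cos(F)))
        total += dens * r * r * h
    return math.pi * total

# Scan the size parameter R (in MeV^-1, i.e. roughly 0.3-0.8 fm) and keep
# the best trial mass.
masses = {R: soliton_mass(R) for R in (0.0015, 0.002, 0.0025, 0.003, 0.0035, 0.004)}
M_best = min(masses.values())
print(f"best trial soliton mass: {M_best:.0f} MeV")
```

With the parameter set $F_\pi=108.11$~MeV and $e=4.826$ the trial minimum comes out at roughly $10^3$~MeV, i.e. in the right ballpark of the classical skyrmion mass; by the variational principle, the self-consistent profile of Eq.~\re{classic-eq} would give a somewhat lower value.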
Furthermore \begin{equation} \Lambda^*=\big(1+m_{\pi}^{-2}\chi_s^{02}\big)\Lambda_-+\Lambda_4 \label{momin} \end{equation} with the separate contributions \begin{eqnarray} \Lambda_-&=&\ds\frac{2\pi}{3}F_\pi^2\int\limits_0^\infty S^2\, r^2\,{\rm d}r\,, \label{clEm}\\ \Lambda_4&=&\ds\frac{8\pi}{3e^2}\int\limits_0^\infty \left(F_r^2+\frac{S^2}{r^2}\right)S^2 \,r^2\,{\rm d}r \end{eqnarray} is the in-medium moment-of-inertia, whereas \begin{equation} \Delta^*=(2m_\pi)^{-1}\Delta\chi_s\Lambda_- \label{clEe} \end{equation} is the response of the isospin-asymmet\-ric environment (see Eqs.~\re{polOpbe} and \re{lmedend}) to the collective iso-rotations. The constrained angular velocity parameter $a^*$ corresponds to a stationary rotation around the third axis in isotopic space that serves to undo the effect of the mesonic isospin-breaking term proportional to ${\cal M}_-$ at the classical level, when the collective rotational modes in the isospin-space are frozen ($\omega_{1,2,3}\rightarrow 0$). In this classical limit, applying the constraint~\cite{Meissner:2006id} \begin{equation} a^{*2}=2{\cal M}_-^2{\Lambda_-}/{\Lambda^*}\,, \label{constr} \end{equation} one generates the Lagrangian \begin{equation} L^*=-M_{\rm NP}^*+a^*\Delta^*\,. \label{lclassic} \end{equation} The pertinent equation of motion for the hedgehog profile function $F(r)$ takes then the form \begin{eqnarray} &&F_\pi^2(1-\chi_p^0) \left(r^2F_{rr}+{2}{r}F_r-{S_2}\right)\nonumber\\ &&\mbox{}+\ds\frac{4}{e^2}\left[{2S^2}F_{rr}+{S_2} \left(F_r^2-\ds\frac{S^2}{r^2}\right)\right]\nonumber\\ &&\mbox{}-{F_\pi^2}\left(m_{\pi}^2+\chi_s^{00}\right)Sr^2 +a^*\ds\frac{F_\pi^2\Delta\chi_s}{3m_\pi}S_2r^2 =0\,, \label{classic-eq} \end{eqnarray} where the additional abbreviations $S_2$ = $\sin 2F$ and $F_{rr}$ = $d^2F/dr^2$ were introduced. 
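A simple consequence of the constraint~\re{constr}, not spelled out in the text: since in free space $\Lambda^*=\Lambda_-+\Lambda_4>\Lambda_-$, the stationary angular velocity is bounded by $|a^*|\le\sqrt{2}\,{\cal M}_-$, which fixes its scale to a few MeV. A one-line numerical consequence of the pion masses fixed in Sect.~\ref{sect:model}:

```python
import math

# M_- from the pion masses fixed in Sect. 2 (MeV); the bound |a*| <= sqrt(2) M_-
# follows from Eq. (constr) because Lambda_-/Lambda* < 1 in free space.
M_minus = math.sqrt((135.015**2 - 134.977**2) / 2.0)
a_star_bound = math.sqrt(2.0) * M_minus
print(f"|a*| <= {a_star_bound:.2f} MeV")
# The free-space strong neutron-proton splitting (a* plus Delta*/Lambda*, the
# latter vanishing at rho = 0) must therefore lie in the few-MeV range,
# consistent with the fitted total splitting of 1.29 MeV.
```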
The solution corresponding to the soliton of baryon number $B=1$ fulfills the boundary conditions \begin{eqnarray} \lim_{r\rightarrow 0}F(r)&=&\pi-C r\,,\\ \lim_{r\rightarrow \infty}F(r)&=&{D}\left(1+m_\beta r\right) \exp\left\{-m_\beta r\right\}/r^2\,, \label{infasymp}\\ m_\beta^2&=&\ds\frac{m_\pi^2+\chi_s^{00}- 2a^*m_\pi^{-1}\Delta\chi_s/3}{1-\chi_p^0}\,,\qquad \label{beta} \end{eqnarray} where $C$ and $D$ are constants. Since the parameter $a^*$ is part of the classical equation \re{classic-eq}, {\it i.e.} $h(F_{rr},F_r,F,a^*)=0$, this equation together with the constraint \re{constr} can be solved by iteration:~\footnote{The choice of the sign of $a^*$ is fixed by the sign of $\Delta m_{\rm np}^{\rm strong}$ in free space~\cite{Meissner:2006id}.} \[ \begin{array}{lcl} h\big(F_{rr}^{(0)},F_r^{(0)},F^{(0)},0\big)= 0& \ \Rightarrow\ & a^*_0=a^*\big(F_r^{(0)},F^{(0)}\big)\,;\\ h\big(F_{rr}^{(n)},F_r^{(n)},F^{(n)},a_{n-1}^*\big)=0 &\ \Rightarrow\ & a^*_{n}=a^*\big(F_r^{(n)},F^{(n)}\big)\,. \end{array} \] In the actual calculation, this iteration scheme rapidly converges after 3 to 4 iteration steps. \section{In-medium neu{\-}tron-pro{\-}ton mass difference} \label{sect:mass_difference} \subsection{Strong part of $\Delta m_{\rm np}^*$} By taking into account the condition~\re{constr}, applying the definition $a_{\rm eff}^*\equiv a^*+{\Delta^{*}}/{\Lambda^{*}}$ and using the canonical quantization procedure as in Ref.~\cite{Meissner:2006id}, one can construct from the Lagrangian \re{lag1} the Hamiltonian in terms of the isospin operator $\hat{\vec T}$: \begin{eqnarray} {\hat H^*} &=&M_{\rm NP}^* +\ds\frac{\hat{T_1}^2}{2\Lambda^*} +\ds\frac{\hat{T_2}^2}{2\Lambda^*} +\ds\frac{\left(\hat{T_3}-\Lambda^* a_{\rm eff}^*\right)^2}{2\Lambda^*}\nonumber\\ &=&M_{\rm NP}^*+\ds\frac{\hat{\vec{T}}^2}{2\Lambda^*}- a_{\rm eff}^*\hat T_3+\Lambda^* \ds\frac{(a_{\rm eff}^*)^2}{2}\,. 
\label{Ham} \end{eqnarray} Thus the strong part of the neutron-proton mass difference can be identified as \begin{equation} \Delta m_{\rm np}^{*(\rm strong)}=a_{\rm eff}^*= a^*+\ds\frac{\Delta^{*}}{\Lambda^{*}}\,. \label{dm-strong} \end{equation} Note that the density-variation of the strong part of the neutron-proton mass difference will be more pronounced than in Ref.~\cite{Meissner:2006id} for the following reasons: (i) the explicit density-dependence of the moment of inertia $\Lambda^*$ (see Eq.~\re{momin}) resulting from the energy-dependent parameterization of the optical potentials, and (ii) the existence of the additional term $\Delta^*/\Lambda^*$ in an isospin-asymmetric environment. Even if the explicit isospin breaking in the mesonic sector were omitted, ${\cal M}_-=0$, there still would be a non-vanishing neutron-proton mass splitting proportional to the isospin-asymmetric environment factor $\Delta^*$. \subsection{Electromagnetic part of $\Delta m_{\rm np}^*$} As discussed in Appendix~\ref{app:Charges}, by calculating the pertinent Noether currents one can construct the following isoscalar (S) and isovector (V) electromagnetic (EM) form factors \begin{eqnarray} G_{\rm E}^{{\rm S}*}({\vec q}^2)&=&\int\limits_0^\infty \left(\ds\frac{\tilde B}{2}-\ds\frac{\Delta^*}{\Lambda^*}\,\tilde\Lambda+ \tilde\Delta^*\right) j_0(qr)\,{\rm d}r\,, \label{ges}\\ G_{\rm M}^{{\rm S}*}({\vec q}^2)&=&\ds\frac{m_{\rm N}(1+\Delta^*)}{2\Lambda^*} \int\limits_0^\infty\tilde Br^2\,\ds\frac{j_1(qr)}{qr}\,{\rm d}r\,, \label{gems}\\ G_{\rm E}^{{\rm V}*}({\vec q}^2)&=&\ds\frac{1}{2\Lambda^*} \int\limits_0^\infty \tilde\Lambda^*j_0(qr)\,{\rm d}r\,, \label{gev}\\ G_{\rm M}^{{\rm V}*}({\vec q}^2)&=&m_{\rm N}\int\limits_0^\infty \left[\big(1-\chi_{\rm p}^0\big)\tilde\Lambda_-+\tilde\Lambda_4^* \right]\ds\frac{j_1(qr)}{qr}\,{\rm d}r\,,\qquad \label{gmv} \end{eqnarray} in terms of the spherical Bessel functions $j_0$ and $j_1$ and the three-momentum transfer $q=|\vec q|$. 
Here a quantity with a tilde, say $\tilde Z=\tilde Z(F)$, is defined as the integrand of the corresponding functional, {\it i.e.}: \[ Z[F]\equiv \int\limits_0^\infty \tilde Z\bigl(F(r)\bigr)\, {\rm d}r\,. \] As usual, $B = 1$ is the baryon charge, such that $\tilde B(r)$ = $4\pi r^2 B^0(r)$, where $B^0(r)$ = $-\sin^2 F F_r/(2\pi^2 r^2) $ is the baryon density of the skyrmion. The medium-dependent form factors of the proton and neutron are defined as \[ G_{\rm E,M}^{\left({\rm p}\atop {\rm n}\right)*}({\vec q}^2) =G_{\rm E,M}^{{\rm S}*}({\vec q}^2) \pm G_{\rm E,M}^{{\rm V}*}({\vec q}^2) \] with the normalization conditions $G_{\rm E}^{{\rm p}*}(0)$ = $1$, $G_{\rm E}^{{\rm n}*}(0)$ = $0$, $G_{\rm M}^{{\rm p}*}(0)$ = $\mu_{\rm p}^*$, $G_{\rm M}^{{\rm n}*}(0)$ = $\mu_{\rm n}^*$, where $\mu_{\rm p}^*$ and $\mu_{\rm n}^*$ are the magnetic moments of the in-medium proton and neutron, respectively. In the present approach all form factors {\em explicitly} depend on medium functionals, on the one hand, via the density-dependent moment of inertia $\Lambda^*$ (see Eq.~\re{momin}), and on the other hand, via additional terms resulting from the isospin-asymmetric nuclear environment. Moreover, note the additional terms in the isoscalar form factors as compared with Ref.~\cite{Meissner:2006id}, which emerge here from that part of the isospin charge density that is independent of the isospin $T_3$ (see Appendix~\ref{app:Charges}). 
Finally, applying the formula~\cite{Gasser:1982ap} \begin{eqnarray} \Delta m_{\rm np}^{*(\rm EM)}&=&-\ds\frac{4\alpha}{\pi}\int\limits_0^\infty {\rm d}q \bigg\{G_{\rm E}^{{\rm S}*}(\vec{q}^2)G_{\rm E}^{{\rm V}*}(\vec{q}^2) \nonumber\\ &&\qquad\mbox{}-\ds\frac{\vec{q}^2}{2m_{\rm N}^2} G_{\rm M}^{{\rm S}*}(\vec{q}^2)G_{\rm M}^{{\rm V}*}(\vec{q}^2)\bigg\}\,, \qquad \label{dm-EM} \end{eqnarray} where $\alpha\approx 1/137$ is the fine-structure constant, one can calculate the medium-dependent electromagnetic part of the neutron-proton mass difference as in Ref.~\cite{Meissner:2006id}. \section{Results and discussions} \label{sect:results} In Fig.~\ref{fig-strong} the strong part of the in-medium neutron-proton mass splitting, $\Delta m_{\rm np}^{*(\rm strong)}$, is shown for isospin-symmetric nuclear matter (solid curve), neutron-rich matter (dashed curve), pure neutron matter (dotted curve), and proton-rich matter (dot-dashed curve). \begin{figure}[hbt] \epsfysize=5.8cm \epsffile{strong.eps} \vspace{0.2cm} \caption{Density dependence of the strong part $\Delta m_{\rm np}^{*(\rm strong)}$ of the neutron-proton mass difference. The abscissa represents the density $\rho$ normalized to the saturation density of ordinary nuclear matter $\rho_0=0.5m_{\pi}^3$, while the ordinate shows the mass difference in units of MeV. The result in isospin-symmetric matter is plotted as a solid curve, the result of neutron-rich matter with $\delta\rho/\rho=0.2$ as dashed curve, the dotted curve represents pure neutron matter ($\delta\rho/\rho=1$) and the dot-dashed curve proton-rich matter with $\delta\rho/\rho=-0.2$.} \label{fig-strong} \end{figure} Already in iso\-spin-symme\-tric matter $\Delta m_{\rm np}^{*(\rm strong)}$ has a visibly different density behavior than the corresponding quantity of Ref.~\cite{Meissner:2006id}. 
For example, at normal nuclear matter density, $\Delta m_{\rm np}^{*(\rm strong)}$ has increased by about 42\% relative to its free space value (see the solid curve of Fig.~\ref{fig-strong}). In contrast, in the previous work~\cite{Meissner:2006id}, where the optical potentials were assumed to be energy-independent, $\Delta m_{\rm np}^{*(\rm strong)}$ decreased by a very tiny amount, namely by about $2\%$ at normal nuclear matter density; in other words, $\Delta m_{\rm np}^{*(\rm strong)}$ in Ref.~\cite{Meissner:2006id} was practically density-independent. Moreover, when the isospin symmetry of nuclear matter is broken, $\delta\rho/\rho\ne0$,\footnote{This quantity may be called the isospin-asymmetry parameter of the nuclear environment.} $\Delta m_{\rm np}^{*(\rm strong)}$ varies strongly, both qualitatively and quantitatively (see the dashed curve in Fig.~\ref{fig-strong}). In pure neutron matter the change becomes very drastic (see the dotted curve in Fig.~\ref{fig-strong}), and $\Delta m_{\rm np}^{*(\rm strong)}$ decreases very fast with increasing density. In contrast to the strong part, the electromagnetic part of the neutron-proton mass difference varies only by a small amount when the isospin-asymmetry parameter is increased (see Fig.~\ref{fig-EM}). \begin{figure} \epsfysize=5.8cm \epsffile{EM.eps} \vspace{0.2cm} \caption{Density dependence of the electromagnetic part $\Delta m_{\rm np}^{*(\rm EM)}$ of the neutron-proton mass difference. The axes and curves are defined as in Fig.~\ref{fig-strong}.} \label{fig-EM} \end{figure} But compared with the result of the previous approach~\cite{Meissner:2006id}, the change is sizable, even in isospin-symmetric matter. This is again due to the explicit density dependence of the moment of inertia~\re{momin}, and the changes in the solutions of the classical equation~\re{classic-eq}. Note that with increasing density the moment of inertia $\Lambda^*$ decreases since $\chi_s^{02}<0$. 
In addition, the solutions of the classical equation~\re{classic-eq} are altered because $\chi_s>0$ in Eq.~(22) of Ref.~\cite{Meissner:2006id} is replaced by the $\omega$-independent part of the present Eq.~\re{polOpb}, namely by $\chi_s^{00}$, which is negative. For completeness, we present the total neutron-proton mass difference in Fig.~\ref{fig-total}. \begin{figure} \epsfysize=5.8cm \epsffile{total.eps} \vspace{0.2cm} \caption{ Density dependence of the total neutron-proton mass difference $\Delta m_{\rm np}^{*}$. The axes and curves are defined as in Fig.~\ref{fig-strong}.} \label{fig-total} \end{figure} {}From a comparison with Fig.~\ref{fig-strong} it is obvious that this mass difference is completely dominated by its strong part. In pure neutron matter and at the density $\rho_0$, the neutron-proton mass difference is $\Delta m_{\rm np}^* =-25$~MeV. For comparison, the authors of Ref.~\cite{Dr:04} obtained $\Delta m_{\rm np}^*\approx-70$~MeV in the framework of QCD sum rule studies. Another interesting result is the difference between the values of $\Delta m_{\rm np}^{*}$ in neutron-rich and proton-rich matter (compare the dashed and dash-dotted curves in Fig.~\ref{fig-total}). One can see that in neutron-rich matter $\Delta m_{\rm np}^{*}$ is decreased relative to the isospin-symmetric case, whereas in proton-rich matter the behavior is just the opposite. This finding may become useful for future studies of mirror nuclei and their properties. For example, for the case of the mirror nuclei $^{48}\rm Ca$ and $^{48}\rm Ni$, a similar behavior of $\Delta m_{\rm np}^*$ was found in Ref.~\cite{Ho0} within density-dependent relativistic hadron field theory. The result of our work is also consistent with the findings of Refs.~\cite{LQ88,vD5,vDC5,vD6,Ku97,Li01,Gr0} that utilize a relativistic approach and with the nonrelativistic calculation~\cite{Ch97} based on Skyrme-like effective interactions. 
Even at the qualitative level, the various models mentioned in the introduction differ in their predictions of the neutron-proton mass difference in nuclear matter: (i) in nonrelativistic approaches~\cite{Bo91,Zu1,Zu5}, which are focused on the system properties as a whole, this difference mainly turns out to be positive ($\Delta m^*_{\rm np}>0$); (ii) however, it is negative ($\Delta m^*_{\rm np}<0$) in relativistic approaches \cite{LQ88,vD5,vDC5,vD6,Ku97,Li01,Gr0} and some nonrelativistic variational calculations~\cite{Ch97}, or it becomes negative with increasing density in QCD sum rule studies \cite{H:90pj,H:90zj,Dr:04}; (iii) it depends on the isospin content of the system ($\Delta m^*_{\rm np} > 0$ or $\Delta m^*_{\rm np} < 0$) in relativistic hadron field theory~\cite{Ho0}. The effective masses in relativistic approaches are discussed in detail in Ref.~\cite{Jaminon:1989wj}. Furthermore, the difference in the behavior of $\Delta m_{\rm np}^*$ in the relativistic and nonrelativistic approaches is explained in Ref.~\cite{vD5}. Our approach also shows that $\Delta m^*_{\rm np}$ depends qualitatively on the isospin content of the surrounding environment. It is always positive in proton-rich matter as well as in isospin-symmetric matter (see the solid and dash-dotted curves in Fig.~\ref{fig-total}). In neutron-rich matter, however, the sign may change. For the reader's convenience, we plot in Fig.~\ref{fig-as-rho} \begin{figure} \epsfysize=5.8cm \epsffile{asym-rho.eps} \vspace{0.2cm} \caption{The solutions of the 2-parameter equations $\Delta m_{\rm np}^*(\rho,\delta\rho)=0$ and $\Delta m_{\rm np}^{*\rm(strong)}(\rho,\delta\rho)=0$.
The abscissa represents the isospin-asymmetry parameter, while the ordinate shows the density (in units of the ordinary nuclear matter density $\rho_0=0.5m_{\pi}^3$), where the neutron-proton mass difference (solid curve) or its strong part (dashed curve) vanishes.} \label{fig-as-rho} \end{figure} those values of the density $\rho$ as a function of the isospin-asymmetry parameter $\delta\rho/\rho$ where the in-medium neutron-proton mass splitting or its strong part vanishes. In other words, the solutions of the 2-parameter equations $\Delta m_{\rm np}^*(\rho,\delta\rho)=0$ and $\Delta m_{\rm np}^{*\rm(strong)}(\rho,\delta\rho)=0$ are presented. For small positive values of the isospin-asymmetry parameter, the neutron-proton mass difference or its strong part vanishes at high densities (of the order of the ordinary nuclear matter density $\rho_0$). With increasing $\delta\rho/\rho$, however, the mass difference changes its sign at moderate densities, and in strongly isospin-asymmetric matter this transition already occurs at low densities. For instance, in neutron-rich matter with the isospin-asymmetry parameter $\delta\rho/\rho\sim 0.1$ the proton becomes heavier at the density $\rho\sim 0.85\rho_0$. In pure neutron matter this change already happens at the density $\rho\sim 0.09\rho_0$. In addition, in Table~\ref{table1} we present the calculated effective masses and isoscalar as well as isovector charge radii of the in-medium nucleons for some values of the nuclear matter density.
\footnote{Note that the tabulated values of the free proton and neutron mass differ from their PDG values~\cite{Yao:2006px} since the customary Skyrme value $M_N=938\,{\rm MeV}$ was used here and in Ref.~\cite{Meissner:2006id} as input for the isospin-averaged nucleon mass.} \begin{table} \caption{Calculated masses (in units of MeV) and isoscalar as well as isovector charge radii (in units of fm) of the nucleons in nuclear matter of density $\rho$ (in units of the saturation density of ordinary nuclear matter $\rho_0=0.5m_{\pi}^3$). } \label{table1} \begin{center} \begin{tabular}{ccccc} \noalign{\smallskip} \hline \noalign{\smallskip} $\rho/\rho_0$&$m_{\rm p}^*$\,&$m_{\rm n}^*$\,& $\langle r^2\rangle^{*1/2}_{\rm E,I=0}$\,& $\langle r^2\rangle^{*1/2}_{\rm E,I=1}$\\ \noalign{\smallskip}\hline\noalign{\smallskip} \multicolumn{5}{c}{In free space}\\ 0 &937.4\,\,&938.7\,\,& 0.49 & 0.74\\ \noalign{\smallskip}\hline\noalign{\smallskip} \multicolumn{5}{c}{In proton-rich matter ($\delta\rho/\rho=-0.2$)}\\ 0.5\,\,\,&729.6\,\,\,&733.3\,\,& 0.61 & 0.84\\ 1.0\,\,\,&547.9\,\,\,&555.9\,\,& 0.79 & 0.98\\ \noalign{\smallskip}\hline\noalign{\smallskip} \multicolumn{5}{c}{In isospin symmetric matter ($\delta\rho/\rho=0$)}\\ 0.5\,\,\,&729.7\,\,\,&731.4\,\,& 0.60 & 0.84\\ 1.0\,\,\,&547.9\,\,\,&550.2\,\,& 0.75 & 0.98\\ \noalign{\smallskip}\hline\noalign{\smallskip} \multicolumn{5}{c}{In neutron-rich matter ($\delta\rho/\rho=0.2$)}\\ 0.5\,\,\,&731.5\,\,\,&731.3\,\,& 0.58 & 0.84\\ 1.0\,\,\,&553.4\,\,\,&550.0\,\,& 0.72 & 0.98\\ \noalign{\smallskip}\hline\noalign{\smallskip} \multicolumn{5}{c}{In pure neutron matter ($\delta\rho/\rho=1$)}\\ 0.5\,\,\,&757.9\,\,\,&750.1\,\,& 0.54 & 0.83\\ 1.0\,\,\,&632.1\,\,\,&607.0\,\,& 0.52 & 0.94\\ \noalign{\smallskip}\hline \end{tabular} \end{center} \end{table} In general, the nucleon masses decrease strongly in the nuclear medium, in qualitative agreement with the well-known results~\cite{Jeukenne:1976uy,Mahaux85}.
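As a quick numerical cross-check (not part of the original analysis), the values transcribed from Table~\ref{table1} reproduce the mass splittings quoted in the text; the helper function below and its names are our own illustrative sketch.

```python
# Effective nucleon masses (MeV) transcribed from Table 1,
# keyed by (rho/rho0, delta_rho/rho) -> (m_p*, m_n*).
masses = {
    (0.0, 0.0): (937.4, 938.7),   # free space
    (1.0, 0.0): (547.9, 550.2),   # symmetric matter at rho0
    (1.0, 1.0): (632.1, 607.0),   # pure neutron matter at rho0
}

def dm_np(rho, asym):
    """Neutron-proton mass difference m_n* - m_p* (MeV)."""
    m_p, m_n = masses[(rho, asym)]
    return m_n - m_p

dm_free = dm_np(0.0, 0.0)  # +1.3 MeV in free space
dm_sym = dm_np(1.0, 0.0)   # +2.3 MeV: splitting grows in symmetric matter
dm_pnm = dm_np(1.0, 1.0)   # -25.1 MeV: proton heavier in pure neutron matter
```

The last value reproduces the $\Delta m_{\rm np}^* = -25$~MeV quoted above for pure neutron matter at $\rho_0$.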
At normal nuclear matter density and for an isospin asymmetry $\delta\rho/\rho\sim0.25$, the difference between the effective masses of the neutron and the proton (each normalized to its corresponding free space value) is $m^*_{\rm n}/m_{\rm n}-m^*_{\rm p}/m_{\rm p}\sim 0.01$. For comparison, the result of Ref.~\cite{Ho0} for nucleons located near the center of $^{132}\rm Sn$ is one order of magnitude larger: $m^*_{\rm n}/m_{\rm n}-m^*_{\rm p}/m_{\rm p}\sim 0.1$. The isoscalar and isovector charge radii $\langle r^2\rangle^{*1/2}_{\rm E,I=0,1}$ increase with increasing density of the medium.\footnote{Note that our results in free space differ from the ones of Ref.~\cite{Adkins:1983hy} by a factor $\sqrt{2}$ due to different normalizations of the charge densities (see Eq.~\re{charge} in the Appendix~\ref{app:Charges} and the corresponding definitions in Ref.~\cite{Adkins:1983hy}).} The isoscalar electric radius is more strongly affected by the isospin-asymmetric environment than the isovector one because of the presence of the isospin breaking term $\Delta^*$ (see appendix~\ref{app:Charges}). Consequently, in proton-rich matter the ratio $\langle r^2\rangle^{*1/2}_{\rm E,I=0}/\langle r^2\rangle^{1/2}_{\rm E,I=0}$ is enhanced more than in neutron-rich matter. The density dependence of the magnetic moments of the in-medium proton and neutron is presented in Fig.~\ref{fig-muP} and Fig.~\ref{fig-muN}, respectively. \begin{figure} \epsfysize=5.8cm \epsffile{muP.eps} \vspace{0.2cm} \caption{Density dependence of the proton magnetic moment. The ordinate represents $\mu_{\rm p}^{*}$ in nuclear magnetons (n.m.). The other definitions are as in Fig.~\ref{fig-strong}.} \label{fig-muP} \end{figure} \begin{figure} \epsfysize=5.8cm \epsffile{muN.eps} \vspace{0.2cm} \caption{Density dependence of the neutron magnetic moment. The ordinate represents $\mu_{\rm n}^{*}$ in nuclear magnetons (n.m.).
The other definitions are as in Fig.~\ref{fig-strong}.} \label{fig-muN} \end{figure} The influence of the isospin asymmetry of the surrounding environment on the in-medium magnetic moments is comparatively weak in this case. Relative to the result in isospin-symmetric matter, both the proton and neutron magnetic moments are decreased in neutron-rich matter and increased in proton-rich matter. Let us conclude by remarking that within the present approach the change of $\Delta m_{\rm np}^{*}$ is completely dictated by its strong part when the isospin-asymmetry parameter is sizable (compare Figs.~\ref{fig-strong}, \ref{fig-EM} and \ref{fig-total}). \section{Summary and outlook} \label{sect:summary} We have proposed an effective Lagrangian which incorporates energy-dependent optical potentials for the $s$- and $p$-waves and which takes into account the influence of the isospin asymmetry of the environment on the Skyrme solitons of the model. As a result, the neutron-proton mass splitting in asymmetric nuclear matter is predicted to vary strongly relative to its free space value. The predictions obtained in the present work are in qualitative agreement with those obtained within relativistic hadron field theory \cite{Li01,Gr0,Ho0} and nonrelativistic variational calculations~\cite{Ch97}. Quantitatively, however, the changes of $\Delta m_{\rm np}^*$ are small in comparison to the results of those works. Our approach shows that $\Delta m_{\rm np}^*$ in nuclear matter with sizable asymmetry is mainly dictated by its strong part. In more involved calculations for finite nuclei, this may serve as a justification to evaluate only the strong part of the mass difference within the present approach. Consequently, the next step in our future studies will be the estimate of $\Delta m_{\rm np}^*$ in finite (particularly in mirror) nuclei.
Here, additional effects are expected because the $p$-wave contribution proportional to $\Delta\chi_p\sim \vec\nabla^2\delta\rho$ in the Lagrangian~\re{lmedend} and the deformation effects discussed in Refs.~\cite{Yakhshiev:2001ht,Yakhshiev:2002sr} become relevant. \begin{acknowledgement} The work of U.T.Y.\ was supported by the Alexander von Humboldt Foundation. Partial financial support from the EU Integrated Infrastructure Initiative Hadron Physics Project (contract number RII3-CT-2004-506078), by the DFG (TR 16, ``Subnuclear Structure of Matter'') and by BMBF (research grant 06BN411) is gratefully acknowledged. \end{acknowledgement} \begin{appendix} \section{Two-stage method and classical solutions} \label{app:ClassSol} \def\theequation{\Alph{section}.\arabic{equation}} \setcounter{equation}{0} In this appendix we present a new interpretation of the inclusion of the stationary $a^*$ rotations, which differs from the ones of Refs.~\cite{Rathske:1988qt} and \cite{Meissner:2006id}, but which leads -- under the same input -- to the same results as in those references. This new strategy is more convenient for our analytic calculations. First of all, we remark that stationary $a^*$ rotations essentially have to be introduced because of the explicit isospin breaking in the mesonic sector, {\it i.e.} when ${\cal M}_-\ne 0$. If ${\cal M}_-=0$, then $a^*=0$. Of course, the condition~\re{constr} satisfies this requirement. Secondly, at the classical level, at which the soliton solution is determined, this mesonic mass splitting can be ignored; at the quantum level, however, it cannot be, since the symmetry-breaking effect is enhanced by the coupling to the collective zero modes. Note the quadratic dependence on the small parameter ${\cal M}_-$ at the classical level, whereas at the quantum level the dependence is linear; compare Eq.~\re{constr} with Eq.~\re{Ham}.
Let us, for the moment, put ${\cal L}_{\rm as}^*=0$ in the Lagrangian \re{lmedbeg}. Then one can construct the classical `hedgehog' solution(s) from the static Lagrangian ${\cal L}^{*\rm (static)}_{\rm sym}$. In the work \cite{Meissner:2006id} these solutions are referred to as the solutions of the non-perturbed (NP) system, determined by \begin{equation} \ds\frac{\delta M_{\rm NP}^*[F]}{\delta F}=0\,, \label{NPsol} \end{equation} where $F(r)$ is the usual chiral profile function of the hedgehog ansatz and where $M_{\rm NP}^*=-\int {\cal L}^{*\rm (static)}_{\rm sym}\,{\rm d}^3 r$ (compare with Eq.~\re{lag1}) is the in-medium mass of the soliton when it is not perturbed by any isospin breaking. The Lagrangian ${\cal L}^{*\rm (static)}_{\rm sym}$ is invariant under time-independent isospin rotations \begin{equation} U\rightarrow AUA^\dagger,\quad A=\exp\{i\vec\tau\cdot{\vec\varphi}^{I}/2\}\,, \label{A-rot} \end{equation} which can be treated as the usual zero modes of the model. Restoring the time dependence ${\vec\varphi}^{I}\rightarrow{\vec\varphi}^{I}(t)$ in Eq.~\re{A-rot} leads to the (spatially integrated) Lagrangian \[ \int {\cal L}^*_{\rm sym}\, {\rm d}^3 {r}=-M_{\rm NP}^* +\ds\frac{\left(\dot{{\vec\varphi}}^{I}(t)\right)^2}{2}\Lambda^*\,, \] where $\Lambda^*$ is the in-medium moment of inertia. Now we plug the isospin breaking ${\cal M}_-\ne 0$ back into the mesonic sector, such that the corresponding Lagrangian reads \begin{equation} \int{\rm d}^3 {r}\,\Big\{ {\cal L}^*_{\rm sym}+\Delta{\cal L}_{\rm mes} \Big\}\nonumber\\ =-M_{\rm NP}^*+\ds\frac{(\dot{\vec\varphi}^{I})^{2}}{2}\Lambda^* -{\cal M}_-^2\Lambda_-\,.
\label{Lplug} \end{equation} Note that the integrated Lagrangian \re{Lplug}, even in the presence of ${\cal M}_-\ne 0$, is still invariant under additional time-independent isospin rotations, {\it e.g.} \begin{eqnarray} A(t) \to \tilde A(t) &=&\exp\{i\vec\tau\cdot[{\vec \varphi}^{I}(t)+{\vec\varphi}^{II}]/2\}\nonumber\\ &\equiv& \exp\{i\vec\tau\cdot {\vec \varphi}(t)/2\}\,. \label{Atilde} \end{eqnarray} In fact, it is a property of the hedgehog ansatz that these time-independent isospin rotations can be compensated by time-independent spatial rotations. The latter average out when the angular integration is performed, although ${\cal M}_-\ne 0$. But in order to be consistent in the use of the {\em unchanged} classical solution $F(r)$ determined by Eq.~\re{NPsol}, one has to introduce -- at the classical level -- the requirement \begin{equation} \ds\frac{(\dot{\vec\varphi}^{I})^{2}}{2}\Lambda^*-{\cal M}_-^2\Lambda_- = 0\,. \label{classconstr} \end{equation} One can see explicitly that, classically, not all the rotations $\varphi_1^{I},\varphi_2^{I},\varphi_3^{I}$ of Eq.~\re{A-rot} can be time-independent. In other words, at least one of the angular velocities $\dot\varphi_1^{I},\dot\varphi_2^{I},\dot\varphi_3^{I}$ must not vanish, say, the one with respect to the third axis in isotopic space (which will later be the quantization axis). We will call the corresponding angular velocity, constrained via Eq.~\re{classconstr}, $a^*$, such that the condition \re{classconstr} is nothing other than the constraint~\re{constr}. As mentioned above, after angular averaging, the system remains invariant under time-independent zero modes which we called $\varphi_1^{II},\varphi_2^{II},\varphi_3^{II}$. Again, these zero modes can be made time-dependent \begin{equation} \dot\varphi_1^{II}=\omega_1,\quad \dot\varphi_2^{II}=\omega_2,\quad \dot\varphi_3^{II}=\omega_3, \label{qrot0} \end{equation} where $\omega_1,\omega_2,\omega_3$, in contrast to $a^*$, are unconstrained angular velocities.
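Note that the constraint~\re{classconstr} fixes the magnitude of the residual classical rotation explicitly: with only the third isotopic component nonvanishing, $\dot\varphi^{I}_3=a^*$, one has (assuming $\Lambda^*>0$)
\[
\frac{(a^*)^{2}}{2}\,\Lambda^*={\cal M}_-^2\Lambda_-
\qquad\Longrightarrow\qquad
a^*=\pm\,{\cal M}_-\sqrt{\frac{2\Lambda_-}{\Lambda^*}}\,,
\]
which indeed vanishes for ${\cal M}_-=0$, in accordance with the remark at the beginning of this appendix.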
Transcribed to our starting point \re{Atilde}, we eventually arrive at \begin{equation} \dot\varphi_1=\omega_1,\quad \dot\varphi_2=\omega_2,\quad \dot\varphi_3=\omega_3+a^*. \label{qrot} \end{equation} Inserting $\dot\varphi_1,\dot\varphi_2,\dot\varphi_3$ into Eq.~\re{Lplug}, one generates the Lagrangian~\re{lag1} with $\Delta^*=0$. Interpreting now $\phi_1=\varphi_1,\phi_2=\varphi_2,\phi_3=\varphi_3-a^*t$ as collective coordinates and $\omega_1,\omega_2,\omega_3$ as their pertinent velocities, the standard quantization procedure should be performed around the stationary point $a^* t$ with the constant angular velocity $a^*$. This way of handling the classical $a^*$ rotations is equivalent to the procedure performed in Refs.~\cite{Meissner:2006id,Rathske:1988qt}. But there is no longer any need for the explicit introduction of the matrix ${\cal T}(t)$ (see equation (15) in Ref.~\cite{Meissner:2006id}) or for the interpretation of the corresponding matrix order ambiguities discussed in Ref.~\cite{Rathske:1988qt}. Thus, at the analytical level, one performs the usual Skyrme model calculations, where Eq.~\re{qrot} is inserted into the expression \re{Lplug} and where the constraint~\re{constr} is applied at the classical level, such that $L^\ast= -M_{\rm NP}^*$ is extremized under the hedgehog ansatz, see Eq.~\re{NPsol}. The latter corresponds to the classical equation of motion \re{classic-eq} for the isospin-symmetric matter case with vanishing $\delta\rho$ and therefore, according to the definition~\re{polOpbe}, vanishing $\Delta\chi_{s}$ (and vanishing $\Delta^*$, see Eq.~\re{clEe}). Finally, the inclusion of isospin-breaking effects due to the asymmetric environment ($\Delta{\cal L}^*_{\rm env}\neq 0$) leads to the Lagrangian \begin{eqnarray} \lefteqn{\int{\rm d}^3 {r}\,\Big\{ {\cal L}^*_{\rm sym} +\Delta{\cal L}_{\rm mes}+\Delta{\cal L}^*_{\rm env} \Big\}}\nonumber\\ &=&-M_{\rm NP}^*+\ds\frac{(\dot{\vec\varphi})^{2}}{2}\Lambda^* -{\cal M}_-^2\Lambda_- + \dot\varphi\Delta^*\,.
\label{Lplug2} \end{eqnarray} Note that the Lagrangian \re{Lplug2}, because of the angular averaging, remains invariant under time-independent isospin rotations, as does its counterpart \re{Lplug}. The collective coordinates are therefore $\phi_1=\varphi_1,\phi_2=\varphi_2,\phi_3=\varphi_3-a^*t$ as before, and Eq.~\re{qrot} is still valid. When the latter is inserted, Eq.~\re{Lplug2} transforms into the Lagrangian~\re{lag1}, which is the starting point for the quantization in this work. But in order to be consistent with the use of a solution of hedgehog type, one still has to satisfy two requirements: (i) to apply the constraint \re{constr} in order to remove $-{\cal M}_-^2\Lambda_-$ and $\Lambda^*{(\dot{\vec\varphi})^{2}}/2$ from \re{Lplug2}, and (ii) to extremize the remainder $-M_{\rm NP}^*+{\dot\varphi}\Delta^*$ (compare with Eq.~\re{lclassic}) under the hedgehog ansatz. Here $\dot\varphi(t)$ still corresponds to $\dot\varphi^{I}(t)=a^*$, {\it i.e.} $\varphi^{II}$ is still assumed to be time-independent. Consequently, instead of Eq.~\re{NPsol} it is Eq.~\re{classic-eq} with $\Delta\chi_{s} \neq 0$ that has to be solved (in practice by iteration) in order to determine the profile function $F(r)$. \section{Charges and magnetic moments} \label{app:Charges} \def\theequation{\Alph{section}.\arabic{equation}} \setcounter{equation}{0} In calculating the third component of the isospin current $V_0^{(3)}$, one finds \begin{equation} \int{\rm d}^3{r}V_0^{(3)} =(\omega_3+a^*)\Lambda^*+\Delta^*\equiv T_3\,. \end{equation} Consequently, the isospin charge {\em density} -- modulo a factor $4\pi r^2$ -- is given as \begin{equation} \tilde T_3=(\omega_3+a^*)\tilde\Lambda^*+\tilde\Delta^* =(T_3-\Delta^*) \ds\frac{\tilde \Lambda^*}{\Lambda^*}+\tilde\Delta^*\,. \end{equation} Note that the r.h.s.\ contains terms that are independent of the isospin $T_3$.
Since the charges of the nucleons are defined as \begin{equation} Q=\ds\frac{B}{2}+T_3\equiv\int\limits\rho_{I=0}(r)\,{\rm d}^3r\pm \int\limits\rho_{I=1}(r)\,{\rm d}^3r\,,\label{charge} \end{equation} the isoscalar and isovector density distributions here take the following form \begin{eqnarray} 4\pi r^2\rho_{I=0}(r)&=&\ds\frac{\tilde B}{2}-\ds\frac{\Delta^*}{\Lambda^*} {\tilde\Lambda^*}+\tilde\Delta^*\,,\\ 4\pi r^2\rho_{I=1}(r)&=&\ds\frac{\tilde\Lambda^*}{2\Lambda^*}\,. \end{eqnarray} Analogous calculations for the magnetic moments lead to \begin{eqnarray} \mu^{\left({\rm p}\atop {\rm n}\right)*}&=&\ds\frac{m_{\rm N} (1+\Delta^*)}{6\Lambda^*} \int\limits_0^\infty\tilde Br^2{\rm d}r\nonumber\\ &&\mbox{}\pm\ds\frac{m_{\rm N}}{3} \int\limits_0^\infty\left[\big(1-\chi_{\rm p}^0\big)\tilde\Lambda_- +\tilde\Lambda_4^*\right]{\rm d}r\,. \end{eqnarray} \end{appendix}
\section{Introduction} \label{sec:intro} Ongoing improvements in detection limits and spectroscopic resolution in various wavelength regions have led to the discovery of many molecular and atomic emission lines in myriad astrophysical environments \citep{McGuire2018}, not the least of which are comets \citep{feldman2005,bm2017}. However, there are still thousands of unidentified cometary lines from the near-UV to the visible, many of which likely result from unknown transitions of known molecules, clouding spectral classification \citep{brown96, morrison97, wyckoff1999, mumma2001, cochran02, cremonese2007, kawakita02, dr2013, opitom19}. Small molecular ions, such as H$_2$O$^+$, CO$_2^+$, CO$^+$, OH$^+$, and O$_2^+$, have been observed in a variety of astronomical environments, including comet comae and tails, planetary atmospheres, planetary disks, and the interstellar medium (ISM) \citep{Larsson2012}. They are often sentinel species for underlying photochemical processes, which in our Solar System are driven by the Sun. Molecular radical cations also typically produce emission features at longer wavelengths than those observed from closed-shell, neutral molecules (often in the visible, whereas neutrals emit in the UV). Accurate and complete spectra, if publicly available, could be used to determine whether currently unidentified lines observed in cometary spectra are due to molecular cations, as exemplified by the successful attribution of previously unidentified spectral lines in comets Hyakutake and Ikeya-Zhang \citep{wyckoff1999, kawakita02} to transitions from higher-excited levels of H$_2$O$^+$ \citep{Bodewits2016}. Such a database would also provide a means of quantifying the molecular cations, such that determinations of ionization rates, sublimation and mass loss rates, local plasma conditions, and potentially even cometary chemical classifications would be improved \citep{Raghuram2021}.
To derive column densities and chemical abundances, heliocentric velocity-dependent fluorescence efficiencies of H$_2$O$^+$, CO$_2^+$, CO$^+$, and OH$^+$ are needed. While such values are known in part for some of these molecules, the full set of fluorescence data is lacking for many, most notably H$_2$O$^+$, and these knowledge gaps should be filled. The remote identification of ions, especially in comets, requires knowledge of their spectra. The analysis and interpretation of ion data require spectral models based on well-characterized values for dipole moments, line positions, Einstein A and B coefficients, and fluorescence efficiency factors. Surprisingly, this information is not readily available. Spectral data for these molecules are mostly missing from major databases, such as the Cologne Database for Molecular Spectroscopy \citep[CDMS; ][]{muller2001}, the high-resolution transmission molecular absorption database \citep[HITRAN; ][]{Gordon2017}, the Virtual Planetary Laboratory (VPL) Molecular Spectroscopic Database\footnote{\url{http://vpl.astro.washington.edu/spectra/fundamentals.htm}}, and the Diatomic and Triatomic spectral databases at NIST \citep{NIST2,NIST3}. Most of these databases are predominantly geared towards applications in the IR and microwave regions, and near-UV to visible rovibronic spectra are not as well characterized. As a consequence, many currently unidentified lines can likely be attributed to these and other ions, as is the case in comet spectra \citep{wyckoff1999, kawakita02, bodewits2019}, but these lines go unattributed due to a lack of the aforementioned reference data. How the lack of spectral data currently hinders comet studies is illustrated with three examples. First, the near-UV to visible spectrum of the unusual comet C/2016~R2 (PanSTARRS) was mostly dominated by CO$^+$ emission, with barely any evidence of water or its fragments \citep{mckay19, opitom19, venkat20}.
Comparison with detailed spectral models would allow observers to carefully remove the emission of CO$^+$, which obscures the possible emission of other species \citep{venkat20}. \citet{cochran18} obtained robust spectra of CO$^{+}$ in the tail of comet C/2016~R2 (PanSTARRS) when it was more than 3 AU from the Sun, and these observations also yielded a rare, unequivocal cometary detection of N$_{2}^{+}$. Most spectroscopic detections of N$_{2}^{+}$ in comet spectra reported prior to this result are thought to be telluric lines. The N$_{2}^{+}$/CO$^+$ ratio often serves as a substitute for the N$_{2}$/CO ratio in comets, and the N$_2$/CO abundance ratio may reflect how cometary ices formed \citep{Rubin2015}. Other species may also be masked under similar circumstances, and more data are needed to feed the models used for comparison. Second, one of the most surprising results of the {\em Rosetta} mission to comet 67P/Churyumov-Gerasimenko was the detection of high amounts of O$_2$ in its coma (up to 10\%), which has a pronounced impact on planet formation models \citep{Bieler15, keeney17}. Molecular oxygen has no dipole moment and thus produces no emission from its rotational transitions. Its near-UV to visible emission features are dipole-forbidden and likely weak if present \citep{Glinski2004}. On the other hand, its cation, whose electronic emission has been studied only at low resolution (as discussed later), has several emission lines in the visible regime (Table~\ref{overview}) for which full calibration is currently lacking. Third, CO$_2$ is one of the main volatiles in comets \citep{bm2017}, and its abundance relative to H$_2$O and CO may link comets to their formation regions \citep{ahearn2012}. Unfortunately, the emission of CO$_2$ is blocked by the Earth’s atmosphere.
However, CO$_2^+$ produces bright lines in the near-UV ($B \rightarrow X$) that have been observed in many comets \citep[cf.][]{weaver81}, as well as the less frequently observed $ A \rightarrow X $ transition at longer wavelengths \citep{opitom19}. Observers interested in characterizing the activity and volatile content of comets would benefit greatly from digitized, electronically available high-resolution spectra along with accurate fluorescence rates; these are currently unavailable, largely due to a lack of high-resolution experimental or theoretical results. This paper presents a critical analysis of the state of spectroscopic data currently available (Table \ref{overview}), as well as of existing data gaps, for diatomic and triatomic oxygen-bearing cations that are abundant in comets:~~H$_2$O$^+$, CO$_2^+$, CO$^+$, OH$^+$, and O$_2^+$. These five cations are produced by photoionization or electron impact from the most common cometary neutral species:~~H$_2$O, CO$_2$, and CO \citep{bm2017, Beth2020}. Our discussion is limited to transitions above 200 nm, roughly the lower limit for the quantum efficiency of most CCD detectors; shorter wavelengths require different hardware for observation. These knowledge gaps exist due to various factors, ranging from the difficulty of the experiments to a simple lack of motivation to study these systems. In any case, the following section describes what is currently known and offers a call to determine the remaining quantities in order to produce much more insightful spectral models of comets or of any other astrophysical objects where these molecules may be found.
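To make concrete how the fluorescence efficiencies discussed above enter the analysis: given a band $g$-factor, the number of emitting molecules in the projected aperture follows from the standard relation $N = 4\pi\Delta^2 F/g$, with $g$ scaled from its 1-AU value as $r_h^{-2}$ (ignoring any Swings effect). The following sketch uses purely hypothetical numbers; the function name and all inputs are illustrative assumptions, not measurements.

```python
import math

AU_CM = 1.495978707e13  # 1 AU in cm

def n_molecules(flux_cgs, delta_cm, g_1au_cgs, r_h_au):
    """Number of emitting molecules in the aperture.

    flux_cgs  : observed band flux (erg s^-1 cm^-2)
    delta_cm  : geocentric distance (cm)
    g_1au_cgs : fluorescence efficiency at 1 AU (erg s^-1 molecule^-1)
    r_h_au    : heliocentric distance (AU); g is assumed to scale as r_h^-2
    """
    g = g_1au_cgs / r_h_au**2
    return 4.0 * math.pi * delta_cm**2 * flux_cgs / g

# Illustrative (hypothetical) numbers only:
N = n_molecules(flux_cgs=1.0e-14, delta_cm=1.0 * AU_CM,
                g_1au_cgs=1.0e-15, r_h_au=2.0)
```

At fixed observed flux, moving a comet from 1 to 2 AU heliocentric distance quadruples the inferred number of molecules, since the pumping solar radiation field is four times weaker.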
\section{Review of existing spectroscopic models} \begin{table} \begin{center} \caption{\label{overview} Summary of the main emission features of small molecular cations applicable to rovibronic modeling of comet spectra with recommended references.} \begin{tabular}{lclcc} {\bf Species} & {\bf Transition/System} & {\bf Name} & {\bf Wavelength} & {\bf References} \\ \hline H$_2$O$^+$ & $A\ ^2A_1 \rightarrow X\ ^2B_1$ & & 4200 -- 7500 \AA & \citet{Lew73, Lew76} \\ H$_2$O$^+$ & $B\ ^2B_2 \rightarrow X\ ^2B_1$ & & 2200 -- 2800 \AA & \citet{Reutt86} \\ H$_2$O$^+$ & $B\ ^2B_2 \rightarrow A\ ^2A_1$ & & 3000 -- 4300 \AA & \citet{Reutt86} \\ CO$_2^+$ & $A\ ^2\Pi_u \rightarrow X\ ^2\Pi_g$ & Fox-Duffendack-Barker & 2800 -- 5000 \AA & \citet{Fox27, Gauyacq75} \\ CO$_2^+$ & $B\ ^2\Sigma_u \rightarrow X\ ^2\Pi_g$ && 2890 \AA & \citet{Gauyacq75} \\ CO$_2^+$ & $C\ ^2\Sigma_g \rightarrow X\ ^2\Pi_g$ && 2215 \AA & \citet{Wang88, Wyttenbach89} \\ CO$^+$ & $B\ ^2\Sigma^+ \rightarrow X\ ^2\Sigma^+$ & First Negative & 2100 -- 2500 \AA & \citet{judge72, lee74} \\ CO$^+$ & $B\ ^2\Sigma^+ \rightarrow A\ ^2\Pi$ & Baldet-Johnson & 3620 -- 6165 \AA & \citet{dotchin73} \\ CO$^+$ & $A\ ^2\Pi \rightarrow X\ ^2\Sigma^+$ & Comet tail & 4000 -- 6000 \AA & \citet{judge72} \\ OH$^+$ & $A\ ^3\Pi \rightarrow X\ ^3\Sigma^-$ & & 3300 -- 3600 \AA & \citet{Hodges2017} \\ O$_2^+$ & $A\ ^2\Pi_u \rightarrow X\ ^2\Pi_g$ & Second Negative & 1800 -- 5300 \AA & \citet{Terrell2004} \\ O$_2^+$ & $b\ ^4\Sigma_g^- \rightarrow a\ ^4\Pi_u$ & First Negative & 4500 -- 8500 \AA & \citet{Terrell2004,Glinski2004} \\ \hline \multicolumn{5}{l}{{\bf Note:} Where applicable, we follow the bent notation for the vibronic structure \citep[cf.][]{cochran02}.} \end{tabular} \end{center} \end{table} \subsection{H$_2$O$^+$} The water cation first made its presence felt in planetary science in 1974 when the $A\ ^2A_1 \rightarrow X\ ^2B_1$ emission was tentatively observed nearly simultaneously by two groups \citep{Benvenuti74, Herzberg74} at
near-UV to visible wavelengths in the tail of comet Kohoutek (1973f or C/1973 E1). Additionally, \citet{wehinger74b} detected the same transition in the tail of comet Bradfield (1974b = C/1974 C1) shortly thereafter, and follow-up work confirmed the presence of H$_2$O$^+$ in Kohoutek \citep{Wehinger74}, opening the door for observations in other comets. Thirteen other comets showed spectral signatures of H$_2$O$^+$ within the next few years \citep{Miller80}. The high-resolution spectral atlas by \citet{cochran02} contains an inventory of 129 observed lines of three vibrational transitions of H$_2$O$^+$ spanning the wavelength range of 5800 -- 7500 \AA\ in comet 122P/De Vico. \cite{bodewits2019} used electron impact-induced spectra to attribute numerous unidentified lines in the ion tail of comet Hyakutake, reported by \citet{wyckoff1999} and \citet{kawakita02}, to transitions from higher vibrational levels of the H$_2$O$^+$ $A\ ^2A_1 \rightarrow X\ ^2B_1$ transition. Additionally, \citet{lutz87} calculated fluorescence efficiency factors and absorption oscillator strengths for the same transition of H$_{2}$O$^{+}$ in order to determine its abundance in the tail of comet Kohoutek. Even with this relatively early detection of H$_2$O$^+$ in comets, it was not until 2010 that H$_2$O$^+$ was observed in the ISM, via radiotelescopic detections towards various star-forming regions \citep{Ossenkopf10}, and H$_{2}$O$^{+}$ has now even been observed in distant extragalactic objects at $z>0.8$ with ALMA \citep{Muller16}. In fact, this work exploring spectral features at such high redshift found that the existing rotational spectral data for $X\ ^2B_1$ H$_2$O$^+$ at the time were not of sufficient quality for more detailed modeling to be performed \citep{Muller16}.
The lines in the CDMS \citep{muller2001} matched the observations well enough to claim a detection, but the rest frequencies now reported in the CDMS are actually those from the ALMA observations combined with laboratory work done as part of the \cite{Muller16} study. These are listed in Table \ref{h2o+} for the $X\ ^2B_1$ $\nu=0$ state. Consequently, the pure rotational frequencies and spectroscopic constants of H$_2$O$^+$ are established to high precision. The three fundamental vibrational frequencies for $X\ ^2B_1$ of H$_2$O$^+$ in the gas phase have been determined through photoelectron and laser difference spectroscopy at 3212.86 cm$^{-1}$, 1408.42 cm$^{-1}$, and 3259.04 cm$^{-1}$ \citep{Reutt86, Jacox94}, as shown in Table \ref{h2o+}. Refinements to these values were made by \cite{Muller16}, extending the number of significant figures. Additionally, this refinement also generated rovibrational spectroscopic constants for these singly-excited/one-quantum fundamental vibrational frequencies \citep{Muller16}, allowing accurate rovibrational models of H$_2$O$^+$ to be produced. However, the electronic and vibronic bands are still incomplete. The electronic spectral characterization of H$_2$O$^+$ coincided with its initial observation in comet Kohoutek. Work by \cite{Lew73} and \cite{Lew76} provides clear descriptions of the electronic feature for the $A\ ^2A_1 \rightarrow X\ ^2B_1$ transition, but the full vibronic nature of the excited states has yet to be established. The $A\ ^2A_1$ and $B\ ^2B_2$ states have accurate transition energies from photoelectron experiments \citep{Reutt86}, putting the $B\ ^2B_2 \rightarrow X\ ^2B_1$ transition in the 2200 -- 2800 \AA\ range, and the $B\ ^2B_2 \rightarrow A\ ^2A_1$ transition in the 3000 -- 4300 \AA~range, as given in Table \ref{overview}.
These ranges, while largely estimated for the two transitions involving the $B\ ^2B_2$ state, are well-separated, giving H$_2$O$^+$ full emission coverage from 2200 \AA\ through the UV to the very edge of the visible regime at 7500 \AA. Electronic transition wavelengths resolved to at least the sub-nm scale, and certainly more accurate and complete vibrational levels to at least the single cm$^{-1}$ scale for these electronically excited states of H$_2$O$^+$, are needed in order to model these spectra more effectively. Additionally, more narrowly defined state lifetimes (and, hence, Einstein coefficients) are also needed, as the current measurements are more than 40 years old and two different studies disagree with one another by nearly a factor of three \citep{Curtis77, Mohlmann78}. \begin{table} \begin{center} \caption{\label{h2o+} Reported Vibrational Frequencies (cm$^{-1}$) and Rotational Constants (MHz) for the Various Electronic States of H$_2$O$^+$} \begin{tabular}{llcccc} State & & $\nu=0$ & $\nu_1=1$ ($a_1$) & $\nu_2=1$ ($a_1$) & $\nu_3=1$ ($b_2$) \\ \hline $X\ ^2B_1$$^a$ & $h\nu$ & 0.0 & 3212.8567 & 1408.4131 & 3259.0341 \\ & $A$ & 870 580.8 & 851 254. & 1 001 285. & 835 041. \\ & $B$ & 372 365.4 & 365 511.7 & 374 077.4 & 367 803.7 \\ & $C$ & 253 880.4 & 248 680.5 & 249 275.7 & 249 733.7 \\ & $\Delta_K$ & 1375.3 & 1348.5 & 2902.6 & 1269.2 \\ & $\Delta_{NK}$& -155.30 & -154.6 & -246.4 & -158.6 \\ & $\Delta_N$ & 29.66 & 29.80 & 31.00 & 30.23 \\ \hline $A\ ^2A_1$$^b$ & $h\nu$ & 13409.3 & 3547 & 876.8 & \\ $B\ ^2B_2$$^b$ & $h\nu$ & 36757 & 2968 & 1596 & \\ \hline \end{tabular} \end{center} $^a$All rovibrational data for the $X\ ^2B_1$ state are from \cite{Muller16} which further includes $\delta$, $\Phi$, $\phi$, $L$, and $\epsilon$ values.\\ $^b$The vibronic fundamentals are compiled from \cite{Lew76, Reutt86, Jacox94}.
See text for discussion.\\ \end{table} The experimental photoelectron results given in Table \ref{h2o+} only produce estimates for fundamental vibrational frequencies, and only for the $a_1$ modes \citep{Reutt86}. The $\nu_2$ bend of the $A\ ^2A_1$ state at 876.8 cm$^{-1}$ is the most concretely established fundamental frequency, based on comparisons between the photoelectron data and the prior existing emission spectra \citep{Lew76, Reutt86, Jacox94}. These values guided electron impact studies that established many of the experimental rovibronic bands for the $\nu_2$ bend of the $A\ ^2A_1$ state \citep{Kuchenev96}. Quantum chemically computed vibronic lines for the $A\ ^2A_1 \rightarrow X\ ^2B_1$ transition were subsequently compared to those from the electron impact study; the errors are less than 2 cm$^{-1}$ for this purely theoretical study \citep{Wu04}, which provides high-accuracy vibronic lines for higher quanta of the $A\ ^2A_1$ $\nu_2$ bend. These, in turn, were later corroborated and extended experimentally \citep{Gan04}. Even so, no lines or spectroscopic constants for higher quanta in either of the other two modes of $A\ ^2A_1$, or for any modes of the $B\ ^2B_2$ state, are currently given in the literature. Noble-gas-tagging experiments have produced estimates for the hydride stretches, but the proton-bound nature of these complexes likely makes them unsuitable as reference data for H$_2$O$^+$ \citep{Roth01, Dopfer01}.
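The degree of asymmetry implied by the rotational constants in Table \ref{h2o+} can be summarized by Ray's asymmetry parameter, $\kappa = (2B - A - C)/(A - C)$, where $\kappa = -1$ is the prolate limit and $\kappa = +1$ the oblate limit. A minimal sketch using the $X\ ^2B_1$, $\nu=0$ constants quoted above:

```python
# Ray's asymmetry parameter for X^2B1 (v=0) H2O+, using the rotational
# constants (MHz) listed in Table "h2o+" (from Muller et al. 2016).
A, B, C = 870580.8, 372365.4, 253880.4

kappa = (2 * B - A - C) / (A - C)  # -1 = prolate limit, +1 = oblate limit
print(f"kappa = {kappa:.4f}")       # ~ -0.616: a moderately asymmetric, near-prolate top
```

The value of roughly $-0.62$ places H$_2$O$^+$ well away from the prolate limit, which is part of why full asymmetric-top treatments, rather than symmetric-top approximations, are needed for its rotational structure.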
Consequently, the high-accuracy spectral data (both experimental and theoretical) available for the water cation from the literature currently consist of: \begin{itemize} \item Pure rotational spectroscopic constants and lines available in CDMS, \item Fundamental vibrational frequencies and rovibrational spectroscopic constants that can be produced from \cite{Muller16}, \item Onset energies for the $A\ ^2A_1 \rightarrow X\ ^2B_1$, $B\ ^2B_2 \rightarrow X\ ^2B_1$, and $B\ ^2B_2 \rightarrow A\ ^2A_1$ transitions \citep{Lew73, Lew76, Reutt86, Jacox94}, \item The $\nu_2$ bands of the $A\ ^2A_1$ state provided by \cite{Wu04}. \end{itemize} The remaining data necessary for proper cometary or other astrochemical modeling are the rovibrational spectroscopic constants and lines for the $A\ ^2A_1$ and $B\ ^2B_2$ excited states, save for the $\nu_2$ bands of the $A\ ^2A_1$ state as established by \citet{Kuchenev96}, \citet{Wu04}, and \citet{Gan04}. The transitions into these electronic states from the ground state fall at 7457.7 \AA\ and 2708.7 \AA, respectively \citep{Lew73, Lew76, Reutt86, Jacox94}, making them the only two states of interest for purely solar excitation models like those for comets. Hence, full modeling of H$_2$O$^+$ in comets will require line lists for these two electronic states, which are currently unavailable from any previous experimental or theoretical studies. Finally, Einstein coefficients for transitions involving each of these states are also needed in order for the spectral models to be complete. \subsection{CO$_2^+$} The detection of the carbon dioxide cation predates that of H$_2$O$^+$ by nearly 25 years. Observations of the long-period comet Bester (1947k or C/1947 F1) acquired as it was approaching the Sun revealed numerous bands of the CO$_2^+ \widetilde{A}\ ^2\Pi_u \rightarrow \widetilde{X}\ ^2\Pi_g$ emission transition in the comet tail at wavelengths from 3500 \AA\ to 5000 \AA\ \citep{Swings50}.
The spectrum of this molecule had been known since the late 1920s \citep{Fox27, Duffendack29} and was refined in the early 1940s \citep{Mrozowski41}, but comet Bester provided the first natural observation of CO$_2^+$ beyond the Earth's atmosphere. Both the CO$_2^+$ $\widetilde{A}\ ^2\Pi_u \rightarrow \widetilde{X}\ ^2\Pi_g$ and $\widetilde{B}\ ^2\Sigma^+_u \rightarrow \widetilde{X}\ ^2\Pi_g$ transitions were observed by {\em Mariner 6} and {\em Mariner 7} in the dayglow of Mars \citep{Barth71}. Sounding rocket observations of comet C/1975 V1 (West) yielded the first cometary detection of the CO$_2^+ \widetilde{B}\ ^2\Sigma^+_u \rightarrow \widetilde{X}\ ^2\Pi_g$ transition in the near-UV \citep{Feldman1976}. The detection of CO$_2^+$ also predates the first cometary observation of the neutral by 36 years, when its fluorescence spectrum was observed by the \emph{Vega 1} probe to comet 1P/Halley in 1986 \citep{Combes86}. The emission of neutral CO$_2$ was first discovered in the ISM in 1989 via IR spectra of its bright antisymmetric stretch mode \citep{Hendecourt89}, but the cation has yet to be documented in any environment beyond the Solar System.
\begin{table} \begin{center} \caption{\label{co2+} Reported Vibrational Frequencies (cm$^{-1}$) and Rotational Constants (MHz) for the Various Electronic States of CO$_2^+$.} \begin{tabular}{llcccc} State & & $\nu=0$ & $\nu_1=1$ ($\sigma_g^+$) & $\nu_2=1$ ($\pi_u$) & $\nu_3=1$ ($\sigma_u^+$) \\ \hline $X\ ^2\Pi_g$$^a$ & $h\nu$ & 0.0 & 1264.98 & 513.262 & 1423.08 \\ & $B$ & 11 410 & & 11 380 & \\ & $D$ & 0.033 & & & \\ \hline $A\ ^2\Pi_u$$^a$ & $h\nu$ & 28500.35 & 1122.41 & 466.75 & 2653.07\\ & $B$ & 10 500 & & & \\ \hline $B\ ^2\Sigma_u^+$$^b$& $h\nu$ & 34591.6 & 1284 & 590 & 1891 \\ & $B$ & 11 340 & & 11 500 & \\ & $D$ & 0.069 & & & \\ \hline $C\ ^2\Sigma_g^+$$^c$& $h\nu$ & 45157 & 1384 & 614 & 1567 \\ & $B$ & 11 837 & 11 820 & & \\ & $D$ & 0.1 & 0.1 & & \\ \hline \end{tabular} \end{center} $^a$ Data collected from \cite{Gauyacq75, Varfalzy07, Gharaibeh10, Jacox03}.\\ $^b$ Data collected from \cite{Gauyacq75, Jacox03}.\\ $^c$ Data collected from \cite{Wang88, Wyttenbach89, Jacox03}. \\ \end{table} The vibronic spectrum of CO$_2^+$ is very well established, even if very complicated \citep{Gauyacq75}. Much of this characterization was motivated by the cation's presence in comet Halley \citep{Johnson95}. As referenced in Table \ref{co2+}, high-precision experimental results are available for the ground state ($X\ ^2\Pi_g$) and first excited state ($A\ ^2\Pi_u$ at 3508.73 \AA), along with detailed, though less precise, data for the higher $B\ ^2\Sigma_u^+$ (2890.88 \AA) and $C\ ^2\Sigma_g^+$ (2214.88 \AA) states \citep{Jacox03, McCallum72, Frye87, Wang88, Wyttenbach89, Johnson95, Chambaud92, Liu00, Shaw05, Gharaibeh10, NIST3}. These higher states, especially the $C\ ^2\Sigma_g^+$, also lie beyond what can be excited through photofluorescence, the dominant emission mechanism for most cometary fragment species \citep{Shaw05}. \cite{Johnson95} provide a thorough review of the state of CO$_2^+$ spectral data that were available at the time of their publication.
This molecule has problematic Fermi resonances, Renner-Teller distortions, and vibronic couplings \citep{Varfalzy07}, all of which shift the spectral lines from the locations predicted by the standard model and put vibrational transitions of electronically excited states in competition with other features, making them difficult to delineate clearly. However, \cite{Chambaud92} report an extensive, experimentally-benchmarked list of theoretically-computed vibronic transitions within 5000 cm$^{-1}$ of the $X\ ^2\Pi_g$ ground state of CO$_2^+$, where the complicated interactions are mitigated through control of the quantum chemical computations employed. These values have been corroborated and extended more recently by \cite{Liu00} and \cite{Gharaibeh10}, among others, through pulsed-field-ionization photoelectron spectroscopy and laser-induced fluorescence, respectively, to more than 10000 cm$^{-1}$ beyond the onset of the $A\ ^2\Pi_u$ state. Rotational and spectroscopic constants have also been provided for the fundamental frequencies of the IR-allowed $\nu_3$ mode of the ground electronic state, as well as the $\nu_1$ and $\nu_3$ modes of the $A\ ^2\Pi_u$ state via high-precision emission spectroscopy \citep{Johnson95}. Similar to its neutral counterpart, CO$_2^+$ does possess the bright $\nu_3$ antisymmetric stretching mode, but the intensity of this mode for the cation is reduced by approximately 75\% compared to the neutral according to double-harmonic quantum chemical computations performed as part of the present work. Photofluorescence efficiencies for the most important bands in the $A\ ^2\Pi_u \rightarrow X\ ^2\Pi_g$ and $B\ ^2\Sigma_u^+ \rightarrow X\ ^2\Pi_g$ transitions have been determined by \cite{Kim99}, using the transition probability data of \citet{mccallum1971}, and the results were compared to ground-based spectra of comets Austin (1989c1 = C/1989 X1) and 21P/Giacobini-Zinner.
\cite{Kim99} argued that the Swings effect did not significantly affect the fluorescence efficiency of the CO$_2^+$ transitions and that the accuracy of their model was mostly limited by the spectroscopic data (line positions and transition probabilities) available to them. Furthermore, their equilibrium model only includes vibrational-vibrational transitions, unlike the aforementioned studies in the prior paragraph and others \citep{Itikawa2001}, which cover rovibrational transitions and focus heavily on the Renner-Teller effect. However, as demonstrated in the discussion above, significant work has been done to understand the spectrum of CO$_{2}^{+}$ after their study was published, thus meriting a re-evaluation of the data. Consequently, a majority of the necessary spectral data to model CO$_2^+$ should already be available from these references, which are largely curated at NIST \citep{NIST3}. The exceptions are the vibrationally-excited rotational constants of the excited electronic states and absolute line intensities (oscillator strengths) for these transitions, as most of those currently available are relative at best. While not as necessary for the production of the electronic band progressions, having the full rovibronic scope of data, Einstein coefficients, and the absolute intensities would produce the most accurate and descriptive models for comparison to observation. The $C\ ^2\Sigma_g^+$ state has not been observed in comets even though excitations into the longer-wavelength $B\ ^2\Sigma_u^+$ state have been observed from space-based instruments \citep{weaver81}. Some rovibrational data for this state are known with fairly high resolution \citep{Wyttenbach89}, especially the excitation energy and the rotational constants. However, the rotational quartic distortion $D$ constant was held fixed in the work by \cite{Wyttenbach89} for each vibrational state.
\cite{Wang88} and \cite{Wyttenbach89} disagree as to the assignment of the $\nu_1$ symmetric stretching frequency (1352 cm$^{-1}$ versus 1384 cm$^{-1}$, respectively), but the latter also reports the $B$ rotational constant for this mode, lending more credence to their assignment. The other fundamental vibrational frequencies are from \cite{Wang88}, and the rotational data for these modes are also missing. Hence, the $C\ ^2\Sigma_g^+$ state of CO$_2^+$ could also benefit from more advanced study. Such data may make it possible to isolate this state in cometary spectra, as it lies in a spectral region that is difficult to observe. Such observations, however, can only occur if more data are on hand for fluorescence models to compare with observation. Transitions between the various excited electronic states of CO$_2^+$ are also possible under solar radiation. However, most are at near-infrared wavelengths and would likely have relatively low molecular state populations involved in such transitions, giving little signal. Additionally, such weak signals also create challenges for producing high-resolution experimental spectral data in the laboratory. Regardless, the $B\ ^2\Sigma_u^+ \rightarrow A\ ^2\Pi_u$ transition should take place at 16417 \AA\ based on the difference in the electronic energies for each of these states from \cite{Gharaibeh10} and \cite{Johnson95}. The $C\ ^2\Sigma_g^+ \rightarrow A\ ^2\Pi_u$ transition wavelength is actually in the visible region at 6003.6 \AA\ \citep{Wyttenbach89}, leaving the $C\ ^2\Sigma_g^+ \rightarrow B\ ^2\Sigma_u^+$ transition in between these two at 9464.9 \AA. The photofluorescence models of CO$_2^+$ would also benefit from data for these excited states even if they would be minor contributors to the overall cometary spectra of interest.
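The inter-state wavelengths quoted above follow directly from the term energies in Table \ref{co2+} through the conversion $\lambda[\text{\AA}] = 10^8/\Delta\tilde{\nu}[\text{cm}^{-1}]$. A short sketch reproducing the three values:

```python
# Vacuum wavelengths for transitions between excited electronic states of
# CO2+, from the term energies (cm^-1) in Table "co2+".
T = {"A": 28500.35, "B": 34591.6, "C": 45157.0}  # A^2Pi_u, B^2Sigma_u+, C^2Sigma_g+

def wavelength_angstrom(upper, lower):
    """lambda [Angstrom] = 1e8 / (T_upper - T_lower), energies in cm^-1."""
    return 1e8 / (T[upper] - T[lower])

print(f"B -> A: {wavelength_angstrom('B', 'A'):.0f} A")  # ~16417 A (near-IR)
print(f"C -> A: {wavelength_angstrom('C', 'A'):.1f} A")  # ~6003.6 A (visible)
print(f"C -> B: {wavelength_angstrom('C', 'B'):.1f} A")  # ~9464.9 A
```

The agreement with the values in the text is a useful internal consistency check on the tabulated term energies.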
\subsection{CO$^+$} The detection of the $A\ ^{2} \Pi_{i} \rightarrow X\ ^{2} \Sigma^{+}$ transition of CO$^+$ at 4820 \AA\ represents the earliest cometary detection of the cations discussed here \citep{Swings65}. The spectra of the tails from comet C/1907 L2 (Daniel) and the exceptionally bright comet C/1908 R1 (Morehouse) both showed emission bands that were soon matched to discharge spectra acquired in the laboratory \citep{Fowler1909}. Owing to the low sublimation temperature of CO, CO$^{+}$ emission can be seen in comet spectra at distances exceeding 5 AU \citep{cochran91}. CO$^+$ has subsequently been utilized as a spectral reference point in the cometary detection of CO$_2^+$ and OH$^+$ \citep{Swings50}. Extrasolar rotational lines of CO$^+$ were first observed towards the photodissociation region M17SW in 1993 \citep{Latter93}, further showcasing the near-century lag between comet cation chemistry and interstellar cation chemistry. CO$^+$ has since been observed towards other photodissociation regions, planetary nebulae, in the disk of M82, and in the circumnuclear torus of Cygnus-A \citep{Fuente2000,Fuente2006, Bell2007, Stauber2009}. Even though CO$^+$ had been known in comets previously from near-UV to visible spectra, the established $J=2 \rightarrow J=1$ rotational line of CO$^+$ was also observed in comet Hale-Bopp with the Caltech Submillimeter Observatory in 1997 \citep{lis97a, lis97b}. However, more modern spectral benchmark data would greatly enhance the spectral resolution needed to inform more detailed future observations.
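The $J=2 \rightarrow 1$ line observed in Hale-Bopp can be located approximately from the ground-state constants in Table \ref{co+}. The sketch below uses the standard diatomic expression $\nu(N \rightarrow N-1) = 2BN - 4DN^3$ and deliberately neglects the spin-rotation splitting of the $^2\Sigma^+$ state, so it only approximates the positions of the observed fine-structure components:

```python
# Rough position of the CO+ N = 2 -> 1 rotational transition from the
# X^2Sigma+ constants in Table "co+" (B and D in MHz). Spin-rotation
# fine structure of the 2Sigma+ state is neglected, so the real line is
# split into components near this frequency.
B = 59270.5  # MHz
D = 0.190    # MHz

def nu_rot(N_upper):
    """nu(N -> N-1) = 2*B*N - 4*D*N**3, in MHz (rigid rotor + distortion)."""
    return 2 * B * N_upper - 4 * D * N_upper**3

nu = nu_rot(2)
print(f"N=2->1: {nu / 1000:.2f} GHz")  # ~237.08 GHz
```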
\begin{table} \begin{center} \caption{\label{co+} Reported Vibrational Frequencies (cm$^{-1}$) and Rotational Constants (MHz) for the Various Electronic States of CO$^+$.$^a$} \begin{tabular}{llcc} State & & $\nu=0$ & $\nu=1$ \\ \hline $X\ ^2\Sigma^+$ & $h\nu$ & 0.0 & 2183.9 \\ & $B$ & 59 270.5 & \\ & $D$ & 0.190 & \\ \hline $A\ ^2\Pi$ & $h\nu$ & 20733.3 & 1534.9 \\ & $B$ & 47 649 & \\ & $D$ & 0.20 & \\ \hline $B\ ^2\Sigma^+$ & $h\nu$ & 45876.7 & 1678.3 \\ & $B$ & 53 930 & \\ & $D$ & 0.23 & \\ \hline \end{tabular} \end{center} $^a$ Data collected from \cite{Irikura07, Hakalla19}. \\ \end{table} There is an extensive body of work reporting on spectral data of CO$^+$ that has spanned multiple decades, with most of the modern foundation coming from the past 40 years \citep{Marchand69, Brown84, Kuo86, Bembenek94, Haridass92}. The current state of knowledge is listed in Table \ref{co+}. Some of the most complete examinations for the spectral classification of CO$^+$ are from observations of cometary ion tails \citep{vujisic88, Haridass00, kepa04} and from quantum chemical explorations \citep{lavendy93, xin18}. A very recent laboratory-based Fourier-transform emission spectral examination of this simple molecule has revealed an extensive trove of rovibronic data for the $A\ ^{2} \Pi_{i} \rightarrow X\ ^{2} \Sigma^{+}$ transition \citep{Hakalla19}. The experimental data are fit using previously known values, including additional reference points from the $B\ ^{1}\Sigma^+ \rightarrow A\ ^{1} \Pi$ transition of neutral carbon monoxide. In turn, these results have been used to provide input parameters for spectroscopic modeling with the PGOPHER software \citep{western10}. The remaining, highly-resolved ($\sim1$ cm$^{-1}$) experimental lines were then matched with the output from PGOPHER such that the rest of the $A\ ^{2} \Pi_{i} \rightarrow X\ ^{2} \Sigma^{+}$ rovibronic features within 2000 cm$^{-1}$ of the onset are classified.
Hence, the data necessary to model CO$^+$ have been produced and are curated within that work but are not accessible in a standard format beyond that which is listed as data tables. \citet{magnani86} performed a comprehensive series of calculations on the fluorescent equilibrium of CO$^{+}$. Unlike prior studies, they included the Baldet-Johnson bands and the first negative bands. The data used for their calculations were based on a compilation of nearly two dozen empirical and theoretical papers. Line positions (given in cm$^{-1}$) were taken from \citet{schmid1933}, \citet{Bulthuis1934}, \citet{rao1950b}, and \citet{herzberg1950}. Oscillator strengths were drawn from \citet{joshi1966} and \citet{Jain1972}. The Franck-Condon factors were taken from \citet{nicholls1962}, and H\"{o}nl-London factors were utilized from \citet{rao1950a}, \citet{rao1953}, and \citet{schadee1964}. Transition probabilities were based on \citet{arpigny1976} and \citet{crovisier1985}. Information on transitions was taken from \citet{kopelman1962}, \citet{arpigny1964a}, \citet{arpigny1964b}, \citet{certain1973}, \citet{Feldman1976}, \citet{rosmus1982}, and \citet{chin1984}. Lab measurement data from \citet{lawrence1965} and \citet{judge1972} were also used. Formulas for the Einstein A and B coefficients were taken from \citet{schleicher1982}. Solar spectral data for the full solar disk were taken from \citet{broadfoot72} and \citet{ahearn1983}. Because the oscillator strength for the Baldet-Johnson bands was unknown, \citet{magnani86} assigned a value such that transition probabilities were in agreement with available laboratory observations, resulting in projected band intensities that were uncertain but likely still reliable at the 20$\%$ level and consistent with their non-detection in most comets. Tables are provided in their paper with the fluorescence efficiencies of 60 transitions for a range of heliocentric velocities between $-50$ and $+350$ km/s.
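The heliocentric-velocity grid used by \citet{magnani86} matters because the solar spectrum sampled by the comet is Doppler shifted (the Swings effect discussed earlier). A minimal sketch of the first-order shift for the CO$^+$ band near 4820 \AA; the wavelength here is the band-level value from the text, used for illustration rather than as a precise line position:

```python
# First-order Doppler shift of a solar wavelength as seen by a comet at
# heliocentric radial velocity v (km/s); positive v = receding from the Sun.
C_KM_S = 299792.458   # speed of light, km/s
LAMBDA_0 = 4820.0     # CO+ A-X comet-tail band (Angstrom), from the text

def shifted_wavelength(v_km_s, lam0=LAMBDA_0):
    """lambda' = lambda0 * (1 + v/c), valid for v << c."""
    return lam0 * (1.0 + v_km_s / C_KM_S)

# Endpoints of the Magnani & A'Hearn heliocentric-velocity grid:
for v in (-50.0, 0.0, 350.0):
    print(f"v = {v:+6.1f} km/s -> {shifted_wavelength(v):.2f} A")
```

Even at $+350$ km/s the shift is only a few \AA, but that is enough to move a line on or off a solar Fraunhofer feature and thereby change its fluorescence efficiency.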
However, as explained in our discussion above, significant work has been done on the spectrum of CO$^{+}$ after the \citet{magnani86} paper was published (see Table \ref{co+} for recommended constants and frequencies). These recent studies, such as those of \citet{xin18} and \citet{Hakalla19}, have increased the spectral resolution as well as the accuracy of known transitions. As such, the fluorescence model of CO$^+$ would benefit from incorporating these newer spectral data. \subsection{OH$^+$} The $A\ ^2\Sigma^+ - X\ ^2\Pi_i$ emission of neutral OH around 3085 \AA\ is readily used to derive water production rates of comets \citep[cf.][]{ahearn1995}, and in the ISM, the hydroxyl radical was actually the first molecule to be observed via rotational spectroscopy in 1963 towards the very powerful radio source in the Cas A supernova remnant \citep{Weinreb63} and has since been observed in diffuse clouds, as well \citep{Neufeld10, Porras13}. Both OH$^+$ and CO$_2^+$ were observed first in comet C/1947 F1 (Bester) \citep{Swings50}. The dominant feature for this OH$^+$ observation is what spectroscopists now label as the $A\ ^3\Pi \rightarrow X\ ^3\Sigma^-$ transition between 3300 -- 3600 \AA. This broad emission feature may contaminate observations acquired with comet narrowband filters designed for continuum measurements or for the emission of NH around 345 nm \citep{Bodewits2016}. Outside comets, OH$^+$ was not observed until 2010, toward Sgr B2 \citep{Wyrowski10} and toward background stars at near-UV wavelengths \citep{Krelowski10}. OH$^{+}$ has also been observed around ultraluminous galaxies \citep{vdw2010}, toward a lensed quasar \citep{muller2016}, in the Orion bar \citep{vdt2013}, and in cometary knots of planetary nebulae \citep{aleman14, priestley18}.
\begin{table} \begin{center} \caption{\label{oh+} Reported Vibrational Frequencies (cm$^{-1}$) and Rotational Constants (MHz) for the Various Electronic States of OH$^+$.$^a$} \begin{tabular}{llcc} State & & $\nu=0$ & $\nu=1$ \\ \hline $X\ ^3\Sigma^-$ & $h\nu$ & 0.0 & 2956.358469 \\ & $B$ & 492 346.37 & 470 531.9\\ & $D$ & 57.609 9 & 56.014 5 \\ & $H$ & 0.003 942 & 0.003 903\\ \hline $A\ ^3\Pi$ & $h\nu$ & 27935.6930 & 1975.9872 \\ & $B$ & 400 841.1 & 375 144.7 \\ & $D$ & 68.056 & 65.885 4 \\ & $H$ & 0.004 350 & 0.003 007\\ \hline \hline \end{tabular} \end{center} $^a$ Data collected from \cite{Hodges2017}. \\ \end{table} As with CO$^+$, the foundational experimental spectroscopic work on OH$^+$ comes from the previous 40 years or so \citep{Merer75, Bekooy85, Gruebele86, Liu87, Varberg87, Rehfuss92}. During this same era, \citet{saxon86} conducted a theoretical study of OH$^{+}$ photodissociation from the ground state and calculated potential curves for each of the lowest three $\Sigma^{-}$ and $\Pi$ states. They also evaluated transition dipole moments between the ground and excited states and used them to calculate photodissociation cross sections. \citet{gc2014} conducted a theoretical study of OH$^{+}$ which included calculating the Einstein coefficients for ro-vibrational bands involving the X$^3\Sigma^-$ and $A\ ^3\Pi$ electronic states and calculating the state-to-state rate constants for inelastic collisions between He and OH$^{+}$ (X$^3\Sigma^-$). A more recent study by \cite{Hodges2018} combines empirical energy surfaces with the Rydberg-Klein-Rees method, dipole moments calculated quantum chemically, and oscillator strengths and Einstein A coefficients calculated using PGOPHER.
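As context for the oscillator strengths and Einstein $A$ coefficients produced in the PGOPHER-based work above, the two quantities are related by the standard conversion $A_{ul} = 6.6702\times10^{15}\,(g_l/g_u)\,f_{lu}/\lambda^2$, with $\lambda$ in \AA. The sketch below uses a purely hypothetical oscillator strength for illustration, not a measured OH$^+$ value:

```python
# Convert an absorption oscillator strength f_lu into an Einstein A
# coefficient (s^-1) via the standard relation
#   A_ul = 6.6702e15 * (g_l / g_u) * f_lu / lambda**2,  lambda in Angstrom.
def einstein_A(f_lu, lambda_angstrom, g_l=1, g_u=1):
    return 6.6702e15 * (g_l / g_u) * f_lu / lambda_angstrom**2

# Hypothetical example: f = 1e-3 for a line at 3400 A, i.e., within the
# OH+ A-X band range quoted in the text; NOT a measured OH+ quantity.
A = einstein_A(1e-3, 3400.0)
print(f"A = {A:.3g} s^-1")  # ~5.77e5 s^-1
```

The inverse relation is equally useful when a line list reports only $A$ values and a fluorescence model requires $f$.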
Line lists are created with positions, oscillator strengths, and Einstein A coefficients for the $A\ ^3\Pi \rightarrow X\ ^3\Sigma^-$ rovibronic transition and the rovibrational transitions of the X$^3\Sigma^-$ ground state, including the following levels: (A, $\nu=0$, 1; X, $\nu=0, 1, 2, 3, 4$) up to $J''=30$. The line positions of the $A\ ^3\Pi \rightarrow X\ ^3\Sigma^-$ band (3300 -- 3600 \AA) are compared to those determined using a discharge cell combined with a Fourier transform spectrometer \citep{Hodges2017}. The authors state that there is no intensity measurement to verify the vibrational transitions (around 3.4 $\mu$m) of the {\em X} -- {\em X} band. A table of their data is available electronically through the VizieR database, and the pertinent rovibronic features of OH$^+$ are listed in Table \ref{oh+}. \subsection{O$_2^+$} Surprisingly, neutral molecular oxygen was detected at relatively large abundances with the ROSINA and Alice instruments on board \emph{Rosetta} as it orbited comet 67P/Churyumov–Gerasimenko \citep{Bieler15, keeney17}. It is unclear whether this O$_2$ was present at the comet's formation, is trapped in cometary ices or clathrates \citep{LuspayKuti2018}, or is produced by a chemical reaction with materials on the surface or in the coma \citep{Fortenberry2019}. Although its presence in cometary atmospheres was first suggested 70 years ago \citep{Swings50}, the molecular oxygen cation actually has not been detected remotely to date in either cometary or interstellar media \citep{Glinski2004}, and was only spuriously detected in situ by \emph{Rosetta}'s ROSINA instrument \citep{Beth2020}. Three faint rotational lines of $X\ ^3\Sigma_g^-$ O$_2$ were finally detected in 2011 towards Orion \citep{Goldsmith11}, but the cation remains elusive for detection in the ISM as well as in comets.
In comets, photoionization rates for the production of O$_2^+$ from O$_2$ are larger ($\sim 10\%$) than for the production of H$_2$O$^+$ by photoionization of H$_2$O ($\sim 3\%$; \citealt{Huebner2015}), implying that O$_2^+$ should be present as well, especially outside the collisionally thick inner coma, where it can quickly dissipate through chemical reactions \citep{Beth2020}. However, high-quality spectral reference data and fluorescence efficiency rates are needed to look for its signature in cometary and other astrophysical spectra. \begin{table} \begin{center} \caption{\label{o2+} Reported Vibrational Frequencies (cm$^{-1}$) and Rotational Constants (MHz) for the Various Electronic States of O$_2^+$.$^a$} \begin{tabular}{llcc} State & & $\nu=0$ & $\nu=1$ \\ \hline $X\ ^2\Pi_g$ & $h\nu$ & 0.0 & 1872.27 \\ & $B$ & 50 704 & \\ & $D$ & 0.159 & \\ \hline $A\ ^2\Pi_u$ & $h\nu$ & 40669.3 & 871.11 \\ & $B$ & 31 829 & \\ & $D$ & 0.178 & \\ \hline \hline \end{tabular} \end{center} $^a$ Data collected from \cite{Irikura07}. \\ \end{table} \citet{li2000} present results of laser-induced fluorescence (LIF) of the (8,0) and (8,1) bands of the $A\ ^2\Pi_u \rightarrow X\ ^2\Pi_g$ system (between 1800 -- 5300 \AA), as well as preliminary LIF determinations of ($\nu$ = 0, 1) distributions from reactions involving Ar$^{+}$ and Xe$^{+}$. Electron impact ionization-induced spectra between 2200 -- 6000 \AA\ are presented in \cite{Terrell2004}, and these are gathered in Table \ref{o2+}. They present a simplified molecular model to interpret their data, which includes $A\ ^2\Pi_u \rightarrow X\ ^2\Pi_g$ and $b\ ^4\Sigma^-_g \rightarrow a\ ^4\Pi_u$ (between 4500 -- 8500 \AA). All other excitations are outside the window of solar radiation observed from the ground \citep{NIST2}, and spin-flip excitations are not likely to produce any measurable intensity under cometary conditions.
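As a rough check on where the $A\ ^2\Pi_u - X\ ^2\Pi_g$ system sits, the 0--0 band origin follows from the $A$-state term energy in Table \ref{o2+}; this assumes a simple vacuum conversion and neglects vibrational and rotational structure:

```python
# Approximate 0-0 band origin of the O2+ A^2Pi_u - X^2Pi_g system from the
# A-state term energy (cm^-1) in Table "o2+". Vibrational/rotational
# structure and air-vs-vacuum corrections are neglected.
T_A = 40669.3  # cm^-1

lam = 1e8 / T_A  # Angstrom
print(f"0-0 band origin: {lam:.1f} A")  # ~2458.9 A, in the near-UV
```

The origin near 2459 \AA\ is consistent with the band system extending from the UV into the visible through its vibrational progressions, as in the 1800 -- 5300 \AA\ range sampled by \citet{li2000}.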
The models of these band features were made possible by the rotational constants and strong classification of the vibrational fundamental from \cite{NIST2}. The fundamental frequency varies significantly across these four states, from 898.2 cm$^{-1}$ in $A\ ^2\Pi_u$ \citep{Colburn77} to 1904.7 cm$^{-1}$ in the ground $X\ ^2\Pi_g$ state, where these bookend the 1580.19 cm$^{-1}$ fundamental in the $X\ ^3\Sigma^-_g$ state of the neutral \citep{NIST2}. The rotational constants vary from 31829 MHz to 50704 MHz, again for $A\ ^2\Pi_u$ and $X\ ^2\Pi_g$, respectively \citep{Irikura07}. Consequently, the rovibronic progressions differ notably between the doublet states involved in the shorter-wavelength excitations. \section{Conclusions} The state of available spectral data for small, oxygen-containing molecular cations is mixed but is rife with notable knowledge gaps that hamper the full diagnostic use of the emission of molecular ions. These holes must be filled in order to better understand physical processes and chemical evolution for tenuous atmospheres of various astronomical bodies, such as comets around perihelion. General near-UV to visible spectral progressions are largely established for these molecules, and the rovibrational nature of the ground electronic states has been well classified for these relatively simple molecules. However, a rigorous, quantitatively-predictive analysis of the full rovibronic structure, especially the rotational constants of the excited rovibronic states, is currently lacking in most cases. Existing excitation models are decades old and rely on limited spectral data. This work shows where the data in these models can and should be improved.
The worst case of the current state of knowledge of the spectral properties among the H$_2$O$^+$, CO$_2^+$, CO$^+$, OH$^+$, and O$_2^+$ set is the water cation, where only low-resolution characterization for the rovibrational transitions of the two lowest electronic states is available in the literature currently. These available data are also mostly limited to the vibrational band positions and have no rotational substructure described. While surprising for a molecular product of such a common and abundant molecule, this represents a readily surmountable characterization challenge for the astrochemical community, whether through high-resolution experiment, modern quantum chemistry, or a combination of the two approaches. The reference data for CO$_2^+$ and CO$^+$ have similar rotational spectral holes, but the vibronic spectra of the higher electronic states have been explored at high resolution for the most part. In any case, these small, oxygen-containing molecular cations and their daughter species are often observed in bodies such as comets, but clear identification of the lines, especially the rovibronic features, is necessary in order to gain deeper insights into the photochemistry playing out in various Solar System environments. \section{Acknowledgements} RCF acknowledges funding from NASA Grant NNX17AH15G, NSF Grant OIA-1757220, and start-up funds provided by the University of Mississippi. DB acknowledges support from NASA Grant 80NSSC19K1304 for the `Rosetta Data Analysis Program'. DMP acknowledges support from the Southeastern Conference Faculty Travel Program.
\section{Introduction} Despite the importance and prevalence of warm-core rings (WCRs) in the Gulf of Mexico (GoM), many of the details of the transport and small-scale coherent structures in these flows remain poorly understood. WCRs have a significant impact on oceanic transport and energy balances~\citep{Elliott:79a,Lewis:89a,Oey:08a} and can also influence weather patterns including hurricanes due to their interaction with the atmosphere. It is known that hurricanes may rapidly intensify when passing over a WCR~\citep{Hong:00a,Shay:00a,Scharroo:05a}. Additionally, the mixing behavior in a WCR differs significantly from that of the surrounding ocean and may greatly influence biological activity such as plankton blooms~\citep{Franks:86a,Biggs:92a}. The primary purpose of this paper is to present some newly observed structures and their associated transport behavior in a warm-core ring in the Gulf of Mexico. We have chosen to focus on newly observed small-scale coherent structures seen in the near surface region (that is, the ocean-atmosphere boundary) of the WCR. To study the structures in these flows, we use the technique of Lagrangian coherent structures (LCS). This technique is used to identify structures relevant to the Lagrangian transport of fluid. These structures represent barriers to transport and often reveal flow behavior that may be difficult or impossible to detect with Eulerian measures. Computing the three-dimensional LCS present in numerical simulations of a WCR has revealed a previously unobserved type of structure in the mixed layer. Due to the transversely intersection LCS structures that appear in this region, we will refer to these structures as ``checkerboard LCS''. The flow in the checkerboard LCS region is characterized by consistent stretching behavior over a long time period and a large spacial domain. A parcel of fluid placed in this region is found to elongate in the azimuthal direction and compress in the radial direction. 
To better understand the mechanisms that generate the checkerboard LCS structures, we present a simple flow model which produces similar structures. This flow model consists of both hyperbolic and shear stretching components of the flow as well as a spatially periodic perturbation. The velocity field generates large, nearly uniform stretching in the flow. The perturbation then creates localized regions of increased stretching, generating the checkerboard LCS. \section{Lagrangian coherent structures} \label{sec:LCS} The term Lagrangian coherent structures (LCS) has come to refer to a class of techniques used to identify coherent structures in aperiodic, finite time flows. In steady state or periodic systems, classical dynamical systems techniques can be used to identify hyperbolic fixed points and their stable and unstable manifolds as well as other invariant manifolds in the system. These structures may then be used to study the flow topology and the corresponding mixing and transport in the system. However, in systems with general time dependence the same techniques no longer apply. To address this shortcoming and identify coherent structures in systems with general time dependence, various methods have been proposed. One of the most popular and successful methods has been the use of the finite time Lyapunov exponent (FTLE) to identify regions of locally maximum stretching in the flow~\citep{HallerG:00a,Marsden:05g}. Intuitively, one expects that regions with qualitatively different dynamics will be separated by a thin region of very large Lagrangian stretching~\citep{HallerG:00a}. Fluid parcels that straddle the boundary between two regions will be greatly deformed and stretched over time as they separate into different regions. The notion of defining coherent structures by the stretching at their boundaries was formalized in \citet{Marsden:05g} by defining LCS as ridges in the FTLE field. 
This method of defining and computing LCS has since proven to be very effective in many cases, but it is worth noting that there are alternative definitions as well. In particular, Haller and others have developed a variational formulation for computing LCS~\citep{HallerG:11a,HallerG:11b}, there are methods for finding ``distinguished'' trajectories~\citep{WigginsS:02a,MadridJAJ:09a}, and there are methods for finding maximally invariant sets~\citep{Marsden:05b,PadbergK:09a}. In this paper, we will use LCS as defined in \citet{Marsden:05g}: ridges of the FTLE field. LCS techniques are primarily useful for determining and examining the Lagrangian transport of a system. In the past, these techniques have been used to investigate unsteady separation~\citep{HallerG:04a}, the flow over an airfoil~\citep{Mohseni:08f}, transport in jellyfish swimming and feeding~\citep{Mohseni:09d,DabiriJO:09a}, three-dimensional turbulence~\citep{Green:07a}, atmospheric transport~\citep{LeKienF:10a}, and many other applications. LCS provide a way to precisely identify the extent or boundaries of coherent structures and, correspondingly, a way to quantify their impact on transport phenomena. Crucially, LCS identify \emph{barriers to transport} and therefore reveal the structure underlying mixing and transport in a flow. LCS have been applied to many ocean flows with good results. LCS have been used to study optimal pollution mitigation~\citep{Marsden:07e} and transport in a wind-driven double gyre~\citep{Coulliette:01a}, to identify mesoscale eddies~\citep{Beron-Vera:08a}, and even to investigate the transport of oil from the Deepwater Horizon oil spill~\citep{Mezic:10a,Huntley:11a,Olascoaga:12a}. Most ocean studies using LCS have focused solely on surface flows and used two-dimensional LCS computations. 
The expectation has been that since oceanic flows are highly stratified and vertical velocities are typically orders of magnitude smaller than the horizontal velocities, two-dimensional computations are appropriate. However, the length scales in the vertical direction are also orders of magnitude smaller than those in the horizontal, leading to flow gradients in the vertical direction that may exceed those in the horizontal. Recently, researchers have begun to focus more directly on the impacts of three-dimensionality on ocean LCS, noting that three-dimensional effects may be critically important even if the vertical velocity component is small~\citep{Sulman:13a}. To compute the LCS, it is necessary to first compute the FTLE field. The most common method for doing so involves seeding a region of the flow with a grid of passive drifter particles at some initial time $t_0$ and advecting these particles with the flow field for some integration time $T$. This gives an approximation to the flow map \begin{equation} \vec{\Phi}_{t_0}^T(\vec{x}_0) = \vec{x}_0 + \int_{t_0}^{t_0+T} \vec{v}(\vec{x}(t),t) dt. \end{equation} Once the flow map has been computed, the Cauchy-Green deformation tensor is computed as \begin{equation} \Delta = \left( \dfrac{d\vec{\Phi}}{d\vec{x}_0} \right)^* \left( \dfrac{d\vec{\Phi}}{d\vec{x}_0} \right), \end{equation} where $^*$ denotes the transpose operator. This tensor contains information about the geometric deformation of the flow. The FTLE is then given by \begin{equation} \sigma_{t_0}^T(\vec{x}_0) = \dfrac{1}{\abs{T}} \ln \sqrt{\lambda_{\max}} \end{equation} where $\lambda_{\max}$ is the maximum eigenvalue of $\Delta$. Note that the integration time $T$ may be either positive or negative so for any flow there are always two sets of LCS. For $T>0$ the LCS are typically repelling structures and for $T<0$ they are attracting. Additionally, the magnitude of $T$ should be chosen so that sufficient detail is resolved in the LCS. 
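The flow map / Cauchy-Green / FTLE recipe above can be sketched in a few lines. The example below is our own illustration, not part of the WCR computation: it uses a steady linear saddle flow $\dot{x}=x$, $\dot{y}=-y$, whose flow map is known in closed form, so the FTLE should equal exactly 1 for both forward and backward integration. This makes it a convenient sanity check for an FTLE implementation.

```python
import numpy as np

def flow_map(x0, y0, T):
    """Analytic flow map of the steady saddle flow xdot = x, ydot = -y."""
    return x0 * np.exp(T), y0 * np.exp(-T)

def ftle(x0, y0, T, h=1e-6):
    """FTLE from a finite-difference Jacobian of the flow map."""
    xp, yp = flow_map(x0 + h, y0, T)
    xm, ym = flow_map(x0 - h, y0, T)
    xu, yu = flow_map(x0, y0 + h, T)
    xd, yd = flow_map(x0, y0 - h, T)
    # Jacobian dPhi/dx0 by central differences
    J = np.array([[(xp - xm) / (2 * h), (xu - xd) / (2 * h)],
                  [(yp - ym) / (2 * h), (yu - yd) / (2 * h)]])
    Delta = J.T @ J                          # Cauchy-Green deformation tensor
    lam_max = np.linalg.eigvalsh(Delta)[-1]  # maximum eigenvalue
    return np.log(np.sqrt(lam_max)) / abs(T)

# For this saddle, lambda_max = e^{2|T|}, so sigma = 1 for forward and
# backward integration alike.
print(ftle(0.3, -0.7, T=2.0))   # ~1.0
print(ftle(0.3, -0.7, T=-2.0))  # ~1.0
```

In a real computation the analytic `flow_map` would be replaced by numerical particle advection over the gridded velocity data, with one FTLE evaluation per grid point.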
As $T$ is increased, more LCS are revealed, but if $T$ is too large the complexity of the resulting structures may be difficult to interpret. Once the FTLE field has been computed, the LCS are often visualized by directly plotting the FTLE field. The LCS are defined as ridges in the FTLE field which are visible just as ridges are visible on a topographical map. Various mathematical definitions of ridges are available but this choice does not seem to greatly affect the resulting LCS. For well-defined ridges, the LCS typically permit very low or negligible flux and can be thought of as denoting barriers to fluid transport~\citep{Marsden:05g}. If desired, one may explicitly extract the ridges from the FTLE field in an additional step to obtain the LCS. In this study, we have used an efficient ridge tracing algorithm to directly compute the LCS ridge surfaces in the WCR. This method greatly speeds the LCS computations by detecting some initial points on the LCS ridges and then tracing the ridges through space. By avoiding computations away from the FTLE ridges, the algorithm reduces the computational complexity from $\mathcal{O}(\delta x^{-3})$ to $\mathcal{O}(\delta x^{-2})$ in three-dimensional domains~\citep{Lipinski:12a}. \section{Observations} \label{sec:obs} The observations presented here are based on data from the ECCO2 global ocean simulation which is publicly available at \url{http://ecco2.jpl.nasa.gov/}. We focus on the structure and transport of a warm-core ring in the Gulf of Mexico as found in the ECCO2 simulation on 1 February 2010. WCRs periodically form in the GoM when the loop current in the eastern GoM pinches off into a closed ring containing warm Caribbean water. These rings typically persist for about 7-13 months~\citep{Sturges:00} as they drift slowly across the GoM before dissipating in the western GoM~\citep{Hurlburt:80}. 
In WCRs, the flow field is largely two-dimensional, with horizontal flow speeds on the order of 1 m s$^{-1}$ and vertical speeds three to four orders of magnitude smaller. However, the horizontal length scales are on the order of 100 km while the vertical length scales are a few hundred meters. In combination, this means that flow gradients in the vertical direction may be of the same order or larger than those in the horizontal direction. Because of this, even a small vertical motion may have a large impact on the trajectories of fluid particles and the full three-dimensional flow structure must be considered when analyzing fluid transport in WCRs. \begin{figure} \centering \includegraphics[width=.8\textwidth]{figs/surfaceLCS} \caption{A two-dimensional slice of the near-surface LCS in the Gulf of Mexico as computed from the ECCO2 dataset for 1 February 2010. Attracting LCS are shown in red and repelling are shown in blue. A large WCR is present in the center of the GoM and an older ring is in the western GoM. The loop current is also clearly visible.\label{fig:surfaceLCS}} \end{figure} A view of the near surface LCS in the GoM on 1 February 2010 is shown in figure~\ref{fig:surfaceLCS}. We have computed the three-dimensional LCS in the GoM using a ridge tracking algorithm to speed computations~\citep{Lipinski:12a}. An integration time of $T=\pm4$ weeks was used. This time was chosen to reveal the major structures in the flow. In figure~\ref{fig:surfaceLCS}, the loop current is clearly visible in the eastern GoM as it enters through the Yucatan Channel and exits through the Florida Straits. A recently shed WCR is in the central GoM and an older WCR is visible in the western GoM. Below, we will focus on the single WCR in the central GoM. Figure~\ref{fig:WCRcheckerboard} shows a vertical cross section of the LCS in this WCR. There are several features of note in this figure. First, the LCS reveal a closed bottom to the WCR. 
In this part of the WCR, the attracting and repelling LCS are nearly parallel and prevent transport in or out of the eddy. This limited transport is one of the reasons WCRs persist for so long. Additionally, this closed bottom indicates that the WCR has limited influence below a certain depth (about 550 m in this case) and fluid below this depth does not get entrained or carried with the WCR. The closed bottom and finite depth of the WCR potentially allow for volumetric computations to determine precisely how much water is carried with the WCR and quantify the corresponding influence on heat, salt, and mass balances in the GoM. \begin{figure} \centering \includegraphics[width=1\textwidth]{figs/WCRcheckerboard} \caption{A vertical cross section of the LCS in a warm-core ring in the Gulf of Mexico computed from the publicly available ECCO2 dataset. The inset shows the characteristic ``checkerboard'' pattern formed by the interaction of the attracting (red) and repelling (blue) LCS. \label{fig:WCRcheckerboard}} \end{figure} Next, we note a clear difference between the LCS below about 120 m and those above, with a sharp transition between these two regions. This 120 m depth marks the bottom of the mixed layer in the WCR. Below this depth, the attracting (red) and repelling (blue) LCS are approximately aligned with one another. However, in the mixed layer, the attracting and repelling LCS intersect transversely, forming a cross-hatched or ``checkerboard'' pattern. This checkerboard pattern is a new flow structure which has not been previously reported in geophysical flows. To investigate the flow behavior within the checkerboard region we place a box of passive drifter particles in this region of the WCR and track their motion over time. The particle positions are shown after 0, 12.5, and 25 days in figure~\ref{fig:drifterbox}. The box is initially $\approx$21 km across in the latitudinal and longitudinal directions and covers a depth range of 50 m. 
As can be clearly seen in figure~\ref{fig:drifterbox}, the initial box of drifter particles is quickly stretched and wrapped around the circumference of the WCR while being compressed in the radial direction. The box very quickly becomes greatly deformed by the horizontal velocities in the flow, but very little motion occurs in the vertical direction. Although it is difficult to see in figure~\ref{fig:WCRcheckerboard}, the box is also slowly compressed in the vertical direction and pushed upwards while being sheared in the radial direction, with particles near the surface moving radially outward with respect to those below. The compression and upward motion are small effects compared to the other motions observed. The near-surface radial outflow is expected due to friction-based disruption of the cyclo-geostrophic balance that is present in much of the WCR. \begin{figure} \centering \includegraphics[width=.32\textwidth]{figs/drifter_box0} \includegraphics[width=.32\textwidth]{figs/drifter_box25} \includegraphics[width=.32\textwidth]{figs/drifter_box50} \caption{A box of drifters placed in the checkerboard LCS region of a WCR. The drifter positions are shown at times of 0, 12.5, and 25 days. The black circle shows the approximate position of the center of the checkerboard region surrounding the warm-core ring.} \label{fig:drifterbox} \end{figure} By carefully tracking the drifter particles it is possible to directly estimate the deformation caused by the flow. Since the box is quickly deformed from its initial configuration, care must be used in such computations. Here, we focus on the circumferential length of the box and the radial thickness. The circumferential length is computed as the mean length of the elongating edges of the box, computed by integrating along each of these edges. The radial thickness is computed as the average of the distance between points on the longest two edges of the upper and lower faces of the box. 
To ensure accuracy, only the middle half of these edges is used. The results of this process are plotted in figure~\ref{fig:box_size}. The length of the box increases approximately linearly for the entire 25 day time period at a rate of approximately 17.2 km day$^{-1}$. The radial thickness decreases approximately exponentially over this same period, reaching a thickness of about 1.6 km after 25 days. \begin{figure} \centering \hfill \includegraphics[width=.48\textwidth]{figs/box_length}\hfill \includegraphics[width=.48\textwidth]{figs/box_thickness}\hfill~ \caption{The deformed circumferential length and radial thickness of the drifter box shown in figure~\ref{fig:drifterbox}. The length grows linearly at a rate of about 17.2 km day$^{-1}$ while the thickness decays approximately exponentially. As shown, a fit of $D(t)=1/(c_1+c_2t^{c_3})$ was also tried for the thickness, but the exponential provided a better fit. A least squares best fit was used to determine the regression curves.} \label{fig:box_size} \end{figure} The flow deformation revealed in this analysis gives two additional insights. First, the observed stretching occurs over a relatively large region of space (an initially 21 km $\times$ 21 km $\times$ 50 m box) and time (25 days). The passive drifters that begin in this checkerboard LCS region remain in the checkerboard LCS region for the entire 25 day time period investigated and experience consistent deformation in both the circumferential length and radial thickness directions as shown in figure~\ref{fig:box_size}. Secondly, the velocity field appears to consist of three main components in the checkerboard region: (1) a large and sheared velocity component in the azimuthal direction, (2) a small radial outflow near the surface, and (3) a small compression and upward motion in the vertical direction which must be balanced by expansion in the radial direction for conservation of volume. 
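The two least-squares fits described above can be sketched as follows. The data here are synthetic, generated by us to mimic the quoted behavior (a 21 km box growing at 17.2 km day$^{-1}$, and an exponential thickness decay from 21 km to about 1.6 km over 25 days); the actual drifter output is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 25, 26)  # days

# Synthetic data mimicking the quoted behavior (NOT the actual drifter output)
length = 21.0 + 17.2 * t + rng.normal(0.0, 5.0, t.size)                 # km
thickness = 21.0 * np.exp(-0.103 * t) * (1.0 + rng.normal(0.0, 0.01, t.size))

# Linear least-squares fit for the circumferential length
growth_rate, intercept = np.polyfit(t, length, 1)

# Exponential fit for the radial thickness: fit ln D(t) = ln D0 - k t linearly
neg_k, lnD0 = np.polyfit(t, np.log(thickness), 1)
D0, k = np.exp(lnD0), -neg_k

print(f"length growth rate ~ {growth_rate:.1f} km/day")          # ~17.2
print(f"thickness after 25 days ~ {D0*np.exp(-k*25.0):.2f} km")  # ~1.6
```

Fitting the logarithm of the thickness turns the exponential model into a linear one, so the same `polyfit` call handles both regressions; a nonlinear solver would be needed for the $D(t)=1/(c_1+c_2t^{c_3})$ alternative mentioned in the caption.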
\section{Checkerboard LCS model} \label{sec:model} As discussed in the previous section, the checkerboard LCS pattern is a new and prominent feature observed in this WCR. This pattern only appears in the mixed layer and there is a sharp transition at the bottom of the mixed layer from the transversely intersecting LCS above to the parallel LCS below. We attribute this sudden change to the greatly reduced radial and vertical flow components below the mixed layer. In the mixed layer, boundary interactions and wind forcing cause unique flow characteristics which can generate the checkerboard LCS pattern. Specifically, wind forcing influences the near surface velocity field through frictional effects and disrupts the cyclo-geostrophic balance of the WCR. This leads to a pressure gradient driven radial flow component due to the elevated sea surface height of the WCR. This radial outflow component decreases quickly with depth, leading to relatively large values of shear rate in the radial flow. Combined with the shear in the much larger azimuthal velocity component, these two shear components in different planes act to create stretching that depends on the direction of time integration. When a small perturbation is added, this generates the checkerboard LCS pattern seen in the WCR. A small amount of compression in the vertical direction was also observed, but it is omitted from the model since the resulting stretching was much smaller than that due to shear effects. The model we use is a simple, non-time-dependent velocity field given by \begin{align} \dot{x} & = \dot{\gamma}_1 y, \notag \\ \dot{y} & = 0, \label{eq:model}\\ \dot{z} & = \dot{\gamma}_2 x\left[1+\varepsilon\sin\left(\dfrac{2\pi}{\lambda}z\right)\right], \notag \end{align} where $\dot{\gamma}_1$ and $\dot{\gamma}_2$ are shear rates, $\varepsilon$ is a small perturbation magnitude, and $\lambda$ is the perturbation wavelength. 
Although this flow is given in Cartesian coordinates, $x$, $y$, and $z$ are analogous to the radial, vertical, and azimuthal directions in the WCR. The $z$ dependence is periodic just as the WCR is periodic in the azimuthal direction. The periodic perturbation is associated with deviations from axisymmetry in the WCR. To more closely match the velocity magnitudes seen in the WCR, constants could be added to the velocity components in equation~\ref{eq:model}. However, constant terms have no effect on the resulting FTLE values or LCS, so they are omitted here for simplicity. To understand why this flow creates the checkerboard pattern in the LCS we consider the effect of each flow component separately. This is shown schematically in figure~\ref{fig:schematic}. Starting from an initially square domain representing the view shown in figure~\ref{fig:model_LCS}, the $\dot{x}$ component shears the square into a rhombus in opposite directions depending on whether the forward or backward flow map is considered. The $\dot{z}$ component shears the rhombus out of the $x$-$y$ plane in the $z$ direction. Perturbations to the flow then act at positions that are mapped to $z=$constant, generating larger stretching at these locations and therefore LCS. These LCS are aligned as shown in figure~\ref{fig:schematic}, creating transversely intersecting forward and backward LCS. \begin{figure} \centering \includegraphics[width=0.65\textwidth]{figs/schematic} \caption{The stretching behavior resulting from Eq. \ref{eq:model}. 
An initial square at constant $z$ is stretched into a rhombus in different directions by the forward or backward flow map based on the shear in the $\dot{x}$ flow component; the $\dot{z}$ component shears the rhombus in the $z$ direction; and the perturbation creates LCS at locations that have been mapped to $z(T)=$constant.\label{fig:schematic}} \end{figure} The velocity field of equation~\ref{eq:model} is simple enough to be integrated analytically, but the full solution for $z(t)$ is sufficiently complex that it is most instructive to examine the flow for $\varepsilon=0$ and then consider the effect of the perturbation. If $\varepsilon=0$, the flow map from $t=0$ to $t=T$ is \begin{align} x(T) &= x_0+\dot{\gamma}_1y_0T \notag\\ y(T) &= y_0 \label{eq:flowmap}\\ z(T) &= z_0 +\dot{\gamma}_2 x_0 T + \dfrac{\dot{\gamma}_1\dot{\gamma}_2 y_0 T^2}{2}. \notag \end{align} Thus, material which is mapped to $z(T)$=constant and therefore acted on uniformly by the perturbations originates on the plane defined by \begin{equation} z(T) = z_0 +\dot{\gamma}_2 T x_0 + \dfrac{\dot{\gamma}_1\dot{\gamma}_2 T^2}{2}y_0 \end{equation} where $(x_0,y_0,z_0)$ are the starting coordinates. The perturbation creates compression and expansion in $z(T)$ at a wavelength of $\lambda$. Thus, we expect the resulting LCS to be spaced at intervals of \begin{equation} \Delta x=\dfrac{\lambda}{\dot{\gamma}_2 T},~~~\Delta y=\dfrac{2\lambda}{\dot{\gamma}_1 \dot{\gamma}_2 T^2},~~~\Delta z = \lambda, \end{equation} and a slope in the $x$-$y$ plane of \begin{equation} \dfrac{\Delta y}{\Delta x} = \dfrac{2}{\dot{\gamma}_1 T}. \end{equation} These results are valid for $\varepsilon\ll 1$. Larger values of $\varepsilon$ increase the spacing between the LCS, but do not change the slope. To compare the model velocity field to the checkerboard LCS seen in the WCR we must first estimate the parameters $\dot{\gamma}_1$, $\dot{\gamma}_2$, and $\lambda$ in the WCR. 
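As a quick numerical sanity check of our own, the spacing and slope formulas can be evaluated directly with the parameter values estimated for the WCR in this section ($\dot{\gamma}_1\approx3.33\times10^{-4}$ s$^{-1}$, $\dot{\gamma}_2\approx1.15\times10^{-5}$ s$^{-1}$, $\lambda\approx2.95\times10^{5}$ m, $T=4$ weeks):

```python
# Parameter values estimated for the WCR in the text
gamma1 = 3.33e-4          # vertical shear rate of the radial velocity, 1/s
gamma2 = 1.15e-5          # radial shear rate of the azimuthal velocity, 1/s
lam    = 2.95e5           # perturbation wavelength, m
T      = 4 * 7 * 86400.0  # integration time of 4 weeks, s

dx    = lam / (gamma2 * T)                    # horizontal LCS spacing, m
dy    = 2.0 * lam / (gamma1 * gamma2 * T**2)  # vertical LCS spacing, m
slope = 2.0 / (gamma1 * T)                    # dy/dx in the x-y plane

print(f"horizontal spacing ~ {dx/1e3:.1f} km")       # ~10.6 km
print(f"vertical spacing   ~ {dy:.1f} m")            # ~26 m
print(f"slope              ~ {slope*1e3:.2f} m/km")  # ~2.5 m/km
```

These values reproduce, to rounding, the model predictions quoted later in the comparison with the WCR measurements.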
$\dot{\gamma}_1$ corresponds to the shear rate of the radial velocity component with respect to the vertical direction. An examination of the azimuthally averaged WCR shows that the radial outflow component has a maximum near the surface of about $2.5\times10^{-2}$ m s$^{-1}$ and decreases to zero at a depth of about 75 m so $\dot{\gamma}_1\approx3.33\times10^{-4}$ s$^{-1}$. $\dot{\gamma}_2$ corresponds to the shear rate of the azimuthal velocity with respect to the radius in the checkerboard region. In the same way, we find that $\dot{\gamma}_2\approx1.15\times 10^{-5}$ s$^{-1}$ in the checkerboard region. Despite the fact that the radial flow velocity in the WCR is much smaller than the azimuthal component, $\dot{\gamma}_1$ is an order of magnitude larger than $\dot{\gamma}_2$ due to the smaller length scales in the vertical direction. For this reason, the shear in the WCR radial outflow is a critical component for generating the checkerboard LCS pattern. Finally, we determine the parameters of the model associated with the sinusoidal perturbation. The value of $\varepsilon$ does not significantly affect the resulting LCS. We use $\varepsilon=0.01$, corresponding to a 1\% perturbation of the azimuthal velocity. An examination of the WCR shape reveals that it is slightly elliptical. This is common in WCRs~\citep{Cushman:85a} and generates a perturbation wavelength of 1/2 the WCR circumference. The circumference of the checkerboard region is about $5.9\times10^{5}$ m, giving a value of $\lambda=2.95\times10^{5}$ m. Computing the LCS in this model with the same integration time that was used for the WCR ($T=\pm4$ weeks) results in the checkerboard LCS pattern shown in figure~\ref{fig:model_LCS}. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{figs/model_LCS} \caption{The LCS resulting from Eq. \ref{eq:model}. The LCS show a checkerboard pattern which is similar to that seen in the WCR. 
The red vertical curves are attracting LCS and the blue horizontal curves are repelling LCS. \label{fig:model_LCS}} \end{figure} To directly compare the checkerboard LCS resulting from the model and the WCR, we examine two metrics: the spacing between the LCS and the slope of the LCS. For the WCR, the LCS have a slope of around 2-3 m km$^{-1}$, a horizontal spacing of about 8-15 km, and a vertical spacing of 20-40 m depending on precisely where these measurements are made. For the model velocity field of equation~\ref{eq:model}, the LCS have a slope of 2.5 m km$^{-1}$, a horizontal spacing of 10.6 km, and a vertical spacing of 26.6 m. These results all lie within the range of values measured in the WCR and show very good agreement between the model and the checkerboard region in the WCR. \section{Conclusions} A Lagrangian coherent structure (LCS) analysis of a warm-core ring (WCR) in the Gulf of Mexico has revealed some previously unobserved transport structures. In the WCR, the vortex core is surrounded by a series of ``checkerboard'' LCS in the mixed layer which form a cross-hatched pattern of transversely intersecting LCS when viewed in the $r$-$z$ plane. Fluid in this region undergoes consistent stretching behavior. As with most ocean flows, there is very little vertical transport in the WCR, and a box of passive drifter particles placed in the checkerboard region is elongated in the circumferential direction, becoming wrapped around the WCR, while becoming thinner in the radial direction. Such uniform stretching behavior does not admit transport across the checkerboard LCS region, contributing to the long life of WCRs. A detailed investigation of the Lagrangian stretching behavior in the checkerboard region gives rise to an analytical model which reproduces the checkerboard LCS. 
The most important parameters of the model are the vertical shear rate of the radial velocity component, the horizontal shear rate of the azimuthal velocity, and the principal perturbation wavelength of deviations from axisymmetry. These parameters were estimated directly from the WCR velocity field and used to compute the LCS generated by the model. The LCS which are produced by this analytical model show very good agreement with those seen in the WCR. Transversely intersecting LCS are produced by the model with slopes and spacings that match those seen in the WCR. It is important to note that although the velocity field in the WCR is largely two-dimensional, the length scales in the vertical direction are much smaller than those in the horizontal. For this reason, gradients in the vertical direction can be of the same order or larger than those in the horizontal. The model used to produce the checkerboard LCS here relies on three-dimensional stretching behavior which cannot occur in a two-dimensional domain. Even though the vertical motion is ignored, the vertical shear is critically important. Additionally, the small radial outflow near the surface of the WCR is absolutely critical in producing the transversely intersecting LCS pattern seen here. This radial flow is thought to occur primarily because frictional effects in the mixed layer disrupt the cyclo-geostrophic balance of the WCR and enable a pressure driven radial outflow. Below the mixed layer, there is essentially no radial flow component and therefore the character of the LCS changes to include only parallel LCS. The checkerboard LCS structures seen in this study have been shown to be associated with the shear present in this region of the WCR. In WCRs and other well developed mesoscale ocean eddies, flow tends to be well ordered and laminar and the lack of strong updrafts and overturning flow in WCRs limits the available mechanisms for mixing and homogenization. 
The presence of the checkerboard LCS reveals one region in the WCR where shear greatly affects the Lagrangian dynamics of fluid motion. A study of drifters placed in this region revealed that a large box-shaped domain is quickly stretched into a long, thin filament around the ring circumference. Small-scale mixing caused by breaking waves, wind gusts, biological interactions, and small-scale turbulence is always present in the ocean and the stretching and shearing in the checkerboard region creates opportunities for mixing and homogenization within the WCR on small scales while minimizing fluid exchange with the surrounding ocean. The enhanced shearing, mixing, and homogenization within the checkerboard region likely has effects on biological systems. It is known that warm-core rings influence phytoplankton blooms~\citep{Franks:86a,Biggs:92a} and fish distributions~\citep{Olson:85a} in the ocean. In fact, the advective transport in this region will impact the distribution of nutrients, pollutants, temperature, salt, oxygen, etc. within and around the WCR.
\section{Introduction} Einstein's general relativity (GR), since its inception in 1916, has passed all experimental tests \cite{Weinberg:1972kfs}. To move towards the correct quantum theory of gravity, it is important to test which variations of classical GR fail the experimental tests or have some theoretical inconsistencies. One such variation of GR which has been widely studied is the Fierz-Pauli (FP) theory of massive gravity \cite{Fierz:1939ix}. In a scalar or vector field theory a massive particle exchange gives rise to a $(1/r)e^{-m_g r}$ Yukawa potential which goes to the $1/r$ potential in the $m_g\rightarrow 0$ limit. The FP theory of the massive graviton has the peculiarity that in the zero graviton mass limit the Lagrangian goes smoothly to the linearized Einstein-Hilbert (EH) gravity theory, while the graviton propagator has additional contributions from the scalar modes of the metric which do not decouple in the zero graviton mass limit. As a result, the Newtonian potential in the zero-mass limit of FP theory is a factor $(4/3)$ larger than the prediction from the EH theory (which of course agrees with the Newtonian potential). This peculiarity of the FP theory, in which the action goes to the EH theory in the zero mass limit but the graviton propagator does not, was first pointed out by van Dam and Veltman \cite{vanDam:1970vg} and independently by Zakharov \cite{Zakharov:1970cc}, and this feature, which arises in most massive gravity theories \cite{Hinterbichler:2011tt,deRham:2014zqa, Mitsou:2015yfa,Joyce:2014kja}, is called the van Dam-Veltman-Zakharov (vDVZ) discontinuity (however, in the nonlinear FP theory, a proper decoupling limit will display the vDVZ discontinuity already in the action). Experimental constraints on the graviton mass are listed in \cite{deRham:2016nuf}. 
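The smoothness of the massless limit at the level of the potential's radial profile can be made concrete with a short numerical check (our illustration, not from the paper): $e^{-m_g r}/r$ converges pointwise to $1/r$ as $m_g\to0$, so the vDVZ discontinuity lives entirely in the constant $4/3$ prefactor coming from the propagator's tensor structure, not in the shape of the potential.

```python
import math

def yukawa(r, m):
    """Yukawa potential e^{-m r}/r (coupling constants set to 1)."""
    return math.exp(-m * r) / r

r = 2.0
# The radial profile converges smoothly to the Newtonian 1/r as m -> 0 ...
for m in [1.0, 0.1, 0.01, 0.001]:
    print(m, yukawa(r, m))
print("Newtonian:", 1.0 / r)

# ... whereas in Fierz-Pauli theory the one-graviton exchange amplitude
# carries an extra factor of 4/3 relative to linearized GR for ANY m > 0,
# and it is that constant prefactor which fails to vanish in the limit.
```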
It is of interest to ask if, instead of a graviton exchange diagram, we consider a one-graviton vertex process like gravitational radiation, whether there is a difference in the result between the predictions of GR and the predictions of massive gravity theories in the $m_g\rightarrow 0$ limit and whether a manifestation of the vDVZ discontinuity can be seen in this phenomenon. GR in the weak field limit can be treated as a quantum field theory of spin-2 fields in Minkowski space \cite{Feynman:1996kb, Weinberg:1964ew,Veltman:1975vx, Donoghue:2017pgk,Kuntz:2019zef}. Any classical gravity interaction like the Newtonian potential between massive bodies or the bending of light by a massive body can be described by a tree level graviton exchange diagram. The result of the tree level diagrams should match the weak field classical GR results. The derivation of gravitational radiation from binary stars as a single vertex Feynman diagram of massless graviton emission from a classical source has been performed in \cite{Mohanty:1994yi,Mohanty:2020pfa} and the results match the result of Peters and Mathews \cite{Peters:1963ux}, who used the quadrupole formula of classical GR. The first evidence of gravitational wave (GW) radiation was obtained from precision measurements of the Hulse-Taylor binary system \cite{Hulse:1974eb,Taylor:1982zz,Weisberg:1984zz}. The orbital period loss of the compact binary system confirms Einstein's GR \cite{Peters:1963ux} to $\sim 0.1 \% $ accuracy \cite{Weisberg:2016jye}. Following the Hulse-Taylor binary there have been other precision measurements from compact binary systems \cite{Kramer:2006nb,Antoniadis:2013pzd,Freire:2012mg}. Binary stars can also radiate other ultra-light fields like axions or gauge bosons. The angular frequency of pulsar binaries is $\Omega \sim 10^{-19}$ eV and particles with a mass lower than $\Omega$ can be radiated in the same way as gravitational waves. 
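The quoted scale $\Omega\sim10^{-19}$ eV is a straightforward unit conversion; the back-of-envelope sketch below (ours) uses the Hulse-Taylor orbital period of roughly 7.75 hours.

```python
import math

HBAR_EV_S = 6.582e-16   # hbar in eV s
P = 7.75 * 3600.0       # Hulse-Taylor orbital period in seconds (~7.75 h)

omega = 2.0 * math.pi / P     # orbital angular frequency, rad/s
omega_ev = HBAR_EV_S * omega  # the same frequency expressed as an energy, eV

print(f"Omega ~ {omega_ev:.2e} eV")  # ~1.5e-19 eV
```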
The Feynman diagram method is pedagogically simple to generalize to the radiation of scalars and gauge bosons. A calculation of the radiation of ultra-light scalars \cite{Mohanty:1994yi}, axions \cite{Poddar:2019zoe}, and gauge bosons \cite{Poddar:2019wvu} has been performed with this method and compared with experimental observations of binary pulsars (or pulsar-white dwarf binaries). This enables the couplings of ultra-light dark matter \cite{Hu:2000ke, Hui:2016ltb}, predicted to be in the mass range $\sim 10^{-21}-10^{-22}$ eV, to be probed with binary pulsar timing measurements. In this paper, we study massive graviton theories with a single vertex process, namely graviton radiation from binary stars, and we consider three models: (1) the Fierz-Pauli ghost-free theory which has a vDVZ discontinuity in the propagator, (2) a modification of the Fierz-Pauli theory where there is a cancellation between the ghost and the scalar degrees of freedom so that there is no vDVZ discontinuity \cite{Visser:1997hd,Finn:2001qi,Gambuti:2020onb,Gambuti:2021meo}, and (3) the Dvali-Gabadadze-Porrati (DGP) theory \cite{ Dvali:2000hr, Dvali:2000rv,Dvali:2000xg} which is ghost-free but whose extra scalar degree of freedom gives rise to the vDVZ discontinuity. The mass term in DGP gravity is momentum dependent, which serves the purpose of suppressing the long range interactions in a virtual graviton exchange process. For real gravitons the graviton mass is tachyonic. We compare our results with observations and put limits on the graviton mass allowed in each of these theories. We also compare our results with the earlier classical field calculations in massive gravity theories \cite{VanNieuwenhuizen:1973qf, Will:1997bb, Larson:1999kg,Finn:2001qi, deRham:2012fw, Shao:2020fka}. There are several existing bounds on the graviton mass from tests of the Yukawa potential, from modified dispersion relations, from fifth force constraints, etc. (see \cite{deRham:2016nuf} for a review). 
In particular, considering Vainshtein screening at the non-linear scales of the massive theories of gravity, measurements have already ruled out a range of $m_g$ below the Vainshtein threshold in various systems. For example, from the Lunar Laser ranging experiments for the Earth-Moon system, the graviton mass range $10^{-32}$ eV $<m_g< 10^{-20}$ eV is ruled out \cite{Dvali:2002vf}. For any theory containing the cubic Galileon in the decoupling limit (i.e. the Vainshtein screened regime), from the Hulse-Taylor pulsar the mass range $10^{-27}$ eV $< m_g < 10^{-24}$ eV is ruled out \cite{deRham:2012fw}. In this paper, we investigate the complementary regime, i.e. the unscreened linear regime and, hence the mass ranges greater than the Vainshtein threshold value for the binary pulsar systems. This paper is organized as follows. In Section \ref{FP} we discuss the Fierz-Pauli theory and derive the formula for energy loss by graviton radiation using the Feynman diagram method. In Section \ref{FPa} we do the same study for the modified FP theory without the vDVZ discontinuity and in Section \ref{DGP} we study the DGP theory. In Section \ref{Observations}, we compare the results with observations from the Hulse-Taylor binary (PSR B1913+16) and pulsar white dwarf binary (PSR J1738+0333) and put limits on the graviton mass for each of the massive gravity theories discussed. We also discuss the limits of applicability of the perturbation theory from the Vainshtein criterion and the corresponding limits on the range of graviton mass established from binary stars. In Section \ref{Conclusions}, we summarise the results and discuss future directions. In Appendix \ref{appendi}, we give the detailed derivation of the Feynman diagram method of calculating gravitational radiation from compact binaries in GR for comparison with massive gravity theory results discussed in this paper. 
\section{Fierz-Pauli massive gravity theory} \label{FP} The Fierz-Pauli theory \cite{Fierz:1939ix} is described by the action \begin{eqnarray} S&=&\int d^4x\Big[ -\frac{1}{2} (\partial_\mu h_{\nu \rho})^2 + \frac{1}{2} (\partial_\mu h)^2 - (\partial_\mu h)(\partial^\nu h^\mu_\nu)+(\partial_\mu h_{\nu \rho} )(\partial^\nu h^{\mu \rho})\nonumber\\ &&\quad \quad \quad\quad+\frac{1}{2}m^2_g\Big(h_{\mu\nu}h^{\mu\nu}-h^2\Big)+\frac{\kappa}{2}h_{\mu\nu}T^{\mu\nu} \Big] \nonumber\\ &=&\int d^4x \left[ \frac{1}{2} h_{\mu \nu} {\cal E}^{\mu \nu \alpha \beta}h_{\alpha \beta} +\frac{1}{2}m^2_g h_{\mu \nu} (\eta^{\mu (\alpha } \eta^{\beta) \nu}- \eta^{\mu \nu}\eta^{\alpha \beta})h_{\alpha \beta}+ \frac{\kappa}{2} h_{\mu \nu }T^{\mu \nu} \right], \label{eq:FP_action} \end{eqnarray} where the operator $ {\cal E}^{\mu \nu \alpha \beta}$ is given in Eq.\ref{kineticOp}. The mass term breaks the gauge symmetry $h_{\mu \nu} \rightarrow h_{\mu \nu} - \partial_\mu \xi_\nu -\partial_\nu \xi_\mu$. We will assume that the energy-momentum tensor is conserved, $\partial_\mu T^{\mu \nu}=0$. The equation of motion from Eq.\ref{eq:FP_action} is \begin{eqnarray} \left( \Box +m_g^2 \right) h_{\mu \nu} -\eta_{\mu \nu} \left(\Box +m_g^2\right) h - \partial_\mu \partial^\alpha h_{\alpha \nu} - \partial_\nu \partial^\alpha h_{\alpha \mu} + \eta_{\mu \nu} \partial^\alpha \partial^\beta h_{\alpha \beta}+ \partial_\mu \partial_\nu h = -\kappa T_{\mu \nu}. \nonumber\\ \label{eom1} \end{eqnarray} Taking the divergence of Eq.\ref{eom1} we have \begin{equation} m_g^2 \left( \partial^\mu h_{\mu \nu} -\partial_\nu h \right)=0. \label{eom2} \end{equation} These are 4 constraint equations which reduce the independent degrees of freedom of the graviton from 10 to 6. Using Eq.\ref{eom2} in Eq.\ref{eom1} we obtain \begin{equation} \Box h_{\mu \nu} -\partial_\mu \partial_\nu h +m_g^2\left(h_{\mu \nu} -\eta_{\mu \nu}h\right)= -\kappa T_{\mu \nu}. 
\end{equation} Taking the trace of this equation we obtain the relation \begin{equation} h=\frac{\kappa }{3 m_g^2} T. \end{equation} Therefore the trace $h$ is not a propagating mode but is determined algebraically from the trace of the stress tensor. This is the ghost mode, as the kinetic term for $h$ in Eq.\ref{eom1} appears with the wrong sign. Therefore in the Fierz-Pauli theory the ghost mode does not propagate. The number of independent propagating degrees of freedom of the Fierz-Pauli theory is therefore 5. These are 2 tensor modes, 2 vector degrees of freedom which do not couple to the energy-momentum tensor and 1 scalar which couples to the trace of the energy-momentum tensor. The propagator in the FP theory is given formally by \begin{equation} \left[ {\cal E}^{\mu \nu \alpha \beta}+m_g^2 \left(\eta^{\mu (\alpha } \eta^{\beta) \nu}- \eta^{\mu \nu}\eta^{\alpha \beta} \right)\right] D^{(m)}_{\alpha \beta \rho \sigma}(x-y)=\delta^\mu_{(\rho}\delta^\nu_{\sigma)}\delta^4(x-y). \label{Dm1} \end{equation} Going to momentum space ($\partial_\mu \rightarrow i k_\mu$) we can find $D^{(m)}_{\alpha \beta \rho \sigma}(k)$ from Eq.\ref{Dm1}. The propagator for the Fierz-Pauli massive graviton turns out to be \begin{equation} D^{(m)}_{\alpha \beta \rho \sigma}(k)= \frac{1}{-k^2 +m_g^2} \left (\frac{1}{2} (P_{\alpha \rho} P_{\beta \sigma} + P_{\alpha \sigma }P_{\beta \rho}) - \frac{1}{3} P_{\alpha \beta}P_{\rho \sigma}\right), \label{Dm2} \end{equation} where \begin{equation} P_{\alpha \beta}\equiv\eta_{\alpha \beta}- \frac{k_\alpha k_\beta}{m_g^2}. \end{equation} In tree level processes where there is a graviton exchange between conserved currents, the amplitude is of the form \begin{equation} {\cal A}_{FP} =\frac{\kappa^2}{4} T^{\alpha \beta} D^{(m)}_{\alpha \beta \mu \nu}T^{\prime \mu \nu}. \label{ampFP} \end{equation} The momentum dependent terms will vanish due to conservation of the stress tensor, $k_\mu T^{\mu \nu}= k_\nu T^{\mu \nu}=0$. 
Hence, for tree level calculations one may drop the momentum dependent terms in Eq.\ref{Dm2} and the propagator for the FP theory may be written as \begin{equation} D^{(m)}_{\mu \nu \alpha \beta }(k)= \frac{1}{-k^2 +m_g^2} \left (\frac{1}{2} (\eta_{\alpha \mu} \eta_{\beta \nu} + \eta_{\alpha \nu }\eta_{\beta \mu}) - \frac{1}{3} \eta_{\alpha \beta}\eta_{\mu \nu} + (k{\rm -dependent\, terms}) \right). \label{Dm3} \end{equation} When the graviton is treated as a quantum field, the Feynman propagator is defined as in the massless theory Eq.\ref{prop2}, \begin{eqnarray} D^{(m)}_{\mu \nu \alpha \beta}(x-y)&=& \langle 0 | T (\hat {h}_{\mu \nu}(x) \hat {h}_{\alpha \beta}(y))|0\rangle\nonumber\\ &=&\int \frac{d^4 k}{(2 \pi)^4}\frac{1}{-k^2 + m_g^2 + i \epsilon} e^{i k(x-y)} \sum_{\lambda} \epsilon_{\mu\nu}^\lambda(k)\epsilon_{ \alpha\beta}^{*\lambda}(k). \label{Dm4} \end{eqnarray} Comparing Eq.\ref{Dm3} and Eq.\ref{Dm4} we see that the polarisation sum for the FP massive gravity theory can be written as \begin{equation} \sum_{\lambda} \epsilon_{\mu\nu}^\lambda(k)\epsilon_{ \alpha\beta}^{*\lambda}(k)= \frac{1}{2} (\eta_{\mu \alpha} \eta_{\nu \beta} + \eta_{\nu \alpha }\eta_{\mu \beta }) - \frac{1}{3} \eta_{\alpha \beta}\eta_{\mu \nu} + (k{\rm -dependent\, terms}). \label{polsumFP} \end{equation} In processes where there is graviton emission from an external leg, as in the case of gravitational wave radiation from a classical current, the amplitude squared will have the form \begin{equation} |{\cal M}|^2 = \left(\frac{\kappa^2}{4} \right )\sum_\lambda | \epsilon_{\mu\nu}^\lambda(k) T^{\mu\nu}(k^\prime)|^2= \left(\frac{\kappa^2}{4} \right ) \sum_\lambda \epsilon_{\mu\nu}^\lambda(k)\epsilon_{ \alpha\beta}^{*\lambda}(k) T^{\mu\nu}(k^\prime) T^{*\alpha \beta}(k^\prime). 
\end{equation} Since $T^{\mu \nu}$ is a conserved current, the momentum dependent pieces in the polarisation sum will give zero and we can drop them from Eq.\ref{polsumFP} for the calculations of diagrams with graviton emission from external legs, as we will do in this paper. We see that when the propagator Eq.\ref{prop1} and polarisation sum Eq.\ref{app3} of the massless graviton theory are compared with the corresponding quantities Eq.\ref{Dm3} and Eq.\ref{polsumFP}, the massive theory differs from the massless theory even in the $m_g\rightarrow 0$ limit. There is an extra contribution of $(1/6) T^* T^\prime$ to the amplitude Eq.\ref{ampFP} in the FP theory. This is the contribution of the scalar degree of freedom of $g_{\mu \nu}$ which does not decouple in the $m_g\rightarrow 0$ limit. Consider the Newtonian potential between two massive bodies. The amplitude for the diagram with one graviton exchange in GR is \begin{equation} {\cal A}_{GR}= \frac{\kappa^2}{4} T^{\mu \nu} D^{(0)}_{\mu \nu \alpha \beta}(k)T^{\prime \alpha \beta}\,. \label{amp1} \end{equation} The stress tensors for massive bodies at rest in a given reference frame are of the form $T^{\mu \nu}= {\rm diag}(M_1, 0,0,0)$ and $T^{\prime \alpha \beta}= {\rm diag}(M_2, 0,0,0)$, and the massless graviton propagator in GR is Eq.\ref{prop1}. The potential derived from Eq.\ref{amp1} is of the usual Newtonian form \begin{eqnarray} V_{GR}&= &\frac{\kappa^2}{4}\int\frac{d^3k}{(2\pi)^3} e^{i k\cdot r}\frac{1}{-k^2} \left(T_{\mu \nu}-\frac{1}{2} \eta_{\mu \nu} T^{\alpha}_{\alpha}\right) T^{\prime \mu \nu}\nonumber\\ &=& \frac{G M_1 M_2}{r}, \end{eqnarray} where $\kappa=\sqrt{32\pi G}$ and $G$ stands for the universal gravitational constant. 
On the other hand, in the Fierz-Pauli theory the one graviton exchange amplitude Eq.\ref{ampFP} is \begin{equation} {\cal A}_{FP} = \frac{\kappa^2}{4}\frac{1}{-k^2+m_g^2} \left(T_{\mu \nu}-\frac{1}{3} \eta_{\mu \nu} T^{\alpha}_{\alpha}\right) T^{\prime \mu \nu}, \end{equation} and the gravitational potential between two massive bodies in the FP theory is \begin{eqnarray} V_{FP}&= &\frac{\kappa^2}{4}\int\frac{d^3k}{(2\pi)^3} e^{i k\cdot r}\frac{1}{-k^2+m_g^2} \left(T_{\mu \nu}-\frac{1}{3} \eta_{\mu \nu} T^{\alpha}_{\alpha}\right) T^{\prime \mu \nu}\nonumber\\ &=& \left( \frac{4}{3}\right) \frac{G M_1 M_2}{r}e^{-m_g r}. \label{VFP} \end{eqnarray} The FP theory gives a Yukawa potential as expected; however, in the $m_g \rightarrow 0$ limit the gravitational potential between massive bodies in the FP theory is a factor $4/3$ larger than the Newtonian potential arising from GR. This is ruled out by solar system tests of gravity \cite{Talmadge:1988qz} even in the $m_g\rightarrow 0$ limit. We note here that the bending of light by massive bodies is unaffected (in the $m_g\rightarrow 0$ limit) as the stress tensor for photons, $T^\mu_\nu={\rm diag}(\omega,0,0,-\omega)$, is traceless and the scattering amplitudes obey ${\cal A}_{FP}(m_g\rightarrow 0)= {\cal A}_{GR}$. Experimental observations \cite{Fomalont:2009zg} of the bending of radio waves by the Sun match GR to 1\%. The two observations together imply that the extra factor of $(4/3)$ in the Newtonian potential of the FP theory cannot be absorbed by redefining $G$. The fact that the action of the FP theory Eq.\ref{eq:FP_action} goes to the Einstein-Hilbert action Eq.\ref{EH} in the $m_g \rightarrow 0$ limit while the propagator Eq.\ref{Dm3} does not go to the massless form Eq.\ref{prop1} is what is called the vDVZ discontinuity, pointed out by van Dam and Veltman \cite{vanDam:1970vg} and Zakharov \cite{Zakharov:1970cc}. 
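The origin of the $4/3$ factor can be exhibited in a few lines of arithmetic: contracting static stress tensors with the $-\frac{1}{2}\eta\eta$ trace structure of the massless propagator versus the $-\frac{1}{3}\eta\eta$ structure of the FP propagator. The sketch below is only a numerical cross-check of this contraction (the masses are illustrative, not physical values):

```python
# Contract static stress tensors with the GR (1/2-trace) and FP (1/3-trace)
# propagator numerators to exhibit the 4/3 vDVZ factor. Masses illustrative.
eta = [[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, -1]]  # mostly-minus metric

def static_T(M):
    """Stress tensor of a body of mass M at rest: T^{mu nu} = diag(M,0,0,0)."""
    T = [[0.0] * 4 for _ in range(4)]
    T[0][0] = M
    return T

def contract(T, Tp, c):
    """(T_{mu nu} - c * eta_{mu nu} * tr T) T'^{mu nu}, indices lowered by eta."""
    trT = sum(eta[m][n] * T[m][n] for m in range(4) for n in range(4))
    amp = 0.0
    for m in range(4):
        for n in range(4):
            T_low = eta[m][m] * eta[n][n] * T[m][n]  # diagonal eta simplifies lowering
            amp += (T_low - c * eta[m][n] * trT) * Tp[m][n]
    return amp

M1, M2 = 2.0, 3.0                                    # illustrative masses
A_GR = contract(static_T(M1), static_T(M2), 0.5)     # massless graviton exchange
A_FP = contract(static_T(M1), static_T(M2), 1.0 / 3) # FP exchange, m_g -> 0
print(A_FP / A_GR)  # -> 1.3333... : the vDVZ factor 4/3
```

The ratio is independent of the masses, which is why the mismatch cannot be absorbed into a redefinition of $G$ once light bending is also measured.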
It has been pointed out by Vainshtein \cite{Vainshtein:1972sx, Babichev:2013usa} that the linear FP theory breaks down at distances much larger than the Schwarzschild radius $R_s=2GM$, below which linearised GR is no longer valid ($\kappa h_{\mu \nu} \sim 1$ below this distance). The scalar mode in the FP theory becomes strongly coupled with decreasing $m_g$, and the minimum radius from a massive body at which the linearised FP theory is valid is called the Vainshtein radius, given by $R_V = (R_s /m_g^4)^{1/5}$. We will discuss the Vainshtein radii of the different theories of gravity discussed in this paper, and how this consideration limits the bounds on $m_g$ from binary systems derived in this paper, in Section \ref{Vainshtein}. \subsection{Graviton radiation from binaries in Fierz-Pauli theory} \begin{figure} \centering \includegraphics[width=4.0in,angle=360]{feyn.png} \caption{Emission of a graviton from a classical source.} \label{fig:feyn} \end{figure} We consider the graviton radiation from the compact binary systems classically. The pictorial representation of graviton emission from a classical source is shown in FIG.\ref{fig:feyn}. The classical graviton current $T^{\mu\nu}$ is determined from the Keplerian orbit and the interaction vertex is $\frac{1}{2}\kappa h_{\mu\nu}T^{\mu\nu}$, where $h_{\mu\nu}$ is the graviton field and $\kappa=\sqrt{32\pi G}$. Here we use the linearized gravity formulation, extended by the nonzero graviton mass term of Eq.\ref{eq:FP_action}, to calculate the energy loss of a compact binary system due to graviton emission. 
The emission rate of gravitons from the interaction Lagrangian between gravity and the source $(\frac{1}{2}\kappa h_{\mu\nu}T^{\mu\nu})$ is given by \begin{equation} d\Gamma= \frac{\kappa^2}{4}\sum_{\lambda} |T_{\mu\nu}(k^\prime)\epsilon^{\mu\nu}_\lambda(k)|^2 2\pi \delta(\omega-\omega^\prime)\frac{d^3k}{(2\pi)^3}\frac{1}{2\omega}, \label{eq:1} \end{equation} where $T_{\mu\nu}(k^\prime)$ is the classical graviton current in momentum space. Expanding the modulus squared in Eq.\ref{eq:1}, we can write \begin{equation} d\Gamma=\frac{\kappa^2}{8(2\pi)^2}\sum_{\lambda}\Big(T_{\mu\nu}(k^\prime)T^*_{\alpha\beta}(k^\prime)\epsilon^{\mu\nu}_\lambda(k)\epsilon^{*\alpha\beta}_\lambda(k)\Big)\frac{d^3k}{\omega}\delta(\omega-\omega^\prime). \label{eq:2} \end{equation} Using the polarization sum of the Fierz-Pauli theory Eq.\ref{polsumFP}, this expression becomes \begin{eqnarray} d\Gamma&=&\frac{\kappa^2}{8(2\pi)^2}\int \Big[T_{\mu\nu}(k^\prime)T^*_{\alpha\beta}(k^\prime)\Big]\Big[\frac{1}{2}(\eta^{\mu\alpha}\eta^{\nu\beta}+\eta^{\mu\beta}\eta^{\nu\alpha}-\eta^{\mu\nu}\eta^{\alpha\beta})+\frac{1}{6}\eta^{\mu\nu}\eta^{\alpha\beta}\Big]\frac{d^3k}{\omega}\delta(\omega-\omega^\prime)\,.\nonumber\\ \end{eqnarray} The extra $(1/6)\eta^{\mu\nu}\eta^{\alpha\beta}$ term compared to the massless graviton case is the contribution of the scalar mode in the FP theory. Simplifying, we obtain \begin{eqnarray} d\Gamma&=&\frac{\kappa^2}{8(2\pi)^2}\int \Big[|T_{\mu\nu}(k^\prime)|^2-\frac{1}{3}|T^{\mu}{}_{\mu}(k^\prime)|^2\Big]\delta(\omega-\omega^\prime)\omega \Big(1-\frac{m^2_{g}}{\omega^2}\Big)^\frac{1}{2}d\omega d\Omega_k, \label{eq:4} \end{eqnarray} where we have used $d^3k=k^2dk\, d\Omega_k$ and the dispersion relation $k^2=\omega^2-m^2_g$. 
From the emission rate we can calculate the rate of energy loss due to massive graviton emission, which is \begin{equation} \frac{dE}{dt}=\frac{\kappa^2}{8(2\pi)^2}\int \Big[|T_{\mu\nu}(k^\prime)|^2-\frac{1}{3}|T^{\mu}{}_{\mu}(k^\prime)|^2\Big]\delta(\omega-\omega^\prime)\omega^2 \Big(1-\frac{m^2_{g}}{\omega^2}\Big)^\frac{1}{2}d\omega d\Omega_k. \label{eq:5} \end{equation} For the massive graviton, the dispersion relation is \begin{equation} |\textbf{k}|^2=\omega^2\Big(1-\frac{m^2_g}{\omega^2}\Big). \label{eq:a1} \end{equation} Hence, the unit vector along the momentum direction of the graviton is $\hat{k^i}=\frac{k^i}{\omega\sqrt{1-\frac{m^2_g}{\omega^2}}}$. Using the relation $k_\mu T^{\mu\nu}=0$ and Eq.\ref{eq:a1}, we can write the $T_{00}$ and $T_{i0}$ components of the stress tensor in terms of $T_{ij}$ as follows: \begin{equation} T_{0j}=-\sqrt{1-\frac{m^2_g}{\omega^2}}\hat{k^i}T_{ij},\hspace{0.5cm} T_{00}=\Big(1-\frac{m^2_g}{\omega^2}\Big)\hat{k^i}\hat{k^j}T_{ij}. \end{equation} Hence, we can write \begin{equation} \Big[|T_{\mu\nu}(k^\prime)|^2-\frac{1}{3}|T^{\mu}{}_{\mu}(k^\prime)|^2\Big]\equiv {\Lambda_{ij,lm}}T^{ij*}T^{lm}, \end{equation} where \begin{equation} {\Lambda_{ij,lm}}=\Big[\delta_{il}\delta_{jm}-2\Big(1-\frac{m^2_g}{\omega^2}\Big)\hat{k_j}\hat{k_m}\delta_{il}+\frac{2}{3}\Big(1-\frac{m^2_g}{\omega^2}\Big)^2\hat{k_i}\hat{k_j}\hat{k_l}\hat{k_m}-\frac{1}{3}\delta_{ij}\delta_{lm}+\frac{1}{3}\Big(1-\frac{m^2_g}{\omega^2}\Big)\Big(\delta_{ij}\hat{k_l}\hat{k_m}+\delta_{lm}\hat{k_i}\hat{k_j}\Big)\Big]. \label{rat1} \end{equation} Therefore, we can write Eq.\ref{eq:5} as \begin{equation} \frac{dE}{dt}=\frac{\kappa^2}{8(2\pi)^2}\int {\Lambda_{ij,lm}}T^{ij*}T^{lm} \delta(\omega-\omega^\prime)\omega^2 \Big(1-\frac{m^2_{g}}{\omega^2}\Big)^\frac{1}{2}d\omega d\Omega_k. 
\end{equation} We can do the angular integrals using the relations \ref{dOmegak} and obtain \begin{eqnarray} \int d\Omega_k \Lambda_{ij,lm}T^{ij*}(\omega^\prime)T^{lm}({\omega^\prime})&=& \frac{8\pi}{5}\left(\left[\frac{5}{2}-\frac{5}{3}\left(1-\frac{m_g^2}{\omega'^2}\right)+\frac{2}{9}\left(1-\frac{m_g^2}{\omega'^2}\right)^2\right]T^{ij}T^*_{ij}\right.\nonumber\\ && \left. +\left[-\frac{5}{6}+\frac{5}{9}\left(1-\frac{m_g^2}{\omega'^2}\right)+\frac{1}{9}\left(1-\frac{m_g^2}{\omega'^2}\right)^2\right]\vert T^i{}_{i}\vert^2\right). \label{eq:a2} \end{eqnarray} Hence, the rate of energy loss becomes \begin{eqnarray} \frac{dE}{dt}&=& \frac{8G}{5}\int \left[\left\lbrace\frac{5}{2}-\frac{5}{3}\left(1-\frac{m_g^2}{\omega'^2}\right)+\frac{2}{9}\left(1-\frac{m_g^2}{\omega'^2}\right)^2\right\rbrace T^{ij}T^*_{ij}\right.\nonumber\\ && \left. +\left\lbrace -\frac{5}{6}+\frac{5}{9}\left(1-\frac{m_g^2}{\omega'^2}\right)+\frac{1}{9}\left(1-\frac{m_g^2}{\omega'^2}\right)^2\right\rbrace\vert T^i{}_{i}\vert^2\right]\delta(\omega-\omega^\prime)\omega^2 \Big(1-\frac{m^2_{g}}{\omega^2}\Big)^\frac{1}{2}d\omega. \nonumber\\ \label{eq:a4} \end{eqnarray} In the massless gravity theory the prefactors of $T^{ij}T^*_{ij}$ and $\vert T^i{}_i\vert^2$ are $1$ and $-1/3$ respectively. Note that the $m_g\rightarrow 0$ limit of Eq.\ref{eq:a4} gives different prefactors. In the massive case, all five polarization components contribute to the energy loss instead of the two of the massless case. Therefore, from Eq.\ref{eq:a4}, we will not obtain the massless energy loss by simply putting $m_g\rightarrow 0$. In Appendix \ref{appendi} we obtain the energy loss due to massless graviton radiation from compact binary systems. In massive gravity theories, the Newtonian gravitational potential takes a different form from that in GR. As a result the Keplerian orbits are also affected. 
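The angular integration in Eq.\ref{eq:a2} can be cross-checked numerically by integrating the contraction $\Lambda_{ij,lm}T^{ij}T^{lm}$ over the sphere for an arbitrary real symmetric $T_{ij}$. The following sketch (test tensor and quadrature resolution are illustrative choices, not from the text) uses simple midpoint quadrature with $\beta \equiv 1-m_g^2/\omega^2$:

```python
import math

def angular_integral(T, beta, N=120):
    """Midpoint quadrature of int dOmega Lambda_{ij,lm} T^{ij} T^{lm}
    for real symmetric T, with beta = 1 - m_g^2/omega^2."""
    TT = sum(T[i][j] * T[i][j] for i in range(3) for j in range(3))
    trT = T[0][0] + T[1][1] + T[2][2]
    w = (2.0 / N) * (2.0 * math.pi / N)          # d(cos theta) * d(phi) weight
    total = 0.0
    for ia in range(N):                          # cos(theta) in (-1, 1)
        c = -1.0 + (ia + 0.5) * 2.0 / N
        s = math.sqrt(1.0 - c * c)
        for ib in range(N):                      # phi in (0, 2*pi)
            phi = (ib + 0.5) * 2.0 * math.pi / N
            k = (s * math.cos(phi), s * math.sin(phi), c)
            Tk = [sum(T[i][j] * k[j] for j in range(3)) for i in range(3)]
            kTk = sum(k[i] * Tk[i] for i in range(3))    # khat_i khat_j T^{ij}
            kTTk = sum(Tk[i] * Tk[i] for i in range(3))  # khat_j khat_m T^{ij} T^{im}
            total += (TT - 2 * beta * kTTk + (2.0 / 3) * beta**2 * kTk**2
                      - trT**2 / 3.0 + (2.0 / 3) * beta * trT * kTk) * w
    return total

def closed_form(T, beta):
    """Right-hand side of Eq. (a2)."""
    TT = sum(T[i][j] * T[i][j] for i in range(3) for j in range(3))
    trT = T[0][0] + T[1][1] + T[2][2]
    c1 = 2.5 - (5.0 / 3) * beta + (2.0 / 9) * beta**2
    c2 = -5.0 / 6 + (5.0 / 9) * beta + (1.0 / 9) * beta**2
    return (8 * math.pi / 5) * (c1 * TT + c2 * trT**2)

T = [[1.0, 0.3, -0.2], [0.3, -0.5, 0.7], [-0.2, 0.7, 2.0]]  # arbitrary symmetric tensor
for beta in (1.0, 0.6):
    print(angular_integral(T, beta), closed_form(T, beta))   # agree to quadrature error
```

Setting $\beta=1$ ($m_g=0$) reproduces the massless prefactors $1$ and $-1/3$ quoted above.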
For the FP theory, when there is no screening the potential energy of the binary system takes the Yukawa form with the extra factor of $4/3$, as discussed in Eq.~(\ref{VFP}). However, for GW emission we must have $n_0=m_g/\Omega<1$, which implies that $a<R_V$, and therefore the Newtonian potential for the orbital motion of the binary system is Vainshtein screened. There will be corrections to the Newtonian gravitational potential energy from the screened scalar mode.\\ Concretely, to see the effects of the scalar polarisation in this $a < R_V$ limit one can split the massive $h$ into $\tilde{h}+\partial A/m_g + \partial \partial \phi/m_g^2$ such that $\tilde{h}_{\mu\nu}$ now enjoys a gauge invariance and carries only the two tensor modes, while $\phi$ carries the scalar mode (the vector mode $A_{\mu}$ can be consistently set to zero for this matter configuration). In terms of $\tilde{h}_{\mu\nu}$ and $\phi$, the action in the decoupling limit is \cite{Arkani-Hamed:2002bjr} \begin{equation} S=\int d^4x \left[ \frac{1}{2}\tilde{h}_{\mu\nu}\mathcal{E}^{\mu\nu\alpha\beta}\tilde{h}_{\alpha\beta} -\frac{1}{2}\phi\Box\phi + \frac{1}{2M_{pl}} \tilde{h}_{\mu\nu}T^{\mu\nu}+ \frac{1}{2M_{pl}}\phi T+ \mathcal{L}_{\rm{int}}\right]. \label{action_decoupling} \end{equation} The precise interactions will depend on the specific massive gravity theory. For the FP theory, there will be non-linearities like \begin{equation} \mathcal{L}_{\rm{int}}\sim \left[\alpha(\Box \phi)^3+\beta \Box\phi\, \phi_{,\mu\nu}\phi^{,\mu\nu}\right], \end{equation} where $\alpha$ and $\beta$ are model dependent coefficients. At $r=a\ll R_V$, deep inside the Vainshtein region, the equation of motion for $\phi$ gives \begin{eqnarray*} \frac{\phi}{M_{pl}}\sim m_g^2\sqrt{R_sa^3}\sim n_0\frac{h}{M_{pl}}, \end{eqnarray*} from balancing $\mathcal{L}_{\rm{int}}\sim \phi^3/(M_{pl}m_g^4r^6)$ against $\phi T/M_{pl} \sim \phi M/(M_{pl}r^3)$. Here $a$ denotes the semi-major axis of the binary orbit. 
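For orientation, the size of $n_0=m_g/\Omega$ for a typical pulsar binary can be estimated directly. The sketch below uses an illustrative orbital period of order the Hulse-Taylor period ($\sim 7.75$ hr) and trial graviton masses; the conversion constant is $\hbar = 6.582\times10^{-16}$ eV s:

```python
import math

# Estimate n_0 = m_g / Omega for a pulsar binary. The orbital period is an
# illustrative value of order the Hulse-Taylor period (~7.75 hr).
hbar_eV_s = 6.582e-16          # hbar in eV * s
P_b = 7.75 * 3600.0            # orbital period in seconds (illustrative)
Omega = 2.0 * math.pi / P_b    # angular frequency in s^-1
Omega_eV = hbar_eV_s * Omega   # angular frequency converted to eV

for m_g in (1e-20, 1e-19):     # trial graviton masses in eV
    n0 = m_g / Omega_eV
    print(f"m_g = {m_g:.0e} eV  ->  Omega = {Omega_eV:.2e} eV,  n_0 = {n0:.2f}")
```

This confirms the estimate $\Omega \sim 10^{-19}$ eV quoted in the introduction, so graviton radiation ($n_0<1$) probes masses up to roughly that scale.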
So the scalar fifth force is suppressed by $n_0$ relative to the Newtonian force. \\ However, we neglect these corrections as they are small and will not affect our order of magnitude results and, therefore, we only consider the GR stress-energy tensor. Thus our results are approximate and not valid to all orders in $n_0$. From Eq.\ref{eq:13} we get \begin{equation} \Big[T_{ij}(\omega^\prime)T^*_{ji}(\omega^\prime)-\frac{1}{3}|T^{i}{}_{i}(\omega^\prime)|^2\Big]=4\mu^2{\omega^{\prime}}^4a^4f(n,e), \label{s1} \end{equation} where $n_0=\frac{m_g}{\Omega}$ and \begin{equation} \begin{split} f(n,e)=\frac{1}{32n^2}\Big\{[J_{n-2}(ne)-2eJ_{n-1}(ne)+2eJ_{n+1}(ne)+\frac{2}{n}J_n(ne)-J_{n+2}(ne)]^2+\\ (1-e^2)[J_{n-2}(ne)-2J_n(ne)+J_{n+2}(ne)]^2+\frac{4}{3n^2}J^2_{n}(ne)\Big\}. \end{split} \end{equation} The final expression for $dE/dt$ in the massive Fierz-Pauli theory can be written in the compact form \begin{equation} \begin{split} \frac{dE}{dt}= \frac{32G}{5} \mu^2 a^4\Omega^6 \sum_{n=1}^{\infty} n^6\sqrt{1-\frac{n_0^2}{n^2}}\left[f(n,e)\left( \frac{19}{18}+\frac{11}{9}\frac{n_0^2}{n^2}+\frac{2}{9}\frac{n_0^4}{n^4}\right) + \frac{5J^2_n(ne)}{108n^4}\left(1-\frac{n_0^2}{n^2}\right)^2\right]. 
\end{split} \label{eq:dedt_FP} \end{equation} We can split Eq.\ref{eq:dedt_FP} as \begin{equation} \begin{split} \frac{dE}{dt}= \frac{32G}{5} \mu^2 a^4\Omega^6 \sum_{n=1}^{\infty} n^6\sqrt{1-\frac{n_0^2}{n^2}}\left[f(n,e)\left( 1+\frac{4}{3}\frac{n_0^2}{n^2}+\frac{1}{6}\frac{n_0^4}{n^4}\right) -\frac{5J^2_n(ne)}{36n^4}\frac{n_0^2}{n^2}\left(1-\frac{n_0^2}{4n^2}\right)\right]+\\ \frac{32G}{5} \mu^2 a^4\Omega^6 \sum_{n=1}^{\infty} n^6\sqrt{1-\frac{n_0^2}{n^2}}\Big[\frac{1}{18}f(n,e)\Big(1-\frac{n^2_0}{n^2}\Big)^2+\frac{5J^2_n(ne)}{108n^4}\Big(1+\frac{n^2_0}{2n^2}\Big)^2\Big], \end{split} \label{eq:wa} \end{equation} where the first term in Eq.\ref{eq:wa} denotes the energy loss in the massive gravity theory without vDVZ discontinuity (Eq.\ref{eq:dedt_FPd}) and the second term denotes the contribution due to the scalar mode associated with $\frac{1}{6}\eta_{\mu\nu}\eta_{\alpha\beta}$. We can also write Eq.\ref{eq:wa} to leading order in $n^2_0$ as \begin{equation} \frac{dE}{dt}\simeq \frac{32G}{5} \mu^2 a^4\Omega^6 \Big[\sum_{n=1}^{\infty}\Big(\frac{19}{18}n^6f(n,e)+\frac{5}{108}n^2J^2_n(ne)\Big)+n^2_0\sum_{n=1}^{\infty}\Big(\frac{25}{36}n^4f(n,e)-\frac{25}{216}J^2_n(ne)\Big)\Big]+\mathcal{O}(n^4_0). \end{equation} The rate of energy loss along the Keplerian orbit leads to a decay of the orbital period at the rate \begin{equation} \dot{P_b}=-6\pi G^{-\frac{3}{2}}(m_1m_2)^{-1}(m_1+m_2)^{-\frac{1}{2}}a^{\frac{5}{2}}\Big(\frac{dE}{dt}\Big). \label{ms} \end{equation} \begin{figure} \centering \includegraphics[width=12.0cm]{grav3} \caption{Variation of $n^6f(n,e)$ with $n$ for different orbital eccentricities.} \label{fig:feynu} \end{figure} The energy loss, or power radiated, from the binary system increases with increasing eccentricity, as is clear from FIG.\ref{fig:feynu}, since the energy loss in the first term is proportional to $n^6f(n,e)$. The radiation is dominated by the higher harmonics for $e\approx 1$. 
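The harmonic content $n^6 f(n,e)$ is straightforward to evaluate numerically, since $n^6 f(n,e)$ coincides with the Peters-Mathews function $g(n,e)$, whose sum over harmonics is the known enhancement factor $F(e)=(1+73e^2/24+37e^4/96)/(1-e^2)^{7/2}$. A minimal sketch (quadrature resolution, harmonic cutoff and eccentricity are illustrative choices) implementing $J_n$ via its integral representation $J_n(x)=\frac{1}{\pi}\int_0^\pi \cos(n\theta - x\sin\theta)\,d\theta$:

```python
import math

def J(n, x, M=2000):
    """Bessel J_n (integer n, including negative) via midpoint quadrature of
    its integral representation; spectrally accurate for M >> n + x."""
    h = math.pi / M
    return sum(math.cos(n * (k + 0.5) * h - x * math.sin((k + 0.5) * h))
               for k in range(M)) * h / math.pi

def f(n, e):
    """Harmonic spectral function f(n,e) of the text; n^6 f(n,e) equals the
    Peters-Mathews g(n,e)."""
    b1 = (J(n - 2, n * e) - 2 * e * J(n - 1, n * e) + (2.0 / n) * J(n, n * e)
          + 2 * e * J(n + 1, n * e) - J(n + 2, n * e))
    b2 = J(n - 2, n * e) - 2 * J(n, n * e) + J(n + 2, n * e)
    return (b1**2 + (1 - e**2) * b2**2
            + (4.0 / (3 * n**2)) * J(n, n * e)**2) / (32.0 * n**2)

# Circular orbit: only the quadrupole n = 2 harmonic radiates, n^6 f(2,0) = 1.
print(2**6 * f(2, 0.0))  # -> 1.0 (up to quadrature error)

# Eccentric orbit: the harmonic sum reproduces the enhancement factor F(e).
e = 0.3
F = (1 + 73 * e**2 / 24 + 37 * e**4 / 96) / (1 - e**2) ** 3.5
print(sum(n**6 * f(n, e) for n in range(1, 40)), F)  # agree to truncation error
```

The growth of the sum with $e$ is the numerical counterpart of the statement that higher harmonics dominate for $e\approx 1$.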
The radiation has a peak at some particular value of $n$ for a given eccentric orbit. \section{Massive gravity without vDVZ discontinuity}\label{FPa} In the Fierz-Pauli theory Eq.\ref{eq:FP_action} there is no ghost owing to the fact that the relative coefficient of the $h^2$ and $h_{\mu \nu}h^{\mu \nu}$ terms is chosen as $-1$. Generalising the theory beyond this point will lead to the appearance of ghosts. There is, however, a special choice of coefficient for which the ghost term cancels the extra scalar contribution to the propagator. In this theory, therefore, there is no vDVZ discontinuity and there are no ghosts \cite{Gambuti:2020onb,Gambuti:2021meo}. Phenomenologically this theory is the simplest generalisation: a spin-2 graviton with 2 polarizations which obey the dispersion relation $k_0^2=|\vec k|^2 +m_g^2$. Consider the one parameter generalisation of the Fierz-Pauli theory \begin{eqnarray} S=\int d^4x \left[ \frac{1}{2} h_{\mu \nu} {\cal E}^{\mu \nu \alpha \beta}h_{\alpha \beta} +\frac{1}{2}m^2_g h_{\mu \nu} \left(\eta^{\mu (\alpha } \eta^{\beta) \nu}-(1-a) \eta^{\mu \nu}\eta^{\alpha \beta}\right) h_{\alpha \beta}+ \frac{\kappa}{2} h_{\mu \nu }T^{\mu \nu} \right], \label{eq:FP_deformed} \end{eqnarray} where $a=0$ corresponds to the Fierz-Pauli theory Eq.\ref{eq:FP_action}. We will derive the equations assuming $a\neq 0$ and see which values of $a$ can solve the problem of the vDVZ discontinuity, which is generic in massive gravity theories. The equation of motion from Eq.\ref{eq:FP_deformed} is \begin{eqnarray} \left( \Box +m_g^2 \right) h_{\mu \nu} -\eta_{\mu \nu} \left(\Box +m_g^2(1-a)\right) h - \partial_\mu \partial^\alpha h_{\alpha \nu} - \partial_\nu \partial^\alpha h_{\alpha \mu} + \eta_{\mu \nu} \partial^\alpha \partial^\beta h_{\alpha \beta}+ \partial_\mu \partial_\nu h = -\kappa T_{\mu \nu}. 
\nonumber\\ \label{deom1} \end{eqnarray} Taking the divergence of Eq.\ref{deom1} we have \begin{equation} m_g^2 \left( \partial^\mu h_{\mu \nu} -(1-a) \partial_\nu h \right)=0. \label{deom2} \end{equation} These are 4 constraint equations which reduce the independent degrees of freedom of the graviton from 10 to 6. Using Eq.\ref{deom2} in Eq.\ref{deom1} we obtain \begin{equation} (\Box +m_g^2) h_{\mu \nu}- a \eta_{\mu \nu} \Box h -(1-2 a)\partial_\mu\partial_\nu h - m_g^2 \eta_{\mu \nu}(1-a)h = -\kappa T_{\mu \nu}. \end{equation} Taking the trace of this equation we obtain \begin{equation} -2 a \Box h-(3m^2_g -4 m^2_g a)h=- \kappa T. \end{equation} We see that $h$ is now a propagating field if $a \neq 0$. The kinetic term for $h$ appears with a minus sign, so $h$ is a ghost field. The homogeneous equation for $h$ can be written as \begin{equation} \Box h-m_h^2 h =0, \end{equation} with the ghost mass given by \begin{equation} m_h^2 = \frac{m^2_g}{2} \left( 1+3 \left(1- \frac{1}{a}\right) \right). \label{mh} \end{equation} The propagator of the deformed Fierz-Pauli theory Eq.\ref{eq:FP_deformed} is given by \begin{equation} \left[ {\cal E}^{\mu \nu \alpha \beta}+m_g^2 \left(\eta^{\mu (\alpha } \eta^{\beta) \nu}- \eta^{\mu \nu}\eta^{\alpha \beta} (1-a) \right)\right] D^{(a)}_{\alpha \beta \rho \sigma}(x-y)=\delta^\mu_{(\rho}\delta^\nu_{\sigma)}\delta^4(x-y). \label{Dalpha1} \end{equation} This equation can be inverted to give $D^{(a)}_{\alpha \beta \rho \sigma}$, which in momentum space turns out to be of the form \begin{eqnarray} D^{(a)}_{\alpha \beta \mu \nu}(k) &=& \frac{1}{-k^2 +m_g^2} \left (\frac{1}{2} (\eta_{\alpha \mu} \eta_{\beta \nu} + \eta_{\alpha \nu }\eta_{\beta \mu}) - \frac{1}{3} \eta_{\alpha \beta}\eta_{\mu \nu} \right) + \frac{1}{k^2 +m_h^2} \left( \frac{1}{6} \eta_{\alpha \beta}\eta_{\mu \nu} \right ) \nonumber\\ &+& (k{\rm -dependent\, terms}). 
\label{Dalpha2} \end{eqnarray} This shows that there are two types of contributions to the propagator. The first is that of the spin-2 massive graviton, i.e. the helicity-2 states (there are also helicity-1 and helicity-0 states); this part is identical to the propagator of the Fierz-Pauli theory. In Eq.\ref{Dalpha2} there is in addition the contribution from the ghost mode, whose kinetic operator $k^2$ appears with the wrong sign and whose mass $m_h$ is given in Eq.\ref{mh}. The remaining 3 vector degrees of freedom do not couple to the energy momentum tensor and we ignore their contribution here. Now if we choose the parameter $a=1/2$, the mass of the ghost mode Eq.\ref{mh} becomes $m_h^2 = -m_g^2$. The ghost mode for $a=1/2$ is therefore tachyonic. Substituting $m_h^2=-m_g^2$ in Eq.\ref{Dalpha2} we see that the propagator simplifies to the form \begin{eqnarray} D^{(1/2)}_{\alpha \beta \mu \nu}(k) &=& \frac{1}{-k^2 +m_g^2} \left (\frac{1}{2} (\eta_{\alpha \mu} \eta_{\beta \nu} + \eta_{\alpha \nu }\eta_{\beta \mu}) - \frac{1}{2} \eta_{\alpha \beta}\eta_{\mu \nu} \right) + (k{\rm -dependent\, terms}). \nonumber\\ \label{Dalpha3} \end{eqnarray} The ghost term with tachyonic mass cancels the extra scalar contribution to the propagator and we are left with the tensor structure of the propagator, which is the same as for massless gravitons, Eq.\ref{prop1}, but with the dispersion relation of massive gravitons, $k_0^2= |\vec k|^2 + m_g^2$. In the limit $m_g \rightarrow 0$ the propagator goes to the massless form Eq.\ref{prop1} and thus there is no vDVZ discontinuity. From the tensor structure of Eq.\ref{Dalpha3} it is clear that for $m_g\rightarrow 0$ the polarisation sum takes the same form, Eq.\ref{app3}, as that of the massless theory. 
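The cancellation at $a=1/2$ is a one-line partial-fraction identity: with $m_h^2=-m_g^2$ the $-\frac{1}{3}\eta\eta$ and $+\frac{1}{6}\eta\eta$ trace pieces combine into the massless-type $-\frac{1}{2}\eta\eta$ structure on the single massive pole. A quick numerical sanity check (sample mass and momenta are arbitrary):

```python
# Partial-fraction check of the ghost cancellation at a = 1/2: the tachyonic
# ghost pole m_h^2 = -m_g^2 sits on top of the graviton pole, and the two
# trace pieces of the propagator combine into the -1/2 structure.
def trace_part(k2, m_g2):
    """Sum of the spin-2 (-1/3) and ghost (+1/6) trace terms at a = 1/2."""
    m_h2 = -m_g2                               # tachyonic ghost mass at a = 1/2
    return (-1.0 / 3) / (-k2 + m_g2) + (1.0 / 6) / (k2 + m_h2)

m_g2 = 0.7                                     # sample graviton mass squared
for k2 in (-3.0, -1.2, 0.1, 2.5):              # arbitrary off-shell momenta k^2
    print(k2, trace_part(k2, m_g2), -0.5 / (-k2 + m_g2))  # the two columns agree
```

Because the ghost pole coincides with the graviton pole, no second propagating state appears despite the wrong-sign kinetic term.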
The gravitational potential in this theory takes the Yukawa form \begin{eqnarray} V^{(1/2)}(r)= \frac{G M_1 M_2}{r}\, e^{-m_g r}, \end{eqnarray} and the extra factor of $(4/3)$ which was present in the FP theory Eq.\ref{VFP} is absent due to the cancellation of the scalar graviton mode with the ghost contribution in the propagator. The Yukawa corrections to the $1/r$ potential will give rise to a perihelion precession in planetary orbits \cite{Poddar:2020exe}. Constraints on the Yukawa potential between the planets and the Sun, which give bounds on the mass of the exchanged particle, have been obtained in \cite{Poddar:2020exe}. The long range Yukawa potential caused by axions can also affect the gravitational light bending and Shapiro time delay, which is discussed in \cite{KumarPoddar:2021ked}. This theory, which avoids contributions from the extra scalar mode, is phenomenologically the most acceptable. The classical calculation of the energy loss from binaries in this spin-2 massive gravity theory was done by Finn and Sutton \cite{Finn:2001qi}. Our calculation, which we present now, is the QFT version of this calculation. We find that the result of our tree level QFT calculation agrees at leading order with the result of \cite{Finn:2001qi}. From the direct detection of gravitational waves by LIGO and Virgo \cite{TheLIGOScientific:2016src}, the mass of the spin-2 graviton is bounded by $m_g<1.2\times 10^{-22}\rm{eV}$, which is derived from the experimental upper bound on the dispersion of the gravitational wave event GW150914. \subsection{Graviton radiation in massive gravity without vDVZ discontinuity} In the limit $a<R_V$ the Keplerian orbits are also Vainshtein screened, similarly to the FP theory as discussed before, and there will be corrections at $\mathcal{O}(n_0)$ to the Newtonian potential. Therefore, we consider the GR stress tensor in this case as well. 
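A rough numerical estimate shows where the screened regime $a<R_V$ lies for a binary pulsar, using $R_V=(R_s/m_g^4)^{1/5}$ from above. The sketch converts $m_g$ to an inverse length via the reduced Compton wavelength $\hbar c/(m_g c^2)\simeq 1.97\times 10^{-7}\,{\rm m}/(m_g/{\rm eV})$; the total mass is an illustrative value:

```python
# Rough estimate of the Vainshtein radius R_V = (R_s / m_g^4)^{1/5} in metres,
# with the graviton mass converted to an inverse length through its reduced
# Compton wavelength. All system parameters are illustrative.
def vainshtein_radius_m(M_solar, m_g_eV):
    R_s = 2.95e3 * M_solar                 # Schwarzschild radius 2GM/c^2 in metres
    lam_g = 1.97e-7 / m_g_eV               # 1/m_g in metres
    return (R_s * lam_g**4) ** 0.2

for m_g in (1e-19, 1e-22):                 # trial graviton masses in eV
    R_V = vainshtein_radius_m(2.8, m_g)    # total mass ~ 2.8 M_sun (illustrative)
    print(f"m_g = {m_g:.0e} eV:  R_V ~ {R_V:.1e} m")
```

Both estimates exceed the $\sim 10^9$ m semi-major axis of a typical compact binary, consistent with the statement that the orbital dynamics is inside the Vainshtein region whenever $n_0<1$.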
Following the steps described in Appendix~\ref{appendi}, we compute the rate of energy loss due to massive graviton radiation as \begin{eqnarray} \frac{dE}{dt}&=&\frac{\kappa^2}{8(2\pi)^2}\int \Big[|T_{\mu\nu}(k^\prime)|^2-\frac{1}{2}|T^{\mu}{}_{\mu}(k^\prime)|^2\Big]\delta(\omega-\omega^\prime)\omega^2 \Big(1-\frac{m^2_g}{\omega^2}\Big)^\frac{1}{2}d\omega d\Omega_k\\ &=& \frac{\kappa^2}{8(2\pi)^2}\int \tilde{\Lambda}_{ij,lm}T^{ij*}T^{lm} \delta(\omega-\omega^\prime)\omega^2 \Big(1-\frac{m^2_g}{\omega^2}\Big)^\frac{1}{2}d\omega d\Omega_k, \label{eq:dedt_deformedFP} \end{eqnarray} where \begin{equation} \begin{split} {\tilde{\Lambda}_{ij,lm}}=\Big[\delta_{il}\delta_{jm}-2\left(1-\frac{m_g^2}{\omega^2}\right)\hat{k_j}\hat{k_m}\delta_{il}+\frac{1}{2}\left(1-\frac{m_g^2}{\omega^2}\right)^2\hat{k_i}\hat{k_j}\hat{k_l}\hat{k_m}-\frac{1}{2}\delta_{ij}\delta_{lm} \\ +\frac{1}{2}\left(1-\frac{m_g^2}{\omega^2}\right)\Big(\delta_{ij}\hat{k_l}\hat{k_m}+\delta_{lm}\hat{k_i}\hat{k_j}\Big)\Big]. \end{split} \label{eq;Lambda_deformedFP} \end{equation} Performing the angular integration, we obtain \begin{eqnarray} \frac{dE}{dt}&=& \frac{8G}{5}\int \left[\left\lbrace\frac{5}{2}-\frac{5}{3}\left(1-\frac{m_g^2}{\omega'^2}\right)+\frac{1}{6}\left(1-\frac{m_g^2}{\omega'^2}\right)^2\right\rbrace T^{ij}T^*_{ij}\right.\nonumber\\ && \left. +\left\lbrace -\frac{5}{4}+\frac{5}{6}\left(1-\frac{m_g^2}{\omega'^2}\right)+\frac{1}{12}\left(1-\frac{m_g^2}{\omega'^2}\right)^2\right\rbrace\vert T^i{}_{i}\vert^2\right]\delta(\omega-\omega^\prime)\omega^2 \Big(1-\frac{m^2_{g}}{\omega^2}\Big)^\frac{1}{2}d\omega. 
\nonumber\\ \label{eq:a4_deformedFP} \end{eqnarray} Finally we obtain the expression for the rate of energy loss, \begin{equation} \begin{split} \frac{dE}{dt}= \frac{32G}{5} \mu^2 a^4\Omega^6 \sum_{n=1}^{\infty} n^6\sqrt{1-\frac{n_0^2}{n^2}}\left[f(n,e)\left( 1+\frac{4}{3}\frac{n_0^2}{n^2}+\frac{1}{6}\frac{n_0^4}{n^4}\right) -\frac{5J^2_n(ne)}{36n^4}\frac{n_0^2}{n^2}\left(1-\frac{n_0^2}{4n^2}\right)\right]. \end{split} \label{eq:dedt_FPd} \end{equation} To leading order in $n_0^2$, we can write Eq.\ref{eq:dedt_FPd} as \begin{equation} \frac{dE}{dt}\simeq \frac{32G}{5} \mu^2 a^4\Omega^6\Big[\sum_{n=1}^{\infty} n^6 f(n,e)+n^2_0 \sum_{n=1}^{\infty}\Big(\frac{5}{6}n^4 f(n,e)-\frac{5}{36}J^2_n(ne)\Big)\Big]+\mathcal{O}(n^4_0). \end{equation} We note that the expression reduces to that of GR in the limit $n_0=0$; thus there is no vDVZ discontinuity. To leading order in $n_0^2$ this agrees with the result of the classical calculation of Finn and Sutton \cite{Finn:2001qi}. \section{Dvali-Gabadadze-Porrati (DGP) theory}\label{DGP} GR is a nonlinear theory which obeys diffeomorphism invariance. This symmetry, however, is broken in theories with a massive graviton. In the FP theory, if the graviton is expanded around a curved spacetime, a ghost degree of freedom appears \cite{Boulware:1973my}. To obtain a consistent massive gravity theory free from any ghost, one can go to higher dimensions. One such higher-dimensional massive gravity theory, based on a braneworld framework, is the DGP theory \cite{Dvali:2000hr, Dvali:2000rv,Dvali:2000xg,Dvali:2006su}. In the higher dimensions the massless gravity theory retains general covariance. The number of polarisation states of the spin-2 massless graviton in 5 dimensions is $5$. When the extra dimension is compactified, the number of massive graviton degrees of freedom in 4d remains 5 and there is no Boulware-Deser (BD) ghost. 
The DGP theory in a cosmological background can account for the cosmological constant \cite{Deffayet:2001pu}. The mass of the graviton is momentum dependent, so that one can modify the theory in the infrared (at cosmological scales) while retaining Newtonian gravity at solar system scales. The scalar degree of freedom of the graviton, however, still gives rise to the vDVZ discontinuity, which remains a problem for the phenomenological study of the DGP theory of massive gravity \cite{Dvali:2006su}. In the five dimensional DGP theory, the matter field is localized on a four dimensional brane world, which leads to an induced curvature term on the brane. The Planck scales of the five dimensional bulk and the four dimensional brane world are denoted by $M_5$ and $M_{pl}$, respectively. The action of the five dimensional DGP model \cite{Dvali:2000hr,Dvali:2000rv,Dvali:2000xg}, with the matter field localized on the four dimensional brane world at $y=0$, is \begin{equation} \mathcal{S}\supset\int d^4x dy\Big(\frac{M^3_5}{4}\sqrt{-^{(5)}g} {}^{(5)}R+\delta(y)\Big[\sqrt{-g}\frac{M_{pl}^2}{2}R[g]+\mathcal{L}_m(g,\psi_i)\Big]\Big), \label{j1} \end{equation} where $\psi_i$ denotes the matter field with energy stress tensor $T_{\mu\nu}$ on the brane world. The resulting modified linearized Einstein equation on the $y=0$ brane is \cite{deRham:2014zqa} \begin{equation} \left(\Box h_{\mu\nu} - \partial_{\mu}\partial_{\nu} h\right) -m_0\sqrt{-\Box}\left(h_{\mu\nu}-h\eta_{\mu\nu}\right)=- \frac{\kappa}{2}T_{\mu\nu}(x), \end{equation} where $m_0=\frac{M^3_5}{M^2_{pl}}$, $M_{pl}^2=1/{8\pi G}=4/\kappa^2$. Here the Fierz-Pauli mass term ($h_{\mu\nu}-h\eta_{\mu\nu}$) appears naturally from the higher dimensional DGP theory. This corresponds to linearized massive gravity with a scale-dependent effective mass $m^2_g(\Box)=m_0\sqrt{-\Box}$. 
The propagator is \begin{eqnarray} D^{(5)}_{\alpha \beta \mu \nu}(k) &=& \frac{i}{(-\omega^2 +|\textbf{k}|^2) +m_0 (\omega^2 -|\textbf{k}|^2)^{1/2}} \left (\frac{1}{2} (\eta_{\alpha \mu} \eta_{\beta \nu} + \eta_{\alpha \nu }\eta_{\beta \mu}) - \frac{1}{3} \eta_{\alpha \beta}\eta_{\mu \nu} \right). \label{propDGP} \end{eqnarray} The terms in the brackets represent the polarization sum, which is identical to that of the FP theory, Eq.\ref{polsumFP}. In the $m_0\rightarrow 0$ limit the DGP propagator does not go to the massless form Eq.\ref{prop1}, so the DGP theory also has the vDVZ discontinuity. The dispersion relation corresponding to real gravitons in the DGP model is given by the pole of the propagator Eq.\ref{propDGP}, \begin{equation} \omega^2= |\textbf{k}|^2 - m_0^2, \end{equation} where $|\textbf{k}|$ is the magnitude of the propagation vector. We note that in the DGP theory the graviton has a tachyonic mass. Following the same steps as for the FP theory in the previous section, we write down the energy loss due to massive graviton radiation in the DGP theory. All the relevant expressions in the DGP theory differ from those of the FP theory by the replacements $m_g^2\rightarrow -m_0^2$ and $\tilde{n}_0^2=m_0^2/\Omega^2=-n_0^2$, i.e. \begin{equation} \frac{dE}{dt}=\frac{\kappa^2}{8(2\pi)^2}\int \Big[|T_{\mu\nu}(k^\prime)|^2-\frac{1}{3}|T^{\mu}{}_{\mu}(k^\prime)|^2\Big]\delta(\omega-\omega^\prime)\omega^2 \Big(1+\frac{m^2_{0}}{\omega^2}\Big)^\frac{1}{2}d\omega d\Omega_k. \label{j2} \end{equation} The components of the stress tensor in the $x$--$y$ plane are given in Eq.\ref{eq:13}. The dispersion relation gives $\hat{k}^i=\frac{k^i}{\omega\sqrt{1+\frac{m^2_0}{\omega^2}}}$. The other components of $T_{\mu\nu}$ can be obtained by using the current conservation relation $k_\mu T^{\mu\nu}=0$, which yields \begin{equation} T_{0j}=-\sqrt{1+\frac{m^2_0}{\omega^2}}\hat{k^i}T_{ij},\hspace{0.5cm} T_{00}=\Big(1+\frac{m^2_0}{\omega^2}\Big)\hat{k^i}\hat{k^j}T_{ij}. 
\label{j3} \end{equation} Hence, the term in the square bracket of Eq.\ref{j2} can be written in terms of the projection operator $\tilde{\Lambda}_{ij,lm}$ as \begin{equation} \Big[|T_{\mu\nu}(k^\prime)|^2-\frac{1}{3}|T^{\mu}{}_{\mu}(k^\prime)|^2\Big]=\tilde{\Lambda}_{ij,lm}T^{ij*}T^{lm}, \label{j4} \end{equation} where \begin{equation} {\tilde{\Lambda}_{ij,lm}}=\Big[\delta_{il}\delta_{jm}-2\Big(1+\frac{m^2_0}{\omega^2}\Big)\hat{k_j}\hat{k_m}\delta_{il}+\frac{2}{3}\Big(1+\frac{m^2_0}{\omega^2}\Big)^2\hat{k_i}\hat{k_j}\hat{k_l}\hat{k_m}-\frac{1}{3}\delta_{ij}\delta_{lm}+\frac{1}{3}\Big(1+\frac{m^2_0}{\omega^2}\Big)\Big(\delta_{ij}\hat{k_l}\hat{k_m}+\delta_{lm}\hat{k_i}\hat{k_j}\Big)\Big]. \label{j5} \end{equation} In the DGP theory there will be corrections to the Newtonian gravitational potential at $\mathcal{O}(n_0)$ in the $a<R_V$ region where Vainshtein screening is active. We can arrive at this from an analysis similar to that described for the FP theory. However, in the action \ref{action_decoupling} there are non-linearities of the form \cite{deRham:2012fw} \begin{equation} \mathcal{L}_{\rm{int}}\sim \frac{1}{M_{pl}m_g^2}(\partial \phi)^2\Box \phi. \end{equation} At $r=a\ll R_V$, deep inside the Vainshtein region, the equation of motion for $\phi$ gives \begin{equation} \frac{\phi}{M_{pl}}\sim m_g\sqrt{\frac{a^3}{R_s}}\frac{R_s}{a} \sim n_0\frac{h}{M_{pl}} \end{equation} from balancing $\mathcal{L}_{\rm{int}}\sim \phi^3/(M_{pl}m_g^2r^4)$ against $\phi T/M_{pl} \sim \phi M/(M_{pl}r^3)$, so the fifth force mediated by the scalar polarisation is only suppressed by $n_0$ relative to the Newtonian force. As before, we neglect this correction and use the GR stress-energy tensor in the calculation of the graviton emission rate. 
The final expression for $dE/dt$ in the DGP theory can be written in the compact form \begin{equation} \begin{split} \frac{dE}{dt}= \frac{32G}{5} \mu^2 a^4\Omega^6 \sum_{n=1}^{\infty} n^6\sqrt{1+\frac{\tilde{n}_0^2}{n^2}}\left[f(n,e)\left( \frac{19}{18}-\frac{11}{9}\frac{\tilde{n}_0^2}{n^2}+\frac{2}{9}\frac{\tilde{n}_0^4}{n^4}\right) + \frac{5J^2_n(ne)}{108n^4}\left(1+\frac{\tilde{n}_0^2}{n^2}\right)^2\right]. \end{split} \label{eq:dedt_DGP} \end{equation} To leading order in $\tilde{n}^2_0$, we can write Eq.\ref{eq:dedt_DGP} as \begin{equation} \frac{dE}{dt}\simeq \frac{32G}{5} \mu^2 a^4\Omega^6 \Big[\sum_{n=1}^{\infty}\Big(\frac{19}{18}n^6f(n,e)+\frac{5}{108}n^2J^2_n(ne)\Big)-\tilde{n}_0^2\sum_{n=1}^{\infty}\Big(\frac{25}{36}n^4f(n,e)-\frac{25}{216}J^2_n(ne)\Big)\Big]+\mathcal{O}(\tilde{n}_0^4). \end{equation} \section{Constraints from observations}\label{Observations} \begin{table}[h] \caption{\label{tableI}Summary of the measured orbital parameters and the orbital period derivative values from observation and GR for PSR B1913+16 \cite{Weisberg:2016jye} and PSR J1738+0333 \cite{Freire:2012mg}. 
The uncertainties in the last digits are quoted in parentheses.} \centering \begin{tabular}{ |l|c|c| } \hline Parameters \hspace{0.01cm} & PSR B1913+16\hspace{0.01cm}&PSR J1738+0333\hspace{0.01cm}\\ \hline Pulsar mass $m_1$ (solar masses) &$1.438\pm 0.001$ &$1.46^{+0.06}_{-0.05}$ \\ Companion mass $m_2$ (solar masses)&$1.390\pm 0.001$ & $0.181^{+0.008}_{-0.007}$\\ Eccentricity $e$ &$0.6171340(4)$ &$(3.4\pm 1.1)\times10^{-7}$ \\ Orbital period $P_b$ (d)&$0.322997448918(3)$&$0.3547907398724(13)$\\ Intrinsic $\dot{P_b}(10^{-12}\rm{ss^{-1}})$ &$-2.398\pm 0.004$ &$(-25.9\pm 3.2)\times 10^{-3}$\\ GR $\dot{P_b}(10^{-12}\rm{ss^{-1}})$ &$-2.40263\pm 0.00005$&$-27.7^{+1.5}_{-1.9}\times 10^{-3}$\\ \hline \end{tabular} \end{table} In this section we constrain the graviton mass from the observation of the orbital period decay of the Hulse-Taylor binary system (PSR B1913+16) and of a pulsar white-dwarf binary system (PSR J1738+0333). The orbital parameters of the two compact binary systems and the orbital period derivative values from observation and GR are given in TABLE \ref{tableI}. The massless graviton has two polarization states, and the rate of energy loss of the compact binary system due to the emission of massless graviton radiation is given by Eq.~(\ref{eq:app10}), which agrees with the Peters-Mathews formula \cite{Peters:1963ux}. \begin{figure} \centering \includegraphics[width=8.0cm,angle=360]{grav2} \caption{Variation of $F(e)$ with the eccentricity.} \label{fig:feynl} \end{figure} The variation of $F(e)=(1-e^2)^{-7/2}\Big(1+\frac{73}{24}e^2+\frac{37}{96}e^4\Big)$ with the eccentricity is shown in FIG.\ref{fig:feynl}; the GR energy loss is strongly enhanced by the eccentricity enhancement factor $F(e)$, whose value is always greater than one for a nonzero eccentricity. 
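As a quick check (a sketch; the eccentricities are those of TABLE~\ref{tableI}), the enhancement factor $F(e)$ can be evaluated directly:

```python
def F(e):
    """Peters-Mathews eccentricity enhancement factor of the GR energy loss."""
    return (1 - e**2) ** (-3.5) * (1 + (73 / 24) * e**2 + (37 / 96) * e**4)

f_circular = F(0.0)      # exactly 1: no enhancement for a circular orbit
f_b1913 = F(0.6171340)   # PSR B1913+16: energy loss enhanced roughly 12x
f_j1738 = F(3.4e-7)      # PSR J1738+0333: essentially no enhancement
```

For the Hulse-Taylor eccentricity $e\simeq 0.617$ one finds $F\approx 11.9$, while the nearly circular orbit of PSR J1738+0333 has $F\simeq 1$.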
A highly eccentric binary orbit has a strong speed variation as the stars move from periastron to apastron, which produces a large amount of radiation in higher harmonics of the orbital frequency. In the following, we compare three massive theories of gravity and find limits on the graviton mass from PSR B1913+16 and PSR J1738+0333. \subsection{Vainshtein radius and limits of linear theory} \label{Vainshtein} We have used the leading order perturbation of the metric for calculating the graviton emission. In linearised Einstein gravity, perturbation theory holds as long as $\kappa h_{\mu \nu}\ll1$. This implies that perturbation theory breaks down at radii smaller than $R_s=2 G M$ of the source. If the Fierz-Pauli and no-vDVZ theories are effective field theories describing gravity, with a non-linearly realised diffeomorphism symmetry, then there will inevitably be interactions below the scale $\Lambda_5\sim (m_g^4M_{pl})^{1/5}$, and these set the Vainshtein limit of these linearised theories \cite{Arkani-Hamed:2002bjr}. Therefore, the smallest radius down to which perturbation theory can be applied is the Vainshtein radius \cite{Vainshtein:1972sx}, given by \begin{equation} R_V= \left( \frac{R_s}{ m_g^4}\right)^{1/5}\,. \label{rvFP} \end{equation} The Vainshtein radius is much larger than $R_s$, and perturbative calculations in the Fierz-Pauli theory are valid in the region $r>R_V$ away from the source. In our application to binary pulsar radiation, the gravitational field is classically evaluated in the radiation zone, such that $R_V < \lambda$ (where $\lambda \sim \pi/\Omega$ is the wavelength of the gravitational waves radiated). In the FP theory this implies that we must have \begin{equation} \lambda \sim \pi \Omega^{-1} > R_V= \left( \frac{R_s}{ m_g^4}\right)^{1/5}\,. 
\end{equation} We therefore have a lower bound on the graviton mass above which the perturbative calculation is valid, given by \begin{equation} m_g > \frac{\Omega^{5/4}}{\pi^{5/4}} ( 2 GM)^{1/4}. \end{equation} Using the numbers in TABLE \ref{tableI} for PSR B1913+16, we find that the perturbative calculation in the Fierz-Pauli theory is valid for $m_g > 3.06 \times 10^{-22}\, {\rm eV}$. For PSR J1738+0333, the Vainshtein limit gives $m_g>2.456\times 10^{-22}\rm{eV}$ for the validity of the perturbative calculation in the FP theory. For the DGP theory the Vainshtein radius is given by \cite{Dvali:2000hr,Babichev:2013usa} \begin{equation} R_V= \left( \frac{R_s}{ m_g^2}\right)^{1/3}\,. \end{equation} Again we must have $\lambda \sim \pi\Omega^{-1} > R_V$, which gives a lower bound on the graviton mass in the DGP theory above which the perturbative calculation is valid, \begin{equation} m_g> \frac{\Omega^{3/2}}{\pi^{3/2}} (2 GM )^{1/2} \,. \end{equation} This number is $7.84 \times 10^{-24} \,{\rm eV}$ for PSR B1913+16 and $1.406\times 10^{-24}\rm{eV}$ for PSR J1738+0333. 
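The FP-theory number just quoted for PSR B1913+16 can be reproduced to within a few per cent from the period and masses in TABLE~\ref{tableI} (a sketch; the unit conversions and the solar-mass constant are standard values assumed here, so the last digits differ slightly from the text):

```python
import math

HBAR_EV_S = 6.582119569e-16   # hbar in eV*s (standard value, assumed)
GMSUN_S = 4.92549e-6          # G*M_sun/c^3 in seconds (standard value, assumed)

P_b = 0.322997448918 * 86400.0    # orbital period of PSR B1913+16, in s
M = 1.438 + 1.390                 # total mass, in solar masses

Omega = 2 * math.pi / P_b                 # orbital frequency, s^-1
Omega_eV = Omega * HBAR_EV_S              # orbital frequency, in eV
two_GM = 2 * M * GMSUN_S / HBAR_EV_S      # 2GM in eV^-1 (natural units)

# FP Vainshtein bound: m_g > (Omega/pi)^(5/4) (2GM)^(1/4)
m_g_min = (Omega_eV / math.pi) ** 1.25 * two_GM ** 0.25   # ~3e-22 eV
```

This evaluates to $\approx 3.1\times10^{-22}$ eV, in line with the $3.06\times10^{-22}$ eV quoted in the text.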
\subsection{Constraints from observation for FP Theory} \begin{figure}[!htbp] \centering \subfigure[Variation of the orbital period loss with the graviton mass for PSR B1913+16 in FP theory]{\includegraphics[width=3.0in,angle=360]{m1.pdf}\label{subfig:f1}} \subfigure[Comparison of the theoretical value of the orbital period loss with observation for PSR B1913+16 in FP theory in the large $m_g$ limit]{\includegraphics[width=3.0in,angle=360]{m2.pdf}\label{subfig:f2}} \subfigure[Variation of the orbital period loss with the graviton mass for PSR J1738+0333 in FP theory]{\includegraphics[width=3.0in,angle=360]{s1.pdf}\label{subfig:p1}} \subfigure[Comparison of the theoretical value of the orbital period loss with observation for PSR J1738+0333 in FP theory in the large $m_g$ limit]{\includegraphics[width=3.0in,angle=360]{s2.pdf}\label{subfig:p2}} \caption{In the upper panel we show (a) the variation of the orbital period loss with the graviton mass and (b) a comparison of the theoretical value of the orbital period loss with observation for PSR B1913+16 in FP theory. In the lower panel, (c) and (d) show the same for PSR J1738+0333. } \label{fig:extremalx} \end{figure} The massive graviton has five polarization states; of these, the scalar and tensor modes couple to the energy momentum tensor and contribute to the energy loss of the compact binary systems. In the massless limit $m_g\rightarrow 0$ of the FP theory, the extra scalar mode does not decouple and one encounters the vDVZ discontinuity. In FIG.\ref{subfig:f1} and FIG.\ref{subfig:f2} we show the variation of the orbital period loss with the graviton mass for PSR B1913+16, and in FIG.\ref{subfig:p1} and FIG.\ref{subfig:p2} we show the same variation for PSR J1738+0333. The dotted lines denote the corresponding Vainshtein limits for the two binary systems. The red line denotes the analytical result for the orbital period loss in FP theory as obtained above, and the blue line denotes the corresponding GR value. 
The gray band denotes the allowed region of the orbital period loss from observation. In the region $m_g \sim \Omega$ the energy loss falls with increasing $m_g$ as the phase space of the graviton momentum shrinks. There is a region where the theoretical curve passes through the observational band, as shown in FIG.\ref{subfig:f2} and FIG.\ref{subfig:p2}, where the variation of the orbital period derivative is shown together with the observational uncertainty for the two compact binary systems. The corresponding graviton mass range is $m_g\in (6.88-6.96)\times 10^{-19}\rm{eV}$ (FIG.\ref{subfig:f2}) for PSR B1913+16 and $m_g\in (2.31-2.48)\times 10^{-19}\rm{eV}$ for PSR J1738+0333. There is no common mass range where the red line passes through the gray band for both binary systems, for any value of $m_g$. For the FP theory, therefore, the limit on the graviton mass from the observations of PSR B1913+16 together with PSR J1738+0333 comes from the Vainshtein limit $m_g>3.06\times 10^{-22}\rm{eV}$. 
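The phase-space suppression just described can be made concrete with a short sketch (the helper name is hypothetical): the factor $(1-m_g^2/\omega^2)^{1/2}$ in the emission rate, evaluated at the harmonic frequencies $\omega = n\Omega$, becomes $\sqrt{1-n_0^2/n^2}$ with $n_0=m_g/\Omega$, and it vanishes for harmonics with $n\le n_0$:

```python
import math

def phase_space_factor(n, n0):
    """sqrt(1 - n0^2/n^2) for the n-th harmonic, with n0 = m_g/Omega.
    Returns 0 when n*Omega <= m_g: no on-shell graviton can be emitted
    in that harmonic."""
    if n <= n0:
        return 0.0
    return math.sqrt(1.0 - (n0 / n) ** 2)

# massless limit: every harmonic radiates at full strength
assert phase_space_factor(2, 0.0) == 1.0
# m_g = 1.5*Omega: the n=1 harmonic is kinematically forbidden
assert phase_space_factor(1, 1.5) == 0.0
# near the cutoff the surviving harmonics are strongly suppressed
suppression = phase_space_factor(2, 1.9)   # ~0.31
```

For a circular orbit only the $n=2$ harmonic radiates, so emission shuts off entirely once $n_0\ge 2$, which is why the theoretical curve drops as $m_g$ grows past $\Omega$.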
\subsection{Constraints from observation for DGP Theory} \begin{figure}[!htbp] \centering \subfigure[Variation of the orbital period loss with the graviton mass for PSR B1913+16 in DGP theory]{\includegraphics[width=3.0in,angle=360]{m3.pdf}\label{subfig:f3}} \subfigure[Comparison of the theoretical value of the orbital period loss with observation for PSR B1913+16 in DGP theory in the large $m_g$ limit]{\includegraphics[width=3.0in,angle=360]{m4.pdf}\label{subfig:f4}} \subfigure[Variation of the orbital period loss with the graviton mass for PSR J1738+0333 in DGP theory]{\includegraphics[width=3.0in,angle=360]{s3.pdf}\label{subfig:p3}} \subfigure[Comparison of the theoretical value of the orbital period loss with observation for PSR J1738+0333 in DGP theory in the large $m_g$ limit]{\includegraphics[width=3.0in,angle=360]{s4.pdf}\label{subfig:p4}} \caption{In the upper panel we show (a) the variation of the orbital period loss with the graviton mass and (b) a comparison of the theoretical value of the orbital period loss with observation for PSR B1913+16 in DGP theory. In the lower panel, (c) and (d) show the same for PSR J1738+0333. } \label{fig:extremaly} \end{figure} In the DGP theory, the massless limit $m_0\rightarrow 0$ does not simply reproduce the massless result, and here also one encounters the vDVZ discontinuity due to the extra contribution of the scalar graviton. In FIG.\ref{subfig:f3} and FIG.\ref{subfig:f4} we show the variation of the orbital period loss with $m_0$ for PSR B1913+16, and in FIG.\ref{subfig:p3} and FIG.\ref{subfig:p4} we show the same for PSR J1738+0333. The dotted lines denote the corresponding Vainshtein limits for the two binary systems, which are $m_g>7.84\times 10^{-24}\rm{eV}$ for PSR B1913+16 and $m_g>1.406\times 10^{-24}\rm{eV}$ for PSR J1738+0333. 
As in the FP theory, in the DGP theory there is also a region where the theoretical prediction crosses the observed band, corresponding to the graviton mass $m_0\in(2.45-2.47)\times 10^{-19}\rm{eV}$ (FIG.\ref{subfig:f4}) for PSR B1913+16 and $m_0\in(0.31-1.41)\times 10^{-19}\rm{eV}$ for PSR J1738+0333. Since for the DGP theory also there is no common mass range for the two binary systems, we obtain the graviton mass bound from the Vainshtein limit as $m_0 >7.84\times 10^{-24}\rm{eV}$. \subsection{No vDVZ discontinuity theory} \begin{figure}[!htbp] \centering \subfigure[Variation of the orbital period loss with the graviton mass for PSR B1913+16 in the massive gravity theory without vDVZ discontinuity]{\includegraphics[width=3.0in,angle=360]{m5.pdf}\label{subfig:f5}} \subfigure[Comparison of the theoretical value of the orbital period loss with observation for PSR B1913+16 in the massive gravity theory without vDVZ discontinuity for higher graviton mass]{\includegraphics[width=3.0in,angle=360]{m6.pdf}\label{subfig:f6}} \subfigure[Comparison of the theoretical value of the orbital period loss with observation for PSR B1913+16 in the massive gravity theory without vDVZ discontinuity for lower graviton mass] {\includegraphics[width=3.2in,angle=360]{m7.pdf}\label{subfig:f7}} \caption{We plot the variation of the orbital period loss with the graviton mass for PSR B1913+16 in the massive gravity theory without vDVZ discontinuity in (a). In (b) and (c) we compare the theoretical value of the orbital period loss with observation for PSR B1913+16 in this theory for higher and lower graviton masses, respectively. 
} \label{fig:extremalz} \end{figure} \begin{figure}[!htbp] \centering \subfigure[Variation of the orbital period loss with the graviton mass for PSR J1738+0333 in the massive gravity theory without vDVZ discontinuity] {\includegraphics[width=3.0in,angle=360]{s5.pdf}\label{subfig:p5}} \subfigure[Comparison of the theoretical value of the orbital period loss with observation for PSR J1738+0333 in the massive gravity theory without vDVZ discontinuity]{\includegraphics[width=3.0in,angle=360]{s6.pdf}\label{subfig:p6}} \caption{We plot the variation of the orbital period loss with the graviton mass for PSR J1738+0333 in the massive gravity theory without vDVZ discontinuity in (a). In (b) we compare the theoretical value of the orbital period loss with observation for PSR J1738+0333 in this theory. } \label{fig:extremalz1} \end{figure} Section \ref{FPa} describes a special case of a massive gravity theory without vDVZ discontinuity at linear order. If one tunes the Fierz-Pauli term $(h_{\mu\nu}h^{\mu\nu}-h^2)$ to $(h_{\mu\nu}h^{\mu\nu}-\frac{1}{2}h^2)$, then at linear order the ghost term with tachyonic mass cancels the scalar contribution to the propagator. Hence we are left with a propagator whose tensor structure is the same as for the massless graviton, but with the dispersion relation of a massive graviton. Due to this cancellation there is no vDVZ discontinuity in the $n_0\rightarrow 0$ limit. All our calculations in this paper are at linear order. At nonlinear order, however, there are interactions which will not eliminate the vDVZ discontinuity, and the ghost will remain in the theory. In the massive gravity theory without vDVZ discontinuity the scalar mode is cancelled by the ghost mode; however, there is still a Vainshtein radius in this theory, similar to the FP theory, as given in Eq.~(\ref{rvFP}). 
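For a circular orbit only the $n=2$ harmonic of Eq.~(\ref{eq:dedt_FPd}) survives and the $J_2(2e)$ term drops out, so the energy loss relative to its GR value reduces to $\sqrt{1-n_0^2/4}\,\big(1+n_0^2/3+n_0^4/96\big)$. A short numerical sketch of this ratio (the overall GR normalization is assumed to cancel) makes the smooth massless limit explicit:

```python
import math

def no_vdvz_ratio(n0):
    """dE/dt relative to GR for a circular binary in the a=1/2 (no-vDVZ)
    theory, from the n=2 term of the energy-loss formula; n0 = m_g/Omega < 2."""
    return math.sqrt(1 - n0**2 / 4) * (1 + n0**2 / 3 + n0**4 / 96)

assert no_vdvz_ratio(0.0) == 1.0   # smooth massless limit: no vDVZ jump
r_small = no_vdvz_ratio(0.1)       # ~1.002: tiny O(n0^2) correction
r_large = no_vdvz_ratio(1.9)       # ~0.73: reduced by phase space near cutoff
```

This is the degeneracy with the massless theory at small graviton mass that is visible in the figures: the correction enters only at $\mathcal{O}(n_0^2)$.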
In FIG.\ref{fig:extremalz} and FIG.\ref{fig:extremalz1} we show the variation of the orbital period loss with the graviton mass for the two compact binary systems; in the low graviton mass limit, the orbital period loss for this theory and for the massless theory become degenerate. There exist two regions where the theoretical prediction agrees with the observational band. For PSR B1913+16 these correspond to the graviton mass ranges $m_g\in (6.32-6.50)\times 10^{-19}\rm{eV}$ and $m_g< 1.81 \times 10^{-20}\rm{eV}$ (FIG.\ref{fig:extremalz}). For PSR J1738+0333, the corresponding graviton mass ranges are $m_g\in(2.18-2.34)\times 10^{-19}\rm{eV}$ and $m_g < 5.29\times 10^{-20}\rm{eV}$ (FIG.\ref{fig:extremalz1}). Here, for the two binary systems, we find a common graviton mass region consistent with both observations, and the bound on the graviton mass is $m_g< 1.81 \times 10^{-20}\rm{eV}$. All the bounds derived in the paper are at $68\%$ C.L. \section{Conclusions}\label{Conclusions} In this paper we put constraints on three massive gravity theories from binary pulsar observations. We show that the bounds on the graviton mass from binary observations are highly model dependent, as the predictions for the gravitational luminosity in different graviton mass models have significant differences. In massive gravity theories like FP and DGP, with an extra propagating scalar, the contribution of the extra scalar to the energy loss is of the same order as that of the tensor gravitational waves, and the region $m_g < \Omega$ is ruled out by binary pulsar observations. As the graviton mass approaches and becomes larger than $\Omega$, the energy radiated drops with increasing mass. For each binary system there is therefore a range of graviton mass where the theoretical predictions are within observational limits. We found that the allowed ranges of graviton mass from PSR B1913+16 and PSR J1738+0333 do not have any overlap. 
Therefore, combining the observations from the two pulsars, we see that no range of graviton mass is consistent with both pulsar observations. In these theories the linear order calculation breaks down below a Vainshtein radius. The bound on the graviton mass from the Vainshtein limit is a theoretical bound, whereas we describe an independent method of obtaining a mass bound from observation. In this paper we have chosen two binary systems, PSR B1913+16 and PSR J1738+0333, and computed the orbital period loss for three massive gravity theories, viz.\ the Fierz-Pauli theory, the DGP theory and the modified Fierz-Pauli theory. Comparing with the observational data, we did not find any overlapping region of graviton mass for the FP and DGP theories. For example, in the DGP theory the allowed ranges of mass are $(2.45-2.47)\times 10^{-19}$ eV for PSR B1913+16 and $(0.31-1.41)\times 10^{-19}$ eV for PSR J1738+0333. So there is no common allowed mass range valid for both compact systems, and we cannot extract a universal graviton mass from observation in the DGP theory; the same holds for the FP theory. Therefore, we conclude that for the FP and DGP theories the Vainshtein limit puts the stronger bound on the graviton mass. Without comparing the observational data with our calculation, one cannot tell whether the Vainshtein limit puts the stronger limit on the graviton mass. For the modified FP theory with no vDVZ discontinuity, on the other hand, we found a common mass region for the two binary systems and put a bound on the graviton mass by comparing the observational data with our analytical calculations. To summarise, for the FP and DGP theories the observations from PSR B1913+16 and PSR J1738+0333 together rule out all values of the graviton mass, and from the Vainshtein limit we can put the lower bounds $m_g > 3.06 \times 10^{-22 }$ eV for the FP theory and $m_0 > 7.84 \times 10^{-24}$ eV for the DGP theory. For the no-vDVZ discontinuity theory, the upper bound from the combined PSR B1913+16 and PSR J1738+0333 data is $m_g < 1.81 \times 10^{-20}$ eV. 
In \cite{Finn:2001qi} the authors used a classical multipole expansion of the metric perturbation and kept terms in the expression for the energy loss up to $\mathcal{O}(m^2_g)$. In our paper, instead, we use an effective field theory approach in which we treat the graviton as a quantum field and the binary stars as its classical source, and we compute the graviton emission rate. Graviton emission is not possible for $\Omega< m_g$, and this is taken care of by the factor $(1-m_g^2/\Omega^2)^{1/2}$ in the expression for the emission rate. In our study the hierarchy of scales is \begin{equation} \frac{a^2}{\lambda^2}\sim \frac{R_s}{a} \ll \frac{a^2}{R_V^2} \ll 1 \ll \frac{R_V}{R_s}, \end{equation} where $R_s\sim 2M/M_{pl}^2$ and $R_V$ are the usual Schwarzschild and Vainshtein radii around a compact object of mass $M$, $a$ is the orbital separation of the binary, and $\lambda$ is the wavelength of the emitted GW radiation. The condition for graviton emission, $\Omega> m_g$, implies that $a< R_V$. This corresponds to a region of space screened by the Vainshtein mechanism. Therefore, we can use the Keplerian orbit of GR in the evaluation of the stress-energy tensor $T_{\mu\nu}$. Thus we neglect the corrections to the gravitational potential energy from the screened scalar mode, which are of $\mathcal{O}(n_0)$ for the FP and DGP theories. Our results are therefore approximate and not valid to all orders in $n_0$. These corrections to the Newtonian gravitational potential might change some order unity numerical factors, but the order of magnitude of the bounds on the graviton mass is expected to be the same as we obtained. The objective of \cite{deRham:2012fw} is different from ours: there, the decoupling limit of the DGP theory is considered, i.e. $M_{pl}\rightarrow \infty$ and $m_g\rightarrow 0$ keeping $m_g^2 M_{pl}$ fixed, in which the helicity-2 modes are decoupled from the helicity-0 mode. 
We, however, keep $m_g$ finite. The key difference in our analysis is that we explore the regime $\Omega R_V\ll 1$, so that the radiation is described by the linear theory, whereas \cite{deRham:2012fw} works in the opposite regime $\Omega R_V\gg 1$, where the radiation is Vainshtein screened. Also, there the authors used the classical multipole expansion method to obtain monopole, dipole and quadrupole corrections at leading and subleading orders. Our method, as mentioned earlier, is therefore quite different from theirs. It should be noted that the upper bound on the graviton mass depends on the length scale of the observation; in fact, for the DGP theory the mass of the graviton is scale dependent. Naturally, different observations will give different bounds on the mass of the graviton. The bounds on the graviton mass mentioned in \cite{deRham:2016nuf} and \cite{Shao:2020fka} are obtained for the cubic galileon model, which was originally derived from the decoupling limit of the DGP model. In our work, however, we have considered the actions for the FP, DGP and modified FP theories from first principles and calculated the energy loss from the binary system using tree level Feynman diagram techniques. The bounds on the graviton mass that we have obtained are weaker than those for cubic galileon models; however, our results are comparable with the LIGO bound from the direct detection of gravitational waves. Moreover, the calculations of the energy loss that we have derived from Feynman diagram techniques are novel and provide interesting results. There are other massive gravity theories, like those with a Lorentz violating graviton mass \cite{Rubakov:2004eb, Dubovsky:2004sg, Rubakov:2008nh} and more general Lorentz violating graviton bilinear terms \cite{Kostelecky:2016kfm, Kostelecky:2016uex}, which are not covered by the Lorentz covariant calculation in this paper. We will address these theories in a separate publication. 
The diagrammatic method can also be used for computing the waveform of gravitational waves observed in direct detection experiments like LIGO and Virgo \cite{TheLIGOScientific:2014jea, TheVirgo:2014hva}. Gravitational waves from extreme mass ratio mergers in massive graviton theories can also constrain the mass of the graviton \cite{Cardoso:2018zhm}. It will be interesting to test the predictions of massive gravity theories \cite{Will:1997bb,Larson:1999kg} with direct observations, and in particular to constrain the scalar and vector modes of gravity from direct detection \cite{TheLIGOScientific:2016src}. \section*{Acknowledgements} The authors are indebted to Vitor Cardoso for pointing out useful constraints. The authors would also like to thank the anonymous referee for useful comments and suggestions.
\section{Introduction.} When one is dealing with classical field theories on a spacetime, the metric may appear as a given background field or it may be a genuine dynamic field satisfying the Einstein equations. The latter theories are often generally covariant, with the spacetime diffeomorphism group as symmetry group, but the former often are considered to have only the isometry group of the metric as a symmetry group. However, \cite{Kuchar1973} (see also \cite{IsKu1985}) indicated how theories with a background metric can be {\bfi parametrized}, that is, considered as theories that are fully covariant, if one introduces the diffeomorphisms themselves as dynamic fields. The goal of this paper is to develop this idea in the context of multisymplectic classical field theory and to make connections with stress-energy-momentum (``SEM'') tensors. As we shall see, the multimomenta conjugate to these new {\bfi covariance fields} form, to borrow a phrase from elasticity theory, the Piola--Kirchhoff version of the SEM tensor, and their Euler--Lagrange equations are vacuously satisfied by virtue of the fact that the SEM tensor is covariantly conserved. Thus these fields have no physical content; they serve only to provide an efficient way of parametrizing a field theory. Nonetheless, the resulting generally covariant field theory has several attractive features, chief among which is that it is fully dynamic---all fields satisfy Euler--Lagrange equations. Structurally, such theories are much simpler to analyze than ones with absolute objects or noncovariant elements. We emphasize that the results of this paper are for those field theories whose Lagrangians are built from dynamic matter or other fields and a non-dynamic background metric. One of our motivations was to find a way to treat background fields and dynamic fields in a unified way in the context of the adjoint formalism. 
Many of the ideas are applicable to a wider range of field theories, as \cite{Kuchar1973} already indicates, but in this paper we confine ourselves to this important class. The general case is presented in \cite{GoMa2008b} along with a more detailed discussion of parametrization theory and related topics. \section{The Covariance Construction.} Suppose that we have a metric field theory in which the metric is an absolute object in the sense of \cite{Anderson1967}. For instance, one might consider a dynamic electromagnetic field propagating on a Schwarzschild spacetime. Such a theory is not generally covariant, because the spacetime is fixed, and not all fields are on an equal footing, as the electromagnetic field is dynamic while the gravitational field is not. A somewhat different example is provided by Nordstr\"{o}m's theory of gravity (see \S17.6 of \cite{MiThWh1973}), which is set against a Minkowskian background. In this section we explain how to take such a system and construct from it an equivalent field theory that achieves the following goals: \begin{description} \item[\hspace{.75in} (I)] The new field theory is generally covariant, and \item[\hspace{.75in} (II)] All fields in the new field theory are dynamic. \end{description} This ``covariance construction'' is an extension and refinement of the param\-etrization procedure introduced by \cite{Kuchar1973}. \paragraph{\large Setup.} As usual for a first order classical field theory, we start with a bundle $Y \rightarrow X $ whose sections, denoted $\phi$, are the fields under consideration. The dimension of $X$ is taken to be $n + 1$, and we suppose that $X$ is oriented. Let \[ \mathcal{L} : J^1 Y \to \Lambda^{n+1} X \] be a Lagrangian density for this field theory, where $J^1 Y$ is the first jet bundle of $Y $ and $\Lambda^{n+1} X$ is the space of top forms on $X$. 
Loosely following the notation of \cite{GoMa1992} or \cite{GoMa2008a}, we write coordinates for $ J^1 Y$ as $ \left( x^\mu, y^A, y^A{}_{\mu} \right)$. In addition, in coordinates, we shall write $$\mathcal{L} = L\! \left( x^\mu, y^A, y^A{}_{ \mu} \right) \! d^{\ps n+1} \ns x.$$ Evaluated on the first jet prolongation of a section $\phi$, the Lagrangian becomes a function of $ \left( x^\mu, \phi^A, \phi^A{}_{, \mu}\right)$; we shall abbreviate this when convenient and simply write $\mathcal{L} ( j^1\ns\phi )$. We assume that the fields $\phi$ are dynamic. \paragraph{Example.} We will intersperse the example of electromagnetism throughout the paper to illustrate our results. Then $Y$ is the cotangent bundle of 4-dimensional spacetime $X$, sections of which are electromagnetic potentials $A$. The corresponding Lagrangian is written below. \hfill $\blacklozenge$ \paragraph{\large A First Attempt at General Covariance.} Suppose that the spacetime $X$ comes equipped with a fixed, background metric $g$. The obvious first step in attaining general covariance is to allow $g$ to vary; thus the metric will now be regarded as a genuine \emph{field} $G$ on $X$. (When the metric is regarded as variable, we denote it by $G$, and when we want to revert to its fixed value we use $g$.) So we are led to view the Lagrangian density as a map \[ \mathcal{L} : J^1 Y \times {\rm Lor}(X) \to \Lambda^{n+1} X \] where $ {\rm Lor}(X)$ is the bundle whose sections are Lorentz metrics on $X$. We correspondingly write $\mathcal{L} ( j^1\ns \phi\, ; G )$; the semicolon is used to separate the dynamic from the nondynamic fields. (We emphasize that $G$ being variable does not mean that it is dynamic; we discuss this point momentarily.) Notice that we have tacitly assumed that the dependence of $\mathcal{L}$ on the metric is pointwise---that is, we have non-derivative coupling. (The more general case of derivative coupling will be considered in \S5. 
In any event, we remark that derivatively-coupled theories are considered by many to be pathological.) \paragraph{Example.} The electromagnetic Lagrangian density \[ \mathcal{L}\colon J^1(T^*X) \times \mathrm{Lor}(X)\to \Lambda ^{4} X\] is \begin{equation} \mathcal{L}(j^1 \ns A \, ; G )= -\frac14 G^{\mu\alpha}G^{\nu\beta}F_{\alpha\beta} F_{\mu \nu}\sqrt{-G} \, d^{\ps 4}\ns x \label{oldEMLag} \end{equation} where $F_{\mu \nu} = A_{\nu, \mu} - A _{\mu, \nu}.$ \hfill $\blacklozenge$ \bigskip Next, assume that the given Lagrangian density $\mathcal{L}$ has the following (eminently reasonable) covariance property for a diffeomorphism $\sigma: X \rightarrow X $: \begin{equation} \label{cov} \sigma _{\ast}\ns \left( \mathcal{L} ( j^1\ns\phi\, ; G ) \right) = \mathcal{L} \big( j^1\ns(\sigma_Y( \phi))\, ; \sigma_{\ast} G \big) \end{equation} where we assume that a way to lift the spacetime diffeomorphism $\sigma$ to a bundle automorphism $\sigma_Y$ of $Y$ has been chosen. \paragraph{Example.} For the electromagnetic 1-form potential $A$, we take the lift to be push-forward on the fiber, which makes it obvious that \eqref{cov} holds in this case. \hfill $\blacklozenge$ \bigskip When condition \eqref{cov} holds, we say that the theory is {\bfi generally covariant}, i.e., the Lagrangian density is ${\rm Diff}(X)$-equivariant. Thus we have accomplished objective (I). However, the reader may well remark that this was `too easy,' and would be quite right. The problem is that it is not clear how, or even {\it if}, $G$ can now be made dynamic. Certainly, $G$ cannot be taken to be variational unless one adds a source term to the Lagrangian density for $G$, for otherwise \[ \frac{\partial L}{\partial G_{\mu\nu}} = \frac{\delta L}{\delta G_{\mu\nu}} = 0 \] as the metric non-derivatively couples to the other fields. But what should this source term be? If $G$ is gravity, we could use the Hilbert Lagrangian, but otherwise this is unclear. 
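Pull-backs of metrics, central to the covariance property \eqref{cov} and to the construction that follows, are easy to experiment with in coordinates. The snippet below is a toy illustration of our own (not part of the theory): it computes the pull-back of the Euclidean metric under the polar-to-Cartesian map, recovering the familiar flat metric in polar form.

```python
import sympy as sp

# Toy illustration (ours): the coordinate formula for a pulled-back metric,
#   (sigma^* g)_{mu nu} = sigma^a_{,mu} sigma^b_{,nu} g_{ab},
# checked for the polar-to-Cartesian map with g the Euclidean metric.
r, th = sp.symbols('r theta', positive=True)
x = [r, th]                           # coordinates on the source
sigma = [r*sp.cos(th), r*sp.sin(th)]  # the map, in coordinates
g = sp.eye(2)                         # Euclidean metric on the target

# Jacobian sigma^a_{,mu}
J = sp.Matrix(2, 2, lambda a, m: sp.diff(sigma[a], x[m]))

pullback = sp.simplify(J.T * g * J)
print(pullback)  # Matrix([[1, 0], [0, r**2]]): the flat metric in polar form
```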
\paragraph{\large The Covariance Field.} The solution to our problem requires more subtlety. We will sidestep both the issues of making $g$ variable, and then making $G$ dynamic, in one fell swoop as follows. We introduce an entirely new field, the ``covariance field'' into the theory. It will `soak up' the arbitrariness in $G$, and will be dynamic. In this way we are able to generate a new generally covariant field theory, physically equivalent to the original one, in which all fields are dynamic. Here is the construction. The key idea is to introduce a copy $(S,g)$ of spacetime into the fiber of the configuration bundle. Consider (oriented) diffeomorphisms $\eta: X \to S$, thought of as sections of the bundle $S \times X \to X$. We regard the diffeomorphisms $\eta$ as new fields, and correspondingly replace the configuration bundle by $\widetilde Y = Y \times_X (S \times X) \to X$. Next, modify $\mathcal{L}$ to get the new Lagrangian $\widetilde{\mathcal{L}}$ defined on $J^1\widetilde Y$: \begin{equation} \label{newLag} \widetilde{\mathcal{L}} ( j^1\ns\phi, j^1\ns\eta ) = \mathcal{L} ( j^1\ns\phi; \eta ^{\ast} g ). \end{equation} Thus, we obtain a modified field theory with the underlying bundle $\widetilde Y$. The general set up is shown in the figure below. \begin{figure}[ht] \begin{center} \includegraphics[scale=0.9,angle=0]{CovarianceFig.pdf} \end{center} \end{figure} \vskip -40pt \begin{center} {\footnotesize The general set up for the introduction of covariance fields.} \end{center} \bigskip Let coordinates on $S$ be denoted $u^a$ and the associated jet coordinates be denoted $u ^a {} _{\mu}$. Then, writing $\mathcal L = L\, d^{\ps 4}x$ and similarly for $\widetilde {\mathcal L}$, in coordinates equation \eqref{newLag} reads \begin{equation}\label{widetL} \widetilde{{L}} \left( x^\mu, y^A, y^A{}_{\mu}, u^a, u ^a {} _{\mu} \right) = L\! \left( x^\mu, y^A, y^A{}_{\mu}\, ; G _{\mu \nu} \right)\! 
, \end{equation} where from the definition of pull-back \[ ( \eta ^{\ast} g )_{\mu\nu } (x ) = \eta ^{a}{}_{, \mu} (x) \eta ^{b}{}_{, \nu} (x) g _{a b } ( \eta (x) ) \] we obtain \begin{equation}\label{G} G_{\mu\nu} = u ^a {} _{\mu} u ^b {} _{\nu} \, g _{a b }. \end{equation} From \eqref{widetL} one verifies that the Euler--Lagrange equations for the fields $\phi^A$ remain unchanged. \paragraph{Example.} For the electromagnetic field, our construction produces \begin{equation} \tilde{\mathcal L}(j^1\ns A ,j^1 \eta )= -\frac14 g^{ac} g^{bd} \kappa^\mu{}_c \kappa^\alpha{}_a \kappa^\nu{}_d \kappa^\beta{}_b\ps F_{\mu\nu} F_{\alpha\beta} \, \sqrt{-g}\, (\det \eta_*) \, d^{\ps 4}\ns x \label{newEM Lag} \end{equation} where $\eta_*$ is the Jacobian of $\eta$ and $\kappa = \eta^{-1}.$ \hfill $\blacklozenge$ \bigskip We pause to point out the salient features of our construction. First, the fixed metric $g$ on spacetime is no longer regarded as living on $X$, but rather on the copy $S$ of $X$ in the fiber of the configuration bundle $\widetilde Y$. So $g$ is no longer considered to be a field---it has been demoted to a mere geometric object on the fiber $S$. Second, the variable metric $G$ on $X$ is identified with $\eta^*g$, and thus acquires its variability from that of $\eta$. So $G$ as well is no longer a field per se, but simply an abbreviation for the quantity $\eta^*g$. Finally, we gain a field $\eta$ which we allow to be dynamic; in the next subsection we will see that this imposes no restrictions on the theory at all. The first key observation is that the modified theory is indeed generally covariant. To this end, recall that, as was explained earlier, given $\sigma \in \mathrm{Diff}X$, there is assumed to be a lift $\sigma _Y \colon Y \to Y$. For the trivial bundle $S \times X$, we define \begin{eqnarray} \sigma _{S}\colon S \times X &\to& S \times X \nonumber \\ (u,x)& \mapsto & (u,\sigma (x)). 
\label{psiS} \end{eqnarray} \begin{theorem} The Lagrangian density $\widetilde{\mathcal{L}}\colon J^1(Y \times_X (S\times X)) \to \Lambda ^{n+1}X$ is $\operatorname{Diff} (X) $-equivariant, that is, \[ \sigma _{\ast} \Big( \widetilde{\mathcal{L}} \big( j^1\ns\phi, j^1\ns\eta \big) \Big) = \widetilde{\mathcal{L}} \left( j^1\ns( \sigma_Y(\phi)), j^1\ns( \sigma_S(\eta) )\right). \] \end{theorem} \begin{proof} This is an easy consequence of the definitions \eqref{newLag} and \eqref{psiS}, and the covariance assumption \eqref{cov}. Indeed \begin{align*} \widetilde{\mathcal{L}} \left( j^1\ns( \sigma_Y(\phi)), j^1\ns( \sigma_S(\eta) )\right) &= {\mathcal{L}} \left( j^1\ns( \sigma_Y(\phi))\, ; (\eta \circ \sigma^{-1} )^*g\right) \\[1.5ex] &= {\mathcal{L}} \left( j^1\ns( \sigma_Y(\phi))\, ; (\sigma^{-1} )^*(\eta^* g)\right) \\[1.5ex] &= \sigma_*\!\left(\mathcal{L}(j^1 \ns \phi)\, ; (\eta ^* g))\right)\\[1.5ex] &= \sigma_*\!\left(\widetilde {\mathcal{L}} (j^1 \ns \phi, j^1 \ns\eta )\right). \end{align*} \vskip -24pt \end{proof} Because of this property, we call $\eta$ the {\bfi covariance field}. \paragraph{Example.} From \eqref{newEM Lag} it is clear that the modified electromagnetic theory is generally covariant. \hfill $\blacklozenge$ \section{The Dynamics of the Covariance Field.} Next we will show something remarkable: the Euler--Lagrange equation for the covariance field $\eta$ is vacuous. This is the main reason that, in the present context, we can introduce $\eta$ as a dynamic field with impunity, namely, its Euler--Lagrange equation does not add any new information to, or impose any restrictions upon, the system. Since, as we mentioned earlier, the Euler--Lagrange equations for the fields $\phi^A$ remain unaltered, we see that \emph{the parametrized system is physically equivalent to the original system.} First we compute the multimomenta conjugate to the field $\eta$ for the parame\-trized field theory with Lagrangian $\widetilde{\mathcal{L}}$. 
Recall that in multisymplectic field theory, the multimomenta conjugate to the multivelocities $u^a{}_{\mu}$ are defined by \begin{equation*} \rho _a {} ^{\mu} = \frac{\partial \widetilde{L } }{\partial u ^a {} _{\mu}}. \end{equation*} Using the chain rule together with the relations \eqref{widetL} and \eqref{G}, we find that \begin{equation} \label{pkmom} \rho _a {} ^{\mu} = 2 \ps \frac{\partial L }{\partial G _{\mu \nu } } \ps u ^b{} _{\nu } g _{ab}. \end{equation} Recall from \cite{GoMa1992} that, as we have assumed that $G$ is the only nondynamic field, and does not derivatively couple to the other fields, the SEM tensor density for the \emph{original} system with Lagrangian $L$ and metric $G$ is given by the Hilbert formula: \begin{equation} \label{hsemdef} \mathfrak T ^{\mu \nu} = 2\ps \frac{\delta L }{\delta G _{\mu \nu } } = 2\ps \frac{\partial L }{\partial G _{\mu \nu } }. \end{equation} From \eqref{pkmom} we conclude that the multimomenta conjugate to the covariance field $\eta$ are given by the {\bfi Piola--Kirchhoff SEM tensor density}: \begin{equation*} \rho _a {} ^{\mu} = \mathfrak T ^{\mu \nu} u ^b{} _{ \nu } g _{ab}. \end{equation*} This is a familiar object in elasticity theory. Observe that $\rho_a{}^\mu$ is a two-point tensor density: it has one leg ($a$) in the spacetime $S$ in the fiber---analogous to the spatial representation in elasticity theory, and the other leg ($\mu$) in the spacetime $X$ in the base---analogous to the material representation. Now we compute the Euler--Lagrange equations for the $\eta^a$. These are: \[ \frac{\partial \tilde{L} }{\partial \eta ^a} - \frac{\partial}{\partial x^\mu} \left( \frac{\partial \tilde{L} }{\partial \eta ^a{} _{,\mu}}\right)=0 \] for $a=1,\ldots,\operatorname{dim}X$. 
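The chain-rule computation behind \eqref{pkmom} can be checked symbolically on a toy Lagrangian before expanding the Euler--Lagrange equations. The choice $L=\det G$ and the two-dimensional, Euclidean-$g$ setting below are ours, purely for illustration; the factor of $2$ comes from the two occurrences of $u^a{}_\mu$ in \eqref{G}.

```python
import sympy as sp

# Toy check of eq. (pkmom): dLtilde/du^a_{,mu} = 2 (dL/dG_{mu nu}) u^b_{,nu} g_{ab}.
# L = det G is an arbitrary illustrative choice; g = Euclidean, dimension 2.
x = sp.symbols('x0 x1')
u = [sp.Function('u0')(*x), sp.Function('u1')(*x)]
g = sp.eye(2)

Gsym = sp.Matrix(2, 2, lambda i, j: sp.Symbol(f'G{i}{j}'))  # G entries, independent
L = Gsym.det()
dLdG = sp.Matrix(2, 2, lambda m, n: sp.diff(L, Gsym[m, n]))

J = sp.Matrix(2, 2, lambda a, m: sp.diff(u[a], x[m]))       # u^a_{,mu}
G = J.T * g * J                                             # eq. (G)
sub = {Gsym[i, j]: G[i, j] for i in range(2) for j in range(2)}

Ltilde = L.subs(sub)                                        # eq. (widetL)
lhs = sp.Matrix(2, 2, lambda a, m: sp.diff(Ltilde, J[a, m]))
rhs = sp.Matrix(2, 2, lambda a, m:
                2*sum(dLdG[m, n].subs(sub)*J[b, n]*g[a, b]
                      for n in range(2) for b in range(2)))
print(sp.simplify(lhs - rhs))  # zero matrix: the multimomenta match (pkmom)
```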
Expanding the derivatives via the chain rule and using the same type of calculation as in the derivation of \eqref{pkmom} to write the equations in terms of $L$ rather than $\tilde{L} $, the preceding equation becomes \[ \frac{\partial L}{\partial G_{\mu \nu}}\ps \eta^c{}_{,\mu}\eta^d{}_{,\nu}\ps \frac{\partial g_{cd}} {\partial u^a}- 2\ps \frac{\partial}{\partial x^\mu}\ns \left(\frac{\partial L} {\partial G_{\mu \nu}}\ps \eta ^c{}_{,\nu}g_{ac}\right)=0. \] Replacing $\partial L /\partial G_{\mu \nu}$ by (half of) $\mathfrak T^{\mu \nu}$, and differentiating using the product rule, we obtain \[ \mathfrak T^{\mu \nu}\eta^c{}_{,\mu} \eta^d{}_{,\nu} \ps \frac{\partial g_{cd}}{\partial u^a} -2\left(\frac{\partial \mathfrak T^{\mu \nu}}{\partial x^\mu}\ps \eta^c{}_{,\nu}g_{ac}+\mathfrak T^{\mu \nu} \eta^c{}_{,\mu \nu}g_{ac}+\mathfrak T^{\mu \nu}\eta ^c{}_{,\nu}\ps \frac{\partial g_{ac}}{\partial u^d} \ps \eta^d{}_{,\mu}\right)=0, \] for $a=1,\ldots, \dim X$. Multiplying by the inverse matrix $g^{ab}$ one gets \[ \mathfrak T^{\mu \nu}\eta^c{}_{,\mu} \eta^d{}_{,\nu} \ps \frac{\partial g_{cd}}{\partial u^a}\ps g^{ab} -2\left(\frac{\partial \mathfrak T^{\mu \nu}}{\partial x^\mu}\ps \eta ^b{}_{,\nu}+\mathfrak T^{\mu \nu} \eta^b{}_{,\mu \nu}+\mathfrak T^{\mu \nu}\eta^c{}_{,\nu}\eta^d{}_{,\mu}\frac{\partial g_{ac}}{\partial u^d}\ps g^{ab}\right)=0, \] for $b=1,\ldots,\dim X$. Now we multiply by $\kappa^\rho{}_b$, the inverse matrix of the Jacobian $\eta ^b{}_{,\nu}$: \[ \mathfrak T^{\mu \nu}\eta^c{}_{,\mu} \eta^d{}_{,\nu} \ps \frac{\partial g_{cd}}{\partial u^a}\ps g^{ab}\kappa^\rho{}_b -2\left(\frac{\partial \mathfrak T^{\mu \rho}}{\partial x^\mu}+\mathfrak T^{\mu \nu}\eta ^b{}_{,\mu \nu} \kappa^\rho{}_b +\mathfrak T^{\mu \nu}\eta^c{}_{,\nu}\eta^d{}_{,\mu}\ps \frac{\partial g_{ac}}{\partial u^d}\ps g^{ab}\kappa^\rho{}_b \right)=0, \] for $\rho =1,\ldots,\dim X$. 
Taking into account the symmetry $\mathfrak T^{\mu \nu}=\mathfrak T^{\nu \mu}$, the preceding equation becomes \begin{align*} \mathfrak T^{\mu \nu}\eta^c{}_{,\mu} \eta^d{}_{,\nu} \kappa^\rho{}_b \left( \frac{\partial g_{cd}}{\partial u^a}\ps g^{ab} \right. & \left. - \ \frac{\partial g_{ad}} {\partial u^c}\ps g^{ab} - \frac{\partial g_{ac}}{\partial u^d}\ps g^{ab} \right) \\[2ex] & - \ 2 \left(\frac{\partial \mathfrak T^{\mu \rho}}{\partial x^\mu}+\mathfrak T^{\mu \nu}\eta ^b{}_{,\mu \nu} \kappa^\rho{}_b \right) = 0. \end{align*} Recalling the expression of the Christoffel symbols of the metric $g$, namely, \[ \gamma ^b _{cd}=\frac{1}{2}g^{ab}\left(\frac{\partial g_{ac}}{\partial u^d}+\frac{\partial g_{ad}}{\partial u^c}-\frac{\partial g_{cd}}{\partial u^a}\right), \] we obtain \begin{equation} \label{etaeqn} - 2 \mathfrak T^{\mu \nu}\eta^c{}_{,\mu} \eta^d{}_{,\nu} \gamma ^b _{cd}\ps \kappa^\rho{}_b -2\left(\frac{\partial \mathfrak T^{\mu \rho}}{\partial x^\mu}+\mathfrak T^{\mu \nu}\eta ^b{}_{,\mu \nu} \kappa^\rho{}_b \right) =0. \end{equation} Finally, recall how the Christoffel symbols $\gamma ^b _{cd}$ for $g$ and the symbols $\Gamma ^\rho _{\mu \nu}$ for $G=\eta ^* g$ are related: \begin{equation} \Gamma ^\rho _{\mu \nu} = \frac{\partial ^2u^b}{\partial x^\mu \partial x^\nu}\frac{\partial x^\rho} {\partial u^b} + \frac{\partial u^c}{\partial x^\mu}\frac{\partial u^d}{\partial x^\nu} \ps \gamma ^b _{cd} \ps \frac{\partial x^\rho}{\partial u^b}. \label{Gg} \end{equation} Using this in \eqref{etaeqn} gives \[ -2\left(\frac{\partial \mathfrak T^{\mu \rho}}{\partial x^\mu} + \mathfrak T^{\mu \nu}\Gamma ^\rho _{\mu \nu}\right)=0, \] for $\rho =1,\ldots,\dim X$, which is exactly the vanishing of the covariant divergence of the tensor \emph{density} $\mathfrak T^{\mu \nu}$. Thus, we have proven the following basic result. 
\begin{theorem} The Euler--Lagrange equations for the covariance field $\eta$ are that the covariant divergence of the SEM tensor density $\mathfrak T ^{\mu \nu}$ is zero. \end{theorem} It is known from Proposition 5 in \cite{GoMa1992} that the SEM tensor is covariantly conserved when the metric $G$ is the \emph{only} nondynamic field. Thus, in our context, the equation $\nabla_\mu \mathfrak T^{\mu\nu} = 0$ is an identity, whence \begin{corollary} The Euler--Lagrange equations for the covariance field $\eta$ are vacuously satisfied. \end{corollary} Consequently the covariance field has no physical import. We are free to suppose $\eta$ is dynamic, and so we have accomplished goal (II): we have constructed a new field theory in which \emph{all} fields are dynamic. \section{The SEM Tensor.} It is interesting to compare the SEM tensors for the original and parame\-trized systems. In \cite{GoMa1992} the SEM tensor density $\mathfrak T^\mu{}_\nu$ is defined in terms of fluxes of the multimomentum map $J^{\mathcal L}$ associated to the action of the spacetime diffeomorphism group. We rapidly recount some of the basic ideas. Consider the lift of an infinitesimal diffeomorphism $\xi \in \mathfrak{X}(X)$ to $Y$; it can be expressed as \[ \xi _Y = \xi ^{\mu} \frac{\partial}{\partial x^{\mu}} + \xi ^A \frac{\partial}{\partial y ^A} \] where we suppose that \begin{equation*} \xi ^A = C^{A\rho _1 \dots \rho _k}_{\ \ \nu} \xi ^{\nu}{}_{,\rho _1 \ldots \rho _k} + \ldots + C^{A\rho}_{\ \ \nu} \xi^{\nu}{}_{,\rho}+C^A{}_{\nu} \xi ^{\nu} \end{equation*} for some coefficients $C^{A\rho _1 \dots \rho _k}_{\ \ \nu} , \ldots ,C^{A\rho}_{\ \ \nu} , C^A{}_{\nu}$. The largest value of $k$ for which one of the top coefficients $ C^{A\rho _1 \dots \rho _k}_{\ \ \nu} $ is nonzero is the {\bfi differential index} of the field theory. We assume henceforth that the index is $\leq 1$---the most common and important case (e.g., when the fields are all tensor fields). 
In this context, Theorem 1 along with Remark 4 of \cite{GoMa1992} shows that the SEM tensor density $\mathfrak T$ for a Lagrangian density ${\mathcal{L}}$ is uniquely determined by \begin{equation}\label{DefSEM} \int_ \Sigma i ^\ast _\Sigma (j ^1\ns \phi) ^\ast J ^{ \mathcal{L}} (\xi _Y) = \int _\Sigma \mathfrak{T} ^\mu {}_{\nu} (\phi) \xi ^\nu d ^{\ps n} \ns x _\mu \end{equation} for all vector fields $ \xi$ on $X$ with compact support and all hypersurfaces $ \Sigma $, where $ i _ \Sigma : \Sigma \rightarrow X $ is the inclusion. The multimomentum map $J ^{ \mathcal{L}} $ gives, roughly speaking, the flow of momentum and energy through spacetime; according to the quoted theorem, the fluxes of this flow across hypersurfaces are realized via the SEM tensor density. Manipulation of \eqref{DefSEM} (see formula (3.12) of \cite{GoMa1992}) shows that $\mathfrak T$ is given by \[ {\mathfrak{T}}^{\mu}{}_{\nu} ={L}\delta ^{\mu}{}_{\nu} - \frac{\partial{L}}{\partial \psi ^A{}_{,\mu}} \ps \psi ^A{}_{,\nu} + \frac{\partial{L}}{\partial \psi ^A{}_{,\mu}} \ps C^A{} _\nu + D_{\rho} \!\left(\frac{\partial{L}}{\partial \psi ^A{}_{,\rho}}\ps C^{A\mu}_{\ \ \nu}\right) \] where the summation extends over \emph{all} fields $\psi^A$. We apply this to the newly parametrized theory. Note that if the index of the original theory is $\leq 1$, then that for the parametrized theory will be also. As well from \eqref{psiS} we see that the lift of $\xi$ to $S \times X$ is trivial: \[ \xi ^{a} = 0, \] that is, there are no terms in the $\partial /\partial u^a$ directions in $\xi_{\widetilde Y}$. Thus the corresponding coefficients $C^{a\cdots}_{\ \ \nu}$ all vanish. The SEM tensor for $\widetilde{\mathcal{L}}$ therefore reduces to \[ \widetilde{\mathfrak{T}}^{\mu}{}_{\nu} = \widetilde{L}\delta ^{\mu}{}_{\nu} - \frac{\partial \widetilde{L}}{\partial \phi ^A{}_{,\mu}} \ps \phi ^A{}_{,\nu} + \frac{\partial \widetilde{L}}{\partial \phi ^A{}_{,\mu}} \ps C^A{} _\nu + D_{\rho} \! 
\left(\frac{\partial \widetilde{L}}{\partial \phi ^A{}_{,\rho}}\ps C^{A\mu}_{\ \ \nu}\right) - \frac{\partial \widetilde{L}} {\partial \eta ^a{}_{,\mu}}\ps \eta ^a{}_{,\nu}. \] On the other hand, \[ \frac{\partial \widetilde{L}}{\partial \eta ^a{}_{,\mu}}\ps \eta ^a{}_{,\nu} = 2 \frac{\partial L}{\partial G_{\mu\rho}} \ps \eta ^b{}_{,\rho}\,g_{ab}\,\eta ^a{}_{,\nu} = 2 \ps \frac{\partial L}{\partial G_{\mu\rho}} \ps G_{\rho \nu} \] and \[ \frac{\partial \widetilde{L}}{\partial \phi ^A{}_{,\mu}} = \frac{\partial L}{\partial \phi ^A{}_{,\mu}}, \] so that we can write \begin{equation*} \widetilde{\mathfrak{T}}^{\mu}{}_{\nu} = L\delta ^{\mu}{}_{\nu} - \frac{\partial L}{\partial \phi ^A{}_{,\mu}}\ps \phi ^A{}_{,\nu} + \frac{\partial L}{\partial \phi ^A{}_{,\mu}} \ps C^A{}_\nu + D_{\rho}\! \left(\frac{\partial L}{\partial \phi ^A{}_{,\rho}}\ps C^{A\mu}_{\ \ \nu}\right) - 2 \frac{\partial L}{\partial G_{\mu\rho}}\ps G_{\rho \nu}. \end{equation*} But the first four terms on the RHS of this equation comprise the SEM tensor density of the original theory since the $G_{\mu\nu}$ do not derivatively couple to the $\phi^A$ (cf. eqn. (4.4) in \cite{GoMa1992}). Thus the SEM tensor densities of the original and parametrized systems are related according to: \begin{proposition} \begin{equation*}\label{SEM2} \widetilde{\mathfrak{T}}^{\mu}{}_{\nu} = \mathfrak{T}^{\mu}{}_{\nu} - 2 \frac{\partial L}{\partial G_{\mu\rho}} \ps G_{\rho \nu}. \end{equation*} \end{proposition} But then $\widetilde{\mathfrak{T}}^{\mu}{}_{\nu} = 0$ on shell by the Hilbert formula \eqref{hsemdef}. Therefore, we explicitly see that the SEM tensor density for the fully covariant, fully dynamic modified theory vanishes. One can also obtain this result directly by applying the generalized Hilbert formula (3.13) in \cite{GoMa1992} to the parametrized theory, since it is fully dynamic. 
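The Proposition, and the resulting on-shell vanishing of $\widetilde{\mathfrak T}$, can be made concrete in a toy example of our own choosing: for a massless scalar with $L = \tfrac12 G^{\mu\nu}\phi_{,\mu}\phi_{,\nu}\sqrt{-G}$ (so there are no $C^A{}_\nu$ terms), the first two terms of the canonical expression must equal $2\ps(\partial L/\partial G_{\mu\rho})\ps G_{\rho\nu}$ identically. A symbolic sketch in two dimensions:

```python
import sympy as sp

# Toy check (ours; massless scalar, 2 dimensions) that
#   L delta^mu_nu - (dL/dphi_{,mu}) phi_{,nu} = 2 (dL/dG_{mu rho}) G_{rho nu},
# i.e. Ttilde^mu_nu = 0 for this Lagrangian, as the Proposition predicts.
n = 2
G = sp.Matrix(n, n, lambda i, j: sp.Symbol(f'G{i}{j}'))  # entries independent
phi = [sp.Symbol(f'phi{m}') for m in range(n)]           # the gradients phi_{,mu}
Ginv = G.inv()
L = sp.Rational(1, 2)*sum(Ginv[a, b]*phi[a]*phi[b]
                          for a in range(n) for b in range(n))*sp.sqrt(-G.det())

canonical = sp.Matrix(n, n, lambda m, nu:
                      L*sp.KroneckerDelta(m, nu) - sp.diff(L, phi[m])*phi[nu])
hilbert = sp.Matrix(n, n, lambda m, nu:
                    2*sum(sp.diff(L, G[m, rho])*G[rho, nu] for rho in range(n)))

sym = {G[1, 0]: G[0, 1]}  # impose the symmetry of G after differentiating
diff_sem = sp.simplify((canonical - hilbert).subs(sym))
print(diff_sem)  # zero matrix, i.e. Ttilde vanishes for this Lagrangian
```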
\paragraph{Example.} In the case of electromagnetism, one may compute directly from \eqref{newEM Lag} that $\widetilde{\mathfrak{T}}^{\mu}{}_{\nu} = 0$. One could also compute from \eqref{oldEMLag} that \begin{equation*} {\mathfrak{T}}^{\mu}{}_{\nu} = -\left(\frac14 \delta^{\mu}{}_{\nu} F_{\alpha\beta}F^{\alpha\beta} + F^{\alpha\mu}F_{\nu\alpha}\right)\! \sqrt{-G} = 2 \ps \frac{\partial L}{\partial G_{\mu\rho}} \ps G_{\rho\nu}. \end{equation*} \vskip -32pt \hfill $\blacklozenge$ \section{Derivative Couplings.} Here we briefly consider the situation, although perhaps exotic, when the metric derivatively couples to the other fields. For simplicity, however, we suppose the theory remains first order. So the Lagrangian density is taken to be a map \[ \mathcal{L} : J^1\ns \big(Y \times_X {\rm Lor}(X)\big) \to \Lambda^{n+1} X. \] As before, modify $\mathcal{L}$ to get the new Lagrangian $\widetilde{\mathcal{L}}$ defined on $J^2\widetilde Y$: \begin{equation*} \widetilde{\mathcal{L}} \big( j^1\ns\phi, j^2 \eta \big) = \mathcal{L} \big( j^1\ns\phi \ps ; j^1\ns (\eta ^{\ast} g) \big). \end{equation*} (Since $\eta^* g$ depends upon the first derivatives of $\eta$, $j^1(\eta^* g)$ will depend upon its second derivatives. Thus, we obtain a modified \emph{second} order field theory with the underlying bundle $\widetilde Y$.) The discussion proceeds as in the above, with only obvious changes. In particular, if $\mathcal L$ is Diff$(X)$-covariant, then so is $\widetilde{\mathcal L}$. \paragraph {\bf Example.} As a simple illustration of a derivatively coupled theory, consider a vector meson with mass $m$. Then $Y$ is the tangent bundle of spacetime, and its sections $\phi^{\ps \mu}$ are Klein--Gordon vector fields. 
The Lagrangian density is the map \[ \mathcal{L}\colon J^1\ns \big(TX \times_X \mathrm{Lor}(X)\big)\to \Lambda ^{4} X \] defined by \begin{equation*} \mathcal{L}(j^1\ns \phi \, ; j^1\ns G )= \frac12 G_{\sigma\ns \rho}\Big(G^{\mu \nu}\phi^\sigma{}_{;\mu} \ps \phi^{\ps\rho}{}_{;\nu} - m^2 \phi^\sigma \phi^{\ps\rho} \Big)\sqrt{-G} \, d^{\ps 4}\ns x. \end{equation*} where the semicolon denotes the covariant derivative with respect to $G$. Our construction produces \begin{align*} \tilde{\mathcal L}(j^1 \phi ,j^2 \eta )= &\ \frac12 \eta^c{}_{,\sigma} \eta^d{}_{,\rho} \ps g_{cd} \bigg(\! \kappa^\mu{}_a \kappa^\nu{}_b \ps g^{ab}\Big[\phi^ \sigma{}_{,\mu} + \big(\eta^ g{}_{,\mu\tau} +\eta^e{}_{,\mu} \ps \eta^f{}_{,\tau} \ps \gamma ^ g _{ef} \big) \kappa^ \sigma{}_g\ps \phi^\tau \Big] \nonumber \\[1.5ex] & \times \Big [\phi^{\ps \rho}{}_{,\nu} + \big(\eta^h{}_{,\nu\xi} + \eta^p{}_{,\nu} \ps \eta^q{}_{,\xi} \ps \gamma ^h _{pq} \big) \kappa^\rho{}_h\ps \phi^\xi\Big ] \nonumber \\[1.5ex] & - m^2 \phi^\sigma \phi^{\ps \rho} \bigg)\sqrt{-g}\, (\det \eta_*) \, d^{\ps 4}\ns x \end{align*} where $\eta_*$ is the Jacobian of $\eta$ and we have made use of \eqref{Gg}. \hfill $\blacklozenge$ \bigskip Now we turn to the Euler--Lagrange equations for the $\eta^a$ which, since $\widetilde{\mathcal L}$ is second order in the $\eta^a$, are: \[ \frac{\partial \widetilde{L} }{\partial \eta ^a} - \frac{\partial}{\partial x^\mu} \left( \frac{\partial \widetilde{L} }{\partial \eta ^a{} _{,\mu}}\right) + \frac{\partial^2}{\partial x^\nu \partial x^\mu} \left( \frac{\partial \widetilde{L} }{\partial \eta ^a{} _{,\mu\nu}}\right) =0 \] for $a=1,\ldots,\operatorname{dim}X$. The calculation of the LHS is similar to the previous one, but slightly more complicated. 
In any event, we find that $\eta$ satisfies the Euler--Lagrange equations $\iff$ $\nabla_\mu \mathfrak T^{\mu\nu} = 0$, where now by the Hilbert formula $$\mathfrak T^{\mu\nu} = 2\ps \frac{\delta L }{\delta G _{\mu \nu } } = 2\ps \bigg[\frac{\partial L }{\partial G _{\mu \nu }} - \frac{\partial}{\partial x^\rho} \left(\frac{\partial L }{\partial G _{\mu \nu,\rho } }\right) \bigg]. $$ Thus, for (first-order) derivative couplings, the covariance field remains vacuously dynamic. It is likely that this will remain true for derivative couplings of arbitrary order, but we have not verified this as yet. \paragraph{\Large Acknowledgments.} We dedicate this paper to Darryl Holm on his 60$^{\rm th}$ birthday. We thank him for his interest in the ideas in this paper and for his many inspiring works over the years. MJG and JEM thank the National Science Foundation for its occasional support of work of this sort. MCL was partially supported by DGSIC (Spain) under grant MTM2007-60017.
\section{Introduction, results, and conclusions} \label{sec:intro} A number of experiments aimed at studying parity violation in low-energy processes involving few-nucleon systems are being completed or are in an advanced stage of planning at cold neutron facilities, such as the Los Alamos Neutron Science Center, the NIST Center for Neutron Research, and the Spallation Neutron Source at Oak Ridge. The primary objective of this program is to determine the fundamental parameters of hadronic weak interactions, in particular the strength of the long-range part of the parity-violating (PV) two-nucleon ($NN$) potential, mediated by one-pion exchange (OPE). While such a component is theoretically expected on the basis of the weak interactions between quarks and the spontaneously-broken chiral symmetry of QCD, experimental evidence for its presence has proven to be elusive, and indeed current constraints are inconclusive; for a review, see Ref.~\cite{Haxton08}. In contrast, in the strong-interaction sector OPE dominates the $NN$ potential at internucleon separations larger than 1.5 fm, and the spatial-spin-isospin correlations it induces leave their imprint on many nuclear properties. These include, for example, i) the observed ordering of levels in light nuclei and, in particular, the observed absence of stable systems with mass number $A$~=~8~\cite{Wiringa02}, ii) the single-particle energy spacings and shell structure of medium- and heavy-weight nuclei~\cite{Otsuka05} and, in particular, the observed change in the energy gap between the $h_{11/2}$ and $g_{7/2}$ orbits in tin isotopes~\cite{Schiffer04}, and iii) the relative magnitude of the momentum distributions of $pp$ versus $np$ pairs in nuclei~\cite{Schiavilla07}, which leads to the strong suppression of $(e,e^\prime pp)$ relative to $(e,e^\prime np)$ knock-out cross sections from $^{12}$C, recently measured at Jefferson Lab~\cite{Subedi08}. 
The determination of the parameters that characterize parity violation in nuclei requires evaluating matrix elements of hadronic weak-interaction operators between eigenstates of the strong-interaction Hamiltonian. Thus, experiments in this field are especially reliant on theory for their analysis and interpretation. For this reason, over the last several years, we have embarked on a program aimed at developing a systematic framework for studying PV observables in few-nucleon systems, for which accurate---essentially exact---calculations are possible. Two earlier papers~\cite{Carlson02,Schiavilla04} dealt with the two-nucleon system, and provided a rather complete analysis of the longitudinal asymmetry in $\vec{p}$-$p$ scattering~\cite{Carlson02} up to 300 MeV lab energies, and of a variety of PV observables in the $np$ system~\cite{Schiavilla04}, including, among others, the neutron spin rotation in $\vec{n}$-$p$ scattering and the photon angular asymmetry in the $\vec{n}$-$p$ radiative capture at thermal neutron energies. In the next phase, we have studied the spin rotation in $\vec{n}$-$d$~\cite{Schiavilla08} and $\vec{n}$-$\alpha$~\cite{Arriaga10} scattering at cold neutron energies. Measurements are available for the following PV observables: the longitudinal analyzing power in $\vec{p}$-$p$~\cite{Balzer80}--\cite{Berdoz03} and $\vec{p}$-$\alpha$~\cite{Lang85} scattering, the photon asymmetry and photon circular polarization in, respectively, the $^1$H($\vec{n},\gamma$)$^2$H~\cite{Cavaignac77}--\cite{Gericke09} and $^1$H($n,\vec{\gamma}$)$^2$H~\cite{Knyazkov84} radiative captures, and the neutron spin rotation in $\vec{n}$-$\alpha$ scattering~\cite{Snow09,Bass09}. 
There is also a set of experiments that are currently being planned, including measurements of the neutron spin rotation in $\vec{n}$-$p$~\cite{Snow09} and $\vec{n}$-$d$~\cite{Markoff07} scattering, and of the longitudinal asymmetry in the charge-exchange reaction $^3$He($\vec{n},p$)$^3$H at cold neutron energies~\cite{Bowman07}, the subject of the present paper. At vanishing neutron energies, the only channels entering the incoming $n$-$^3$He scattering state have quantum numbers $^{2S+1}L_J = \,^1$S$_0$ and $^3$S$_1$. In the outgoing $p$-$^3$H scattering state, the relevant channels are: $^{2S+1}L_J = \, ^1$S$_0$, $^3$S$_1$, $^3$D$_1$ with positive parity, and $^3$P$_0$, $^1$P$_1$, $^3$P$_1$ with negative parity. We show (in Sec.~\ref{sec:form}) that the PV observable in this process, {\it i.e.}~the longitudinal analyzing power $A_z$, reads \begin{equation} A_z= a_z \cos\theta \ , \label{eq:aazz} \end{equation} where $\theta$ is the angle between the proton momentum and the neutron beam direction, and the coefficient $a_z$ can be expressed in terms of products of $T$-matrix elements involving (three) parity-conserving (PC) and (three) PV transitions as \begin{widetext} \begin{equation} a_z = -{4\over \Sigma}{\rm Re} \Big( \sqrt{3}\, \overline T^{21,1}_{01,10}\, \overline T^{21,0\, *}_{00,00}- \overline T^{21, 0 }_{00,11}\, \overline T^{21,1\, *}_{01,01} +\sqrt{2}\, \overline T^{21,0 }_{00,11}\, \overline T^{21,1\, * }_{01,21} +\sqrt{6}\, \overline T^{21,1}_{01,11}\, \overline T^{21,1\, *}_{01,01} +\sqrt{3}\, \overline T^{21,1 }_{01,11}\, \overline T^{21,1\, *}_{01,21} \Big)\ , \label{eq:aaz1} \end{equation} \end{widetext} and \begin{equation} \Sigma= \left|\overline{T}^{21, 0}_{00,00}\right |^2 + 3\,\left |\overline{T}^{21, 1}_{01,01}\right |^2 + 3\, \left |\overline{T}^{21,1}_{01,21}\right |^2 \ .
\label{eq:ssss} \end{equation} In $\overline{T}^{21,J}_{LS,L'S'}$ the label $J$ specifies the total angular momentum, the superscripts $21$ denote the charge-exchange transition $n$-$^3$He to $p$-$^3$H (as opposed, for example, to the elastic transition, which would be denoted by the superscripts $22$), the subscripts $LS$ ($L^\prime S^\prime$) are the relative orbital angular momentum and channel spin of the $n$-$^3$He ($p$-$^3$H) clusters, and lastly the overline indicates the inclusion of a convenient phase factor---see Eq.~(\ref{eq:tbar}) below. The PC (PV) $T$-matrix elements have $L+L^\prime$ even (odd), and the sum $\Sigma$ in Eq.~(\ref{eq:ssss}) is proportional to the $^3$He($n,p$)$^3$H cross section. We observe that $a_z$ vanishes if only the channels $^1$S$_0$ and $^3$P$_0$ (with $J=0$) are retained. The $T$-matrix elements are related to the (real) $R$-matrix elements (Sec.~\ref{sec:tma} and Appendix~\ref{app:a1}), and the latter for PC transitions are calculated via the Kohn variational principle with the hyperspherical-harmonics (HH) method~\cite{Kievsky08,Viviani10} (Sec.~\ref{sec:whh}). We use strong-interaction Hamiltonian models, consisting of the Argonne $v_{18}$ (AV18)~\cite{Wiringa95} or chiral (N3LO)~\cite{Entem03} two-nucleon potential in combination with the Urbana IX (UIX)~\cite{Pudliner97} or chiral (N2LO)~\cite{Navratil07} three-nucleon potential. The HH calculation is a challenging one, for two reasons. The first is the coupled-channel nature of the scattering problem: even at vanishing energies for the incident neutron, the elastic $n$-$^3$He and charge-exchange $p$-$^3$H channels are both open. The second is the presence of a $J^\pi=0^+$ resonant state (of zero total isospin) between the $p$-$^3$H and $n$-$^3$He thresholds, which slows down the convergence of the expansion and requires a large number of HH basis functions in order to achieve numerically stable results.
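Eqs.~(\ref{eq:aaz1}) and (\ref{eq:ssss}) are straightforward to encode. The following sketch (Python, with purely illustrative matrix-element values) verifies the observation above that $a_z$ vanishes when only the $J=0$ channels are retained:

```python
import math

SQ2, SQ3, SQ6 = math.sqrt(2), math.sqrt(3), math.sqrt(6)

def a_z(T):
    """Eqs. (aaz1)-(ssss): T maps (J, (L, S), (L', S')) -> complex Tbar."""
    sigma = (abs(T[0, (0, 0), (0, 0)])**2
             + 3*abs(T[1, (0, 1), (0, 1)])**2
             + 3*abs(T[1, (0, 1), (2, 1)])**2)
    bracket = (SQ3*T[1, (0, 1), (1, 0)]*T[0, (0, 0), (0, 0)].conjugate()
               - T[0, (0, 0), (1, 1)]*T[1, (0, 1), (0, 1)].conjugate()
               + SQ2*T[0, (0, 0), (1, 1)]*T[1, (0, 1), (2, 1)].conjugate()
               + SQ6*T[1, (0, 1), (1, 1)]*T[1, (0, 1), (0, 1)].conjugate()
               + SQ3*T[1, (0, 1), (1, 1)]*T[1, (0, 1), (2, 1)].conjugate())
    return -4.0/sigma*bracket.real

# Retain only the J=0 channels: a PC 1S0->1S0 and a PV 1S0->3P0 element.
T = {(0, (0, 0), (0, 0)): 0.8 - 0.3j,    # PC, 1S0 -> 1S0
     (0, (0, 0), (1, 1)): 1e-7 + 2e-8j,  # PV, 1S0 -> 3P0
     (1, (0, 1), (0, 1)): 0j, (1, (0, 1), (2, 1)): 0j,
     (1, (0, 1), (1, 0)): 0j, (1, (0, 1), (1, 1)): 0j}
print(a_z(T) == 0.0)  # True
```

Every product in Eq.~(\ref{eq:aaz1}) mixes a $J=0$ with a $J=1$ element, so retaining only the $J=0$ channels makes each term vanish identically.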
Further discussion of this aspect of the calculations is in Sec.~\ref{sec:whh}, where we also present current predictions for the $n$-$^3$He scattering lengths corresponding to the Hamiltonian models mentioned earlier. They are in good agreement with the measured values. The $R$-matrix elements involving PV transitions are computed in first-order perturbation theory with Quantum Monte Carlo techniques (Sec.~\ref{sec:calc}). We adopt as PV potential the meson-exchange (DDH) model of Desplanques {\it et al.}~\cite{Desplanques80} as well as the pionless effective-field-theory (EFT) model recently derived in Refs.~\cite{Zhu05,Girlanda08} (Sec.~\ref{sec:pv}), and present results for the various components of the DDH and EFT potentials in combination with the AV18, AV18/UIX, N3LO, and N3LO/N2LO Hamiltonians in Sec.~\ref{sec:res}. Additional results for the $R$- and $T$-matrix elements, and combinations thereof entering the PV observable, are listed (for the AV18/UIX) in Appendix~\ref{app:a2} for completeness. For the DDH model only, we also present predictions for $a_z$ corresponding essentially---but see Sec.~\ref{sec:pv} for further details---to the ``best values'' of the $\pi$-, $\rho$-, and $\omega$-meson weak-interaction coupling constants~\cite{Desplanques80}. These predictions range from --9.44 to --2.48 in units of $10^{-8}$ depending on whether the N3LO/N2LO or AV18/UIX Hamiltonian is considered, and thus exhibit a significant model dependence due to cancellations (or lack thereof) between the pion and vector-meson contributions. It is useful to express the asymmetry as \begin{equation} a_z = h_\pi^1 \, C_\pi^1 +h_\rho^0 \, C_\rho^0+h_\rho^1 \, C_\rho^1+ h_\rho^2 \, C_\rho^2 +h_\omega^0\, C_\omega^0 +h_\omega^1\, C_\omega^1 \ ,\label{eq:cddh} \end{equation} where the $h^i_\alpha$'s, $\alpha=\pi$, $\rho$, $\omega$ and $i=0$, 1, 2, denote the PV coupling constants in the DDH model along with the isospin content of the corresponding interaction. 
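As a numerical cross-check, Eq.~(\ref{eq:cddh}) with the coefficients reported in Table~\ref{tb:tcs} and the DDH couplings of Table~\ref{tb:tabpv} reproduces the quoted range (a Python sketch; the tiny difference from $-9.44$ in the N3LO/N2LO case reflects rounding of the tabulated coefficients):

```python
# DDH "best value" couplings, in units of 1e-7 (cf. Table tb:tabpv)
h = {'pi1': 4.56, 'rho0': -16.4, 'rho1': -2.77, 'rho2': -13.7,
     'om0': 3.23, 'om1': 1.94}

# Coefficients C_alpha^i from Table tb:tcs (central values)
C = {'AV18/UIX':  {'pi1': -0.1853, 'rho0': -0.0380, 'rho1': +0.0230,
                   'rho2': -0.0011, 'om0': -0.0231, 'om1': +0.0500},
     'N3LO/N2LO': {'pi1': -0.1110, 'rho0': +0.0379, 'rho1': +0.0194,
                   'rho2': -0.0007, 'om0': +0.0457, 'om1': +0.0408}}

def a_z(model):
    """Eq. (cddh): a_z = sum_i h_i C_i, in units of 1e-7."""
    return sum(h[k]*C[model][k] for k in h)

for model in ('AV18/UIX', 'N3LO/N2LO'):
    print(model, round(10*a_z(model), 2))  # -2.48 and -9.45, in units of 1e-8
```

The strong cancellation between the pion and vector-meson terms in the AV18/UIX case, and its absence for N3LO/N2LO, is the origin of the model dependence noted above.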
The coefficients $C_\alpha^i$ are listed in Table~\ref{tb:tcs}, and depend on the input Hamiltonian used to generate the continuum wave functions, as well as on the assumed values for the PC pion- and vector-meson coupling constants and associated cutoffs (see Table~\ref{tb:tabpv}). \begin{widetext} \begin{center} \begin{table}[bth] \begin{tabular}{c|c||c|c|c||c|c} \hline \hline & $C_\pi^1$ & $C_\rho^0$ & $C_\rho^1$ & $C_\rho^2$ & $C_\omega^0$ & $C_\omega^1$ \\ \tableline AV18 & --0.1892(86) & --0.0364(40) & +0.0193(9) & --0.0006(1) & --0.0334(29) & +0.0413(10) \\ AV18/UIX & --0.1853(150) & --0.0380(70) & +0.0230(18) & --0.0011(1) & --0.0231(56) & +0.0500(20) \\ N3LO & --0.1989(87) & --0.0120(49) & +0.0242(9) & +0.0002(1) & +0.0080(30) & +0.0587(11) \\ N3LO/N2LO & --0.1110(75) & +0.0379(56) & +0.0194(10) & --0.0007(1) & +0.0457(36) & +0.0408(14) \\ \hline \hline \end{tabular} \caption{The coefficients $C_\alpha^i$ entering the PV observable $a_z$, corresponding to the AV18, AV18/UIX, N3LO, and N3LO/N2LO strong-interaction Hamiltonians. The statistical errors due to the Monte Carlo integrations are indicated in parentheses, and correspond to a sample consisting of $\sim 130$k configurations.} \label{tb:tcs} \end{table} \end{center} \end{widetext} The coefficients $C_\alpha^i$ follow from the linear combination given in Eq.~(\ref{eq:aaz1}). Isotensor $\rho$-exchange ($C_\rho^2$) is negligible. The isoscalar and isovector vector-meson exchanges give contributions of the same magnitude, both of which are smaller than that of OPE. Relative to other PV observables, however, the OPE contribution appears significantly suppressed here: in the case of the neutron spin rotation in $\vec{n}$-$d$ scattering, for example, this contribution is calculated to be at least a factor of $\sim 30$ larger than that of any of the $\rho$ and $\omega$ exchanges, which is not the case for the process under consideration.
This may be due to the predominant isoscalar character of the $^1$S$_0$ and $^3$P$_0$ channels---see discussion in Appendix~\ref{app:a2}. The N3LO/N2LO results should be considered preliminary, since the HH solution for the $0^+$ wave function has not yet fully converged (at least as far as the singlet scattering length is concerned, see Sec.~\ref{sec:whh}). This fact may explain why the inclusion of a three-nucleon potential like N2LO~\cite{Navratil07} should reduce $C_\pi^{1}$ by almost a factor of two relative to the other models. This point will be discussed in Secs.~\ref{sec:whh} and~\ref{sec:calc}. Finally, we note that the ``best values'' for the PV coupling constants of the pion and $\rho$-meson are (in units of $10^{-7}$) respectively +4.56 and --16.4, and this leads to the large cancellation (and consequent model dependence) in the values predicted for $a_z$ and referred to earlier. We conclude by observing that the EFT analysis presented in this work could be improved by employing chiral potentials in both the strong- and weak-interaction sectors. At order $Q/\Lambda_\chi$, where $Q$ is the low-energy/momentum scale that characterizes the particular process of interest, and $\Lambda_\chi \simeq 1$ GeV is the chiral-symmetry-breaking scale, the PV potential contains 7 low-energy constants (LECs), 5 of which are associated with four-nucleon contact terms, and the remaining 2 with long-range OPE components~\cite{Zhu05}. When electromagnetic interactions are also introduced, another (unknown) LEC must be included---it is needed to fix the strength of a PV two-body current operator of pion range~\cite{Zhu05}. One can envisage, at least in principle, a suite of experiments involving $A=2$--5 systems, which would constrain, in fact over-constrain, these eight LECs.
Some of these have been mentioned above; additional ones include, for example, measurements of the photon asymmetries in the radiative captures $^2$H($\vec{n},\gamma$)$^3$H and $^3$He($\vec{n},\gamma$)$^4$He. These processes are strongly suppressed: the experimental values for the corresponding (PC) cross sections~\cite{Jurney82,Wolfs89} are, respectively, almost 3 and 4 orders of magnitude smaller than that measured in $^1$H($n,\gamma$)$^2$H. One would naively expect relatively large PV asymmetries in these cases, possibly orders of magnitude larger than in the $A$=2 system. Clearly, accurate theoretical estimates for them could be useful in motivating our experimental colleagues to carry out these extremely challenging measurements. From a theoretical perspective, most of the methodological and technical developments needed to carry out the calculations are already in place. We have recently reported results~\cite{Girlanda10} for the $A=3$ and 4 (PC) captures, using wave functions obtained from the N3LO/N2LO Hamiltonian and electromagnetic currents derived in chiral EFT up to one loop~\cite{Pastore09}, which are in excellent agreement with data. However, there is one aspect of the computation of the proposed PV threshold captures which still needs to be addressed: the determination of the small admixtures induced by the PV potential into the bound and continuum wave functions. Even a first-order perturbative treatment of those admixtures requires construction of the full Green's function for the strong (PC) Hamiltonian, an impractical task. Nevertheless, it may be possible to generate them using correlated basis methods, similar to those employed in Ref.~\cite{Schiavilla87}. \section{The parity-violating observable} \label{sec:form} The neutron energies in the reaction $^3$He($\vec{n},p$)$^3$H of interest here are in the meV range, and at these energies only two channels are open: the $n$-$^3$He elastic channel and the $p$-$^3$H charge-exchange channel.
In the following, the index $\gamma$=1 (2) is used to identify the $p$-$^3$H ($n$-$^3$He) clusters in the final (initial) state. In the absence of strong and weak interactions between the two clusters, the wave function in channel $\gamma$ is written as \begin{eqnarray} \Phi_{\gamma}^{m_3 \, m_1}&=&{1\over\sqrt{4}} \sum_{p=1}^4 \Psi_{\gamma}^{m_3}(ijk) \chi_{\gamma}^{m_1}(l)\, \phi_{{\bf q}_\gamma}({\bf y}_p) \nonumber \\ &\equiv& {1\over\sqrt{4}} \sum_{p=1}^4 \Phi_{\gamma,\, p}^{m_3 \, m_1}\ , \label{eq:pw} \end{eqnarray} where $\Psi_\gamma^{m_3}$ is the (antisymmetrized) trinucleon bound-state wave function in spin projection $m_3$, $\chi_\gamma^{m_1}$ is the nucleon spin-isospin state with spin projection $m_1$ and isospin projection $p$ (proton) for $\gamma$=1 or $n$ (neutron) for $\gamma$=2, and $\phi$ is the inter-cluster wave function, {\it i.e.}~a Coulomb wave function for $\gamma=1$ or simply a plane wave $e^{i\,{\bf q}_2 \cdot {\bf y}_p}$ for $\gamma=2$. The separation between the center-of-mass positions of the two clusters is denoted by ${\bf y}_p$ with ${\bf y}_p ={\bf r}_l- {\bf R}_{ijk}$, and their relative momentum is specified by ${\bf q}_\gamma$, so that the energy $E$ is given by \begin{equation} E=-B_\gamma+{q_\gamma^2\over 2\, \mu_\gamma}\ , \qquad {1\over \mu_\gamma}= {1\over m_\gamma}+{1\over M_\gamma}\ .\label{eq:ecm} \end{equation} Here $B_\gamma$ and $M_\gamma$ are the binding energy and mass of $^3$H ($^3$He) for $\gamma$=1 (2), and $m_\gamma$ is the proton (neutron) mass for $\gamma$=1 (2). Lastly, the wave functions in Eq.~(\ref{eq:pw}) are antisymmetrized by summing over the four permutations $p$ with $(ijk,l)\equiv (123,4)$, $(124,3)$, $(134,2)$, and $(234,1)$.
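At vanishing neutron energy ($q_2\rightarrow 0$), Eq.~(\ref{eq:ecm}) fixes the momentum of the outgoing proton. A quick kinematical check (Python; the proton and triton masses and the trinucleon binding energies below are standard values assumed for illustration, not quantities quoted in the text):

```python
import math

# Standard values in MeV (assumptions of this sketch, not from the text):
m_p, M_3H = 938.272, 2808.921   # proton and triton masses
B_3H, B_3He = 8.482, 7.718      # trinucleon binding energies

# In the limit q_2 -> 0, Eq. (ecm) gives E = -B_2, so in channel gamma=1
# q_1^2/(2 mu_1) = B_3H - B_3He, the Q value of the reaction.
mu_1 = m_p*M_3H/(m_p + M_3H)
Q = B_3H - B_3He
q_1 = math.sqrt(2.0*mu_1*Q)
print(round(Q, 3), round(q_1, 1))  # 0.764 32.8  (MeV and MeV/c)
```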
It is useful to expand the wave functions in Eq.~(\ref{eq:pw}) in partial waves as \begin{equation} \Phi_\gamma^{m_3\, m_1}= {1\over\sqrt{4}} \sum_{p=1}^4 \sum_{LSJ} i^L\, Z_{m_3 \, m_1}^{L0SJJ_z}\,\, \Omega^{JJ_z}_{\gamma LS,\, p}\,\, \frac{{\cal F}^F_L(q_\gamma; y_{p})}{q_\gamma y_p}\ , \label{eq:pw1} \end{equation} where ${\cal F}^F_L(q_\gamma; y_{p})$ reduces to a regular Coulomb function $F_L(q_\gamma; y_{p})$ (multiplied by a phase factor we need not specify here) for $\gamma=1$ or a spherical Bessel function $x\,j_L(x)$ for $\gamma=2$, with $x=q_\gamma y_p$. The channel functions $ \Omega^{JJ_z}_{\gamma LS,\, p}$ are defined as \begin{equation} \Omega^{JJ_z}_{\gamma LS,\, p}=\Big [ Y_L(\hat{\bf y}_p)\otimes \big[ \Psi_\gamma (ijk) \otimes \chi_\gamma(l) \big]_S \Big]_{JJ_z}\ , \label{eq:omega1} \end{equation} while the Clebsch-Gordan coefficients associated with the re-coupling of the angular momenta (and other factors) are lumped into \begin{eqnarray} Z_{m_3\, m_1}^{LMSJJ_z}& =& \sqrt{4\pi}\, \sqrt{2L+1}\, \langle 1/2, m_3\ ;1/2, m_1 | S , S_z\rangle \nonumber \\ &&\qquad\qquad\qquad \times \langle L, M; S, S_z|J, J_z\rangle \ . \end{eqnarray} The momentum ${\bf q}_\gamma$ has been taken to define the spin-quantization axis, {\it i.e.}~the $z$-axis. 
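The recoupling coefficients $Z_{m_3\,m_1}^{LMSJJ_z}$ can be generated directly from Clebsch-Gordan coefficients; a sketch using sympy (the function name is ours):

```python
from sympy import Rational, pi, simplify, sqrt
from sympy.physics.quantum.cg import CG

half = Rational(1, 2)

def Z(L, M, S, J, Jz, m3, m1):
    """Z^{LMSJJz}_{m3 m1} of the text, with Sz = m3 + m1."""
    Sz = m3 + m1
    return (sqrt(4*pi)*sqrt(2*L + 1)
            * CG(half, m3, half, m1, S, Sz).doit()
            * CG(L, M, S, Sz, J, Jz).doit())

# Stretched case L=0, S=J=Jz=1, m3=m1=+1/2: both CG's equal 1, so Z = sqrt(4 pi).
z = Z(0, 0, 1, 1, 1, half, half)
print(simplify(z - sqrt(4*pi)) == 0)  # True
```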
In the presence of inter-cluster interactions, the $n$-$^3$He wave function in the asymptotic region reads \begin{widetext} \begin{eqnarray} \Psi_{\gamma=2}^{m_3\, m_1}&\simeq& {1\over\sqrt{4}}\sum_{p=1}^4 \sum_{LSJ} i^L\, Z_{m_3 \, m_1}^{L0SJJ_z}\, \Biggl[\Omega^{JJ_z}_{2 LS,\, p}\, j_L(q_2 y_{p}) +\sum_{L'S'} T^{22, J}_{LS,\,L'S'} \Omega^{JJ_z}_{2 L'S',\, p}\; {e^{i(q_2 y_{p}-L'\pi/2)} \over y_{p}} \nonumber \\ &&\qquad +\sum_{L'S'} T^{21, J}_{LS,\, L'S'} \Omega^{JJ_z}_{1 L'S',\, p} \; {e^{i[q_1 y_{p}-L'\pi/2-\eta_1\ln(2\, q_1 y_{p})+\sigma_{L'}]} \over y_{p}} \Biggr]\ , \label{eq:dw} \end{eqnarray} \end{widetext} and contains outgoing spherical waves in the $n$-$^3$He elastic channel ($\gamma=2$) as well as in the $p$-$^3$H charge-exchange channel ($\gamma=1$) multiplied by corresponding $T$-matrix elements $T^{\, \gamma \gamma', J}_{LS,L'S'}$. Here $\eta_1=\alpha\mu_1/q_1$, where $\alpha$ is the fine structure constant and $\mu_1$ is the $p$-$^3$H reduced mass defined above, and $\sigma_L$ is the Coulomb phase-shift. Thus Coulomb distortion in the $p$-$^3$H outgoing state is fully accounted for. The probability amplitude $M_{m_3' \,m_1', \, m_3 \, m_1}$ to observe a $p$-$^3$H final state with spin projections $m_1'$ and $m_3'$, respectively, is obtained from \begin{eqnarray} \langle \Phi^{m_3'\, m_1'}_{\gamma=1,\, p=1}\mid \Psi_{\gamma=2}^{m_3\, m_1}\rangle ={1\over\sqrt{4}} M_{m_3'\, m_1', \, m_3\, m_1} \nonumber \\ \times {e^{i[q_1 y-\eta_1\ln(2\, q_1 y)]} \over y}\ , \label{eq:mmat} \end{eqnarray} where we have assumed that the $p$-$^3$H state is in partition (123,4) corresponding to permutation $p=1$, namely the bound cluster consists of particles 123 and the proton is particle 4. For brevity, we have also set ${\bf y}\equiv {\bf y}_{p=1}$. Using the orthonormality of the channel functions $\Omega^{JJ_z}_{\gamma LS,\, p}$, we find \begin{eqnarray} M_{m_3' m_1', \, m_3 \, m_1}\!\!&=&\!\!\frac{1}{\sqrt{4\pi}}\!\! 
\sum_{JLS L'S'} i^L (-i)^{L'} \frac{e^{i\,\sigma_{L'}}}{\sqrt{2L'+1}}\, Z_{m_3 \, m_1}^{L0SJJ_z}\nonumber \\ &&\times T^{21, J}_{LS,\, L'S'}\, Z_{m_3' \, m_1'}^{L'M'S'JJ_z}\, Y_{L'M'}(\hat{\bf y})\ , \label{eq:mmat1} \end{eqnarray} where the Clebsch-Gordan coefficients require $J_z=S_z=m_3+m_1$, $S_z'=m_3'+m_1'$, and $M'=J_z-S_z'= m_3+m_1- (m_3'+m_1')$. The spin-averaged cross section follows from \begin{equation} \sigma_0\equiv \frac{{\rm d}\sigma}{{\rm d}\Omega}={1\over 4} { \mu_2 \over \mu_1} {q_1\over q_2} \sum_{m_3,m_1} \sum_{m_3', m_1'} | M_{m_3'\, m_1',\, m_3\, m_1}|^2 \ , \label{eq:ucs} \end{equation} since $(1/4) (q_1/\mu_1)\, |M_{m_3' \, m_1',\, m_3\ ,m_1}|^2 \,{\rm d} \Omega$ is the flux of outgoing particles in the solid angle \hbox{${\rm d}\Omega\equiv {\rm d}\hat{\bf y}$}, and $(1/4)(q_2/\mu_2)$ is the incident flux, where the factors $1/4$ originate from the normalization factors $1/\sqrt{4}$ in Eqs.~(\ref{eq:pw}) and (\ref{eq:mmat}). These cancel out in Eq.~(\ref{eq:ucs}), leaving an extra $1/4$ coming from the average over the initial polarizations. The longitudinal asymmetry $A_z$ is defined as \begin{eqnarray} \sigma_0 \, A_z&=&{1\over 2} { \mu_2 \over \mu_1} {q_1\over q_2} \sum_{m_3} \sum_{m_3', m_1'} \biggl[ | M_{m_3'\, m_1',\,m_3\, m_1=+{1\over 2}}|^2 \nonumber \\ &&\qquad\qquad - | M_{m'_3\, m_1', \,m_3\,m_1=-{1\over 2}}|^2 \biggr] \ . \label{eq:az} \end{eqnarray} At meV energies it suffices to keep only $L=0$ in the entrance channel, so that \begin{eqnarray} M_{m_3'\, m_1', \, m_3\, m_1}\!\!\!&=&\!\!\! \sum_{J=0,1} \sum_{L'S'} \langle 1/2, m_3;1/2, m_1 | J , J_z\rangle \nonumber \\ \!\!&\times &\!\!\frac{\overline{T}^{21,J}_{0J,L'S'}}{\sqrt{2L'+1}} Z_{m_3' \, m_1'}^{L'M'S'JJ_z} Y_{L'M'}(\hat{\bf y}) \ , \label{eq:m1} \end{eqnarray} where we have defined \begin{equation} \overline T^{21,J}_{0J,L'S'}= (-i)^{L'}\, e^{i\, \sigma_{L'}}\, T^{21, J}_{0J,L'S'}\ . 
\label{eq:tbar} \end{equation} After inserting the expression for $Z_{m_3' \, m_1'}^{L'M'S'JJ_z}$ and carrying out the sums over $m_1, m_3$ and $m_1',m_3'$, we find the unpolarized cross section to be given by \begin{eqnarray} \sigma_0\!\!&=&\!\! {1\over 4} { \mu_2 \over \mu_1} {q_1\over q_2} \sum_{J=0,1}\sum_{L'S'} (2J+1) \left |\overline T^{21,J}_{0J,L'S'}\right|^2 \nonumber \\ \!\!&=& \!\!{1\over 4} { \mu_2 \over \mu_1} {q_1\over q_2} \Bigg[\left|T^{21, 0}_{00,00}\right |^2\!\! + 3\! \left |T^{21, 1}_{01,01}\right |^2 \!\!+ 3\! \left |T^{21,1}_{01,21}\right |^2\Bigg] \ , \label{eq:ucs1} \end{eqnarray} where in the second line we have ignored $T$-matrix elements involving transitions to odd-parity final states ({\it i.e.}~parity-violating transitions), since these are induced by hadronic weak interactions and consequently are much smaller than the parity-conserving $T$-matrix elements associated with strong interactions. We observe that the matrix elements $T^{21,J}$ (and $\overline T^{21,J}$) are finite in the limit $q_2=0$, and therefore $\sigma_0$ diverges as $1/q_2$ when $q_2$ goes to zero, the familiar $1/v$ behavior expected for a neutron capture reaction.
\begin{table}[bth] \caption{The coefficients $C^{J_1 J_2}_{L_1 L_2 S}(|M|)$ for the relevant channels.} \begin{tabular}{l| llll|l} \hline \hline $J_1,J_2$ & $L_1$ & $L_2$ & $S$ & $|M|$ & $ C^{J_1 J_2}_{L_1 L_2 S}(|M|)$ \\ \hline $0,1$ & $0$ & $1$ & $0$ & $0$ & $-\sqrt{3}$ \\ & $1$ & $0$ & $1$ & $0$ & $+1$ \\ & $1$ & $2$ & $1$ & $0$ & $-\sqrt{2}$ \\ & $1$ & $2$ & $1$ & $1$ & $-\sqrt{1/2}$ \\ \hline $1,0$ & $1$ & $0$ & $0$ & $0$ & $-\sqrt{3}$ \\ & $0$ & $1$ & $1$ & $0$ & $+1$ \\ & $2$ & $1$ & $1$ & $0$ & $-\sqrt{2}$ \\ & $2$ & $1$ & $1$ & $1$ & $-\sqrt{1/2}$ \\ \hline $1,1$ & $0$ & $1$ & $1$ & $0$ & $-\sqrt{6}$ \\ & $2$ & $1$ & $1$ & $0$ & $-\sqrt{3}$ \\ & $2$ & $1$ & $1$ & $1$ & $-\sqrt{3/4}$ \\ & $1$ & $0$ & $1$ & $0$ & $-\sqrt{6}$ \\ & $1$ & $2$ & $1$ & $0$ & $-\sqrt{3}$ \\ & $1$ & $2$ & $1$ & $1$ & $-\sqrt{3/4}$ \\ \hline \hline \end{tabular} \label{tb:tabc} \end{table} The asymmetry $A_z$ can be written as \begin{eqnarray} \sigma_0 \, A_z \!\!&=&\!\! \frac{1}{2}{ \mu_2 \over \mu_1} {q_1\over q_2}\!\! \sum_{J_1,J_2=0,1}\sum_{L_1L_2S}\!\! \epsilon_{L_1 L_2} \overline T^{21,J_1}_{0J_1,L_1S} \left[\overline T^{21,J_2}_{0J_2,L_2S}\right]^* \nonumber \\ &\times& \sum_{|M|} C^{J_1 J_2}_{L_1L_2S}(|M|) P^{|M|}_{L_1}(\theta) P^{|M|}_{L_2}(\theta) \ ,\label{eq:d2} \end{eqnarray} where the $P_L^{|M|}(\theta)$'s are associated Legendre functions, $\theta$ is the angle of the outgoing proton momentum relative to the direction of the incident beam, the $C^{J_1 J_2}_{L_1L_2S}(|M|)$'s denote combinations of Clebsch-Gordan coefficients, defined as \begin{eqnarray} C^{J_1 J_2}_{L_1 L_2S}(|M|)\!\!&=&\!\!\frac{1}{2 \pi} \sum_{J_z} \sum_{\mu =\pm |M|}\! \sqrt{(L_1\!-\!|M|)! (L_2\!-\!|M|)!\over (L_1\!+\!|M|)! 
(L_2\!+\!|M|)!}\nonumber \\ &&\times Z_{m_3 \, m_1=+1/2}^{L_1 \mu\, SJ_1 J_z}\, Z_{m_3 \, m_1=+1/2}^{L_2 \mu\, SJ_2 J_z} \ , \label{eq:ccoef} \end{eqnarray} and lastly the phase factor $\epsilon_{L_1 L_2}$, \begin{equation} \epsilon_{L_1 L_2}\equiv{1-(-)^{L_1+L_2}\over 2} \ , \end{equation} ensures that either $L_1$ or $L_2$ is odd, which in turn implies that either $T^{21,J_1}_{0J_1,L_1S}$ or $T^{21,J_2}_{0J_2,L_2S}$ involves a parity-violating transition, {\it i.e.}~a transition from an incoming positive-parity $n$-$^3$He state to an outgoing negative-parity $p$-$^3$H state. The non-vanishing $C$'s for the relevant channels are listed in Table~\ref{tb:tabc}, and evaluation of the sums in Eq.~(\ref{eq:d2}) allows one to express the parity-violating asymmetry as in Eqs.~(\ref{eq:aazz})--(\ref{eq:ssss}). \section{$T$-matrix elements} \label{sec:tma} The calculation proceeds in two steps: we first determine, via the Kohn variational principle, the $R$-matrix elements, and then relate these to the $T$-matrix elements.
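The second step amounts to simple matrix algebra: the closed form derived below, Eq.~(\ref{eq:t21b}), can be checked numerically against the defining conditions of Eq.~(\ref{eq:UV}), for instance with random symmetric $R$-matrix blocks (a numpy sketch; the overall factor $D_1/(D_2\, q_1)$ is dropped):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2  # channels per cluster partition (illustrative)

# Random symmetric R-matrix blocks (hypothetical values; the physical ones
# come from the Kohn variational principle).
def sym(a): return (a + a.T)/2
R11, R22 = sym(rng.normal(size=(n, n))), sym(rng.normal(size=(n, n)))
R12 = rng.normal(size=(n, n)); R21 = R12.T
I = np.eye(n)

# Closed form, Eq. (t21b):
A = np.linalg.inv(I - 1j*R11)
T_closed = np.linalg.inv(I - 1j*R22 + R21 @ A @ R12) @ R21 @ A

# Direct solve of the asymptotic conditions, Eq. (UV):
#   U - i(U R11 + V R21) = 0,  V - i(U R12 + V R22) = I,  T = -i U.
M = np.block([[I - 1j*R11.T, -1j*R21.T],
              [-1j*R12.T, I - 1j*R22.T]])
rhs = np.vstack([np.zeros((n, n)), I])
sol = np.linalg.solve(M, rhs)
U = sol[:n].T
print(np.allclose(-1j*U, T_closed))  # True
```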
The wave function describing a scattering state with total angular momentum $JJ_z$ in channel $\gamma LS$ is written as \begin{equation} \Psi^{JJ_z}_{\gamma,LS}=\Psi^{C,JJ_z}_{\gamma,LS}+\Psi^{F,JJ_z}_{\gamma,LS}+ \sum_{\gamma' L'S'} R^{\gamma \gamma', J}_{LS,L'S'} \Psi^{G,JJ_z}_{\gamma',L'S'}\ , \label{eq:wf} \end{equation} where the asymptotic wave functions $\Psi^{\lambda,JJ_z}_{\gamma,LS}$ with $\lambda=F,G$ are defined as \begin{equation} \Psi^{\lambda,JJ_z}_{\gamma,LS} = { D_\gamma\over \sqrt{4}} \sum_{p=1}^{4} \Omega^{JJ_z}_{\gamma LS,p} { {\cal F}^\lambda_L(q_\gamma;y_{p}) \over q_\gamma y_{p} }\ . \label{eq:awf} \end{equation} The superscript $\lambda=F$ denotes the regular radial functions introduced earlier in Eq.~(\ref{eq:pw1}), while $\lambda=G$ denotes the irregular Coulomb or spherical Bessel functions, namely \begin{equation} \gamma=1\!:\, {\cal F}^{\, G}_L(x)= \widetilde G_L(\eta_1,x) \ ; \qquad \gamma=2\!:\, {\cal F}^{\, G}_L(x)= -x \widetilde y_L(x)\ . \label{eq:g} \end{equation} The tilde over $G_L$ and $y_L$ indicates that they have been multiplied by short-range cutoffs in order to remove the singularity at the origin. Thus ${\cal F}_L^{\, G}$ is well-behaved in all space. The normalization factor \begin{equation} D_\gamma=\sqrt{2\, \mu_\gamma q_\gamma\over \kappa^3}\ , \label{eq:d} \end{equation} with $\kappa=\sqrt{3/2}$, is introduced for convenience---$\kappa$ is a numerical factor relating the inter-cluster separation ${\bf y}_p$ to the Jacobi variable ${\bf x}_{1p}$, {\it i.e.}~${\bf x}_{1p}=\kappa \, {\bf y}_p$ (see Eq.~(\ref{eq:JcbV}) below). The wave functions $\Psi^{C,JJ_z}_{\gamma,LS}$ vanish in the asymptotic region, and describe the dynamics of the interacting nucleons when they are close to each other, while the $R^{\gamma \gamma',J}_{LS,L'S'}$'s are the $R$-matrix elements.
The latter, as well as the coefficients entering the expansion of $\Psi^{C,JJ_z}_{\gamma,LS}$ in terms of hyperspherical-harmonics functions, are determined via the Kohn variational principle \begin{equation} \Bigl[R^{\gamma\gamma', J}_{LS,L'S'}\Bigr] = R^{\gamma' \gamma, J}_{L'S',LS}- \langle\Psi^{JJ_z}_{\gamma,LS}|H-E|\Psi^{JJ_z}_{\gamma',L'S'}\rangle \ , \label{eq:kohn} \end{equation} as discussed in Sec.~\ref{sec:whh}. The next step consists in relating the $R$- to the $T$-matrix elements. To this end, it is convenient to simplify the notation by dropping the superscripts $JJ_z$ and by introducing a single label $\alpha$ to denote the channel quantum numbers $LS$, so that the wave functions in Eq.~(\ref{eq:wf}) corresponding to $\gamma=1$ and 2 are written as \begin{eqnarray} \!\!\! \!\!\! \Psi_{1,\alpha}\!\!\!&=&\!\!\!\Psi^C_{1,\alpha}\!+\!\Psi^F_{1,\alpha}\! + \!\sum_{\alpha'}\! R^{11}_{\alpha,\alpha'} \!\Psi^G_{1,\alpha'} + \!\sum_{\alpha'}\! R^{12}_{\alpha,\alpha'} \! \Psi^G_{2,\alpha'}\ , \label{eq:wf1}\\ \!\!\!\!\!\! \Psi_{2,\alpha}\!\!\!&=&\!\!\!\Psi^C_{2,\alpha}\! +\!\Psi^F_{2,\alpha} + \!\sum_{\alpha'}\! R^{21}_{\alpha,\alpha'}\! \Psi^G_{1,\alpha'}\! + \!\sum_{\alpha'}\! R^{22}_{\alpha,\alpha'}\! \Psi^G_{2,\alpha'}\ . \label{eq:wf2} \end{eqnarray} From these we form the linear combination \begin{equation} \Psi=\sum_{\alpha'} \left(U_{\alpha,\alpha'} \Psi_{1,\alpha'}+ V_{\alpha,\alpha'} \Psi_{2,\alpha'}\right)\ , \end{equation} where the matrices $U$ and $V$ are determined below.
Inserting the expressions above for $ \Psi_{\gamma,\alpha}$ and rearranging terms lead to \begin{widetext} \begin{eqnarray} \Psi&=& \Psi^C + \sum_{\alpha'} \Bigl[U-i\, (U R^{11}+V R^{21}) \Bigr]_{\alpha,\alpha'} \Psi^F_{1,\alpha'} + \sum_{\alpha'} \left( U R^{11}+V R^{21}\right)_{\alpha,\alpha'} \Bigl(\Psi^G_{1,\alpha'}+i\, \Psi^F_{1,\alpha'}\Bigr) \nonumber\\ &+&\sum_{\alpha'} \Bigl[V-i\, (U R^{12}+V R^{22})\Bigr]_{\alpha,\alpha'} \Psi^F_{2,\alpha'} + \sum_{\alpha'} \left( U R^{12}+V R^{22}\right)_{\alpha,\alpha'} \Bigl(\Psi^G_{2,\alpha'}+i\Psi^F_{2,\alpha'}\Bigr)\ , \label{eq:wfUV1} \end{eqnarray} \end{widetext} where $\Psi^C$ is a combination of internal parts of no interest here. We now require $\Psi$ to consist, in the asymptotic region, of a plane wave in channel $\gamma$=2 (or $n$-$^3$He) and of a purely outgoing wave in channel $\gamma$=1 (or $p$-$^3$H). These requirements are satisfied by demanding that \begin{eqnarray} U-i\, (U R^{11}+V R^{21})&=&0\ , \\ V-i\, (U R^{12}+V R^{22})&=&I\ , \label{eq:UV} \end{eqnarray} where $I$ is the identity matrix. Comparing the resulting $\Psi$ with the wave function given in Eq.~(\ref{eq:dw}), specifically its component in channel $LSJ$, allows one to express the $T$-matrix as \begin{eqnarray} T^{21,J}_{LS,L'S'}&=& {D_1\over D_2 \, q_1} \left( U^J R^{11,J}+V^J R^{21,J}\right)_{LS,L'S'} \nonumber \\ &=& -i \,{D_1\over D_2\, q_1}\, U^J_{LS,L'S'}\ , \label{eq:t21} \end{eqnarray} where we have reinstated the $LSJ$ labels. Finally the matrix $U$ is obtained by solving the system in Eq.~(\ref{eq:UV}): \begin{widetext} \begin{equation} T^{21,J}_{LS,L'S'}= {D_1\over D_2 \, q_1}\Bigg[ \Bigl[I-i\,R^{22,J}+R^{21,J}(I-i\,R^{11,J})^{-1}R^{12,J}\Bigr]^{-1} \, R^{21,J} (I-i\,R^{11,J})^{-1}\Bigg]_{LS,L'S'} \ . 
\label{eq:t21b} \end{equation} \end{widetext} In practice, we compute the $R$-matrix elements at zero energy, {\it i.e.}~in the limit $q_2 \rightarrow 0$, and define \begin{eqnarray} &&\overline R^{12,J}_{LS,L'S'}= {R^{12,J}_{LS,L'S'} \over q_2^{L'+1/2}}\ ,\qquad \overline R^{21,J}_{LS,L'S'}= {R^{21,J}_{LS,L'S'} \over q_2^{L+1/2}}\ , \nonumber \\ &&\qquad\qquad \overline R^{22,J}_{LS,L'S'}= {R^{22,J}_{LS,L'S'} \over q_2^{L+L'+1}}\ , \label{eq:rbar} \end{eqnarray} and it can be shown that the $\overline R$-matrix elements are finite in this limit. In particular, we note that the factor $q_2^{L}$ follows from the small-argument expansion of the spherical Bessel function $j_L$ in $\Psi^{F,JJ_z}_{\gamma=2,LS}$, while the extra $q_2^{1/2}$ is due to the normalization $D_2$. At zero energy, we have \begin{equation} \Bigl[I-i\,R^{22,J}+R^{21,J}(I-i\,R^{11,J})^{-1}R^{12,J}\Bigr] \rightarrow I \ , \end{equation} since $R^{22,J}$ and the product $R^{21,J}R^{12,J}$ are proportional to $q_2$ or higher powers of $q_2$. Furthermore, the relevant $T$-matrix elements entering the expression for the asymmetry $A_z$ are those with quantum number $L=0$ in channel $\gamma=2$, and hence \begin{equation} T^{21,J}_{0J,L'S'}= {1\over \sqrt{q_1}} \sum_{L''S''}\overline R^{21,J}_{0J,L''S''} \left(I-i\, R^{11,J}\right)^{-1}_{L''S'',L'S'} \ , \label{eq:t21c} \end{equation} with $J=0,1$. Note that we have neglected the difference in the $n$-$^3$He and $p$-$^3$H reduced masses. \section{The parity-violating potential} \label{sec:pv} Two different models of the PV weak-interaction potential are adopted in the calculations reported below. One is the model developed thirty years ago by Desplanques {\it et al.}~\cite{Desplanques80} (and known as DDH): it is parametrized in terms of $\pi$-, $\rho$-, and $\omega$-meson exchanges, and involves in practice six weak pion and vector-meson coupling constants to the nucleon~\cite{Note1}.
These were estimated within a quark model approach incorporating symmetry arguments and current algebra requirements~\cite{Desplanques80,Holstein81}. Due to the inherent limitations of such an analysis, however, the coupling constants determined in this way have rather wide ranges of allowed values. The other model for the PV potential considered in the present work is that formulated by Zhu {\it et al.}~\cite{Zhu05} in 2005, and reduced to its minimal form by Girlanda~\cite{Girlanda08} in 2008, within an effective-field-theory (EFT) approach in which only nucleon degrees of freedom are retained explicitly. At lowest order $Q/\Lambda_\chi$, where $Q$ is the small momentum scale characterizing the low-energy PV process and $\Lambda_\chi \simeq 1$ GeV is the scale of chiral symmetry breaking, it is parametrized by a set of five contact four-nucleon terms. \begin{widetext} \begin{center} \begin{table}[btt] \begin{tabular}{c|c|c|c|c|c} \hline \hline $n$ & $c^{\rm DDH}_n$ & $f^{\rm DDH}_n(r)$ & $c^{\rm EFT}_n$ & $f^{\rm EFT}_n(r)$ & $O^{(n)}_{ij}$ \\ \tableline 1 & $+\frac{g_\pi \, h^1_\pi}{2\sqrt{2} \, m}$ & $f_\pi(r)$ & $\frac{2\, \mu^2}{\Lambda_\chi^3}\, C_6$ & $f_\mu(r)$ & $ ({\bm \tau}_i\times {\bm \tau}_j)_z \,({\bm \sigma}_i+{\bm \sigma}_j)\cdot {\bf X}^{(1)}_{ij,-}$ \\ 2 & $-\frac{g_\rho \, h^0_\rho}{ m}$ & $f_\rho(r)$ & 0 & 0 & ${\bm \tau}_i \cdot {\bm \tau}_j \,({\bm \sigma}_i-{\bm \sigma}_j)\cdot {\bf X}^{(2)}_{ij,+} $ \\ 3 & $-\frac{g_\rho \, h^0_\rho(1+\kappa_\rho)}{ m}$ & $f_\rho(r)$ & 0 & 0 & ${\bm \tau}_i \cdot {\bm \tau}_j \,({\bm \sigma}_i\times{\bm \sigma}_j)\cdot {\bf X}^{(3)}_{ij,-} $ \\ 4 & $-\frac{g_\rho \, h^1_\rho}{2\, m}$ & $f_\rho(r)$ & $\frac{ \mu^2}{\Lambda_\chi^3}\, (C_2+C_4)$ & $f_\mu(r)$ & $ ({\bm \tau}_i + {\bm \tau}_j)_z\,({\bm \sigma}_i-{\bm \sigma}_j)\cdot {\bf X}^{(4)}_{ij,+} $ \\ 5 & $-\frac{g_\rho \, h^1_\rho(1+\kappa_\rho)}{2\, m}$ & $f_\rho(r)$ & 0 & 0 & $({\bm \tau}_i + {\bm \tau}_j)_z \,({\bm \sigma}_i\times{\bm \sigma}_j)\cdot 
{\bf X}^{(5)}_{ij,-} $ \\ 6 & $-\frac{g_\rho \, h^2_\rho}{2 \sqrt{6}\, m}$ & $f_\rho(r)$ & $-\frac{2\, \mu^2}{\Lambda_\chi^3}\, C_5$ & $f_\mu(r)$ & $ (3\, \tau_{i,z} \tau_{j,z}-{\bm \tau}_i \cdot {\bm \tau}_j) \,({\bm \sigma}_i-{\bm \sigma}_j)\cdot {\bf X}^{(6)}_{ij,+} $ \\ 7 & $-\frac{g_\rho \, h^2_\rho(1+\kappa_\rho)}{2\sqrt{6}\, m}$ & $f_\rho(r)$ & 0 & 0 & $ (3\, \tau_{i,z} \tau_{j,z}-{\bm \tau}_i \cdot {\bm \tau}_j) \,({\bm \sigma}_i\times {\bm \sigma}_j)\cdot {\bf X}^{(7)}_{ij,-} $ \\ 8 & $-\frac{g_\omega \, h^0_\omega}{ m}$ & $f_\omega(r)$ & $\frac{2\, \mu^2}{\Lambda_\chi^3} \, C_1$ & $f_\mu(r)$ & $({\bm \sigma}_i-{\bm \sigma}_j)\cdot {\bf X}^{(8)}_{ij,+} $ \\ 9 & $-\frac{g_\omega \, h^0_\omega (1+\kappa_\omega)}{ m}$ & $f_\omega(r)$ & $\frac{2\, \mu^2}{\Lambda_\chi^3} \,\tilde{C}_1$ & $f_\mu(r)$ & $ ({\bm \sigma}_i\times {\bm \sigma}_j)\cdot {\bf X}^{(9)}_{ij,-} $ \\ 10 & $-\frac{g_\omega \, h^1_\omega}{2\, m}$ & $f_\omega(r)$ & 0 & 0 & $ ({\bm \tau}_i + {\bm \tau}_j)_z\, ({\bm \sigma}_i-{\bm \sigma}_j)\cdot {\bf X}^{(10)}_{ij,+} $ \\ 11 & $-\frac{g_\omega \, h^1_\omega(1+\kappa_\omega)}{2\, m}$ & $f_\omega(r)$ & 0 & 0 & $ ({\bm \tau}_i + {\bm \tau}_j)_z\, ({\bm \sigma}_i\times {\bm \sigma}_j)\cdot {\bf X}^{(11)}_{ij,-} $ \\ 12 & $-\frac{g_\omega h^1_\omega -g_\rho h^1_\rho}{2\, m}$ & $f_\rho(r)$ & 0 & 0 & $ ({\bm \tau}_i -{\bm \tau}_j)_z \,({\bm \sigma}_i+{\bm \sigma}_j)\cdot {\bf X}^{(12)}_{ij,+} $ \\ \hline \hline \end{tabular} \caption{Components of the DDH and EFT models for the parity-violating potential. The vector operators ${\bf X}^{(n)}_{ij,\mp}$ and functions $f_x(r)$, $x=\pi,\, \rho,\, \omega,\, \mu$, are defined in Eqs.~(\protect\ref{eq:x+})--(\protect\ref{eq:x-}) and Eqs.~(\protect\ref{eq:fx})--(\ref{eq:fmu}), respectively. 
Only 5 operators and low-energy constants enter the pionless EFT interaction at the leading order, and in this paper they have been chosen to correspond to the rows 1, 4, 6, 8 and 9.} \label{tb:tab1} \end{table} \end{center} \end{widetext} The DDH and EFT PV two-nucleon potentials are conveniently written as \begin{equation} v^\alpha_{ij}=\sum_{n=1}^{12} c^\alpha_n \, O^{(n)}_{ij} \ , \qquad \alpha={\rm DDH} \,\, {\rm or}\,\, {\rm EFT} \ , \end{equation} where the parameters $c^\alpha_n$ and operators $O^{(n)}_{ij}$, $n=1,\dots, 12$, are listed in Table~\ref{tb:tab1}. In this table the vector operators ${\bf X}^{(n)}_{ij,\pm}$ are defined as \begin{eqnarray} {\bf X}^{(n)}_{ij,+} &\equiv& \left[ {\bf p}_{ij} \, ,\, f_n(r_{ij}) \right]_+ \ , \label{eq:x+} \\ {\bf X}^{(n)}_{ij,-} &\equiv& {\rm i}\, \left[ {\bf p}_{ij} \, ,\, f_n(r_{ij}) \right]_- \ , \label{eq:x-} \end{eqnarray} where $\left[ \dots \, , \, \dots \right]_{\mp}$ denotes the commutator ($-$) or anticommutator ($+$), and ${\bf p}_{ij}$ is the relative momentum operator, ${\bf p}_{ij} \equiv ({\bf p}_i-{\bf p}_j)/2$. In the DDH model, the functions $f_x(r)$, $x=\pi, \rho$ and $\omega$, are Yukawa functions, suitably modified by the inclusion of monopole form factors, \begin{equation} f_x(r) =\frac{1}{4\pi\, r} \left\{ {\rm e}^{-m_x r} -{\rm e}^{-\Lambda_x r}\left[1+ \frac{\Lambda_x r }{2}\left(1-\frac{m_x^2} {\Lambda_x^2}\right)\right] \right\} \ . \label{eq:fx} \end{equation} In the EFT model, however, the short-distance behavior is described by a single function $f_\mu(r)$, which is itself taken as a Yukawa function with mass parameter $\mu$, \begin{equation} f_\mu(r)=\frac{1}{4\pi\, r}{\rm e}^{-\mu r} \ , \label{eq:fmu} \end{equation} with $\mu \simeq m_\pi$ as appropriate in the present formulation, in which pion degrees of freedom are integrated out. 
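As a numerical illustration of Eqs.~(\ref{eq:fx}) and (\ref{eq:fmu}), the two radial functions can be compared directly. The following Python sketch (the value of $\Lambda_\pi$ is taken from Table~\ref{tb:tabpv}; the pion mass value is illustrative; the conversion to fm$^{-1}$ uses $\hbar c = 197.327$ MeV fm) checks that the monopole form factor regularizes the Yukawa function at the origin while leaving it unchanged at large separations:

```python
import math

def f_ddh(r, m_x, lam_x):
    # DDH radial function, Eq. (fx): Yukawa modified by a monopole form factor.
    # r in fm; meson mass m_x and cutoff lam_x in fm^-1.
    return (math.exp(-m_x * r)
            - math.exp(-lam_x * r)
            * (1.0 + 0.5 * lam_x * r * (1.0 - m_x**2 / lam_x**2))) / (4.0 * math.pi * r)

def f_eft(r, mu):
    # EFT radial function, Eq. (fmu): plain Yukawa with mass parameter mu.
    return math.exp(-mu * r) / (4.0 * math.pi * r)

hbarc = 197.327          # MeV fm
m_pi = 138.0 / hbarc     # pion mass in fm^-1 (illustrative value)
lam_pi = 1720.0 / hbarc  # pion cutoff from Table tb:tabpv, in fm^-1
```

At $r=5$ fm the form-factor correction is entirely negligible and $f_\pi(r)$ coincides with the plain Yukawa of the same mass, while for $r\to 0$ the DDH function stays finite, approaching $(\Lambda_x-m_x)^2/(8\pi\Lambda_x)$.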
In the potential $v^{\rm DDH}_{ij}$, the strong-interaction coupling constants of the $\pi$-, $\rho$-, and \hbox{$\omega$-meson} to the nucleon are denoted as $g_\pi,\, g_\rho,\, \kappa_\rho,\, g_\omega,\, \kappa_\omega$, while the weak-interaction ones as $h^1_\pi,\, h^0_\rho,\, h^1_\rho,\, h^2_\rho,\, h^0_\omega,\, h^1_\omega$, where the superscripts 0, 1, and 2 specify the isoscalar, isovector, and isotensor content of the corresponding interaction components. In the EFT model, the five low-energy constants $C_1, \tilde{C}_1, C_2+C_4 , C_5$ and $C_6$ completely characterize $v^{\rm EFT}_{ij}$, to lowest order $Q/\Lambda_\chi$. \begin{table}[bth] \begin{tabular}{c|c|c|c|c|c|c} \hline \hline & $g^2_\alpha/4\pi$ & $\kappa_\alpha$ & $10^7\times h_\alpha^0$ & $10^7\times h_\alpha^1$ & $10^7\times h_\alpha^2 $ & $\Lambda_\alpha$ (GeV/c) \\ \hline $\pi$ & 13.2 & & & 4.56 & & 1.72 \\ $\rho$ & 0.840 & 6.1 & --16.4 & --2.77 & --13.7 & 1.31 \\ $\omega$ & 20.0 & 0.0 & 3.23 & 1.94 & & 1.50 \\ \hline \hline \end{tabular} \caption{Values used for the strong- and weak-interaction coupling constants and short-range cutoff parameters of the $\pi$-, $\rho$-, and $\omega$-meson in the DDH potential.} \label{tb:tabpv} \end{table} The values for the coupling constants and short-range cutoffs in the DDH model are listed in Table~\ref{tb:tabpv}, while the mass $\mu$ in the EFT model is taken to be $m_\pi$. These values for coupling constants and cutoffs were also used in the DDH-based calculations of PV two-nucleon observables in Refs.~\cite{Carlson02,Schiavilla04} and neutron spin rotation in $\vec{n}\, d$ scattering~\cite{Schiavilla08}. In particular, we note that the linear combination of $\rho$- and $\omega$-meson weak coupling constants corresponding to $pp$ states has been taken from an earlier analysis of $\vec{p}\, p$ elastic scattering experiments~\cite{Carlson02}. The remaining couplings are the ``best value'' estimates, suggested in Ref.~\cite{Desplanques80}. 
In the analysis of the $a_z$ observable to follow, we will report results for the coefficients $I^{\rm DDH}_n$ and $I^{\rm EFT}_n$ in the expansion \begin{equation} a_z = \sum_{n=1}^{12} c^\alpha_n \, I^\alpha_n \ . \label{eq:in} \end{equation} Thus we will not need to consider specific values (or range of values) for the strength parameters $c^\alpha_n$. However, the $I^\alpha_n$ depend on the masses (and short-range cutoffs $\Lambda_x$ for the DDH model) occurring in the Yukawa functions. Note that the coefficients $C^i_\alpha$ entering Eq.~(\ref{eq:cddh}) are obtained from the $I^{\rm DDH}_n$'s and $c^{\rm DDH}_n$'s listed in Table~\ref{tb:tab1} via \begin{eqnarray} C^1_\pi &=& +\frac{g_\pi }{2\sqrt{2} \, m} I^{\rm DDH}_1\ , \nonumber \\ C^0_\rho &=& -\frac{g_\rho}{ m} I^{\rm DDH}_2 -\frac{g_\rho \,(1+\kappa_\rho)}{ m} I^{\rm DDH}_3\ , \nonumber \\ C^1_\rho &=& -\frac{g_\rho }{2\, m} I^{\rm DDH}_4 -\frac{g_\rho (1+\kappa_\rho)}{2\, m} I^{\rm DDH}_5 +\frac{g_\rho}{2\, m} I^{\rm DDH}_{12}\ , \nonumber \\ C^2_\rho &=& -\frac{g_\rho}{2 \sqrt{6}\, m} I^{\rm DDH}_6 -\frac{g_\rho (1+\kappa_\rho)}{2\sqrt{6}\, m} I^{\rm DDH}_7\ , \\ C^0_\omega &=& -\frac{g_\omega}{ m} I^{\rm DDH}_8 -\frac{g_\omega (1+\kappa_\omega)}{ m} I^{\rm DDH}_9\ , \nonumber \\ C^1_\omega &=& -\frac{g_\omega }{2\, m} I^{\rm DDH}_{10} -\frac{g_\omega (1+\kappa_\omega)}{2\, m}I^{\rm DDH}_{11} -\frac{g_\omega }{2\, m}I^{\rm DDH}_{12}\ . \nonumber \end{eqnarray} \section{The HH wave functions} \label{sec:whh} The \lq\lq internal\rq\rq wave function $\Psi^{C,JJ_z}_{\gamma,LS}$, see Eq.~(\ref{eq:wf}), is expanded in the HH basis. 
For four equal mass particles, a suitable choice for the Jacobi vectors is \begin{eqnarray} {\bf x}_{1p}& = & \sqrt{\frac{3}{2}} \left ({\bf r}_l - \frac{ {\bf r}_i+{\bf r}_j +{\bf r}_k}{3} \right )\ , \nonumber\\ {\bf x}_{2p} & = & \sqrt{\frac{4}{3}} \left ({\bf r}_k- \frac{ {\bf r}_i+{\bf r}_j}{2} \right )\ , \label{eq:JcbV}\\ {\bf x}_{3p} & =& {\bf r}_j-{\bf r}_i\ , \nonumber \end{eqnarray} where $p$ specifies a given permutation corresponding to the ordering $(ijkl)$. By definition, the permutation $p=1$ is chosen to correspond to (1234). For the given Jacobi vectors, the hyperspherical coordinates include the so-called hyperradius $\rho$, defined by \begin{equation} \rho=\sqrt{x_{1p}^2+x_{2p}^2+x_{3p}^2}\quad ({\rm independent\ of\ }p)\ , \label{eq:rho} \end{equation} and a set of angular variables which in the Zernike and Brinkman~\cite{zerni,F83} representation are (i) the polar angles $\hat {\bf x}_{ip}\equiv (\theta_{ip},\phi_{ip})$ of each Jacobi vector, and (ii) the two additional ``hyperspherical'' angles $\phi_{2p}$ and $\phi_{3p}$, defined as \begin{equation} \cos\phi_{2p} = \frac{ x_{2p} }{\sqrt{x_{1p}^2+x_{2p}^2}}\ , \quad \cos\phi_{3p} = \frac{ x_{3p} }{\sqrt{x_{1p}^2+x_{2p}^2+x_{3p}^2}}\ , \label{eq:phi} \end{equation} where $x_{jp}$ is the magnitude of the Jacobi vector ${\bf x}_{jp}$. The set of angular variables $\hat {\bf x}_{1p}, \hat {\bf x}_{2p}, \hat {\bf x}_{3p}, \phi_{2p}$, and $\phi_{3p}$ is denoted hereafter as $\Omega_p$. 
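The permutation independence of $\rho$ asserted in Eq.~(\ref{eq:rho}) is easily checked numerically; in the sketch below (randomly placed particles) the hyperradius is also compared against the equivalent pairwise form $\rho^2=\frac{1}{2}\sum_{a<b}|{\bf r}_a-{\bf r}_b|^2$, valid for equal masses:

```python
import numpy as np

def jacobi_vectors(r, i, j, k, l):
    # Jacobi vectors of Eq. (JcbV) for the ordering (ijkl); r has shape (4, 3).
    x1 = np.sqrt(3.0 / 2.0) * (r[l] - (r[i] + r[j] + r[k]) / 3.0)
    x2 = np.sqrt(4.0 / 3.0) * (r[k] - (r[i] + r[j]) / 2.0)
    x3 = r[j] - r[i]
    return x1, x2, x3

def hyperradius(r, perm):
    # Eq. (rho): square root of the sum of the squared Jacobi vectors.
    return np.sqrt(sum(x @ x for x in jacobi_vectors(r, *perm)))

rng = np.random.default_rng(0)
r = rng.normal(size=(4, 3))

rho_a = hyperradius(r, (0, 1, 2, 3))   # ordering (1234)
rho_b = hyperradius(r, (2, 0, 3, 1))   # a different ordering
rho_pair = np.sqrt(0.5 * sum((r[a] - r[b]) @ (r[a] - r[b])
                             for a in range(4) for b in range(a + 1, 4)))
```

Both orderings reproduce the same $\rho$, equal to the pairwise expression, to machine precision.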
A generic HH function reads \begin{widetext} \begin{eqnarray} {\cal H}^{K\Lambda M}_{\ell_1 \ell_2\ell_3 L_2 n_2 n_3}(\Omega_p) &=& {\cal N}^{\ell_1 \ell_2 \ell_3}_{ n_2 n_3} \left [ \Bigl [ Y_{\ell_1}(\hat {\bf x}_{1p}) \otimes Y_{\ell_2}(\hat{\bf x}_{2p}) \Bigr ]_{L_2} \otimes Y_{\ell_3}(\hat {\bf x}_{3p}) \right ]_{\Lambda M} (\sin\phi_{2p})^{\ell_1 } (\cos\phi_{2p})^{\ell_2} (\sin\phi_{3p})^{\ell_1+\ell_2+2n_2} \nonumber \\ &\times& (\cos\phi_{3p})^{\ell_3}\, P^{\ell_1+1/2\, , \,\ell_2+1/2}_{n_2}(\cos2\phi_{2p}) P^{\ell_1+\ell_2+2\, n_2+2\, , \,\ell_3+1/2}_{n_3}(\cos2\phi_{3p})\ , \label{eq:hh4P} \end{eqnarray} \end{widetext} where $P^{a,b}_n$ are Jacobi polynomials, and the coefficients ${\cal N}^{\ell_1 \ell_2 \ell_3}_{ n_2 n_3}$ are normalization factors. The quantity $K=\ell_1+\ell_2+\ell_3+2\,(n_2+n_3)$ is the so-called grand angular quantum number. The HH functions are the eigenfunctions of the hyperangular part of the kinetic energy operator. Another important property is that $\rho^K \, {\cal H}^{K \Lambda M}_{\ell_1 \ell_2 \ell_3 L_2 n_2 n_3}(\Omega_p)$ are homogeneous polynomials of the particle coordinates of degree $K$. 
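A direct transcription of the scalar part of Eq.~(\ref{eq:hh4P}) is straightforward (a sketch: the normalization coefficient ${\cal N}$ and the coupled spherical harmonics are omitted, and SciPy's \texttt{eval\_jacobi} supplies the Jacobi polynomials):

```python
import numpy as np
from scipy.special import eval_jacobi

def hh_hyperangular(phi2, phi3, l1, l2, l3, n2, n3):
    # Scalar hyperangular factor of Eq. (hh4P), without the normalization
    # constant and the coupled spherical harmonics.
    return (np.sin(phi2)**l1 * np.cos(phi2)**l2
            * np.sin(phi3)**(l1 + l2 + 2 * n2) * np.cos(phi3)**l3
            * eval_jacobi(n2, l1 + 0.5, l2 + 0.5, np.cos(2 * phi2))
            * eval_jacobi(n3, l1 + l2 + 2 * n2 + 2, l3 + 0.5, np.cos(2 * phi3)))

def grand_angular(l1, l2, l3, n2, n3):
    # Grand angular quantum number K = l1 + l2 + l3 + 2 (n2 + n3).
    return l1 + l2 + l3 + 2 * (n2 + n3)
```

For all quantum numbers equal to zero the factor reduces to a constant, as expected for the single $K=0$ state.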
A set of antisymmetrized hyperangular-spin-isospin states of grand angular quantum number $K$, total orbital angular momentum $\Lambda$, total spin $\Sigma$, and total isospin $T$ (for the given values of total angular momentum $J$ and parity $\pi$) can be constructed as follows: \begin{equation} \Psi_{\mu}^{K\Lambda\Sigma T} = \sum_{p=1}^{12} \Phi_\mu^{K\Lambda\Sigma T}(ijkl)\ , \label{eq:PSI} \end{equation} where the sum is over the $12$ even permutations $p\equiv ijkl$, and \begin{widetext} \begin{equation} \Phi^{K\Lambda\Sigma T}_{\mu}(ijkl) = \biggl [ {\cal H}^{K \Lambda M}_{\ell_1\ell_2\ell_3 L_2 n_2 n_3}(\Omega_p) \otimes \biggl [\Bigl[\bigl[ \chi_i \otimes \chi_j \bigr]_{S_a}\otimes \chi_k\Bigr]_{S_b} \otimes \chi_l \biggr]_{\Sigma} \biggr ]_{JJ_z} \biggl [\Bigl[\bigl[ \xi_i \otimes \xi_j \bigr]_{T_a} \otimes \xi_k\Bigr]_{T_b} \otimes \xi_l \biggr]_{TT_z}\ . \label{eq:PHI} \end{equation} \end{widetext} Here, $\chi_i$ ($\xi_i$) denotes the spin (isospin) state of particle $i$. The total orbital angular momentum $\Lambda$ of the HH function is coupled to the total spin $\Sigma$ to give the total angular momentum $JJ_z$, whereas the parity $\pi$ is $(-1)^{\ell_1+\ell_2+\ell_3}$. The quantum number $T$ specifies the total isospin of the state, and $\mu$ labels the possible choices of hyperangular, spin and isospin quantum numbers, namely \begin{equation} \mu \equiv \{ \ell_1,\ell_2,\ell_3, L_2 ,n_2, n_3, S_a,S_b, T_a,T_b \}\ ,\label{eq:mu} \end{equation} compatible with the given values of $K$, $\Lambda$, $\Sigma$, $T$, $J$, and $\pi$. Another important classification scheme for the states is to group them in ``channels'': states belonging to the same channel have the same values of angular ($\ell_1,\ell_2,\ell_3, L_2 ,\Lambda$), spin ($S_a,S_b,\Sigma$), and isospin ($T_a,T_b,T$) quantum numbers, but different values of $n_2$ and $n_3$. 
Each state $\Psi^{K\Lambda\Sigma T}_\mu$ entering the expansion of the four-nucleon wave function must be antisymmetric under the exchange of any pair of particles. Consequently, it is necessary to consider states such that \begin{equation} \Phi^{K\Lambda\Sigma T}_\mu(ijkl)= -\Phi^{K\Lambda\Sigma T}_\mu(jikl)\ , \label{eq:exij} \end{equation} which is fulfilled when the condition \begin{equation} \ell_3+S_a+T_a = {\rm odd}\ , \label{eq:lsa} \end{equation} is satisfied. The number $M_{K\Lambda\Sigma T}$ of antisymmetrized functions $\Psi^{K\Lambda\Sigma T}_\mu$ having given values of $K$, $\Lambda$, $\Sigma$, and $T$, but different combinations of quantum numbers $\mu$---see Eq.(\ref{eq:mu})---is in general very large. In addition to the degeneracy of the HH basis, the four spins (isospins) can be coupled in different ways to total $\Sigma$ ($T$). However, many of the states $\Psi^{K\Lambda\Sigma T}_\mu$, with $\mu$ ranging from 1 to $M_{K\Lambda\Sigma T}$, are linearly dependent. In the expansion of $\Psi^{C,JJ_z}_{\gamma,LS}$, it is necessary to include only the subset of linearly independent states, whose number is fortunately significantly smaller than $M_{K\Lambda\Sigma T}$. The internal part of the wave function can be finally written as \begin{equation}\label{eq:PSI3} \Psi^{C,JJ_z}_{\gamma,LS}= \sum_{K\Lambda\Sigma T}\sum_{\mu} u^{\gamma,LS}_{K\Lambda\Sigma T\mu}(\rho) \Psi_{\mu}^{K\Lambda\Sigma T}\ , \end{equation} where the sum is restricted only to the linearly independent states. 
We have found it convenient to expand the ``hyperradial'' functions $u^{\gamma,LS}_{K\Lambda\Sigma T\mu}(\rho)$ in a complete set of functions, namely \begin{equation} u^{\gamma,LS}_{K\Lambda\Sigma T\mu}(\rho)=\sum_{m=0}^{M-1} c^{\gamma,LS}_{K\Lambda\Sigma T\mu,m} \; g_m(\rho) \ , \label{eq:fllag} \end{equation} and have chosen \begin{equation} g_m(\rho)= \sqrt{\frac{m!}{(m+8)!}}\,\beta^{9/2}\, L^{(8)}_m(\beta\rho)\,\,{\rm e}^{-\beta \rho/2} \ , \label{eq:fllag2} \end{equation} where $L^{(8)}_m(\beta\rho)$ are generalized Laguerre polynomials~\cite{abra}. The $c$ coefficients of the expansion~(\ref{eq:fllag}) and the R-matrix elements of Eq.~(\ref{eq:wf}) are determined variationally via the Kohn variational principle. This principle states that the functional $\Bigl[R^{\gamma\gamma', J}_{LS,L'S'}\Bigr]$ defined in Eq.~(\ref{eq:kohn}) is stationary with respect to variations in the $R^{\gamma\gamma',J}_{LS,L^\prime S^\prime}$ and $c^{\gamma,LS}_{K\Lambda\Sigma T\mu,m}$. By applying this principle, a linear set of equations for $R^{\gamma\gamma',J}_{LS,L^\prime S^\prime}$ and $c^{\gamma,LS}_{K\Lambda\Sigma T\mu,m}$ is obtained~\cite{Kievsky08}, then solved using the Lanczos algorithm. The other parameter entering the expansion is the (nonlinear) parameter $\beta$ (see Eq.~(\ref{eq:fllag2})), used to describe the hyperradial functions $u^{\gamma,LS}_{K\Lambda\Sigma T\mu}(\rho)$. We have checked that, once a sufficient number $M$ of functions $g_m(\rho)$ is employed ($M\approx 20$), the results are practically independent of $\beta$. In the present work we have used $\beta=4$ fm$^{-1}$. The application of the method presents two main difficulties. The first is the accurate computation of the matrix elements of the Hamiltonian. By exploiting the properties of the HH functions, however, this task can be noticeably simplified, as discussed in Refs.~\cite{Kievsky08,Viviani05}. The second difficulty is the slow convergence of the HH expansion.
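The basis of Eq.~(\ref{eq:fllag2}) is orthonormal with respect to the four-body hyperradial measure $\rho^8\,{\rm d}\rho$, a property worth verifying when implementing it. In the sketch below the generalized Laguerre polynomials are generated by their standard three-term recurrence, and the overlap integral is reduced, via the substitution $x=\beta\rho$, to a Gauss--Laguerre quadrature (note that $\beta$ drops out of the overlap):

```python
import math
import numpy as np

def genlaguerre(n, alpha, x):
    # Generalized Laguerre polynomial L_n^(alpha)(x), built from the recurrence
    # (k+1) L_{k+1} = (2k+1+alpha-x) L_k - (k+alpha) L_{k-1}.
    p_prev, p = np.ones_like(x), 1.0 + alpha - x
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1 + alpha - x) * p - (k + alpha) * p_prev) / (k + 1)
    return p

def g(m, beta, rho):
    # Hyperradial basis function of Eq. (fllag2).
    norm = math.sqrt(math.factorial(m) / math.factorial(m + 8)) * beta**4.5
    return norm * genlaguerre(m, 8, beta * rho) * np.exp(-beta * rho / 2.0)

# Overlap int_0^inf g_m(rho) g_n(rho) rho^8 drho: with x = beta*rho the
# exponentials cancel exactly against the Gauss-Laguerre weight e^{-x}.
x, w = np.polynomial.laguerre.laggauss(30)

def overlap(m, n):
    norm = math.sqrt(math.factorial(m) * math.factorial(n)
                     / (math.factorial(m + 8) * math.factorial(n + 8)))
    return norm * np.sum(w * x**8 * genlaguerre(m, 8, x) * genlaguerre(n, 8, x))
```

Since the integrands are polynomials times ${\rm e}^{-x}$, a 30-point rule is exact here, and one finds $\langle g_m|g_n\rangle=\delta_{mn}$ to machine precision.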
This problem has been overcome by dividing the set of states $\Psi_{\mu}^{K\Lambda\Sigma T}$ defined in Eq.~(\ref{eq:PSI}) (in the following referred to simply as ``HH states'') into \textit{classes}, depending on the value of $\mathcal{L} = \ell_1 + \ell_2 + \ell_3$, total isospin $T$, and $n_2$ and $n_3$. In the present paper, we have considered four different classes. Since for $n$-$^3$He scattering the asymptotic states do not have a definite total isospin (they are a superposition of $T=0$ and $T=1$ components), it is mandatory to include HH states with both $T=0$ and $1$. The contribution of $T=2$ states is expected to be tiny, and consequently such states have been ignored here. Following Refs.~\cite{Viviani05,Fisher06}, in the first class we have included the $n_2=0$ HH states belonging to some special channels, for which the convergence has been found to be critical. The radial part of these HH states depends only on $\cos\phi_{3p}=r_{ij}/\rho$, and thus they take into account two-body correlations. The $n_2>0$ HH states belonging to the same channels are included in the second class, together with those having $\mathcal{L}\le 2$. The other classes are then defined simply by grouping HH states belonging to channels with an increasing value of $\mathcal{L}$. In particular, for the construction of the positive (negative) parity \lq\lq internal\rq\rq wave function $\Psi^{C,JJ_z}_{\gamma,LS}$, classes 3 and 4 include all HH states with $\mathcal{L}=4$ and $6$ ($\mathcal{L}=3$ and $5$), respectively. The convergence of these last two classes is less critical, and consequently only HH states with lower values of the grand angular quantum number $K$ need be considered. Moreover, the convergence with $\mathcal{L}$ is quite fast. In particular, we have found that, at the energy considered, the contribution of HH states with $\mathcal{L}>6$ can be neglected.
The calculation is performed including in the expansion all HH states belonging to classes $i=1,\ldots,4$ with grand angular quantum number $K\le K_i$, where $K_1,\ldots,K_4$ are a set of nonnegative integers. The convergence of a quantity of interest (for example, the phase shifts, or the coefficient $a_z$ defining the PV asymmetry) is then studied by increasing the values of $K_i$. A more complete study of the convergence will be presented elsewhere~\cite{Viviani10}. To exhibit the convergence pattern, we report in Table~\ref{tb:conv1} the calculated $n$-$^3$He scattering lengths. As is evident from Eq.~(\ref{eq:dw}), they are defined as \begin{equation} a_J=-\lim_{q_2\rightarrow0} T^{22,J}_{0J,\,0J} \ , \end{equation} with both incoming and outgoing $n$-$^3$He clusters in relative S-wave. Note that these scattering lengths are in general complex, since the $p$-$^3$H channel is always open; unitarity then imposes ${\rm Im}\, a_J <0$, as the total cross section is proportional to $\sum_{J=0,1} (2J+1)\,{\rm Im}\, T^{22,J}_{0J,\,0J}$. The results obtained for the singlet ($J=0$) and triplet ($J=1$) scattering lengths are reported in Table~\ref{tb:conv1}, for all four potential models used in this work. The calculated $n$-$^3$He scattering lengths are compared with experimental values and with the results of other calculations available in the literature. Inspection of the table shows that the convergence for the triplet scattering length is very good, and that there is reasonable agreement with the available experimental values and with the results of other calculations, in particular those of the AGS method. In the case of the singlet scattering length, the situation is more delicate, since in the channel $J^\pi=0^+$ the $n$-$^3$He interaction is attractive and the wave function must be orthogonal to the $^4$He bound state. Consequently, the convergence is more problematic, in particular for the N3LO/N2LO interaction model.
In the row labeled ``EXT'', we have reported the extrapolated values for this quantity obtained by analyzing the convergence pattern. For the AV18, N3LO, and AV18/UIX interaction models we observe reasonable agreement with the results of other calculations and the experimental data. The N3LO/N2LO values are significantly different from those obtained with the other interaction models, which is presumably related to the slow convergence observed in this case. A complete study of the $n$-$^3$He scattering lengths is in progress~\cite{Viviani10}. \begin{widetext} \begin{center} \begin{table}[h] \begin{tabular}{c@{$\quad$}|c@{$\quad$}|c@{$\quad\ $}|c@{$\quad\ $} @{$\ $} |c@{$\ $}|c@{$\ $} @{$\ $} |c@{$\ $}|c@{$\ $} } \hline \hline \multicolumn{8}{c}{Triplet scattering length $a_1$ (fm)} \\ \hline $K_1$ & $K_2$ & $K_3$ & $K_4$ & AV18 & N3LO & AV18/UIX & N3LO/N2LO \\ \hline 28 & 28 & 20 & 20 & $3.56- i\, 0.0078$ & $3.47-i\, 0.0047$ & $3.39-i\, 0.0059$ & $3.37-i\, 0.0042$ \\ 30 & 30 & 22 & 22 & $3.56-i\, 0.0077$ & $3.46-i\, 0.0048$ & $3.39-i\, 0.0059$ & $3.37-i\, 0.0042$ \\ \hline \multicolumn{4}{l|}{RGM~\protect\cite{HH08}} & $3.45-i\, 0.0066$ & & $3.31-i\, 0.0051$ & \\ \multicolumn{4}{l|}{FY~\protect\cite{Lazauskas09}} & $3.43-i\, 0.0082$ & $3.56-i\, 0.0070$ & $3.23-i\, 0.0054$ & \\ \multicolumn{4}{l|}{AGS~\protect\cite{Deltuvapv}} & $3.51-i\,0.0074$ & $3.47-i\, 0.0068$ & & \\ \hline \multicolumn{4}{l}{R-matrix~\protect\cite{HH08}} & \multicolumn{4}{l}{$3.29\phantom{(6)}-i\, 0.0012\phantom{(2)}$} \\ \multicolumn{4}{l}{EXP~\protect\cite{Zimmer02}} & \multicolumn{4}{l}{$3.28(5)-i\, 0.001(2)$} \\ \multicolumn{4}{l}{EXP~\protect\cite{Huffman04}} & \multicolumn{4}{l}{$3.36(1)\phantom{-i\, 0.001(2)}$} \\ \multicolumn{4}{l}{EXP~\protect\cite{Ketter06}} & \multicolumn{4}{l}{$3.48(2)\phantom{-i\, 0.001(2)}$} \\ \hline \hline \multicolumn{8}{c}{Singlet scattering length $a_0$ (fm)} \\ \hline $K_1$ & $K_2$ & $K_3$ & $K_4$ & AV18 & N3LO & AV18/UIX & N3LO/N2LO \\ \hline 48 & 44 & 30 
& 22 & $7.34-i\, 6.27$ & $7.38-i\, 5.23$ & $7.90-i\, 3.65$ & $4.45-i\, 9.02$ \\ 50 & 46 & 32 & 24 & $7.41-i\, 6.16$ & $7.40-i\, 5.20$ & $7.90-i\, 3.59$ & $5.25-i\, 9.25$ \\ \hline \multicolumn{4}{l|}{EXT} & $7.69-i\, 5.70$ & $7.57-i\, 4.97$ & $7.89-i\, 3.44$ & $6.02 -i\, 9.48 $ \\ \hline \multicolumn{4}{l|}{RGM~\protect\cite{HH08}} & $7.78-i\, 5.02$ & & $7.62-i\, 4.09$ & \\ \multicolumn{4}{l|}{AGS~\protect\cite{Deltuvapv}} & $7.80-i\,4.97$ & $7.82-i\, 4.51$ & & \\ \hline \multicolumn{4}{l}{R-matrix~\protect\cite{HH08}} & \multicolumn{4}{l}{$7.40\phantom{(6)}-i\, 4.449\phantom{(5)}$} \\ \multicolumn{4}{l}{EXP~\protect\cite{Zimmer02}} & \multicolumn{4}{l}{$7.37(6)-i\, 4.448(5)$} \\ \multicolumn{4}{l}{EXP~\protect\cite{Huffman04}} & \multicolumn{4}{l}{$7.46(2)\phantom{-i\, 4.448(5)}$} \\ \multicolumn{4}{l}{EXP~\protect\cite{Ketter06}} & \multicolumn{4}{l}{$7.57(3)\phantom{-i\, 4.448(5)}$} \\ \hline \end{tabular} \caption[Table]{\label{tb:conv1} Convergence of the $n$-$^3$He singlet and triplet scattering lengths corresponding to the inclusion, in the internal part of the wave function, of four different classes in which the HH basis has been subdivided. For the singlet scattering length, the line labeled ``EXT'' reports the extrapolated values obtained by examining the convergence pattern with increasing number of HH functions in the expansion. The calculated scattering lengths are compared with results obtained using the Resonating Group Method (RGM), Faddeev-Yakubovsky (FY) equations, Alt-Grassberger-Sandhas (AGS) equations, as well as with results of R-matrix analyses. The experimental values are reported in the rows labeled ``EXP'' (the imaginary parts are taken from Ref.~\protect\cite{HH08}). } \end{table} \end{center} \end{widetext} Recently, there has been a new measurement~\cite{Huber09} for the quantity $\Delta a'={\rm Re}(a_1-a_0)= -4.20(3)$ fm. 
The calculated values of $\Delta a'$ with the AV18, N3LO, AV18/UIX, and N3LO/N2LO models are $-4.13$, $-4.11$, $-4.50$, and $-2.65$ fm, respectively. Again the N3LO/N2LO value stands out: it differs appreciably both from the values obtained with the other interaction models and from the measured value. The convergence for the negative-parity states is similar to that discussed above. For the $0^-$ state, there is a nearby resonant state and the convergence is slow, as in the $0^+$ case. For the $1^-$ state, the resonance is far away and we observe good convergence, as for the $1^+$ state. Note, however, that in these cases the N3LO/N2LO convergence pattern is not different from that observed with the other models. \section{Calculation} \label{sec:calc} There is a total of two (four) states with $J=0$ ($J=1$): one (two) with positive parity having $LS=00$ ($LS=01, 21$) and one (two) with negative parity having $LS=11$ ($LS=10,11$). The $R$-matrix elements $R^{\gamma \gamma',0}_{LS,LS}$ with $LS=00$ or $LS=11$ for $J=0$, and $R^{\gamma \gamma',1}_{LS,L'S'}$ with $LS,L'S'=01,21$ or $LS,L'S'=10,11$ for $J=1$, involving parity-conserving transitions induced by the strong interactions, are calculated with the HH method, as described in the previous section. However, the $R$-matrix elements involving parity-violating (PV) transitions are obtained in first-order perturbation theory as \begin{equation} R^{\gamma \gamma',J}_{LS,L'S'}=- \langle\Psi^{JJ_z} _{\gamma',L'S'}\mid v^{\rm PV}\mid \Psi^{JJ_z}_{\gamma,LS}\rangle \ , \label{eq:rpf1} \end{equation} where $L+L'$ must be odd. Specifically, the $R$-matrix elements relevant for the calculation of the asymmetry are: $R^{11,0}_{00,11}$ and $\overline{R}^{21,0}_{00,11}$ for $J=0$, and $R^{11,1}_{01,10}$, $R^{11,1}_{01,11}$, $R^{11,1}_{21,10}$, $R^{11,1}_{21,11}$, $\overline{R}^{21,1}_{01,10}$, and $\overline{R}^{21,1}_{01,11}$ for $J=1$. Quantum Monte Carlo (QMC) techniques are employed to evaluate these matrix elements (see below).
The asymmetry in Eq.~(\ref{eq:aaz1}) is expressed in terms of $T$-matrix elements, which are in turn derived from $R$-matrix elements via Eq.~(\ref{eq:t21c}). This latter equation can be further simplified by retaining only linear terms in the PV $R$-matrix elements, and the resulting expressions for the PC $T^{21,0}_{00,00}$, $T^{21,1}_{01,01}$ and $T^{21,1}_{01,21}$ and PV $T^{21,0}_{00,11}$, $T^{21,1}_{01,10}$ and $T^{21,1}_{01,11}$ matrix elements are listed in Appendix~\ref{app:a1}. The QMC techniques used to evaluate the matrix element in Eq.~(\ref{eq:rpf1}) are similar to those discussed in Ref.~\cite{Schiavilla08} for the neutron spin rotation in $\vec{n}\, d$ scattering. The wave functions for an assigned spatial configuration specified by the set of Jacobi variables $({\bf x}_1,{\bf x}_2,{\bf x}_3)$ are expanded on a basis of $16 \times 6$ spin-isospin states for the four nucleons as \begin{equation} \psi({\bf x}_1,{\bf x}_2,{\bf x}_3) = \sum_{a=1}^{96} \psi_a({\bf x}_1,{\bf x}_2,{\bf x}_3) \mid\! a\rangle \ , \end{equation} where the components $ \psi_a({\bf x}_1,{\bf x}_2,{\bf x}_3)$ are generally complex functions, and the basis states $\mid\! a\rangle$= $\mid\! (n\! \downarrow)_1 (p\! \downarrow)_2 (n\! \downarrow)_3 (p\! \downarrow)_4\rangle$, $\mid\! (n\! \downarrow)_1 (n\! \downarrow)_2 (p\! \downarrow)_3 (p\! \downarrow)_4\rangle$, and so on. Matrix elements of the PV potential components are written schematically as \begin{eqnarray} \!\!\!\langle f\!\mid O \mid \! i \rangle&=&\sum_{a,b=1}^{96} \int {\rm d}{\bf x}_1 \, {\rm d}{\bf x}_2 {\rm d}{\bf x}_3 \, \psi^*_{f,a}({\bf x}_1,{\bf x}_2,{\bf x}_3) \nonumber \\ \!\!\!&&\times \left[ O({\bf x}_1,{\bf x}_2,{\bf x}_3)\right]_{ab} \psi_{i,b}({\bf x}_1,{\bf x}_2,{\bf x}_3) \ , \label{eq:mci} \end{eqnarray} where $\left[ O({\bf x}_1,{\bf x}_2,{\bf x}_3)\right]_{ab}$ denotes the matrix representing in configuration space any of the components in Table~\ref{tb:tab1}. 
Note that the operators ${\bf X}^{(n)}_{ij,\mp}$ occurring in $v^{\rm PV}_{ij}$ are conveniently expressed as \begin{eqnarray} \label{eq:xcomm} {\bf X}_{ij,+}^{(n)}&= &-{\rm i} \left[ 2\, f_n(r_{ij}) \,{\bm \nabla}_{{ij}} + \hat{\bf r}_{ij}\, f^\prime_n(r_{ij}) \right] \ ,\\ {\bf X}_{ij,-}^{(n)}&=&\hat{\bf r}_{ij} \, f^\prime_n(r_{ij}) \ , \end{eqnarray} where the gradient operator ${\bm \nabla}_{ij}= ({\bm \nabla}_i-{\bm \nabla}_j)/2$ acts on the right (initial) wave function, and $f^\prime(x) ={\rm d}f(x)/{\rm d}x$. Gradients are discretized as \begin{eqnarray} \nabla_{i,\alpha} \psi ({\bf x}_1,{\bf x}_2,{\bf x}_3)\!\!&\simeq&\!\! \big[ \psi(\dots {\bf r}_{i}+\delta\, \hat{\bf e}_\alpha \dots) \nonumber \\ \!\!&&\!\! -\psi(\dots {\bf r}_{i}-\delta\, \hat{\bf e}_\alpha \dots)\big]/(2\,\delta) \ , \end{eqnarray} where $\delta$ is a small increment and $\hat{\bf e}_\alpha$ is a unit vector in the $\alpha$-direction. Matrix multiplications in the spin-isospin space are performed exactly with the techniques developed in Ref.~\cite{Schiavilla89}. The problem is then reduced to the evaluation of the spatial integrals, which is efficiently carried out by a combination of MC and standard quadrature techniques. We write \begin{equation} \langle f\!\mid O \mid \! i \rangle\! = \!\int{\rm d} \hat{\bf x}_1 \,{\rm d} {\bf x}_2 {\rm d} {\bf x}_3 \, F(\hat{\bf x}_1,{\bf x}_2,{\bf x}_3) \! \simeq\! \frac{1}{N_c} \sum_{c=1}^{N_c} \frac{F(c)}{W(c)} \ , \end{equation} where the $c$'s denote collectively (uniformly sampled) directions $\hat{\bf x}_1$ and Jacobi coordinates $({\bf x}_2,{\bf x}_3)$, and the probability density $W(c)=\mid \!\!\Psi({\bf x}_2,{\bf x}_3)\!\!\mid^2\!\!/(4\pi)$---$\Psi({\bf x}_2,{\bf x}_3)$ is the triton bound-state wave function normalized to one---is sampled via the Metropolis algorithm.
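The structure of this estimator, an observable divided by the sampled density and averaged along a Metropolis walk, can be exercised on a one-dimensional toy problem (a self-contained sketch, unrelated to the four-body wave functions, in which $W\propto{\rm e}^{-x^2/2}$ and the integrand is ${\rm e}^{-x^2}$, whose exact integral is $\sqrt{\pi}$):

```python
import numpy as np

rng = np.random.default_rng(1)

def W(x):
    # Sampling density (an unnormalized form suffices for Metropolis): a Gaussian.
    return np.exp(-0.5 * x * x)

def F(x):
    # Integrand; its integral over the real line is sqrt(pi).
    return np.exp(-x * x)

w_norm = np.sqrt(2.0 * np.pi)   # normalization of W, needed in the estimator
x, total, n_kept = 0.0, 0.0, 0
n_burn, n_steps = 1_000, 200_000
for step in range(n_burn + n_steps):
    x_new = x + rng.uniform(-1.5, 1.5)
    if rng.uniform() < W(x_new) / W(x):   # Metropolis accept/reject
        x = x_new
    if step >= n_burn:                    # accumulate F/W after burn-in
        total += F(x) * w_norm / W(x)
        n_kept += 1
estimate = total / n_kept                 # approaches sqrt(pi) ~ 1.7725
```

In the actual calculation $W$ is built from the squared triton wave function, the walk samples the Jacobi coordinates $({\bf x}_2,{\bf x}_3)$ together with the uniformly distributed directions $\hat{\bf x}_1$, and each sample additionally carries a Gaussian quadrature over $x_1$.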
For each such configuration $c$ (total number $N_c$), the function $F$ is obtained by Gaussian integrations over the $x_1$ variable, {\it i.e.} \begin{eqnarray} F(c)&=& \sum_{a,b=1}^{96} \int_0^\infty {\rm d}x_1 \, x_1^2 \, \psi^*_{f,a}({\bf x}_1,{\bf x}_2,{\bf x}_3) \nonumber \\ &&\times \left[ O({\bf x}_1,{\bf x}_2,{\bf x}_3)\right]_{ab} \psi_{i,b}({\bf x}_1,{\bf x}_2,{\bf x}_3) \ . \end{eqnarray} Convergence in the $x$ integrations requires of the order of 50 Gaussian points, distributed over a non-uniform grid extending beyond 20 fm, while $N_c$ of the order of a hundred thousand is sufficient to reduce the statistical errors in the MC integration on the PV $T$-matrix elements at the few percent level. In this respect, we note that these errors are computed directly, by accumulating, in the course of the random walk, values---and their squares---for the appropriate linear combinations of $R$-matrix elements, as given in Eqs.~(\ref{eq:t0011}) and (\ref{eq:t0110n})--(\ref{eq:t0111n}) of Appendix~\ref{app:a1}. Because of correlations, the errors on the $T$-matrix elements obtained in this way are much smaller than those that would be inferred from the $R$-matrix elements by naive error propagation. The present method turns out to be computationally intensive, particularly because of the large number of wave functions (and their derivatives) that have to be generated at each configuration $({\bf x}_1,{\bf x}_2,{\bf x}_3)$. The computer codes have been successfully tested by carrying out a calculation based on Gaussian wave functions for the initial and final states, as described in the following subsection. \subsection{Test calculation} \label{sec:calt} In order to test the computer programs based on QMC techniques, we carried out a preliminary calculation using wave functions for which it is possible to evaluate the matrix elements of the PV potential also analytically. 
These (antisymmetric) wave functions are written as \begin{eqnarray} \Psi^{JJ_z}_{\gamma, LS}&=&{1\over 4\pi} \sum_{p=1}^4 e^{-\beta \rho^2} y_p^{L+2\, n_\beta} \nonumber \\ &\times& \biggl[ Y_L(\hat {\bf y}_p) \otimes \Bigl[\phi_{\gamma }(ijk) \otimes\chi_\gamma (l)\Bigr]_S \biggr]_{JJ_z} \ , \label{eq:test1} \end{eqnarray} where $\phi_\gamma$ ($\chi$) represents a three-nucleon (single-nucleon) spin-isospin one-half state with isospin projection --1/2 (+1/2) for $\gamma =1$ ($p$-$^3$H channel) and +1/2 (--1/2) for $\gamma=2$ ($n$-$^3$He channel). Thus, as in the realistic case, the wave functions above do not have a definite total isospin $T$ but, rather, are combinations of $T=0$ and $T=1$ states (having, of course, $T_z=0$). The whole radial dependence is given by the factor $y_p^{L+2\, n_\beta} e^{-\beta \rho^2}$, where $\rho$ is the hyperradius. The non-negative integer $n_\beta$ and the real parameter $\beta$ can be varied so as to obtain a family of wave functions. For the purpose of computing matrix elements of two-body operators, it is convenient to express the pieces in Eq.~(\ref{eq:test1}) corresponding to permutations $p \neq 1$ in terms of quantities relative to the permutation $p=1$ or $(123,4)$. 
This can be accomplished by making use of the properties of Wigner coefficients: \begin{widetext} \begin{eqnarray} \Psi^{JJ_z}_{\gamma, LS}&=& e^{-\beta \rho^2} \sum_{\mu} C^{LSJ}_{\gamma\, n_\beta; \mu}\, x_1^{n_1} x_2^{n_2} x_3^{n_3} \Biggl[ \biggl[ \Bigl[ Y_{\ell_3} (\hat{\bf x}_3) \otimes \bigl[ \chi_1 \otimes\chi_2\bigr]_{S_2}\Bigr]_{j_3} \otimes \Bigl[ Y_{\ell_2}(\hat{\bf x}_2) \otimes \chi_{3}\Bigr]_{j_2} \biggr]_{J_2} \otimes \Bigl[ Y_{\ell_1}(\hat{\bf x}_1) \otimes \chi_{4}\Bigr]_{j_1} \Biggr]_{JJ_z}\nonumber \\ &&\qquad \qquad \times \biggl[ \Bigl[ \bigl[\xi_1\otimes \xi_2\bigr]_{T_2}\otimes \xi_{3}\Bigr]_{T_3} \otimes \xi_4\biggr]_T\ , \qquad \mu \equiv \{ \ell_1\ell_2\ell_3 n_1 n_2 n_3 j_1 j_2 j_3 J_2 S_2 T_2 T_3 T\}\ , \label{eq:test2} \end{eqnarray} \end{widetext} where $\chi_i$ and $\xi_i$ are the spin and isospin states of nucleon $i$, ${\bf x}_j$ are the Jacobi vectors corresponding to the permutation $p=1$ and $n_1+n_2+n_3=L+2\,n_\beta$, and the $C$'s denote combinations of Wigner coefficients. It is now relatively simple to evaluate the matrix of the PV potential $\sum_{i<j} v^{\rm PV}_{ij}=6\, v^{\rm PV}_{12}$~\cite{Schiavilla08}, by expressing the wave functions as in Eq.~(\ref{eq:test2}). As an example, we report here the results obtained for two $J=0$ wave functions. In Table~\ref{tb:testwf}, we list the values of the quantum numbers $LSJ$, and parameters $n_\beta$ and $\beta$, used in the actual calculation. The ket $|\gamma\rangle$ with $\gamma=1$ (2) describes a ``$p$-$^3$H'' (``$n$-$^3$He'') state. We compute the matrix elements in two ways: i) by performing the analytical calculation via the transformation of Eq.~(\ref{eq:test2}), and ii) by using the QMC techniques discussed earlier. 
\begin{table}[bth] \begin{tabular}{c|c|c|c|c} \hline \hline State & $J^\pi$ & $LS$ & $n_\beta$ & $\beta$ \\ \hline $|1\rangle$ & $0^+$ & $00$ & $0$ & 0.25 \\ $|2\rangle$ & $0^-$ & $11$ & $0$ & 0.25 \\ \hline \hline \end{tabular} \caption{Values of the quantum numbers and parameters for some of the test wave functions used in this work. See text for explanation.} \label{tb:testwf} \end{table} The values for the matrix elements $-\langle 1| O_{12}^{(n)} | 2\rangle$ corresponding to the $12$ components of the DDH potential (see Table~\ref{tb:tab1}) are reported in Table~\ref{tb:test}. \begin{table}[bth] \begin{tabular}{c|c|c} \hline \hline $n$ & Analytical & QMC \\ \hline 1 & $-2.987$ & $-3.020(15)$ \\ 2 & $\phantom{-}0.000$ & $\phantom{-}0.000\phantom{(15)}$ \\ 3 & $-0.333$ & $-0.349(4)$ \\ 4 & $-0.264$ & $-0.281(4)$ \\ 5 & $-0.222$ & $-0.233(2)$ \\ 6 & $\phantom{-}0.000$ & $\phantom{-}0.000\phantom{(15)}$ \\ 7 & $\phantom{-}0.000$ & $\phantom{-}0.000\phantom{(15)}$ \\ 8 & $-0.335$ & $-0.349(6)$ \\ 9 & $-0.143$ & $-0.147(2)$ \\ 10 & $-0.335$ & $-0.349(6)$ \\ 11 & $-0.286$ & $-0.294(3)$ \\ 12 & $-0.264$ & $-0.281(4)$ \\ \hline \hline \end{tabular} \caption{Results for the real part of the (dimensionless) matrix element $-\langle 1| O_{12}^{(n)} | 2\rangle $ calculated analytically and by using the QMC code. For the latter calculation, the statistical uncertainties are reported in parentheses, and correspond to a (rather modest) set of 5000 samples. The operators $O_{12}^{(n)}$ are those of the DDH potential, listed in Table~\ref{tb:tab1}.} \label{tb:test} \end{table} There is good agreement between the results of the two calculations. Note that the $n=2$ contribution associated with an isoscalar operator as well as the $n=6,7$ contributions corresponding to isotensor operators vanish. The test wave functions consist of a superposition of $T=0$ and $T=1$ components, and therefore it is not immediately apparent why this should be so.
The reason for this result becomes clear only after carrying out the decomposition of the wave functions as in Eq.~(\ref{eq:test2}). It comes about because of delicate cancellations among various terms. We find it reassuring that these same matrix elements are seen to vanish (within machine precision) with the QMC code. We have verified explicitly that the close agreement between the two calculations persists for the matrix elements involving other pairs of states, including those having $J=1$. \section{Further results} \label{sec:res} The results for the coefficients $I^\alpha_n$ in Eq.~(\ref{eq:in}), obtained with the (zero energy) $n$-$^3$He continuum wave functions corresponding to the AV18, AV18/UIX, N3LO, and N3LO/N2LO strong-interaction Hamiltonians, are reported for the DDH and pionless EFT PV potentials in Tables~\ref{tb:tab8} and~\ref{tb:tab9}, respectively. The subscript $n$ in $I^\alpha_n$ specifies the operators as listed in Table~\ref{tb:tab1}, and the set of cutoff parameters entering the modified Yukawa functions are given in Table~\ref{tb:tabpv}. 
\begin{table}[bth] \begin{tabular}{c|c|c|c|c} \hline \hline & \multicolumn{4}{c} {$I_n^{\rm DDH}$} \\ \hline \hline $n$ & AV18 & AV18/UIX & N3LO & N3LO/N2LO \\ \tableline 1 &--0.186E+00 &--0.189E+00 &--0.203E+00 & --0.113E+00 \\ 2 &--0.826E--02 &--0.577E--02 &--0.608E--02 & --0.622E--02 \\ 3 &+0.811E--02 &+0.864E--02 &+0.333E--02 & --0.693E--02 \\ 4 &--0.620E--02 &--0.794E--02 & --0.970E--02& --0.753E--02 \\ 5 &--0.800E--02 &--0.976E--02 &--0.102E--01 & --0.781E--02\\ 6 &--0.359E--03 &--0.170E--03 & --0.942E--03& +0.322E--03 \\ 7 &+0.631E--03 &+0.115E--02 & --0.641E--04 & +0.703E--03 \\ 8 & +0.605E--02&+0.404E--02 &--0.699E--03 & --0.794E--02\\ 9 &+0.314E--02 &+0.289E--02 & --0.171E--02& --0.577E--02\\ 10 &--0.689E--02 &--0.887E--02 &--0.115E--01 & --0.902E--02\\ 11 & --0.930E--02&--0.113E--01 & --0.123E--01&--0.940E--02 \\ 12 &--0.801E--02 &--0.979E--02 & --0.115E--01& --0.606E--02\\ \hline \hline \end{tabular} \caption{The coefficient $I_n^{\rm DDH}$ corresponding to the DDH potential components $O^{(n)}$ in combination with the AV18, AV18/UIX, N3LO, N3LO/N2LO strong interaction Hamiltonians. The statistical Monte Carlo errors are not shown, but are at the most 10\% for the smallest contributions, and less than 2\% for the largest. The $I_n^{\rm DDH}$ are in units of fm$^{-1}$.} \label{tb:tab8} \end{table} \begin{table}[bth] \begin{tabular}{c|c|c} \hline \hline & \multicolumn{2}{c} {$I_n^{\rm EFT}$} \\ \hline \hline $n$ & AV18/UIX & N3LO/N2LO \\ \tableline 1 & --0.195E+00 & --0.119E+00 \\ 4 & --0.606E+00 & --0.391E+00 \\ 6 & --0.639E--02 & +0.179E--01 \\ 8 & +0.608E+00 & --0.515E--01 \\ 9 & +0.301E+00 & +0.426E--01 \\ \hline \hline \end{tabular} \caption{The coefficient $I_n^{\rm EFT}$ corresponding to the pionless EFT potential components $O^{(n)}$ in combination with the AV18/UIX and N3LO/N2LO strong interaction Hamiltonians. Note that there are no potential components with $n$=2, 3, 5, 7, 10, 11, and 12. 
The statistical Monte Carlo errors are not shown, but are typically less than 5\%. The $I_n^{\rm EFT}$ are in units of fm$^{-1}$.} \label{tb:tab9} \end{table} A quick glance at Table~\ref{tb:tab8} makes it clear that i) the contribution of the long-range component of the DDH potential due to pion exchange is at least a factor of 15 larger than that of any of the short-range components induced by vector-meson exchanges, and ii) among the vector-meson exchange contributions the isoscalar ($n=2,3$ and $n=8,9$) and isovector ($n=4,5$ and $n=8$--12) ones are comparable in magnitude and much larger than those due to isotensor $\rho$-meson exchanges ($n=6,7$). It is also clear that the pion-exchange contribution is fairly insensitive to the choice of input strong-interaction Hamiltonian (with or without the inclusion of a three-nucleon potential) used to generate the $n$-$^3$He even and odd parity states with $J=0$ and 1. However, the N3LO/N2LO model stands out: the pion-range contribution is (in magnitude) substantially smaller than that calculated for the other models. Moreover, the isoscalar $\rho$-meson ($\omega$-meson) contribution corresponding to $n=3$ ($n=8$) has the opposite sign to that obtained for the other models (AV18 and AV18/UIX). To investigate the stability of the AV18/UIX and N3LO/N2LO results with respect to convergence in the internal part of the wave function, we present in Table~\ref{tb:tcs2} the coefficients $C_\alpha^i$ entering the PV observable $a_z$ in Eq.~(\ref{eq:cddh}) for two different choices of wave functions. The results labeled ``wf2'' were listed earlier in Table~\ref{tb:tcs}, except that those for the N3LO/N2LO model are based here on a smaller number of configurations. These results are obtained by including in the expansion of the internal parts of the $0^{\pm}$ and $1^\pm$ wave functions the maximum number of HH functions we have considered in the present work.
The results corresponding to the row ``wf1'' are obtained by reducing this number: in practice, for each of the classes $K_1,\dots,K_4$ we set $K_i({\rm wf1})=K_i({\rm wf2})-2$ (see discussion in Sec.~\ref{sec:whh}). Note also that the Monte Carlo calculation of the ``wf1'' coefficients for the AV18/UIX model uses a number of configurations smaller by a factor of three, and therefore the associated statistical errors are substantially larger. On the other hand, the ``wf1'' and ``wf2'' N3LO/N2LO results correspond to the same number of configurations and indeed the same random walk. Taking the statistical errors into account, we conclude that both the AV18/UIX and N3LO/N2LO calculations have converged. This is not the case as far as the N3LO/N2LO singlet scattering length is concerned. \begin{widetext} \begin{center} \begin{table}[bth] \begin{tabular}{l|c||c|c|c||c|c} \hline \hline & $C_\pi^1$ & $C_\rho^0$ & $C_\rho^1$ & $C_\rho^2$ & $C_\omega^0$ &$C_\omega^1$ \\ \tableline AV18/UIX-wf1 & --0.2077(281) & --0.0433(116) &+0.0242(29) & --0.0011(2) &--0.0232(77) & +0.0490(30) \\ AV18/UIX-wf2 &--0.1853(150) &--0.0380(70) & +0.0230(18) & --0.0011(1) &--0.0231(56) & +0.0500(20) \\ \hline N3LO/N2LO-wf1 & --0.1118(29) & +0.0369(25) & +0.0200(8) & --0.0009(1) & +0.0390(23) & +0.0402(12) \\ N3LO/N2LO-wf2 & --0.1050(35) & +0.0445(33) & +0.0189(9) & --0.0008(1) & +0.0454(31) & +0.0417(12) \\ \hline \hline \end{tabular} \caption{The coefficients $C_\alpha^i$ entering the PV observable $a_z$, corresponding to the AV18/UIX and N3LO/N2LO strong-interaction Hamiltonians for two sets of wave functions (see text for details). The statistical errors due to the Monte Carlo integrations are indicated in parentheses. } \label{tb:tcs2} \end{table} \end{center} \end{widetext} Therefore, the differences found between the N3LO/N2LO and the other models are presumably due to the fact that the HH expansion for the N3LO/N2LO wave functions (specifically the $0^+$ wave function) has not fully converged.
Consequently, in the following we restrict our discussion to the results obtained with the AV18, N3LO, and AV18/UIX models. In reference to the pion contribution, the calculated $C^1_\pi$ is rather insensitive to the choice of strong Hamiltonian. However, there is still a considerable model dependence in the results obtained for the individual contributions due to vector-meson exchanges. This model dependence, in turn, significantly impacts predictions for the PV asymmetry $a_z$, as can be surmised from Table~\ref{tb:tab10}. Of course, this is so under the assumption that the values for the strong- and weak-interaction coupling constants characterizing the DDH potential are those listed in Table~\ref{tb:tabpv}. For example, the combinations of coupling constants corresponding to pion exchange ($n=1$) and isoscalar $\rho$-meson exchange ($n=2$ and 3) are, respectively, $c_1^{\rm DDH}=(4.48\times 10^{-7})$ fm, $c_2^{\rm DDH}=(11.2\times10^{-7})$ fm and $c_3^{\rm DDH}=(79.5\times10^{-7})$ fm---note that $c_3^{\rm DDH} =(1+\kappa_\rho) c_2^{\rm DDH}$ and $\kappa_\rho=6.1$ is the value adopted here for the tensor coupling of the $\rho$-meson to the nucleon~\cite{Machleidt01}. Consequently, the contribution $c_3^{\rm DDH} \times I_3^{\rm DDH}$ is comparable in magnitude and opposite in sign to the pion-exchange contribution $c_1^{\rm DDH} \times I_1^{\rm DDH}$. In this respect, we note that the asymmetry $a_z$ changes roughly from $-27\times 10^{-8}$ to $ +13 \times 10^{-8}$ as the six PV weak coupling constants entering the DDH model are varied over their respective allowed ranges determined in Ref.~\cite{Desplanques80}. Thus, $a_z$ could potentially be large enough to make its measurement (relatively) easy.
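The numbers quoted above can be cross-checked with simple arithmetic; the sketch below (an illustrative check, not the authors' code, with values transcribed from the text and from the AV18 column of Table~\ref{tb:tab8}) verifies that $c_3^{\rm DDH}=(1+\kappa_\rho)\,c_2^{\rm DDH}$ reproduces $79.5\times 10^{-7}$ fm, and that $c_3^{\rm DDH}\, I_3^{\rm DDH}$ is indeed comparable in magnitude and opposite in sign to $c_1^{\rm DDH}\, I_1^{\rm DDH}$, the latter matching the $n=1$ AV18 entry of Table~\ref{tb:tab10}.

```python
# Arithmetic cross-check of the coupling-constant combinations quoted in the
# text (values transcribed from the text and tables; units: c in fm,
# I in fm^-1, so the products c*I are dimensionless).
kappa_rho = 6.1              # rho-meson tensor coupling adopted in the text
c1 = 4.48e-7                 # pion-exchange combination c_1^DDH, fm
c2 = 11.2e-7                 # isoscalar-rho combination c_2^DDH, fm
c3 = (1.0 + kappa_rho) * c2  # should reproduce c_3^DDH = 79.5e-7 fm

I1 = -0.186                  # I_1^DDH for AV18, fm^-1 (Table of I_n^DDH)
I3 = 0.811e-2                # I_3^DDH for AV18, fm^-1

pion_term = c1 * I1          # n = 1 contribution to a_z
rho_term = c3 * I3           # n = 3 contribution to a_z
print(c3, pion_term * 1e8, rho_term * 1e8)
```

In units of $10^{-8}$ the pion term comes out as $-8.33$, consistent with the first cumulative AV18 entry of Table~\ref{tb:tab10}, while the $n=3$ term is about $+6.4$, comparable in size and opposite in sign.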
\begin{widetext} \begin{center} \begin{table}[bth] \begin{tabular}{c|c|c|c|c} \hline \hline & \multicolumn{4}{c} {$10^{8} \times a^{\rm DDH}_z$} \\ \hline \hline $n$ & AV18 & AV18/UIX & N3LO & N3LO/N2LO \\ \tableline 1 & --8.33$\pm$0.35 & --8.45$\pm$0.69 & --9.07$\pm$0.40 & --5.06$\pm$0.34 \\ 2 & --9.26$\pm$0.35 & --9.09$\pm$0.70 & --9.75$\pm$0.40 &--5.76$\pm$0.38 \\ 3 & --2.80$\pm$0.68 & --2.22$\pm$1.34 & --7.10$\pm$0.89 & --11.3$\pm$0.98 \\ 4 & --2.86$\pm$0.68 & --2.30$\pm$1.34 & --7.20$\pm$0.89 & --11.3$\pm$0.98\\ 5 & --3.40$\pm$0.68 & --2.95$\pm$1.34 & --7.88$\pm$0.89& --11.9$\pm$0.98\\ 6 & --3.41$\pm$0.68 & --2.95$\pm$1.34 & --7.90$\pm$0.89 &--11.9$\pm$0.98\\ 7 & --3.32$\pm$0.68 & --2.80$\pm$1.34 & --7.91$\pm$0.89 &--11.8$\pm$0.98\\ 8 & --3.97$\pm$0.69 & --3.23$\pm$1.35& --7.83$\pm$0.90& --10.9$\pm$0.99\\ 9 & --4.31$\pm$0.69 & --3.55$\pm$1.35 & --7.65$\pm$0.90 &--10.3$\pm$0.99 \\ 10& --4.09$\pm$0.69 & --3.26$\pm$1.35& --7.28$\pm$0.90& --10.0$\pm$0.99\\ 11& --3.79$\pm$0.69 & --2.89$\pm$1.35 & --6.88$\pm$0.90&--9.70$\pm$0.99\\ 12& --3.45$\pm$0.69 & --2.48$\pm$1.35 & --6.40$\pm$0.90& --9.44$\pm$0.99\\ \hline variation&from --27.1 to +13.3 &from --27.6 to +13.8 & from --26.0 to +3.68 & from --29.7 to +6.66 \\ \hline \hline \end{tabular} \caption{Cumulative contributions to $a_z$ and associated errors (rows 1--12), obtained for the DDH PV potential with values for the coupling constants as listed in Table~\protect\ref{tb:tabpv}. The four columns correspond to the different combinations of strong-interaction Hamiltonians adopted in the calculations. The last row shows the minimum and maximum (central) values that $a_z$ can attain, as the PV couplings are varied over the allowed ranges in the original DDH formulation~\protect\cite{Desplanques80}. } \label{tb:tab10} \end{table} \end{center} \end{widetext} The coefficients $I^{\rm EFT}_n$ for the operators entering the pionless EFT PV potential, that is $n$=1, 4, 6, 8, and 9, are reported in Table~\ref{tb:tab9}. 
The coefficients $I^{\rm EFT}_n$ for $n$=1, 4, 8, and 9, corresponding to isoscalar and isovector structures, are all of the same order of magnitude, while that for $n$=6 with isotensor character is much smaller. Note that the radial functions are taken to be the same for all $n$, $f^{\rm EFT}_n(r)=f_\mu(r)$. Of course, the $I^{\rm EFT}_n$'s will depend significantly on the value of the mass $\mu$---either $\mu=m_\pi$, as appropriate in the present pionless EFT formulation, or $\mu=1$ GeV, the scale of chiral symmetry breaking, as appropriate in the formulation in which pion degrees of freedom are explicitly retained. Indeed, in this latter formulation the leading order component of $v^{\rm PV}$ has the same form as the pion-exchange term in DDH. Finally, rough estimates have been made for the range of values allowed for the low-energy constants $C_1$, $C_2+C_4$, $C_5$, $\tilde C_1$, and $C_6$ in Ref.~\cite{Zhu05}. However, at the present time a systematic program for their determination is yet to be carried out. In view of this, we refrain here from making EFT-based predictions for the longitudinal asymmetry. \section*{Acknowledgments} The authors would like to thank J.D.\ Bowman, C.B.\ Crawford, and M.T.\ Gericke for their continued interest in the present work and for correspondence in reference to various aspects of the calculations. One of the authors (R.S.) would also like to thank the Physics Department of the University of Pisa, the INFN Pisa branch, and especially the Pisa group for the support and warm hospitality extended to him on several occasions. The work of R.S.\ is supported by the U.S.~Department of Energy, Office of Nuclear Physics, under contract DE-AC05-06OR23177. The calculations were made possible by grants of computing time from the National Energy Research Supercomputer Center.
\section{Introduction} Entanglement is a fundamental concept of quantum mechanics, manifesting in strong, non-local correlations between subsystems. Constituting one of the most crucial differences between classical and quantum physics, entanglement is studied across diverse areas of physics, ranging from quantum gravity \cite{blackhole} to topological order in condensed matter systems \cite{topology}. In recent years, entanglement generation in non-equilibrium many-body systems has received special attention, due to the intimate connection between entanglement spreading and equilibration in closed systems \cite{NonEquilibriumReview,NonEquilibrium,schmiedmayer,trotzky}. Even for globally pure quantum states, the generation of strong entanglement between subsystems allows the thermalization of an isolated quantum system under its own coherent dynamics, in the sense that measurements of local observables become indistinguishable from the predictions of an equilibrium thermal ensemble \cite{thermalization,thermalization2,thermalization3}. In contrast, the large number of conserved quantities in integrable systems can prevent entanglement spreading, and result in the failure of thermalization. The slow, logarithmic increase of entanglement has been suggested as a fingerprint of non-ergodic many-body localized phases \cite{mbllog,mbllog2,mbllog3}, whereas delocalized phases are characterized by a linear, light cone-like propagation of correlations \cite{lightcone,ballistic}. Despite its fundamental importance, the experimental investigation of entanglement in correlated many-body systems remains challenging, since it usually requires information on the full quantum state. However, the swift experimental progress in recent years has opened up unprecedented possibilities to study entanglement in ultracold atomic settings \cite{ZwergerReview}.
Site-resolved control of ultracold atoms in optical lattices allowed the direct measurement of R\'enyi entanglement entropy and mutual information \cite{lightcone2,greiner1,bosonentanglement}, as well as the investigation of the intimate relation between the quantum purity of subsystems and the thermalization of an isolated non-equilibrium system \cite{greiner2}. \begin{figure}[b!] \includegraphics[width=0.5\columnwidth]{sketch.pdf}\\ \caption{Illustration of the tunnel coupling described by Hamiltonian \eqref{eq:Hboson}. A Bose-Einstein condensate is loaded into a double well potential, with tunneling $J$ between the two sides. The $N_L$ and $N_R$ particles on the left and right hand side condense into a single wave function. Bosons in the same well repel each other with interaction strength $U$.} \label{fig:intro0} \end{figure} Entanglement is also at the heart of entropy production in closed quantum systems \cite{deutsch,rigol}. Taking two coupled quantum systems, even coherent evolution of the whole system produces entanglement entropy for each subsystem and may lead to equilibration. Two coupled single mode condensates provide one of the simplest examples to study this phenomenon in detail. In contrast to the case of small subsystems~ \cite{lightcone2,greiner1,bosonentanglement}, here both subsystems are equally large, and therefore equilibration can be understood in terms of a \emph{microcanonical} ensemble rather than a thermal Gibbs ensemble. This system can be realized by loading a Bose-Einstein condensate into a double well potential (see Fig.~\ref{fig:intro0}). Assuming that the atoms in the left and right wells condense into a single wave function, the dynamics is governed by the Hamiltonian \cite{milburn} \begin{equation}\label{eq:Hboson} \hat{H}=-J\left(\hat{a}_L^\dagger\hat{a}_R+\hat{a}_R^\dagger\hat{a}_L\right)+\frac{U}{2}\left(\hat{N}_L^2-\hat{N}_L+\hat{N}_R^2-\hat{N}_R\right). 
\end{equation} Here the bosonic operators $\hat{a}_L^\dagger$ and $\hat{a}_R^\dagger$ create particles into the left and right potential wells respectively, and $\hat{N}_i=\hat{a}_i^\dagger\hat{a}_i$ for $i=L,R$. The first term in the Hamiltonian describes the tunneling of particles, while the second term takes into account the interaction between the bosons in the same potential well. For given total particle number $N$, the entanglement entropy between the left and right wells is simply given by \cite{entanglement} \begin{equation*} \mathcal{S}(t)=-\sum_{n_L=0}^{N} P_t(n_L)\,\log P_t(n_L), \end{equation*} where $P_t(n_L)$ denotes the probability of state $\hat{N}_L=n_L$ at time $t$. Here, concentrating on the effect of dephasing during the coherent, unitary time evolution of a closed quantum system, we investigate the time dependence of the entanglement entropy $\mathcal{S}(t)$ at $T=0$ temperature. Importantly, the full time evolution of $\mathcal{S}(t)$ can be investigated experimentally, since it only requires measuring the number of particles in the left and right wells. In contrast to earlier entropy measurements in optical lattices, involving a small sublattice with only a few atoms, coupled single-mode Bose-Einstein condensates would allow to study entanglement in large correlated many-body systems \cite{oberthalerentanglement,riedelentanglement}. Let us note that besides the double well experiment illustrated above, the Hamiltonian \eqref{eq:Hboson} can also be realized in a two component condensate trapped in a single well. E.g. two atomic hyperfine states forming the condensates may be coupled through microwaves \cite{hall}, while their interaction may be tuned using a Feshbach resonance \cite{feshbachrev}. Hamiltonian \eqref{eq:Hboson} can also be rewritten in a more convenient form. 
Using the Schwinger boson representation~\cite{milburn}, we introduce spin operators \begin{equation*} \hat{S}_z=\dfrac{1}{2}\left(\hat{N}_L-\hat{N}_R\right), \quad \hat{S}_x=\dfrac{1}{2}\left(\hat{a}_L^\dagger\hat{a}_R+\hat{a}_R^\dagger\hat{a}_L\right), \end{equation*} of length $N/2$, with $N$ denoting the total number of particles. Apart from an irrelevant constant term, $\hat{H}$ can then be expressed as \begin{equation}\label{eq:H} \hat{H}= -2J\hat{S}_x+U\hat{S}_z^2. \end{equation} In this new representation, the entanglement entropy between the left and right wells corresponds to the entropy associated with $\hat{S}_z$ \cite{footnote}, \begin{equation}\label{eq:S} \mathcal{S}(t)=-\sum_{m=-N/2}^{N/2} P_t(m)\,\log P_t(m)\;, \end{equation} with $P_t(m)$ denoting the probability of state $\hat{S}_z=m$ at time $t$. Let us note that the spin Hamiltonian Eq. \eqref{eq:H} is a special case of the Lipkin-Meshkov-Glick model, describing mutually interacting spin-1/2 particles, embedded in a magnetic field~\cite{LMG}. In this context $\hat{S}_\alpha=\sum_i\hat{\sigma}_i^\alpha/2$ is the total spin operator, with $\hat{\sigma}_i^\alpha$ denoting the Pauli matrices at site $i$ for $\alpha=x,y,z$. Depending on the strength of the magnetic field, the Lipkin-Meshkov-Glick model shows a second order quantum phase transition. The entanglement properties of the ground state of this system have been analyzed by calculating the von Neumann entropy of a subsystem consisting of $L$ sites. In particular, it has been shown that the entanglement entropy exhibits a logarithmic divergence at the critical point of the quantum phase transition~\cite{LMG1,QPT}. A similar divergence in the entanglement properties of the ground state at the critical point has also been observed in other systems, like the Dicke model or the transverse field Ising model~\cite{dicke,fisherinfo}.
Moreover, the dynamics of the von Neumann entropy of a single spin in the Lipkin-Meshkov-Glick model has also been investigated~\cite{LMG2}. In this work we concentrate on a different type of entanglement entropy, associated with the spin operator $\hat{S}_z$. \begin{figure}[b!] \includegraphics[width=\columnwidth]{EntropyScalingTheta0.pdf} \caption{Typical time evolution of entropy \eqref{eq:S}. Rescaled entropy $\mathcal{S}/\log N$ is plotted as a function of dimensionless time $t J/(2\pi)$ for different total particle numbers $N$, keeping $\alpha=1.85$ fixed. The initial state is chosen as $|\hat{S}_z=N/2\rangle$. The entropy oscillates on top of a steady increase, before saturating to a value proportional to $\log N$.} \label{fig:intro2} \end{figure} Our main purpose here is to analyze the time evolution of entropy \eqref{eq:S} for different initial states and interaction strengths, by combining numerical results with analytical calculations. We demonstrate that $\mathcal{S}(t)$ exhibits coherent oscillations, reflecting the quantum mechanical dynamics of the coupled single-mode condensates. At the same time, $\mathcal{S}(t)$ shows a steady increase, and eventually reaches a stationary, ``equilibrium'' value, even though this closed system always remains in a pure state. The dynamics of the system depends crucially on the dimensionless parameter \begin{equation}\label{eq:alpha} \alpha\equiv\dfrac{NU}{2J}, \end{equation} characterizing the strength of interactions~\cite{dynamics}. For $\alpha<1$, the average population imbalance between the two potential wells, $N_L-N_R$, oscillates between positive and negative values. For $\alpha>1$, however, the system undergoes a self-trapping transition \cite{oberthaler,oberthalersemiclass}.
Here, for large initial particle number imbalance $N_L-N_R$ and strong enough interactions $\alpha\gg 1$, the interaction energy of the initial state prevents the particle numbers in the two wells from levelling off, and the amplitude of the population imbalance oscillations is suppressed (see Sec.~\ref{sec:semiclass} for more details). We show a typical example of entropy production in Fig.~\ref{fig:intro2}. The initial state of the system corresponds to maximal population imbalance, $|\hat{S}_z=N/2\rangle$, and the time evolution of $\mathcal{S}$ is calculated numerically by exact diagonalization. The entropy shows oscillations on top of a steady increase, before saturating to a constant value. Moreover, the curves corresponding to the same $\alpha$ but different total particle numbers can be scaled together. As we discussed earlier, these oscillations during entropy production should be experimentally accessible (for a discussion of experimental parameters see Sec. \ref{sec:microwave}). \begin{figure}[t!] \includegraphics[width=0.75\columnwidth]{Overlap2.pdf} \caption{Overlap between the wave function and different spin coherent states after dephasing. The overlap $|\langle\Omega_{\theta,\varphi}|\psi(t)\rangle|^2$ is plotted as a function of polar and azimuthal angles $\theta$ and $\varphi$, for interaction strength $\alpha=2$. Here $|\Omega_{\theta,\varphi}\rangle$ is the spin coherent state of direction $(\theta,\varphi)$, and $|\psi(t)\rangle$ denotes the wave function after time $tJ/(2\pi)=2.7$. We used the initial state $\hat{S}_z=N/2$ with $N=500$, lying on the boundary of self-trapping. Due to dephasing, the initial state quickly spreads over the vicinity of the classical trajectory (black line), making it possible to apply a classical microcanonical description.} \label{fig:introoverlap} \end{figure} The long time limit of the entropy also reflects the self-trapping transition by showing a sudden jump at the ``phase boundary''.
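The exact-diagonalization calculation described above amounts to a few lines of linear algebra. The sketch below is an illustrative implementation (with assumed parameter values $N=40$, $J=1$, $\alpha=1.85$, not reproducing any figure of this paper): it builds the spin-$N/2$ matrices, evolves $|\hat{S}_z=N/2\rangle$ under the Hamiltonian \eqref{eq:H}, and evaluates the entropy \eqref{eq:S}.

```python
import numpy as np

# Sketch: exact diagonalization of H = -2 J Sx + U Sz^2 for total spin N/2,
# and the entropy S(t) of the S_z distribution.  Parameters are illustrative.
def spin_matrices(N):
    """Return (Sx, Sz) for spin S = N/2 in the S_z eigenbasis (m = -S..S)."""
    S = N / 2.0
    m = np.arange(-S, S + 1.0)                  # S_z eigenvalues
    Sz = np.diag(m)
    # ladder matrix elements: <m+1| S+ |m> = sqrt(S(S+1) - m(m+1))
    sp = np.sqrt(S * (S + 1) - m[:-1] * (m[:-1] + 1))
    Sx = 0.5 * (np.diag(sp, -1) + np.diag(sp, 1))
    return Sx, Sz

def entropy(psi):
    """S = -sum_m P(m) log P(m), with P(m) = |<m|psi>|^2."""
    P = np.abs(psi) ** 2
    P = P[P > 1e-12]
    return float(-np.sum(P * np.log(P)))

N, J, alpha = 40, 1.0, 1.85
U = 2.0 * J * alpha / N                         # from alpha = N U / (2 J)
Sx, Sz = spin_matrices(N)
H = -2.0 * J * Sx + U * Sz @ Sz

E, V = np.linalg.eigh(H)
psi0 = np.zeros(N + 1); psi0[-1] = 1.0          # initial state |S_z = +N/2>
psi_t = V @ (np.exp(-1j * E * 3.0) * (V.conj().T @ psi0))  # evolve to t = 3
S0, St = entropy(psi0), entropy(psi_t)
norm_t = float(np.sum(np.abs(psi_t) ** 2))
print(S0, St, norm_t)
```

Since the initial state is an $\hat{S}_z$ eigenstate, $\mathcal{S}(0)=0$; at later times the entropy is positive and bounded by $\log(N+1)$, and the unitary evolution preserves the norm.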
Interestingly, the computed asymptotic entropy value agrees well with the predictions of a classical microcanonical ensemble, where the normalized spin vector $\vec{\Omega}\equiv 2\vec{S}/N$ is distributed uniformly along a classical trajectory. The remarkable success of classical description can be understood by investigating the overlap between the wave function $|\psi(t)\rangle$ and the spin coherent states $|\Omega_{\theta,\varphi}\rangle$, polarized into direction $(\sin\theta\cos\varphi,\sin\theta\sin\varphi,\cos\theta)$. We plotted this overlap on the unit sphere in Fig.~\ref{fig:introoverlap}, for a maximally polarized initial state $\hat{S}_z=N/2$, with the interaction strength tuned to the boundary of self-trapping transition. For sufficiently large $t$, the dephasing between different energy eigenstates leads to the broadening of the wave function, and the overlap $|\langle\Omega_{\theta,\varphi}|\psi(t)\rangle|^2$ traces out precisely the semiclassical trajectory (black line in Fig.~\ref{fig:introoverlap}). The paper is organized as follows. In Sec. \ref{sec:semiclass}, we outline the semi-classical dynamics of Hamiltonian \eqref{eq:H} \cite{phasespace,semiclass,semiclass2}. We analyze the entropy oscillations and the entropy production for short times, and compare the exact dynamics to a semi-classical approximation in Sec. \ref{sec:shortt}. In Sec. \ref{sec:longt} we concentrate on the stationary long time limit of the entropy, and show that it is well approximated by the classical entropy of the microcanonical ensemble. We outline the experimental realization of Hamiltonian \eqref{eq:H} in microwave measurements with $^{87}$Rb atoms in Sec. \ref{sec:microwave}. Our conclusions are summarized in Sec. \ref{sec:discuss}. \section{Semiclassical dynamics}\label{sec:semiclass} \begin{figure}[b!] 
\includegraphics[width=\columnwidth]{Trajectories.pdf} \caption{Left: Unit sphere, and classical spin trajectories above the self-trapping transition $\alpha>\alpha_c$. Trapped trajectories (orange) never cross the equator, but remain confined to the upper or lower hemisphere. Non-trapped trajectories (blue) intersect the equator, visiting both hemispheres. The boundary of these regimes is the separatrix (black). Right: Projection of trajectories to the $x-y$ plane. Trapped trajectories (orange) form full circles, while non-trapped curves (blue) are arc segments inside the unit circle (grey). The separatrix (black line) touches the unit circle at $\Omega_x=-1$.} \label{fig:semiclass} \end{figure} In the following sections we will investigate the time evolution of the entropy \eqref{eq:S}, with different spin coherent states taken as initial conditions. More precisely, we consider spin coherent states lying in the $x-z$ plane, polarized in the direction $\vec{\Omega}_\theta=(\sin\theta,\,0,\,\cos\theta)$, for different angles $-\pi/2<\theta<\pi/2$. These states are eigenstates of the spin operator $\hat{S}_\theta=\hat{S}_x\sin\theta+\hat{S}_z\cos\theta$, with eigenvalue $\hat{S}_\theta=N/2$. For large total particle number $N$, the semiclassical approximation yields a good description \cite{semiclass,oberthalersemiclass,semiclass2}, and the spin operators in Eq. \eqref{eq:H} can be replaced by the components of a classical vector \begin{equation*} \vec{S}=\dfrac{N}{2}\vec{\Omega}. \end{equation*} The time evolution of the unit vector $\vec{\Omega}$ is governed by the differential equations \cite{semiclass} \begin{align}\label{eq:classdyn} &\partial_t\Omega_x=-UN\Omega_y\Omega_z,\nonumber\\ &\partial_t\Omega_y=2J\Omega_z+UN\Omega_x\Omega_z,\nonumber\\ &\partial_t\Omega_z=-2J\Omega_y, \end{align} with initial condition $\vec{\Omega}(t=0)=\vec{\Omega}_\theta$. 
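The flow \eqref{eq:classdyn} conserves both the norm $|\vec{\Omega}|$ and the classical energy $E=-NJ\,\Omega_x+\tfrac{1}{4}UN^2\,\Omega_z^2$, the classical counterpart of \eqref{eq:H}. This is easy to verify with a short numerical integration; the sketch below uses illustrative parameter values and a standard fixed-step fourth-order Runge-Kutta scheme (an assumption of ours, not a method of the paper).

```python
import numpy as np

# Sketch: integrate the classical equations of motion and check the two
# conserved quantities, |Omega| = 1 and E = -N J Omega_x + (U N^2 / 4) Omega_z^2.
# Parameter values are illustrative.
J, N, alpha = 1.0, 100, 2.0
U = 2.0 * J * alpha / N

def rhs(w):
    wx, wy, wz = w
    return np.array([-U * N * wy * wz,
                     2.0 * J * wz + U * N * wx * wz,
                     -2.0 * J * wy])

def rk4_step(w, dt):
    k1 = rhs(w); k2 = rhs(w + 0.5 * dt * k1)
    k3 = rhs(w + 0.5 * dt * k2); k4 = rhs(w + dt * k3)
    return w + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

theta = 0.3 * np.pi
w = np.array([np.sin(theta), 0.0, np.cos(theta)])   # Omega(t=0) = Omega_theta
energy = lambda v: -N * J * v[0] + U * N ** 2 / 4.0 * v[2] ** 2
E0, dt = energy(w), 1e-3
for _ in range(5000):                               # evolve to t = 5
    w = rk4_step(w, dt)
norm_err = abs(float(np.linalg.norm(w)) - 1.0)
energy_err = abs(energy(w) - E0)
print(norm_err, energy_err)
```

Both conservation-law violations stay at the level of the integrator's truncation error, confirming that the trajectories indeed lie on the unit sphere at fixed energy.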
These classical trajectories lie on the unit sphere, and their shape is determined by the parameter $\alpha$ in Eq. \eqref{eq:alpha} \cite{semiclass}. Typical trajectories are depicted in Fig.~\ref{fig:semiclass}. Below the critical value $\alpha_c=1$, all trajectories intersect the equator of the sphere, and no self-trapping occurs. Here the equations of motion \eqref{eq:classdyn} have two stable fixed points at $\Omega_x=\pm 1$, $\Omega_y=\Omega_z=0$. For $\alpha>\alpha_c$, however, a self-trapped regime appears on the unit sphere (see Fig.~\ref{fig:semiclass}). Here the fixed point at $\Omega_x=-1$ becomes unstable, and bifurcates into two new, stable fixed points at \cite{semiclass,oberthalersemiclass} \begin{equation*} \Omega_x=-\dfrac{1}{\alpha},\quad\Omega_y=0,\quad\Omega_z=\pm\sqrt{1-\dfrac{1}{\alpha^2}}. \end{equation*} Trapped trajectories around these fixed points cannot cross the equator of the sphere, but are constrained to the $\Omega_z>0$ or $\Omega_z<0$ hemisphere. Non-trapped trajectories, however, reach both positive and negative $\Omega_z$ values. The separatrix, forming the boundary of self-trapping, touches the equator at the unstable fixed point, $\Omega_x=-1$ (see Fig.~\ref{fig:semiclass}). The semiclassical trajectories can be visualized more easily by noting that their projections on the $x-y$ plane form circles centered at $(-1/\alpha,0)$ (see Fig.~\ref{fig:semiclass}), \begin{equation}\label{eq:classtrajectory} \left(\Omega_x+\dfrac{1}{\alpha}\right)^2+\Omega_y^2={\rm const.} \end{equation} The trajectory determined by the initial condition $\vec{\Omega}(t=0)=(\sin\theta,0,\cos\theta)$ will coincide with the separatrix at interaction strength \begin{equation}\label{eq:separatrix} \alpha_{\theta}=\dfrac{2}{1-\sin\theta}.
\end{equation} Another special case occurs when the initial condition satisfies \begin{equation}\label{eq:fixpoint} \alpha_{\theta}^{\rm fix}=-\dfrac{1}{\sin\theta}, \end{equation} and $\vec{\Omega}(t=0)$ is a stable fixed point of the classical equations of motion \eqref{eq:classdyn}. As we will show later, the long time limit of the entropy \eqref{eq:S} reflects this semiclassical dynamics (see Sec. \ref{sec:longt}). The trapping transition at $\alpha=\alpha_\theta$, Eq. \eqref{eq:separatrix}, is revealed by a sudden jump of size $\log 2$ in the entropy, related to the rapid change by a factor of $2$ in the length of the classical trajectory. The classical fixed point \eqref{eq:fixpoint} corresponds to a local minimum in $\mathcal{S}$ due to the strong confinement of trajectories around this point. \section{Entropy production}\label{sec:shortt} Now we concentrate on the entropy production at short times, during the first few oscillations of entropy \eqref{eq:S}. We consider the spin coherent initial state $|\Omega_\theta\rangle\equiv|\hat{S}_\theta=N/2\rangle$, with $\hat{S}_\theta=\hat{S}_x\sin\theta+\hat{S}_z\cos\theta$. To gain more insight into the structure of the wave function, let us expand this state in the eigenbasis of $\hat{S}_z$ \cite{Auerbach}, \begin{align*} &|\Omega_\theta\rangle=\\ &\sum_{m=-N/2}^{N/2}\sqrt{\binom{N}{m+\frac{N}{2}}}\left(\cos\frac{\theta}{2}\right)^{m+\frac{N}{2}}\left(\sin\frac{\theta}{2}\right)^{\frac{N}{2}-m}|m\rangle, \end{align*} with $|m\rangle$ denoting the eigenstate $\hat{S}_z=m$. This expression shows that the shifted spin operator $\hat{S}_z+N/2$ follows a binomial distribution $B(n,p)$, with number of trials $n=N$ and probability $p=\cos^2(\theta/2)$. This binomial distribution yields an expectation value $\langle\hat{S}_z\rangle=np-N/2=(N/2)\cos\theta$ and a variance ${\rm Var}(\hat{S}_z)=np(1-p)=N\sin^2\theta/4$.
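These binomial statistics can be verified directly. In the sketch below (with illustrative values of $N$ and $\theta$, chosen by us for this check), the distribution of $\hat{S}_z$ is built from the expansion coefficients and compared with $\langle\hat{S}_z\rangle=(N/2)\cos\theta$ and ${\rm Var}(\hat{S}_z)=(N/4)\sin^2\theta$.

```python
import numpy as np
from math import comb

# Sketch: in the coherent state |Omega_theta>, S_z + N/2 is binomial with
# n = N trials and p = cos^2(theta/2).  N and theta below are illustrative.
N, theta = 60, 0.4 * np.pi
p = np.cos(theta / 2.0) ** 2
m = np.arange(-N // 2, N // 2 + 1)               # S_z eigenvalues
k = m + N // 2                                   # binomial index 0..N
P = np.array([comb(N, int(kk)) for kk in k], dtype=float) \
    * p ** k * (1.0 - p) ** (N - k)              # P(m) = |c_m|^2
norm_err = abs(float(P.sum()) - 1.0)
mean_err = abs(float(np.sum(P * m)) - N / 2.0 * np.cos(theta))
var = float(np.sum(P * m * m)) - float(np.sum(P * m)) ** 2
var_err = abs(var - N / 4.0 * np.sin(theta) ** 2)
print(norm_err, mean_err, var_err)
```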
In the semiclassical limit of large total particle number $N$, this initial state can be approximated by a Gaussian wave function \begin{equation}\label{eq:gaussinitial} |\Omega_\theta\rangle\approx\left(\frac{2}{N\pi\sin^2\theta}\right)^{1/4}\sum_m\exp\left(-\dfrac{\left(m-\frac{N}{2}\cos\theta\right)^2}{N\sin^2\theta}\right)|m\rangle, \end{equation} except in the vicinity of $\theta=0$. \begin{figure}[t!] \includegraphics[width=\columnwidth]{EntropyClassScaling.pdf} \caption{Entropy production for short times. Time evolution of rescaled entropy $\widetilde{\mathcal{S}}$ is plotted as a function of dimensionless time $t\,J/(2\pi)$, for different interaction strengths $\alpha$ and initial conditions $\theta$. Different rows correspond to the non-trapped regime (top), the phase boundary $\alpha=\alpha_\theta$ (center) and the self-trapped regime (bottom). Curves with different particle numbers $N$, shown in different colors, scale together for short times, before they reach a stationary value differing by $\log\sqrt{N}$. On top of entropy production, $\widetilde{\mathcal{S}}$ oscillates both in the non-trapped and self-trapped regimes. These oscillations vanish at the phase boundary, leaving an approximately linear increase of the entropy. The analytical results of a Gaussian, semiclassical approximation (dashed orange line), Eq. \eqref{eq:SG}, are also shown.} \label{fig:shorttime} \end{figure} The entropy of a Gaussian distribution is known exactly \cite{Renyi}, yielding the approximation \begin{equation*} \mathcal{S}(t=0)\approx\dfrac{1}{2}\log\dfrac{\pi\,eN\sin^2\theta}{2}. \end{equation*} Based on this expression, we scale together the curves corresponding to different total particle numbers $N$ by introducing the rescaled entropy \begin{equation}\label{eq:rescale} \widetilde{\mathcal{S}}(t)=\mathcal{S}(t)-\dfrac{1}{2}\log\dfrac{\pi\,eN\sin^2\theta}{2}.
\end{equation} The time evolution of the rescaled entropy \eqref{eq:rescale}, obtained by exact diagonalization, is shown for different interaction strengths $\alpha$ and particle numbers $N$ in Fig.~\ref{fig:shorttime}, for two different initial angles $\theta$. The different rows correspond to the non-trapped regime $\alpha<\alpha_\theta$ (top), the phase boundary $\alpha=\alpha_\theta$ (center), and the self-trapped regime $\alpha>\alpha_\theta$ (bottom), respectively, with $\alpha_\theta$ given by Eq. \eqref{eq:separatrix}. As expected, for fixed $\alpha$ but different total particle number $N$, the rescaled entropies $\widetilde{\mathcal{S}}$ follow the same curve for short times. The long time limit of $\widetilde{\mathcal{S}}$, however, is proportional to $\log N$ instead of $\log\sqrt{N}$, and is different for each $N$ (see also Fig.~\ref{fig:intro2}). The entropy oscillates both in the non-trapped and self-trapped regimes, while increasing steadily towards a stationary value. These entropy oscillations reveal the coherent oscillations of the single mode condensates, while the dephasing between different energy eigenstates is responsible for the entropy production. At the phase boundary, the oscillations become washed out, and the entropy increases approximately linearly, until saturating to the long time limit. The main features of the time evolution of $\mathcal{S}$ can be understood in terms of the classical trajectories in the semiclassical limit of large $N$. As supported by the detailed analysis below, the state of the system can be visualized as an extended packet on the unit sphere around the classical unit vector $\vec{\Omega}(t)$. For an initial state $\hat{S}_\theta=N/2$, this packet has a Gaussian shape around $\vec{\Omega}_\theta$, with variance $\sim 1/N$. The center of the packet, $\vec{\Omega}(t)$, follows the classical equations of motion \eqref{eq:classdyn}, while the typical width of the packet increases.
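The $t=0$ collapse underlying the rescaling \eqref{eq:rescale} can be verified numerically. The sketch below (our own check, illustrative $\theta$) compares the exact Shannon entropy of the binomial $\hat{S}_z$ distribution with the Gaussian value $\frac{1}{2}\log(\pi e N\sin^2\theta/2)$ for increasing $N$:

```python
import math

# Exact t = 0 entropy of the S_z distribution, computed in log space via
# lgamma to avoid overflow of the binomial coefficients at large N.
def entropy_t0(N, theta):
    p = math.cos(theta / 2.0) ** 2
    S = 0.0
    for k in range(N + 1):
        logP = (math.lgamma(N + 1) - math.lgamma(k + 1) - math.lgamma(N - k + 1)
                + k * math.log(p) + (N - k) * math.log(1.0 - p))
        S -= math.exp(logP) * logP
    return S

theta = 1.1
# Rescaled entropy at t = 0: should approach zero for every large N
rescaled = [entropy_t0(N, theta)
            - 0.5 * math.log(math.pi * math.e * N * math.sin(theta) ** 2 / 2.0)
            for N in (200, 800, 3200)]
```

The residuals shrink with $N$, confirming that the subtraction in \eqref{eq:rescale} removes the $N$ dependence of the initial entropy.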
This broadening occurs because the packet gets more elongated along the classical trajectory due to the dephasing between different energy eigenstates. At the same time, the width perpendicular to the trajectory decreases to keep the volume of the packet constant. The distribution of $\Omega_z$ and the corresponding entropy $\mathcal{S}$ can be determined by projecting this packet to the $z$ axis. The oscillations of the center of the packet, $\vec{\Omega}(t)$, result in entropy oscillations, and their period is given by the period of the classical trajectory. For the separatrix this period is infinite, explaining the vanishing entropy oscillations at the boundary of self-trapping. For a more detailed analysis, notice that the entropy is proportional to $\log \sigma$, with $\sigma$ denoting the typical width of the distribution of $\Omega_z$. In spite of the broadening of the packet, $\sigma$ can display a very different behavior depending on the position along the trajectory. At the upper and lower turning points, where the tangent vector of the trajectory is perpendicular to the axis $\hat z$, the projection yields a sharp distribution for $\Omega_z$, resulting in local minima for the entropy. Since the width of the packet perpendicular to the trajectory decreases, $\sigma$ can even decrease compared to the width of the initial state, resulting in decreasing local minima (see the first row in Fig.~\ref{fig:shorttime}). On the other hand, at the horizontal turning points, where the tangent vector is parallel to $\hat z$, $\sigma$ is maximal. This maximal value increases with time as the wave packet gets more elongated along the trajectory, yielding increasing entropy maxima after each oscillation. To substantiate these arguments and to get a quantitative description for the time evolution of entropy $\widetilde{\mathcal{S}}$, we applied a Gaussian Ansatz for the wave function.
This approximation relies on the observation that the initial state is well described by the Gaussian expression \eqref{eq:gaussinitial}. We assume that the wave function keeps this Gaussian form during the time evolution. As a first step, we expand the wave function in the eigenbasis of $\hat{S}_z$, $$|\psi(t)\rangle=\sum_m e^{-i\varphi(t)m}\, c_m(t)\,|m\rangle .$$ Having separated a rapidly oscillating phase factor $e^{-i\varphi(t)m}$, corresponding to the rotation of the state around the $z$ axis, we can assume that the coefficients $c_m(t)$ are slowly varying functions of $m$. Let us introduce a new variable $x=2m/N$. In the limit of large $N$, $x$ can be treated as a continuous variable \cite{WKB}. We can replace the discrete, slowly varying coefficients $c_m(t)$ by a continuous function $\psi(x,t)$, and assume a Gaussian form, \begin{align}\label{eq:gaussian} &c_m(t)\rightarrow \psi(x,t)\equiv \nonumber\\ & \left(\frac{2N{\rm Re}\,c(t)}{\pi}\right)^{1/4}\exp(-c(t)N\, (x-x_0(t))^2). \end{align} This Ansatz yields a Gaussian distribution for the normalized spin operator $\hat{\Omega}_z\equiv 2\hat{S}_z/N$, with expectation value $\langle\hat{\Omega}_z\rangle(t)=x_0(t)$ and variance $1/(4N{\rm Re}\,c(t))$. Moreover, for the Gaussian wave function given by Eq. \eqref{eq:gaussian}, the rescaled entropy \eqref{eq:rescale} can be expressed as \begin{equation}\label{eq:SG} \mathcal{S}_G(t)=-\dfrac{1}{2}\log\left(4\sin^2\theta\;{\rm Re}\,c(t)\right).
\end{equation} The optimal parameters of the Gaussian wave function, $|\psi_G\rangle$, are determined from the variational condition \begin{equation}\label{eq:variational} \delta\langle\psi_G|\,i\partial_t-\hat{H}\,|\psi_G\rangle=0, \end{equation} where \begin{align*} &\langle\psi_G|\,i\partial_t-\hat{H}\,|\psi_G\rangle=\\ &\partial_t\varphi(t)\,\frac{N}{2}\int{\rm d}x\,x\,|\psi(x,t)|^2+i\int{\rm d}x\,\psi^*(x,t)\,\partial_t\psi(x,t)\\ &-U\frac{N^2}{4}\int{\rm d}x\,x^2\,|\psi(x,t)|^2 +\dfrac{JN}{2}\left(e^{i\varphi(t)}\right.\times\\ &\left.\int {\rm d}x\,\psi^*(x,t)\psi(x-\frac{2}{N},t)\sqrt{1-x^2+\frac{2}{N}(1+x)}+c.c.\right).\end{align*} For large total particle number $N$, Eq. \eqref{eq:variational} can be expanded systematically in powers of $N$. The leading order contributions result in the semiclassical equations of motion \begin{align}\label{eq:xphi} &\partial_t x_0=-2J\sqrt{1-x_0^2}\,\sin\varphi,\nonumber\\ &\partial_t\varphi=UNx_0+2J\dfrac{x_0}{\sqrt{1-x_0^2}}\cos\varphi. \end{align} These equations determine the same trajectories as Eqs. \eqref{eq:classdyn}, with the unit vector $\vec{\Omega}$ given by \begin{equation*} \vec{\Omega}=(\sqrt{1-x_0^2}\,\cos\varphi,\,\sqrt{1-x_0^2}\,\sin\varphi,\,x_0). \end{equation*} The next order of the expansion yields the time evolution of $c(t)$, \begin{align*} & i\partial_t c=-\dfrac{\alpha J}{2}-\dfrac{J\cos\varphi}{2\,(1-x_0^2)^{3/2}}\\ &\quad\quad-4J\dfrac{x_0}{\sqrt{1-x_0^2}}\sin\varphi\;c+8J\cos\varphi\sqrt{1-x_0^2}\;c^2. \end{align*} Notice that $c(t)$ only depends on the dimensionless time $tJ$, the parameter $\alpha$ and the initial condition $\theta$, but not on the particle number $N$. Concentrating on the semiclassical limit of large $N$, we neglect the remaining $O(1/N)$ corrections. The Gaussian entropy \eqref{eq:SG} is plotted together with exact numerical results in Fig.~\ref{fig:shorttime}.
As noted above, $\mathcal{S}_G$ is independent of $N$ up to corrections of the order $1/N$, neglected in our semiclassical approximation. We find that Eq. \eqref{eq:SG} yields a surprisingly good approximation for the dynamics at short times. \section{Long time limit of entropy and equilibration}\label{sec:longt} We now turn to the long time behavior of the entropy $\mathcal{S}$, and show how it reflects the semiclassical dynamics discussed in Sec. \ref{sec:semiclass}. Due to the discrete spectrum of Hamiltonian \eqref{eq:H}, the entropy shows several revivals and, strictly speaking, it never reaches a stationary value. However, the period of these revivals is typically very long compared to experimentally relevant time scales, and it is still meaningful to consider the steady state at intermediate times \cite{revival}. We defined the long time limit of the entropy \eqref{eq:S} as the time average \begin{equation}\label{eq:Stav} \overline {\mathcal{S}}=\dfrac{1}{T}\int_0^T {\rm d}t\,\mathcal{S}(t), \end{equation} with $T$ chosen large enough to reach a stationary value. As before, we used spin coherent initial states $|\hat{S}_\theta=N/2\rangle$. The numerical results from exact diagonalization are shown in Fig.~\ref{fig:longtime} as a function of the angle $\theta$, for two different parameters $\alpha$. The semiclassical fixed point, Eq. \eqref{eq:fixpoint}, appears as a sharp local minimum in the time averaged entropy. Since the entropy is related to the width of the distribution of $\hat{S}_z$, this entropy minimum follows from the strong confinement of classical trajectories around the stable fixed point, leading to sharp distributions for $\Omega_z$. The separatrix of the self-trapped phase, Eq. \eqref{eq:separatrix}, is revealed by a sudden jump of size $\log 2$ in entropy \eqref{eq:Stav}.
This sudden entropy gain is related to the doubling of the length of classical trajectories at the self-trapping transition, doubling the phase space available for $\vec{\Omega}$ (see Fig.~\ref{fig:semiclass}). \begin{figure}[t!] \includegraphics[width=\columnwidth]{MicroCanonicalEntropy.pdf} \caption{Long time limit of entropy. Time averaged entropy $\overline{\mathcal{S}}$ (symbols) plotted as a function of initial angle $\theta$, for two different parameters $\alpha$. Semiclassical fixed point (F), Eq. \eqref{eq:fixpoint}, appears as a sharp minimum in the entropy. The separatrix of self-trapping (S) is accompanied by a sudden entropy gain of size $\log 2$, due to the doubling of available phase space at the phase transition. The prediction of a semiclassical microcanonical ensemble (solid line) is also shown. For the numerics we used total particle number $N=3000$.} \label{fig:longtime} \end{figure} The long time limit of the entanglement entropy, Eq. \eqref{eq:Stav}, can be understood in terms of a semiclassical microcanonical ensemble. In the semiclassical approximation, the trajectory is determined by Eqs. \eqref{eq:classdyn}, which conserve the energy of the classical Hamiltonian. In a microcanonical description, the spin vector $\vec{\Omega}$ is randomly distributed along this trajectory (the surface of constant energy in general), resulting in a uniform distribution on the classical trajectory. This uniform distribution gives rise to a continuous distribution for the $z$-component of the spin, $\Omega_z$ (see Appendix \ref{app:PSz}). Denoting the corresponding probability density by $P(\Omega_z)$, the classical entropy is given by \begin{equation}\label{eq:Sclass} \mathcal{S}_{\rm class}=-\int\,{\rm d}\Omega_z P(\Omega_z)\log P(\Omega_z)+\mathcal{S}_0. \end{equation} Here $\mathcal{S}_0$ denotes an arbitrary constant entropy shift, accounting for some unknown box size $\Delta\Omega_z$.
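The construction of $P(\Omega_z)$ and of the entropy \eqref{eq:Sclass} can be sketched by sampling a classical orbit uniformly in time (our own discretization; $J=1$ units, illustrative $\alpha$ and $\theta$):

```python
import math

# Classical equations of motion for (x = Omega_z, phi), J = 1 units.
def rhs(x, phi, alpha):
    s = math.sqrt(1.0 - x * x)
    return -2.0 * s * math.sin(phi), 2.0 * alpha * x + 2.0 * x * math.cos(phi) / s

def classical_orbit(alpha, theta, dt=1e-3, T=40.0):
    """Midpoint-rule integration; returns Omega_z sampled uniformly in time."""
    x, phi = math.cos(theta), 0.0
    xs = []
    for _ in range(int(T / dt)):
        dx1, dp1 = rhs(x, phi, alpha)
        dx2, dp2 = rhs(x + 0.5 * dt * dx1, phi + 0.5 * dt * dp1, alpha)
        x, phi = x + dt * dx2, phi + dt * dp2
        xs.append(x)
    return xs

xs = classical_orbit(alpha=1.5, theta=math.pi / 3)
nbin, w = 60, 2.0 / 60                 # bins covering Omega_z in [-1, 1]
hist = [0] * nbin
for x in xs:
    hist[min(int((x + 1.0) / w), nbin - 1)] += 1
P = [h / len(xs) for h in hist]
# Differential entropy of P(Omega_z), up to the arbitrary constant S_0
S_class = -sum(p * math.log(p / w) for p in P if p > 0)
```

Time-uniform sampling is equivalent to weighting each value of $\Omega_z$ by $1/|\dot\Omega_z|$, which is exactly the microcanonical density on the orbit.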
Eq.~\eqref{eq:Sclass} yields good agreement with the numerical results by using a single fitting parameter $\mathcal{S}_0=7.0$ (see Fig.~\ref{fig:longtime}). The deviation between the semiclassical approximation and $\overline{\mathcal{S}}$ gets larger only in the immediate vicinity of the semiclassical fixed point, Eq. \eqref{eq:fixpoint}. Here $\mathcal{S}_{\rm class}$ diverges, because the variance of the continuous classical distribution $P(\Omega_z)$ approaches zero. However, the time averaged entropy $\overline{\mathcal{S}}$ remains non-negative even at the fixed point, and its minimal value is determined by the width of the spin coherent initial state in the eigenbasis of $\hat{S}_z$. For the semiclassical case of large $N$ this yields a minimal variance ${\rm Var}(\hat{S}_z)\approx N/(4\,\alpha^2)$ and minimal entropy \begin{equation*} \overline{\mathcal{S}}_{\rm min}\approx\dfrac{1}{2}\log\dfrac{\pi\,e\,N}{2\, \alpha^2}. \end{equation*} \begin{figure}[t!] \includegraphics[width=\columnwidth]{Overlap.pdf} \caption{Overlap between the wave function and different spin coherent states for long times. The overlap $|\langle\Omega_{\theta,\varphi}|\psi(t)\rangle|^2$ is plotted as a function of polar and azimuthal angles $\theta$ and $\varphi$, for two different interaction strengths corresponding to non-trapped ($\alpha=1.8$, left) and self-trapped ($\alpha=2.4$, right) regimes, respectively. Here $|\Omega_{\theta,\varphi}\rangle$ denotes the spin coherent state of direction $(\theta,\varphi)$. We used the maximally polarized initial state $\hat{S}_z=N/2$ with $N=500$, and $|\psi(t)\rangle$ is the wave function after time $tJ/(2\pi)=19.3$. Classical trajectories (black lines) are also shown for comparison. 
During the time evolution the spin coherent initial state broadens and becomes elongated along the classical trajectory.} \label{fig:overlap} \end{figure} A classical microcanonical equilibrium ensemble thus yields a good approximation for the time averaged entropy, in spite of the unitary time evolution of the system. To see how dephasing alone can lead to equilibration, we plotted the overlap between the wave function $|\psi(t)\rangle$ and the spin coherent states polarized in the direction $(\sin\theta\cos\varphi,\sin\theta\sin\varphi,\cos\theta)$, $|\Omega_{\theta,\varphi}\rangle$, for long times in Fig.~\ref{fig:overlap}. For the initial state we chose the maximally polarized state $\hat{S}_z=N/2$, and selected two different interaction strengths, corresponding to the non-trapped and self-trapped regimes, respectively. In accordance with the classical picture presented above, the wave function broadens, and extends to the vicinity of the classical trajectory on both sides of the self-trapping transition. Notice that the state of the system never becomes stationary, and at any time the density displays several maxima along the trajectory. The position of these maxima depends on time, leading to revival effects. However, the time averaged entropy is always well approximated by a state spread uniformly along the classical trajectory. As a final remark, let us note that in order to observe self-trapping, the upper bound $T$ of the time average in Eq.~\eqref{eq:Stav} should not be too large. Strictly speaking, a real self-trapping transition only occurs in the classical system. The quantum system would eventually tunnel to reversed population imbalances \cite{tunneling}, and the entropy would increase by $\log 2$, compensating the entropy loss of the self-trapped regime. We have chosen $T$ much smaller than the time scale of this tunneling process, $T_{\rm tunnel}$.
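The separation of scales between $T$ and $T_{\rm tunnel}$ can be illustrated by exact diagonalization of Hamiltonian \eqref{eq:H}, written as $\Delta U\hat{S}_z^2-2J\hat{S}_x$ in the $\hat{S}_z$ basis, for a small system. The sketch below (our own pure-Python Jacobi diagonalization, illustrative $N$ and $\alpha$) exposes the nearly degenerate top pair of levels, whose tiny splitting controls the tunneling:

```python
import math

# H = DU * Sz^2 - 2J * Sx in the S_z eigenbasis (dimension N + 1).
def lmg_hamiltonian(N, alpha, J=1.0):
    DU = 2.0 * alpha * J / N               # alpha = DU * N / (2J)
    S, dim = N / 2.0, N + 1
    H = [[0.0] * dim for _ in range(dim)]
    for i in range(dim):
        m = i - S
        H[i][i] = DU * m * m
        if i + 1 < dim:                    # <m+1| Sx |m> = sqrt(S(S+1)-m(m+1)) / 2
            H[i][i + 1] = H[i + 1][i] = -J * math.sqrt(S * (S + 1) - m * (m + 1))
    return H

def jacobi_eigenvalues(A, sweeps=100, tol=1e-12):
    """Cyclic Jacobi rotations for a real symmetric matrix."""
    n = len(A)
    A = [row[:] for row in A]
    for _ in range(sweeps):
        off = math.sqrt(sum(A[p][q] ** 2 for p in range(n) for q in range(n) if p != q))
        if off < tol:
            break
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p][q]) < 1e-15:
                    continue
                th = 0.5 * math.atan2(2 * A[p][q], A[q][q] - A[p][p])
                c, s = math.cos(th), math.sin(th)
                for k in range(n):         # rows p, q
                    A[p][k], A[q][k] = c * A[p][k] - s * A[q][k], s * A[p][k] + c * A[q][k]
                for k in range(n):         # columns p, q
                    A[k][p], A[k][q] = c * A[k][p] - s * A[k][q], s * A[k][p] + c * A[k][q]
    return sorted(A[i][i] for i in range(n))

ev = jacobi_eigenvalues(lmg_hamiltonian(N=10, alpha=3.0))
doublet_splitting = ev[-1] - ev[-2]        # quasi-degenerate top doublet
next_gap = ev[-2] - ev[-3]                 # much larger gap to the next level
```

Already at $N=10$ the doublet splitting is orders of magnitude below the neighboring gaps; it shrinks exponentially with $N$, pushing $T_{\rm tunnel}$ beyond any realistic time scale.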
The period $T_{\rm tunnel}$ depends on the small energy difference $\Delta E_{\rm doublet}$ between quasi-degenerate doublets $(|u\rangle\pm|d\rangle)/\sqrt{2}$, with the states $|u\rangle$ and $|d\rangle$ confined to the $\hat{S}_z>0$ and $\hat{S}_z<0$ hemispheres, respectively. The splitting $\Delta E_{\rm doublet}$ is tiny even for moderate particle numbers~\cite{footnote2}, resulting in an exponentially long tunneling time $T_{\rm tunnel}$, far beyond the time scales available for experiments or numerical simulations. However, for small systems consisting only of a few particles, this tunneling process can be observed in simulations. An example of this population inversion is shown in Fig.~\ref{fig:tunnel}, for a small particle number $N=10$. \begin{figure}[t!] \includegraphics[width=\columnwidth]{Tunnel.pdf} \caption{Tunneling process on long time scales. The overlap $|\langle\Omega_{\theta,\varphi}|\psi(t)\rangle|^2$ is plotted as a function of polar and azimuthal angles $\theta$ and $\varphi$ at two different times $t$, for a small system with total particle number $N=10$. Here $|\Omega_{\theta,\varphi}\rangle$ denotes the spin coherent state of direction $(\theta,\varphi)$, and $|\psi(t)\rangle$ is the wave function at time $t$. We used the maximally polarized initial state $\hat{S}_z=N/2$ in the self-trapped regime with $\alpha=3$. At intermediate times ($tJ=110$, left) the particles remain trapped due to the large interaction energy of the initial state, whereas on longer time scales ($tJ=900$, right) the system can tunnel to reversed population imbalances.} \label{fig:tunnel} \end{figure} \section{Microwave experiments}\label{sec:microwave} As mentioned in the introduction, Hamiltonian \eqref{eq:H} can also be realized in microwave experiments with ultracold atoms, making use of different hyperfine states of the atoms.
Below we outline how Hamiltonian \eqref{eq:H} arises in this setup, and we briefly discuss the optimal experimental parameters for the observation of entropy oscillations for the specific case of $^{87}$Rb atoms. In $^{87}$Rb experiments, one can tune interactions by utilizing the two hyperfine states $|0\rangle\equiv |F=1,\,m_F=1\rangle$ and $|1\rangle\equiv |F=2,\,m_F=-1\rangle$, with $F$ and $m_F$ denoting the total spin of the atom and its projection to the quantization axis, respectively \cite{feshbachrev}. These states can be trapped in optical dipolar traps, while they can be coupled by microwave pulses. Below we concentrate on an experimentally relevant setting, where these atoms are trapped in a spherically symmetric harmonic trap with trapping frequency $\nu_0=50$Hz. For weak enough interactions, all atoms occupy the ground state of this harmonic potential, \begin{equation}\label{eq:harmonic} \varphi_0(\mathbf{r})=\dfrac{1}{\pi^{3/4}l_0^{3/2}}\exp\left(-\dfrac{r^2}{2l_0^2}\right), \end{equation} with $l_0=\sqrt{\hbar/(2\pi m\nu_0)}$ denoting the oscillator length. Let us note that for stronger interactions the condensate wave function is better described by a Thomas-Fermi profile instead of $\varphi_0$, Eq. \eqref{eq:harmonic}. However, for not too large interaction strengths, a two mode approximation is still applicable, and the derivation presented below is valid with minor modifications (see also the discussion at the end of the section). The short range interaction between the Rb atoms is well described by a Dirac-delta potential. 
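As a quick numerical check (our arithmetic, with standard values of $\hbar$ and the $^{87}$Rb mass), the oscillator length for these trap parameters comes out at about $1.5\,\mu$m:

```python
import math

# Oscillator length l0 = sqrt(hbar / (2 pi m nu0)) for 87Rb at nu0 = 50 Hz.
hbar = 1.054571817e-34                 # J s
m_rb = 86.909 * 1.66053907e-27         # kg, 87Rb atomic mass
nu0 = 50.0                             # Hz
l0 = math.sqrt(hbar / (2.0 * math.pi * m_rb * nu0))
# l0 is about 1.5e-6 m, i.e. roughly 1.5 micrometers
```
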
Denoting the bosonic creation operators of the hyperfine states $|0\rangle$ and $|1\rangle$ by $\hat{a}_0^\dagger$ and $\hat{a}_1^\dagger$, the interaction energy is given by \begin{align}\label{eq:Hint} \hat{H}_{\rm int}&=\sum_{\sigma,\sigma^\prime=0,1}\dfrac{g_{\sigma\sigma^\prime}}{2}\int{\rm d}^3\mathbf{r}\,|\varphi_0(\mathbf{r})|^4\,\hat{a}_\sigma^\dagger\hat{a}_{\sigma^\prime}^\dagger\hat{a}_{\sigma^\prime}\hat{a}_\sigma\nonumber\\ &=\sum_{\sigma,\sigma^\prime=0,1}\dfrac{U_{\sigma\sigma^\prime}}{2}\hat{a}_\sigma^\dagger\hat{a}_{\sigma^\prime}^\dagger\hat{a}_{\sigma^\prime}\hat{a}_\sigma. \end{align} Here $U_{\sigma\sigma^\prime}=g_{\sigma\sigma^\prime}/(2\pi l_0^2)^{3/2}$, and the interaction strength $g_{\sigma\sigma^\prime}$ can be expressed in terms of the scattering length of the Rb atoms, $a_{\sigma\sigma^\prime}$, as \cite{ZwergerReview} \begin{equation*} g_{\sigma\sigma^\prime}=\dfrac{4\pi\hbar^2}{m}a_{\sigma\sigma^\prime}, \end{equation*} with $m$ denoting the mass of $^{87}$Rb. The bare scattering lengths of $^{87}$Rb depend very weakly on the hyperfine states of the atoms, and all interactions are approximately determined by the single length scale $a_{\sigma\sigma^\prime}\approx 5.3$nm. The scattering length between hyperfine states $|0\rangle$ and $|1\rangle$, however, can be tuned by a Feshbach resonance, changing $a_{01}$ by as much as $\Delta a_{01}=0.1\, a_{00}=0.53$nm \cite{feshbach}. Introducing the average interaction strength $\overline{U}=(U_{00}+U_{01})/2$, and the difference $\Delta U=U_{00}-U_{01}$, the interaction energy \eqref{eq:Hint} can be rewritten as \begin{align*} \hat{H}_{\rm int}&=\dfrac{U_{00}}{2}(\hat{N}_0^2-\hat{N}_0+\hat{N}_1^2-\hat{N}_1)+U_{01}\hat{N}_0\hat{N}_1\\ &=\dfrac{\overline{U}}{2}N^2+\Delta U\hat{S}_z^2. \end{align*} Here $\hat{N}_i=\hat{a}_i^\dagger\hat{a}_i$ for $i=0,1$, $N=\hat{N}_0+\hat{N}_1$ is the total particle number, and the spin operator is defined as $\hat{S}_z=(\hat{N}_0-\hat{N}_1)/2$.
For a closed system $\overline{U}N^2/2$ is just an irrelevant constant energy shift. Thus the interaction between Rb atoms takes the same form as the interaction term in Hamiltonian \eqref{eq:H}, with interaction strength $\Delta U$ determined by the difference of scattering lengths $a_{00}-a_{01}$. Let us mention that instead of controlling the scattering lengths $a_{\sigma\sigma^\prime}$ by a Feshbach resonance, the interaction strength $\Delta U$ can also be tuned by applying a microwave trapping potential which depends on the hyperfine state of the atoms \cite{riedelentanglement}. In this case the atoms occupy state-dependent condensate wave functions, $\varphi_\sigma$, and the interaction strength is given by \begin{equation*} U_{\sigma\sigma^\prime}=g_{\sigma\sigma^\prime}\int{\rm d}^3\mathbf{r}\,|\varphi_\sigma(\mathbf{r})|^2|\varphi_{\sigma^\prime}(\mathbf{r})|^2. \end{equation*} Thus $\Delta U=U_{00}-U_{01}$ can be controlled by changing the overlap between the two condensate modes $\varphi_0$ and $\varphi_1$. The hyperfine states $|0\rangle$ and $|1\rangle$ can be coupled by a two photon transition, where a detuned microwave pulse couples $|0\rangle$ to an intermediate state $|F=2,m_F=0\rangle$, coupled to the final state $|1\rangle$ by a radiofrequency transition \cite{hall}. This two photon transition gives rise to a hopping term $-J(\hat{a}_0^\dagger\hat{a}_1+\hat{a}_1^\dagger\hat{a}_0)$ in the Hamiltonian. In the spin representation this corresponds to the term $-2J\hat{S}_x$, thus with the already known form of the interaction, $\Delta U\hat{S}_z^2$, we recover Hamiltonian \eqref{eq:H}. To reach optimal parameters, one needs strong enough interactions and therefore relatively strong confinement. For a trap frequency $\nu_0=50{\rm Hz}$, and typical scattering length difference $a_{00}-a_{01}=0.1\, a_{00}=0.53$nm, the interaction strength is $\Delta U/h=0.014$Hz. 
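The quoted value of $\Delta U/h$ can be reproduced from the formulas of this section (our arithmetic, standard constants):

```python
import math

# Delta U / h from Delta a01 = 0.53 nm in the nu0 = 50 Hz trap,
# using U = g / (2 pi l0^2)^(3/2) and g = 4 pi hbar^2 a / m.
hbar = 1.054571817e-34                 # J s
h = 2.0 * math.pi * hbar
m_rb = 86.909 * 1.66053907e-27         # kg, 87Rb atomic mass
nu0, delta_a = 50.0, 0.53e-9           # trap frequency (Hz), scattering length shift (m)
l0 = math.sqrt(hbar / (2.0 * math.pi * m_rb * nu0))
delta_g = 4.0 * math.pi * hbar ** 2 * delta_a / m_rb
delta_U = delta_g / (2.0 * math.pi * l0 ** 2) ** 1.5
dU_over_h = delta_U / h                # close to the quoted 0.014 Hz
```
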
For atom numbers in the range of $N\sim 2000$, the relevant parameter of the spin model is around $\Delta UN/h=30$Hz. For the typical entropy oscillations plotted in Fig.~\ref{fig:intro2}, the parameter $\alpha=\Delta UN/(2J)$ is roughly $\alpha\sim 2$, corresponding to $J/h=7$Hz. With these parameters, the typical time scale of entropy oscillations and entropy generation for the maximally polarized initial state $|\hat{S}_z=N/2\rangle$ is expected to be around $t\sim 70$ms, which is much shorter than the lifetime of a condensate, and also much shorter than the coherence time of superposition states \cite{microwave,treutlein,deutsch2,opticaltrap}. Therefore the entropy oscillations should be observable on experimentally realistic time scales. Let us note that for the parameters above $\overline{U}\gg\Delta U$ implies $N\overline{U}/h\gg\nu_0$. Since the typical scale of interaction energy is much larger than the trapping frequency $\nu_0$, the atoms do not remain in the ground state of the harmonic potential. However, the system can still be described as two coupled single mode condensates with a modified condensate wave function $\varphi_0$, because $N\Delta U/h<\nu_0$ \cite{footnote2}, and the entropy oscillations and entropy generation remain observable with a slightly modified oscillation frequency and entropy production rate. \section{Conclusions}\label{sec:discuss} In this work we analyzed the entropy generation for two coupled single-mode Bose-Einstein condensates, realized by loading a condensate into a double well potential, or by an interstate coupling of a two component Bose-Einstein condensate. This system provides one of the simplest examples to study the entropy production by the coherent time evolution of coupled quantum systems.
Even though entanglement measurements in generic correlated many-body systems are challenging, in this setting the entanglement between the two potential wells should be experimentally accessible by measuring the number of atoms in the wells. Besides its experimental relevance, the dynamics of coupled single-mode condensates already shows interesting physics. At large particle number imbalances and sufficiently strong interactions, the system enters a self-trapped regime, where the amplitude of population imbalance oscillations gets suppressed due to the large interaction energy of the initial state. Concentrating on the entropy production during unitary time evolution, we investigated the time dependence of the entropy at $T=0$ temperature, by combining numerical results with analytical calculations. We found that the coherent oscillations of the single mode condensates manifest themselves in entropy oscillations on top of a steady entropy generation. These coherent oscillations only vanish in the vicinity of the self-trapping transition, where the entropy increases linearly for short time scales. In this pure quantum state, the entropy production originates from the dephasing between different energy eigenstates, eventually leading to a stationary, saturated entropy. Interestingly, this entropy saturation looks like equilibration, in spite of the coherent time evolution of this closed system. These results should be experimentally observable for realistic parameters in microwave measurements with $^{87}$Rb atoms. Here the two modes of the condensate are not spatially separated; instead they correspond to two different hyperfine states of $^{87}$Rb. To gain more insight into the entropy oscillations and entropy production on short time scales, we have shown that the time evolution of the entropy can be understood in terms of the semiclassical trajectories of the system.
The wave function can be visualized as a broadening packet on the unit sphere, with its center evolving along the classical trajectory. To obtain a quantitative description, we have shown that a Gaussian Ansatz for the wave function, together with a semiclassical expansion, yields a surprisingly good approximation for the exact time evolution. We also analyzed how the stationary long time limit of the entropy reflects the semiclassical dynamics of the system. The classical fixed point is revealed by a local minimum in the entropy, related to the strong confinement of trajectories in the vicinity of this point, while the self-trapping transition is accompanied by a sudden entropy jump of size $\log 2$, due to the rapid change by a factor of 2 in the length of trajectories. In order to investigate the dephasing induced equilibration of the entropy in more detail, we compared the numerical results to the prediction of a classical microcanonical ensemble, where the spin vector is distributed uniformly over the classical trajectory. We found that this ensemble yields a surprisingly accurate description for the stationary limit of the entropy. To gain more insight into the exact time evolution of the system, we calculated the overlap of the wave function with the spin coherent states of different orientations. We have shown that this overlap traces out the classical trajectories on the unit sphere at long times, supporting our picture describing the system in terms of a classical microcanonical ensemble. In this work we concentrated on the entropy generation in a pure state, and performed all calculations at $T=0$ temperature. Thermal fluctuations are expected to shift the entropy of the system to higher values, while the reduction of coherence starts to wash out the oscillations during the entropy production. However, the entropy oscillations should still remain visible for low enough temperatures of the order of a few hundred nK. 
Since entropy generation lies at the heart of equilibration and thermalization in closed systems, the detailed analysis of entropy production in other correlated many-body systems, and entanglement spreading in the presence of conserved quantities, remains a question of fundamental interest. \begin{acknowledgments} We acknowledge fruitful discussions with P\'eter Domokos. This research has been supported by the National Research, Development and Innovation Office - NKFIH Nos. K105149 and SNN118028. ED acknowledges support from Harvard-MIT CUA, NSF Grant No. DMR-1308435, AFOSR Quantum Simulation MURI, AFOSR Grant No. FA9550-16-1-0323. \end{acknowledgments}
\section{Introduction} Simply stated, a black hole is a gravitationally collapsing core surrounded by an event horizon. The causal structure may undergo notable changes on crossing the null boundary surface (event horizon), defined as the surface where the light cone closes\cite{Horizon}. Although the main features of a black hole are attributed to the event horizon, some non--analytical aspects of this surface, besides the core singularity, have diverted some attention to alternative models. The \textit{Mazur--Mottola gravastar} mainly stands for this idea\cite{Mazur}. In their proposal, replacing the collapsing matter of the black hole with a deSitter--like core surrounded by a layer of matter of finite thickness prevents gravitational collapse and the formation of an infinite density singularity. The thin layer is composed of a positive pressure fluid with the typical ultra relativistic equation of state $P = + \rho$. Thus, the gravastar's three main regions are the internal deSitter core, the matter layer with a positive pressure and the external vacuum region, which for a spherically symmetric static spacetime is necessarily of Schwarzschild form according to Birkhoff's theorem. To have a well--defined smooth manifold, the fundamental forms should not experience any discontinuity at the boundaries of these regions, unless some Darmois--Israel matter compensates them\cite{Israel, Poisson}. As a result, the intermediate thin layer practically consists of three sub--layers: an external thin shell at radius $r_{e} \gtrsim r_{H_e}$ (where $r_{H_e}$ is the horizon of the external region) with surface density $\sigma_{e}$ and surface tension $\vartheta_{e}$, a thin layer which is filled with a positive pressure fluid, and an internal thin shell at $r_{i} \lesssim r_{H_i}$ (where $r_{H_i}$ is the horizon of the internal region) with surface density $\sigma_{i}$ and surface tension $\vartheta_{i}$. Clearly, no singularity is present in the deSitter core.
Further, the presence of junction layers solves the problem of a non--global time coordinate. Moreover, with some exclusive considerations, this object does not suffer from any thermodynamic instability\cite{Mazur}. The gravastar idea is based on the Chapline--Hohlfeld--Laughlin--Santiago model\cite{Chapline}. They proposed to replace the event horizon with a critical phase transition layer where Bose--Einstein condensation occurs in a gravitational system, so that classical general relativity remains valid arbitrarily close to the horizon. It is quite possible that quantum effects are dominant at the horizon and that the general theory of relativity breaks down there. As a result, it is worthwhile to look for a replacement of the gravastar thin layer with one of quantum nature, which encodes the quantum effects of geometry, \textit{a quantum improved layer}. An improved gravitational system contains corrections which arise from the renormalization group. This approach is based on using the exact renormalization group equation to study the Weinberg asymptotic safety conjecture. According to this renormalization method, any quantum field theory is UV complete if its essential couplings run to a non--Gaussian fixed point in the UV\cite{AS1,AS2}. The existence of such a fixed point for the gravity renormalization group flow has been investigated by various methods (see references in \cite{Weinberg-inf}). The effects of the quantization are usually studied using the effective average action. Unfortunately, solving the exact renormalization group equation to derive the effective average action is complex, if not impossible.
Therefore, by the decoupling idea \cite{Reuter & Saueressig, Reuter & Weyer}, these effects are treated as corrections to the classical theory and studied through an effective theory obtained by improving the classical coupling constant to the running one, which is derived from the solution for the $\beta$-function\cite{Reuter-1st,Reuter-2nd,Rodrigues,Reuter-3rd}. This can be done with different strategies, as discussed in\cite{our3}. The most physical way of quantum improvement of the field equations is the action improvement presented in\cite{our3}. To simplify the model, the theory space is usually truncated to the Einstein--Hilbert subspace and the whole effective average action is expanded in the two basis elements $\sqrt{g}$ and $\sqrt{g} R$ \cite{Codello}. This truncation has satisfactory results, especially when the desired spacetime is a vacuum one. As a result of the Einstein--Hilbert truncation, the renormalization group equation leads to \begin{equation} \label{RC} G(\chi) = \frac{G_0}{1+f(\chi)} \end{equation} where $f(\chi) \equiv \xi /\chi^2$ is a dimensionless function of the renormalization group parameter $\chi$. This parameter has the dimension of length, as one expects. The small scaling constant $\xi$ (with the dimension of length squared) equals $ \omega_q \xi_0 G_0$, provided that the reference constant $ G_0 $ is the experimentally observed value of Newton's constant $G_N$ and $\omega_q = \frac{4}{\pi}(1-\frac{\pi^{2}}{144})$ is the quantization parameter obtained from renormalization group methods\cite{Bonanno & Reuter}. The parameter $\xi_0 $ (of order one) is used to relate the length and mass dimensions, $[M] = \xi_0 [L]^{-1}$. The renormalization group parameter $\chi$ is the scaling parameter of the theory. It is discussed in detail in \cite{our3} that the best scaling parameters of the spacetime are the components of the Riemann tensor, which physically describe the tidal forces.
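As a minimal sketch of the running coupling \eqref{RC} (with placeholder values of $G_0$ and $\xi$, chosen purely for illustration), $G(\chi)$ interpolates between the classical value in the IR and the asymptotically safe $G\to 0$ behavior in the UV:

```python
# Running Newton coupling G(chi) = G0 / (1 + xi / chi^2); G0 and xi below
# are illustrative placeholder values, not physical inputs.
def G_running(chi, G0=1.0, xi=1e-2):
    return G0 / (1.0 + xi / chi ** 2)

G_ir = G_running(1e3)    # chi >> sqrt(xi): the classical value G0 is recovered
G_uv = G_running(1e-3)   # chi -> 0: G -> 0, the asymptotically safe UV regime
```
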
To preserve the general covariance of the action after improvement, they should enter $\chi$ through curvature invariants. Thus, in general, $\chi$ could be a well--defined function of all independent curvature invariants such as $R, R_{\alpha\beta}R^{\alpha\beta}, R_{\alpha\beta\gamma\delta}R^{\alpha\beta\gamma\delta}, \ldots $. Unfortunately, an unwanted feature of this method is that there is as yet no unique way to fix the form of $\chi$ in general\cite{Pawlowski,Reuter-2021}. Considering various conditions, such as the behavior of singularities and energy conditions\cite{our4, our5}, one can restrict the possible choices, but not fix them universally. Accordingly, it seems a plausible idea to have a region of deSitter spacetime confined by a thin quantum (gravitational) layer, instead of a thin layer of matter. This is what we are looking for in this paper, and we call such an object a \textit{pure quantum gravastar}. To have such a solution, the corrections from the improved Einstein--Hilbert action, with a proper cutoff function $f(\chi)$, should dominate near the classical horizons. A pure quantum gravastar consists of a deSitter core and a Schwarzschild outer region, for both of which quantum gravitational effects are negligible. These two regions are connected smoothly to each other by a thin layer of quantum gravitational effects. In this way one has spacetime effects similar to black holes, without any matter. \section{Quantum gravastar} Here we are looking for the pure quantum gravastar solution, a vacuum (no matter, but quantum gravitational effects) solution of the quantum improved Einstein equations. \subsection{Kinematic description} By a pure quantum vacuum solution of the quantum improved Einstein's equations, we mean a solution with a deSitter core and an external Schwarzschild region. These two regions should be matched to each other by a thin vacuum layer dominated by quantum effects of gravity.
Such a substitution of the thin fluid layer of the gravastar with a quantum vacuum layer is possible if it matches smoothly with both the internal and external sides, and if the internal, external, and layer metrics are all solutions of the quantum improved equations. To compare the results with the Mazur--Mottola gravastar, we consider the general spherically symmetric metric \begin{equation} \label{SSM} ds^2 = f(r)\dd{t}^2 -h(r)^{-1}\dd{r}^2-r^2 \dd{\Omega}_2 \end{equation} where $\dd{\Omega}_2 \equiv \dd{\theta}^2 + \sin^2\theta \dd{\phi}^2 $. The deSitter core is described by \begin{equation} \label{Internal-Solution} f(u<u_d) \equiv f_i(u) = 1 - \tilde{H}_0^2 u^2 \quad \quad , \quad \quad h(u<u_d) \equiv h_i(u) = 1 - \tilde{H}_0^2 u^2 \end{equation} and the metric for the external Schwarzschild region is \begin{equation} \label{External-Solution} f(u>u_s) \equiv f_e(u) = K( 1 - \sigma/u) \quad \quad , \quad \quad h(u>u_s) \equiv h_e(u) = K( 1 - \sigma/u) \end{equation} where $u \equiv r/l_p$, $ \sigma \equiv r_s/l_p$ and $\tilde{H}_0 \equiv H_0 l_p$ are the normalized counterparts of the radial coordinate, the Schwarzschild radius, and the deSitter constant, respectively. Note that as we are looking for a pure quantum solution, it is natural to scale everything with the Planck length, $l_p$. The $u_d$ and $u_s$ are the boundaries of the deSitter and Schwarzschild regions. Also, $K$ is a constant needed for matching different parts of the solution. The intermediate region $u_d<u<u_s$ can be restricted to have spherical symmetry, for comparison of the results with the Mazur--Mottola gravastar. Therefore we employ the spherically symmetric solution \begin{equation} ds^2 = f_q(r)\dd{t}^2 -h_q(r)^{-1}\dd{r}^2-r^2 \dd{\Omega}_2 \end{equation} for this region. In order to have a solution without any horizon, the deSitter and Schwarzschild horizons should lie within this region.
All these metric components ($f_i, h_i, f_e, h_e, f_q, h_q$) should satisfy the quantum improved Einstein's equations and match smoothly to each other. \subsection{Dynamic confirmation} The quantum improved Einstein's equations are derived from the improved Einstein--Hilbert action \cite{our3,our2} \begin{equation} \label{IA} \mathcal{A}^{\text{I}} = \int\dd[4]{x} \frac{\sqrt{-g}}{8\pi G(\chi)} R + \mathcal{A}_{\text{matter}} \ . \end{equation} As stated in the Introduction, $\chi$ is the scaling parameter, which should be fixed by various considerations. Although all the independent curvature invariants may be needed to scale the spacetime, including them would complicate the problem without offering clear insight into the geometric meaning of each term. Using a length dimensional function of the three fundamental curvature invariants $R, R_{\alpha\beta}R^{\alpha\beta}$ and $ R_{\alpha\beta\gamma\delta}R^{\alpha\beta\gamma\delta} $ makes the comparison of the results with $f(R)$ models possible, besides giving sufficient insight. There are two notable conditions that would be useful to fix $\chi$: \begin{itemize} \item For an action which is a collection of Euler densities, the Darmois--Israel junction conditions are known to some extent\cite{Euler-Junction}. If we restrict our choices to the fundamental invariants $(R, R_{\alpha\beta}R^{\alpha\beta}, R_{\alpha\beta\gamma\delta}R^{\alpha\beta\gamma\delta})$, the running coupling \eqref{RC} suggests a proper combination of $1/R$, $ R/ R_{\alpha\beta}R^{\alpha\beta}$ and $ R/ R_{\alpha\beta\gamma\delta}R^{\alpha\beta\gamma\delta}$ for $\chi^2$. \item On the other hand, we are interested in a solution for which the quantum effects are dominant at the intermediate layer and fall off outside it.
This would help us to find the dominant term of the linear combination $\chi^2 =\varphi_1 / R + \varphi_2 \times R/R_{\alpha\beta}R^{\alpha\beta} + \varphi_3 \times R/ R_{\alpha\beta\gamma\delta}R^{\alpha\beta\gamma\delta}$ with constant coefficients $\varphi_{i=1,2,3}$. Here we choose the second term, as discussed in the Appendix. \end{itemize} Therefore, we set $\chi^2 =R/R_{\alpha\beta}R^{\alpha\beta}$ and the field equations become \cite{Higher Derivative} \begin{equation} \label{IEoM} G_{\alpha\beta} + \xi_p \mathcal{G}^{\text{(q)}}_{\alpha\beta} = \kappa_{0} T_{\alpha\beta} \end{equation} where $\xi_p \equiv \eval{\xi}_{G_0 \sim l_p^2} = \omega_q \xi_0 l_p^2$ and \begin{equation} \label{Iterm} \mathcal{G}^{\text{(q)}}_{\alpha\beta} \equiv 2 R_{\alpha\mu\beta\nu}R^{\mu\nu} - \nabla_{\alpha}\nabla_{\beta} R + \square R_{\alpha\beta} -\frac{1}{2} g_{\alpha\beta} \Bigl( R_{\mu\nu}R^{\mu\nu} - \square R \Bigr) \ . \end{equation} For a spherically symmetric solution \eqref{SSM}, $f(u)$ and $h(u)$ are solutions of the coupled temporal, radial, and polar differential equations \begin{align} & G_{tt}+\alpha_q \mathcal{Q}_t=\frac{1}{u^2}\dv{}{u}[u(1-h)] + \alpha_q \mathcal{Q}_t = 0 \label{I00-fh}\\ & G_{rr}+\alpha_q \mathcal{Q}_r= \frac{h}{uf}\dv{f}{u}+\frac{1}{u^2} (h-1) + \alpha_q \mathcal{Q}_r = 0 \label{I11-fh} \\ & G_{\theta\theta}+\alpha_q \mathcal{Q}_\theta=\frac{1}{4f^2 u} \left( -hu(\dv{f}{u})^2 +2f^2 \dv{h}{u} +f \left( u \dv{f}{u} \dv{h}{u} +2h(\dv{f}{u} + u\dv[2]{f}{u}) \right) \right) + \alpha_q \mathcal{Q}_{\theta} = 0 \label{I22-fh} \end{align} where $\alpha_q \equiv \xi_p/16$.
The $\mathcal{Q}_r(u), \mathcal{Q}_t(u)$ and $\mathcal{Q}_{\theta}(u)$ are nonlinear functions of $f(u)$ and $h(u)$ and their derivatives up to order 4, given by the relations: \begin{equation} \begin{split} \mathcal{Q}_t = \frac{-1}{f^5 h^3 u^4} {}& \Bigl( 48 h^5 f^5 - 64 h^4 f^5 + 16 h^3 f^5 + 208 h^2 u^3 h'^3 f^5 + 12 h^3 u^2 h'^2 f^5 - 32 h^4 u h' f^5 \\ {}& + 16 h^4 u^2 h'' f^5 -96 h^3 u^3 h' h'' f^5 + 16 h^4 u^3 h^{(3)} f^5 + 192 h^5 f^4 - 192 h^4 f^4 - 168 u^4 h'^4 f^4 \\ {}& + 384 h^2 u^3 h'^3 f^4 + 20 h^3 u^3 f'h'^2 f^4 -64 h^5 u f' f^4 + 32 h^4 u f' f^4-64 h^4 u h' f^4 + 24 h^4 u^2 f' h' f^4 \\ {}& + 32 h^5 u^2 f'' f^4 + 12 h^3 u^4 h'^2 f'' f^4 +104 h^4 u^3 h' f'' f^4 - 32 h^4 u^2 h'' f^4+168 h u^4 h'^2 h'' f^4 \\ {}& + 56 h^4 u^3 f' h'' f^4-192 h^3 u^3 h' h'' f^4+4 h^3 u^4 f' h' h'' f^4 +16 h^4 u^4 f'' h'' f^4 + 48 h^5 u^3 f^{(3)} f^4 \\ {}& +32 h^4 u^3 h^{(3)} f^4 + 4 h^4 u^4 f' h^{(3)} f^4 - 28 h^2 u^4 h' h^{(3)} f^4 + 8 h^5 u^4 f^{(4)} f^4 -240 u^4 h'^4 f^3 \\ {}& - 20 h^5 u^2f'^2 f^3 - 7 h^3 u^4 f'^2 h'^2 f^3 - 20 h^5 u^4 f''^2 f^3 + 64 h^5 u f' f^3-72 h^4 u^3 f'^2 h' f^3 \\ {}&-64 h^4 u^2 f' h' f^3 - 64 h^5 u^2 f'' f^3 - 120 h^5 u^3 f' f'' f^3 + 64 h^4 u^3 h' f'' f^3 - 60 h^4 u^4 f' h' f'' f^3 \\ {}& - 12 h^4 u^4 f'^2 h'' f^3 +240 h u^4 h'^2 h'' f^3 + 32 h^4 u^3 f' h'' f^3+32 h^4 u^4 f'' h'' f^3 +32 h^5 u^3 f^{(3)} f^3 \\ {}& -24 h^5 u^4 f' f^{(3)} f^3 + 8 h^4 u^4 f' h^{(3)} f^3 -40 h^2 u^4 h' h^{(3)} f^3 + 16 h^5 u^4 f^{(4)} f^3 + 52 h^5 u^3 f'^3 f^2 \\ {}&- 208 h^2 u^3 h'^3 f^2 + 64 h^5 u^2 f'^2 f^2 - 32 h^5 u^4 f''^2 f^2 + 30 h^4 u^4 f'^3 h' f^2 - 64 h^4 u^3 f'^2 h' f^2 \\ {}& + 56 h^5 u^4 f'^2 f'' f^2 - 96 h^5 u^3 f' f'' f^2 - 88 h^4 u^4 f' h' f'' f^2 -24 h^4 u^4 f'^2 h'' f^2 + 104 h^3 u^3 h' h'' f^2 \\ {}& - 48 h^5 u^4 f' f^{(3)} f^2 - 23 h^5 u^4 f'^4 f + 64 h^5 u^3 f'^3 f - 384 h^2 u^3 h'^3 f +48 h^4 u^4 f'^3 h'f \\ {}& + 112 h^5 u^4 f'^2 f'' f + 192 h^3 u^3 h' h''f - 48 h^5 u^4 f'^4 \Bigr)\ , \end{split} \end{equation} \begin{equation} 
\begin{split} \mathcal{Q}_r = \frac{1}{f^4 h u} {}& \Bigl( 304 f^4 h^3 - 16 f^4 h^2 u^2 h'' - 320 f^4 h^2 + 40 f^4 u^3 h'^3 + 4 f^4 h u^2 h'^2 + 32 f^4 h u h'+64 f^4 h^2 h^{(3)} u^3 \\ {}& +32 f^4 h u^3 h' h'' - 96 f^4 h^2 u h' + 16 f^4 h+80 f^{(3)} f^3 h^3 u^3 + 96 f^{(3)} f^3 h^2 u^4 h' - 60 f^2 h^3 u^4 f''^2 \\ {}& - 88 h^3 u^4f'^4 +124 f h^3 u^3 f'^3 + 103 f h^2 u^4 f'^3 h' + 32 f^{(4)} f^3 h^3 u^4 + 32 f^4 h^4 u^2 f'' \\ {}& +64 f^4 h^4 u f' + 16 f^4 h^3 u^2 f' h' -64 f^3 h^3 u^2 f'' + 64 f^3 h^2 u^4 f'' h'' + 24 f^3 h u^4 f''h'^2 \\ {}& + 176 f^3 h^2 u^3 f'' h' - 16 f^3 h^4 u^2 f'^2 + 96 f^3 h^3 u f' +80 f^3 h^2 u^3 f' h'' + 28 f^3 h u^3 f' h'^2 \\ {}& + 16 f^3 h^2 h^{(3)} u^4 f' + 8 f^3 h u^4 f' h' h'' - 88 f^3 h^2 u^2 f' h' + 206 f h^3 u^4 f'^2 f''-192 f^2 h^3 u^3 f' f''\\ {}& +68 f^2 h^3 u^2 f'^2 -44 f^2 h^2 u^4 f'^2 h''-15 f^2 h u^4 f'^2 h'^2-192 f^2 h^2 u^4 f' f'' h' -148 f^2 h^2 u^3 f'^2 h' \\ {}& -88 f^{(3)} f^2 h^3 u^4 f' \Bigr)\ , \end{split} \end{equation} \begin{equation} \begin{split} \mathcal{Q}_{\theta} = \frac{-1}{f^4 h u^4} {}& \Bigl( 16 f^4 h^3 - 64 f^4 h^2 u^2 h'' - 24 f^4 u^4 h'^3 -24 f^4 h u^2 h'^2 - 4 f^4 h^2 h^{(3)} u^4 - 24 f^4 h^2 h^{(3)} u^3 \\ {}& + 24 f^4 h u^4 h' h'' - 12 f^4 h u^3 h' h'' + 48 f^4 h^2 u h' - 16 f^4 h-56 f^{(3)} f^3 h^3 u^3 - 24 f^{(3)} f^3 h^2 u^4 h' \\ {}& + 12 f^2 h^3 u^4 f''^2 + 19 h^3 u^4 f'^4 - 68 f h^3 u^3 f'^3 - 22 f h^2 u^4 f'^3 h' - 8 f^{(4)} f^3 h^3 u^4-48 f^3 h^3 u^2 f''\\ {}& - 16 f^3 h^2 u^4 f'' h''- 6 f^3 h u^4 f'' h'^2 - 116 f^3 h^2 u^3 f'' h'+64 f^3 h^3 u f'-48 f^3 h^2 u^3 f' h'' \\ {}& -16 f^3 h^2 u f' +24 f^3 u^4 f'h'^3 - 16 f^3 h u^3 f' h'^2 - 26 f^3 h u^4 f' h' h''-36 f^3 h^2 u^2 f' h' \\ {}& -44 f h^3 u^4 f'^2 f''+ 120 f^2 h^3 u^3 f' f''+ 44 f^2 h^3 u^2 f'^2+ 10 f^2 h^2 u^4 f'^2 h'' + 3 f^2 h u^4 f'^2 h'^2 \\ {}& + 42 f^2 h^2 u^4 f' f'' h'+ 84 f^2 h^2 u^3 f'^2 h'+20 f^{(3)} f^2 h^3 u^4 f' \Bigr) \ . 
\end{split} \end{equation} The solution of \eqref{I00-fh}-\eqref{I22-fh} should also satisfy the effective conservation equation \begin{equation}\label{IC} \mathcal{Q}_c\equiv -\frac{1}{\kappa_0} \left(\dv{\mathcal{Q}_r}{u}+(\mathcal{Q}_r+\mathcal{Q}_t)\frac{1}{2 f}\dv{f}{u} + 2 \frac{\mathcal{Q}_r-\mathcal{Q}_{\theta}}{u} \right)=0 \ \end{equation} obtained by taking the divergence of the field equations \eqref{IEoM}. It is quite easy to check that the quantum effects, $\mathcal{Q}_t$, $\mathcal{Q}_r$, $\mathcal{Q}_\theta$, and $\mathcal{Q}_c$, rapidly go to zero in the internal ($u<u_d$) and external ($u>u_s$) regions, and thus the solutions \eqref{Internal-Solution} and \eqref{External-Solution} are valid there. For the intermediate region we propose the bell--shaped (Lorentzian) functions \begin{equation} \label{fjhj} f_q(u) = c_1 + \frac{a_1}{1+b_1(u/\sigma-1)^2} \quad \quad , \quad \quad h_q(u) = c_2 + \frac{a_2}{1+b_2(u/\sigma-1)^2} \end{equation} with six constants $(a_1, a_2, b_1, b_2, c_1, c_2)$ to be fixed by the matching conditions. For the deSitter region the estimate $\tilde{H}_0 \simeq 1/\sigma $ is adequate, and thus the matching conditions \begin{align} \label{BC} & f_i(u_d) = f_q(u_d) \quad \quad , \quad \quad h_i(u_d) = h_q(u_d) \quad \quad , \quad \quad f_q(u_d) =h_q(u_d) \nonumber \\ & f_e(u_s) = f_q(u_s) \quad \quad , \quad \quad h_e(u_s) = h_q(u_s) \quad \quad , \quad \quad f_q(u_s)=h_q(u_s) \end{align} result in \begin{equation} \label{var} a_1 = a_2 \approx -\frac{16}{3 \sigma} \quad \quad , \quad \quad b_1 = b_2 \approx \frac{\sigma^2}{3} \quad \quad , \quad \quad c_1 = c_2 \approx \frac{6}{\sigma} \quad \quad , \quad \quad K=2 \ . \end{equation} Numerical solutions of these differential equations are in very good agreement with the general solution \eqref{fjhj} with constants \eqref{var}.
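The constants \eqref{var} can be checked numerically against the matching conditions \eqref{BC}; a minimal sketch, where the layer boundaries are taken at $u_{d,s} = \sigma \mp 1$ (an assumption consistent with the layer definition used for the averages) and only $f$ is shown, since $a_1=a_2$, $b_1=b_2$, $c_1=c_2$ make $h$ identical:

```python
# Sketch (assumption): numerical check that the constants of Eq. (var)
# satisfy the matching conditions (BC) for a large gravastar, sigma >> 1.
sigma = 1e6                        # normalized Schwarzschild radius r_s / l_p
H0t = 1.0 / sigma                  # estimate H~_0 ~ 1/sigma for the deSitter core
u_d, u_s = sigma - 1, sigma + 1    # assumed layer boundaries

a = -16 / (3 * sigma)
b = sigma**2 / 3
c = 6 / sigma
K = 2.0

f_i = lambda u: 1 - H0t**2 * u**2                     # deSitter core
f_e = lambda u: K * (1 - sigma / u)                   # Schwarzschild exterior
f_q = lambda u: c + a / (1 + b * (u / sigma - 1)**2)  # intermediate quantum layer

print(f_i(u_d), f_q(u_d))   # both ~ 2/sigma
print(f_e(u_s), f_q(u_s))   # both ~ 2/sigma
```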
It is also straightforward to substitute the proposed solution of the intermediate region into the field equations \eqref{I00-fh}-\eqref{IC} and observe that \begin{align} & G_{tt} + \alpha_q \mathcal{Q}_t = \mathcal{O}(\sigma^{-2}) \ , \nonumber\\ & G_{rr} + \alpha_q \mathcal{Q}_r = \mathcal{O}(\sigma^{-2}) \ , \nonumber \\ & G_{\theta\theta} + \alpha_q \mathcal{Q}_{\theta} = \mathcal{O}(\sigma^{-4}) \ , \nonumber \\ & \mathcal{Q}_c = \mathcal{O}(\sigma^{-2}) \ . \end{align} Since for an ordinary pure quantum gravastar $\sigma=r_s/l_p$ is extremely large, the solution is acceptable. The temporal component of the metric is plotted in figure \ref{gtt}. Clearly, the quantum layer, extending from $u=\sigma-1$ to $u=\sigma+1$, smoothly matches the deSitter core on one side and the Schwarzschild region on the other. No infinite-redshift surface is present, and the asymptotic observer finds no problem with the clock of a freely falling observer crossing the quantum layer. \begin{figure}[h] \centering \includegraphics[width=0.6\textwidth]{gtt.pdf} \caption{The temporal component of the metric as a function of the radial coordinate.} \label{gtt} \end{figure} Figure \ref{grr} shows the radial component of the metric. Again one has a smooth matching of the radial metric component at both the deSitter and Schwarzschild boundaries, and no singularity is present. Furthermore, the metric signature is preserved. \begin{figure}[h] \centering \includegraphics[width=0.6\textwidth, trim= 1 1 1 1,clip=true]{grr.pdf} \caption{The radial component of the metric about the quantum layer.} \label{grr} \end{figure} As expected, the divergences of the metric at the classical positions of the horizons are converted into a finite bell--shaped curve, with height $1.5\sigma$ and center at $\sigma$. Thus, for a big enough gravastar $(\sigma \gg 1)$, this peak behaves like a delta function.
In this way, both horizons (deSitter and Schwarzschild) are replaced by a quantum vacuum layer. \section{Entropy of the quantum layer} In order to discuss the stability of the solution, we need to assign some entropy to the quantum layer. There is no straightforward approach to do so. This is because we are neither dealing with a thin matter layer, as in the Mazur--Mottola gravastar, nor do we have horizons from which to use the surface gravity. Let us try to define an effective energy--momentum tensor $T^{\mu (eff)}_{\nu} = (\rho_{eff},-P_{eff},-P_{eff},-P_{eff})$ for the quantum layer. Since the layer is very thin, the components of the effective energy--momentum tensor can be approximated by the average values $\overline{\alpha_q\mathcal{Q}}_t$, $\overline{\alpha_q\mathcal{Q}}_r$ and $\overline{\alpha_q\mathcal{Q}}_{\theta}$ (averaged over the volume of the layer). From equations \eqref{Iterm} to \eqref{I22-fh}, we get \begin{align} &\alpha_q \mathcal{Q}_t \simeq 56.9 \ \frac{\alpha_q}{\sigma} -3072 \ \alpha_q\sigma^2 \epsilon \ ,\\ &\alpha_q \mathcal{Q}_r \simeq -63.2 \ \alpha_q\sigma -227.6 \ \alpha_q \sigma \epsilon \ , \label{Ef-Pressure}\\ &\alpha_q \mathcal{Q}_{\theta} \simeq -6.3 \ \frac{\alpha_q}{\sigma^2} - 85.3 \ \frac{\alpha_q}{\sigma} \epsilon \end{align} where $\epsilon = \frac{u}{\sigma} - 1$.
Since the thin layer is defined as the region $ \sigma - 1 < u < \sigma+1 $, we have \begin{equation} -\frac{1}{\sigma} < \epsilon < \frac{1}{\sigma} \ , \end{equation} and the average values of $\alpha_q\mathcal{Q}_t$, $\alpha_q\mathcal{Q}_r$ and $\alpha_q\mathcal{Q}_{\theta}$ over the quantum layer are \begin{align} &\overline{\alpha_q \mathcal{Q}}_t = \frac{\sigma}{2} \int_{-1/\sigma}^{1/\sigma} \alpha_q \mathcal{Q}_t \dd{\epsilon} = 56.8 \ \frac{\alpha_q}{\sigma} \ ,\\ &\overline{\alpha_q \mathcal{Q}}_r = \frac{\sigma}{2} \int_{-1/\sigma}^{1/\sigma} \alpha_q \mathcal{Q}_r \dd{\epsilon} = -63.2 \ \alpha_q\sigma \ ,\\ &\overline{\alpha_q \mathcal{Q}}_{\theta} = \frac{\sigma}{2} \int_{-1/\sigma}^{1/\sigma} \alpha_q \mathcal{Q}_{\theta} \dd{\epsilon} = -6.3 \ \frac{\alpha_q}{\sigma^2} \ \ . \end{align} In the limit $\sigma \gg 1$, $\overline{\alpha_q\mathcal{Q}}_t$ and $\overline{\alpha_q\mathcal{Q}}_{\theta}$ tend to zero, and thus a well--defined equation of state parameter cannot be assigned to our quantum layer. We can obtain the equivalent entropy and temperature of the quantum layer by identifying the action calculated over the layer region with the Helmholtz free energy.
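These averages follow because the terms linear in $\epsilon$ integrate to zero over the symmetric interval; a quick numerical check of the approximations above (coefficients taken from the text, with $\alpha_q$ factored out):

```python
import numpy as np

# Sketch: layer averages of the approximated quantum terms.  On a uniform,
# symmetric grid in epsilon the sample mean equals (sigma/2) * integral d(eps),
# since the interval length is 2/sigma.
sigma = 1e4
eps = np.linspace(-1 / sigma, 1 / sigma, 100001)

Qt = 56.9 / sigma - 3072 * sigma**2 * eps    # in units of alpha_q
Qr = -63.2 * sigma - 227.6 * sigma * eps
Qth = -6.3 / sigma**2 - 85.3 / sigma * eps

# The linear-in-epsilon pieces average to zero, leaving the constant terms.
print(Qt.mean(), Qr.mean(), Qth.mean())
```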
For the quantum layer \eqref{fjhj} located at $\sigma$, the improved action is \begin{equation} \mathcal{A}^{\text{I}} = \mathcal{C}\int_V \dd{V(\sigma)} \dd{(ct)} \mathcal{L}(\sigma) \end{equation} where $\tilde{\alpha}_q \equiv \alpha_q / l_p$, \begin{align} & \mathcal{L}(\sigma) = R(\sigma)+\epsilon_q R_{\alpha\beta}(\sigma)R^{\alpha\beta}(\sigma) = 1.6 \tilde{\alpha}_q \frac{\sigma^2}{l_p^4} + 1.8 \frac{\sigma}{l_p^4} + \mathcal{O}(\sigma^{-1}) \ , \\ &\dd{V(\sigma)} = \sqrt{-g} \dd[3]{x} = 4\sqrt{2} \pi \sigma^3 l_p^3 \dd{\epsilon} \ , \end{align} and setting $\mathcal{C}\int\dd{(ct)} = 2c/ \kappa$, we obtain the Helmholtz free energy as \begin{equation} \label{FE} \dd{F} \simeq -P_{eff}\dd{V}(\sigma) - 56.2 \times 4\sqrt{2} \pi \frac{\tilde{\alpha}_q^2}{\kappa} \sigma^5 l_p^3 \dd{\epsilon} \end{equation} where we have used equation \eqref{Ef-Pressure} to substitute for $P_{eff}$. On the other hand, the second term in the above equation should be identified with $-S\dd{T}$. Noting the dimension of temperature, $[k_BT] \sim m_p c^2$, it can be deduced that \begin{equation} \dd{k_BT} \sim m_p c^2 \dd{\epsilon} \ . \end{equation} Therefore, since $\kappa=8\pi l_p/ m_p c^2$ in Planck units, the entropy of the quantum layer becomes \begin{equation} S \simeq 40 k_B \alpha_q^2 \sigma^5 \ . \end{equation} It is important to note that the positivity of the entropy confirms the stability of the layer, and that larger quantum gravastars contain more entropy. \section{Conclusion} The gravastar was proposed by Mazur and Mottola as a stable, compact, nonsingular solution of the Einstein equations. In some respects, it seems a proper alternative to the black hole as a gravitational collapse endpoint. The main features of this model are its non--singular deSitter core and the replacement of the event horizon by a thin layer of ultra-relativistic fluid.
Here we have shown that, in the context of the quantum improved Einstein equations (that is, including quantum gravitational effects), it is quite possible to have a pure quantum gravastar. Such a solution consists of three regions: a deSitter core, a Schwarzschild outer region, and an intermediate region, where the quantum effects are dominant, that matches the other two. The thin quantum layer matches smoothly with both the deSitter and Schwarzschild sides, and thus there is no need for a Darmois--Israel junction layer. In other words, a pure quantum gravastar is a region of cosmological constant (maybe dark energy) confined by quantum gravitational effects, and thus is a vacuum solution of the quantum improved Einstein equations. We have also studied the entropy of such a solution and shown that it is stable.
\section{Introduction} \label{sec:introduction} The photometric and kinematic analysis of nearby objects reveals that disk galaxies may host decoupled structures on various scales, from a few tens of pc \citep[e.g.,][]{Pizzella2002, Corsini2003, Erwin2004} to several kpc \citep[e.g.,][]{Rubin1994, Kuijken2001, Combes2006}. In particular, observational evidence for two stellar disks, two gaseous disks, or a gaseous disk and a stellar disk rotating in opposite directions has been found on large scales in galaxies of different morphological types \citep{Galletta1996, Corsini2014}. Counter-rotating stellar and/or gaseous disks occur in $\sim30\%$ of S0 galaxies \citep{Pizzella2004, Davis2011} and in $\sim10\%$ of spirals \citep{Kannappan2001, Pizzella2004,Corsini2012}. Different processes have been proposed to explain the formation of a galaxy with two counter-rotating stellar disks, and each formation scenario is expected to leave a noticeable signature in the stellar population properties of the counter-rotating components. A counter-rotating stellar disk can be built from gas accreted from the environment or from a companion galaxy with angular momentum opposite to that of the pre-existing galaxy. The counter-rotating gas settles on the galaxy disk and forms the counter-rotating stars. In this case, the gas is kinematically associated with the counter-rotating stellar component, which is younger and less massive than the main body of the galaxy \citep{Thakar1996, Thakar1998,Algorry2014}. Another viable, but less probable, formation process is a major merger between two disk galaxies with opposite rotation. The difference in age of the two counter-rotating components depends on the stellar populations of the progenitors and on the timescale of the star formation triggered by the binary merger. Moreover, the two stellar disks are expected to have a different thickness \citep{Puerari2001, Crocker2009, Bettoni2014}.
Finally, the dissolution of a bar or triaxial stellar halo can build two counter-rotating stellar components with similar age and mass without involving gas. One of them is rotating in the same direction as the bulge and disk of the pre-existing galaxy \citep[][but see also \citealt{Sellwood1994, Khoperskov2016}]{Evans1994}. These predictions are difficult to test, since outside our Galaxy it is a hard task to separate the individual components of a composite stellar population. However, this is possible in a few galaxies because of the difference in velocity of their extended counter-rotating stellar components. Counter-rotating galaxies are therefore ideal laboratories for studying how galaxies grow by episodic or continuous accretion of gas and stars through acquisition and merging events. \citet{Coccato2011} presented a spectroscopic decomposition technique that allows separating the relative contribution of two stellar components from the observed galaxy spectrum. This allows us to study the kinematics and spectroscopic properties of the individual components independently, minimizing their cross-contamination along the line of sight. We applied this technique to many of the galaxies known to host counter-rotating stellar disks with the aim of constraining their formation process \citep{Coccato2011, Coccato2013, Coccato2015, Pizzella2014}. In most of these cases, the available evidence supports the hypothesis that stellar counter-rotation is the end product of a retrograde acquisition of external gas and subsequent star formation. Other teams developed their own algorithms for separating the kinematics and stellar populations of counter-rotating galaxies and found results similar to ours \citep{Johnston2013, Katkov2011, Katkov2013, Katkov2016, Mitzkus2016}. NGC~1366 is a bright spindle galaxy (Fig.~\ref{fig:image}) in the Fornax cluster at a distance of 17 Mpc \citep{Ferguson1989}.
It is classified as S0$^0$ by \citet{RC3} and S0$_1$(7)/E7 by \citet{CAG} because it has a highly inclined thin disk. Although NGC~1366 belongs to the LGG~96 group \citep{Garcia1993}, it does not have any nearby bright companion and shows an undisturbed morphology. It has an absolute total $B$ magnitude $M_{B_T}^0=-18.30$ mag, as derived from $B_T=11.97$ mag \citep{RC3} by correcting for the inclination and extinction given by HyperLeda \citep{Makarov2014}. The apparent isophotal diameters measured at a surface brightness level of $\mu_B = 25$ mag~arcsec$^{-2}$ are $2.1\times0.9$ arcmin, corresponding to $10.4\times4.5$ kpc. Its surface-brightness distribution is well fit by a S\'ersic bulge and an exponential disk with a bulge-to-total luminosity ratio $B/T=0.2$, as found by \citet{Morelli2008}. These authors detected a kinematically decoupled stellar component that is younger than the host bulge and probably formed from enriched material acquired through interaction or minor merging. \begin{figure} \includegraphics[angle=0.0,width=0.5\textwidth]{galaxy.ps} \caption{ Contour plots in arbitrary scale of the $R$-band image of NGC~1366 taken from \citet{Morelli2008}. The solid lines mark the position of the slit along the major and minor axis of the galaxy. The brown and orange segments correspond to the radial bins where we were able to separate the two counter-rotating stellar components. Orientation of the field of view is given in the figure, and the scale is 82 pc per arcsec.} \label{fig:image} \end{figure} In this paper we revisit the case of NGC~1366 by successfully separating the two counter-rotating components and properly measuring the properties of their stellar populations (Sect. \ref{sec:spectroscopy}). The analysis of the kinematics of the stars and ionized gas and of the stellar populations is consistent with the formation of the counter-rotating component from external gas that is still accreting onto the galaxy (Sect. \ref{sec:discussion}).
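The physical scale quoted in the figure caption and the isophotal sizes follow from the adopted distance of 17 Mpc; a quick cross-check:

```python
import math

# Sketch: angular-to-physical scale at D = 17 Mpc (82 pc per arcsec, as
# quoted in the figure caption) and conversion of the isophotal diameters
# from arcmin to kpc.
D_PC = 17e6                            # adopted distance in pc
ARCSEC_RAD = math.pi / (180 * 3600)    # one arcsec in radians

scale = D_PC * ARCSEC_RAD              # pc per arcsec
print(round(scale))                    # -> 82

major = 2.1 * 60 * scale / 1e3         # arcmin -> kpc
minor = 0.9 * 60 * scale / 1e3
print(round(major, 1), round(minor, 1))  # -> 10.4 4.5
```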
\section{Long-slit spectroscopy} \label{sec:spectroscopy} \subsection{Observations and data reduction} \label{sec:observations} We carried out the spectroscopic observations of NGC~1366 on 2005 January 25 with the 3.5 m New Technology Telescope (NTT) at the European Southern Observatory (ESO) in La Silla (Chile). We obtained $2\times45$-minute spectra along the major (P.A.$=2^{\circ}$) and minor (P.A.$=92^{\circ}$) axis of the galaxy with the ESO Multi-Mode Instrument (EMMI). The instrument mounted a 1200 $\rm grooves\,mm^{-1}$ grating with a 1.0 arcsec $\times$ 5.5 arcmin slit, giving an instrumental resolution $\sigma_{\rm inst}=25$ \kms. The detector was a mosaic of the No.~62 and No.~63 MIT/LL CCDs. Each CCD has $2048\,\times\,4096$ pixels of $15\,\times\,15$ $\rm \mu m^2$. We adopted a $2\times2$ pixel binning, which covered the wavelength range between about 4800 \AA\ and 5400 \AA\ with a reciprocal dispersion of 0.40 \AA\ pixel$^{-1}$. All the spectra were bias subtracted, flat-field corrected, cleaned of cosmic rays, and wavelength calibrated using standard IRAF\footnote{Image Reduction and Analysis Facility (IRAF) is distributed by the National Optical Astronomy Observatory (NOAO), which is operated by the Association of Universities for Research in Astronomy (AURA), Inc. under cooperative agreement with the National Science Foundation.} routines. The spectra obtained along the same axis were coadded using the center of the stellar continuum as reference. Further details about the instrumental setup and spectra acquisition are given in \citet{Morelli2008}. We followed the prescriptions of \citet{Morelli2016} for the data reduction. \subsection{Stellar and ionized-gas kinematics} \label{sec:kinematics} We derived the stellar kinematics along both the major and minor axis of NGC~1366 with both a single-component and a two-component analysis, as done in \citet{Pizzella2014}.
\begin{figure*} \includegraphics[angle=90.0,width=0.98\textwidth]{N1366_MJ.ps} \caption{Line-of-sight velocity dispersion (top panel) and velocity (bottom panel) radial profiles measured along the major axis of NGC~1366 for the total (black filled circles), counter-rotating (blue filled squares), and corotating (red filled diamonds) stellar components and for the ionized-gas component (green open triangles). Error bars smaller than the symbols are not shown. The blue and red horizontal lines in the top panel mark the radial bins we adopted for measuring the counter-rotating and corotating components, respectively. The blue and red dashed lines in the bottom panel are a tentative indication of the velocity rotation curves for the counter-rotating and corotating component, respectively.} \label{fig:kin_mj} \end{figure*} \begin{figure*}[t!] \includegraphics[angle=90.0,width=0.98\textwidth]{N1366_MN.ps} \caption{Line-of-sight velocity dispersion (top panel) and velocity (bottom panel) radial profiles measured along the minor axis of NGC~1366 for the total stellar (black filled circles) and two ionized-gas components (cyan open triangles and violet filled triangles). Error bars smaller than the symbols are not shown.} \label{fig:kin_mn} \end{figure*} We first measured the spectra without separating the two counter-rotating components \citep{Morelli2015}. We used the penalized pixel fitting \citep[pPXF,][]{Cappellari2004} and gas and absorption line fitting \citep[GANDALF,][]{Sarzi2006} IDL\footnote{Interactive Data Language (IDL) is distributed by ITT Visual Information Solutions.} codes with the ELODIE library of stellar spectra from \citet{Prugniel2001}, adopting a Gaussian line-of-sight velocity distribution (LOSVD), to obtain the velocity curve and velocity dispersion radial profile along the observed axes.
We subtracted the systemic velocity from the measured velocities, but we did not apply any correction for the slit orientation and galaxy inclination, while we corrected the measured velocity dispersion for the instrumental velocity dispersion. We found a peculiar stellar kinematics along the major axis of NGC~1366 (Fig.~\ref{fig:kin_mj}). The velocity curve is symmetric around the center for the innermost $|r|\leq11\arcsec$. It is characterized by a steep rise reaching a maximum of $|v|\simeq50$ \kms\ at $|r|\simeq2\arcsec$ and decreasing farther out to $|v|\simeq0$ \kms\ at $6\la|r|\la11\arcsec$. For $|r|\geq11\arcsec$ the spectral absorption lines clearly display a double peak that is due to the difference in velocity of the two counter-rotating components. The absorption lines of the two stellar populations are so well separated that the pPXF-GANDALF procedure fits only one of the two components. This is the reason for the shift in velocities and the drop of the velocity dispersion to lower values that we measured on both sides of the galaxy at $|r|\geq11\arcsec$ (Fig. \ref{fig:kin_mj}). The velocities measured at large negative and positive radii are related to the counter-rotating and corotating component, respectively. The velocity dispersion shows a central maximum $\sigma\simeq150$ \kms\ and decreases outwards. It rises again to peak at $\sigma\simeq140$ \kms\ at $|r|\simeq9\arcsec$ and decreases to a value of $\sigma\simeq100$ \kms\ at $|r|\simeq25\arcsec$. The combination of zero velocity with two off-centered and symmetric peaks in the velocity dispersion of the stellar component measured along the galaxy major axis is indicative of two counter-rotating components. This feature shows up in the kinematics obtained from long-slit \citep{Bertola1996,Vergani2007} and integral-field spectroscopy \citep{Krajnovic2011,Katkov2013} when the two counter-rotating components have almost the same luminosity and their difference in velocity is not resolved.
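The instrumental correction mentioned above is conventionally a quadrature subtraction; a minimal sketch (standard practice, not necessarily the exact pipeline used here), with $\sigma_{\rm inst}=25$ \kms\ for this EMMI setup:

```python
import math

# Sketch (standard practice, not the authors' exact procedure): removing the
# instrumental broadening from a measured line-of-sight velocity dispersion
# in quadrature.
SIGMA_INST = 25.0  # km/s, instrumental resolution of this EMMI setup

def correct_dispersion(sigma_obs):
    """Return the intrinsic dispersion in km/s, or 0 if unresolved."""
    if sigma_obs <= SIGMA_INST:
        return 0.0
    return math.sqrt(sigma_obs**2 - SIGMA_INST**2)

print(correct_dispersion(150.0))  # central value measured along the major axis
```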
We found no kinematic signature of stellar decoupling along the minor axis of NGC~1366 (Fig.~\ref{fig:kin_mn}). The velocity curve is characterized by $|v|\simeq0$ \kms\ at all radii, indicating that the photometric and kinematic minor axes of the galaxy coincide with each other. The velocity dispersion profile is radially symmetric and smoothly declines from $\sigma\simeq150$ \kms\/ in the center to $\simeq60$ \kms\ at the last measured radius ($r\simeq14\arcsec$). \begin{figure}[t!] \centering \includegraphics[angle=90.0,width=0.49\textwidth]{N1366_MJ_residuals.ps}\\ \vspace{-0.8cm} \includegraphics[angle=90,width=0.56\textwidth]{N1366_MN_residuals.ps} \caption{Portion of major (top panel) and minor-axis (bottom panel) rest-frame spectra of NGC~1366 showing the \oiiic\/ emission line after subtracting the best-fitting stellar template.} \label{fig:oiii} \end{figure} Finally, we derived the kinematics of the two counter-rotating components along the major axis at the radii where their difference in velocity was resolved, giving rise to double-peaked absorption lines. To reach the signal-to-noise ratio $(S/N)$ needed to successfully perform the spectral decomposition, we averaged the galaxy spectrum along the spatial direction in the regions with the highest contribution of the counter-rotating component. We obtained a minimum $S/N \geq 30$ per resolution element, which increases to a maximum value $S/N \simeq 50$ in the very central region. \begin{figure*}[t!] \includegraphics[angle=0.0,width=0.98\textwidth]{N1366_fit.ps} \caption{Decomposition of the major-axis spectrum of NGC~1366 (black line) in the analyzed spatial bins ($r=-20.9\arcsec, -12.6\arcsec, 11.4\arcsec$ and $19.9\arcsec$). The best-fitting model (magenta line) is the sum of the spectra of the corotating (red line) and counter-rotating stellar component (blue line) and of the ionized-gas component (cyan line). 
The normalized flux of the fit residual (green line) has a false zero-point for viewing convenience. The yellow shaded area indicates a spectral region masked in the fit because of the imperfect subtraction of a spurious signal, which is the result of a reflection on the EMMI CCD.} \label{fig:2c_fit} \end{figure*} We performed the spectroscopic decomposition using the implementation of pPXF developed by \citet{Coccato2011}. We built for each stellar component a best-fitting synthetic template as a linear combination of the ELODIE stellar spectra. The two templates depend on the corresponding stellar populations of the corotating and counter-rotating components and were convolved with a Gaussian LOSVD according to their kinematics. We added multiplicative polynomials to deal with differences in the continuum shape of the galaxy and stellar spectra due to flux-calibration and flat-fielding residuals. We also included a few Gaussian functions to account for the ionized-gas emission lines and generated a synthetic galaxy spectrum that matches the observed spectrum. The spectroscopic decomposition returns the luminosity fraction, line-of-sight velocity, and velocity dispersion of the two stellar components, the line-of-sight velocity and velocity dispersion of the ionized gas, and the two best-fitting synthetic stellar templates to be used for the analysis of the stellar population properties. We quantified the errors on the luminosity fraction, line-of-sight velocity, and velocity dispersion of the two counter-rotating stellar components with a series of Monte Carlo simulations on a set of artificial galaxy spectra, as done in \cite{Coccato2011}. The decompositions of the galaxy spectrum in the radial bins at $r=-20.9\arcsec, -12.6\arcsec, 11.4\arcsec$ and $19.9\arcsec$ are shown in Fig.~\ref{fig:2c_fit}, and the resulting kinematics of the corotating and counter-rotating stellar components are plotted in Fig.~\ref{fig:kin_mj}.
Corotating stars are characterized by a higher rotation velocity ($|v|\simeq120$ \kms) and a lower velocity dispersion ($\sigma\simeq30$ \kms) than the counter-rotating stars, which rotate with $|v|\simeq90$ \kms\ and have $\sigma\simeq80$ \kms. The corotating and counter-rotating components contribute $(45\pm15)\%$ and $(55\pm15)\%$ of the stellar luminosity at all the measured radii. We converted the luminosity fraction of each component into a mass fraction using the measured ages and metallicities and adopting the models by \citet{Maraston2005}. We derived stellar mass-to-light ratios of $M/L=3.02$ and $M/L=1.63$ for the corotating and counter-rotating components, respectively. From these quantities we found that the stellar mass fractions of the corotating and counter-rotating components are $60\%$ and $40\%$, respectively. A comparison between the stellar and ionized-gas velocity curves indicates that the gas is disturbed and is not associated with either of the two counter-rotating stellar components. In fact, the gas rotates in the same direction and with a velocity amplitude close to that of the stellar component at small ($|r|\la1\arcsec$) and large radii ($|r|\geq11\arcsec$). A broad feature is clearly visible in the gas structure at $|r|\simeq7-10\arcsec$ along the major axis (Fig.~\ref{fig:oiii}). Although the \oiiic\/ emission line has a broad profile (Fig.~\ref{fig:2c_fit}), there is no clear evidence for a double peak. The wavelength range of our spectra does not cover the \Ha\/ region, which prevents us from building a complete diagnostic diagram to properly distinguish between the different excitation mechanisms of the ionized gas. However, the high value of log(\oiiic /\Hb)$\,\simeq\,1.5$ favors shocks as the excitation mechanism. We detected two ionized-gas rotating components along the galaxy minor axis, as revealed by the double-peaked \oiiic\ emission line shown in Fig.~\ref{fig:oiii}.
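The conversion from luminosity to mass fractions quoted above can be reproduced with a few lines of Python. This is an illustrative cross-check, not part of the original analysis pipeline; the M/L ratios and luminosity fractions are the values given in the text.

```python
def mass_fractions(lum_fracs, mass_to_light):
    """Convert luminosity fractions into mass fractions given M/L ratios."""
    masses = [l * ml for l, ml in zip(lum_fracs, mass_to_light)]
    total = sum(masses)
    return [m / total for m in masses]

# Corotating and counter-rotating components of NGC 1366
f_co, f_cr = mass_fractions([0.45, 0.55], [3.02, 1.63])
print(f"corotating: {f_co:.0%}, counter-rotating: {f_cr:.0%}")
# -> corotating: 60%, counter-rotating: 40%, matching the quoted mass fractions
```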
We independently measured the brighter emission line at lower velocities and the fainter emission line at higher velocities. Their velocity and velocity dispersion are shown in Fig.~\ref{fig:kin_mn}. The two gas components have a systematic and almost constant offset in velocity with respect to the stellar component, suggesting the presence of several gas clouds along the line of sight. We prefer this interpretation to one invoking two gas components with mirrored asymmetric distributions, each with a brighter and a fainter side, giving rise to an X-shaped \oiiic\ emission line. The gas velocity dispersion is typically $\sigma_{\rm gas} < 100$ \kms\ and mostly $\sigma_{\rm gas} \simeq 50$ \kms\ along both axes after correcting for the instrumental velocity dispersion. \subsection{Stellar populations} \label{sec:populations} We measured the Lick line-strength indices \citep[as defined in][]{Gorgas1990, Worthey1994, Thomas2003} of the corotating and counter-rotating components on the best-fitting synthetic templates and derived the age, metallicity, and \aFe\ ratio of the corresponding stellar populations as in \citet{Morelli2012}. We derived the errors on the equivalent widths of the line-strength indices of the two counter-rotating stellar components with a series of Monte Carlo simulations on a set of artificial galaxy spectra, as done in \cite{Coccato2011}. We report the measurements in Table~\ref{tab:lick2c} and compare them in Fig.~\ref{fig:griglie} to the line-strength indices predicted by \citet{Thomas2003} for a single stellar population that accounts for the $\alpha$/Fe overabundance. We obtained the stellar population properties of both components from the line-strength indices averaged on the two galaxy sides. They are given in Table~\ref{tab:agemetalfa} together with the relative luminosity of the corotating and counter-rotating components. \begin{table}[t!]
\caption{Line-strength indices of the corotating and counter-rotating stellar components of NGC~1366.} \begin{small} \begin{tabular}{rcccc} \hline \noalign{\smallskip} \multicolumn{1}{c}{$r$} & \multicolumn{1}{c}{\Hb} & \multicolumn{1}{c}{\Mgb} & \multicolumn{1}{c}{Fe$_{5270}$} & \multicolumn{1}{c}{Fe$_{5335}$} \\ \noalign{\smallskip} \multicolumn{1}{c}{[\arcsec]} & \multicolumn{1}{c}{[\AA]} & \multicolumn{1}{c}{[\AA]} & \multicolumn{1}{c}{[\AA]} & \multicolumn{1}{c}{[\AA]} \\ \noalign{\smallskip} \hline \noalign{\smallskip} \multicolumn{5}{c}{Corotating component} \\ \noalign{\smallskip} $-20.9$ & $1.72\pm0.71$ & $2.32\pm0.79$ & $2.11\pm0.89$ & $1.32\pm0.80$\\ $-12.6$ & $1.43\pm0.48$ & $2.51\pm0.48$ & $2.16\pm0.47$ & $1.39\pm0.49$\\ $ 11.4$ & $2.27\pm0.36$ & $3.03\pm0.33$ & $2.94\pm0.39$ & $2.52\pm0.44$\\ $ 19.9$ & $2.57\pm0.51$ & $2.76\pm0.60$ & $2.47\pm0.68$ & $2.02\pm0.68$\\ \noalign{\smallskip} \multicolumn{5}{c}{Counter-rotating component} \\ \noalign{\smallskip} $-20.9$ & $2.40\pm0.32$ & $2.57\pm0.32$ & $2.70\pm0.38$ & $2.54\pm0.35$\\ $-12.6$ & $2.85\pm0.26$ & $2.22\pm0.26$ & $2.42\pm0.27$ & $2.32\pm0.27$\\ $ 11.4$ & $2.35\pm0.22$ & $2.44\pm0.21$ & $2.35\pm0.25$ & $2.23\pm0.29$\\ $ 19.9$ & $2.59\pm0.26$ & $1.87\pm0.33$ & $1.84\pm0.39$ & $1.83\pm0.38$\\ \noalign{\smallskip} \hline \end{tabular} \end{small} \label{tab:lick2c} \end{table} \begin{figure} \centering \includegraphics[angle=90.0,width=0.49\textwidth]{N1366_hbe_mgfe.ps} \\ \includegraphics[angle=90.0,width=0.49\textwidth]{N1366_mgb_fe.ps} \caption{Values of \Hb\ and \MgFe\ line-strength indices (top panel) and \Fe\ and \Mgb\ line-strength indices (bottom panel) for the corotating (small red diamonds) and counter-rotating stellar component (small blue squares) measured along the major axis of NGC~1366 ($r=-20.9\arcsec, -12.6\arcsec, 11.4\arcsec,$ and $19.9\arcsec$). The larger symbols correspond to the averaged line-strength indices for the two stellar components. 
The lines indicate the model predictions by \citet{Thomas2003} for different \aFe\ ratios (top panel) and ages (bottom panel).} \label{fig:griglie} \end{figure} \begin{table} \caption{Properties of the stellar populations of the corotating and counter-rotating stellar components of NGC~1366.} \begin{center} \begin{small} \begin{tabular}{lcrrr} \hline \noalign{\smallskip} \multicolumn{1}{c}{Component} & \multicolumn{1}{c}{$L/L_{\rm T}$} & \multicolumn{1}{c}{Age} & \multicolumn{1}{c}{\ZH} & \multicolumn{1}{c}{\aFe} \\ \noalign{\smallskip} \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{[Gyr]} & \multicolumn{1}{c}{[dex]} & \multicolumn{1}{c}{[dex]} \\ \noalign{\smallskip} \hline \noalign{\smallskip} Corotating &$0.45$ &$ 5.6\pm2.7$ & $ -0.18\pm0.16$ & $ 0.08\pm0.13$ \\ Counter-rotating &$0.55$ &$ 2.6\pm0.5$ & $ -0.16\pm0.11$ & $-0.07\pm0.08$ \\ \noalign{\smallskip} \hline \noalign{\medskip} \end{tabular} \end{small} \label{tab:agemetalfa} \end{center} \end{table} The comparison of the averaged age values suggests that the counter-rotating component is significantly younger (age$\,=\,2.6$ Gyr) than the corotating component (age$\,=\,5.6$ Gyr). The two averaged metallicities are both subsolar and similar to each other (\ZH$\,=-0.16$ and $-0.18$ dex for the counter-rotating and corotating components, respectively). However, the large scatter in the metallicity measurements of the corotating component does not allow us to draw a firm conclusion. At face value, the subsolar \aFe\ ratio of the counter-rotating component (\aFe$\,=\,-0.07$ dex) points to a longer star-formation timescale than that of the corotating component, which is characterized by a supersolar \aFe\ ratio (\aFe$\,=\,0.08$ dex). \section{Discussion and conclusions} \label{sec:discussion} There is no morphological or photometric evidence that NGC~1366 is hosting two counter-rotating stellar components.
NGC~1366 is characterized by an undisturbed morphology with no sign of recent interaction with small satellites or companion galaxies of similar size \citep{Morelli2008}. This is common for most counter-rotating galaxies, since their environment does not appear statistically different from that of normal galaxies \citep{Bettoni2001}. In addition, the surface brightness distribution of NGC~1366 is remarkably well fitted by a S\'ersic bulge and an exponential disk with no break at any radius \citep{Morelli2008}. We provided spectroscopic evidence of two counter-rotating stellar components with a high rotation velocity and low velocity dispersion ($v/\sigma\simeq2$) that give almost the same contribution to the galaxy luminosity. We infer that they have a similar scale length from the constant slope of the exponential surface-brightness radial profile outside the bulge-dominated region, as in NGC~4138 \citep{Jore1996, Pizzella2014} and NGC~4550 \citep{Rix1992, Coccato2013, Johnston2013}. These kinematic and photometric properties support the disk nature of the two components. The stellar population of the corotating component is characterized by an older age, consistent with that of the bulge \citep[$5.1\pm1.7$ Gyr,][]{Morelli2008}, subsolar metallicity, and almost solar $\alpha$/Fe enhancement. This suggests a formation timescale of a few Gyr that occurred at the time of the galaxy assembly. The counter-rotating stellar component is remarkably younger, with lower $\alpha$/Fe enhancement and subsolar metallicity. The metallicity and age values obtained for the two components are consistent within the errors with the results obtained by \citet{Morelli2008} on the galaxy integrated light when considering its strong radial gradients of stellar population properties.
Therefore, the counter-rotating stellar component could be the end result of a slower star-formation process that occurred in a disk of gas accreted by a preexisting galaxy and settled onto retrograde orbits. However, unlike most of the previously studied cases \citep[e.g.,][]{Johnston2013,Pizzella2014,Coccato2015,Katkov2016}, the ionized gas of NGC~1366 is not associated with the counter-rotating stellar component. It has peculiar kinematics, with multiple velocity components along the minor axis tracing different gas clouds along the line of sight. The kinematic mismatch between the ionized gas and the counter-rotating stellar component complicates the scenario of gas accretion followed by star formation. The most obvious possibility is to consider an episodic gas accretion. The first event of capture of external gas occurred $\sim3$ Gyr ago and built the counter-rotating stellar component. It was followed by a subsequent event that is still ongoing. However, this raises the question of the origin of the newly supplied and kinematically decoupled gas, since there is no clear donor candidate in the neighborhood of NGC~1366. This leaves us with the possibility of the acquisition of small gas clouds coming either from the environment or from the internal reservoir inside the galaxy itself. When external gas is captured in distinct clouds, it settles onto the galaxy disk in a relatively short time \citep[$\sim1$ Gyr,][]{Thakar1997, Algorry2014,Mapelli2015}. In this case, NGC~1366 could be an object caught at an intermediate stage of the acquisition process, before its configuration becomes stable. It is interesting to note that this could also have occurred in galaxies with gas associated with the counter-rotating stellar component.
Without clear evidence of ongoing star formation or very young stars, the counter-rotating stellar component could be the result of a past acquisition of gas coming from the same reservoir that provides the counter-rotating gas we observe at present. An intriguing alternative was explored by \citet{Crocker2009}. They showed the time evolution of the distribution and kinematics of gas and stars in a set of numerical simulations aimed at investigating the formation of the stellar counter-rotating disks of NGC~4550 from a binary merger. One Gyr after the merger, while the stars have settled in two counter-rotating disks with a relatively regular kinematics, the gas distribution still remains rather disordered with a disturbed kinematics. However, this configuration is not stable, and the gas tends to a more regular configuration between 1 and 2 Gyr from the merging event. The structure and stellar population properties of the counter-rotating components of NGC~1366 are too different from those of NGC~4550 to allow a direct comparison of our results with the simulations by \citet{Crocker2009}, and dedicated simulations are needed for a firmer interpretation of this galaxy in terms of a binary merger. These speculations need further evidence, since the available spectroscopic data are not conclusive. To date, NGC~1366 is a unique example, and it may become a cornerstone for understanding the formation of counter-rotation in relatively isolated and undisturbed galaxies. Mapping the ionized-gas distribution and kinematics of NGC~1366 with integral-field spectroscopy is a crucial complement to the present dataset and is necessary to distinguish between different scenarios and address the question of the origin of the gas. In the case of an episodic gas acquisition, we expect to see a clear morphological and kinematic signature of the incoming gas without a counterpart in the stellar distribution.
In contrast, in the case of a galaxy binary merger, we expect to observe a morphological association between the distribution of stars and gas, a regular velocity field for the two counter-rotating stellar disks, and an irregular velocity field for the ionized gas. \begin{acknowledgements} We benefited from discussion with Roberto P. Saglia. This work was supported by Padua University through grants 60A02-5857/13, 60A02-5833/14, 60A02-4434/15, and CPDA133894. LM and EMC acknowledge financial support from Padua University grants CPS0204 and BIRD164402/16, respectively. LM is grateful to the ESO Scientific Visitor Programme for the hospitality at ESO Headquarters while this paper was in progress. This research made use of the HyperLeda Database (http://leda.univ-lyon1.fr/) and NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration (http://ned.ipac.caltech.edu/). \end{acknowledgements} \bibliographystyle{aa}
\section{Results and discussion} \section*{Structure search and analysis} The phase stability of metal hydrides over a wide range of compressions was explored using density-functional theory combined with the CALYPSO structure search method\cite{Wang2010,Wang2012} to identify the range of stable structures and stoichiometries possible. Uncertainty quantification becomes important given the challenges in describing high-pressure phases\cite{Amsler2018}. The Bayesian error estimation functional with van der Waals correlation (BEEF-vdW) was employed to provide a confidence value (c-value) for competing phases to avoid possible bias due to the selection of a particular DFT functional\cite{houchins}, an approach that has been successfully applied to calculate uncertainty in phase diagrams for other systems\cite{Guan2019,Houchins2020}. To ensure a robust assessment of the ground state within the structure search, we used the ensemble of functionals within the BEEF formulation to identify the predicted ground state. Thus, each functional identifies a particular ground state for a given composition, and the c-value quantifies the fraction of the functionals that identify that structure as having the minimum energy. We begin by discussing the results for Pd-H at zero pressure and megabar (150 GPa) pressures in the absence of electrochemical loading. Two experimentally assessed compositions at ambient and high pressure, PdH and \ce{Pd3H4}, were considered first. We found several structures close in energy for PdH and used the Bayesian error estimation capability within BEEF-vdW to construct a c-value diagram (Fig. \ref{fig:c-PdH}). This analysis indicates that the most probable predicted ground state of PdH has the $R3m$ space group with Pd in tetrahedral coordination. However, this structure has a c-value of around 0.4, indicating that at the GGA level of DFT, it is not possible to conclusively identify the true ground state.
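The c-value construction described above can be sketched as follows: each member of the BEEF-vdW ensemble ranks the candidate structures by energy, and the c-value of a structure is the fraction of ensemble members for which it is the energy minimum. The energies below are random placeholders for illustration, not DFT results.

```python
import random
from collections import Counter

def c_values(ensemble_energies):
    """ensemble_energies: {structure: [energy for each ensemble functional]}.
    Returns the fraction of functionals that select each structure as the
    minimum-energy candidate."""
    names = list(ensemble_energies)
    n_members = len(next(iter(ensemble_energies.values())))
    winners = Counter(
        min(names, key=lambda s: ensemble_energies[s][k]) for k in range(n_members)
    )
    return {s: winners[s] / n_members for s in names}

# Placeholder ensemble: three hypothetical PdH candidates, 2000 functionals
random.seed(0)
energies = {s: [random.gauss(mu, 0.05) for _ in range(2000)]
            for s, mu in [("R3m", 0.00), ("Fm-3m", 0.02), ("other", 0.03)]}
print(c_values(energies))  # c-values sum to 1 by construction
```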
We recover the experimentally reported $Fm\Bar{3}m$ rocksalt structure as one of the possible structures, though it has a lower c-value (about 0.05). At high pressure (e.g., 150 GPa), the method gives the rocksalt structure as the most probable with a high c-value, indicating that on compression this structure is indeed the ground state predicted at the GGA-level of theory. These results are consistent with experiments reported up to 100 GPa\cite{brownsberger2017x,guigue2020x}. \begin{figure} \includegraphics[width=0.49\linewidth]{0-PdH-n.png} \includegraphics[width=0.49\linewidth]{cval_1_plot.pdf} \centering \caption{Crystal structures of PdH (a) $Fm\Bar{3}m$ and (b) $R3m$. In the $Fm\Bar{3}m$ structure, the Pd atom is octahedrally coordinated whereas in $R3m$, Pd is tetrahedrally coordinated. (c) Ground states with c-values of PdH at zero pressure and 150 GPa calculated by BEEF-vdW. The most probable structure has the $R3m$ space group at 0 GPa although several other structures are also possible. At 150 GPa, the $Fm\Bar{3}m$ structure is the most probable.} \label{fig:c-PdH} \end{figure} We next consider \ce{Pd3H4}, which is the only higher hydride reported experimentally, e.g., at around 5 GPa\cite{fukai1994formation}. The structure search calculations identify a structure with Pd having 5-fold and 6-fold coordination and space group $Cm$ as the most probable (Fig. S1). We recover the experimentally observed \ce{Cu3Au}-type structure with the $Pm\Bar{3}m$ space group as one of the probable structures. This structure can be viewed as introducing one Pd vacancy in each unit cell of rocksalt PdH, and therefore can also be written \ce{Pd3VaH4}, where Va represents a vacancy. At high pressure, we find a low-symmetry $P1$ structure with a complex coordination environment as the most probable structure. Following this analysis, we examine the stable structures of compositions PdH$_n$ where n is an integer between 2 and 12. 
The most probable structures identified in the structure search are shown in Fig. \ref{fig:0&150-PdHn}. These structures consist of Pd-H layers or clusters between which \ce{H2} molecules are located. The atomic coordination features of the low-enthalpy structures are characterized by the average radial distribution function (RDF) (Figs. S9 and S10). All structures at zero pressure are characterized by Pd-H and H-H bonding but with variable bond distances. Whereas nearest-neighbor Pd-H distances remain within 1.7-2.0 {\AA}, the nearest-neighbor H-H distances change significantly with composition. For \ce{PdH_n} with n less than 2-3, the H-H distances are mainly over 2 {\AA}; however, when n is larger than 2-3, the H-H distribution forms a peak at about 0.7-0.8 {\AA}, indicating the formation of \ce{H2} molecules. Pressure also affects the RDF significantly. At 150 GPa, for example, H-H distances span a broad range beyond the conventional covalent \ce{H2} bond length, indicating diverse coordination environments of H at high compressions. \begin{figure} \includegraphics[width=0.5\linewidth]{0_150-PdHn-v3.png} \centering \caption{Results of ground-state structure search for (a) \ce{PdH8}, (b) \ce{PdH10}, (c) \ce{PdH12} at zero pressure and \ce{PdH12} (d) top and (e) side view at 150 GPa.} \label{fig:0&150-PdHn} \end{figure} To further characterize the structural features of these Pd hydrides, a topological analysis of the low-enthalpy structures was performed (Fig. \ref{fig:topo}). Each structure is based on a Pd-H framework with the void space occupied by extra H atoms, if any, which are mainly present in the form of \ce{H2} molecules. The basic structural unit in the Pd-H framework is a polyhedron with the Pd atom coordinated by H atoms. The Pd-centered polyhedra can be directly connected with each other by sharing faces, edges, or corners, or indirectly connected via intermediate H atoms, forming a network with a specific dimensionality.
In a 0D framework, the Pd-centered polyhedra are isolated, whereas they can also form columns and layers to give 1D and 2D frameworks, respectively. The network can also extend in three dimensions (3D). Both pressure and composition clearly have a significant influence on the coordination number (CN) of Pd and the dimensionality of the Pd-H framework. At zero pressure, the CN is between 3 and 7, and moderately increases from PdH to \ce{PdH2} but saturates beyond \ce{PdH2}, with the maximum CN pinned at 7. The dimensionality also changes around \ce{PdH2}, and the transition is even sharper. From PdH to \ce{PdH2}, the frameworks are exclusively 3D, whereas beyond \ce{PdH2}, no 3D framework is present in the structures examined. From \ce{PdH3} to \ce{PdH6}, the frameworks are exclusively 2D, while starting from \ce{PdH7}, 1D gradually becomes dominant. At 150 GPa, the frameworks are invariably 3D regardless of the composition, but the CN is diffuse, spanning 5-18, and shows a dramatic change with composition. Based on the above analysis, it can be concluded that high pressure can compress large amounts of H around Pd and increase the connectivity of the framework components. \begin{figure} \includegraphics[width=0.7\linewidth]{topo-v2.png} \centering \caption{Composition-dependent topological features of the low-enthalpy structures at (a) zero pressure and (b) 150 GPa. The coordination number corresponds to the number of nearest-neighbor H atoms surrounding a Pd atom, and the dimensionality is used to describe the framework formed by the Pd polyhedra.} \label{fig:topo} \end{figure} The most probable stable structure found for \ce{PdH12} at 150 GPa is orthorhombic with the $Cmcm$ space group (Fig. \ref{fig:0&150-PdHn}). This structure is distinct from those predicted as thermodynamically stable phases for rare-earth superhydrides, which are based on clathrate or cage-like structures\cite{liu2017potential,peng2017hydrogen}.
On the other hand, the structure has similarities to those predicted for \ce{MgH12} and \ce{MgH16}, which consist of molecular \ce{H2} units having a range of nearest-neighbor distances\cite{PhysRevB.87.054107}. The $Cmcm$ structure of \ce{PdH12} consists of a 3D network of Pd-centered H polyhedra bridged by H-H covalent bonds, with the space in-between filled with \ce{H2} molecules. Interestingly, viewing perpendicular to the crystallographic c axis, all the H atoms are arranged in 2D layers, which are stacked together along the c axis. Fig. \ref{fig:PdH12-properties}(a) shows its radial distribution function (RDF). The nearest-neighbor Pd-H distances span 1.7-1.9 \AA, giving an effective coordination of Pd by H of 14. On the other hand, the H-H distances span a wide range and consist of two groups: (1) 0.7-1.0 \AA, corresponding to \ce{H2} molecules between polyhedra and the bridging \ce{H2} units that connect polyhedra, and (2) 1.5-2.0 \AA, corresponding to neighboring corner H atoms within a polyhedron. The charge density distribution of a (001) plane is plotted in Fig. \ref{fig:PdH12-properties}(b). The charge density between H-H is higher than that between Pd-H, indicating that covalent H-H bonds are dominant even under pressures of 150 GPa. This is likely because the zero-pressure electronegativities of Pd and H are very close. We conclude that this equivalence persists to high compressions, as evidenced by the lack of significant charge transfer even under these extreme conditions. Nevertheless, bonding between Pd and H is apparent from the charge density map. The electronic density of states (DOS) is shown in Fig. \ref{fig:PdH12-properties}(c), where the total DOS is decomposed into contributions from different orbitals, H-s, Pd-s, Pd-p and Pd-d. The considerable DOS at the Fermi level indicates that $Cmcm$ \ce{PdH12} is a metal under these conditions.
The dominant partial DOS are from H-s and Pd-d showing strong hybridization, both contributing to the conducting electrons. \begin{figure} \includegraphics[width=0.45\linewidth]{PdH12-prop.png} \centering \caption{Properties of $Cmcm$ \ce{PdH12} at 150 GPa: (a) Radial distribution function (RDF). The predominant bonding is H-H and Pd-H. Note that the minimum H-H distance is about 0.74 \AA, close to the H-H bond in molecular \ce{H2}. (b) Charge density distribution within the (001) plane. The charge density between H and H is higher than that between Pd and H. The unit of the charge density is $a_0^{-3}$, where the Bohr radius $a_0$ = 0.529 {\AA}. The contour lines are spaced by 0.05 $a_0^{-3}$. (c) Density of states (DOS) and partial density of states (PDOS). The red dashed line is the Fermi level.} \label{fig:PdH12-properties} \end{figure} The H-H distances for the rare-earth superhydrides are in the range of $\sim$1.1 \AA~ at high pressures\cite{liu2017potential,peng2017hydrogen}. Recently, evidence for H-H distances below 1.6 \AA~ was found in \ce{ZrV2Hx}, even at ambient pressure, from inelastic neutron scattering\cite{borgschulte2020inelastic}, in violation of the so-called Switendick criterion\cite{switendick1979band} of a minimum H-H distance of 2.1 \AA~ in common hydrides. In order to analyze how the H-H distance changes with composition and pressure, the distribution of minimum H-H distances of the low-enthalpy structures at both zero pressure and 150 GPa is plotted in Fig. S6. The composition has a large influence on the minimum H-H distance. At zero pressure, the minimum H-H distance is around 2.2-3.1 \AA~ for PdH, but drastically drops to about 0.8 \AA~ for \ce{PdH2}, close to the H-H distance in \ce{H2}, 0.74 \AA. Further increasing the H content does not cause significant changes, and the minimum H-H distance is pinned by the H-H distance in the \ce{H2} molecule.
A similar trend holds at 150 GPa, though the minimum H-H distance usually shrinks compared with that at zero pressure because of the large compression; even so, it remains close to that of free \ce{H2}. Having identified the possible stable structures, we assess the phase stability from an enthalpy convex hull analysis. Using an ensemble of functionals, we generate an ensemble of convex hulls, which are shown in Figs. S7 and S8. Unsurprisingly, we find that no Pd superhydrides are thermodynamically stable at zero pressure. At 150 GPa, there are still no thermodynamically stable superhydrides, although \ce{PdH12} is only slightly unstable, and the stability of superhydrides is greatly increased compared to zero pressure. \section*{Moderate-pressure electrochemical synthesis} Superhydrides were first documented experimentally using megabar diamond-anvil cell laser-heating techniques, leading to the discovery of near-room-temperature superconductivity in \ce{LaH10}\cite{geballe2018synthesis,PhysRevLett.122.027001,hemley2019road}. This result was subsequently confirmed\cite{Drozdov2019}, and other metal superhydrides have since been observed\cite{Zurek2019}. We now discuss an alternate approach, $\mathcal{P}^2$, to synthesizing dense metal superhydrides by combining pressure and electrochemistry (i.e., electrode potential). In the $\mathcal{P}^2$~ approach, an electrode consisting of a metal (or conducting metal hydride) is loaded with hydrogen by holding it at an appropriate electrode potential using an electrolyte containing mobile protons. The proton-conducting medium could be an aqueous electrolyte solution, a polymer electrolyte membrane (e.g., Nafion)\cite{benck2019producing}, a proton-conducting ceramic electrolyte\cite{benck2019producing}, or a solid acid proton conductor\cite{haile2007solid}. The electrolytes provide a way to tune the activity of mobile protons and the kinetics of reactions at electrode-electrolyte interfaces.
Continuing with the example of the Pd-H system, the hydrogen loading reaction for the metal electrode and an electrolyte containing mobile protons is given by: \begin{equation} \mathrm{Pd + n(H^+ + e^-) \Longleftrightarrow PdH_n} \end{equation} with the associated Gibbs free energy of reaction: \begin{equation} \mathrm{\Delta G = G_{PdH_n} - G_{Pd} - n G_{H^+} - n G_{e^-}}. \end{equation} The free energy of protons at unit activity and of electrons at zero electrode potential on the standard hydrogen electrode scale can be related to the free energy of hydrogen gas. Thermodynamic corrections can then be added to account for the effects of electrode potential and proton activity\cite{norskov2004origin}. The computational hydrogen electrode approach provides the relation $\mathrm{G_{H^+ (a_H^+ = 1)} + G_{e^- (U = 0)} = \frac{1}{2} G_{H_2}}$, which yields \begin{equation} \mathrm{\Delta G = G_{PdH_n} - G_{Pd} - \frac{n}{2} G_{H_2} + neU_{SHE} -n k_BT ln(a_{H^+})}. \end{equation} This relation allows us to construct an electrochemical phase diagram for loading hydrogen into a material as a function of pH and electrode potential. Lowering the electrode potential (making it more negative) or increasing the activity of protons enables loading higher amounts of hydrogen. However, in a practical device, at negative potentials, metals tend to catalyze the hydrogen evolution reaction\cite{norskov2005trends}, given by \begin{equation} \mathrm{2H^+ + 2e^- \Longleftrightarrow H_2}. \end{equation} Hence, electrochemical loading needs to compete with the hydrogen evolution reaction. Electrolyte formulations can suppress the hydrogen evolution reaction kinetically through superconcentrated electrolytes\cite{suo2015water} or other suppressing mechanisms.
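The potential and pH dependence of the loading free energy in the last equation can be checked numerically. The sketch below is illustrative (the reference reaction energy $\Delta G_0$ and the example values are placeholders, not DFT results); it only encodes the electrochemical correction terms $neU_{\rm SHE} - nk_BT\ln(a_{H^+})$ with $\ln(a_{H^+}) = -\ln(10)\,\mathrm{pH}$.

```python
import math

KB_T_EV = 0.025693  # k_B * T in eV at 300 K

def loading_free_energy(dg0_eV, n, U_she, pH, kbt=KB_T_EV):
    """Free energy (eV) for loading n hydrogens at potential U_SHE (V) and given pH.
    Uses ln(a_H+) = -ln(10) * pH, so low pH (high proton activity) favors loading."""
    return dg0_eV + n * U_she + n * kbt * math.log(10.0) * pH

# Lowering the potential by 0.1 V lowers Delta G by n * 0.1 eV:
dg_a = loading_free_energy(0.5, 10, -0.2, 0.0)  # dg0 = 0.5 eV is a placeholder
dg_b = loading_free_energy(0.5, 10, -0.3, 0.0)
print(dg_a - dg_b)  # -> 1.0 (eV) for n = 10
```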
However, we set the limit for electrochemical synthesizability at the potential where the reaction free energy for hydrogen evolution on the catalyst surface is thermodynamically downhill, which is determined by the free energy of adsorbed hydrogen on the palladium surface\cite{norskov2005trends}. While the $\mathcal{P}^2$~ approach is demonstrated with a proton conductor, a similar scheme can be constructed with hydride ion conductors\cite{Verbraeken2014} (e.g., $\mathrm{Pd + nH^- \Longleftrightarrow PdH_n + ne^-}$). The electrochemical phase diagram incorporating uncertainty analysis at ambient pressure is shown in \cref{fig:pb-uq}. At ambient pressure and $\mathrm{pH} = 0$ ($a_{H^+} = 1$), we find that electrochemical loading of even the PdH phase is challenging and will compete with the hydrogen evolution reaction, as observed experimentally\cite{benck2019producing}. As Pd catalyzes hydrogen evolution with almost no overpotential\cite{norskov2005trends}, suppressing hydrogen evolution is the only approach to accessing these phases at ambient conditions. Further, it is likely that the bulk loading reaction will have slower kinetics than the surface-catalyzed hydrogen evolution reaction, making this even more challenging. Next, we explore the effect of very high pressures, and find that it is possible to produce \ce{PdH12} below the hydrogen evolution potentials. \begin{figure} \includegraphics[width=0.62\linewidth]{pb-uq-0_150-v2.png} \centering \caption{Pourbaix diagram by BEEF-vdW ensembles at (a) zero pressure and (b) 150 GPa. The solid black lines are phase boundaries calculated by the best-fit BEEF-vdW functional.
The dotted dashed green line represents the equilibrium HER (hydrogen evolution reaction) on Pd, while the blue dashed line takes overpotential into consideration.} \label{fig:pb-uq} \end{figure} The above analysis raises the intriguing question of whether such palladium superhydrides could be synthesized at modest pressures, e.g., at 100 MPa (kilobar) versus 100 GPa (megabar) conditions. Although electrochemical studies have been performed over the years in the 100 MPa range (maximum of $\sim$1 GPa\cite{doi:10.1021/j100203a057,giovanelli2004electrochemistry}), the field remains largely unexplored. To examine the possibility of stabilizing Pd superhydrides at modest pressures, we calculate a comprehensive pressure-dependent Pourbaix diagram at 300 K, with the reversible hydrogen electrode (RHE) as the reference (Fig. \ref{fig:U-P}). At decreasing potentials, the phase transition sequence is $\ce{Pd} \rightarrow \ce{PdH} \rightarrow \ce{PdH10} \rightarrow \ce{PdH12}$. The most thermodynamically accessible Pd superhydride, \ce{PdH10}, has a very narrow potential window near ambient pressure, which is gradually enlarged with increasing pressure. The phase boundary between PdH and \ce{PdH10} can be fitted by the relation \begin{equation} \mathrm{U = -0.168 + 0.0297\,\log_{10}(P) + 8.83\times 10^{-5}\, P^{2/3}}, \end{equation} where U is the electrode potential on the RHE scale (in V) needed to transform PdH to \ce{PdH10} under a given pressure P (in MPa). This relation is nearly linear in $\log_{10} P$ near ambient pressure. The power of the electrochemical driving force is illustrated by the fact that an order of magnitude reduction of the transition pressure can be achieved with only a 0.03 V change in the electrode potential. When the pressure approaches $10^3$ MPa (1 GPa), the non-linearity of the relation becomes significant. The superhydride \ce{PdH10} can be stabilized at about 300 MPa (0.3 GPa) at the onset potential for hydrogen evolution.
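As a quick numerical check on the quoted sensitivity, the fitted boundary can be evaluated directly (a short Python sketch; the relation itself is the one given above):

```python
import math

def U_boundary(P_MPa):
    """Fitted PdH -> PdH10 phase-boundary potential (V vs RHE) at
    pressure P in MPa, using the relation quoted in the text."""
    return -0.168 + 0.0297 * math.log10(P_MPa) + 8.83e-5 * P_MPa ** (2.0 / 3.0)

# Near ambient pressure the boundary shifts by ~0.03 V per decade of P,
# so an order-of-magnitude pressure reduction costs only ~0.03 V ...
dU_decade_ambient = U_boundary(1.0) - U_boundary(0.1)

# ... while toward 1 GPa the P^(2/3) term makes the relation visibly
# non-linear (a decade in P now costs noticeably more potential).
dU_decade_GPa = U_boundary(1000.0) - U_boundary(100.0)
```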
Using superconcentrated electrolytes that can suppress HER further, it is expected that \ce{PdH10} can be stabilized at even lower pressures by operating at a more negative potential. \begin{figure} \includegraphics[width=\linewidth]{U-P-v2.png} \centering \caption{Pressure-dependent Pourbaix diagram of Pd-H by the best-fit BEEF-vdW functional at 300 K. The dotted dashed red line represents the equilibrium HER (hydrogen evolution reaction) on Pd, while the red dashed line takes overpotential into consideration.} \label{fig:U-P} \end{figure} \begin{figure} \includegraphics[width=\linewidth]{M-H-v2.png} \centering \caption{Pressure-dependent Pourbaix diagram by the best-fit BEEF-vdW functional at 300 K: (a) Y-H, (b) Mg-H, and La-H (c) with \ce{LaH11} and (d) without \ce{LaH11} considered. The dotted dashed lines represent the equilibrium HER (hydrogen evolution reaction) on the metal, while the dashed line takes overpotential into consideration.} \label{fig:M-H} \end{figure} \section*{Other superhydrides} From an electrochemical synthesis standpoint, palladium is among the most difficult as it catalyzes hydrogen evolution with negligible overpotential\cite{norskov2005trends}. To demonstrate the generality of this strategy, pressure-dependent Pourbaix diagrams were calculated for three other hydride systems, Y-H, Mg-H and La-H (Fig. \ref{fig:M-H}), all of which have been predicted or observed to host high-temperature superconducting phases. For all these systems, calculations similar to those above for Pd-H indicate that \ce{YH9}, \ce{MgH4}, \ce{MgH16} and \ce{LaH8} can be synthesized under ambient pressure at potentials that avoid HER. As for Pd-H, the critical potential becomes less negative with increasing pressure. The results for these different metal-hydrogen systems thus demonstrate that using electrochemistry to drastically lower the synthesis pressure of superhydrides is a general approach.
Most notable are the results for La-H, and the near-room-temperature superconducting phase \ce{LaH10}\cite{PhysRevLett.122.027001,hemley2019road,Drozdov2019}. The calculated stability to 200 GPa for La-H shows a rich set of superhydrides, the stabilizing pressures of which can be significantly reduced (by tens of GPa) at electrode potentials above the HER onset. For instance, \ce{LaH16} can be stabilized below 100 GPa at the HER potential, while an extreme pressure above 200 GPa is needed without an electrochemical driving force. It is also noted that the experimentally verified \ce{LaH10} phase is stabilized only after removing \ce{LaH11} from the underlying data, which may be due to the errors caused by neglecting higher order corrections such as anharmonicity \cite{liu2018} and nuclear quantum effects \cite{Errea2020} in the calculations. Such corrections usually enhance the stability of the high density structures to lower pressures \cite{liu2018}. More interestingly, the potential not only decreases the critical pressure, but also stabilizes some phases which cannot be stabilized at zero potential regardless of pressure, such as \ce{LaH4}, \ce{LaH5} and \ce{LaH8}. This new degree of freedom therefore opens a broad and largely unexplored space of synthesizable phases. \section*{Conclusions} Using first-principles density functional theory and particle swarm structure search calculations, the phase stability of metal superhydrides has been studied. Focusing on the Pd-H system, we find from an analysis of the electrochemical loading of hydrogen in the metal under a broad range of pressures that \ce{PdH10} can be electrochemically synthesized before the onset of hydrogen evolution. Remarkably, this is predicted to occur at even modest pressures of about 300 MPa, which is readily accessible using existing high pressure methods.
Given that palladium is among the most active metals for hydrogen evolution, we suspected significant effects of electrochemical loading on the synthesis of other hydrides. Indeed, we demonstrate the generalizability of this pressure-potential ($\mathcal{P}^2$~) approach for La-H, Y-H and Mg-H, often yielding a 10-100 times reduction in the pressure needed to stabilize a particular phase, as well as stabilizing new phases that cannot be accessed by either pressure or potential alone. Combining pressure and electrode potential thus offers an alternate route to synthesize metal superhydrides and other novel materials at currently accessible static pressure conditions, and a framework for such synthesis has been provided. Numerous extensions of existing electrochemical and high pressure techniques could lead to altogether new materials created under an even broader range of pressures. This work should serve to open up the new frontier of high-pressure electrochemistry to produce exciting new materials with broad applications\cite{berlinguette2019revisiting,FloresLivas2020}. \begin{methods} \subsection{Calculation Details:} All DFT calculations involving the BEEF-vdW exchange correlation functional were run using the GPAW code through the Atomic Simulation Environment (ASE) \cite{PhysRevB.71.035109,Enkovaara_2010,ase-paper}. A real space grid with spacing of 0.16 \AA ~ was used for the representation of electronic wavefunctions, and a k-point density of larger than 30 \AA~ in reciprocal space was used in each dimension. For each material, the geometry was relaxed to a maximum force of less than 0.01 eV/\AA. \subsection{Structure Search:} Particle swarm optimization was employed for structure search, using the CALYPSO code\cite{Wang2010,Wang2012}. Since only the superhydrides are of interest here, the compositions PdH$_n$ ($n$ an integer, $1 \leq n \leq 12$) and \ce{Pd3H4}, the highest Pd hydride reported in experiments so far, were considered, totaling 13 compositions in all.
For each composition, one unit cell is allowed to have 1-4 formula units. For a fixed number of formula units at a given composition, about 1000 structures were searched during the structure evolution. \subsection{Bayesian Error Estimation Functional:} For each material, a collection of functionals at the level of the generalized gradient approximation (GGA) was used as described below. Error estimation was carried out using the Bayesian error estimation functional with van der Waals correction \cite{BEEF}. This empirically fit functional generates an ensemble of functionals that are small perturbations away from the best fit functional in exchange correlation space. The exchange-correlation energy for the BEEF-vdW is given in Ref. \citenum{BEEF} as \begin{equation} E_{xc}=\sum_{m} a_m \int \epsilon_x^{UEG}(n)B_m[t(s)]d\mathbf{r} +\alpha_c E^{LDA-c}+(1-\alpha_c)E^{PBE-c}+E^{nl-c}. \end{equation} Here $B_m$ is the $m^{\rm th}$ Legendre basis function, each of which has a corresponding expansion coefficient $a_m$. The expansion coefficients, as well as the $\alpha_c$ parameter that mixes the local density approximation (LDA) and PBE\cite{PhysRevLett.77.3865} exchange correlation functionals, have been pre-fit with respect to a range of data sets as described in Ref. \citenum{BEEF}. Additionally within the functional is the $E^{nl-c}$ non-local correlation term implemented via the vdW-DF2\cite{lee2010higher} method. The method to generate the ensemble of functionals was tuned such that the spread of the predictions of the functionals matches the error of the main self-consistent functional with respect to the training and experimental data on which it was originally trained. Each of these functionals can then provide a non-self-consistent prediction of energy and therefore allows for a computationally efficient yet systematic way of understanding the sensitivity of the final prediction with respect to small changes in exchange correlation functional.
\subsection{Confidence-value (c-value) Calculations:} We use the confidence value (c-value)\cite{houchins} for determining the uncertainty associated with the choice of the functional. We define the c-value to be the fraction of the ensemble that predicts a certain structure to be the ground state. For a fixed composition, this simply involves counting the fraction of functionals that predict a particular structure to be the ground state. This framework can be expanded to construct a c-value associated with a Pourbaix diagram. In this case, the c-value $c(U,pH)$ for a specific phase is defined as the fraction of functionals that predict it to have the lowest free energy at a given potential and pH, given by \begin{equation} c_i(U,pH) = \frac{1}{N_{ens}} \sum_{n=1}^{N_{ens}} \prod_{j\neq i} \Theta(\Delta G_{j}^n(U,pH)-\Delta G_{i}^n(U,pH)) \end{equation} \noindent where the summation runs over the ensemble members, $\mathrm{N_{ens}} = 2000$ here, and the product is over all the remaining possible phases. $\Theta(x)$ denotes the Heaviside step function. At any given U and pH, $i, j \in S$, the set of all considered phases. \subsection{Topological analysis:} The coordination number is determined using the CrystalNN class based on a Voronoi algorithm in pymatgen\cite{Ong2013}. The framework of the crystal structure and its dimensionality are identified using the Zeo++ code based on the Voronoi decomposition\cite{Willems2012}, where radii of 0.5 {\AA} and 1.6 {\AA} are adopted for H and Pd, respectively. \end{methods} \begin{addendum} \item This work was partially supported by Google (P. G. and V. V.) and by the DOE/NNSA and NSF-DMR (R.J.H.). The authors thank Hanyu Liu and Yanming Ma for sharing structure files of the Y-H, Mg-H and La-H systems. The authors thank Yet-Ming Chiang, Matt Trevithick and Florian Metzler for helpful discussions. \item[Contributions] P.G. and V.V. conceived the idea for the project, and R.J.H.
provided input on the high pressure calculations and experiments. P.G. ran all the DFT calculations. All authors discussed the results and jointly wrote the manuscript. \item[Competing Interests] V.V. and P.G. are inventors on a provisional patent application, 63/028,265 related to electrochemical synthesis of metal superhydrides. \item[Correspondence] Correspondence and requests for materials should be addressed to V. Viswanathan (email: venkvis@cmu.edu) and R.J.Hemley (rhemley@uic.edu). \item[Additional information] Supplementary Information attached and consists of Supplementary Figures 1-11. \item[Data availability] The data that support the findings of this study will be made publicly available on Github. \item[Code availability] The custom code for calculating and plotting the phase diagrams based on first-principles data presented in the paper will be made publicly available on Github. \end{addendum} \clearpage
\section{Introduction} \label{sec:intro} Hydrodynamical simulations suggest that a small fraction of mass is ejected when two neutron stars (or a black hole and neutron star) collide or merge \citep{Janka_1999,Rosswog_1999,Lee_2001, Rosswog_2005,Oechslin_2007, Chawla_2010,Shibata_2011, Hoto_2013}. If this ejecta is sufficiently neutron rich, it will assemble within seconds into heavy elements via rapid neutron captures (the r-process) \citep{Lattimer_1974,Eichler_1989, Freiburghaus_1999}. The subsequent beta decay of the nuclei will heat the ejecta for days, powering a thermal, supernova-like transient \citep{Li_Paz_1998}. Because the ejected mass is small in comparison to ordinary supernovae (SNe), the light curves of these ``r-process supernovae" are expected to be relatively dim and short lived. Previous radiative models predict peak bolometric luminosities around $10^{40}-10^{42}~{\rm ergs~s^{-1}}$, peaking at optical wavelengths and lasting around a day \citep{Li_Paz_1998,Kulkarni_2005,Metzger_2010,Roberts_2011,Goriely_2011, Piran_2012}. Although we have not yet discovered an r-process SN from a neutron star merger (NSM), there are compelling reasons to look for them. Because these outflows are non-relativistic, they emit radiation relatively isotropically, and are therefore promising electromagnetic counterparts to gravitational wave sources; if discovered coincidently, they could enhance the scientific value of an advanced LIGO/VIRGO gravitational wave signal \citep{Schutz_1986,Kochanek_1993,Sylvester_2003,Phinney_2009,Mandel_2010,Metzger_Berger_2011, Kelley_2012,Nissanke_2012}. Discovery of r-process SNe would also dramatically illuminate our incomplete understanding of heavy element production in the Universe. The NSM ejecta is thought to be a remarkably pure sample of r-process material, which would allow us to cleanly study heavy elements near their production site, and soon after they had been created. 
In principle, analysis of the light curves and spectra of these radioactive transients could be used to quantify the mass and chemical composition of the ejecta, which would clarify the unknown site(s) of r-process nucleosynthesis \citep[e.g.,][]{Arnould_2007,Sneden_2008}. Perhaps the largest remaining uncertainty in our understanding of r-process SNe has concerned the opacity of the ejected debris, which (along with the ejecta mass and kinetic energy) is a key parameter determining the brightness, duration, and color of the transient. The ejecta of NSMs consists of heavy elements in rapid differential expansion, and at relatively low densities and temperatures ($\rho \sim 10^{-13}~\ensuremath{{\rm g~cm^{-3}}}$ and $T \sim 5000$~K at 1 day after ejection). Because almost nothing is known about the optical properties of such material, previous radiative transfer models have simply adopted opacities characteristic of ordinary SNe. In Type~Ia supernovae (SNe~Ia), for example, the opacity is due primarily to numerous iron group lines, which are blended by Doppler broadening into a pseudo-continuum. We can expect that lines will also dominate the opacity of NSM ejecta, but unfortunately very little atomic data exists for ions heavier than the iron group, either from theory or experiment. Given this ignorance, we might first consider some general expectations from atomic physics. The number of strong lines will be larger for ions with greater complexity -- i.e., with a denser packing of low lying energy states. Naively, one might expect higher $Z$ elements to be more complex than the iron group. Of course, what matters is not the total number of electrons, but the number of distinct ways of distributing valence electrons within the open shells.
A subshell with orbital angular momentum $l$ has $g = 2(2l+1)$ magnetic sublevels; one can estimate the number of states in a given electron configuration by simply counting the permutations of the valence electrons \begin{equation} C = \prod_i \frac{g_i!}{n_i! (g_i-n_i)!}, \label{eq:complexity} \end{equation} where $n_i$ is the number of electrons in the $nl$-orbital labeled $i$, and the product runs over all open shells in a given configuration. The different terms and levels (i.e., distinct combinations of $L,S,J$) derived from these various permutations are split by electrostatic and fine-structure (e.g. spin-orbit) interactions. Equation~\ref{eq:complexity} can thus be used to estimate the relative number of distinct energy levels of an ion, while the number of lines (i.e., radiative transitions between levels) will scale roughly as $C^2$. Figure~\ref{fig:complexity} plots the complexity measure $C$ for the ground configurations of singly ionized ions, where the pattern of $l$ shell filling is clearly seen. \begin{figure} \includegraphics[width=3.5in]{f1.pdf} \caption{ Complexity of the elements (top panel) and their mass fractions in the r-process ejecta of neutron star mergers (bottom panel). The top panel plots the number of states in the ground configuration for singly ionized ions, as estimated using the simple permutation counting of eq.~\ref{eq:complexity}. The pattern of peaks reflects the filling of valence shells, with the color shading giving the orbital angular momentum $l$ (yellow = $s$, blue = $p$, green = $d$, red = $f$). The bottom panel plots the mass fractions of the tidal tail ejecta from the simulations of \cite{Roberts_2011}. } \label{fig:complexity} \end{figure} Equation~\ref{eq:complexity} provides immediate insight into the opacity of r-process ejecta. Ions with valence shells of higher $l$ are more complex, as are those whose open shells are closer to half filled.
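The permutation count of eq.~\ref{eq:complexity} is easy to evaluate; a short Python sketch makes the shell-filling argument concrete by comparing a half-filled $d$ shell against a half-filled $f$ shell:

```python
from math import comb  # binomial coefficient, Python 3.8+

def complexity(open_shells):
    """Permutation count C = prod_i g_i! / (n_i! (g_i - n_i)!), where
    g_i = 2(2l_i + 1) magnetic sublevels per open shell.
    `open_shells` is a list of (l, n_electrons) pairs."""
    C = 1
    for l, n in open_shells:
        g = 2 * (2 * l + 1)
        C *= comb(g, n)  # ways to place n electrons in g sublevels
    return C

C_half_d = complexity([(2, 5)])  # half-filled d shell (iron-group-like)
C_half_f = complexity([(3, 7)])  # half-filled f shell (lanthanide-like)
```

The half-filled $f$ shell yields over ten times as many states as the half-filled $d$ shell, the order-of-magnitude gap invoked for the lanthanides.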
This is why the iron group, with a nearly half-filled $d$ ($l=2$) shell, usually dominates the line opacity in typical astrophysical mixtures. Heavy r-process ejecta, however, includes uncommon species of even greater complexity. Of particular importance are the lanthanides ($58 < Z < 70$) and the actinides ($90 < Z < 100$) which, due to the presence of an open $f$ ($l=3$) shell, have complexity measures roughly an order of magnitude greater than the iron group. While the actinide series is generally of very low abundance, the lanthanides may represent several percent of r-process material by mass. We will find that these species dominate the total opacity of NSM ejecta, resulting in opacities orders of magnitude greater than previously assumed. To calculate the ejecta opacity in detail, we need a comprehensive list of atomic lines. As almost no data is available for heavy ions, we turn here to {\it ab initio} atomic structure modeling using the \texttt{Autostructure}\ code \citep{Badnell_2011}. These models determine the approximate ion energy level structure and the wavelengths and oscillator strengths of all permitted radiative dipole transitions (\S\ref{sec:AS}). Without fine tuning the structure model, the computed energies and line wavelengths are not exact. Fortunately, the effective opacity in an expanding medium is a wavelength average over many lines. Because our models reasonably capture the statistical distribution of levels and lines, they can be used to derive fairly robust estimates of the pseudo-continuum opacity (\S\ref{sec:iron}-\S\ref{sec:high_Z}). Modeling the radiative properties of all high $Z$ ions is a long term endeavor; we present here initial structure calculations for a few representative ions selected from the iron group (Fe, Co, Ni), the lanthanides (Ce, Nd), and a few other heavy d-shell and p-shell ions (Os, Sn).
The \texttt{Autostructure}\ line data is then used to calculate the opacity of expanding ejecta under the assumption of local thermodynamic equilibrium (LTE). We show that ions of similar complexity have similar properties, which allows us to estimate the total opacity of an r-process mixture based on the representative species (\S\ref{sec:rp_mix}). The derived opacities can be input into a multi-wavelength, time-dependent radiative transfer code to predict the observable properties of r-process supernovae (\S\ref{sec:spectra}). We discuss here the general spectroscopic properties of these transients, while a companion study explores the broadband light curves and their dependence on the ejecta properties \citep{Barnes_2013}. In general, the high r-process opacities result in light curves that are significantly broader, dimmer, and redder than previously believed. These results have important implications for observational strategies to find and interpret the radioactively powered electromagnetic counterparts to NSMs. \section{Opacity of Rapidly Expanding Ejecta} We set the stage by reviewing the physical properties of the material expected to be ejected in NSMs. We then describe the nature of the opacity in such gas, in particular that arising from line interactions in a rapidly expanding medium. \subsection{Physical Conditions of the Ejecta} There are at least two distinct mechanisms by which material may be ejected in NSMs: 1) During the merger, surface layers may be tidally stripped and dynamically flung out in ``tidal tails''. 2) Following the merger, material which has accumulated in a centrifugally supported disk may be blown off in a neutrino or nuclear driven wind \citep{Levinson_2006,Surman_2006, Metzger_2008,Metzger_2009}. 
The amount of mass ejected in the tidal tails appears to depend upon many factors: the NS mass ratio, the equation of state of nuclear matter, and the treatment of gravity, but simulations give values in the range $M_{\rm ej} = 10^{-4} - 10^{-1}~\ensuremath{M_\odot}$. A similar amount of mass may potentially be ejected in the disk wind. In both cases, the characteristic velocities are $v_{\rm ej} \approx 0.1-0.3c$, of order the escape velocity from a NS. The composition of the material ejected by the two mechanisms is likely different. The tidal tail ejecta is initially cold and very neutron rich (electron fraction $Y_e \sim 0.1$), and should rapidly assemble into heavy elements ($Z > 50$) through the r-process. The conditions in the disk wind are quite different; weak interactions will drive the material to be less neutron rich ($Y_e \approx 0.5$) and the entropy will be higher. This environment is more similar to the neutrino driven wind from proto-neutron stars in core collapse supernovae. It is unclear whether a robust r-process occurs in such a wind, or whether the distribution only extends to atomic numbers $Z \sim 50$. If neutrinos drive $Y_e$ close to 0.5, the composition may be dominated by radioactive \ensuremath{^{56}\mathrm{Ni}}\ \citep{Surman_2008}. Soon after the mass ejection ($\sim 100$'s of seconds) hydrodynamical and nucleosynthetic processes abate, and the ejecta reaches a phase of free expansion. In the absence of any forces, the velocity structure becomes homologous -- i.e., the velocity of any mass element is proportional to radius, $v = r/\ensuremath{t_{\mathrm{ej}}}$, where \ensuremath{t_{\mathrm{ej}}}\ is the time since homology set in.
The density of the ejecta at 1~day can be estimated \begin{equation} \rho_0 \approx \frac{M_{\rm ej}}{(4 \pi/3) v_{\rm ej}^3 \ensuremath{t_{\mathrm{ej}}}^3} \approx 2.8 \times 10^{-13} \frac{M_{-2}}{v_{0.1}^3 t_d^{3}}~\ensuremath{{\rm g~cm^{-3}}}, \end{equation} where $M_{-2} = M_{\rm ej}/10^{-2}~\ensuremath{M_\odot}$, $v_{0.1} = v_{\rm ej}/0.1c$, and $t_d = \ensuremath{t_{\mathrm{ej}}}/{\rm ~day}$. In just a day, the density of the ejecta has dropped by $\sim 20$ orders of magnitude from its original value in the neutron star. The ejecta material initially cools very effectively by expansion, but will be reheated by the decay of r-process nuclei. Radioactive energy is released in the form of gamma-rays, beta particles, and fission fragments, which will be thermalized, to various degrees, by scattering within the ejecta \citep{Metzger_2010}. The heated material will radiate, and thermal photons will escape the medium on the effective diffusion timescale for a homologously expanding medium \citep{Arnett_1980} \begin{equation} t_{\rm d} \sim \biggl[ \frac{ M_{\rm ej} \kappa}{v_{\rm ej} c} \biggr]^{1/2} \sim 1.7 M_{-2}^{1/2} v_{0.1}^{-1/2} \kappa_{0.1}^{1/2} ~{\rm days}, \end{equation} where the opacity $\kappa$ of the ejecta has been normalized to a value $\kappa_{0.1} = \kappa/0.1~{\rm \ensuremath{{\rm cm^2~g^{-1}}}}$, a value appropriate for iron group elements (but not, we will find, for r-process elements). This timescale for diffusion sets the duration of the radioactively powered light curve. The luminosity near the peak of the light curve will be of order the instantaneous rate of energy deposition $L \approx M_{\rm ej} \dot{\epsilon}$, where $\dot{\epsilon}$ is the radioactive energy released per unit time per unit gram \citep{Arnett_1982}.
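For the fiducial parameters ($M_{-2} = v_{0.1} = t_d = \kappa_{0.1} = 1$), the density and diffusion-time estimates above can be verified with a few lines of Python in cgs units:

```python
import math

M_SUN, C_LIGHT, DAY = 1.989e33, 2.998e10, 86400.0  # cgs constants

M_ej  = 1e-2 * M_SUN    # ejecta mass (M_-2 = 1)
v_ej  = 0.1 * C_LIGHT   # characteristic velocity (v_0.1 = 1)
t_ej  = 1.0 * DAY       # time since ejection (t_d = 1)
kappa = 0.1             # iron-group-like opacity, cm^2/g

# Density at 1 day: M_ej / [(4 pi/3)(v_ej t_ej)^3] ~ 2.8e-13 g/cm^3
rho0 = M_ej / ((4.0 * math.pi / 3.0) * (v_ej * t_ej) ** 3)

# Effective diffusion time: [M_ej kappa / (v_ej c)]^(1/2) ~ 1.7 days
t_diff_days = math.sqrt(M_ej * kappa / (v_ej * C_LIGHT)) / DAY
```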
Stefan's law, $L = 4 \pi r^2 \sigma_{\rm sb} T^4$, provides an estimate of the surface temperature \begin{equation} T \approx \biggl[ \frac{M_{\rm ej} \dot{\epsilon}}{\sigma_{\rm sb} v_{\rm ej}^2 \ensuremath{t_{\mathrm{ej}}}^2} \biggr]^{1/4} \approx 10^4 M_{-2}^{1/4} (v_{0.1} t_d)^{-1/2}~{\rm K}. \end{equation} For times $\ensuremath{t_{\mathrm{ej}}} \gtrsim 1$~day, the characteristic temperatures and densities of NSM ejecta are thus roughly in the range $T \sim 10^3 - 10^4$~K and $\rho \sim 10^{-16} - 10^{-12}~{\rm g~cm^{-3}}$. Under these conditions, and assuming local thermodynamic equilibrium (LTE), the gas will be in a low ionization state, being mostly singly or doubly ionized near the photosphere. \subsection{Local Thermodynamic Equilibrium} \label{sec:LTE} We adopt LTE to compute level populations in this paper, a necessary approximation given the complexity of the ions involved. The low density of NSM ejecta at $\ensuremath{t_{\mathrm{ej}}} \ga 1~$day is not sufficient for collisional processes alone to establish LTE. However, in the optically thick regions below the photosphere, the radiation field will tend toward a blackbody distribution and radiative transitions will drive the level populations to their LTE values. Because the effective diffusion time and spectral energy distribution (SED) are mostly set by processes near and below the photosphere, LTE calculations likely provide a reasonable first approximation. At late times $(t \ga 20$~days), when the entire remnant becomes transparent, LTE will break down at all radii and result in poor SED predictions. Considering the heavily radioactive environment of NSM ejecta, one may worry that, even at early times, departures from LTE may be driven by non-thermal ionization/excitation processes (namely, impacts by fast electrons that have been Compton scattered by radioactive gamma-rays). As a rough estimate of the potential effects, we compare the rates for a bound-bound transition of energy $\Delta E$.
The non-thermal excitation rate is $R_{\rm nt} \approx f \dot{\epsilon}/ \Delta E$, where $\dot{\epsilon}$ is the radioactive power released per particle, and $f$ is the fraction of that power that goes into excitation (as opposed to ionization or thermalization). The radiative excitation rate, assuming a blackbody field, is $R_{\rm bb} = B_{12} B_\nu(T)$ where $B_{12}$ is the Einstein absorption coefficient and $B_\nu(T)$ the Planck function. Using the Einstein relations, the ratio of rates is \begin{equation} \frac{R_{\rm nt}}{R_{\rm bb}} \approx \biggl[ \frac{f \dot{\epsilon} }{ \Delta E A_{21}} \biggr] (e^{\Delta E/kT} - 1), \label{eq:nt} \end{equation} where the Einstein spontaneous emission coefficient $A_{21} \sim 10^8 - 10^9~{\rm s^{-1}}$ for permitted optical transitions. For r-process ejecta at 1~day, \cite{Metzger_2010} find $\dot{\epsilon} \approx 1~{\rm eV~s^{-1}}$. Considering values $\Delta E \sim$~1 eV, $T \approx 5000$~K and $f \sim 1/3$, the ratio $R_{\rm nt}/R_{\rm bb} \approx 10^{-8}$. Non-thermal transitions are therefore negligible except for transitions well above the thermal energy ($\Delta E/kT \gtrsim 20$). A similar argument can be made for non-thermal ionization. We conclude that the radioactive energy deposition will likely not seriously undermine our LTE assumption, except at late times when the ejecta becomes rather cold and transparent. \subsection{Line Expansion Opacity} The opacity of bound-bound transitions is significantly enhanced by the high expansion velocities found in supernova and NSM ejecta \citep{Karp_1977}. We discuss here the ``expansion opacity'' formalism which wavelength averages the contribution of multiple lines and treats the line radiative transfer in the Sobolev approximation. As a photon propagates through the differentially expanding medium, its wavelength is constantly Doppler shifted with respect to the comoving frame.
For a homologous (Hubble-like) expansion, this Doppler shift is always to the red, and proportional to the distance traveled. The photon will interact with a line when its comoving frame wavelength is redshifted into resonance with the line rest wavelength. The spatial extent of this interaction, or resonance, region is set by the intrinsic width of the line profile. If, for example, the line width is due to the thermal velocity, $\Delta v_t$, of ions, the physical size of the resonance region is $\Delta s \sim \Delta v_t \ensuremath{t_{\mathrm{ej}}}$. Because the thermal velocities in NSM ejecta are very small ($v_t \sim 1~\ensuremath{\mathrm{km~s}^{-1}}$) compared to the ejecta velocities ($v_{\rm ej} \approx 10^5~\ensuremath{\mathrm{km~s}^{-1}}$), the resonance region of a line is tiny compared to the ejecta scale height, and the matter properties can be taken to be constant over the region. This is the essence of the Sobolev approximation \citep{Sobolev_1960}. In this limit, the line extinction coefficient can be analytically integrated to give the Sobolev optical depth across the resonance region \begin{equation} \tau_s = \frac{\pi e^2}{m_e c} \ensuremath{f_{\rm osc}} n_{\rm l} \ensuremath{t_{\mathrm{ej}}} \lambda_0, \end{equation} where \ensuremath{f_{\rm osc}}\ is the oscillator strength and $\lambda_0$ the rest wavelength of the line. The Sobolev optical depth is a local quantity which depends on $n_{\rm l}$, the number density in the lower level of the transition at the location of resonance. The probability that a photon interacts (i.e., is scattered or absorbed) at least once in traversing the resonance region is simply $1 - e^{-\tau_s}$. A photon traveling through the expanding medium comes into resonance with lines one-by-one, sweeping from blue to red. The effective mean free path depends not on the strength of any one line, but rather on the wavelength spacing of strong ($\tau_s \gg 1$) lines, which can be quantified as follows.
Say that within some wavelength region $(\lambda_i, \lambda_i + \ensuremath{\Delta \lambda_i})$ we have $N$ strong lines. The spacing between the lines is, on average, $\ensuremath{\Delta \lambda_i}/N$ and the velocity gradient which Doppler shifts a photon from one line to the next is $\ensuremath{\Delta v_s}/c = \ensuremath{\Delta \lambda_i}/ (\lambda_i N)$. If homologous expansion holds, the distance a photon travels between line interactions is then $l_{\rm mfp} = \ensuremath{\Delta v_s} \ensuremath{t_{\mathrm{ej}}}$. This is an estimate of the mean free path, while the inverse quantity, $l_{\rm mfp}^{-1}$, defines the matter extinction coefficient (units cm$^{-1}$) \begin{equation} \ensuremath{\alpha_{\rm ex}} \approx \frac{1}{l_{\rm mfp}} \approx \frac{N }{\ensuremath{\Delta \lambda_i}} \frac{\lambda_i}{ c \ensuremath{t_{\mathrm{ej}}}}, \label{eq:simple_ex} \end{equation} where the bin size \ensuremath{\Delta \lambda_i}\ can be chosen arbitrarily to average over a reasonable number of lines. A formal derivation along these lines was introduced by \cite{Karp_1977} to estimate the extinction coefficient in an expanding medium. We use here the slightly modified expression developed by \cite{Eastman_1993} \begin{equation} \ensuremath{\alpha_{\rm ex}}(\lambda) = \frac{1}{c \ensuremath{t_{\mathrm{ej}}}} \sum_i \frac{\lambda_i}{\ensuremath{\Delta \lambda_i} } [1 - e^{-\tau_i} ], \label{eq:alpha_ex} \end{equation} where the sum runs over all lines in the wavelength bin \ensuremath{\Delta \lambda_i}. Eq.~\ref{eq:alpha_ex} takes into account the cumulative effect of many weak lines; in the case that all lines are optically thick ($\tau_s \gg 1$), it reduces to the simple estimate eq.~\ref{eq:simple_ex}. An interesting property of the expansion opacity is the dependence on ion density, which appears only in the Sobolev optical depth, $\tau_s$. In the limit that all lines are weak, the density dependence is linear.
However, in the opposite limit where all lines are very optically thick, the extinction coefficient is independent of density. In our actual calculations, the density dependence (which is set by the statistical distribution of line optical depths) is weak, roughly logarithmic. As a consequence, certain ions can make a significant contribution to the opacity even when their abundance is very low. The expansion opacity (units \ensuremath{{\rm cm^2~g^{-1}}}) of bound-bound transitions is related to $\alpha_{\rm ex}$ by \begin{equation} \kappa_{\rm ex}(\lambda) = \frac{\alpha_{\rm ex}}{\rho} = \frac{1}{c \ensuremath{t_{\mathrm{ej}}} \rho} \sum_i \frac{\lambda_i}{\ensuremath{\Delta \lambda_i} } [1 - e^{-\tau_i} ]. \label{eq:ex_opacity} \end{equation} Given the weak dependence of $\alpha_{\rm ex}$ on density, the line expansion opacity actually increases as the density drops. \subsection{Applicability of the Sobolev Approximation} \label{sec:sobolev} At least three conditions must be met for the Sobolev approximation, which underlies the expansion opacity expression eq.~\ref{eq:ex_opacity}, to be valid. The first, already mentioned, is that the thermal velocity of the ions (presumed to set the intrinsic line widths) must be significantly smaller than the velocity scales over which the ejecta properties vary. For NSM ejecta, the ratio is $v_t/v_{\rm ej} \approx 10^{-5} \ll 1$, which assures the applicability of the narrow line limit. A second condition, relevant for time-varying environments, is that the time photons spend within a resonance region be short compared to the timescale over which the ejecta properties vary (e.g., the expansion timescale). For strong lines, a photon may scatter multiple times within the resonance region before finally being redistributed to the edge of the line profile and escaping (i.e., redshifting past) the transition.
In the Sobolev formalism, the probability that a photon escapes the line after a scatter is \begin{equation} \beta = \frac{1 - e^{-\tau_s}}{\tau_s} \approx \tau_s^{-1}~{\rm for}~\tau_s \gg 1, \end{equation} and the average number of scatters in a thick line is $N_{\rm sc} \sim 1/\beta$. Assuming the distance traveled between scatters is, on average, $v_{\rm t}\ensuremath{t_{\mathrm{ej}}}$, the time spent in the resonance region is $t_{\rm sc} \sim N_{\rm sc} v_{\rm t} \ensuremath{t_{\mathrm{ej}}}/c$. The condition $t_{\rm sc} < \ensuremath{t_{\mathrm{ej}}}$ sets a limit on the optical depth of the line $\tau_s \la c/v_{\rm t} = 3\times 10^5 v_{t,1}^{-1}$, where $v_{t,1} = v_t/1~\ensuremath{\mathrm{km~s}^{-1}}$. In practice, optical depths $> 3\times 10^5$ are regularly encountered, in particular for resonance lines. In most cases, however, fluorescence provides an avenue for escape. Escape by de-excitation back through the same optically thick transition is suppressed by a factor of $\beta$, and so it is likely that the ion will eventually de-excite via a cascade through multiple low $\tau_s$ transitions. This generally evades the problem of extended line trapping, except perhaps for those few transitions in which fluorescence is not possible. A third condition is that the intrinsic profiles of strong lines must not, in general, overlap, as this would introduce a coupling of the radiative transport between lines. In particular, a photon that escapes from the red edge of one line will have an enhanced probability of escaping a second overlapping line. This invalidates the sum in eq.~\ref{eq:ex_opacity}, which assumes an independent interaction probability for each line. Overlap of weak lines (which are extremely numerous in our calculations) is common; however, this likely does not introduce any serious error, as the $\tau_s$ dependence is linear when $\tau_s \ll 1$. Moreover, the opacity is usually dominated by the strong lines.
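The trapping-time argument can be checked with a few lines of Python (all input values illustrative):

```python
import math

C_LIGHT = 2.998e10  # speed of light, cm/s

def escape_prob(tau_s):
    """Sobolev escape probability per scatter, beta = (1 - exp(-tau_s)) / tau_s."""
    return (1.0 - math.exp(-tau_s)) / tau_s

def resonance_time(tau_s, v_thermal, t_ej):
    """Approximate time trapped in one thick line: N_sc ~ 1/beta scatters,
    each traversing ~ v_t * t_ej / c of the resonance region."""
    n_sc = 1.0 / escape_prob(tau_s)
    return n_sc * v_thermal * t_ej / C_LIGHT

v_t = 1.0e5      # 1 km/s thermal velocity, in cm/s
t_ej = 86400.0   # 1 day, in s
# At tau_s ~ c/v_t ~ 3e5, the trapping time becomes comparable to t_ej:
t_sc = resonance_time(3.0e5, v_t, t_ej)
```

The computed $t_{\rm sc}$ is essentially equal to \ensuremath{t_{\mathrm{ej}}}, confirming that lines with $\tau_s \ga c/v_{\rm t}$ would violate the static-conditions assumption were it not for fluorescent escape.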
Occasional overlap of strong lines is inevitable, and may moderately reduce the expansion opacity at certain wavelengths. The entire Sobolev formalism, however, becomes inapplicable when the wavelength spacing of strong lines, $\Delta\lambda/N$, becomes comparable to the intrinsic (e.g., thermal) width $\Delta \lambda_{\rm t} = \lambda_0 (v_{\rm t}/c)$ of the lines. From eq.~\ref{eq:ex_opacity} we can define an opacity when such ``saturation'' occurs \begin{equation} \kappa_{\rm sat} = \frac{\lambda_0}{\Delta \lambda_{\rm t} }\frac{1}{\rho c \ensuremath{t_{\mathrm{ej}}}} = \frac{1}{\rho v_{\rm t} \ensuremath{t_{\mathrm{ej}}}} \approx 10^3 ~\rho_{-13}^{-1} t_{1}^{-1} v_{\rm t,1}^{-1}~\ensuremath{{\rm cm^2~g^{-1}}}. \label{eq:kappa_sat} \end{equation} When $\kappa_{\rm ex} > \kappa_{\rm sat}$, strong line overlap is the norm and the Sobolev expansion opacity formalism can no longer be trusted to return reasonable values. Under some conditions, and at certain wavelengths, we will find that our calculated r-process opacities approach or exceed saturation, such that this issue may be a serious concern. \subsection{Other Sources of Opacity} Other potential sources of opacity include free-free (i.e., bremsstrahlung), bound-free (i.e., photoionization), and electron scattering. In NSM ejecta, none of these turn out to be important compared to bound-bound. For example, the wavelength independent electron scattering opacity is given by \begin{equation} \kappa_{\rm es} = \frac{\bar{x} \sigma_{\rm T} }{\bar{A} m_p} \approx 0.4 \biggl( \frac{\bar{x}}{\bar{A}}\biggr) \ensuremath{{\rm cm^2~g^{-1}}}, \end{equation} where $\bar{x}$ is the mean ionization fraction and $\bar{A}$ the mean atomic weight of the ions. For NSM ejecta composed of lowly ionized ($\bar{x} \sim 1$) heavy elements ($\bar{A} \sim 130$), $\kappa_{\rm es}$ is a factor $\bar{x}/\bar{A} \sim 10^{-2}$ smaller than the typical value for ionized hydrogen, and much less than the r-process line opacity at all wavelengths of interest.
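Both the saturation opacity and the electron scattering estimate above are easy to evaluate for fiducial NSM conditions; a minimal sketch (Python, cgs units):

```python
SIGMA_T = 6.652e-25   # Thomson cross section, cm^2
M_P = 1.673e-24       # proton mass, g

def kappa_sat(rho, v_thermal, t_ej):
    """Saturation opacity, kappa_sat = 1 / (rho * v_t * t_ej)  [cm^2/g]."""
    return 1.0 / (rho * v_thermal * t_ej)

def kappa_es(x_bar, a_bar):
    """Electron scattering opacity, kappa_es = x_bar * sigma_T / (a_bar * m_p)."""
    return x_bar * SIGMA_T / (a_bar * M_P)

# Fiducial conditions: rho = 1e-13 g/cm^3, v_t = 1 km/s, t_ej = 1 day
k_sat = kappa_sat(1.0e-13, 1.0e5, 86400.0)   # ~1e3 cm^2/g
k_es_hydrogen = kappa_es(1.0, 1.0)           # ionized hydrogen, ~0.4 cm^2/g
k_es_rprocess = kappa_es(1.0, 130.0)         # lowly ionized heavy elements
```

The ratio $k_{\rm es,rprocess}/k_{\rm es,hydrogen} = 1/\bar{A} \approx 10^{-2}$ reproduces the suppression factor quoted above.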
The free-free opacity for a gas in ionization state $\bar{x} \sim 1$ is given approximately by \citep[e.g.,][]{rybicki_lightman} \begin{equation} \kappa_{\rm ff} = 0.005 \frac{\bar{x}^3}{\bar{A}^2} \rho_{-13} T_{4}^{-1/2} (\lambda/1 {\rm \mu m})^{3}~\ensuremath{{\rm cm^2~g^{-1}}}, \end{equation} where we have set the correction for stimulated emission and the Gaunt factor to unity. For the low densities and $\bar{x}/\bar{A}$ values found in NSM ejecta, the free-free opacity at the relevant wavelengths is negligible ($\kappa_{\rm ff} \approx 10^{-6}$~\ensuremath{{\rm cm^2~g^{-1}}}). Finally, the opacity due to a bound-free transition for photons at the threshold energy (where the cross-section is largest) for some excited level of an ion is given by \begin{equation} \kappa_{\rm bf} = \frac{\sigma_0}{\bar{A} m_p} \frac{e^{-\Delta E/kT}}{Z(T)}\,, \end{equation} where $\Delta E$ is the excitation energy of the level and $Z(T)$ the partition function (LTE is assumed). In order for the bound-free transition to apply to optical/infrared photons, the level must be a highly excited state, $\Delta E \gtrsim \chi - 2$~eV, where $\chi$ is the ionization potential. As a representative estimate of the opacity at $T = 5000$~K, we consider a singly ionized heavy element with $\bar{A} = 120$, $\Delta E \sim 8$~eV, and $Z(T) = 20$, and adopt the hydrogenic value $\sigma_0 \approx 6 \times 10^{-18}~{\rm cm^2}$. The resulting opacity at threshold is also very small, $\kappa_{\rm bf} \approx 1.3 \times 10^{-5}~\ensuremath{{\rm cm^2~g^{-1}}}$, due mainly to the Boltzmann factor. At ultraviolet wavelengths ($\lambda \lesssim 1000$~\AA) the bound-free opacity may actually dominate, since for low lying levels ($\Delta E \approx 0$) one finds $\kappa_{\rm bf} \approx 10^3~\ensuremath{{\rm cm^2~g^{-1}}}$.
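The free-free and bound-free estimates quoted in this subsection can be reproduced directly (Python; the fitting formula and hydrogenic cross section are those given in the text, with the same representative parameter values):

```python
import math

M_P = 1.673e-24    # proton mass, g
K_B_EV = 8.617e-5  # Boltzmann constant, eV/K

def kappa_ff(x_bar, a_bar, rho_13, t_4, lam_um):
    """Free-free opacity fit: 0.005 (x^3 / A^2) rho_-13 T_4^{-1/2} (lam/um)^3."""
    return 0.005 * x_bar**3 / a_bar**2 * rho_13 / math.sqrt(t_4) * lam_um**3

def kappa_bf(sigma_0, a_bar, delta_e_ev, temperature, partition_fn):
    """Threshold bound-free opacity: (sigma_0 / A m_p) exp(-dE/kT) / Z(T)."""
    boltzmann = math.exp(-delta_e_ev / (K_B_EV * temperature))
    return sigma_0 / (a_bar * M_P) * boltzmann / partition_fn

# r-process ejecta with x ~ 1, A ~ 130 at rho = 1e-13 g/cm^3, T = 1e4 K, 1 um:
k_ff = kappa_ff(1.0, 130.0, 1.0, 1.0, 1.0)
# Singly ionized heavy element, dE = 8 eV excited level, at T = 5000 K:
k_bf = kappa_bf(6.0e-18, 120.0, 8.0, 5000.0, 20.0)
```

The bound-free value comes out to $\approx 1.3\times 10^{-5}~\ensuremath{{\rm cm^2~g^{-1}}}$, matching the threshold estimate above; both continuum opacities are negligible next to the line expansion opacity at optical wavelengths.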
\section{Atomic Structure Calculations} \label{sec:AS} \begin{figure} \includegraphics[width=3.5in]{f2.pdf} \caption{Atomic structure model calculations of the excitation energy of the lowest level of NdII electron configurations. Circles denote the results from \texttt{Autostructure}\ obtained under various optimization approaches (described in text). Stars denote the experimental energies from NIST. \label{fig:Nd_energies}} \end{figure} \begin{table*} \caption{Autostructure atomic structure models \label{tab:configs}} \begin{small} \begin{center} \begin{tabular}{lcccc} \tableline \tableline Ion & Configurations included & \# levels & \# lines & $\chi$ (eV)\\ \tableline FeI & ${\bf 3d^6 4s^2}, 3d^7 4s, 3d^6 4s4p, 3d^7 4p, 3d^7 4d, 3d^7 4f, 3d^7 5s, 3d^7 5p, 3d^7 5d, 3d^6 4s 4d$ & 1784 & 326,519 & 7.90 \\ FeII & ${\bf 3d^6 4s}, 3d^7, 3d^6 4p, 3d^6 4d, 3d^6 4f, 3d^6 5s, 3d^6 5p, 3d^6 5d, 3d^5 4s^2, 3d^5 4s 4p$ & 1857 & 355,367 &16.18 \\ FeIII & ${\bf 3d^6}, 3d^5 4s, 3d^5 4p, 3d^5 4d, 3d^5 4f, 3d^5 5s, 3d^5 5p, 3d^5 5d, 3d^4 4s 4p$ & 2050 & 420,821 & 30.65 \\ FeIV & ${\bf 3d^5}, 3d^4 4s, 3d^4 4p, 3d^4 4d, 3d^4 4f, 3d^4 5s, 3d^4 5p, 3d^4 5d$ & 1421 & 217,986 &54.91 \\ CoI & ${\bf 3d^7 4s^2}, 3d^8 4s, 3d^7 4s 4p, 3d^9, 3d^8 4p, 3d^8 4d, 3d^8 5s, 3d^7 4s 4d, 3d^7 4s 5s$ & 778 & 62,587 & 7.88 \\ CoII & ${\bf 3d^8}, 3d^7 4s, 3d^6 4s^2, 3d^7 4p, 3d^6 4s 4p, 3d^7 5s, 3d^7 4d$ & 757 & 58,521 & 17.08 \\ CoIII & ${\bf 3d^7}, 3d^6 4s, 3d^6 4p, 3d^6 4d, 3d^6 5s$ & 601 & 34,508 & 33.50\\ CoIV & ${\bf 3d^6}, 3d^5 4s, 3d^5 4p, 3d^5 4d, 3d^5 5s$ & 728 & 48,254 &51.27 \\ NiI & ${\bf 3d^8 4s^2}, 3d^{10}, 3d^8 4s 4p, 3d^9 4s, 3d^9 4p, 3d^9 4d, 3d^9 4f, 3d^9 5s, 3d^9 5p, 3d^9 6s$ & 174 & 2,776 & 7.64\\ NiII & ${\bf 3d^9}, 3d^8 4s, 3d^8 4p, 3d^8 4d, 3d^8 4f, 3d^8 5s, 3d^8 5p, 3d^8 6s, 3d^7 4s4p, 3d^7 4s^2$ & 520 & 25,496 & 16.18 \\ NiIII & ${\bf 3d^8}, 3d^7 4s, 3d^7 4p, 3d^7 4d, 3d^7 4f, 3d^7 5s, 3d^7 5p, 3d^7 6s, 3d^6 4s^2$ & 1644 & 61,108 & 35.19 \\ NiIV & ${\bf 3d^7}, 3d^6 4s, 3d^6 
4p, 3d^6 4d, 3d^6 4f, 3d^6 5s, 3d^6 5p, 3d^6 6s, 3d^5 4s4p, 3d^5 4s^2$ & 751 & 258,305 & 54.92 \\ NdI & ${\bf 4f^4 6s^2}, 4f^3 5d 6s^2, 4f^4 5d 6s, 4f^4 5d^2, 4f^3 5d 6s 6p, 4f^4 5d 6p$ & 18104 & 24,632,513 & 5.52\\ NdII & ${\bf 4f^4 6s}, 4f^4 5d, 4f^4 6p, 4f^3 5d^2, 4f^3 5d 6s, 4f^3 5d 6p, 4f^3 6s 6p $ & 6888 & 3,873,372 & 10.7\\ NdIII & ${\bf 4f^4}, 4f^3 5d, 4f^3 6s, 4f^3 6p, 4f^2 5d^2, 4f^2 5d 6s, 4f 5d^2 6s$ & 1650 &232,715 & 22.14\\ NdIV & ${\bf 4f^3}, 4f^2 5d, 4f^2 6s, 4f^2 6p$ & 241 & 5780 & 40.4\\ CeII & ${\bf 4f 5d^2}, 4f 5d 6s, 4f^2 6s, 4f^2 5d, 4f 6s^2, 4f 5d 6p, 4f^2 6p, 5d^3, 4f 6s 6p,4f^3$ & 5,637 & 4,349,351 & 10.8\\ CeIII & ${\bf 4f 5d}, 4f 6s, 5d^2, 4f 6p, 5d 6s$ & 3,069 & 868,640 & 20.19 \\ OsII & ${\bf 5d^6 6s}, 5d^6 5f, 5d^6 5g, 5d^6 6s, 5d^6 6p, 5d^6 6d, 5d^6 6f, 5d^6 6g $ & 3271 & 1,033,972 & 17.0\\ SnII & ${\bf 5s^2 5p}, 5s^2 4f, 5s^2 5d, 5s^2 6s, 5s^2 6p, 5s 5p^2, 5s 5p 6s, 5s 5p 6p$ & 47 & 371 &14.63 \\ \tableline \end{tabular} \end{center} \end{small} \label{tab:config} \end{table*} To estimate the radiative data for high-Z elements, we used the \texttt{Autostructure}\ program \citep{Badnell_2011}. This code was used previously to calculate data up to Ni for the updated opacities of the Opacity Project \citep{Badnell_2005}. Recent developments \citep{Badnell_2012} have enabled it to be used to make extensive calculations of radiative (and autoionization) rates for a half-open f-shell. \texttt{Autostructure}\ calculates the approximate level energy structure of ions, and all relevant radiative transition rates, given a user specified set of electron configurations. The many-electron quantum mechanical problem is treated using a multi-configuration wavefunction expansion with a Breit-Pauli Hamiltonian. We used the kappa-averaged relativistic wavefunction option as introduced by \cite{Cowan_1976}. The radial orbitals were determined using a Thomas-Fermi-Dirac-Amaldi potential. 
As the standard $LS$-coupling scheme breaks down for high $Z$ elements, we adopted a level-resolved intermediate coupling scheme. We used the NIST \citep{NIST} atomic database to identify the electron configurations corresponding to the ground and low-lying states of each ion. For several of the high-Z ions, the NIST data appeared to be incomplete, and we included additional configurations suspected to be relevant. Table~\ref{tab:configs} lists the configurations used for each ion. For the lanthanides, the highest $nl$ orbital we considered was the $6p$ one. We experimented with including configurations generated by electron promotion to higher orbitals (e.g., $n=7,8$); however, these typically produced highly excited levels not significantly thermally populated under the relevant physical conditions. As we did not notice any large effects on the opacities, we omit these configurations from our final calculations, although more exhaustive explorations of configuration space are certainly warranted. \texttt{Autostructure}\ includes a dimensionless radial scaling parameter for each $nl$-orbital, which must be optimized to establish a realistic level structure for low-charge ions. The optimization consists of varying the scaling parameters so as to minimize a user specified weighted sum of eigenenergies. The closed-shell core cannot be excluded from the structural optimization of complex heavy near-neutral ions because of the strength of core polarization effects on the valence orbitals \citep{Palmeri_2000}. We therefore used a single common variational scaling parameter for all closed-shell orbitals, but varied the parameters of the valence orbitals independently. This {\it ab initio} optimization procedure does not require any observed energies. Thus, it is ideally suited to situations such as the present one where the observed data is at best sparse. We explored several strategies for optimization.
The first -- which we label {\it opt1} -- was to simultaneously vary the scaling parameters for all included (core-plus-valence) orbitals and to minimize the equally weighted sum of all energy levels included by the configuration expansion. This has the advantage of not biasing the structure towards any given configuration(s), which is valuable given that we seek radiative data for many excited levels. The disadvantage is that it gives no due preference to the ground state, and thus does not always predict a ground state configuration in agreement with what is deduced experimentally. Figure~\ref{fig:Nd_energies} shows the energy level structure for NdII, where it is seen that the {\it opt1} optimization predicts the wrong ground configuration. While for many applications this would be a fatal flaw, in the present context we expect several low lying configurations to be significantly populated, in which case the mean line opacity may be less sensitive to the exact configuration ordering. We considered a second strategy ({\it opt2}) whereby the above optimization was first applied to only those orbitals included in the ground configuration. These scaling parameters were then fixed, and a second optimization was carried out varying the parameters of all remaining orbitals. This method usually produced the correct ground state configuration. The energies of the excited levels were also close to but a bit higher than the available NIST values, and overall not as good as those found using the {\it opt1} approach (Figure~\ref{fig:Nd_energies}). The model structure can be further refined by iteratively adjusting the scaling parameters by hand. We attempted this for NdII, guided by the trends found in the {\it opt1} and {\it opt2} calculations. An improved solution was found ({\it opt3}) which reproduced the ground and first two excited level energies almost exactly. 
Further iterations could presumably improve the result, but this sort of manual alignment is time consuming, and more art than science. We attempted this {\it opt3} approach only for NdII, which is the most important ion for our r-process light curve calculations. \section{Iron Group Opacities} \label{sec:iron} \begin{figure*}[t] \includegraphics[width=3.5in]{f3a.pdf} \includegraphics[width=3.5in]{f3b.pdf} \caption{Calculated line expansion opacities for a mixture of iron group elements (10\% Ni, 80\% Co, 10\% Fe) representative of decayed \ensuremath{^{56}\mathrm{Ni}}. The right panel plots the Planck mean opacity versus temperature, while the left panel plots the wavelength dependence of the opacity at a specific temperature, $T = 5000$~K. The opacities calculated using our \texttt{Autostructure}\ derived line data generally agree with those using the Kurucz linelists. The calculations assume $\rho = 10^{-13}$~\ensuremath{{\rm g~cm^{-3}}}, $\ensuremath{t_{\mathrm{ej}}} = 10$~days, and a wavelength binning $\Delta \lambda = 0.01 \lambda$. } \label{fig:feg_mean_opac} \end{figure*} \subsection{Comparison to Kurucz Line Data} The atomic properties of $Z < 30$ ions are reasonably well known based on experiment and previous structure modeling. In particular, R.~Kurucz has generated extensive line lists, including CD23 ($\sim 500,000$ lines, \cite{Kurucz_CD23}), and CD1 ($\sim 42$ million lines, \cite{Kurucz_CD1}). These lists (which are dominated by iron group lines) have been derived from atomic structure calculations using the Cowan code \citep{Cowan_1981}, which have been iteratively tuned to reproduce the extensive observed experimental level energies \citep{Kurucz_Bell_1995}. Supernova modelers have used the Kurucz data to successfully model the optical light curves and spectra of observed (iron-rich) SNe~Ia \citep[e.g.,][]{Kasen_2009,Sim_2010}, which suggests that, for the iron group, the Kurucz line data can be taken to be reasonably accurate and complete.
To validate our {\it ab initio} \texttt{Autostructure}\ line data against the observationally constrained data of Kurucz, we ran structure models for the first 4 ionization stages of Fe, Co, and Ni, using the electron configurations listed in Table~\ref{tab:configs}. Unlike Kurucz, we made no attempt (beyond our {\it ab initio} {\it opt1} optimization scheme) to tune the model, and our calculated level energies can differ from the experimental values by factors of 2 or more. Nevertheless, we find that our derived iron group expansion opacities are in good agreement with those of Kurucz (Figure~\ref{fig:feg_mean_opac}). Our Planck mean opacities differ from those of Kurucz by only $\sim 30\%$ over the temperature range $1000-20,000$~K, and the wavelength dependence of the opacity is quantitatively similar, with the opacity rising sharply to the blue. The good agreement indicates that our \texttt{Autostructure}\ calculations capture the statistical properties of the lines, even if the individual energy levels and line wavelengths may be inaccurate. \subsection{Application to Supernova Modeling} \label{sec:sn1a} \begin{figure*} \includegraphics[width=3.5in]{f4a.pdf} \includegraphics[width=3.5in]{f4b.pdf} \caption{Synthetic light curves and spectra of a model consisting of pure \ensuremath{^{56}\mathrm{Ni}}\ ejecta, computed with different line data. The left panel shows that the SED (at 50 days after explosion) calculated using the \texttt{Autostructure}\ line data resembles that obtained using the Kurucz data. Both resemble the spectrum of the Type~Ia SN~2003du observed 34 days after peak ($\sim 52$ days after explosion). The right panel compares the broadband light curves of the model calculated using the \texttt{Autostructure}\ line data (solid lines) and the Kurucz CD23 linelist (dashed lines).
} \label{fig:feg_spec} \end{figure*} To demonstrate how our \texttt{Autostructure}\ derived iron group opacities perform in a real transport calculation, we calculated synthetic light curves and spectra of a simple SN~Ia model. As a numerical ``thought'' experiment, we pretended that our understanding of SNe~Ia was as rudimentary as it is for NSM outflows, and that our only expectation was that a carbon/oxygen star was burned to nuclear statistical equilibrium (NSE). We thus constructed a spherically symmetric ejecta model consisting of uniform density, initially pure \ensuremath{^{56}\mathrm{Ni}}\ with a total mass of $1~\ensuremath{M_\odot}$ and kinetic energy of $10^{51}$~ergs, roughly the nuclear energy released in burning the C/O to NSE. Of course, real SNe~Ia are not homogeneous, and in addition to \ensuremath{^{56}\mathrm{Ni}}\ are observed to contain a significant amount of intermediate mass elements (IMEs, Si, S, Ca). We calculated synthetic observables of this ejecta model using the time dependent, multi-wavelength radiation transport code \texttt{Sedona}\ \citep{Kasen_MC}, and assuming that the level populations were in LTE. Figure~\ref{fig:feg_spec} shows that, despite the simplistic nature of the ejecta model, the broadband light curves qualitatively resemble those of observed SNe~Ia, peaking at a B-band magnitude around $-19$ about 20 days after explosion. Overall, the light curves calculated using our \texttt{Autostructure}\ derived linelist are rather similar to those calculated using the Kurucz linelist, although differences up to 1 magnitude are seen at some epochs. Figure~\ref{fig:feg_spec} shows that the model spectrum (at 50 days after explosion) derived from the \texttt{Autostructure}\ linelist is also similar to that using Kurucz. Both calculations resemble the SED of an observed SN~Ia.
The \texttt{Autostructure}\ model does not reproduce the positions of most spectral features, which is to be expected given that the line wavelengths are only approximate. Even the Kurucz calculation fails to reproduce every observed spectral feature, as the underlying ejecta model did not include the IMEs present in real SNe~Ia. These results indicate that line data derived from our \texttt{Autostructure}\ models can be used to predict supernova SEDs (but not line features) with some reliability. The general agreement of our synthetic observables with those of real SNe~Ia suggests that -- even with very crude knowledge of the underlying ejecta structure -- we may still be able to predict the light curve and colors of radioactive transients to a reasonable level of accuracy. \section{High $Z$ Opacities} \label{sec:high_Z} We have calculated structure models for several elements beyond the iron group, including tin (Sn, $Z=50$, p-shell), cerium (Ce, $Z=58$, f-shell), neodymium (Nd, $Z=60$, f-shell), and osmium (Os, $Z=76$, d-shell). These species were chosen to sample different blocks on the periodic table corresponding to valence shells of different orbital angular momentum. The total numbers of atomic levels and lines determined by the structure models are listed in Table~\ref{tab:configs} and illustrated in Figure~\ref{fig:lev_dens}, and are generally consistent with the simple complexity estimates of \S\ref{sec:intro}. As expected from simple physical arguments, we find that more complex atoms, in particular the lanthanides, have higher line expansion opacities. Figure~\ref{fig:Nd_Fe_Si} shows that the Planck mean opacity of neodymium is a factor $\sim 10-100$ greater than that of iron, depending on the temperature. This is roughly consistent with the estimate one gets by squaring the complexity measure (equation~\ref{eq:complexity}) to gauge the relative number of strong lines, $(C_{\rm NdII}/C_{\rm FeII})^2 \approx 22$.
The variation of the mean opacity with temperature (Figure~\ref{fig:Nd_Fe_Si}) shows several bumps which reflect changes in the ionization state. As the temperature increases, the excited levels become more populated, and the number of optically thick lines increases. The opacity therefore increases with temperature until the gas becomes hot enough to ionize. This leads to multiple maxima in the mean opacity curve, each of which occurs near the transition temperature between successive stages of ionization. At sufficiently low temperatures, when the element becomes neutral, the opacities cut off sharply, and drop exponentially with decreasing temperature due to the Boltzmann factor in the excited state level populations. An important property of the lanthanides is that, relative to the iron group, the opacity remains high at relatively low temperatures. This is because the ionization potentials of the lanthanides are generally $\sim 30\%$ lower than those of the iron group (see Table~\ref{tab:configs}). For neodymium, the mean opacity peaks at $T \approx 5000$~K, when the ion is mostly singly ionized, and cuts off at $T \la 2500$~K when Nd becomes neutral. In comparison, the opacity peak for iron occurs at $T \approx 7000$~K and the neutral cutoff is at $T \la 3500$~K. The general persistence of the lanthanide opacity to lower temperatures has an important impact on the color of the emergent spectra, contributing to cooler, redder photospheres. Another important feature of the lanthanide opacity is the wavelength dependence -- while the opacity decreases to the red (as there are more lines at bluer wavelengths), the decrease is much slower than that of the iron group (Figure~\ref{fig:high_Z_exp}). This is due to the much denser energy level spacing of the lanthanides, resulting in a much larger number of $\sim 1$~eV optical/infrared transitions.
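The exponential low-temperature cutoff described above follows directly from the Boltzmann factor in the LTE level populations; a minimal illustration (Python; the level energy, degeneracy, and partition function below are representative values, not data from our structure models):

```python
import math

K_B_EV = 8.617e-5  # Boltzmann constant, eV/K

def lte_fraction(e_level_ev, g_level, temperature, partition_fn):
    """LTE fractional population of an excited level: g * exp(-E/kT) / Z(T)."""
    return g_level * math.exp(-e_level_ev / (K_B_EV * temperature)) / partition_fn

# Population of a 1 eV level (g = 5, Z ~ 20) at two temperatures:
f_5000 = lte_fraction(1.0, 5, 5000.0, 20.0)
f_2500 = lte_fraction(1.0, 5, 2500.0, 20.0)
```

Halving the temperature from 5000~K to 2500~K reduces the population of this level, and hence the strength of the lines arising from it, by an order of magnitude.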
The shallower opacity profile means that the lanthanides can line blanket not only UV wavelengths, but the entire optical region of the spectrum. This will influence the color of r-process SNe, as photons will eventually be re-emitted or fluoresce (through the many lines) to infrared wavelengths where they may escape more easily. As seen in Figure~\ref{fig:high_Z_exp}, the opacity of osmium ($Z=76$) is very similar to that of iron, despite the much higher atomic number. This is not surprising, as osmium is a homologue of iron, with a nearly half-open d-shell. Similarly, the opacity of the lanthanide cerium ($Z=58$) is comparable to, though slightly less than, that of neodymium. This confirms that species with similar complexity measures have roughly similar opacities, which we use to derive approximate opacities for r-process mixtures (\S\ref{sec:rp_mix}). \begin{figure} \includegraphics[width=3.5in]{f5.pdf} \caption{ Histogram of the number of atomic levels versus level energy (bin size = 0.25~eV) in our \texttt{Autostructure}\ models, which illustrates the much greater complexity of the lanthanide neodymium (with an open f-shell) as compared to iron (open d-shell) and tin (open p-shell). \label{fig:lev_dens}} \end{figure} \begin{figure} \includegraphics[width=3.5in]{f6.pdf} \caption{ Planck mean expansion opacities for three different elements, showing the expected dependence on atomic complexity. The Nd opacities (blue line, $Z=60$, open f-shell) were derived from \texttt{Autostructure}\ models, while the silicon (red line, $Z=14$, open p-shell) and iron (green line, $Z=26$, open d-shell) opacities used Kurucz line data. The calculations assume a density $\rho = 10^{-13}~\ensuremath{{\rm g~cm^{-3}}}$ and a time since ejection $\ensuremath{t_{\mathrm{ej}}} = 1$~day.
\label{fig:Nd_Fe_Si}} \end{figure} \subsection{Uncertainties and Comparison to Existing Data} \begin{figure} \includegraphics[width=3.5in]{f7.pdf} \caption{ Wavelength dependent line expansion opacities resulting from \texttt{Autostructure}\ derived linelists. The opacity of the lanthanides (Nd, Ce) is much higher than that of iron and its d-shell homologue, osmium, especially in the infrared. \label{fig:high_Z_exp}} \end{figure} \begin{figure} \includegraphics[width=3.5in]{f8.pdf} \caption{ Variations in the wavelength dependent expansion opacity for pure neodymium ($Z = 60$) ejecta obtained using different \texttt{Autostructure}\ optimization approaches. These calculations adopt a density $\rho = 10^{-13}$~\ensuremath{{\rm g~cm^{-3}}}, temperature $T = 4000$~K, time since ejection $\ensuremath{t_{\mathrm{ej}}} = 1$~day, and a wavelength binning $\Delta \lambda = 0.01 \lambda$. \label{fig:Nd_vary}} \end{figure} Our derived opacities must possess some error, since the \texttt{Autostructure}\ model energies do not exactly match the experimental values (Figure~\ref{fig:Nd_energies}). To estimate how sensitive the results are to the detailed level energy structure and configuration ordering, we examined the NdII opacities derived from the three different optimization schemes described in \S\ref{sec:AS}. The resulting variation provides an estimate of our level of uncertainty. Figure~\ref{fig:Nd_vary} shows that the opacities calculated using the {\it opt1} and {\it opt3} models are quite similar, while the {\it opt2} model opacities are lower by a factor of $\sim 5$ at some wavelengths. The {\it opt2} model has relatively higher energy levels, and hence smaller excited state LTE level populations, which is presumably the reason for the lower opacities. The {\it opt1} and {\it opt3} models had similar level energies, but the ground state configuration and ordering were different.
These results suggest that what matters most to the opacities is the energy level spacing, and not the exact configuration ordering. Given that the low-lying {\it opt3} NdII level energies reproduce experiment fairly well, we suspect that further fine tuning of the \texttt{Autostructure}\ model is unlikely to change the resulting opacity by much more than a factor of $\sim 2$. We have also compared our \texttt{Autostructure}\ opacities to existing line data from the VALD database, which collects atomic data from a variety of sources \citep{VALD_2008}. The only high-Z ions with enough lines in VALD to derive expansion opacities are CeII and CeIII, which have wavelengths and oscillator strengths calculated by the Mons group \citep{Biemont_1999,Palmeri_2000,Quinet_2004}. The approach of the Mons group to atomic structure is the same as that of Kurucz, {\it viz.} calculations with Cowan's code utilizing extensive experimental energies. In Figure~\ref{fig:ce_opacity}, we compare the expansion opacities of Ce calculated using the VALD linelist and our own \texttt{Autostructure}\ list. The agreement in both the mean and wavelength dependent values is good to a factor of $\sim 2$. Our conclusions about the size and wavelength dependence of the lanthanide opacities are therefore confirmed when using radiative data from independent structure calculations. \begin{figure*} \includegraphics[width=3.4in]{f9a.pdf} \includegraphics[width=3.4in]{f9b.pdf} \caption{ Comparison of the expansion opacity for pure cerium ($Z = 58$) ejecta, computed using the \texttt{Autostructure}\ line data (red lines) and the VALD linelist (green lines). The calculations assume a density $\rho = 10^{-13}$~\ensuremath{{\rm g~cm^{-3}}}, time since ejection $\ensuremath{t_{\mathrm{ej}}} = 1$~day, and a wavelength binning $\Delta \lambda = 0.01 \lambda$. {\it Left:} Line expansion opacity versus wavelength for a temperature $T = 5000$~K.
At optical wavelengths, the \texttt{Autostructure}\ results are in reasonably good agreement with the VALD results; both are orders of magnitude higher than the opacity of pure iron ejecta (brown line). {\it Right:} Planck mean opacity as a function of temperature. Only CeII and CeIII are included in the calculation. \label{fig:ce_opacity}} \end{figure*} \section{Opacities of r-process Mixtures} \label{sec:rp_mix} \begin{figure} \includegraphics[width=3.50in]{f10.pdf} \caption{Dependence of the mean expansion opacity on the abundance of lanthanides. The solid lines show the Planck mean opacity for various mass fractions of neodymium in a mixture with iron. The dashed line shows the opacity of the approximate r-process mixture (with all 14 lanthanides) discussed in \S\ref{sec:rp_mix}. \label{fig:Nd_abun}} \end{figure} Although we have only calculated atomic structure models for a few ions, the results (Figure~\ref{fig:high_Z_exp}) suggest that ions of similar complexity have roughly similar opacities. This allows us to construct approximate r-process mixtures based on the representative cases. In an r-process mixture, the abundance of any individual lanthanide is relatively low $(\la 1\%)$. Nevertheless, these species likely dominate the total opacity. In fact, the opacity will depend rather weakly on the exact lanthanide abundance. This is because for the conditions found in NSM ejecta, many of the strong lanthanide lines are extremely optically thick ($\tau_s \gg 1$). Such lines contribute equally to the expansion opacity regardless of the ion's abundance, just as long as that abundance remains high enough to keep $\tau_s$ above unity. We illustrate this weak dependence on lanthanide abundance in Figure~\ref{fig:Nd_abun}, by computing the opacity of a mixture of neodymium and iron. Decreasing the Nd mass fraction by a factor of ten (from 100\% to 10\%) only reduces the total opacity of the mixture by $\sim 40\%$.
Decreasing the Nd mass fraction by two orders of magnitude (from 100\% to 1\%) reduces the total opacity of the mixture by a factor of 5. We find that the Nd opacity dominates over that of iron as long as its mass fraction is $\ga 10^{-4}$. The actual r-process ejecta from NSMs will be a heterogeneous mixture of many high $Z$ elements. This multiplicity of species should enhance the opacity, as each ion contributes a distinct series of lines. To estimate the opacity of the mixture we assume the line data of Nd is representative of all f-shell species (the lanthanides) and that iron is representative of all d-shell elements. We ignore the s-shell and p-shell elements since their opacities will be very low. We then construct the expansion opacity of the mixture by generalizing eq.~\ref{eq:ex_opacity} \begin{equation} \kappa_{\rm mix}(\lambda) = \sum_Z \frac{\xi_Z}{\rho c \ensuremath{t_{\mathrm{ej}}}} \sum_i \frac{\lambda_i}{\ensuremath{\Delta \lambda_i} } \biggl( 1 - \exp[-\tau_i(\rho_Z)] \biggr) \end{equation} where the sum over $Z$ runs over the representative ions (in this case only Fe and Nd) and the quantity $\xi_Z$ specifies the total number of elements represented by each. For Nd, $\xi_Z = 14$ to account for all 14 lanthanides, while for iron $\xi_Z = 30$ to account for all d-shell elements with $21 \le Z \le 80$. The quantity $\rho_Z = X_Z \rho$ is the density of the representative elements, where $X_Z$ is the mass fraction of each. As an illustrative r-process mixture, we assume that the average mass fraction of each lanthanide is $X_f = 1\%$, and the average d-shell species fraction is $X_d = 2\%$. The remainder of the composition was taken to be calcium (s-shell) as a neutral filler. We used the Nd line data from the {\it opt3} structure model, and the iron line data from the Kurucz CD23 list. The dashed line in Figure~\ref{fig:Nd_abun} shows the Planck mean opacity of our approximate r-process mixture.
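The generalized mixture sum above translates directly into a short routine. This is a sketch of ours (not the paper's code), with placeholder line lists; the multiplicities $\xi_Z$ and the species densities $\rho_Z = X_Z\rho$ at which the $\tau_i$ are evaluated follow the definitions in the text.

```python
import math

C_LIGHT = 2.998e10  # speed of light, cm/s

def kappa_mix(lines_by_species, xi, rho, t_ej):
    """Expansion opacity of a mixture: each representative species Z
    contributes its Sobolev line sum, weighted by the number of
    elements xi[Z] it stands in for (14 for Nd, 30 for Fe in the text).

    lines_by_species: dict Z -> list of (lambda_i, dlambda_i, tau_i),
    with tau_i already evaluated at the species density rho_Z = X_Z * rho.
    """
    total = 0.0
    for Z, lines in lines_by_species.items():
        line_sum = sum(lam / dlam * (1.0 - math.exp(-tau))
                       for lam, dlam, tau in lines)
        total += xi[Z] * line_sum / (rho * C_LIGHT * t_ej)
    return total
```

In the optically thick limit every factor $1-e^{-\tau_i}$ is unity, so each representative species contributes $\xi_Z \sum_i \lambda_i/\Delta\lambda_i / (\rho c\, t_{\rm ej})$, which is why each of the 14 lanthanides adds roughly equally to the total.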
Because each of the 14 lanthanides is assumed to contribute independently in the sum, the total opacity is essentially 14 times that of the mixture with only 1\% Nd. At certain temperatures when the lanthanide opacity dips, the d-shell opacity makes a comparable contribution. We note that the opacity of the mixture can approach the saturation level discussed in \S\ref{sec:sobolev}, such that our assumption that the strong lines do not overlap can be called into question. \section{Spectra of NS merger ejecta} \label{sec:spectra} To illustrate the general effect of our r-process opacities on the emission from NSM ejecta, we have calculated model spectra using the \texttt{Sedona}\ radiation transport code \citep{Kasen_MC}. A more comprehensive discussion of the light curves and colors of these transients, and their dependence on the ejecta parameters, is given in \cite{Barnes_2013}. As a simple, fiducial ejecta model, we considered a spherically symmetric, homologously expanding remnant with a broken power-law density profile. The total ejecta mass was taken to be $M_{\rm ej} = 0.01~\ensuremath{M_\odot}$ and the kinetic energy $E = 1/2 M_{\rm ej} v_c^2$ with a characteristic velocity $v_c = 0.1c$. The transport calculations assume the ionization/excitation state is given by LTE, and that the line source function is described by the Planck function, i.e., the medium is purely absorbing. In reality, the probability of absorption in lines may be small, with fluorescence being the more likely result of line interactions. However, supernova transport calculations have shown that for complex ions, repeated fluorescence among a multitude of lines has an approximately thermal character \citep{Pinto_2000a, Kasen_IR}. We use the opacities of an r-process mixture derived in \S\ref{sec:rp_mix}.
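For reference, the Planck mean quoted in the figures is the blackbody-weighted average of $\kappa_\lambda$. A minimal sketch of that weighting on a discrete wavelength grid follows (the standard definition with cgs constants; the grid and opacity arrays here are placeholders, not the paper's data):

```python
import math

H = 6.626e-27   # Planck constant, erg s
KB = 1.381e-16  # Boltzmann constant, erg/K
C = 2.998e10    # speed of light, cm/s

def planck_lambda(lam_cm, T):
    """Planck function B_lambda (per unit wavelength), cgs units."""
    x = H * C / (lam_cm * KB * T)
    return 2.0 * H * C**2 / lam_cm**5 / math.expm1(x)

def planck_mean(lams_cm, kappas, T):
    """Planck-mean opacity: kappa_lambda weighted by B_lambda(T),
    as a simple discrete sum over the wavelength grid."""
    weights = [planck_lambda(lam, T) for lam in lams_cm]
    num = sum(k * w for k, w in zip(kappas, weights))
    return num / sum(weights)
```

The temperature dependence seen in the mean-opacity curves comes entirely from this weighting: as the ejecta cool, the Planck weight slides to longer wavelengths, where the line opacity differs.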
\begin{figure} \includegraphics[width=3.5in]{f11.pdf} \caption{Synthetic spectra (2.5 days after mass ejection) of the r-process SN model described in the text, calculated using either Kurucz iron group opacities (black line) or our \texttt{Autostructure}\ derived r-process opacities (red line). For comparison, we overplot blackbody curves of temperature $T = 6000$~K (black dashed) and $T = 2500$~K (red dashed). The inset shows the corresponding bolometric light curves assuming iron (black) or r-process (red) opacities. \label{fig:rp_spectra}} \end{figure} \begin{figure} \includegraphics[width=3.5in]{f12.pdf} \caption{Synthetic spectral time series of the r-process SN model described in the text. The times since mass ejection are marked on the figure. \label{fig:spec_series}} \end{figure} The high opacity of r-process material has a significant impact on the predicted radioactive transients from NSMs (Figure~\ref{fig:rp_spectra}). Compared to previous calculations (which assumed iron-like opacities), the predicted bolometric light curve is of much longer duration, $\sim 1$ week as opposed to $\sim 1$ day. This is due to the longer effective diffusion time through the more opaque ejecta. The peak luminosity is also lower, as the radiation suffers greater losses due to expansion over this period. The spectrum at 2.5 days after merger is much redder, with most of the flux emitted in the near infrared ($\sim 1~\mu$m). Due to the extreme line blanketing at bluer wavelengths, the photons are eventually redistributed (through lines) to the infrared, where the opacities are lower and radiation can escape more readily. Other than the unusually red color, the r-process spectra generally resemble those of ordinary SNe, and in particular those with high expansion velocities (e.g., the hyper-energetic Type~Ic event, SN~1998bw \citep{Galama_1998}).
The continuum flux, which is produced by emission in the Doppler-broadened forest of lines, resembles a blackbody with a few broad ($\sim 200$~\AA) spectral features. It is not easy to associate these features with either absorption or emission from a single line; instead they arise from blends of many lines. Because our atomic structure models do not accurately predict line wavelengths (and we only include lines of Nd and Fe), the locations of the features in our synthetic spectra are not to be trusted. Nevertheless, the general appearance of the model spectra is likely qualitatively correct. One can anticipate where features are most likely to appear by examining the energy spacing of the low-lying levels of the lanthanides. Figure~\ref{fig:spec_series} shows the time evolution of the synthetic spectra. At the earliest times ($\lesssim 0.25$~days after ejection) some flux emerges at optical wavelengths, but this phase is short-lived. By day $0.5$, the optical emission has faded, and the spectra evolve relatively slowly thereafter, with effective blackbody temperatures steady in the range $T \approx 2000-3000$~K. The temporal evolution can be understood by considering the mean opacity curves (e.g., Figure~\ref{fig:Nd_Fe_Si}). At early times, the ejecta is relatively hot ($\gtrsim 4000$~K) throughout, and the opacity is roughly constant with radius. By day $\sim 0.3$, however, the outermost layers have cooled below $\lesssim 3000$~K, and the r-process opacities drop sharply due to lanthanide recombination. The ejecta photosphere forms near the recombination front (as overlying neutral layers are essentially transparent), which regulates the effective temperature to be near the recombination temperature. This behavior is similar to the plateau phase of the (hydrogen-rich) Type~IIP SNe, although in this case the opacity is due to line blanketing, not electron scattering.
More importantly, the temperature at the recombination front ($T_{I} \sim 2500$~K) is a factor of $\sim 2$ lower for r-process ejecta, as the ionization potentials of the lanthanides ($\sim 6$~eV) are lower than that of hydrogen ($\sim 13.6$~eV). It is possible that non-thermal ionization from radioactive decay products (see \S\ref{sec:LTE}) becomes important at these phases and prevents the outer layers from ever becoming completely neutral. \begin{figure} \includegraphics[width=3.5in]{f13.pdf} \caption{Effect of varying the atomic structure model on the predicted observables of an r-process SN. The figure shows the synthetic spectrum (2.5 days after ejection) calculated using \texttt{Autostructure}\ linelists under different optimization schemes. The dashed lines show, for comparison, blackbody curves of temperature $T = 2500$~K. \label{fig:comp_spec}} \end{figure} Our calculated SEDs are somewhat sensitive to the atomic structure model used to generate the line data. Figure~\ref{fig:comp_spec} compares calculations using line data from the different \texttt{Autostructure}\ optimization runs ({\it opt1, opt2,} and {\it opt3}). The observed differences can be taken as some measure of the uncertainty resulting from inaccuracies in our atomic structure calculations. Notably, the spectrum calculated using the {\it opt2} linelist has significantly higher flux in the optical ($\sim 6000$~\AA). This is presumably due to the lower overall opacity of the {\it opt2} model (Figure~\ref{fig:Nd_vary}). Given the superior match of the {\it opt3} model to the experimental level data, we consider the spectral predictions using this line data to be the most realistic; however, it is clear that some significant uncertainties remain. Another concern for the spectrum predictions is the potential breakdown of the Sobolev approximation (see \S\ref{sec:sobolev}).
At bluer wavelengths, the mean spacing of strong lines can become less than the intrinsic (presumed thermal) width of the lines, which violates the assumptions used to derive an expansion opacity. It is not immediately clear how this will impact the results. Two lines that overlap exactly will behave like a single line, which suggests that an overlapping line should be discounted, not double counted. In this case, the opacity should saturate at a maximum value $\kappa_{\rm sat}$ (equation~\ref{eq:kappa_sat}). On the other hand, in this saturation limit, photons can no longer escape lines by redshifting past them, as there are no longer any optically thin ``windows'' between lines. The individual line optical depths then become relevant, and the effective opacity may be larger than one would estimate from the expansion formalism. In practice, the impact of line overlap may not be so dramatic -- the opacity generically declines toward longer wavelengths, and the net effect of the transport is to distribute photons to the red/infrared where saturation may no longer be an issue. Fully addressing the question will require transport calculations that dispense with the Sobolev approximation and resolve individual line profiles. \section{Summary and Conclusions} The opacity of r-process ejecta is orders of magnitude higher than that of ordinary supernova debris, a fact we have demonstrated using new atomic structure calculations (Figure~\ref{fig:high_Z_exp}), pre-existing line data (Figure~\ref{fig:ce_opacity}), and simple physical counting arguments (Figure~\ref{fig:complexity}). There are two physical reasons for the high opacity: 1) {\it Complexity:} The r-process composition includes rare elements with complex valence electron structure, in particular the lanthanides which have an open valence $4f$-shell.
Such elements have a significantly greater number of levels and lines, which results in an overall higher expansion opacity; 2) {\it Multiplicity:} The r-process produces a heterogeneous mixture of many elements, each of which contributes a distinct series of lines. Since the expansion opacity depends on the total number of strong lines (and not the strength of any individual line), this diversity of the mixture enhances the opacity relative to a more homogeneous composition. While our r-process opacity calculations offer a significant improvement over previous estimates (which were little more than educated guesses), several uncertainties remain. The \texttt{Autostructure}\ models only approximate the level structure of the high-Z elements, and so do not correctly predict the wavelength of individual transitions. Fortunately, the pseudo-continuum opacities depend only on the statistical distribution of lines and are fairly robust, although our numerical experiments indicate uncertainties at the factor of $\sim 2$ level. In the future, we can iteratively tune the structure models to better reproduce the observed energy levels, although this is a time-consuming process. Moreover, many of the high-Z ions lack good experimental level data. Another, more important, uncertainty is that we have used the radiative data for one species (Nd, $Z=60$) to represent all lanthanides. In fact, not all lanthanides are equal -- the ions whose valence f-shell is nearly empty (e.g., La, $Z=57$) or nearly closed (Yb, $Z=70$) are considerably less complex, and should have correspondingly lower opacities. On the other hand, gadolinium (Gd, $Z=64$) has a nearly half-filled f-shell, and is one of the most complex species on the periodic table. Simple counting arguments (Figure~\ref{fig:complexity}) suggest that Gd may have an opacity $\gtrsim 10$ times that of Nd.
We therefore suspect that our current opacities {\it underestimate} the true values for an r-process mixture where Gd is present at the $\sim 1\%$ level. In future work, we will quantify the line data for all lanthanides. The actinides should also be considered, as they are open f-shell as well, although only a few species (e.g., uranium) likely have high enough abundance to make a difference. In total, the atomic data required for r-process opacities is massive, involving numerous structure calculations and many billions of lines. The high opacity of r-process material has significant implications for discovering and interpreting the radioactively powered transients associated with NSMs. With more realistic opacities, the predicted light curves are of longer duration and dimmer at peak \citep[see][]{Barnes_2013}. Perhaps even more important to detection is that the SED is shifted into the infrared, peaking at $\sim 1~\mu$m. This is due to the strong line blanketing at optical wavelengths, which pushes the photosphere out to cooler layers ($T \sim 2500$~K) where the lanthanides recombine and become more transparent. Other than the very red color, the spectra qualitatively resemble those of other high-velocity SNe, with a pseudo blackbody continuum and broad ($\sim 200$~\AA) spectral features. The predicted emission at optical wavelengths is somewhat sensitive to the details of the opacity and its associated uncertainties. Spectra calculated using our {\it opt2} radiative data were fairly bright in the V-band ($\sim 5000$~\AA), while those calculated using {\it opt3} data had almost no flux at these wavelengths. Because the level structure of our {\it opt3} model agrees better with experiment (and given that the overall opacity may be even higher than our Nd-based estimates) we consider the latter case to be the more likely reality. 
However, the opacities are still not fully converged, and we cannot altogether rule out the possibility that r-process SNe may show some persistent emission in the optical. These results suggest that (to the extent possible) it is worthwhile to search for and/or follow up gravitational wave sources at red or infrared wavelengths. Optical surveys, however, will still be sensitive to a radioactive transient if some component of the ejecta is nearly lanthanide-free. In fact, it is likely that, in addition to the tidal tails, a second component of lighter elements ($Z \lesssim 50$) is ejected from a post-merger disk wind. If this wind includes radioisotopes with appropriate half-lives (e.g., \ensuremath{^{56}\mathrm{Ni}}), the light curve may be relatively bright and peak in the optical \citep{Barnes_2013}. A detailed understanding of the wind nucleosynthesis and mixing with the tails is important, as our results suggest that contamination by lanthanides at just the $10^{-3}$ level may significantly raise the opacities. Beyond detection, a significant observational challenge will be confirmation that a transient is indeed due to an NSM. There are likely many classes of stellar explosions that produce low-mass ejections of radioactive material \citep[e.g.,][]{Bildsten_2007, Moriya_2010}, and an increasing number of fast, faint transients have been observed at optical wavelengths \citep[e.g.,][]{Kasliwal_2010,Perets_2010,Kasliwal_2012,Foley_2012}. Fortunately, our calculations demonstrate that the optical properties of r-process ejecta differ dramatically from transients due to lower-mass isotopes. A key distinguishing feature is that the r-process SED peaks in the infrared with a nearly constant color temperature regulated to the lanthanide recombination temperature, $\sim 2500$~K. The population of brief, infrared variables is mostly unknown, but it is possible that the infrared transient sky is much cleaner than the optical one.
By comparing observations of an r-process SN to light curve models, one could presumably constrain the mass of heavy nuclei ejected in a compact object merger, which would go a long way to understanding the unknown site(s) of the r-process. One would like to go further and spectroscopically study the abundance distribution of the outflows. That will be challenging -- the lines are heavily blended and we do not yet have accurate wavelengths for most of them. In the future, though, we can refine the line data by tuning structure models to match experimental data (where available) and can use radiative transport calculations to quantify how global abundance variations affect the blended features. While measuring a detailed abundance pattern will be hard, spectroscopic modeling should permit strong constraints on the amount and gross composition of ejecta. This assumes that radioactive r-process transients exist and that we can find them. Admittedly, we test dangerous waters any time that, lacking observational input, we attempt to describe a new astrophysical phenomenon on purely theoretical grounds. The situation here is a step more treacherous; not only must we rely on simulations of a complex macroscopic system, even the {\it microscopic} structures of our ions are model-based. Obviously, observational input is needed to ground the theory. In the meantime, the numerical experiment presented in \S\ref{sec:sn1a} may offer a bit of comfort. In that example, we calculated the light curve of an obviously crude model of a SN~Ia (a uniform \ensuremath{^{56}\mathrm{Ni}}\ blob) using opacities derived entirely from {\it ab-initio} atomic structure models. Despite deliberately ignoring decades of work in the field, our ``supernova from scratch'' predictions weren't all that bad, and certainly good enough to allow one to search for and identify a thermonuclear event.
There is then reason to hope that the predictions for r-process SNe are (or will soon be) good enough for us to know one when we see one. \begin{acknowledgements} This research has been supported by a Department of Energy Office of Nuclear Physics Early Career Award, and by the Director, Office of Energy Research, Office of High Energy and Nuclear Physics, Divisions of Nuclear Physics, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. This work is supported in part by an NSF Division of Astronomical Sciences collaborative research grant AST-1206097. The work of NRB was supported by STFC (ST/J000892/1). We are grateful for computing time made available by the National Energy Research Scientific Computing Center, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. \end{acknowledgements}
\section{Introduction} Inspired by several graph theoretical notions studied in mathematical chemistry, and, especially, by the notions of distance-balanced graphs \cite{ha,je} and the Mostar index of a graph \cite{do}, Miklavi\v{c} and \v{S}parl \cite{misp2} introduced the distance-unbalancedness of a graph $G$. Here, we confirm one of their conjectures from \cite{misp2}. Before we can explain the distance-unbalancedness as well as our contribution, we need to introduce some notation. We consider only finite, simple, and undirected graphs. For a graph $G$, and two vertices $u$ and $v$ of $G$, let ${\rm dist}_G(u,v)$ denote the {\it distance in $G$ between $u$ and $v$}, and let $n_G(u,v)$ be the number of vertices $w$ of $G$ that are closer to $u$ than to $v$, that is, that satisfy ${\rm dist}_G(u,w)<{\rm dist}_G(v,w)$. The {\it Mostar index} \cite{do} of $G$ is $${\rm Mo}(G)=\sum\limits_{uv\in E(G)}|n_G(u,v)-n_G(v,u)|.$$ A graph $G$ is {\it distance-balanced} \cite{ha,je} if $n_G(u,v)=n_G(v,u)$ for every edge $uv$ of $G$; or, equivalently, if ${\rm Mo}(G)=0$. The {\it distance-unbalancedness} \cite{misp2} of $G$ is $${\rm uB}(G)=\sum\limits_{\{ u,v\}\in {V(G)\choose 2}}|n_G(u,v)-n_G(v,u)|,$$ where ${V(G)\choose 2}$ denotes the set of all $2$-element subsets of the vertex set $V(G)$ of $G$, that is, the edge set of the complete graph with vertex set $V(G)$. A graph $G$ is {\it highly distance-balanced} \cite{misp1} if $n_G(u,v)=n_G(v,u)$ for every two distinct vertices $u$ and $v$ of $G$; or, equivalently, if ${\rm uB}(G)=0$. For a detailed discussion about the role of the above notions in mathematical chemistry, we refer to the cited references. In \cite{misp2} Miklavi\v{c} and \v{S}parl collect numerous observations concerning the distance-unbalancedness and pose several conjectures. Confirming Conjecture 4.2. from \cite{misp2}, we prove the following. 
\begin{theorem}\label{theorem1} If $T$ is a tree of order $n$, then $${\rm uB}(T)\geq {\rm uB}(K_{1,n-1})=(n-1)(n-2)$$ with equality if and only if $T$ is either a star $K_{1,n-1}$ or $n=4$ and $T$ is the path $P_4$. \end{theorem} As the definition of distance-unbalancedness involves a summation over all unordered pairs of distinct vertices, this parameter is much harder to approach than many other comparable parameters. In particular, it is much more difficult to analyze the effect of the kind of local modifications that are usual proof techniques in this area. Our proof relies on the insight, implicit in Lemma \ref{lemma1} below, that considering all unordered pairs of vertices of distance one or two is sufficient. The rest of this paper is devoted to the proof of Theorem \ref{theorem1}. \section{Proof of Theorem \ref{theorem1}} For a graph $G$, the square $G^2$ of $G$ has the same vertex set as $G$, and two distinct vertices of $G$ are adjacent in $G^2$ if their distance in $G$ is at most two. For the proof of Theorem \ref{theorem1}, we consider the following auxiliary parameter $${\rm uB}_2(G)=\sum\limits_{uv\in E(G^2)}|n_G(u,v)-n_G(v,u)|,$$ and we establish the following. \begin{lemma}\label{lemma1} If $T$ is a tree of order $n$, then ${\rm uB}_2(T)\geq (n-1)(n-2)$. \end{lemma} Before proving this lemma, we show that Theorem \ref{theorem1} is an immediate consequence. \begin{proof}[Proof of Theorem \ref{theorem1}] By definition and Lemma \ref{lemma1}, ${\rm uB}(T)\geq {\rm uB}_2(T)\geq (n-1)(n-2)$ for every tree $T$ of order $n$. It is an easy calculation that stars and $P_4$ satisfy ${\rm uB}(T)=(n-1)(n-2)$. Now, in order to complete the proof, we suppose, for a contradiction, that $T$ is a tree of order $n$ with ${\rm uB}(T)=(n-1)(n-2)$ that is neither a star nor $P_4$. Clearly, this implies that $n\geq 5$, and that $T$ has diameter at least three. 
Since ${\rm uB}(T)=(n-1)(n-2)$ implies ${\rm uB}(T)={\rm uB}_2(T)$, we have $n_T(u,v)=n_T(v,u)$ for every two vertices $u$ and $v$ at distance three. Let $u$ and $v$ be two vertices at distance three. If $u$ has a neighbor $u'$ that does not lie on the path $P$ between $u$ and $v$, and $v'$ is the neighbor of $v$ on $P$, then $u'$ and $v'$ have distance three but $n_T(u',v')<n_T(u,v)=n_T(v,u)<n_T(v',u'),$ which is a contradiction. Using $n_T(u,v)=n_T(v,u)$ this easily implies that $T$ arises from the disjoint union of two stars of order $\frac{n}{2}$ by adding an edge between the two center vertices. Now, \begin{eqnarray*} {\rm uB}(T) & \geq & {\rm uB}_2(T)\\ &=& (n-2)^2+2\left(\frac{n}{2}-1\right)^2\\ &=& (n-1)(n-2)+\frac{1}{2}(n-2)(n-4)\\ &>& (n-1)(n-2), \end{eqnarray*} which is a contradiction, and completes the proof. \end{proof} We proceed to the proof of the lemma. \begin{proof}[Proof of Lemma \ref{lemma1}] Choose the tree $T$ of order $n$ such that ${\rm uB}_2(T)$ is as small as possible. If $T$ is a path, then a simple calculation yields ${\rm uB}_2(T)=(n-1)(n-2)$, and the desired result follows. Hence, we may assume that $T$ has at least one vertex of degree at least three. We consider different cases. \medskip \noindent {\bf Case 1} {\it $T$ has exactly one vertex $c$ of degree $k$ at least three.} \medskip \noindent Let the $k$ components of $T-c$ have orders $n_1,\ldots,n_k$ with $n_1\geq \ldots\geq n_k\geq 1$. Note that all these components are paths, and that $n_1+\cdots+n_k=n-1$. 
\medskip \noindent {\bf Case 1.1} {\it $n_1\leq \frac{n}{2}$.} \medskip \noindent We have \begin{eqnarray*} {\rm uB}_2(T) & = & \sum\limits_{i=1}^k\Big((n-2)+(n-3)+\cdots+(n-2n_i)\Big)+\sum\limits_{i=1}^{k-1}\sum\limits_{j=i+1}^{k}(n_i-n_j)\\ & \geq & \sum\limits_{i=1}^k\Big((n-2)+(n-3)+\cdots+(n-2n_i)\Big)+(n_1-n_2)\\ & = & \sum\limits_{i=1}^k\Big((2n_i-1)n-n_i(2n_i+1)+1\Big)+(n_1-n_2)\\ & = & f_1(n,k)-\sum\limits_{i=1}^k2n_i^2+(n_1-n_2), \end{eqnarray*} where $f_1(n,k)$ is a suitable function of $n$ and $k$. We consider the following optimization problem: \begin{eqnarray}\label{e1} \begin{array}{rrcl} \min & f_1(n,k)-\sum\limits_{i=1}^k2n_i^2+(n_1-n_2) & & \\[3mm] s.th. & \frac{n}{2}\geq n_1\geq \ldots\geq n_k&\geq &1 \\ & n_1+\cdots+n_k&=&n-1 \\ & n_1,\ldots,n_k & \in& \frac{\mathbb{N}}{2}. \end{array} \end{eqnarray} Note that in (\ref{e1}), the originally integral values of the $n_i$ have been relaxed to being half-integral. Let $(n_1,\ldots,n_k)$ be a lexicographically maximal optimal solution of (\ref{e1}). If $n_1<\frac{n}{2}$ and $n_2>n_3$, then \begin{eqnarray*} \Bigg(-2\left(n_1+\frac{1}{2}\right)^2-2\left(n_2-\frac{1}{2}\right)^2+\left(n_1+\frac{1}{2}\right)-\left(n_2-\frac{1}{2}\right)\Bigg) &-&\Bigg(-2n_1^2-2n_2^2+n_1-n_2\Bigg)\\ &=&-2(n_1-n_2)\\ &\leq &0 \end{eqnarray*} implies that $\left(n_1+\frac{1}{2},n_2-\frac{1}{2},\ldots,n_k\right)$ is a lexicographically larger optimal solution of (\ref{e1}), which is a contradiction. If $n_1<\frac{n}{2}$, $n_i>1$ for some $i\in \{ 3,\ldots,k\}$, and $i$ is chosen largest with this property, then \begin{eqnarray*} \Bigg(-2\left(n_1+\frac{1}{2}\right)^2-2\left(n_i-\frac{1}{2}\right)^2 +\left(n_1+\frac{1}{2}\right)\Bigg) &-&\Bigg(-2n_1^2-2n_i^2+n_1\Bigg)\\ &=&-2(n_1-n_i)-\frac{1}{2}\\ &<&0 \end{eqnarray*} implies that $\left(n_1+\frac{1}{2},\ldots,n_i-\frac{1}{2},\ldots,n_k\right)$ is a better solution of (\ref{e1}), which is a contradiction. 
Finally, if $n_1=\frac{n}{2}$ and $n_2<\frac{n}{2}-k+1$, then $n_i>1$ for some $i\in \{ 3,\ldots,k\}$. If $i$ is largest with this property, then \begin{eqnarray*} \Bigg(-2\left(n_2+\frac{1}{2}\right)^2-2\left(n_i-\frac{1}{2}\right)^2-\left(n_2+\frac{1}{2}\right)\Bigg) &-&\Bigg(-2n_2^2-2n_i^2-n_2\Bigg)\\ &=&-2(n_2-n_i)-\frac{3}{2}\\ &<&0 \end{eqnarray*} implies that $\left(n_1,n_2+\frac{1}{2},\ldots,n_i-\frac{1}{2},\ldots,n_k\right)$ is a better solution of (\ref{e1}), which is a contradiction. These observations imply that \begin{enumerate}[(a)] \item either $n\geq 2k$, $n_1=\frac{n}{2}$, $n_2=\frac{n}{2}-k+1$, and $n_3=\ldots=n_k=1$, \item or $n<2k$, $n_1=n-k$, and $n_2=\ldots=n_k=1$. \end{enumerate} In the first case, \begin{eqnarray*} {\rm uB}_2(T) & \geq & \sum\limits_{i=1}^k\Big((2n_i-1)n-n_i(2n_i+1)+1\Big)+n_1-n_2\\ &\stackrel{(a)}{=}& (n-1)(n-2)+(n-2k)(k-2)\\ &\geq & (n-1)(n-2), \end{eqnarray*} and, in the second case, \begin{eqnarray*} {\rm uB}_2(T) & \geq & \sum\limits_{i=1}^k\Big((2n_i-1)n-n_i(2n_i+1)+1\Big)+n_1-n_2\\ &\stackrel{(b)}{=}& (n-1)(n-2)+(2k-n)(n-k-1)\\ &\geq & (n-1)(n-2). \end{eqnarray*} Altogether, we obtain ${\rm uB}_2(T)\geq (n-1)(n-2)$ as required in both cases. \medskip \noindent {\bf Case 1.2} {\it $n_1>\frac{n}{2}$.} \medskip \noindent We have \begin{eqnarray*} {\rm uB}_2(T) & = & \Big((n-2)+\cdots+1+0+1+\ldots+(2n_1-n)\Big) +\sum\limits_{i=2}^k\Big((n-2)+\cdots+(n-2n_i)\Big)\\ &&+\sum\limits_{i=1}^{k-1}\sum\limits_{j=i+1}^{k}(n_i-n_j)\\ & \geq & \Big((n-2)+\cdots+1+0+1+\ldots+(2n_1-n)\Big) +\sum\limits_{i=2}^k\Big((n-2)+\cdots+(n-2n_i)\Big)\\ &+&\sum\limits_{i=2}^{k}(n_1-n_i)\\ & = & \frac{1}{2}(n-1)(n-2)+\frac{1}{2}(2n_1-n)(2n_1-n+1)+(k-1)n_1\\ &&+\sum\limits_{i=2}^k\Big((2n_i-1)n-n_i(2n_i+1)+1-n_i\Big)\\ & = & f_2(n,k) +2n_1^2-n_1(4n-k)-\sum\limits_{i=2}^k(2n_i^2+2n_i), \end{eqnarray*} where we used $\sum\limits_{i=2}^k 2n_in=(2(n-1)-2n_1)n$, and $f_2(n,k)$ is a suitable function of $n$ and $k$. 
Note that, for $i\in \{ 2,\ldots,k\}$, we have $n_1+n_i\leq n_1+n_2\leq n-k+1$, and, hence, $$4(n_1+n_i)-4n+k+2\leq -3k+6<0.$$ If $n_i>1$ for some $i\in\{ 2,\ldots,k\}$, and $i$ is largest with this property, then \begin{eqnarray*} \Bigg(2(n_1+1)^2-(n_1+1)(4n-k)-2(n_i-1)^2-2(n_i-1)\Bigg) &-&\Bigg(2n_1^2-n_1(4n-k)-2n_i^2-2n_i\Bigg)\\ &=&4(n_1+n_i)-4n+k+2\\ &<&0. \end{eqnarray*} This observation implies that $$ \begin{array}{rrcl} \min & f_2(n,k)+2n_1^2-n_1(4n-k)-\sum\limits_{i=2}^k(2n_i^2+2n_i) & & \\[3mm] s.th. & n_1&>&\frac{n}{2}\\ &n_1\geq \ldots\geq n_k&\geq &1 \\ & n_1+\cdots+n_k&=&n-1 \\ & n_1,\ldots,n_k & \in& \mathbb{N} \end{array} $$ is assumed \begin{enumerate}[(c)] \item for $n_1=n-k$ and $n_2=\ldots=n_k=1$. \end{enumerate} This implies \begin{eqnarray*} {\rm uB}_2(T) & \geq & \frac{1}{2}(n-1)(n-2)+\frac{1}{2}(2n_1-n)(2n_1-n+1)+(k-1)n_1\\ &&+\sum\limits_{i=2}^k\Big((2n_i-1)n-n_i(2n_i+1)+1-n_i\Big)\\ & \stackrel{(c)}{\geq} & (n-1)(n-2)+(k-1)(k-2)\\ & \geq & (n-1)(n-2), \end{eqnarray*} and, hence, also ${\rm uB}_2(T)\geq (n-1)(n-2)$ as required in this case. \medskip \noindent {\bf Case 2} {\it $T$ has at least two vertices of degree at least three.} \medskip \noindent Considering two vertices of degree at least three at maximum distance, it follows that $T$ has a vertex $c$ of degree $k+1$ at least three such that $T-c$ has \begin{itemize} \item $k$ components that are paths of orders $n_1,\ldots,n_k$ with $n_1\geq \ldots \geq n_k\geq 1$ and $$n':=1+n_1+\ldots+n_k\leq \frac{n}{2},$$ as well as \item one component $K$ of order $n-n'$. \end{itemize} Let $d$ be the neighbor of $c$ in $V(K)$. Let the tree $T'$ arise from the disjoint union of $K$ and a path $P$ of order $n'$ by adding one edge between $d$ and an endvertex of $P$. Our goal is to show that ${\rm uB}_2(T)>{\rm uB}_2(T')$, which would contradict the choice of $T$, and complete the proof. 
We have \begin{eqnarray*} {\rm uB}_2(T)-{\rm uB}_2(T') & = & \sum\limits_{i=1}^k\Big((n-2)+\cdots+(n-2n_i)+(n-n')-n_i\Big) +\sum\limits_{i=1}^{k-1}\sum\limits_{j=i+1}^{k}(n_i-n_j)\\ && -\Big((n-2)+\cdots+(n-(2n'-1))\Big)\\ & \geq & \sum\limits_{i=1}^k\Big((n-2)+\cdots+(n-2n_i)+(n-n')-n_i\Big)\\ && -\Big((n-2)+\cdots+(n-(2n'-1))\Big)\\ &=& \sum\limits_{i=1}^k\Big((2n_i-1)n-n_i(2n_i+1)+1+(n-n')-n_i\Big)\\ &&-\Big((2n'-2)n-n'(2n'-1)+1\Big)\\ &=& f_3(n,n',k)-\sum\limits_{i=1}^k2n_i^2, \end{eqnarray*} where $f_3(n,n',k)$ is a suitable function of $n$, $n'$, and $k$. By the convexity of $x\mapsto x^2$, $$ \begin{array}{rrcl} \min & f_3(n,n',k)-\sum\limits_{i=1}^k2n_i^2 & & \\[3mm] s.th. & n_1\geq \ldots\geq n_k&\geq &1 \\ & n_1+\cdots+n_k&=&n'-1 \\ & n_1,\ldots,n_k & \in& \mathbb{N} \end{array} $$ is assumed \begin{enumerate}[(d)] \item for $n_1=n'-k$ and $n_2=\ldots=n_k=1$. \end{enumerate} Note that $3n'=2n'+n'\geq 2(k+1)+3\geq 2k+5$. Now, we obtain \begin{eqnarray*} {\rm uB}_2(T)-{\rm uB}_2(T') & \geq & \sum\limits_{i=1}^k\Big((2n_i-1)n-n_i(2n_i+1)+1+(n-n')-n_i\Big)\\ &&-\Big((2n'-2)n-n'(2n'-1)+1\Big)\\ & \stackrel{(d)}{\geq} & (3n'-2k-3)(k-1)\\ & \stackrel{k\geq 2}{>} & 0, \end{eqnarray*} which is the desired contradiction, completing the proof. \end{proof}
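The values appearing in Theorem \ref{theorem1} are easy to check numerically. The following brute-force computation (ours, not part of the proof) evaluates ${\rm uB}$ from BFS distances and confirms that stars attain $(n-1)(n-2)$ and that $P_4$ attains $6$:

```python
from collections import deque
from itertools import combinations

def bfs_dist(adj, src):
    """Single-source distances in an unweighted graph via BFS."""
    dist = {src: 0}
    q = deque([src])
    while q:
        x = q.popleft()
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                q.append(y)
    return dist

def uB(adj):
    """Distance-unbalancedness: sum over all vertex pairs {u,v} of
    |n(u,v) - n(v,u)|, where n(u,v) counts the vertices strictly
    closer to u than to v."""
    dists = {v: bfs_dist(adj, v) for v in adj}
    total = 0
    for u, v in combinations(adj, 2):
        nu = sum(1 for w in adj if dists[u][w] < dists[v][w])
        nv = sum(1 for w in adj if dists[v][w] < dists[u][w])
        total += abs(nu - nv)
    return total

def star(n):
    """Star K_{1,n-1}: center 0 joined to leaves 1..n-1."""
    adj = {0: list(range(1, n))}
    for i in range(1, n):
        adj[i] = [0]
    return adj

def path(n):
    """Path P_n on vertices 0-1-...-(n-1)."""
    return {i: [j for j in (i - 1, i + 1) if 0 <= j < n]
            for i in range(n)}

for n in range(3, 8):
    print(n, uB(star(n)), (n - 1) * (n - 2))
print(uB(path(4)))  # P_4 also attains (4-1)(4-2) = 6
```

For the star, only the center-leaf pairs contribute ($n-1$ pairs, each with imbalance $n-2$), which recovers the extremal value directly.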
\section{Introduction} \indent \indent Since the original Randall-Sundrum model \cite{Randall:1999vf,Randall:1999ee}, many works have sought to extend the localization of various fields on a 3-brane embedded in a higher-dimensional bulk. Besides the localization of gravity and other fields, many models have been suggested to explain other physical problems, for instance, the small value of the cosmological constant \cite{Chen:2000at}. In order to explain the geometry used to localize the fields in noncompact extra dimensions, some authors have assumed that the 3-brane is generated by a topological defect. In a six-dimensional bulk, Cohen and Kaplan \cite{Cohen:1999ia} have found such a geometry generated by a global string. In this context the geometry has cylindrical symmetry and a naked singularity at $r=0$, where the 3-brane is placed, as well as another singularity far from the origin. Gregory \cite{Gregory:1999gv} has found a nonsingular string-like solution by adding a cosmological constant to the bulk and splitting the metric inside and outside the core of the string defect. From the continuity boundary condition on the core, Gregory has found the phase space of solutions describing the stable and unstable points. For a geometry generated by a local string-like defect, Gherghetta and Shaposhnikov \cite{Gherghetta:2000qi} have found a solution with negative bulk cosmological constant that traps gravity. Oda \cite{Oda:2000zc} has extended this solution to a bulk built from a warped product of a (p-1)-brane and an $S^{n}$ sphere, and has studied the localization of many kinds of fields. All the solutions above assume that the transverse space has spherical symmetry and that the whole bulk has cylindrical symmetry. We have studied an extension of this approach for the localization of a scalar field, where the transverse space has a conifold geometry whose singularity depends on a resolution parameter.
The conifold here is a conical manifold $\mathcal{C}^{n}$ over a space $X^{n-1}$ called the base space. $X^{n-1}$ is topologically equivalent to $S^{n-1}$ defined by the coset $X^{n-1}=SU(n-1)/SU(n)$ \cite{Candelas:1989js}. It has a naked singularity that arises as an orbifold fixed point of the group $Z_{n}$, i.e., $\mathcal{C}^{n}=R^{n}/Z_{n}$. The conifold is an example of a Calabi-Yau space, a Ricci-flat manifold that is a candidate for the internal space in compactifications of string theories. The conifold generates all \textit{Calabi-Yau} spaces through a process that produces singularities, called conifold transitions \cite{Greene:1995hu}. In this process, some fields become massless and the spectrum of the fields changes \cite{p}. Despite these interesting properties, general relativity is not well defined at singularities, and it is sometimes necessary to remove the conical singularity. There are two main processes to smooth out the singularity: the first is called deformation, because it deforms the quadric that defines the conifold; the second is called resolution, because it introduces a resolution parameter that controls the blow-up of the singularity \cite{Candelas:1989js}. These processes are used to study extensions of the AdS-CFT correspondence \cite{Klebanov:2000hb,Pando Zayas:2000sq}. The change of spectrum on conifold spaces and the symmetry properties of their smoothed versions have motivated the study of those spaces in brane-world scenarios. Firouzjahi and Tye \cite{Firouzjahi:2005qs} have studied the behavior of the gravitational and Kaluza-Klein modes on the deformed conifold, and have shown that the graviton has a rather uniform probability distribution everywhere, while a KK mode is peaked in the region near $r=0$. This region is called the throat because it has a large curvature and interpolates between asymptotically flat regions.
Furthermore, Noguchi \textit{et al} \cite{Noguchi:2005ws} have used the Klebanov-Strassler throat of a deformed conifold in order to obtain localized gravitational KK modes. Since the supergravity solution of a 3-brane converges to $AdS_{5}\times S^{5}$ for $r\rightarrow 0$, Brummer \textit{et al} \cite{Brummer:2005sh} have used a conifold throat to deduce and extend the original Randall-Sundrum geometry. Further, V\'{a}zquez-Poritz \cite{VazquezPoritz:2001zt} has shown that the $Z_{2}$ symmetry of the Randall-Sundrum model can be deduced from a dimensional reduction of a six-dimensional Eguchi-Hanson resolved conifold. Since this symmetry is natural in Eguchi-Hanson spaces, V\'{a}zquez-Poritz has shown that the metric used for the localization of gravity can be obtained from a particular conifold. Furthermore, Pont\'{o}n and Poppitz \cite{Ponton:2000gi} have studied the relation between gravity localization on string-like defects and an AdS-CFT correspondence on the so-called hidden brane. Since the string-like geometry has a conical singularity far from the origin, the authors have found that the singularity could be resolved using the AdS-CFT duality. On the other hand, Kehagias \cite{Kehagias:2004fb} has used a compact conical transverse space to explain the small value of the cosmological constant. All of these points have motivated us to study geometries where the transverse space is a smoothed conifold, and to ask whether such a geometry could localize some kind of field on a 3-brane. In this work we replace the usual spherically symmetric transverse space by a 2-cycle of the resolved conifold. Since the resolved conifold has spherical symmetry for fixed $r$ and the radial metric component approaches one asymptotically, this geometry converges to the string-like one if we put the 3-brane far from the tip of the cone.
Another feature of the resolved conifold that is of great importance in Randall-Sundrum-like models is its $\mathbb{Z}_{2}$ symmetry, as pointed out in Ref. \cite{Pando Zayas:2000sq}. We have studied here the effects that variations of the resolution parameter, or in other terms, of the singularity, have on the localization of a scalar field on a 3-brane placed at the origin of the resolved conifold. The study of the effects that geometrical singularities have on localization problems has already been carried out by Cvetic \textit{et al} \cite{Cvetic:2000dz}, where the geometry was generated by a singular domain wall, as well as by Gregory and Santos \cite{Gregory:2002tp} in the global vortex geometry. In the present work, however, we have chosen a transverse space whose singularity depends continuously on a parameter. The resolved conifold geometry also generalizes the so-called cigar-like geometries. Indeed, in cigar manifolds the curvature is large but not infinite around the origin, and the manifold is asymptotically flat \cite{chow}. In the resolved conifold geometry the value of the curvature at the origin is parameterized, and asymptotically the curvature converges to zero or another constant. Using a cigar-like geometry without cosmological constant, de Carlos and Moreno have found a supersymmetric solution that traps gravity \cite{deCarlos:2003nq}. On the other hand, the so-called Ricci flow is given by a parameter evolution of the metric through a heat-type equation called the Ricci equation \cite{chow,perelman,topping}. This flow provides information about the stability of the manifold, such as the formation or blow-up of singularities. Therefore, it is interesting to use the smoothed conifold geometries to study the stability of the bulk geometry in brane worlds using methods of geometric analysis such as the Ricci equation.
This work is organized as follows: in section \ref{Conifold} we define the metric of the resolved conifold and study its main properties, as well as its dependence on the resolution parameter. We also define the conifold 2-cycle that we work with. In section \ref{Bulk geometry} we propose a warped metric ansatz, study its Einstein equations and compare it with the well-studied string-like solutions. Still in this section, we study this geometry for a linear and for a nonlinear warp factor. In section \ref{Scalarfieldinminimalcoupling} we study the localization of a real scalar field using a linear and a nonlinear warp factor, for both massive and massless modes. Finally, in section \ref{Conclusions} we conclude, summarizing our results and presenting some perspectives. \section{Conifold geometry} \label{Conifold} The 6-conifold is a conical manifold $C_{6}\subset \mathbb{C}^{4}$ defined by the quadric \cite{Candelas:1989js}: \begin{equation} z_{1}^{2}+z_{2}^{2}+z_{3}^{2}+z^{2}_{4}=0. \end{equation} The metric of the 6-conifold over a compact space $X^{5}$ is: \begin{equation} ds^{2}_{6}=dr^{2}+r^{2}ds^{2}(X^{5}). \end{equation} This space has a naked singularity at $r=0$. For $X^{5}=T^{1,1}=SU(2)\times SU(2)/U(1)$ the metric is \cite{p,Klebanov:2000hb}: \begin{eqnarray} ds^{2}_{6} & = & dr^{2}+\frac{r^{2}}{9}(d\psi+\cos\theta_{1}d\phi_{1}+\cos\theta_{2}d\phi_{2})^{2}\nonumber\\ & + & \frac{r^{2}}{6}(d\theta_{1}^{2}+\sin^{2}\theta_{1} d\phi_{1}^{2}+d\theta_{2}^{2}+\sin^{2}\theta_{2} d\phi_{2}^{2}).
\end{eqnarray} A smooth version of this conifold, called the resolved conifold, has the metric \cite{Pando Zayas:2000sq,Cvetic:2000mh} \begin{eqnarray} ds^{2}_{6} & = & \left(\frac{r^{2}+6a^{2}}{r^{2}+9a^{2}}\right)dr^{2}+\frac{r^{2}}{9}\left(\frac{r^{2}+9a^{2}}{r^{2}+6a^{2}}\right)(d\psi+\cos\theta_{1}d\phi_{1}+\cos\theta_{2}d\phi_{2} )^{2}\nonumber\\ & + & \frac{1}{6}r^{2}(d\theta_{1}^{2}+\sin^{2}\theta_{1} d\phi_{1}^{2})+\frac{1}{6}(r^{2}+6a^{2})(d\theta_{2}^{2}+\sin^{2}\theta_{2} d\phi_{2}^{2}). \end{eqnarray} \begin{figure} \centering \includegraphics[scale=1.1]{resolvedscalarcurvature.eps} \caption{The scalar curvature of the resolved conifold. The origin was shifted to the point $r=5$. For $a=1$ (thick line), the function is regular at $r=0$, while for $a=0$ (thin line) the scalar curvature diverges at the origin.} \label{resolvedscalarcurvature} \end{figure} We have plotted the scalar curvature in figure (\ref{resolvedscalarcurvature}), where we have shifted the origin to the point $r=5$. Note that the curvature is smooth at $r=0$ for $a=1$, while for $a=0$ it diverges at the origin. Furthermore, the resolved conifold has positive curvature and is asymptotically flat. These features have motivated us to use this manifold as a prototype for extending the transverse spaces used in brane worlds. As a matter of fact, many authors have studied the localization of fields in spherical backgrounds whose transverse space has positive, constant and nonsingular curvature \cite{Cohen:1999ia,Gregory:1999gv,Gherghetta:2000qi,Oda:2000zc}. Since the resolved conifold is parameterized by the resolution parameter that controls the singularity at $r=0$, we can study the effects the singularity has on the localization of fields in this particular space.
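As a quick numerical illustration of the radial component $g_{rr}=(r^{2}+6a^{2})/(r^{2}+9a^{2})$ appearing in the resolved metric above (a sketch with sample values of our choosing): for $a>0$ it starts at $2/3$ at the tip and tends to one, while for $a=0$ it is identically one, as in the string-like geometries.

```python
def alpha(r, a):
    """Radial metric component g_rr of the resolved conifold."""
    return (r**2 + 6 * a**2) / (r**2 + 9 * a**2)

# For a > 0 the component starts at 2/3 on the tip and tends to 1,
# recovering the flat (string-like) radial factor asymptotically.
assert abs(alpha(0.0, 1.0) - 2.0 / 3.0) < 1e-12
assert abs(alpha(1e6, 1.0) - 1.0) < 1e-9
# For a = 0 (singular cone) the radial factor is identically 1.
assert alpha(5.0, 0.0) == 1.0
```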
Note that in the limit $r\rightarrow 0$ the metric converges to a spherical one of radius $a$, \begin{equation} \lim_{r\rightarrow 0}{ds^{2}_{6}}=a^{2}(d\theta_{2}^{2}+\sin^{2}\theta_{2} d\phi_{2}^{2}), \end{equation} which has no singularity. Topologically, this can be seen as the result of cutting out a small neighborhood around $r=0$ and replacing it with an $S^{2}$ of radius $a$. Since in the limit $a\rightarrow 0$ we recover the singular conifold, the radius $a$ can be used to measure how smooth the conifold is, and it is therefore called the resolution parameter. Now, if we take the angular coordinates $\psi,\phi_{1},\theta_{2},\phi_{2}$ to be constant, the 2-cycle of the cone can be written as the resolved 2-cone, namely \begin{eqnarray} ds^{2}_{2} & = & \left(\frac{r^{2}+6a^{2}}{r^{2}+9a^{2}}\right)dr^{2}+ \frac{1}{6}(r^{2}+6a^{2})d\theta^{2}. \end{eqnarray} This cone has a radial metric component $g_{rr}=\alpha(r)=\left(\frac{r^{2}+6a^{2}}{r^{2}+9a^{2}}\right)$ whose graph is plotted in fig.(\ref{resolvedconifoldmetric}). Note that $\lim_{r\rightarrow \infty}{g_{rr}}=1$, and therefore asymptotically the cone approaches the plane $\mathbb{R}^{2}$ with a cylindrical metric of effective radius $r_{eff}=\sqrt{\frac{(r^{2}+6a^{2})}{6}}$, which is the transverse metric used in string-like geometries. Near $r=0$ we have a hyperbolic behavior with high curvature, and this region is called the throat. The angular resolved conifold metric component $\beta(r,a)=\frac{(r^{2}+6a^{2})}{6}$ has a conical singularity dependent on the resolution parameter. It is worthwhile to mention that as $a\rightarrow 0$ the width of the throat approaches zero. Since the angular metric components diverge, the effective radius of the base sphere grows without limit. The scalar curvature of this 2-manifold is \begin{equation} R=R(r,a)=-\frac{6a^{2}(r^{2}+18a^{2})}{(r^{2}+6a^{2})^{3}}.
\end{equation} For the sake of comparison, we quote here the scalar curvature of the Hamilton cigar geometry \cite{chow}, namely \begin{equation} R_{H}=\frac{4}{(1+r^{2})}. \end{equation} Thus, this 2-cycle of the resolved conifold is a space of varying negative scalar curvature that converges asymptotically to zero. This behavior is similar to that of the Hamilton cigar, which is a Ricci soliton used in the study of the stability of manifolds \cite{topping}. The Hamilton cigar is an example of a solution of the Ricci flow equation: \begin{equation} \frac{\partial g_{ab}}{\partial t}= -2 R_{ab} \end{equation} where $t\in [0,1]$ is a parameter that describes the evolution of the geometry of the manifold. \begin{figure} \centering \includegraphics[scale=1.1]{./resolvedconifoldmetric.eps} \caption{Radial metric component of the resolved conifold. For $a=1$ (thick line), the factor has a large variation and asymptotically converges to one. For $a=0$ (thin line) the radial factor is equal to one, as in the string-like defect geometry.} \label{resolvedconifoldmetric} \end{figure} \section{Bulk geometry} \label{Bulk geometry} Having described the metric and the 2-cycle of the resolved conifold which we will use as a transverse space, we want to study the localization of a scalar field on a 3-brane embedded in a six-dimensional bulk of the form $M_{6}=M_{4}\times \mathcal{C}_{2}$, where $\mathcal{C}_{2}$ is the 2-cycle of the resolved conifold described above. The action for the gravitational field is: \begin{equation} S_{g} =\frac{1}{2K_{6}^{2}}\int_{\mathcal{M}_{6}}{d^{6}x\sqrt{-g}(R-2\Lambda)}+\int_{\mathcal{M}_{6}}{d^{6}x\sqrt{-g}L_{m}} \end{equation} where $K_{6}^{2}=\frac{8\pi}{M_{6}^{4}}$ and $M_{6}^{4}$ is the six-dimensional bulk Planck mass.
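The 2-cycle curvature quoted above can be cross-checked numerically against the standard Gaussian-curvature formula $R=2K$, with $K=-\frac{1}{2\sqrt{\alpha\beta}}\,\partial_{r}\big(\beta'/\sqrt{\alpha\beta}\big)$, valid for an orthogonal metric $ds^{2}=\alpha\,dr^{2}+\beta\,d\theta^{2}$ with $r$-dependent coefficients. The sketch below uses central finite differences; step sizes, sample points and tolerances are our assumptions.

```python
import math

def alpha(r, a):  # g_rr of the 2-cycle
    return (r**2 + 6 * a**2) / (r**2 + 9 * a**2)

def beta(r, a):   # g_theta_theta of the 2-cycle
    return (r**2 + 6 * a**2) / 6.0

def R_closed(r, a):
    """Closed-form scalar curvature of the 2-cycle quoted in the text."""
    return -6 * a**2 * (r**2 + 18 * a**2) / (r**2 + 6 * a**2) ** 3

def R_numeric(r, a, h=1e-5):
    """R = 2K for ds^2 = alpha dr^2 + beta dtheta^2, via
    K = -(1/(2 sqrt(alpha beta))) d/dr( beta' / sqrt(alpha beta) )."""
    def f(x):
        dbeta = (beta(x + h, a) - beta(x - h, a)) / (2 * h)
        return dbeta / math.sqrt(alpha(x, a) * beta(x, a))
    df = (f(r + h) - f(r - h)) / (2 * h)
    return -df / math.sqrt(alpha(r, a) * beta(r, a))

for r in (0.5, 1.0, 3.0, 10.0):
    assert abs(R_numeric(r, 1.0) - R_closed(r, 1.0)) < 1e-5
# Negative everywhere, R(0,a) = -1/(2 a^2), and asymptotically flat:
assert abs(R_closed(0.0, 1.0) + 0.5) < 1e-12
assert abs(R_closed(100.0, 1.0)) < 1e-6
```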
Further, let us assume the following ansatz for the energy-momentum tensor: \begin{align} T^{\mu}_{\nu} & = t_{0}(r)\delta^{\mu}_{\nu}\\ T^{r}_{r} & = t_{r}(r)\\ T^{\theta}_{\theta} & = t_{\theta}(r) \end{align} where \begin{equation} T_{ab}=-\frac{2}{\sqrt{-g}}\frac{\partial L_{m}}{\partial g^{ab}} \end{equation} Now, let us choose the metric components in such a way that we can obtain the string-like defect geometry in certain limits. From now on, the metric ansatz will be \begin{eqnarray} \label{ansatz} ds^{2}_{6} & = & e^{-A(r)}\hat{g}_{\mu\nu}dx^{\mu}dx^{\nu}+\alpha(r,a) dr^{2} + \beta(r,a) e^{-B(r)}d\theta^{2}, \end{eqnarray} where $a\geq0$ is the resolution parameter. This ansatz extends the string-like defect solution by the inclusion of the resolved conifold metric factors $\alpha(r,a), \beta(r,a)$. Since the radial component approaches one at infinity, if we place the 3-brane at a point far from the origin of the resolved conifold, this ansatz reduces to the one used by Oda \cite{Oda:2000zc}. Furthermore, the geometry of the bulk is parameter-dependent, which enables us to control the singularity. The Einstein equations for the metric ansatz in eq. (\ref{ansatz}) are \begin{align} 3A''+B''-\frac{3}{2}A'B'-B'^{2}-3A'^{2} +\frac{3}{2}\left(\frac{\beta'}{\beta}-\frac{\alpha'}{\alpha}\right)A'+\nonumber\\ \left(\frac{\beta'}{\beta}-\frac{1}{2}\frac{\alpha'}{\alpha}\right)B'-\frac{\beta''}{\beta} \label{eq3.1} +\frac{1}{2}\frac{\alpha'}{\alpha}\frac{\beta'}{\beta}+\alpha\left(\frac{1}{2}e^{A}\hat{R} -2\Lambda+2K_{6}^{2}t_{0}\right) & = 0,\\ -3A'^{2}-2A'B'+2\frac{\beta'}{\beta}A'+\alpha\left(e^{A}\hat{R}-2\Lambda+2K_{6}^{2}t_{r}\right) & = 0,\label{eq3.2} \\ 4A''-5A'^{2}-2\frac{\alpha'}{\alpha}A'+\alpha\left(e^{A}\hat{R} -2\Lambda + 2K^{2}_{6}t_{\theta}\right)& = 0.\label{eq3.3} \end{align} The continuity equation for the energy-momentum tensor is \begin{equation} \nabla^{a}T_{ab}=0.
\end{equation} This equation yields a constraint on the components of the energy-momentum tensor \begin{equation} t'_{r}=2A'(t_{r}-t_{0})+\frac{B'}{2}(t_{r}-t_{\theta})+\frac{\beta'}{2\beta}(t_{\theta}-t_{r}). \label{continuity} \end{equation} Equations (\ref{eq3.1}, \ref{eq3.2}, \ref{eq3.3}) and the continuity equation (\ref{continuity}) differ from those of the string-like defects by the addition of the angular factor $\beta$ of the resolved conifold metric. Let us now sum the radial and angular Einstein equations and assume that $A(r)=B(r)$. This yields a linear differential equation for $A(r)$ of the form \begin{equation} \label{Aequation} 2A''(r)-\left(\frac{\alpha'}{\alpha}+\frac{\beta'}{\beta}\right)A'(r)+K_{6}^{2}\alpha(t_{\theta}-t_{r})=0. \end{equation} Defining \begin{eqnarray} \delta(r,a) & = & -\left(\frac{\alpha'}{\alpha}+\frac{\beta'}{\beta}\right),\\ \chi(r,a) & = & K_{6}^{2}\alpha(t_{\theta}-t_{r}), \end{eqnarray} eq. (\ref{Aequation}) can be rewritten as \begin{equation} \label{A'equation} 2A''(r)+\delta(r,a)A'(r)+\chi(r,a)=0. \end{equation} The solution of eq.(\ref{A'equation}) is \begin{equation} A(r)= A(0)-\int_{0}^{r}{\left(\frac{\int_{0}^{r'}{\eta(r'',a)\chi(r'',a) dr''}}{\int_{0}^{r'}{\eta(r'',a)dr''}}\right)dr'}. \end{equation} Let us impose the boundary conditions \begin{equation} \label{boundary1} A(0) = q, \end{equation} \begin{equation} \label{boundary2} \lim_{r\rightarrow \infty}{A(r)} = \infty, \end{equation} where $q$ is a constant. Equation (\ref{A'equation}) with these boundary conditions gives the warp factor. At the point $r=0$, the metric defined by eq. (\ref{ansatz}) becomes \begin{eqnarray} \label{fiber} ds^{2}_{6} & = & \hat{g}_{\mu\nu}dx^{\mu}dx^{\nu}+ \frac{a^{2}}{6}d\theta^{2}. \end{eqnarray} This is a factorizable metric on the space $M_{4}\times S^{1}$, where $S^{1}$ has radius $\frac{a}{\sqrt{6}}$. Therefore, the 3-brane can be realized as a normal fiber bundle of strings at $r=0$.
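Equation (\ref{A'equation}) can also be integrated numerically. The sketch below is a fourth-order Runge-Kutta integration with a toy constant source $\chi_{0}$ and an initial slope $s$, both of which are illustrative assumptions of ours; it checks that the computed profile satisfies $2A''+\delta A'+\chi=0$ together with the boundary condition $A(0)=q$.

```python
import math

def alpha(r, a): return (r**2 + 6*a**2) / (r**2 + 9*a**2)
def beta(r, a):  return (r**2 + 6*a**2) / 6.0

def delta(r, a, h=1e-6):
    # delta(r,a) = -(alpha'/alpha + beta'/beta) = -d/dr ln(alpha*beta)
    g = lambda x: math.log(alpha(x, a) * beta(x, a))
    return -(g(r + h) - g(r - h)) / (2 * h)

def solve_A(q, s, chi0, a, r_max=5.0, n=5000):
    """RK4 integration of 2A'' + delta(r,a) A' + chi0 = 0,
    with A(0) = q and A'(0) = s (chi0 and s are toy values)."""
    h = r_max / n
    rs, As = [0.0], [q]
    A, Ap = q, s
    def f(r, A, Ap):          # right-hand side: (A', A'')
        return Ap, -(delta(r, a) * Ap + chi0) / 2.0
    r = 0.0
    for _ in range(n):
        k1 = f(r, A, Ap)
        k2 = f(r + h/2, A + h/2 * k1[0], Ap + h/2 * k1[1])
        k3 = f(r + h/2, A + h/2 * k2[0], Ap + h/2 * k2[1])
        k4 = f(r + h, A + h * k3[0], Ap + h * k3[1])
        A  += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        Ap += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        r  += h
        rs.append(r)
        As.append(A)
    return rs, As

rs, As = solve_A(q=0.0, s=1.0, chi0=0.5, a=1.0)
h = rs[1] - rs[0]
for i in (1000, 2500, 4000):   # residual of the ODE at interior points
    App = (As[i+1] - 2*As[i] + As[i-1]) / h**2
    Ap  = (As[i+1] - As[i-1]) / (2*h)
    assert abs(2*App + delta(rs[i], 1.0)*Ap + 0.5) < 1e-4
assert As[0] == 0.0            # boundary condition A(0) = q
```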
In this geometry, the relationship between the four-dimensional Planck mass ($M_{4}$) and the bulk Planck mass ($M_{6}$) is given by \begin{equation} \label{planckmass} M^{2}_{4}=2\pi M_{6}^{4}\int_{0}^{\infty}{e^{-A(r)-\frac{B(r)}{2}}\sqrt{\alpha(r,a)\beta(r,a)}dr}. \end{equation} Therefore, we can use the resolution parameter in order to tune the ratio between the Planck masses and thus explain the hierarchy between them. This is an extension of the string-like tuning of the Planck masses: in the string-like geometry, the adjustment is made through the six-dimensional cosmological constant $\Lambda$ and the tension of the string $\mu$ \cite{Gherghetta:2000qi,Oda:2000zc}. Here, we have added a dependence on a geometrical parameter $a$. Note that the hierarchy is well defined even for the singular cone $(a=0)$. Therefore, using parameter-dependent transverse spaces we can obtain a parameter-dependent hierarchy. Since there are many parameter-dependent spaces, these manifolds could be used to address the hierarchy problem. We argue that this dependence could be related to possible transformations of the transverse space, for instance, the conifold transitions. \subsection{Linear warp factor} \label{linear warp factor} Now let us choose a specific warp factor $A(r)$ and study its geometrical consequences. Let us choose the linear warp factor, i.e., $A(r)=kr$, where $k$ is a real constant. This warp factor has been widely used both in Randall-Sundrum models \cite{Randall:1999vf,Randall:1999ee} and in string-like geometries \cite{Gregory:1999gv,Gherghetta:2000qi,Oda:2000zc}, and it was the first one used to solve the hierarchy problem. Further, both in the Randall-Sundrum and in the string-like geometries, it provides an $AdS_{6}$ geometry for the bulk, i.e., a maximally symmetric space with negative cosmological constant.
With that choice the Einstein equations become \begin{align} B''-\frac{3k}{2}B'-B'^{2}-3k^{2} +\frac{3k}{2}\left(\frac{\beta'}{\beta}-\frac{\alpha'}{\alpha}\right)+\left(\frac{\beta'}{\beta}-\frac{1}{2}\frac{\alpha'}{\alpha}\right)B'\nonumber\\ -\frac{\beta''}{\beta} +\frac{1}{2}\frac{\alpha'}{\alpha}\frac{\beta'}{\beta}+\alpha\left(\frac{1}{2}e^{A}\hat{R} -2\Lambda+2K_{6}^{2}t_{0}\right) & = 0,\\ -3k^{2}-2kB'+2\frac{\beta'}{\beta}k+\alpha\left(e^{A}\hat{R}-2\Lambda+2K_{6}^{2}t_{r}\right) & = 0,\\ -5k^{2}-2\frac{\alpha'}{\alpha}k+\alpha\left(e^{A}\hat{R} -2\Lambda + 2K^{2}_{6}t_{\theta}\right)& = 0.\label{tteta} \end{align} Summing the radial and angular equations above, we obtain the solution for the warp factor $B(r)$: \begin{equation} B(r)=kr+\ln(\alpha(r,a) \beta(r,a))+\frac{K_{6}^{2}}{k}\int_{0}^{r}{\alpha(r',a)(t_{r}(r')-t_{\theta}(r'))dr'}. \end{equation} Therefore, we can get $B(r)=A(r)$ if \begin{equation} \label{A=B} \ln(\alpha(r,a) \beta(r,a))+\frac{K_{6}^{2}}{k}\int_{0}^{r}{\alpha(r',a)(t_{r}(r')-t_{\theta}(r'))dr'}=0. \end{equation} Equation (\ref{A=B}) provides a constraint between the resolved conical geometry and the matter content. In vacuum, the warp factor $B(r)$ is given by \begin{equation} \label{Bvaccum} B(r)=kr+\ln(\alpha(r,a) \beta(r,a)). \end{equation} The solution above for $B(r)$ differs from the string-like defect one by the conifold metric components $\alpha,\beta$ \cite{Oda:2000zc}. Besides, note that in general $B(r)$ depends on the resolution parameter. Therefore, we can set $\beta(r,a)=1$ and still detect the effects of the resolution on the conifold. Let us now suppose an angular energy-momentum tensor of the form \begin{equation} t_{\theta}(r)= \zeta e^{kr}+\lambda(r)+\rho, \end{equation} where $\zeta,\rho$ are constants and \begin{equation} \lambda(r)=\frac{1}{4K_{6}^{2}\alpha(r)}\left(5P^{2}+2P\frac{\alpha'}{\alpha}-3\left(\frac{\alpha'}{\alpha}\right)^{2}\right).
\end{equation} Now let us suppose that the 3-brane $M_{4}$ is a maximally symmetric space. Therefore, we can define a 3-cosmological constant $\Lambda_{3}$ satisfying \begin{equation} \hat{R}_{\mu\nu}-\frac{\hat{R}}{2}\hat{g}_{\mu\nu}=-\Lambda_{3}\hat{g}_{\mu\nu} \end{equation} Thus, its scalar curvature $\hat{R}$ must be constant. Therefore, from equation (\ref{tteta}) we conclude that \begin{eqnarray} \zeta & = & \frac{\hat{R}}{2K_{6}^{2}},\\ \rho & = & \frac{\Lambda}{K_{6}^{2}},\\ k & =& P. \end{eqnarray} It is worthwhile to mention that the solution above for $t_{\theta}$ differs from the string-like one by the terms $\alpha$ and $\frac{\alpha'}{\alpha}$, so that we obtain the string-like defect as a special case of the resolved conifold one. For $B(r)=A(r)$ the components of the energy-momentum tensor are \begin{eqnarray} t_{r}(r) & = & \frac{\Lambda}{K_{6}^{2}}-\frac{e^{kr}\hat{R}}{K_{6}^{2}}+\frac{1}{\alpha K_{6}^{2}}(5k^{2}-2\frac{\beta'}{\beta}k),\\ t_{0}(r) & = & \frac{\Lambda}{K_{6}^{2}}-\frac{e^{kr}\hat{R}}{2K_{6}^{2}}+\frac{1}{2\alpha K_{6}^{2}}\left(\frac{11}{2}k^{2} +2(\frac{\beta'}{\beta}-\frac{\alpha'}{\alpha})k-\frac{\beta''}{\beta}+\frac{1}{2}\frac{\alpha'}{\alpha}\frac{\beta'}{\beta}\right). \end{eqnarray} Since $\beta'$ diverges, for $\hat{R}\leq 0$ the component $t_{0}$ satisfies the dominant energy condition, $t_{0}\geq 0$. This linear warp factor satisfies the boundary conditions (\ref{boundary1}) and (\ref{boundary2}). Since the warp factor diverges asymptotically, we can impose the following condition \begin{equation} A'(r)>0. \end{equation} Now let us analyze the asymptotic behavior of the linear warp factor. The angular Einstein equation is \begin{equation} -5k^{2}-2\frac{\alpha'}{\alpha}k+\alpha\left(e^{A}\hat{R} -2\Lambda + 2K^{2}_{6}t_{\theta}\right)= 0 \end{equation} Asymptotically, $\lim_{r\rightarrow\infty}{\frac{\alpha'}{\alpha}}=0$ and $\lim_{r\rightarrow\infty}{\alpha}=1$.
Thus, for $t_{\theta}=-\frac{e^{kr}\hat{R}}{2K_{6}^{2}}$, the angular equation becomes: \begin{eqnarray} 5k^{2}+2\Lambda=0 & \Rightarrow & \Lambda<0 \end{eqnarray} Therefore, the bulk is asymptotically $AdS_{6}$ for the linear warp factor. Further, since \begin{equation} R=3\Lambda -\frac{K_{6}^{2}}{2}T \end{equation} the energy-momentum tensor has a core around the brane, which is very similar to the string-like geometry \cite{Gherghetta:2000qi}. Another important feature of this approach is the freedom in the factor $k$, since the constant $P$ need only be positive. We have plotted the angular component for different values of $k$ and for $a=0.5$ in fig.(\ref{fatormetricoangular}). For $k=3$, this component has an exponentially decreasing behavior and thus approaches the configuration of string-like defects and cigar geometries \cite{Gregory:1999gv,Gherghetta:2000qi,Oda:2000zc,deCarlos:2003nq}. However, for $0.5\leq k \leq 1$ the angular component increases until it reaches a maximum and then decreases exponentially. Hence, decreasing the $k$-parameter has the effect of damping the exponential decrease of the angular component. \begin{figure} \centering \includegraphics[scale=1.1 ,bb=-41 -80 233 146]{fig-3.eps} \caption{Angular metric factor of the bulk for $a=0.5$. For $k=3$, we have a monotonic exponential behavior characteristic of string-like defects and cigar geometries (thick line). For $k=0.9$ (thin line) and $k=0.6$ (dotted line) the angular factor grows until a maximum and then decreases exponentially. Note that this behavior makes the angular component vanish more slowly.} \label{fatormetricoangular} \end{figure} The same features appear if we fix $k$ and vary the resolution parameter. Indeed, as $a\rightarrow 0$ a peak arises, making the angular component vanish more slowly, as shown in figure (\ref{f}).
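The behavior of the bulk angular component $\beta(r,a)e^{-kr}$ described above can be verified directly; the grid and the sample values $a=0.5$, $k\in\{0.6,3\}$ are our choices for illustration.

```python
import math

def angular_factor(r, a, k):
    """Bulk angular metric component beta(r, a) * e^{-B(r)} with B = k r."""
    return (r**2 + 6 * a**2) / 6.0 * math.exp(-k * r)

grid = [0.1 * i for i in range(101)]
# k = 3: monotonic exponential decay, as for string-like defects.
vals3 = [angular_factor(r, 0.5, 3.0) for r in grid]
assert all(x > y for x, y in zip(vals3, vals3[1:]))
# k = 0.6: the factor first grows to an interior maximum (near r ~ 2.8)
# before decaying, so it vanishes more slowly.
assert angular_factor(2.8, 0.5, 0.6) > angular_factor(0.0, 0.5, 0.6)
assert angular_factor(10.0, 0.5, 0.6) < angular_factor(2.8, 0.5, 0.6)
```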
Besides, the angular metric component has the same asymptotic behavior for any value of $a$, but it changes its behavior close to the brane. \begin{figure} \centering \includegraphics[scale=1.1, bb=-41 -80 233 146]{fig-4.eps} \caption{Angular metric component for $k=1$. For $a=3$ the component decreases exponentially, as for string-like defects and cigar geometries (thick line). For $a \simeq 0.3$ (thin line) there is no conical singularity and the component reaches a maximum before decaying. For $a=0$ (dotted line) the component begins with a conical singularity at the origin, increases until a maximum and then vanishes slowly.} \label{f} \end{figure} For the linear warp factor the scalar curvature is given by \begin{equation} R=-\frac{3}{2}\frac{(1620a^{6}k^{2}+kr^{5}(5kr-4)+12a^{4}(60k^{2}r^{2}-13kr+6)+a^{2}r^{2}(105k^{2}r^{2}-50kr+4))}{(r^{2}+6a^{2})^{3}} \end{equation} whose graph is plotted in fig.(\ref{linearscalarcurvature}). For better viewing, let us make the change of variable \begin{equation} r\rightarrow r-5. \end{equation} Therefore, the bulk has a varying negative scalar curvature. \begin{figure} \centering \includegraphics[scale=1.1, bb=-41 -80 233 146]{fig-5.eps} \caption {Bulk scalar curvature for the linear warp factor $(k=1)$. We have put the brane at $r=5$. For $a=1$ (thick line) the curvature increases towards an asymptotic value around $-7.5$. For $a=0.5$ (thin line) the curvature grows until it reaches a maximum and then decreases to the same asymptotic value. For $a=0$ (dashed line) the curvature diverges on the brane.} \label{linearscalarcurvature} \end{figure} \subsection{Nonlinear warp factor} \label{Nonlinear warp factor} In addition to the configuration above, we have used another warp factor, slightly different from the one previously studied by Fu \textit{et al} \cite{Fu:2011pu}. Our proposed warp factor is given by \begin{equation} A(r)=B(r)=\cosh(r)+\tanh(r)^{2}.
\end{equation} Note that, as in the Randall-Sundrum model, where the warp factor is a modulus function, this nonlinear warp factor is symmetric with respect to reflection through the brane, i.e., it has $Z_{2}$ symmetry, as shown in figure (\ref{nonlinearwarpfactor}). Furthermore, it gives a localized angular component, as seen in fig. (\ref{warpedangularcomponent}). \begin{figure} \centering \includegraphics[scale=1.1,bb=-41 -80 233 146]{./nonlinearwarpfactor.eps} \caption{Nonlinear warp factor. This function is symmetric around the brane and diverges asymptotically.} \label{nonlinearwarpfactor} \end{figure} \begin{figure} \centering \includegraphics[scale=1.1,bb=-41 -80 233 146]{./bulknonlinearangularcomponent.eps} \caption{Bulk angular metric factor for the nonlinear warp factor. For $a=1$ (thick line) the component is similar to a Gaussian function and is not zero on the brane - there is no conical singularity. For $a=0$ the component has two maxima and a conical singularity.} \label{warpedangularcomponent} \end{figure} We have plotted the scalar curvature for this warp factor for $a=0$ and $a=1$. Note that the behavior of the scalar curvature is opposite to that of the linear case: in the nonlinear case the scalar curvature is regular at $r=0$ but diverges at infinity. Moreover, the curvature is positive around the origin and negative at large distances. Therefore, the geometry for the warp factor $A(r)=\cosh(r)+\tanh(r)^{2}$ is well behaved at the origin but diverges asymptotically. Furthermore, asymptotically the warp factor satisfies \begin{eqnarray} \lim_{r\rightarrow\infty}A'(r)=0,\\ \lim_{r\rightarrow\infty}A''(r)>0. \end{eqnarray} Then, for $t_{\theta}=0$ and $\hat{R}=0$ the angular Einstein equation satisfies \begin{eqnarray} 4A''(r)-2\Lambda=0 & \Rightarrow & \Lambda>0. \end{eqnarray} However, since the scalar curvature diverges far from the brane, we cannot conclude that the bulk converges to the $dS_{6}$ space.
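Two of the properties claimed for the nonlinear warp factor, its $\mathbb{Z}_{2}$ symmetry and the localization of the angular component, are easy to verify numerically (the sample points and tolerances are our choice):

```python
import math

def A(r):
    """Nonlinear warp factor A(r) = cosh(r) + tanh^2(r)."""
    return math.cosh(r) + math.tanh(r) ** 2

# Z_2 symmetry: the warp factor is even under reflection through the brane.
for r in (0.3, 1.0, 2.5):
    assert abs(A(r) - A(-r)) < 1e-12

def angular_component(r, a):
    """beta(r, a) * e^{-B(r)} with B = A; localized around the brane."""
    return (r**2 + 6 * a**2) / 6.0 * math.exp(-A(r))

# For a = 1 the component is peaked near the origin and falls off fast.
assert angular_component(0.0, 1.0) > angular_component(5.0, 1.0)
assert angular_component(5.0, 1.0) < 1e-10
```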
Indeed, since the scalar curvature diverges at infinity and \begin{eqnarray} R=K_{6}^{2}T+6\Lambda &\Rightarrow& \lim_{r\rightarrow \infty}{T}=-\infty, \end{eqnarray} whatever source produces this geometry must have an energy-momentum tensor that diverges asymptotically. This feature contrasts strongly with the string-like geometry. \begin{figure} \centering \includegraphics[scale=1.1,bb=-41 -80 233 146]{./nonlinearscalarcurvature.eps} \caption{Bulk scalar curvature for the nonlinear warp factor.} \label{curvaturanaolinear} \end{figure} \section{Scalar field in minimal coupling} \label{Scalarfieldinminimalcoupling} Now let us study the localization of a scalar field in the geometry analyzed so far. The action for a scalar field minimally coupled to gravity is \begin{equation} S_{s}=\int_{\mathcal{M}_{6}}{d^{6}x\sqrt{-g}g^{ab}\partial_{a}\phi \partial_{b}\phi} \end{equation} The equation of motion for the scalar field is given by \begin{equation} \partial_{A}(\sqrt{-g}g^{AB}\partial_{B}\Phi)=0 \end{equation} Let us assume that this scalar field is the product of a four-dimensional field with Poincar\'{e} symmetry and another scalar field living only on the 2-cycle of the conifold, i.e., \begin{equation} \Phi(x^{\mu},r,\theta)=\hat{\phi}(x^{\mu})\tilde{\phi}(r,\theta). \end{equation} From Poincar\'{e} symmetry, the 3-brane scalar field must satisfy the mass condition \begin{equation} \partial_{\mu}\partial^{\mu}(\hat{\phi}(x^{\mu}))=m^{2}\hat{\phi}(x^{\mu}). \end{equation} Since $0\leq \theta \leq 2\pi$, let us assume that $\tilde{\phi}(r,\theta)$ can be expanded in Fourier series as \begin{equation} \label{transverseansatz} \tilde{\phi}(r,\theta)=\chi(r)\sum_{l=0}^{\infty}{e^{il\theta}}.
\end{equation} Using the ansatz $(\ref{transverseansatz})$ yields \begin{equation} \label{stequation1} \left(\frac{\sqrt{-g}}{\alpha(r,a)}\chi'(r)\right)'-\frac{l^{2}\sqrt{-g}e^{B}}{\beta(r,a)}\chi(r) + m^{2}\sqrt{-g}e^{A}\chi(r)=0 \end{equation} Equation $(\ref{stequation1})$ is a Sturm-Liouville-like equation. Further, let us look for solutions that satisfy the boundary conditions \begin{equation} \chi'(0)= \lim_{r\rightarrow\infty}\chi'(r)=0. \end{equation} If we have two solutions of eq. $(\ref{stequation1})$, namely $\chi_{i}(r)$ and $\chi_{j}(r)$, the orthogonality relation between them is \begin{equation} \int_{0}^{\infty}{\sqrt{\alpha(r,a)\beta(r,a)}e^{(-A(r)-\frac{B(r)}{2})}\chi_{i}^{*}\chi_{j}dr}=\delta_{ij} \end{equation} We can rewrite eq. (\ref{stequation1}) as \begin{equation} \label{radialequation} \chi''(r)-\frac{1}{2}\left(4A'+B'+\frac{\alpha'}{\alpha}-\frac{\beta'}{\beta}\right)\chi'(r)+ \alpha e^{A}\left(m^{2}-l^{2}\frac{e^{B-A}}{\beta}\right)\chi(r)=0. \end{equation} Note that equation (\ref{radialequation}) is similar to that found in string-like geometries \cite{Gherghetta:2000qi,Oda:2000zc}, apart from the conifold terms $\alpha(r,a),\beta(r,a)$. Further, we can see from equation (\ref{radialequation}) that it is possible to absorb the conifold terms into the warp factors. Here we can choose two distinct paths. On the one hand, it is possible to study the effect of the resolution directly in the warp factors. On the other hand, we can study the localization in a factorized geometry, i.e., without the exponential warp factors $e^{A(r)}, e^{B(r)}$. Besides, we could define an effective angular number $l_{eff}=\frac{l^{2}}{\beta(r,a)}$, which would depend on the point and on the resolution parameter. \subsection{Massive modes} \label{Massive modes} Let us simplify equation (\ref{radialequation}) by making the following change of variable \begin{equation} \label{change} z=z(r)=\int_{0}^{r}{\alpha(r')^{\frac{1}{2}}e^{\frac{A(r')}{2}}dr'}.
\end{equation} Since the radial metric component $\alpha(r,a)$ is a smooth, strictly positive function of $r$, i.e., $\alpha(r,a)>0$ for all $r\in [0,\infty)$, we have \begin{equation} \frac{dz}{dr}>0, \end{equation} and so $z(r)$ is a smooth, monotonically increasing function of $r$. Under the change of variable in eq. (\ref{change}), equation (\ref{radialequation}) becomes \begin{equation} \ddot{\chi}(z)-\frac{1}{2}\left(3\dot{A}+\dot{B}-\frac{\dot{\beta}}{\beta}\right)\dot{\chi}(z)+ \left(m^{2}-l^{2}\beta^{-1}e^{B-A}\right)\chi(z)=0. \label{motionequation} \end{equation} In order to simplify further, let us write $\chi(z)$ in the form \begin{equation} \chi(z)=e^{\frac{(3A+B-\ln(\beta))}{4}}\Psi(z). \end{equation} From equation (\ref{motionequation}), the function $\Psi(z)$ must obey \begin{eqnarray} -\ddot{\Psi}(z)+V(z)\Psi(z)=m^{2}\Psi(z), \end{eqnarray} where \begin{eqnarray} V(z) & = & \left(\frac{3\dot{A}+\dot{B}-\beta^{-1}\dot{\beta}}{4}\right)^{2}-\frac{\left(3\ddot{A}+\ddot{B}+ \beta^{-2}(\dot{\beta})^{2}-\beta^{-1}\ddot{\beta}\right)}{4}\nonumber\\ & + & l^{2}\beta^{-1}e^{B-A}. \end{eqnarray} This is a time-independent Schr\"{o}dinger-like equation. We can study the localization of the scalar field by analyzing the behavior of the potential around a potential well. Returning to the $r$ coordinate, the potential can be written as \begin{eqnarray} V(r,a,l) & = & \frac{e^{-A}}{\alpha}\Big\{\frac{1}{16}(15A'^{2}+B'^{2}+8A'B')-\frac{1}{4}\left(3A''+B''\right)\nonumber\\ & + & (3A'+B')\frac{\alpha'}{8\alpha}-\frac{\alpha'\beta'}{8\alpha\beta} -\frac{\beta'}{8\beta}A'+\left(\frac{\beta'}{4\beta}\right)' \Big\}+l^{2}\beta^{-1}e^{B-A}. \end{eqnarray} The study of this potential provides graphical information about the possible stable solutions around a minimum. \subsubsection{Linear warp factor} Now, let us analyze the case of the linear warp factor, keeping the condition $A(r,k)=B(r,k)$.
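The monotonicity of the change of variable (\ref{change}) can also be checked numerically. The sketch below is a rough illustration, not part of the original analysis: it assumes the linear warp factor $A(r)=k(r-5)$ and reads the conifold factor $\alpha(r,a)=\frac{(r-5)^{2}+6a^{2}}{(r-5)^{2}+9a^{2}}$ off the mass term of eq. (\ref{linearradialequation}); both identifications, and the sample values $a=k=1$, are illustrative assumptions.

```python
# Rough numerical check (illustrative, not from the paper): the change of
# variable z(r) = \int_0^r sqrt(alpha) e^{A/2} dr' is monotonically increasing.
# Assumptions: A(r) = k (r - 5) and alpha read off from the linear-warp radial
# equation; a = k = 1 are arbitrary sample values.
import numpy as np

def alpha(r, a):
    return ((r - 5.0) ** 2 + 6.0 * a ** 2) / ((r - 5.0) ** 2 + 9.0 * a ** 2)

def z_of_r(r_grid, a, k):
    # cumulative trapezoidal integral of sqrt(alpha) * exp(A/2)
    integrand = np.sqrt(alpha(r_grid, a)) * np.exp(0.5 * k * (r_grid - 5.0))
    dz = 0.5 * np.diff(r_grid) * (integrand[:-1] + integrand[1:])
    return np.concatenate(([0.0], np.cumsum(dz)))

r = np.linspace(0.0, 10.0, 2001)
z = z_of_r(r, a=1.0, k=1.0)
print(bool(np.all(np.diff(z) > 0)))  # True: dz/dr > 0 everywhere on the grid
```

Since the integrand is everywhere positive for $a\neq 0$, the grid differences of $z$ are all positive, confirming that $z(r)$ is a valid global change of variable.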
The potential is given by \begin{eqnarray} V(u,a,l,k) & = & \frac{e^{-ku}}{\alpha}\Big\{\frac{3}{2}k^{2}+\frac{k}{2}\frac{\alpha'}{\alpha}-\frac{1}{8}k\frac{\beta'}{\beta}- \frac{1}{8}\frac{\alpha'}{\alpha}\frac{\beta'}{\beta}\nonumber\\ & + & \frac{1}{4}\left(\frac{\beta'}{\beta}\right)'\Big\}+l^{2}\beta^{-1}. \end{eqnarray} We have plotted the potential for $l=0,k=1$ in figure (\ref{potenciallinear1}). For large values of $a$ the potential decays exponentially. As $a\rightarrow 0$, two asymmetric minima arise far from the brane. However, for $a=0$, the point $r=5$ becomes an infinite potential well. Therefore, it is possible to find a localized solution of eq. (\ref{motionequation}). Indeed, the eigenfunction $\chi(r)$ must satisfy the differential equation \begin{equation} \label{linearradialequation} \chi''(r)+\left(\frac{(r-5)}{(r-5)^{2}+9a^{2}}-\frac{5}{2}k\right)\chi'(r)+\left(\frac{(r-5)^{2}+6a^{2}}{(r-5)^{2}+9a^{2}}\right)e^{k(r-5)}m^{2}\chi(r)=0. \end{equation} \begin{figure} \centering \includegraphics[scale=1.1,bb=0 0 240 161]{fig-9.eps} \caption{Potential for the linear warp factor and $l=0$. For $a=1$ (thick line) the potential decays exponentially. For $0< a <0.1$ (dashed line) two asymmetric potential wells appear beyond the point $r=5$, but at this point the potential is a maximum. For $a=0$ a potential well arises on the brane (dotted line).} \label{potenciallinear1} \end{figure} For $\left(\frac{a}{r-5}\right)\rightarrow 0$, which is valid for the singular cone ($a=0$) and for distant points ($(r-5)\rightarrow \infty$), the radial function satisfies \begin{equation} \label{massive1} \chi''(r)+\left(\frac{1}{(r-5)}-\frac{5}{2}k\right)\chi'(r)+e^{k(r-5)}m^{2}\chi(r)=0. \end{equation} Note that eq. (\ref{massive1}) differs from the string-like equation for massive modes \cite{Gherghetta:2000qi} by the term $\frac{1}{r-5}$.
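Since eq. (\ref{linearradialequation}) is regular at the brane for $a\neq 0$, it can also be integrated outward numerically. The sketch below is a minimal illustration; the values of $a$, $k$ and $m$ are arbitrary sample choices, not taken from the paper.

```python
# Integrate the linear-warp radial equation outward from the brane at r = 5,
# with chi(5) = 1 and chi'(5) = 0. Parameter values (a, k, m) are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(r, y, a, k, m):
    chi, dchi = y
    s = r - 5.0
    p = s / (s ** 2 + 9.0 * a ** 2) - 2.5 * k               # first-order coefficient
    q = (s ** 2 + 6.0 * a ** 2) / (s ** 2 + 9.0 * a ** 2)   # conifold factor
    return [dchi, -(p * dchi + q * np.exp(k * s) * m ** 2 * chi)]

sol = solve_ivp(rhs, (5.0, 15.0), [1.0, 0.0], args=(1.0, 1.0, 0.5),
                rtol=1e-8, atol=1e-10)
print(sol.success, bool(np.isfinite(sol.y).all()))
```

The integration stays finite on the chosen interval, in line with the oscillatory Bessel-type behavior expected far from the brane.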
For $(r-5)\gg\max(a,\frac{2}{5k})$, eq. (\ref{massive1}) converges to the well-known equation for string-like defects \cite{Gherghetta:2000qi}, \begin{equation} \label{massivegs} \chi''(r)-\frac{5}{2}k\chi'(r)+e^{kr}m^{2}\chi(r)=0, \end{equation} and so asymptotically the field has the same features as in the string-like geometry. For instance, the mass spectrum has the same asymptotic behavior. In order to study the behavior of the massive modes near the brane, let us take the limit $\left(\frac{r-5}{a}\right)\rightarrow 0$ with $a\neq 0$; the equation for the radial component then becomes \begin{equation} \label{massive2} \chi''(r)+\left(\frac{r-5}{9a^{2}}-\frac{5}{2}k\right)\chi'(r)+\frac{2}{3}e^{k(r-5)}m^{2}\chi(r)=0. \end{equation} For the singular conifold $a=0$, the behavior of the eigenfunction near the brane is instead governed by eq. (\ref{massive1}). For $|r-5|\ll \frac{45a^{2}k}{2}$, eq. (\ref{massive2}) reduces to the string-like equation for massive modes \cite{Gherghetta:2000qi}. Hence, the resolved conifold geometry resembles the string-like geometry both asymptotically and close to the brane. Therefore, for $(r-5)\rightarrow 0$ or $(r-5)\rightarrow \infty$ we have the well-known string-like solution \begin{equation} \label{assolution} \chi(r)\rightarrow e^{\frac{5}{4}cr}\left(C_{1}J_{\frac{5}{2}}\left(\frac{2m}{c}e^{\frac{c}{2}r}\right)+C_{2}Y_{\frac{5}{2}}\left(\frac{2m}{c}e^{\frac{c}{2}r}\right)\right). \end{equation} However, the solution in eq. $(\ref{assolution})$ is not normalizable on $[0,\infty)$. In order to normalize the field we can use a cut-off distance $r_{c}$ and apply the boundary condition at the point $r=r_{c}$ instead of at infinity. Then we take the limit $r_{c}\rightarrow \infty$ and analyze the behavior of the mass spectrum. Since the solution of eq.
$(\ref{linearradialequation})$ behaves both near and far from the brane like the Bessel functions, for $r_{c}>\max(a,\frac{2}{5k})$ the asymptotic mass spectrum must be of the form \begin{equation} m_{n}=c\left(n-\frac{1}{2}\right)\frac{\pi}{2} e^{-\frac{cr_{c}}{2}}. \end{equation} On the other hand, expanding the exponential to first order yields \begin{equation} \chi''(r)+\left(\frac{r-5}{9a^{2}}-\frac{5}{2}k\right)\chi'(r)+\frac{2}{3}(1+k(r-5))m^{2}\chi(r)=0. \end{equation} Therefore, this equation describes both the behavior of the field in the neighborhood of the brane and the asymptotic behavior of the eigenfunction for $a\neq 0$. Its solution is the product of an exponential function and Kummer's confluent hypergeometric function, namely \begin{equation} \chi(r,a)=E(r,a)M\left(\frac{(1 - 6 a^{2} - 135 a^{4} - 324 a^{6})}{2} , \frac{1}{2},-\frac{10+45a^{2}+216a^{4}-2r}{6\sqrt{2}a}\right), \end{equation} where $E(r,a)=e^{\frac{(10+45a^{2}+108a^{4}-r)}{18a^{2}}}$. The graph of this function is plotted in fig. (\ref{massivemode1}). \begin{figure} \centering \includegraphics[scale=1.1,bb=0 0 240 161]{./massivemode1.eps} \caption{Eigenfunction for the linear warp factor and $a=1,k=1$. The solution is defined for points close to the brane. The field has compact support and is well defined on the brane $(r=5)$.} \label{massivemode1} \end{figure} \subsubsection{Nonlinear warp factor} Now let us study the solutions for the nonlinear warp factor. First, we have plotted the potential for several values of the resolution parameter $a$ and $l=0$ in figure (\ref{potencialnaolinearl0}). For $a\geq 0.5$ there is a potential well on the brane ($r=5$), and thus there are massive modes trapped on the brane. Nevertheless, for $0<a< 0.5$, there is a potential barrier on the brane and there is only a potential well beside the brane.
However, for $a=0$ a potential well appears again on the brane, and thus there are localized states on the brane in the singular conifold. The eigenfunction must satisfy \begin{equation} \chi''(r)+\left(\frac{(r-5)}{(r-5)^{2}+9a^{2}}-\frac{5}{2}A'(r)\right)\chi'(r)+ \left(\frac{(r-5)^{2}+6a^{2}}{(r-5)^{2}+9a^{2}}\right)e^{A(r)}m^{2}\chi(r)=0. \end{equation} For $\left(\frac{(r-5)}{a}\right)\rightarrow 0$ we can expand the warp factor and its exponential around $r=5$. This yields the equation \begin{equation} \chi''(r)+\left(\frac{(r-5)}{9a^{2}}-\frac{5}{2}(r-5)\right)\chi'(r)+ \frac{2}{3}(r-5)^{2}m^{2}\chi(r)=0. \end{equation} The solution of this equation is again a product of an exponential function and the confluent hypergeometric function. Its plot is shown in figure (\ref{nonlineareigenfunctiona1}). \begin{figure} \centering \includegraphics[scale=1.1,bb=0 0 240 157]{nonlinearpotential.eps} \caption{Potential for the nonlinear warp factor. For $a=0.24$ (thick line) there is only one potential well around the brane (the usual \textit{volcano} potential); for $a=0.16$, there is a maximum on the brane and the formation of two minima beside the brane (thin line); as $a\rightarrow 0$ the width of the maximum decreases, and for $a=0$ the potential has an infinite minimum on the brane (dashed line).} \label{potencialnaolinearl0} \end{figure} \begin{figure} \centering \includegraphics[scale=1.1,bb=0 0 240 157]{./nonlineareigenfunctiona1.eps} \caption{Eigenfunction for the nonlinear warp factor close to the brane.} \label{nonlineareigenfunctiona1} \end{figure} For $\left(\frac{a}{r-5}\right)\rightarrow 0$ the eigenfunction satisfies \begin{equation} \chi''(r)+\left(\frac{5(r-5)}{2}\right)\chi'(r)+ (r-5)^{2}\chi(r)=0, \end{equation} whose solution is \begin{equation} \chi(r)=e^{10r-r^{2}}\left(H\left(-\frac{4}{3},\frac{\sqrt{3}}{2}(r-5)\right)+M\left(\frac{2}{3},\frac{1}{2},\frac{\sqrt{3}}{2}(r-5)\right)\right). \end{equation} We have plotted the solution above
in figure (\ref{nonlineareigenfunctiona0}). Note that the function is well defined on the brane but asymmetric with respect to the brane. Therefore, the eigenfunction for the singular conifold $(a=0)$ is localized on the brane. Besides, since the eigenfunction vanishes at infinity, the eigenfunction for $a\neq 0$ is also localized. \begin{figure} \centering \includegraphics[scale=1.1,bb=0 0 240 157]{./nonlineareigenfunctiona0.eps} \caption{Eigenfunction for the nonlinear warp factor far from the brane or for $a=0$.} \label{nonlineareigenfunctiona0} \end{figure} \subsection{Massless modes} \label{Massless mode} Now let us turn our attention to the massless modes (the Kaluza-Klein zero modes). Considering $m=0$ and $l=0$ (the so-called \textit{s-wave}) \cite{Oda:2000zc}, the radial equation (\ref{radialequation}) becomes \begin{equation} \label{masslessequation} \chi''(r)-\frac{1}{2}\left(4A'+B'+\frac{\alpha'}{\alpha}-\frac{\beta'}{\beta}\right)\chi'(r)=0. \end{equation} The constant function $\chi(r)=\chi_{0}$ satisfies the equation above. This solution is said to be localized if its action is localized around the 3-brane, i.e., if its action has compact support. Since equation (\ref{masslessequation}) is a Sturm-Liouville equation and we are seeking localized functions that satisfy the asymptotic condition \begin{equation} \lim_{|r-5|\rightarrow \infty}\chi'(r)=0, \end{equation} we can find a spectrum of eigenfunctions $\phi_{n}(r)$ satisfying the orthonormality condition \begin{equation} \int_{0}^{\infty}{e^{-\frac{3A(r)}{2}}\sqrt{\alpha\beta}\,\phi_{n}(r)\phi_{m}(r)dr}=\delta_{nm}. \end{equation} Therefore, we can define the eigenfunction in flat space as \begin{equation} \chi_{n}(r,a)=e^{-\frac{3A(r)}{4}}(\alpha(r,a)\beta(r,a))^{\frac{1}{4}}\phi_{n}(r).
\end{equation} On the other hand, since \begin{equation} \chi_{0}^{2}\int_{0}^{\infty}{e^{-\frac{3A(r)}{2}}\sqrt{\alpha\beta}dr}=1, \end{equation} the zero-mode eigenfunction is \begin{equation} \chi_{0}(r,a)=Ne^{-\frac{3A(r)}{2}}\sqrt{\alpha(r,a)\beta(r,a)}, \end{equation} where \begin{equation} N=\frac{1}{\int_{0}^{\infty}{e^{-\frac{3A(r)}{2}}\sqrt{\alpha\beta}}dr}. \end{equation} Again, the eigenfunction is quite similar to that found in refs. \cite{Gherghetta:2000qi,Oda:2000zc}, apart from the factor $(\alpha(r,a)\beta(r,a))^{\frac{1}{2}}$. We have plotted the zero mode $\chi_{0}(r,a)$ for some values of $a$ using the linear warp factor in fig. (\ref{masslesslinear}) and for the nonlinear warp factor in fig. (\ref{masslessnonlinear}). Since for every $a$ the function is integrable, we can say that the massless field is localized on the brane even in the singular conifold case. \begin{figure} \centering \includegraphics[scale=1.1]{./masslesslinear.eps} \caption{Plot of the zero-mode eigenfunction for the linear warp factor. For any $a$ it has an exponentially decreasing behavior and thus it is localized (but not symmetric) around the brane.} \label{masslesslinear} \end{figure} \begin{figure} \centering \includegraphics[scale=1.1]{./masslessnonlinear.eps} \caption{Plot of the zero-mode eigenfunction for the nonlinear warp factor. The function has compact support even for the singular conifold $a=0$ (dotted line). Note that this solution is localized and symmetric around the brane.} \label{masslessnonlinear} \end{figure} \section{Conclusion and perspectives} \label{Conclusions} We have studied the localization of a scalar field on a 3-brane in a six-dimensional resolved conifold background, built from a warped product between the 2-cycle of a resolved conifold and a 3-brane placed at the tip of the cone.
We have chosen a geometry such that, when the resolution parameter goes to zero or when we put the 3-brane far from the origin, we re-obtain the well-studied string-like geometries. The use of the resolved conifold as a transverse space brought a very nice feature: an extension of the solution of the hierarchy problem using the resolution parameter $a$. Indeed, in the Randall-Sundrum and string-like models, the ratio between the six-dimensional Planck mass $M_{6}$ and the four-dimensional Planck mass $M_{4}$ depends only on the cosmological constant $\Lambda$ and on the tension of the brane $\mu$. Since the resolution parameter has a purely geometric origin, this opens the way to using other parameter-dependent manifolds, like the Eguchi-Hanson spaces, Taub-NUT, etc., as transverse spaces. Furthermore, using parameter-dependent transverse spaces we could study the evolution and stability of the hierarchy between $M_{6}$ and $M_{4}$ through some mechanisms, for example, conical transitions or geometric fluxes. For a linear warp factor, the bulk geometry asymptotically approaches the $AdS_{6}$ space ($\Lambda <0$), which is similar to the string-like geometry. Further, the massive modes have a family of potentials parameterized by the resolution parameter. For $a\geq 1$, the potential decreases exponentially and then it is not possible to have stable trapped states on the brane. As $a\rightarrow 0$, two asymmetric minima appear around the 3-brane placed at the point $r=5$, but the potential reaches its maximum value on the brane. The respective eigenfunction is localized around the deepest minimum close to the brane. In the limit $a=0$, the potential well turns into an infinite potential well on the brane, and thus the eigenfunction is localized on the brane. Indeed, both for $\frac{a}{(r-5)}\rightarrow 0$ and for $\frac{(r-5)}{a}\rightarrow 0$ we obtain the well-known solution for the string-like geometry \cite{Gherghetta:2000qi}.
Besides, the massless modes are localized even in the singular case. Indeed, the action for a constant solution in this geometry has compact support and therefore is normalizable. The main difference between this conifold geometry and the standard string-like one lies in the parameterization of the potentials by a geometric factor. This parameterization provides the potentials with asymmetric minima far from the brane. Therefore, the resolution parameter could be used as a filter selecting which fields can be localized on the brane. Using the nonlinear warp factor $A(r)=\cosh(r)+\tanh^{2}(r)$, the scalar curvature diverges asymptotically, and so the energy-momentum tensor is not restricted to points close to the brane, in contrast to the string-like geometry. For the scalar field, the massive modes are trapped on the 3-brane even for the singular conifold. For $a\geq 1$ the potential behaves like the standard volcano potential with a minimum on the brane. As $a\rightarrow 0$, two more local minima appear inside the global minimum and the minimum on the brane becomes a maximum. Therefore, in the range $0<a< 1$ it is not possible to localize the scalar field on the brane. As for the linear warp factor, for $a=0$ the potential on the brane becomes a potential well again, but now an infinite one. Finally, the massless modes are localized for all values of $a$. Another difference between this geometry and the string-like one lies in the radial field solutions. Indeed, for string-like geometries the usual field solutions are Bessel functions, whereas in this work we have found confluent hypergeometric functions that depend on two parameters and are very sensitive to variations of these parameters. This was expected, since the field equation depends on the mass $m$ and on the angular number $l$. Also, the field equation, through the geometry, depends on the resolution parameter $a$.
We argue that using other parameter-dependent geometries we could find more general solutions and study their stability. We also argue that the successful localization of the massless modes in the singular conifold, for both the linear and the nonlinear warp factor, might be related to the fact that in conifold transitions, where singularities are produced in the manifold, some fields become massless. Thus, this close relationship between the conical singularity and the spectrum of massless fields provides the localization of the massless modes in the singular geometry. This work suggests many perspectives. Using the same geometry studied so far, we could study the localization of other fields in this scenario, like the vector, gravitational and spinor fields. Since these fields have more degrees of freedom than the scalar field, the effects of the asymmetry of the potential could be more relevant, for example, for the resonance of fermionic modes. Another direction would be to use other parameter-dependent geometries, like the deformed conifold or even orbifolds, instead of the resolved conifold. Since these smooth conifolds are related to supergravity solutions that behave near the horizon like $AdS_{5}\times S^{5}$, we could study the localization of fields where the transverse space is a well-known solution, like the Eguchi-Hanson spaces or the Klebanov-Strassler throat. Furthermore, since we have parameterized the geometry, we could study its stability using some analytical method, like the Ricci flow equation, where the flow variable would be the resolution parameter. Since there is a relation between the resolved conifold geometry and the cigar-like geometry, we could study the flow through a parameterized cigar geometry. On the other hand, we could use the bulk geometry considered here to study other problems, such as the small value of the cosmological constant on the 3-brane and supersymmetry breaking.
It is worthwhile to mention that, even though we have studied the behavior of the field in this geometry, we have not said how this geometry is generated. Hence, a next step could be to derive this geometry from some field content and thus give a physical meaning to the resolution parameter. The authors would like to thank the Brazilian agencies FUNCAP, CNPq and CAPES for financial support.
1,314,259,995,486
arxiv
\section{Introduction} ACM's consolidated article template, introduced in 2017, provides a consistent \LaTeX\ style for use across ACM publications, and incorporates accessibility and metadata-extraction functionality necessary for future Digital Library endeavors. Numerous ACM and SIG-specific \LaTeX\ templates have been examined, and their unique features incorporated into this single new template. If you are new to publishing with ACM, this document is a valuable guide to the process of preparing your work for publication. If you have published with ACM before, this document provides insight and instruction into more recent changes to the article template. The ``\verb|acmart|'' document class can be used to prepare articles for any ACM publication --- conference or journal, and for any stage of publication, from review to final ``camera-ready'' copy, to the author's own version, with {\itshape very} few changes to the source. \section{Template Overview} As noted in the introduction, the ``\verb|acmart|'' document class can be used to prepare many different kinds of documentation --- a double-blind initial submission of a full-length technical paper, a two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready'' journal article, a SIGCHI Extended Abstract, and more --- all by selecting the appropriate {\itshape template style} and {\itshape template parameters}. This document will explain the major features of the document class. For further information, the {\itshape \LaTeX\ User's Guide} is available from \url{https://www.acm.org/publications/proceedings-template}. \subsection{Template Styles} The primary parameter given to the ``\verb|acmart|'' document class is the {\itshape template style} which corresponds to the kind of publication or SIG publishing the work. 
This parameter is enclosed in square brackets and is a part of the {\verb|documentclass|} command: \begin{verbatim} \documentclass[STYLE]{acmart} \end{verbatim} Journals use one of three template styles. All but three ACM journals use the {\verb|acmsmall|} template style: \begin{itemize} \item {\verb|acmsmall|}: The default journal template style. \item {\verb|acmlarge|}: Used by JOCCH and TAP. \item {\verb|acmtog|}: Used by TOG. \end{itemize} The majority of conference proceedings documentation will use the {\verb|acmconf|} template style. \begin{itemize} \item {\verb|acmconf|}: The default proceedings template style. \item{\verb|sigchi|}: Used for SIGCHI conference articles. \item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles. \item{\verb|sigplan|}: Used for SIGPLAN conference articles. \end{itemize} \subsection{Template Parameters} In addition to specifying the {\itshape template style} to be used in formatting your work, there are a number of {\itshape template parameters} which modify some part of the applied template style. A complete list of these parameters can be found in the {\itshape \LaTeX\ User's Guide.} Frequently-used parameters, or combinations of parameters, include: \begin{itemize} \item {\verb|anonymous,review|}: Suitable for a ``double-blind'' conference submission. Anonymizes the work and includes line numbers. Use with the \verb|\acmSubmissionID| command to print the submission's unique ID on each page of the work. \item{\verb|authorversion|}: Produces a version of the work suitable for posting by the author. \item{\verb|screen|}: Produces colored hyperlinks. 
\end{itemize} This document uses the following string as the first command in the source file: \begin{verbatim} \documentclass[acmlarge]{acmart} \end{verbatim} \section{Modifications} Modifying the template --- including but not limited to: adjusting margins, typeface sizes, line spacing, paragraph and list definitions, and the use of the \verb|\vspace| command to manually adjust the vertical spacing between elements of your work --- is not allowed. {\bfseries Your document will be returned to you for revision if modifications are discovered.} \section{Typefaces} The ``\verb|acmart|'' document class requires the use of the ``Libertine'' typeface family. Your \TeX\ installation should include this set of packages. Please do not substitute other typefaces. The ``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used, as they will override the built-in typeface families. \section{Title Information} The title of your work should use capital letters appropriately - \url{https://capitalizemytitle.com/} has useful rules for capitalization. Use the {\verb|title|} command to define the title of your work. If your work has a subtitle, define it with the {\verb|subtitle|} command. Do not insert line breaks in your title. If your title is lengthy, you must define a short version to be used in the page headers, to prevent overlapping text. The \verb|title| command has a ``short title'' parameter: \begin{verbatim} \title[short title]{full title} \end{verbatim} \section{Authors and Affiliations} Each author must be defined separately for accurate metadata identification. Multiple authors may share one affiliation. Authors' names should not be abbreviated; use full first names wherever possible. Include authors' e-mail addresses whenever possible. 
Grouping authors' names or e-mail addresses, or providing an ``e-mail alias,'' as shown below, is not acceptable: \begin{verbatim} \author{Brooke Aster, David Mehldau} \email{dave,judy,steve@university.edu} \email{firstname.lastname@phillips.org} \end{verbatim} The \verb|authornote| and \verb|authornotemark| commands allow a note to apply to multiple authors --- for example, if the first two authors of an article contributed equally to the work. If your author list is lengthy, you must define a shortened version of the list of authors to be used in the page headers, to prevent overlapping text. The following command should be placed just after the last \verb|\author{}| definition: \begin{verbatim} \renewcommand{\shortauthors}{McCartney, et al.} \end{verbatim} Omitting this command will force the use of a concatenated list of all of the authors' names, which may result in overlapping text in the page headers. The article template's documentation, available at \url{https://www.acm.org/publications/proceedings-template}, has a complete explanation of these commands and tips for their effective use. Note that authors' addresses are mandatory for journal articles. \section{Rights Information} Authors of any work published by ACM will need to complete a rights form. Depending on the kind of work, and the rights management choice made by the author, this may be copyright transfer, permission, license, or an OA (open access) agreement. Regardless of the rights management choice, the author will receive a copy of the completed rights form once it has been submitted. This form contains \LaTeX\ commands that must be copied into the source document. When the document source is compiled, these commands and their parameters add formatted text to several areas of the final document: \begin{itemize} \item the ``ACM Reference Format'' text on the first page. \item the ``rights management'' text on the first page. \item the conference information in the page header(s). 
\end{itemize} Rights information is unique to the work; if you are preparing several works for an event, make sure to use the correct set of commands with each of the works. The ACM Reference Format text is required for all articles over one page in length, and is optional for one-page articles (abstracts). \section{CCS Concepts and User-Defined Keywords} Two elements of the ``acmart'' document class provide powerful taxonomic tools for you to help readers find your work in an online search. The ACM Computing Classification System --- \url{https://www.acm.org/publications/class-2012} --- is a set of classifiers and concepts that describe the computing discipline. Authors can select entries from this classification system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the commands to be included in the \LaTeX\ source. User-defined keywords are a comma-separated list of words and phrases of the authors' choosing, providing a more flexible way of describing the research being presented. CCS concepts and user-defined keywords are required for for all articles over two pages in length, and are optional for one- and two-page articles (or abstracts). \section{Sectioning Commands} Your work should use standard \LaTeX\ sectioning commands: \verb|section|, \verb|subsection|, \verb|subsubsection|, and \verb|paragraph|. They should be numbered; do not remove the numbering from the commands. Simulating a sectioning command by setting the first word or words of a paragraph in boldface or italicized text is {\bfseries not allowed.} \section{Tables} The ``\verb|acmart|'' document class includes the ``\verb|booktabs|'' package --- \url{https://ctan.org/pkg/booktabs} --- for preparing high-quality tables. Table captions are placed {\itshape above} the table. Because tables cannot be split across pages, the best placement for them is typically the top of the page nearest their initial cite. 
To ensure this proper ``floating'' placement of tables, use the environment \textbf{table} to enclose the table's contents and the table caption. The contents of the table itself must go in the \textbf{tabular} environment, to be aligned properly in rows and columns, with the desired horizontal and vertical rules. Again, detailed instructions on \textbf{tabular} material are found in the \textit{\LaTeX\ User's Guide}. Immediately following this sentence is the point at which Table~\ref{tab:freq} is included in the input file; compare the placement of the table here with the table in the printed output of this document. \begin{table} \caption{Frequency of Special Characters} \label{tab:freq} \begin{tabular}{ccl} \toprule Non-English or Math&Frequency&Comments\\ \midrule \O & 1 in 1,000& For Swedish names\\ $\pi$ & 1 in 5& Common in math\\ \$ & 4 in 5 & Used in business\\ $\Psi^2_1$ & 1 in 40,000& Unexplained usage\\ \bottomrule \end{tabular} \end{table} To set a wider table, which takes up the whole width of the page's live area, use the environment \textbf{table*} to enclose the table's contents and the table caption. As with a single-column table, this wide table will ``float'' to a location deemed more desirable. Immediately following this sentence is the point at which Table~\ref{tab:commands} is included in the input file; again, it is instructive to compare the placement of the table here with the table in the printed output of this document. \begin{table*} \caption{Some Typical Commands} \label{tab:commands} \begin{tabular}{ccl} \toprule Command &A Number & Comments\\ \midrule \texttt{{\char'134}author} & 100& Author \\ \texttt{{\char'134}table}& 300 & For tables\\ \texttt{{\char'134}table*}& 400& For wider tables\\ \bottomrule \end{tabular} \end{table*} \section{Math Equations} You may want to display math equations in three distinct styles: inline, numbered or non-numbered display. Each of the three are discussed in the next sections. 
\subsection{Inline (In-text) Equations} A formula that appears in the running text is called an inline or in-text formula. It is produced by the \textbf{math} environment, which can be invoked with the usual \texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with the short form \texttt{\$\,\ldots\$}. You can use any of the symbols and structures, from $\alpha$ to $\omega$, available in \LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few examples of in-text equations in context. Notice how this equation: \begin{math} \lim_{n\rightarrow \infty}x=0 \end{math}, set here in in-line math style, looks slightly different when set in display style. (See next section). \subsection{Display Equations} A numbered display equation---one set off by vertical space from the text and centered horizontally---is produced by the \textbf{equation} environment. An unnumbered display equation is produced by the \textbf{displaymath} environment. Again, in either environment, you can use any of the symbols and structures available in \LaTeX\@; this section will just give a couple of examples of display equations in context. First, consider the equation, shown as an inline equation above: \begin{equation} \lim_{n\rightarrow \infty}x=0 \end{equation} Notice how it is formatted somewhat differently in the \textbf{displaymath} environment. Now, we'll enter an unnumbered equation: \begin{displaymath} \sum_{i=0}^{\infty} x + 1 \end{displaymath} and follow it with another numbered equation: \begin{equation} \sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f \end{equation} just to demonstrate \LaTeX's able handling of numbering. \section{Figures} The ``\verb|figure|'' environment should be used for figures. One or more images can be placed within a figure. If your figure contains third-party material, you must clearly identify it as such, as shown in the example below. 
\begin{figure}[h] \centering \includegraphics[width=\linewidth]{sample-franklin} \caption{1907 Franklin Model D roadster. Photograph by Harris \& Ewing, Inc. [Public domain], via Wikimedia Commons. (\url{https://goo.gl/VLCRBB}).} \Description{The 1907 Franklin Model D roadster.} \end{figure} Your figures should contain a caption which describes the figure to the reader; figure captions are placed {\itshape below} the figure. Your figures should {\bfseries also} include a description suitable for screen readers, to assist the visually-challenged to better understand your work. \subsection{The ``Teaser Figure''} A ``teaser figure'' is an image, or set of images in one figure, that is placed after all author and affiliation information, and before the body of the article, spanning the page. If you wish to have such a figure in your article, place the command immediately before the \verb|\maketitle| command: \begin{verbatim} \begin{teaserfigure} \includegraphics[width=\textwidth]{sampleteaser} \caption{figure caption} \Description{figure description} \end{teaserfigure} \end{verbatim} \section{Citations and Bibliographies} The use of {\rm B\kern-.05em{\sc i\kern-.025em b}\kern-.08emT\kern-.1667em\lower.7ex\hbox{E}\kern-.125emX}\ for the preparation and formatting of one's references is strongly recommended. Authors' names should be complete --- use full first names (``Donald E. Knuth'') not initials (``D. E. Knuth'') --- and the salient identifying features of a reference should be included: title, year, volume, number, pages, article DOI, etc.
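For example, a complete reference with full author names might be recorded in the \texttt{.bib} file as follows (the citation key and field values here are illustrative choices, not requirements of the document class):
\begin{verbatim}
@book{Knuth:1997,
  author    = {Donald E. Knuth},
  title     = {The Art of Computer Programming,
               Volume 1: Fundamental Algorithms},
  publisher = {Addison-Wesley},
  year      = {1997},
  edition   = {3rd},
}
\end{verbatim}
In the body of the work, this entry is then cited with \verb|\cite{Knuth:1997}|, and the ACM reference format takes care of the final presentation.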
The bibliography is included in your source document with these two commands, placed just before the \verb|\end{document}| command: \begin{verbatim} \bibliographystyle{ACM-Reference-Format} \bibliography{sample-base} \end{verbatim}
This parameter is enclosed in square brackets and is a part of the {\verb|documentclass|} command: \begin{verbatim} \documentclass[STYLE]{acmart} \end{verbatim} Journals use one of three template styles. All but three ACM journals use the {\verb|acmsmall|} template style: \begin{itemize} \item {\verb|acmsmall|}: The default journal template style. \item {\verb|acmlarge|}: Used by JOCCH and TAP. \item {\verb|acmtog|}: Used by TOG. \end{itemize} The majority of conference proceedings documentation will use the {\verb|acmconf|} template style. \begin{itemize} \item {\verb|acmconf|}: The default proceedings template style. \item{\verb|sigchi|}: Used for SIGCHI conference articles. \item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles. \item{\verb|sigplan|}: Used for SIGPLAN conference articles. \end{itemize} \subsection{Template Parameters} In addition to specifying the {\itshape template style} to be used in formatting your work, there are a number of {\itshape template parameters} which modify some part of the applied template style. A complete list of these parameters can be found in the {\itshape \LaTeX\ User's Guide.} Frequently-used parameters, or combinations of parameters, include: \begin{itemize} \item {\verb|anonymous,review|}: Suitable for a ``double-blind'' conference submission. Anonymizes the work and includes line numbers. Use with the \verb|\acmSubmissionID| command to print the submission's unique ID on each page of the work. \item{\verb|authorversion|}: Produces a version of the work suitable for posting by the author. \item{\verb|screen|}: Produces colored hyperlinks. 
\end{itemize} This document uses the following string as the first command in the source file: \begin{verbatim} \documentclass[acmsmall]{acmart} \end{verbatim} \section{Modifications} Modifying the template --- including but not limited to: adjusting margins, typeface sizes, line spacing, paragraph and list definitions, and the use of the \verb|\vspace| command to manually adjust the vertical spacing between elements of your work --- is not allowed. {\bfseries Your document will be returned to you for revision if modifications are discovered.} \section{Typefaces} The ``\verb|acmart|'' document class requires the use of the ``Libertine'' typeface family. Your \TeX\ installation should include this set of packages. Please do not substitute other typefaces. The ``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used, as they will override the built-in typeface families. \section{Title Information} The title of your work should use capital letters appropriately - \url{https://capitalizemytitle.com/} has useful rules for capitalization. Use the {\verb|title|} command to define the title of your work. If your work has a subtitle, define it with the {\verb|subtitle|} command. Do not insert line breaks in your title. If your title is lengthy, you must define a short version to be used in the page headers, to prevent overlapping text. The \verb|title| command has a ``short title'' parameter: \begin{verbatim} \title[short title]{full title} \end{verbatim} \section{Authors and Affiliations} Each author must be defined separately for accurate metadata identification. Multiple authors may share one affiliation. Authors' names should not be abbreviated; use full first names wherever possible. Include authors' e-mail addresses whenever possible. 
Grouping authors' names or e-mail addresses, or providing an ``e-mail alias,'' as shown below, is not acceptable: \begin{verbatim} \author{Brooke Aster, David Mehldau} \email{dave,judy,steve@university.edu} \email{firstname.lastname@phillips.org} \end{verbatim} The \verb|authornote| and \verb|authornotemark| commands allow a note to apply to multiple authors --- for example, if the first two authors of an article contributed equally to the work. If your author list is lengthy, you must define a shortened version of the list of authors to be used in the page headers, to prevent overlapping text. The following command should be placed just after the last \verb|\author{}| definition: \begin{verbatim} \renewcommand{\shortauthors}{McCartney, et al.} \end{verbatim} Omitting this command will force the use of a concatenated list of all of the authors' names, which may result in overlapping text in the page headers. The article template's documentation, available at \url{https://www.acm.org/publications/proceedings-template}, has a complete explanation of these commands and tips for their effective use. Note that authors' addresses are mandatory for journal articles. \section{Rights Information} Authors of any work published by ACM will need to complete a rights form. Depending on the kind of work, and the rights management choice made by the author, this may be copyright transfer, permission, license, or an OA (open access) agreement. Regardless of the rights management choice, the author will receive a copy of the completed rights form once it has been submitted. This form contains \LaTeX\ commands that must be copied into the source document. When the document source is compiled, these commands and their parameters add formatted text to several areas of the final document: \begin{itemize} \item the ``ACM Reference Format'' text on the first page. \item the ``rights management'' text on the first page. \item the conference information in the page header(s). 
\end{itemize} Rights information is unique to the work; if you are preparing several works for an event, make sure to use the correct set of commands with each of the works. The ACM Reference Format text is required for all articles over one page in length, and is optional for one-page articles (abstracts). \section{CCS Concepts and User-Defined Keywords} Two elements of the ``acmart'' document class provide powerful taxonomic tools for you to help readers find your work in an online search. The ACM Computing Classification System --- \url{https://www.acm.org/publications/class-2012} --- is a set of classifiers and concepts that describe the computing discipline. Authors can select entries from this classification system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the commands to be included in the \LaTeX\ source. User-defined keywords are a comma-separated list of words and phrases of the authors' choosing, providing a more flexible way of describing the research being presented. CCS concepts and user-defined keywords are required for for all articles over two pages in length, and are optional for one- and two-page articles (or abstracts). \section{Sectioning Commands} Your work should use standard \LaTeX\ sectioning commands: \verb|section|, \verb|subsection|, \verb|subsubsection|, and \verb|paragraph|. They should be numbered; do not remove the numbering from the commands. Simulating a sectioning command by setting the first word or words of a paragraph in boldface or italicized text is {\bfseries not allowed.} \section{Tables} The ``\verb|acmart|'' document class includes the ``\verb|booktabs|'' package --- \url{https://ctan.org/pkg/booktabs} --- for preparing high-quality tables. Table captions are placed {\itshape above} the table. Because tables cannot be split across pages, the best placement for them is typically the top of the page nearest their initial cite. 
To ensure this proper ``floating'' placement of tables, use the environment \textbf{table} to enclose the table's contents and the table caption. The contents of the table itself must go in the \textbf{tabular} environment, to be aligned properly in rows and columns, with the desired horizontal and vertical rules. Again, detailed instructions on \textbf{tabular} material are found in the \textit{\LaTeX\ User's Guide}. Immediately following this sentence is the point at which Table~\ref{tab:freq} is included in the input file; compare the placement of the table here with the table in the printed output of this document. \begin{table} \caption{Frequency of Special Characters} \label{tab:freq} \begin{tabular}{ccl} \toprule Non-English or Math&Frequency&Comments\\ \midrule \O & 1 in 1,000& For Swedish names\\ $\pi$ & 1 in 5& Common in math\\ \$ & 4 in 5 & Used in business\\ $\Psi^2_1$ & 1 in 40,000& Unexplained usage\\ \bottomrule \end{tabular} \end{table} To set a wider table, which takes up the whole width of the page's live area, use the environment \textbf{table*} to enclose the table's contents and the table caption. As with a single-column table, this wide table will ``float'' to a location deemed more desirable. Immediately following this sentence is the point at which Table~\ref{tab:commands} is included in the input file; again, it is instructive to compare the placement of the table here with the table in the printed output of this document. \begin{table*} \caption{Some Typical Commands} \label{tab:commands} \begin{tabular}{ccl} \toprule Command &A Number & Comments\\ \midrule \texttt{{\char'134}author} & 100& Author \\ \texttt{{\char'134}table}& 300 & For tables\\ \texttt{{\char'134}table*}& 400& For wider tables\\ \bottomrule \end{tabular} \end{table*} \section{Math Equations} You may want to display math equations in three distinct styles: inline, numbered or non-numbered display. Each of the three are discussed in the next sections. 
\subsection{Inline (In-text) Equations} A formula that appears in the running text is called an inline or in-text formula. It is produced by the \textbf{math} environment, which can be invoked with the usual \texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with the short form \texttt{\$\,\ldots\$}. You can use any of the symbols and structures, from $\alpha$ to $\omega$, available in \LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few examples of in-text equations in context. Notice how this equation: \begin{math} \lim_{n\rightarrow \infty}x=0 \end{math}, set here in in-line math style, looks slightly different when set in display style. (See next section). \subsection{Display Equations} A numbered display equation---one set off by vertical space from the text and centered horizontally---is produced by the \textbf{equation} environment. An unnumbered display equation is produced by the \textbf{displaymath} environment. Again, in either environment, you can use any of the symbols and structures available in \LaTeX\@; this section will just give a couple of examples of display equations in context. First, consider the equation, shown as an inline equation above: \begin{equation} \lim_{n\rightarrow \infty}x=0 \end{equation} Notice how it is formatted somewhat differently in the \textbf{displaymath} environment. Now, we'll enter an unnumbered equation: \begin{displaymath} \sum_{i=0}^{\infty} x + 1 \end{displaymath} and follow it with another numbered equation: \begin{equation} \sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f \end{equation} just to demonstrate \LaTeX's able handling of numbering. \section{Figures} The ``\verb|figure|'' environment should be used for figures. One or more images can be placed within a figure. If your figure contains third-party material, you must clearly identify it as such, as shown in the example below. 
\begin{figure}[h] \centering \includegraphics[width=\linewidth]{sample-franklin} \caption{1907 Franklin Model D roadster. Photograph by Harris \& Ewing, Inc. [Public domain], via Wikimedia Commons. (\url{https://goo.gl/VLCRBB}).} \Description{The 1907 Franklin Model D roadster.} \end{figure} Your figures should contain a caption which describes the figure to the reader. Figure captions go below the figure. Your figures should {\bfseries also} include a description suitable for screen readers, to assist the visually-challenged to better understand your work. Figure captions are placed {\itshape below} the figure. \subsection{The ``Teaser Figure''} A ``teaser figure'' is an image, or set of images in one figure, that are placed after all author and affiliation information, and before the body of the article, spanning the page. If you wish to have such a figure in your article, place the command immediately before the \verb|\maketitle| command: \begin{verbatim} \begin{teaserfigure} \includegraphics[width=\textwidth]{sampleteaser} \caption{figure caption} \Description{figure description} \end{teaserfigure} \end{verbatim} \section{Citations and Bibliographies} The use of {\rm B\kern-.05em{\sc i\kern-.025em b}\kern-.08emT\kern-.1667em\lower.7ex\hbox{E}\kern-.125emX}\ for the preparation and formatting of one's references is strongly recommended. Authors' names should be complete --- use full first names (``Donald E. Knuth'') not initials (``D. E. Knuth'') --- and the salient identifying features of a reference should be included: title, year, volume, number, pages, article DOI, etc. 
The bibliography is included in your source document with these two commands, placed just before the \verb|\end{document}| command: \begin{verbatim} \bibliographystyle{ACM-Reference-Format} \section{Introduction} ACM's consolidated article template, introduced in 2017, provides a consistent \LaTeX\ style for use across ACM publications, and incorporates accessibility and metadata-extraction functionality necessary for future Digital Library endeavors. Numerous ACM and SIG-specific \LaTeX\ templates have been examined, and their unique features incorporated into this single new template. If you are new to publishing with ACM, this document is a valuable guide to the process of preparing your work for publication. If you have published with ACM before, this document provides insight and instruction into more recent changes to the article template. The ``\verb|acmart|'' document class can be used to prepare articles for any ACM publication --- conference or journal, and for any stage of publication, from review to final ``camera-ready'' copy, to the author's own version, with {\itshape very} few changes to the source. \section{Template Overview} As noted in the introduction, the ``\verb|acmart|'' document class can be used to prepare many different kinds of documentation --- a double-blind initial submission of a full-length technical paper, a two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready'' journal article, a SIGCHI Extended Abstract, and more --- all by selecting the appropriate {\itshape template style} and {\itshape template parameters}. This document will explain the major features of the document class. For further information, the {\itshape \LaTeX\ User's Guide} is available from \url{https://www.acm.org/publications/proceedings-template}. \subsection{Template Styles} The primary parameter given to the ``\verb|acmart|'' document class is the {\itshape template style} which corresponds to the kind of publication or SIG publishing the work. 
This parameter is enclosed in square brackets and is a part of the {\verb|documentclass|} command: \begin{verbatim} \documentclass[STYLE]{acmart} \end{verbatim} Journals use one of three template styles. All but three ACM journals use the {\verb|acmsmall|} template style: \begin{itemize} \item {\verb|acmsmall|}: The default journal template style. \item {\verb|acmlarge|}: Used by JOCCH and TAP. \item {\verb|acmtog|}: Used by TOG. \end{itemize} The majority of conference proceedings documentation will use the {\verb|acmconf|} template style. \begin{itemize} \item {\verb|acmconf|}: The default proceedings template style. \item{\verb|sigchi|}: Used for SIGCHI conference articles. \item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles. \item{\verb|sigplan|}: Used for SIGPLAN conference articles. \end{itemize} \subsection{Template Parameters} In addition to specifying the {\itshape template style} to be used in formatting your work, there are a number of {\itshape template parameters} which modify some part of the applied template style. A complete list of these parameters can be found in the {\itshape \LaTeX\ User's Guide.} Frequently-used parameters, or combinations of parameters, include: \begin{itemize} \item {\verb|anonymous,review|}: Suitable for a ``double-blind'' conference submission. Anonymizes the work and includes line numbers. Use with the \verb|\acmSubmissionID| command to print the submission's unique ID on each page of the work. \item{\verb|authorversion|}: Produces a version of the work suitable for posting by the author. \item{\verb|screen|}: Produces colored hyperlinks. 
\end{itemize} This document uses the following string as the first command in the source file: \begin{verbatim} \documentclass[acmsmall]{acmart} \end{verbatim} \section{Modifications} Modifying the template --- including but not limited to: adjusting margins, typeface sizes, line spacing, paragraph and list definitions, and the use of the \verb|\vspace| command to manually adjust the vertical spacing between elements of your work --- is not allowed. {\bfseries Your document will be returned to you for revision if modifications are discovered.} \section{Typefaces} The ``\verb|acmart|'' document class requires the use of the ``Libertine'' typeface family. Your \TeX\ installation should include this set of packages. Please do not substitute other typefaces. The ``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used, as they will override the built-in typeface families. \section{Title Information} The title of your work should use capital letters appropriately - \url{https://capitalizemytitle.com/} has useful rules for capitalization. Use the {\verb|title|} command to define the title of your work. If your work has a subtitle, define it with the {\verb|subtitle|} command. Do not insert line breaks in your title. If your title is lengthy, you must define a short version to be used in the page headers, to prevent overlapping text. The \verb|title| command has a ``short title'' parameter: \begin{verbatim} \title[short title]{full title} \end{verbatim} \section{Authors and Affiliations} Each author must be defined separately for accurate metadata identification. Multiple authors may share one affiliation. Authors' names should not be abbreviated; use full first names wherever possible. Include authors' e-mail addresses whenever possible. 
Grouping authors' names or e-mail addresses, or providing an ``e-mail alias,'' as shown below, is not acceptable: \begin{verbatim} \author{Brooke Aster, David Mehldau} \email{dave,judy,steve@university.edu} \email{firstname.lastname@phillips.org} \end{verbatim} The \verb|authornote| and \verb|authornotemark| commands allow a note to apply to multiple authors --- for example, if the first two authors of an article contributed equally to the work. If your author list is lengthy, you must define a shortened version of the list of authors to be used in the page headers, to prevent overlapping text. The following command should be placed just after the last \verb|\author{}| definition: \begin{verbatim} \renewcommand{\shortauthors}{McCartney, et al.} \end{verbatim} Omitting this command will force the use of a concatenated list of all of the authors' names, which may result in overlapping text in the page headers. The article template's documentation, available at \url{https://www.acm.org/publications/proceedings-template}, has a complete explanation of these commands and tips for their effective use. Note that authors' addresses are mandatory for journal articles. \section{Rights Information} Authors of any work published by ACM will need to complete a rights form. Depending on the kind of work, and the rights management choice made by the author, this may be copyright transfer, permission, license, or an OA (open access) agreement. Regardless of the rights management choice, the author will receive a copy of the completed rights form once it has been submitted. This form contains \LaTeX\ commands that must be copied into the source document. When the document source is compiled, these commands and their parameters add formatted text to several areas of the final document: \begin{itemize} \item the ``ACM Reference Format'' text on the first page. \item the ``rights management'' text on the first page. \item the conference information in the page header(s). 
\end{itemize} Rights information is unique to the work; if you are preparing several works for an event, make sure to use the correct set of commands with each of the works. The ACM Reference Format text is required for all articles over one page in length, and is optional for one-page articles (abstracts). \section{CCS Concepts and User-Defined Keywords} Two elements of the ``acmart'' document class provide powerful taxonomic tools for you to help readers find your work in an online search. The ACM Computing Classification System --- \url{https://www.acm.org/publications/class-2012} --- is a set of classifiers and concepts that describe the computing discipline. Authors can select entries from this classification system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the commands to be included in the \LaTeX\ source. User-defined keywords are a comma-separated list of words and phrases of the authors' choosing, providing a more flexible way of describing the research being presented. CCS concepts and user-defined keywords are required for for all articles over two pages in length, and are optional for one- and two-page articles (or abstracts). \section{Sectioning Commands} Your work should use standard \LaTeX\ sectioning commands: \verb|section|, \verb|subsection|, \verb|subsubsection|, and \verb|paragraph|. They should be numbered; do not remove the numbering from the commands. Simulating a sectioning command by setting the first word or words of a paragraph in boldface or italicized text is {\bfseries not allowed.} \section{Tables} The ``\verb|acmart|'' document class includes the ``\verb|booktabs|'' package --- \url{https://ctan.org/pkg/booktabs} --- for preparing high-quality tables. Table captions are placed {\itshape above} the table. Because tables cannot be split across pages, the best placement for them is typically the top of the page nearest their initial cite. 
To ensure this proper ``floating'' placement of tables, use the environment \textbf{table} to enclose the table's contents and the table caption. The contents of the table itself must go in the \textbf{tabular} environment, to be aligned properly in rows and columns, with the desired horizontal and vertical rules. Again, detailed instructions on \textbf{tabular} material are found in the \textit{\LaTeX\ User's Guide}. Immediately following this sentence is the point at which Table~\ref{tab:freq} is included in the input file; compare the placement of the table here with the table in the printed output of this document. \begin{table} \caption{Frequency of Special Characters} \label{tab:freq} \begin{tabular}{ccl} \toprule Non-English or Math&Frequency&Comments\\ \midrule \O & 1 in 1,000& For Swedish names\\ $\pi$ & 1 in 5& Common in math\\ \$ & 4 in 5 & Used in business\\ $\Psi^2_1$ & 1 in 40,000& Unexplained usage\\ \bottomrule \end{tabular} \end{table} To set a wider table, which takes up the whole width of the page's live area, use the environment \textbf{table*} to enclose the table's contents and the table caption. As with a single-column table, this wide table will ``float'' to a location deemed more desirable. Immediately following this sentence is the point at which Table~\ref{tab:commands} is included in the input file; again, it is instructive to compare the placement of the table here with the table in the printed output of this document. \begin{table*} \caption{Some Typical Commands} \label{tab:commands} \begin{tabular}{ccl} \toprule Command &A Number & Comments\\ \midrule \texttt{{\char'134}author} & 100& Author \\ \texttt{{\char'134}table}& 300 & For tables\\ \texttt{{\char'134}table*}& 400& For wider tables\\ \bottomrule \end{tabular} \end{table*} \section{Math Equations} You may want to display math equations in three distinct styles: inline, numbered or non-numbered display. Each of the three are discussed in the next sections. 
\subsection{Inline (In-text) Equations} A formula that appears in the running text is called an inline or in-text formula. It is produced by the \textbf{math} environment, which can be invoked with the usual \texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with the short form \texttt{\$\,\ldots\$}. You can use any of the symbols and structures, from $\alpha$ to $\omega$, available in \LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few examples of in-text equations in context. Notice how this equation: \begin{math} \lim_{n\rightarrow \infty}x=0 \end{math}, set here in in-line math style, looks slightly different when set in display style. (See next section). \subsection{Display Equations} A numbered display equation---one set off by vertical space from the text and centered horizontally---is produced by the \textbf{equation} environment. An unnumbered display equation is produced by the \textbf{displaymath} environment. Again, in either environment, you can use any of the symbols and structures available in \LaTeX\@; this section will just give a couple of examples of display equations in context. First, consider the equation, shown as an inline equation above: \begin{equation} \lim_{n\rightarrow \infty}x=0 \end{equation} Notice how it is formatted somewhat differently in the \textbf{displaymath} environment. Now, we'll enter an unnumbered equation: \begin{displaymath} \sum_{i=0}^{\infty} x + 1 \end{displaymath} and follow it with another numbered equation: \begin{equation} \sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f \end{equation} just to demonstrate \LaTeX's able handling of numbering. \section{Figures} The ``\verb|figure|'' environment should be used for figures. One or more images can be placed within a figure. If your figure contains third-party material, you must clearly identify it as such, as shown in the example below. 
\begin{figure}[h] \centering \includegraphics[width=\linewidth]{sample-franklin} \caption{1907 Franklin Model D roadster. Photograph by Harris \& Ewing, Inc. [Public domain], via Wikimedia Commons. (\url{https://goo.gl/VLCRBB}).} \Description{The 1907 Franklin Model D roadster.} \end{figure} Your figures should contain a caption which describes the figure to the reader. Figure captions go below the figure. Your figures should {\bfseries also} include a description suitable for screen readers, to assist the visually-challenged to better understand your work. Figure captions are placed {\itshape below} the figure. \subsection{The ``Teaser Figure''} A ``teaser figure'' is an image, or set of images in one figure, that are placed after all author and affiliation information, and before the body of the article, spanning the page. If you wish to have such a figure in your article, place the command immediately before the \verb|\maketitle| command: \begin{verbatim} \begin{teaserfigure} \includegraphics[width=\textwidth]{sampleteaser} \caption{figure caption} \Description{figure description} \end{teaserfigure} \end{verbatim} \section{Citations and Bibliographies} The use of {\rm B\kern-.05em{\sc i\kern-.025em b}\kern-.08emT\kern-.1667em\lower.7ex\hbox{E}\kern-.125emX}\ for the preparation and formatting of one's references is strongly recommended. Authors' names should be complete --- use full first names (``Donald E. Knuth'') not initials (``D. E. Knuth'') --- and the salient identifying features of a reference should be included: title, year, volume, number, pages, article DOI, etc. 
The bibliography is included in your source document with these two commands, placed just before the \verb|\end{document}| command: \begin{verbatim} \bibliographystyle{ACM-Reference-Format} \section{Introduction} ACM's consolidated article template, introduced in 2017, provides a consistent \LaTeX\ style for use across ACM publications, and incorporates accessibility and metadata-extraction functionality necessary for future Digital Library endeavors. Numerous ACM and SIG-specific \LaTeX\ templates have been examined, and their unique features incorporated into this single new template. If you are new to publishing with ACM, this document is a valuable guide to the process of preparing your work for publication. If you have published with ACM before, this document provides insight and instruction into more recent changes to the article template. The ``\verb|acmart|'' document class can be used to prepare articles for any ACM publication --- conference or journal, and for any stage of publication, from review to final ``camera-ready'' copy, to the author's own version, with {\itshape very} few changes to the source. \section{Template Overview} As noted in the introduction, the ``\verb|acmart|'' document class can be used to prepare many different kinds of documentation --- a double-blind initial submission of a full-length technical paper, a two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready'' journal article, a SIGCHI Extended Abstract, and more --- all by selecting the appropriate {\itshape template style} and {\itshape template parameters}. This document will explain the major features of the document class. For further information, the {\itshape \LaTeX\ User's Guide} is available from \url{https://www.acm.org/publications/proceedings-template}. \subsection{Template Styles} The primary parameter given to the ``\verb|acmart|'' document class is the {\itshape template style} which corresponds to the kind of publication or SIG publishing the work. 
This parameter is enclosed in square brackets and is a part of the {\verb|documentclass|} command: \begin{verbatim} \documentclass[STYLE]{acmart} \end{verbatim} Journals use one of three template styles. All but three ACM journals use the {\verb|acmsmall|} template style: \begin{itemize} \item {\verb|acmsmall|}: The default journal template style. \item {\verb|acmlarge|}: Used by JOCCH and TAP. \item {\verb|acmtog|}: Used by TOG. \end{itemize} The majority of conference proceedings documentation will use the {\verb|acmconf|} template style. \begin{itemize} \item {\verb|acmconf|}: The default proceedings template style. \item{\verb|sigchi|}: Used for SIGCHI conference articles. \item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles. \item{\verb|sigplan|}: Used for SIGPLAN conference articles. \end{itemize} \subsection{Template Parameters} In addition to specifying the {\itshape template style} to be used in formatting your work, there are a number of {\itshape template parameters} which modify some part of the applied template style. A complete list of these parameters can be found in the {\itshape \LaTeX\ User's Guide.} Frequently-used parameters, or combinations of parameters, include: \begin{itemize} \item {\verb|anonymous,review|}: Suitable for a ``double-blind'' conference submission. Anonymizes the work and includes line numbers. Use with the \verb|\acmSubmissionID| command to print the submission's unique ID on each page of the work. \item{\verb|authorversion|}: Produces a version of the work suitable for posting by the author. \item{\verb|screen|}: Produces colored hyperlinks. 
\end{itemize}

This document uses the following string as the first command in the source file:
\begin{verbatim}
\documentclass[acmtog]{acmart}
\end{verbatim}

\section{Modifications}

Modifying the template --- including but not limited to: adjusting margins, typeface sizes, line spacing, paragraph and list definitions, and the use of the \verb|\vspace| command to manually adjust the vertical spacing between elements of your work --- is not allowed. {\bfseries Your document will be returned to you for revision if modifications are discovered.}

\section{Typefaces}

The ``\verb|acmart|'' document class requires the use of the ``Libertine'' typeface family. Your \TeX\ installation should include this set of packages. Please do not substitute other typefaces. The ``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used, as they will override the built-in typeface families.

\section{Title Information}

The title of your work should use capital letters appropriately; \url{https://capitalizemytitle.com/} has useful rules for capitalization. Use the {\verb|title|} command to define the title of your work. If your work has a subtitle, define it with the {\verb|subtitle|} command. Do not insert line breaks in your title. If your title is lengthy, you must define a short version to be used in the page headers, to prevent overlapping text. The \verb|title| command has a ``short title'' parameter:
\begin{verbatim}
\title[short title]{full title}
\end{verbatim}

\section{Authors and Affiliations}

Each author must be defined separately for accurate metadata identification. Multiple authors may share one affiliation. Authors' names should not be abbreviated; use full first names wherever possible. Include authors' e-mail addresses whenever possible.
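For example, a single author might be defined with a group of commands of the following form; the name, e-mail address, and affiliation values here are placeholders, and the \verb|\affiliation| sub-commands shown (\verb|\institution|, \verb|\city|, \verb|\country|) are the commonly used ones:
\begin{verbatim}
\author{Brooke Aster}
\affiliation{%
  \institution{Example University}
  \city{Springfield}
  \country{USA}}
\email{brooke.aster@university.edu}
\end{verbatim}
Repeat this group of commands once for each author of the work.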
Grouping authors' names or e-mail addresses, or providing an ``e-mail alias,'' as shown below, is not acceptable: \begin{verbatim} \author{Brooke Aster, David Mehldau} \email{dave,judy,steve@university.edu} \email{firstname.lastname@phillips.org} \end{verbatim} The \verb|authornote| and \verb|authornotemark| commands allow a note to apply to multiple authors --- for example, if the first two authors of an article contributed equally to the work. If your author list is lengthy, you must define a shortened version of the list of authors to be used in the page headers, to prevent overlapping text. The following command should be placed just after the last \verb|\author{}| definition: \begin{verbatim} \renewcommand{\shortauthors}{McCartney, et al.} \end{verbatim} Omitting this command will force the use of a concatenated list of all of the authors' names, which may result in overlapping text in the page headers. The article template's documentation, available at \url{https://www.acm.org/publications/proceedings-template}, has a complete explanation of these commands and tips for their effective use. Note that authors' addresses are mandatory for journal articles. \section{Rights Information} Authors of any work published by ACM will need to complete a rights form. Depending on the kind of work, and the rights management choice made by the author, this may be copyright transfer, permission, license, or an OA (open access) agreement. Regardless of the rights management choice, the author will receive a copy of the completed rights form once it has been submitted. This form contains \LaTeX\ commands that must be copied into the source document. When the document source is compiled, these commands and their parameters add formatted text to several areas of the final document: \begin{itemize} \item the ``ACM Reference Format'' text on the first page. \item the ``rights management'' text on the first page. \item the conference information in the page header(s). 
\end{itemize}

Rights information is unique to the work; if you are preparing several works for an event, make sure to use the correct set of commands with each of the works. The ACM Reference Format text is required for all articles over one page in length, and is optional for one-page articles (abstracts).

\section{CCS Concepts and User-Defined Keywords}

Two elements of the ``acmart'' document class provide powerful taxonomic tools for you to help readers find your work in an online search. The ACM Computing Classification System --- \url{https://www.acm.org/publications/class-2012} --- is a set of classifiers and concepts that describe the computing discipline. Authors can select entries from this classification system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the commands to be included in the \LaTeX\ source. User-defined keywords are a comma-separated list of words and phrases of the authors' choosing, providing a more flexible way of describing the research being presented. CCS concepts and user-defined keywords are required for all articles over two pages in length, and are optional for one- and two-page articles (or abstracts).

\section{Sectioning Commands}

Your work should use standard \LaTeX\ sectioning commands: \verb|section|, \verb|subsection|, \verb|subsubsection|, and \verb|paragraph|. They should be numbered; do not remove the numbering from the commands. Simulating a sectioning command by setting the first word or words of a paragraph in boldface or italicized text is {\bfseries not allowed.}

\section{Tables}

The ``\verb|acmart|'' document class includes the ``\verb|booktabs|'' package --- \url{https://ctan.org/pkg/booktabs} --- for preparing high-quality tables. Table captions are placed {\itshape above} the table. Because tables cannot be split across pages, the best placement for them is typically the top of the page nearest their initial citation.
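Returning briefly to the CCS concepts and keywords described above: the commands generated by the online tool, together with a \verb|\keywords| list, are placed in the document preamble. The concept string and keywords below are placeholders only, and the tool also emits a matching \verb|CCSXML| block to paste alongside these commands:
\begin{verbatim}
\ccsdesc[500]{Computing methodologies~Machine learning}
\keywords{neural networks, datasets, benchmarks}
\end{verbatim}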
To ensure this proper ``floating'' placement of tables, use the environment \textbf{table} to enclose the table's contents and the table caption. The contents of the table itself must go in the \textbf{tabular} environment, to be aligned properly in rows and columns, with the desired horizontal and vertical rules. Again, detailed instructions on \textbf{tabular} material are found in the \textit{\LaTeX\ User's Guide}.

Immediately following this sentence is the point at which Table~\ref{tab:freq} is included in the input file; compare the placement of the table here with the table in the printed output of this document.

\begin{table}
  \caption{Frequency of Special Characters}
  \label{tab:freq}
  \begin{tabular}{ccl}
    \toprule
    Non-English or Math&Frequency&Comments\\
    \midrule
    \O & 1 in 1,000& For Swedish names\\
    $\pi$ & 1 in 5& Common in math\\
    \$ & 4 in 5 & Used in business\\
    $\Psi^2_1$ & 1 in 40,000& Unexplained usage\\
    \bottomrule
  \end{tabular}
\end{table}

To set a wider table, which takes up the whole width of the page's live area, use the environment \textbf{table*} to enclose the table's contents and the table caption. As with a single-column table, this wide table will ``float'' to a location deemed more desirable. Immediately following this sentence is the point at which Table~\ref{tab:commands} is included in the input file; again, it is instructive to compare the placement of the table here with the table in the printed output of this document.

\begin{table*}
  \caption{Some Typical Commands}
  \label{tab:commands}
  \begin{tabular}{ccl}
    \toprule
    Command &A Number & Comments\\
    \midrule
    \texttt{{\char'134}author} & 100& Author \\
    \texttt{{\char'134}table}& 300 & For tables\\
    \texttt{{\char'134}table*}& 400& For wider tables\\
    \bottomrule
  \end{tabular}
\end{table*}

\section{Math Equations}

You may want to display math equations in three distinct styles: inline, numbered, or non-numbered display. Each of the three is discussed in the next sections.
\subsection{Inline (In-text) Equations} A formula that appears in the running text is called an inline or in-text formula. It is produced by the \textbf{math} environment, which can be invoked with the usual \texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with the short form \texttt{\$\,\ldots\$}. You can use any of the symbols and structures, from $\alpha$ to $\omega$, available in \LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few examples of in-text equations in context. Notice how this equation: \begin{math} \lim_{n\rightarrow \infty}x=0 \end{math}, set here in in-line math style, looks slightly different when set in display style. (See next section). \subsection{Display Equations} A numbered display equation---one set off by vertical space from the text and centered horizontally---is produced by the \textbf{equation} environment. An unnumbered display equation is produced by the \textbf{displaymath} environment. Again, in either environment, you can use any of the symbols and structures available in \LaTeX\@; this section will just give a couple of examples of display equations in context. First, consider the equation, shown as an inline equation above: \begin{equation} \lim_{n\rightarrow \infty}x=0 \end{equation} Notice how it is formatted somewhat differently in the \textbf{displaymath} environment. Now, we'll enter an unnumbered equation: \begin{displaymath} \sum_{i=0}^{\infty} x + 1 \end{displaymath} and follow it with another numbered equation: \begin{equation} \sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f \end{equation} just to demonstrate \LaTeX's able handling of numbering. \section{Figures} The ``\verb|figure|'' environment should be used for figures. One or more images can be placed within a figure. If your figure contains third-party material, you must clearly identify it as such, as shown in the example below. 
\begin{figure}[h]
  \centering
  \includegraphics[width=\linewidth]{sample-franklin}
  \caption{1907 Franklin Model D roadster. Photograph by Harris \& Ewing, Inc. [Public domain], via Wikimedia Commons. (\url{https://goo.gl/VLCRBB}).}
  \Description{The 1907 Franklin Model D roadster.}
\end{figure}
The bibliography is included in your source document with these two commands, placed just before the \verb|\end{document}| command: \begin{verbatim} \bibliographystyle{ACM-Reference-Format} \section{Introduction} ACM's consolidated article template, introduced in 2017, provides a consistent \LaTeX\ style for use across ACM publications, and incorporates accessibility and metadata-extraction functionality necessary for future Digital Library endeavors. Numerous ACM and SIG-specific \LaTeX\ templates have been examined, and their unique features incorporated into this single new template. If you are new to publishing with ACM, this document is a valuable guide to the process of preparing your work for publication. If you have published with ACM before, this document provides insight and instruction into more recent changes to the article template. The ``\verb|acmart|'' document class can be used to prepare articles for any ACM publication --- conference or journal, and for any stage of publication, from review to final ``camera-ready'' copy, to the author's own version, with {\itshape very} few changes to the source. \section{Template Overview} As noted in the introduction, the ``\verb|acmart|'' document class can be used to prepare many different kinds of documentation --- a double-blind initial submission of a full-length technical paper, a two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready'' journal article, a SIGCHI Extended Abstract, and more --- all by selecting the appropriate {\itshape template style} and {\itshape template parameters}. This document will explain the major features of the document class. For further information, the {\itshape \LaTeX\ User's Guide} is available from \url{https://www.acm.org/publications/proceedings-template}. \subsection{Template Styles} The primary parameter given to the ``\verb|acmart|'' document class is the {\itshape template style} which corresponds to the kind of publication or SIG publishing the work. 
This parameter is enclosed in square brackets and is a part of the {\verb|documentclass|} command: \begin{verbatim} \documentclass[STYLE]{acmart} \end{verbatim} Journals use one of three template styles. All but three ACM journals use the {\verb|acmsmall|} template style: \begin{itemize} \item {\verb|acmsmall|}: The default journal template style. \item {\verb|acmlarge|}: Used by JOCCH and TAP. \item {\verb|acmtog|}: Used by TOG. \end{itemize} The majority of conference proceedings documentation will use the {\verb|acmconf|} template style. \begin{itemize} \item {\verb|acmconf|}: The default proceedings template style. \item{\verb|sigchi|}: Used for SIGCHI conference articles. \item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles. \item{\verb|sigplan|}: Used for SIGPLAN conference articles. \end{itemize} \subsection{Template Parameters} In addition to specifying the {\itshape template style} to be used in formatting your work, there are a number of {\itshape template parameters} which modify some part of the applied template style. A complete list of these parameters can be found in the {\itshape \LaTeX\ User's Guide.} Frequently-used parameters, or combinations of parameters, include: \begin{itemize} \item {\verb|anonymous,review|}: Suitable for a ``double-blind'' conference submission. Anonymizes the work and includes line numbers. Use with the \verb|\acmSubmissionID| command to print the submission's unique ID on each page of the work. \item{\verb|authorversion|}: Produces a version of the work suitable for posting by the author. \item{\verb|screen|}: Produces colored hyperlinks. 
\end{itemize} This document uses the following string as the first command in the source file: \begin{verbatim} \documentclass[sigconf,authordraft]{acmart} \end{verbatim} \section{Modifications} Modifying the template --- including but not limited to: adjusting margins, typeface sizes, line spacing, paragraph and list definitions, and the use of the \verb|\vspace| command to manually adjust the vertical spacing between elements of your work --- is not allowed. {\bfseries Your document will be returned to you for revision if modifications are discovered.} \section{Typefaces} The ``\verb|acmart|'' document class requires the use of the ``Libertine'' typeface family. Your \TeX\ installation should include this set of packages. Please do not substitute other typefaces. The ``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used, as they will override the built-in typeface families. \section{Title Information} The title of your work should use capital letters appropriately - \url{https://capitalizemytitle.com/} has useful rules for capitalization. Use the {\verb|title|} command to define the title of your work. If your work has a subtitle, define it with the {\verb|subtitle|} command. Do not insert line breaks in your title. If your title is lengthy, you must define a short version to be used in the page headers, to prevent overlapping text. The \verb|title| command has a ``short title'' parameter: \begin{verbatim} \title[short title]{full title} \end{verbatim} \section{Authors and Affiliations} Each author must be defined separately for accurate metadata identification. Multiple authors may share one affiliation. Authors' names should not be abbreviated; use full first names wherever possible. Include authors' e-mail addresses whenever possible. 
Grouping authors' names or e-mail addresses, or providing an ``e-mail alias,'' as shown below, is not acceptable: \begin{verbatim} \author{Brooke Aster, David Mehldau} \email{dave,judy,steve@university.edu} \email{firstname.lastname@phillips.org} \end{verbatim} The \verb|authornote| and \verb|authornotemark| commands allow a note to apply to multiple authors --- for example, if the first two authors of an article contributed equally to the work. If your author list is lengthy, you must define a shortened version of the list of authors to be used in the page headers, to prevent overlapping text. The following command should be placed just after the last \verb|\author{}| definition: \begin{verbatim} \renewcommand{\shortauthors}{McCartney, et al.} \end{verbatim} Omitting this command will force the use of a concatenated list of all of the authors' names, which may result in overlapping text in the page headers. The article template's documentation, available at \url{https://www.acm.org/publications/proceedings-template}, has a complete explanation of these commands and tips for their effective use. Note that authors' addresses are mandatory for journal articles. \section{Rights Information} Authors of any work published by ACM will need to complete a rights form. Depending on the kind of work, and the rights management choice made by the author, this may be copyright transfer, permission, license, or an OA (open access) agreement. Regardless of the rights management choice, the author will receive a copy of the completed rights form once it has been submitted. This form contains \LaTeX\ commands that must be copied into the source document. When the document source is compiled, these commands and their parameters add formatted text to several areas of the final document: \begin{itemize} \item the ``ACM Reference Format'' text on the first page. \item the ``rights management'' text on the first page. \item the conference information in the page header(s). 
\end{itemize} Rights information is unique to the work; if you are preparing several works for an event, make sure to use the correct set of commands with each of the works. The ACM Reference Format text is required for all articles over one page in length, and is optional for one-page articles (abstracts). \section{CCS Concepts and User-Defined Keywords} Two elements of the ``acmart'' document class provide powerful taxonomic tools for you to help readers find your work in an online search. The ACM Computing Classification System --- \url{https://www.acm.org/publications/class-2012} --- is a set of classifiers and concepts that describe the computing discipline. Authors can select entries from this classification system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the commands to be included in the \LaTeX\ source. User-defined keywords are a comma-separated list of words and phrases of the authors' choosing, providing a more flexible way of describing the research being presented. CCS concepts and user-defined keywords are required for for all articles over two pages in length, and are optional for one- and two-page articles (or abstracts). \section{Sectioning Commands} Your work should use standard \LaTeX\ sectioning commands: \verb|section|, \verb|subsection|, \verb|subsubsection|, and \verb|paragraph|. They should be numbered; do not remove the numbering from the commands. Simulating a sectioning command by setting the first word or words of a paragraph in boldface or italicized text is {\bfseries not allowed.} \section{Tables} The ``\verb|acmart|'' document class includes the ``\verb|booktabs|'' package --- \url{https://ctan.org/pkg/booktabs} --- for preparing high-quality tables. Table captions are placed {\itshape above} the table. Because tables cannot be split across pages, the best placement for them is typically the top of the page nearest their initial cite. 
To ensure this proper ``floating'' placement of tables, use the environment \textbf{table} to enclose the table's contents and the table caption. The contents of the table itself must go in the \textbf{tabular} environment, to be aligned properly in rows and columns, with the desired horizontal and vertical rules. Again, detailed instructions on \textbf{tabular} material are found in the \textit{\LaTeX\ User's Guide}. Immediately following this sentence is the point at which Table~\ref{tab:freq} is included in the input file; compare the placement of the table here with the table in the printed output of this document. \begin{table} \caption{Frequency of Special Characters} \label{tab:freq} \begin{tabular}{ccl} \toprule Non-English or Math&Frequency&Comments\\ \midrule \O & 1 in 1,000& For Swedish names\\ $\pi$ & 1 in 5& Common in math\\ \$ & 4 in 5 & Used in business\\ $\Psi^2_1$ & 1 in 40,000& Unexplained usage\\ \bottomrule \end{tabular} \end{table} To set a wider table, which takes up the whole width of the page's live area, use the environment \textbf{table*} to enclose the table's contents and the table caption. As with a single-column table, this wide table will ``float'' to a location deemed more desirable. Immediately following this sentence is the point at which Table~\ref{tab:commands} is included in the input file; again, it is instructive to compare the placement of the table here with the table in the printed output of this document. \begin{table*} \caption{Some Typical Commands} \label{tab:commands} \begin{tabular}{ccl} \toprule Command &A Number & Comments\\ \midrule \texttt{{\char'134}author} & 100& Author \\ \texttt{{\char'134}table}& 300 & For tables\\ \texttt{{\char'134}table*}& 400& For wider tables\\ \bottomrule \end{tabular} \end{table*} \section{Math Equations} You may want to display math equations in three distinct styles: inline, numbered or non-numbered display. Each of the three are discussed in the next sections. 
\subsection{Inline (In-text) Equations} A formula that appears in the running text is called an inline or in-text formula. It is produced by the \textbf{math} environment, which can be invoked with the usual \texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with the short form \texttt{\$\,\ldots\$}. You can use any of the symbols and structures, from $\alpha$ to $\omega$, available in \LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few examples of in-text equations in context. Notice how this equation: \begin{math} \lim_{n\rightarrow \infty}x=0 \end{math}, set here in in-line math style, looks slightly different when set in display style. (See next section). \subsection{Display Equations} A numbered display equation---one set off by vertical space from the text and centered horizontally---is produced by the \textbf{equation} environment. An unnumbered display equation is produced by the \textbf{displaymath} environment. Again, in either environment, you can use any of the symbols and structures available in \LaTeX\@; this section will just give a couple of examples of display equations in context. First, consider the equation, shown as an inline equation above: \begin{equation} \lim_{n\rightarrow \infty}x=0 \end{equation} Notice how it is formatted somewhat differently in the \textbf{displaymath} environment. Now, we'll enter an unnumbered equation: \begin{displaymath} \sum_{i=0}^{\infty} x + 1 \end{displaymath} and follow it with another numbered equation: \begin{equation} \sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f \end{equation} just to demonstrate \LaTeX's able handling of numbering. \section{Figures} The ``\verb|figure|'' environment should be used for figures. One or more images can be placed within a figure. If your figure contains third-party material, you must clearly identify it as such, as shown in the example below. 
\begin{figure}[h] \centering \includegraphics[width=\linewidth]{sample-franklin} \caption{1907 Franklin Model D roadster. Photograph by Harris \& Ewing, Inc. [Public domain], via Wikimedia Commons. (\url{https://goo.gl/VLCRBB}).} \Description{The 1907 Franklin Model D roadster.} \end{figure} Your figures should contain a caption which describes the figure to the reader. Figure captions go below the figure. Your figures should {\bfseries also} include a description suitable for screen readers, to assist the visually-challenged to better understand your work. Figure captions are placed {\itshape below} the figure. \subsection{The ``Teaser Figure''} A ``teaser figure'' is an image, or set of images in one figure, that are placed after all author and affiliation information, and before the body of the article, spanning the page. If you wish to have such a figure in your article, place the command immediately before the \verb|\maketitle| command: \begin{verbatim} \begin{teaserfigure} \includegraphics[width=\textwidth]{sampleteaser} \caption{figure caption} \Description{figure description} \end{teaserfigure} \end{verbatim} \section{Citations and Bibliographies} The use of {\rm B\kern-.05em{\sc i\kern-.025em b}\kern-.08emT\kern-.1667em\lower.7ex\hbox{E}\kern-.125emX}\ for the preparation and formatting of one's references is strongly recommended. Authors' names should be complete --- use full first names (``Donald E. Knuth'') not initials (``D. E. Knuth'') --- and the salient identifying features of a reference should be included: title, year, volume, number, pages, article DOI, etc. 
The bibliography is included in your source document with these two commands, placed just before the \verb|\end{document}| command: \begin{verbatim} \bibliographystyle{ACM-Reference-Format} \section{Introduction} ACM's consolidated article template, introduced in 2017, provides a consistent \LaTeX\ style for use across ACM publications, and incorporates accessibility and metadata-extraction functionality necessary for future Digital Library endeavors. Numerous ACM and SIG-specific \LaTeX\ templates have been examined, and their unique features incorporated into this single new template. If you are new to publishing with ACM, this document is a valuable guide to the process of preparing your work for publication. If you have published with ACM before, this document provides insight and instruction into more recent changes to the article template. The ``\verb|acmart|'' document class can be used to prepare articles for any ACM publication --- conference or journal, and for any stage of publication, from review to final ``camera-ready'' copy, to the author's own version, with {\itshape very} few changes to the source. \section{Template Overview} As noted in the introduction, the ``\verb|acmart|'' document class can be used to prepare many different kinds of documentation --- a double-blind initial submission of a full-length technical paper, a two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready'' journal article, a SIGCHI Extended Abstract, and more --- all by selecting the appropriate {\itshape template style} and {\itshape template parameters}. This document will explain the major features of the document class. For further information, the {\itshape \LaTeX\ User's Guide} is available from \url{https://www.acm.org/publications/proceedings-template}. \subsection{Template Styles} The primary parameter given to the ``\verb|acmart|'' document class is the {\itshape template style} which corresponds to the kind of publication or SIG publishing the work. 
This parameter is enclosed in square brackets and is a part of the {\verb|documentclass|} command: \begin{verbatim} \documentclass[STYLE]{acmart} \end{verbatim} Journals use one of three template styles. All but three ACM journals use the {\verb|acmsmall|} template style: \begin{itemize} \item {\verb|acmsmall|}: The default journal template style. \item {\verb|acmlarge|}: Used by JOCCH and TAP. \item {\verb|acmtog|}: Used by TOG. \end{itemize} The majority of conference proceedings documentation will use the {\verb|acmconf|} template style. \begin{itemize} \item {\verb|acmconf|}: The default proceedings template style. \item{\verb|sigchi|}: Used for SIGCHI conference articles. \item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles. \item{\verb|sigplan|}: Used for SIGPLAN conference articles. \end{itemize} \subsection{Template Parameters} In addition to specifying the {\itshape template style} to be used in formatting your work, there are a number of {\itshape template parameters} which modify some part of the applied template style. A complete list of these parameters can be found in the {\itshape \LaTeX\ User's Guide.} Frequently-used parameters, or combinations of parameters, include: \begin{itemize} \item {\verb|anonymous,review|}: Suitable for a ``double-blind'' conference submission. Anonymizes the work and includes line numbers. Use with the \verb|\acmSubmissionID| command to print the submission's unique ID on each page of the work. \item{\verb|authorversion|}: Produces a version of the work suitable for posting by the author. \item{\verb|screen|}: Produces colored hyperlinks. 
\end{itemize} This document uses the following string as the first command in the source file: \begin{verbatim} \documentclass[manuscript,screen]{acmart} \end{verbatim} \section{Modifications} Modifying the template --- including but not limited to: adjusting margins, typeface sizes, line spacing, paragraph and list definitions, and the use of the \verb|\vspace| command to manually adjust the vertical spacing between elements of your work --- is not allowed. {\bfseries Your document will be returned to you for revision if modifications are discovered.} \section{Typefaces} The ``\verb|acmart|'' document class requires the use of the ``Libertine'' typeface family. Your \TeX\ installation should include this set of packages. Please do not substitute other typefaces. The ``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used, as they will override the built-in typeface families. \section{Title Information} The title of your work should use capital letters appropriately - \url{https://capitalizemytitle.com/} has useful rules for capitalization. Use the {\verb|title|} command to define the title of your work. If your work has a subtitle, define it with the {\verb|subtitle|} command. Do not insert line breaks in your title. If your title is lengthy, you must define a short version to be used in the page headers, to prevent overlapping text. The \verb|title| command has a ``short title'' parameter: \begin{verbatim} \title[short title]{full title} \end{verbatim} \section{Authors and Affiliations} Each author must be defined separately for accurate metadata identification. Multiple authors may share one affiliation. Authors' names should not be abbreviated; use full first names wherever possible. Include authors' e-mail addresses whenever possible. 
Grouping authors' names or e-mail addresses, or providing an ``e-mail alias,'' as shown below, is not acceptable: \begin{verbatim} \author{Brooke Aster, David Mehldau} \email{dave,judy,steve@university.edu} \email{firstname.lastname@phillips.org} \end{verbatim} The \verb|authornote| and \verb|authornotemark| commands allow a note to apply to multiple authors --- for example, if the first two authors of an article contributed equally to the work. If your author list is lengthy, you must define a shortened version of the list of authors to be used in the page headers, to prevent overlapping text. The following command should be placed just after the last \verb|\author{}| definition: \begin{verbatim} \renewcommand{\shortauthors}{McCartney, et al.} \end{verbatim} Omitting this command will force the use of a concatenated list of all of the authors' names, which may result in overlapping text in the page headers. The article template's documentation, available at \url{https://www.acm.org/publications/proceedings-template}, has a complete explanation of these commands and tips for their effective use. Note that authors' addresses are mandatory for journal articles. \section{Rights Information} Authors of any work published by ACM will need to complete a rights form. Depending on the kind of work, and the rights management choice made by the author, this may be copyright transfer, permission, license, or an OA (open access) agreement. Regardless of the rights management choice, the author will receive a copy of the completed rights form once it has been submitted. This form contains \LaTeX\ commands that must be copied into the source document. When the document source is compiled, these commands and their parameters add formatted text to several areas of the final document: \begin{itemize} \item the ``ACM Reference Format'' text on the first page. \item the ``rights management'' text on the first page. \item the conference information in the page header(s). 

\end{itemize} Rights information is unique to the work; if you are preparing several works for an event, make sure to use the correct set of commands with each of the works. The ACM Reference Format text is required for all articles over one page in length, and is optional for one-page articles (abstracts). \section{CCS Concepts and User-Defined Keywords} Two elements of the ``acmart'' document class provide powerful taxonomic tools for you to help readers find your work in an online search. The ACM Computing Classification System --- \url{https://www.acm.org/publications/class-2012} --- is a set of classifiers and concepts that describe the computing discipline. Authors can select entries from this classification system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the commands to be included in the \LaTeX\ source. User-defined keywords are a comma-separated list of words and phrases of the authors' choosing, providing a more flexible way of describing the research being presented. CCS concepts and user-defined keywords are required for all articles over two pages in length, and are optional for one- and two-page articles (or abstracts). \section{Sectioning Commands} Your work should use standard \LaTeX\ sectioning commands: \verb|section|, \verb|subsection|, \verb|subsubsection|, and \verb|paragraph|. They should be numbered; do not remove the numbering from the commands. Simulating a sectioning command by setting the first word or words of a paragraph in boldface or italicized text is {\bfseries not allowed.} \section{Tables} The ``\verb|acmart|'' document class includes the ``\verb|booktabs|'' package --- \url{https://ctan.org/pkg/booktabs} --- for preparing high-quality tables. Table captions are placed {\itshape above} the table. Because tables cannot be split across pages, the best placement for them is typically the top of the page nearest their initial cite.
To ensure this proper ``floating'' placement of tables, use the environment \textbf{table} to enclose the table's contents and the table caption. The contents of the table itself must go in the \textbf{tabular} environment, to be aligned properly in rows and columns, with the desired horizontal and vertical rules. Again, detailed instructions on \textbf{tabular} material are found in the \textit{\LaTeX\ User's Guide}. Immediately following this sentence is the point at which Table~\ref{tab:freq} is included in the input file; compare the placement of the table here with the table in the printed output of this document. \begin{table} \caption{Frequency of Special Characters} \label{tab:freq} \begin{tabular}{ccl} \toprule Non-English or Math&Frequency&Comments\\ \midrule \O & 1 in 1,000& For Swedish names\\ $\pi$ & 1 in 5& Common in math\\ \$ & 4 in 5 & Used in business\\ $\Psi^2_1$ & 1 in 40,000& Unexplained usage\\ \bottomrule \end{tabular} \end{table} To set a wider table, which takes up the whole width of the page's live area, use the environment \textbf{table*} to enclose the table's contents and the table caption. As with a single-column table, this wide table will ``float'' to a location deemed more desirable. Immediately following this sentence is the point at which Table~\ref{tab:commands} is included in the input file; again, it is instructive to compare the placement of the table here with the table in the printed output of this document. \begin{table*} \caption{Some Typical Commands} \label{tab:commands} \begin{tabular}{ccl} \toprule Command &A Number & Comments\\ \midrule \texttt{{\char'134}author} & 100& Author \\ \texttt{{\char'134}table}& 300 & For tables\\ \texttt{{\char'134}table*}& 400& For wider tables\\ \bottomrule \end{tabular} \end{table*} \section{Math Equations} You may want to display math equations in three distinct styles: inline, numbered or non-numbered display. Each of the three is discussed in the next sections.
\subsection{Inline (In-text) Equations} A formula that appears in the running text is called an inline or in-text formula. It is produced by the \textbf{math} environment, which can be invoked with the usual \texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with the short form \texttt{\$\,\ldots\$}. You can use any of the symbols and structures, from $\alpha$ to $\omega$, available in \LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few examples of in-text equations in context. Notice how this equation: \begin{math} \lim_{n\rightarrow \infty}x=0 \end{math}, set here in in-line math style, looks slightly different when set in display style. (See next section). \subsection{Display Equations} A numbered display equation---one set off by vertical space from the text and centered horizontally---is produced by the \textbf{equation} environment. An unnumbered display equation is produced by the \textbf{displaymath} environment. Again, in either environment, you can use any of the symbols and structures available in \LaTeX\@; this section will just give a couple of examples of display equations in context. First, consider the equation, shown as an inline equation above: \begin{equation} \lim_{n\rightarrow \infty}x=0 \end{equation} Notice how it is formatted somewhat differently in the \textbf{displaymath} environment. Now, we'll enter an unnumbered equation: \begin{displaymath} \sum_{i=0}^{\infty} x + 1 \end{displaymath} and follow it with another numbered equation: \begin{equation} \sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f \end{equation} just to demonstrate \LaTeX's able handling of numbering. \section{Figures} The ``\verb|figure|'' environment should be used for figures. One or more images can be placed within a figure. If your figure contains third-party material, you must clearly identify it as such, as shown in the example below. 
\begin{figure}[h] \centering \includegraphics[width=\linewidth]{sample-franklin} \caption{1907 Franklin Model D roadster. Photograph by Harris \& Ewing, Inc. [Public domain], via Wikimedia Commons. (\url{https://goo.gl/VLCRBB}).} \Description{The 1907 Franklin Model D roadster.} \end{figure} Your figures should contain a caption which describes the figure to the reader. Figure captions go below the figure. Your figures should {\bfseries also} include a description suitable for screen readers, to assist the visually-challenged to better understand your work. Figure captions are placed {\itshape below} the figure. \subsection{The ``Teaser Figure''} A ``teaser figure'' is an image, or set of images in one figure, that are placed after all author and affiliation information, and before the body of the article, spanning the page. If you wish to have such a figure in your article, place the command immediately before the \verb|\maketitle| command: \begin{verbatim} \begin{teaserfigure} \includegraphics[width=\textwidth]{sampleteaser} \caption{figure caption} \Description{figure description} \end{teaserfigure} \end{verbatim} \section{Citations and Bibliographies} The use of {\rm B\kern-.05em{\sc i\kern-.025em b}\kern-.08emT\kern-.1667em\lower.7ex\hbox{E}\kern-.125emX}\ for the preparation and formatting of one's references is strongly recommended. Authors' names should be complete --- use full first names (``Donald E. Knuth'') not initials (``D. E. Knuth'') --- and the salient identifying features of a reference should be included: title, year, volume, number, pages, article DOI, etc. 
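For illustration, a BibTeX entry carrying the identifying features listed above (full author names, title, year, volume, number, pages, and DOI) might look like the following; all bibliographic details here are invented:

```latex
@article{lovelace:notes,
  author  = {Ada Lovelace and Charles Babbage},
  title   = {Notes on the Analytical Engine},
  journal = {Journal of Illustrative Computing},
  year    = {2024},
  volume  = {1},
  number  = {2},
  pages   = {10--25},
  doi     = {10.1145/0000000.0000000}
}
```

Complete entries of this kind allow the ACM-Reference-Format style to render references consistently and to link each one through its DOI.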
The bibliography is included in your source document with these two commands, placed just before the \verb|\end{document}| command: \begin{verbatim} \bibliographystyle{ACM-Reference-Format} \bibliography{bibfile} \end{verbatim} where ``bibfile'' is the name, without the ``.bib'' suffix, of the \BibTeX\ file. \section{Introduction} ACM's consolidated article template, introduced in 2017, provides a consistent \LaTeX\ style for use across ACM publications, and incorporates accessibility and metadata-extraction functionality necessary for future Digital Library endeavors. Numerous ACM and SIG-specific \LaTeX\ templates have been examined, and their unique features incorporated into this single new template. If you are new to publishing with ACM, this document is a valuable guide to the process of preparing your work for publication. If you have published with ACM before, this document provides insight and instruction into more recent changes to the article template. The ``\verb|acmart|'' document class can be used to prepare articles for any ACM publication --- conference or journal, and for any stage of publication, from review to final ``camera-ready'' copy, to the author's own version, with {\itshape very} few changes to the source. \section{Template Overview} As noted in the introduction, the ``\verb|acmart|'' document class can be used to prepare many different kinds of documentation --- a double-blind initial submission of a full-length technical paper, a two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready'' journal article, a SIGCHI Extended Abstract, and more --- all by selecting the appropriate {\itshape template style} and {\itshape template parameters}. This document will explain the major features of the document class. For further information, the {\itshape \LaTeX\ User's Guide} is available from \url{https://www.acm.org/publications/proceedings-template}. \subsection{Template Styles} The primary parameter given to the ``\verb|acmart|'' document class is the {\itshape template style} which corresponds to the kind of publication or SIG publishing the work.
This parameter is enclosed in square brackets and is a part of the {\verb|documentclass|} command: \begin{verbatim} \documentclass[STYLE]{acmart} \end{verbatim} Journals use one of three template styles. All but three ACM journals use the {\verb|acmsmall|} template style: \begin{itemize} \item {\verb|acmsmall|}: The default journal template style. \item {\verb|acmlarge|}: Used by JOCCH and TAP. \item {\verb|acmtog|}: Used by TOG. \end{itemize} The majority of conference proceedings documentation will use the {\verb|acmconf|} template style. \begin{itemize} \item {\verb|acmconf|}: The default proceedings template style. \item{\verb|sigchi|}: Used for SIGCHI conference articles. \item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles. \item{\verb|sigplan|}: Used for SIGPLAN conference articles. \end{itemize} \subsection{Template Parameters} In addition to specifying the {\itshape template style} to be used in formatting your work, there are a number of {\itshape template parameters} which modify some part of the applied template style. A complete list of these parameters can be found in the {\itshape \LaTeX\ User's Guide.} Frequently-used parameters, or combinations of parameters, include: \begin{itemize} \item {\verb|anonymous,review|}: Suitable for a ``double-blind'' conference submission. Anonymizes the work and includes line numbers. Use with the \verb|\acmSubmissionID| command to print the submission's unique ID on each page of the work. \item{\verb|authorversion|}: Produces a version of the work suitable for posting by the author. \item{\verb|screen|}: Produces colored hyperlinks. 
\end{itemize} This document uses the following string as the first command in the source file: \begin{verbatim} \documentclass[sigchi-a]{acmart} \end{verbatim} \section{Modifications} Modifying the template --- including but not limited to: adjusting margins, typeface sizes, line spacing, paragraph and list definitions, and the use of the \verb|\vspace| command to manually adjust the vertical spacing between elements of your work --- is not allowed. {\bfseries Your document will be returned to you for revision if modifications are discovered.} \section{Typefaces} The ``\verb|acmart|'' document class requires the use of the ``Libertine'' typeface family. Your \TeX\ installation should include this set of packages. Please do not substitute other typefaces. The ``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used, as they will override the built-in typeface families. \section{Title Information} The title of your work should use capital letters appropriately - \url{https://capitalizemytitle.com/} has useful rules for capitalization. Use the {\verb|title|} command to define the title of your work. If your work has a subtitle, define it with the {\verb|subtitle|} command. Do not insert line breaks in your title. If your title is lengthy, you must define a short version to be used in the page headers, to prevent overlapping text. The \verb|title| command has a ``short title'' parameter: \begin{verbatim} \title[short title]{full title} \end{verbatim} \section{Authors and Affiliations} Each author must be defined separately for accurate metadata identification. Multiple authors may share one affiliation. Authors' names should not be abbreviated; use full first names wherever possible. Include authors' e-mail addresses whenever possible. 
Grouping authors' names or e-mail addresses, or providing an ``e-mail alias,'' as shown below, is not acceptable: \begin{verbatim} \author{Brooke Aster, David Mehldau} \email{dave,judy,steve@university.edu} \email{firstname.lastname@phillips.org} \end{verbatim} The \verb|authornote| and \verb|authornotemark| commands allow a note to apply to multiple authors --- for example, if the first two authors of an article contributed equally to the work. If your author list is lengthy, you must define a shortened version of the list of authors to be used in the page headers, to prevent overlapping text. The following command should be placed just after the last \verb|\author{}| definition: \begin{verbatim} \renewcommand{\shortauthors}{McCartney, et al.} \end{verbatim} Omitting this command will force the use of a concatenated list of all of the authors' names, which may result in overlapping text in the page headers. The article template's documentation, available at \url{https://www.acm.org/publications/proceedings-template}, has a complete explanation of these commands and tips for their effective use. Note that authors' addresses are mandatory for journal articles. \section{Rights Information} Authors of any work published by ACM will need to complete a rights form. Depending on the kind of work, and the rights management choice made by the author, this may be copyright transfer, permission, license, or an OA (open access) agreement. Regardless of the rights management choice, the author will receive a copy of the completed rights form once it has been submitted. This form contains \LaTeX\ commands that must be copied into the source document. When the document source is compiled, these commands and their parameters add formatted text to several areas of the final document: \begin{itemize} \item the ``ACM Reference Format'' text on the first page. \item the ``rights management'' text on the first page. \item the conference information in the page header(s). 
\end{itemize} Rights information is unique to the work; if you are preparing several works for an event, make sure to use the correct set of commands with each of the works. The ACM Reference Format text is required for all articles over one page in length, and is optional for one-page articles (abstracts). \section{CCS Concepts and User-Defined Keywords} Two elements of the ``acmart'' document class provide powerful taxonomic tools for you to help readers find your work in an online search. The ACM Computing Classification System --- \url{https://www.acm.org/publications/class-2012} --- is a set of classifiers and concepts that describe the computing discipline. Authors can select entries from this classification system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the commands to be included in the \LaTeX\ source. User-defined keywords are a comma-separated list of words and phrases of the authors' choosing, providing a more flexible way of describing the research being presented. CCS concepts and user-defined keywords are required for for all articles over two pages in length, and are optional for one- and two-page articles (or abstracts). \section{Sectioning Commands} Your work should use standard \LaTeX\ sectioning commands: \verb|section|, \verb|subsection|, \verb|subsubsection|, and \verb|paragraph|. They should be numbered; do not remove the numbering from the commands. Simulating a sectioning command by setting the first word or words of a paragraph in boldface or italicized text is {\bfseries not allowed.} \section{Tables} The ``\verb|acmart|'' document class includes the ``\verb|booktabs|'' package --- \url{https://ctan.org/pkg/booktabs} --- for preparing high-quality tables. Table captions are placed {\itshape above} the table. Because tables cannot be split across pages, the best placement for them is typically the top of the page nearest their initial cite. 
To ensure this proper ``floating'' placement of tables, use the environment \textbf{table} to enclose the table's contents and the table caption. The contents of the table itself must go in the \textbf{tabular} environment, to be aligned properly in rows and columns, with the desired horizontal and vertical rules. Again, detailed instructions on \textbf{tabular} material are found in the \textit{\LaTeX\ User's Guide}. Immediately following this sentence is the point at which Table~\ref{tab:freq} is included in the input file; compare the placement of the table here with the table in the printed output of this document. \begin{margintable} \caption{Frequency of Special Characters} \label{tab:freq} \begin{tabular}{ccl} \toprule Non-English or Math&Frequency&Comments\\ \midrule \O & 1 in 1,000& For Swedish names\\ $\pi$ & 1 in 5& Common in math\\ \$ & 4 in 5 & Used in business\\ $\Psi^2_1$ & 1 in 40,000& Unexplained usage\\ \bottomrule \end{tabular} \end{margintable} To set a wider table, which takes up the whole width of the page's live area, use the environment \textbf{table*} to enclose the table's contents and the table caption. As with a single-column table, this wide table will ``float'' to a location deemed more desirable. Immediately following this sentence is the point at which Table~\ref{tab:commands} is included in the input file; again, it is instructive to compare the placement of the table here with the table in the printed output of this document. \begin{table*} \caption{Some Typical Commands} \label{tab:commands} \begin{tabular}{ccl} \toprule Command &A Number & Comments\\ \midrule \texttt{{\char'134}author} & 100& Author \\ \texttt{{\char'134}table}& 300 & For tables\\ \texttt{{\char'134}table*}& 400& For wider tables\\ \bottomrule \end{tabular} \end{table*} \section{Math Equations} You may want to display math equations in three distinct styles: inline, numbered or non-numbered display. Each of the three are discussed in the next sections. 
\subsection{Inline (In-text) Equations} A formula that appears in the running text is called an inline or in-text formula. It is produced by the \textbf{math} environment, which can be invoked with the usual \texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with the short form \texttt{\$\,\ldots\$}. You can use any of the symbols and structures, from $\alpha$ to $\omega$, available in \LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few examples of in-text equations in context. Notice how this equation: \begin{math} \lim_{n\rightarrow \infty}x=0 \end{math}, set here in in-line math style, looks slightly different when set in display style. (See next section). \subsection{Display Equations} A numbered display equation---one set off by vertical space from the text and centered horizontally---is produced by the \textbf{equation} environment. An unnumbered display equation is produced by the \textbf{displaymath} environment. Again, in either environment, you can use any of the symbols and structures available in \LaTeX\@; this section will just give a couple of examples of display equations in context. First, consider the equation, shown as an inline equation above: \begin{equation} \lim_{n\rightarrow \infty}x=0 \end{equation} Notice how it is formatted somewhat differently in the \textbf{displaymath} environment. Now, we'll enter an unnumbered equation: \begin{displaymath} \sum_{i=0}^{\infty} x + 1 \end{displaymath} and follow it with another numbered equation: \begin{equation} \sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f \end{equation} just to demonstrate \LaTeX's able handling of numbering. \section{Figures} The ``\verb|figure|'' environment should be used for figures. One or more images can be placed within a figure. If your figure contains third-party material, you must clearly identify it as such, as shown in the example below. 
\begin{marginfigure} \centering \includegraphics[width=\linewidth]{sample-franklin} \caption{1907 Franklin Model D roadster. Photograph by Harris \& Ewing, Inc. [Public domain], via Wikimedia Commons. (\url{https://goo.gl/VLCRBB}).} \Description{The 1907 Franklin Model D roadster.} \end{marginfigure} Your figures should contain a caption which describes the figure to the reader. Figure captions go below the figure. Your figures should {\bfseries also} include a description suitable for screen readers, to assist the visually-challenged to better understand your work. Figure captions are placed {\itshape below} the figure. \subsection{The ``Teaser Figure''} A ``teaser figure'' is an image, or set of images in one figure, that are placed after all author and affiliation information, and before the body of the article, spanning the page. If you wish to have such a figure in your article, place the command immediately before the \verb|\maketitle| command: \begin{verbatim} \begin{teaserfigure} \includegraphics[width=\textwidth]{sampleteaser} \caption{figure caption} \Description{figure description} \end{teaserfigure} \end{verbatim} \section{Citations and Bibliographies} The use of {\rm B\kern-.05em{\sc i\kern-.025em b}\kern-.08emT\kern-.1667em\lower.7ex\hbox{E}\kern-.125emX}\ for the preparation and formatting of one's references is strongly recommended. Authors' names should be complete --- use full first names (``Donald E. Knuth'') not initials (``D. E. Knuth'') --- and the salient identifying features of a reference should be included: title, year, volume, number, pages, article DOI, etc. 
The bibliography is included in your source document with these two commands, placed just before the \verb|\end{document}| command: \begin{verbatim} \bibliographystyle{ACM-Reference-Format} \section{Introduction} ACM's consolidated article template, introduced in 2017, provides a consistent \LaTeX\ style for use across ACM publications, and incorporates accessibility and metadata-extraction functionality necessary for future Digital Library endeavors. Numerous ACM and SIG-specific \LaTeX\ templates have been examined, and their unique features incorporated into this single new template. If you are new to publishing with ACM, this document is a valuable guide to the process of preparing your work for publication. If you have published with ACM before, this document provides insight and instruction into more recent changes to the article template. The ``\verb|acmart|'' document class can be used to prepare articles for any ACM publication --- conference or journal, and for any stage of publication, from review to final ``camera-ready'' copy, to the author's own version, with {\itshape very} few changes to the source. \section{Template Overview} As noted in the introduction, the ``\verb|acmart|'' document class can be used to prepare many different kinds of documentation --- a double-blind initial submission of a full-length technical paper, a two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready'' journal article, a SIGCHI Extended Abstract, and more --- all by selecting the appropriate {\itshape template style} and {\itshape template parameters}. This document will explain the major features of the document class. For further information, the {\itshape \LaTeX\ User's Guide} is available from \url{https://www.acm.org/publications/proceedings-template}. \subsection{Template Styles} The primary parameter given to the ``\verb|acmart|'' document class is the {\itshape template style} which corresponds to the kind of publication or SIG publishing the work. 
This parameter is enclosed in square brackets and is a part of the {\verb|documentclass|} command: \begin{verbatim} \documentclass[STYLE]{acmart} \end{verbatim} Journals use one of three template styles. All but three ACM journals use the {\verb|acmsmall|} template style: \begin{itemize} \item {\verb|acmsmall|}: The default journal template style. \item {\verb|acmlarge|}: Used by JOCCH and TAP. \item {\verb|acmtog|}: Used by TOG. \end{itemize} The majority of conference proceedings documentation will use the {\verb|acmconf|} template style. \begin{itemize} \item {\verb|acmconf|}: The default proceedings template style. \item{\verb|sigchi|}: Used for SIGCHI conference articles. \item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles. \item{\verb|sigplan|}: Used for SIGPLAN conference articles. \end{itemize} \subsection{Template Parameters} In addition to specifying the {\itshape template style} to be used in formatting your work, there are a number of {\itshape template parameters} which modify some part of the applied template style. A complete list of these parameters can be found in the {\itshape \LaTeX\ User's Guide.} Frequently-used parameters, or combinations of parameters, include: \begin{itemize} \item {\verb|anonymous,review|}: Suitable for a ``double-blind'' conference submission. Anonymizes the work and includes line numbers. Use with the \verb|\acmSubmissionID| command to print the submission's unique ID on each page of the work. \item{\verb|authorversion|}: Produces a version of the work suitable for posting by the author. \item{\verb|screen|}: Produces colored hyperlinks. 
\end{itemize} This document uses the following string as the first command in the source file: \begin{verbatim} \documentclass[sigchi]{acmart} \end{verbatim} \section{Modifications} Modifying the template --- including but not limited to: adjusting margins, typeface sizes, line spacing, paragraph and list definitions, and the use of the \verb|\vspace| command to manually adjust the vertical spacing between elements of your work --- is not allowed. {\bfseries Your document will be returned to you for revision if modifications are discovered.} \section{Typefaces} The ``\verb|acmart|'' document class requires the use of the ``Libertine'' typeface family. Your \TeX\ installation should include this set of packages. Please do not substitute other typefaces. The ``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used, as they will override the built-in typeface families. \section{Title Information} The title of your work should use capital letters appropriately - \url{https://capitalizemytitle.com/} has useful rules for capitalization. Use the {\verb|title|} command to define the title of your work. If your work has a subtitle, define it with the {\verb|subtitle|} command. Do not insert line breaks in your title. If your title is lengthy, you must define a short version to be used in the page headers, to prevent overlapping text. The \verb|title| command has a ``short title'' parameter: \begin{verbatim} \title[short title]{full title} \end{verbatim} \section{Authors and Affiliations} Each author must be defined separately for accurate metadata identification. Multiple authors may share one affiliation. Authors' names should not be abbreviated; use full first names wherever possible. Include authors' e-mail addresses whenever possible. 
Grouping authors' names or e-mail addresses, or providing an ``e-mail alias,'' as shown below, is not acceptable: \begin{verbatim} \author{Brooke Aster, David Mehldau} \email{dave,judy,steve@university.edu} \email{firstname.lastname@phillips.org} \end{verbatim} The \verb|authornote| and \verb|authornotemark| commands allow a note to apply to multiple authors --- for example, if the first two authors of an article contributed equally to the work. If your author list is lengthy, you must define a shortened version of the list of authors to be used in the page headers, to prevent overlapping text. The following command should be placed just after the last \verb|\author{}| definition: \begin{verbatim} \renewcommand{\shortauthors}{McCartney, et al.} \end{verbatim} Omitting this command will force the use of a concatenated list of all of the authors' names, which may result in overlapping text in the page headers. The article template's documentation, available at \url{https://www.acm.org/publications/proceedings-template}, has a complete explanation of these commands and tips for their effective use. Note that authors' addresses are mandatory for journal articles. \section{Rights Information} Authors of any work published by ACM will need to complete a rights form. Depending on the kind of work, and the rights management choice made by the author, this may be copyright transfer, permission, license, or an OA (open access) agreement. Regardless of the rights management choice, the author will receive a copy of the completed rights form once it has been submitted. This form contains \LaTeX\ commands that must be copied into the source document. When the document source is compiled, these commands and their parameters add formatted text to several areas of the final document: \begin{itemize} \item the ``ACM Reference Format'' text on the first page. \item the ``rights management'' text on the first page. \item the conference information in the page header(s). 
\end{itemize} Rights information is unique to the work; if you are preparing several works for an event, make sure to use the correct set of commands with each of the works. The ACM Reference Format text is required for all articles over one page in length, and is optional for one-page articles (abstracts). \section{CCS Concepts and User-Defined Keywords} Two elements of the ``acmart'' document class provide powerful taxonomic tools for you to help readers find your work in an online search. The ACM Computing Classification System --- \url{https://www.acm.org/publications/class-2012} --- is a set of classifiers and concepts that describe the computing discipline. Authors can select entries from this classification system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the commands to be included in the \LaTeX\ source. User-defined keywords are a comma-separated list of words and phrases of the authors' choosing, providing a more flexible way of describing the research being presented. CCS concepts and user-defined keywords are required for for all articles over two pages in length, and are optional for one- and two-page articles (or abstracts). \section{Sectioning Commands} Your work should use standard \LaTeX\ sectioning commands: \verb|section|, \verb|subsection|, \verb|subsubsection|, and \verb|paragraph|. They should be numbered; do not remove the numbering from the commands. Simulating a sectioning command by setting the first word or words of a paragraph in boldface or italicized text is {\bfseries not allowed.} \section{Tables} The ``\verb|acmart|'' document class includes the ``\verb|booktabs|'' package --- \url{https://ctan.org/pkg/booktabs} --- for preparing high-quality tables. Table captions are placed {\itshape above} the table. Because tables cannot be split across pages, the best placement for them is typically the top of the page nearest their initial cite. 
To ensure this proper ``floating'' placement of tables, use the environment \textbf{table} to enclose the table's contents and the table caption. The contents of the table itself must go in the \textbf{tabular} environment, to be aligned properly in rows and columns, with the desired horizontal and vertical rules. Again, detailed instructions on \textbf{tabular} material are found in the \textit{\LaTeX\ User's Guide}. Immediately following this sentence is the point at which Table~\ref{tab:freq} is included in the input file; compare the placement of the table here with the table in the printed output of this document.
\begin{table}
\caption{Frequency of Special Characters}
\label{tab:freq}
\begin{tabular}{ccl}
\toprule
Non-English or Math&Frequency&Comments\\
\midrule
\O & 1 in 1,000& For Swedish names\\
$\pi$ & 1 in 5& Common in math\\
\$ & 4 in 5 & Used in business\\
$\Psi^2_1$ & 1 in 40,000& Unexplained usage\\
\bottomrule
\end{tabular}
\end{table}
To set a wider table, which takes up the whole width of the page's live area, use the environment \textbf{table*} to enclose the table's contents and the table caption. As with a single-column table, this wide table will ``float'' to a location deemed more desirable. Immediately following this sentence is the point at which Table~\ref{tab:commands} is included in the input file; again, it is instructive to compare the placement of the table here with the table in the printed output of this document.
\begin{table*}
\caption{Some Typical Commands}
\label{tab:commands}
\begin{tabular}{ccl}
\toprule
Command &A Number & Comments\\
\midrule
\texttt{{\char'134}author} & 100& Author \\
\texttt{{\char'134}table}& 300 & For tables\\
\texttt{{\char'134}table*}& 400& For wider tables\\
\bottomrule
\end{tabular}
\end{table*}
\section{Math Equations} You may want to display math equations in three distinct styles: inline, numbered or non-numbered display. Each of the three is discussed in the next sections.
\subsection{Inline (In-text) Equations} A formula that appears in the running text is called an inline or in-text formula. It is produced by the \textbf{math} environment, which can be invoked with the usual \texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with the short form \texttt{\$\,\ldots\$}. You can use any of the symbols and structures, from $\alpha$ to $\omega$, available in \LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few examples of in-text equations in context. Notice how this equation: \begin{math} \lim_{n\rightarrow \infty}x=0 \end{math}, set here in in-line math style, looks slightly different when set in display style. (See next section.) \subsection{Display Equations} A numbered display equation---one set off by vertical space from the text and centered horizontally---is produced by the \textbf{equation} environment. An unnumbered display equation is produced by the \textbf{displaymath} environment. Again, in either environment, you can use any of the symbols and structures available in \LaTeX\@; this section will just give a couple of examples of display equations in context. First, consider the equation, shown as an inline equation above:
\begin{equation}
\lim_{n\rightarrow \infty}x=0
\end{equation}
Notice how it is formatted somewhat differently in the \textbf{equation} environment. Now, we'll enter an unnumbered equation:
\begin{displaymath}
\sum_{i=0}^{\infty} x + 1
\end{displaymath}
and follow it with another numbered equation:
\begin{equation}
\sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f
\end{equation}
just to demonstrate \LaTeX's able handling of numbering. \section{Figures} The ``\verb|figure|'' environment should be used for figures. One or more images can be placed within a figure. If your figure contains third-party material, you must clearly identify it as such, as shown in the example below.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{sample-franklin}
\caption{1907 Franklin Model D roadster. Photograph by Harris \& Ewing, Inc. [Public domain], via Wikimedia Commons. (\url{https://goo.gl/VLCRBB}).}
\Description{The 1907 Franklin Model D roadster.}
\end{figure}
Your figures should contain a caption which describes the figure to the reader. Figure captions are placed {\itshape below} the figure. Your figures should {\bfseries also} include a description suitable for screen readers, to assist the visually-challenged to better understand your work. \subsection{The ``Teaser Figure''} A ``teaser figure'' is an image, or set of images in one figure, that are placed after all author and affiliation information, and before the body of the article, spanning the page. If you wish to have such a figure in your article, place the command immediately before the \verb|\maketitle| command: \begin{verbatim} \begin{teaserfigure} \includegraphics[width=\textwidth]{sampleteaser} \caption{figure caption} \Description{figure description} \end{teaserfigure} \end{verbatim} \section{Citations and Bibliographies} The use of {\rm B\kern-.05em{\sc i\kern-.025em b}\kern-.08emT\kern-.1667em\lower.7ex\hbox{E}\kern-.125emX}\ for the preparation and formatting of one's references is strongly recommended. Authors' names should be complete --- use full first names (``Donald E. Knuth'') not initials (``D. E. Knuth'') --- and the salient identifying features of a reference should be included: title, year, volume, number, pages, article DOI, etc.
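As an illustration, a database entry with the recommended level of detail might look like the following (a well-known article is used here purely for example; substitute your own references):
\begin{verbatim}
@article{Knuth:1984,
  author  = {Donald E. Knuth},
  title   = {Literate Programming},
  journal = {The Computer Journal},
  volume  = {27},
  number  = {2},
  pages   = {97--111},
  year    = {1984},
  doi     = {10.1093/comjnl/27.2.97}
}
\end{verbatim}
Note the full first name and the inclusion of volume, number, pages, and DOI. Such an entry can then be cited in the text with \verb|\cite{Knuth:1984}|.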
The bibliography is included in your source document with these two commands, placed just before the \verb|\end{document}| command: \begin{verbatim} \bibliographystyle{ACM-Reference-Format} \section{Introduction} ACM's consolidated article template, introduced in 2017, provides a consistent \LaTeX\ style for use across ACM publications, and incorporates accessibility and metadata-extraction functionality necessary for future Digital Library endeavors. Numerous ACM and SIG-specific \LaTeX\ templates have been examined, and their unique features incorporated into this single new template. If you are new to publishing with ACM, this document is a valuable guide to the process of preparing your work for publication. If you have published with ACM before, this document provides insight and instruction into more recent changes to the article template. The ``\verb|acmart|'' document class can be used to prepare articles for any ACM publication --- conference or journal, and for any stage of publication, from review to final ``camera-ready'' copy, to the author's own version, with {\itshape very} few changes to the source. \section{Template Overview} As noted in the introduction, the ``\verb|acmart|'' document class can be used to prepare many different kinds of documentation --- a double-blind initial submission of a full-length technical paper, a two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready'' journal article, a SIGCHI Extended Abstract, and more --- all by selecting the appropriate {\itshape template style} and {\itshape template parameters}. This document will explain the major features of the document class. For further information, the {\itshape \LaTeX\ User's Guide} is available from \url{https://www.acm.org/publications/proceedings-template}. \subsection{Template Styles} The primary parameter given to the ``\verb|acmart|'' document class is the {\itshape template style} which corresponds to the kind of publication or SIG publishing the work. 
This parameter is enclosed in square brackets and is a part of the {\verb|documentclass|} command: \begin{verbatim} \documentclass[STYLE]{acmart} \end{verbatim} Journals use one of three template styles. All but three ACM journals use the {\verb|acmsmall|} template style: \begin{itemize} \item {\verb|acmsmall|}: The default journal template style. \item {\verb|acmlarge|}: Used by JOCCH and TAP. \item {\verb|acmtog|}: Used by TOG. \end{itemize} The majority of conference proceedings documentation will use the {\verb|acmconf|} template style. \begin{itemize} \item {\verb|acmconf|}: The default proceedings template style. \item{\verb|sigchi|}: Used for SIGCHI conference articles. \item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles. \item{\verb|sigplan|}: Used for SIGPLAN conference articles. \end{itemize} \subsection{Template Parameters} In addition to specifying the {\itshape template style} to be used in formatting your work, there are a number of {\itshape template parameters} which modify some part of the applied template style. A complete list of these parameters can be found in the {\itshape \LaTeX\ User's Guide.} Frequently-used parameters, or combinations of parameters, include: \begin{itemize} \item {\verb|anonymous,review|}: Suitable for a ``double-blind'' conference submission. Anonymizes the work and includes line numbers. Use with the \verb|\acmSubmissionID| command to print the submission's unique ID on each page of the work. \item{\verb|authorversion|}: Produces a version of the work suitable for posting by the author. \item{\verb|screen|}: Produces colored hyperlinks. 
\end{itemize} This document uses the following string as the first command in the source file: \begin{verbatim} \documentclass[sigconf]{acmart} \end{verbatim} \section{Modifications} Modifying the template --- including but not limited to: adjusting margins, typeface sizes, line spacing, paragraph and list definitions, and the use of the \verb|\vspace| command to manually adjust the vertical spacing between elements of your work --- is not allowed. {\bfseries Your document will be returned to you for revision if modifications are discovered.} \section{Typefaces} The ``\verb|acmart|'' document class requires the use of the ``Libertine'' typeface family. Your \TeX\ installation should include this set of packages. Please do not substitute other typefaces. The ``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used, as they will override the built-in typeface families. \section{Title Information} The title of your work should use capital letters appropriately - \url{https://capitalizemytitle.com/} has useful rules for capitalization. Use the {\verb|title|} command to define the title of your work. If your work has a subtitle, define it with the {\verb|subtitle|} command. Do not insert line breaks in your title. If your title is lengthy, you must define a short version to be used in the page headers, to prevent overlapping text. The \verb|title| command has a ``short title'' parameter: \begin{verbatim} \title[short title]{full title} \end{verbatim} \section{Authors and Affiliations} Each author must be defined separately for accurate metadata identification. Multiple authors may share one affiliation. Authors' names should not be abbreviated; use full first names wherever possible. Include authors' e-mail addresses whenever possible. 
Grouping authors' names or e-mail addresses, or providing an ``e-mail alias,'' as shown below, is not acceptable: \begin{verbatim} \author{Brooke Aster, David Mehldau} \email{dave,judy,steve@university.edu} \email{firstname.lastname@phillips.org} \end{verbatim} The \verb|authornote| and \verb|authornotemark| commands allow a note to apply to multiple authors --- for example, if the first two authors of an article contributed equally to the work. If your author list is lengthy, you must define a shortened version of the list of authors to be used in the page headers, to prevent overlapping text. The following command should be placed just after the last \verb|\author{}| definition: \begin{verbatim} \renewcommand{\shortauthors}{McCartney, et al.} \end{verbatim} Omitting this command will force the use of a concatenated list of all of the authors' names, which may result in overlapping text in the page headers. The article template's documentation, available at \url{https://www.acm.org/publications/proceedings-template}, has a complete explanation of these commands and tips for their effective use. Note that authors' addresses are mandatory for journal articles. \section{Rights Information} Authors of any work published by ACM will need to complete a rights form. Depending on the kind of work, and the rights management choice made by the author, this may be copyright transfer, permission, license, or an OA (open access) agreement. Regardless of the rights management choice, the author will receive a copy of the completed rights form once it has been submitted. This form contains \LaTeX\ commands that must be copied into the source document. When the document source is compiled, these commands and their parameters add formatted text to several areas of the final document: \begin{itemize} \item the ``ACM Reference Format'' text on the first page. \item the ``rights management'' text on the first page. \item the conference information in the page header(s). 
\end{itemize} Rights information is unique to the work; if you are preparing several works for an event, make sure to use the correct set of commands with each of the works. The ACM Reference Format text is required for all articles over one page in length, and is optional for one-page articles (abstracts). \section{CCS Concepts and User-Defined Keywords} Two elements of the ``acmart'' document class provide powerful taxonomic tools for you to help readers find your work in an online search. The ACM Computing Classification System --- \url{https://www.acm.org/publications/class-2012} --- is a set of classifiers and concepts that describe the computing discipline. Authors can select entries from this classification system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the commands to be included in the \LaTeX\ source. User-defined keywords are a comma-separated list of words and phrases of the authors' choosing, providing a more flexible way of describing the research being presented. CCS concepts and user-defined keywords are required for for all articles over two pages in length, and are optional for one- and two-page articles (or abstracts). \section{Sectioning Commands} Your work should use standard \LaTeX\ sectioning commands: \verb|section|, \verb|subsection|, \verb|subsubsection|, and \verb|paragraph|. They should be numbered; do not remove the numbering from the commands. Simulating a sectioning command by setting the first word or words of a paragraph in boldface or italicized text is {\bfseries not allowed.} \section{Tables} The ``\verb|acmart|'' document class includes the ``\verb|booktabs|'' package --- \url{https://ctan.org/pkg/booktabs} --- for preparing high-quality tables. Table captions are placed {\itshape above} the table. Because tables cannot be split across pages, the best placement for them is typically the top of the page nearest their initial cite. 
To ensure this proper ``floating'' placement of tables, use the environment \textbf{table} to enclose the table's contents and the table caption. The contents of the table itself must go in the \textbf{tabular} environment, to be aligned properly in rows and columns, with the desired horizontal and vertical rules. Again, detailed instructions on \textbf{tabular} material are found in the \textit{\LaTeX\ User's Guide}. Immediately following this sentence is the point at which Table~\ref{tab:freq} is included in the input file; compare the placement of the table here with the table in the printed output of this document. \begin{table} \caption{Frequency of Special Characters} \label{tab:freq} \begin{tabular}{ccl} \toprule Non-English or Math&Frequency&Comments\\ \midrule \O & 1 in 1,000& For Swedish names\\ $\pi$ & 1 in 5& Common in math\\ \$ & 4 in 5 & Used in business\\ $\Psi^2_1$ & 1 in 40,000& Unexplained usage\\ \bottomrule \end{tabular} \end{table} To set a wider table, which takes up the whole width of the page's live area, use the environment \textbf{table*} to enclose the table's contents and the table caption. As with a single-column table, this wide table will ``float'' to a location deemed more desirable. Immediately following this sentence is the point at which Table~\ref{tab:commands} is included in the input file; again, it is instructive to compare the placement of the table here with the table in the printed output of this document. \begin{table*} \caption{Some Typical Commands} \label{tab:commands} \begin{tabular}{ccl} \toprule Command &A Number & Comments\\ \midrule \texttt{{\char'134}author} & 100& Author \\ \texttt{{\char'134}table}& 300 & For tables\\ \texttt{{\char'134}table*}& 400& For wider tables\\ \bottomrule \end{tabular} \end{table*} \section{Math Equations} You may want to display math equations in three distinct styles: inline, numbered or non-numbered display. Each of the three are discussed in the next sections. 
\subsection{Inline (In-text) Equations} A formula that appears in the running text is called an inline or in-text formula. It is produced by the \textbf{math} environment, which can be invoked with the usual \texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with the short form \texttt{\$\,\ldots\$}. You can use any of the symbols and structures, from $\alpha$ to $\omega$, available in \LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few examples of in-text equations in context. Notice how this equation: \begin{math} \lim_{n\rightarrow \infty}x=0 \end{math}, set here in in-line math style, looks slightly different when set in display style. (See next section). \subsection{Display Equations} A numbered display equation---one set off by vertical space from the text and centered horizontally---is produced by the \textbf{equation} environment. An unnumbered display equation is produced by the \textbf{displaymath} environment. Again, in either environment, you can use any of the symbols and structures available in \LaTeX\@; this section will just give a couple of examples of display equations in context. First, consider the equation, shown as an inline equation above: \begin{equation} \lim_{n\rightarrow \infty}x=0 \end{equation} Notice how it is formatted somewhat differently in the \textbf{displaymath} environment. Now, we'll enter an unnumbered equation: \begin{displaymath} \sum_{i=0}^{\infty} x + 1 \end{displaymath} and follow it with another numbered equation: \begin{equation} \sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f \end{equation} just to demonstrate \LaTeX's able handling of numbering. \section{Figures} The ``\verb|figure|'' environment should be used for figures. One or more images can be placed within a figure. If your figure contains third-party material, you must clearly identify it as such, as shown in the example below. 
\begin{figure}[h] \centering \includegraphics[width=\linewidth]{sample-franklin} \caption{1907 Franklin Model D roadster. Photograph by Harris \& Ewing, Inc. [Public domain], via Wikimedia Commons. (\url{https://goo.gl/VLCRBB}).} \Description{The 1907 Franklin Model D roadster.} \end{figure} Your figures should contain a caption which describes the figure to the reader. Figure captions go below the figure. Your figures should {\bfseries also} include a description suitable for screen readers, to assist the visually-challenged to better understand your work. Figure captions are placed {\itshape below} the figure. \subsection{The ``Teaser Figure''} A ``teaser figure'' is an image, or set of images in one figure, that are placed after all author and affiliation information, and before the body of the article, spanning the page. If you wish to have such a figure in your article, place the command immediately before the \verb|\maketitle| command: \begin{verbatim} \begin{teaserfigure} \includegraphics[width=\textwidth]{sampleteaser} \caption{figure caption} \Description{figure description} \end{teaserfigure} \end{verbatim} \section{Citations and Bibliographies} The use of {\rm B\kern-.05em{\sc i\kern-.025em b}\kern-.08emT\kern-.1667em\lower.7ex\hbox{E}\kern-.125emX}\ for the preparation and formatting of one's references is strongly recommended. Authors' names should be complete --- use full first names (``Donald E. Knuth'') not initials (``D. E. Knuth'') --- and the salient identifying features of a reference should be included: title, year, volume, number, pages, article DOI, etc. 
The bibliography is included in your source document with these two commands, placed just before the \verb|\end{document}| command: \begin{verbatim} \bibliographystyle{ACM-Reference-Format} \section{Introduction} ACM's consolidated article template, introduced in 2017, provides a consistent \LaTeX\ style for use across ACM publications, and incorporates accessibility and metadata-extraction functionality necessary for future Digital Library endeavors. Numerous ACM and SIG-specific \LaTeX\ templates have been examined, and their unique features incorporated into this single new template. If you are new to publishing with ACM, this document is a valuable guide to the process of preparing your work for publication. If you have published with ACM before, this document provides insight and instruction into more recent changes to the article template. The ``\verb|acmart|'' document class can be used to prepare articles for any ACM publication --- conference or journal, and for any stage of publication, from review to final ``camera-ready'' copy, to the author's own version, with {\itshape very} few changes to the source. \section{Template Overview} As noted in the introduction, the ``\verb|acmart|'' document class can be used to prepare many different kinds of documentation --- a double-blind initial submission of a full-length technical paper, a two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready'' journal article, a SIGCHI Extended Abstract, and more --- all by selecting the appropriate {\itshape template style} and {\itshape template parameters}. This document will explain the major features of the document class. For further information, the {\itshape \LaTeX\ User's Guide} is available from \url{https://www.acm.org/publications/proceedings-template}. \subsection{Template Styles} The primary parameter given to the ``\verb|acmart|'' document class is the {\itshape template style} which corresponds to the kind of publication or SIG publishing the work. 
This parameter is enclosed in square brackets and is a part of the {\verb|documentclass|} command: \begin{verbatim} \documentclass[STYLE]{acmart} \end{verbatim} Journals use one of three template styles. All but three ACM journals use the {\verb|acmsmall|} template style: \begin{itemize} \item {\verb|acmsmall|}: The default journal template style. \item {\verb|acmlarge|}: Used by JOCCH and TAP. \item {\verb|acmtog|}: Used by TOG. \end{itemize} The majority of conference proceedings documentation will use the {\verb|acmconf|} template style. \begin{itemize} \item {\verb|acmconf|}: The default proceedings template style. \item{\verb|sigchi|}: Used for SIGCHI conference articles. \item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles. \item{\verb|sigplan|}: Used for SIGPLAN conference articles. \end{itemize} \subsection{Template Parameters} In addition to specifying the {\itshape template style} to be used in formatting your work, there are a number of {\itshape template parameters} which modify some part of the applied template style. A complete list of these parameters can be found in the {\itshape \LaTeX\ User's Guide.} Frequently-used parameters, or combinations of parameters, include: \begin{itemize} \item {\verb|anonymous,review|}: Suitable for a ``double-blind'' conference submission. Anonymizes the work and includes line numbers. Use with the \verb|\acmSubmissionID| command to print the submission's unique ID on each page of the work. \item{\verb|authorversion|}: Produces a version of the work suitable for posting by the author. \item{\verb|screen|}: Produces colored hyperlinks. 
\end{itemize} This document uses the following string as the first command in the source file: \begin{verbatim} \documentclass[sigplan,screen]{acmart} \end{verbatim} \section{Modifications} Modifying the template --- including but not limited to: adjusting margins, typeface sizes, line spacing, paragraph and list definitions, and the use of the \verb|\vspace| command to manually adjust the vertical spacing between elements of your work --- is not allowed. {\bfseries Your document will be returned to you for revision if modifications are discovered.} \section{Typefaces} The ``\verb|acmart|'' document class requires the use of the ``Libertine'' typeface family. Your \TeX\ installation should include this set of packages. Please do not substitute other typefaces. The ``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used, as they will override the built-in typeface families. \section{Title Information} The title of your work should use capital letters appropriately - \url{https://capitalizemytitle.com/} has useful rules for capitalization. Use the {\verb|title|} command to define the title of your work. If your work has a subtitle, define it with the {\verb|subtitle|} command. Do not insert line breaks in your title. If your title is lengthy, you must define a short version to be used in the page headers, to prevent overlapping text. The \verb|title| command has a ``short title'' parameter: \begin{verbatim} \title[short title]{full title} \end{verbatim} \section{Authors and Affiliations} Each author must be defined separately for accurate metadata identification. Multiple authors may share one affiliation. Authors' names should not be abbreviated; use full first names wherever possible. Include authors' e-mail addresses whenever possible. 
Grouping authors' names or e-mail addresses, or providing an ``e-mail alias,'' as shown below, is not acceptable: \begin{verbatim} \author{Brooke Aster, David Mehldau} \email{dave,judy,steve@university.edu} \email{firstname.lastname@phillips.org} \end{verbatim} The \verb|authornote| and \verb|authornotemark| commands allow a note to apply to multiple authors --- for example, if the first two authors of an article contributed equally to the work. If your author list is lengthy, you must define a shortened version of the list of authors to be used in the page headers, to prevent overlapping text. The following command should be placed just after the last \verb|\author{}| definition: \begin{verbatim} \renewcommand{\shortauthors}{McCartney, et al.} \end{verbatim} Omitting this command will force the use of a concatenated list of all of the authors' names, which may result in overlapping text in the page headers. The article template's documentation, available at \url{https://www.acm.org/publications/proceedings-template}, has a complete explanation of these commands and tips for their effective use. Note that authors' addresses are mandatory for journal articles. \section{Rights Information} Authors of any work published by ACM will need to complete a rights form. Depending on the kind of work, and the rights management choice made by the author, this may be copyright transfer, permission, license, or an OA (open access) agreement. Regardless of the rights management choice, the author will receive a copy of the completed rights form once it has been submitted. This form contains \LaTeX\ commands that must be copied into the source document. When the document source is compiled, these commands and their parameters add formatted text to several areas of the final document: \begin{itemize} \item the ``ACM Reference Format'' text on the first page. \item the ``rights management'' text on the first page. \item the conference information in the page header(s). 
\end{itemize} Rights information is unique to the work; if you are preparing several works for an event, make sure to use the correct set of commands with each of the works. The ACM Reference Format text is required for all articles over one page in length, and is optional for one-page articles (abstracts). \section{CCS Concepts and User-Defined Keywords} Two elements of the ``acmart'' document class provide powerful taxonomic tools for you to help readers find your work in an online search. The ACM Computing Classification System --- \url{https://www.acm.org/publications/class-2012} --- is a set of classifiers and concepts that describe the computing discipline. Authors can select entries from this classification system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the commands to be included in the \LaTeX\ source. User-defined keywords are a comma-separated list of words and phrases of the authors' choosing, providing a more flexible way of describing the research being presented. CCS concepts and user-defined keywords are required for for all articles over two pages in length, and are optional for one- and two-page articles (or abstracts). \section{Sectioning Commands} Your work should use standard \LaTeX\ sectioning commands: \verb|section|, \verb|subsection|, \verb|subsubsection|, and \verb|paragraph|. They should be numbered; do not remove the numbering from the commands. Simulating a sectioning command by setting the first word or words of a paragraph in boldface or italicized text is {\bfseries not allowed.} \section{Tables} The ``\verb|acmart|'' document class includes the ``\verb|booktabs|'' package --- \url{https://ctan.org/pkg/booktabs} --- for preparing high-quality tables. Table captions are placed {\itshape above} the table. Because tables cannot be split across pages, the best placement for them is typically the top of the page nearest their initial cite. 
To ensure this proper ``floating'' placement of tables, use the environment \textbf{table} to enclose the table's contents and the table caption. The contents of the table itself must go in the \textbf{tabular} environment, to be aligned properly in rows and columns, with the desired horizontal and vertical rules. Again, detailed instructions on \textbf{tabular} material are found in the \textit{\LaTeX\ User's Guide}. Immediately following this sentence is the point at which Table~\ref{tab:freq} is included in the input file; compare the placement of the table here with the table in the printed output of this document. \begin{table} \caption{Frequency of Special Characters} \label{tab:freq} \begin{tabular}{ccl} \toprule Non-English or Math&Frequency&Comments\\ \midrule \O & 1 in 1,000& For Swedish names\\ $\pi$ & 1 in 5& Common in math\\ \$ & 4 in 5 & Used in business\\ $\Psi^2_1$ & 1 in 40,000& Unexplained usage\\ \bottomrule \end{tabular} \end{table} To set a wider table, which takes up the whole width of the page's live area, use the environment \textbf{table*} to enclose the table's contents and the table caption. As with a single-column table, this wide table will ``float'' to a location deemed more desirable. Immediately following this sentence is the point at which Table~\ref{tab:commands} is included in the input file; again, it is instructive to compare the placement of the table here with the table in the printed output of this document. \begin{table*} \caption{Some Typical Commands} \label{tab:commands} \begin{tabular}{ccl} \toprule Command &A Number & Comments\\ \midrule \texttt{{\char'134}author} & 100& Author \\ \texttt{{\char'134}table}& 300 & For tables\\ \texttt{{\char'134}table*}& 400& For wider tables\\ \bottomrule \end{tabular} \end{table*} \section{Math Equations} You may want to display math equations in three distinct styles: inline, numbered or non-numbered display. Each of the three are discussed in the next sections. 
\subsection{Inline (In-text) Equations} A formula that appears in the running text is called an inline or in-text formula. It is produced by the \textbf{math} environment, which can be invoked with the usual \texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with the short form \texttt{\$\,\ldots\$}. You can use any of the symbols and structures, from $\alpha$ to $\omega$, available in \LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few examples of in-text equations in context. Notice how this equation: \begin{math} \lim_{n\rightarrow \infty}x=0 \end{math}, set here in in-line math style, looks slightly different when set in display style. (See next section). \subsection{Display Equations} A numbered display equation---one set off by vertical space from the text and centered horizontally---is produced by the \textbf{equation} environment. An unnumbered display equation is produced by the \textbf{displaymath} environment. Again, in either environment, you can use any of the symbols and structures available in \LaTeX\@; this section will just give a couple of examples of display equations in context. First, consider the equation, shown as an inline equation above: \begin{equation} \lim_{n\rightarrow \infty}x=0 \end{equation} Notice how it is formatted somewhat differently in the \textbf{displaymath} environment. Now, we'll enter an unnumbered equation: \begin{displaymath} \sum_{i=0}^{\infty} x + 1 \end{displaymath} and follow it with another numbered equation: \begin{equation} \sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f \end{equation} just to demonstrate \LaTeX's able handling of numbering. \section{Figures} The ``\verb|figure|'' environment should be used for figures. One or more images can be placed within a figure. If your figure contains third-party material, you must clearly identify it as such, as shown in the example below. 
\begin{figure}[h] \centering \includegraphics[width=\linewidth]{sample-franklin} \caption{1907 Franklin Model D roadster. Photograph by Harris \& Ewing, Inc. [Public domain], via Wikimedia Commons. (\url{https://goo.gl/VLCRBB}).} \Description{The 1907 Franklin Model D roadster.} \end{figure} Your figures should contain a caption which describes the figure to the reader. Figure captions go below the figure. Your figures should {\bfseries also} include a description suitable for screen readers, to assist the visually-challenged to better understand your work. Figure captions are placed {\itshape below} the figure. \subsection{The ``Teaser Figure''} A ``teaser figure'' is an image, or set of images in one figure, that are placed after all author and affiliation information, and before the body of the article, spanning the page. If you wish to have such a figure in your article, place the command immediately before the \verb|\maketitle| command: \begin{verbatim} \begin{teaserfigure} \includegraphics[width=\textwidth]{sampleteaser} \caption{figure caption} \Description{figure description} \end{teaserfigure} \end{verbatim} \section{Citations and Bibliographies} The use of {\rm B\kern-.05em{\sc i\kern-.025em b}\kern-.08emT\kern-.1667em\lower.7ex\hbox{E}\kern-.125emX}\ for the preparation and formatting of one's references is strongly recommended. Authors' names should be complete --- use full first names (``Donald E. Knuth'') not initials (``D. E. Knuth'') --- and the salient identifying features of a reference should be included: title, year, volume, number, pages, article DOI, etc. 
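For concreteness, a \BibTeX\ entry that follows these guidelines might look like the following; every field shown here (key, names, title, journal, and DOI) is a purely illustrative placeholder, not a real reference:

\begin{verbatim}
@article{Lastname:2024,
  author  = {Firstname M. Lastname and
             Second A. Coauthor},
  title   = {A Descriptive Article Title},
  journal = {Journal of Illustrative Results},
  year    = {2024},
  volume  = {1},
  number  = {2},
  pages   = {34--56},
  doi     = {10.1145/0000000.0000000}
}
\end{verbatim}

Note that the \texttt{author} field spells out full first names, and that the volume, number, pages, and DOI are all present, as recommended above.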
The bibliography is included in your source document with these two commands, placed just before the \verb|\end{document}| command: \begin{verbatim} \bibliographystyle{ACM-Reference-Format} \bibliography{bibfile} \end{verbatim} where ``\verb|bibfile|'' is the name, without the ``\verb|.bib|'' suffix, of the \BibTeX\ file. \section{Introduction} ACM's consolidated article template, introduced in 2017, provides a consistent \LaTeX\ style for use across ACM publications, and incorporates accessibility and metadata-extraction functionality necessary for future Digital Library endeavors. Numerous ACM and SIG-specific \LaTeX\ templates have been examined, and their unique features incorporated into this single new template. If you are new to publishing with ACM, this document is a valuable guide to the process of preparing your work for publication. If you have published with ACM before, this document provides insight and instruction into more recent changes to the article template. The ``\verb|acmart|'' document class can be used to prepare articles for any ACM publication --- conference or journal, and for any stage of publication, from review to final ``camera-ready'' copy, to the author's own version, with {\itshape very} few changes to the source. \section{Template Overview} As noted in the introduction, the ``\verb|acmart|'' document class can be used to prepare many different kinds of documentation --- a double-blind initial submission of a full-length technical paper, a two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready'' journal article, a SIGCHI Extended Abstract, and more --- all by selecting the appropriate {\itshape template style} and {\itshape template parameters}. This document will explain the major features of the document class. For further information, the {\itshape \LaTeX\ User's Guide} is available from \url{https://www.acm.org/publications/proceedings-template}. \subsection{Template Styles} The primary parameter given to the ``\verb|acmart|'' document class is the {\itshape template style} which corresponds to the kind of publication or SIG publishing the work.
This parameter is enclosed in square brackets and is a part of the {\verb|documentclass|} command: \begin{verbatim} \documentclass[STYLE]{acmart} \end{verbatim} Journals use one of three template styles. All but three ACM journals use the {\verb|acmsmall|} template style: \begin{itemize} \item {\verb|acmsmall|}: The default journal template style. \item {\verb|acmlarge|}: Used by JOCCH and TAP. \item {\verb|acmtog|}: Used by TOG. \end{itemize} The majority of conference proceedings documentation will use the {\verb|acmconf|} template style. \begin{itemize} \item {\verb|acmconf|}: The default proceedings template style. \item{\verb|sigchi|}: Used for SIGCHI conference articles. \item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles. \item{\verb|sigplan|}: Used for SIGPLAN conference articles. \end{itemize} \subsection{Template Parameters} In addition to specifying the {\itshape template style} to be used in formatting your work, there are a number of {\itshape template parameters} which modify some part of the applied template style. A complete list of these parameters can be found in the {\itshape \LaTeX\ User's Guide.} Frequently-used parameters, or combinations of parameters, include: \begin{itemize} \item {\verb|anonymous,review|}: Suitable for a ``double-blind'' conference submission. Anonymizes the work and includes line numbers. Use with the \verb|\acmSubmissionID| command to print the submission's unique ID on each page of the work. \item{\verb|authorversion|}: Produces a version of the work suitable for posting by the author. \item{\verb|screen|}: Produces colored hyperlinks. 
\end{itemize} This document uses the following string as the first command in the source file: \begin{verbatim} \documentclass[sigconf]{acmart} \end{verbatim} \section{Modifications} Modifying the template --- including but not limited to: adjusting margins, typeface sizes, line spacing, paragraph and list definitions, and the use of the \verb|\vspace| command to manually adjust the vertical spacing between elements of your work --- is not allowed. {\bfseries Your document will be returned to you for revision if modifications are discovered.} \section{Typefaces} The ``\verb|acmart|'' document class requires the use of the ``Libertine'' typeface family. Your \TeX\ installation should include this set of packages. Please do not substitute other typefaces. The ``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used, as they will override the built-in typeface families. \section{Title Information} The title of your work should use capital letters appropriately - \url{https://capitalizemytitle.com/} has useful rules for capitalization. Use the {\verb|title|} command to define the title of your work. If your work has a subtitle, define it with the {\verb|subtitle|} command. Do not insert line breaks in your title. If your title is lengthy, you must define a short version to be used in the page headers, to prevent overlapping text. The \verb|title| command has a ``short title'' parameter: \begin{verbatim} \title[short title]{full title} \end{verbatim} \section{Authors and Affiliations} Each author must be defined separately for accurate metadata identification. Multiple authors may share one affiliation. Authors' names should not be abbreviated; use full first names wherever possible. Include authors' e-mail addresses whenever possible. 
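As a sketch of this separate-definition convention, two authors sharing one affiliation would each be declared in full (all names and addresses below are illustrative placeholders):

\begin{verbatim}
\author{Firstname Lastname}
\email{firstname.lastname@example.edu}
\affiliation{%
  \institution{Example University}
  \city{Springfield}
  \country{USA}}

\author{Another Coauthor}
\email{another.coauthor@example.edu}
\affiliation{%
  \institution{Example University}
  \city{Springfield}
  \country{USA}}
\end{verbatim}

Each \verb|\author| command is paired with its own \verb|\email| and \verb|\affiliation| commands, even when the affiliation is repeated, so that metadata extraction can associate each author with the correct information.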
Grouping authors' names or e-mail addresses, or providing an ``e-mail alias,'' as shown below, is not acceptable: \begin{verbatim} \author{Brooke Aster, David Mehldau} \email{dave,judy,steve@university.edu} \email{firstname.lastname@phillips.org} \end{verbatim} The \verb|authornote| and \verb|authornotemark| commands allow a note to apply to multiple authors --- for example, if the first two authors of an article contributed equally to the work. If your author list is lengthy, you must define a shortened version of the list of authors to be used in the page headers, to prevent overlapping text. The following command should be placed just after the last \verb|\author{}| definition: \begin{verbatim} \renewcommand{\shortauthors}{McCartney, et al.} \end{verbatim} Omitting this command will force the use of a concatenated list of all of the authors' names, which may result in overlapping text in the page headers. The article template's documentation, available at \url{https://www.acm.org/publications/proceedings-template}, has a complete explanation of these commands and tips for their effective use. Note that authors' addresses are mandatory for journal articles. \section{Rights Information} Authors of any work published by ACM will need to complete a rights form. Depending on the kind of work, and the rights management choice made by the author, this may be copyright transfer, permission, license, or an OA (open access) agreement. Regardless of the rights management choice, the author will receive a copy of the completed rights form once it has been submitted. This form contains \LaTeX\ commands that must be copied into the source document. When the document source is compiled, these commands and their parameters add formatted text to several areas of the final document: \begin{itemize} \item the ``ACM Reference Format'' text on the first page. \item the ``rights management'' text on the first page. \item the conference information in the page header(s). 
\end{itemize} Rights information is unique to the work; if you are preparing several works for an event, make sure to use the correct set of commands with each of the works. The ACM Reference Format text is required for all articles over one page in length, and is optional for one-page articles (abstracts). \section{CCS Concepts and User-Defined Keywords} Two elements of the ``acmart'' document class provide powerful taxonomic tools for you to help readers find your work in an online search. The ACM Computing Classification System --- \url{https://www.acm.org/publications/class-2012} --- is a set of classifiers and concepts that describe the computing discipline. Authors can select entries from this classification system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the commands to be included in the \LaTeX\ source. User-defined keywords are a comma-separated list of words and phrases of the authors' choosing, providing a more flexible way of describing the research being presented. CCS concepts and user-defined keywords are required for all articles over two pages in length, and are optional for one- and two-page articles (or abstracts). \section{Sectioning Commands} Your work should use standard \LaTeX\ sectioning commands: \verb|section|, \verb|subsection|, \verb|subsubsection|, and \verb|paragraph|. They should be numbered; do not remove the numbering from the commands. Simulating a sectioning command by setting the first word or words of a paragraph in boldface or italicized text is {\bfseries not allowed.} \section{Tables} The ``\verb|acmart|'' document class includes the ``\verb|booktabs|'' package --- \url{https://ctan.org/pkg/booktabs} --- for preparing high-quality tables. Table captions are placed {\itshape above} the table. Because tables cannot be split across pages, the best placement for them is typically the top of the page nearest their initial cite.
\section{Discussion} \label{sec:discussion} In this section, we discuss key ideas and limitations.
Below we address three critical points: model generalizability, the strategy space, and the use of a single social-network dataset. \begin{description} \item[Types of content production:] Our framework applies to content-production scenarios with discrete strategy spaces. While the framework applies when there is a mix of single-authored and collaboratively authored documents, the proposed dual attention networks make the most sense when the content has multiple authors, for example in venues such as academic citation networks and online forums such as Piazza. \item[Generalization to other networks:] Each social network has a different strategy space (the set of possible actions and the reward mechanism mapping actions to utility), different social norms, and a different network structure (a bipartite person--content graph). Thus, interpreting the results of our framework on each social network requires care. This paper discusses results only from the DBLP dataset. We do have results from StackExchange data, where we assumed that the participants collaborated in creating the set of answers, with user reputation as the payoff. The strategy space for StackExchange included, among other choices, which questions to answer and which users to follow. Since this is the first paper on discovering strategic behaviors from networks, we felt it was essential to perform a qualitative analysis of the discovered behaviors~(\Cref{sec:6}) rather than restrict ourselves to a link-prediction task~(\Cref{sec:5.6},~\Cref{tab:experiment_results_table}). It is the qualitative analysis that provides a more in-depth understanding (the emergence of order; the connections between order and payoffs) of behavior. We lacked the space to describe the strategy space for StackExchange (which differs from DBLP's) \textit{and} to perform a careful qualitative analysis of the StackExchange results; we plan to report these results in an extended arXiv paper. \item[Limitations:] We state two limitations here.
First, for each social network, we require a specific set of strategies and a utility function, and identifying a complete strategy space may be non-trivial for some networks. Second, our rational model is myopic; instead, we could use an explore--exploit strategy in the vein of the reinforcement-learning literature. \end{description}
\subsection{Template Styles} The primary parameter given to the ``\verb|acmart|'' document class is the {\itshape template style} which corresponds to the kind of publication or SIG publishing the work. This parameter is enclosed in square brackets and is a part of the {\verb|documentclass|} command: \begin{verbatim} \documentclass[STYLE]{acmart} \end{verbatim} Journals use one of three template styles. All but three ACM journals use the {\verb|acmsmall|} template style: \begin{itemize} \item {\verb|acmsmall|}: The default journal template style. \item {\verb|acmlarge|}: Used by JOCCH and TAP. \item {\verb|acmtog|}: Used by TOG. \end{itemize} The majority of conference proceedings documentation will use the {\verb|acmconf|} template style. \begin{itemize} \item {\verb|acmconf|}: The default proceedings template style. \item{\verb|sigchi|}: Used for SIGCHI conference articles. \item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles. \item{\verb|sigplan|}: Used for SIGPLAN conference articles. \end{itemize} \subsection{Template Parameters} In addition to specifying the {\itshape template style} to be used in formatting your work, there are a number of {\itshape template parameters} which modify some part of the applied template style. A complete list of these parameters can be found in the {\itshape \LaTeX\ User's Guide.} Frequently-used parameters, or combinations of parameters, include: \begin{itemize} \item {\verb|anonymous,review|}: Suitable for a ``double-blind'' conference submission. Anonymizes the work and includes line numbers. Use with the \verb|\acmSubmissionID| command to print the submission's unique ID on each page of the work. \item{\verb|authorversion|}: Produces a version of the work suitable for posting by the author. \item{\verb|screen|}: Produces colored hyperlinks. 
\end{itemize} This document uses the following string as the first command in the source file: \begin{verbatim} \documentclass[sigconf]{acmart} \end{verbatim} \section{Modifications} Modifying the template --- including but not limited to: adjusting margins, typeface sizes, line spacing, paragraph and list definitions, and the use of the \verb|\vspace| command to manually adjust the vertical spacing between elements of your work --- is not allowed. {\bfseries Your document will be returned to you for revision if modifications are discovered.} \section{Typefaces} The ``\verb|acmart|'' document class requires the use of the ``Libertine'' typeface family. Your \TeX\ installation should include this set of packages. Please do not substitute other typefaces. The ``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used, as they will override the built-in typeface families. \section{Title Information} The title of your work should use capital letters appropriately - \url{https://capitalizemytitle.com/} has useful rules for capitalization. Use the {\verb|title|} command to define the title of your work. If your work has a subtitle, define it with the {\verb|subtitle|} command. Do not insert line breaks in your title. If your title is lengthy, you must define a short version to be used in the page headers, to prevent overlapping text. The \verb|title| command has a ``short title'' parameter: \begin{verbatim} \title[short title]{full title} \end{verbatim} \section{Authors and Affiliations} Each author must be defined separately for accurate metadata identification. Multiple authors may share one affiliation. Authors' names should not be abbreviated; use full first names wherever possible. Include authors' e-mail addresses whenever possible. 
Grouping authors' names or e-mail addresses, or providing an ``e-mail alias,'' as shown below, is not acceptable: \begin{verbatim} \author{Brooke Aster, David Mehldau} \email{dave,judy,steve@university.edu} \email{firstname.lastname@phillips.org} \end{verbatim} The \verb|authornote| and \verb|authornotemark| commands allow a note to apply to multiple authors --- for example, if the first two authors of an article contributed equally to the work. If your author list is lengthy, you must define a shortened version of the list of authors to be used in the page headers, to prevent overlapping text. The following command should be placed just after the last \verb|\author{}| definition: \begin{verbatim} \renewcommand{\shortauthors}{McCartney, et al.} \end{verbatim} Omitting this command will force the use of a concatenated list of all of the authors' names, which may result in overlapping text in the page headers. The article template's documentation, available at \url{https://www.acm.org/publications/proceedings-template}, has a complete explanation of these commands and tips for their effective use. Note that authors' addresses are mandatory for journal articles. \section{Rights Information} Authors of any work published by ACM will need to complete a rights form. Depending on the kind of work, and the rights management choice made by the author, this may be copyright transfer, permission, license, or an OA (open access) agreement. Regardless of the rights management choice, the author will receive a copy of the completed rights form once it has been submitted. This form contains \LaTeX\ commands that must be copied into the source document. When the document source is compiled, these commands and their parameters add formatted text to several areas of the final document: \begin{itemize} \item the ``ACM Reference Format'' text on the first page. \item the ``rights management'' text on the first page. \item the conference information in the page header(s). 
\end{itemize} Rights information is unique to the work; if you are preparing several works for an event, make sure to use the correct set of commands with each of the works. The ACM Reference Format text is required for all articles over one page in length, and is optional for one-page articles (abstracts). \section{CCS Concepts and User-Defined Keywords} Two elements of the ``acmart'' document class provide powerful taxonomic tools for you to help readers find your work in an online search. The ACM Computing Classification System --- \url{https://www.acm.org/publications/class-2012} --- is a set of classifiers and concepts that describe the computing discipline. Authors can select entries from this classification system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the commands to be included in the \LaTeX\ source. User-defined keywords are a comma-separated list of words and phrases of the authors' choosing, providing a more flexible way of describing the research being presented. CCS concepts and user-defined keywords are required for for all articles over two pages in length, and are optional for one- and two-page articles (or abstracts). \section{Sectioning Commands} Your work should use standard \LaTeX\ sectioning commands: \verb|section|, \verb|subsection|, \verb|subsubsection|, and \verb|paragraph|. They should be numbered; do not remove the numbering from the commands. Simulating a sectioning command by setting the first word or words of a paragraph in boldface or italicized text is {\bfseries not allowed.} \section{Tables} The ``\verb|acmart|'' document class includes the ``\verb|booktabs|'' package --- \url{https://ctan.org/pkg/booktabs} --- for preparing high-quality tables. Table captions are placed {\itshape above} the table. Because tables cannot be split across pages, the best placement for them is typically the top of the page nearest their initial cite. 
To ensure this proper ``floating'' placement of tables, use the environment \textbf{table} to enclose the table's contents and the table caption. The contents of the table itself must go in the \textbf{tabular} environment, to be aligned properly in rows and columns, with the desired horizontal and vertical rules. Again, detailed instructions on \textbf{tabular} material are found in the \textit{\LaTeX\ User's Guide}. Immediately following this sentence is the point at which Table~\ref{tab:freq} is included in the input file; compare the placement of the table here with the table in the printed output of this document. \begin{table} \caption{Frequency of Special Characters} \label{tab:freq} \begin{tabular}{ccl} \toprule Non-English or Math&Frequency&Comments\\ \midrule \O & 1 in 1,000& For Swedish names\\ $\pi$ & 1 in 5& Common in math\\ \$ & 4 in 5 & Used in business\\ $\Psi^2_1$ & 1 in 40,000& Unexplained usage\\ \bottomrule \end{tabular} \end{table} To set a wider table, which takes up the whole width of the page's live area, use the environment \textbf{table*} to enclose the table's contents and the table caption. As with a single-column table, this wide table will ``float'' to a location deemed more desirable. Immediately following this sentence is the point at which Table~\ref{tab:commands} is included in the input file; again, it is instructive to compare the placement of the table here with the table in the printed output of this document. \begin{table*} \caption{Some Typical Commands} \label{tab:commands} \begin{tabular}{ccl} \toprule Command &A Number & Comments\\ \midrule \texttt{{\char'134}author} & 100& Author \\ \texttt{{\char'134}table}& 300 & For tables\\ \texttt{{\char'134}table*}& 400& For wider tables\\ \bottomrule \end{tabular} \end{table*} \section{Math Equations} You may want to display math equations in three distinct styles: inline, numbered or non-numbered display. Each of the three are discussed in the next sections. 
\subsection{Inline (In-text) Equations} A formula that appears in the running text is called an inline or in-text formula. It is produced by the \textbf{math} environment, which can be invoked with the usual \texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with the short form \texttt{\$\,\ldots\$}. You can use any of the symbols and structures, from $\alpha$ to $\omega$, available in \LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few examples of in-text equations in context. Notice how this equation: \begin{math} \lim_{n\rightarrow \infty}x=0 \end{math}, set here in in-line math style, looks slightly different when set in display style. (See next section). \subsection{Display Equations} A numbered display equation---one set off by vertical space from the text and centered horizontally---is produced by the \textbf{equation} environment. An unnumbered display equation is produced by the \textbf{displaymath} environment. Again, in either environment, you can use any of the symbols and structures available in \LaTeX\@; this section will just give a couple of examples of display equations in context. First, consider the equation, shown as an inline equation above: \begin{equation} \lim_{n\rightarrow \infty}x=0 \end{equation} Notice how it is formatted somewhat differently in the \textbf{displaymath} environment. Now, we'll enter an unnumbered equation: \begin{displaymath} \sum_{i=0}^{\infty} x + 1 \end{displaymath} and follow it with another numbered equation: \begin{equation} \sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f \end{equation} just to demonstrate \LaTeX's able handling of numbering. \section{Figures} The ``\verb|figure|'' environment should be used for figures. One or more images can be placed within a figure. If your figure contains third-party material, you must clearly identify it as such, as shown in the example below. 
\begin{figure}[h] \centering \includegraphics[width=\linewidth]{sample-franklin} \caption{1907 Franklin Model D roadster. Photograph by Harris \& Ewing, Inc. [Public domain], via Wikimedia Commons. (\url{https://goo.gl/VLCRBB}).} \Description{The 1907 Franklin Model D roadster.} \end{figure} Your figures should contain a caption which describes the figure to the reader. Figure captions go below the figure. Your figures should {\bfseries also} include a description suitable for screen readers, to assist the visually-challenged to better understand your work. Figure captions are placed {\itshape below} the figure. \subsection{The ``Teaser Figure''} A ``teaser figure'' is an image, or set of images in one figure, that are placed after all author and affiliation information, and before the body of the article, spanning the page. If you wish to have such a figure in your article, place the command immediately before the \verb|\maketitle| command: \begin{verbatim} \begin{teaserfigure} \includegraphics[width=\textwidth]{sampleteaser} \caption{figure caption} \Description{figure description} \end{teaserfigure} \end{verbatim} \section{Citations and Bibliographies} The use of {\rm B\kern-.05em{\sc i\kern-.025em b}\kern-.08emT\kern-.1667em\lower.7ex\hbox{E}\kern-.125emX}\ for the preparation and formatting of one's references is strongly recommended. Authors' names should be complete --- use full first names (``Donald E. Knuth'') not initials (``D. E. Knuth'') --- and the salient identifying features of a reference should be included: title, year, volume, number, pages, article DOI, etc. 
The bibliography is included in your source document with these two commands, placed just before the \verb|\end{document}| command: \begin{verbatim}
\bibliographystyle{ACM-Reference-Format}
\bibliography{bibfile}
\end{verbatim} where ``\verb|bibfile|'' is the name, without the ``\verb|.bib|'' suffix, of the bibliography file. \section{Introduction} ACM's consolidated article template, introduced in 2017, provides a consistent \LaTeX\ style for use across ACM publications, and incorporates accessibility and metadata-extraction functionality necessary for future Digital Library endeavors. Numerous ACM and SIG-specific \LaTeX\ templates have been examined, and their unique features incorporated into this single new template. If you are new to publishing with ACM, this document is a valuable guide to the process of preparing your work for publication. If you have published with ACM before, this document provides insight and instruction into more recent changes to the article template. The ``\verb|acmart|'' document class can be used to prepare articles for any ACM publication --- conference or journal, and for any stage of publication, from review to final ``camera-ready'' copy, to the author's own version, with {\itshape very} few changes to the source. \section{Template Overview} As noted in the introduction, the ``\verb|acmart|'' document class can be used to prepare many different kinds of documentation --- a double-blind initial submission of a full-length technical paper, a two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready'' journal article, a SIGCHI Extended Abstract, and more --- all by selecting the appropriate {\itshape template style} and {\itshape template parameters}. This document will explain the major features of the document class. For further information, the {\itshape \LaTeX\ User's Guide} is available from \url{https://www.acm.org/publications/proceedings-template}. \subsection{Template Styles} The primary parameter given to the ``\verb|acmart|'' document class is the {\itshape template style} which corresponds to the kind of publication or SIG publishing the work.
This parameter is enclosed in square brackets and is a part of the {\verb|documentclass|} command: \begin{verbatim} \documentclass[STYLE]{acmart} \end{verbatim} Journals use one of three template styles. All but three ACM journals use the {\verb|acmsmall|} template style: \begin{itemize} \item {\verb|acmsmall|}: The default journal template style. \item {\verb|acmlarge|}: Used by JOCCH and TAP. \item {\verb|acmtog|}: Used by TOG. \end{itemize} The majority of conference proceedings documentation will use the {\verb|acmconf|} template style. \begin{itemize} \item {\verb|acmconf|}: The default proceedings template style. \item{\verb|sigchi|}: Used for SIGCHI conference articles. \item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles. \item{\verb|sigplan|}: Used for SIGPLAN conference articles. \end{itemize} \subsection{Template Parameters} In addition to specifying the {\itshape template style} to be used in formatting your work, there are a number of {\itshape template parameters} which modify some part of the applied template style. A complete list of these parameters can be found in the {\itshape \LaTeX\ User's Guide.} Frequently-used parameters, or combinations of parameters, include: \begin{itemize} \item {\verb|anonymous,review|}: Suitable for a ``double-blind'' conference submission. Anonymizes the work and includes line numbers. Use with the \verb|\acmSubmissionID| command to print the submission's unique ID on each page of the work. \item{\verb|authorversion|}: Produces a version of the work suitable for posting by the author. \item{\verb|screen|}: Produces colored hyperlinks. 
\end{itemize} This document uses the following string as the first command in the source file: \begin{verbatim} \documentclass[sigconf,authordraft]{acmart} \end{verbatim} \section{Modifications} Modifying the template --- including but not limited to: adjusting margins, typeface sizes, line spacing, paragraph and list definitions, and the use of the \verb|\vspace| command to manually adjust the vertical spacing between elements of your work --- is not allowed. {\bfseries Your document will be returned to you for revision if modifications are discovered.} \section{Typefaces} The ``\verb|acmart|'' document class requires the use of the ``Libertine'' typeface family. Your \TeX\ installation should include this set of packages. Please do not substitute other typefaces. The ``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used, as they will override the built-in typeface families. \section{Title Information} The title of your work should use capital letters appropriately - \url{https://capitalizemytitle.com/} has useful rules for capitalization. Use the {\verb|title|} command to define the title of your work. If your work has a subtitle, define it with the {\verb|subtitle|} command. Do not insert line breaks in your title. If your title is lengthy, you must define a short version to be used in the page headers, to prevent overlapping text. The \verb|title| command has a ``short title'' parameter: \begin{verbatim} \title[short title]{full title} \end{verbatim} \section{Authors and Affiliations} Each author must be defined separately for accurate metadata identification. Multiple authors may share one affiliation. Authors' names should not be abbreviated; use full first names wherever possible. Include authors' e-mail addresses whenever possible. 
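A complete author block following these rules might look like the example below (the names, institutions, and e-mail addresses are purely illustrative):
\begin{verbatim}
\author{Valerie B\'eranger}
\affiliation{%
  \institution{Inria Paris-Rocquencourt}
  \city{Rocquencourt}
  \country{France}}
\email{valerie.beranger@inria.fr}

\author{Aparna Patel}
\affiliation{%
  \institution{Rajiv Gandhi University}
  \city{Doimukh}
  \country{India}}
\email{aparna.patel@rgu.ac.in}
\end{verbatim}
Each \verb|\author| command is paired with its own \verb|\affiliation| and \verb|\email| commands, which is what allows accurate metadata extraction for every author.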
Grouping authors' names or e-mail addresses, or providing an ``e-mail alias,'' as shown below, is not acceptable: \begin{verbatim} \author{Brooke Aster, David Mehldau} \email{dave,judy,steve@university.edu} \email{firstname.lastname@phillips.org} \end{verbatim} The \verb|authornote| and \verb|authornotemark| commands allow a note to apply to multiple authors --- for example, if the first two authors of an article contributed equally to the work. If your author list is lengthy, you must define a shortened version of the list of authors to be used in the page headers, to prevent overlapping text. The following command should be placed just after the last \verb|\author{}| definition: \begin{verbatim} \renewcommand{\shortauthors}{McCartney, et al.} \end{verbatim} Omitting this command will force the use of a concatenated list of all of the authors' names, which may result in overlapping text in the page headers. The article template's documentation, available at \url{https://www.acm.org/publications/proceedings-template}, has a complete explanation of these commands and tips for their effective use. Note that authors' addresses are mandatory for journal articles. \section{Rights Information} Authors of any work published by ACM will need to complete a rights form. Depending on the kind of work, and the rights management choice made by the author, this may be copyright transfer, permission, license, or an OA (open access) agreement. Regardless of the rights management choice, the author will receive a copy of the completed rights form once it has been submitted. This form contains \LaTeX\ commands that must be copied into the source document. When the document source is compiled, these commands and their parameters add formatted text to several areas of the final document: \begin{itemize} \item the ``ACM Reference Format'' text on the first page. \item the ``rights management'' text on the first page. \item the conference information in the page header(s). 
\end{itemize}

Rights information is unique to the work; if you are preparing several works for an event, make sure to use the correct set of commands with each of the works. The ACM Reference Format text is required for all articles over one page in length, and is optional for one-page articles (abstracts).

\section{CCS Concepts and User-Defined Keywords}
Two elements of the ``acmart'' document class provide powerful taxonomic tools for you to help readers find your work in an online search. The ACM Computing Classification System --- \url{https://www.acm.org/publications/class-2012} --- is a set of classifiers and concepts that describe the computing discipline. Authors can select entries from this classification system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the commands to be included in the \LaTeX\ source. User-defined keywords are a comma-separated list of words and phrases of the authors' choosing, providing a more flexible way of describing the research being presented. CCS concepts and user-defined keywords are required for all articles over two pages in length, and are optional for one- and two-page articles (or abstracts).

\section{Sectioning Commands}
Your work should use standard \LaTeX\ sectioning commands: \verb|section|, \verb|subsection|, \verb|subsubsection|, and \verb|paragraph|. They should be numbered; do not remove the numbering from the commands. Simulating a sectioning command by setting the first word or words of a paragraph in boldface or italicized text is {\bfseries not allowed.}

\section{Tables}
The ``\verb|acmart|'' document class includes the ``\verb|booktabs|'' package --- \url{https://ctan.org/pkg/booktabs} --- for preparing high-quality tables. Table captions are placed {\itshape above} the table. Because tables cannot be split across pages, the best placement for them is typically the top of the page nearest their initial cite.
To ensure this proper ``floating'' placement of tables, use the environment \textbf{table} to enclose the table's contents and the table caption. The contents of the table itself must go in the \textbf{tabular} environment, to be aligned properly in rows and columns, with the desired horizontal and vertical rules. Again, detailed instructions on \textbf{tabular} material are found in the \textit{\LaTeX\ User's Guide}. Immediately following this sentence is the point at which Table~\ref{tab:freq} is included in the input file; compare the placement of the table here with the table in the printed output of this document.

\begin{table}
  \caption{Frequency of Special Characters}
  \label{tab:freq}
  \begin{tabular}{ccl}
    \toprule
    Non-English or Math&Frequency&Comments\\
    \midrule
    \O & 1 in 1,000& For Swedish names\\
    $\pi$ & 1 in 5& Common in math\\
    \$ & 4 in 5 & Used in business\\
    $\Psi^2_1$ & 1 in 40,000& Unexplained usage\\
    \bottomrule
  \end{tabular}
\end{table}

To set a wider table, which takes up the whole width of the page's live area, use the environment \textbf{table*} to enclose the table's contents and the table caption. As with a single-column table, this wide table will ``float'' to a location deemed more desirable. Immediately following this sentence is the point at which Table~\ref{tab:commands} is included in the input file; again, it is instructive to compare the placement of the table here with the table in the printed output of this document.

\begin{table*}
  \caption{Some Typical Commands}
  \label{tab:commands}
  \begin{tabular}{ccl}
    \toprule
    Command &A Number & Comments\\
    \midrule
    \texttt{{\char'134}author} & 100& Author \\
    \texttt{{\char'134}table}& 300 & For tables\\
    \texttt{{\char'134}table*}& 400& For wider tables\\
    \bottomrule
  \end{tabular}
\end{table*}

\section{Math Equations}
You may want to display math equations in three distinct styles: inline, numbered or non-numbered display. Each of the three is discussed in the next sections.
\subsection{Inline (In-text) Equations} A formula that appears in the running text is called an inline or in-text formula. It is produced by the \textbf{math} environment, which can be invoked with the usual \texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with the short form \texttt{\$\,\ldots\$}. You can use any of the symbols and structures, from $\alpha$ to $\omega$, available in \LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few examples of in-text equations in context. Notice how this equation: \begin{math} \lim_{n\rightarrow \infty}x=0 \end{math}, set here in in-line math style, looks slightly different when set in display style. (See next section). \subsection{Display Equations} A numbered display equation---one set off by vertical space from the text and centered horizontally---is produced by the \textbf{equation} environment. An unnumbered display equation is produced by the \textbf{displaymath} environment. Again, in either environment, you can use any of the symbols and structures available in \LaTeX\@; this section will just give a couple of examples of display equations in context. First, consider the equation, shown as an inline equation above: \begin{equation} \lim_{n\rightarrow \infty}x=0 \end{equation} Notice how it is formatted somewhat differently in the \textbf{displaymath} environment. Now, we'll enter an unnumbered equation: \begin{displaymath} \sum_{i=0}^{\infty} x + 1 \end{displaymath} and follow it with another numbered equation: \begin{equation} \sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f \end{equation} just to demonstrate \LaTeX's able handling of numbering. \section{Figures} The ``\verb|figure|'' environment should be used for figures. One or more images can be placed within a figure. If your figure contains third-party material, you must clearly identify it as such, as shown in the example below. 
\begin{figure}[h]
  \centering
  \includegraphics[width=\linewidth]{sample-franklin}
  \caption{1907 Franklin Model D roadster. Photograph by Harris \& Ewing, Inc. [Public domain], via Wikimedia Commons. (\url{https://goo.gl/VLCRBB}).}
  \Description{The 1907 Franklin Model D roadster.}
\end{figure}

Your figures should contain a caption which describes the figure to the reader; figure captions are placed {\itshape below} the figure. Your figures should {\bfseries also} include a description suitable for screen readers, to assist the visually-challenged to better understand your work.

\subsection{The ``Teaser Figure''}
A ``teaser figure'' is an image, or set of images in one figure, that is placed after all author and affiliation information, and before the body of the article, spanning the page. If you wish to have such a figure in your article, place the command immediately before the \verb|\maketitle| command:
\begin{verbatim}
\begin{teaserfigure}
  \includegraphics[width=\textwidth]{sampleteaser}
  \caption{figure caption}
  \Description{figure description}
\end{teaserfigure}
\end{verbatim}

\section{Citations and Bibliographies}
The use of {\rm B\kern-.05em{\sc i\kern-.025em b}\kern-.08emT\kern-.1667em\lower.7ex\hbox{E}\kern-.125emX}\ for the preparation and formatting of one's references is strongly recommended. Authors' names should be complete --- use full first names (``Donald E. Knuth'') not initials (``D. E. Knuth'') --- and the salient identifying features of a reference should be included: title, year, volume, number, pages, article DOI, etc.
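As an illustration, a Bib\TeX\ entry prepared along these lines --- full author name and complete bibliographic details --- might look as follows (the entry and its citation key are given as an example only):
\begin{verbatim}
@article{Knuth:1984,
  author  = {Donald E. Knuth},
  title   = {Literate Programming},
  journal = {The Computer Journal},
  year    = {1984},
  volume  = {27},
  number  = {2},
  pages   = {97--111}
}
\end{verbatim}
The work can then be cited in the text with \verb|\cite{Knuth:1984}|.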
The bibliography is included in your source document with these two commands, placed just before the \verb|\end{document}| command:
\begin{verbatim}
\bibliographystyle{ACM-Reference-Format}
\bibliography{bibfile}
\end{verbatim}
where ``bibfile'' is the name, without the ``.bib'' suffix, of the Bib\TeX\ file.
This parameter is enclosed in square brackets and is a part of the {\verb|documentclass|} command: \begin{verbatim} \documentclass[STYLE]{acmart} \end{verbatim} Journals use one of three template styles. All but three ACM journals use the {\verb|acmsmall|} template style: \begin{itemize} \item {\verb|acmsmall|}: The default journal template style. \item {\verb|acmlarge|}: Used by JOCCH and TAP. \item {\verb|acmtog|}: Used by TOG. \end{itemize} The majority of conference proceedings documentation will use the {\verb|acmconf|} template style. \begin{itemize} \item {\verb|acmconf|}: The default proceedings template style. \item{\verb|sigchi|}: Used for SIGCHI conference articles. \item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles. \item{\verb|sigplan|}: Used for SIGPLAN conference articles. \end{itemize} \subsection{Template Parameters} In addition to specifying the {\itshape template style} to be used in formatting your work, there are a number of {\itshape template parameters} which modify some part of the applied template style. A complete list of these parameters can be found in the {\itshape \LaTeX\ User's Guide.} Frequently-used parameters, or combinations of parameters, include: \begin{itemize} \item {\verb|anonymous,review|}: Suitable for a ``double-blind'' conference submission. Anonymizes the work and includes line numbers. Use with the \verb|\acmSubmissionID| command to print the submission's unique ID on each page of the work. \item{\verb|authorversion|}: Produces a version of the work suitable for posting by the author. \item{\verb|screen|}: Produces colored hyperlinks. 
\end{itemize} This document uses the following string as the first command in the source file: \begin{verbatim} \documentclass[acmlarge]{acmart} \end{verbatim} \section{Modifications} Modifying the template --- including but not limited to: adjusting margins, typeface sizes, line spacing, paragraph and list definitions, and the use of the \verb|\vspace| command to manually adjust the vertical spacing between elements of your work --- is not allowed. {\bfseries Your document will be returned to you for revision if modifications are discovered.} \section{Typefaces} The ``\verb|acmart|'' document class requires the use of the ``Libertine'' typeface family. Your \TeX\ installation should include this set of packages. Please do not substitute other typefaces. The ``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used, as they will override the built-in typeface families. \section{Title Information} The title of your work should use capital letters appropriately - \url{https://capitalizemytitle.com/} has useful rules for capitalization. Use the {\verb|title|} command to define the title of your work. If your work has a subtitle, define it with the {\verb|subtitle|} command. Do not insert line breaks in your title. If your title is lengthy, you must define a short version to be used in the page headers, to prevent overlapping text. The \verb|title| command has a ``short title'' parameter: \begin{verbatim} \title[short title]{full title} \end{verbatim} \section{Authors and Affiliations} Each author must be defined separately for accurate metadata identification. Multiple authors may share one affiliation. Authors' names should not be abbreviated; use full first names wherever possible. Include authors' e-mail addresses whenever possible. 
Grouping authors' names or e-mail addresses, or providing an ``e-mail alias,'' as shown below, is not acceptable: \begin{verbatim} \author{Brooke Aster, David Mehldau} \email{dave,judy,steve@university.edu} \email{firstname.lastname@phillips.org} \end{verbatim} The \verb|authornote| and \verb|authornotemark| commands allow a note to apply to multiple authors --- for example, if the first two authors of an article contributed equally to the work. If your author list is lengthy, you must define a shortened version of the list of authors to be used in the page headers, to prevent overlapping text. The following command should be placed just after the last \verb|\author{}| definition: \begin{verbatim} \renewcommand{\shortauthors}{McCartney, et al.} \end{verbatim} Omitting this command will force the use of a concatenated list of all of the authors' names, which may result in overlapping text in the page headers. The article template's documentation, available at \url{https://www.acm.org/publications/proceedings-template}, has a complete explanation of these commands and tips for their effective use. Note that authors' addresses are mandatory for journal articles. \section{Rights Information} Authors of any work published by ACM will need to complete a rights form. Depending on the kind of work, and the rights management choice made by the author, this may be copyright transfer, permission, license, or an OA (open access) agreement. Regardless of the rights management choice, the author will receive a copy of the completed rights form once it has been submitted. This form contains \LaTeX\ commands that must be copied into the source document. When the document source is compiled, these commands and their parameters add formatted text to several areas of the final document: \begin{itemize} \item the ``ACM Reference Format'' text on the first page. \item the ``rights management'' text on the first page. \item the conference information in the page header(s). 
\end{itemize} Rights information is unique to the work; if you are preparing several works for an event, make sure to use the correct set of commands with each of the works. The ACM Reference Format text is required for all articles over one page in length, and is optional for one-page articles (abstracts). \section{CCS Concepts and User-Defined Keywords} Two elements of the ``acmart'' document class provide powerful taxonomic tools for you to help readers find your work in an online search. The ACM Computing Classification System --- \url{https://www.acm.org/publications/class-2012} --- is a set of classifiers and concepts that describe the computing discipline. Authors can select entries from this classification system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the commands to be included in the \LaTeX\ source. User-defined keywords are a comma-separated list of words and phrases of the authors' choosing, providing a more flexible way of describing the research being presented. CCS concepts and user-defined keywords are required for for all articles over two pages in length, and are optional for one- and two-page articles (or abstracts). \section{Sectioning Commands} Your work should use standard \LaTeX\ sectioning commands: \verb|section|, \verb|subsection|, \verb|subsubsection|, and \verb|paragraph|. They should be numbered; do not remove the numbering from the commands. Simulating a sectioning command by setting the first word or words of a paragraph in boldface or italicized text is {\bfseries not allowed.} \section{Tables} The ``\verb|acmart|'' document class includes the ``\verb|booktabs|'' package --- \url{https://ctan.org/pkg/booktabs} --- for preparing high-quality tables. Table captions are placed {\itshape above} the table. Because tables cannot be split across pages, the best placement for them is typically the top of the page nearest their initial cite. 
To ensure this proper ``floating'' placement of tables, use the environment \textbf{table} to enclose the table's contents and the table caption. The contents of the table itself must go in the \textbf{tabular} environment, to be aligned properly in rows and columns, with the desired horizontal and vertical rules. Again, detailed instructions on \textbf{tabular} material are found in the \textit{\LaTeX\ User's Guide}. Immediately following this sentence is the point at which Table~\ref{tab:freq} is included in the input file; compare the placement of the table here with the table in the printed output of this document. \begin{table} \caption{Frequency of Special Characters} \label{tab:freq} \begin{tabular}{ccl} \toprule Non-English or Math&Frequency&Comments\\ \midrule \O & 1 in 1,000& For Swedish names\\ $\pi$ & 1 in 5& Common in math\\ \$ & 4 in 5 & Used in business\\ $\Psi^2_1$ & 1 in 40,000& Unexplained usage\\ \bottomrule \end{tabular} \end{table} To set a wider table, which takes up the whole width of the page's live area, use the environment \textbf{table*} to enclose the table's contents and the table caption. As with a single-column table, this wide table will ``float'' to a location deemed more desirable. Immediately following this sentence is the point at which Table~\ref{tab:commands} is included in the input file; again, it is instructive to compare the placement of the table here with the table in the printed output of this document. \begin{table*} \caption{Some Typical Commands} \label{tab:commands} \begin{tabular}{ccl} \toprule Command &A Number & Comments\\ \midrule \texttt{{\char'134}author} & 100& Author \\ \texttt{{\char'134}table}& 300 & For tables\\ \texttt{{\char'134}table*}& 400& For wider tables\\ \bottomrule \end{tabular} \end{table*} \section{Math Equations} You may want to display math equations in three distinct styles: inline, numbered or non-numbered display. Each of the three are discussed in the next sections. 
\subsection{Inline (In-text) Equations} A formula that appears in the running text is called an inline or in-text formula. It is produced by the \textbf{math} environment, which can be invoked with the usual \texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with the short form \texttt{\$\,\ldots\$}. You can use any of the symbols and structures, from $\alpha$ to $\omega$, available in \LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few examples of in-text equations in context. Notice how this equation: \begin{math} \lim_{n\rightarrow \infty}x=0 \end{math}, set here in in-line math style, looks slightly different when set in display style. (See next section). \subsection{Display Equations} A numbered display equation---one set off by vertical space from the text and centered horizontally---is produced by the \textbf{equation} environment. An unnumbered display equation is produced by the \textbf{displaymath} environment. Again, in either environment, you can use any of the symbols and structures available in \LaTeX\@; this section will just give a couple of examples of display equations in context. First, consider the equation, shown as an inline equation above: \begin{equation} \lim_{n\rightarrow \infty}x=0 \end{equation} Notice how it is formatted somewhat differently in the \textbf{displaymath} environment. Now, we'll enter an unnumbered equation: \begin{displaymath} \sum_{i=0}^{\infty} x + 1 \end{displaymath} and follow it with another numbered equation: \begin{equation} \sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f \end{equation} just to demonstrate \LaTeX's able handling of numbering. \section{Figures} The ``\verb|figure|'' environment should be used for figures. One or more images can be placed within a figure. If your figure contains third-party material, you must clearly identify it as such, as shown in the example below. 
\begin{figure}[h] \centering \includegraphics[width=\linewidth]{sample-franklin} \caption{1907 Franklin Model D roadster. Photograph by Harris \& Ewing, Inc. [Public domain], via Wikimedia Commons. (\url{https://goo.gl/VLCRBB}).} \Description{The 1907 Franklin Model D roadster.} \end{figure} Your figures should contain a caption which describes the figure to the reader. Figure captions go below the figure. Your figures should {\bfseries also} include a description suitable for screen readers, to assist the visually-challenged to better understand your work. Figure captions are placed {\itshape below} the figure. \subsection{The ``Teaser Figure''} A ``teaser figure'' is an image, or set of images in one figure, that are placed after all author and affiliation information, and before the body of the article, spanning the page. If you wish to have such a figure in your article, place the command immediately before the \verb|\maketitle| command: \begin{verbatim} \begin{teaserfigure} \includegraphics[width=\textwidth]{sampleteaser} \caption{figure caption} \Description{figure description} \end{teaserfigure} \end{verbatim} \section{Citations and Bibliographies} The use of {\rm B\kern-.05em{\sc i\kern-.025em b}\kern-.08emT\kern-.1667em\lower.7ex\hbox{E}\kern-.125emX}\ for the preparation and formatting of one's references is strongly recommended. Authors' names should be complete --- use full first names (``Donald E. Knuth'') not initials (``D. E. Knuth'') --- and the salient identifying features of a reference should be included: title, year, volume, number, pages, article DOI, etc. 
The bibliography is included in your source document with these two commands, placed just before the \verb|\end{document}| command: \begin{verbatim} \bibliographystyle{ACM-Reference-Format} \section{Introduction} ACM's consolidated article template, introduced in 2017, provides a consistent \LaTeX\ style for use across ACM publications, and incorporates accessibility and metadata-extraction functionality necessary for future Digital Library endeavors. Numerous ACM and SIG-specific \LaTeX\ templates have been examined, and their unique features incorporated into this single new template. If you are new to publishing with ACM, this document is a valuable guide to the process of preparing your work for publication. If you have published with ACM before, this document provides insight and instruction into more recent changes to the article template. The ``\verb|acmart|'' document class can be used to prepare articles for any ACM publication --- conference or journal, and for any stage of publication, from review to final ``camera-ready'' copy, to the author's own version, with {\itshape very} few changes to the source. \section{Template Overview} As noted in the introduction, the ``\verb|acmart|'' document class can be used to prepare many different kinds of documentation --- a double-blind initial submission of a full-length technical paper, a two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready'' journal article, a SIGCHI Extended Abstract, and more --- all by selecting the appropriate {\itshape template style} and {\itshape template parameters}. This document will explain the major features of the document class. For further information, the {\itshape \LaTeX\ User's Guide} is available from \url{https://www.acm.org/publications/proceedings-template}. \subsection{Template Styles} The primary parameter given to the ``\verb|acmart|'' document class is the {\itshape template style} which corresponds to the kind of publication or SIG publishing the work. 
This parameter is enclosed in square brackets and is a part of the {\verb|documentclass|} command: \begin{verbatim} \documentclass[STYLE]{acmart} \end{verbatim} Journals use one of three template styles. All but three ACM journals use the {\verb|acmsmall|} template style: \begin{itemize} \item {\verb|acmsmall|}: The default journal template style. \item {\verb|acmlarge|}: Used by JOCCH and TAP. \item {\verb|acmtog|}: Used by TOG. \end{itemize} The majority of conference proceedings documentation will use the {\verb|acmconf|} template style. \begin{itemize} \item {\verb|acmconf|}: The default proceedings template style. \item{\verb|sigchi|}: Used for SIGCHI conference articles. \item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles. \item{\verb|sigplan|}: Used for SIGPLAN conference articles. \end{itemize} \subsection{Template Parameters} In addition to specifying the {\itshape template style} to be used in formatting your work, there are a number of {\itshape template parameters} which modify some part of the applied template style. A complete list of these parameters can be found in the {\itshape \LaTeX\ User's Guide.} Frequently-used parameters, or combinations of parameters, include: \begin{itemize} \item {\verb|anonymous,review|}: Suitable for a ``double-blind'' conference submission. Anonymizes the work and includes line numbers. Use with the \verb|\acmSubmissionID| command to print the submission's unique ID on each page of the work. \item{\verb|authorversion|}: Produces a version of the work suitable for posting by the author. \item{\verb|screen|}: Produces colored hyperlinks. 
\end{itemize} This document uses the following string as the first command in the source file: \begin{verbatim} \documentclass[acmsmall]{acmart} \end{verbatim} \section{Modifications} Modifying the template --- including but not limited to: adjusting margins, typeface sizes, line spacing, paragraph and list definitions, and the use of the \verb|\vspace| command to manually adjust the vertical spacing between elements of your work --- is not allowed. {\bfseries Your document will be returned to you for revision if modifications are discovered.} \section{Typefaces} The ``\verb|acmart|'' document class requires the use of the ``Libertine'' typeface family. Your \TeX\ installation should include this set of packages. Please do not substitute other typefaces. The ``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used, as they will override the built-in typeface families. \section{Title Information} The title of your work should use capital letters appropriately - \url{https://capitalizemytitle.com/} has useful rules for capitalization. Use the {\verb|title|} command to define the title of your work. If your work has a subtitle, define it with the {\verb|subtitle|} command. Do not insert line breaks in your title. If your title is lengthy, you must define a short version to be used in the page headers, to prevent overlapping text. The \verb|title| command has a ``short title'' parameter: \begin{verbatim} \title[short title]{full title} \end{verbatim} \section{Authors and Affiliations} Each author must be defined separately for accurate metadata identification. Multiple authors may share one affiliation. Authors' names should not be abbreviated; use full first names wherever possible. Include authors' e-mail addresses whenever possible. 
Grouping authors' names or e-mail addresses, or providing an ``e-mail alias,'' as shown below, is not acceptable: \begin{verbatim} \author{Brooke Aster, David Mehldau} \email{dave,judy,steve@university.edu} \email{firstname.lastname@phillips.org} \end{verbatim} The \verb|authornote| and \verb|authornotemark| commands allow a note to apply to multiple authors --- for example, if the first two authors of an article contributed equally to the work. If your author list is lengthy, you must define a shortened version of the list of authors to be used in the page headers, to prevent overlapping text. The following command should be placed just after the last \verb|\author{}| definition: \begin{verbatim} \renewcommand{\shortauthors}{McCartney, et al.} \end{verbatim} Omitting this command will force the use of a concatenated list of all of the authors' names, which may result in overlapping text in the page headers. The article template's documentation, available at \url{https://www.acm.org/publications/proceedings-template}, has a complete explanation of these commands and tips for their effective use. Note that authors' addresses are mandatory for journal articles. \section{Rights Information} Authors of any work published by ACM will need to complete a rights form. Depending on the kind of work, and the rights management choice made by the author, this may be copyright transfer, permission, license, or an OA (open access) agreement. Regardless of the rights management choice, the author will receive a copy of the completed rights form once it has been submitted. This form contains \LaTeX\ commands that must be copied into the source document. When the document source is compiled, these commands and their parameters add formatted text to several areas of the final document: \begin{itemize} \item the ``ACM Reference Format'' text on the first page. \item the ``rights management'' text on the first page. \item the conference information in the page header(s). 
\end{itemize} Rights information is unique to the work; if you are preparing several works for an event, make sure to use the correct set of commands with each of the works. The ACM Reference Format text is required for all articles over one page in length, and is optional for one-page articles (abstracts). \section{CCS Concepts and User-Defined Keywords} Two elements of the ``\verb|acmart|'' document class provide powerful taxonomic tools for you to help readers find your work in an online search. The ACM Computing Classification System --- \url{https://www.acm.org/publications/class-2012} --- is a set of classifiers and concepts that describe the computing discipline. Authors can select entries from this classification system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the commands to be included in the \LaTeX\ source. User-defined keywords are a comma-separated list of words and phrases of the authors' choosing, providing a more flexible way of describing the research being presented. CCS concepts and user-defined keywords are required for all articles over two pages in length, and are optional for one- and two-page articles (or abstracts). \section{Sectioning Commands} Your work should use standard \LaTeX\ sectioning commands: \verb|section|, \verb|subsection|, \verb|subsubsection|, and \verb|paragraph|. They should be numbered; do not remove the numbering from the commands. Simulating a sectioning command by setting the first word or words of a paragraph in boldface or italicized text is {\bfseries not allowed.} \section{Tables} The ``\verb|acmart|'' document class includes the ``\verb|booktabs|'' package --- \url{https://ctan.org/pkg/booktabs} --- for preparing high-quality tables. Table captions are placed {\itshape above} the table. Because tables cannot be split across pages, the best placement for them is typically the top of the page nearest their initial cite. 
To ensure this proper ``floating'' placement of tables, use the environment \textbf{table} to enclose the table's contents and the table caption. The contents of the table itself must go in the \textbf{tabular} environment, to be aligned properly in rows and columns, with the desired horizontal and vertical rules. Again, detailed instructions on \textbf{tabular} material are found in the \textit{\LaTeX\ User's Guide}. Immediately following this sentence is the point at which Table~\ref{tab:freq} is included in the input file; compare the placement of the table here with the table in the printed output of this document. \begin{table} \caption{Frequency of Special Characters} \label{tab:freq} \begin{tabular}{ccl} \toprule Non-English or Math&Frequency&Comments\\ \midrule \O & 1 in 1,000& For Swedish names\\ $\pi$ & 1 in 5& Common in math\\ \$ & 4 in 5 & Used in business\\ $\Psi^2_1$ & 1 in 40,000& Unexplained usage\\ \bottomrule \end{tabular} \end{table} To set a wider table, which takes up the whole width of the page's live area, use the environment \textbf{table*} to enclose the table's contents and the table caption. As with a single-column table, this wide table will ``float'' to a location deemed more desirable. Immediately following this sentence is the point at which Table~\ref{tab:commands} is included in the input file; again, it is instructive to compare the placement of the table here with the table in the printed output of this document. \begin{table*} \caption{Some Typical Commands} \label{tab:commands} \begin{tabular}{ccl} \toprule Command &A Number & Comments\\ \midrule \texttt{{\char'134}author} & 100& Author \\ \texttt{{\char'134}table}& 300 & For tables\\ \texttt{{\char'134}table*}& 400& For wider tables\\ \bottomrule \end{tabular} \end{table*} \section{Math Equations} You may want to display math equations in three distinct styles: inline, numbered or non-numbered display. Each of the three is discussed in the next sections. 
\subsection{Inline (In-text) Equations} A formula that appears in the running text is called an inline or in-text formula. It is produced by the \textbf{math} environment, which can be invoked with the usual \texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with the short form \texttt{\$\,\ldots\$}. You can use any of the symbols and structures, from $\alpha$ to $\omega$, available in \LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few examples of in-text equations in context. Notice how this equation: \begin{math} \lim_{n\rightarrow \infty}x=0 \end{math}, set here in in-line math style, looks slightly different when set in display style. (See next section). \subsection{Display Equations} A numbered display equation---one set off by vertical space from the text and centered horizontally---is produced by the \textbf{equation} environment. An unnumbered display equation is produced by the \textbf{displaymath} environment. Again, in either environment, you can use any of the symbols and structures available in \LaTeX\@; this section will just give a couple of examples of display equations in context. First, consider the equation, shown as an inline equation above: \begin{equation} \lim_{n\rightarrow \infty}x=0 \end{equation} Notice how it is formatted somewhat differently in the \textbf{equation} environment. Now, we'll enter an unnumbered equation: \begin{displaymath} \sum_{i=0}^{\infty} x + 1 \end{displaymath} and follow it with another numbered equation: \begin{equation} \sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f \end{equation} just to demonstrate \LaTeX's able handling of numbering. \section{Figures} The ``\verb|figure|'' environment should be used for figures. One or more images can be placed within a figure. If your figure contains third-party material, you must clearly identify it as such, as shown in the example below. 
\begin{figure}[h] \centering \includegraphics[width=\linewidth]{sample-franklin} \caption{1907 Franklin Model D roadster. Photograph by Harris \& Ewing, Inc. [Public domain], via Wikimedia Commons. (\url{https://goo.gl/VLCRBB}).} \Description{The 1907 Franklin Model D roadster.} \end{figure} Your figures should contain a caption which describes the figure to the reader; figure captions are placed {\itshape below} the figure. Your figures should {\bfseries also} include a description suitable for screen readers, to assist the visually-challenged to better understand your work. \subsection{The ``Teaser Figure''} A ``teaser figure'' is an image, or set of images in one figure, that are placed after all author and affiliation information, and before the body of the article, spanning the page. If you wish to have such a figure in your article, place the command immediately before the \verb|\maketitle| command: \begin{verbatim} \begin{teaserfigure} \includegraphics[width=\textwidth]{sampleteaser} \caption{figure caption} \Description{figure description} \end{teaserfigure} \end{verbatim} \section{Citations and Bibliographies} The use of {\rm B\kern-.05em{\sc i\kern-.025em b}\kern-.08emT\kern-.1667em\lower.7ex\hbox{E}\kern-.125emX}\ for the preparation and formatting of one's references is strongly recommended. Authors' names should be complete --- use full first names (``Donald E. Knuth'') not initials (``D. E. Knuth'') --- and the salient identifying features of a reference should be included: title, year, volume, number, pages, article DOI, etc. 
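For instance, a complete entry for a journal article, carrying a full first name and the salient identifying features listed above, might look like the following (a well-known article, shown purely to illustrate the recommended level of detail):
\begin{verbatim}
@article{Knuth:1984,
  author  = {Donald E. Knuth},
  title   = {Literate Programming},
  journal = {The Computer Journal},
  year    = {1984},
  volume  = {27},
  number  = {2},
  pages   = {97--111},
  doi     = {10.1093/comjnl/27.2.97}
}
\end{verbatim}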
The bibliography is included in your source document with these two commands, placed just before the \verb|\end{document}| command: \begin{verbatim} \bibliographystyle{ACM-Reference-Format} \bibliography{bibfile} \end{verbatim} where ``\verb|bibfile|'' is the name, without the ``\verb|.bib|'' suffix, of the BibTeX file. \section{Introduction} ACM's consolidated article template, introduced in 2017, provides a consistent \LaTeX\ style for use across ACM publications, and incorporates accessibility and metadata-extraction functionality necessary for future Digital Library endeavors. Numerous ACM and SIG-specific \LaTeX\ templates have been examined, and their unique features incorporated into this single new template. If you are new to publishing with ACM, this document is a valuable guide to the process of preparing your work for publication. If you have published with ACM before, this document provides insight and instruction into more recent changes to the article template. The ``\verb|acmart|'' document class can be used to prepare articles for any ACM publication --- conference or journal, and for any stage of publication, from review to final ``camera-ready'' copy, to the author's own version, with {\itshape very} few changes to the source. \section{Template Overview} As noted in the introduction, the ``\verb|acmart|'' document class can be used to prepare many different kinds of documentation --- a double-blind initial submission of a full-length technical paper, a two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready'' journal article, a SIGCHI Extended Abstract, and more --- all by selecting the appropriate {\itshape template style} and {\itshape template parameters}. This document will explain the major features of the document class. For further information, the {\itshape \LaTeX\ User's Guide} is available from \url{https://www.acm.org/publications/proceedings-template}. \subsection{Template Styles} The primary parameter given to the ``\verb|acmart|'' document class is the {\itshape template style} which corresponds to the kind of publication or SIG publishing the work. 
This parameter is enclosed in square brackets and is a part of the {\verb|documentclass|} command: \begin{verbatim} \documentclass[STYLE]{acmart} \end{verbatim} Journals use one of three template styles. All but three ACM journals use the {\verb|acmsmall|} template style: \begin{itemize} \item {\verb|acmsmall|}: The default journal template style. \item {\verb|acmlarge|}: Used by JOCCH and TAP. \item {\verb|acmtog|}: Used by TOG. \end{itemize} The majority of conference proceedings documentation will use the {\verb|acmconf|} template style. \begin{itemize} \item {\verb|acmconf|}: The default proceedings template style. \item{\verb|sigchi|}: Used for SIGCHI conference articles. \item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles. \item{\verb|sigplan|}: Used for SIGPLAN conference articles. \end{itemize} \subsection{Template Parameters} In addition to specifying the {\itshape template style} to be used in formatting your work, there are a number of {\itshape template parameters} which modify some part of the applied template style. A complete list of these parameters can be found in the {\itshape \LaTeX\ User's Guide.} Frequently-used parameters, or combinations of parameters, include: \begin{itemize} \item {\verb|anonymous,review|}: Suitable for a ``double-blind'' conference submission. Anonymizes the work and includes line numbers. Use with the \verb|\acmSubmissionID| command to print the submission's unique ID on each page of the work. \item{\verb|authorversion|}: Produces a version of the work suitable for posting by the author. \item{\verb|screen|}: Produces colored hyperlinks. 
\end{itemize} This document uses the following string as the first command in the source file: \begin{verbatim} \documentclass[manuscript,screen]{acmart} \end{verbatim} \section{Modifications} Modifying the template --- including but not limited to: adjusting margins, typeface sizes, line spacing, paragraph and list definitions, and the use of the \verb|\vspace| command to manually adjust the vertical spacing between elements of your work --- is not allowed. {\bfseries Your document will be returned to you for revision if modifications are discovered.} \section{Typefaces} The ``\verb|acmart|'' document class requires the use of the ``Libertine'' typeface family. Your \TeX\ installation should include this set of packages. Please do not substitute other typefaces. The ``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used, as they will override the built-in typeface families. \section{Title Information} The title of your work should use capital letters appropriately - \url{https://capitalizemytitle.com/} has useful rules for capitalization. Use the {\verb|title|} command to define the title of your work. If your work has a subtitle, define it with the {\verb|subtitle|} command. Do not insert line breaks in your title. If your title is lengthy, you must define a short version to be used in the page headers, to prevent overlapping text. The \verb|title| command has a ``short title'' parameter: \begin{verbatim} \title[short title]{full title} \end{verbatim} \section{Authors and Affiliations} Each author must be defined separately for accurate metadata identification. Multiple authors may share one affiliation. Authors' names should not be abbreviated; use full first names wherever possible. Include authors' e-mail addresses whenever possible. 
Grouping authors' names or e-mail addresses, or providing an ``e-mail alias,'' as shown below, is not acceptable: \begin{verbatim} \author{Brooke Aster, David Mehldau} \email{dave,judy,steve@university.edu} \email{firstname.lastname@phillips.org} \end{verbatim} The \verb|authornote| and \verb|authornotemark| commands allow a note to apply to multiple authors --- for example, if the first two authors of an article contributed equally to the work. If your author list is lengthy, you must define a shortened version of the list of authors to be used in the page headers, to prevent overlapping text. The following command should be placed just after the last \verb|\author{}| definition: \begin{verbatim} \renewcommand{\shortauthors}{McCartney, et al.} \end{verbatim} Omitting this command will force the use of a concatenated list of all of the authors' names, which may result in overlapping text in the page headers. The article template's documentation, available at \url{https://www.acm.org/publications/proceedings-template}, has a complete explanation of these commands and tips for their effective use. Note that authors' addresses are mandatory for journal articles. \section{Rights Information} Authors of any work published by ACM will need to complete a rights form. Depending on the kind of work, and the rights management choice made by the author, this may be copyright transfer, permission, license, or an OA (open access) agreement. Regardless of the rights management choice, the author will receive a copy of the completed rights form once it has been submitted. This form contains \LaTeX\ commands that must be copied into the source document. When the document source is compiled, these commands and their parameters add formatted text to several areas of the final document: \begin{itemize} \item the ``ACM Reference Format'' text on the first page. \item the ``rights management'' text on the first page. \item the conference information in the page header(s). 
\end{itemize} Rights information is unique to the work; if you are preparing several works for an event, make sure to use the correct set of commands with each of the works. The ACM Reference Format text is required for all articles over one page in length, and is optional for one-page articles (abstracts). \section{CCS Concepts and User-Defined Keywords} Two elements of the ``acmart'' document class provide powerful taxonomic tools for you to help readers find your work in an online search. The ACM Computing Classification System --- \url{https://www.acm.org/publications/class-2012} --- is a set of classifiers and concepts that describe the computing discipline. Authors can select entries from this classification system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the commands to be included in the \LaTeX\ source. User-defined keywords are a comma-separated list of words and phrases of the authors' choosing, providing a more flexible way of describing the research being presented. CCS concepts and user-defined keywords are required for for all articles over two pages in length, and are optional for one- and two-page articles (or abstracts). \section{Sectioning Commands} Your work should use standard \LaTeX\ sectioning commands: \verb|section|, \verb|subsection|, \verb|subsubsection|, and \verb|paragraph|. They should be numbered; do not remove the numbering from the commands. Simulating a sectioning command by setting the first word or words of a paragraph in boldface or italicized text is {\bfseries not allowed.} \section{Tables} The ``\verb|acmart|'' document class includes the ``\verb|booktabs|'' package --- \url{https://ctan.org/pkg/booktabs} --- for preparing high-quality tables. Table captions are placed {\itshape above} the table. Because tables cannot be split across pages, the best placement for them is typically the top of the page nearest their initial cite. 
To ensure this proper ``floating'' placement of tables, use the environment \textbf{table} to enclose the table's contents and the table caption. The contents of the table itself must go in the \textbf{tabular} environment, to be aligned properly in rows and columns, with the desired horizontal and vertical rules. Again, detailed instructions on \textbf{tabular} material are found in the \textit{\LaTeX\ User's Guide}. Immediately following this sentence is the point at which Table~\ref{tab:freq} is included in the input file; compare the placement of the table here with the table in the printed output of this document. \begin{table} \caption{Frequency of Special Characters} \label{tab:freq} \begin{tabular}{ccl} \toprule Non-English or Math&Frequency&Comments\\ \midrule \O & 1 in 1,000& For Swedish names\\ $\pi$ & 1 in 5& Common in math\\ \$ & 4 in 5 & Used in business\\ $\Psi^2_1$ & 1 in 40,000& Unexplained usage\\ \bottomrule \end{tabular} \end{table} To set a wider table, which takes up the whole width of the page's live area, use the environment \textbf{table*} to enclose the table's contents and the table caption. As with a single-column table, this wide table will ``float'' to a location deemed more desirable. Immediately following this sentence is the point at which Table~\ref{tab:commands} is included in the input file; again, it is instructive to compare the placement of the table here with the table in the printed output of this document. \begin{table*} \caption{Some Typical Commands} \label{tab:commands} \begin{tabular}{ccl} \toprule Command &A Number & Comments\\ \midrule \texttt{{\char'134}author} & 100& Author \\ \texttt{{\char'134}table}& 300 & For tables\\ \texttt{{\char'134}table*}& 400& For wider tables\\ \bottomrule \end{tabular} \end{table*} \section{Math Equations} You may want to display math equations in three distinct styles: inline, numbered or non-numbered display. Each of the three are discussed in the next sections. 
\subsection{Inline (In-text) Equations} A formula that appears in the running text is called an inline or in-text formula. It is produced by the \textbf{math} environment, which can be invoked with the usual \texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with the short form \texttt{\$\,\ldots\$}. You can use any of the symbols and structures, from $\alpha$ to $\omega$, available in \LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few examples of in-text equations in context. Notice how this equation: \begin{math} \lim_{n\rightarrow \infty}x=0 \end{math}, set here in in-line math style, looks slightly different when set in display style. (See next section). \subsection{Display Equations} A numbered display equation---one set off by vertical space from the text and centered horizontally---is produced by the \textbf{equation} environment. An unnumbered display equation is produced by the \textbf{displaymath} environment. Again, in either environment, you can use any of the symbols and structures available in \LaTeX\@; this section will just give a couple of examples of display equations in context. First, consider the equation, shown as an inline equation above: \begin{equation} \lim_{n\rightarrow \infty}x=0 \end{equation} Notice how it is formatted somewhat differently in the \textbf{displaymath} environment. Now, we'll enter an unnumbered equation: \begin{displaymath} \sum_{i=0}^{\infty} x + 1 \end{displaymath} and follow it with another numbered equation: \begin{equation} \sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f \end{equation} just to demonstrate \LaTeX's able handling of numbering. \section{Figures} The ``\verb|figure|'' environment should be used for figures. One or more images can be placed within a figure. If your figure contains third-party material, you must clearly identify it as such, as shown in the example below. 
\begin{figure}[h] \centering \includegraphics[width=\linewidth]{sample-franklin} \caption{1907 Franklin Model D roadster. Photograph by Harris \& Ewing, Inc. [Public domain], via Wikimedia Commons. (\url{https://goo.gl/VLCRBB}).} \Description{The 1907 Franklin Model D roadster.} \end{figure} Your figures should contain a caption which describes the figure to the reader. Figure captions go below the figure. Your figures should {\bfseries also} include a description suitable for screen readers, to assist the visually-challenged to better understand your work. Figure captions are placed {\itshape below} the figure. \subsection{The ``Teaser Figure''} A ``teaser figure'' is an image, or set of images in one figure, that are placed after all author and affiliation information, and before the body of the article, spanning the page. If you wish to have such a figure in your article, place the command immediately before the \verb|\maketitle| command: \begin{verbatim} \begin{teaserfigure} \includegraphics[width=\textwidth]{sampleteaser} \caption{figure caption} \Description{figure description} \end{teaserfigure} \end{verbatim} \section{Citations and Bibliographies} The use of {\rm B\kern-.05em{\sc i\kern-.025em b}\kern-.08emT\kern-.1667em\lower.7ex\hbox{E}\kern-.125emX}\ for the preparation and formatting of one's references is strongly recommended. Authors' names should be complete --- use full first names (``Donald E. Knuth'') not initials (``D. E. Knuth'') --- and the salient identifying features of a reference should be included: title, year, volume, number, pages, article DOI, etc. 
The bibliography is included in your source document with these two commands, placed just before the \verb|\end{document}| command: \begin{verbatim} \bibliographystyle{ACM-Reference-Format} \section{Introduction} ACM's consolidated article template, introduced in 2017, provides a consistent \LaTeX\ style for use across ACM publications, and incorporates accessibility and metadata-extraction functionality necessary for future Digital Library endeavors. Numerous ACM and SIG-specific \LaTeX\ templates have been examined, and their unique features incorporated into this single new template. If you are new to publishing with ACM, this document is a valuable guide to the process of preparing your work for publication. If you have published with ACM before, this document provides insight and instruction into more recent changes to the article template. The ``\verb|acmart|'' document class can be used to prepare articles for any ACM publication --- conference or journal, and for any stage of publication, from review to final ``camera-ready'' copy, to the author's own version, with {\itshape very} few changes to the source. \section{Template Overview} As noted in the introduction, the ``\verb|acmart|'' document class can be used to prepare many different kinds of documentation --- a double-blind initial submission of a full-length technical paper, a two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready'' journal article, a SIGCHI Extended Abstract, and more --- all by selecting the appropriate {\itshape template style} and {\itshape template parameters}. This document will explain the major features of the document class. For further information, the {\itshape \LaTeX\ User's Guide} is available from \url{https://www.acm.org/publications/proceedings-template}. \subsection{Template Styles} The primary parameter given to the ``\verb|acmart|'' document class is the {\itshape template style} which corresponds to the kind of publication or SIG publishing the work. 
This parameter is enclosed in square brackets and is a part of the {\verb|documentclass|} command: \begin{verbatim} \documentclass[STYLE]{acmart} \end{verbatim} Journals use one of three template styles. All but three ACM journals use the {\verb|acmsmall|} template style: \begin{itemize} \item {\verb|acmsmall|}: The default journal template style. \item {\verb|acmlarge|}: Used by JOCCH and TAP. \item {\verb|acmtog|}: Used by TOG. \end{itemize} The majority of conference proceedings documentation will use the {\verb|acmconf|} template style. \begin{itemize} \item {\verb|acmconf|}: The default proceedings template style. \item{\verb|sigchi|}: Used for SIGCHI conference articles. \item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles. \item{\verb|sigplan|}: Used for SIGPLAN conference articles. \end{itemize} \subsection{Template Parameters} In addition to specifying the {\itshape template style} to be used in formatting your work, there are a number of {\itshape template parameters} which modify some part of the applied template style. A complete list of these parameters can be found in the {\itshape \LaTeX\ User's Guide.} Frequently-used parameters, or combinations of parameters, include: \begin{itemize} \item {\verb|anonymous,review|}: Suitable for a ``double-blind'' conference submission. Anonymizes the work and includes line numbers. Use with the \verb|\acmSubmissionID| command to print the submission's unique ID on each page of the work. \item{\verb|authorversion|}: Produces a version of the work suitable for posting by the author. \item{\verb|screen|}: Produces colored hyperlinks. 
\end{itemize} This document uses the following string as the first command in the source file: \begin{verbatim} \documentclass[sigplan,screen]{acmart} \end{verbatim} \section{Modifications} Modifying the template --- including but not limited to: adjusting margins, typeface sizes, line spacing, paragraph and list definitions, and the use of the \verb|\vspace| command to manually adjust the vertical spacing between elements of your work --- is not allowed. {\bfseries Your document will be returned to you for revision if modifications are discovered.} \section{Typefaces} The ``\verb|acmart|'' document class requires the use of the ``Libertine'' typeface family. Your \TeX\ installation should include this set of packages. Please do not substitute other typefaces. The ``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used, as they will override the built-in typeface families. \section{Title Information} The title of your work should use capital letters appropriately - \url{https://capitalizemytitle.com/} has useful rules for capitalization. Use the {\verb|title|} command to define the title of your work. If your work has a subtitle, define it with the {\verb|subtitle|} command. Do not insert line breaks in your title. If your title is lengthy, you must define a short version to be used in the page headers, to prevent overlapping text. The \verb|title| command has a ``short title'' parameter: \begin{verbatim} \title[short title]{full title} \end{verbatim} \section{Authors and Affiliations} Each author must be defined separately for accurate metadata identification. Multiple authors may share one affiliation. Authors' names should not be abbreviated; use full first names wherever possible. Include authors' e-mail addresses whenever possible. 
Grouping authors' names or e-mail addresses, or providing an ``e-mail alias,'' as shown below, is not acceptable: \begin{verbatim} \author{Brooke Aster, David Mehldau} \email{dave,judy,steve@university.edu} \email{firstname.lastname@phillips.org} \end{verbatim} The \verb|authornote| and \verb|authornotemark| commands allow a note to apply to multiple authors --- for example, if the first two authors of an article contributed equally to the work. If your author list is lengthy, you must define a shortened version of the list of authors to be used in the page headers, to prevent overlapping text. The following command should be placed just after the last \verb|\author{}| definition: \begin{verbatim} \renewcommand{\shortauthors}{McCartney, et al.} \end{verbatim} Omitting this command will force the use of a concatenated list of all of the authors' names, which may result in overlapping text in the page headers. The article template's documentation, available at \url{https://www.acm.org/publications/proceedings-template}, has a complete explanation of these commands and tips for their effective use. Note that authors' addresses are mandatory for journal articles. \section{Rights Information} Authors of any work published by ACM will need to complete a rights form. Depending on the kind of work, and the rights management choice made by the author, this may be copyright transfer, permission, license, or an OA (open access) agreement. Regardless of the rights management choice, the author will receive a copy of the completed rights form once it has been submitted. This form contains \LaTeX\ commands that must be copied into the source document. When the document source is compiled, these commands and their parameters add formatted text to several areas of the final document: \begin{itemize} \item the ``ACM Reference Format'' text on the first page. \item the ``rights management'' text on the first page. \item the conference information in the page header(s). 
\end{itemize} Rights information is unique to the work; if you are preparing several works for an event, make sure to use the correct set of commands with each of the works. The ACM Reference Format text is required for all articles over one page in length, and is optional for one-page articles (abstracts). \section{CCS Concepts and User-Defined Keywords} Two elements of the ``acmart'' document class provide powerful taxonomic tools for you to help readers find your work in an online search. The ACM Computing Classification System --- \url{https://www.acm.org/publications/class-2012} --- is a set of classifiers and concepts that describe the computing discipline. Authors can select entries from this classification system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the commands to be included in the \LaTeX\ source. User-defined keywords are a comma-separated list of words and phrases of the authors' choosing, providing a more flexible way of describing the research being presented. CCS concepts and user-defined keywords are required for for all articles over two pages in length, and are optional for one- and two-page articles (or abstracts). \section{Sectioning Commands} Your work should use standard \LaTeX\ sectioning commands: \verb|section|, \verb|subsection|, \verb|subsubsection|, and \verb|paragraph|. They should be numbered; do not remove the numbering from the commands. Simulating a sectioning command by setting the first word or words of a paragraph in boldface or italicized text is {\bfseries not allowed.} \section{Tables} The ``\verb|acmart|'' document class includes the ``\verb|booktabs|'' package --- \url{https://ctan.org/pkg/booktabs} --- for preparing high-quality tables. Table captions are placed {\itshape above} the table. Because tables cannot be split across pages, the best placement for them is typically the top of the page nearest their initial cite. 
To ensure this proper ``floating'' placement of tables, use the environment \textbf{table} to enclose the table's contents and the table caption. The contents of the table itself must go in the \textbf{tabular} environment, to be aligned properly in rows and columns, with the desired horizontal and vertical rules. Again, detailed instructions on \textbf{tabular} material are found in the \textit{\LaTeX\ User's Guide}. Immediately following this sentence is the point at which Table~\ref{tab:freq} is included in the input file; compare the placement of the table here with the table in the printed output of this document. \begin{table} \caption{Frequency of Special Characters} \label{tab:freq} \begin{tabular}{ccl} \toprule Non-English or Math&Frequency&Comments\\ \midrule \O & 1 in 1,000& For Swedish names\\ $\pi$ & 1 in 5& Common in math\\ \$ & 4 in 5 & Used in business\\ $\Psi^2_1$ & 1 in 40,000& Unexplained usage\\ \bottomrule \end{tabular} \end{table} To set a wider table, which takes up the whole width of the page's live area, use the environment \textbf{table*} to enclose the table's contents and the table caption. As with a single-column table, this wide table will ``float'' to a location deemed more desirable. Immediately following this sentence is the point at which Table~\ref{tab:commands} is included in the input file; again, it is instructive to compare the placement of the table here with the table in the printed output of this document. \begin{table*} \caption{Some Typical Commands} \label{tab:commands} \begin{tabular}{ccl} \toprule Command &A Number & Comments\\ \midrule \texttt{{\char'134}author} & 100& Author \\ \texttt{{\char'134}table}& 300 & For tables\\ \texttt{{\char'134}table*}& 400& For wider tables\\ \bottomrule \end{tabular} \end{table*} \section{Math Equations} You may want to display math equations in three distinct styles: inline, numbered or non-numbered display. Each of the three are discussed in the next sections. 
\subsection{Inline (In-text) Equations}

A formula that appears in the running text is called an inline or in-text formula. It is produced by the \textbf{math} environment, which can be invoked with the usual \texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with the short form \texttt{\$\,\ldots\$}. You can use any of the symbols and structures, from $\alpha$ to $\omega$, available in \LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few examples of in-text equations in context. Notice how this equation:
\begin{math}
  \lim_{n\rightarrow \infty}x=0
\end{math},
set here in in-line math style, looks slightly different when set in display style. (See next section).

\subsection{Display Equations}

A numbered display equation---one set off by vertical space from the text and centered horizontally---is produced by the \textbf{equation} environment. An unnumbered display equation is produced by the \textbf{displaymath} environment. Again, in either environment, you can use any of the symbols and structures available in \LaTeX\@; this section will just give a couple of examples of display equations in context. First, consider the equation, shown as an inline equation above:
\begin{equation}
  \lim_{n\rightarrow \infty}x=0
\end{equation}
Notice how it is formatted somewhat differently in the \textbf{equation} environment. Now, we'll enter an unnumbered equation:
\begin{displaymath}
  \sum_{i=0}^{\infty} x + 1
\end{displaymath}
and follow it with another numbered equation:
\begin{equation}
  \sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f
\end{equation}
just to demonstrate \LaTeX's able handling of numbering.

\section{Figures}

The ``\verb|figure|'' environment should be used for figures. One or more images can be placed within a figure. If your figure contains third-party material, you must clearly identify it as such, as shown in the example below.
\begin{figure}[h]
  \centering
  \includegraphics[width=\linewidth]{sample-franklin}
  \caption{1907 Franklin Model D roadster. Photograph by Harris \& Ewing, Inc. [Public domain], via Wikimedia Commons. (\url{https://goo.gl/VLCRBB}).}
  \Description{The 1907 Franklin Model D roadster.}
\end{figure}

Your figures should contain a caption which describes the figure to the reader; figure captions are placed {\itshape below} the figure. Your figures should {\bfseries also} include a description suitable for screen readers, to assist the visually-challenged to better understand your work.

\subsection{The ``Teaser Figure''}

A ``teaser figure'' is an image, or set of images in one figure, that are placed after all author and affiliation information, and before the body of the article, spanning the page. If you wish to have such a figure in your article, place the command immediately before the \verb|\maketitle| command:
\begin{verbatim}
\begin{teaserfigure}
  \includegraphics[width=\textwidth]{sampleteaser}
  \caption{figure caption}
  \Description{figure description}
\end{teaserfigure}
\end{verbatim}

\section{Citations and Bibliographies}

The use of {\rm B\kern-.05em{\sc i\kern-.025em b}\kern-.08emT\kern-.1667em\lower.7ex\hbox{E}\kern-.125emX}\ for the preparation and formatting of one's references is strongly recommended. Authors' names should be complete --- use full first names (``Donald E. Knuth'') not initials (``D. E. Knuth'') --- and the salient identifying features of a reference should be included: title, year, volume, number, pages, article DOI, etc.
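As a sketch of these guidelines, a journal-article entry might look like the one below; the citation key and all bibliographic details here are invented, purely for illustration:
\begin{verbatim}
@article{Anderson:2024:EX,
  author  = {Leslie A. Anderson and Maria C. Rivera},
  title   = {An Example Article Title},
  journal = {ACM Transactions on Examples},
  year    = {2024},
  volume  = {12},
  number  = {3},
  pages   = {1--24},
  doi     = {10.1145/0000000.0000000}
}
\end{verbatim}
Note the full first names in the \texttt{author} field and the inclusion of volume, number, pages, and DOI.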
The bibliography is included in your source document with these two commands, placed just before the \verb|\end{document}| command:
\begin{verbatim}
\bibliographystyle{ACM-Reference-Format}
\bibliography{sample-base}
\end{verbatim}

\section{Introduction}

ACM's consolidated article template, introduced in 2017, provides a consistent \LaTeX\ style for use across ACM publications, and incorporates accessibility and metadata-extraction functionality necessary for future Digital Library endeavors. Numerous ACM and SIG-specific \LaTeX\ templates have been examined, and their unique features incorporated into this single new template. If you are new to publishing with ACM, this document is a valuable guide to the process of preparing your work for publication. If you have published with ACM before, this document provides insight and instruction into more recent changes to the article template. The ``\verb|acmart|'' document class can be used to prepare articles for any ACM publication --- conference or journal, and for any stage of publication, from review to final ``camera-ready'' copy, to the author's own version, with {\itshape very} few changes to the source.

\section{Template Overview}

As noted in the introduction, the ``\verb|acmart|'' document class can be used to prepare many different kinds of documentation --- a double-blind initial submission of a full-length technical paper, a two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready'' journal article, a SIGCHI Extended Abstract, and more --- all by selecting the appropriate {\itshape template style} and {\itshape template parameters}. This document will explain the major features of the document class. For further information, the {\itshape \LaTeX\ User's Guide} is available from \url{https://www.acm.org/publications/proceedings-template}.

\subsection{Template Styles}

The primary parameter given to the ``\verb|acmart|'' document class is the {\itshape template style} which corresponds to the kind of publication or SIG publishing the work.
This parameter is enclosed in square brackets and is a part of the {\verb|documentclass|} command:
\begin{verbatim}
\documentclass[STYLE]{acmart}
\end{verbatim}

Journals use one of three template styles. All but three ACM journals use the {\verb|acmsmall|} template style:
\begin{itemize}
\item {\verb|acmsmall|}: The default journal template style.
\item {\verb|acmlarge|}: Used by JOCCH and TAP.
\item {\verb|acmtog|}: Used by TOG.
\end{itemize}

The majority of conference proceedings documentation will use the {\verb|acmconf|} template style.
\begin{itemize}
\item {\verb|acmconf|}: The default proceedings template style.
\item {\verb|sigchi|}: Used for SIGCHI conference articles.
\item {\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles.
\item {\verb|sigplan|}: Used for SIGPLAN conference articles.
\end{itemize}

\subsection{Template Parameters}

In addition to specifying the {\itshape template style} to be used in formatting your work, there are a number of {\itshape template parameters} which modify some part of the applied template style. A complete list of these parameters can be found in the {\itshape \LaTeX\ User's Guide.} Frequently-used parameters, or combinations of parameters, include:
\begin{itemize}
\item {\verb|anonymous,review|}: Suitable for a ``double-blind'' conference submission. Anonymizes the work and includes line numbers. Use with the \verb|\acmSubmissionID| command to print the submission's unique ID on each page of the work.
\item {\verb|authorversion|}: Produces a version of the work suitable for posting by the author.
\item {\verb|screen|}: Produces colored hyperlinks.
\end{itemize}

This document uses the following string as the first command in the source file:
\begin{verbatim}
\documentclass[sigconf]{acmart}
\end{verbatim}

\section{Modifications}

Modifying the template --- including but not limited to: adjusting margins, typeface sizes, line spacing, paragraph and list definitions, and the use of the \verb|\vspace| command to manually adjust the vertical spacing between elements of your work --- is not allowed. {\bfseries Your document will be returned to you for revision if modifications are discovered.}

\section{Typefaces}

The ``\verb|acmart|'' document class requires the use of the ``Libertine'' typeface family. Your \TeX\ installation should include this set of packages. Please do not substitute other typefaces. The ``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used, as they will override the built-in typeface families.

\section{Title Information}

The title of your work should use capital letters appropriately --- \url{https://capitalizemytitle.com/} has useful rules for capitalization. Use the {\verb|title|} command to define the title of your work. If your work has a subtitle, define it with the {\verb|subtitle|} command. Do not insert line breaks in your title. If your title is lengthy, you must define a short version to be used in the page headers, to prevent overlapping text. The \verb|title| command has a ``short title'' parameter:
\begin{verbatim}
\title[short title]{full title}
\end{verbatim}

\section{Authors and Affiliations}

Each author must be defined separately for accurate metadata identification. Multiple authors may share one affiliation. Authors' names should not be abbreviated; use full first names wherever possible. Include authors' e-mail addresses whenever possible.
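A minimal sketch of the recommended pattern --- each author defined separately, with a shared affiliation repeated for each --- might look like the following; the names, institution, and e-mail addresses are invented for illustration:
\begin{verbatim}
\author{Valerie Anderson}
\email{vanderson@example.edu}
\affiliation{%
  \institution{Example University}
  \city{Springfield}
  \country{USA}}

\author{Aparna Patel}
\email{apatel@example.edu}
\affiliation{%
  \institution{Example University}
  \city{Springfield}
  \country{USA}}
\end{verbatim}
Note that the shared affiliation is written out in full after each \verb|\author| command rather than being grouped, so that each author's metadata is complete.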
Grouping authors' names or e-mail addresses, or providing an ``e-mail alias,'' as shown below, is not acceptable:
\begin{verbatim}
\author{Brooke Aster, David Mehldau}
\email{dave,judy,steve@university.edu}
\email{firstname.lastname@phillips.org}
\end{verbatim}

The \verb|authornote| and \verb|authornotemark| commands allow a note to apply to multiple authors --- for example, if the first two authors of an article contributed equally to the work. If your author list is lengthy, you must define a shortened version of the list of authors to be used in the page headers, to prevent overlapping text. The following command should be placed just after the last \verb|\author{}| definition:
\begin{verbatim}
\renewcommand{\shortauthors}{McCartney, et al.}
\end{verbatim}

Omitting this command will force the use of a concatenated list of all of the authors' names, which may result in overlapping text in the page headers. The article template's documentation, available at \url{https://www.acm.org/publications/proceedings-template}, has a complete explanation of these commands and tips for their effective use. Note that authors' addresses are mandatory for journal articles.

\section{Rights Information}

Authors of any work published by ACM will need to complete a rights form. Depending on the kind of work, and the rights management choice made by the author, this may be copyright transfer, permission, license, or an OA (open access) agreement. Regardless of the rights management choice, the author will receive a copy of the completed rights form once it has been submitted. This form contains \LaTeX\ commands that must be copied into the source document. When the document source is compiled, these commands and their parameters add formatted text to several areas of the final document:
\begin{itemize}
\item the ``ACM Reference Format'' text on the first page.
\item the ``rights management'' text on the first page.
\item the conference information in the page header(s).
\end{itemize} Rights information is unique to the work; if you are preparing several works for an event, make sure to use the correct set of commands with each of the works. The ACM Reference Format text is required for all articles over one page in length, and is optional for one-page articles (abstracts). \section{CCS Concepts and User-Defined Keywords} Two elements of the ``acmart'' document class provide powerful taxonomic tools for you to help readers find your work in an online search. The ACM Computing Classification System --- \url{https://www.acm.org/publications/class-2012} --- is a set of classifiers and concepts that describe the computing discipline. Authors can select entries from this classification system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the commands to be included in the \LaTeX\ source. User-defined keywords are a comma-separated list of words and phrases of the authors' choosing, providing a more flexible way of describing the research being presented. CCS concepts and user-defined keywords are required for for all articles over two pages in length, and are optional for one- and two-page articles (or abstracts). \section{Sectioning Commands} Your work should use standard \LaTeX\ sectioning commands: \verb|section|, \verb|subsection|, \verb|subsubsection|, and \verb|paragraph|. They should be numbered; do not remove the numbering from the commands. Simulating a sectioning command by setting the first word or words of a paragraph in boldface or italicized text is {\bfseries not allowed.} \section{Tables} The ``\verb|acmart|'' document class includes the ``\verb|booktabs|'' package --- \url{https://ctan.org/pkg/booktabs} --- for preparing high-quality tables. Table captions are placed {\itshape above} the table. Because tables cannot be split across pages, the best placement for them is typically the top of the page nearest their initial cite. 
To ensure this proper ``floating'' placement of tables, use the environment \textbf{table} to enclose the table's contents and the table caption. The contents of the table itself must go in the \textbf{tabular} environment, to be aligned properly in rows and columns, with the desired horizontal and vertical rules. Again, detailed instructions on \textbf{tabular} material are found in the \textit{\LaTeX\ User's Guide}. Immediately following this sentence is the point at which Table~\ref{tab:freq} is included in the input file; compare the placement of the table here with the table in the printed output of this document. \begin{table} \caption{Frequency of Special Characters} \label{tab:freq} \begin{tabular}{ccl} \toprule Non-English or Math&Frequency&Comments\\ \midrule \O & 1 in 1,000& For Swedish names\\ $\pi$ & 1 in 5& Common in math\\ \$ & 4 in 5 & Used in business\\ $\Psi^2_1$ & 1 in 40,000& Unexplained usage\\ \bottomrule \end{tabular} \end{table} To set a wider table, which takes up the whole width of the page's live area, use the environment \textbf{table*} to enclose the table's contents and the table caption. As with a single-column table, this wide table will ``float'' to a location deemed more desirable. Immediately following this sentence is the point at which Table~\ref{tab:commands} is included in the input file; again, it is instructive to compare the placement of the table here with the table in the printed output of this document. \begin{table*} \caption{Some Typical Commands} \label{tab:commands} \begin{tabular}{ccl} \toprule Command &A Number & Comments\\ \midrule \texttt{{\char'134}author} & 100& Author \\ \texttt{{\char'134}table}& 300 & For tables\\ \texttt{{\char'134}table*}& 400& For wider tables\\ \bottomrule \end{tabular} \end{table*} \section{Math Equations} You may want to display math equations in three distinct styles: inline, numbered or non-numbered display. Each of the three are discussed in the next sections. 
\subsection{Inline (In-text) Equations} A formula that appears in the running text is called an inline or in-text formula. It is produced by the \textbf{math} environment, which can be invoked with the usual \texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with the short form \texttt{\$\,\ldots\$}. You can use any of the symbols and structures, from $\alpha$ to $\omega$, available in \LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few examples of in-text equations in context. Notice how this equation: \begin{math} \lim_{n\rightarrow \infty}x=0 \end{math}, set here in in-line math style, looks slightly different when set in display style. (See next section). \subsection{Display Equations} A numbered display equation---one set off by vertical space from the text and centered horizontally---is produced by the \textbf{equation} environment. An unnumbered display equation is produced by the \textbf{displaymath} environment. Again, in either environment, you can use any of the symbols and structures available in \LaTeX\@; this section will just give a couple of examples of display equations in context. First, consider the equation, shown as an inline equation above: \begin{equation} \lim_{n\rightarrow \infty}x=0 \end{equation} Notice how it is formatted somewhat differently in the \textbf{displaymath} environment. Now, we'll enter an unnumbered equation: \begin{displaymath} \sum_{i=0}^{\infty} x + 1 \end{displaymath} and follow it with another numbered equation: \begin{equation} \sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f \end{equation} just to demonstrate \LaTeX's able handling of numbering. \section{Figures} The ``\verb|figure|'' environment should be used for figures. One or more images can be placed within a figure. If your figure contains third-party material, you must clearly identify it as such, as shown in the example below. 
\begin{figure}[h] \centering \includegraphics[width=\linewidth]{sample-franklin} \caption{1907 Franklin Model D roadster. Photograph by Harris \& Ewing, Inc. [Public domain], via Wikimedia Commons. (\url{https://goo.gl/VLCRBB}).} \Description{The 1907 Franklin Model D roadster.} \end{figure} Your figures should contain a caption which describes the figure to the reader. Figure captions go below the figure. Your figures should {\bfseries also} include a description suitable for screen readers, to assist the visually-challenged to better understand your work. Figure captions are placed {\itshape below} the figure. \subsection{The ``Teaser Figure''} A ``teaser figure'' is an image, or set of images in one figure, that are placed after all author and affiliation information, and before the body of the article, spanning the page. If you wish to have such a figure in your article, place the command immediately before the \verb|\maketitle| command: \begin{verbatim} \begin{teaserfigure} \includegraphics[width=\textwidth]{sampleteaser} \caption{figure caption} \Description{figure description} \end{teaserfigure} \end{verbatim} \section{Citations and Bibliographies} The use of {\rm B\kern-.05em{\sc i\kern-.025em b}\kern-.08emT\kern-.1667em\lower.7ex\hbox{E}\kern-.125emX}\ for the preparation and formatting of one's references is strongly recommended. Authors' names should be complete --- use full first names (``Donald E. Knuth'') not initials (``D. E. Knuth'') --- and the salient identifying features of a reference should be included: title, year, volume, number, pages, article DOI, etc. 
The bibliography is included in your source document with these two commands, placed just before the \verb|\end{document}| command: \begin{verbatim} \bibliographystyle{ACM-Reference-Format} \section{Introduction} ACM's consolidated article template, introduced in 2017, provides a consistent \LaTeX\ style for use across ACM publications, and incorporates accessibility and metadata-extraction functionality necessary for future Digital Library endeavors. Numerous ACM and SIG-specific \LaTeX\ templates have been examined, and their unique features incorporated into this single new template. If you are new to publishing with ACM, this document is a valuable guide to the process of preparing your work for publication. If you have published with ACM before, this document provides insight and instruction into more recent changes to the article template. The ``\verb|acmart|'' document class can be used to prepare articles for any ACM publication --- conference or journal, and for any stage of publication, from review to final ``camera-ready'' copy, to the author's own version, with {\itshape very} few changes to the source. \section{Template Overview} As noted in the introduction, the ``\verb|acmart|'' document class can be used to prepare many different kinds of documentation --- a double-blind initial submission of a full-length technical paper, a two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready'' journal article, a SIGCHI Extended Abstract, and more --- all by selecting the appropriate {\itshape template style} and {\itshape template parameters}. This document will explain the major features of the document class. For further information, the {\itshape \LaTeX\ User's Guide} is available from \url{https://www.acm.org/publications/proceedings-template}. \subsection{Template Styles} The primary parameter given to the ``\verb|acmart|'' document class is the {\itshape template style} which corresponds to the kind of publication or SIG publishing the work. 
This parameter is enclosed in square brackets and is a part of the {\verb|documentclass|} command: \begin{verbatim} \documentclass[STYLE]{acmart} \end{verbatim} Journals use one of three template styles. All but three ACM journals use the {\verb|acmsmall|} template style: \begin{itemize} \item {\verb|acmsmall|}: The default journal template style. \item {\verb|acmlarge|}: Used by JOCCH and TAP. \item {\verb|acmtog|}: Used by TOG. \end{itemize} The majority of conference proceedings documentation will use the {\verb|acmconf|} template style. \begin{itemize} \item {\verb|acmconf|}: The default proceedings template style. \item{\verb|sigchi|}: Used for SIGCHI conference articles. \item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles. \item{\verb|sigplan|}: Used for SIGPLAN conference articles. \end{itemize} \subsection{Template Parameters} In addition to specifying the {\itshape template style} to be used in formatting your work, there are a number of {\itshape template parameters} which modify some part of the applied template style. A complete list of these parameters can be found in the {\itshape \LaTeX\ User's Guide.} Frequently-used parameters, or combinations of parameters, include: \begin{itemize} \item {\verb|anonymous,review|}: Suitable for a ``double-blind'' conference submission. Anonymizes the work and includes line numbers. Use with the \verb|\acmSubmissionID| command to print the submission's unique ID on each page of the work. \item{\verb|authorversion|}: Produces a version of the work suitable for posting by the author. \item{\verb|screen|}: Produces colored hyperlinks. 
\end{itemize} This document uses the following string as the first command in the source file: \begin{verbatim} \documentclass[acmtog]{acmart} \end{verbatim} \section{Modifications} Modifying the template --- including but not limited to: adjusting margins, typeface sizes, line spacing, paragraph and list definitions, and the use of the \verb|\vspace| command to manually adjust the vertical spacing between elements of your work --- is not allowed. {\bfseries Your document will be returned to you for revision if modifications are discovered.} \section{Typefaces} The ``\verb|acmart|'' document class requires the use of the ``Libertine'' typeface family. Your \TeX\ installation should include this set of packages. Please do not substitute other typefaces. The ``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used, as they will override the built-in typeface families. \section{Title Information} The title of your work should use capital letters appropriately - \url{https://capitalizemytitle.com/} has useful rules for capitalization. Use the {\verb|title|} command to define the title of your work. If your work has a subtitle, define it with the {\verb|subtitle|} command. Do not insert line breaks in your title. If your title is lengthy, you must define a short version to be used in the page headers, to prevent overlapping text. The \verb|title| command has a ``short title'' parameter: \begin{verbatim} \title[short title]{full title} \end{verbatim} \section{Authors and Affiliations} Each author must be defined separately for accurate metadata identification. Multiple authors may share one affiliation. Authors' names should not be abbreviated; use full first names wherever possible. Include authors' e-mail addresses whenever possible. 
Grouping authors' names or e-mail addresses, or providing an ``e-mail alias,'' as shown below, is not acceptable: \begin{verbatim} \author{Brooke Aster, David Mehldau} \email{dave,judy,steve@university.edu} \email{firstname.lastname@phillips.org} \end{verbatim} The \verb|authornote| and \verb|authornotemark| commands allow a note to apply to multiple authors --- for example, if the first two authors of an article contributed equally to the work. If your author list is lengthy, you must define a shortened version of the list of authors to be used in the page headers, to prevent overlapping text. The following command should be placed just after the last \verb|\author{}| definition: \begin{verbatim} \renewcommand{\shortauthors}{McCartney, et al.} \end{verbatim} Omitting this command will force the use of a concatenated list of all of the authors' names, which may result in overlapping text in the page headers. The article template's documentation, available at \url{https://www.acm.org/publications/proceedings-template}, has a complete explanation of these commands and tips for their effective use. Note that authors' addresses are mandatory for journal articles. \section{Rights Information} Authors of any work published by ACM will need to complete a rights form. Depending on the kind of work, and the rights management choice made by the author, this may be copyright transfer, permission, license, or an OA (open access) agreement. Regardless of the rights management choice, the author will receive a copy of the completed rights form once it has been submitted. This form contains \LaTeX\ commands that must be copied into the source document. When the document source is compiled, these commands and their parameters add formatted text to several areas of the final document: \begin{itemize} \item the ``ACM Reference Format'' text on the first page. \item the ``rights management'' text on the first page. \item the conference information in the page header(s). 
\end{itemize} Rights information is unique to the work; if you are preparing several works for an event, make sure to use the correct set of commands with each of the works. The ACM Reference Format text is required for all articles over one page in length, and is optional for one-page articles (abstracts). \section{CCS Concepts and User-Defined Keywords} Two elements of the ``acmart'' document class provide powerful taxonomic tools for you to help readers find your work in an online search. The ACM Computing Classification System --- \url{https://www.acm.org/publications/class-2012} --- is a set of classifiers and concepts that describe the computing discipline. Authors can select entries from this classification system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the commands to be included in the \LaTeX\ source. User-defined keywords are a comma-separated list of words and phrases of the authors' choosing, providing a more flexible way of describing the research being presented. CCS concepts and user-defined keywords are required for for all articles over two pages in length, and are optional for one- and two-page articles (or abstracts). \section{Sectioning Commands} Your work should use standard \LaTeX\ sectioning commands: \verb|section|, \verb|subsection|, \verb|subsubsection|, and \verb|paragraph|. They should be numbered; do not remove the numbering from the commands. Simulating a sectioning command by setting the first word or words of a paragraph in boldface or italicized text is {\bfseries not allowed.} \section{Tables} The ``\verb|acmart|'' document class includes the ``\verb|booktabs|'' package --- \url{https://ctan.org/pkg/booktabs} --- for preparing high-quality tables. Table captions are placed {\itshape above} the table. Because tables cannot be split across pages, the best placement for them is typically the top of the page nearest their initial cite. 
To ensure this proper ``floating'' placement of tables, use the environment \textbf{table} to enclose the table's contents and the table caption. The contents of the table itself must go in the \textbf{tabular} environment, to be aligned properly in rows and columns, with the desired horizontal and vertical rules. Again, detailed instructions on \textbf{tabular} material are found in the \textit{\LaTeX\ User's Guide}. Immediately following this sentence is the point at which Table~\ref{tab:freq} is included in the input file; compare the placement of the table here with the table in the printed output of this document. \begin{table} \caption{Frequency of Special Characters} \label{tab:freq} \begin{tabular}{ccl} \toprule Non-English or Math&Frequency&Comments\\ \midrule \O & 1 in 1,000& For Swedish names\\ $\pi$ & 1 in 5& Common in math\\ \$ & 4 in 5 & Used in business\\ $\Psi^2_1$ & 1 in 40,000& Unexplained usage\\ \bottomrule \end{tabular} \end{table} To set a wider table, which takes up the whole width of the page's live area, use the environment \textbf{table*} to enclose the table's contents and the table caption. As with a single-column table, this wide table will ``float'' to a location deemed more desirable. Immediately following this sentence is the point at which Table~\ref{tab:commands} is included in the input file; again, it is instructive to compare the placement of the table here with the table in the printed output of this document. \begin{table*} \caption{Some Typical Commands} \label{tab:commands} \begin{tabular}{ccl} \toprule Command &A Number & Comments\\ \midrule \texttt{{\char'134}author} & 100& Author \\ \texttt{{\char'134}table}& 300 & For tables\\ \texttt{{\char'134}table*}& 400& For wider tables\\ \bottomrule \end{tabular} \end{table*} \section{Math Equations} You may want to display math equations in three distinct styles: inline, numbered or non-numbered display. Each of the three are discussed in the next sections. 
\subsection{Inline (In-text) Equations} A formula that appears in the running text is called an inline or in-text formula. It is produced by the \textbf{math} environment, which can be invoked with the usual \texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with the short form \texttt{\$\,\ldots\$}. You can use any of the symbols and structures, from $\alpha$ to $\omega$, available in \LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few examples of in-text equations in context. Notice how this equation: \begin{math} \lim_{n\rightarrow \infty}x=0 \end{math}, set here in in-line math style, looks slightly different when set in display style. (See next section). \subsection{Display Equations} A numbered display equation---one set off by vertical space from the text and centered horizontally---is produced by the \textbf{equation} environment. An unnumbered display equation is produced by the \textbf{displaymath} environment. Again, in either environment, you can use any of the symbols and structures available in \LaTeX\@; this section will just give a couple of examples of display equations in context. First, consider the equation, shown as an inline equation above: \begin{equation} \lim_{n\rightarrow \infty}x=0 \end{equation} Notice how it is formatted somewhat differently in the \textbf{displaymath} environment. Now, we'll enter an unnumbered equation: \begin{displaymath} \sum_{i=0}^{\infty} x + 1 \end{displaymath} and follow it with another numbered equation: \begin{equation} \sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f \end{equation} just to demonstrate \LaTeX's able handling of numbering. \section{Figures} The ``\verb|figure|'' environment should be used for figures. One or more images can be placed within a figure. If your figure contains third-party material, you must clearly identify it as such, as shown in the example below. 
\begin{figure}[h] \centering \includegraphics[width=\linewidth]{sample-franklin} \caption{1907 Franklin Model D roadster. Photograph by Harris \& Ewing, Inc. [Public domain], via Wikimedia Commons. (\url{https://goo.gl/VLCRBB}).} \Description{The 1907 Franklin Model D roadster.} \end{figure} Your figures should contain a caption which describes the figure to the reader. Figure captions go below the figure. Your figures should {\bfseries also} include a description suitable for screen readers, to assist the visually-challenged to better understand your work. Figure captions are placed {\itshape below} the figure. \subsection{The ``Teaser Figure''} A ``teaser figure'' is an image, or set of images in one figure, that are placed after all author and affiliation information, and before the body of the article, spanning the page. If you wish to have such a figure in your article, place the command immediately before the \verb|\maketitle| command: \begin{verbatim} \begin{teaserfigure} \includegraphics[width=\textwidth]{sampleteaser} \caption{figure caption} \Description{figure description} \end{teaserfigure} \end{verbatim} \section{Citations and Bibliographies} The use of {\rm B\kern-.05em{\sc i\kern-.025em b}\kern-.08emT\kern-.1667em\lower.7ex\hbox{E}\kern-.125emX}\ for the preparation and formatting of one's references is strongly recommended. Authors' names should be complete --- use full first names (``Donald E. Knuth'') not initials (``D. E. Knuth'') --- and the salient identifying features of a reference should be included: title, year, volume, number, pages, article DOI, etc. 
The bibliography is included in your source document with these two commands, placed just before the \verb|\end{document}| command: \begin{verbatim} \bibliographystyle{ACM-Reference-Format} \bibliography{sample-base} \end{verbatim} \section{Introduction} ACM's consolidated article template, introduced in 2017, provides a consistent \LaTeX\ style for use across ACM publications, and incorporates accessibility and metadata-extraction functionality necessary for future Digital Library endeavors. Numerous ACM and SIG-specific \LaTeX\ templates have been examined, and their unique features incorporated into this single new template. If you are new to publishing with ACM, this document is a valuable guide to the process of preparing your work for publication. If you have published with ACM before, this document provides insight and instruction into more recent changes to the article template. The ``\verb|acmart|'' document class can be used to prepare articles for any ACM publication --- conference or journal, and for any stage of publication, from review to final ``camera-ready'' copy, to the author's own version, with {\itshape very} few changes to the source. \section{Template Overview} As noted in the introduction, the ``\verb|acmart|'' document class can be used to prepare many different kinds of documentation --- a double-blind initial submission of a full-length technical paper, a two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready'' journal article, a SIGCHI Extended Abstract, and more --- all by selecting the appropriate {\itshape template style} and {\itshape template parameters}. This document will explain the major features of the document class. For further information, the {\itshape \LaTeX\ User's Guide} is available from \url{https://www.acm.org/publications/proceedings-template}. \subsection{Template Styles} The primary parameter given to the ``\verb|acmart|'' document class is the {\itshape template style} which corresponds to the kind of publication or SIG publishing the work.
This parameter is enclosed in square brackets and is a part of the {\verb|documentclass|} command: \begin{verbatim} \documentclass[STYLE]{acmart} \end{verbatim} Journals use one of three template styles. All but three ACM journals use the {\verb|acmsmall|} template style: \begin{itemize} \item {\verb|acmsmall|}: The default journal template style. \item {\verb|acmlarge|}: Used by JOCCH and TAP. \item {\verb|acmtog|}: Used by TOG. \end{itemize} The majority of conference proceedings documentation will use the {\verb|acmconf|} template style. \begin{itemize} \item {\verb|acmconf|}: The default proceedings template style. \item{\verb|sigchi|}: Used for SIGCHI conference articles. \item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles. \item{\verb|sigplan|}: Used for SIGPLAN conference articles. \end{itemize} \subsection{Template Parameters} In addition to specifying the {\itshape template style} to be used in formatting your work, there are a number of {\itshape template parameters} which modify some part of the applied template style. A complete list of these parameters can be found in the {\itshape \LaTeX\ User's Guide.} Frequently-used parameters, or combinations of parameters, include: \begin{itemize} \item {\verb|anonymous,review|}: Suitable for a ``double-blind'' conference submission. Anonymizes the work and includes line numbers. Use with the \verb|\acmSubmissionID| command to print the submission's unique ID on each page of the work. \item{\verb|authorversion|}: Produces a version of the work suitable for posting by the author. \item{\verb|screen|}: Produces colored hyperlinks. 
\end{itemize} This document uses the following string as the first command in the source file: \begin{verbatim} \documentclass[sigchi]{acmart} \end{verbatim} \section{Modifications} Modifying the template --- including but not limited to: adjusting margins, typeface sizes, line spacing, paragraph and list definitions, and the use of the \verb|\vspace| command to manually adjust the vertical spacing between elements of your work --- is not allowed. {\bfseries Your document will be returned to you for revision if modifications are discovered.} \section{Typefaces} The ``\verb|acmart|'' document class requires the use of the ``Libertine'' typeface family. Your \TeX\ installation should include this set of packages. Please do not substitute other typefaces. The ``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used, as they will override the built-in typeface families. \section{Title Information} The title of your work should use capital letters appropriately - \url{https://capitalizemytitle.com/} has useful rules for capitalization. Use the {\verb|title|} command to define the title of your work. If your work has a subtitle, define it with the {\verb|subtitle|} command. Do not insert line breaks in your title. If your title is lengthy, you must define a short version to be used in the page headers, to prevent overlapping text. The \verb|title| command has a ``short title'' parameter: \begin{verbatim} \title[short title]{full title} \end{verbatim} \section{Authors and Affiliations} Each author must be defined separately for accurate metadata identification. Multiple authors may share one affiliation. Authors' names should not be abbreviated; use full first names wherever possible. Include authors' e-mail addresses whenever possible. 
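For illustration (the name, institution, and address below are hypothetical), a single author defined separately, with a full affiliation, might look like: \begin{verbatim}
\author{Ada Lovelace}
\affiliation{%
  \institution{University of Examples}
  \city{London}
  \country{United Kingdom}}
\email{ada.lovelace@example.edu}
\end{verbatim}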
Grouping authors' names or e-mail addresses, or providing an ``e-mail alias,'' as shown below, is not acceptable: \begin{verbatim} \author{Brooke Aster, David Mehldau} \email{dave,judy,steve@university.edu} \email{firstname.lastname@phillips.org} \end{verbatim} The \verb|authornote| and \verb|authornotemark| commands allow a note to apply to multiple authors --- for example, if the first two authors of an article contributed equally to the work. If your author list is lengthy, you must define a shortened version of the list of authors to be used in the page headers, to prevent overlapping text. The following command should be placed just after the last \verb|\author{}| definition: \begin{verbatim} \renewcommand{\shortauthors}{McCartney, et al.} \end{verbatim} Omitting this command will force the use of a concatenated list of all of the authors' names, which may result in overlapping text in the page headers. The article template's documentation, available at \url{https://www.acm.org/publications/proceedings-template}, has a complete explanation of these commands and tips for their effective use. Note that authors' addresses are mandatory for journal articles. \section{Rights Information} Authors of any work published by ACM will need to complete a rights form. Depending on the kind of work, and the rights management choice made by the author, this may be copyright transfer, permission, license, or an OA (open access) agreement. Regardless of the rights management choice, the author will receive a copy of the completed rights form once it has been submitted. This form contains \LaTeX\ commands that must be copied into the source document. When the document source is compiled, these commands and their parameters add formatted text to several areas of the final document: \begin{itemize} \item the ``ACM Reference Format'' text on the first page. \item the ``rights management'' text on the first page. \item the conference information in the page header(s). 
\end{itemize} Rights information is unique to the work; if you are preparing several works for an event, make sure to use the correct set of commands with each of the works. The ACM Reference Format text is required for all articles over one page in length, and is optional for one-page articles (abstracts). \section{CCS Concepts and User-Defined Keywords} Two elements of the ``acmart'' document class provide powerful taxonomic tools for you to help readers find your work in an online search. The ACM Computing Classification System --- \url{https://www.acm.org/publications/class-2012} --- is a set of classifiers and concepts that describe the computing discipline. Authors can select entries from this classification system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the commands to be included in the \LaTeX\ source. User-defined keywords are a comma-separated list of words and phrases of the authors' choosing, providing a more flexible way of describing the research being presented. CCS concepts and user-defined keywords are required for all articles over two pages in length, and are optional for one- and two-page articles (or abstracts). \section{Sectioning Commands} Your work should use standard \LaTeX\ sectioning commands: \verb|section|, \verb|subsection|, \verb|subsubsection|, and \verb|paragraph|. They should be numbered; do not remove the numbering from the commands. Simulating a sectioning command by setting the first word or words of a paragraph in boldface or italicized text is {\bfseries not allowed.} \section{Tables} The ``\verb|acmart|'' document class includes the ``\verb|booktabs|'' package --- \url{https://ctan.org/pkg/booktabs} --- for preparing high-quality tables. Table captions are placed {\itshape above} the table. Because tables cannot be split across pages, the best placement for them is typically the top of the page nearest their initial cite.
To ensure this proper ``floating'' placement of tables, use the environment \textbf{table} to enclose the table's contents and the table caption. The contents of the table itself must go in the \textbf{tabular} environment, to be aligned properly in rows and columns, with the desired horizontal and vertical rules. Again, detailed instructions on \textbf{tabular} material are found in the \textit{\LaTeX\ User's Guide}. Immediately following this sentence is the point at which Table~\ref{tab:freq} is included in the input file; compare the placement of the table here with the table in the printed output of this document. \begin{table} \caption{Frequency of Special Characters} \label{tab:freq} \begin{tabular}{ccl} \toprule Non-English or Math&Frequency&Comments\\ \midrule \O & 1 in 1,000& For Swedish names\\ $\pi$ & 1 in 5& Common in math\\ \$ & 4 in 5 & Used in business\\ $\Psi^2_1$ & 1 in 40,000& Unexplained usage\\ \bottomrule \end{tabular} \end{table} To set a wider table, which takes up the whole width of the page's live area, use the environment \textbf{table*} to enclose the table's contents and the table caption. As with a single-column table, this wide table will ``float'' to a location deemed more desirable. Immediately following this sentence is the point at which Table~\ref{tab:commands} is included in the input file; again, it is instructive to compare the placement of the table here with the table in the printed output of this document. \begin{table*} \caption{Some Typical Commands} \label{tab:commands} \begin{tabular}{ccl} \toprule Command &A Number & Comments\\ \midrule \texttt{{\char'134}author} & 100& Author \\ \texttt{{\char'134}table}& 300 & For tables\\ \texttt{{\char'134}table*}& 400& For wider tables\\ \bottomrule \end{tabular} \end{table*} \section{Math Equations} You may want to display math equations in three distinct styles: inline, numbered or non-numbered display. Each of the three is discussed in the next sections.
\section{Introduction} \label{sec:1} This paper examines if individuals can successfully discover strategies with high payoffs in social networks.
In seminal work,~\citet{simon1972theories} introduced the idea of bounded rationality---that human beings use limited resources to make decisions. In more recent work, Gigerenzer et al.~\cite{gigerenzer1996reasoning, gigerenzer2011heuristic} argued that human beings use heuristics to make decisions whose quality matches that of rational agents. Online social networks typically have an explicit mechanism that allocates rewards (usually points) that vary with users' behaviors; for example, the right answer on a community question-answer website like StackOverflow (\url{https://www.stackoverflow.com}) may earn the individual who posted the answer reputation points. In other networks, the mechanism is more opaque. For example, on Twitter, including a celebrity's Twitter handle in your tweet causes the tweet to appear on their timeline, increasing your visibility. Over time, this improved visibility may bring more followers, and the increased following may in turn make one an influencer, to whom advertisers reach out to market their products. As a different example, consider an assistant professor deciding where to publish her current work. Should she submit the paper to a high-prestige conference with a lower probability of acceptance, or to a lower-tier conference with a higher one? The former strategy yields greater visibility but risks rejection; if the conference rejects her paper, she may need to wait out a year. Some individuals on websites with explicit mechanisms have many reputation points; others, in networks such as Twitter, are influencers---did these individuals employ strategic behaviors to gain points or to become influential? At first glance, it might appear that individuals who do well ``have gamed the system,'' as it were, and the rest have not figured out the mechanism.
However, for games with opaque mechanisms, the strategy space is unclear, and even for games with explicit mechanisms (e.g., StackOverflow), the payoffs for a particular action are still unknown. It is not straightforward to conclude that the winners of these social networks (e.g., influencers on Twitter) are rational in the classic sense (i.e., maximize expected utility), for several reasons: the combinatorial strategy space (e.g., on StackOverflow, which question to answer, when, answer length, readability, etc.); the inability to determine the payoffs (e.g., reputation points) for any given strategy; and the fact that individuals do not have unlimited resources to determine their best response. Instead, the best we can conclude is that the best players are \textit{differentially rational}---that is, when compared to their peers, they have a better understanding of the correlation between a strategy and its payoff. The theoretical Computer Science community has paid attention to games of incomplete information~\cite{hartline2015no,lykouris2016learning,feng2018learning} with dynamic populations (as in behavior in online auctions). For example, a key result from~\citet{lykouris2016learning} is that when agents play repeated games with strategies that guarantee low adaptive regret, high social welfare is ensured. One of the challenges with theoretical work is that it is unclear whether, in practice, individuals can find successful strategies. Most existing social modeling tasks focus on discovering people's interests from textual content on social media \cite{zhu2017next, bhattacharya2014inferring, li2008tag, qiu2013not} or on tracing the propagation of social influence along social networks \cite{papagelis2011individual, barbieri2013topic, tang2009social}.
A few do consider the latent strategies adopted by people in social networks; however, they either ignore contextual information \cite{dong2014inferring}, or do not further examine the impact of those strategies \cite{xu2012modeling}. Thus we are motivated to ask a simple question: \begin{quote} Can individuals with limited resources discover content production strategies with high payoffs in social networks? \end{quote} To operationalize our question, we analyze the preference order over strategies. In particular, we ask two questions: first, does the preference order among strategies for authors stabilize over time, indicating the emergence of strategic behavior? Second, if the preference order is stable, does the preference order maximize utility? Notice that preference order stability does not imply high payoffs; the stability may arise due to other factors such as social norms. We wish to answer this question through an analysis of empirical data from a social network. An empirical analysis is non-trivial: while we may observe a particular outcome (e.g., which paper to cite; which celebrity's handle to mention; the topic of the message that we post on social media), as well as be able to compute the reward, \textit{we do not observe the strategic considerations} underlying the action. \noindent\textbf{Our technical insight:} to model the observed behavior as a generative process. That is, a strategic decision changes the posterior distribution over the action space. We assume that while the set of strategies is common to all, each individual adopts a mixed-strategy over the set of different strategies. In other words, the distribution over the set of strategies is private to each individual. To model individual behavior, we propose conceptualizing content production as a bipartite graph where the nodes include individuals and contents, and where content may have multiple authors. 
Thus, the strategy to produce a piece of content (e.g., author a paper in an academic social network, post blogs online) depends on the strategy distributions of its authors; and the co-authors may influence the strategy distribution of an author. We identify an elegant dual attention neural architecture motivated by~\citet{velivckovic2017graph} to model individual behavior. Then, we compare our results with a counterfactual condition: the inferred strategic behavior of an idealized expected-utility maximizer. We summarize our contributions as follows: \begin{description} \item[Coupling authors and content:] We propose a novel Dynamic Dual Attention Network (DDAN) to jointly model the role of the authors in the determination of content production strategy, and how co-authoring content influences authors' content production strategy. The DDAN helps discover the author's strategy. In contrast, past works either focus on theoretical concepts (e.g., ~\cite{hartline2015no,lykouris2016learning}) or do not attempt to identify strategic behaviors from data. In the dynamic dual attention mechanism, the content strategy depends on the strategies of all of its authors. Conversely, the strategy of an author depends on her prior production strategy as well as the strategies of all the content that she played a role in producing at the current moment. Extensive experiments show that our framework models user behaviors well. \item[Strong experimental findings:] We have strong qualitative findings. First, we show that different strategies result in different payoffs. Second, we show through rank correlation, that the authors with the top $10\%$ normalized utility exhibit stability in their preference orders. Furthermore, a majority of authors do not discover the correlation between strategies and payoffs. Third, we show that the stability of preference is correlated with high payoffs. 
\end{description} \noindent\textbf{The significance of this work:} to the best of our knowledge, this is the first attempt to formally identify strategic behaviors from empirical data. We organize the rest of the paper as follows. In the next section, we introduce the problem. In~\Cref{sec:3}, we show how to model strategic behavior, including identifying our assumptions, and introducing our Dynamic Dual Attention Network formalism. In~\Cref{sec:4}, we propose a model for rational behavior, to serve as an idealized baseline. Then, in~\Cref{sec:5}, we present experiments on an academic dataset, including specification of the strategic space.~\Cref{sec:6} highlights the qualitative findings. Then, we discuss challenges and limitations in~\Cref{sec:discussion}, followed by a discussion of related work in~\Cref{sec:7}. We conclude in~\Cref{sec:8}. \section{Problem Formulation} \label{sec:2} We first present an informal description of the problem before presenting it in detail. \subsection{Informal Problem Description} \label{sub:2.1} Consider a general scenario, where a set of individuals $\mathcal{A}$ work together to author a piece of content $c$. This content could be a blog post, an academic paper, or an answer that a group produces to a question on a community question-answer forum such as StackExchange. We can associate attributes with the created content $c$, including venue of publication; time of publication; content topic; and links to other entities, including contents (e.g., citations to other academic papers, links to other blog posts) and authors (e.g., tweets can include mentions of other individuals on Twitter). We can associate a time-varying utility $\mu_c(k)$ with each content $c$ published at time $t$, where we evaluate the utility after $k$ time units (i.e., at time $t+k$): academic papers receive citations; blog posts receive in-links; answers on a community question-answer forum receive up- or down-votes (the forum will transform the votes into reputation points).
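As a toy illustration of this time-varying utility (the helper function and all numbers below are hypothetical, not drawn from any dataset), $\mu_c(k)$ can be computed by counting the reward events a content accrues within $k$ time units of its publication:

```python
def content_utility(publish_time, event_times, k):
    """mu_c(k): the number of reward events (citations, in-links,
    up-votes) accrued by a content within k time units of its
    publication at publish_time."""
    return sum(publish_time < t <= publish_time + k for t in event_times)

# A hypothetical paper published in 2010, cited in 2011, 2012, and 2016:
citations = [2011, 2012, 2016]
mu_2 = content_utility(2010, citations, 2)    # utility after k=2 years
mu_10 = content_utility(2010, citations, 10)  # utility after k=10 years
```

The same content has different utilities at different horizons $k$, which is one reason the payoff of a content-production strategy is distant and hard to observe.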
In general, the relationship between a particular choice of attributes for the content (e.g., topic) and its utility is non-trivial to assess. Not only is the relationship non-deterministic, but the payoffs are also distant. Authors have to make decisions about content attributes, including, for example, the topic, the publication venue, and links to other entities. We identify four challenges. First, the attribute cardinality is large. For example, for an author to identify a paper to cite, she needs to consider \textit{the entire set of past published papers}. Instead, she may have \textit{private} strategies, such as picking papers that are highly cited, to bias her attribute selection. Second, the mapping between attribute choice and utility is non-deterministic, with a distant payoff. Third, while she can observe the choice of attributes (e.g., content topic) made by her peers, and their payoffs (e.g., citations), \textit{she cannot observe the strategic consideration behind that choice}. Finally, authors are resource-limited (e.g., limited time, attention), limiting their ability to determine the covariation between their choices and payoffs. Thus, we can ask: \begin{quote} Are resource-limited individuals in social networks able to discover content-production strategies that yield high payoffs? \end{quote} \subsection{Data Model} \label{sec:2.2} Now, we develop the data model for the problem. Let $\mathcal{A}$ denote the set of individuals who produce content on a social network, and let $\mathcal{C}$ denote the set of content. Authors may either collaborate or work alone to author content $c \in \mathcal{C}$.
Thus, we can construct an undirected bipartite graph $G_{a,c} = (\mathcal{V}, \mathcal{E})$, where $\mathcal{V} = \mathcal{A} \cup \mathcal{C}$, $\mathcal{E} = \{(a,c) \: | \: a \in \mathcal{A}, \: c \in \mathcal{C}, \: a \text{ is an author of } c \}$, to compactly represent content production~(\Cref{fig:section2} (a)).~\Cref{fig:section2} (b) illustrates that each content $c \in \mathcal{C}$ may have multiple attributes. The author picks the attribute values strategically. Let every author use the \textit{same} strategy space $\mathcal{S}$, where $|\mathcal{S}|=m$. However, each author randomizes over the $m$ strategies independently. That is, for each author $a$, we associate a probability distribution $\vec{D}_a$ over the $m$ strategies, \textit{private to each individual}, from which she draws her strategy to determine the attributes for $c$. For example, if an author wishes to determine which papers to cite, her strategies could include picking papers uniformly at random from past papers, as well as picking papers based on citation count. More formally, each choice of strategy $S \in \mathcal{S}$ affects the posterior distribution of attribute values. When a group of authors collaborate, we assume that they negotiate and develop a consensus strategy. We use $\vec{D}_a(t)$ to represent $a$'s strategy distribution at time $t$. We assume that the author's past strategy $\vec{D}_a(t-1)$ and her co-authors' strategy at time $t$ influence $\vec{D}_a(t)$. Assume that a content $c$ authored by $a$ at time $t$ receives a utility $\mu_c(k)$ after $k$ time units. In general, at time $t$, an author $a$ creates a set of content $\mathcal{C}_a(t)$. Some content $c \in \mathcal{C}_a(t)$ may have co-authors. 
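A minimal sketch of this data model (the strategy names, author and content names, and the random initialization of the private distributions are illustrative assumptions):

```python
import random

# Shared strategy space S, common to all authors (m = 2 here).
strategies = ["uniform_random", "citation_count"]

# Bipartite author-content graph: an edge (a, c) iff a authored c.
edges = [("alice", "paper1"), ("bob", "paper1"), ("alice", "paper2")]
authors = {a for a, _ in edges}
contents = {c for _, c in edges}

def random_distribution(m, rng):
    """A draw from a flat Dirichlet: normalized i.i.d. exponentials."""
    w = [rng.expovariate(1.0) for _ in range(m)]
    total = sum(w)
    return [x / total for x in w]

# Each author's private mixed strategy D_a over the m strategies.
rng = random.Random(0)
D = {a: random_distribution(len(strategies), rng) for a in authors}

def draw_strategy(a, rng):
    """An author samples from her private D_a to set a content's attributes."""
    return rng.choices(strategies, weights=D[a])[0]
```

The distributions $\vec{D}_a$ are private; only the drawn attribute choices (and, later, the payoffs) are observable.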
Thus the total utility for author $a$ for authoring the contents $\mathcal{C}_a(t)$ at time $t$, evaluated $k$ time units after publication, is: \begin{equation} \mu_a(t, k) = \sum_{c \in \mathcal{C}_a(t)} \mu_c(k) \cdot r( a \mid c), \label{eq:author_contribution} \end{equation} where $r( a \mid c) \in [0,1]$ is the attribution of author $a$'s role in creating content $c$. Notice that for each $c \in \mathcal{C}_a(t)$, the co-author group may be different. Each author $a$ has a \textit{private} preference between a pair of strategies $S_i, S_j$. We denote $S_i \succ_a S_j$ (or $S_i$ dominates $S_j$) if the probability of picking $S_i$ is greater than that of picking $S_j$, and $S_i \sim_a S_j$ if $a$ is indifferent between the two. We ask two questions: \begin{quote} First, does the preference order among strategies for authors stabilize over time, indicating the emergence of strategic behaviors? Second, if the preference order is stable, does the preference order maximize utility? \end{quote} Notice that the emergence of stable preferences by itself does not imply that the author is maximizing utility, since social norms may cause preference stability. We will compare the strategic behaviors of social network participants against myopic rational agents that maximize expected utility.
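The attributed utility above and the induced preference order can be sketched as follows (the contents, attribution values, and strategy probabilities are hypothetical):

```python
def author_utility(mu_c, r, contents_a):
    """mu_a(t, k) = sum over c in C_a(t) of mu_c(k) * r(a|c)."""
    return sum(mu_c[c] * r[c] for c in contents_a)

def preference_order(D_a):
    """Rank strategies by their probability under the author's private
    mixed strategy D_a; S_i precedes S_j iff S_i dominates S_j."""
    return sorted(D_a, key=D_a.get, reverse=True)

# Author a with two contents at time t, evaluated k units later:
mu_c = {"paper1": 10.0, "paper2": 4.0}   # mu_c(k) for each content
r = {"paper1": 0.5, "paper2": 1.0}       # attribution r(a|c)
total = author_utility(mu_c, r, ["paper1", "paper2"])  # 10*0.5 + 4*1.0 = 9.0

D_a = {"citation_count": 0.7, "uniform_random": 0.3}
order = preference_order(D_a)  # citation_count dominates uniform_random
```

Stability of `order` across time is exactly the first question above; whether that stable order also yields high `total` is the second.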
\begin{table}[t] \centering \begin{tabular}{ll} \toprule Notation & Description \\ \midrule $G(t)$ & Snapshot of the author-content graph at time $t$ \\ $\mathcal{C}_a(t)$ & Set of contents created by $a$ at time $t$ \\ $\widetilde{\mathcal{A}}'(t)$ & Set of authors with over 5 active contents at time $t$ \\ $\Vec{h}_a(t)$ & Embedding vector of $a$ at time $t$ \\ $\Vec{F}_a(t)$ & Field vector of $a$ at time $t$ \\ $\vec{D}_a(t)$ & Strategy distribution of $a$ at time $t$ \\ $r( a \mid c)$ & $a$'s contribution to $c$'s strategy distribution \\ $\mu_a(t,k)$ & Utility received by $a$ with $\vec{D}_a(t)$ over $k$ time units \\ $\hat{\mu}_a(t-k)$ & Normalized total utility received by $a$ with $\vec{D}_a(t-k)$ \\ $\bar{\mu}_g(t,S)$ & The global expected normalized utility for strategy $S$ \\ \bottomrule \end{tabular} \caption{Notation table.} \label{tab:notation_table} \end{table} \section{Modeling Strategic Behavior} \label{sec:3} To identify strategic behavior for content production, we need to address two questions. First, how to determine the strategy distribution $\vec{D}_c$ for content $c$, jointly authored by a set of authors $\mathcal{A}_c$. Notice that each author $a \in \mathcal{A}_c$ has an individual strategy distribution $\vec{D}_a(t)$. Second, we need to determine how the prior strategy distribution $\vec{D}_a(t-1)$ and the strategy distributions of the co-authors of $a$ influence the strategy distribution $\vec{D}_a(t)$. Next, we introduce key modeling assumptions followed by an elegant Dynamic Dual Attention Network (DDAN) to jointly solve both questions. \subsection{Assumptions} \label{sec:3.1} Now we discuss assumptions useful for developing our model. \begin{description} \item[Strategy distributions:] We associate a strategy distribution $\vec{D}_c$ with content $c$ produced at time $t$. $\vec{D}_c$ depends on the individual strategy distributions $\vec{D}_a(t)$ of the set of authors $\mathcal{A}_c$ who jointly produce $c$. 
In other words, the set of authors $\mathcal{A}_c$ draws the strategy $S_i \in \mathcal{S}$ given $\vec{D}_c$ to determine attributes for $c$. Assume that an author $a$ participates in the production of a set of contents $\mathcal{C}_a(t)$. We assume that two factors influence her strategy distribution $\vec{D}_a(t)$: her prior strategy distribution $\vec{D}_a(t-1)$, and the strategy distributions of her co-authors for each $c \in \mathcal{C}_a(t)$. \item[Utility calculation:] Let $\mu_c(k)$ be the utility accumulated by content $c$ after $k$ time units. Since each author $a \in \mathcal{A}_c$ contributes to a different extent to produce $c$, we assume that the utility that flows back to $a$ is in proportion to her contribution. That is, the utility $\mu_{a \mid c}(k) \propto \mu_{c}(k) \cdot r( a \mid c)$, where $\mu_{a \mid c}(k)$ is the utility that flows back to $a$ after $k$ time units in proportion to her contribution $r( a \mid c)$. Notice that $\sum_{a \in \mathcal{A}_c} r( a \mid c)=1$. \item[Vertex representation:] We associate each content $c$ with a node embedding vector $\vec{h}_c \in \mathbb{R}^{F}$ and each author $a$ at time $t$ with a node embedding vector $\vec{h}_a(t) \in \mathbb{R}^{F}$ (e.g., ESim~\cite{shang2016meta}). We obtain a time-dependent embedding vector for an author by treating the same author at different times as separate nodes when embedding the network. \item[Network snapshots:] Since the graph $G = (\mathcal{V},\mathcal{E})$ grows over time, we divide the graph into snapshots. Specifically, we define the vertex set $\mathcal{V}(t)$ and the edge set $\mathcal{E}(t)$ for the graph $G(t)$ to include the authors active at time $t$, the contents created at time $t$, and the links from the content created at time $t$ with their attributes.
If an author appears for the first time in snapshot $t$, we draw the prior strategy distribution $\vec{D}_a(t-1)$ from a flat Dirichlet distribution and use an all-zero vector as the prior embedding $\vec{h}_a(t-1)$. \end{description} \subsection{Dynamic Dual Attention Networks} \label{sec:3.2} We propose a novel Dynamic Dual Attention Network (DDAN), inspired by the work on Graph Attention Networks by \citet{velivckovic2017graph}, to identify the strategy distributions of contents and their authors. The DDAN elegantly addresses the two central dependencies: the strategy for the production of any content depends on the strategies of its authors, and an author's prior strategy as well as her co-authors influence her current strategic behavior. We jointly optimize two attention mechanisms. \subsubsection{Determining the strategy for the production of a single content} \label{sec:3.2.1} The strategy distribution $\vec{D}_c$ of a content $c$ created at time $t$ is affected by the strategy distributions $\vec{D}_a(t)$ of all its authors $a \in \mathcal{A}_c$. To determine the contribution $\alpha_{a|c}$ of a specific author $a$ towards $\vec{D}_c$, we feed the embedding vector of the content $c$ (i.e., $\vec{h}_c$) and of the author $a$ at time $t$ (i.e., $\vec{h}_a(t)$) into a one-layer attention mechanism as follows: \begin{align} \label{eq:con_att_aut} e_{a|c} & = \sigma \left( \vec{\phi}_{c,a}^\top \cdot \big[ \mathbf{W}_{c,a} \vec{h}_c \, || \, \mathbf{W}_{c,a} \vec{h}_a(t) \big] \right), \\ \alpha_{a|c} & = \mbox{softmax}_a(e_{a|c}) = \frac{\exp( e_{a|c} )}{\sum_{a'\in\mathcal{A}_c}\exp(e_{a'|c})}, \end{align} where $\mathbf{W}_{c,a} \in \mathbb{R}^{F'\times F}$ is a shared linear transformation and $\vec{\phi}_{c,a} \in \mathbb{R}^{2F'}$ is the weight vector of a one-layer feedforward neural network. Note that $||$ denotes vector concatenation, and we use \texttt{LeakyReLU} for the nonlinearity $\sigma$.
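As a concrete illustration (not part of the paper's implementation; all shapes and variable names are hypothetical), the attention computation in~\Cref{eq:con_att_aut} can be sketched in numpy as follows:

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    # LeakyReLU nonlinearity sigma
    return np.where(x > 0, x, slope * x)

def attention_scores(h_c, h_authors, W, phi):
    """Unnormalized scores e_{a|c} for every co-author of a content c.

    h_c       : (F,)   content embedding
    h_authors : (m, F) embeddings of the m co-authors at time t
    W         : (F', F) shared linear transformation W_{c,a}
    phi       : (2F',)  attention weight vector phi_{c,a}
    """
    Wh_c = W @ h_c
    scores = []
    for h_a in h_authors:
        concat = np.concatenate([Wh_c, W @ h_a])   # [W h_c || W h_a]
        scores.append(leaky_relu(phi @ concat))
    return np.array(scores)

def softmax(e):
    e = e - e.max()                    # numerical stability
    return np.exp(e) / np.exp(e).sum()

rng = np.random.default_rng(0)
F, Fp, m = 8, 4, 3                     # toy sizes, purely illustrative
e = attention_scores(rng.normal(size=F), rng.normal(size=(m, F)),
                     rng.normal(size=(Fp, F)), rng.normal(size=2 * Fp))
alpha = softmax(e)                     # contributions alpha_{a|c}
```

The \texttt{softmax} step guarantees that the resulting contributions $\alpha_{a|c}$ are positive and sum to one.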
We use \texttt{softmax} normalization to ensure that the contributions of all the co-authors to a particular content sum to $1$. Finally, note that since $\alpha_{a|c}$ is $a$'s contribution to the determination of $\vec{D}_c$, we set $r( a \mid c) = \alpha_{a|c}$. Then the strategy distribution $\vec{D}_c$ of content $c$ is the sum of its authors' strategy distributions $\vec{D}_a(t)$ at time $t$, weighted by each author $a$'s contribution $\alpha_{a|c}$. We use $\xi$ to denote the $\tanh$ nonlinear activation. We use $L_1$ normalization to ensure that $\vec{D}_c$ is a valid strategy distribution: \begin{align} \label{eq:con_dist} \vec{D}_c & = \xi \left(\sum_{a \in \mathcal{A}_c} \alpha_{a|c} \cdot \vec{D}_a(t)\right). \end{align} \subsubsection{Determining an author's strategy} \label{sec:3.2.2} An author's strategy $\vec{D}_a(t)$ depends on the strategy adopted for each content she authors at time $t$ as well as on her past strategy distribution $\vec{D}_a(t-1)$. First, we examine the effect of the strategy for the production of each content $c$ of which she is a co-author ($a \in \mathcal{A}_c$). We apply an attention mechanism to learn content $c$'s contribution $\alpha_{c|a}$ to author $a$'s strategy distribution $\vec{D}_a(t)$ as follows: \begin{align} \label{eq:aut_att_con} e_{c|a} & = \sigma \left( \vec{\phi}_{a,c}^\top \cdot \big[ \mathbf{W}_{a,c} \vec{h}_a(t) \, || \, \mathbf{W}_{a,c} \vec{h}_c \big] \right), \\ \alpha_{c|a} & = \mbox{softmax}_c(e_{c|a}) = \frac{\exp(e_{c|a})}{\sum_{c'\in\mathcal{C}_a(t)}\exp(e_{c'|a})}. \end{align} We use a different attention mechanism to determine the contribution of $a$'s strategy distribution at time $t-1$ to her current strategy distribution: \begin{equation} \label{eq:aut_att_aut} \beta_a(t) = \mbox{sigmoid} \left( \vec{\phi}_{a,a}^\top \cdot \big[ \mathbf{W}_{a,a} \vec{h}_a(t) \, || \, \mathbf{W}_{a,a} \vec{h}_a(t-1) \big] \right).
\end{equation} Thus, $a$'s strategy distribution $\vec{D}_a(t)$ at time $t$ is the weighted sum of the strategy distributions $\vec{D}_c$, $c \in \mathcal{C}_a(t)$, and of $\vec{D}_a(t-1)$: \begin{align} \label{eq:aut_dist} \vec{D}_a(t) & = \xi \left( \beta_a(t)\vec{D}_a(t-1) + (1-\beta_a(t))\sum_{c \in\mathcal{C}_a(t)} \alpha_{c|a} \cdot \vec{D}_c \right). \end{align} We use $L_1$ normalization to ensure that $\vec{D}_a(t)$ is a valid distribution. In this section, we discussed our modeling assumptions and showed how to determine the content production strategy as a function of the author strategies, as well as how the past author strategy $\vec{D}_a(t-1)$ and co-author strategies influence an author's current strategy $\vec{D}_a(t)$. Next, we show how to model a rational agent that maximizes expected utility. \section{A Model for Rational Behavior} \label{sec:4} Rational behavior is a useful baseline to better understand the behaviors that we discover in this paper. An author engaged in rational behavior would be able to evaluate the utilities of all strategies and identify the optimal strategy. However, determining rational behavior is hard for several reasons. First, notice that while the actions (e.g., the papers that an author cites; the content topic) are observable, \textit{the strategies that result in the actions (e.g., pick highly cited papers to cite) are not observable}. This means that any author with access to unlimited resources, who wishes to engage in rational play, will need to develop a model of user behavior that connects strategies to outcomes, fit the model, and then connect strategy distributions $\vec{D}_a$ to payoffs. Unsurprisingly, we could develop several plausible rational models, and below, we discuss one such model that utilizes our DDAN framework. First, we ask: given the utility at time $t$ of content co-authored by $a$ at time $t-k$, what is the utility of author $a$ using the strategy distribution $\vec{D}_a(t-k)$?
We compute a normalized utility as follows: \begin{equation} \hat{\mu}_a(t-k) = \frac{1}{\underbrace{k}_{\text{time}}} \times \frac{\sum_{c \in \mathcal{C}_a(t-k)} \mu_{a|c}(t-k) }{ \underbrace{|\mathcal{C}_a(t-k)|}_{\text{count}} }. \end{equation} The equation says that the normalized utility $\hat{\mu}_a(t-k)$ due to the distribution $\vec{D}_a(t-k)$ depends on the relative utility $\mu_{a|c}(t-k)$ accruing due to participation in the creation of $c \in \mathcal{C}_a(t-k)$. We normalize this sum of utilities by the number of contents co-authored by $a$ at time $t-k$, and further normalize by the time elapsed, $k$, between content production and evaluation. In our model, to simplify the analysis, we allocate the utility $\hat{\mu}_a(t-k)$ to the maximum likelihood strategy in $\vec{D}_a(t-k)$. A rational author $r$ would thus learn the global (i.e., over all authors) expected utility $\bar{\mu}_g(t,S)$ for each strategy $S \in \mathcal{S}$. Using the expected value allows us to average out over unobserved confounds that may co-vary with utility (e.g., author institution; prior author reputation). Next, we present our experimental results. \section{Experiments} \label{sec:5} In this section, we discuss our datasets in~\Cref{sec:5.1}, strategy spaces in~\Cref{sec:5.2}, DDAN training in~\Cref{sec:5.3}, experiment settings in~\Cref{sec:5.4}, competing baselines in~\Cref{sec:5.5}, and evaluation in~\Cref{sec:5.6}. We used XSEDE~\cite{xsede} resources for computation and our implementations are publicly available\footnote{\url{https://github.com/CrowdDynamicsLab/Discovering_Strategic_Behaviors}}. \subsection{Datasets} \label{sec:5.1} We use the latest version of the DBLP academic dataset~\cite{Tang:08KDD, sinha2015overview}. The goal with this dataset is to discover strategic behavior associated with two paper attributes: citations and publication venue.
That is, \textit{what are the strategic considerations behind whom to cite, and where to publish?} We set papers created during the years 1980--1999 as the background papers. We aim to examine the strategies adopted by authors starting from the year 2000 and use $G(t)$ for $t \in \{1,\dots,19\}$ to represent the status of the network corresponding to each year between $2000$ and $2018$ inclusive. Not every remaining paper in the dataset contains both citation and location information. We infer an author's or paper's citation or location strategies only if we can observe the corresponding citation or location edges. This corresponds to 97\% of the papers. \subsection{Strategy Spaces} \label{sec:5.2} Now, we discuss four strategy spaces that alter the distributions of the attributes. As we discussed in~\Cref{sec:2.2}, a bipartite graph $G_{a,c}$ represents the content production, connecting authors to the content that they help co-author. Using the attributes of each paper, one can create additional graphs: an author-author citation graph $G_{a,a}$, a paper-paper citation graph $G_{c,c}$, a paper-location graph $G_{c,u}$, and an author-location graph $G_{a,u}$. Since our DDAN is symmetric with respect to contents and authors, in this section we will discuss strategic considerations for content (i.e., focus on explaining $G_{c,c}$ and $G_{c,u}$); similar arguments hold for authors. Thus, consider a paper $c_1$ that cites $c_2$ and is published at location $u_1$. We need to identify strategic considerations that explain the directed edges $(c_1, c_2)$ and $(c_1, u_1)$. We identify four aspects (see~\Cref{tab:pure_strategies}) based on popularity, similarity of field, familiarity, and time recency. As a reminder, each strategy alters the posterior distribution of the attribute value, and \textit{does not} deterministically set the attribute value. When picking papers to cite, authors may pick highly cited papers from similar fields.
They may also pick papers by authors whom they know (e.g., papers by past co-authors), and, if the paper topic is in a newly emerging area of research, more recent papers. \begin{table}[t] \centering \begin{tabular}{cl} \toprule Aspect & Strategy \\ \midrule \multirow{2}{*}{Popularity} & $s_{1,0}$, preferential attachment \\ & $s_{1,1}$, uniform attachment \\ \multirow{2}{*}{Field} & $s_{2,0}$, preferring similar fields \\ & $s_{2,1}$, preferring distinct fields \\ \multirow{2}{*}{Familiarity} & $s_{3,0}$, preferring familiar nodes \\ & $s_{3,1}$, preferring unfamiliar nodes \\ \multirow{2}{*}{Time} & $s_{4,0}$, preferring small time gaps \\ & $s_{4,1}$, choosing random time gaps \\ \bottomrule \end{tabular} \caption{Meaning of each pure strategy. Each composite citation strategy consists of one pure strategy from each of the four aspects. Each composite location strategy consists of one pure strategy from each of the first three aspects.} \label{tab:pure_strategies} \end{table} \subsubsection{Popularity} \label{sec:5.2.1} We use two strategies to explain the directed edge $(c_1, c_2)$ (and $(c_1, u_1)$) based on popularity. The first is preferential attachment (i.e., strategy $s_{1,0}$), documented by~\citet{barabasi1999emergence}, where the probability of citing a past paper is proportional to its citation count; this is the strategy of picking highly cited papers (or popular publication venues; for example, authors may want to publish in journals with a high impact factor). The second is to pick papers (or locations to publish at) uniformly at random (i.e., strategy $s_{1,1}$). \subsubsection{Field} \label{sec:5.2.2} We use LSA \cite{deerwester1990indexing} to assign each content a 100-dimensional field vector $\vec{F}_c$. Then an author's field vector $\vec{F}_a(t)$ at time $t$ is the average of the field vectors of the contents that she has created by time $t$.
A publication venue's field vector $\vec{F}_u(t)$ at time $t$ is the average of the field vectors of the contents published there by time $t$. We perform $L_2$ normalization on all the field vectors. To support their arguments, authors are likely to cite thematically similar papers (i.e., topic homophily~\cite{McPherson2001,Kossinets2009}), or publish in venues with fields similar to the field of the paper. High-impact papers, on the other hand, often cite papers outside of their field~\cite{Uzzi2013}. Thus, we may explain edge $(c_1, c_2)$ either with homophily (i.e., strategy $s_{2,0}$) or with choosing from different fields (i.e., strategy $s_{2,1}$), if the central theme of $c_1$ spans multiple fields. Thus, we can set the likelihood of edge $(c_1,c_2)$ based on strategy $s_{2,0}$ to be $\ell((c_1, c_2) \mid s_{2,0}) \propto \exp(-\lVert \vec{F}_{c_1} - \vec{F}_{c_2} \rVert)$. The likelihood of choosing a paper $c_2$ from a field distinct from $c_1$'s is just the complement of $\ell((c_1, c_2) \mid s_{2,0})$. We make similar arguments for explaining edge $(c_1, u_1)$. \subsubsection{Familiarity} \label{sec:5.2.3} Some papers may preferentially cite other papers based on authorship; for example, self-citation is a well-known strategy to boost the popular $h$-index~\cite{Glanzel2006, Engqvist2008}. Thus, we can partition the set of papers published before time $t$ into two disjoint sets: one set $\bm{A}$ contains papers, each of which has one of the co-authors of $c_1$ as a co-author. The second set $\bm{B}$ is the complementary set, containing papers whose authors do not include any of the co-authors of paper $c_1$. We can make a parallel argument to partition the set of past publication venues. Thus, in our first familiarity-based strategy $s_{3,0}$, a paper $c_1$ will cite another paper $c_2$ with a high probability if $c_2 \in \bm{A}$ and with a low probability if $c_2 \in \bm{B}$. The converse is true for strategy $s_{3,1}$.
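To make the field-based likelihoods concrete, here is a minimal numpy sketch (function names are ours and purely illustrative) of the homophily strategy $s_{2,0}$ and its complement $s_{2,1}$ on $L_2$-normalized field vectors:

```python
import numpy as np

def l2_normalize(v):
    # L2 normalization, as applied to all field vectors
    return v / np.linalg.norm(v)

def homophily_likelihood(F1, F2):
    """s_{2,0}: likelihood decays with the field-vector distance."""
    return np.exp(-np.linalg.norm(F1 - F2))

def cross_field_likelihood(F1, F2):
    """s_{2,1}: the complement, favouring distinct fields."""
    return 1.0 - homophily_likelihood(F1, F2)

# toy 3-dimensional field vectors (the paper uses 100-dimensional LSA vectors)
F_c1 = l2_normalize(np.array([1.0, 0.2, 0.0]))
F_c2 = l2_normalize(np.array([0.9, 0.3, 0.1]))   # thematically close field
F_c3 = l2_normalize(np.array([0.0, 0.1, 1.0]))   # distant field
sim_close = homophily_likelihood(F_c1, F_c2)     # larger for close fields
sim_far = homophily_likelihood(F_c1, F_c3)       # smaller for distant fields
```

Under $s_{2,0}$ the close pair $(c_1, c_2)$ gets the higher likelihood, while $s_{2,1}$ prefers the distant pair, matching the intended semantics of the two pure strategies.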
\subsubsection{Time} \label{sec:5.2.4} Paper citations also exhibit recency bias~\cite{ghosh2011time}, and thus time is an important factor for explaining edge $(c_1, c_2)$. Since content created at $t$ cannot occur at venues active \textit{prior to} time $t$, we do not include time as a strategic consideration for selecting venue, that is, to explain edge $(c_1, u_1)$. To incorporate recency bias (i.e., strategy $s_{4,0}$), we do the following. Assume that the normalized time difference between the publications $c_1$ and $c_2$ is $0 \leq \delta \leq 1$. Then, to model recency bias, we use a Beta distribution to alter the posterior probability of selecting papers to cite. That is, likelihood $\ell((c_1, c_2) \mid s_{4,0}) \propto B(1-\delta \mid \alpha, \beta)$, where $\alpha, \beta$ are parameters of the Beta distribution. To model recency, we set $\alpha=10, \beta=1$. In the complementary strategy (i.e., strategy $s_{4,1}$), we pick a paper uniformly at random with respect to time of publication. \subsubsection{Composite Strategies} \label{5.2.5} Thus far, we discussed four different strategic considerations to explain edge $(c_1, c_2)$: popularity, field, familiarity and time recency. We identify three strategic considerations to explain edge $(c_1, u_1)$: popularity, field, familiarity. Thus the likelihood of the edge $(c_1, c_2)$ is a composite of each of the four strategies. Since each strategic consideration has two possibilities, we can enumerate $2^4=16$ composite strategies to explain edge $(c_1, c_2)$. Correspondingly, we can enumerate $2^3=8$ composite strategies to explain edge $(c_1, u_1)$. A composite citation strategy consists of a pure strategy under each of the four aspects (Popularity, Field, Familiarity, Time); a composite location strategy consists of a pure strategy under each of the first three aspects (Popularity, Field, Familiarity). For easy reference, we use a binary sequence to represent composite strategies w.r.t. 
pure strategies (e.g., citation strategy $S_4^c=s_{1,0}\times s_{2,0}\times s_{3,1}\times s_{4,0}$, location strategy $S_6^l=s_{1,0}\times s_{2,1}\times s_{3,1}$). Then the likelihood of forming an edge $e=(c_1,c_2)$ given a composite strategy $S_i$ is the product of the likelihoods of forming that edge given each of $S_i$'s constituent pure strategies. In this subsection, we explained the strategic considerations that help explain the formation of edges $(c_1, c_2)$ and $(c_1, u_1)$. We can use the same strategies to help explain the formation of edges $(a_1, a_2)$ (i.e., author $a_1$ cites author $a_2$) and $(a_1, u_1)$ (i.e., author $a_1$ publishes in venue $u_1$). \subsection{DDAN Training \& Optimization} \label{sec:5.3} Now, we discuss how to train and optimize the Dynamic Dual Attention Networks (DDAN). We first initialize $\vec{D}_c$ and $\vec{D}_a(t)$ by drawing from a flat Dirichlet distribution for the contents and authors active at time $t$. Then we train the attention networks to update $\vec{D}_c$ and $\vec{D}_a(t)$ alternately, using the current snapshot $G(t)$ as the ground truth. The overall framework is illustrated in~\Cref{fig:section5}. We need to explain four graphs using DDAN: an author-author citation graph $G_{a,a}$, a paper-paper citation graph $G_{c,c}$, a paper-location graph $G_{c,u}$, and an author-location graph $G_{a,u}$. As a concrete example, consider the graph $G_{c,c}$. Then, at time $t$, we need to minimize the negative log likelihood: \begin{equation} \label{eq:3.5_loss_func} L_{c,c}(t) = \sum_{(c_i,c_j)\in \mathcal{E}_{c,c}(t)} -\log \sum_{S_i \in \mathcal{S}} P(S_i \mid \vec{D}_{c_i}) \cdot \ell((c_i,c_j) \mid S_i), \end{equation} where $P(S_i \mid \vec{D}_{c_i})$ is the probability of picking strategy $S_i$ given the distribution $\vec{D}_{c_i}$ for content $c_i$, and $\ell((c_i,c_j) \mid S_i)$ is the likelihood of edge $(c_i, c_j)$ given strategy $S_i$.
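Under hypothetical per-strategy edge likelihoods, a minimal sketch of the objective in~\Cref{eq:3.5_loss_func} (a toy two-strategy example, not the actual training code) is:

```python
import numpy as np

def edge_nll(D_c, edge_likelihoods):
    """Negative log-likelihood of one edge (c_i, c_j).

    D_c              : (|S|,) strategy distribution of the source content
    edge_likelihoods : (|S|,) likelihood of the edge under each strategy
    """
    return -np.log(np.dot(D_c, edge_likelihoods))

def graph_loss(edges, D, L):
    """Sum of per-edge NLLs, mirroring the loss L_{c,c}(t).

    edges : list of (i, j) index pairs in the snapshot
    D     : dict mapping node index -> strategy distribution
    L     : dict mapping (i, j) -> per-strategy likelihood vector
    """
    return sum(edge_nll(D[i], L[(i, j)]) for (i, j) in edges)

# toy example with |S| = 2 strategies
D = {0: np.array([0.7, 0.3])}              # strategy distribution of c_0
L = {(0, 1): np.array([0.9, 0.1]),         # edge likely under strategy 0
     (0, 2): np.array([0.2, 0.8])}         # edge likely under strategy 1
loss = graph_loss([(0, 1), (0, 2)], D, L)
```

Minimizing this quantity pushes each source node's strategy distribution towards the strategies that best explain its observed edges.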
Thus~\Cref{eq:3.5_loss_func} states that we sum, over all edges $(c_i, c_j)$, the negative log-likelihood of observing edge $(c_i, c_j)$ conditioned on the strategy distribution $\vec{D}_{c_i}$. We can construct similar loss functions to explain graphs $G_{a,a}$, $G_{c,u}$ and $G_{a,u}$. The overall loss is just the sum of the constituent losses. That is, $L(t) = L_{c,c}(t) + L_{c,u}(t) + L_{a,a}(t)+L_{a,u}(t)$. Once the DDAN converges in the current snapshot $G(t)$, we move on to the next snapshot $G(t+1)$ until all the snapshots are covered. \subsection{Experiment Settings} \label{sec:5.4} To properly evaluate the strategy distributions identified by the proposed DDAN framework, we apply them to the task of link prediction. We model the link prediction problem as a recommendation problem which aims to rank node pairs in terms of the posterior probability of forming an edge between them. That is, the identified strategy distributions should best explain the observed network. We identify the set of authors $\widetilde{\mathcal{A}}'(t)$ with over five new contents in the current snapshot $G(t)$ and partition each author's contents for 5-fold cross-validation. For each fold $\mathcal{C}_a(t,j)$ where $j \in \{1,\dots,5\}$, we hide the author-content edges between $a$ and $c \in \mathcal{C}_a(t,j)$ as well as the edges between $a$ and attribute nodes $\mathcal{A}^+_a(t,j)$ and $\mathcal{U}^+_a(t,j)$ formed due to $c \in \mathcal{C}_a(t,j)$. The model is trained using the remaining network and aims to recover the hidden attribute edges. Since exhaustive computation over all node pairs is expensive, we utilize the information of $a$'s co-authors when creating $c \in \mathcal{C}_a(t,j)$. The authors cited by those co-authors and the venues where those co-authors have made publications by time $t$ constitute the negative testing sets $\mathcal{A}^-_a(t,j)$ and $\mathcal{U}^-_a(t,j)$, respectively.
We apply this process on the DBLP dataset with a five-year gap (i.e., we only look at the snapshots corresponding to Years 2000, 2005, 2010, 2015, and 2018). We summarize these statistics in~\Cref{tab:testset_statistics}. We use Mean Average Precision (MAP) as the evaluation metric. \begin{table}[t] \centering \begin{tabular}{@{}crcc@{}} \toprule Snapshot Year & {$|\widetilde{\mathcal{A}}'(t)|$} & {$|\mathcal{A}_a^+(t,j)|$} & {$|\mathcal{U}_a^+(t,j)|$} \\ \midrule 2000 & 3,145 & 31.48 & 1.34 \\ 2005 & 10,316 & 42.34 & 1.47 \\ 2010 & 18,062 & 57.11 & 1.50 \\ 2015 & 25,759 & 82.18 & 1.56 \\ 2018 & 9,192 & 109.64 & 1.41 \\ \bottomrule \end{tabular} \caption{Statistics of the test set. $|\widetilde{\mathcal{A}}'(t)|$ is the size of the set of authors with over five new contents at time $t$. $|\mathcal{A}_a^+(t,j)|$ and $|\mathcal{U}_a^+(t,j)|$ are the average sizes of the positive testing sets per author per fold at time $t$ when we examine the citation and publication strategies, respectively.} \label{tab:testset_statistics} \end{table} \subsection{Baselines} \label{sec:5.5} We want to point out that the problem of identifying authors' strategy distributions can also be modeled by generative topic models. Therefore, we compare our DDAN framework against two different topic models \cite{yin2014dirichlet, wang2006topics} and one traditional regression model \cite{hosmer2013applied}. For the static models \cite{yin2014dirichlet, hosmer2013applied}, we apply the model to each testing snapshot separately. For the dynamic models \cite{wang2006topics} and DDAN, we first obtain the history information from the entire dataset, and then apply the model to the training sets in each testing snapshot. \begin{enumerate} \item \textbf{Logistic Regression (LR)} \cite{hosmer2013applied}: When applying the logistic regression model to each author individually, we treat the likelihood of forming an edge as the predictor and the ground truth as the response variable.
The coefficients in the regression model are constrained to be non-negative and sum to 1 so that they can be interpreted as strategy distributions. \item \textbf{Dirichlet Multinomial Mixture Model (DMM)} \cite{yin2014dirichlet}: Words become strategies and topics over words become distributions over strategies. Authors need to pick one strategy from their strategy distributions to form an edge. We set the number of topics to be the same as the number of strategies so that each topic is initialized with a maximum likelihood strategy. The model is trained on all the authors in each testing snapshot together to detect their strategy distributions. \item \textbf{Topics Over Time (TOT)} \cite{wang2006topics}: In comparison with DMM, TOT requires authors to first choose a topic from their distributions over topics in that snapshot and then pick a strategy from the chosen topic to form an edge. In addition, each topic is associated with a continuous distribution over time snapshots. \item \textbf{DDAN}: The proposed framework, which models individuals' content production strategies under the influence of the social interactions involved in the process.
\end{enumerate} \begin{table}[b] \centering \begin{tabular}{@{}cccccc@{}} \toprule Year & Strategies & LR~\cite{hosmer2013applied} & DMM~\cite{yin2014dirichlet} & TOT~\cite{wang2006topics} & DDAN \\ \midrule \multirow{2}{*}{2000} & Citation & 0.72 & 0.72 & 0.73 & \textbf{0.74} \\ & Publication & 0.71 & 0.70 & 0.73 & \textbf{0.75} \\ \multirow{2}{*}{2005} & Citation & 0.69 & 0.69 & 0.70 & \textbf{0.71} \\ & Publication & 0.69 & 0.69 & 0.72 & \textbf{0.73} \\ \multirow{2}{*}{2010} & Citation & 0.67 & 0.67 & 0.68 & \textbf{0.69} \\ & Publication & 0.71 & 0.71 & 0.73 & \textbf{0.74} \\ \multirow{2}{*}{2015} & Citation & 0.67 & 0.67 & 0.68 & \textbf{0.69} \\ & Publication & 0.72 & 0.72 & 0.74 & \textbf{0.75} \\ \multirow{2}{*}{2018} & Citation & 0.67 & 0.67 & 0.68 & \textbf{0.69} \\ & Publication & 0.76 & 0.75 & 0.77 & \textbf{0.78} \\ \bottomrule \end{tabular} \caption{Experiment results using Mean Average Precision (MAP) as the evaluation metric. DDAN achieves the highest scores for both strategies in all testing snapshots.} \label{tab:experiment_results_table} \end{table} \subsection{Evaluation} \label{sec:5.6} Each model is tested using 5-fold cross-validation, and the average MAP scores are reported in~\Cref{tab:experiment_results_table}. We can see that DDAN outperforms the rest for both strategies in all snapshots. LR performs slightly better than DMM since LR is trained on each author individually, while the training of DMM requires the entire set of active authors and may introduce noise. Meanwhile, the dynamic models (DDAN, TOT) also give better results than the static models (LR, DMM) since dynamic models consider the connections across snapshots. In general, unlike the baseline models (LR, DMM, TOT), which focus on authors' information and ignore the social context, the proposed DDAN framework comprehensively models the interactions between authors and contents within the same snapshot, as well as the dependencies between the present and the past.
Therefore, DDAN achieves the highest performance. In this section, we introduced four strategic considerations---popularity, field, familiarity, and time---to explain the existence of an edge in four different graphs derived from the DBLP dataset. We also covered DDAN training and reported experimental results comparing DDAN with state-of-the-art baselines. The goal of these baselines was to understand if DDAN modeled the observations well. Next, we present a qualitative analysis of our results. \section{Qualitative Analysis} \label{sec:6} Two questions in~\Cref{sec:2.2} motivated us: whether the preference order amongst strategies stabilizes for individuals (thus indicating the emergence of strategic behavior), and whether these stable preference orders are correlated with high utility. As a reminder, stable preference orders may not correlate with high payoffs---since individuals are resource-limited, they may lack the resources to discover the correlation between behavior and payoff. Instead, stability may arise due to other factors, including social norms. Let us examine each question in turn. In the analysis that follows, we use the same strategy spaces and payoffs introduced in the previous section: the authors in an academic social network make decisions on whom to cite, and where to publish. We used sixteen citation strategies and eight location strategies. Since we use the academic dataset, we use the number of citations as the content utility; that is, $\mu_c(k)$ refers to the number of citations $c$ received over $k$ time units after publication. We assume for this analysis that citation strategies (i.e., which papers to cite) and location strategies (i.e., where to publish) are independent and contribute equally to the content utility $\mu_c(k)$. \subsection{Do strategies matter?} \label{sec:6.1} Let us first examine if there are any differences amongst the strategies $S \in \mathcal{S}$. Consider~\Cref{fig:combined_curves} (a)-(b).
The two figures show the distribution of maximum likelihood utilities over the citation and location strategy spaces for the whole population. Observe that the utility curves in each sub-figure are distinct: each strategy distribution has a different mode, and some strategies have a higher payoff. Consider~\Cref{fig:combined_curves} (b), the distribution of location strategies. It shows that location strategy 0, i.e., $S^l_0$, has the maximum modal payoff (i.e., the distribution with the highest mode). This strategy says that the authors pick venues based on preferential attachment (that is, pick venues in proportion to their publication popularity), that are from similar fields as the author, and that are familiar (i.e., the author has published there earlier). In hindsight, this is intuitive---by publishing in popular venues, authors increase the likelihood that their papers will be visible; by choosing venues with similar fields, they increase the chance that the paper is accepted due to topical match; and if the author has published there previously, then she understands the social norms in terms of how to write for that audience, again increasing the chance of acceptance. \Cref{fig:combined_curves} (a), the distribution of citation strategies, offers similar insights. It shows that citation strategies 2, 4, and 15, i.e., $S^c_2$, $S^c_4$ and $S^c_{15}$, have the highest modal utilities. Let us examine strategy 4, i.e., $S^c_4$, in detail; the insights for the other two follow a similar argument. $S^c_4$ says that authors cite papers based on preferential attachment (i.e., they cite highly cited papers), from similar fields (i.e., they cite papers similar to their own paper), pick papers that are not familiar (that is, they don't cite their own papers), and pick the most recent papers.
This strategy of picking papers that are well cited in their own field makes sense---it is less likely that their paper will be rejected for inadequate references; that they don't self-cite is also reasonable since excessive self-citation is frowned upon; and citing more recent papers implies that they have covered all the recent, relevant works in their area. \subsection{Emergence of Order} \label{sec:6.2} Having established that the strategies have different payoffs, let us examine if authors begin to converge on a preferential order over strategies. In our DDAN formulation, each author $a$ has a strategy distribution $\vec{D}_a(t)$ for each time $t$. For each author, we compute the preference order over the citation and location strategies by utilizing the likelihood of each strategy. Then, we compute the Spearman rank correlation coefficient \cite{myers2013research} to measure the correlation in preference order across consecutive publication years, for all authors with at least five publication years. Then, to aggregate across authors, we group the rank correlation coefficients by author career length. \begin{figure}[t] \includegraphics[width=\columnwidth]{figure/change_in_strategy.pdf} \caption{For authors who published in at least 15 years, Figure (a) shows the distribution of their citation strategies in the 1st, 5th, 10th and 15th year. Figure (b) is the corresponding distribution of their citation utilities. Similar heatmaps are drawn for location strategies in Figure (c) and (d).} \label{fig:author_utility_over_time} \end{figure} \Cref{fig:combined_curves} (c) shows the rank correlation curves. The curves show that the correlations \textit{increase} for those authors with the normalized utility in the top $10\%$, with the highest increases for those in the top $1\%$.
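A small, self-contained sketch of the rank-correlation step (pure Python, with toy data in place of the learned strategy likelihoods; ties are ignored for simplicity):

```python
def ranks(values):
    """Rank positions (1 = largest); no tie handling in this sketch."""
    order = sorted(range(len(values)), key=lambda i: -values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    """Spearman rank correlation for distinct values:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

# toy strategy likelihoods of one author in two consecutive years
D_year1 = [0.50, 0.30, 0.15, 0.05]
D_year2 = [0.45, 0.35, 0.12, 0.08]   # same preference order
rho = spearman(D_year1, D_year2)     # -> 1.0 for identical orders
```

A coefficient near $1$ across consecutive years signals a stable preference order; values near $0$ or below indicate that the author is still shifting between strategies.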
What the curves imply is that the top $1\%$ quickly converges onto both the citation and location strategies, while the group in the top $1$--$10\%$ converges onto the citation strategy (but less quickly than the top $1\%$) and takes a while to figure out \textit{where to publish}. What is of note: authors in the bottom $90\%$ in terms of the normalized utility are less likely to be correlated in terms of their citation or location strategy. \Cref{fig:author_utility_over_time} shows the strategy distributions for both citation and location strategy spaces, over time, for authors who have published in at least $15$ years (there are 23,238 of them). Notice that over time, for this group of authors, one can see that citation strategies 2, 4, 6, and 15, i.e., $S^c_2$, $S^c_4$, $S^c_6$ and $S^c_{15}$, are beginning to stabilize and location strategy 0, i.e., $S^l_0$, is stabilizing; notice that the corresponding utilities are also high; these plots indicate that for many of these authors, a preference order emerges. In~\Cref{fig:combined_curves} (a)-(b), we find, somewhat surprisingly, that a small percentage of authors ($\sim7\%$) have a normalized utility greater than that of the rational agent. This is reasonable because the rational agent computes the \textit{expected payoff} of any strategy; there will be some authors for whom a strategy works better than average. In other words, some authors appear better than the rational agent because most authors never figure out the correct strategy, depressing the mean. \subsection{Stability and Payoffs} \label{sec:6.3} \begin{figure}[t] \includegraphics[width=\columnwidth]{figure/top_util_marginal.pdf} \caption{For authors with top $1\%$ normalized utilities, we show the distribution over their citation strategies in Figure (a) and over citation utilities in Figure (b). Similar distributions are drawn for location strategies in Figure (c) and (d).
The high utility of citation strategy 6 in 5(a) and 5(b) is notable: this strategy emphasizes citing papers from different fields, consistent with findings by~\citet{Uzzi2013} that high-impact papers tend to cite papers from different fields.} \label{fig:top1} \end{figure} Having established the emergence of a stable preference order for some of the authors, it is natural to ask if this order is correlated with expected utilities. That is, do the preferences over strategies match the utilities that accrue from using them? We examine this issue in~\Cref{fig:top1}. The plots show the marginal strategy distribution for the top $1\%$ of the authors in terms of their normalized utility, with the top-left sub-figure showing the marginal citation strategy distribution, and the top-right sub-figure showing the corresponding utility distribution. Notice that, again, citation strategies 2, 4, 6 and 15, i.e., $S^c_2$, $S^c_4$, $S^c_6$ and $S^c_{15}$, have the highest utility values. While strategy 6 does not have the same high mode as strategies 2, 4 and 15 in~\Cref{fig:combined_curves}, the authors in the top $1\%$ derive more utility from it than most other authors do. The main difference between strategies 4 and 6 is that strategy 6, consistent with the work of~\citet{Uzzi2013} on high-impact publications, emphasizes citing highly cited papers from \textit{different fields}, whereas strategy 4 picks highly cited papers from the same field. A similar pattern emerges for the location strategies: strategy 0 having the highest payoff for the top $1\%$ is consistent with the findings in~\Cref{fig:combined_curves} (b). In this section, we qualitatively examined the questions that motivated this paper: the emergence of a preference order, and the relationship between preference order and utility. In the case of the academic dataset, and for the citation and location strategy spaces, we observe the emergence of order.
Interestingly, we see that the preference order is stable only for the top $10\%$ of the individuals in our dataset. Furthermore, we find that the preferences of the top $1\%$ are correlated with utility. \section{Related Work} \label{sec:7} Our approach targets strategic behavior modeling in social networks via dual graph attention. We provide a brief overview of related past work. \textbf{Strategic Behavior Modeling in Social Networks.} Social behavior modeling is connected to a wide range of past work: \citet{papagelis2011individual} investigates how an individual's behavior is affected by that of her friends; \citet{kohli2012colonel} looks at how social relations affect players' strategies in a resource allocation game. Others target specific scenarios: \citet{xu2012modeling} studies users' posting behavior on Twitter; \citet{irfan2018power} includes behavioral context in its model of congressional voting; more recent work models relation types~\cite{rase, igcn} and social influence~\cite{infvae, socialgan} in neural recommendation frameworks; and \citet{dong2014inferring} discovers social strategies among mobile users. Some general frameworks have also been proposed: \citet{mueller2013general} formally characterizes rational learning in social networks. In contrast to past work, we examine whether strategic behaviors emerge among social content producers seeking to maximize social rewards. \textbf{Graph Attention Networks.} With recent advancements in graph neural networks \cite{zhou2018graph, wu2019comprehensive, zhang2018deep} and attention mechanisms \cite{vaswani2017attention, lee2018attention}, GAT \cite{velivckovic2017graph} introduces the attention mechanism in network feature aggregation by implicitly prioritizing node neighbors.
Several works attempt to extend GAT to dynamic settings: combining recurrent neural networks with GAT~\cite{song2019session}; combining attention over the structural neighborhood with temporal dynamics~\cite{sankar2018dynamic}; and applying node-aware attention to user interaction prediction in real-world dynamic graphs~\cite{mlg2019_45}. \citet{wu2019dual} uses user-specific and dynamic context-aware attention weights for social recommendation. In contrast, our proposed DDAN framework extracts latent strategies via forward-backward dual attention to model the social interactions centered on content and authors. \section{Conclusion} \label{sec:8} In this paper, we investigated whether resource-limited individuals are able to discover strategic behaviors associated with high payoffs when producing content in social networks. Empirical analysis is challenging: while we observe the action and the payoff, we do not observe the strategic considerations underlying the action. Our technical insight was to conceptualize the observed behavior as a generative process. That is, a strategic decision changes the posterior distribution over the action space. We assumed that while the set of strategies is common to all, each individual randomizes over the set of strategies. We proposed a novel framework of Dynamic Dual Attention Networks (DDAN) to model authors' strategic behaviors when creating content in social networks. We made three strong empirical findings: first, different strategies lead to different payoffs; second, the group of authors with the top $10\%$ normalized utility exhibits stability in their preference orders over strategies, which indicates the emergence of strategic behaviors; third, the stability of preference is related to high payoffs.
While our technical insight is generalizable, adapting our framework to other social networks requires care: the strategy spaces of these networks may differ, and if the content-production is not collaborative, DDAN may not be the best approach.
\section{Booleanization details} \label{app:Booleanizationdetails} Recall that given an arithmetic circuit ${\cal C}$ over $\mathbb{C}$ on variables $x_1,\ldots,x_N$, the Booleanization $B_{r,M}({\cal C})$ is the Boolean circuit constructed by assuming that all inputs $x_1,\ldots,x_N$ have magnitude $|x_i| \leq M$, and rounding them to $r$ bits of precision using the operation:\footnote{The floor function is applied to the real and imaginary parts separately.} $$R_r(z) := \floor{2^r \cdot z} / 2^r.$$ In this section, we will prove Lemma \ref{lem:Booleanization}, restated below: \begin{lemma} Let $\varepsilon > 0$, and let ${\cal C}$ be a circuit over $\mathbb{C}$ of depth $h$, computing a polynomial $g(x_1,\ldots,x_N)$ of degree $d$. Suppose that each multiplication gate of ${\cal C}$ has fan-in 2, and each addition gate has fan-in at most $m$. For technical reasons, suppose that all input gates of ${\cal C}$ are labelled by a variable in $\{x_1,\ldots,x_N\}$ (i.e., there are no input gates labelled by a constant in $\mathbb{C}$). If $r > (2hd^2 \ceil{{\rm log}(m)} + 1) {\rm log}_2(4NMd/\varepsilon)$, then $B_{r,M}({\cal C})$ is a logspace-uniform Boolean circuit of size ${\rm poly}(|\mathcal{C}|dhr({\rm log} m){\rm log}(M))$ and depth $O(h \cdot {\rm log}(dhrmM))$. Moreover, $B_{r,M}({\cal C})$ computes a function $\tilde{g}(x_1,\ldots,x_N)$ such that for all $a_1,\ldots,a_N \in \mathbb{C}$ with $\max_i |a_i| \leq M$, $$|\tilde{g}(a_1,\ldots,a_N) - g(a_1,\ldots,a_N)| < \varepsilon.$$ \end{lemma} \begin{proof} This follows from Lemmas \ref{lem:boundingrounding} and \ref{lem:depthcomplexitybound}, which are proved in Subsections \ref{subsec:roundingaccuracy} and \ref{subsec:Booleanizationdepth}, respectively. \end{proof} In Subsection \ref{subsec:roundingaccuracy}, we ensure that the function computed by $B_{r,M}({\cal C})$ is a good approximation of the polynomial computed by ${\cal C}$.
In Subsection \ref{subsec:Booleanizationdepth}, we bound the depth of $B_{r,M}({\cal C})$; this requires bounding the number of bits needed to represent the values arising in the intermediate computation. In both Subsections \ref{subsec:roundingaccuracy} and \ref{subsec:Booleanizationdepth}, we will use the following lemma: \begin{lemma}[Bound on circuit value] Let ${\cal C}$ be an arithmetic circuit over $\mathbb{C}$. Suppose that each multiplication gate of ${\cal C}$ has fan-in 2, and each addition gate has fan-in at most $m$. For technical reasons, suppose that all input gates of ${\cal C}$ are labelled with variables in $\{x_1,\ldots,x_N\}$ (i.e., there are no constants from $\mathbb{C}$ in the input gates). For each node $v$ at height $h(v)$ in ${\cal C}$, let $p_v(x_1,\ldots,x_N)$ denote the polynomial of degree $d(v)$ computed at $v$. Then if $\max_i |x_i| \leq M$, \begin{eqnarray} |p_v(x_1,\ldots,x_N)| \leq f(d(v), h(v)) := (2M)^{d(v) h(v) \ceil{{\rm log}_2(m)} + 1}. \end{eqnarray} \label{lem:precisionbound} \end{lemma} \begin{proof} First, we note that we may assume $m = 2$ without loss of generality, because each fan-in-$m$ addition gate can be replaced by a depth-$\ceil{{\rm log} m}$ tree of fan-in-2 addition gates, increasing the depth of the circuit by at most a factor of $\ceil{{\rm log} m}$. The proof is by induction on $h(v)$, the height of $v$. For the base case, $v$ is an input gate with $h(v) = 0$ and $d(v) = 1$; since $p_v = x_i$ for some $i$ with $|x_i|\leq M$, \begin{eqnarray} |p_v(x_1,\ldots,x_N)| \leq |x_i| \leq (2M)^{d(v) h(v) + 1}. \end{eqnarray} For the inductive step, if $v$ is not an input gate, let $w_1$ and $w_2$ be its children, at heights $h(w_1), h(w_2) \leq h(v)-1$.
If $v$ is a multiplication gate, then \begin{align*} |p_v(x)| &= |p_{w_1}(x)| \cdot |p_{w_2}(x)|\\ &\leq (2M)^{d(w_1) h(w_1) + 1} \cdot (2M)^{d(w_2) h(w_2) + 1}\\ &\leq (2M)^{d(w_1) (h(v)-1) + 1 + d(w_2) (h(v)-1) + 1}\\ &= (2M)^{d(v) h(v) + 2-d(v)} \tag{Using $d(w_1) + d(w_2) = d(v)$}\\ &\leq (2M)^{d(v) h(v) + 1}. \end{align*} And if $v$ is an addition gate, then by the triangle inequality \begin{align*} |p_v(x)| &\leq |p_{w_1}(x)| + |p_{w_2}(x)|\\ &\leq (2M)^{d(w_1) h(w_1) + 1} + (2M)^{d(w_2) h(w_2) + 1}\\ &\leq 2 \cdot (2M)^{d(v) (h(v)-1) + 1} \tag{By $h(w_1),h(w_2) \leq h(v)-1$ and $d(w_1),d(w_2) \leq d(v)$}\\ &\leq (2M)^{d(v) h(v)+1}. \end{align*} \end{proof} \subsection{Bounding the error from rounding} \label{subsec:roundingaccuracy} A corollary of this lemma is that we can round the input values to a low number of bits of precision and incur only a small additive error. \begin{lemma}[Bound on rounding error] \label{lem:boundingrounding} Let ${\cal C}$ be an arithmetic circuit over $\mathbb{C}$ of depth $h$ and degree $d$ such that all input gates are labelled by input variables $\{x_1,\ldots,x_N\}$ and not constants in $\mathbb{C}$. Suppose that each multiplication gate of ${\cal C}$ has fan-in 2, and each addition gate has fan-in at most $m$. Let $g(x_1,\ldots,x_N) \in \mathbb{C}[x_1,\ldots,x_N]$ be the polynomial computed by ${\cal C}$. Let $M > 0$, $0 \leq \varepsilon < 1$, and $a_1,\ldots,a_N \in \mathbb{C}$ and $b_1,\ldots,b_N \in \mathbb{C}$ be such that \begin{equation*}\max_i |b_i - a_i| \leq \varepsilon, \quad \max_i |a_i| \leq M, \quad \mbox{ and } \quad \max_i |b_i| \leq M.\end{equation*} Then $$|g(a_1,\ldots,a_N) - g(b_1,\ldots,b_N)| \leq Nd\varepsilon(2M)^{2hd^2\ceil{{\rm log} m}+1}.$$ \end{lemma} \begin{proof} Assume $m = 2$ without loss of generality, because each fan-in-$m$ addition gate can be replaced by a tree of addition gates of depth $\ceil{{\rm log} m}$.
For each $i \in [N]$ and $0 \leq j \leq d$, consider the polynomial $$[x_i^j]g(x_1,\ldots,x_N) \in \mathbb{C}[x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_N].$$ By Lemma \ref{lem:deriv}, there is a depth-$(2hd)$ arithmetic circuit ${\cal C}_{i,j}$ computing $[x_i^j]g$. Moreover, the construction in Lemma \ref{lem:deriv} does not add any field elements from $\mathbb{C}$ to the input gates. Therefore, by Lemma \ref{lem:precisionbound}, we have the following inequality for any arguments of magnitude at most $M$: $$|[x_i^j]g(a_1,\ldots,a_{i-1},a_{i+1},\ldots,a_N)| \leq (2M)^{2hd^2+1}.$$ Therefore, for all $i \in [N]$, defining $\Delta_i := g(a_1,\ldots,a_i,b_{i+1},\ldots,b_N) - g(a_1,\ldots,a_{i-1},b_i,\ldots,b_N)$, we have \begin{align*}|\Delta_i| \leq \sum_{j=0}^d |b_i - a_i|^j \cdot |[x_i^j]g(a_1,\ldots,a_{i-1},b_{i+1},\ldots,b_N)| \leq d \varepsilon (2M)^{2hd^2 + 1}.\end{align*} So \begin{align*}|g(a_1,\ldots,a_N) - g(b_1,\ldots,b_N)| = \Big|\sum_{i \in [N]} \Delta_i\Big| \leq \sum_{i \in [N]} |\Delta_i| \leq Nd\varepsilon(2M)^{2hd^2 + 1}.\end{align*} \end{proof} A corollary of Lemma \ref{lem:boundingrounding} is the following: \begin{corollary}\label{cor:boundingroundingimplication} Let $\varepsilon,M > 0$, and let $g(x_1,\ldots,x_N)$ be as in the statement of Lemma \ref{lem:boundingrounding}. If $r > (10hd^3\ceil{{\rm log} m}+1){\rm log}_2(4NdM/\varepsilon)$, then for $x_1,\ldots,x_N \in \mathbb{C}$ such that $\max_i |x_i| \leq M$, $$|g(R_r(x_1),\ldots,R_r(x_N)) - g(x_1,\ldots,x_N)| \leq 2^{-r+1} \cdot Nd(2M)^{10hd^3\ceil{{\rm log} m} + 1} \leq \varepsilon.$$ \end{corollary} For example, if $m = 2$ and both the degree and the height of $g$ are ${\rm poly}{\rm log}(N)$, then Corollary \ref{cor:boundingroundingimplication} implies that we can round the inputs of the circuit down to ${\rm poly}{\rm log}(N) \cdot {\rm log}(M)$ bits of precision and still obtain a $1/{\rm poly}(N)$ overall approximation to the true value.
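As a concrete illustration of the rounding operator (a minimal Python sketch, not part of the formal construction), $R_r$ applies the floor to the real and imaginary parts separately, so each part is perturbed by less than $2^{-r}$ and the total perturbation is at most $\sqrt{2}\cdot 2^{-r}$:

```python
import math

def round_bits(z, r):
    """R_r(z) = floor(2^r * z) / 2^r, with the floor applied to the
    real and imaginary parts separately, as in the definition above."""
    scale = 2 ** r
    return complex(math.floor(z.real * scale) / scale,
                   math.floor(z.imag * scale) / scale)

z = 0.7 + 0.3j
for r in (4, 8, 16):
    err = abs(round_bits(z, r) - z)
    # Each part is rounded down by less than 2^-r, so the total
    # perturbation is at most sqrt(2) * 2^-r.
    print(r, err <= math.sqrt(2) * 2 ** -r)
```

Increasing $r$ shrinks this perturbation geometrically, which is what the corollary exploits.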
\subsection{Bounding the depth of the Booleanization} \label{subsec:Booleanizationdepth} In this subsection, we show that replacing each arithmetic operation in an arithmetic circuit with its fixed-precision Boolean analogue does not increase the depth of the circuit significantly. The parallel Boolean complexity of the basic arithmetic operations is a folklore result: \begin{lemma}[Boolean complexity of addition and multiplication] Complex multiplication and iterated addition are contained in ${\sf{NC}}^1$. That is, \begin{enumerate} \item If $a_1,\ldots,a_t \in \mathbb{C}$ are $t$ complex numbers, each represented with $t$ bits, then $\sum_{i=1}^t a_i$ can be computed by a logspace-uniform bounded fan-in Boolean circuit of size ${\rm poly}(t)$ and depth $O({\rm log} t)$. \item If $a,b \in \mathbb{C}$ are complex numbers represented with $t$ bits, then $a \times b$ can be computed by a logspace-uniform bounded fan-in Boolean circuit of size ${\rm poly}(t)$ and depth $O({\rm log} t)$. \end{enumerate} \label{lem:simple-operations-cost} \end{lemma} Recall that $B_{r,M}({\cal C})$ is defined to be the Boolean circuit formed by rounding the input variables to $r$ bits of precision and replacing each arithmetic operation with its Boolean counterpart, under the assumption that the input gates satisfy $\max_i |x_i| \leq M$. \begin{lemma} \label{lem:depthcomplexitybound} Let $r > 0$, and let ${\cal C}$ be an arithmetic circuit over $\mathbb{C}$ of depth $h$ and degree $d$ such that the input gates are labelled by variables $\{x_1,\ldots,x_N\}$ and not constants in $\mathbb{C}$. Suppose that each multiplication gate of ${\cal C}$ has fan-in 2, and each addition gate has fan-in at most $m$. Then $B_{r,M}({\cal C})$ has depth $O(h\,{\rm log}(dhrmM))$. \end{lemma} \begin{proof} We will prove this lemma by bounding the bit complexity of the values at the intermediate gates $v$ of the arithmetic circuit by $B = O(d^3 hr ({\rm log} m)({\rm log} M))$.
This will suffice because, when the number of bits at each gate is bounded by $B$, replacing each addition or multiplication operation with its Boolean implementation adds only $O({\rm log}(mB))$ depth to the circuit by Lemma \ref{lem:simple-operations-cost}. Thus the total depth is $O(h\,{\rm log}(mB)) = O(h\,{\rm log}(dhrmM))$, as claimed. For each node $v$ in ${\cal C}$, let $p_v(x_1,\ldots,x_N)$ be the polynomial computed at $v$. Write $$p_v(x_1,\ldots,x_N) = \sum_{i=0}^d p_{v,i}(x_1,\ldots,x_N),$$ where for each $0 \leq i \leq d$, $p_{v,i}$ is a homogeneous polynomial of degree $i$ (i.e., each of its monomials is of degree $i$). By arithmetic circuit homogenization (cf. Lemma \ref{lem:deriv}), for each $0 \leq i \leq d$ there is an arithmetic circuit of depth $h(d+1)$ computing $p_{v,i}$ such that all input gates are labelled by variables in $\{x_1,\ldots,x_N\}$. We apply Lemma \ref{lem:precisionbound} to $p_{v,i}(x_1 \cdot 2^r,\ldots,x_N \cdot 2^r) = p_{v,i}(x_1,\ldots,x_N) \cdot 2^{ir}$, and conclude that $|p_{v,i}(x_1 \cdot 2^r,\ldots,x_N \cdot 2^r)| \leq (2M)^{(r+1)(2d^2\ceil{{\rm log}_2(m)} h+1)}$. Since $2^{r d} p_{v,i}(x_1,\ldots,x_N)$ is a Gaussian integer, we conclude that only $O((r+1)(d^3 h+1)({\rm log} M))$ bits of precision are required to represent $p_{v,i}(x_1 \cdot 2^r,\ldots,x_N \cdot 2^r)$. Hence only $O(d(v) \cdot (r+1)(d^2 h+1)({\rm log} M))$ bits are required to represent $p_v(x_1,\dots,x_N)$. So overall, all intermediate values in $B_{r,M}({\cal C})$ can indeed be represented with $B = O(d^3 hr({\rm log} m)({\rm log} M))$ bits of precision. \end{proof} \section{Computational Analytic-Continuation} \subsection{Improved analysis for CAC} In \cite{EM18}, a subset of the authors outlined an algorithmic technique called CAC for interpolating the value of a complex function given sufficiently many derivatives at some origin.
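At its core, CAC repeatedly shifts a truncated sequence of Taylor coefficients from one point to the next. The shift step can be sketched in Python as follows (a minimal illustration of our own, using $f(z)=e^z$ as a hypothetical test function whose derivatives are all known; this is not part of the formal analysis):

```python
from math import factorial
import cmath

def taylor_shift(derivs, delta, m_next):
    """Given derivs[l] = f^(l)(s) for l = 0..m, return approximations of
    f^(l)(s + delta) for l = 0..m_next via the truncated Taylor series
    f^(l)(s + delta) ~ sum_{p=0}^{m-l} f^(p+l)(s) / p! * delta**p."""
    m = len(derivs) - 1
    return [sum(derivs[p + l] / factorial(p) * delta ** p
                for p in range(m - l + 1))
            for l in range(m_next + 1)]

# Hypothetical check with f(z) = exp(z), so f^(l)(0) = 1 for every l:
s, delta = 0.0, 0.1 + 0.05j
derivs = [cmath.exp(s)] * 21              # the first 21 derivatives at s = 0
shifted = taylor_shift(derivs, delta, 5)  # keep only 6 derivatives at s + delta
print(abs(shifted[0] - cmath.exp(s + delta)) < 1e-12)
```

Each application discards the highest-order derivatives, which is why the number of retained derivatives shrinks from step to step in the algorithm below.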
In this work, we require slightly stronger conditions on the performance of the algorithm, so we present a refined analysis thereof. We begin by rewriting the algorithm with slightly modified parameters: \begin{mdframed} \begin{algorithm}[Computational analytic continuation] \label{algorithm:cac} \begin{enumerate} \noindent \item \textbf{Input:} Integer $m_0 \geq 1$. An oracle $\mathcal{O}_g$ that takes a number $m_0$ as input and outputs the first $m_0$ derivatives of $g$ at $z = 0$, where $g(0) = 1$. $t$ complex numbers $\Delta_1,\hdots,\Delta_t$. A number $0 < \theta \leq 1$. \item \textbf{Fixed parameters:} \begin{enumerate} \item $s_0=0$ and $s_i= s_0 + \sum_{j=1}^{i} \Delta_j $ for each $1 \leq i \leq t$ \hfill \end{enumerate} \item \textbf{Variables:} \begin{enumerate} \item $m_i$ for $0\leq i \leq t$ \hfill \% the number of derivatives at each point $s_i$. \item $\hat{f}^{(l)}_i$ for $0 \leq l\leq m_i$ and $0 \leq i \leq t$ \hfill \% the $l$'th derivative of $f$ at $s_i$. \end{enumerate} \item \textbf{Main:} \begin{enumerate} \item\label{it:it1} Query $\mathcal{O}_g(m_0)$ to obtain $g^{(0)}(0),\ldots, g^{(m_0)}(0)$. \item\label{it:it2} Using the derivatives from step \ref{it:it1}, compute $\hat{f}^{(l)}_0 \leftarrow f^{(l)}(s_0)$ for $0 \leq l\leq m_0$. \item\label{it:iteration} For each $i = 0,\ldots,t-1$: \begin{itemize} \item Set: $m_{i+1} = \ceil{\theta m_i / (2{\rm log}\ m_i)}$. \item Compute $ \forall 0 \leq l \leq m_{i+1}, \ \ \ \hat{f}^{(l)}_{i+1} = \sum_{p = 0}^{m_{i} - l} \frac{\hat{f}^{(p + l)}_i}{p!} \Delta_{i+1}^{p}. $ \end{itemize} \label{it:dyn} \end{enumerate} \item \textbf{Output}: Let $\hat{f} := \hat{f}^{(0)}_t$ and return ${\cal O} = e^{\hat{f}}$. \end{enumerate} \vspace{5mm} \end{algorithm} \end{mdframed} \noindent \begin{lemma}[Correctness of Algorithm \ref{algorithm:cac}]\label{lem:caccorrectness} \noindent Let $g(z)$ be a polynomial of degree at most $n$ such that $g(0) = 1$, and let $f(z) = {\rm log}(g(z))$.
Suppose the inputs to algorithm \ref{algorithm:cac} satisfy the following conditions: \begin{enumerate} \item Non-increasing sequence of segments: $|\Delta_i| \leq |\Delta_{i-1}|$ for all $i \geq 1$. \item\label{it:ratio} Root avoidance: For each $i$ the ratio between the distance from the closest root of $g(z)$ to $s_i$ and the step size $|\Delta_{i+1}|$ is at least $\beta = e^{\theta}$ for $0 < \theta \leq 1$. \end{enumerate} Then, for small enough $\epsilon > 0$, letting \begin{eqnarray} \label{eq:m0setting} m_0 \geq 10\left({\rm log}(n/\epsilon \theta)\right)\left(10t({\rm log} t + {\rm log} {\rm log}(n/\epsilon \theta))\right)^{t},\end{eqnarray} Algorithm \ref{algorithm:cac} outputs an $\varepsilon$-additive approximation to $f(s_t)$. \end{lemma} Prior to establishing the correctness of the algorithm, we define shifted versions of $g(z)$ as follows: \begin{eqnarray} \forall i\in [t] \quad \tilde{g}_i(z) = g(z + s_i), \label{eq:tilde-functions} \end{eqnarray} and \begin{eqnarray} \tilde{f}_i(z) = {\rm log}(\tilde{g}_i(z)), \end{eqnarray} and denote $f_i^{(l)} = \tilde{f}_i^{(l)}(0)$. We need the following elementary fact, which we leave without proof: \begin{lemma} If the closest root of $g$ to the point $s_i$ in the complex plane is $\lambda$, then the closest root of $\tilde{g}_i$ to $z=0$ is $\lambda - s_i$; in particular, the distance from $s_i$ to the closest root of $g$ equals the distance from $0$ to the closest root of $\tilde{g}_i$. \label{lemma:closest} \end{lemma} \noindent We now prove correctness of the algorithm: \begin{proof}[Proof of Lemma \ref{lem:caccorrectness}] Let $f (z) := {\rm log} (g(z))$. It is sufficient to show that \begin{eqnarray}\label{eq:delta} \left|\hat{f}-f(s_t)\right| \leq \varepsilon. \end{eqnarray} Let $\hat{f}^{(k)}_i$ denote the approximation of the $k$'th derivative of $f$ at point $s_i$ obtained by the algorithm.
Using the oracle $\mathcal{O}_g$ we obtain the derivatives $g^{(l)}(0)$ for $0 \leq l \leq m_0$ exactly, and using Lemma \ref{lem:brunodifaasubstitute} (whose statement and proof we momentarily defer) we can compute the derivatives of $f$ precisely at $s_0 = 0$: \begin{eqnarray} \hat{f}^{(l)}_0 \leftarrow f^{(l)} (s_0). \end{eqnarray} For $i = 1,\ldots,t$ (in order) algorithm \ref{algorithm:cac} computes the lowest $m_i$ derivatives at $s_i$ using the first $m_{i-1}$ derivatives at $s_{i-1}$ as follows: \begin{eqnarray}\label{eq:alg1} \forall 0 \leq l \leq m_i, \quad \hat{f}^{(l)}_i = \sum_{p = 0}^{m_{i-1} - l} \frac{\hat{f}^{(p + l)}_{i-1}}{p!} \Delta_i^{p}. \end{eqnarray} By assumption \ref{it:ratio} and Lemma \ref{lemma:closest}, for each $1\leq i \leq t$ the function $\tilde{f}_{i-1}$ is analytic in a disk of radius $\beta |\Delta_i|$ about the point $0$. Hence, we can write the $l$-th derivative of $\tilde{f}_i(z)$ as the infinite Taylor-series expansion of the $l$-th derivative of $\tilde{f}_{i-1}(z)$ evaluated at the point $\Delta_i$: \begin{eqnarray}\label{eq:ideal1} \tilde f_{i}^{(l)} := \tilde f^{(l)}_{i} (0) = \sum_{p=0}^{\infty} \frac{ \tilde f^{(p+l)}_{i-1} (0)}{ p!} \Delta_i^p. \end{eqnarray} Let ${\cal E}^{(l)}_i$ denote the additive approximation error of the $l$-th derivative at step $i\in [t]$ and $0 \leq l \leq m_i$.
\begin{eqnarray} \mathcal{E}^{(l)}_i := \Big| \hat{f}^{(l)}_i - {f}^{(l)} _{i}\Big|, \hspace{1cm} \forall 0 \leq l \leq m_{i} \end{eqnarray} Using the triangle inequality to bound the difference between equations \eqref{eq:alg1} and \eqref{eq:ideal1}, we get: \begin{eqnarray} \forall i\in [t], 0 \leq l \leq m_i,\ \ \mathcal{E}^{(l)}_i &\leq& \sum_{p=0}^{m_{i-1} - l} \frac{|\hat{f}^{(p+l)} _{i-1}-{\tilde f}^{(p+l)} _{i-1}|}{p!} |\Delta_i|^{p} + \sum_{p=m_{i-1} - l +1}^{\infty} \frac{|{\tilde f}^{(p+l)} _{i-1} |}{p!} |\Delta_i|^{p},\\ &=& \sum_{p=0}^{m_{i-1} - l} \frac{\mathcal{E}^{(p+l)}_{i-1}}{p!} |\Delta_{i}|^{p} + \sum_{p=m_{i-1} - l +1}^{\infty} \frac{|{\tilde f}^{(l+p)}_{i-1} |}{p!} |\Delta_{i}|^p,\\ &=:&\sum_{p=0}^{m_{i-1} - l} \frac{\mathcal{E}^{(p+l)}_{i-1}}{p!} |\Delta_{i}|^{p} +\kappa_{i,l}, \label{equation:err} \end{eqnarray} where \begin{eqnarray} \kappa_{i,l} := \sum_{p=m_{i-1} - l +1}^{\infty} \frac{|{\tilde{f}}^{(p+l)}_{i-1} |}{p!} |\Delta_i|^{p} = \sum_{p=m_{i-1} - l +1}^{\infty} \frac{|\tilde{f}_{i-1}^{(p+l)} (0) |}{p!} |\Delta_i|^{p}. \label{eq:kappa4} \end{eqnarray} At this point, we focus on placing an upper bound on $\kappa_{i,l}$. Fix any index $i$ and let $z_1, \ldots, z_n$ be the roots of the shifted function $\tilde{g}_{i-1}$. Then \begin{eqnarray} \tilde{g}_{i-1} (z) = \tilde{g}_{i-1} (0) \left(1- \frac{z}{z_1}\right) \ldots \left(1- \frac{z}{z_n}\right). \end{eqnarray} We can write the derivatives of $\tilde{f}_{i-1}(z) = {\rm log}(\tilde{g}_{i-1}(z))$ in terms of $z_1,\ldots,z_n$: \begin{eqnarray} \forall k > 0, \quad \tilde{f}^{(k)}_{i-1} (0) = - \sum_{j=1}^n\frac{(k-1)!}{z^k_j}. 
\end{eqnarray} Using these derivatives and the triangle inequality we can bound equation \eqref{eq:kappa4} for all $0 \leq l \leq m_{i}$, \ba\label{eq:kappa5} \kappa_{i,l} &\leq \sum_{j=1}^n \sum_{p=m_{i-1} - l +1}^{\infty} \frac{(l + p-1)!}{p!} \frac{|\Delta_i|^{p}}{|z_j|^{p + l}} \\ &\leq e \cdot \sum_{j=1}^n \sum_{p=m_{i-1} - l +1}^{\infty} (p/e)^{l} (\frac {l + p}{p})^{p+l} \frac{|\Delta_i|^{p}}{|z_j|^{p + l}} & \text{using Lemma \ref{lem:factorialdivbound}}\\ &\leq \frac{en}{|\Delta_{i}|^l} \sum_{p=m_{i-1} - l +1}^{\infty} (p/e)^{l} (\frac {l + p}p)^{p+l} \frac{1}{\beta^{p+l}}\\ &\leq \frac{en}{|\Delta_{i}|^l} (\frac {m_{i-1} + 1}{m_{i-1}-l + 1})^{l}\sum_{p=m_{i-1} - l +1}^{\infty} (1+l/p)^p (p/e)^{l} \frac{1}{\beta^{p+l}}\\ &\leq \frac{en}{|\Delta_{i}|^l} (\frac {m_{i-1} + 1}{m_{i-1}-l + 1})^{l}\sum_{p=m_{i-1} - l +1}^{\infty} p^{l} \frac{1}{\beta^{p+l}} \label{eq:beforeint1} \ea In order to bound this quantity, we prove a lower-bound on $m_i$ for all $0 \leq i \leq t$. Since the update rule for $m_i$ is $m_{i+1} = \ceil{\theta m_i / (2 {\rm log} m_i)}$, and $x / {\rm log}(x)$ is increasing for $x > 10$, in order to prove the lower bound on $m_i$ we can without loss of generality assume $m_0 = \ceil{10\left({\rm log}(n/\epsilon \theta)\right)\left(10t({\rm log} t + {\rm log} {\rm log}(n/\epsilon \theta))\right)^{t}}$. The following facts immediately follow. 
For all $0 \leq i < t$, \ba \label{eq:logmiupperbound}{\rm log}(m_i) &\leq {\rm log}(m_0) \\ & \leq {\rm log}(11) + {\rm log}\log(n/\epsilon \theta) + t{\rm log}(10t) + t {\rm log}\log{\rm log}(n/\epsilon \theta) + t {\rm log} {\rm log}(t) \\ &\leq 2t{\rm log}(t) + 2t {\rm log}\log(n/\epsilon \theta),\ea and therefore for all $0 \leq i \leq t$ \ba m_i &\geq 10 ({\rm log}(n/\epsilon \theta))(10t({\rm log} t + {\rm log} {\rm log}(n/\epsilon \theta)))^{t-i} 5^i\ea and in particular \ba m_i &\geq 10 ({\rm log}(n/\epsilon \theta)) \cdot 5^t \\ &\geq 10(t + {\rm log}(n/\epsilon \theta)) \label{eq:milowerbound} .\ea So \ba m_{i-1} &= 2 m_i {\rm log}(m_{i-1}) / \theta & \text{by construction} \\ &\geq (m_i {\rm log}(m_{i-1}) + m_i) / \theta \\ &\geq (m_i {\rm log}(m_{i-1}) + 10t + 10{\rm log}(n/\epsilon \theta))/\theta & \text{using } \eqref{eq:milowerbound} \\ &\geq (m_i {\rm log}(m_{i-1}) + {\rm log}(e^3ne^t/\epsilon \theta))/\theta \label{eq:midecreasesfastenough}\ea Also, by the lower bound \eqref{eq:milowerbound} on $m_{i-1}$ the algorithm chooses $m_{i-1} \geq 3m_i / \theta \geq m_i \cdot (1 + 2/\theta)$, so since $l \leq m_i$ it follows that \begin{equation} \label{eq:michange} m_{i-1} - l + 1 > 2l/\theta.\end{equation} Therefore we may apply the bound of technical lemma \ref{lem:int1} to \eqref{eq:beforeint1}, \ba \kappa_{i,l} &\leq \frac{en}{|\Delta_{i}|^l} (\frac {m_{i-1} + 1}{m_{i-1}-l + 1})^{l}\frac{(m_{i-1} - l + 1)^l}{\beta^{m_{i-1} +1}(1-\beta^{-1}e^{l/(m_{i-1}-l+1)})} & \text{using Lemma \ref{lem:int1}} \label{eq:afterint1}\\ &\leq \frac{en}{|\Delta_{i}|^l} \frac{(m_{i-1} + 1)^l}{\beta^{m_{i-1} +1}(1-\beta^{-1/2})} & \text{using \eqref{eq:michange}} \\ &\leq \frac{e^2n}{\theta |\Delta_{i}|^l} \frac{(m_{i-1} + 1)^l}{e^{\theta m_{i-1}}} & \text{using } \beta = e^{\theta}, 0 < \theta \leq 1 \\ &\leq \frac{e^2n}{\theta |\Delta_i|^{l}} \frac{(m_{i-1} + 1)^{m_i}}{e^{\theta m_{i-1}}} & \text{using }l \leq m_i \\ &\leq \frac{e^3n}{\theta |\Delta_i|^l} \frac{(m_{i-1})^{m_i}}{e^{\theta m_{i-1}}} & \text{using } m_i \leq m_{i-1}/3 \\ &\leq \frac{\epsilon e^{-t}}{|\Delta_i|^l} & \text{using } \eqref{eq:midecreasesfastenough} \ea We now complete the error analysis in Equation \eqref{equation:err}. Using the above bound on $\kappa_{i,l}$, \ba \mathcal{E}^{(l)}_i &\leq \sum_{p=0}^{m_{i-1} - l} \frac{\mathcal{E}^{(p+l)}_{i-1}}{p!} |\Delta_{i}|^{p} + \frac{\varepsilon e^{-t}}{|\Delta_i|^l}. \ea We make the change of variables $\mathcal{F}^{(l)}_i = \mathcal{E}^{(l)}_i \cdot |\Delta_{i}|^l$. In this notation the bound becomes \ba \mathcal{F}^{(l)}_i &\leq \sum_{p=0}^{m_{i-1} - l} \frac{\mathcal{F}^{(p+l)}_{i-1}}{p!} (\frac{|\Delta_{i}|}{|\Delta_{i-1}|})^{p + l} + \varepsilon e^{-t}\\ &\leq \sum_{p=0}^{m_{i-1} - l} \frac{\mathcal{F}^{(p+l)}_{i-1}}{p!} + \varepsilon e^{-t} & \text{using } |\Delta_i| \leq |\Delta_{i-1}| \ea Now define ${\cal F}_i = \max_{l} {\cal F}^{(l)}_i$. From the above, \ba \mathcal{F}_i &\leq e \cdot \mathcal{F}_{i-1} + \varepsilon e^{-t}. \label{eq:error-version2} \ea The boundary condition is ${\cal F}_0 = 0$ since the derivatives are computed exactly at the first point. Using \eqref{eq:error-version2}, by induction on $i \in [t]$ one can show that $\mathcal{F}_i \leq \frac{e^i - 1}{e-1} \cdot \varepsilon e^{-t}$. We conclude that the output additive error is ${\cal E}_t^{(0)} = {\cal F}_t^{(0)} \cdot |\Delta_t|^0= {\cal F}_t^{(0)} \leq {\cal F}_t \leq \epsilon \cdot e^{-t} \cdot e^t = \epsilon$. \end{proof} \section{Introduction} \subsection{Background} \subsubsection{Exact computation of the determinant} Computing the determinant of a matrix is arguably one of the most basic operations in numerical linear algebra and is ubiquitous in many areas of science and engineering. As such, it has been intensively researched over the years with landmark results that have reduced its complexity, improved its numerical stability and increased its parallelism (see for example a survey at \cite{KALTOFEN04}).
Today, we know that given an $n \times n$ matrix we can compute the determinant in time $O(n^{\omega})$, where $\omega < 2.373$ is the matrix multiplication exponent \cite{williams2012multiplying,le2014powers}. One can also try to parallelize the computation of the determinant using many processors. We know that the computation of the determinant is in ${\sf{NC}}^2$ \cite{B84} -- that is, it can be computed by circuits of depth $O({\rm log}^2 n)$. While this implies that the determinant is parallelizable, it is a major open question whether the determinant can be parallelized even further -- for instance, whether the determinant lies in ${\sf{NC}}^1$, meaning that it can be computed by circuits of depth $O({\rm log}(n))$. Letting $\DET$ denote the class of problems that are ${\sf{NC}}^1$-reducible to computing the determinant of an integer-valued matrix, we know that \begin{equation} {\sf{NC}}^1 \subseteq \Lspace \subseteq \RL \subseteq \NL \subseteq \DET \subseteq {\sf{NC}}^2 \tag*{\cite{csanky1975fast, cook1985taxonomy}}. \end{equation} In particular, an $O({\rm log}(n))$-depth circuit for the determinant would imply ${\sf{NC}}^1 = \Lspace = \RL = \NL$, which would be a major breakthrough in our understanding of space-bounded computation. Furthermore, since the class $\DET$ captures many fundamental linear-algebraic tasks such as matrix powering, matrix inversion, and solving systems of linear equations, a faster algorithm for exact computation of the determinant would have far-reaching applications \cite{cook1985taxonomy}. \subsubsection{Approximate Computation of the Determinant} \label{subsubsec:approxdetcomputation} In this paper, instead of exact computation, we consider the problem of {\em approximately} computing the determinant up to a $(1 + 1/{\rm poly}(n))$ multiplicative factor. The purpose of this section is to provide an overview of known bounds on the complexity of this task prior to this paper.
The approximation problem is trivially reducible to $\DET$ and hence is contained in ${\sf{NC}}^2$. Interestingly, it turns out that merely computing the sign of the determinant is complete for the complexity class probabilistic logspace (${\sf{PL}}$) \cite{AO96}. ${\sf{PL}}$ is the logspace analog of the class ${\sf{PP}}$, contains $\NL$ and is contained in ${\sf{NC}}^2$. As a result, similarly to exact computation, providing an $\Lspace$ algorithm for determinant approximation would imply surprising statements such as $\Lspace = \NL = {\sf{PL}}$. Hence, we would like to ask a more fine-grained question: Can the determinant be approximated using small space or depth on special inputs? The answer turns out to depend concretely on the degree to which a matrix is singular. In more precise terms, it depends on the condition number of the input matrix, which is the largest-to-smallest singular value ratio of the matrix. Computing the sign of the determinant is complete for ${\sf{PL}}$ if the matrix is allowed to be arbitrary, in which case the condition number can be exponentially large. However, a result of Ta-Shma \cite{Ta13} shows that inverting a matrix with polynomially large condition number is contained in the complexity class $\BQL$. As we describe in Appendix \ref{app:psddetreductiontosolving}, the techniques of Ta-Shma imply that $\BQL$ can approximate the determinant for matrices with polynomially large condition numbers. One may conjecture that the determinant approximation problem for polynomially-conditioned matrices is complete for $\BQL$.
An item of evidence in favor of this conjecture is a result of Fefferman and Lin \cite{fefferman}, who show that inverting polynomially-conditioned matrices is complete for the class $\BQL$.\footnote{And while it is known that approximate inversion is ${\sf{NC}}^1$-reducible to determinant approximation through Cramer's rule, this reduction does not immediately work within the class of well-conditioned matrices considered by Fefferman and Lin.} If a polynomially-conditioned matrix has certain special structure, then the absolute value of its determinant may be approximable by a weaker classical machine. This follows from an ${\sf{NC}}^1$-reduction that we give in Appendix \ref{app:psddetreductiontosolving} from approximating the absolute value of the determinant to approximate matrix inversion. Implications of our reduction include: \begin{itemize}\item if $A$ is a ${\rm poly}(n)$-conditioned symmetric diagonally dominant (SDD) matrix, then $\text{Det}(A) = |\text{Det}(A)|$ can be approximated in near-$\Lspace$, because there is a nearly-logspace SDD solver for $A$ \cite{murtagh2017derandomization}. \item if $A$ is a ${\rm poly}(n)$-conditioned Hermitian stochastic matrix, then $|\text{Det}(A)|$ can be approximated in $\BPL$. This follows by combining the $\BPL$-algorithm of Doron et al. \cite{doron2017approximating} for approximating powers of $A$ with a parallelized version of the gradient descent algorithm for solving $Ax = b$. \item if $A$ is a $\kappa$-conditioned matrix, then $|\text{Det}(A)|$ can be approximated in $\tilde{O}({\rm log}(n)\cdot {\rm log} (\kappa))$ depth, because equations of the form $Ax = b$ can be solved in $\tilde{O}({\rm log}(n) \cdot {\rm log} (\kappa))$ depth using parallelized gradient descent. 
\end{itemize} Details on the reduction from approximate calculation of $|\text{Det}(A)|$ to approximate inversion of $A$ are given in Appendix \ref{app:psddetreductiontosolving}, but we note that an important barrier to this technique is the computation of the sign of the determinant, even for a Hermitian matrix. It is a priori unclear how to compute the signed determinant $\text{Det}(A)$ with a classical algorithm running in space less than ${\rm log}^2(n)$, even if it has condition number $\kappa = O(1)$. The main contribution of this paper is to surmount this barrier in computing the sign. For example, we show that the signed determinant of ${\rm poly}{\rm log}(n)$-conditioned matrices can be computed in nearly-logarithmic depth or nearly-logarithmic space if the matrices are either Hermitian or Hurwitz stable (Hurwitz stable matrices are those whose eigenvalues all have negative real parts). \subsection{Main Results} In this work we improve on prior art by proposing an algorithm for computing the determinant of well-conditioned $n\times n$ Hermitian or Hurwitz stable matrices with condition number $\kappa$ that runs in depth $$ \tilde{O}({\rm log}(n) \cdot {\rm log}(\kappa)). $$ \begin{theorem}(sketch) Let $A$ be an $n\times n$ Hermitian or Hurwitz stable matrix with condition number $\kappa$. There exists a Boolean circuit for approximating $\text{Det}(A)$ to multiplicative error $1 + 1/{\rm poly}(n)$ that has $\tilde O({\rm log}(n) \cdot {\rm log} (\kappa))$ depth. The circuits for this algorithm can be computed by a Turing machine using ${\rm log}(n)$-space. \end{theorem} \noindent A direct corollary is the following: \begin{corollary}(sketch) Let $A$ be an $n \times n$ Hermitian or Hurwitz stable matrix with condition number $\kappa$.
There exists an algorithm for approximating $\text{Det}(A)$ to multiplicative error $1 + 1/{\rm poly}(n)$ that uses $\tilde{O}({\rm log}(n) \cdot {\rm log} (\kappa))$ space. \end{corollary} \subsection{Proof Overview} Our algorithm is inspired by the Taylor-series approach to computing a multiplicative approximation of the permanent, pioneered by Barvinok \cite{Bar16}. In this approach, the permanent is reduced from a degree-$n$ polynomial in its entries to a univariate polynomial as follows: $$ g_A(z) = \rm Per( (1 - z) \cdot J + z \cdot A), $$ where $J$ is the all ones matrix. The polynomial $g_A(z)$ admits a Taylor series decomposition which converges to the true value of the function, and in particular at $z=1$ -- namely $\rm Per(A) = g_A(1)$ which is our target -- whenever all the roots of $g_A(z)$ are bounded away from the unit disk. In order to compute a multiplicative approximation of $\rm Per(A)$, Barvinok considers $f_A(z) = {\rm log}(g_A(z))$ and computes an {\it additive} approximation of $f_A(z)$ at $z=1$ for any matrix $A$ for which the roots of $g_A(z)$ are bounded away from the unit disk. For this algorithm, the Taylor series of $f(z)$ needs to contain only $O({\rm log}(n))$ terms in order to provide a $1/{\rm poly}(n)$ additive approximation error for ${\rm log}(\rm Per(A))$. The algorithm then computes all low-order derivatives of $g(z)$ in time $2^{O({\rm log}^2(n))}$, uses them to compute all low-order derivatives of $f(z)$ at $z=0$, and then computes $f(1)$ as a Taylor-series approximation and finally returns $e^{f(1)}$. Barvinok's approach was used in recent years to show a quasi-polynomial-time algorithm for computing the permanent of special cases of matrices \cite{Barvinok2013, Barvinok2016, Bar16}, and to provide better approximations of partition functions \cite{patel2017deterministic,liu2019ising,liu2019fisher,mann2019approximation,harrow2019classical}. 
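To make this pipeline concrete, here is a small Python sketch of its core steps on a toy polynomial: recovering the derivatives of $f = {\rm log}\ g$ from those of $g$, and summing a truncated Taylor series at $z = 1$. The recurrence below, obtained by differentiating $g' = f'g$, is an elementary stand-in for the Bell-polynomial formulation; the polynomial and the truncation order are illustrative choices:

```python
import math

def log_derivatives(g_derivs):
    """Derivatives of f = log(g) at 0 from those of g, assuming g(0) = 1.

    Differentiating g' = f' g repeatedly gives
        g^{(k)} = sum_{i=0}^{k-1} C(k-1, i) * f^{(i+1)} * g^{(k-1-i)},
    which can be solved for f^{(k)} term by term since g(0) = 1.
    """
    k_max = len(g_derivs) - 1
    f = [0.0] * (k_max + 1)  # f[0] = log g(0) = 0
    for k in range(1, k_max + 1):
        f[k] = g_derivs[k] - sum(
            math.comb(k - 1, i) * f[i + 1] * g_derivs[k - 1 - i]
            for i in range(k - 1))
    return f

# Toy polynomial g(z) = (1 - z/3)(1 - z/5): g(0) = 1 and both roots
# (z = 3 and z = 5) lie well outside the unit disk.
coeffs = [1.0, -(1 / 3 + 1 / 5), 1 / 15]
g_derivs = [math.factorial(k) * c for k, c in enumerate(coeffs)] + [0.0] * 28
f_derivs = log_derivatives(g_derivs)

# A truncated Taylor series of f at z = 1 recovers log g(1) = log(8/15).
approx = sum(f_derivs[k] / math.factorial(k) for k in range(len(f_derivs)))
```

Because the roots of this toy $g$ stay far from the segment $[0,1]$, the single truncated series at $z=0$ already converges at $z=1$; when roots approach the path, the analytic-continuation machinery with its root-avoiding interpolation points takes over.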
In particular, authors 2 and 3 of this paper showed how to approximate the permanent of most Gaussian matrices by considering a random ensemble of such matrices with a vanishing, non-zero mean \cite{EM18}. The determinant is an $n$-variate polynomial that is very similar to the permanent, at least as a syntactic polynomial, with permutation signs multiplying each monomial. Hence it is natural to consider the determinant as a candidate for applying the Taylor-series approach. However, a polynomial-time algorithm is already known for the determinant, and this approach will not do any better. Our goal, instead, is to focus on the {\it depth} complexity of the algorithm, which we then use to conclude a nearly log-space algorithm for approximating the determinant, by the folklore connection between space and depth complexity. To recap, the main logical steps of the Taylor series meta-algorithm are: \begin{enumerate} \item Define a polynomial $g(z)$ that interpolates between an easy-to-compute determinant at $z=0$ and the determinant of our target matrix $A$. \item Choose the polynomial $g(z)$ so that, for a large natural class of matrices, all the roots of $g(z)$ are bounded away from the unit disk. \item Demonstrate a low-depth algorithm for computing all low-order derivatives of $g(z)$. \end{enumerate} Notably, the first two steps already appeared in works on the permanent \cite{Barvinok2016, Bar16, EM18}. However, the third step is specific to the case of the determinant, where our goal is to construct a {\it low-depth} circuit. In this work, we meet these requirements in the following way: \begin{enumerate} \item We set $g(z) = \text{Det}( (1-z) \cdot I + z \cdot A)$. Clearly $g(0) = \text{Det}(I)$ which is easy to compute and $g(1) = \text{Det}(A)$.
\item The polynomial $g(z)$ is reminiscent of the characteristic polynomial of $A$, $$ \chi_A(\lambda) \equiv \text{Det}( \lambda I - A). $$ One can easily check that if $A$ is a well-conditioned Hermitian matrix, then the roots of $g(z)$ are all bounded away from either $z=0$ or $z=1$, and that they are all real. \item In order to compute the derivatives of $g(z)$ using shallow circuits we build upon the fact that (contrary to the permanent) we do in fact have a polynomial-time algorithm for the determinant. We use that algorithm, in conjunction with the algorithm for parallelizing computation of low-degree polynomials due to Valiant et al. \cite{valiant1983fast}, to show that any order-$t$ derivative of $g(z)$ can be computed by a circuit of depth $O({\rm log}(n) \cdot {\rm log} (t))$. \end{enumerate} In order to compute a multiplicative approximation of the determinant of the input matrix, several additional steps must be added that compute the derivatives of $f(z) = {\rm log}\ g(z)$ from those of $g(z)$, and one must make sure that the arithmetic circuits for these polynomials can be implemented using Boolean circuits with small overhead (which is one of the reasons that our space is not precisely logarithmic but rather has extra ${\rm log}\log(n)$ factors). We summarize the main steps of the parallel algorithm here and refer the reader to section \ref{sec:alg} for a detailed description of the computational steps. \begin{mdframed} \begin{enumerate} \item Input: $\kappa \leq {\rm poly}(n)$, and an $n\times n$ Hermitian or Hurwitz stable matrix $A$ such that $I / \kappa \preceq |A| \preceq I$. \item Round each entry to $O(\kappa {\rm log}(n))$ bits of precision. \item Compute the first $k = ({\rm log} n) \cdot (\kappa {\rm log} {\rm log} n)^{O({\rm log} {\rm log} \kappa )}$ derivatives of $g(z) = \text{Det}( (1-z) I + z A)$ at $z=0$ using a dynamic program based on the Samuelson-Berkowitz algorithm \cite{B84, S42}.
This dynamic program can be parallelized to depth $\tilde{O}({\rm log}(n) \cdot {\rm log}(k)) = \tilde{O}({\rm log}(n) \cdot {\rm log}(\kappa))$ by the algorithm for parallelizing low-degree arithmetic circuits due to Valiant-Skyum-Berkowitz-Rackoff \cite{valiant1983fast}. \item Using Bell polynomials, compute the first $k$ derivatives of $f(z) = {\rm log}(g(z))$ at $z=0$ as in \cite{Bar16}. Also parallelize this step to depth $\tilde{O}({\rm log}(n) \cdot {\rm log}(\kappa))$ using \cite{valiant1983fast}. \item Use CAC interpolation, introduced in \cite{EM18}, to compute the value of $f(1)$ by constructing an interpolation path that avoids the roots of $g(z)$ (or poles of $f(z)$). Again parallelize CAC interpolation to depth $\tilde{O}({\rm log}(n) \cdot {\rm log}(\kappa))$ using \cite{valiant1983fast}. \item Return $e^{f(1)}$. \end{enumerate} \end{mdframed} \subsection{Discussion and Future Work} Our result implies that the determinant of a large class of matrices, namely polylogarithmically well-conditioned Hermitian or Hurwitz stable matrices, can be approximated to inverse polynomial relative error in space which is nearly logarithmic in the matrix size. A natural next step would be to try to extend our algorithm to run in logarithmic space or depth for matrices with polynomial condition number, which could then amount to an ${\sf{NC}}^1$ algorithm for ``almost'' any matrix in the Wigner ensemble \cite{Tao}. Another direction could be to try to show that approximating the determinants of polynomially-conditioned matrices is $\BQL$-complete, as discussed in Section \ref{subsubsec:approxdetcomputation}. We note that proving both the algorithm and the completeness result would imply the de-quantization of $\BQL$.
\section{Main results}\label{sec:alg} \subsection{Theorem Statement} Our main theorem is that for $O(\kappa)$-conditioned Hermitian or Hurwitz stable matrices one can compute a $1 + 1/{\rm poly}(n)$ approximation to the determinant using an arithmetic circuit of depth $\tilde {O}({\rm log} (\kappa) \cdot {\rm log}(n))$. Furthermore, this circuit can be implemented as a logspace-uniform Boolean circuit of polynomial size and $\tilde {O}({\rm log} (\kappa) \cdot {\rm log}(n))$ depth as well as $\tilde {O}({\rm log} (\kappa) \cdot {\rm log}(n))$ space: \begin{theorem}[Approximation of the determinant of ${\cal H}_\delta$ and ${\cal S}_\delta$ matrices in near-${\sf{NC}}^1$] \ \noindent For every $n$ and $\epsilon, \delta > 0$ there exists a logspace-uniform Boolean circuit of size ${\rm poly}(n)$ and depth $\tilde{O}(({\rm log} n) \cdot ({\rm log}(1/\delta) + {\rm log} {\rm log}(1/\epsilon)))$ such that for every input $A\in {\cal H}_{\delta}$ it approximates $\text{Det}(A)$ to multiplicative error $1 + \epsilon$. In particular, for $\delta = 1/{\rm poly}{\rm log} (n)$, this circuit can be implemented in depth $\tilde{O}({\rm log}(n))$. The same result holds for ${\cal S}_\delta$ in place of ${\cal H}_{\delta}$. \label{thm:main} \end{theorem} A direct corollary of Theorem \ref{thm:main} is the following: \begin{corollary}[Approximation of the determinant of ${\cal H}_\delta$ and ${\cal S}_\delta$ matrices in near-$\Lspace$] For every $n$ and $\epsilon, \delta > 0$, and $A \in \mathcal{H}_{\delta}$, there is an algorithm using $\tilde{O}(({\rm log} n) \cdot ({\rm log}(1/\delta) + {\rm log}\log(1/\epsilon)))$ space that approximates $\text{Det}(A)$ to multiplicative error $1 + \epsilon$. The same result holds for ${\cal S}_\delta$ in place of ${\cal H}_{\delta}$. \end{corollary} \subsection{CAC interpolation points} Recall the definition of the determinant interpolation polynomial $g_A(z) = \text{Det}((1-z)I + zA)$.
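For intuition about where the roots of $g_A$ lie: if $A$ is Hermitian with eigenvalues $\lambda_1,\ldots,\lambda_n$, then $g_A(z) = \prod_i \left((1-z) + z\lambda_i\right)$, so the roots are the real numbers $z_i = 1/(1-\lambda_i)$. The following Python sketch checks numerically that such roots avoid the disks used by the interpolation path; the sampled spectrum, with absolute eigenvalues in $[\delta, 1]$, is an illustrative stand-in for membership in ${\cal H}_\delta$:

```python
import random

random.seed(0)
n, delta = 6, 0.2

# Sample eigenvalues with delta <= |lam| <= 1, as a stand-in for a
# Hermitian matrix A = Q diag(lam) Q^T in H_delta (illustrative only).
lams = [random.uniform(delta, 1.0) * random.choice([-1.0, 1.0])
        for _ in range(n)]

# g_A(z) = Det((1-z) I + z A) = prod_i ((1-z) + z * lam_i),
# so its roots are z_i = 1 / (1 - lam_i), all real.
roots = [1.0 / (1.0 - lam) for lam in lams]

# No root should fall inside D(0, 1/2) or inside D(1, delta/(1+delta)).
min_dist_origin = min(abs(z) for z in roots)
min_dist_one = min(abs(z - 1.0) for z in roots)
```

A short calculation confirms this in general: $|1-\lambda| \leq 2$ for $\lambda \in [-1,1]$ gives $|z| \geq 1/2$, and $|z - 1| = |\lambda|/|1-\lambda| \geq \delta/(1+\delta)$.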
The proof of Theorem \ref{thm:main} will proceed by using Computational Analytic Continuation (CAC) to approximate the value of $g_A(1) = \text{Det}(A)$ from the low-order derivatives of $g_A(z)$ at $z = 0$. \begin{lemma}[Interpolating segments for well-conditioned Hermitian matrices]\label{lem:CAC2} \ \noindent Let $\delta > 0$, let $A\in {\cal H}_{\delta}$, and let $g_A(z) = \text{Det}( (1-z) I + zA)$. Then there exist $t+1 = O({\rm log}(1/\delta))$ CAC points $s_0,\hdots, s_t \in \mathbb{C}$ satisfying the conditions of Lemma \ref{lem:caccorrectness} with respect to $g_A$, with parameter $\theta > 0.4$. \end{lemma} \begin{proof} Since $A$ is Hermitian, the roots of $g_A(z)$ for $A\in {\cal H}_\delta$ all lie on the real line. And by Theorem \ref{thm:roots} we have that $g_A(z)$ is root-free in ${\cal D}(0,1/2) \cup {\cal D}(1,\frac{\delta}{1+\delta})$. Consider CAC segments of two types: \begin{enumerate} \item \textbf{Cross over:} We cross from $0$ to $1+i/2$ above the real line using $6$ segments: $$ s_0 = 0 \to s_1 = 0.25 i \to s_2 = 0.5 i \to s_3 = 0.5 i + 0.25 \to s_4 = 0.5 i + 0.5 $$ $$ \to s_5 = 0.5 i + 0.75 \to s_6 = 0.5 i + 1 $$ \item \textbf{Decelerate:} We shuttle down from $s_6 = 1+i/2$ to $s_t = 1$ via a sequence of $O({\rm log} 1/\delta)$ decreasing intervals. As we shuttle down, we reduce the interpolation disk radius on each step by a constant multiplicative factor. Let $t = {\rm log}_3(1/\delta) + O(1)$, $r_0 = 1/3$ and $b = 3$. We navigate $$ s_6 = 1 + i/2 \rightarrow s_7 = s_6 - i r_0 \rightarrow s_8 = s_7 - i r_0/b \rightarrow \ldots \rightarrow s_{t-1} = s_{t-2} - i r_0 / b^{t-8}. $$ More formally, for $6 \leq j \leq t-1$, we have $s_j = 1 + i/2 - \frac{i}{2}(1 - (1/3)^{j-6})$. At the end, move one more step from $s_{t-1}$ to $s_t = 1$. Note that in order to do this and still satisfy the CAC requirements we use $0 \leq \Im(s_{t-1}) \leq \delta/5$.
\end{enumerate} We note that for each $j$ the polynomial is root-free in the disk ${\cal D}(s_j,(3/2) \cdot |s_{j+1}-s_j|)$. In particular, for $j \geq 6$ we have $|s_{j+1}-s_j| = (1/3)^{j-5}$, while the closest root to $s_j$ is on the real line, at least $(3/2)(1/3)^{j-5}$ distance away. For the segment from $s_{t-1} = 1 + ic \delta$ (for $0 \leq c \leq 1/5$) to $s_t = 1$, we use that $g_A$ is root-free in ${\cal D}(1,9\delta/10)$. Since ${\rm log}(3/2) > 0.4$, the bound on $\theta$ holds. Also, the segments are of non-increasing length and $g_A(0) = 1$, satisfying the other conditions of Lemma \ref{lem:caccorrectness}. \end{proof} \begin{lemma} [Interpolating segments for well-conditioned Hurwitz stable matrices]\label{lem:CAC3} \noindent Let $\delta > 0$, let $A\in {\cal S}_{\delta}$, and let $g_{-A}(z) = \text{Det}( (1-z) I - zA)$. Then there exist $t+1 = O({\rm log}(1/\delta))$ CAC points $s_0,\hdots, s_t \in \mathbb{C}$ satisfying the conditions of Lemma \ref{lem:caccorrectness} with respect to $g_{-A}$, with parameter $\theta > 0.4$. \end{lemma} \begin{proof} The proof is very similar to the proof of Lemma \ref{lem:CAC2} we just presented. We first move to $z = 1/2$. This is doable because ${\cal P}_{1/2} = \{x : \Re(x) < 1/2\}$ and ${\cal D}(1/2,1/2)$ are root free by Theorem \ref{thm:roots}. Then, since ${\cal D}(1/2,1/2)$ and ${\cal D} (1,\delta/(1+\delta))$ are root free, we take a sequence of decelerating segments from $z = 1/2$ to $z=1$ with lengths shrinking by a constant factor at each step. Here is a way of doing this. Pick $t = {\rm log}_3(1/\delta) + O(1)$: $$ s_0 = 0 \rightarrow s_1 = 1/6 \rightarrow s_2 = 1/3 \rightarrow s_3 = 1/2 \rightarrow s_4 = 1/2 + 1/6 \rightarrow s_5 = 1/2 + 1/3\rightarrow $$ $$s_6 = 1/2 + 1/3 + 1/3^2 \rightarrow \dots \rightarrow s_{t-1} = 1/2 + 1/3 + \ldots + 1/3^{t-5} \geq 1 - \delta/5 \rightarrow s_t = 1. $$ More formally, for $5 \leq j \leq t-1$, we have $s_j = 1/2 + \frac{1}{2} (1 - (1/3)^{j-4})$.
We note that for each $j$ the polynomial is root-free in the disk ${\cal D}(s_j,(3/2) \cdot |s_{j+1}-s_j|)$. In particular, for $j \geq 5$ we have $|s_{j+1}-s_j| = (1/3)^{j-3}$, while the closest root to $s_j$ lies outside $D(1/2,1/2)$, at least $(3/2)(1/3)^{j-3}$ distance away. For the segment from $s_{t-1} \geq 1 - \delta/5$ to $s_t = 1$ we use that $g_{-A}$ is root-free in ${\cal D}(1,9\delta/10)$. Since ${\rm log}(3/2) > 0.4$, the bound on $\theta$ holds. Also, the segments are of non-increasing length and $g_{-A}(0) = 1$, satisfying the other conditions of Lemma \ref{lem:caccorrectness}. \end{proof} \subsection{Proof of Theorem \ref{thm:main}} Consider the following algorithm: \begin{mdframed} \begin{algorithm}\label{alg:main} \begin{enumerate} \noindent \item \textbf{Input:} $\delta > 0$, matrix $A \in {\cal H}_\delta$ or $A \in {\cal S}_\delta$. \item \textbf{Fixed parameters:} \begin{enumerate} \item $\theta = 0.4$ \hfill \% parameter in the CAC algorithm \item $t = O({\rm log}(1/\delta))$ \hfill \% number of CAC segments from $z = 0$ to $z = 1$ \item $k = \ceil{40\left({\rm log}(n/\epsilon \theta)\right)\left(40t({\rm log} t + {\rm log} {\rm log}(n/\epsilon \theta))\right)^{t}}$ \hfill \% number of derivatives CAC uses \item $r = k^{14}$ \hfill \% number of bits to which to round $A$ \item $M = k!$ \hfill \% size of the maximum constant used in the arithmetic circuits \end{enumerate} \item \textbf{Main (for Hermitian $A \in \mathcal{H}_{\delta}$):} \begin{enumerate} \item If $k \geq n$, return the ${\sf{NC}}^2$-circuit exactly computing the determinant. Otherwise perform the following steps: \item Construct $C_{SB}$, the Samuelson-Berkowitz circuit (Theorem \ref{thm:det1}) computing $$g_A(z) = \text{Det}((1-z) \cdot I + z \cdot A).$$ \hfill \ \item For each $0 \leq i \leq k$ construct $C_i = H_i(C_{SB})$, the arithmetic circuit computing the derivative $g_A^{(i)}(0)$ (using Lemma \ref{lem:deriv}).
\hfill \ \item Construct the circuit $C_{CAC} = C_{CAC}(C_0, \ldots, C_k,\Delta_1,\ldots,\Delta_t)$ doing Computational Analytic Continuation from $z = 0$ to $z = 1$ as in Algorithm \ref{algorithm:cac} with steps $\Delta_1,\ldots,\Delta_t$, parameter $\theta$ and using the first $m_0 := k$ derivatives of $g_A$ at $z = 0$. (Lemma \ref{lem:shallowCAC}) \item Reduce the depth of the CAC circuit $C_{\mbox{low-depth}} = D(C_{CAC})$. (Theorem \ref{thm:depth-reduction}) \item Hard-code $\Delta_1, \hdots, \Delta_t$ to get the CAC points from Lemma \ref{lem:CAC2}. \item Compute the Booleanization of the circuit $C_{\mbox{bool}} = B_{r,M}(C_{\mbox{low-depth}})$. (Lemma \ref{lem:Booleanization}) \item Return the Boolean circuit $C_{\mbox{out}} = \exp (C_{\mbox{bool}})$. \label{it:O} \end{enumerate} \item \textbf{Main (for Hurwitz stable $A \in \mathcal{S}_{\delta}$):} The algorithm is essentially the same if $A \in {\cal S}_{\delta}$, but we use $g_{-A}(z)$ instead of $g_A(z)$, the interpolating segments are given by Lemma \ref{lem:CAC3} instead of Lemma \ref{lem:CAC2}, and we return $(-1)^{n}\cdot\exp(C_{\mbox{bool}})$ instead of $\exp(C_{\mbox{bool}})$, because $g_{-A}(1) = \text{Det}(-A) = (-1)^{n} \text{Det}(A)$. \end{enumerate} \vspace{5mm} \end{algorithm} \end{mdframed} In order to prove correctness of Algorithm \ref{alg:main}, we first prove the following technical lemma: \begin{lemma}[$C_{\mbox{low-depth}}$ has low depth] \label{lem:lowdepthmeanslowdepth} If $k < n$, then $C_{\mbox{low-depth}}$ has size ${\rm poly}(n)$, degree $O(k^3)$, and depth $O({\rm log} k)$. Each multiplication gate has fan-in 2 and each addition gate has fan-in at most ${\rm poly}(n)$. \end{lemma} \begin{proof} The Samuelson-Berkowitz circuit $C_{SB}$ constructed using Theorem \ref{thm:det1} is an arithmetic circuit of size ${\rm poly}(n)$. By Lemma \ref{lem:deriv}, for all $0 \leq i \leq k$ the circuit $C_i$ is of size ${\rm poly}(n)$. 
Since $C_i$ computes the derivative of order $i\leq k$ w.r.t. the variable $z$ of $g_A(z)$ at $z = 0$, $C_i$ has degree $O(k)$ in the entries of $A$, by Lemma \ref{lem:der-degree}. Therefore, by Lemma \ref{lem:shallowCAC}, $C_{CAC}$ is of size ${\rm poly}(n)$ and has degree $O(k^3)$ in the entries of $A$ and in $\Delta_1,\ldots,\Delta_t$. It follows by Theorem \ref{thm:depth-reduction} (depth-reduction) that $C_{\mbox{low-depth}}$ is of size ${\rm poly}(nk) = {\rm poly}(n)$ and of depth $O({\rm log} k)$, and that each multiplication gate has fan-in 2 and each addition gate has fan-in ${\rm poly}(n)$. \end{proof} \begin{lemma}[The circuit outputted by Algorithm \ref{alg:main} approximates the determinant of $A$] \label{lem:outputalgmaincorrect} \noindent Algorithm \ref{alg:main} computes a circuit $C_{\mbox{out}}$ that satisfies: $$ C_{\mbox{out}}(A) = \text{Det}(A) \cdot (1 + {\cal E}) , \ \ |{\cal E}| \leq \epsilon $$ \end{lemma} \begin{proof} If $k \geq n$, then the algorithm computes the determinant exactly. Otherwise, by the error bound for CAC in Lemma \ref{lem:caccorrectness}, $C_{CAC}$ outputs an $\epsilon/4$ additive approximation to ${\rm log}(\text{Det}(A))$ when the CAC segments from Lemma \ref{lem:CAC2} (respectively, Lemma \ref{lem:CAC3}) are hard-coded. Applying depth reduction (Theorem \ref{thm:depth-reduction}) does not change the output of $C_{CAC}$, and therefore $C_{\mbox{low-depth}}$ also computes an $\epsilon/4$ additive approximation. We note that the constants used in the arithmetic circuit all have magnitude at most $k! = M$ (the largest constants are in the calculations of the derivatives by Lemma \ref{lem:deriv}), and the input variables have magnitude $\leq 1$. And by Lemma \ref{lem:lowdepthmeanslowdepth}, $C_{\mbox{low-depth}}$ is of size ${\rm poly}(n)$, degree $O(k^3)$, depth $O({\rm log} k)$, has multiplication gates with fan-in 2, and addition gates with fan-in at most ${\rm poly}(n)$.
These are the preconditions to apply the Booleanization procedure (Lemma \ref{lem:Booleanization}). Since $r = k^{14}$ exceeds the precision that Lemma \ref{lem:Booleanization} requires, namely $(2hd^2 \ceil{{\rm log}(m)} + 1) {\rm log}_2(4NMd/\varepsilon) = O(k^9 {\rm log} k)$ for our parameters $h = O({\rm log} k)$, $d = O(k^3)$, $m = {\rm poly}(n)$, $N = n^2$, $M = k!$ (so ${\rm log} M = O(k {\rm log} k)$) and $k = \Omega({\rm log}(n/\epsilon))$, the error bound in Lemma \ref{lem:Booleanization} shows that the Booleanization procedure yields a Boolean circuit $C_{\mbox{bool}}$ that approximates $C_{CAC}$ up to additive $\epsilon/4$ error when the CAC points $s_1,\ldots,s_t$ are hard-coded. Hence $C_{\mbox{bool}}$ computes ${\rm log}(\text{Det}(A))$ up to additive error $\epsilon/2$, and so $C_{\mbox{out}} = \exp(C_{\mbox{bool}})$ approximates $\text{Det}(A)$ up to relative error $e^{\epsilon/2} - 1 \leq \epsilon$. \end{proof} \begin{lemma}\label{lem:bool2} Algorithm \ref{alg:main} computes a ${\rm poly}(n)$-size Boolean circuit of depth $\tilde{O}({\rm log}(n) \cdot ({\rm log}(1/\delta) + {\rm log} {\rm log}(1/\epsilon)))$. \end{lemma} \begin{proof} If $k \geq n$, then the algorithm returns a size-${\rm poly}(n)$, depth-$O(({\rm log} n)^2)$ circuit. In this case, ${\rm log}(k) \geq {\rm log}(n)$, so $t {\rm log}(t) + {\rm log}\log(1/\epsilon) = \Omega({\rm log}(n))$, and since $t = \Theta({\rm log}(1/\delta))$, the claim holds in this case. In the case $k < n$, we also have $\delta > 1/n$. By Lemma \ref{lem:lowdepthmeanslowdepth} and Lemma \ref{lem:Booleanization}, we have that $C_{\mbox{bool}}$ is a circuit of size ${\rm poly}(nrk({\rm log} k)({\rm log} k!)) = {\rm poly}(n)$ and depth $O(({\rm log} k) \cdot {\rm log}(k({\rm log} k)rn)) = O(({\rm log} n) \cdot ({\rm log} k)) = O(({\rm log} n) \cdot (t{\rm log}(t) + {\rm log} {\rm log}(n) + {\rm log}\log(1/\epsilon))) = \tilde{O}(({\rm log} n) \cdot ({\rm log}(1/\delta) + {\rm log}\log(1/\epsilon)))$.
The final exponentiation operation is applied to a ${\rm poly}({\rm log}(n/\epsilon)/\delta)$-bit number, and by the results of \cite{beame1986log,chiu2001division} it can be implemented by a logspace-uniform ${\rm poly}(n)$-size circuit of depth $O({\rm log}(1/\delta) + {\rm log}\log(n/\epsilon))$, which is negligible overhead. \end{proof} \begin{lemma} \label{lem:algmainlogspaceuniformity} The circuit $C_{\mbox{out}}$ can be computed by Algorithm \ref{alg:main} in space $O({\rm log}(n))$. \end{lemma} \begin{proof} This follows from the fact that all of the operations involved can be done in logspace: computing $C_{SB}$ (Theorem \ref{thm:det1}), taking derivatives (Lemma \ref{lem:deriv}), CAC interpolation (Lemma \ref{lem:caccorrectness}), Booleanization (Lemma \ref{lem:Booleanization}), and, by \cite{beame1986log,chiu2001division}, taking the exponential. \end{proof} Theorem \ref{thm:main} follows from Lemmas \ref{lem:outputalgmaincorrect}, \ref{lem:bool2}, and \ref{lem:algmainlogspaceuniformity}. \section{Preliminaries} \subsection{Basics} Given a complex matrix $A \in \mathbb{C}^{n \times n}$, let $A^{\dag}$ denote its conjugate transpose. $A$ is Hermitian if $A = A^{\dag}$, in which case the eigenvalues of $A$ are real. $A$ is positive semi-definite (PSD) if it is Hermitian and has nonnegative eigenvalues. We write $A \succeq 0$ if $A$ is PSD. For Hermitian matrices $A$ and $B$, we write $A \succeq B$ if $A - B \succeq 0$, and we note that $\succeq$ defines a valid partial ordering. The absolute value of $A$ is defined as $|A| = \sqrt{A^{\dag}A}$. The singular values $0\leq s_n(A) \leq \dots \leq s_1(A)$ of $A$ are the eigenvalues $s_i(A) = \lambda_i(|A|)$ of $|A|$. The spectral norm $\|A\|_2$ is the maximum singular value $s_1(A)$. The max-norm $\|A\|_{\max} = \max_{i,j} |A_{i,j}|$ is the maximum absolute value of an entry in $A$. \begin{definition}[Condition number] The condition number of $A$ is $\kappa(A) := s_1(A) / s_n(A)$.
\end{definition} In this paper, we will focus on well-conditioned Hermitian and Hurwitz stable matrices that are normalized to have spectral norm at most $1$: \begin{definition} [Well-conditioned matrices] Let $0 \leq \delta \leq 1$. \begin{enumerate} \item[(1)] The class of well-conditioned Hermitian matrices with parameter $\delta$ is defined as $$ {\cal H}_\delta = \{A \in \mathbb{C}^{n\times n} : A = A^\dag, \delta \cdot I \preceq |A| \preceq I \}. $$ \item[(2)] The class of well-conditioned Hurwitz stable matrices with parameter $\delta$ is defined as $$ {\cal S}_\delta = \{A \in \mathbb{C}^{n\times n} : \forall i, \Re(\lambda_i(A)) < 0, \delta \cdot I \preceq |A| \preceq I \}. $$ \end{enumerate} Note that a matrix $A \in {\cal H}_{\delta} \cup {\cal S}_{\delta}$ has condition number $\kappa \leq 1/\delta$. \end{definition} One of the main complexity classes discussed in this paper is ${\sf{NC}}$, defined as follows. \begin{definition} [Nick's class] ${\sf{NC}} [h (n)]$ is the class of $\{0,1\}^n \rightarrow \{0,1\}^*$ Boolean functions computable by a logspace-uniform family of Boolean circuits, with polynomial size, depth $O(h)$, and fan-in $2$. ${\sf{NC}}^i := {\sf{NC}} [{\rm log}^i n]$. \end{definition} \subsection{Arithmetic circuits} \begin{definition}[Polynomials] \noindent Let $g \in \mathbb{F} [x_1, \ldots, x_N]$ be a multivariate polynomial in variables $x_1, \ldots, x_N$, over field $\mathbb{F}$. The degree of a monomial of $g$ is the total number of variables in the monomial (counting with multiplicity). The total degree of $g$ ($\deg g$) is the maximum degree of a monomial in $g$. \end{definition} \begin{definition}[Arithmetic circuits] An arithmetic circuit is a directed acyclic graph on nodes $v_1,\ldots,v_s$, called gates. If a node has indegree 0, it is called an input gate, and is labelled with either a field element in $\mathbb{F}$ or a variable in $\{x_1,\ldots,x_N\}$.
Otherwise $v$ is labelled as either an addition or a multiplication gate. Finally, $v_s$ is the ``output'' gate of the circuit. Each gate $v$ recursively computes a polynomial $p_v \in\mathbb{F}[x_1,\ldots,x_N]$. If $v$ is an input gate, then $p_v$ is its label. Otherwise, let $w_1,\ldots,w_t$ be the inputs of $v$. If $v$ is an addition gate then it computes $p_v = \sum_{i=1}^t p_{w_i}$, and if $v$ is a multiplication gate then it computes $p_v = \prod_{i=1}^t p_{w_i}$. Overall, the arithmetic circuit is said to compute the polynomial $p_{v_s}$ computed at its output gate. If all gates have indegree 0 or indegree 2, then the circuit is said to have fan-in 2. Except when explicitly stated otherwise, all arithmetic circuits in this paper have fan-in 2. \label{def:circuits} \end{definition} In this paper, we will use two fundamental algorithms from the arithmetic circuit literature. The first algorithm, which can be traced back to Strassen \cite{strassen73vermeidung}, allows us to efficiently convert an arithmetic circuit ${\cal C}$ computing a polynomial $p(x_1,\ldots,x_N,z) \in \mathbb{F}[x_1,\ldots,x_N,z]$ into an arithmetic circuit ${\cal C}'$ computing the coefficient of $z^k$ in $p$ (which itself is a polynomial in $\mathbb{F}[x_1,\ldots,x_N]$). Formally: \begin{definition}For any polynomial $g(z,x_1,\ldots,x_N) \in \mathbb{F}[z,x_1,\ldots,x_N]$ and integer $k \geq 0$, let $$ [z^k]g(z,x_1,\ldots,x_N) \in \mathbb{F}[x_1,\ldots,x_N] $$ denote the coefficient of $z^k$ in $g$: i.e., $$ g(z,x_1,\hdots, x_N) = \sum_{k=0}^{\infty} ([z^k]g(z,x_1,\ldots,x_N)) \cdot z^k $$ \end{definition} Note that up to a factor of $k!$, the polynomial $[z^k]g$ is the same as the $k$th partial derivative of $g$ with respect to $z$, evaluated at $z = 0$: $$k! \cdot [z^k]g(z,x_1,\ldots,x_N) = \frac{\partial^k}{\partial z^k} g(z,x_1,\ldots,x_N) |_{z=0}.$$ The result that we use is stated below.
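As a quick sanity check on this identity, the coefficient extraction can be run numerically. The snippet below is a Python sketch with helper names of our own (it is not part of the construction): it multiplies out a product of small factors while keeping only coefficients up to $z^k$ — the same truncated-convolution rule used at multiplication gates in the proof below — and then reads off $g^{(k)}(0)$ as $k! \cdot [z^k]g$.

```python
from fractions import Fraction
from math import factorial

def poly_mul_trunc(p, q, k):
    """Multiply coefficient lists p, q, keeping only z^0..z^k
    (the convolution rule used at a multiplication gate)."""
    out = [Fraction(0)] * (k + 1)
    for i, a in enumerate(p[:k + 1]):
        for j, b in enumerate(q[:k + 1 - i]):
            out[i + j] += a * b
    return out

# g(z) = (1 + 2z)(1 - z)(1 + z^2), built factor by factor, truncated at z^k.
k = 3
g = [Fraction(1)]
for factor in ([1, 2], [1, -1], [1, 0, 1]):
    g = poly_mul_trunc(g, [Fraction(c) for c in factor], k)

coeff_k = g[k]                    # [z^k] g
deriv_k = factorial(k) * coeff_k  # g^{(k)}(0) = k! * [z^k] g
```

Here $g(z) = 1 + z - z^2 + z^3 - 2z^4$, so $[z^3]g = 1$ and $g^{(3)}(0) = 3! \cdot 1 = 6$.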
\begin{lemma}[Computing the single-variable derivative of an arithmetic circuit, \cite{strassen73vermeidung}]\label{lem:deriv} Let $$ g(z, x_1, \ldots, x_N) \in \mathbb{F}[z,x_1,\ldots,x_N] $$ be a polynomial computed by a fan-in-2 arithmetic circuit ${\cal C}$ of height $h$. Then, for any $k\geq 0$, $[z^k]g(z,x_1,\ldots,x_N)$ can be computed by a fan-in-2 arithmetic circuit ${\cal C}'$ of size $|{\cal C}'| = O(k^2 \cdot |{\cal C}|)$ and depth $(k+1)h$. Moreover, ${\cal C}'$ can be computed from ${\cal C}$ and $k$ in logarithmic space. \end{lemma} \begin{proof} For each node $v$ of the circuit ${\cal C}$, let $p_v(x_1,\hdots, x_N, z)$ denote the polynomial computed at $v$. We construct a circuit ${\cal C}'$ computing $[z^k]g$ with a dynamic program based on ${\cal C}$: \begin{enumerate} \item For each gate $v$ of ${\cal C}$ and each integer $0 \leq i \leq k$, add a gate $(v,i)$ to $C'$. We will guarantee that the polynomial $p'_{(v,i)}$ computed at $(v,i)$ equals $[z^i]p_v(z,x_1,\ldots,x_N)$. \item For each $(v,i)$ such that $v$ is an input gate of ${\cal C}$, let $(v,i)$ be an input gate of ${\cal C}'$ and label it by $p'_{(v,i)} = [z^i]p_v \in \mathbb{F} \cup \{x_1,\ldots,x_N\}$. \item For each $(v,i)$ such that $v$ is an addition gate of ${\cal C}$ with inputs $w_1,w_2$, let $p'_{(v,i)} = p'_{(w_1,i)} + p'_{(w_2,i)}$. \item For each $(v,i)$ such that $v$ is a multiplication gate of ${\cal C}$ with inputs $w_1,w_2$, let $p'_{(v,i)} = \sum_{j=0}^i p'_{(w_1,j)} \cdot p'_{(w_2,i-j)}$. This can be implemented by adding at most $2i \leq 2k$ intermediate addition and multiplication gates. \end{enumerate} By induction on the depth, the polynomial $p'_{(v,i)}$ computed at each gate $(v,i)$ equals $[z^i]p_v$. Let the output gate of ${\cal C}'$ be $(v_s,k)$, where $v_s$ is the output gate of ${\cal C}$. Therefore ${\cal C}'$ correctly computes $[z^k]p_{v_s}$. This entire construction can be implemented in logarithmic space.
Finally, $|{\cal C}'| = O(k^2 |{\cal C}|)$ because for each gate $v$ in ${\cal C}$ at most $2k(k+1)$ gates are added in the construction of ${\cal C}'$. And ${\cal C'}$ has depth $(k+1)h$ because each gate is replaced with a gadget of height at most $k+1$. \end{proof} The second classical result that we require is the depth-reduction theorem of Valiant-Skyum-Berkowitz-Rackoff for fast parallel computation of low-degree polynomials. Informally, this theorem states that if a low-degree polynomial can be computed by a small arithmetic circuit, then it can also be computed in low parallel complexity: \begin{theorem}[Depth reduction for arithmetic circuits, \cite{valiant1983fast}] Let $g(x_1,\ldots,x_N) \in \mathbb{F}[x_1,\ldots,x_N]$ be a multivariate polynomial of total degree $d$ computed by a fan-in-2 arithmetic circuit ${\cal C}$ of size $s$. Then there is an arithmetic circuit $D({\cal C})$ of size ${\rm poly}(sd)$ and depth $O({\rm log} d)$ that computes $g$. Moreover, $D({\cal C})$ can be computed from ${\cal C}$ in logarithmic space, each multiplication gate of $D({\cal C})$ has fan-in 2, and each addition gate of $D({\cal C})$ has fan-in ${\rm poly}(sd)$. \label{thm:depth-reduction} In particular, by replacing each addition gate by a $O({\rm log}(sd))$-depth tree of fan-in 2 addition gates, $D(\mathcal{C})$ can be transformed into a $O(({\rm log} d) \cdot ({\rm log} s + {\rm log} d))$-depth arithmetic circuit of size ${\rm poly}(sd)$ and fan-in 2. \end{theorem} Let us illustrate this result with an example application to the exact computation of the determinant. We know that the determinant $\text{Det}(A) = \sum_{\sigma \in S_n} \text{sgn}(\sigma) \prod_{i \in [n]} A_{i\sigma(i)}$ is a degree-$n$ polynomial in the entries of $A$, and that it has an arithmetic circuit of size ${\rm poly}(n)$\footnote{For example, this small circuit can be constructed from a division-free variant of Gaussian elimination.}.
Therefore, Theorem \ref{thm:depth-reduction} implies that there is a ${\rm poly}(sd) = {\rm poly}(n)$-size and $O(({\rm log} d) \cdot ({\rm log} sd)) = O(({\rm log} n)^2)$-depth arithmetic circuit computing $\text{Det}(A)$. This result was mentioned in the introduction. An $O(({\rm log} n)^2)$-depth circuit for the exact computation of $\text{Det}(A)$ is not sufficient for our purposes. Recall that our goal is instead to multiplicatively approximate $\text{Det}(A)$ using depth $\tilde{O}(({\rm log} n) \cdot ({\rm log} \kappa))$, which scales with the condition number $\kappa$ of $A$. Hence, when $A$ is particularly well-conditioned (e.g., $\kappa = O({\rm poly}{\rm log}(n))$), then our circuit will have $o(({\rm log} n)^2)$ depth. At a high level, in order to achieve this result we will also apply Theorem \ref{thm:depth-reduction}. However, instead of applying the theorem directly to $\text{Det}(A)$ we will apply it to a ${\rm poly}(n)$-size-computable degree-${\rm poly}(\kappa)$ polynomial that approximates $\text{Det}(A)$. Assuming without loss of generality that $\kappa = O({\rm poly}(n))$, this will give a $O(({\rm log} n) \cdot ({\rm log} \kappa))$-depth arithmetic circuit for the polynomial approximating $\text{Det}(A)$. \subsection{From arithmetic circuits to Boolean circuits} In this paper we use arithmetic circuits to represent and manipulate low-degree polynomials over $\mathbb{C}$. In order to evaluate low-depth arithmetic circuits, we have to translate them into low-depth Boolean circuits. This takes some care, because, when we convert arithmetic circuits into Boolean circuits, we cannot represent the values computed at each gate up to arbitrary precision.
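The rounding step can be made concrete with a toy computation. In the Python sketch below (the polynomial and helper names are our own illustration, not the actual Booleanization), each input is rounded to $r$ bits after the binary point and everything downstream is computed exactly over the rationals, so the only error is the input perturbation, which decays like $2^{-r}$.

```python
from fractions import Fraction

def round_to_r_bits(x, r):
    """Round x to r bits after the binary point (the input-rounding step)."""
    return Fraction(round(x * 2**r), 2**r)

def eval_circuit(xs):
    # toy low-degree polynomial standing in for the circuit's output
    a, b, c = xs
    return a * b * c + a + b + c

inputs = [Fraction(1, 3), Fraction(1, 7), Fraction(2, 3)]
exact = eval_circuit(inputs)
for r in (8, 16, 32):
    approx = eval_circuit([round_to_r_bits(x, r) for x in inputs])
    # each input moves by at most 2^{-r-1}, so the output moves by O(2^{-r})
    assert abs(approx - exact) <= Fraction(16, 2**r)
```

What the lemma below additionally controls is the bit-length of the intermediate values, so that exact evaluation after rounding stays small and shallow as a Boolean circuit.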
Our approach is to Booleanize an arithmetic circuit $\mathcal{C}$ on variables $x_1,\ldots,x_N$ by rounding every input to $r$ bits of precision and then replacing each arithmetic operation in the circuit with the corresponding exact Boolean implementation, assuming that the inputs $x$ are such that $\max_i |x_i|$ is bounded by some number $M$. The resulting Boolean circuit is denoted by $B_{r,M}({\cal C})$. In order to ensure that $B_{r,M}({\cal C})$ remains small and low-depth, we have to bound the number of bits used to represent the intermediate values in the computation. In order to ensure that $B_{r,M}({\cal C})$ is accurate, we also have to bound the error incurred by the rounding step. These correctness guarantees are provided by the following lemma: \begin{lemma} \label{lem:Booleanization} Let $\varepsilon > 0$, and let ${\cal C}$ be a circuit over $\mathbb{C}$ of depth $h$, computing a polynomial $g(x_1,\ldots,x_N)$ of degree $d$. Suppose that each multiplication gate of ${\cal C}$ is of fan-in 2, and each addition gate is of fan-in at most $m$. For technical reasons, suppose that all input gates of ${\cal C}$ are labelled by a variable in $\{x_1,\ldots,x_N\}$ (i.e., there are no input gates labelled by a constant in $\mathbb{C}$). If $r > (2hd^2 \ceil{{\rm log}(m)} + 1) {\rm log}_2(4NMd/\varepsilon)$, then $B_{r,M}({\cal C})$ is a logspace-uniform Boolean circuit of size ${\rm poly}(|\mathcal{C}|dhr({\rm log} m){\rm log}(M))$ and depth $O(h \cdot {\rm log}(dhrmM))$. Moreover, $B_{r,M}({\cal C})$ computes a function $\tilde{g}(x_1,\ldots,x_N)$ such that for all $a_1,\ldots,a_N \in \mathbb{C}$ with $\max_i |a_i| \leq M$, $$|\tilde{g}(a_1,\ldots,a_N) - g(a_1,\ldots,a_N)| < \varepsilon.$$ \end{lemma} Note that Lemma \ref{lem:Booleanization} requires that each of the input gates of ${\cal C}$ be labelled with an input variable: in other words, none of the input gate labels are constants from $\mathbb{C}$.
We place this technical restriction so that we can conveniently bound the bit complexity of the values computed by the circuit. This is not an important restriction in our case, because for all of the arithmetic circuits considered in this paper, the degree of the polynomial computed by the circuit does not significantly change if we replace each input gate constant $c \in \mathbb{C}$ with a variable $y_c$ whose value will eventually be hard-coded to $c$. The proof of Lemma \ref{lem:Booleanization} is deferred to Appendix \ref{app:Booleanizationdetails}. \section{Determinants and Complex Polynomials} The determinant of an $n\times n$ matrix can be computed efficiently by a well-known result of Samuelson and Berkowitz: \begin{theorem}[Samuelson-Berkowitz \cite{B84, S42}]\label{thm:det1} The determinant of an $n\times n$ matrix can be computed by an arithmetic circuit of size ${\rm poly}(n)$ and fan-in 2. \end{theorem} Similarly to the line of work pioneered by Barvinok \cite{Bar16}, we analyze this problem using tools for analyzing complex polynomials: \begin{definition}[Disks, sleeves and root-free area] For a polynomial $g: \mathbb{C} \rightarrow \mathbb{C}$ and $S \subseteq \mathbb{C}$, $g$ is root free inside $S$ if $z \in S \implies g(z) \neq 0$. In this work we will use three specific kinds of regions $S$: \begin{enumerate} \item Open disk: denote an open disk of radius $r \geq 0$ around $c \in \mathbb{C}$ by ${\cal D} (c,r) = \{ z \in \mathbb{C} : |z - c| < r\}$. \item Unit sleeve: The unit sleeve with width $\delta$ is denoted with ${\cal S}_\delta := \{z \in \mathbb{C} : |\Im (z)| < \delta \text { and } -\delta < \Re (z) < 1+ \delta\}$. \item Half-plane: The left-hand side of a point $x \in \mathbb{R}$ is defined as ${\cal P}_x := \{z \in \mathbb{C} : \Re (z) < x \}$. \end{enumerate} \label{def:disk-sleeve} \end{definition} \subsection{The determinant interpolation polynomial} Let $g_A(z) = \text{Det} (I(1-z) + z A)$. 
Note that \begin{align*} g_A(0) &= 1,\\ g_A(1) &= \text{Det}(A). \end{align*} \begin{lemma} [Derivatives of $g_A$] The $k$-th derivative of $g_{A}(z) = \text{Det}(I(1-z) + z A)$ at $z=0$ is a polynomial of degree $k$ in the entries of $A$. \label{lem:der-degree} \end{lemma} \begin{proof} We use the notation $B \rightarrow_k A$ to denote that $B$ is a $k\times k$ principal sub-matrix of $A$. We show that the $k$-th derivative of $g_A(z)$ at $z=0$ is $g^{(k)}(0) = k! \sum_{B \rightarrow_k A} \text{Det} (B- I)$: \begin{align*} g^{(k)}(0) &= k! \sum_{\sigma \in S_n} \text{sgn}(\sigma) \sum_{i_1 < \ldots < i_k} \prod_{j\notin \{i_1, \ldots, i_k\}}\delta_{j, \sigma(j)} \prod_{j\in \{i_1, \ldots, i_k\}}(a_{j,\sigma(j)} -\delta_{j, \sigma(j)})\\ &= k! \sum_{i_1 < \ldots < i_k} \sum_{\sigma \in S_{\{i_1, \ldots, i_k\}}} \text{sgn}(\sigma) \prod_{j\in \{i_1, \ldots, i_k\}}(a_{j,\sigma(j)} -\delta_{j, \sigma(j)})\\ &= k! \sum_{B \rightarrow_k A} \text{Det} (B- I). \end{align*} Each summand $\text{Det} (B- I)$ is a polynomial of degree $k$ in the entries of $B$, and hence of degree $k$ in the entries of $A$. \end{proof} \begin{theorem}[Roots vs. eigenvalues] Let $0 < \delta < 1$. \begin{enumerate} \item [(1)] (Hermitian) If $A \in {\cal H}_\delta$ then $g_A$ is root-free inside ${\cal D} (0,1/2) \cup {\cal D} (1, \frac \delta {1+\delta})$. \item [(2)] (Hurwitz stable) If $A \in {\cal S}_\delta$ then $g_{-A}$ is root-free inside ${\cal P}_{1/2} \cup {\cal D} (1/2,1/2) \cup {\cal D} (1, \frac \delta {1+\delta})$. \end{enumerate} \label{thm:roots} \end{theorem} \begin{proof} Let $\omega_1, \ldots, \omega_n$ be the eigenvalues of $A$. Then $$ g_A(z) = \text{Det} (I(1-z) + z A) = \prod_{i \in [n]} ((1-z) + z \omega_i). $$ For any $\omega_i \neq 1$, $z_i := \frac 1 {1- \omega_i}$ is a root of $g_A$. Also, if $A$ has a unit eigenvalue, the corresponding factor is identically $1$ and introduces no root of $g_A$.
\begin{enumerate} \item [(1)] First, $-1 \leq \omega_i \leq 1$ implies that $z_i \geq 1/2$, which establishes ${\cal D} (0,1/2)$ as a root-free disk. Second, $\delta \cdot I \preceq |A|$ implies $\forall i \in [n], \quad |\omega_i| \geq \delta$, which implies that $\forall i \in [n]$, either $z_i \geq \frac 1 {1-\delta}$ or $z_i \leq \frac 1 {1+\delta}$. This establishes ${\cal D} (1, \frac \delta {1+\delta})$ as a root-free disk. \item [(2)] When $A \in {\cal S}_\delta$ the eigenvalues of $-A$ lie inside $\Lambda = \{\omega \in \mathbb{C} : \delta \leq |\omega| \leq 1, \Re(\omega) > 0\}$. We compute the image of $\Lambda$ under the map $z = \frac{1}{1-\omega}$ through the following observations: \begin{enumerate} \item [(i)] $ |\omega| \geq \delta \Leftrightarrow |z-1| \geq \delta \cdot |z| \Rightarrow |z-1| \geq \delta (1- |1-z|) \Leftrightarrow |1-z| \geq \frac{\delta}{1+\delta},$ \item [(ii)] $|\omega| \leq 1 \Leftrightarrow |z-1| \leq |z| \Leftrightarrow \Re(z) \geq 1/2,$ \item [(iii)] $\Re(\omega) \geq 0 \Leftrightarrow |1 + \omega | \geq |1-\omega| \Leftrightarrow |2 z - 1| \geq 1 \Leftrightarrow |z-1/2| \geq 1/2$. \end{enumerate} Items (i), (ii) and (iii) establish root-freeness inside ${\cal D} (1, \frac \delta {1+\delta})$, ${\cal P}_{1/2}$ and ${\cal D} (1/2,1/2)$, respectively. \end{enumerate} \end{proof} \input{CAC} \subsection{Shallow Circuits for CAC} In this section we establish that Algorithm \ref{algorithm:cac} can in fact be computed by shallow circuits. To do that, we first note that the $k$ lowest derivatives of ${\rm log}(g(z))$ can be computed efficiently from the lowest $k$ derivatives of $g(z)$: \begin{lemma}\label{lem:brunodifaasubstitute} Let $g(z)$ be an analytic function that is root-free in an open set $U$ containing $0$, and let $g(0) = 1$. Let $f(z) = {\rm log}(g(z))$.
Then for each $k > 0$, there is an arithmetic circuit of fan-in 2 that receives as input the first $k$ derivatives of $g$ at $0$, $$g^{(0)}(0),\ldots,g^{(k)}(0),$$ and computes $f^{(k)}(0)$. Moreover, the circuit is of size ${\rm poly}(k)$, logspace-uniform, and computes a polynomial of degree $k$. \end{lemma} \begin{proof} The Fa\`a di Bruno formula, which generalizes the chain rule to higher-order derivatives, states that given a composition of two functions $f(z) = h(g(z))$, the derivative $f^{(k)}(0)$ depends only on the first $k$ derivatives of $h$ at $z=g(0)=1$ and $g$ at $z=0$. In particular, we may define $$h(z) = {\rm log}(z), \quad \tilde{h}(z) = \sum_{i=1}^k \frac{(-1)^{i+1}}{i} \cdot (z-1)^i$$ and $$\tilde{g}(z) = 1+\sum_{i=1}^k \frac{g^{(i)}(0)}{i!} \cdot z^i,$$ and by Fa\`a di Bruno, $f(z) = h(g(z))$ will have the same $k$th derivative as $\tilde{f}(z) = \tilde{h}(\tilde{g}(z))$: $$f^{(k)}(0) = \tilde{f}^{(k)}(0).$$ Since $\tilde{f}(z)$ has a size-$O(k^2)$ logspace-uniform arithmetic circuit in $g^{(0)}(0),\ldots,g^{(k)}(0),z$, it follows by the derivative calculation lemma (Lemma \ref{lem:deriv}) that $\tilde{f}^{(k)}(0) = f^{(k)}(0)$ has a size-${\rm poly}(k)$ logspace-uniform arithmetic circuit in $g^{(0)}(0),\ldots,g^{(k)}(0)$. Moreover, $\tilde{f}^{(k)}(0)$ is clearly of degree at most $k$ in $g^{(0)}(0),\ldots,g^{(k)}(0)$. \end{proof} We now use this lemma to establish that CAC can be computed by small circuits of low degree: \begin{lemma}[Low-degree circuits for CAC]\label{lem:shallowCAC} \noindent Under the conditions of Lemma \ref{lem:caccorrectness}, Algorithm \ref{algorithm:cac} can be implemented by a logspace-uniform arithmetic circuit of size ${\rm poly}(m_0)$ that computes a polynomial of degree $O(m_0^2)$ in $$ g^{(0)}(0),\ldots,g^{(m_0)}(0),\Delta_1,\ldots,\Delta_t.
$$ \end{lemma} \begin{proof} Construct arithmetic circuits for $f^{(0)}(0),\ldots,f^{(m_0)}(0)$, the first $m_0$ derivatives of $f$ at $0$, using the procedure from Lemma \ref{lem:brunodifaasubstitute}. These circuits are logspace-uniform, and are of size at most ${\rm poly}(m_0)$ and degree at most $m_0$. For each $0 \leq i \leq t$ and $0 \leq j \leq m_i$, construct a size-$(1+10im_0^3)$ arithmetic circuit computing $\hat{f}_i^{(j)}$, with degree at most $m_0-j+1$ in the variables $f^{(0)}(0),\ldots,f^{(m_0)}(0),\Delta_1,\ldots,\Delta_t$. This construction is performed inductively on $i$. The base case $i=0$ is clear because $\hat{f}^{(j)}_0 = f^{(j)}(0)$. For the inductive step, $\hat{f}_{i+1}^{(0)},\ldots,\hat{f}_{i+1}^{(m_{i+1})}$ can all be computed from $\hat{f}_i^{(0)},\ldots,\hat{f}_i^{(m_i)}$ using step \ref{it:iteration} of Algorithm \ref{algorithm:cac}, which can be implemented at an extra cost of $10m_i^3 \leq 10m_0^3$ gates. Moreover each $\hat{f}_{i+1}^{(j)}$ has degree $\max_{0 \leq p \leq m_i - j} \deg \hat{f}_i^{(p+j)} + p \leq (m_0-(p+j)+1+p) = m_0-j+1$ by the inductive hypothesis. Let ${\cal C''}$ be the degree-$(m_0+1)$, size-$(1 + 10tm_0^3)$ circuit computing $\hat{f}_t^{(0)}$ in terms of $$ f^{(0)}(0), \ldots, f^{(m_0)}(0), \quad \mbox{and} \quad \Delta_1, \ldots, \Delta_t. $$ Compose ${\cal C''}$ with the degree-$m_0$, size-${\rm poly}(m_0)$ circuits computing $f^{(0)}(0),\ldots,f^{(m_0)}(0)$ in terms of $g^{(0)}(0),\ldots,g^{(m_0)}(0)$ in order to obtain a degree-$O(m_0^2)$ size-${\rm poly}(t,m_0) = {\rm poly}(m_0)$ circuit ${\cal C'}$ computing $\hat{f}_t^{(0)}$ in terms of $g^{(0)}(0),\ldots,g^{(m_0)}(0),\Delta_1,\ldots,\Delta_t$. \end{proof} Combining Lemma \ref{lem:shallowCAC} with Theorem \ref{thm:depth-reduction} (the depth-reduction theorem for arithmetic circuits) implies that the CAC algorithm can be computed by arithmetic circuits of ${\rm poly}(m_0)$ size, fan-in 2, and depth $O(({\rm log} m_0)^2)$.
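The series manipulation behind Lemma \ref{lem:brunodifaasubstitute} can be checked numerically. The Python sketch below (helper names are our own; exact rational arithmetic stands in for the arithmetic circuit) recovers the derivatives of $f = {\rm log}(g)$ at $0$ from those of $g$ by truncating ${\rm log}(1+u)$ at order $k$, exactly as in the definition of $\tilde{h} \circ \tilde{g}$.

```python
from fractions import Fraction
from math import factorial

def mul_trunc(p, q, k):
    """Product of two coefficient lists, truncated at z^k."""
    out = [Fraction(0)] * (k + 1)
    for i, a in enumerate(p[:k + 1]):
        for j, b in enumerate(q[:k + 1 - i]):
            out[i + j] += a * b
    return out

def log_derivs(g_derivs, k):
    """First k derivatives of f = log(g) at 0 from those of g (with g(0) = 1),
    via the truncation sum_{i=1..k} (-1)^{i+1}/i * (g_tilde - 1)^i."""
    u = [Fraction(d, factorial(i)) for i, d in enumerate(g_derivs[:k + 1])]
    u[0] -= 1                                  # u = g_tilde - 1, so u(0) = 0
    f = [Fraction(0)] * (k + 1)
    upow = [Fraction(1)] + [Fraction(0)] * k   # running power u^i
    for i in range(1, k + 1):
        upow = mul_trunc(upow, u, k)
        f = [c + Fraction((-1) ** (i + 1), i) * d for c, d in zip(f, upow)]
    return [factorial(i) * c for i, c in enumerate(f)]

# sanity check: g(z) = 1/(1-z) has g^{(i)}(0) = i!, while f = -log(1-z)
# has f^{(i)}(0) = (i-1)! for i >= 1.
k = 5
f_derivs = log_derivs([Fraction(factorial(i)) for i in range(k + 1)], k)
```

Because $u(0) = 0$, the powers $u^i$ with $i > k$ contribute nothing below order $z^{k+1}$, which is why the truncation at $i = k$ is exact.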
We will use this observation in the proof of the main theorem. \input{mainresults} \bibliographystyle{hyperabbrv}
\section{Definitions and discussion} \subsection{Automatic groups} In~\cite{Can}, J.W. Cannon proved certain geometrical properties of the Cayley graph of cocompact discrete hyperbolic groups. Two years later, in 1986, W. Thurston noticed that some of these properties could be reformulated in terms of finite state automata ({\sf fsa}; this abbreviation will be used for both the singular and plural). In particular, the geodesic paths in the Cayley graph that start at the origin form a regular set or, equivalently, they form the language of an {\sf fsa}. Furthermore, any pair of such geodesic paths that end at the same or neighbouring vertices lie within a bounded distance of each other. It can be deduced that such geodesic pairs also form the language of an {\sf fsa}. This led Thurston to formulate the following general definition. \begin{defn} \label{aut} Let $G$ be a group with finite generating set $X$, let $A = X \cup X^{-1}$, and let $A^\prime = A \cup \{\$\}$, where $\$ \not\in A$. Then $G$ is said to be {\em automatic} (with respect to $X$), if there exist {\sf fsa} $W$ and $M_a$ for each $a \in A^\prime$, such that\\ (i)\stdspace $W$ has input alphabet $A$, and accepts at least one word in $A^*$ mapping onto each element of $G$.\\ (ii)\stdspace Each $M_a$ has input alphabet $A^\prime \times A^\prime$, it accepts only padded pairs, and it accepts the padded pair $(w^+,x^+)$ for $w,x \in A^*$ if and only if $w,x \in L(W)$ and $wa =_G x$. \end{defn} Here $A^*$ as usual denotes the set of words in $A$. For $w \in A^*$, ${\overline {w}}$ denotes the element of $G$ onto which $w$ maps; for $w, x \in A^*$, we also use $w =_G x$ to mean that $w,x$ map onto the same element of $G$. The extra symbol $\$$ maps onto the identity element of $G$. For $w,x \in A^*$, the associated {\em padded pair} $(w^+,x^+) \in (A^\prime \times A^\prime)^*$ is obtained by adjoining symbols $\$$ to the end of the shorter of $w$ and $x$ to make them have equal length.
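The padded-pair construction is easy to make concrete; here is a minimal Python sketch (the function names are ours, and the padding symbol $\$$ is written as the character \verb-$-):

```python
PAD = "$"  # the padding symbol adjoined to A to form A'

def padded_pair(w, x):
    """Form the padded pair (w+, x+): pad the shorter of w, x with $
    so both have equal length, then read the two words letterwise."""
    n = max(len(w), len(x))
    wp = w + PAD * (n - len(w))
    xp = x + PAD * (n - len(x))
    return list(zip(wp, xp))
```

For example, \verb-padded_pair("ab", "abba")- yields the letter pairs \verb-(a,a)(b,b)($,b)($,a)-, i.e.\ a word over the alphabet $A^\prime \times A^\prime$ of the kind that a multiplier $M_a$ reads.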
The language of the {\sf fsa}\ $W$ is denoted by $L(W)$. For general properties of finite state automata, the reader is referred to any textbook on automata or formal language theory, such as~\cite{HU}. In the definition, $W$ is called the {\em word-acceptor} and the $M_a$ the {\em multiplier} automata. The complete collection $\{W,M_a\}$ is known as an {\em automatic structure} for $G$. Note that the multiplier $M_\$$ recognises equality in $G$ between words in $L(W)$. From a given automatic structure, we can always use $M_\$$ to construct another one such that $W$ accepts a unique word mapping onto each element of $G$; we simply choose the lexicographically least amongst the shortest words that map onto each element as the `normal form' representative of that element. We shall call such a $W$ a word-acceptor with uniqueness. The best general reference for the theory of automatic groups is the multi-author book~\cite{ECHLPT}. In particular, it turns out that the automaticity of $G$ is independent of the choice of generating set $X$. This immediately suggests that the definition is a sensible one, because it means that automaticity is an algebraic property of the group, rather than just a geometrical property of its Cayley graph. All finite groups are easily seen to be automatic; in fact the class of automatic groups is invariant under finite variations, such as sub- and super-groups of finite index. It is also closed under direct and free products, and includes, for example, all word-hyperbolic groups, braid groups, Coxeter groups and Artin groups of finite and of `large' type. All automatic groups have finite presentations. Some of the most important and useful applications of this theory only involve an explicit knowledge of a word acceptor with uniqueness, particularly in the frequently occurring case when the accepted words are all geodesics in the Cayley graph.
From such a word-acceptor, one can quickly enumerate unique representatives of all words up to a given length. This can serve as an invaluable time-saving device in certain computer graphics applications, such as drawing tessellations of hyperbolic space on which these groups act freely. One can also use $W$ to compute the growth function for the group (see~\cite{EIZ}). Another important application of automatic structures for groups $G$ is their use for the efficient (quadratic time) solution of the word problem in $G$. More precisely, the multiplier automata can be used to reduce an arbitrary word in $A^*$ in quadratic time to the $G$--equivalent word in $L(W)$. With these applications in mind, a collection of programs was written at Warwick in the late 1980's for computing automatic structures. These programs take a finite presentation of the group $G$ as input. Currently, they only work for so-called {\em shortlex} structures, which are those in which $L(W)$ consists of the lexicographically least amongst the shortest words that map onto each group element. (So $W$ depends upon the order of $A$ as well as on $A$ itself.) Many, but not all, of the known classes of automatic groups are known to possess shortlex structures. The programs are described in some detail in~\cite{EHR} and~\cite{Holt}, and in a much more general setting in~\cite{ECHLPT}. The latest version is part of a package called {\sf kbmag}\ and is available by anonymous \verb-ftp- from \verb-ftp.maths.warwick.ac.uk- in the directory \verb-people/dfh/kbmag2-. From an algorithmic point of view, there is a close connection between automatic groups and rewriting systems for groups, and the programs used make use of the Knuth--Bendix completion process in groups. However, typically, this process alone would not terminate and in fact automatic groups normally have infinite regular rather than finite complete rewriting systems. 
When the automatic structure is successfully computed, it in some sense enables this infinite regular system to be used to solve the word problem in a manner that is typically at least as efficient as could be done with a finite rewriting system. The idea of trying to use infinite regular rewriting systems for this purpose was first proposed by Gilman in~\cite{Gil}. Given a word-acceptor automaton for a group, it turns out that the existence and properties of the multiplier automata are equivalent to the so-called (synchronous) fellow-traveller property, which was one of the geometrical properties of hyperbolic groups observed originally by J. W. Cannon, and is defined as follows. For a word $w \in A^*$ we denote the length of $w$ by $l(w)$ and, for $g \in G$, $l(g)$ (or more precisely $l_A(g)$) denotes the length of the shortest word $w \in A^*$ with ${\overline w} = g$. For $t \geq 0$, $w(t)$ denotes the prefix of $w$ of length $t$ when $t \leq l(w)$, and $w(t)=w$ for $t \geq l(w)$. The fellow-traveller property asserts that there exists a constant $k$ such that, for all $w, x \in L(W)$ and $a \in A$ such that $wa =_G x$, and all $t \geq 0$, we have $l_A( {\overline {w(t)}}^{-1} {\overline {x(t)}} ) \leq k$. In other words, two travellers proceeding at the same speed along the words $w$ and $x$ from the base point in the Cayley graph of $G$ would always remain a bounded distance away from each other. The fellow-traveller property enables the multiplier automata $M_a$ to be defined in a uniform manner (see Definition 2.3.3 of~\cite{ECHLPT}). Their state set is the set of triples $(s_1,s_2,g)$, where $s_1,s_2$ are states of $W$, and $g \in G$ with $l(g) \leq k$. The start state is $(s_0,s_0,1)$, where $s_0$ is the start state of $W$.
For $(a_1,a_2) \in A \times A$, there is a transition from $(s_1,s_2,g)$ to $(t_1,t_2,h)$ with label $(a_1,a_2)$ if and only if there are transitions $s_1 \rightarrow t_1$ and $s_2 \rightarrow t_2$ in $W$ with labels $a_1$ and $a_2$, respectively, and if $a_1^{-1}ga_2 =_G h$. The state $(s_1,s_2,g)$ is a success state of $M_a$ if and only if $s_1$ and $s_2$ are success states of $W$, and $g =_G a$. Thus the $M_a$ differ only in their accept states. (We have omitted a technicality from this definition. To deal with the padding symbol, we have to add an extra state to $W$ which is reached when $W$ is in an accept state and the padding symbol is read.) It is clear that the $M_a$ behave precisely according to Condition (ii) of Definition~\ref{aut}. This method is used to construct the $M_a$ in the programs mentioned above. Note also that it follows from the fellow-traveller property that if $g$ is any fixed element of $G$ and $w,x \in L(W)$ with $wg =_G x$, then $w$ and $x$ fellow-travel with constant at most $kl_A(g)$. Finally, we must mention the weaker concept of an asynchronously automatic group, because it will arise in the next section. The definition is the same as before, except that the multiplier automata are allowed to read their two input strings at different rates. More precisely, rather than reading one symbol from each of the two input words at each transition, they read a symbol from one of the two words only, where the choice of which word to read is a function of the state of $M_a$. Of course, when the end of one of the words is reached, the other word must be selected. See Chapter~7 of~\cite{ECHLPT} for the formal definition. Again there is a corresponding fellow-traveller property, in which the imaginary travellers are allowed to move at different speeds. See~\cite{ECHLPT} or Section 7, Part II of~\cite{BGSS} for details. 
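Returning to the synchronous case, the uniform construction of the $M_a$ can be simulated on the fly, since the triple $(s_1,s_2,g)$ is all the state that is needed. The following Python sketch does this for a toy example of our own: the infinite cyclic group written additively, with $a = +1$, $A = a^{-1} = -1$, the shortlex word-acceptor accepting $a^n$ and $A^n$, and fellow-traveller constant $k=1$.

```python
# On-the-fly simulation of the multiplier M_a from the uniform
# construction: the state is a triple (s1, s2, g) with s1, s2 states
# of the word-acceptor W and g a group element with l(g) <= k.
# Toy example (our own): G infinite cyclic, a = +1, A = -1,
# W accepts a^n and A^n, and the fellow-traveller constant is k = 1.
# An extra state 'pad' is entered once the padding symbol is read.

PAD = '$'
VAL = {'a': 1, 'A': -1, PAD: 0}

DELTA = {
    's0': {'a': 'sa', 'A': 'sA'},
    'sa': {'a': 'sa'},
    'sA': {'A': 'sA'},
}
ACCEPT = {'s0', 'sa', 'sA', 'pad'}

def step(s, c):
    """One W-transition; padding is only allowed after an accept state."""
    if c == PAD:
        return 'pad' if s in ACCEPT else None
    return DELTA.get(s, {}).get(c)

def multiplier_accepts(w, x, a, k=1):
    """Does M_a accept the padded pair (w+, x+), i.e. is wa =_G x?"""
    n = max(len(w), len(x))
    wp, xp = w.ljust(n, PAD), x.ljust(n, PAD)
    s1, s2, g = 's0', 's0', 0          # start state (s0, s0, identity)
    for c1, c2 in zip(wp, xp):
        s1, s2 = step(s1, c1), step(s2, c2)
        g = -VAL[c1] + g + VAL[c2]     # h = a1^{-1} g a2, written additively
        if s1 is None or s2 is None or abs(g) > k:
            return False
    return s1 in ACCEPT and s2 in ACCEPT and g == VAL[a]
```

As in the text, the multipliers for different $a$ differ only in the final test $g =_G a$; taking $a$ to be the padding symbol gives $M_\$$, which recognises equality.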
The word problem is still solvable for asynchronously automatic groups, but it is unknown whether this can be done in polynomial time. There are examples known, such as the Baumslag--Solitar groups $\langle x,y \, | \, y^{-1}x^py = x^q \, \rangle$ with $p \neq q$, which are asynchronously automatic but not automatic. There is a more detailed treatment, with references to the literature, of the synchronous and asynchronous fellow-traveller properties in groups in the article~\cite{Rees} in these proceedings. \subsection{Subgroups} Let $L=L(W)$ be the language of the word-acceptor in an automatic structure of a group $G$. A subgroup $H$ of $G$ is called $L$--rational if $L \cap H$ is a regular language (ie the language of an {\sf fsa}). Such subgroups were studied in~\cite{GS}, where it is proved that $L$--rational is equivalent to $L$--quasiconvex. This means that any prefix of a word in $L \cap H$ lies within a bounded distance of $H$ in the Cayley graph of $G$. Such subgroups are always finitely generated. An algorithm for constructing an {\sf fsa}\ $W_H$ with language $L \cap H$, which takes as input an automatic structure for $G$ and a set of generators for an $L$--rational subgroup $H$ of $G$, is described in~\cite{Kap}. A practical and efficient version is described in~\cite{Hurt}, and an implementation is available in {\sf kbmag}. The {\sf fsa}\ $W_H$ can be used together with the automatic structure to determine whether a given word in $A^*$ lies in $H$; that is, to solve the generalised word problem for $H$ in $G$. First use the multiplier automata to reduce the word to one in $L$, and then use $W_H$ to test whether it lies in $H$. Given $W_H$ and $W_K$ for two subgroups $H$ and $K$ of $G$, it is easy to intersect their languages to obtain an {\sf fsa}\ $W_{H \cap K}$ for their intersection, which can then be used to construct a finite generating set for $H \cap K$.
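The intersection step is the standard product construction for deterministic {\sf fsa}. A generic Python sketch (names and the toy illustration are our own):

```python
# Product construction for intersecting the languages of two
# deterministic fsa, as used to obtain the acceptor for H meet K
# from W_H and W_K.  A machine is a triple (delta, start, accepts)
# with delta: state -> {letter: target state}.

def intersect(m1, m2):
    d1, q1, f1 = m1
    d2, q2, f2 = m2
    start = (q1, q2)
    delta, todo, seen = {}, [start], {start}
    while todo:
        s1, s2 = state = todo.pop()
        delta[state] = {}
        for c in set(d1.get(s1, {})) & set(d2.get(s2, {})):
            target = (d1[s1][c], d2[s2][c])
            delta[state][c] = target
            if target not in seen:
                seen.add(target)
                todo.append(target)
    accepts = {(a, b) for (a, b) in seen if a in f1 and b in f2}
    return delta, start, accepts

def accepted(machine, word):
    delta, state, final = machine
    for c in word:
        state = delta.get(state, {}).get(c)
        if state is None:
            return False
    return state in final

# Toy illustration: for the infinite cyclic group restricted to
# positive words, the languages a^(2n) and a^(3n) intersect in a^(6n).
W_H = ({'e': {'a': 'o'}, 'o': {'a': 'e'}}, 'e', {'e'})
W_K = ({0: {'a': 1}, 1: {'a': 2}, 2: {'a': 0}}, 0, {0})
W_HK = intersect(W_H, W_K)
```

Only states reachable from the product start state are constructed, so the result is no larger than necessary.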
\subsection{Cosets} It is possible to generalise the concept of an automatic group from a notion about the elements of the group to one about the cosets of a given subgroup $H$ of $G$. This has been carried out by two doctoral students of the author (see~\cite{Redfern} and~\cite{Hurt}). The definition is as follows. \begin{defn} Let $G$ be a group with finite generating set $X$, let $A = X \cup X^{-1}$, $A^\prime = A \cup \{\$\}$, and let $H$ be a subgroup of $G$. Then $G$ is said to be {\em coset automatic} with respect to $H$, if there exist {\sf fsa} $W$, and $M_a$ for each $a \in A^\prime$, such that:\\ (i)\stdspace $W$ has input alphabet $A$, and accepts at least one word in each right coset of $H$ in $G$;\\ (ii)\stdspace Each $M_a$ has input alphabet $A^\prime \times A^\prime$, it accepts only padded pairs, and it accepts the padded pair $(w^+,x^+)$ for $w,x \in A^*$ if and only if $w,x \in L(W)$ and ${H\overline {wa}} = {H\bar x}$. \end{defn} Here $W$ is called the {\em coset word-acceptor} and the $M_a$ the {\em coset multiplier} automata. The complete collection $\{W,M_a\}$ is known as an {\em automatic coset system} for the pair $(G,H)$. Again the existence of such a system turns out to be independent of the generating set $X$ of $G$, and we can, if we wish, always find a new system in which $W$ accepts a unique word in each right coset. It is proved in~\cite{Redfern} that if $L$ is the language of the shortlex automatic structure of a word-hyperbolic group $G$ (or even the set of all geodesics in the Cayley graph of $G$), and if the subgroup $H$ is $L$--quasiconvex, then $G$ is coset automatic with respect to $H$. In~\cite{Hurt} the converse is proved for word-hyperbolic groups, although we shall see from the example in the next section that the converse does not hold in general. An interesting application to the drawing of limit sets of Kleinian groups is described in~\cite{MPR}. 
As in the graphical applications of ordinary automatic structures, this involves only the use of $W$ to enumerate unique shortest words in each coset. An algorithm for computing automatic coset systems in the shortlex case was first described in~\cite{Redfern}, and was implemented by him as a standalone program. It has the disadvantage that it is not usually possible to prove conclusively that the system computed is correct. A different approach is described in~\cite{Hurt}. This does enable the output to be proved correct, but it requires an additional hypothesis, to be described below, for it to work at all. It has the further advantage that it has an optional extension to compute a finite presentation for the subgroup $H$ of $G$ after the automatic coset system has been found. This second algorithm, together with the subgroup presentation facility, has been implemented and is available in {\sf kbmag}. The theory, implementation details and performance statistics can also be found in~\cite{HH}. These algorithms provide an alternative method to that described in the previous subsection for solving the generalised word problem for $H$ in $G$. The given word $w \in A^*$ is reduced (in quadratic time, using the coset multiplier automata) to the unique word $w^\prime$ in the language of the coset word-acceptor for which $H{\overline w} = H{\overline {w^\prime}}$. Then $w \in H$ if and only if $w^\prime$ is the empty word. The two methods of solving the generalised word problem are to some extent complementary to each other, since there can exist $L$--quasiconvex subgroups that are not coset automatic and vice versa, although the two concepts are equivalent in word-hyperbolic groups. The additional hypothesis required for the algorithm developed by Hurt is the following generalisation of the fellow-traveller condition. Let $\{W,M_a\}$ be the shortlex automatic coset system for $(G,H)$ that we are trying to compute.
Then, if $(w^+,x^+) \in L(M_a)$ for some $a \in A$, there exists $h \in H$ such that $wa =_G hx$. The hypothesis is that there exists a constant $k \geq 0$ such that for all such $w, x, a$ and $h$, and all $t \geq 0$, we have $l_A( {\overline {w(t)}}^{-1} h {\overline {x(t)}} ) \leq k$. In particular, taking $t=0$, we get $l_A(h) \leq k$, and so in all such equations, only a finite number of elements $h$ occur. One step in the algorithm is to define the states of the $M_a$ as triples $(s_1,s_2,g)$, as in the automatic group case, but now the initial states are $(s_0,s_0,h)$, where $s_0$ is the initial state of $W$, and $h$ is one of the elements of $H$ occurring in the above equations. So the $M_a$ are in fact constructed initially as non-deterministic automata with multiple initial states. If the hypothesis holds, then we shall say that $G$ is {\em strongly coset automatic} with respect to $H$, and call $\{W,M_a\}$ a {\em strong automatic coset system} for $(G,H)$. It is proved in~\cite{Hurt} that word-hyperbolic groups are always strongly coset automatic with respect to their quasiconvex subgroups. It is easy to construct examples in which the hypothesis does not hold, by choosing $H$ to be normal in $G$, in which case $G$ coset automatic with respect to $H$ is equivalent to $G/H$ automatic, but we do not know of any example in which ${\rm Core}_G(H) = 1$. \section{HNN extensions and an example} For the application to be described in this section, we need to strengthen the hypothesis defined at the end of the preceding section for strong automatic coset systems. \begin{defn} Let $\{W,M_a\}$ be a strong automatic coset system for $(G,H)$ with respect to the generating set $X$ of $G$. Let $Y$ be a finite set of generators of $H$, and let $B = Y \cup Y^{-1}$. Then $Y$ is said to be {\em efficient} with respect to $\{W,M_a\}$ if, for any $w,x \in L(W)$ and any $b \in B, h \in H$ such that $wb =_G hx$, we have either $h = 1$ or $h \in B$.
\end{defn} We are not currently aware of any particular situations under which an efficient generating set could be shown to exist; it would be interesting to investigate this question. In specific examples of automatic coset systems that we have calculated with the programs, it is often possible to observe directly from the calculation that a particular $Y$ is efficient. The concept is useful to us here, because it enables us to prove the following result about HNN extensions, which can then be applied to a specific example. Note that a rather different condition under which an HNN extension of an automatic group is asynchronously automatic has been proved by Shapiro in~\cite{Shap}, and results of a similar nature for amalgamated free products are proved in~\cite{BGSS}. \begin{theorem} Let $\{W,M_a\}$ be a strong automatic coset system for $(G,H)$, let $G = \langle X\,|\,R \rangle$ be a finite presentation of $G$, and suppose that $H$ has the efficient generating set $Y$. Suppose also that $H$ is automatic, and let $\alpha$ be an automorphism of $H$ such that $\alpha(Y) = Y$. Then the HNN extension $$K = \langle X, z\,|\,R,\,z^{-1}yz = \alpha(y)\ (y \in Y) \rangle$$ is asynchronously automatic. \end{theorem} \begin{proof} Let $T$ be a right transversal for $H$ in $G$. Then by the normal form theorem for HNN extensions (see, for example, Theorem 2.1 (II), page 182 of~\cite{LS}), each element $k \in K$ has a unique expression of the form $$k = ht_1 z^{n_1} t_2 z^{n_2} \ldots t_r z^{n_r},$$ where $h \in H$, $t_i \in T$, $n_i \in \Z$, $t_i \not\in H$ for $i>1$ and $n_i \neq 0$ for $i < r$. We use this normal form in the natural manner to construct a regular language $L_K$ for $K$ on the alphabet $A \cup B \cup \{z^{\pm 1}\}$ where, as before, $A = X \cup X^{-1}$ and $B = Y \cup Y^{-1}$.
We are assuming that $H$ is automatic, so we can use the language $L_H$ of the word-acceptor from an associated automatic structure with alphabet $B$ to obtain a word $w_h \in L_H$ for the element $h \in H$ in the normal form. For $T$ we choose the image in $G$ of $L(W)$, and to represent $t_i$, we choose the unique word $w_i \in L(W)$ with ${\overline {w_i}} = t_i$. This clearly yields a regular language $L_K$ mapping bijectively onto $K$. We now have to show how to construct the asynchronous multiplier automata $M_c$ for $c \in A \cup B \cup \{z^{\pm 1}\}$. Since this is fairly routine, we describe the construction in outline only. Suppose that $u, v \in L_K$ and $uc =_K v$, and let the HNN normal form of $k = {\overline {u}}$ be $ht_1 z^{n_1} t_2 z^{n_2} \ldots t_r z^{n_r},$ as above. If $c = z$ or $z^{-1}$, then the HNN normal form for $kc$ in $K$ is just $ht_1 z^{n_1} \ldots t_r z^{n_r \pm 1}$, and it is easy to construct $M_c$. So suppose $c \in A \cup B$. We shall suppose that $n_r \neq 0$ and omit the details of the case $n_r=0$, which are similar. There exist words $c_1 \in B^*$ and $c_2 \in L(W)$ such that $c =_G c_1c_2$. Let $l_B(c_1) = k$. Then, from the assumptions that the generating set $Y$ of $H$ is efficient and that $\alpha(Y) = Y$, it follows that the HNN normal form in $K$ for $kc$ is $$kc = h^\prime t_1^\prime z^{n_1} t_2^\prime z^{n_2} \ldots t_r^\prime z^{n_r}{\overline {c_2}},$$ where there are elements $x_i, y_i \in H\ (1 \leq i \leq r)$, all having $B$--length at most $k$, such that $z^{n_r} {\overline{c_1}} = y_r z^{n_r}$, $t_i y_i = x_i t_i^\prime$ for $1 \leq i \leq r$, $z^{n_i} x_{i+1} = y_i z^{n_i}$ for $1 \leq i < r$, and $hx_1 = h^\prime$. Thus we have $u = w_hw_1 z^{n_1} \ldots w_r z^{n_r}$ and $v = w_{h^\prime}w_1^\prime z^{n_1} \ldots w_r^\prime z^{n_r} c_2$, where $w_h, w_{h^\prime} \in L_H$ map onto $h, h^\prime \in H$, and $w_i, w_i^\prime \in L(W)$ map onto $t_i, t_i^\prime \in T$ for $1 \leq i \leq r$. 
The multiplier $M_c$ proceeds by reading the words $w_h$ and $w_{h^\prime}$ in parallel at the same rate, then $w_1$ and $w_1^\prime$ together, then the $z^{n_1}$ together, and so on. If either of $w_h$ or $w_{h^\prime}$ is longer than the other, then it will wait at the end of the shorter one until the longer word has been read, and similarly for $w_i$ and $w_i^\prime$. (This explains why $M_c$ needs to be asynchronous. Although $|l(w_h) - l(w_{h^\prime})|$ and $|l(w_i) - l(w_i^\prime)|$ are all bounded, there is no bound on $r$, and so one of the two tapes of the input of $M_c$ may conceivably get indefinitely ahead of the other; indeed, we have verified that this really can happen in the example below.) Of course, if either of the two words input to $M_c$ is not in $L_K$, or if they do not both have the same pattern with respect to the occurrences of $z$, then they are rejected. Otherwise, if after $t$ transitions, $M_c$ has read $\phi(t)$ symbols from $u$ and $\psi(t)$ from $v$, then the element $g(t) = {\overline {u(\phi(t))}}^{-1}{\overline {v(\psi(t))}}$ of $K$ is remembered as a function of the state of $M_c$. As in the synchronous case, it is sufficient to show that $l(g(t))$ is bounded. There are four essentially different situations that occur as the words $u,v$ are read. \begin{enumerate} \item[(i)] $u(\phi(t))$ and $v(\psi(t))$ are prefixes of $w_h$ and $w_{h^\prime}$, where $|\phi(t) - \psi(t)|$ is bounded. Then the boundedness of $l(g(t))$ follows from the automaticity of $H$ and the fact that $hx_1=h^\prime$ with $l(x_1) \leq k$. \item[(ii)] $u(\phi(t)) = w_hw_1 z^{n_1} \ldots w_i(s_1)$ for some $i$ and some prefix $w_i(s_1)$ of $w_i$, and $v(\psi(t)) = w_{h^\prime}w_1^\prime z^{n_1} \ldots w_i^\prime(s_2)$, where $|s_1-s_2|$ is bounded. Then $g(t) = {\overline {w_i(s_1)}}^{-1} x_i {\overline {w_i^\prime(s_2)}}$, and its boundedness follows from the assumptions that $l_B(x_i) \leq k$ and that $\{W,M_a\}$ is a strong automatic coset system for $(G,H)$.
\item[(iii)] $u(\phi(t)) = w_hw_1 z^{n_1} \ldots w_iz^{m_1}$ for some $i$ and some $m_1 \leq n_i$, and $v(\psi(t)) = w_{h^\prime}w_1^\prime z^{n_1} \ldots w_i^\prime z^{m_2}$, where $|m_1-m_2| \leq 1$. Then $g(t) = z^{-m_1}y_iz^{m_2}$, and its boundedness follows from $l_B(y_i) \leq k$ and the assumption that $\alpha(Y) = Y$. \item[(iv)] $\phi(t) > l(u)$ and $\psi(t) \geq l(v) - l(c_2)$. Then $l(g(t)) \leq l(c_2)$ which is clearly bounded. \end{enumerate} This completes the proof of the theorem. \end{proof} \vspace{1mm} As an application, we shall use this theorem together with the results of some machine computations that were done with {\sf kbmag}, to prove that the group defined by the presentation \begin{eqnarray*} \lefteqn{\langle \,a,b,r,t,x,z \, | \,}\\ &&xaxa=t, bxbx=t, bbtaa=t, a^{-1}br=ra^{-1}b, zt=tz, btaz=zbta \, \rangle \end{eqnarray*} is asynchronously automatic. This group, which we shall denote by $K$, was originally proposed by Mark Sapir as a possible building block in his attempts to construct groups with given Dehn functions. However, he later found a different approach to his problem, and so the example is no longer relevant from that viewpoint. He had hoped that it could be proven automatic, but the methods we have been discussing in this paper only appear to be sufficient to prove it asynchronously automatic. The computer programs could make no progress with the presentation as given above, but matters improved after manipulating it a little. Eliminating $t=bxbx$, we get \begin{eqnarray*} \lefteqn{\langle \,a,b,r,x,z \, | \,xaxa=bxbx, bbxbxaa=xbx, }\\ &&a^{-1}br=ra^{-1}b, zbxbx=bxbxz, bbxbxaz=zbbxbxa \, \rangle. \end{eqnarray*} Now, putting $u=xa$ and $v=bx$, and eliminating $a=x^{-1}u=v^{-1}bu$ and $x = b^{-1}v$, we get \begin{eqnarray*} \lefteqn{\langle \,u,v,b,r,z \, | \,u^2=v^2, bvbuv^{-1}bu=b^{-1}v^2, }\\ &&u^{-1}b^{-1}vbr=ru^{-1}b^{-1}vb, zv^2=v^2z, bvbuz=zbvbu \, \rangle.
\end{eqnarray*} Finally, using $u^2=v^2$ to simplify the second relation, we get \begin{eqnarray*} \lefteqn{\langle \,u,v,b,r,z \, | \,u^2=v^2, bvbu=b^{-1}ub^{-1}v,}\\ && u^{-1}b^{-1}vbr=ru^{-1}b^{-1}vb, zu^2=u^2z, zbvbu=bvbuz \, \rangle. \end{eqnarray*} This is now visibly an HNN extension of the group $$G = \langle \,u,v,b,r \, | \,u^2=v^2, bvbu=b^{-1}ub^{-1}v, u^{-1}b^{-1}vbr=ru^{-1}b^{-1}vb \, \rangle$$ with respect to the subgroup $H = \langle\,u^2, bvbu\,\rangle$, where $H$ is centralised by the new generator $z$. (In fact $G$ is itself an HNN extension with extra generator $r$, but we shall not make use of that fact.) Running the automatic coset system program from {\sf kbmag}\ on the subgroup $H$ of $G$ verifies that $G$ is strongly coset automatic with respect to $H$. (The coset word acceptor has 302 states, and the coset multipliers about 1400 states.) The presentation of $H$ computed by the program proves that $H$ is free of rank 2, and so it is certainly automatic. The programs can also be used to verify that the set $Y = \{u^2, bvbu^{-1} \}$ is an efficient generating set for $H$. (Briefly, this is done by constructing the multiple initial state multiplier automata for the elements $u^2$ and $bvbu^{-1}$. The elements of $H$ corresponding to the initial states of these automata can then be inspected from the output, and it turns out that these are just the identity and elements of $B = Y \cup Y^{-1}$.) We can now deduce from the theorem that Sapir's group $K$ is asynchronously automatic. As a final remark about this example, it turns out (again using calculations carried out by {\sf kbmag}) that the subgroup $H$ is not $L$--quasiconvex, where $L$ is the language of the word-acceptor of the shortlex automatic structure of $G$. The element $(bub^{-1}v^{-1})^n(b^{-1}vbu^{-1})^n$ of $L$ lies in $H$ for all $n \geq 0$, but the coset representative of $(bub^{-1}v^{-1})^n$ in the language of the coset word acceptor is $b^{2n}$.
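The generator substitutions used above ($a = v^{-1}bu$, $x = b^{-1}v$) are pure rewriting, and the passage from the second presentation to the relators involving $u$ and $v$ can be checked mechanically by free reduction alone. A Python sketch (our own), writing capital letters for inverses:

```python
# Mechanical check (our own) of the generator substitutions used in
# the example: a -> v^{-1} b u and x -> b^{-1} v, with capital
# letters denoting inverses (A = a^{-1}, etc.).  Free reduction is
# enough to recover the displayed relators in u, v, b.

def inv(word):
    """Inverse of a freely reduced word."""
    return ''.join(c.swapcase() for c in reversed(word))

def reduce_free(word):
    """Cancel adjacent inverse pairs until none remain."""
    out = []
    for c in word:
        if out and out[-1] == c.swapcase():
            out.pop()
        else:
            out.append(c)
    return ''.join(out)

SUBST = {'a': 'Vbu', 'x': 'Bv'}   # a -> v^-1 b u,  x -> b^-1 v

def substitute(word):
    pieces = []
    for c in word:
        rep = SUBST.get(c.lower(), c.lower())
        pieces.append(rep if c.islower() else inv(rep))
    return reduce_free(''.join(pieces))
```

For instance, `substitute('xaxa')` freely reduces to `uu`, recovering the relation $u^2 = v^2$ from $xaxa = bxbx$.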
\section{\bf Introduction} As is well known, pion production in $NN$ collisions, in particular the channel $NN\to d\pi$, has been investigated by many theorists and experimentalists over the last decades. An early study of this reaction \cite{begin} showed that the excitation of the $\Delta$-isobar is a crucial ingredient for explaining the observed energy dependence of the cross section. Many papers are based on multichannel Schr\"odinger equations with separable or local potentials \cite{schrodinger}. However, those studies were performed within the nonrelativistic approach. Early attempts to develop a relativistic approach were made in \cite{begin_rel}. Both the pole graph, i.e. one-nucleon exchange, and the rescattering graph presented below were calculated in those papers. As was shown there, the latter diagram should give the dominant contribution to the cross section of the process under discussion. In its calculation, some approximations, in particular the factorization of nuclear matrix elements, the neglect of recoil etc., were introduced, which lead to an uncertainty in the final results. A more careful relativistic study of the reaction $pp\to d\pi^+$ was made in \cite{locher}. The pole and rescattering graphs were shown to be insufficient to describe the experimental data; higher order rescattering contributions should be taken into account. However, in this approach there was no successful description of all the polarization observables, especially the asymmetries $A_{y0}$, $iT_{11}$. Indeed, in analyzing reactions of the type $NN\to d\pi$, a problem occurs related to the off-mass-shell effects of the nucleons inside the deuteron. When the pion is absorbed by a two-nucleon pair or the deuteron, the pion energy is shared between the two nucleons.
So, for example, the relative momentum of a nucleon inside the deuteron increases at least by a value $\sim\sqrt{m\mu}=360$~MeV if a pion at rest is absorbed by the off-shell nucleon, which corresponds to intra-deuteron distances of the order of $\sim 1/\sqrt{m\mu}\simeq 0.6$~fm. This means that the absorption process should be sensitive to the dynamics of the $\pi NN$ system at small distances. In this paper we concentrate mainly on the investigation of the role of these effects and of the contribution of the $P$-wave of the deuteron wave function \cite{gross}. The sensitivity of all the polarization observables to these effects is studied, and it is shown that some polarization characteristics can change sign when the off-mass-shell effects of the nucleons inside the deuteron are included. The detailed covariant formalism for the construction of the relativistic invariant amplitude of the reaction $NN\to d\pi$ is presented in section 2. We analyze in detail both the pole graph, i.e. one-nucleon exchange, and the triangle diagram, i.e. the pion rescattering graph, in section 3. The inputs to this analysis, the covariant pseudoscalar $\pi NN$ and deuteron $d\to pn$ vertices, are discussed in detail. The discussion of the results obtained and the comparison with the experimental data are presented in section 4. The conclusions are presented in the final section 5. \section{\bf General Formalism} \noindent $\bullet~~${ \it Relativistic invariant expansion of the amplitude. } \\ We start with the basic relativistic expansion of the reaction amplitude $NN\to d\pi$ using the Itzykson-Zuber conventions \cite{zuber}. In the general case, the relativistic amplitude for the production of two particles of spins $1$ and $0$ in the interaction of two spin-$1/2$ particles has six relativistic invariant amplitudes if all particles are on the mass shell, taking $P$-invariance into account.
It can be written in the following form: \begin{eqnarray} \hspace{-1cm} \begin{minipage}{5.2cm} \begin{center} \unitlength1cm \begin{picture}(6,4) \thicklines \put(3,2){\circle{1.4}} \put(0,0){\makebox(6,4){$\chi_\mu$}} \thinlines \multiput(1,1.5)(0,1){2}{\vector(1,0){0.8}} \multiput(1.8,1.5)(0,1){2}{\line(1,0){0.7}} \multiput(3.5,1.5)(0.3,0){4}{\line(1,0){0.15}} \put(4.7,1.5){\vector(1,0){0.15}} \thicklines \put(3.5,2.51){\line(1,0){1.265}} \put(3.55,2.45){\line(1,0){1.21}} \put(4.6,2.385){\makebox(0.2,0.1)[lb]{$>$}} \put(0.5,2.8){\parbox[b]{2cm}{$\bar v_{\sigma_2}^{~r_2}(p_2)$}} \put(0.5,1){\parbox[b]{2cm}{$u_{\sigma_1}^{~r_1}(p_1)$}} \put(4.5,2.8){\parbox[b]{2cm}{$\xi^{~(\beta)}_\mu(d)$}} \put(4.8,1.05){\parbox[b]{2cm}{$\varphi_\pi$}} \end{picture} \end{center} \end{minipage} \hfill {\cal M}_{\sigma_2,\sigma_1}^{\beta}(s,t,u)= \left[ \bar v_{\sigma_2}^{~r_2}(p_2)\chi^\mu_{r_2r_1}(s,t,u) u_{\sigma_1}^{~r_1}(p_1) \right] \xi^{~(\beta)}_\mu(d)\varphi_\pi, \label{IA1} \end{eqnarray} where $u_{\sigma_1}^{~r_1}(p_1)\equiv u_1$ and $\bar v_{\sigma_2}^{~r_2}(p_2)\equiv\bar v_2$ are the spinor and anti-spinor of the initial nucleons with spin projections $\sigma_1$ and $\sigma_2$ and Dirac indices $r_1$ and $r_2$, respectively; $\xi_\mu(d)$ is the deuteron polarization vector, $\varphi_\pi$ is the $\pi$-meson field; $s,t,u$ are the invariant Mandelstam variables: \begin{equation} s = (p_1 + p_2)^2~~;~~t = (d - p_2)^2~~;~~u = (d - p_1)^2~~. \label{Mvar} \end{equation} This amplitude should be symmetrized over the initial nucleon states, and therefore it takes the form: \begin{equation} \bar {\cal M}_{\sigma_2,\sigma_1}^{\beta}=\frac{1}{\sqrt{2}}\left[ {\cal M}_{\sigma_2,\sigma_1}^{\beta}(s,t,u)+ {(-1)}^\beta{\cal M}_{\sigma_1,\sigma_2}^{\beta}(s,u,t)\right] \label{IA2} \end{equation} The second term in (\ref{IA2}), corresponding to the exchange of the two nucleons, is equivalent to the interchange of the $t$- and $u$-variables.
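Momentum conservation implies the familiar constraint $s+t+u=2m^2+m_d^2+\mu^2$ on the variables (\ref{Mvar}), with $m$, $m_d$ and $\mu$ the nucleon, deuteron and pion masses. The following Python sketch checks this numerically for centre-of-mass kinematics; the mass values (in GeV) are assumptions chosen for illustration.

```python
import math

# Numerical check (our own) of the constraint
#   s + t + u = 2 m^2 + m_d^2 + mu^2
# following from four-momentum conservation in N N -> d pi.
# Centre-of-mass kinematics; masses in GeV are illustrative values.

m, m_d, mu = 0.938, 1.876, 0.1396    # nucleon, deuteron, pion masses

def minkowski(p, q):
    return p[0] * q[0] - p[1] * q[1] - p[2] * q[2] - p[3] * q[3]

def mandelstam(W, cos_theta):
    """Return (s, t, u) at total CM energy W and deuteron angle theta."""
    p = math.sqrt(W * W / 4 - m * m)                       # nucleon momentum
    q = math.sqrt((W * W - (m_d + mu) ** 2)
                  * (W * W - (m_d - mu) ** 2)) / (2 * W)   # deuteron momentum
    sin_theta = math.sqrt(1 - cos_theta ** 2)
    p1 = (W / 2, 0.0, 0.0, p)
    p2 = (W / 2, 0.0, 0.0, -p)
    d = (math.hypot(m_d, q), q * sin_theta, 0.0, q * cos_theta)
    P = [a + b for a, b in zip(p1, p2)]
    t_vec = [a - b for a, b in zip(d, p2)]
    u_vec = [a - b for a, b in zip(d, p1)]
    return minkowski(P, P), minkowski(t_vec, t_vec), minkowski(u_vec, u_vec)
```

The relation holds identically for any energy above threshold and any scattering angle.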
Using transformation properties of the wave functions, one can find the transformation laws of the spinor amplitude $\chi^\mu_{r_2r_1}$. {\it Lorentz invariance} of the matrix element under the Lorentz transformation of all four-vectors $p'=\Lambda({\cal A})p$ leads to the following Lorentz transformation law of the spinor amplitude: \begin{equation} \chi_{\alpha\beta}^{\mu}(p_1,p_2;d,\pi) = {\cal S}_{\alpha}^{~\alpha'}({\cal A}) \chi_{\alpha'\beta'}^{\mu'}(p_1',p_2';d',\pi'){\cal S}_{\beta}^{~\beta'} ({\cal A}^{-1})\Lambda_{\mu'}^{~\mu}({\cal A}^{-1})~. \label{INVAR1} \end{equation} With respect to discrete symmetries, we have from {\it P-invariance} \begin{equation} \chi^\mu(\vec p_1,\vec p_2;\vec d,\vec\pi)= \eta_{\cal P}\gamma_0\chi^\mu(-\vec p_1,-\vec p_2;-\vec d,-\vec\pi) \gamma_0g^{\mu\mu}~, \label{INVAR2} \end{equation} where $\eta_{\cal P} = {\eta_1\eta_2 \over \eta_\pi\eta_d}(-1)^{s_d - s_1 - s_2}=(-1)$; here $\eta_i, s_i$ are the internal parities and spins of the particles. {\it Time-reversal symmetry} leads to the time-reversed spinor amplitude $\chi_\mu^{\alpha\beta}$ \begin{equation} \chi^\mu_{\alpha\beta}(\vec p_1,\vec p_2;\vec d,\vec\pi) = \eta_{\cal T}{\cal T}_{\alpha\alpha'}^{-1} \chi_\mu^{\alpha'\beta'}(-\vec p_2,-\vec p_1;-\vec\pi,-\vec d){\cal T}_{\beta'\beta} g^{\mu\mu}~, \label{INVAR3} \end{equation} where the time-reversal matrix ${\cal T}=-i\gamma_5{\cal C}.$ {\it Charge conjugation} describes the connection between the spinor amplitude $\chi$ for the process $NN\to d\pi$ and $\chi_{\cal C}$ for the charge-conjugate process $\bar N\bar N\to \bar d\pi$: \begin{equation} \chi(\vec p_1,\vec p_2;\vec d,\vec\pi)=\eta_{\cal C}{\cal C} \chi_{\cal C}^t(\vec p_1,\vec p_2;\vec d,\vec\pi){\cal C}^{-1}~. \label{INVAR4} \end{equation} The amplitude $\chi_\mu$ for the process $NN\to d\pi$ can be expanded over six independent covariants, which can be chosen in such a way that each of them satisfies the above properties.
To this end we introduce an orthogonal system of four-vectors, one of which, $P$, is time-like, while the others, $p, N$ and $L$, are space-like: \begin{equation} P = p_1 + p_2,~p = (p_1 - p_2)/2,~ N_\mu = \varepsilon_\mu (p' p P),~L_\mu = \varepsilon_\mu (N p P)~. \label{basis} \end{equation} Here the four-vector $p' = (d - \pi)/2.$ From these, one can construct the complete system of orthogonal unit four-vectors $\{e_\mu^{(\sigma)}\}_{\sigma = 0}^3$, and the spinor amplitude $\chi_\mu$ can be expanded over this orthogonal unit system \begin{eqnarray} \fbox{\parbox[c][0.5\height]{11.67cm}{ $$ \chi_\mu = \chi_ie_\mu^{(i)} = \chi_1l_\mu + \chi_2n_\mu + \chi_3e_\mu~,~~ \chi_i = -\chi^\mu e_\mu^{(i)}~ = \gamma_5\left( a_i + b_i{\widehat l} \right) $$ } } \label{IA3} \end{eqnarray} \section{Reaction Mechanism} \noindent $\bullet~~$ {\it One-nucleon exchange (ONE) and $\pi NN$-vertex.} \\ Within the framework of the one-nucleon exchange model, the amplitude $\chi_\mu$ can be written in a simple form: \vspace{-0.75cm} \begin{equation}\begin{array}{cc} \begin{minipage}{5cm} \begin{center} \unitlength1cm \begin{picture}(5,3) \multiput(1,1)(0,1.2){2}{\vector(1,0){1.1}} \multiput(2.1,1)(0,1.2){2}{\line(1,0){0.9}} \multiput(3,1)(0.3,0){6}{\line(1,0){0.15}} \put(4.8,1){\vector(1,0){0.2}} \put(3,1){\circle*{0.1}} \put(3,2.23){\line(1,0){2}} \put(3,2.17){\line(1,0){2}} \put(4.8,2.1){\makebox(0.2,0.1)[lb]{$>$}} \put(3,2.2){\circle*{0.25}} \thinlines \put(3,1){\vector(0,1){0.7}} \put(3,1.7){\line(0,1){0.5}} \put(1,2.4){\parbox[b]{1cm}{$p_2$}} \put(1,0.7){\parbox[b]{1cm}{$p_1$}} \put(4.7,2.4){\parbox[b]{1cm}{$d$}} \put(4.7,0.7){\parbox[b]{1cm}{$\pi$}} \put(3.1,1.5){\parbox{1cm}{$n$}} \put(2.8,2.5){\parbox[b]{1cm}{$\bar\Gamma_\mu$}} \put(2.8,0.6){\parbox[b]{1cm}{$\Gamma_5$}} \end{picture} \end{center} \end{minipage} & \chi_\mu = g^+ \bar\Gamma_\mu(d) {\cal S}_{\cal F}(n) \Gamma_5(n)~, \label{ONE1} \end{array}\end{equation} where $\bar\Gamma_\mu(d)$ is the deuteron vertex $pn\to d$ with one
off-mass-shell nucleon, ${\cal S}_{\cal F}(n)=\left(\widehat n-m+i0\right)^{-1}$ is the fermion propagator, and the value of the coupling constant is $g^+ = \sqrt2g~,~~g^2/4\pi = 14.7~$. The vertex $\bar\Gamma_\mu(d)$ can be related to the deuteron wave function (${\cal D}{\cal W}{\cal F}$) with the help of the following equation \cite{gunion}: \begin{equation} \bar\Psi_\mu={\bar\Gamma_\mu\over n^2-m^2+i0} =\varphi_1(t)\gamma_\mu+\varphi_2(t){n_\mu\over m}+ \left(\varphi_3(t)\gamma_\mu+\varphi_4(t){n_\mu\over m}\right){\widehat n-m\over m}~. \label{ONE2} \end{equation} The form factors $\varphi_i(t)$ are related to the two large components of the ${\cal D}{\cal W}{\cal F}$, $u$ and $w$ (corresponding to the $^3{\cal S}_1$ and $^3{\cal D}_1$ states), and to the small components $v_t$ and $v_s$ (corresponding to the $^3{\cal P}_1$ and $^1{\cal P}_1$ states) as in \cite{gross}. In the theoretical description of processes at intermediate energies, the structure of hadrons is often described by multiplying the point-like operators by form factors. It is common practice to assume that these vertices, i.e. their operator structures and the associated form factors, are in all situations the same as for free on-shell hadrons. In our case, however, the pion vertex can have a much richer structure: there can be more independent vertex operators, and the form factors can depend on more than one scalar variable. The situation is similar to the construction of the off-shell electromagnetic vertex \cite{EMV}. The common treatment of such off-shell effects is to presume them small and to ignore them by using the free vertices. However, as much of the present effort in intermediate energy physics focuses on delicate effects, such as evidence of quark/gluon degrees of freedom or small components in the wave functions, it is mandatory to examine these issues in detail \cite{davidson}.
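As a numerical cross-check, the mutual orthogonality of the four-vector basis (\ref{basis}) can be verified directly. The Python sketch below uses illustrative c.m. kinematics (the value of $p'$ is arbitrary) and contracts the Levi-Civita symbol explicitly; note that $P\cdot p=(p_1^2-p_2^2)/2=0$ for equal nucleon masses, while the remaining products vanish by antisymmetry:

```python
import numpy as np
from itertools import permutations

# 4D Levi-Civita symbol eps[mu, nu, rho, sigma]
eps = np.zeros((4, 4, 4, 4))
for perm in permutations(range(4)):
    pm = np.zeros((4, 4))
    pm[range(4), perm] = 1.0
    eps[perm] = round(np.linalg.det(pm))     # parity of the permutation

g = np.diag([1.0, -1.0, -1.0, -1.0])         # metric (+,-,-,-)
mdot = lambda a, b: a @ g @ b                # Minkowski scalar product

def eps_vec(a, b, c):
    """Contravariant N^mu = g^{mu alpha} eps_{alpha nu rho sigma} a^nu b^rho c^sigma."""
    return g @ np.einsum('anrs,n,r,s->a', eps, a, b, c)

m_N = 0.938                                  # nucleon mass in GeV (illustrative)
p1v = np.array([0.0, 0.0, 0.4])              # c.m. momenta: p2 = -p1
E = np.sqrt(m_N**2 + p1v @ p1v)
p1 = np.array([E, *p1v]); p2 = np.array([E, *(-p1v)])
P, p = p1 + p2, (p1 - p2) / 2
pprime = np.array([0.1, 0.05, 0.2, 0.3])     # p' = (d - pi)/2, arbitrary values here
N = eps_vec(pprime, p, P)
L = eps_vec(N, p, P)

# all six scalar products of {P, p, N, L} vanish
for a, b in [(P, p), (P, N), (P, L), (p, N), (p, L), (N, L)]:
    assert abs(mdot(a, b)) < 1e-9
```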
The most general pion-nucleon vertex, where the incoming nucleon of mass $m$ has momentum $p_i^\mu$, the outgoing nucleon has momentum $p_f^\mu$ and the pion has momentum $\pi^\mu = p_f^\mu - p_i^\mu$, can be written as \cite{kazes} \begin{equation} \Gamma_5\left(p_f, p_i\right) = \gamma_5 G_1 + \frac{\widehat p_f - m}{m} \gamma_5 G_2 + \gamma_5 \frac{\widehat p_i - m}{m} G_3 + \frac{\widehat p_f - m}{m} \gamma_5 \frac{\widehat p_i - m}{m} G_4~; \label{ONE4} \end{equation} here $\{G_i(t; p_i^2, p_f^2)\}_{i=1}^4$ are functions, the so-called pion form factors, depending on the relativistic invariant transfer $t = (p_i - p_f)^2$ and the invariant masses $p_{i,f}^2$ of the particles. By sandwiching $\Gamma_5$ between on-shell spinors one obtains $G_1(t,m^2,m^2) \bar u(p_f) \gamma_5 u(p_i)$. In our case only one nucleon is off-shell, and therefore we consider the ``half-off-shell'' vertex with the incoming nucleon on-shell. In that case only two of the four terms in eq.(\ref{ONE4}) survive, because the third and the fourth vanish by the Dirac equation for a free fermion. Then, eq.(\ref{ONE4}) can be written in the form: \begin{equation} \Gamma_5(t) = \gamma_5 \left( G_1(t) + G_2(t)\frac{\widehat n+m}{m}\right)~= \lambda G^{{\cal P}{\cal S}}(t) \gamma_5 + \left (1-\lambda \right) G^{{\cal P}{\cal V}}(t) {\widehat\pi\over2m} \gamma_5~. \label{ONE5} \end{equation} Note that, according to the so-called equivalence theorem \cite{shweb}, the sum of all Born graphs for elementary processes, for example the pion photoproduction on a nucleon \cite{scherer} and others, is invariant under a chiral transformation \cite{friar}. This means that starting with the Lagrangian appropriate to the pseudovector $({\cal P}{\cal V})$ coupling, one ends up with the Lagrangian appropriate to the pseudoscalar $({\cal P}{\cal S})$ coupling by performing a chiral transformation. This equivalence theorem applies to processes involving only elementary particles.
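On-shell the two couplings in eq.(\ref{ONE5}) coincide, since $\bar u(p_f)\,{\widehat\pi\over 2m}\gamma_5\, u(p_i) = \bar u(p_f)\gamma_5 u(p_i)$ by the free Dirac equation. This can be checked numerically with explicit Dirac-representation spinors; the sketch below uses arbitrary illustrative momenta:

```python
import numpy as np

# gamma matrices in the Dirac representation
I2, Z2 = np.eye(2), np.zeros((2, 2))
sig = [np.array([[0, 1], [1, 0]], complex),
       np.array([[0, -1j], [1j, 0]], complex),
       np.array([[1, 0], [0, -1]], complex)]
g0 = np.block([[I2, Z2], [Z2, -I2]]).astype(complex)
gk = [np.block([[Z2, s], [-s, Z2]]) for s in sig]
g5 = np.block([[Z2, I2], [I2, Z2]]).astype(complex)

def slash(p):
    """gamma^mu p_mu with metric (+,-,-,-); p = (E, px, py, pz)."""
    return p[0] * g0 - sum(p[k + 1] * gk[k] for k in range(3))

def u_spinor(p, m, s):
    """Positive-energy Dirac spinor satisfying (slash(p) - m) u = 0."""
    chi = np.eye(2, dtype=complex)[s]
    low = (p[1] * sig[0] + p[2] * sig[1] + p[3] * sig[2]) @ chi / (p[0] + m)
    return np.sqrt(p[0] + m) * np.concatenate([chi, low])

m = 0.938                                     # nucleon mass in GeV
ki = np.array([0.10, 0.05, 0.30]); kf = np.array([-0.20, 0.15, 0.10])
p_i = np.array([np.sqrt(m**2 + ki @ ki), *ki])
p_f = np.array([np.sqrt(m**2 + kf @ kf), *kf])
pion = p_f - p_i                              # pi^mu = p_f^mu - p_i^mu

# PS and PV matrix elements agree for every spin combination
for si in (0, 1):
    for sf in (0, 1):
        ubar = u_spinor(p_f, m, sf).conj() @ g0
        u = u_spinor(p_i, m, si)
        ps = ubar @ g5 @ u
        pv = ubar @ slash(pion) @ g5 @ u / (2 * m)
        assert np.isclose(ps, pv)
```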
In our case, however, the reaction $NN\rightarrow d\pi$ involves a bound state, the deuteron; reducing this process to one in which only elementary particles participate produces diagrams of higher order in the coupling constant than the Born graph. So the equivalence theorem cannot be applied to the processes considered here. Therefore, the vertex $\Gamma_5$ in our case can be written in the form of eq.(\ref{ONE5}), which is actually a linear combination of pseudoscalar and pseudovector couplings with the so-called mixing parameter $\lambda$. The $dNN$ vertex has been studied by Buck and Gross \cite{gross} within the framework of the Gross equation for nucleon-nucleon scattering. They used a one-boson-exchange (OBE) model with $\pi, \rho, \omega$ and $\sigma$ exchange. In their study, they suggest that the form factors $G^{{\cal P}{\cal S}}$ and $G^{{\cal P}{\cal V}}$ have the same $t$-dependence, in particular $G^{{\cal P}{\cal S}}(t) = G^{{\cal P}{\cal V}}(t) = h_N(t)$, and consider $\lambda=0.0, 0.2, 0.4, 0.6, 0.8$ and $1.0$. In each case, the parameters of the OBE model were adjusted to reproduce the static properties of the deuteron. They found that the total probability of the small components of the ${\cal D}{\cal W}{\cal F}$, $P_{small}=\int_0^\infty p^2dp\left[v_t^2(p)+v_s^2(p)\right]$, increases monotonically with growing $\lambda$, from approximately $0.03\%$ for $\lambda=0$ to approximately $1.5\%$ for $\lambda=1$. \vskip 5mm \noindent $\bullet~~$ {\it Second-order graphs}\\ Let us now consider the second-order graph corresponding to the rescattering of the virtual $\pi$-meson by the initial nucleon. This mechanism of the $NN\to \pi d$ process has been analyzed by many authors, see, for example, \cite{begin_rel,locher}. Our procedure for constructing the helicity amplitudes corresponding to the triangle graph is similar to that of Ref.\cite{locher}, so we present it only briefly.
The most important result of this integration is the nucleon spectator contribution where the nucleon labelled $\eta$ is on mass shell $(\eta^2 = m^2)$: \begin{equation} \begin{minipage}{6cm} \begin{center} \unitlength1cm \begin{picture}(5,3) \put(0,2.2){\vector(1,0){0.8}} \put(0.8,2.2){\line(1,0){0.7}} \put(1.5,2.2){\vector(1,0){1.3}} \put(2.8,2.2){\line(1,0){1.1}} \put(1.5,2.2){\line(1,-1){0.36}} \put(1.9,1.8){\vector(1,-1){0.36}} \put(2.3,1.4){\line(1,-1){0.36}} \put(2.7,1){\vector(1,1){0.7}} \put(3.2,1.5){\line(1,1){0.7}} \put(1.5,2.2){\circle*{0.1}} \thicklines \put(3.9,2.23){\line(1,0){1.5}} \put(3.9,2.17){\line(1,0){1.5}} \put(5.2,2.1){\makebox(0.2,0.1)[lb]{$>$}} \put(3.9,2.2){\circle*{0.25}} \thinlines \put(0,1){\vector(1,0){0.8}} \put(0.8,1){\line(1,0){1.9}} \multiput(2.7,1)(0.45,0){5}{\line(1,0){0.35}} \put(5,1){\vector(1,0){0.37}} \put(2.7,1){\circle*{0.15}} \put(0.1,2.4){\parbox[b]{1cm}{$p_2$}} \put(0.1,0.7){\parbox[b]{1cm}{$p_1$}} \put(5,2.4){\parbox[b]{1cm}{$d$}} \put(5,0.7){\parbox[b]{1cm}{$\pi$}} \put(2.6,2.4){\parbox{1cm}{$\eta$}} \put(3.5,1.5){\parbox{1cm}{$k$}} \put(1.7,1.5){\parbox{1cm}{$q$}} \put(1.3,2.45){\parbox[b]{1cm}{$\Gamma_5$}} \put(3.7,2.45){\parbox[b]{1cm}{$\bar\Gamma_\mu$}} \put(2.4,0.5){\parbox[b]{1cm}{$f_{\pi N}^{el}$}} \end{picture} \end{center} \end{minipage} \hfill \chi_\mu^{sp}= {g^+\over (2\pi)^3}\int h_\pi(q^2){{\cal F}_\mu\left(\vec\eta, \eta_0=\sqrt{\vec\eta^2+m^2}\right)\over q^2-\mu^2}{d^3\eta\over2\eta_0} \label{SO1} \end{equation} where $h_\pi(q^2)$ is the pion form factor corresponding to the off-mass-shell $\pi$-meson in the intermediate state; a monopole form has been chosen, $h_\pi(q^2) = (\Lambda^2 - \mu^2)/(\Lambda^2 - q^2)$, as in \cite{machleidt}; here $\Lambda$ is the corresponding cut-off parameter.
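For orientation, the monopole form factor and the numerical evaluation of a triple integral of the type (\ref{SO1}) can be sketched as follows (Gauss-Legendre nodes in each variable; the integrand here is a toy with a known answer, not the physical ${\cal F}_\mu$):

```python
import numpy as np

def h_pi(q2, Lam2, mu2=0.140**2):
    """Monopole pion form factor (Lambda^2 - mu^2)/(Lambda^2 - q^2),
    normalised to 1 at the pion pole q^2 = mu^2 (units: GeV^2)."""
    return (Lam2 - mu2) / (Lam2 - q2)

def gauss3d(f, eta_max, n=24):
    """Triple integral of a complex integrand f(eta, cos_theta, phi) over
    d^3eta = eta^2 d(eta) d(cos_theta) d(phi), Gauss-Legendre in each variable."""
    x, w = np.polynomial.legendre.leggauss(n)        # nodes/weights on [-1, 1]
    eta, w_eta = 0.5 * eta_max * (x + 1), 0.5 * eta_max * w
    cth, w_cth = x, w
    phi, w_phi = np.pi * (x + 1), np.pi * w
    total = 0j
    for e, we in zip(eta, w_eta):
        for c, wc in zip(cth, w_cth):
            for ph, wp in zip(phi, w_phi):
                total += we * wc * wp * e**2 * f(e, c, ph)
    return total

assert h_pi(0.140**2, 1.0) == 1.0
# toy check: the integral of (1 + i) e^{-eta} over all space equals 8*pi*(1 + i)
val = gauss3d(lambda e, c, ph: (1 + 1j) * np.exp(-e), 30.0)
assert np.allclose(val, 8 * np.pi * (1 + 1j))
```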
The general form of ${\cal F}_\mu$ can be written as follows: \begin{equation} {\cal F}_\mu = \Gamma_5 {\cal S}_{\cal F}^c(\eta) {\bar\Gamma}_\mu(d) {\cal S}_{\cal F}(k) f^{el}_{\pi N}~, \label{SO2} \end{equation} where $f^{el}_{\pi N}$ is the amplitude of $\pi N$ elastic scattering; it can be expanded in terms of two off-shell invariant amplitudes, $f^{el}_{\pi N}=A+B\widehat\pi$, which depend on the four-momenta. We compute $A$ and $B$ from the on-shell $\pi N$ partial wave amplitudes ${\cal T}_{l\pm}^{on}(s_{\pi N})$ under the assumption \begin{equation} {\cal T}_{l\pm}(s_{\pi N},t_{\pi N}, u_{\pi N}) \approx {\cal T}_{l\pm}^{on}(s_{\pi N})~, \label{SO3} \end{equation} where ${\cal T}_{l\pm}^{on}(s_{\pi N})$ are taken from the Karlsruhe-Helsinki phase shift analysis \cite{holer}. However, in the partial wave decomposition of the invariant functions, full off-shell angular momentum projectors are used for the lowest waves in the manner discussed for the $NN\to NN\pi$ reaction in Ref.\cite{kroll}. The triple integral (\ref{SO1}) over the azimuth $\varphi_\eta$, the polar angle $\vartheta_\eta$ and the magnitude of the three-momentum $\eta$ must be evaluated numerically; for this we used Gaussian quadrature. There are six triple integrals over a complicated complex integrand for each scattering angle. \section{Results and Discussions} In order to investigate the effect of the small components of the ${\cal D}{\cal W}{\cal F}$, we have calculated the differential cross section $d\sigma/d\Omega$ and the polarization characteristics $A_{ii}, A_{y0}$, etc. for $pp\to d\pi^+$ as a function of scattering angle at the proton kinetic energy $T_p=578\,$MeV, corresponding to a pion kinetic energy $T_\pi=147\,$MeV, because at this energy the probability of $\Delta$-isobar production by the two-step mechanism is rather sizeable. All the calculated quantities are in the Madison convention and are compared with the experimental data \cite{arndt} and the partial-wave analysis (${\cal P}{\cal W}{\cal A}$) by R. A. Arndt et al.
\cite{said} (dotted curve). The cut-off parameter $\Lambda$ and the mixing parameter $\lambda$ of the $\pi NN$ vertex are chosen by fitting the experimental differential cross section data. We have checked that the polarization curves change very little if we vary the cut-off parameter $\Lambda$. Note that the contribution of the triangle graph is very large at intermediate initial kinetic energies and much smaller at lower energies. This is caused by the large cross section of elastic $\pi N$ scattering, due to the possible creation of the $\Delta$-isobar at this energy. One can stress that the application of Locher's form of the ${\cal D}{\cal W}{\cal F}$ \cite{locher} does not allow one to reproduce the absolute value of the differential cross section (see Fig. 1.) over the whole region of the scattering angle $\vartheta$. But using the Gross approach for the ${\cal D}{\cal W}{\cal F}$ \cite{gross}, one can describe $d\sigma/d\Omega$ at $\lambda=0.6-0.8$ rather well. The next interesting result, which can be seen from Fig. (2-6), is the large sensitivity of all the polarization characteristics to the small components of the ${\cal D}{\cal W}{\cal F}$. The asymmetry $A_{y0}$ (Fig. 2.) and the vector polarization $iT_{11}$ (Fig. 3.) calculated within the framework of Gross's approach particularly show this large sensitivity. These quantities are interference dominated and sensitive to the phases. The results for $iT_{11}$ have a wrong sign with Locher's form of the ${\cal D}{\cal W}{\cal F}$ \cite{locher}. On closer inspection, we observe that the first term in eq.(\ref{Ob2}), $(\Phi^*_1-\Phi^*_3)\Phi_2$, is very large due to the constructive interference $\Phi_1\approx -\Phi_3$. It is caused by the $N\Delta$ configuration in a relative ${\cal S}$ wave having $pp$ spin zero ($^1{\cal D}_2$ state).
The $^1{\cal D}_2$ partial wave dominates, making $\Phi_{1,2,3}$ large, but it gives the same contribution to $\Phi^{{\cal J}=2}_1$ and $\Phi^{{\cal J}=2}_3$ (with opposite signs caused by the relevant Wigner $d$-function signature). Since the contribution of $\Phi_{4,5,6}$ is negligible, the sign problem for $iT_{11}$ is therefore very sensitive to the $\Phi^{{\cal J}=0}_2$ (or $^1{\cal S}_0$) partial wave. As $iT_{11}$ is very nearly proportional to $\Phi_2$, the phase of $\Phi_2$ determines the sign of $iT_{11}$. The right structure of the observables starts to appear gradually in the theoretical curves as one increases the mixing parameter $\lambda$ in the Buck-Gross model, that is to say, as one increases the probability of the small components in the ${\cal D}{\cal W}{\cal F}$. We have checked that this structure indeed originates from the small components $v_t$ and $v_s$ in eq.(\ref{ONE2}). If we set $v_t=v_s=0$ in the Buck-Gross model, then all curves become very similar to Locher's ones. Similarly, if we vary the $\pi NN$ vertex given by eq.(\ref{ONE5}) by considering $\lambda$ between $0$ and $1$ but keep Locher's ${\cal D}{\cal W}{\cal F}$, then the curves again change very little. The proton spin correlations $A_{ii}$ are presented in Fig. (4-6). Actually, the data on $A_{zz}$ (Fig. 4.) are a measure of the $\Phi_{4,5,6}$ magnitudes, because the deviation of $A_{zz}$ from $-1$ is determined by these amplitudes (\ref{Ob1}). According to the partial wave decomposition, $\Phi_4$ and $\Phi_6$ are the amplitudes containing only triplet spin states in the $pp$ channel. One can conclude that the magnitudes of the spin-triplet amplitudes are rather small. As for $A_{yy}$ (Fig. 5.) and $A_{xx}$ (Fig. 6.), the terms proportional to $\Phi_1 + \Phi_3$ can be neglected because of the phase relation $\Phi_1 \approx -\Phi_3$.
Therefore, the deviation of $A_{yy}$ and $A_{xx}$ from $-1$ is again determined by $\Phi_{4,6}$, whereas $\Phi_5$ does not contribute to the numerator of $A_{yy}$. One can also see a large sensitivity of the observables $A_{ii}$ to the form of the ${\cal D}{\cal W}{\cal F}$ used. The application of Gross's approach to the construction of the ${\cal D}{\cal W}{\cal F}$ \cite{gross} results in shapes of these characteristics that differ from the corresponding ones obtained within the framework of Locher's approach \cite{locher}. Note that the energy dependence of all the observables within the framework of the suggested approach is the subject of our next investigation. \section{Summary and Outlook} A relativistic model for the reaction $NN\to d\pi$ has been discussed in detail using two forms of the ${\cal D}{\cal W}{\cal F}$, \cite{locher} and \cite{gross}. One of them \cite{locher} was already used in the analysis of the $pp\to d\pi$ process, also taking into account the two-step mechanism with a virtual pion in the intermediate state. The difference between our approach and the model considered in \cite{locher} is the following: we have analyzed the sensitivity of all the observables to the form of the $\pi NN$ current and to the choice of the relativistic form of the ${\cal D}{\cal W}{\cal F}$. First of all, from the results presented in Fig. (1-6), one can see a very large sensitivity of all the observables, especially of the polarization characteristics, to the choice of the ${\cal D}{\cal W}{\cal F}$ form. The inclusion of the $P$-wave contribution in the ${\cal D}{\cal W}{\cal F}$ within the framework of Gross's approach \cite{gross} results in a better description of the experimental data on the differential cross section and the polarization observables. The next interesting result is related to the extraction of some new information on the off-shell effects due to a virtual (off-shell) nucleon. Comparing the observables with the experimental data (see Fig.
(1-6)), one can test the assumption, suggested in \cite{gross}, of a possible form of the pion form factor, and conclude that one cannot use the mixing parameter $\lambda=1$ as in \cite{locher}. One can stress that only the one-nucleon exchange and pion rescattering graphs have been studied in this paper, in order to investigate two very important effects: the off-mass-shellness of the nucleon and pion, and the $P$-wave contribution to the ${\cal D}{\cal W}{\cal F}$. The interactions in the initial $NN$ and final $d\pi$ states can in principle contribute to the total amplitude of the considered reaction. However, this will be a separate stage of this study, because a more careful inclusion of elastic $NN$ and $d\pi$ interactions at intermediate energies is needed. {\bf Acknowledgements.} We gratefully acknowledge very helpful discussions with E. A. Strokovsky, S. M. Dorkin, S. S. Semikh and F. Kleefeld. \section{Appendix} \noindent $\bullet~~${\it Helicity formalism.} \\ To calculate the observables, the differential cross sections and the polarization characteristics, it is very helpful to construct the helicity amplitudes of the considered process $NN\to d\pi$. We use the helicity formalism for this reaction presented in Ref.\cite{soffer}. Let us introduce the initial nucleon helicities $\mu_1,\mu_2$, the final deuteron helicity $\lambda$, and the helicity amplitudes $\bar{\cal M}^\lambda_{\mu_2,\mu_1}(W,\vartheta)$, depending on the initial energy $W$ in the $N-N$ c.m.s. and the scattering angle $\vartheta$, analogously to \cite{locher}. The amplitude $\bar{\cal M}^\lambda_{\mu_2,\mu_1}(W,\vartheta)$ corresponds to the transition of the $NN$ system from the state with helicities $\mu_1,\mu_2=\pm 1/2$ to the state with $\lambda=\pm 1,0$.
\noindent With respect to discrete symmetries, we have from {\it parity conservation} (\ref{INVAR2}): \begin{equation} {\cal M}_{\mu_2\mu_1}^\lambda= \eta_P(-1)^{(\mu_2-\mu_1)-\lambda}{\cal M}_{-\mu_2-\mu_1}^{-\lambda}= (-1)^{\mu_2+\mu_1+\lambda}{\cal M}_{-\mu_2-\mu_1}^{-\lambda}~. \label{HA1} \end{equation} {\it Time-reversal symmetry} (\ref{INVAR3}) leads to \begin{equation} {\cal M}_{\mu_2\mu_1}^\lambda= (-1)^{(\mu_2-\mu_1)-\lambda} {\cal M}^{\mu_2\mu_1}_{\lambda}~. \label{HA2} \end{equation} We use the same abbreviations for the helicity amplitudes as in \cite{illar}. Using the expansion (\ref{IA3}), one can obtain the following form of the helicity amplitudes: \begin{eqnarray} \Phi_{^1_3} = \bar{\cal M}_{++}^\pm = \mp{1\over\sqrt2}{\varepsilon\over m} \Big[ a_1^s \cos\vartheta \pm ia_2^a - a_3^a \sin\vartheta\Big],~ \Phi_2 = \bar{\cal M}_{++}^0 = {\varepsilon\varepsilon_d\over mM}\Big[ a_1^s \sin\vartheta + a_3^a \cos\vartheta \Big], \nonumber \\ \Phi_{^4_6} = \bar{\cal M}_{+-}^\pm = \mp{1\over\sqrt2}{p\over m} \Big[ b_1^s \cos\vartheta \pm ib_2^s - b_3^a \sin\vartheta \Big],~ \Phi_5 = \bar{\cal M}_{+-}^0 = {p\varepsilon_d\over mM} \Big[ b_1^s \sin\vartheta + b_3^a \cos\vartheta \Big], ~~\label{HA3} \end{eqnarray} where $\chi_i^{\{_a^s\}}(s,t,u)$ are the symmetric and antisymmetric combinations $\chi_i^{\{_a^s\}} = (\chi_i(\vartheta) \pm \chi_i(\pi - \vartheta))/\sqrt2$. All the symmetry properties (\ref{HA1}) are satisfied by these amplitudes. \vskip 5mm \noindent $\bullet~~${\it Observables. } \\ Using the helicity amplitudes (\ref{HA3}), one can calculate all the observables: the differential cross section, the asymmetry, the deuteron tensor polarization and so on. Let us now present the expressions for the following observables in the c.m.s.
using $\Phi_i$: \begin{eqnarray} A_{y0}& = &4Im(\Phi_1\Phi^*_4+\Phi_2\Phi^*_5+\Phi_3\Phi^*_6)\Sigma^{-1},~~~ A_{0y}(\theta)=A_{y0}(\pi-\theta), \nonumber\\ A_{xz}& = &-4Re(\Phi_1\Phi^*_4+\Phi_2\Phi^*_5+\Phi_3\Phi^*_6)\Sigma^{-1},~~~ A_{zx}(\theta)=A_{xz}(\pi-\theta), \nonumber \\ A_{zz}& = &-1+4(|\Phi_4|^2+|\Phi_5|^2+|\Phi_6|^2)\Sigma^{-1}, \nonumber \\ A_{yy}& = &-1+2(|\Phi_1+\Phi_3|^2+|\Phi_4+\Phi_6|^2)\Sigma^{-1}, \nonumber \\ A_{xx}& = &A_{zz}+2(|\Phi_1+\Phi_3|^2-|\Phi_4+\Phi_6|^2)\Sigma^{-1}. \label{Ob1} \end{eqnarray} The expressions for the deuteron tensor polarization components are the following: \begin{eqnarray} iT_{11} & = &-\sqrt{6}Im\left [(\Phi^*_1-\Phi^*_3)\Phi_2+ (\Phi^*_4-\Phi^*_6)\Phi_5 \right]\Sigma^{-1}, \nonumber \\ T_{20} & = &\left[1-6(|\Phi_2|^2+|\Phi_5|^2)\Sigma^{-1}\right]/\sqrt{2}, \nonumber \\ T_{21} & = &\sqrt{6}Re\left[(\Phi^*_1-\Phi^*_3)\Phi_2+ (\Phi^*_4-\Phi^*_6)\Phi_5\right]\Sigma^{-1}, \nonumber \\ T_{22} & = &2\sqrt{3}Re(\Phi^*_1\Phi_3+\Phi^*_4\Phi_6)\Sigma^{-1}= (1+3A_{yy}-\sqrt{2}T_{20})/(2\sqrt{3})~. \label{Ob2} \end{eqnarray} The variable $\Sigma$ is related to the differential cross section as \begin{equation} \Sigma = 2 \sum_{i=1}^6{|\Phi_i|}^2 = 4\frac{p}{k}{(\frac{m}{4\pi\sqrt{s}})}^{-2} \frac{d\sigma}{d\Omega} = {1\over\sigma_0}\frac{d\sigma}{d\Omega}~, \label{Ob3} \end{equation} where $p$ and $k$ are the momenta of the initial proton and final deuteron in the c.m.s.
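The algebraic consistency of eqs.(\ref{Ob1}) and (\ref{Ob2}), in particular the relation $T_{22}=(1+3A_{yy}-\sqrt{2}T_{20})/(2\sqrt{3})$, can be verified numerically for arbitrary complex amplitudes; the Python sketch below uses random values with no physics input:

```python
import numpy as np

rng = np.random.default_rng(1)
# arbitrary complex helicity amplitudes Phi_1..Phi_6 (no physics input)
Phi = rng.normal(size=6) + 1j * rng.normal(size=6)
P1, P2, P3, P4, P5, P6 = Phi

Sigma = 2 * np.sum(np.abs(Phi) ** 2)
A_zz = -1 + 4 * (abs(P4)**2 + abs(P5)**2 + abs(P6)**2) / Sigma
A_yy = -1 + 2 * (abs(P1 + P3)**2 + abs(P4 + P6)**2) / Sigma
A_xx = A_zz + 2 * (abs(P1 + P3)**2 - abs(P4 + P6)**2) / Sigma
T_20 = (1 - 6 * (abs(P2)**2 + abs(P5)**2) / Sigma) / np.sqrt(2)
T_22 = 2 * np.sqrt(3) * np.real(np.conj(P1) * P3 + np.conj(P4) * P6) / Sigma

# the relation between T_22, A_yy and T_20 quoted in eq. (Ob2)
assert np.isclose(T_22, (1 + 3 * A_yy - np.sqrt(2) * T_20) / (2 * np.sqrt(3)))
# A_zz is bounded, since |Phi_4|^2 + |Phi_5|^2 + |Phi_6|^2 <= Sigma/2
assert -1.0 <= A_zz <= 1.0
```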
\section{Introduction} \label{sec:intro} The LHC accelerator will be upgraded to deliver instantaneous peak luminosities of $5-7.5 \times 10^{34}\,\mathrm{cm}^{-2}\mathrm{s}^{-1}$ with 140-200 collisions per bunch crossing. This will allow ATLAS and CMS to collect integrated luminosities of the order of 300$\,\mathrm{fb}^{-1}$ per year and up to 3000$\,\mathrm{fb}^{-1}$ during the HL-LHC projected lifetime of ten years. Under the hypothesis of a replacement after the end of Run 5, the CMS tracker will have to endure a fluence of $2\times10^{16}\,\frac{\mathrm{n}_\mathrm{eq}}{\mathrm{cm}^2}$ for its first layer~\cite{CMS}. The CMS Inner Tracker (IT) will undergo a complete upgrade, known as the Phase-2 upgrade, in which increased granularity will enhance track and vertex reconstruction and cope with the increased trigger rate. In this regard, pixel sensor prototypes are being developed in collaboration with LFoundry~\cite{LF} using CMOS technology, which is widely used in the semiconductor industry. The large-scale production and the possibility of small on-pixel structures make these sensors cheaper and faster to produce, possibly with better performance compared to standard productions in high energy physics (HEP). The samples have a 150$\,\mu\mathrm{m}$ thickness with a pixel size of 25$\times$100$\,\mu\mathrm{m}^2$ and 50$\times$50$\,\mu\mathrm{m}^2$, are irradiated with 23$\,$MeV protons at the Irradiation Center in Karlsruhe~\cite{KIT}, and are either DC or AC coupled to the RD53A readout chip (ROC). The RD53A ROC is a prototype for the development of the final readout chip for the Phase-2 upgrade of the CMS IT and the ATLAS pixel detector~\cite{RD53}. It consists of 192 rows and 400 columns with a pitch of $50\times50\,\mu\mathrm{m}^2$. The measurements were done using the DESY test beam infrastructure~\cite{DESY}. The measurements on the non-irradiated CMOS sensors can be found in Ref.~\cite{non-irradiated}.
\section{LFoundry CMOS technology} \label{sec:CMOS} CMOS technology is widely used in the semiconductor industry. Compared to the current technologies used in HEP detectors, it offers the possibility of small on-pixel structures. Features like 4 to 6 metal layers for signal redistribution, low- and high-resistivity polysilicon layers, and the possibility of mitigating leakage current noise with AC-coupled sensors improve the performance of silicon sensors. The samples in this study are passive planar n-in-p sensors for hybrid detector modules built in CMOS technology using the 150$\,$nm production line of LFoundry. The wafers are produced by implementing structures on reticle building blocks and applying the reticles to the photoresist. The reticles are then stitched together to build up the layout on 8-inch wafers. As most production costs are per wafer, this will reduce the total production cost compared to the standard 6-inch wafers used for tracking detectors in HEP. \section{Test beam results of irradiated RD53A modules} \label{sec:results} Four RD53A modules with LFoundry CMOS sensors, listed in Table~\ref{tab:modules}, were irradiated with 23$\,$MeV protons at the Irradiation Center in Karlsruhe~\cite{KIT}. Three modules were DC coupled and had a pixel pitch of 25$\times$100$\,\mu\mathrm{m}^2$, while one module was AC coupled and had a pixel pitch of 50$\times$50$\,\mu\mathrm{m}^2$. The devices were tested with a 5.2$\,$GeV electron beam at the DESY test beam facility. The tracks traversing the device under test (DUT) are reconstructed using an AIDA telescope~\cite{AIDA}. The irradiated RD53A modules are installed in a cooling box at $-35\,^\circ$C which can be precisely positioned and oriented. The RD53A data are read out with BDAQ53~\cite{BDAQ} and the event reconstruction is performed with the EUDAQ data acquisition framework~\cite{EUDAQ}.
\begin{table}[htbp] \centering \caption{\label{tab:modules} List of irradiated RD53A modules with CMOS sensors.} \smallskip \begin{tabular}{|c|c|c|c|} \hline Pitch ($\mu\mathrm{m}^2$) & Type & Fluence ($\times 10^{15}\,\frac{\mathrm{n}_\mathrm{eq}}{\mathrm{cm}^2}$) & Threshold (e)\\ \hline 25 $\times$ 100 & DC & 10 & 1192\\ 25 $\times$ 100 & DC & 2.1 & 1240\\ 25 $\times$ 100 & DC & 9.2 & 1219\\ 50 $\times$ 50 & AC & 5 & 1208\\ \hline \end{tabular} \end{table} \subsection{Cluster hit efficiency} \label{sec:efficiency} The hit efficiency is defined as the ratio of the number of hits in the detector to the number of tracks passing through its active area. The efficiency increases with the bias voltage, as shown in Figure~\ref{fig:efficiency_res} (left). The probability of collecting a charge above the readout threshold, and therefore the efficiency, decreases with irradiation, as also discussed in Section~\ref{sec:charge}. The required efficiency of 99\% is achieved for all the irradiated modules. \begin{figure}[htbp] \centering \includegraphics[width=.47\textwidth]{figures/EffDESY.pdf} \qquad \includegraphics[width=.47\textwidth]{figures/ResDESYLast.pdf} \caption{\label{fig:efficiency_res} Hit efficiency (left) and position resolution (right) of irradiated RD53A modules with CMOS sensors.} \end{figure} \subsection{Position resolution} \label{sec:resolution} The position resolution of a pixel detector is defined as the standard deviation of the difference between the reconstructed position and the true position of the passing track. The measured position resolution is shown in Figure~\ref{fig:efficiency_res} (right) for two of the irradiated detectors in Table~\ref{tab:modules}. The distribution of the charge in the clusters, and therefore the precision of the position reconstruction, depends on the incident angle.
For single-pixel clusters, which are dominant for perpendicular tracks, the position resolution is essentially determined by the pixel pitch, while for double-pixel clusters the ratio of the charge in the two pixels leads to a more precise position measurement. At high incident angles the uncertainty on the position starts to increase due to the digitization of relatively low pixel charges. \subsection{Charge collection measurements} \label{sec:charge} The readout chip represents the signal amplitude with a digitized measurement of the time over threshold (TOT). Using the injection circuit of the RD53A chip, one can inject arbitrary amounts of charge into individual readout pixels. Besides other advantages for pixel-to-pixel calibrations and measurements, the charge injection is used to construct a mapping of TOT as a function of injected charge. The charge is given in the $\Delta$VCAL unit, a readout chip variable corresponding to about 10 electrons. This map can then be inverted so that for each given TOT of each individual pixel the corresponding charge can be evaluated. This provides a pixel-to-pixel calibration of the cluster charges of test beam events. Figure~\ref{fig:chargeDist_mean} depicts the cluster charge distributions for the four irradiated LFoundry modules. With increasing fluence, the probability of charge trapping is higher and the depleted volume shrinks. Therefore, for a fixed applied voltage, the collected charge decreases with fluence. The charge deposition in the silicon bulk can be described by a Landau distribution convoluted with the Gaussian contribution of the noise. Figure~\ref{fig:chargeDist_mean} shows the dependence of the collected charge on the fluence and the applied bias voltage. Less irradiated modules generally have a higher collected charge that saturates at lower bias voltages.
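The inversion of the TOT calibration described above can be sketched in Python as follows (the injection-scan numbers below are invented for illustration; in practice one such table is stored per pixel):

```python
import numpy as np

# hypothetical injection scan of one pixel: injected charge in Delta-VCAL
# units (1 unit ~ 10 electrons) versus the measured mean TOT code
inj_charge = np.array([20., 40., 60., 80., 100., 140., 180.])
tot_mean = np.array([1., 3., 5., 7., 8., 10., 12.])

def tot_to_charge(tot):
    """Map an observed TOT code back to a charge estimate by inverting the
    (monotonically increasing) TOT-vs-charge response with linear interpolation."""
    return np.interp(tot, tot_mean, inj_charge)

assert tot_to_charge(5) == 60.0      # round trip at a calibration point
assert tot_to_charge(9) == 120.0     # interpolated between TOT 8 and 10
```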
\begin{figure}[htbp] \centering \includegraphics[width=.29\textwidth]{figures/chargeDist_comp_450.png} \qquad \includegraphics[width=.29\textwidth]{figures/chargeDist_comp_max.png} \qquad \includegraphics[width=.3\textwidth]{figures/chargeLandauVSbias_comp.png} \caption{\label{fig:chargeDist_mean} Distribution of the collected charge for the four irradiated RD53A modules with LFoundry sensors at a fixed bias voltage of 450$\,$V (left) and at the maximum bias voltage the modules reached in the test beam (center). The mean collected charge at each bias voltage decreases with fluence (right).} \end{figure} \subsection{High incident angles} \label{sec:highEta} The incident angle of charged particles in the first layer of the upgraded CMS pixel detector will be up to 75--85 degrees in the high pseudorapidity ($\eta$) regions. Charged particles with high incident angles deposit charge in long clusters of pixels. For pixels of 25$\times$100$\,\mu\mathrm{m}^2$ size, the tracks at high $\eta$ release charge along the 100$\,\mu$m side, with an effective path in silicon per pixel smaller than for perpendicular tracks. This can lead to broken clusters, which are challenging to reconstruct. We have studied the cluster breaking rate for the non-irradiated LFoundry modules using the test beam data taken at high incident angles. After aligning the telescope tracks with the pixel hits, a road can be defined for each track that passes through the silicon detector, consisting of the pixels along the track path. The length of a road is represented by the number of pixel columns it contains, where each column has a width of 100$\,\mu$m. Figure~\ref{fig:highEta} shows the distribution of the path length, expressed in the number of traversed columns, for the reconstructed tracks at different incident angles.
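The geometrically expected road length at a given incidence angle can be estimated from the sensor thickness and the column width; a minimal sketch (assuming the 150$\,\mu$m thickness and 100$\,\mu$m columns quoted above, and neglecting multiple scattering):

```python
import math

def expected_columns(theta_deg, thickness_um=150.0, pitch_um=100.0):
    """Purely geometric estimate of the number of pixel columns traversed
    by a track at angle theta from the sensor normal."""
    path_um = thickness_um * math.tan(math.radians(theta_deg))
    return int(path_um // pitch_um) + 1

assert expected_columns(0) == 1      # perpendicular track: a single column
assert expected_columns(80) == 9     # tan(80 deg) ~ 5.67, i.e. ~850 um path
```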
The distributions peak at the geometrically expected number of pixels, with a left tail corresponding to the tracks that were not reconstructed at the desired angles, due to multiple scattering and other uncertainties on the track incident angle. Keeping only the clean tracks from the peak of this distribution, one can loop through the pixels along this road that are expected to have a hit, and measure the reconstruction efficiency at each column position as depicted in Figure~\ref{fig:highEta}, right. The high reconstruction efficiency is promising, but the feasibility of tracking at high $\eta$ has to be validated by measurements with irradiated modules. \begin{figure}[htbp] \centering \includegraphics[width=.3\textwidth]{figures/roadncol.png} \qquad \includegraphics[width=.3\textwidth]{figures/roadbreakage_dist_freq_prof.png} \caption{\label{fig:highEta} Expected length of clusters for tracks at high incident angle (left) and the pixel efficiency of reconstructing a pixel along the track road at each incident angle (right).} \end{figure} \section{Conclusions} \label{sec:conclusion} Irradiated RD53A modules with passive CMOS sensors have been studied. The CMOS technology offers small on-pixel structures, which can improve the performance of pixel detectors by implementing AC coupling or by using the intermediate metal layers for rerouting the signals between pixel cells and read-out chip channels. The use of the stitching method to produce large wafers reduces the production cost, even if the initial set-up can be expensive. Measurements in the laboratory and at test beams show efficiency and resolution comparable to standard productions~\cite{standard}, meeting the Phase-2 requirements for the CMS pixel detector. \acknowledgments We are very grateful to our colleagues of the University of Bonn, Y. Dieter, J. Dingfelder, \\T. Hemperek, F. Huegging, D. Pohl, T. Wang, and N. Wermes, as well as D.
Münstermann of the University of Lancaster, who carried out the design and production of these LFoundry samples.
\section{Introduction} Recent results indicate that if one assumes the existence of a fundamental theory in eleven dimensions (let's call it the `M theory'\footnote{This name was suggested by E. Witten.}), this provides a powerful heuristic basis for understanding various results in string theory. For example, type II superstrings can be understood as arising from a supermembrane in eleven dimensions \cite{bergshoeff} by wrapping one dimension of a toroidal supermembrane on a circle of the spatial geometry \cite{duffa,duffstelle,townsend11,witten}. Similarly, when the spatial geometry contains a $K3$, one can obtain a heterotic string by wrapping a five-brane with the topology of $K3 \times S^1$ on the $K3$ \cite{harvey,townsend7}. This provides a very simple heuristic for understanding `string-string duality' between type IIA and heterotic strings in six dimensions \cite{hulla,duffsix,sensix,harvey,vafawitten,duffliu}. One simply considers the M theory on $\bb{R}^6 \times S^1 \times K3$. This obviously contains both type II strings and heterotic strings, arising by the two wrappings just described. Moreover, since the membrane and 5-brane are electric-magnetic duals in 11 dimensions, the two strings are dual in six dimensions, and so it is natural that the strong-coupling expansion of one corresponds to the weak-coupling expansion of the other. The remarkable thing about this kind of reasoning is that it works even though we don't understand how to formulate the M theory as a quantum theory. It is tempting to say that the success of the heuristic arguments that have been given previously, and those that will be given here, suggests that there really is a well-defined quantum M theory even when perturbative analysis is not applicable. The only thing that now appears to be special about strings is the possibility of defining a perturbation expansion. In other respects, all $p$-branes seem to be more or less equal \cite{townsendb,becker}.
Recently, I have analyzed heuristic relationships between Type II strings and the M theory \cite{twob}. The approach was to compare the 9D spectrum of the M theory on $\bb{R}^9\times T^2$ with the IIB theory on $\bb{R}^9\times S^1$. A nice correspondence was obtained between states arising from the supermembrane of the M theory and the strings of the IIB theory. The purpose of this paper is to extend the analysis to include higher $p$-branes of both theories, and to see what can be learned from imposing the natural identifications. Let us begin by briefly recalling the results obtained in \cite{twob}. We compared the M theory compactified on a torus of area $A_{M}$ in the canonical 11D metric $g^{(M)}$ with the IIB theory compactified on a circle of radius $R_B$ (and circumference $L_B = 2 \pi R_B$) in the canonical 10D IIB metric $g^{(B)}$. The canonical IIB metric is the convenient choice, because it is invariant under the $SL(2,\bb{R})$ group of IIB supergravity. By matching the 9D spectra of the two models (especially for BPS saturated states), the modular parameter $\tau$ of the torus was identified with the modulus $\lambda_0 = \chi_0 + i e^{-\phi_0}$, which is the vev of the complex scalar field of the IIB theory. This identification supports the conjectured non-perturbative $SL(2,\bb{Z})$ duality symmetry of the IIB theory. (This was also noted by Aspinwall \cite{aspinwall}.) A second result was that the IIB theory has an infinite spectrum of strings, which forms an $SL(2,\bb{Z})$ multiplet. The strings, labelled by a pair of relatively prime integers $(q_1,q_2)$, were constructed as solutions of the low-energy 10D IIB supergravity theory using results in Refs. \cite{dabholkara,hullb}. 
They have an $SL(2,\bb{Z})$ covariant spectrum of tensions given by \begin{equation} T_{1q}^{(B)} = \Delta_q^{1/2} T_1^{(B)}, \label{t1qb} \end{equation} where $T_1^{(B)}$ is a constant with dimensions of mass-squared, which defines the scale of the theory, and\footnote {Equation (\ref{Delta}) was given incorrectly in the original versions of my previous papers \cite{twob}. Also, $T_1^{(B)}$ and $T_2^{(M)}$ were called $T$ and $T_{11}$, and $A_M$ was called $A_{11}$. A more systematic notation is now desirable.} \begin{equation} \Delta_q = e^{\phi_{0}} (q_1 - q_2 \chi_0 )^2 + e^{-\phi_{0}} q_2^2.\label{Delta} \end{equation} Note that strings with $q_2 \neq 0$, those carrying RR charge, have tensions that, for small string coupling $g_B = e^{\phi_0}$, scale as $g_B^{-1/2}$. The usual $(1,0)$ string, on the other hand, has $T \sim g_B^{1/2}$. In the string metric, these become $g_B^{-1}$ and $1$, respectively. The mass spectrum of point particles (zero-branes) in nine dimensions obtained from the two different viewpoints was brought into agreement (for BPS saturated states, in particular) by identifying winding modes of the family of type IIB strings on the circle with Kaluza--Klein modes of the torus and by identifying Kaluza--Klein modes of the circle with wrappings of the supermembrane (2-brane) on the torus.\footnote{The rule that gave sensible results was to allow the membrane to cover the torus any number of times (counting orientation), and to identify all the different ways of doing so as equivalent. For other problems (such as Strominger's conifold transitions \cite{strominger}) a different rule is required. As yet, a single principle that gives the correct rule for all such problems is not known. I am grateful to A. Strominger and A. Sen for correspondence concerning this issue.} The 2-brane of the M theory has a tension (mass per unit area) in the 11D metric denoted $T_2^{(M)}$. 
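As a quick numerical illustration (my own, with hypothetical parameter values), one can evaluate $\Delta_q$ of eq. (\ref{Delta}) and confirm the weak-coupling scalings just quoted, working in units of $T_1^{(B)}$:

```python
import math

def delta_q(q1, q2, chi0, g_B):
    # Delta_q = e^{phi_0} (q1 - q2*chi0)^2 + e^{-phi_0} q2^2, with g_B = e^{phi_0}
    return g_B * (q1 - q2 * chi0) ** 2 + q2 ** 2 / g_B

g_B = 1.0e-6                                # weak coupling; chi_0 = 0 for simplicity
T_f = math.sqrt(delta_q(1, 0, 0.0, g_B))    # (1,0) string: tension ~ g_B^{1/2}
T_D = math.sqrt(delta_q(0, 1, 0.0, g_B))    # (0,1) RR-charged string: ~ g_B^{-1/2}
```

Their ratio $T_D/T_f = 1/g_B$ is metric independent, consistent with the string-metric behaviors $g_B^{-1}$ and $1$ stated above.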
If one introduces a parameter $\beta$ to relate the two metrics ($g^{(B)} = \beta^2 g^{(M)}$), then one finds the following relations \cite{twob} \begin{equation} (T_1^{(B)} L_B^2)^{-1} = {1 \over (2 \pi)^2} T_2^{(M)} A_{M}^{3/2}, \label{rbeqn} \end{equation} \begin{equation} \beta^2 = A_{M}^{1/2} T_2^{(M)}/T_1^{(B)}. \label{betaeqn} \end{equation} Both sides of eq. (\ref{rbeqn}) are dimensionless numbers, which are metric independent, characterizing the size of the compact spaces. Note that, since $T_1^{(B)}$ and $T_2^{(M)}$ are fixed constants, eq. (\ref{rbeqn}) implies that $L_B \sim A_{M}^{-3/4}$. Strings (1-branes) in nine dimensions were also matched. A toroidal 2-brane with one of its cycles wrapped on the spatial two-torus was identified with a type IIB string. When the wrapped cycle of the 2-brane is mapped to the $(q_1, q_2)$ homology class of the spatial torus and taken to have minimal length $L_q = (A_M/\tau_2)^{1/2} |q_2\tau -q_1| = (A_M \Delta_q)^{1/2}$, this gives a spectrum of string tensions in the 11D metric $T_{1q}^{(M)} = L_q T_2^{(M)}$. Converting to the IIB metric by $T_{1q}^{(B)} = \beta^{-2}T_{1q}^{(M)} $ precisely reproduces the previous formula for $T_{1q}^{(B)}$ in eq. (\ref{t1qb}), which therefore supports the proposed interpretation. \section{More Consequences of M/IIB Duality} Having matched 9D point particles (0-branes) and strings (1-branes) obtained from the IIB and M theory pictures, let us now explore what additional information can be obtained by also matching $p$-branes with $p = 2,3,4,5$ in nine dimensions.\footnote{For useful background on $p$-branes see Refs. \cite{horowitz,townsend92,duffreview}.} It should be emphasized that even though we use extremely simple classical reasoning, it ought to be precise (assuming the existence of an M theory), because we only consider $p$-branes whose tensions are related to their charges by saturation of a BPS bound. 
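The identification just described can be spot-checked numerically. The sketch below (with arbitrary positive values of my own for the scales and moduli; the agreement is exact for any choice) verifies that the wrapped-membrane tension, converted to the IIB metric with $\beta^{-2}$, reproduces eq. (\ref{t1qb}):

```python
import math

T1B, T2M, AM = 0.7, 1.3, 2.5      # T_1^{(B)}, T_2^{(M)}, A_M (arbitrary)
g_B, chi0 = 0.9, 0.4              # moduli: lambda_0 = chi_0 + i/g_B
q1, q2 = 3, 2                     # relatively prime string charges

beta2 = math.sqrt(AM) * T2M / T1B                     # eq. (4): beta^2
Delta = g_B * (q1 - q2 * chi0) ** 2 + q2 ** 2 / g_B   # eq. (2): Delta_q
L_q = math.sqrt(AM * Delta)                           # minimal (q1, q2) cycle length
T1q_B = L_q * T2M / beta2                             # wrapped 2-brane, IIB metric
```

One finds $T_{1q}^{(B)} = \Delta_q^{1/2}\, T_1^{(B)}$ independently of $A_M$, as claimed.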
This means that the relations that are obtained should not receive perturbative or non-perturbative quantum corrections. This assumes that the supersymmetry remains unbroken, which is certainly believed to be the case. We begin with $p=2$. In the M theory the 2-brane in 9D is the same one as in 11D. In the IIB description it is obtained by wrapping an $S^1$ factor in the topology of a self-dual 3-brane once around the spatial circle. Denoting the 3-brane tension by $T_3^{(B)}$, its wrapping gives a 2-brane with tension $L_B T_3^{(B)}$. Converting to the 11D metric and identifying the two 2-branes gives the relation \begin{equation} T_2^{(M)} = \beta^3 L_B T_3^{(B)}. \end{equation} Using eqs. (\ref{rbeqn}) and (\ref{betaeqn}) to eliminate $L_B$ and $\beta$ leaves the relation \begin{equation} T_3^{(B)} = {1 \over 2\pi}\Big( T_1^{(B)} \Big)^2 . \label{threeb} \end{equation} The remarkable thing about this result is that it is a relation that pertains entirely to the IIB theory, even though it was deduced from a comparison of the IIB theory and the M theory. It should also be noted that the tension $T_3^{(B)}$ is independent of the string coupling constant, which implies that in the string metric it scales as $g_B^{-1}$. Next we consider 3-branes in nine dimensions. The only way they can arise in the M theory is from wrapping a 5-brane of suitable topology (once) on the spatial torus. In the IIB theory the only 3-brane is the one already present in ten dimensions. Identifying the tensions of these two 3-branes gives the relation \begin{equation} T_5^{(M)} A_{M} = \beta^4 T_3^{(B)}. \end{equation} Eliminating $\beta$ and substituting eq. (\ref{threeb}) gives \begin{equation} T_5^{(M)} = {1 \over 2 \pi}\Big( T_2^{(M)}\Big)^2 . \label{t5m} \end{equation} This result pertains entirely to the M theory. Section 3 of ref. \cite{duffliu} analyzed the implication of the Dirac quantization rule \cite{nepomechie} for the charges of the 2-brane and 5-brane in the M theory. 
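The eliminations leading to eqs. (\ref{threeb}) and (\ref{t5m}) are easily verified numerically. The following sketch uses arbitrary positive values for $T_1^{(B)}$, $T_2^{(M)}$ and $A_M$ (any choice works, since the relations are identities):

```python
import math

T1B, T2M, AM = 0.7, 1.3, 2.5                          # arbitrary positive scales
beta = (math.sqrt(AM) * T2M / T1B) ** 0.5             # from the relation for beta^2
L_B = 2 * math.pi / math.sqrt(T1B * T2M * AM ** 1.5)  # eq. (3) solved for L_B

T3B = T2M / (beta ** 3 * L_B)   # 2-brane matching: T2M = beta^3 * L_B * T3B
T5M = beta ** 4 * T3B / AM      # 3-brane matching: T5M * AM = beta^4 * T3B
```

Both outputs are independent of $A_M$, confirming $T_3^{(B)} = (T_1^{(B)})^2/2\pi$ and $T_5^{(M)} = (T_2^{(M)})^2/2\pi$.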
It was concluded that (in my notation) $\pi T_5^{(M)}/ (T_2^{(M)})^2$ should be an integer. The present analysis says that it is $1/2$. Indeed, I believe that eq. (\ref{t5m}) corresponds to the minimum product of electric and magnetic charges allowed by the quantization condition. It is amusing that simple classical reasoning leads to a non-trivial quantum result. Next we compare 4-branes in nine dimensions. The IIB theory has an infinite $SL(2,\bb{Z})$ family of 5-branes. These are labeled by a pair of relatively prime integers $(q_1,q_2)$, just as the IIB strings are. The reason is that they carry a pair of magnetic charges that are dual to the pair of electric charges carried by the strings. Let us denote the tensions of these 5-branes in the IIB metric by $T_{5q}^{(B)}$. Wrapping each of them once around the spatial circle gives a family of 4-branes in nine dimensions with tensions $L_B T_{5q}^{(B)}$. In the M theory we can obtain 4-branes in nine dimensions by considering 5-branes with an $S^1$ factor in their topology and mapping the $S^1$ to a $(q_1,q_2)$ cycle of the spatial torus. Just as when we wrapped the 2-brane this way, we assume that the cycle is as short as possible, {\it i.e.,} its length is $L_q$. Identifying the two families of 4-branes obtained in this way gives the relation \begin{equation} L_q T_5^{(M)} = \beta^5 L_B T_{5q}^{(B)}. \end{equation} Substituting the relations \cite{twob} \begin{equation} L_q = \Delta_q^{1/2} T_1^{(B)} \beta^2 / T_2^{(M)} \end{equation} and \begin{equation} L_B \beta^3 = 2 \pi T_2^{(M)} / \big(T_1^{(B)}\big)^2 \label{rbthree} \end{equation} and using eq. (\ref{t5m}) gives \begin{equation} T_{5q}^{(B)} = {1 \over (2 \pi )^2} \Delta_q^{1/2} \Big( T_1^{(B)}\Big)^3. \label{fiveqb} \end{equation} This relation pertains entirely to the IIB theory. Since 5-brane charges are dual to 1-brane charges, they transform contragrediently under $SL(2,\bb{R})$. 
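The same kind of numerical spot-check (again with arbitrary positive inputs of my own) recovers eq. (\ref{fiveqb}) from the 4-brane matching condition together with eqs. (\ref{t5m})--(\ref{rbthree}):

```python
import math

T1B, T2M, AM = 0.7, 1.3, 2.5
g_B, chi0 = 0.9, 0.4
q1, q2 = 3, 2

beta = (math.sqrt(AM) * T2M / T1B) ** 0.5
Delta = g_B * (q1 - q2 * chi0) ** 2 + q2 ** 2 / g_B   # Delta_q
L_q = math.sqrt(Delta) * T1B * beta ** 2 / T2M        # minimal cycle length
LB_beta3 = 2 * math.pi * T2M / T1B ** 2               # the combination L_B * beta^3
T5M = T2M ** 2 / (2 * math.pi)

# 4-brane matching: L_q * T5M = beta^5 * L_B * T5qB, solved for T5qB
T5qB = L_q * T5M / (beta ** 2 * LB_beta3)
```

The result is $T_{5q}^{(B)} = \Delta_q^{1/2} (T_1^{(B)})^3/(2\pi)^2$, with all dependence on $A_M$ cancelling.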
This means that {\it in this case} $q_1$ is a magnetic R-R charge and $q_2$ is a magnetic NS-NS charge. Thus 5-branes with pure R-R charge have a tension that scales as $g_B^{1/2}$ and ones with any NS-NS charge have tensions that scale as $g_B^{-1/2}$. Converting to the string metric, these give $g_B^{-1}$ and $g_B^{-2}$, respectively. Of course, $g_B^{-2}$ is the characteristic behavior of ordinary solitons, whereas $g_B^{-1}$ is the remarkable intermediate behavior that is characteristic of all $p$-branes carrying R-R charge. It is gratifying that these expected properties emerge from matching M theory and IIB theory $p$-branes. We have now related all 1-brane, 3-brane, and 5-brane tensions of the IIB theory in ten dimensions, so that they are determined by a single scale. We have also related the 2-brane and 5-brane tensions of the M theory in eleven dimensions, so they are also given by a single scale. The two sets of scales can only be related to one another after compactification, however, as the only meaningful comparison is provided by eqs. (\ref{rbeqn}) and (\ref{betaeqn}). All that remains to complete this part of the story is to compare 5-branes in nine dimensions. Here something a little different happens. As is well-known, compactification on a space $K$ with isometries (such as we are considering), so that the complete manifold is $K \times \bb{R}^d$, gives rise to massless vectors in $d$ dimensions. Electric charges that couple to these vectors correspond to Kaluza--Klein momenta and are carried by point-like 0-branes. The dual magnetic objects are $(d-4)$-branes. This mechanism therefore contributes ``Kaluza--Klein 5-branes'' in nine dimensions. However, which 5-branes are the Kaluza--Klein ones depends on whether we consider the M theory or the IIB theory. 
The original 5-brane of the M theory corresponds to the unique Kaluza--Klein 5-brane of the IIB theory, and the $SL(2,\bb{Z})$ family of 5-branes of the IIB theory corresponds to the Kaluza--Klein 5-branes of the M theory. The point is that there are three vector fields in nine dimensions which transform as a singlet and a doublet of the $SL(2,\bb{R})$ group. The singlet arises \`a la Kaluza--Klein in the IIB theory and from the three-form gauge field in the M theory. Similarly, the doublet arises from the doublet of two-form gauge fields in the IIB theory and \`a la Kaluza--Klein in the M theory. We can now use the identifications described above to deduce the tensions of Kaluza--Klein 5-branes in nine dimensions. The KK 5-brane of the IIB theory is identified with the fundamental 5-brane of the M theory, which implies that its tension is $T_5^{(B)} = \beta^{-6} T_5^{(M)}$. Combining this with eq.~(\ref{rbthree}) gives \begin{equation} T_5^{(B)} = {1 \over (2\pi)^3} L_B^2 \big(T_1^{(B)}\big)^4 . \end{equation} Note that this diverges as $L_B \to \infty$, as is expected for a Kaluza--Klein magnetic $p$-brane. Similarly the $SL(2,\bb{Z})$ multiplet of KK 5-branes obtained from the M theory must have tensions that match the 5-branes of the 10D IIB theory. This implies that $T_{5q}^{(M)} = \beta^6 T_{5q}^{(B)}$. Substituting eqs.~(\ref{betaeqn}) and (\ref{fiveqb}) gives \begin{equation} T_{5q}^{(M)} = {1 \over (2\pi)^2} A_{M}^{3/2} \big(T_2^{(M)}\big)^3 \Delta_q^{1/2} . \label{fiveq} \end{equation} This also diverges as $A_{M} \to \infty$, as is expected. As a final comment, we note that if all tensions are rescaled by a factor of $2 \pi$ (in other words, equations are rewritten in terms of $\tilde T = T/2 \pi$), then all the relations we have obtained in eqs. (\ref{rbeqn}) -- (\ref{fiveq}) have a numerical coefficient of unity. \section{The IIA Theory} The analysis given above is easily extended to the IIA theory in ten dimensions. 
The IIA theory is simply interpreted \cite{townsend11,witten} as the M theory on $\bb{R}^{10} \times S^1$. Let $L=2\pi r$ be the circumference of the circle in the 11D metric $g^{(M)}$. The string metric of the IIA theory is given by $g^{(A)} = {\rm exp}(2\phi_A/3)g^{(M)}$, where $\phi_A$ is the dilaton of the IIA theory. The IIA string coupling constant $g_A$ is given by the vev of ${\rm exp}\, \phi_A $. These facts immediately allow us to deduce the tensions $T_p^{(A)}$ of IIA $p$-branes for $p=1,2,4,5$. The results are \begin{equation} T_1^{(A)} = g_A^{-2/3} L T_2^{(M)}, \label{t1a} \end{equation} \begin{equation} T_2^{(A)} = g_A^{-1} T_2^{(M)}, \label{t2a} \end{equation} \begin{equation} T_4^{(A)} = g_A^{-5/3} L T_5^{(M)}, \label{t4a} \end{equation} \begin{equation} T_5^{(A)} = g_A^{-2} T_5^{(M)}. \label{t5a} \end{equation} Since $T_1^{(A)}$ and $T_2^{(M)}$ are constants, eq. (\ref{t1a}) gives the scaling rule $g_A \sim L^{3/2}$ \cite{witten,twob}. Substituting eqs. (\ref{t1a}) and (\ref{t5m}) into eqs. (\ref{t4a}) and (\ref{t5a}) gives \begin{equation} T_4^{(A)} = {1 \over 2 \pi} g_A^{-1} T_1^{(A)} T_2^{(M)} = {1 \over 2 \pi} T_1^{(A)} T_2^{(A)}, \label{t4anew} \end{equation} \begin{equation} T_5^{(A)} = {1 \over 2 \pi} \big(T_2^{(A)}\big)^2. \label{t5anew} \end{equation} Again we have found the expected scaling behaviors: $g_A^{-1}$ for the 2-brane and 4-brane, which carry R-R charge, and $g_A^{-2}$ for the NS-NS solitonic 5-brane. Combining eqs. (\ref{t4anew}) and (\ref{t5anew}) gives \begin{equation} T_2^{(A)}T_4^{(A)} = T_1^{(A)} T_5^{(A)}. \label{t2t4} \end{equation} This shows that the quantization condition for the corresponding charges is satisfied with the same (minimal) value in each case. The IIA theory also contains an infinite spectrum of BPS saturated 0-branes (aka `black holes') and a dual 6-brane, which are of Kaluza--Klein origin like those discussed earlier in nine dimensions. 
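The IIA relations (\ref{t4anew})--(\ref{t2t4}) can likewise be confirmed with a few lines of arithmetic (arbitrary positive values of my own; the identities hold for any choice):

```python
import math

T2M, L, g_A = 1.3, 2.0, 0.5          # arbitrary positive values
T5M = T2M ** 2 / (2 * math.pi)       # M-theory relation T5 = T2^2 / 2pi

T1A = g_A ** (-2 / 3) * L * T2M      # T_1^{(A)}
T2A = T2M / g_A                      # T_2^{(A)}
T4A = g_A ** (-5 / 3) * L * T5M      # T_4^{(A)}
T5A = T5M / g_A ** 2                 # T_5^{(A)}

# One then finds T4A = T1A*T2A/2pi, T5A = T2A^2/2pi, and T2A*T4A = T1A*T5A.
```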
Since the Kaluza--Klein vector field is in the R-R sector, the tensions of these should be proportional to $g_A^{-1}$, as was demonstrated for the 0-branes in \cite{witten}. \section{P-Branes With $P \geq 7$} The IIB theory has a 7-brane, which carries magnetic $\chi$ charge. The way to understand this is that $\chi$ transforms under $SL(2,\bb{R})$ just like the axion in the 4D N=4 theory. It has a Peccei-Quinn translational symmetry (broken to discrete shifts by quantum effects), which means that it is a 0-form gauge field. As a consequence, the theory can be recast in terms of a dual 8-form potential. Whether or not one does that, the classical supergravity equations have a 7-brane solution, which is covered by the general analysis of \cite{horowitz}, though that paper only considered $p \leq 6$. Thus the 7-brane in ten dimensions is analogous to a string in four dimensions. Let us call the tension of the IIB 7-brane $T_7^{(B)}$. The existence of the 7-brane in the 10D IIB theory suggests that after compactification on a circle, the resulting 9D theory has a 7-brane and a 6-brane. If so, these need to be understood in terms of the M theory. The 6-brane does not raise any new issues, since it is already present in the 10D IIA theory. It does, however, reinforce our confidence in the existence of the IIB 7-brane. A 9D 7-brane, on the other hand, certainly would require something new in the M theory. What could it be? To get a 7-brane after compactification on a torus requires either a 7-brane, an 8-brane, or a 9-brane in the 11D M theory. However, the cases of $p=7$ and $p=8$ can be ruled out immediately. They require the existence of a massless vector or scalar particle, respectively, in the 11D spectrum, and neither of these is present. The 9-brane, on the other hand, would couple to a 10-form potential with an 11-form field strength, which does not describe a propagating mode and therefore cannot be so easily excluded. 
Let us therefore consider the possibility that such a 9-brane with tension $T_9^{(M)}$ really exists and trace through its consequences in the same spirit as the preceding discussions. First we match the 7-brane obtained by wrapping the hypothetical 9-brane of the M theory on the spatial torus to the 7-brane obtained from the IIB theory. This gives the relation \begin{equation} A_M T_9^{(M)} = \beta^8 T_{7}^{(B)}. \label{amt9} \end{equation} Substituting eq. (\ref{betaeqn}) gives \begin{equation} T_7^{(B)} = \big(A_M\big)^{-1} { T_1^{(B)}T_{9}^{(M)} \over \Big( T_2^{(M)} \Big)^4}. \label{t7b} \end{equation} This formula is not consistent with our assumptions. A consistent picture would require $T_7^{(B)}$ to be independent of $A_M$ or $L_B$, but we have found that $T_7^{(B)} \sim A_M^{-1} \sim L_B^{4/3}$. Also, the 8-brane and 9-brane of the IIA theory implied by a 9-brane in the M theory do not have the expected properties. I'm not certain what to make of all this, but it is tempting to conclude that there is no $9$-brane in the M theory. Then, to avoid a paradox for 9D 7-branes, we must argue that they are not actually present. I suspect that the usual methods for obtaining BPS saturated $p$-branes in $d-1$ dimensions from periodic arrays of them in $d$ dimensions break down for $p = d-3$, because the fields are not sufficiently controlled at infinity, and therefore there is no 7-brane in nine dimensions. Another reason to be suspicious of a 9D 7-brane is that a $(d-2)$-brane in $d$ dimensions is generically associated with a cosmological term, but straightforward compactification of the IIB theory on a circle does not give one. In a recent paper \cite{polchinski}, Polchinski has argued for the existence of a 9-brane in the 10D IIB theory and an 8-brane in the 10D IIA theory, both of which carry RR charges. (He also did a lot of other interesting things.) 
It ought to be possible to explore whether the existence of these objects is compatible with the reasoning of this paper, but it is unclear to me what the appropriate rules are for handling such objects. \section{Conclusion} We have shown that by assuming the existence of a quantum `M theory' in eleven dimensions one can derive a number of non-trivial relations among various perturbative and non-perturbative structures of string theory. Specifically, we have investigated what can be learned from identifying M theory on $\bb{R}^9 \times T^2$ with type IIB superstring theory on $\bb{R}^9 \times S^1$ and matching (BPS saturated) $p$-branes in nine dimensions. Similarly, we identified the M theory on $\bb{R}^{10} \times S^1$ with type IIA superstring theory on $\bb{R}^{10}$ and matched $p$-branes in ten dimensions. Even though quantum M theory surely has no perturbative definition in 11D Minkowski space, these results make it more plausible that a non-perturbative quantum theory does exist. Of course, this viewpoint has been advocated by others -- most notably Duff and Townsend -- for many years. Clearly, it would be interesting to explore other identifications like the ones described here. The natural candidate to consider next, which is expected to work in a relatively straightforward way, is a comparison of the M theory on $\bb{R}^7 \times K3$ with the heterotic string theory on $\bb{R}^7 \times T^3$. There is a rich variety of $p$-branes that need to be matched in seven dimensions. In particular, the M theory 5-brane wrapped on the $K3$ surface should be identified with the heterotic string itself. The M theory on $\bb{R}^4 \times S^1 \times K$, where $K$ is a Calabi--Yau space, should be equivalent to the type IIA superstring theory on $\bb{R}^4 \times K$. Kachru and Vafa have discussed examples for which there is a good candidate for a dual description based on the heterotic string theory on $\bb{R}^4 \times K3 \times T^2$ \cite {kachru}. 
A new element, not encountered in the previous examples, is that while there is plausibly a connected moduli space of $N=2$ models that is probed in this way, only part of it is accessed from the M theory viewpoint and a different (but overlapping) part from the heterotic string theory viewpoint. Perhaps this means that we still need to find a theory that is more fundamental than either the heterotic string theory or the putative M theory. \section{Acknowledgments} I wish to acknowledge discussions with R. Leigh, N. Seiberg, S. Shenker, L. Susskind, and E. Witten. I also wish to thank the Rutgers string theory group for its hospitality. This work was supported in part by the U.S. Dept. of Energy under Grant No. DE-FG03-92-ER40701.
\section{Introduction} Black hole X-ray binaries (BHXRBs), systems containing a stellar-mass black hole and a main-sequence companion, are some of the best test cases of fundamental physics, including tests of general relativity in strong gravity, plasma physics in accretion discs and particle acceleration in astrophysical jets. Due to the relative simplicity of black hole mass scaling, they may also be seen as smaller analogues to their super-massive counterparts in Active Galactic Nuclei (AGN), by providing a window into physical processes on much shorter time scales and at much higher observable fluxes. Among the known BHXRBs, GRS 1915+105 holds a special position. Discovered as a bright, $0.35$ Crab X-ray source \citep{castrotirado1994} with the WATCH all-sky monitor on the GRANAT space telescope \citep{castrotirado1992}, it also became known as the first galactic source known to exhibit superluminal jets \citep{mirabel1994, fender1999} and was hence termed a `microquasar' for its similarities to its supermassive counterparts. Despite being highly absorbed, optical identification of a K-M III type non-degenerate companion with the Very Large Telescope allowed a mass estimate of $14\pm 4\,M_\odot$ \citep{greiner2001}, recently revised via trigonometric parallax to a slightly lower mass of $12.4^{+2.0}_{-1.8}\, M_\odot$ and a distance of $8.6^{+2.0}_{-1.6}\,\mathrm{kpc}$ \citep{reid2014}. Since its discovery in 1994, GRS 1915+105 has been monitored repeatedly with instruments across all wavelengths, providing the first solid evidence of a coupling between accretion disc and jet: hard X-ray dips in the complex light curves of GRS 1915+105 were found to be associated with bright events at infrared and radio wavelengths \citep{pooley1997, eikenberry1998a, eikenberry1998b, kleinwolt2002}. Additionally, steady jets seem to be present during periods of prolonged hard X-ray emission \citep{foster1996, dhawan2000, fuchs2003}. 
What sets GRS 1915+105 apart from the remaining sources in the sample of known BHXRBs is its X-ray variability. Variability in both flux and spectrum is expected from these sources since their accretion disc likely undergoes turbulence driven by magnetic instabilities. However, GRS 1915+105 is known to exhibit complex X-ray light curves spanning at least 14 different patterns \citep{belloni2000, kleinwolt2002, hannikainen2003, hannikainen2005}. These complex patterns are known to repeat almost identically, sometimes with months to years between occurrences. It was thought to be unique in its behaviour until the detection of a second source, IGR J17091-3624 \citep{altamirano2011}, exhibiting similar variability. The variability, going hand-in-hand with spectral changes on short time-scales, is difficult to explain with standard accretion theory. Yet understanding the origin and formation of these patterns is crucial, as they are clearly not random and encode information about the accretion disc. \citet{belloni1997a, belloni1997b, belloni2000} suggested that all variability patterns observed in GRS 1915+105 decompose into three basic states, termed A, B and C, based on spectral and variability characteristics. These three fundamental states seem to roughly correspond to similar spectral and variability properties in other BHXRBs, in particular to the low-hard state with a hard spectrum and the presence of strong variability (LHS; state C in GRS 1915+105) and the very high state with a soft spectrum and little variability (VHS; state B at high flux and A with similar spectrum, but lower average flux). While \citet{belloni2000} point out that their state classification is mainly intended for easy categorization of observations, it is clear that the observed variability patterns are intimately linked to the underlying accretion physics. 
\citet{naik2002} observed that certain variability classes ($\alpha$ and $\rho$ in the \citealt{belloni2000} classification scheme) are preferentially observed before and after prolonged intervals of the source in a type-C state with a hard spectrum, indicating that there exists a connection between the states as classified by \citet{belloni2000} and the long-term behaviour of the source, which may possibly be linked to mass accretion rate. If this is the case, then the complex variability leads to interesting prospects for studying accretion disc dynamics at high mass accretion rates. Based on a similar idea, \citet{misra2004, misra2006} grouped the original 12 classes into three groups based on an analysis of the correlation dimension, a proxy for distinguishing stochastic from chaotic processes. They found representatives of both chaotic and stochastic processes (see also \citealt{harikrishnan2011} for follow-up work), with five of the original classes showing non-linear deterministic (i.e.\ chaotic) behaviour ($\theta$, $\rho$, $\alpha$, $\nu$, $\delta$), three exhibiting purely stochastic behaviour ($\phi$, $\gamma$, $\chi$) and four showing a mix of chaotic and stochastic behaviour ($\beta$, $\lambda$, $\kappa$, $\mu$). The results were recently confirmed by \citet{sukova2016} using recurrence analysis and indicate a complex interplay between the governing physical properties---e.g.\ mass accretion rate and viscosity---and the observable X-ray emission. On the other hand, \citet{polyakov2012} looked at the stochastic variability in all thirteen classes characterized in \citet{belloni2000} and \citet{kleinwolt2002} using Flicker Noise Spectroscopy and found four different modes of stochastic behaviour, which they connected to viscosity fluctuations in the accretion disc. 
Their results broadly agree with those of \citet{misra2006}, though they point out that for some observations, the quality of the data does not allow a firm identification of the variability with the proposed modes. It is likely that the complex, recurring variability patterns are driven by global instabilities in the accretion disc, i.e.\ non-linear, deterministic processes governed by the global dynamical evolution of the accretion disc and driven by a few global parameters, for example the accretion rate. \citet{massaro2014} show that the striking patterns observed in the $\rho$ state, also named `heartbeat' state for its quasi-periodic pulses, can be described by a limit cycle caused by a fairly simple system of non-linear ordinary differential equations. Their model indicates that the burst recurrence time largely depends on a parameter steering the forcing in the system, and suggests that either variations in the mass accretion rate or viscosity may act as the driving force behind the observed oscillations in this state, in line with hydrodynamic simulations \citep{nayakshin2000, merloni2006} and detailed observations of spectral changes \citep{neilsen2011, neilsen2012}. It is clear that the state changes in GRS 1915+105 must in some way depend on global properties of the accretion disc, and can act as probes of physical processes within the disc as well as the coupling between the disc and the jet. Thus, understanding the properties of these states and the long-term evolution of GRS 1915+105 is of crucial importance. However, studies to date largely concentrate on either individual states or subsets of the available data based on the previous classification of the first four years of \project{RXTE}\ data. The purpose of this paper is a study of the full 16-year data set of GRS 1915+105 observed with the Proportional Counter Array (PCA) onboard the \textit{Rossi X-ray Timing Explorer} (\project{RXTE}). 
We choose a machine learning approach, novel in this context, to characterize and classify the states in GRS 1915+105. Machine learning is a sub-field of computer science concerned with learning patterns from data. In recent years, it has been employed very successfully in a range of different sciences (for an introduction, see e.g. \citealt{bishop2006} or \citealt{ivezic2014} for an astronomy-focused textbook). Machine learning as relevant for astronomy can be broadly separated into two types. In supervised machine learning (either classification or regression), a training data set is available for which the outcomes are known. This requires that such a previous data set exists for which the labels (or regression variables) are known from either human classification or other methods. Unsupervised machine learning, conversely, does not assume that the desired output (e.g.\ labels in classification or continuous variables in regression) is known, but aims to actively learn it from the data itself, subject to some assumptions and constraints that depend on the precise method used. In astronomy, machine learning has recently been used in a large variety of contexts, including among many others the estimation of photometric redshifts in the Sloan Digital Sky Survey \citep{carliles2010, beck2016}, automatic classification of galaxies using training data from the Galaxy Zoo project \citep{banerji2010, dieleman2015}, variable X-ray source classification for \project{ROSAT} \citep{mcglynn2004} and \project{XMM-Newton}\ \citep{farrell2015}, modeling the \project{Swift}/BAT trigger algorithm \citep{graff2015}, and distinguishing long and short Gamma-Ray Bursts \citep{tarnopolski2015}. 
The existing \project{RXTE}\ data set for GRS 1915+105 is particularly well suited for a machine learning approach: the source was the subject of one of the most comprehensive X-ray monitoring campaigns performed with \project{RXTE}, yielding a data set of sufficient size for automatic classification while being too large to be classified by hand in its entirety. It shows fourteen discrete classes, and a fraction of the data set has been annotated by hand in the past, yielding the training set required for supervised machine learning tasks. \begin{figure*} \begin{center} \includegraphics[width=\textwidth]{grs1915_asm_lc_all.pdf} \caption{\project{RXTE}\ All-Sky Monitor (ASM) light curve in Modified Julian Date (MJD) for the entire duration of the \project{RXTE}\ mission. Each panel covers $500$ days. The solid blue line is the ASM light curve. The green dots represent the start points of the \project{RXTE}/PCA observations with high enough time resolution to be relevant for this analysis. The Figure shows that the \project{RXTE}/PCA observations span the entire lifetime and provide an approximately regular sub-sample with high coverage in time, though each observation is short.} \label{fig:asm_total} \end{center} \end{figure*} In this paper, we show that efficient classification using machine learning can be done, and present ways in which it can be used to infer the physical properties of the source. In Section \ref{sec:observations}, we introduce the data set and the pre-processing performed. Because few machine learning algorithms perform well on raw data, we explain how we constructed \textit{features}---summary statistics of the raw light curves that allow the algorithm to distinguish between classes---in Section \ref{sec:featureengineering}. 
In Section \ref{sec:supervised}, we present the results of the supervised classification, while in Section \ref{sec:discussion}, we put our results in the broader context, discuss limitations of the current approach and show potential avenues for future research. \section{Observations and Data Preparation} \label{sec:observations} We used all available \project{RXTE}\ observations of GRS 1915+105 between 1996 and 2011 with data in GoodXenon or EventMode ($1712$ observations). Light curves were extracted with a resolution of $0.125\,\mathrm{s}$ in $4$ energy bands: $W = 3 - 75$ keV, $L = 3 - 6$ keV, $M = 9 - 15$ keV, and $H = 15 - 75$ keV. While the energy ranges will not be exactly the same from light curve to light curve due to different detector modes as well as gradual changes in the sensitivity of individual channels over time, channels were included or excluded as necessary to keep the energy ranges as constant as possible. Out of a total of $1712$ observations, $20$ have no high-band data and are thus excluded, for a total of $1692$ observations included in the analysis (see Figure \ref{fig:asm_total} for the long-term light curve observed with \project{RXTE}'s All Sky Monitor (ASM), with the locations of pointed PCA observations marked). In the following, we use the $3 - 75$ keV band for all time series and power spectral features, and form two hardness ratios that encode energy spectral changes within and between states. Hardness ratio 1 (HR1) is defined as $\mathrm{HR}1 = M/L$ (mid-energy band divided by low-energy band) and hardness ratio 2 (HR2) as $\mathrm{HR}2 = H/L$ (high-energy band divided by low-energy band) to capture spectral changes in a model-independent way. \begin{figure} \begin{center} \includegraphics[width=9cm]{grs1915_durations.pdf} \caption{Histogram of the durations of all observations used in the analysis. Most observations have durations of $500$--$3500$ seconds.
Note that these reflect total durations for a given observation without application of Good Time Intervals (GTIs); in the analysis below, these durations may be shortened or split in parts by detector failures and the $90$-minute orbit of the spacecraft.} \label{fig:obsdurations} \end{center} \end{figure} Figure \ref{fig:obsdurations} shows a histogram of the durations of individual observations. Most observations have a duration of $\sim\!2000 \,\mathrm{s}$, with only a small subset being significantly longer. In practice, many light curves are shorter, since data drop-outs and interruptions in the observations lead to good time intervals that are shorter than the nominal observation time. This is an important limitation to keep in mind, given that many of the patterns observed in the light curves of GRS 1915+105 tend to be of the order of $\sim\! 1000 \,\mathrm{s}$ long. This also leaves us with an important decision to make: do we use all segments regardless of length, or do we produce light curves of equal length for the classification, at the risk of losing the shortest light curves? The latter is preferable in order to avoid systematic biases in our features (which, in the case of summary statistics, might depend on the number of data points in the light curve) and because some features are structured such that light curves of different duration give feature vectors of different lengths, making the later classification task vastly more complex. This implies that there is a trade-off between descriptiveness and sample completeness: when choosing long segments, we likely encapsulate more of the characteristic behaviour of a state, which can sometimes consist of cycles lasting more than a thousand seconds. On the other hand, if we choose long segments, we necessarily exclude all light curves that are shorter than that, for example because their Good Time Intervals (GTIs) only allowed for shorter segments.
Here, we pick a segment length of $1024\,\mathrm{s}$ as a reasonable trade-off between being descriptive (generally, the patterns observed in \citet{belloni2000} last $\sim\!1000\,\mathrm{s}$ or so) and providing sufficient samples for classification. Note that we also choose overlapping segments starting every $256\,\mathrm{s}$, both for data augmentation as well as to account for phase shifts in periodic patterns. This leaves us with a total of $8506$ data segments of $1024\,\mathrm{s}$ duration, each of which consists of $8192$ data points in each of the four energy bands. Because of the overlap between them, segments derived from the same contiguous observation interval will not be independent. This is a standard data augmentation practice in machine learning applications and does not affect our conclusions as long as samples in the validation and test set are independent from the training data set, which we ensure as described in Section \ref{sec:featureselection} below. \section{Feature Engineering} \label{sec:featureengineering} While some machine learning algorithms can produce reliable classifications on raw data (e.g. a light curve), we find that these algorithms fail on the problem at hand for a number of reasons. The data set to be explored here is relatively small, in machine learning terms, with some classes having less than ten examples in the set of examples with human annotations. Additionally, the light curves show complex periodic patterns whose phases are random with respect to the start of an observation. Thus, different light curves of the same class are phase-shifted and may appear as very different to a machine learning algorithm purely due to this phase shift alone. Instead, we reduce the number of dimensions by extracting \textit{features}, descriptive summaries of the raw data that will allow for efficient separation of the various classes in feature-space. 
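The segmentation described above can be sketched in a few lines; this is a minimal illustration (function and variable names are ours, not part of the published analysis pipeline), assuming the light curve is a plain array of counts binned at $0.125\,\mathrm{s}$:

```python
import numpy as np

def segment_lightcurve(counts, dt=0.125, seg_length=1024.0, stride=256.0):
    """Split a binned light curve into overlapping segments.

    counts: 1D array of counts per time bin (each bin dt seconds wide).
    Returns an array of shape (n_segments, seg_length / dt).
    """
    n_per_seg = int(seg_length / dt)   # 8192 bins per 1024-s segment
    n_stride = int(stride / dt)        # 2048 bins between segment starts
    segments = [counts[start:start + n_per_seg]
                for start in range(0, len(counts) - n_per_seg + 1, n_stride)]
    return np.array(segments)

# A contiguous 2048-s light curve yields 5 overlapping 1024-s segments,
# starting at 0, 256, 512, 768 and 1024 s.
lc = np.random.poisson(100.0, size=int(2048 / 0.125))
segs = segment_lightcurve(lc)
```

The $256\,\mathrm{s}$ stride gives a fourfold overlap between neighbouring segments, which is what makes the observation-level train/test split described later necessary.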
In the following, a \textit{sample} is a single instance of the ensemble to be classified, in our case an RXTE data segment (consisting of a light curve in four energy bands) of GRS1915+105, i.e.\ a $4 \cdot 8192 = 32768$-dimensional vector. For each sample, we compute a set of features for classification. Feature engineering is the most important and most difficult part of any machine learning problem. It is here where domain knowledge of the problem at hand becomes crucial to finding the most informative features to be used by the computer in the subsequent classification task. We use the previous (human-based) annotations by \citet{belloni2000}, supplemented with additional annotations published in \citet{kleinwolt2002} and \citet{hannikainen2003}, to guide the feature engineering task. With relatively high-resolution light curves ($\Delta t = 0.125 \,\mathrm{s}$) in four energy bands, there is a multitude of possible features in time, energy and frequency domains that could potentially inform our choices. Because we aim to automate the manual classification in \citet{belloni2000}, we base our feature engineering on similar features such as the hardness ratios and overall appearance of the light curves. We also supplement the feature set derived from the time series and hardness ratios with properties of the power spectrum. \subsection{Time Series Features} Because it is difficult to encapsulate the large variety of shapes observed in the light curves of GRS 1915+105, we use a mix of very simple summary features and extract a number of features from an autoregressive model (as explained below). The summary features are: the mean count rate, median count rate, total variance, skewness and kurtosis in the light curve segment in the $3 - 75$ keV band. The light curves observed from GRS1915+105 show a very rich variability behaviour, including complex patterns not well represented by the summary features listed above.
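The summary features just listed can be computed directly from a segment; a minimal sketch using \textit{scipy} (the function name is ours, chosen for illustration):

```python
import numpy as np
from scipy import stats

def summary_features(counts):
    """Simple time-series summaries for one segment (3-75 keV band):
    mean, median, variance, skewness and kurtosis of the counts."""
    return np.array([np.mean(counts),
                     np.median(counts),
                     np.var(counts),
                     stats.skew(counts),
                     stats.kurtosis(counts)])
```

These five numbers describe the amplitude and the shape of the count-rate distribution, but carry no information about the temporal ordering of the data points, which motivates the autoregressive model below.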
Encapsulating these complex variability patterns in a few parameters is generally difficult: for example, any representation must be phase shift-invariant. That is, for roughly periodic patterns, features should look very similar regardless of where in the cycle a light curve begins. We attempt to encapsulate the variability in a simple autoregressive model, where the data $y_t$ at any given point in the light curve $t$ depends on a linear combination of $k$ data points immediately before: \begin{equation} y_{t} = c + \sum_{i=1}^k{\left( w_i y_{t-i}\right)} , \end{equation} \noindent where the $k$ elements $w_i$ of vector $w$ specify the weights, and we define a vector of all $k$ relevant previous measurements $X_t = y_{t-k:t}$ for use below. The weights encode the relative importance of the previous $k$ data points on point $y_{t}$. Because these weights should be different for different classes, we expect them to be useful summaries of the complex temporal structure encoded in the light curves. We minimize the following objective with respect to the weight vector $w$ to infer the optimal weights: \begin{equation} \min_w ||\langle w, X \rangle - y||^2 + \lambda ||w||^2 \; , \end{equation} \noindent where $\lambda$ is a regularization parameter that controls for overfitting of the data. We run this optimization for each segment independently, and extract the weight vectors as features to be used in the subsequent classification. The parameter $k$, defining the number of data points relevant in determining the data point $y_{t}$ and consequently the number of weights, is a free parameter to be estimated. The final free parameter is the temporal resolution $\Delta t$ of the light curves. In principle, it is possible to run the feature extraction on the unbinned light curves with a resolution of $\Delta t = 0.125\,\mathrm{s}$. However, averaging a set of $n$ neighbouring bins may reduce variance due to measurement noise and thus lead to cleaner features.
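The ridge-regularized objective above has a closed-form solution. The following sketch (our own minimal implementation, not the published pipeline code) builds the lag matrix and solves for the weights; we absorb the intercept $c$ by centering the data, which is equivalent to fitting it without penalizing it:

```python
import numpy as np

def ar_weights(y, k=10, lam=1000.0):
    """Ridge-regularized autoregressive weights for one segment.

    Builds the lag matrix X with row t equal to (y[t-1], ..., y[t-k])
    and solves min_w ||X w + c - y||^2 + lam ||w||^2 in closed form.
    The intercept c is absorbed by centering X and y.
    """
    y = np.asarray(y, dtype=float)
    # Column i holds lag (i+1): y[t-(i+1)] for t = k, ..., len(y)-1.
    X = np.column_stack([y[k - i - 1:len(y) - i - 1] for i in range(k)])
    target = y[k:]
    Xc = X - X.mean(axis=0)
    tc = target - target.mean()
    # Normal equations of the ridge problem: (X^T X + lam I) w = X^T y.
    return np.linalg.solve(Xc.T @ Xc + lam * np.eye(k), Xc.T @ tc)
```

For a segment that truly follows a low-order autoregressive process, the leading weights recover the generating coefficients (shrunk slightly toward zero by $\lambda$); for the complex patterns of GRS 1915+105 they instead act as a compact fingerprint of the temporal structure.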
The parameter space for these features was explored via the validation set and will be explained in more detail in Section \ref{sec:freeparams}. \begin{table*}[hbtp] \renewcommand{\arraystretch}{1.3} \footnotesize \caption{Model Parameters} \begin{threeparttable} \begin{tabularx}{\textwidth}{p{2.0cm}p{2.0cm}p{5.0cm}p{1.0cm}p{6.0cm}} \toprule \bf{Feature Set} & \bf{Parameter} & \bf{Meaning} & \bf{Best Value} & \bf{Possible Values} \\ \midrule & C & Regularization magnitude & $1$ & $[10^{-3}, 10^{-2}, 0.1, 1, 10, 100]$ \\ \midrule Autoregressive Model & $\Delta t$ & Light curve time resolution & $0.125$ & $[0.125, 1.0, 2.0, 6.25]$ \\ & $k$ & Number of time bins determining current time bin & $10$ & $[2, 5, 7, 10, 20, 30, 50, 80]$ \\ & $\lambda$ & Regularization parameter & $1000$ & $[0.01, 0.1, 1, 10, 20, 50, 100, 1000]$ \\ \midrule Power spectrum PCA & $N$ & Number of components & $10$ & $[1,2,3,5,10,20,50,100]$ \\ \bottomrule \end{tabularx} \begin{tablenotes} \item{} \end{tablenotes} \end{threeparttable} \label{table:parameters} \end{table*} \subsection{Power Spectral Features} We use power spectral features based on the power colours defined in \citet{heil2015}. We compute power spectra in fractional rms normalization for all available light curves and integrate over frequencies in order to compute the fractional rms amplitude in different frequency bands. Following the power colours defined in \citet{heil2015}, we choose our bands to be $P_\mathrm{A} = 0.0039-0.031 \,\mathrm{Hz}$, $P_\mathrm{B} = 0.031-0.25 \,\mathrm{Hz}$, $P_\mathrm{C} = 0.25-2.0 \,\mathrm{Hz}$ and $P_\mathrm{D} = 2.0-16.0 \,\mathrm{Hz}$. We also construct power colours $\mathrm{PC}_1 = P_\mathrm{C}/P_\mathrm{A}$ and $\mathrm{PC}_2 = P_\mathrm{B}/P_\mathrm{D}$. In some states, a quasi-periodic oscillation (QPO) is clearly present. As a simple proxy and to avoid time-consuming power spectral fitting, we design a feature composed of the frequency where $\nu P_\nu$, i.e.
each power spectral bin multiplied by its frequency, has its maximum. This feature generally encodes the frequency that dominates the overall variance. If a QPO is present, this feature will encode the frequency of that QPO. For states without a QPO, the maximum is generally lower and set by the broadband noise component. These features offer only an incomplete description of the power spectrum in different states; in particular, the presence of a QPO cannot be completely described by the prescriptions above (partly because the power spectral bands are much broader, thus a QPO might not have a pronounced effect). We therefore also build a Principal Component Analysis (PCA; \citealt{pearson1901}) representation of the power spectra and include the principal components as features. The number of PCA components, $N_\mathrm{PCA}$, is a free parameter, and will be discussed in more detail in Section \ref{sec:freeparams} below. Note that because the QPO moves in frequency, it is in principle possible for its centroid frequency to fall on the edge of a power spectral band. If the QPO frequency moves within a state, this adds some variance to the power colour measurements of samples in that state, as the resulting power colours will lie roughly in between those that would be derived if the QPO were completely contained in either of the adjacent bands. We believe that this effect is very rare in practice, since (1) as we show in Section \ref{sec:featureselection} below, $\mathrm{PC}_1$ is our most descriptive feature and can separate more than half of the validation data set into its correct classes by itself; (2) it is only one of several power spectral features used in the classification, thus the classifier has additional information to draw on when classifying a given sample. \subsection{Hardness Ratio Features} \citet{belloni2000} showed that the different classes occupy different positions in the space spanned by HR1 and HR2.
While for most classes, there seems to be a strong (approximately linear) correlation between HR1 and HR2, some classes show more complex correlations where the source follows curved tracks through this space. In order to characterize the properties of the spectral evolution, we extract mean, skewness and kurtosis from each hardness ratio separately. Additionally, we extract the covariance matrix of HR1 and HR2 for each segment, corresponding to the variance of each hardness ratio as well as the covariance between them, yielding a total of $9$ features based on the spectral evolution. It is worth noting that we explored the use of other techniques to extract hardness ratio features, notably 2D histogram maps, PCA and manifold learning techniques, and found them to be no better than the summary statistics above, thus we chose the latter for their simplicity and straightforward interpretability. \subsection{Feature Selection} \label{sec:featureselection} We randomly split the observations in training, validation and test data sets, with $50\%$ of observations in the training set and $25\%$ of all observations in the validation and test sets, respectively. This results in $4668$ samples in the training data set, $2094$ samples in the validation set, and $2450$ samples in the test data set. The differences in samples in the validation and test set are due to the fact that we split \textit{observations} (i.e. before creating segments of equal length) rather than samples. This is necessary because we extract overlapping segments, thus picking randomly from segments would lead to the loss of independence between training, validation and test data sets. As Figure \ref{fig:asm_total} shows, individual observations are generally separated in time, and can thus be considered independent.
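An observation-level split of the kind described above can be sketched as follows; this is a minimal illustration under our own naming conventions, assuming each segment carries the identifier of its parent observation:

```python
import numpy as np

def split_by_observation(obs_ids, train_frac=0.5, val_frac=0.25, seed=42):
    """Split segment indices into train/validation/test by *observation*,
    so that overlapping segments from one observation never straddle sets.

    obs_ids: array with one observation identifier per segment.
    Returns three index arrays (train, validation, test).
    """
    rng = np.random.default_rng(seed)
    unique_obs = rng.permutation(np.unique(obs_ids))
    n_train = int(train_frac * len(unique_obs))
    n_val = int(val_frac * len(unique_obs))
    train_obs = set(unique_obs[:n_train])
    val_obs = set(unique_obs[n_train:n_train + n_val])
    labels = np.array([0 if o in train_obs else 1 if o in val_obs else 2
                       for o in obs_ids])
    return (np.where(labels == 0)[0],
            np.where(labels == 1)[0],
            np.where(labels == 2)[0])
```

Because whole observations are assigned to one set, the exact segment counts per set vary with the draw, which is why the validation and test sets end up with slightly different sizes despite equal observation fractions.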
For feature selection and supervised classification, we use the combined previous classifications by \citet{belloni2000}, \citet{kleinwolt2002} and \citet{hannikainen2003} in order to capture all $14$ currently known states, but include only classifications where the entire observation was seen to be in a single state in order to avoid accidental mis-classification as the source switches states within an observation. This yields a total of $1884$ previously classified samples, with $885$ classified samples in the training set, $480$ samples in the validation set and $519$ samples in the test set, respectively. Note that the data set is heavily imbalanced with respect to class representation: previously, the source was known to spend the majority of its time in the $\chi$ state, while other states (e.g.\ $\eta$ and $\omega$) were only seen in one or two observations. Initial visualization showed that some features span a wide range of values. Because some machine learning methods, specifically the logistic regression model introduced below, tend to do better in well-behaved (linear) feature spaces, we take the logarithm of features that extend over several orders of magnitude and use the validation set to confirm that this improves classification with the logistic regression model. In particular, these features are: the variance of the light curve, the frequency where $\nu P_\nu$ has its maximum, all fractional rms values and power colours derived from the power spectrum, and the variances of both hardness ratios. \subsubsection{Free Parameters} \label{sec:freeparams} To estimate the free parameters of the model (see Table \ref{table:parameters} for an overview) we followed the procedure outlined above: we split the data set with human annotations into training and validation sets. We then used the former to train the algorithm, and the latter to test performance and find the combination of parameters that maximizes performance. 
\begin{figure} \begin{center} \includegraphics[width=9cm]{grs1915_feature_accuracy} \caption{The greedy search for the most important features: the number of features used for classification versus the accuracy (fraction of correctly classified samples) of classification in each case. The search shows that a simple logistic regression approach can yield an $F_1$ score of $>91\%$, and that $10$ features seem to be largely sufficient to classify data from GRS 1915+105. Adding further features does not improve the predictive power, and can in some cases be detrimental to performance, if it causes the logistic regression model to overfit.} \label{fig:scores} \end{center} \end{figure} In order to estimate which parameters will yield the optimal results, we use supervised learning in the form of logistic regression (\citealt{cox1958}; as implemented in \textit{scikit-learn}; \citealt{scikit-learn}). Logistic regression is one of the simplest classification algorithms. It defines a linear model analogously to linear regression, but because outcomes are discrete rather than continuous, it uses a binomial distribution (multinomial distribution if more than two outcomes are possible) instead of a normal distribution in defining the likelihood \citep{cox1958}. In practical terms, logistic regression aims to draw a hyperplane in the $N$-dimensional space spanned by all features such that the hyperplane separates samples belonging to a given class in the training set from the remainder. Multi-class classification is performed either using a multinomial distribution or by using a one-versus-all scheme: for each class in the training set, a separate hyperplane is drawn such that the split between samples belonging to said class and the remaining samples is maximized. We use the least squares (L2) norm for regularization, which introduces an additional parameter, $C$.
This parameter is used to balance the ability of the model to produce accurate predictions against tendencies to overfit. It is worth noting here that we also attempted the supervised machine learning task with other algorithms that use different strategies for finding decision boundaries between classes, most notably linear support vector machines \citep{guyon1993,cortes1995} and random forests \citep{breiman2001}, and found no improved performance compared to the logistic regression classifier, thus we keep the latter in the following for its interpretability. For each combination of parameters listed in Table \ref{table:parameters}, we use the training set to train the model with these parameters and test the performance with the validation set. For performance assessment, we use the $F_1$ score, a harmonic mean of \textit{precision} and \textit{recall}. In binary classification, precision refers to the positive predictive value, i.e.\ the number of true positive examples of a class divided by all positive classifications. Recall, in turn, refers to the true positive rate, the ratio of true positives to all samples that truly belong to the class. In the multi-class case considered here, the $F_1$ score is computed for each class in turn and averaged over all classes. We choose a weighted average of the $F_1$ score to account for the class imbalances in terms of samples. We do use accuracy when assessing the performance of the classification with the final, trained model in Section \ref{sec:supervised} below, since it can be straightforwardly interpreted as the fraction of samples the classifier identified correctly. Using the approach described above, we arrive at the best values used for the classification, yielding a total of $41$ features for each of the $8506$ data segments. This comprises the features explicitly named above, as well as $10$ weights from the autoregressive model, and $10$ components from performing PCA on the power spectra.
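One step of the parameter search above, i.e.\ fitting an L2-regularized logistic regression for one parameter combination and scoring it with the weighted $F_1$ on the validation set, can be sketched with \textit{scikit-learn} as follows (the function name and data layout are ours, for illustration only):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

def validate_classifier(X_train, y_train, X_val, y_val, C=1.0):
    """Fit an L2-regularized logistic regression for one parameter
    combination and return the weighted F1 score on the validation set,
    together with the fitted model."""
    clf = LogisticRegression(C=C, penalty="l2", max_iter=1000)
    clf.fit(X_train, y_train)
    score = f1_score(y_val, clf.predict(X_val), average="weighted")
    return score, clf
```

In the full search, this function would be evaluated on a grid over $C$ and the feature-extraction parameters of Table \ref{table:parameters}, keeping the combination with the highest validation score.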
\subsubsection{Feature Importance} In order to assess the relative predictive power of each feature, we implemented a greedy search to find the most important features in our set. Once more, we used supervised learning with the previously-classified labels as the training set, but computed the validation score for each feature independently, as if it was the only feature available for classification. We set the feature with the highest validation score as the most important feature, and then perform a second pass, this time using the combination of the winner of the first round with every other feature. Again, we picked the combination with the highest score, and continued this process until all features were exhausted. This procedure answers two questions simultaneously. (i) It provides a ranking for the relative predictive power of each feature, and (ii) it provides an assessment of whether all features are required to classify the data. The latter need not necessarily be true: some features (e.g. the power colours) are combinations of other features, and thus not all features might be required. In Figure \ref{fig:scores}, we show the results of the greedy search. We find that the $F_1$ score is generally high: $\sim\!\! 91\%$ on the validation set for the $10$ best features, which provide most of the predictive power. Adding additional features generally adds no further improvement, and may even decrease the score. The most predictive features are dominated by power spectral features: $\mathrm{PC}_1$ and the power in PSD band $P_\mathrm{A} = 0.0039-0.031 \,\mathrm{Hz}$ along with a PCA component take the top three spots. Additional improvement is provided by the autoregressive process, with four of its 10 components represented in the reduced feature set, as well as the mean and variance of the first hardness ratio, and the skewness of the light curve in the entire energy band.
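The greedy forward search described above is independent of the particular classifier and scoring function; a minimal sketch (our own illustration, parameterized by an arbitrary scoring callable rather than the actual validation pipeline) is:

```python
def greedy_feature_search(score_fn, n_features):
    """Greedy forward selection: start from the single best feature,
    then repeatedly add whichever remaining feature most improves the
    validation score, until all features are ranked.

    score_fn(feature_indices) -> validation score for that feature subset.
    Returns the feature ranking and the score after each addition.
    """
    selected, remaining, history = [], list(range(n_features)), []
    while remaining:
        best_feat, best_score = max(
            ((f, score_fn(selected + [f])) for f in remaining),
            key=lambda pair: pair[1])
        selected.append(best_feat)
        remaining.remove(best_feat)
        history.append(best_score)
    return selected, history
```

In our application, `score_fn` would train the logistic regression on the training set restricted to the given feature subset and return the weighted $F_1$ score on the validation set; the `history` curve is what Figure \ref{fig:scores} plots.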
We continue with the rest of the analysis with this reduced set of 10 features, since adding more features does not improve predictiveness. We note that including another $15$ features for a total of $35$ features in the classification does not change the results presented below in any significant way. \begin{figure*} \begin{center} \includegraphics[width=\textwidth]{grs1915_supervised_pca_comparison.pdf} \caption{Projection of the 10-dimensional feature space into 2 dimensions using PCA. On the left side, the original human classifications in colour and unclassified samples in grey. On the right, we show the union of the human classification and the predicted states for the previously unclassified samples. Even in this low-dimensional representation, it is possible to see how samples belonging to the same state tend to cluster close together. That this is true also for the combined human and machine classified samples indicates that the logistic regression model performed fairly well. We note that seemingly disconnected regions are an artifact of reducing 10 dimensions to 2 and plotting many points of different classes in the same Figure.} \label{fig:supervised_pca} \end{center} \end{figure*} \section{Supervised Classification} \label{sec:supervised} Using the results of the previous sections and the parameters determined in Section \ref{sec:freeparams}, we performed supervised classification using logistic regression on the combined training and validation set for GRS 1915+105\footnote{For the data set, results and all relevant code, see: \url{https://github.com/dhuppenkothen/BlackHoleML}}. Overall, with a $92.5\%$ accuracy on the test set, the classifier performs very well. This is especially true in light of the small size of the data set as well as the heavy imbalance between classes, offering only a few test cases for some of the rarer states.
At the same time, while the largest class accounts for $\sim 40\%$ of all samples, incorrect assignments to the dominant state are not the main cause of the mis-classifications, indicating that the model does not underfit by assigning examples of rare states to the more common ones. Additionally, the performance of our model is in line with results from other disciplines, e.g.\ in image classification of dinoflagellates \citep{culverhouse2003}, verb classification in language tasks \citep{merlo2000} and finding humans in images \citep{quinn2010}, where human accuracy is often found to be no better than $\sim 90\%$. An illustration of the classification as a whole is presented in Figure \ref{fig:supervised_pca}, where we show a 2-dimensional representation of our 10-dimensional feature space achieved with PCA. \subsection{Confused Classifications} \label{sec:confusion} In Figure \ref{fig:confusion_matrix}, we show the confusion matrix between the human classification and the machine classification on the test set. Generally, the matrix is sparse, and only few classes are confused. For these cases, we visually compared the light curves, hardness ratios and power spectra of typical examples (based on the human classification) of both the class chosen by a human and the computer. We find that disagreements between human and machine classification fall into one of three categories: \begin{itemize} \item{Observations where the particular choice of segment size ($1024\,\mathrm{s}$) means that only part of the overall pattern is observed in a given segment. Examples are light curves of the $\alpha$ state, which are occasionally mis-classified as $\chi$ (when non-flaring) or $\rho$ observations (when flaring). The small size (for machine learning purposes) of the data set makes it unfavourable to choose longer segments, thus a small fraction of segments always runs the risk of being confused in this way.
It is worth noting, however, that many of the samples falling into this particular category occur only once or at most twice in the test set for a certain combination of classes, thus they are expected to add only a small amount of noise to the classifications.} \item{For some cases where human and machine classifications disagree, the simple summary statistics and autoregressive model used to represent the variability in the light curves fail to fully encode the complexities of the patterns observed in GRS 1915+105. The most striking example is the $\rho$ state, where several segments were classified as belonging to the $\beta$ state instead. Looking at the light curve, it is fairly straightforward for the human brain to distinguish both states based on the patterns in the light curve. However, for several cases, the model used for encoding variability was not sufficient to fully capture the differences between those two states, in particular since the power spectra look fairly similar. Here, a better model for the light curves would clearly have helped with the classification; however, building such a model for light curves as complex as those observed in GRS 1915+105 is a major undertaking and thus the subject of future work.} \item{There are several confused cases where the rigid classification into $14$ states does not capture the behaviour very well. For example, there are a number of examples of the $\chi$ state that have a count rate higher by a factor of $\sim 3$ than typical examples of this state; these light curves are routinely mis-classified as non-flaring parts of either $\nu$ or $\theta$ states, whose dim intervals tend to be much brighter than any typical $\chi$ light curves. Similarly, there is a set of $\gamma$ light curves mis-classified as belonging to the $\rho$ state.
This may seem surprising at first; however, closer inspection reveals very regular flares in these light curves as well as a much higher variance than is typically observed in the $\gamma$ state. Perhaps these light curves show a transition between the $\gamma$ and $\rho$ states, and therefore have properties reminiscent of both classes, which in turn confuses the classifier.} \end{itemize} \begin{figure} \begin{center} \includegraphics[width=9cm]{grs1915_supervised_cm.pdf} \caption{Confusion matrix for the machine classification (x-axis) versus the human classification (assumed as the ``true label'') on the y-axis. On the diagonal are classes where the human and machine classifications agree. Off-diagonal cases occur where there is a disagreement.} \label{fig:confusion_matrix} \end{center} \end{figure} The multinomial probability distribution used in the logistic regression model allows for calculating the predicted probability for each class and each sample. We compared the predicted probabilities for the human-generated class and the computer-generated class for all of the confused cases, and contrasted them with those cases where human and computer-generated classifications agree. Samples where human and computer agree show a very high predicted probability for the chosen class ($>0.85$ in more than $75\%$ of all cases) and a peaked probability distribution (with low probabilities for all other classes). This is generally not the case for the confused samples, which show much flatter probability distributions, and the classifier is generally uncertain about its prediction. In these cases, the predicted probability of the class chosen by the computer can be as small as $0.3$ and often close to the probability assigned to the human-generated class.
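This probability-based diagnostic can be automated: using the class probabilities exposed by a fitted \textit{scikit-learn} logistic regression, one can flag every sample whose top class probability falls below a threshold. The following is a minimal sketch (function name and threshold default are ours, chosen to mirror the $>0.85$ criterion quoted above):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def flag_uncertain(clf, X, threshold=0.85):
    """Return a boolean mask marking samples whose highest predicted
    class probability falls below `threshold`. Confused classifications
    tend to show flat probability distributions rather than a single
    dominant class, so they are caught by this criterion."""
    proba = clf.predict_proba(X)
    top_prob = proba.max(axis=1)
    return top_prob < threshold, proba
```

Samples flagged in this way are natural candidates for a second look by a human classifier, or for exclusion from downstream population statistics.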
\subsection{Overall Distribution of States} In Figure \ref{fig:state_durations} we compare the total duration the source spent in each state during the observed intervals for both the human-classified part of the data as well as the computer-generated classification. At the same time, this represents a split in time: \citet{belloni2000} and \citet{kleinwolt2002} classified observations between 1996 June and 1999 December, with an additional state identified in an observation on 2003 Mar 6 \citep{hannikainen2003, hannikainen2005}. Trained on these human classifications, we allowed the computer to find classes for the remaining observations, spanning from $2000$ Jan to the end of \project{RXTE}'s lifetime in early 2012. Assuming that the logistic regression model generally reproduces the human classification, one may then use the data set to search for time evolution in the overall pattern of states. \begin{figure} \begin{center} \includegraphics[width=9cm]{grs1915_supervised_states_histogram.pdf} \caption{Fraction of total observation time $T_\mathrm{obs}$ assigned to a certain state in both the human-classified data (1996-$\sim\!\! 2000$; blue) and the machine-classified data ($\sim\!\! 2000$-2011; red). Durations spent in each state are calculated from the human and computer-generated labels taking into account the overlap between segments for long observations.} \label{fig:state_durations} \end{center} \end{figure} We find that, broadly, the machine classification reproduces the human classification. In particular, the $\chi$ state remains the most common state in which to find GRS 1915+105. While other states such as $\theta$, $\rho$, $\kappa$, $\mu$, $\alpha$ and $\omega$ are represented similarly often, some classes occur with a significantly different frequency in later observations.
It is important to note here that the initial distribution over state occurrences in the logistic regression model was based on the previous state occurrences, that is, a state with a higher previous occurrence was more likely to occur again than a state that was only seen once or twice. In this context, it is interesting to note the relatively higher fraction of time spent in the $\beta$ and $\phi$ states compared to the human-classified data set. Conversely, the states $\gamma$ and $\eta$ occur much less frequently during later observations compared to the earlier data set. For class $\eta$, this may, to some degree, be due to chance: with only one confirmed observation and the small fraction of telescope time spent on the source, it is intrinsically hard to reliably estimate the duration the source previously spent in this state. Based on our results from Section \ref{sec:confusion}, it is unlikely that confusions between states play a significant role in explaining all discrepancies between the state durations in the human- and computer-classified data sets. Confusions seem to dominate in classes whose fractions of observation time are very similar. For the classes with the strongest relative discrepancies---$\beta$, $\phi$, $\gamma$ and $\eta$---we also explored the probabilities of the assigned state in an effort to learn how certain the logistic regression model was in its classification for those states. We find that for states $\phi$, $\gamma$ and $\eta$, the classifier is fairly certain in its predictions: for example, for class $\phi$, more than $94\%$ of all classified samples have a probability for the source being in state $\phi$ that is $>0.8$, and $98\%$ of all classified samples have a probability of $\phi$ being the true state that is at least twice that of the state with the second-largest probability.
For this state, there is a small population of samples ($\sim 7\%$) that might be in state $\gamma$ or $\eta$ with a probability of up to $0.4$, close to the probability assigned to the $\phi$ classification. For class $\beta$, which shows the largest growth between the early and the late data set, the situation is much less clear. The confusion matrix in Figure \ref{fig:confusion_matrix} shows a significant fraction of other states, most notably $\theta$ and $\rho$, being mis-classified as $\beta$, raising the question of whether these mis-classifications could account for the sharp rise in relative time spent in the $\beta$ state in later years. In general, the model is much less certain about these classifications. In only $\sim 60\%$ of all cases does the model predict a probability for state $\beta$ that is larger than $80\%$, though it still predicts a probability exceeding that of the second-highest class by a factor of two in $83\%$ of all cases. Additionally, while mis-classifications can account for roughly $25\%$ of all observations of this state, the fraction of time spent in state $\beta$ jumped by a factor of $2$. Hence, we conclude that while confusions between states $\beta$, $\theta$ and $\rho$ might account for an appreciable fraction of the observations, they are not sufficient to explain the entire increase in occurrences of this state. In summary, we conclude that the increased number of observations in states $\phi$ and $\beta$ in the machine-classified data is likely real, though the magnitude of the effect for state $\beta$ is hard to estimate due to the contamination of the sample by likely mis-classifications. For states that are much less represented in the machine-classified data set compared to the original human classification, we explore whether these states might have lost samples to misclassification as well.
For this, we found all samples where states $\gamma$ and $\eta$, both of which are almost absent from the machine-classified data set, were the second-most probable state, and compared their probability to that of the state the logistic regression model chose for these specific samples. We find that state $\gamma$ often comes second to $\chi$-state observations. However, because the hardness ratios are quite different for the two states, we find that the logistic regression model assigns these samples to class $\chi$ with a very high degree of confidence (with a $\chi$-state probability of $>0.8$ in $90\%$ of all samples where $\gamma$ has the second-highest probability). This indicates that the paucity of $\gamma$-state observations in recent years is likely real. Similar reasoning applies to state $\eta$, where we find similar numbers for the confidence that the light curves in question belong to the $\phi$ state instead. Additionally, both $\gamma$ and $\eta$ are extremely rarely confused in the validation and test data sets; in the case of $\eta$, it is more likely to gain false positives from mis-classifications of state $\delta$. Overall, we conclude that there likely was a drop in the occurrence of states $\gamma$ and $\eta$ in the later observations that cannot be explained by confusions with other classes. \begin{figure} \begin{center} \includegraphics[width=9cm]{grs1915_supervised_transmat.pdf} \caption{Transition matrix of states. We used human labels where available, and labels inferred by the logistic regression model trained on the human labels where the latter were unavailable. The matrix presents the probability of arriving in state $x_{t+1}$ given the current state $x_{t}$. The probability is row-wise normalized such that the probabilities to arrive in any new state $j$ from a given state $i$ sum to one: $\sum_{j=1}^{N}p(x_{t+1,j} | x_{t,i}) = 1$.
The diagonal indicates transitions into the same state.} \label{fig:transitionmatrix} \end{center} \end{figure} \subsection{Time Evolution of States} While the logistic regression model employed in the classification task does not include any time dependence, it is instructive to put the classified observations into context over the sixteen years of \project{RXTE}\ monitoring. In Figure \ref{fig:transitionmatrix}, we show a transition matrix between states. Each row in this matrix represents the probability of passing from initial state $i$ to final state $j$, $p(x_{t+1, j} | x_{t, i})$. The transition matrix was constructed by using the human-classified states for observations where these labels exist, and the computer-based classification for all other observations. We then counted transitions from each state $i$ into each other state $j$ for the entire \project{RXTE}\ data set, and row-wise normalized such that the probabilities to move into state $j$ from state $i$ sum to one. Note, however, that there is an important caveat in this procedure: it implicitly assumes continuous observations that are causally connected, that is, that the state does not change between one observation and the next. This is not true in practice: \project{RXTE}\ observed GRS 1915+105 for $\sim 2\,\mathrm{ks}$ per day, leaving most of the day unobserved. Rapid state transitions are possible, thus the transition matrix here can only be seen as an indication of how state transitions might occur in this source. However, a more realistic transition matrix requires more complex (time-dependent) methods that are beyond the scope of this paper. Overall, it appears that the transition matrix is well-connected: most state transitions are possible, though many occur with a fairly low probability. Transitions to and from the $\chi$-state occur more frequently than most other transitions, which is not surprising given that the source spends the majority of its time in this state.
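The counting and row-wise normalization described above can be sketched as follows; the state list and label sequence below are hypothetical stand-ins for the real temporally ordered classification.

```python
import numpy as np

states = ["alpha", "beta", "chi", "rho"]
index = {s: i for i, s in enumerate(states)}

# Hypothetical sequence of consecutive state labels in temporal order;
# in the paper this is the combined human + machine classification.
labels = ["chi", "chi", "rho", "chi", "alpha",
          "beta", "chi", "chi", "rho", "rho"]

# Count transitions from each state into the next.
counts = np.zeros((len(states), len(states)))
for current, nxt in zip(labels[:-1], labels[1:]):
    counts[index[current], index[nxt]] += 1

# Row-wise normalization: the probabilities of leaving state i sum to one.
row_sums = counts.sum(axis=1, keepdims=True)
trans = np.divide(counts, row_sums, out=np.zeros_like(counts),
                  where=row_sums > 0)
print(trans)
```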
Conversely, the probability distribution for leaving the $\chi$-state is fairly flat, indicating that the source is more or less equally likely to go into any of the other states. There are several other transitions that occur with higher probability. For example, the source is more likely to move from state $\alpha$ into state $\beta$, compared even with $\chi$. Some transitions do not occur at all, for example transitions from states $\delta$ and $\gamma$ into state $\alpha$, or from states $\alpha$ and $\eta$ into state $\delta$. In principle, unobserved transitions are of as much interest as those that occur frequently, though their interpretation requires caution. \begin{figure*} \begin{center} \includegraphics[width=\textwidth]{grs1915_supervised_phys_features_pca.pdf} \caption{PCA representation as in Figure \ref{fig:supervised_pca}, but with the labels following \citet{harikrishnan2011}. In the left panel, we show the human-classified labels in colours, and the unclassified data in grey. We also explicitly mark the samples of classes $\eta$ and $\omega$, for which we have no labels in this scheme. In the right panel, we show the fully classified data set.} \label{fig:pca_physical} \end{center} \end{figure*} While the transition matrix is calculated as a set of probabilities, all we can say about the transitions with a probability of $0$ is that they have not been observed during the lifetime of \project{RXTE}. This may just as well be due to the lack of continuous observations and the low observational duty cycle as to a real physical effect. In practice, it is interesting to note that while the transition matrix is overall not symmetric (transitions from state $i$ into state $j$ have a different probability from transitions from state $j$ into state $i$), there are some notable symmetries. In particular, transitions between states $\alpha$ and $\delta$, states $\gamma$ and $\eta$, states $\mu$ and $\omega$, and states $\mu$ and $\eta$ never occur in either direction.
They indicate that perhaps the transition matrix encodes real physical effects that a better model could capture more efficiently. \subsection{Supervised Classification with Physically Motivated Labels} The connection between the long-term evolution of the patterns observed in GRS 1915+105 and the underlying physical processes of the accretion disc is poorly understood. There is no comprehensive accretion theory that could explain the complex variability observed in the source. Therefore, we can only attempt a comprehensive phenomenological description, as done above. However, there are attempts to connect the set of states with some underlying mechanisms. In particular, \citet{misra2004,misra2006} and \citet{harikrishnan2011} attempted to connect the observed states to a non-linear, low-dimensional chaotic system. If true, this would have the advantage of allowing a description of the complex magnetohydrodynamics of the accretion disc with a set of ordinary differential equations. Based on a set of methods optimized for disentangling non-linear dynamics from stochastic systems---correlation dimension, correlation entropy and multi-fractal spectra---they find evidence that nearly half of the twelve states under consideration exhibit deviations from randomness, possibly explained by a non-linear chaotic system. Conversely, other states may be well described by stochastic or coloured noise. In contrast, \citet{polyakov2012} exclusively consider the stochastic components in the light curves using flicker-noise spectroscopy (FNS), with the advantage that they can address one of the major shortcomings in the approach chosen by \citet{misra2004,misra2006} and \citet{harikrishnan2011}: the presence of Poisson fluctuations, which may contaminate the measures of chaos theory the latter authors use in their analyses.
They find that thirteen of the fourteen states (state $\omega$ had no known observation with sufficient length to perform the analysis) can be classified into four phenomenological states based on the characteristics of the stochastic contributions to the light curve: random noise, power-scale variability with a $1/f$-type power spectrum, one-scale variability with a single characteristic time scale, and two-scale variability with two characteristic time scales. While the two analyses are not directly comparable, the fundamental idea in both is to break down the known states, determined entirely by their patterns in light curves and spectral changes, into classes that relate, at least broadly, to underlying physical processes such as stochastic fluctuations of the viscosity in the accretion disc or changes in the mass accretion rate. Both \citet{harikrishnan2011} and \citet{polyakov2012} point out that their analyses have several drawbacks and shortcomings. \citet{harikrishnan2011} do not include either class $\eta$ or class $\omega$ in their analysis, and \citet{polyakov2012} specifically exclude class $\omega$ due to a lack of data. For class $\delta$, \citet{polyakov2012} point out that more data are needed to decide whether chaotic attractors or stochastic fluctuations provide the more compelling explanation. These cases imply that the initial classification of the first four years did not provide a sufficient amount of data for the different classes. Here, we apply the classification scheme derived by \citet{harikrishnan2011} as an example of how we can use the machine learning approach developed above for a somewhat less phenomenological view of the data set.
Following \citet{harikrishnan2011}, we simplify the original labels into three categories: states $\delta$, $\gamma$, $\phi$ and $\chi$ are purely stochastic states (``stochastic''), whereas $\kappa$, $\lambda$ and $\mu$ show chaotic behaviour contaminated by coloured noise (``chaotic+coloured''), and states $\beta$, $\theta$, $\alpha$, $\mu$ and $\rho$ correspond to a system showing signatures of deterministic non-linear behaviour (``chaotic''). We also return previously classified examples of states $\omega$ and $\eta$ to the unclassified data set, since we have no a priori knowledge of their affiliation under this scheme. \begin{figure} \begin{center} \includegraphics[width=9cm]{grs1915_supervised_phys_cm.pdf} \caption{Confusion matrix for the physical labels.} \label{fig:confusionmatrix_physical} \end{center} \end{figure} We then repeat the supervised classification, and find that for this classification problem, the logistic regression model underperforms compared to more complex decision schemes. In particular, random forests provide a higher performance on the validation set ($91\%$ compared to $85\%$ for the logistic regression model). This is not entirely surprising: logistic regression can only draw very simple (linear) decision boundaries in the high-dimensional parameter space, whereas random forests use ensembles of decision trees. Decision trees essentially pose a series of ``if-else'' questions to arrive at a decision for a given sample to belong to a certain class. This allows the decision boundaries between two classes to be much more complex than for the logistic regression model, with the drawback of being much harder to interpret. For the case with many classes, we found that the added complexity of the random forest classifier does not lead to increased accuracy, and conversely that the non-linear decision boundaries it draws easily lead to over-fitting.
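The comparison between logistic regression and random forests follows a standard scikit-learn workflow; below is a minimal sketch on synthetic data (the features and the resulting accuracies are illustrative stand-ins, not those of the paper).

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy 3-class problem standing in for the "stochastic / chaotic+coloured /
# chaotic" labels; the real features are engineered time-series and
# spectral summaries.
X, y = make_classification(n_samples=1000, n_features=20, n_informative=10,
                           n_classes=3, n_clusters_per_class=2,
                           random_state=42)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3,
                                                  random_state=42)

# Fit both classifiers and compare validation accuracy.
logreg = LogisticRegression(max_iter=2000).fit(X_train, y_train)
forest = RandomForestClassifier(n_estimators=200,
                                random_state=42).fit(X_train, y_train)

print("logistic regression:", logreg.score(X_val, y_val))
print("random forest:      ", forest.score(X_val, y_val))
```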
In the classification with $3$ states attempted here, however, we have combined several of the original states into a single new state with a much more complex shape in parameter space (see also Figure \ref{fig:pca_physical}). Therefore, linear decision boundaries result in underfitting, making random forests a more appropriate algorithm for classification here. In Figure \ref{fig:pca_physical}, we show the 2-dimensional PCA representation of the samples both before and after classification. The samples of classes $\eta$ and $\omega$, which have not previously been included in this classification scheme, are explicitly marked in the left-hand panel. We report a classification accuracy of $91\%$ on the validation set and $93\%$ on the test set. Figure \ref{fig:confusionmatrix_physical} presents the confusion matrix for the classification with the physically motivated labels. Out of $495$ samples in the combined validation and test sets, only $36$ are confused. $17$ of these confusions occur in the ``stochastic'' state, where $11$ samples are incorrectly classified as ``chaotic+coloured'' and $8$ as ``chaotic'', respectively. \begin{figure} \begin{center} \includegraphics[width=9cm]{grs1915_supervised_phys_transmat.pdf} \caption{Transition matrix for the physical labels.} \label{fig:transmat_phys} \end{center} \end{figure} The transition matrix (see Figure \ref{fig:transmat_phys}) for this classification problem is well-connected. As with the 14-label classification, the source has the highest probability of remaining in the same state, given the previous state. However, it can easily reach either of the other two states given its current state, with fairly similar transition probabilities between $0.043$ and $0.129$. Of course, the previously mentioned caveats still apply: this model does not directly encode time-dependence of the states, and we do not know whether there are any visible patterns in how the source transitions between states.
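Confusion matrices like the one in Figure \ref{fig:confusionmatrix_physical} are computed directly from paired label lists; a small hypothetical example with scikit-learn (the counts below are invented, not the paper's):

```python
from sklearn.metrics import confusion_matrix

# Hypothetical true vs. predicted labels for the three physically
# motivated classes; the real counts come from the combined validation
# and test sets.
classes = ["stochastic", "chaotic+coloured", "chaotic"]
y_true = ["stochastic"] * 10 + ["chaotic+coloured"] * 6 + ["chaotic"] * 6
y_pred = (["stochastic"] * 8 + ["chaotic+coloured", "chaotic"]
          + ["chaotic+coloured"] * 6 + ["chaotic"] * 6)

# Rows: true class, columns: predicted class, in the order given by `labels`.
cm = confusion_matrix(y_true, y_pred, labels=classes)
print(cm)
```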
Figure \ref{fig:duration_phys} attempts to capture the fraction of observed time $T_\mathrm{obs}$ that the source spends in each state. This is interesting because in principle, it could tell us about the duty cycle of the various accretion regimes and (MHD) instabilities likely responsible for the source's varied behaviour. Since the $\chi$ state in the previous classification with the labels obtained by \citet{belloni2000} is by far the most ubiquitous state, it is unsurprising that more than $50\%$ of the time the source can be found exhibiting stochastic variability. The remaining observations are close to evenly split between ``chaotic'' and ``chaotic+coloured'' states, with the latter being slightly more common. \begin{figure} \begin{center} \includegraphics[width=9cm]{grs1915_supervised_phys_states_histogram.pdf} \caption{Fraction of observed time $T_{\mathrm{obs}}$ spent in each of the three states. In blue, we show the results from the human classification on the first four years of data. In red, we show the computer-classified data from the later eight years. The source spends the majority of time showing stochastic variability.} \label{fig:duration_phys} \end{center} \end{figure} \begin{figure*} \begin{center} \includegraphics[width=\textwidth]{grs1915_supervised_eta_omega.pdf} \caption{Inferred states in the 3-state model for samples classified in the 14-state model as either $\eta$ or $\omega$, which have no identification in \citet{harikrishnan2011}. We find that most observations in state $\eta$ seem to be closer to other examples of chaotic+coloured variability, while for state $\omega$, the situation is less clear, with a significant number of samples identified with some form of either chaotic or stochastic process.} \label{fig:etaomega_states} \end{center} \end{figure*} Finally, we also infer the class membership in the physical model of the remaining two states, $\eta$ and $\omega$. 
In Figure \ref{fig:etaomega_states}, we show the distribution of the samples classified by \citet{kleinwolt2002} as $\omega$ and by \citet{hannikainen2003} as $\eta$ in the classification scheme of \citet{harikrishnan2011}, as inferred from the random forest classifier. Out of $17$ samples in state $\omega$, $11$ ($65\%$) are classified as ``chaotic'', indicating that this state is perhaps similar to states $\beta$ and $\theta$. This is consistent with the fairly regular pattern of dips observed by \citet{kleinwolt2002} in this state. However, \citet{kleinwolt2002} also point out that in the bright, variable intervals between dips the source shows behaviour reminiscent of state $\gamma$. Segments with no dip or a smaller dip may thus account for the fraction of samples classified as stochastic. This might explain why the random forest classifier has trouble identifying these samples with a single class. It also showcases a general shortcoming of supervised learning: the algorithm is only trained on what it has seen before; if something uniquely new appears, classification is likely to fail. The situation is much clearer for state $\eta$. A large majority of the samples are classified as ``chaotic+coloured''. This state shows pulses on a 5-minute time scale, but these pulses are overall much less regular than those seen, for example, in the $\rho$ state. In principle, this may be indicative of a chaotic system driving the processes giving rise to the X-ray emission, perhaps contaminated with stochastic, coloured noise. On the other hand, the hardness ratios are significantly different from those of all other previously observed classes, one of the reasons why these observations were classified as a new state. \section{Discussion and Conclusion} \label{sec:discussion} GRS 1915+105 is a remarkable BHXRB. It has been in continuous outburst since 1992, showing at least $14$ different states, compared to at most $3$ states in other black hole X-ray binaries.
\project{RXTE}'s near-continuous monitoring between 1996 and 2011 has resulted in a data set of this source of unprecedented scale and richness, and the existence of its states as well as a subset of previously-classified data makes it an ideal test case for the use of modern machine learning methods in X-ray astronomy. Here, we classify the entire sixteen-year data set observed with \project{RXTE}\ for the first time using a logistic regression model. The results allow researchers to pick specific observations where the source inhabited a certain state from the data set for further analyses, vastly improving the previous situation, where only a third of the data had known states. The initial classification was done largely visually: the light curves in the $2-60\,\mathrm{keV}$ band show remarkably complex, but repeating patterns that are easily distinguishable by eye (see e.g. Figure (2) in \citealt{belloni2000}). Encoding these patterns in a set of features that a machine learning algorithm can use proves both difficult and instructive. Many patterns, in particular in states $\theta$, $\lambda$, $\nu$, $\alpha$ and $\beta$ last $\sim 1000\,\mathrm{s}$ or more, similar to the duration of most uninterrupted data segments. In most light curve segments we extracted, we see at most one cycle of the pattern, or perhaps only a fragment of it. A Fourier representation of the data is therefore of limited use here: because of the short duration, it cannot capture the pattern of harmonics generated by the non-sinusoidal nature of the signal. At most, it will be able to capture differences between states at higher frequencies, such as the presence or absence of QPOs. This, however, does not allow us to uniquely distinguish the patterns that are so striking to the naked eye. At the same time, each observation in a given state will start at a random phase of the pattern. 
This immediately makes it impossible to use the light curves directly in the machine learning algorithm, since the latter is sensitive to phase shifts: two light curves of the same state, but shifted in phase, appear far apart in feature space. Despite these caveats, our feature engineering in Section \ref{sec:featureselection} has shown that the most predictive features are power spectral representations of the data, with the top three features achieving a validation accuracy of roughly $80\%$ by themselves. This indicates that a better model of the variability might improve the classification further. In contrast, features based on the two hardness ratios had only a minor effect, and only HR1 proved to have a measurable effect on the classification accuracy. In the era of spectral timing, it might be of interest to explore features that tie time and energy closer together, such as time lags, covariance spectra and coherence. There are various strategies that might be successful at improving the features encoding variability. One may use much shorter segments than we have done here, which will not encode the full pattern, but parts of it that may be shared across states. For example, the long intervals with a low count rate and low variance in state $\alpha$ might be shared with state $\chi$, while the pulses in the same state might be more similar to state $\rho$. The patterns we see would then be repeatable cycles of these micro-states. This type of model requires a more complex representation, which is beyond the scope of the current work. Another strategy is to learn the features from the data itself. This has been a popular approach in a branch of machine learning called ``deep learning'', but generally requires vast training data sets with millions of samples. It is unclear how well a deep neural network would be able to learn the structure of the data set from only $\sim 8000$ light curves.
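The phase-sensitivity argument above is easy to illustrate: a circularly shifted copy of a periodic light curve lies far from the original in the raw sample space, but its power spectrum, which discards phase, is essentially unchanged. A minimal sketch:

```python
import numpy as np

# A toy periodic "light curve" with an integer number of cycles.
t = np.linspace(0, 10, 1024, endpoint=False)
signal = np.sin(2 * np.pi * 1.5 * t)
shifted = np.roll(signal, 200)  # same pattern, different phase

# Raw light curves differ strongly under a phase shift...
raw_distance = np.linalg.norm(signal - shifted)

# ...but the power spectrum |FFT|^2 discards phase information.
power = np.abs(np.fft.rfft(signal)) ** 2
power_shifted = np.abs(np.fft.rfft(shifted)) ** 2
spec_distance = np.linalg.norm(power - power_shifted)

print(raw_distance, spec_distance)
```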
Alternatively, \textit{autoencoders}, neural networks aimed at learning representations of data, have been used successfully for encoding human speech signals for the purpose of both speech recognition and reconstruction \citep[for an overview, see e.g.][]{hinton2012}. Speech is similar to the data observed in GRS 1915+105 in the sense that it includes information on vastly different time scales, all of which are important for recognizing the correct word, or in this case, state. These methods can potentially provide powerful encodings of black hole signals beyond GRS 1915+105 and will be explored in future work. Another limitation of the approach chosen here arises from the inherent assumption in supervised classification that the examples in the training set are representative of the unclassified data, that is, that there are no additional, unrecognized states in the data. If there are states that have so far not been recognized in the previously unclassified data set, then the algorithm cannot find them. Unsupervised machine learning methods, which do not make use of the human-supplied labels, would be more suitable for this task. Hidden Markov Models (HMMs) are one such class of models, and also allow for an explicit encoding of the time dependence of the observations. They thus make it possible to find states not previously observed, and also to infer the state the source likely occupied while \project{RXTE}\ did not observe it. This is necessary for an accurate inference of the transition matrix, which is limited by the low duty cycle of on-target time. Models of this type will also be the subject of future work. Finally, it must be mentioned that \project{RXTE}\ no longer operates. Another potentially useful avenue of work might be to use transfer learning methods, which allow for inference on data sets with a warped feature space compared to the original training data.
This is of particular use given the existence of different telescopes observing this source, such as \project{Swift}\ and \project{Astrosat}, as well as \project{NICER}\ in the near future, which have different sensitivities and energy ranges and will thus create a feature space that is similar to that created with \project{RXTE}\ data, but not identical. In the future, combining data from different telescopes in the same classification task could lead to improved insights into the system. The exact processes and parameters steering the long-term evolution of GRS 1915+105 are currently unknown. While much attention has focused on individual states, in particular the $\rho$ state with its very regular patterns, the long-term evolution of the source, which states it spends its time in and how it switches between them, has defied explanation. Likely, the observed patterns are due to a complex interdependence between MHD processes in the accretion disc and the emission processes producing the observed X-rays. Here, we do not attempt to provide an explanation of the long-term evolution, but instead show new ways in which the existing data can be used to derive knowledge about the phenomenology of the source. \citet{belloni2000} themselves point out that their classification was meant as a phenomenological description only. On the other hand, the observed patterns must be tied to the underlying physical processes, in particular the mass accretion rate; understanding which states the source spent its time in over the past $16$ years thus plays an important role in understanding how the accretion disc reacts to global changes. Here, we chose a specific model (``stochastic'' versus ``chaotic'' processes) to highlight the connection of the long-term evolution to real, physical processes in the accretion disc.
The idea that the underlying driving mechanism could be a chaotic, non-linear dynamical system is compelling, because it reduces the complex problem of magnetohydrodynamics in an accretion disc to a system of ordinary differential equations, whereby changes in the disc are driven by changes in global properties such as viscosity or the mass accretion rate. At the moment, no global models of the long-term evolution of GRS 1915+105 exist. However, the feature engineering and classification performed in this paper are a first step toward providing the data products that make a comparison between models and data possible. While it is not immediately possible to apply the same classifier to other sources, there are other interesting objects for which this approach may be useful. First and foremost, IGR J17091-3624 provides an interesting additional test case, as it has shown similar states and state changes to GRS 1915+105. Similarly, some Ultraluminous X-ray Sources (ULXs) have shown variability similar to the $\mu$ and $\kappa$ states of GRS 1915+105, indicating perhaps that ULXs are subject to strongly super-Eddington flows \citep{middleton2011}. The methodology presented here could thus be useful in helping to understand a number of poorly understood objects, both Galactic and extragalactic. \section*{Acknowledgements} We thank the anonymous referee for their helpful comments. The authors acknowledge support by the Moore-Sloan Data Science Environment at NYU. The authors thank Brian McFee and Kyunghyun Cho for many useful conversations. \bibliographystyle{mnras}
\section{Introduction} \IEEEPARstart{V}{isible} and thermal infrared information strongly complement each other and contribute to visual tracking in different aspects. On one hand, thermal data are not affected by lighting variations and can penetrate some particulate matter (e.g., smog and fog), since thermal cameras capture the infrared radiation (0.75-13$\mu$m) emitted by subjects with a temperature above absolute zero. On the other hand, visible data are more effective in discriminating targets from other objects or the background when they are in thermal crossover. Therefore, RGBT tracking, which aims to locate targets with a bounding box using visible and thermal information given the initial state, has received a surge of interest in the computer vision community. \begin{figure}[t] \begin{center} \includegraphics[width=0.9\linewidth]{figure/bubExcel+colorbar} \end{center} \caption{Comparison of the proposed LasHeR with existing RGBT tracking benchmark datasets, including GTOT~\cite{Li16tip}, VOT-RGBTIR2019~\cite{Kristan2019a}, RGBT210~\cite{Li17acmmm} and RGBT234~\cite{Li2018pr}. The circle diameter is in proportion to the number of frames of a benchmark dataset.} \label{fig:bubble} \end{figure} \par Datasets are an essential part of promoting the development of RGBT tracking. Early works develop some small-scale datasets, including OSU-CT~\cite{Davis07cviu} and LITIV~\cite{Torabi12cviu}, which contain six and nine RGBT videos respectively. Li et al.~\cite{Li16tip,Li17acmmm,Li2018pr} construct three larger datasets, which include 50, 210 and 234 RGBT videos respectively, and also annotate some challenge attributes for challenge-based performance evaluation. With increasing interest in RGBT tracking, the VOT challenges~\cite{Kristan2019a} take RGBT tracking as a new challenge and introduce the VOT-RGBTIR2019 dataset, elaborately designed based on RGBT234. However, several issues remain in the development and assessment of RGBT tracking algorithms.
{\flushleft \bf Lack of a large-scale dataset.} Existing RGBT video datasets have fewer than 300 sequences, as shown in Fig.~\ref{fig:bubble}. Deep learning models have dominated the research field of RGBT tracking, and these small-scale datasets limit the potential of deep RGBT tracking models. For example, when evaluating a tracker on RGBT234, a common practice is to initialize the tracking model with a deep classification model pre-trained on the ImageNet dataset and then fine-tune it on the GTOT dataset~\cite{Li2019iccvw}. The intrinsic differences among tasks and the small scale of GTOT might result in suboptimal tracking performance~\cite{LaSOT}. {\flushleft \bf Single imaging platform.} The RGBT sequences in each of the existing datasets are captured by a single imaging platform. For example, for GTOT, the recording system consists of an online thermal imager and a CCD camera mounted on a tripod. Due to such a setup, the imaging view needs to be fixed when capturing a sequence, and the scenes also need to be carefully selected to make the homography assumption effective~\cite{Li16tip}. The imaging hardware for RGBT210 consists of a turnable platform, a thermal infrared imager and a CCD camera, whose fixed horizon limits the scenes and categories~\cite{Li2018pr}. These single imaging platforms largely restrict the diversity of data creation, since all sequences share the same imaging parameters and a single kind of imaging setup. {\flushleft \bf Limited number of scenes and categories.} RGBT tracking aims to track an arbitrary object in an arbitrary video sequence, and a reasonable number of scenes and categories is thus vital for the fair and reliable evaluation of different algorithms. Nevertheless, existing RGBT tracking datasets suffer from a small number of scenes and categories.
\begin{table*}[t] \caption{Comparison of our LasHeR with recently published benchmarks.} \label{tb:comparision_benchmarks} \renewcommand\arraystretch{1.2} \begin{center}\normalsize \begin{tabular}{c|cccccccc} \toprule Benchmark &Sequences &\begin{tabular}[c]{@{}l@{}}Average \\ frames\end{tabular} &\begin{tabular}[c]{@{}l@{}}Max \\ frames\end{tabular} &\begin{tabular}[c]{@{}l@{}}Total \\ frames\end{tabular} &\begin{tabular}[c]{@{}l@{}}Object \\ classes\end{tabular} &\begin{tabular}[c]{@{}l@{}}Num. of \\ attributes\end{tabular}&\begin{tabular}[c]{@{}l@{}}Multi-platform \\ imaging\end{tabular} & \begin{tabular}[c]{@{}l@{}}Training \\ dataset\end{tabular} \\ \midrule \midrule GTOT~\cite{Li16tip} &50 &157 &376 &7.8K &9 &7 & {\ding{55}}& {\ding{55}}\\ RGBT210~\cite{Li17acmmm}&210 &498 &4140 &104.7K &22 &12 &{\ding{55}}& {\ding{55}} \\ RGBT234~\cite{Li2018pr}&234 &498 &4140 &116.7K &22 &12 &\ding{55}& {\ding{55}} \\ VOT-RGBTIR2019~\cite{Kristan2019a}&60 &334 &1335 &20.1K &13 &12 &\ding{55}& {\ding{55}}\\ \hline LasHeR &1224 &600 &12862 &734.8K &32 &19 &\ding{51}& {\ding{51}}\\ \bottomrule \end{tabular} \end{center} \end{table*} {\flushleft \bf Missing real-world challenges.} A good dataset should include as many real-world challenges as possible, which could drive trackers to handle them robustly in practice. However, several real-world challenges are missing in existing RGBT tracking datasets, including hyaline occlusion, frame lost, high illumination, abrupt illumination variation, out of view, similar appearance and aspect ratio change. For example, thermal cameras are usually equipped with a non-uniformity correction scheme; while it runs, a fragment of thermal frames is lost and the state (e.g., appearance, location and scale) of the target might change abruptly. To address the above issues, we provide a {\bf La}rge-{\bf s}cale {\bf H}igh-diversity b{\bf e}nchmark for {\bf R}GBT tracking (LasHeR) with the following contributions.
\begin{itemize} \item LasHeR consists of 1224 visible and thermal infrared video pairs with more than 730K frame pairs in total. Each frame pair is spatially aligned and manually annotated with a bounding box, making the dataset well and densely annotated. It will play a significant role in both the training of deep RGBT trackers and the comprehensive evaluation of RGBT tracking methods. \item LasHeR is highly diverse, with data captured from a broad range of object categories, camera viewpoints, scene complexities and environmental factors across seasons, weather conditions, day and night. Motivated by real-world applications, several new challenges are taken into consideration in data creation. It will promote the research and development of practical tracking algorithms. \item The unaligned version of LasHeR has also been released to improve the practicality of RGBT tracking. We design a scheme to automatically generate high-quality ground truths for the unaligned LasHeR by using the annotated ground truths in LasHeR and the transformations between the two modalities. We hope the release of the unaligned LasHeR will attract research interest in alignment-free RGBT tracking, which is a more practical task in real-world applications. \item Comprehensive evaluation and analysis of different RGBT tracking algorithms are conducted on the LasHeR dataset. We not only clarify the remaining research room in RGBT tracking through the comparison of 12 RGBT tracking algorithms, but also demonstrate the effectiveness of LasHeR in the training of deep RGBT trackers. \end{itemize} \begin{figure*}[htbp] \centerline{\includegraphics[width=\linewidth]{figure/SceneandClassv2.eps}} \caption{Example frame pairs in LasHeR.} \label{fig::scene-class} \end{figure*} \section{Related Work} In this section, we review RGBT tracking benchmarks that are relevant to our work, and also briefly review representative RGBT tracking methods.
For a more detailed introduction to RGBT tracking methods, please refer to the survey~\cite{Zhang20survey}. \subsection{RGBT Tracking Datasets} {\bf OSU-CT} dataset~\cite{Davis07cviu} contains six RGBT video sequence pairs recorded from two different locations with only people moving. {\bf LITIV} dataset~\cite{Torabi12cviu} contains 9 RGBT sequences and also suffers from limited size, low diversity and high bias. {\bf GTOT} dataset includes 50 RGBT video clips under different scenarios and conditions, and seven challenges are taken into account to increase the diversity. To create a larger and more challenging RGBT video dataset, Li et al.~\cite{Li17acmmm} introduce the {\bf RGBT210} dataset, which contains 210 RGBT videos with highly accurate alignment and annotations of 12 challenges. Li et al.~\cite{Li2018pr} extend RGBT210 to a larger dataset called {\bf RGBT234}, containing 234 RGBT videos, to improve annotation quality and include more challenges. With the increasing interest in RGBT tracking, the VOT challenges~\cite{Kristan2019a} adopt RGBT tracking as a new challenge and introduce the {\bf VOT-RGBTIR2019} dataset, elaborately designed based on RGBT234. However, these datasets are limited by their size, diversity and bias for the training of deep RGBT tracking algorithms and the comprehensive evaluation of different algorithms. Zhang et al.~\cite{zhang2019multi} generate a large-scale synthetic RGBT dataset from 8,335 videos with 1,251,981 frames in total, but synthetic RGBT data have a large gap from real ones. As far as we know, the above are the datasets currently in common use in the field of RGBT object tracking, and most of the real RGBT datasets are small in terms of the number of sequences, target categories and challenge diversity.
It is worth mentioning that there is an inclusion relationship among the three datasets RGBT210, RGBT234 and VOT-RGBTIR2019, which means that the total number of independent sequences available from these three datasets does not exceed 234. Moreover, the effectiveness of the large-scale synthetic RGBT tracking dataset remains to be verified. In this dataset, the thermal infrared modal data are generated from the visible images by a pix2pix model~\cite{Lichao2018Synthetic}. Note that the original motivation for introducing thermal infrared data in the RGBT vision field is to exploit the complementary information between the two modalities, so ensuring that a generated pseudo-thermal infrared image carries the corresponding complementary information when the RGB data quality is poor is a difficult problem. \subsection{RGBT Tracking Methods} Due to the complementarity of thermal infrared and visible information, RGBT tracking has become a promising research topic in the computer vision community and has attracted much attention. We can roughly divide existing RGBT tracking methods into four categories, including sparse representation-based methods, graph-based methods, correlation filter-based methods, and deep learning-based methods. We briefly review them as follows. Sparse representation is an effective tool to suppress noise and errors, and Li et al.~\cite{Li16tip,Li16mmm,Li17tsmcs} propose collaborative sparse representation models for RGBT tracking in the Bayesian filtering framework, in which modality weights are introduced to achieve adaptive fusion of different source data. Lan et al.~\cite{Lan2018RobustCD,Lan2019OnlineNM,Lan2019LearningMF,Lan2020ModalitycorrelationawareSR} propose discriminative learning frameworks to leverage some properties of different modalities for robust RGBT tracking.
In graph-based methods, the target in both RGB and thermal modalities is represented by a collaborative graph, in which node weights are used to suppress background interference for robust RGBT target representation. Li et al.~\cite{Li17acmmm,Li2018pr} propose a weighted sparse representation regularized graph to capture global relations among graph nodes. Li et al.~\cite{Li18eccv} propose a cross-modal ranking method with soft consistency and noisy labels to handle the effects of modal heterogeneity and seed noise in the ranking model. In addition, Li et al.~\cite{li2018spic} propose a two-stage modality-graphs regularized manifold ranking model to refine the ranking results. Correlation filter techniques are also applied to RGBT tracking due to their good performance and high efficiency. Wang et al.~\cite{Wang2018LearningSC} present a fusion tracking method based on a soft consistency correlation filter model, in which both collaboration and heterogeneity are taken into account. Zhai et al.~\cite{Zhai2019FastRT} propose to use a low-rank constraint to learn correlation filters jointly for cross-modal fusion. Yun et al.~\cite{Yun2019DiscriminativeFC} propose a discriminative fusion correlation learning model to improve DCF-based tracking performance. Apart from the above works, which purely utilize correlation filters, some studies combine correlation filters with other techniques. For example, Luo et al.~\cite{Luo2019ThermalIA} propose a tracking-before-fusion framework which consists of two modules, including a correlation filter based tracking module and a histogram based tracking module. Deep learning is well known for being able to learn powerful feature representations from large-scale datasets~\cite{Xu2018RelativeOT,Li2018FusingTC}. One research stream employs Siamese networks~\cite{Zhang2019ObjectFT,Zhang2019SiamFTAR,Zhang2020DSiamMFTAR,Zhang2019Decisionlevel} for RGBT tracking.
These methods have fast tracking speed, but are usually weak in representing low-resolution objects, which are common in RGBT tracking. The other main research stream builds on the MDNet framework~\cite{Zhang2018LearningMC,zhu2020quality,zhang2020object,Li2019iccvw,zhu2019acmmm,gao2019deep,Li2020ChallengeAwareRT,Wang2020CrossModalPF,Li2020DMCNet,Li2020MANet++}, adopting different fusion strategies to utilize the complementary benefits of RGB and thermal data. Such methods achieve robust tracking results but low efficiency, and their tracking capacity is limited by MDNet, which is based on the VGG network. Zhang et al.~\cite{zhang2019multi} propose a multi-modal fusion strategy in the DiMP framework~\cite{DiMPBhat2019LearningDM}, which is based on ResNet. Regarding decision-level fusion, Tang et al.~\cite{Cong2019DecisionLevelFT} propose an SSD based RGBT tracking framework. Zhang et al.~\cite{Zhang2021JointlyMM} combine an appearance tracker with a motion tracker to jointly model appearance and motion cues for RGBT tracking. \section{LasHeR Benchmark} To resolve the contradiction between data-hungry RGBT trackers and existing small-scale datasets, we construct a large-scale video dataset, called LasHeR, which includes a total of 1224 pairs of RGBT sequences with more than 730K frames in total. The major advantages of LasHeR over several existing datasets are shown in Table~\ref{tb:comparision_benchmarks}. We analyze the details as follows. \begin{figure*}[t] \centerline{\includegraphics[width=\linewidth]{figure/ObjectandCommonClasses}} \caption{Distribution of object categories on LasHeR. (a) Distribution on the entire LasHeR. (b) Distribution of common object classes on several typical RGBT tracking datasets.} \label{fig:object_distribution} \end{figure*} \subsection{Multi-platform Imaging Setup} Existing RGBT tracking datasets are based on a single imaging platform, which greatly limits the diversity of data creation.
For example, all frames in RGBT234~\cite{Li2018pr} are captured by a turnable platform fixed on a monitor rod. Such a platform yields data with the same resolution and frame rate and a limited range of scenes, which greatly restricts the diversity of the dataset. To handle this problem, we adopt multiple types of imaging platforms. On one hand, we design a turnable platform like the setup in RGBT234 to simulate the imaging scenarios of visual surveillance and intelligent transportation. On the other hand, we design a hand-held platform to enable imaging flexibility in a large range of scenes and conditions. \par \subsection{Annotation} A tracking dataset is desired to have high-quality, dense bounding box annotations, which are essential for training a robust tracker and ensuring the fairness of performance evaluation. For this purpose, we define a deterministic annotation strategy, and each frame in LasHeR is annotated manually. Given a tracking sequence for a specific tracking target, for each frame in which the target is in the view of the camera, we annotate the target with a minimum bounding box covering the entire target. Otherwise, for the case where the target moves out of the view, we set the width and height of the bounding box to 0. Thanks to the highly accurate alignment of the two modalities in LasHeR, we select the modality with the best visibility to generate bounding boxes so as to guarantee high-quality annotations. \begin{table}[t]\normalsize \caption{List and description of 19 attributes in LasHeR. Compared with existing RGBT datasets, LasHeR introduces seven new ones indicated by italic fonts. } \renewcommand\arraystretch{1.1} \begin{tabular}{p{33 pt}p{200 pt}} \toprule \textbf{Attribute}& \textbf{Definition}\\ \hline \textbf{NO}& No Occlusion - the target is not occluded.\\ \textbf{PO}& Partial Occlusion - the target object is partially occluded.\\ \textbf{TO}& Total Occlusion - the target object is totally occluded.
\\ \textbf{HO}& \emph{Hyaline Occlusion} - the target is occluded by a hyaline object.\\ \textbf{OV}& \emph{Out-of-View} - the target leaves the camera field of view.\\ \textbf{LI}& Low Illumination - the illumination in the target region is low.\\ \textbf{HI}& \emph{High Illumination} - the illumination in the target region is too strong to identify the target. \\ \textbf{AIV}& \emph{Abrupt Illumination Variation} - the illumination of the target changes significantly. \\ \textbf{LR}& Low Resolution - the resolution in the target region is low.\\ \textbf{DEF}& Deformation - non-rigid object deformation.\\ \textbf{BC}& Background Clutter - the background around the target object is cluttered. \\ \textbf{SA}& \emph{Similar Appearance} - there are objects of similar appearance or shape near the target. \\ \textbf{TC}& Thermal Crossover - the target has a similar temperature to other objects or background surroundings. \\ \textbf{MB}& Motion Blur - the target object motion results in blurred image content.\\ \textbf{CM}& Camera Moving - the target object is captured by a moving camera.\\ \textbf{FL}& \emph{Frame Lost} - some thermal or visible frames are lost.\\ \textbf{FM}& Fast Motion - the motion of the ground truth between two adjacent frames is larger than 20 pixels.\\ \textbf{SV}& Scale Variation - the ratio of the first bounding box and the current bounding box is outside the range [0.5,2].\\ \textbf{ARC}& \emph{Aspect Ratio Change} - the ratio of the bounding box aspect ratio is outside the range [0.5,2]. \\ \bottomrule \end{tabular} \label{tb::attribute} \end{table} \begin{figure*}[htbp] \centerline{\includegraphics[width=\linewidth]{figure/Attribute.eps}} \caption{Distribution of attributes on LasHeR. (a) Distribution on the entire LasHeR.
(b) Distribution of common attributes on RGBT234 and LasHeR.} \label{fig:attribute_distribution} \vspace{-0.3cm} \end{figure*} \subsection{Data Statistics} Based on the multi-platform imaging setup, we are able to collect video pairs from a broad range of object categories, camera viewpoints, scene complexities and environmental factors across seasons, weather conditions, day and night. To clarify the advantages of LasHeR, we analyze its diversity from the following aspects. {\bf Scene type}. The complexity and type of a scene are key factors in improving the diversity of a dataset. To this end, we capture videos in more than 20 scenes with different characteristics in both indoor and outdoor environments. Indoor scenes include teaching building, parking garage, corridor, canteen, etc., and outdoor scenes include basketball court, stadium, pool, road, etc. These scenes have different complexities and thus pose difficulties for visual trackers. Fig.~\ref{fig::scene-class} shows some example scenes in LasHeR. {\bf Object category}. RGBT tracking aims to locate arbitrary objects in videos, so the number of object categories should be as large as possible. Existing RGBT tracking datasets~\cite{Li16tip,Li2018pr} contain no more than 22 categories, as shown in Table~\ref{tb:comparision_benchmarks}. To improve the diversity, our LasHeR contains 32 types of target objects, including a rich variety of rigid and non-rigid objects. Fig.~\ref{fig:object_distribution} shows the distributions of target categories on LasHeR. We can observe that the distribution of object classes in LasHeR conforms to a long-tail distribution, which is itself a popular research topic in practical applications. Since the long-tail distribution of object classes does exist in real-world tracking scenarios, we believe such data imbalance brings meaningful challenges and encourages the design and development of more practical and extensible RGBT trackers.
{\bf Challenge}. In data creation, we consider more real-world challenges than existing tracking datasets~\cite{Li16tip,Li2018pr}. The labeled challenges in existing RGBT tracking datasets include no occlusion (NO), partial occlusion (PO), total occlusion (TO), low illumination (LI), low resolution (LR), deformation (DEF), background clutter (BC), motion blur (MB), thermal crossover (TC), camera moving (CM), fast motion (FM) and scale variation (SV). In addition to them, we take more factors into account in data creation and also label them at the sequence level for challenge-based performance evaluation of different tracking algorithms. The newly labeled challenges include hyaline occlusion (HO), high illumination (HI), abrupt illumination variation (AIV), similar appearance (SA), aspect ratio change (ARC), out-of-view (OV) and frame lost (FL). The detailed definitions of all challenge factors are presented in Table~\ref{tb::attribute}, and Fig.~\ref{fig:attribute_distribution} shows the distribution of the above challenges on the LasHeR sequences together with the comparison with RGBT234 on the common attributes. It is easy to see that the proposed LasHeR is a challenge-intensive dataset and has a great advantage over existing RGBT tracking datasets in terms of scale. Herein, we explain two uncommon challenges for clarity. First, thermal infrared imaging has a very weak penetration ability through hyaline objects. Therefore, when the target object is occluded by a hyaline object, it becomes invisible in the thermal modality. We call this challenge hyaline occlusion. Second, we observe that existing thermal cameras often have built-in non-uniformity correction schemes, the purpose of which is to ensure the accuracy of temperature information. Within a few seconds after the non-uniformity correction occurs, the thermal camera stops imaging and some frames are thus lost. We call this challenge frame lost.
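Several of the motion-related attributes in Table~\ref{tb::attribute} are defined by simple geometric rules on the ground-truth boxes (FM: inter-frame motion above 20 pixels; SV and ARC: ratios outside [0.5, 2]), so their labels can be checked automatically. The following is a minimal sketch of such checks, not part of the released toolkit; the helper names are ours, and interpreting the SV ratio as an area ratio and the FM motion as center displacement are our assumptions:

```python
import math

def center(box):
    # box = (x, y, w, h) with (x, y) the top-left corner
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)

def fast_motion(prev_box, cur_box, thresh=20.0):
    # FM: ground-truth motion between two adjacent frames is larger than 20 pixels
    (px, py), (cx, cy) = center(prev_box), center(cur_box)
    return math.hypot(cx - px, cy - py) > thresh

def scale_variation(first_box, cur_box, lo=0.5, hi=2.0):
    # SV: ratio of the current box to the first box (read here as an area
    # ratio, an assumption) is outside [0.5, 2]
    r = (cur_box[2] * cur_box[3]) / float(first_box[2] * first_box[3])
    return not (lo <= r <= hi)

def aspect_ratio_change(first_box, cur_box, lo=0.5, hi=2.0):
    # ARC: ratio of the current aspect ratio to that of the first box
    # is outside [0.5, 2]
    a0 = first_box[2] / float(first_box[3])
    a1 = cur_box[2] / float(cur_box[3])
    return not (lo <= a1 / a0 <= hi)
```

Per-frame flags produced this way can then be aggregated into the sequence-level labels used for challenge-based evaluation.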
\subsection{Unaligned LasHeR} The alignment of RGB and thermal images is labor- and time-consuming, which also restricts the application of RGBT tracking. Some efforts are devoted to developing RGB and thermal imaging systems with optical registration~\cite{Gundogdu_2015_CVPR_Workshops,Zhang2019WeaklyAC,Arar2020UnsupervisedMI}, but such systems are usually very expensive, and alignment errors are sometimes inevitably introduced~\cite{Zhang2019WeaklyAC}. Therefore, the task of alignment-free RGBT tracking is essential. We define it as follows. Given the initial state of the target in unaligned RGBT videos, alignment-free RGBT tracking is to estimate the states of the target in subsequent frames using the information of RGB and thermal data collaboratively. We believe that the research and development of alignment-free RGBT tracking will be beneficial to the practicality of RGBT tracking. To this end, we also release the unaligned version of LasHeR. Since RGB and thermal frames are unaligned, their ground truths are not shared. Note that re-annotating them is not only labor- and time-consuming but also introduces more annotation bias due to the existence of bad imaging quality. Therefore, we design a scheme to automatically generate high-quality ground truths for the unaligned LasHeR by using the annotated ground truths in LasHeR and the transformations between the two modalities. Fig.~\ref{fig:GTalignment} shows the process of generating the ground truth (GT) of the unaligned LasHeR from the annotated GT of LasHeR. Specifically, we use a SIFT-based matching algorithm to obtain a set of matched points in the RGB modality. Note that in manual annotation we adjust the RGB image while keeping the thermal image fixed, so the GTs of the unaligned and aligned thermal images are the same. Then, we compute the homography transformation using the matched points between the unaligned and aligned RGB images.
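Once the homography between the aligned and unaligned RGB frames is available (e.g., estimated from the SIFT matches with OpenCV's \texttt{findHomography}; tooling choices here are our assumptions), transferring an annotated GT box reduces to warping its four corners and taking their enclosing axis-aligned box. A minimal NumPy sketch of this warping step (the function name is ours):

```python
import numpy as np

def warp_box(H, box):
    """Map an axis-aligned box (x, y, w, h) through a 3x3 homography H
    and return the enclosing axis-aligned box in the target image."""
    x, y, w, h = box
    corners = np.array([[x, y], [x + w, y], [x, y + h], [x + w, y + h]],
                       dtype=float)
    ones = np.ones((4, 1))
    pts = np.hstack([corners, ones]) @ H.T   # to homogeneous coordinates
    pts = pts[:, :2] / pts[:, 2:3]           # perspective divide
    x0, y0 = pts.min(axis=0)
    x1, y1 = pts.max(axis=0)
    return (x0, y0, x1 - x0, y1 - y0)
```

Taking the enclosing axis-aligned box slightly enlarges the region under strong perspective distortion, but since the two RGB views differ only by a small adjustment, the effect is negligible in practice.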
Finally, we transform the GT of the aligned RGB image according to the homography transformation to obtain the GT of the unaligned RGB image. Since the content of the two RGB images is almost the same, the homography transformation is very accurate, which guarantees high-quality GT generation. \begin{figure}[htbp] \centerline{\includegraphics[width=\linewidth]{figure/alignmentc.eps}} \caption{Process of generating the ground truth of an unaligned RGBT pair. Herein, the same colors indicate the same ground truths.} \label{fig:GTalignment} \vspace{-0.3cm} \end{figure} \subsection{Evaluated Trackers} We evaluate 12 RGBT tracking algorithms on LasHeR to provide a comprehensive platform for performance analysis. Deep RGBT trackers include MANet~\cite{Li2019iccvw}, DAPNet~\cite{zhu2019acmmm}, MaCNet~\cite{zhang2020object}, DAFNet~\cite{gao2019deep}, CAT~\cite{Li2020ChallengeAwareRT}, FANet~\cite{zhu2020quality}, MANet++~\cite{Li2020MANet++}, DMCNet~\cite{Li2020DMCNet} and mfDiMP~\cite{zhang2019multi}. RGBT trackers based on handcrafted features include SGT~\cite{Li17acmmm}, CMR~\cite{Li18eccv} and SGT++~\cite{Li2018pr}. We present the details of these RGBT trackers in Table~\ref{tab:trackers_information}. \begin{table}[htbp]\normalsize \caption{Detailed information of evaluated trackers. Representation: HoG - Histogram of Oriented Gradients, Color - Color name features, Deep - Deep models.
Year: publication year.} \label{tab:trackers_information} \linespread{1.5} \renewcommand\arraystretch{1.2} \begin{center} \begin{tabular}{p{85 pt}cc cp{25 pt}<{\centering}} \toprule \multirow{2}{*}{ } & \multicolumn{3}{c}{Representation} & \multicolumn{1}{l}{ }\\ \cline{2-4} &\multicolumn{1}{c}{{HoG}} & \multicolumn{1}{c}{{Color}} &\multicolumn{1}{c}{{Deep}} &\multicolumn{1}{c}{{Year}}\\ \midrule \midrule SGT~\cite{Li17acmmm} &\checkmark &\checkmark& & 2017\\ CMR~\cite{Li18eccv} &\checkmark &\checkmark & & 2018\\ SGT++~\cite{Li2018pr} &\checkmark &\checkmark & & 2018\\ MANet~\cite{Li2019iccvw} & & & \checkmark & 2019 \\ DAPNet~\cite{zhu2019acmmm}& & & \checkmark & 2019 \\ DAFNet~\cite{gao2019deep} & & & \checkmark & 2019 \\ mfDiMP~\cite{zhang2019multi} & & & \checkmark & 2019 \\ MaCNet~\cite{zhang2020object} & & & \checkmark & 2020 \\ CAT~\cite{Li2020ChallengeAwareRT} & & & \checkmark & 2020 \\ FANet~\cite{zhu2020quality} & & & \checkmark & 2020 \\ MANet++~\cite{Li2020MANet++} & & & \checkmark & 2020 \\ DMCNet~\cite{Li2020DMCNet} & & & \checkmark & 2020 \\ \bottomrule \end{tabular} \end{center} \end{table} \subsection{Evaluation Metrics} Precision rate and success rate are two widely used metrics to evaluate tracking algorithms. In our benchmark, we utilize them for tracking evaluation. \begin{itemize} \item {\bf Precision rate (PR)}. The precision rate measures the percentage of frames in which the distance between the predicted position and the ground truth is within a certain threshold. In this work, we set the threshold to 20 pixels to compute the representative PR score. \item {\bf Normalized precision rate (NPR)}. Since the precision metric is easily affected by the image resolution and the size of the bounding box, we further normalize the precision as in~\cite{Mller2018TrackingNetAL} as the second metric. For the detailed calculation of NPR, please refer to~\cite{Mller2018TrackingNetAL}. \item {\bf Success rate (SR)}.
The success rate measures the ratio of successful frames in which the overlap between the predicted bounding box and the ground truth is greater than a certain threshold. In this work, we employ the area under the curve to compute the representative SR score. \end{itemize} \section{Evaluation and Analysis} In this section, we adopt two evaluation protocols for the evaluation of tracking algorithms. First, we use the entire dataset as a testing set, and evaluate 12 RGBT tracking methods on the entire LasHeR. Then, to provide a large-scale set for the training of deep trackers, we split LasHeR into a training subset and a testing subset, and the evaluation of trackers is performed on the testing subset. Finally, retraining experiments are conducted using the training subset of LasHeR. \begin{figure*}[htbp] \centerline{\includegraphics[width=\linewidth]{figure/PlotLasherBig.eps}} \caption{Evaluation results on the entire LasHeR using precision, normalized precision and success plots, where the representative scores are presented in the legend.} \label{fig:LasHeR_evaluation} \end{figure*} \subsection{Overall Evaluation Results} We first present the evaluation results on the entire LasHeR dataset, where all 1224 sequence pairs are used as the testing set for large-scale evaluations. We summarize the overall evaluation results of these trackers with the precision, normalized precision and success plots, as shown in Fig.~\ref{fig:LasHeR_evaluation}. From the results, we can see that DMCNet achieves the best precision score of 0.557 and normalized precision score of 0.496, while mfDiMP, which adopts the IoU-Net based architecture from ATOM~\cite{Danelljan2019ATOMAT}, achieves the top success score of 0.399. DMCNet, based on MDNet~\cite{MDNetNam2016LearningMC}, is trained in an online fashion, which leads to a slow tracking speed due to high computational complexity, although the tracker achieves the top precision.
In contrast, mfDiMP, based on DiMP~\cite{DiMPBhat2019LearningDM}, learns the classification model in an offline manner and can thus achieve a fast tracking speed during testing. Compared with the results on existing RGBT tracking datasets, the overall performance of these trackers on the proposed large-scale LasHeR degrades significantly. For example, DMCNet achieves a PR score of 0.839 and an SR score of 0.593 on RGBT234, while the PR and SR scores of mfDiMP on RGBT210 are respectively up to 0.785 and 0.559. Such a significant performance drop further proves the difficulty of the proposed dataset, which is inseparable from the diversity of its challenges and scene categories. As mentioned before, one of the most notable features of LasHeR is that the tracking data captured by mobile platforms are closer to real-world tracking scenarios. The performance regression of these trackers on LasHeR also indicates that trackers trained on existing datasets are insufficient to cope with the challenges of real-world tracking applications, due to the limited diversity in challenges and scenes of existing datasets. In view of the above problems, a high-diversity and challenging dataset is needed to provide ample room for the research and development of RGBT trackers. \subsection{Challenge-based Evaluation Results} To specifically analyze the advantages of trackers from the perspective of different challenges on LasHeR, we evaluate all 12 trackers under 19 challenge attributes. We present the attribute-based PR and SR scores on all 19 challenge attributes in Table~\ref{tab:challenge-based}. As can be observed from Table~\ref{tab:challenge-based}, when challenges such as total occlusion, hyaline occlusion and out-of-view exist in a sequence, trackers are more likely to lose the target, causing low PR and SR scores. When the target disappears, trackers can hardly re-find it, partly because the target is outside the search window when it comes back into view.
To address these difficult but common challenges in RGBT tracking, a potential approach is to use trajectory inference to handle target disappearance. In addition, when the target appears again, trackers are expected to recognize it again, which requires the models to have more robust modeling of target appearance and a long memory of target features. Challenging factors like deformation and aspect ratio change also easily degrade tracking performance, because trackers cannot effectively model target appearance when it changes drastically. An effective way to handle these challenges is to update the model during the tracking process. When the challenges of similar appearance and thermal crossover occur, trackers also easily fail due to the lack of discriminative features to distinguish similar objects. We can alleviate this problem by learning contextual information in appearance modeling. For the seven newly introduced challenges in the proposed LasHeR, we can observe that trackers consistently show low tracking performance, as shown in Table~\ref{tab:challenge-based}. Trackers perform almost worst under the challenges of hyaline occlusion, frame lost and aspect ratio change. The results indicate that we should put an emphasis on the research of the newly introduced challenges in RGBT tracking, which will be beneficial to improving the practicality of RGBT trackers in the future. \subsection{Qualitative Evaluation} In this part, we conduct a qualitative evaluation of nine deep trackers, including MANet~\cite{Li2019iccvw}, DAPNet~\cite{zhu2019acmmm}, MaCNet~\cite{zhang2020object}, DAFNet~\cite{gao2019deep}, CAT~\cite{Li2020ChallengeAwareRT}, FANet~\cite{zhu2020quality}, MANet++~\cite{Li2020MANet++}, DMCNet~\cite{Li2020DMCNet} and mfDiMP~\cite{zhang2019multi}.
We select five representative sequences in LasHeR featuring frequent challenges such as total occlusion, thermal crossover, deformation and similar appearance, and visualize the tracking results of these trackers on them, as shown in Fig.~\ref{fig:QualitativeEva}. From Fig.~\ref{fig:QualitativeEva}, we can observe that most trackers cannot cope with the challenges of aspect ratio change and scale variation. Among them, mfDiMP handles the challenge of scale variation best, which can also be verified in the challenge-based results in Table~\ref{tab:challenge-based}. The excellent performance of mfDiMP in handling target scale benefits from DiMP~\cite{DiMPBhat2019LearningDM}, which solves the problem of updating the model for trackers based on the Siamese framework. However, when challenges like fast motion, similar appearance and occlusion occur, almost all trackers tend to lose the target. In real scenes, challenges such as similar appearance and fast motion often appear at the same time, and in these cases most trackers are prone to lose their targets. To cope with the challenge of fast motion, one can enlarge the search range, but this introduces more background interference, making the challenge of similar appearance more serious. Therefore, how to balance the solutions to different challenges is a topic worth studying. \subsection{Retraining Experiment on LasHeR} We also suggest a second evaluation protocol for RGBT tracking algorithms, under which we split LasHeR into \emph{training} and \emph{testing} subsets according to the target class distribution. The sequences in the \emph{training} set are intended for training trackers, which can then be assessed on the \emph{testing} set. We evaluate 12 RGBT trackers on the \emph{testing} set to provide baselines for comparison.
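A class-balanced split of the kind described above can be sketched as follows; the helper and its arguments are hypothetical rather than the actual LasHeR tooling, and assume one target class label per sequence.

```python
import random
from collections import defaultdict

def split_by_class(seq_classes, train_ratio=0.8, seed=0):
    """Split sequences into training/testing lists so that each target
    class appears in both subsets in roughly the same proportion.

    seq_classes: dict mapping sequence name -> target class label.
    """
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for seq, cls in seq_classes.items():
        by_class[cls].append(seq)
    train, test = [], []
    for cls in sorted(by_class):           # deterministic class order
        seqs = sorted(by_class[cls])
        rng.shuffle(seqs)                  # reproducible within-class shuffle
        k = max(1, round(train_ratio * len(seqs)))
        train += seqs[:k]
        test += seqs[k:]
    return train, test
```

Stratifying by class, rather than sampling sequences uniformly, keeps rare target categories represented in both subsets, so the \emph{testing} set remains a fair probe of generalization.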
As before, each tracker is evaluated as-is, without any modification or retraining. The performance is reported in Fig.~\ref{fig:LasHeRTestingSet_evaluation} with precision, normalized precision and success plots. Results consistent with those under the first protocol can be observed. On the LasHeR \emph{testing} subset, the performance of all trackers drops significantly compared to existing datasets. DMCNet achieves the top performance, with a precision score of 0.490, a normalized precision score of 0.432 and a success score of 0.356. In addition to evaluating the performance of RGBT trackers on the \emph{testing} set, we conduct a retraining experiment by retraining MANet and mfDiMP on the \emph{training} set to demonstrate how deep RGBT trackers can be improved using a large-scale training set. The tracker MANet designs three kinds of adapters within the network based on MDNet to jointly perform modality-shared, modality-specific and instance-aware feature learning. In contrast, the deep RGBT tracker mfDiMP, which takes the RGB tracker DiMP as its baseline and is trained offline, uses a Siamese structure to extract image features. Table~\ref{tab:retraining} reports the results of MANet and mfDiMP on the two \emph{testing} sets, comparing the original trackers, trained respectively on GTOT and on a larger synthetic RGBT tracking dataset (as well as mfDiMP trained on GTOT), with the same two deep trackers trained on the proposed LasHeR \emph{training} set. The consistent performance gains prove that a large-scale training set is of vital importance for deep trackers. It is worth noting that the retrained mfDiMP improves more on the LasHeR \emph{testing} set than on RGBT234, and its final PR and SR scores both exceed those of DMCNet, the best tracker without retraining.
Compared with the tracker trained on the large-scale synthetic dataset, the improvement in PR and SR scores proves the necessity of a large-scale real, rather than synthetic, RGBT tracking dataset. \begin{table*}[htbp]\normalsize \caption{Challenge-based precision and success scores (PR/SR) of 12 trackers on LasHeR, including MANet~\cite{Li2019iccvw}, DAPNet~\cite{zhu2019acmmm}, MaCNet~\cite{zhang2020object}, DAFNet~\cite{gao2019deep}, FANet~\cite{zhu2020quality}, CAT~\cite{Li2020ChallengeAwareRT}, MANet++~\cite{Li2020MANet++}, DMCNet~\cite{Li2020DMCNet}, mfDiMP~\cite{zhang2019multi}, SGT~\cite{Li17acmmm}, CMR~\cite{Li18eccv} and SGT++~\cite{Li2018pr}. The last row shows the speed of these trackers. The \textcolor[rgb]{1,0,0}{red}, \textcolor[rgb]{0,0,1}{blue} and \textcolor[rgb]{0,1,0}{green} fonts represent the top three values respectively.} \label{tab:challenge-based} \renewcommand\arraystretch{1.1} \begin{center} \setlength{\tabcolsep}{4.7mm}{ \begin{tabular}{c ccccccccc ccc} \toprule &MANet &DAPNet &MaCNet &DAFNet &FANet &CAT \\ \midrule NO &0.674/0.472&0.621/0.439 &\textcolor[rgb]{0,0,1}{0.692}/\textcolor[rgb]{0,0,1}{0.492}&0.646/0.443 &0.647/0.446 &0.656/0.454 \\ PO &\textcolor[rgb]{0,1,0}{0.509}/\textcolor[rgb]{0,1,0}{0.369} &0.454/0.329 &\textcolor[rgb]{0,0,1}{0.517}/\textcolor[rgb]{0,0,1}{0.373}&0.486/0.334 &0.496/0.342 &0.491/0.339 \\ TO &\textcolor[rgb]{0,1,0}{0.410}/\textcolor[rgb]{0,1,0}{0.294} &0.350/0.253 &\textcolor[rgb]{1,0,0}{0.428}/\textcolor[rgb]{0,0,1}{0.307} &0.384/0.265 &0.384/0.268 &0.385/0.265 \\ HO &\textcolor[rgb]{0,0,1}{0.312}/\textcolor[rgb]{0,0,1}{0.314} &0.234/0.261&\textcolor[rgb]{1,0,0}{0.321}/\textcolor[rgb]{1,0,0}{0.328} &0.210/0.228 &0.223/0.254 &0.239/0.250 \\ OV &0.406/0.281 &\textcolor[rgb]{0,1,0}{0.415}/0.274 &\textcolor[rgb]{0,0,1}{0.453}/\textcolor[rgb]{1,0,0}{0.319} &0.386/0.238 &0.357/0.230 &0.340/0.203 \\ LI &\textcolor[rgb]{0,1,0}{0.450}/\textcolor[rgb]{0,0,1}{0.333} &0.408/0.302
&\textcolor[rgb]{0,0,1}{0.452}/\textcolor[rgb]{0,1,0}{0.329} &0.432/0.300 &0.443/0.308 &0.425/0.297\\ HI &\textcolor[rgb]{0,1,0}{0.589}/\textcolor[rgb]{0,0,1}{0.414} &0.530/0.373 &\textcolor[rgb]{0,0,1}{0.591}/\textcolor[rgb]{0,0,1}{0.414} &0.508/0.336 &0.558/0.379 &0.543/0.368\\ AIV &\textcolor[rgb]{0,0,1}{0.454}/\textcolor[rgb]{0,0,1}{0.345} &0.395/0.288 &0.440/\textcolor[rgb]{0,1,0}{0.328} &0.433/0.291 &\textcolor[rgb]{0,1,0}{0.451}/0.320 &0.449/0.310\\ LR &0.532/\textcolor[rgb]{0,1,0}{0.331} &0.453/0.285 &\textcolor[rgb]{0,1,0}{0.532}/\textcolor[rgb]{0,0,1}{0.333} &0.512/0.303 &\textcolor[rgb]{0,0,1}{0.534}/0.320 &0.513/0.305 \\ DEF &0.425/0.346 &0.418/0.330 &\textcolor[rgb]{0,0,1}{0.448}/\textcolor[rgb]{0,0,1}{0.362} &0.435/0.327 &0.403/0.314 &0.420/0.323\\ BC &0.454/\textcolor[rgb]{0,1,0}{0.342} &0.416/0.308 &\textcolor[rgb]{0,1,0}{0.463}/\textcolor[rgb]{0,0,1}{0.344} &0.439/0.313 &\textcolor[rgb]{0,0,1}{0.466}/0.329 &0.443/0.317\\ SA &0.479/\textcolor[rgb]{0,1,0}{0.345} &0.436/0.313 &\textcolor[rgb]{0,0,1}{0.495}/\textcolor[rgb]{0,0,1}{0.356} &0.478/0.324 &0.478/0.330 &0.469/0.322 \\ TC &\textcolor[rgb]{0,1,0}{0.468}/\textcolor[rgb]{0,1,0}{0.332} &0.417/0.297 &\textcolor[rgb]{0,0,1}{0.479}/\textcolor[rgb]{0,0,1}{0.340} &0.448/0.303 &0.454/0.311 &0.450/0.307 \\ MB &\textcolor[rgb]{0,1,0}{0.504}/\textcolor[rgb]{0,1,0}{0.347} &0.455/0.313 &\textcolor[rgb]{0,0,1}{0.521}/\textcolor[rgb]{1,0,0}{0.362} &0.482/0.311 &0.488/0.319 &0.489/0.317 \\ CM &\textcolor[rgb]{0,1,0}{0.508}/0.369 &0.450/0.324 &\textcolor[rgb]{0,0,1}{0.526}/\textcolor[rgb]{0,0,1}{0.379} &0.491/0.341 &0.495/0.345 &0.496/0.343 \\ FL &\textcolor[rgb]{0,0,1}{0.371}/\textcolor[rgb]{0,0,1}{0.252} &0.325/0.222 &0.332/0.224 &0.333/0.216 &0.319/0.189 &\textcolor[rgb]{0,1,0}{0.351}/0.203\\ FM &0.489/0.362 &0.429/0.317 &\textcolor[rgb]{0,0,1}{0.498}/\textcolor[rgb]{0,1,0}{0.366} &0.466/0.325 &0.472/0.334 &0.469/0.331 \\ SV &\textcolor[rgb]{0,1,0}{0.534}/0.379 &0.475/0.338 
&\textcolor[rgb]{0,0,1}{0.541}/\textcolor[rgb]{0,1,0}{0.386} &0.504/0.339 &0.514/0.350 &0.509/0.347 \\ ARC &0.400/0.304 &0.363/0.278 &\textcolor[rgb]{0,1,0}{0.410}/\textcolor[rgb]{0,1,0}{0.313} &0.379/0.270 &0.379/0.275 &0.390/0.280 \\ \hline ALL &\textcolor[rgb]{0,1,0}{0.539}/0.385 &0.483/0.346 &\textcolor[rgb]{0,0,1}{0.547}/\textcolor[rgb]{0,1,0}{0.392} &0.514/0.351 &0.524/0.360 &0.522/0.359 \\ \hline Speed [fps] &0.904 &1.475 &0.891 &\textcolor[rgb]{0,0,1}{22.208} &\textcolor[rgb]{0,1,0}{17.476} &8.938\\ \midrule \midrule &MANet++ &DMCNet &mfDiMP &SGT &CMR &SGT++\\ \midrule NO &0.653/0.449 &\textcolor[rgb]{0,1,0}{0.681}/\textcolor[rgb]{0,1,0}{0.475} &\textcolor[rgb]{1,0,0}{0.738}/\textcolor[rgb]{1,0,0}{0.572} &0.499/0.331 &0.574/0.401&0.557/0.358 \\ PO &0.498/0.343&\textcolor[rgb]{1,0,0}{0.527}/\textcolor[rgb]{1,0,0}{0.380} &0.474/0.362 &0.392/0.272 &0.407/0.292 &0.416/0.283\\ TO &0.382/0.265 &\textcolor[rgb]{0,0,1}{0.426}/\textcolor[rgb]{1,0,0}{0.309} &0.341/0.261 &0.302/0.212 &0.313/0.223 &0.320/0.220\\ HO &0.252/0.253 &\textcolor[rgb]{0,1,0}{0.296}/\textcolor[rgb]{0,1,0}{0.298} &0.202/0.247 &0.217/0.227 &0.210/0.220 &0.229/0.247\\ OV &0.392/0.235 &\textcolor[rgb]{1,0,0}{0.469}/\textcolor[rgb]{0,0,1}{0.317} &\textcolor[rgb]{0,0,1}{0.453}/\textcolor[rgb]{0,1,0}{0.316} &0.371/0.246 &0.385/0.254 &0.357/0.235\\ LI &0.441/0.303 &\textcolor[rgb]{1,0,0}{0.481}/\textcolor[rgb]{1,0,0}{0.348} &0.408/0.310&0.344/0.250 &0.344/0.252 &0.369/0.263\\ HI &0.568/0.379 &\textcolor[rgb]{1,0,0}{0.615}/\textcolor[rgb]{1,0,0}{0.428} &0.548/\textcolor[rgb]{0,1,0}{0.408} &0.420/0.275 &0.477/0.328 &0.465/0.290\\ AIV &0.411/0.291 &\textcolor[rgb]{1,0,0}{0.476}/\textcolor[rgb]{1,0,0}{0.370} &0.355/0.293 &0.310/0.218 &0.312/0.242 &0.317/0.219\\ LR &0.527/0.316 &\textcolor[rgb]{1,0,0}{0.560}/\textcolor[rgb]{1,0,0}{0.353} &0.472/0.313 &0.398/0.240 &0.428/0.267 &0.448/0.262 \\ DEF &0.432/0.328 &\textcolor[rgb]{1,0,0}{0.473}/\textcolor[rgb]{1,0,0}{0.373} 
&\textcolor[rgb]{0,1,0}{0.439}/\textcolor[rgb]{0,1,0}{0.358} &0.352/0.282 &0.358/0.278 &0.360/0.284\\ BC &0.456/0.322 &\textcolor[rgb]{1,0,0}{0.467}/\textcolor[rgb]{1,0,0}{0.349} &0.403/0.313 &0.359/0.259 &0.349/0.260 &0.364/0.250\\ SA &\textcolor[rgb]{0,1,0}{0.482}/0.328 &\textcolor[rgb]{1,0,0}{0.511}/\textcolor[rgb]{1,0,0}{0.367} &0.445/0.339 &0.400/0.281 &0.422/0.302 &0.432/0.294\\ TC &0.455/0.309 &\textcolor[rgb]{1,0,0}{0.492}/\textcolor[rgb]{1,0,0}{0.347} &0.436/0.328 &0.364/0.251 &0.385/0.271 &0.393/0.264\\ MB &0.483/0.315 &\textcolor[rgb]{1,0,0}{0.525}/\textcolor[rgb]{1,0,0}{0.362} &0.476/\textcolor[rgb]{0,0,1}{0.351} &0.367/0.250 &0.395/0.272 &0.422/0.272\\ CM &0.493/0.341 &\textcolor[rgb]{1,0,0}{0.541}/\textcolor[rgb]{1,0,0}{0.390} &0.501/\textcolor[rgb]{0,1,0}{0.378} &0.381/0.263 &0.400/0.288 &0.411/0.277\\ FL &0.344/0.202 &\textcolor[rgb]{1,0,0}{0.393}/\textcolor[rgb]{1,0,0}{0.283} &0.292/\textcolor[rgb]{0,1,0}{0.226} &0.327/0.214 &0.285/0.196 &0.329/0.195\\ FM &0.471/0.331 &\textcolor[rgb]{1,0,0}{0.513}/\textcolor[rgb]{0,0,1}{0.376} &\textcolor[rgb]{0,1,0}{0.491}/\textcolor[rgb]{1,0,0}{0.385} &0.363/0.257 &0.380/0.281 &0.388/0.270\\ SV &0.516/0.351 &\textcolor[rgb]{1,0,0}{0.549}/\textcolor[rgb]{0,0,1}{0.389} &0.525/\textcolor[rgb]{1,0,0}{0.403} &0.401/0.269 &0.430/0.304 &0.431/0.283\\ ARC &0.402/0.286 &\textcolor[rgb]{0,0,1}{0.432}/\textcolor[rgb]{0,0,1}{0.325} &\textcolor[rgb]{1,0,0}{0.433}/\textcolor[rgb]{1,0,0}{0.347} &0.301/0.228 &0.307/0.235 &0.316/0.237\\ \hline ALL &0.524/0.359 &\textcolor[rgb]{1,0,0}{0.558}/\textcolor[rgb]{0,0,1}{0.398} &0.523/\textcolor[rgb]{1,0,0}{0.399} &0.410/0.280 &0.437/0.311 &0.442/0.295 \\ \hline Speed [fps] &16.153 &2.244 &\textcolor[rgb]{1,0,0}{38.242} &0.957 &2.811 &1.192\\ \bottomrule \end{tabular}} \end{center} \end{table*} \begin{figure*}[htbp] \centerline{\includegraphics[width=\linewidth]{figure/QualityResGreyThin.eps}} \caption{Qualitative evaluation in five representative sequences: \emph{bluebuscoming},
\emph{leftbasketball}, \emph{dotat43}, \emph{rightshiningmirror}. The color of the bounding box denotes a specific tracker. Every two rows belong to the same sequence: the upper row shows RGB images, and the lower row shows the corresponding infrared images.} \label{fig:QualitativeEva} \vspace{-0.3cm} \end{figure*} \begin{table*}[htbp]\normalsize \caption{Retraining of MANet~\cite{Li2019iccvw} and mfDiMP~\cite{zhang2019multi} on LasHeR.} \label{tab:retraining} \renewcommand\arraystretch{1.1} \begin{center} \setlength{\tabcolsep}{3mm}{ \begin{tabular}{cc cc ccc} \toprule \multicolumn{2}{c}{ } & \multicolumn{2}{c}{MANet~\cite{Li2019iccvw}} & \multicolumn{3}{c}{mfDiMP~\cite{zhang2019multi} }\\ \cline{3-7} \multicolumn{2}{c}{\diagbox{Testing data}{Training data}} & \multicolumn{1}{c}{GTOT~\cite{Li16tip}} &\multicolumn{1}{c}{\begin{tabular}[c]{@{}l@{}}LasHeR \\ \emph{training} set\end{tabular}} &\multicolumn{1}{c}{\begin{tabular}[c]{@{}l@{}}Synthetic \\ dataset\end{tabular}~\cite{zhang2019multi}} &\multicolumn{1}{c}{GTOT~\cite{Li16tip}} &\multicolumn{1}{c}{\begin{tabular}[c]{@{}l@{}}LasHeR \\ \emph{training} set\end{tabular}}\\ \midrule \multirow{2}{*}{RGBT234~\cite{Li2018pr}} & Precision &0.777 &0.810 & 0.785 &0.787 &0.842\\ &Success &0.539 &0.569 &0.559 &0.560 &0.591\\ \midrule \multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}LasHeR \\ \emph{testing} set\end{tabular}} & Precision &0.456 &0.607 &0.447 &0.475 &0.542 \\ &Success &0.329 &0.461 &0.344 &0.310 &0.368\\ \bottomrule \end{tabular}} \end{center} \vspace{-0.3cm} \end{table*} \begin{figure*}[htbp] \centerline{\includegraphics[width=\linewidth]{figure/PlotTestBig.eps}} \caption{Evaluation results on LasHeR \emph{testing} set using precision, normalized precision and success plots, where the representative scores are shown in the legends.} \label{fig:LasHeRTestingSet_evaluation} \vspace{-0.3cm} \end{figure*} Although the number of synthetic RGBT tracking sequences used for training mfDiMP is nearly 9 times that of the LasHeR
\emph{training} set, the tracker's performance declined rather than improved. This suggests that a real RGBT tracking dataset can improve RGBT trackers more effectively. In addition, comparing with the performance of mfDiMP trained on GTOT, we can conclude that a large-scale dataset makes it easier to train the tracker to a better performance, which further proves the necessity of the proposed large-scale dataset. \par As a typical MDNet-based RGBT tracking method, MANet shows good tracking performance even when trained on small-scale datasets. Therefore we also conduct the same retraining experiment on MANet. We can see that its PR and SR scores on RGBT234 are also improved by retraining it on the LasHeR \emph{training} set. The performance improvements of the two types of trackers, based on the MDNet and Siamese frameworks, prove the importance of a large-scale, high-diversity benchmark dataset for the training of deep trackers. \section{Conclusion} In this paper, we propose LasHeR, which is, as far as we know, the largest dataset with high-quality dense bounding box annotations for RGBT tracking to date. By constructing such a large-scale dataset, we hope to alleviate the contradiction between the data hunger of deep RGBT trackers and the inability of existing datasets to meet their training needs. At the same time, we also hope that LasHeR makes up for the limitation that existing datasets are far from real-world tracking applications in terms of challenges and scenarios. Such a large-scale multi-challenge dataset allows trackers to receive a comprehensive evaluation on data close to real-world applications. In addition, we conduct experiments showing that our dataset is challenging and necessary, and that there is still much room for the research and development of RGBT tracking.
In the future, we plan to expand the object categories to further enhance the diversity of the dataset. In addition, we will enrich the annotations with other information, including target masks and semantic descriptions. \bibliographystyle{IEEEtran}
\section{Introduction} There are a number of groups that have been developing dynamical cores for numerical weather prediction (NWP) and climate modelling, based on triangular meshes on the sphere or on the dual meshes composed of hexagons together with twelve pentagons \citep{Ri+2000,Ma+2002,Sa+2008}. These grids are referred to as pseudo-uniform grids since they have edge lengths $h$ that satisfy $c_0\bar{h}<h<c_1\bar{h}$, as $\bar{h}\to 0$, where $\bar{h}$ is the average edge length, for some positive constants $c_0$, $c_1$. The principal reason for adopting these grids is that they provide a direct addressing data structure whilst avoiding the polar singularity of the latitude-longitude grid, which introduces a bottleneck to scaling on massively parallel architectures due to the convergence of meridians. One approach to developing numerical discretisations on triangular or hexagonal grids is to adapt the staggered Arakawa C-grid finite difference method on quadrilaterals \citep{ArLa77} (used in several currently operational NWP models, such as the UK Met Office Unified Model \citep{Da+2005}) since this type of staggering prevents pressure modes (non-constant functions on the pressure grid that have zero numerical gradient). By defining discrete curl and divergence operators which satisfy div curl$=0$, it is possible to construct C-grid discretisations for horizontal wave propagation which have stationary geostrophic modes on the $f$-plane \citep{ThRiSkKl2009}, a necessary condition for accurate representation of geostrophic adjustment processes. These operators can be used to construct energy- and enstrophy-conserving C-grid discretisations for the nonlinear rotating shallow-water equations using the vector invariant form \citep{RiThKlSk2010}. The drawback with using the C-grid finite difference method on triangles or hexagons instead of quadrilaterals is that the ratio of velocity and pressure degrees of freedom (DOF) is altered.
The quadrilateral C-grid has one pressure DOF stored at the centre of each grid cell, and two velocity DOF per grid cell (normal velocity is stored at each of the four edges, which are each shared with the neighbouring cell on the other side of the face)\footnote{Here, and in the rest of the paper, we consider compact domains without boundary such as the sphere and rectangles with double periodic boundary conditions.}. This is considered the ideal ratio, since the velocity then has an equal number of rotational and divergent DOF which are coupled together in the correct way so that there are two inertia-gravity modes (the inward and outward propagating modes) for each Rossby mode. On the other hand, the triangular C-grid has only $3/2$ velocity DOF per grid cell, and the hexagonal C-grid has $3$ velocity DOF per grid cell. This means that the triangular C-grid has four inertia-gravity modes per Rossby mode; the extra spurious inertia-gravity branch has a frequency range that decreases with Rossby deformation radius, leading to ``checkerboard patterns'' in the divergence when the deformation radius is small (as it can be in the ocean, or when there are many vertical layers). The hexagonal C-grid has an equal number of inertia-gravity and Rossby modes; the extra spurious Rossby mode has very low frequencies and propagates eastwards on the $\beta$-plane \citep{Th08}. The effects of these spurious Rossby modes have not been reported in practice but there are concerns amongst the operational NWP community that if spurious modes are supported by the grid, then they might be initialised during the data assimilation process or by physics parameterisations \citep{St_pc2010}. It may also be the case that the spurious modes lead to spurious spread/lack of spread in ensemble forecasts. Careful numerical experiments are required to investigate this concern.
The finite element method provides the opportunity to alter the number of degrees of freedom per triangular element to ameliorate this problem. A number of finite element pairs on triangles have been proposed for geophysical fluid dynamics, mostly in the ocean modelling community \citep{WaCa1998,RoStLi1998,Ro+2005,CoHaPa2009,CoLaReLe2010, RoRoPo2007}. In \citep{RoRo2008}, the lowest order Brezzi-Douglas-Marini element pair \citep{BrDoMa1985}, known as BDM1, was investigated in the context of the discrete shallow-water equations. The velocity space is piecewise linear with continuous normal components, and the pressure space is piecewise constant. The natural data structure for the velocity space stores two normal velocity components on each edge, and hence there are 3 velocity DOF per triangular element and 1 pressure DOF. There are too many velocity DOF and hence there will be too many Rossby modes per inertia-gravity mode, just as for the hexagonal C-grid. The key result of this paper is in showing that discretisations of the linear rotating shallow water equations on the $f$-plane constructed using these spaces on arbitrary meshes satisfy a crucial property, namely that geostrophic modes are exactly steady. This is achieved by making use of the discrete Helmholtz decomposition, within the framework of discrete exterior calculus \citep{ArFaWi2006}. As described in \citep{Ar2002}, existence of such a decomposition requires that the following diagram commutes: \begin{equation} \begin{CD} \label{big commutes} H^1(\Omega) @>\nabla^\perp>> H(\mathrm{div},\Omega) @>\nabla\cdot>> L_2(\Omega) \\ @VV{\Pi^E}V @VV{\Pi^S}V @VV{\Pi^V}V \\ E@>\nabla^\perp>> S @>\nabla\cdot>> V \end{CD} \end{equation} where $\Pi^E$, $\Pi^S$ and $\Pi^V$ are suitably chosen projection operators. The same Helmholtz decomposition can then be used to study the discrete dispersion relations for the numerical discretisation. 
Within this framework, we then conclude that an optimal choice is to have $\dim(S)=2\dim(V)$ which, at least in the periodic plane, satisfies necessary conditions for absence of both spurious inertia-gravity and spurious Rossby waves. The rest of this paper is organised as follows. The general framework of mixed finite element methods applied to the linear rotating shallow-water equations is described in Section \ref{framework}, and the four properties of energy conservation, local mass conservation, absence of spurious pressure modes and steady geostrophic modes are discussed. In Section \ref{examples}, two examples are then introduced that fit into this framework, and numerical results are presented in Section \ref{numerics}. Finally, we give a summary and outlook in Section \ref{summary}. \section{Mixed finite elements for geophysical fluid dynamics} \label{framework} In this section we describe how mixed finite elements can be used to build flexible discretisations on pseudo-uniform grids. We concentrate on the rotating shallow-water equations which are regarded in the numerical weather prediction community as a simplified model that contains many of the issues arising in the horizontal discretisation for dynamical cores. Since in this paper we are concerned with wave propagation properties, we restrict attention to the linearised equations on the $f$-plane, $\beta$-plane or the sphere. First, we introduce the mixed finite element formulation applied to the linear rotating shallow-water equations, then we discuss various properties of the formulation that are a requirement for numerical weather prediction applications, namely global energy and local mass conservation, absence of spurious pressure modes and steady geostrophic states. These properties all rely on exact sequence properties, \emph{i.e.} div-curl relations, as described in \citep{ArFaWi2006}.
\subsection{Spatial discretisation of the linear rotating shallow-water equations} In this paper we consider the discretisation of the linear rotating shallow-water equations on a two dimensional surface $\Omega$ that is embedded in three dimensions (which we restrict to be compact with no boundaries, \emph{e.g.} the sphere or double periodic $x-y$ plane): \begin{equation} \label{swe} \MM{u}_t + f\MM{u}^\perp + c^2\nabla\eta = 0, \quad \eta_t + \nabla\cdot\MM{u} = 0, \quad \MM{u}\cdot\MM{n}=0 \quad \mathrm{on}\quad \partial\Omega, \end{equation} where $\MM{u}=(u,v)$ is the horizontal velocity, $\MM{u}^\perp=\MM{k}\times\MM{u}$, $f$ is the Coriolis parameter, $c^2=gH$, $g$ is the gravitational acceleration, $H$ is the mean layer thickness, $h=H(1+\eta)$ is the layer thickness, $\MM{k}$ is the normal to the surface $\Omega$, and $\nabla$ and $\nabla\cdot$ are appropriate invariant gradient and divergence operators defined on the surface. We form the finite element approximation by multiplying by time-independent test functions $\MM{w}$ and $\phi$, integrating over the domain, integrating the pressure gradient term $c^2\nabla\eta$ by parts in the momentum equation, and finally restricting the velocity trial and test functions $\MM{u}$ and $\MM{w}$ to a finite element subspace $S\subset H(\divv)$ (where $H(\divv)$ is the space of square integrable velocity fields whose divergence is also square integrable), and the elevation trial and test functions $\eta$ and $\alpha$ to the finite element subspace $V\subset L^2$ (where $L^2$ is the space of square integrable functions): \begin{eqnarray} \label{fe u eqn} \dd{}{t}\int_{\Omega}\MM{w}^h\cdot\MM{u}^h\diff{V} + \int_{\Omega}f\MM{w}^h\cdot\left(\MM{u}^h\right)^\perp\diff{V} - c^2\int_{\Omega}\nabla\cdot\MM{w}^h\eta^h\diff{V} &=& 0, \quad \forall\MM{w}^h\in S, \\ \dd{}{t}\int_{\Omega}\alpha^h\eta^h\diff{V} + \int_{\Omega}\alpha^h\nabla\cdot\MM{u}^h\diff{V}&=&0, \quad \forall \alpha^h \in V. 
\label{fe eta eqn} \end{eqnarray} After discretisation in time, these equations are solved in practice by introducing basis expansions for $\MM{w}^h$, $\MM{u}^h$, $\eta^h$, and $\alpha^h$ and solving the resulting matrix-vector systems for the basis coefficients. In this framework we restrict the choice of finite element spaces $S$ and $V$ so that \[ \MM{u}^h \in S \quad \implies \nabla\cdot\MM{u}^h \in V. \] The divergence should map from $S$ onto $V$, so that for all functions $\phi^h\in V$ there exists a velocity field $\MM{u}^h\in S$ with $\nabla\cdot\MM{u}^h=\phi^h$. Such spaces are known as ``div-conforming''. Furthermore we require that there exists a ``streamfunction'' space $E\subset H^1$ such that \[ \psi^h \in E \quad \implies \MM{k}\times\nabla\psi^h \in S, \] where the $\MM{k}\times\nabla$ operator (the curl, which we shall write as $\nabla^\perp$) maps onto the kernel of $\nabla\cdot$ in $S$. A consequence of these properties is that functions in $E$ are continuous, vector fields in $S$ only have continuous normal components and functions in $V$ are discontinuous. \subsection{Energy conservation} Global energy conservation for the linearised equations is a requirement of numerical weather prediction models for various reasons, in particular because it helps to prevent numerical sources of unbalanced fast waves. It is also a precursor to an energy-conserving discretisation of the nonlinear equations using the vector-invariant formulation. For the mixed finite element method, global energy conservation is an immediate consequence of the Galerkin finite element formulation. The conserved energy of equations \eqref{swe} is \[ H = \frac{1}{2}\int_{\Omega} |\MM{u}|^2 + c^2 \eta^2 \diff{V}. \] Substituting the solutions $\MM{u}^h$ and $\eta^h$ to equations (\ref{fe u eqn}-\ref{fe eta eqn}) and taking the time derivative gives \[ \dd{}{t}H = \int_{\Omega}\MM{u}^h\cdot\dot{\MM{u}}^h + c^2\eta^h\dot{\eta}^h\diff{V}.
\] Choosing $\MM{w}^h=\MM{u}^h$ and $\alpha^h=\eta^h$ in equations (\ref{fe u eqn}-\ref{fe eta eqn}) then gives \begin{eqnarray*} \dd{}{t}H &=& \int_{\Omega}\MM{u}^h \cdot\dot{\MM{u}}^h + c^2\eta^h\dot{\eta}^h\diff{V} \\ & = & \int_{\Omega}-f\underbrace{\MM{u}^h \cdot\left({\MM{u}}^h\right)^\perp}_{=0} + \underbrace{c^2\nabla\cdot\MM{u}^h\eta^h - c^2\eta^h\nabla\cdot\MM{u}^h}_{=0}\diff{V} = 0. \end{eqnarray*} \subsection{Local mass conservation} Local mass conservation is a requirement for numerical weather prediction models since it prevents spurious sources and sinks of mass. For the nonlinear density equation, this can be achieved using a finite volume or discontinuous Galerkin method. For mixed finite element methods of the type used in this paper applied to the linear equations, consistency and discontinuity of functions in $V$ requires that element indicator functions (\emph{i.e.} functions that are equal to 1 in one element and 0 in the others) are contained in $V$. Selecting the element indicator function for element $e$ as the test function $\alpha^h$ in equation \eqref{fe eta eqn} gives \[ \dd{}{t}\int_e \eta^h\diff{V} + \int_{\partial e} \MM{u}^h\cdot\MM{n}\diff{S} = 0, \] where $\partial e$ is the boundary of element $e$. Since $\MM{u}^h$ has continuous normal components on element boundaries, this means that the flux of $\eta^h$ is continuous and hence $\eta^h$ is locally conserved. \subsection{Absence of spurious pressure modes and stability of discrete Poisson equation} \label{pressure modes} The principal reason for using the staggered C-grid for numerical weather prediction is that the collocated A-grid, in which pressure and both components of velocity are stored at the same grid locations, suffers from a checkerboard pressure mode which has vanishing numerical gradient when the centred difference approximation is used, despite being oscillatory in space.
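The checkerboard mode is easy to exhibit in one dimension. On a periodic grid, the collocated centred difference annihilates the alternating pressure field while the staggered (C-grid) difference does not; the following numpy fragment is a finite-difference illustration of the phenomenon, not the mixed finite element discretisation itself.

```python
import numpy as np

n, dx = 16, 1.0
p = (-1.0) ** np.arange(n)               # checkerboard pressure field

# A-grid: centred difference at the same points -- the gradient vanishes
grad_a = (np.roll(p, -1) - np.roll(p, 1)) / (2 * dx)

# C-grid: difference between neighbouring cells, stored on the edge
grad_c = (np.roll(p, -1) - p) / dx

print(np.max(np.abs(grad_a)))            # 0.0: the mode is invisible
print(np.max(np.abs(grad_c)))            # 2.0: the staggered grid sees it
```

Since the centred stencil skips every other point, the alternating field lies exactly in its kernel; the staggered stencil couples neighbouring values and so cannot miss it.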
This pressure mode rapidly pollutes the numerical solution in the presence of nonlinearity, boundary conditions and forcing, and can be easily excited by physics subgrid parameterisations or initialisation \emph{via} data assimilation from noisy data. In the context of mixed finite element methods applied to the equation set \eqref{swe}, spurious pressure modes relate to the discretised gradient $D\phi^h\in S$ of a function $\phi^h\in V$ defined by \[ \int_{\Omega}\MM{w}^h\cdot D\phi^h\diff{V} = -\int_{\Omega}\nabla\cdot\MM{w}^h\phi^h\diff{V}, \quad \forall \MM{w}^h\in S. \] On uniform grids, spurious pressure modes are functions $\phi^h$ from the pressure space $V$ which have zero discretised gradient $D\phi^h$ even though $\nabla\phi^h$ is non-zero. On unstructured grids or grids with varying edge lengths, spurious pressure modes are functions which have discretised gradient becoming arbitrarily small as the maximum edge length $h_0$ tends to zero, despite their actual gradient staying bounded away from zero. Such functions would prevent the numerical solution of equations \eqref{swe} converging at the optimal rate predicted by approximation theory. We make the following definition of a spurious pressure mode. \begin{definition}[Spurious pressure modes] A mixed finite element space $(S,V)$ is said to be free of spurious pressure modes if there exists $\gamma_2>0$ independent of $h_0$ such that for all $\phi^h\in V$, there exists nonzero $\MM{v}^h\in S$ satisfying \begin{equation} \label{inf sup} \int_{\Omega}\phi^h\nabla\cdot\MM{v}^h\diff{V}\geq \gamma_2\|\phi^h\|_{L_2} \|\MM{v}^h\|_{H(\mathrm{div})}. 
\end{equation} \end{definition} Condition \eqref{inf sup} is one of two sufficient conditions for numerical stability of the mixed finite element discretisation of the Poisson equation $-\nabla^2\phi=f$ given by \begin{eqnarray*} \int_{\Omega}\MM{w}^h\cdot \MM{v}^h\diff{V} &=& -\int_{\Omega}\nabla\cdot\MM{w}^h\phi^h\diff{V}, \quad \forall \MM{w}^h\in S, \\ \int_\Omega \alpha^h \nabla\cdot\MM{v}^h\diff{V} & = & \int_\Omega \alpha^h f^h, \quad \forall \alpha^h \in V. \end{eqnarray*} This discretisation is stable (\emph{i.e.} small changes in the right-hand side lead to small changes in the solution field in the limit as $h_0$ tends to zero) if Condition \eqref{inf sup} holds, together with the condition that there exists $\gamma_1>0$ independent of $h_0$ such that \begin{equation} \label{S1} \int_{\Omega}\MM{v}^h\cdot\MM{v}^h\diff{x} \geq \gamma_1 \|\MM{v}^h\|^2_{H(\mathrm{div})}, \end{equation} for all $\MM{v}^h\in S$ such that $\int\nabla\cdot\MM{v}^h\phi^h\diff{V}=0$ for all $\phi^h\in V$. As reviewed in \citet{Ar2002}, Condition \eqref{inf sup} is satisfied if it is possible to define a bounded projection $\Pi^S:H(\mathrm{div})\to S$ such that the following diagram commutes: \begin{equation} \label{commutes} \begin{CD} H(\mathrm{div},\Omega) @>\nabla\cdot>> L_2(\Omega) \\ @VV{\Pi^S}V @VV{\Pi^V}V \\ S @>\nabla\cdot>> V \end{CD} \end{equation} where $\Pi^V$ is the usual $L_2$ projection operator. This means that taking any square integrable velocity field $\MM{u}$ with square integrable divergence, evaluating the divergence and projecting into $V$ produces the same result as projecting $\MM{u}$ into $S$ using $\Pi^S$ and evaluating the divergence. 
The projection $\Pi^S$ is constructed by applying an $L_2$ projection of normal components on element edges, ensuring that the residual $\MM{u}-\Pi^S\MM{u}$ is $L_2$-orthogonal to gradients of functions from $V$ in each element, and fixing the remaining degrees of freedom so that $\MM{u}-\Pi^S\MM{u}$ is $L_2$-orthogonal to divergence-free functions in each element. We shall explain how this is done for the two examples described in Section \ref{examples}. To check that the diagram \eqref{commutes} commutes, it is sufficient to show that \[ \int_K \alpha^h (\nabla\cdot\MM{u}-\nabla\cdot\Pi^S\MM{u})\diff{V}=0, \quad \forall \alpha^h\in V, \MM{u}\in H(\mathrm{div},K), \] for each element $K$, since this defines the $L_2$ projection $\Pi^V$ into the discontinuous space $V$. This is easily checked using integration by parts: \begin{eqnarray*} \int_K\alpha^h\nabla\cdot\MM{u}\diff{V} & = & - \int_K\nabla\alpha^h\cdot\MM{u}\diff{V} + \int_{\partial K}\alpha^h\MM{u}\cdot\MM{n}\diff{S}, \\ & = & - \int_K\nabla\alpha^h\cdot\Pi^S\MM{u}\diff{V} + \int_{\partial K}\alpha^h\Pi^S\MM{u}\cdot\MM{n}\diff{S} = \int_K\alpha^h\nabla\cdot\Pi^S\MM{u}\diff{V}, \end{eqnarray*} as required. As also reviewed in \citet{Ar2002}, Condition \eqref{S1} is satisfied if vector fields $\MM{v}\in S$ with divergence orthogonal to $V$ are in fact divergence-free. This is satisfied by the types of mixed finite element methods considered in this paper since the divergence maps from $S$ into $V$, and so the projection of $\nabla\cdot\MM{v}^h$ into $V$ is simply the inclusion. Hence, if the divergence is orthogonal to $V$, the divergence must be zero, and so \eqref{S1} is satisfied. \subsection{Discrete Helmholtz decomposition} Proof of the condition that geostrophic modes are steady requires the construction of a discrete Helmholtz decomposition. Since Condition \eqref{inf sup} holds, the discrete gradient operator $D:V\to S$ has no kernel beyond the constant functions.
For any $\psi^h\in E$, the curl $\nabla^\perp$ of $\psi^h$ satisfies \[ \int_\Omega \nabla^\perp\psi^h\cdot D\phi^h\diff{V}= -\int_\Omega \underbrace{\nabla\cdot\nabla^\perp\psi^h}_{=0}\phi^h\diff{V}=0, \] for any $\phi^h\in V$, and hence the curl from $E$ to $S$ and the discrete gradient $D$ from $V$ to $S$ map onto orthogonal subspaces of $S$. This means that every element of $S$ splits uniquely into a curl part, a discrete-gradient part and a harmonic part, defining a discrete Helmholtz decomposition \begin{equation} \label{helmholtz} \MM{u}^h = \nabla^\perp\psi^h + D\phi^h + \MM{h}^h, \quad \MM{u}^h\in S,\, \psi^h\in E,\, \phi^h\in V, \, \MM{h}^h\in H^h, \end{equation} where $H^h\subset S$ is the space of discrete harmonic velocity fields \[ H^h = \left\{\MM{u}^h\in S: \nabla\cdot\MM{u}^h=0, \quad \int_{\Omega} \MM{u}^h\cdot\nabla^\perp\psi^h\diff{V}=0, \, \forall \psi^h\in E\right\}. \] The dimension of $H^h$ is the same as the dimension of the space $H$ of harmonic velocity fields \[ H = \left\{\MM{u} \in H(\divv):\nabla\cdot\MM{u}=0, \qquad \int_{\Omega} \MM{u}\cdot\nabla^\perp\psi\diff{V}=0, \,\forall \psi\in H^1\right\}, \] \emph{i.e.}, velocity fields with vanishing divergence and (weak) curl. (In the periodic plane these harmonic velocity fields are the constant velocity fields, while on the sphere there are none.) However, $H^h \ne H$ in the general case \citep{ArFaWi2006}. The kernel of $\nabla^\perp$ in $E$ is the subspace of constant functions, and stability results (as described in Section \ref{pressure modes}) imply that the kernel of $D$ in $V$ is the subspace of constant functions, and hence we can use Equation \eqref{helmholtz} to obtain a DOF count for $S$: \[ \dim(S)=(\dim(E)-1)+(\dim(V)-1)+\dim(H), \] and hence \[ \dim(E) = \dim(S)-\dim(V)+2-\dim(H). \] For our DOF requirement $\dim(S)=2\dim(V)$, we obtain \[ \dim(E) = \dim(V) + 2 - \dim(H), \] which becomes $\dim(E)=\dim(V)$ for the periodic plane and $\dim(E)=\dim(V)+2$ for the sphere.
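The counting formula can be exercised on a concrete mesh. The following Python sketch evaluates $\dim(E)=\dim(S)-\dim(V)+2-\dim(H)$ on the icosahedron (a triangulation of the sphere), assuming the analogous decomposition holds for each pair; the per-pair DOF counts and the streamfunction-space identifications used in the comments are our assumptions for illustration.

```python
# Icosahedron: a triangulation of the sphere with 12 vertices, 30 edges
# and 20 faces; no harmonic velocity fields on the sphere.
NV, NE, NF = 12, 30, 20
assert NV - NE + NF == 2          # Euler's formula, chi = 2
dimH = 0

# (dim S, dim V, expected dim E) for three triangle element pairs:
pairs = {
    # RT0-P0: one normal DOF per edge, one pressure DOF per face; E = P1
    "RT0-P0":     (NE,              NF,     NV),
    # BDM1-P0: two normal DOFs per edge; E = P2
    "BDM1-P0":    (2 * NE,          NF,     NV + NE),
    # BDFM1-P1DG: two normal DOFs per edge plus three interior DOFs per
    # face, three pressure DOFs per face; E = P2 plus one bubble per face
    "BDFM1-P1DG": (2 * NE + 3 * NF, 3 * NF, NV + NE + NF),
}

results = {}
for name, (dimS, dimV, dimE_expected) in pairs.items():
    dimE = dimS - dimV + 2 - dimH
    results[name] = (dimE, dimE == dimE_expected, dimS == 2 * dimV)
    print(name, results[name])
```

Only the third pair meets the requirement $\dim(S)=2\dim(V)$; for the other two, the formula still returns the dimension of the matching streamfunction space.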
If $\dim(S)>2\dim(V)$, then $\dim(E)>\dim(V)+(2-\dim(H))$ and \emph{vice versa}. This will become important when we examine wave propagation in Section \ref{dispersion relations}. \subsection{Vorticity and divergence} The discrete vorticity associated with the velocity $\MM{u}^h\in S$ is defined as $\xi^h\in E$ such that \begin{equation} \label{vorticity} \int_{\Omega} \gamma^h\xi^h\diff{V} = -\int_{\Omega}\nabla^\perp\gamma^h\cdot\MM{u}^h\diff{V}, \quad \forall\gamma^h\in E. \end{equation} It is possible to obtain $\MM{u}^h\in S$ from the discrete vorticity $\xi^h\in E$ and the divergence $\delta^h=\nabla\cdot\MM{u}^h\in V$ by solving two elliptic problems for the streamfunction $\psi^h$ and velocity potential $\phi^h$. To obtain the streamfunction $\psi^h\in E$, we use the Helmholtz decomposition and rewrite equation \eqref{vorticity} as \[ \int_{\Omega} \gamma^h\xi^h\diff{V} = -\int_{\Omega}\nabla\gamma^h\cdot\nabla\psi^h\diff{V}, \quad \forall\gamma^h\in \left\{\gamma:\gamma\in E, \quad \int_\Omega\gamma\diff{V}=0\right\}, \qquad \int_\Omega \psi^h\diff{V}=0, \] which is the usual finite element discretisation of the Poisson equation for $\psi^h$. To obtain the velocity potential $\phi^h$ requires the solution of the coupled system \begin{eqnarray*} \int_{\Omega}\alpha^h\nabla\cdot D\phi^h\diff{V} & = & \int_{\Omega} \alpha^h\delta^h\diff{V}, \quad \forall\alpha^h\in \left\{ \alpha:\alpha\in V,\quad \int_\Omega\alpha\diff{V}=0\right\}, \\ \int_{\Omega}\MM{w}^h\cdot D\phi^h\diff{V} & = & -\int_{\Omega}\nabla\cdot\MM{w}^h\phi^h\diff{V}, \quad \forall\MM{w}^h\in S, \qquad \int_\Omega\phi^h\diff{V}=0. \end{eqnarray*} This is the mixed finite element approximation to the Poisson equation already discussed in Section \ref{pressure modes}; if Conditions \eqref{inf sup} and \eqref{S1} are satisfied, the coupled system is well-posed.
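Both the orthogonality of the decomposition and the Poisson problem for the streamfunction rest on elementary identities for the perp-gradient. A quick symbolic check (with one fixed sign convention for $\perp$, our choice) confirms that $\nabla^\perp\psi$ is divergence-free and that its scalar curl is the Laplacian of $\psi$:

```python
import sympy as sp

x, y = sp.symbols('x y')
psi = sp.Function('psi')(x, y)

# one perp convention (assumed here): grad^perp psi = (-psi_y, psi_x)
u = (-sp.diff(psi, y), sp.diff(psi, x))

div_u = sp.simplify(sp.diff(u[0], x) + sp.diff(u[1], y))   # identically zero
curl_u = sp.diff(u[1], x) - sp.diff(u[0], y)               # scalar curl
lap_resid = sp.simplify(curl_u - (sp.diff(psi, x, 2) + sp.diff(psi, y, 2)))

print(div_u, lap_resid)   # 0 0
```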
\subsection{Steady geostrophic modes} On the $f$-plane (planar domain with constant $f$), geostrophic balanced states satisfying $f\MM{u}^\perp+c^2\nabla\eta=0$ are steady since $\nabla\cdot\MM{u}=0$. The remaining solutions of the linear rotating shallow-water equations are fast inertia-gravity waves. In the quasi-geostrophic limit (slow, large scale motion), when nonlinear terms and spatially varying $f$ are introduced, these steady states become slowly-evolving balanced states that characterise large-scale weather systems. It is crucial that a discretisation gives rise to steady geostrophic states on the $f$-plane: otherwise, when nonlinear terms and spherical geometry are introduced, balanced states will emit noisy inertia-gravity waves that pollute the numerical solution over timescales much shorter than that required for a weather forecast. To show that mixed finite element methods have steady geostrophic modes, we follow the approach of \citet{ThRiSkKl2009}: namely, we show that vanishing divergence implies steady vorticity, and then check that vanishing divergence and steady vorticity imply a steady velocity. To obtain a geostrophic balanced state corresponding to a given streamfunction $\psi^h$, we initialise $\MM{u}^h$ and $\eta^h$ as follows: \begin{enumerate} \item Set $\MM{u}^h=\nabla^\perp\psi^h$. \item Set $\eta^h$ from the geostrophic balance relation \begin{equation} \label{psi projection} c^2 \int_{\Omega}\alpha^h\eta^h\diff{V} = f\int_\Omega \alpha^h\psi^h\diff{V}, \quad \forall \alpha^h\in V.
\end{equation} \end{enumerate} Substitution in equation \eqref{fe u eqn} then gives \begin{eqnarray*} \dd{}{t}\int_\Omega \MM{w}^h\cdot\MM{u}^h\diff{V} & = & -f\int_\Omega \MM{w}^h\cdot \nabla \psi^h\diff{V} - c^2 \int_\Omega \nabla\cdot \MM{w}^h \eta^h\diff{V}, \\ & = & f\int_\Omega\nabla\cdot\MM{w}^h \psi^h\diff{V} - c^2 \int_\Omega \nabla\cdot \MM{w}^h \eta^h\diff{V}, \\ & = & 0, \end{eqnarray*} having noted that $\nabla\cdot\MM{w}^h\in V$ and so we may choose $\alpha^h=\nabla\cdot\MM{w}^h$ in equation \eqref{psi projection}. To show that $\dot{\eta}^h=0$, first note that $\MM{u}^h=\nabla^\perp\psi^h$ and hence $\nabla\cdot\MM{u}^h=0$. Equation \eqref{fe eta eqn} thus becomes \[ \int_{\Omega} \alpha^h\dot{\eta}^h\diff{V}=0, \quad \forall \alpha^h\in V, \] and hence $\dot{\eta}^h=0$. This means that the geostrophic balanced state is steady. \subsection{Numerical dispersion relations} \label{dispersion relations} In this section we consider the numerical wave propagation properties of this family of finite element discretisations, on the $f$-plane and on the $\beta$-plane in the quasi-geostrophic limit. Dispersion relations are computed by assuming time-harmonic solutions proportional to $e^{-i\omega t}$ (a valid assumption if the equations are invariant under time translations) and studying the resulting eigenvalue problem. For the continuous equations on the periodic plane, the equations are also invariant under spatial translations and so it may be assumed that the eigensolutions take the form $e^{i(\MM{k}\cdot\MM{x}-\omega t)}$ where $\MM{k}$ is restricted so that the periodic boundary conditions are satisfied. Substitution in the equations of motion leads to an algebraic system relating $\MM{k}$ to $\omega$: the dispersion relation. For the linear shallow-water equations this system is most easily obtained by using the Helmholtz decomposition for $\MM{u}$. 
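For a single wavenumber this algebraic system is $3\times 3$, and its eigenvalues exhibit exactly the two families just described: one steady geostrophic mode and an inertia-gravity pair with $\omega^2=f^2+c^2|\MM{k}|^2$. A small numerical sketch (the parameter values and sign conventions of the $(u,v,\eta)$ formulation below are illustrative assumptions):

```python
import numpy as np

# For a single plane wave exp(i(k x + l y)), the linear rotating shallow-water
# equations reduce to d/dt (u, v, eta) = A (u, v, eta).  The eigenvalues of A
# are 0 (the steady geostrophic mode) and -i*omega with omega^2 = f^2 + c^2 K^2
# (the inertia-gravity pair).  Arbitrary illustrative parameter values.
f, c, k, l = 1.0, 2.0, 3.0, -1.0
K2 = k * k + l * l

A = np.array([
    [0.0,      f,       -1j * c**2 * k],   # u_t   =  f v - c^2 d/dx eta
    [-f,       0.0,     -1j * c**2 * l],   # v_t   = -f u - c^2 d/dy eta
    [-1j * k, -1j * l,   0.0],             # eta_t = -(u_x + v_y)
])
freqs = np.sort(np.abs(np.linalg.eigvals(A)))
print(freqs, np.sqrt(f**2 + c**2 * K2))
```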
Numerical dispersion relations for continuous-time spatial discretisations are also computed by assuming time-harmonic solutions, leading to a discrete eigenvalue problem. If a structured mesh is used on the periodic plane with a set of discrete translation symmetries then eigensolutions satisfy the property that translating from one cell to another by $\Delta\MM{x}$ results in the discrete eigensolution changing by a factor of $e^{i(\MM{k}\cdot\Delta\MM{x})}$, where $\MM{k}$ is again chosen so that the periodic boundary conditions are satisfied. This can again lead to a numerical relationship between $\MM{k}$ and $\omega$, obtained for both the $f$-plane, and the $\beta$-plane in the quasi-geostrophic limit, for the hexagonal C-grid in \citet{Th08}, and for the $P1_{DG}-P2$ finite element pair in \citet{CoHa2011}. Here, we discuss the properties of the discrete eigenvalue problem arising from the finite element spaces from the framework of this paper. The discussion makes use of the discrete Helmholtz decomposition. In the $f$-plane case, substitution of the discrete Helmholtz decomposition into equations (\ref{fe u eqn}-\ref{fe eta eqn}) and assuming time-harmonic solutions yields \begin{eqnarray} -i\omega\int_{\Omega}\nabla\gamma^h\cdot\nabla\psi^h\diff{V} + \int_{\Omega}f\nabla\gamma^h\cdot D\phi^h\diff{V} & = & 0, \label{fplane psi eqn} \\ \label{fplane phi eqn} -i\omega\int_{\Omega} D\alpha^h\cdot D\phi^h\diff{V} + \int_{\Omega}fD\alpha^h\cdot \left(\nabla\psi^h +\left(D\phi^h\right)^\perp\right)\diff{V} - c^2\int_{\Omega}\nabla\cdot D\alpha^h\eta^h\diff{V} &=& 0, \\ -i\omega\int_{\Omega}\alpha^h\eta^h\diff{V} + \int_{\Omega}\alpha^h\nabla\cdot D\phi^h\diff{V}&=&0, \label{fplane eta eqn} \end{eqnarray} for all test functions $\alpha^h\in V$, $\gamma^h\in E$. 
Next we define projections $P^E:V\to E$ and $P^V:E\to V$ by \begin{eqnarray*} \int_\Omega \nabla\gamma^h\cdot\nabla \left(P^E\phi^h\right)\diff{V} & = & \int_\Omega \nabla\gamma^h\cdot D\phi^h\diff{V}, \qquad \forall \phi^h \in V, \quad \gamma^h \in E, \\ \int_\Omega D\alpha^h\cdot D \left(P^V\psi^h\right)\diff{V} & = & \int_\Omega D\alpha^h\cdot \nabla\psi^h\diff{V}, \qquad \forall \psi^h \in E, \quad \alpha^h \in V. \end{eqnarray*} These projections are uniquely defined since $P^E$ uses the standard continuous finite element discretisation of the Laplace operator which is solvable by the Lax-Milgram theorem when $E$ is restricted to mean zero functions, and $P^V$ uses the mixed finite element discretisation of the Laplace operator using the spaces $S$ and $V$ which is solvable by the stability conditions \eqref{inf sup} and \eqref{S1} when $V$ is also restricted to mean zero functions. Using these projections, and the fact that the divergence operator maps from $S$ to $V$, equations (\ref{fplane psi eqn}-\ref{fplane eta eqn}) become \begin{eqnarray} -i\omega\psi^h + fP^E\phi^h & = & 0, \\ \nonumber -i\omega\int_{\Omega} D\alpha^h\cdot D\phi^h\diff{V} + f\int_{\Omega}D\alpha^h\cdot D P^V\psi^h\diff{V} & & \\ \qquad\qquad + \int_{\Omega}fD\alpha^h\cdot\left(D\phi^h\right)^\perp\diff{V} - c^2\int_{\Omega}\nabla\cdot D\alpha^h\eta^h\diff{V} &=& 0, \\ -i\omega\eta^h + \nabla\cdot D\phi^h&=&0, \label{eta2phi} \end{eqnarray} and elimination of $\psi^h$ and use of the definition of $D$ gives \begin{eqnarray} \nonumber 0 & = & \omega\left(\left(\omega^2+f^2\right)\int_{\Omega} \alpha^h\eta^h\diff{V} - c^2\int_{\Omega}\nabla\cdot D\alpha^h\eta^h\diff{V}\right) \\ & & \quad + if^2\int_{\Omega}D\alpha^h\cdot D \left(P^VP^E\phi^h-\phi^h\right)\diff{V} - \omega\int_{\Omega}fD\alpha^h\cdot\left(D\phi^h\right)^\perp\diff{V}, \label{f disp reln} \end{eqnarray} where $\phi^h$ is obtained from equation \eqref{eta2phi}.
The first row of equation \eqref{f disp reln} is the discretisation of the continuous eigenvalue problem for the rotating shallow-water equations using the mixed finite element spaces $V$ and $S$. In this case the eigenvalues of this discrete eigenvalue problem converge to the eigenvalues of the continuous problem at the optimal rate as described in \citet{BoBrGa1997}. However, there are two extra terms in the bottom row of equation \eqref{f disp reln}. The second term converges to zero for smooth $\phi^h$, and use might be made of spectral perturbation theory to examine what effect this has on the discrete eigenvalue problem; we have not yet developed a technique to do this. However, the impact of the first term in the second row is more immediately clear, since it involves projecting $\phi^h$ from $V$ to $E$ and back to $V$ again. If $V$ has larger dimension than $E$, which is the case for the lowest order Raviart-Thomas element on triangles, for example, then this double projection will have a kernel, and $(P^VP^E-1)\phi^h$ will not be small. This leads to spurious branches of inertia-gravity waves, \emph{i.e.} branches of solutions of the discrete eigenvalue problem that do not converge to solutions of the continuous eigenvalue problem as $h\to 0$. See \cite{Da2010} for numerical examples illustrating these spurious modes, in particular Figures 2, 3 and 4. Hence, $\dim(V)\leq \dim(E)$ is a necessary condition for the absence of spurious divergent inertia-gravity modes. A similar approach can be taken to studying the $\beta$-plane solutions in the quasi-geostrophic limit.
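The kernel mechanism invoked here is elementary linear algebra: any composition $V\to E\to V$ has rank at most $\dim(E)$, so when $\dim(V)>\dim(E)$ the operator $P^VP^E-1$ is $O(1)$ on a nontrivial subspace regardless of the mesh. A sketch with random matrices standing in for the two projections (the dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
dimV, dimE = 8, 5                       # dim(V) > dim(E)

PE = rng.standard_normal((dimE, dimV))  # stands in for P^E : V -> E
PV = rng.standard_normal((dimV, dimE))  # stands in for P^V : E -> V
comp = PV @ PE                          # V -> V through E

rank = np.linalg.matrix_rank(comp)      # at most dimE, so comp has a kernel
w = np.linalg.svd(comp)[2][-1]          # unit vector in the (numerical) kernel
defect = np.linalg.norm((comp - np.eye(dimV)) @ w)
print(rank, defect)                     # rank 5; defect ~ 1, i.e. O(1)
```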
Substitution of the discrete Helmholtz decomposition into equations (\ref{fe u eqn}-\ref{fe eta eqn}) and assuming time-harmonic solutions yields \begin{eqnarray} -i\omega\int_{\Omega}\nabla\gamma^h\cdot\nabla\psi^h\diff{V} + \int_{\Omega}\left(f_0+\beta y\right)\nabla\gamma^h\cdot D\phi^h\diff{V} & = & 0, \label{betaplane psi eqn} \\ \nonumber -i\omega\int_{\Omega} D\alpha^h\cdot\left(D\phi^h+\nabla^\perp\psi^h\right)\diff{V} + & & \\ \label{betaplane phi eqn} \quad \int_{\Omega}\left(f_0+\beta y\right)D\alpha^h\cdot \left(\nabla\psi^h+\left(D\phi^h\right)^\perp\right)\diff{V} - c^2\int_{\Omega}\nabla\cdot D\alpha^h\eta^h\diff{V} &=& 0, \\ -i\omega\int_{\Omega}\alpha^h\eta^h\diff{V} + \int_{\Omega}\alpha^h\nabla\cdot D\phi^h\diff{V}&=&0. \label{betaplane eta eqn} \end{eqnarray} In the usual quasi-geostrophic limit, the leading order solution is \[ \phi^h_g =0, \quad \int_{\Omega}f_0D\alpha^h\cdot \nabla\psi^h_g\diff{V} +c^2\int_{\Omega}\nabla\cdot D\alpha^h\eta^h_g\diff{V} = 0, \] where $\phi^h_g$, $\psi^h_g$ and $\eta^h_g$ are the leading order terms in the low Rossby number expansion of $\phi^h$, $\psi^h$ and $\eta^h$ respectively. This is the same as the geostrophic steady state formula for the $f$-plane, and we have \[ f_0P^V\psi_g^h = c^2\eta_g^h. \] The next order in the expansion of the equations (we do not make use of the next order in the $\phi^h$ equation) is \begin{eqnarray} \label{ag psi} -i\omega\int_{\Omega}\nabla\gamma^h\cdot\nabla\psi^h_g\diff{V} + \int_{\Omega}f_0\nabla\gamma^h\cdot D\phi^h_{ag}\diff{V} + \int_{\Omega}\beta y\nabla\gamma^h\cdot \nabla^\perp\psi_g^h\diff{V} & = & 0, \\ \label{ag eta} -i\omega\int_{\Omega}\alpha^h\eta^h_g\diff{V} + \int_{\Omega}\alpha^h\nabla\cdot D\phi^h_{ag}\diff{V}&=&0. \end{eqnarray} Again, the embedding property implies that $i\omega\eta^h_g=\nabla\cdot D\phi^h_{ag}$.
Since $\gamma^h$ is continuous and $D\phi^h_{ag}$ has continuous normal components, we may integrate by parts in the second two terms in equation \eqref{ag psi}, to obtain \begin{eqnarray*} 0 & = & -i\omega\int_{\Omega}\nabla\gamma^h\cdot\nabla\psi^h_g\diff{V} - i\omega\int_{\Omega}\frac{f_0^2}{c^2}\gamma^h\psi^h_g\diff{V} - \int_{\Omega}\beta \gamma^h \pp{}{x}\psi_g^h\diff{V} \\ & & \qquad + i\omega\int_{\Omega}\frac{f_0^2}{c^2}\gamma^h\left(1-P^EP^V\right)\psi^h_g\diff{V}. \end{eqnarray*} The first line is the continuous finite element approximation to the Rossby wave eigenvalue problem using the finite element space $E$, which has convergent eigenvalues. The second line is a perturbation involving $\left(1-P^EP^V\right)\psi^h_g$ which will not always be small if $P^EP^V$ has a non-trivial kernel. This will be the case if $\dim(V)<\dim(E)$, as occurs in the lowest order Brezzi-Douglas-Marini (BDM1) element on triangles \citep{BrDoMa1985} which has $P2$ as the streamfunction space, and hence $2\dim(V)=\dim(E)-2+\dim(H)$. If $P^EP^V$ has a non-trivial kernel, this will lead to spurious Rossby wave branches of the numerical dispersion relation. We conclude that $\dim(V)=\dim(E)$ is a necessary condition for avoiding both spurious divergent modes and spurious rotational modes. Note that this is not a sufficient condition since it is still possible for $P^EP^V$ or $P^VP^E$ to have non-trivial kernel even in this case. This condition motivates the selection of examples of mixed finite element spaces given in the next section. \section{Examples} \label{examples} In this section we provide two examples of mixed finite element spaces that are suitable for constructing pseudo-uniform grids on the sphere, and that have the additional property that there are exactly twice as many velocity degrees of freedom as pressure degrees of freedom, which prevents the presence of spurious mode branches.
The first example is the modified Raviart-Thomas element on quadrilaterals, and the second example is the Brezzi-Douglas-Fortin-Marini element on triangles. \subsection{Modified Raviart-Thomas element on quadrilaterals} There have been several efforts at developing numerical weather prediction models based on a cubed sphere grid (see \cite{PuLi2007}, for example) in which a grid on the surface of a cube is projected to a sphere. The drawback in using such a grid is that to obtain a C-grid finite difference method with stationary geostrophic states, the scheme of \cite{ThRiSkKl2009} must be used, which requires the grid to be orthogonal in the sense that lines joining adjacent pressure nodes must cross cell boundaries at right-angles. On the cubed sphere, this condition does not produce a pseudo-uniform grid since elements become clustered near the poles as the resolution is increased. Mixed finite elements provide extra freedom to design numerical schemes since the orthogonality condition is not a requirement; it is replaced by the conditions on finite element spaces specified in Section \ref{framework}. The lowest-order Raviart-Thomas finite element space is the mixed finite element analogue of the C-grid since the pressure space is piecewise constant functions, and the velocity fields are constrained to have constant, continuous normal components on each element edge. This means that one normal component of velocity must be stored on each element edge, just like the C-grid. The velocity fields are constructed on a square $1\times 1$ reference element $\hat{K}$ with coordinates $(\xi_1,\xi_2)$, on which the $\xi_1$-component of velocity $\hat{\MM{u}}$ is obtained by linear interpolation between constant values on the $\xi_1=0$ and $\xi_1=1$ edges, and the $\xi_2$-component is obtained by linear interpolation between constant values on the $\xi_2=0$ and $\xi_2=1$ edges. In these coordinates, the divergence is constant.
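This construction can be verified symbolically; with (our) placeholder values $a,b,c,d$ for the four constant edge normal components, the reference divergence is independent of $(\xi_1,\xi_2)$:

```python
import sympy as sp

xi1, xi2, a, b, c, d = sp.symbols('xi1 xi2 a b c d')
u1 = (1 - xi1) * a + xi1 * b      # normal value a on xi1 = 0, b on xi1 = 1
u2 = (1 - xi2) * c + xi2 * d      # normal value c on xi2 = 0, d on xi2 = 1

div = sp.expand(sp.diff(u1, xi1) + sp.diff(u2, xi2))
print(div)                        # b - a + d - c: constant on the element
```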
In any physical element $K$ in the mesh, we define a coordinate mapping $\MM{g}:\MM{\xi}\mapsto\MM{x}$, and the velocity in $K$ is obtained \emph{via} the Piola transformation \[ \MM{u}(\MM{x}) = \frac{1}{\det\left(\pp{\MM{g}}{\MM{\xi}}\right)} \pp{\MM{g}}{\MM{\xi}} \cdot\hat{\MM{u}}(\MM{\xi}), \] which preserves flux integrals \[ \int_\gamma\hat{\MM{u}}\cdot\MM{n}\diff{S}(\MM{\xi}) = \int_{\MM{g}(\gamma)}\MM{u}\cdot\MM{n}\diff{S}(\MM{x}), \] guaranteeing continuity of normal fluxes. The divergence satisfies \[ \nabla\cdot\MM{u} = \frac{1}{\det\left(\pp{\MM{g}}{\MM{\xi}}\right)} \hat{\nabla}\cdot\hat{\MM{u}}, \] where $\hat{\nabla}\cdot$ is the divergence operator in the local coordinates $\MM{\xi}$. If the coordinate transformation is affine (elements are parallelograms), the determinant of the Jacobian is constant, and so the divergence of the velocity is constant in each element. However, for general quadrilateral elements (required for the cubed sphere), the coordinate transformation is bilinear, with linear determinant of the Jacobian. The solution, proposed by \cite{BoGa2009}, is to modify the basis functions by adding a divergent correction with vanishing normal components on the boundary that makes the divergence constant. The corresponding streamfunction space $E$ is the usual continuous bilinear space on quadrilaterals, often denoted $Q1$, and it can easily be shown that the $\nabla^\perp$ operator maps from $E$ into $S$ in this case. In fact, the Boffi-Gastaldi correction adds a purely divergent component to the velocity field and so the $\nabla^\perp$ embedding property is not affected. The RT0-Q0 finite element space has one pressure degree of freedom per quadrilateral element, and one velocity degree of freedom per edge. Since (for periodic boundary conditions or the sphere) each edge is shared by two elements, this means that there are exactly twice as many velocity degrees of freedom as pressure degrees of freedom.
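For an affine map the Jacobian is constant, and the divergence relation above can be confirmed symbolically; the particular Jacobian, translation and reference field below are arbitrary illustrative choices:

```python
import sympy as sp

x1, x2, xi1, xi2 = sp.symbols('x1 x2 xi1 xi2')

A = sp.Matrix([[2, 1], [0, 3]])                  # constant Jacobian dg/dxi
t = sp.Matrix([1, -1])                           # translation
detA = A.det()

uhat = sp.Matrix([xi1 + 2 * xi2**2, 3 * xi1 * xi2])   # any reference field
divhat = sp.diff(uhat[0], xi1) + sp.diff(uhat[1], xi2)

xi = A.inv() * (sp.Matrix([x1, x2]) - t)         # invert the affine map
piola = (A * uhat) / detA                        # Piola transform in xi
u = piola.subs([(xi1, xi[0]), (xi2, xi[1])])     # expressed in physical coords

divu = sp.diff(u[0], x1) + sp.diff(u[1], x2)
check = sp.simplify(divu - divhat.subs([(xi1, xi[0]), (xi2, xi[1])]) / detA)
print(check)   # 0
```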
This modified Raviart-Thomas finite element space satisfies all the conditions that we require in this paper and hence has potential for use on pseudo-uniform grids for numerical weather prediction. \subsection{Brezzi-Douglas-Fortin-Marini element on triangles} \label{BDFM1} There is an analogous Raviart-Thomas finite element space on triangles which satisfies the required embedding properties. However, these spaces satisfy $2\dim(V)>\dim(S)$ in general. For example, the lowest order finite element space RT0-P0 has one pressure degree of freedom per element, and one velocity degree of freedom per edge, meaning that $3\dim(V)=2\dim(S)$. The BDM1 element on triangles has one pressure degree of freedom per element and two velocity degrees of freedom per edge, meaning that $3\dim(V)=\dim(S)$, so $2\dim(V)<\dim(S)$. However, the little-used lowest order Brezzi-Douglas-Fortin-Marini (BDFM1) element together with $P1_{DG}$ on triangles satisfies $2\dim(V)=\dim(S)$. The BDFM family of elements for quadrilaterals was introduced in \cite{Br+1987}, and an analogous family for triangles was described in \cite{BrFo1991}. On triangles it is infrequently used since the BDM and RT families have fewer degrees of freedom for the same order of convergence (after suitable post-processing). However, these extra degrees of freedom are useful to us here since they mean that $\dim(V)=\dim(E)$. Here we describe the BDFM1 element on triangles as an augmentation of the BDM1 element on triangles, which we recall first. Given a triangle $K$, we define $P_k(K)$ to be the space of $k$-th order polynomials on $K$. We define the following spaces on $K$: \begin{eqnarray*} \mbox{velocity space} &\quad& S(K) = \{P_1(K)\}^2 \\ \mbox{pressure space} &\quad& V(K) = P_0(K).
\end{eqnarray*} For a triangulation $T$ of the domain $\Omega$, we define the $BDM1$ velocity space \[ S = \{\MM{v}\in H(\mathrm{div},\Omega):\MM{v}|_K\in S(K), \quad K\in T\}, \] where $H(\mathrm{div},\Omega)$ is the space of vector fields with square integrable divergence, which requires that $\MM{v}$ has continuous normal component across triangle edges. The pressure space is \[ V = \{\eta:\eta|_K\in V(K)\}, \] with no continuity requirements across edges. A convenient set of local nodal basis functions for $S$ is defined by choosing two node points on each triangle edge, each node located at one of the vertices belonging to that edge: a total of six node points. For example, in the triangle shown in Figure \ref{bbm1}, on edge $e1$ there are two node points, one at vertex $v3$ and one at vertex $v2$. The basis function associated with edge $e1$ and vertex $v3$ is \[ \MM{\phi}_{1,3} = \MM{t}_2\lambda_3, \] where $\MM{t}_2$ is the unit tangent vector to edge $e2$ and where $\{\lambda_i\}_{i=1}^3$ are the barycentric coordinates associated with vertices $v1$, $v2$ and $v3$ respectively. It can easily be checked that $\MM{\phi}_{1,3}$ has normal component equal to 1 at the node point located at vertex $v3$ on edge $e1$, and normal component equal to zero at each of the other node points. The other five basis functions are constructed in a similar manner. \begin{figure} \centerline{\includegraphics[width=8cm]{bbm1}\includegraphics[width=8cm]{shat}} \caption{\label{bbm1}Diagram showing degrees of freedom in (left) BDM1 vector element, (right) augmented BDM1 vector element.} \end{figure} To increase the number of degrees of freedom in each triangle $K$ in the triangulation $T$, we define the local BDFM1 space $\hat{S}(K)$ by \[ \hat{S}(K) = \{\MM{v}\in P_2(K)^2:\MM{v}\cdot\MM{n}=0 \mbox{ on } \partial K\}.
\] Since all of the vectors in $\hat{S}(K)$ vanish on the boundary of $K$, they do not alter the values of the normal components at the boundary, and so there are no additional continuity constraints. The dimension of $P_2(K)^2$ is 12, and the requirement that the normal component (a quadratic function on each edge) vanishes on all three edges imposes 9 independent constraints, which means that $\dim(\hat{S}(K))=3$. A convenient set of local nodal basis functions for $\hat{S}$ is defined by locating nodes that store the tangential component of velocity at the centre of each edge. The tangential component of velocity is permitted to be discontinuous and so a different value of the tangential component will be stored on each side of the edge. The basis function associated with the node at the centre of edge $e1$ is \[ \MM{\phi}_1 = 4\MM{t}_1\lambda_2\lambda_3. \] It can easily be checked that $\MM{\phi}_1$ has vanishing normal component on all edges, tangential component equal to 1 at the centre of edge $e1$ and vanishing tangential component on the other two edges. The other two basis functions are constructed in a similar manner. The augmented velocity space $S$ on the triangulation $T$ is defined as \[ S = \{\MM{v}\in H(\mathrm{div},\Omega):\MM{v}=\MM{v}'+\hat{\MM{v}}, \MM{v}'|_K\in S(K),\quad\hat{\MM{v}}|_K\in\hat{S}(K),\quad K\in T\}. \] The pressure space $V$ is defined as \[ V=\{\eta:\eta|_K\in P_1(K),\quad K\in T\} \] with no continuity requirements across element edges. For this mixed element pair the velocity space $S$ has 6 DOF per element (counting each of the six edge-normal DOFs as half in each of the two elements sharing the edge) and the pressure space $V$ has 3 DOF per element, hence there are twice as many velocity DOF as pressure DOF, just as for the C-grid finite difference method on quadrilaterals. For our augmented velocity space, it is easy to define the projection operator $\Pi^S$. The projection is computed element by element and guarantees the continuity of $\MM{u}\cdot\MM{n}$ across element edges.
The projection on an element $K$ is defined from the following conditions: \begin{eqnarray} \label{BDM1 proj} \int_{e(i)} \gamma^h(\Pi^S\MM{u}-\MM{u})\cdot\MM{n}\diff{S} & = & 0 \quad \forall \gamma^h\in P^1(e(i)), \forall\,\mathrm{edges}\,e(i)\in\partial K, i=1,2,3, \\ \label{Shat divergent} \int_{K}\nabla\gamma^h\cdot(\Pi^S\MM{u}-\MM{u})\diff{V} & =& 0 \quad \forall\gamma^h \in P^1(K), \\ \label{Shat rotational} \int_{K}\nabla^\perp B\cdot(\Pi^S\MM{u}-\MM{u})\diff{V} & =& 0, \end{eqnarray} where $B$ is the cubic ``bubble'' function (as used in the MINI element \citep{ArBrFo1984}). In a triangle $K$, the cubic bubble function $B_K$ is the unique cubic polynomial which takes the value 1 at the barycentre and 0 on all three edges. The streamfunction space $E$ is \[ E = \{\psi\in H^1(\Omega):\psi|_K=\psi'|_K+\alpha B_K, \psi'|_K\in P_2(K),\alpha\in \mathbb{R}\}. \] Equation \eqref{BDM1 proj} comprises the BDM1 projection operator, fixing six degrees of freedom. The components of the extra degrees of freedom $\hat{S}(K)$ are not affected since they all satisfy $\MM{u}\cdot\MM{n}=0$ on $\partial K$. The vector field $\nabla^\perp B$ lies inside $\hat{S}(K)$ since it is quadratic (being the skew gradient of a cubic function, $B$) and has vanishing normal component on $\partial K$ (since $B$ is zero on $\partial K$). If we construct an orthogonal (relative to the $L_2$ inner product) decomposition of $\hat{S}(K)$ into $\nabla^\perp B \oplus \tilde{S}(K)$ then we see that equation \eqref{Shat rotational} only involves the $\nabla^\perp B$ component, and equation \eqref{Shat divergent} only involves the remaining two $\tilde{S}(K)$ components, as \[ \int_K\nabla\gamma^h\cdot\nabla^\perp B\diff{V} = -\int_K\underbrace{\nabla^\perp\cdot\nabla\gamma^h}_{=0}B\diff{V} +\int_{\partial K}\nabla\gamma^h\cdot\MM{n}\underbrace{B}_{=0}\diff{S}, \] because $B$ vanishes on $\partial K$. 
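These constructions can be checked symbolically on a reference triangle; the vertex placement $v1=(0,0)$, $v2=(1,0)$, $v3=(0,1)$, and the resulting edge tangents and normals, are our assumed labelling for the check:

```python
import sympy as sp

x, y = sp.symbols('x y')
l1, l2, l3 = 1 - x - y, x, y               # barycentric coordinates
s2 = sp.sqrt(2)
t1 = sp.Matrix([-1, 1]) / s2               # unit tangent to edge e1 (x + y = 1)
n1 = sp.Matrix([1, 1]) / s2                # outward normal to edge e1

# BDFM1 tangential basis function on edge e1: phi_1 = 4 t_1 l2 l3
phi1 = 4 * l2 * l3 * t1
normal_flux = sp.simplify((phi1.T * n1)[0])                         # 0 everywhere
tang_mid = (phi1.T * t1)[0].subs({x: sp.S(1) / 2, y: sp.S(1) / 2})  # 1 at midpoint

# cubic bubble: 1 at the barycentre, 0 on all three edges
B = 27 * l1 * l2 * l3
B_bary = B.subs({x: sp.S(1) / 3, y: sp.S(1) / 3})

# grad^perp B = (-B_y, B_x) has zero normal component on every edge,
# so it lies in S_hat(K) as claimed
gpB = sp.Matrix([-sp.diff(B, y), sp.diff(B, x)])
on_e1 = sp.simplify(((gpB.T * n1)[0]).subs(y, 1 - x))       # 0 on x + y = 1
on_e2 = sp.simplify(gpB[0].subs(x, 0))                      # 0 on edge x = 0
on_e3 = sp.simplify(gpB[1].subs(y, 0))                      # 0 on edge y = 0
print(normal_flux, tang_mid, B_bary, on_e1, on_e2, on_e3)
```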
The space $\{\MM{v}=\nabla\gamma^h,\gamma^h\in P1(K)\}$ is spanned by constant vector fields, and hence equation \eqref{Shat divergent} fixes the two degrees of freedom in $\tilde{S}(K)$. Bounds on $\Pi^S$ can be obtained by following the steps of \citet{BrDoMa1985}, since it simply involves $L_2$ projection onto various moments. Recall that the streamfunction space $E$ is the usual Lagrange continuous quadratic space augmented by cubic bubble functions. For any function $\psi\in E$, the curl $\nabla^\perp$ maps into $S$: $\nabla^\perp\psi\in S$. Furthermore, we may define a projection operator $\Pi^E:H^1(\Omega)\to E$ by \begin{eqnarray*} \Pi^E\psi(v_i) & = & \psi(v_i)\,\forall\,\mathrm{vertices}\,v_i,\,i=1,2,3, \\ \int_{e_i}\Pi^E\psi\diff{S} & = & \int_{e_i}\psi\diff{S}, \,\forall\,\mathrm{edges}\,e_i,\,i=1,2,3, \\ \int_K\Pi^E\psi\diff{V} & = & \int_K\psi\diff{V}, \end{eqnarray*} for each element $K$. To show that the projections commute with $\nabla^\perp$, \emph{i.e.} $\Pi^S\nabla^\perp\psi=\nabla^\perp\Pi^E\psi$, we check each of the conditions (\ref{BDM1 proj}-\ref{Shat rotational}). Condition \eqref{BDM1 proj} becomes \begin{eqnarray} \nonumber \int_{e(i)}\gamma^h\nabla^\perp\psi\cdot\MM{n}\diff{S} & = & \int_{e(i)}\gamma^h\nabla\psi\cdot\diff{\MM{x}}, \\ \nonumber & = & -\int_{e(i)}\psi\nabla\gamma^h\cdot\diff{\MM{x}} + [\gamma^h\psi]_{v^-_{e(i)}}^{v^+_{e(i)}}, \\ \nonumber & = & -\int_{e(i)}\Pi^E\psi\nabla\gamma^h\cdot\diff{\MM{x}} + [\gamma^h\Pi^E\psi]_{v^-_{e(i)}}^{v^+_{e(i)}}, \\ & = & \int_{e(i)}\gamma^h\nabla^\perp\Pi^E\psi\cdot\MM{n}\diff{S}, \quad \forall \gamma^h\in P^1(e(i)), \quad i=1,2,3, \label{BBM1 vort proj} \end{eqnarray} where $v^{\pm}_{e(i)}$ are the two vertices at either end of edge $e(i)$, and having noted that $\nabla\gamma^h$ is constant for $\gamma^h\in P^1(e(i))$.
Condition \eqref{Shat divergent} becomes \begin{eqnarray*} \int_{K}\nabla\gamma^h\cdot\Pi^S\nabla^\perp\psi\diff{V} & =& \int_{K}\nabla\gamma^h\cdot\nabla^\perp\psi\diff{V}, \\ & =& -\int_{K}\gamma^h\underbrace{\nabla\cdot\nabla^\perp\psi}_{=0}\diff{V} + \int_{\partial K}\gamma^h\nabla^\perp\psi\cdot\MM{n}\diff{S}, \\ & =& \int_{K}\nabla\gamma^h\cdot\nabla^\perp\Pi^E\psi\diff{V}, \quad \forall\gamma^h \in P^1(K), \\ \end{eqnarray*} where we have used equation \eqref{BBM1 vort proj}. Finally, condition \eqref{Shat rotational} becomes \begin{eqnarray*} \int_{K}\nabla^\perp B\cdot\Pi^S\nabla^\perp\psi\diff{V} & =& \int_{K}\nabla^\perp B\cdot\nabla^\perp\psi\diff{V} \\ & =& -\int_{K}\nabla^2 B\psi\diff{V} + \int_{\partial K} \underbrace{\nabla^\perp B\cdot\MM{n}}_{=0}\psi\diff{S}, \\ & =& -\int_{K}\nabla^2 B\Pi^E\psi\diff{V}, \\ & =& \int_{K}\nabla^\perp B\cdot\nabla^\perp \Pi^E\psi\diff{V}, \end{eqnarray*} since $\nabla^2 B$ is constant in $K$ and $B$ is zero on $\partial K$. Counting global degrees of freedom, \[ \dim(E)=N_{\mathrm{edge}}+N_{\mathrm{vert}}+N_{\mathrm{face}} = 2N_{\mathrm{edge}}+C, \quad \dim(S)=2N_{\mathrm{edge}}+3N_{\mathrm{face}}, \quad \dim(V)=3N_{\mathrm{face}}, \] where $C$ is the Euler characteristic of the domain $\Omega$ which is equal to 0 for the doubly-periodic domain and equal to 2 on the sphere. On the sphere there are two extra constraints: namely that the divergence and the vorticity both integrate to zero, and so in both cases $\dim(E)+\dim(V)=\dim(S)$. Finally, we note that each triangle has three edges which are each shared with one other triangle, and hence $2N_{\mathrm{edge}}=3N_{\mathrm{face}}$. \section{Numerical results} \label{numerics} In this section we illustrate the properties of the BDFM1 finite element space applied to the linear rotating shallow-water equations.
The equations were integrated numerically using the implicit midpoint rule. The resulting discrete system was solved using hybridisation, a standard technique for solving elliptic problems (see \cite{BrFo1991} for a detailed description) in which the continuity constraints on the velocity space are dropped and are instead enforced in the equation by Lagrange multipliers. It then becomes possible to eliminate both the velocity and free surface variables from the matrix equation, leaving a symmetric positive definite system to solve for the Lagrange multipliers. The velocity and free surface variables can then be reconstructed element-by-element. One of the benefits of this approach is that it can be applied when the Coriolis term is present, resulting in a fully implicit treatment of this term. In our numerical tests this system was solved using a direct solver. In the case of BDFM1-${P}1_{{DG}}$\xspace, there are three Lagrange multipliers per element. In the test cases with variable Coriolis parameter $f$, a continuous piecewise quadratic representation of $f$ was used. \subsection{Steady states for the $f$-plane} \label{steady states} We verified that the geostrophic states are exactly steady on the $f$-plane for the BDFM1 finite element space by randomly generating streamfunction fields $\psi$ from the streamfunction space $E$ on the same mesh as used for the $P1_{DG}-P2$ finite element pair steady state tests in \citet{CoHaPa2009}, with streamfunction equal to zero on the boundary. This mesh is a planar unstructured mesh in the $x-y$ plane in a $1\times 1$ square region. 
The velocity was initialised by setting $\MM{u}=\MM{k}\times\nabla\psi$ where $\MM{k}$ is the unit normal to the domain, \emph{i.e.} $\MM{k}=(0,0,1)$, and $\eta$ was obtained by solving the discrete elliptic system \begin{eqnarray} \int_\Omega \MM{w}^h\cdot\MM{v}^h\diff{V} + \int_\Omega c^2\nabla\cdot\MM{w}^h \eta^h\diff{V} & = & 0, \\ \int_\Omega \alpha^h\nabla\cdot\MM{v}^h\diff{V} & = & \int_\Omega \nabla\alpha^h\cdot f\left(\MM{u}^h\right)^{\perp}\diff{V}, \end{eqnarray} with $c^2=f=1$. We then integrated the equations forward for arbitrary lengths of time and observed that the layer thickness $h$ and velocity $\MM{u}$ remained constant up to machine precision. We also conducted the same experiment on an icosahedral mesh of the unit sphere with $c^2=f=1$ (following the ``$f$-sphere'' experiment of \citet{ThRiSkKl2009}) and obtained the same result. \subsection{Kelvin waves in a circular basin} Coastal Kelvin waves provide a challenging test since they propagate at the gravity wave speed along the coast but are geostrophically balanced in the direction normal to the coast. We used the Kelvin wave initial condition for a circular basin with unit dimensionless radius as proposed in \citet{HaKrStPi2007}, with $\Ro=0.1$ and $\Fr=1$. We integrated the equations until 10 dimensionless time units with a time step size $\Delta t=0.01$. The mesh used for the Kelvin wave calculation is shown in Figure \ref{kelvin mesh}. Some snapshots of the numerical solution are shown in Figure \ref{kelvin snapshots}. There are no spurious gravity waves observed, which means that the BDFM1 discretisation is maintaining geostrophic balance in the normal direction as well as the Kelvin wave structure. 
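A toy sketch (ours, not the paper's finite element solver) illustrates why the implicit midpoint rule is well suited to these balance tests: for a linear system $\dot{u}=Lu$ the update is a Cayley transform, which is exactly norm-conserving whenever $L$ is skew-symmetric, as it is for a pure Coriolis rotation.

```python
import numpy as np

# Sketch: implicit midpoint rule for a linear system du/dt = L u,
#   (I - dt/2 L) u^{n+1} = (I + dt/2 L) u^n.
# For skew-symmetric L the update matrix is orthogonal, so |u|^2 is conserved
# exactly, independently of the time step size.
def implicit_midpoint_step(L, u, dt):
    I = np.eye(L.shape[0])
    return np.linalg.solve(I - 0.5 * dt * L, (I + 0.5 * dt * L) @ u)

f = 1.0                                # Coriolis parameter (illustrative)
L = np.array([[0.0, f], [-f, 0.0]])    # skew-symmetric: inertial oscillations
u = np.array([1.0, 0.0])
for _ in range(1000):
    u = implicit_midpoint_step(L, u, dt=0.01)
assert abs(u @ u - 1.0) < 1e-12        # norm conserved to round-off
```

The actual solver additionally hybridises the resulting linear system, but the conservation mechanism is the same.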
\begin{figure} \centerline{\includegraphics[width=12cm]{kelvinmesh}} \caption{\label{kelvin mesh}Mesh used for the Kelvin wave tests.} \end{figure} \begin{figure} \centerline{\includegraphics[width=6cm]{kelvin0} \includegraphics[width=6cm]{kelvin25}} \centerline{\includegraphics[width=6cm]{kelvin50}} \caption{\label{kelvin snapshots}Snapshots of the free surface elevation for the circular Kelvin wave testcase obtained at times $t=0, 2.5, 5$. The numerical scheme maintains the geostrophic balance in the normal direction, as indicated by the lack of radiated inertia-gravity waves.} \end{figure} \subsection{Rossby waves} To verify the convergence of the method we compared against the Rossby wave solution with streamfunction \[ \psi(x,y,t) = \sin(2\pi x)\sin\left(2\pi\left(y+\gamma t\right)\right), \qquad \gamma = \frac{2\pi}{1+8\pi^2}, \] in a square domain with nondimensional length 1, with nondimensional wave propagation speed $c=\Ro^2$, and non-dimensional Coriolis parameter \[ f = \frac{1 + \Ro y}{\Ro}, \] and periodic boundary conditions in the $x$-direction. This is an exact solution of the Rossby wave equation, but is only an asymptotic limit solution of the linearised rotating shallow-water equations as $\Ro\to 0$, with $\mathcal{O}(\Ro^2)$ error. This means that for sufficiently small grid width and time step size we expect the $\mathcal{O}(\Ro^2)$ error to dominate. The numerical solution was initialised from this streamfunction following the balanced initialisation approach described in Section \ref{steady states}. A plot of the error is shown in Figure \ref{rossby convergence fig}. We observe $\mathcal{O}(\Delta x^3)$ convergence until the error saturates because of the finite Rossby number. 
We attribute this third-order convergence to the fact that in Section \ref{dispersion relations} the discrete Rossby wave equation was shown to be equal to the usual continuous finite element discretisation of the Rossby wave equation using the space $E$, plus a perturbation. Since $E$ contains all of the continuous piecewise quadratic functions, we would expect third-order convergence provided that the perturbation converges to zero sufficiently fast (although we do not currently have any estimates for the convergence of the perturbation). \begin{figure} \centerline{\includegraphics[width=12cm]{rossbyconvergence}} \caption{\label{rossby convergence fig}Plot of errors from the Rossby convergence test with Rossby number $\Ro=10^{-3}$ and timestep size $\Delta t=0.007996$. The comparison is made after time $\pi/(1+8\pi^2)/2$, after which the wave has travelled halfway around the domain. For large $\Delta x$ we observe third-order convergence in both $l_2$ and $l_\infty$ norms; for smaller $\Delta x$ the error is dominated by either the timestepping error or the $\mathcal{O}(\Ro^2)$ truncation error in the small Rossby number expansion.} \end{figure} To demonstrate the performance of the numerical scheme on arbitrary manifolds we constructed an unstructured mesh of a cylinder with unit dimensionless radius and dimensionless height equal to 2. The Coriolis parameter was set to $f = (1+\Ro z)/\Ro$ and other parameters were kept the same as in the planar Rossby wave tests. We call this configuration the ``$\beta$-tube'' since it corresponds to a $\beta$-plane that has been wrapped into a cylinder. Some plots of the numerical integration of this test case are provided in Figure \ref{rossbytube}; no unbalanced motions are visible from the plots. 
\begin{figure} \centerline{\includegraphics[width=5cm]{rossbytube0} \includegraphics[width=5cm]{rossbytube25} \includegraphics[width=5cm]{rossbytube50}} \centerline{ \includegraphics[width=5cm]{rossbytube75} \includegraphics[width=5cm]{rossbytube100} \includegraphics[width=1.35cm]{rossbycolorbar} } \caption{\label{rossbytube}Rossby waves on the ``$\beta$-tube'' initialised from a streamfunction on a cylinder with a coarse unstructured triangle mesh. Colour plots of the free surface elevation are plotted at non-dimensional times $t=0.79957,19.9892,39.9784,59.9686,79.9568$ from left to right. No unbalanced motions are visible from the plot.} \end{figure} \subsection{Solid rotation on the sphere} To investigate the grid imprinting caused by the finite element scheme, we integrated the linear rotating shallow-water equations on the sphere with initial condition obtained from the streamfunction $\psi=-u_0\cos\theta$, where $\theta$ is the latitude, $u_0=2\pi R/(12\mbox{ days})$, and $R=6.37122\times 10^6$ is the radius of the sphere. The rotation rate $|\Omega|$ was $1/(1\mbox{ day})$, and $g=9.8$. This solution is a steady state solution of the linear equations with varying $f$ because of the cylindrical symmetry; in general we do not expect numerical discretisations which break this symmetry to preserve the steady state. In our experiment, we used a level 4 icosahedral mesh (each icosahedron edge being subdivided into 8) of the sphere. The velocity and free surface elevation were initialised according to the procedure described in Section \ref{steady states}. To measure the deviation from a steady state, the free surface elevation after 10 days of simulation with a timestep of 3600s was subtracted from the initial condition. Remarkably, the errors were almost indistinguishable from round-off error. 
It turns out that this is because of the mapping used between functions on the sphere and functions on the icosahedral mesh with flat triangular elements used for the numerical integration. The finite element streamfunction $\psi^h$ was initialised according to $\psi^h=\psi\circ\phi$, where $\phi$ is the mapping given as follows: \[ \phi(x,y,z) = \left( \left(\frac{R^2-z^2}{x^2+y^2}\right)^{1/2}x, \left(\frac{R^2-z^2}{x^2+y^2}\right)^{1/2}y,z \right). \] This mapping preserves the value of $z$, and rescales $x$ and $y$ onto the sphere. Hence, we obtain $\psi^h\propto z$, which can be represented exactly in the streamfunction space $E$. The same mapping is also applied to the finite element representation $f^h$ of the Coriolis parameter $f$, and we obtain $f^h=2|\Omega|z/R$, which can also be represented exactly. Following the balanced initialisation procedure, the finite element free surface elevation field $\eta^h$ is obtained by projecting the mapping $\eta\circ\phi^{-1}$ into the pressure space $V$, where $\eta$ is the continuous balanced free surface elevation. Substitution into the velocity equation gives \begin{eqnarray*} \dd{}{t}\int_{\Omega}\MM{w}^h\cdot\MM{u}^h\diff{V} & = & -\int_{\Omega} f^h \MM{w}^h\cdot\left(\MM{u}^h\right)^\perp \diff{V} + c^2\int_{\Omega}\nabla\cdot\MM{w}^h \eta^h\diff{V}, \\ \mbox{[definition of $f^h$, $\psi^h$ and $\eta^h$]} & = & \int_{\Omega} f \MM{w}^h\cdot\nabla\psi\diff{V} + c^2\int_{\Omega}\nabla\cdot\MM{w}^h \eta\diff{V}, \\ \mbox{[integration by parts]} & = & \int_{\Omega} \MM{w}^h\cdot\nabla\left(\underbrace{f\psi-c^2\eta}_{=0}\right)\diff{V} = 0, \end{eqnarray*} where the second step follows since $\nabla\cdot\MM{w}^h\in V$ and so we can use the fact that $\eta^h$ is a finite element projection of $\eta$ in $V$, and where in the last step integration by parts was possible since $\eta$ is continuous and $\MM{w}^h$ has continuous normal components. 
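A minimal numerical sketch (ours, not from the paper) confirms the geometric property of the mapping $\phi$ used in this argument: it fixes $z$ and sends any admissible point onto the sphere of radius $R$.

```python
import math
import random

# Sketch: phi preserves z and rescales (x, y) radially, so it sends any point
# with |z| < R and (x, y) != (0, 0) to a point lying exactly on the sphere of
# radius R.
def phi(x, y, z, R):
    s = math.sqrt((R**2 - z**2) / (x**2 + y**2))
    return (s * x, s * y, z)

R = 6.37122e6
random.seed(0)
for _ in range(100):
    x, y = random.uniform(0.1, 1.0), random.uniform(0.1, 1.0)
    z = random.uniform(-0.9, 0.9) * R
    X, Y, Z = phi(x, y, z, R)
    assert Z == z                                  # z is preserved
    assert abs(math.hypot(X, Y, Z) - R) < 1e-6     # image lies on the sphere
```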
\begin{figure} \centerline{\includegraphics[width=8cm]{exactsteadyh}\\ \includegraphics[width=8cm]{exactsteadyu}} \centerline{\includegraphics[width=6cm]{exactsteadyu_closeup}} \caption{\label{exact steady}Plots showing the exact steady numerical solution obtained using the balanced initialisation procedure. Top Left: The free surface elevation field. Top Right: The velocity field, plotted by evaluating the finite element field at vertices and edge midpoints of each triangle. Since only the normal components are continuous, there are multiple values of these vectors corresponding to the different elements that share those vertices/midpoints. Bottom: Close-up of the velocity vectors near the equator.} \end{figure} \section{Summary and outlook} \label{summary} In this paper we described some properties of finite element spaces satisfying the div and curl embedding properties, applied to the linear rotating shallow-water equations, in order to illustrate their possible suitability for numerical weather prediction on quasi-uniform grids. In this context, these methods can be thought of as more flexible extensions of the mimetic C-grid finite difference method that is currently used in many dynamical cores. This extra flexibility means that non-orthogonal grids and grids with rapid changes of mesh resolution can be used, and the ratio of pressure and velocity degrees of freedom can be adjusted to avoid spurious mode branches. We showed that spurious inertia-gravity mode branches will exist if $\dim(E)<\dim(V)$ and spurious Rossby mode branches will exist if $\dim(V)<\dim(E)$. The discrete Helmholtz decomposition implies that $\dim(E) = \dim(S)-\dim(V)+2-\dim(H)$ where $H$ is the space of harmonic velocity fields on the chosen domain. This motivates the search for finite element spaces with $\dim(S)=2\dim(V)$ that can be used on quasi-uniform grids on the sphere. 
In Section \ref{examples} we gave two low-order examples: the modified RT0-Q0 element pair for the non-orthogonal cubed sphere, and the BDFM1-${P}1_{{DG}}$\xspace element pair for triangles, the latter of which was illustrated with some numerical examples in Section \ref{numerics}. In future work, we shall aim to benchmark the augmented mixed element pair in the context of numerical weather prediction and ocean modelling. One of the benefits of this pair is that discontinuous Galerkin methods can be used for the nonlinear continuity equation for the density. These methods are locally conservative, have minimal dispersion and diffusion errors, and can be made TVB in combination with appropriate slope limiters as described in \cite{CoSh2001}. Furthermore, as described in \citep{WhLeDe2008}, if one wishes to have tracer transport that is both conservative and consistent, it is necessary to use the pressure space for tracers too. This means that tracer transport can (must) also use the discontinuous Galerkin method. Finally, we note that although the BDFM(k)-Pk$_{DG}$ finite element spaces do not have a 2:1 ratio of velocity DOFs to pressure DOFs, there exists a family of higher-order versions of the BDFM1 element pair with a 2:1 ratio, obtained by appropriately augmenting the $\mathrm{BDM}(k)$ spaces (with $k>0$ odd) with higher-order components that vanish on element boundaries. This does not work out so neatly for $k>1$, since it is also necessary to augment the $P(k)$ space for pressure to obtain stable element pairs with twice as many velocity DOFs as pressure DOFs per triangle. In future work, we shall investigate these higher-order element pairs, as well as extensions to tetrahedra in three dimensions that can be used in unstructured mesh ocean modelling. \section{Acknowledgements} The authors acknowledge funding from NERC grants NE/I016007/1, NE/I02013X/1, and NE/I000747/1. 
The shallow water solver was produced using the Imperial College Ocean Model finite element library, unstructured meshes were generated using GMSH, and the icosahedral mesh generator was provided by John Thuburn. Plots were obtained using the Python Matplotlib library and Mayavi2. \bibliographystyle{elsarticle-harv}
\section{Introduction} This is the second of two papers where we study the impact of higher derivative terms in field theories. In Ref.~\cite{bmp} we have pointed out that the presence of these terms in a self-interacting single-component scalar field theory produces tree-level effects which may drive the formation of new vacua which is not accounted for by the decoupling theorem \cite{apca}. Three phases have been studied in the $\Phi^4$ theory by means of the mean-field approximation: the paramagnetic, $<\Phi(x)>=0$, the ferromagnetic, $<\Phi(x)>=\Phi_1\not=0$, and the antiferromagnetic, where $<\Phi(x)>$ is an oscillating function. $2^d$ bands have been found in the dispersion relation for the elementary excitations above these vacua in a certain range of the coupling constants of the $d$-dimensional theory. A reduced Brillouin zone was introduced for each band. Two zones describe particle-like excitations and the others decouple in the mean-field continuum limit. For a special choice of the coefficients of the higher order derivatives the theory possesses a formal chiral symmetry which allows us to decouple the two particles. The two decoupled modes correspond to the sublattices consisting of the even and the odd lattice sites. The theory which has nonvanishing field variables on one of the sublattices only is local and describes a chiral boson. In fact the space inversion exchanges the two sublattices and there is no space inversion partner of the particle in such a model. We extend the analysis \cite{bmp} in the upper critical dimension, $d=4$, for theories in the vicinity of the chiral invariant critical point to the one-loop order and show that the beta functions of the lattice regulated theory with our $O(\Box^2)$ term in the lagrangian are those of an ordinary $\Phi^4$ model and give a renormalized lagrangian in terms of the continuum field variables. 
The one-loop renormalizability turns out to be a nontrivial consequence of the perturbative renormalizability around the critical point of the usual $\Phi^4$ model without higher order derivative terms, because we have to render the dynamics of the two particles finite by fine tuning the set of the parameters of the bare lagrangian with a single quantum field. There is a formal similarity between the tricritical point of the $\phi^6$ model and the chiral symmetrical theory. The mean-field solution of the model with the potential \be V(\phi)={g_2\over2}\phi^2+{g_4\over4!}\phi^4+{g_6\over6!}\phi^6 \ee shows a tricritical point at $g_4=0$ which separates the second and the first order phase transition lines with different scaling laws \cite{chang}. In our case the dispersion relation \be G^{-1}(p)=m^2+p^2-c_2a^2p^4+c_4a^4p^6 \ee produces a tricritical point when $c_2$ is sufficiently large to give an absolute minimum at nonvanishing values of the momentum. When $G^{-1}$ at the minimum is negative an inhomogeneous condensate is formed. The chiral symmetrical point where this happens is the Lifshitz point. The Lifshitz point was introduced in \cite{lif}, where the $\epsilon$-expansion was used to determine the scaling laws. A scalar model where the dispersion relation has a single minimum at nonvanishing momentum was considered in \cite{frenkont} and \cite{braz}. The phase transition towards the inhomogeneous vacuum was identified at the mean-field level and the quantum fluctuations were taken into account in \cite{braz}. We will be working at $d=4$ and extend the loop computation into the phase with inhomogeneous condensate which generates a ``dangerous irrelevant variable'' \cite{dang}. The dispersion relation of our model has several minima, hence it contains several particle modes simultaneously. The condensate formation mechanism selects one of these particle sectors in a manner reminiscent of spontaneous symmetry breaking. 
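The condensation mechanism can be illustrated with a short numerical sketch (illustrative couplings of our choosing, not values from the paper): writing $s=p^2$ and setting $a=1$, the inverse propagator becomes the cubic $g(s)=m^2+s-c_2s^2+c_4s^3$, whose stationary points are roots of $g'(s)=1-2c_2s+3c_4s^2$; for large enough $c_2$ there is a local minimum at $s>0$, and when $g$ is negative there an inhomogeneous condensate forms.

```python
import math

# Sketch (lattice units a = 1, illustrative couplings): locate the minimum of
# g(s) = m^2 + s - c2 s^2 + c4 s^3 with s = p^2.  The larger root of
# g'(s) = 1 - 2 c2 s + 3 c4 s^2 is the local minimum at nonvanishing momentum.
def dispersion_minimum(m2, c2, c4):
    disc = 4 * c2**2 - 12 * c4
    if disc <= 0:
        return None                              # no interior stationary point
    s_min = (2 * c2 + math.sqrt(disc)) / (6 * c4)
    g_min = m2 + s_min - c2 * s_min**2 + c4 * s_min**3
    return s_min, g_min

s_min, g_min = dispersion_minimum(m2=0.0, c2=1.0, c4=0.1)
assert s_min > 0 and g_min < 0   # minimum at p != 0, below g(0): condensate forms
```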
There is a technical problem to solve in achieving this goal because more than one particle corresponds to the same quantum field. The formal problem is that higher order derivative terms in the kinetic energy imply the presence of states with negative metric \cite{unit} and may render the effective action complex. But we argue that far below the momentum scale of the condensate we find only two particles, both with positive metric. Their dispersion relations can be replaced in the continuum limit by the usual quadratic expressions coming from a manifestly hermitean free lagrangian \cite{riesz}. The situation turns out to be somewhat similar to the species doubling of the lattice fermions, where one finds several particle modes in the dispersion relation of a single bispinor field. We introduce a $2^d$-component field variable, $\Phi_\alpha(x)$, $\alpha=1,\cdots,2^d$, for the computation of the one-loop generator functional for the 1PI functions of the different excitation bands and show that it can be made finite by an appropriate fine tuning of the coupling constants of the original lagrangian. The organization of the paper is as follows. The basic tools of the perturbation expansion are developed in Section 2. The computation of the effective potential is presented in Section 3. The elimination of the divergences is shown and the finite renormalized coupling constants are obtained in Section 4. Section 5 is devoted to a simple effective theory which reproduces our model at low energy. A brief conclusion is given in Section 6. \section{The perturbation expansion} We develop the basic formula for the perturbation expansion in the scalar $\phi^4$ model in $d=4$ with higher order derivatives by keeping the original field variable, $\Phi(x)$. As in \cite{bmp}, the theory is regularized on the lattice. 
\subsection{The lagrangian} The model considered contains a one-component field variable, $\Phi(x)$, and is defined by the bare, cut-off lagrangian, \be \cL=\hf\partial_\mu\Phi(x)\cK\biggl({(2\pi)^2\over\Lambda^2}\Box\biggr)\partial_\mu\Phi(x) +{m^2_B\over2}\Phi^2(x)+{\lambda_B\over4}\Phi^4(x),\label{lagrc} \ee where \be \cK(z)=1+c_2z. \ee We write this lagrangian as \be \cL=\cL_1+\cL_2 \ee with \be \cL_1=\hf\partial_\mu\Phi\cK\biggl({(2\pi)^2\over\Lambda^2}\Box\biggr)\partial_\mu\Phi +{\mr\over2}\Phi^2+{\lr\over4}\Phi^4, \ee and \be \cL_2={\delta Z\over2}\partial_\mu\Phi\cK\biggl({(2\pi)^2\over\Lambda^2}\Box\biggr)\partial_\mu\Phi +{\delta m^2\over2}\Phi^2 +{\dl\over4}\Phi^4. \ee We will use $\cL_1$ non-perturbatively in the selection of the saddle point and $\cL_2$ perturbatively in removing the UV divergences of the loop corrections. The bare parameters are defined as $m^2_B=\mr+\delta m^2$ and $\lambda_B=\lr+\dl$. We have no counterterms for the coupling constants $c_j$ because their leading-order renormalization is at the tree level. We employ lattice regularization, where one introduces the dimensionless variables $x^\mu$, $\varphi=a^{d/2-1}\Phi$, $m^2_L=m^2a^2$ and the unit vectors $(e_\mu)^\nu=\delta_{\mu\nu}$, and writes the action as \be S_1[\varphi]=\sum\limits_x\cL_1(x)=\sum\limits_x\biggl\{K[\varphi;x]+V(\varphi(x))\biggr\}, \ee where \bea K[\varphi;x]&=&-\hf\varphi(x)\biggl[ A\varphi(x)+\sum_\mu\biggl(B(\varphi(x+e_\mu)+\varphi(x-e_\mu))\nonumber\\ &+&C(\varphi(x+2e_\mu)+\varphi(x-2e_\mu))\biggr)\nonumber\\ &+&\sum_{\mu\ne \nu}\biggl(E(\varphi(x+e_\mu+e_\nu)+2\varphi(x+e_\mu-e_\nu) +\varphi(x-e_\mu-e_\nu))\biggr)\biggr] \eea and \be V(\varphi(x))={\ml\over2}\varphi^2(x)+{\lr\over4}\varphi^4(x). \ee The coefficients of the kinetic energy are \bea A&=&-2d+(4d^2+2d)c_2,\nonumber\\ B&=&1-4dc_2,\nonumber\\ C&=&c_2,\nonumber\\ E&=&c_2. 
\eea The field variable, $\varphi=\varphi_{vac}+\phi$, is the sum of the tree-level vacuum, \be \varphi_{vac}(x)=\varphi_1+\varphi_{2^d}\chi(x), \ee where \be \chi(x)=(-1)^{\sum\limits_{\mu=1}^dx^\mu}, \ee and the quantum fluctuations, $\phi(x)$. We will study the theory in the para- ($\varphi_1=\varphi_{2^d}=0$), ferro- ($\varphi_1\not=0,~~\varphi_{2^d}=0$) and the $(1,2)$ antiferromagnetic ($\varphi_1=0,~~\varphi_{2^d}\not=0$) phases in $d=4$. The lagrangian for the quantum fluctuations is \bea \cL_{1P}&=&\hf\partial_\mu\phi(x)\cK\biggl({(2\pi)^2\over\Lambda^2}\Box\biggr)\partial_\mu\phi(x) +{\ml\over2}\phi^2(x)+{\lr\over4}\phi^4(x),\nonumber\\ \cL_{1F}&=&\hf\partial_\mu\phi(x)\cK\biggl({(2\pi)^2\over\Lambda^2}\Box\biggr)\partial_\mu\phi(x) +{1\over2}(\ml+3\lr\varphi_1^2)\phi^2(x)\nonumber\\ &&+\lr\varphi_1\phi^3(x)+{\lr\over4}\phi^4(x),\nonumber\\ \cL_{1AF}&=&\hf\partial_\mu\phi(x)\cK\biggl({(2\pi)^2\over\Lambda^2}\Box\biggr)\partial_\mu\phi(x) +{1\over2}(\ml+3\lr\varphi_{2^d}^2)\phi^2(x)\nonumber\\ &&+\lr\varphi_{2^d}\chi(x)\phi^3(x)+{\lr\over4}\phi^4(x), \eea \bea \cL_{2P}&=&{\delta Z\over2}\partial_\mu\phi(x)\cK\biggl({(2\pi)^2\over\Lambda^2}\Box\biggr)\partial_\mu\phi(x) +{\dml\over2}\phi^2(x)+{\dl\over4}\phi^4(x),\nonumber\\ \cL_{2F}&=&{\delta Z\over2}\partial_\mu\phi(x)\cK\biggl({(2\pi)^2\over\Lambda^2}\Box\biggr)\partial_\mu\phi(x) +\varphi_1(\dml+\dl\varphi_1^2)\phi(x)\nonumber\\ &&+{1\over2}(\dml+3\dl\varphi_1^2)\phi^2(x)+\dl\varphi_1\phi^3(x) +{\dl\over4}\phi^4(x),\nonumber\\ \cL_{2AF}&=&{\delta Z\over2}\partial_\mu\phi(x)\cK\biggl({(2\pi)^2\over\Lambda^2}\Box\biggr)\partial_\mu\phi(x) +\varphi_{2^d}\chi(x)(\dml+\dl\varphi_{2^d}^2)\phi(x)\nonumber\\ &&+{1\over2}(\dml+3\dl\varphi_{2^d}^2)\phi^2(x) +\dl\varphi_{2^d}\chi(x)\phi^3(x)+{\dl\over4}\phi^4(x), \eea where the tree-level vacuum is given by \bea P:&\varphi_{P1}=0&\varphi_{P2^d}=0,\nonumber\\ F:&\varphi^2_{F1}=-{\ml\over\lambda}&\varphi_{F2^d}=0,\nonumber\\ 
AF:&\varphi_{AF1}=0&\varphi^2_{AF2^d}=-{\ml+\cm\over\lambda}. \eea Here $\cm$ stands for the eigenvalue of the kinetic energy on the antiferromagnetic vacuum, \be {(2\pi)^2\over\Lambda^2}\Box\cK\biggl({(2\pi)^2\over\Lambda^2}\Box\biggr)\chi= \cm(d,c_2)\chi, \ee with \be \cm(d,c_2)=4d\cK(-4d)=4d(1-4dc_2). \ee The tree-level conditions for the three phases shown in Fig. 1 are \bea P:&\ml\ge0&\ml+\cm\ge0,\nonumber\\ F:&\ml\le0&\cm\ge0,\nonumber\\ AF:&\ml+\cm\le0&\cm\le0. \eea \subsection{The free propagator} The free propagator, \be <\phi(x)\phi(y)>=\int_{p\le\pi}{d^dp\over(2\pi)^d}e^{-ip(x-y)}G(p), \ee is given by \be G^{-1}(p)=\tml+\hat p_\mu\hat p^\mu\cK(-\hat p_\mu\hat p^\mu), \ee where the mass parameter with a tilde includes the shift due to the condensate, \be \tml=\cases{\ml&P,\cr-2\ml&F,\cr-2\ml-3\cm(d,c_2)&AF,}\label{shmass} \ee in the different phases, and \be \hat p_\mu=2\sin{p_\mu\over2}. \ee We further write \be G^{-1}(p)={\cal P}^2(p)-c_2{\cal P}^4(p)+\tml, \ee with the help of \be {\cal P}^2(p)=4\sum_\mu\sin^2{p^\mu\over2}. \ee It is advantageous to divide the Brillouin zone, \be {\cal B}=\biggl\{k_\mu,~|k_\mu|\le\pi\biggr\}, \ee into $2^d$ restricted zones, \be {\cal B}_\alpha=\biggl\{|k_\mu-P_\mu(\alpha)|\le{\pi\over2}\biggr\}, \ee whose centers are at \be P_\mu(\alpha)=\pi n_\mu(\alpha), \ee where $n_\mu(\alpha)=0,1$ and the index $1\le\alpha\le2^d$ is given by \be \alpha=1+\sum_{\mu=1}^d n_\mu(\alpha)2^{\mu-1}. \ee The propagator for the zone ${\cal B}_\alpha$ is \be G_\alpha(p)=G(P(\alpha)+p). \ee It turns out that only the Brillouin zones $\alpha=1$ and $2^d$ contain particle-like excitations and the corresponding propagators are \be G^{-1}_\alpha(p)=\tml(\alpha)+Z(\alpha)p^2+O(p^4), \ee where the mass and the wave function renormalization constant are given in Table 1. Note that $\tml(1)=\tml$. 
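As a consistency sketch (ours, not from the paper), the Fourier symbol of the kinetic stencil $K[\varphi;x]$ with the coefficients $A$, $B$, $C$, $E$ quoted above can be checked numerically against the kinetic part of the free inverse propagator, ${\cal P}^2(p)-c_2{\cal P}^4(p)$; the value of $c_2$ below is arbitrary.

```python
import cmath
import math
import random

# Sketch (lattice units a = 1, d = 4): each shift phi(x + a) in the stencil
# contributes exp(i p.a) to the symbol of the bracket in K[phi; x]; the kinetic
# quadratic form is -(1/2) phi [ ... ] phi, so the symbol carries an overall sign.
d, c2 = 4, 0.07          # c2 is an arbitrary illustrative coupling
A = -2*d + (4*d**2 + 2*d)*c2
B = 1 - 4*d*c2
C = c2
E = c2

def P2(p):
    return 4 * sum(math.sin(x / 2)**2 for x in p)

def stencil_symbol(p):
    s = A + 0j
    for mu in range(d):
        s += B * (cmath.exp(1j*p[mu]) + cmath.exp(-1j*p[mu]))
        s += C * (cmath.exp(2j*p[mu]) + cmath.exp(-2j*p[mu]))
    for mu in range(d):
        for nu in range(d):
            if mu != nu:
                s += E * (cmath.exp(1j*(p[mu]+p[nu]))
                          + 2*cmath.exp(1j*(p[mu]-p[nu]))
                          + cmath.exp(-1j*(p[mu]+p[nu])))
    return -s.real       # imaginary parts cancel in the sums

random.seed(0)
for _ in range(20):
    p = [random.uniform(-math.pi, math.pi) for _ in range(d)]
    assert abs(stencil_symbol(p) - P2(p) * (1 - c2 * P2(p))) < 1e-9
```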
The fact that the vacuum is a single Fourier mode offers the possibility of recovering energy-momentum conservation in the antiferromagnetic phase. The possible translations which keep the vacuum invariant consist of an even number of shifts of the integer lattice coordinates. The corresponding spectrum of the momentum operator is \be p_{AF\mu}=p_\mu~({\rm mod}\pi).\label{consm} \ee In fact, the function ${\rm mod}\pi$ subtracts the part of the momentum which can be exchanged with the antiferromagnetic vacuum and the resulting value is conserved. In this manner the momentum non-conservation is traded for the exchange of the particle type, the ``flavor dynamics''. \subsection{Chiral symmetry} The chiral transformation \be \chi:~~~~~~~~\phi(x)\longrightarrow\chi(x)\phi(x),\label{chtrrs} \ee which appears as the shift \be p_\mu\to p_\mu+P_\mu(2^d) \ee in the Fourier space is a symmetry of the lagrangian when \be c_2={1\over4d},~~~~c_4=0.\label{chinth} \ee The two particle species are degenerate in the chiral invariant theory. The operator ${\cal P}_\pm=\hf(1\pm\chi)$ identifies the fields which belong to the even or odd sublattices, \be {\cal P}_\pm\phi_\pm=\phi_\pm.\label{slproj} \ee The chiral transformation is represented by \be \phi_\pm\to\pm\phi_\pm, \ee so the chiral fields $\phi_+$ and $\phi_-$ decouple in the chiral invariant theory. The inversion of an odd number of coordinates exchanges the chiral fields. The low energy excitations in ${\cal B}_1$ and ${\cal B}_{16}$ correspond to \be \tilde\phi_\pm=\phi_+\pm\phi_-, \ee where the fields $\phi_\pm$ are slowly varying. Thus the low energy excitations of the zones ${\cal B}_1$ and ${\cal B}_{16}$ have space inversion parity $+1$ and $-1$, respectively. \section{The Effective Potential} The renormalization of the theory will be performed in the para-, ferro- and the $(1,2)$ antiferromagnetic phases in the one-loop order by making the effective potential cut-off independent. 
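Before proceeding, the chiral invariance of the free dispersion relation at $c_2=1/(4d)$ can be spot-checked numerically (a sketch of ours, with an arbitrary illustrative mass): ${\cal P}^2(p+P(2^d))=4d-{\cal P}^2(p)$, and the function $x(1-x/(4d))$ is symmetric about $x=2d$, so $G^{-1}$ is invariant under the shift $p\to p+(\pi,\dots,\pi)$.

```python
import math
import random

# Sketch (lattice units a = 1, d = 4): at the chiral point c2 = 1/(4d) the free
# inverse propagator Ginv(p) = m2 + P2(p)(1 - c2 P2(p)) is invariant under the
# shift p -> p + P(2^d) = p + (pi, ..., pi).  m2 is an illustrative mass.
d = 4
c2 = 1.0 / (4 * d)
m2 = 0.1

def Ginv(p):
    s = 4 * sum(math.sin(x / 2)**2 for x in p)
    return m2 + s * (1 - c2 * s)

random.seed(1)
for _ in range(50):
    p = [random.uniform(-math.pi, math.pi) for _ in range(d)]
    q = [x + math.pi for x in p]
    assert abs(Ginv(p) - Ginv(q)) < 1e-10
```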
It is easy to verify that the latter is sufficient, i.e., the wave function renormalization constant is finite at the one-loop order, $\delta Z=0$. \subsection{A one-loop diagram} In order to develop the appropriate notation we consider first a simple example, the contribution of the second graph of Fig. 2 in the most complicated case, the $(1,2)$ antiferromagnetic phase, \be \Sigma(k)=\hf\lambda^2_R\varphi_{2^d}^2\int_{p\le\pi}{d^dp\over(2\pi)^d} G(k+p)G(p+P(2^d))\label{sigm}. \ee The lattice cut-off of the loop integrals, $p<\pi$, should always be understood as the constraint $|p_\mu|<\pi$, for $\mu=1,\cdots,d$, imposed on the torus ${\cal D}$, unless it is stated otherwise. An integral like this can be written in a simpler form with the help of the following matrix notation. The loop integration is split into the sum over the $2^d$ restricted Brillouin zones, \be \int_{p\le\pi}d^dpf(p)=\sum_{\alpha=1}^{2^d}\int_{p\le\pi/2}d^dpf(P(\alpha)+p), \ee in particular, \be \int_{p\le\pi}d^dpG(p)=\sum_{\alpha=1}^{2^d}\int_{p\le\pi/2}d^dp G(P(\alpha)+p)=\sum_{\alpha=1}^{2^d}\int_{p\le\pi/2}d^dpG_\alpha(p). \ee Returning to our one-loop integral (\ref{sigm}) we find \bea \Sigma_c(k)=\hf\lr^2\varphi_{2^d}^2\sum_{\alpha=1}^{2^d} \int_{p\le\pi/2}{d^dp\over(2\pi)^d}G_\alpha(k+p)G_{\bar\alpha}(p), \eea where we have introduced the region complementary to $\alpha$, \be \bar\alpha=2^d+1-\alpha. \ee To simplify further the latter expression we now promote $\alpha$ to be an internal index distinguishing different kinds of fluctuations and define the propagator, \be G_{\alpha,\beta}(p)=\delta_{\alpha,\beta}G(P(\alpha)+p),\label{prop} \ee which is diagonal in this new internal space. 
The contribution to the self-energy considered above is then written in matrix notation, \be \Sigma_c(k)=\hf\lr^2\varphi_{2^d}^2\int_{p\le\pi/2}{d^dp\over(2\pi)^d} \tr\bigl[G(k+p)\gamma^{2^d}G(p)\gamma^{2^d}\bigr],\label{gnull} \ee with the help of the matrix \be \gamma^{2^d}_{\alpha,\beta}=\delta_{\alpha+\beta,2^d+1}, \ee which describes the change of the type of particle after scattering off the vacuum. \subsection{The one-loop effective potential} Let us denote the usual 1PI functions by $\Gamma^{(n)}(p_1,\cdots,p_n)$. The 1PI function for the excitations of the type $\alpha_1,\cdots,\alpha_n$ is given as \be \Gamma^{(n)}(P(\alpha_1)+p_1,\cdots,P(\alpha_n)+p_n). \ee The generator function for the zero momentum excitations, the effective potential, is defined as \be V_{eff}(\Phi)=\sum_{n=0}^\infty{1\over n!} \sum_{\alpha_1,\cdots,\alpha_n}\Phi_{\alpha_1}\cdots \Phi_{\alpha_n}\Gamma^{(n)}(P(\alpha_1),\cdots,P(\alpha_n)). \ee The matrix $\gamma^{2^d}$ in (\ref{gnull}) reflects a modification of the Feynman rules. Whenever a propagator $G_\alpha(p)$ is inserted in a graph it carries the momentum $P_\mu(\alpha)+p$. We keep track of the first term of this sum by introducing a $2^d$-component field, $\Phi_\alpha$, in such a manner that the $\alpha$-th component will be responsible for the excitations in ${\cal B}_\alpha$. Thus the Feynman rules are those of a $2^d$-component field with the propagator (\ref{prop}), and each external line with $p=0$ is represented by the insertion of the matrix \be \Phi\br=\sum_{\alpha=1}^{2^d}\gamma^\alpha\Phi_\alpha, \ee where \be \gamma^\alpha_{\rho,\sigma}=\prod_{\mu=1}^d \delta_{\sigma_\mu+\alpha_\mu-\rho_\mu~({\rm mod}~2),0} \ee takes care of the change of the particle type at each vertex due to the momentum flowing in from the external leg. We will use either the index $\alpha$ or its vector representative, $n_\mu(\alpha)$, in the formulae. 
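The algebra of these matrices can be verified directly (a sketch of ours): since $\gamma^\alpha$ permutes the binary zone labels $n_\mu$ by a bitwise XOR, the set $\{\gamma^\alpha\}$ forms an abelian group with $\gamma^\alpha\gamma^\beta=\gamma^{\alpha\ast\beta}$, where $(\alpha\ast\beta-1)=(\alpha-1)\oplus(\beta-1)$, and the flavor-flip matrix $\gamma^{2^d}$ squares to the identity.

```python
import numpy as np
from itertools import product

# Sketch (d = 4): gamma^alpha_{rho,sigma} = 1 iff rho_mu = sigma_mu + alpha_mu
# (mod 2) for every mu, i.e. the 0-based labels satisfy rho-1 = (sigma-1) XOR
# (alpha-1).  These permutation matrices form an abelian group under XOR.
d = 4
N = 2**d

def gamma(alpha):
    a = alpha - 1                    # bits n_mu(alpha): alpha = 1 + sum n_mu 2^(mu-1)
    g = np.zeros((N, N), dtype=int)
    for sigma in range(N):
        g[sigma ^ a, sigma] = 1
    return g

gammas = {alpha: gamma(alpha) for alpha in range(1, N + 1)}
for alpha, beta in product(range(1, N + 1), repeat=2):
    target = ((alpha - 1) ^ (beta - 1)) + 1
    assert np.array_equal(gammas[alpha] @ gammas[beta], gammas[target])
assert np.array_equal(gammas[N] @ gammas[N], np.eye(N, dtype=int))
```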
Taking advantage of the matrix formalism introduced above we obtain \bea V_{eff}(\Phi)&=&\hf\int_{p\le\pi/2}{d^dp\over(2\pi)^d} \tr\ln[{\cal P}^2(P+p)\cK(-{\cal P}^2(P+p))\nonumber\\ &&+\tml+6\lr\Phi\br\varphi\br+3\lr\Phi\br^2],\nonumber\\ &=&\hf\int_{p\le\pi/2}{d^dp\over(2\pi)^d} \tr\ln[{\cal P}^2(P+p)\cK(-{\cal P}^2(P+p))\nonumber\\ &&+\ml+3\lr(\Phi\br+\varphi\br)^2], \eea where the matrix $P$ is given by \be P_{\alpha,\beta}=\delta_{\alpha,\beta}P(\alpha) \ee and the vacuum field is \be \varphi\br=\varphi_1\gamma^1+\varphi_{2^d}\gamma^{2^d}. \ee The complete one-loop effective potential $V^{(0)}(\Phi)+V^{(1)}_{eff}(\Phi)$ for the background field \be \Phi\br=\Phi_1\gamma^1+\Phi_{2^d}\gamma^{2^d}, \ee is obtained in (\ref{varba}) and (\ref{effpoti}), \bea V^{P(0)}(\Phi)&=&\hf\biggl({\cal P}^2(P(1))\cK(-{\cal P}^2(P(1)))+\ml+\dml\biggr)\Phi_1^2\nonumber\\ &&+\hf\biggl({\cal P}^2(P(2^d))\cK(-{\cal P}^2(P(2^d)))+\ml+\dml\biggr)\Phi_{2^d}^2\nonumber\\ &&+{\lambda+\dl\over4}(\Phi_1^4+\Phi_{2^d}^4 +6\Phi_1^2\Phi_{2^d}^2),\nonumber\\ V^{F(0)}(\Phi)&=&V^{P(0)}(\Phi+\varphi_F),\nonumber\\ V^{AF(0)}(\Phi)&=&V^{P(0)}(\Phi+\varphi_{AF}),\nonumber\\\label{treepot} \eea and \bea V^{P(1)}_{eff}(\Phi)&=&\hf\int_{p\le\pi/2}{d^dp\over(2\pi)^d} \sum_{\alpha=1}^{2^{d-1}}\ln\biggl\{\nonumber\\ &&\biggl[{\cal P}^2(P(\alpha)+p)\cK(-{\cal P}^2(P(\alpha)+p))\nonumber\\ &&+\ml+3\lr(\Phi_{2^d}^2+\Phi^2_1)\biggr]\nonumber\\ &&\biggl[{\cal P}^2(P(\bar\alpha)+p)\cK(-{\cal P}^2(P(\bar\alpha)+p))\nonumber\\ &&+\ml+3\lr(\Phi_{2^d}^2+\Phi^2_1)\biggr] -36\lambda^2_R\Phi_1^2\Phi_{2^d}^2\biggr\}\nonumber\\ V^{F(1)}_{eff}(\Phi)&=&V^{P(1)}(\Phi+\varphi_F),\nonumber\\ V^{AF(1)}_{eff}(\Phi)&=&V^{P(1)}(\Phi+\varphi_{AF}) \label{loopot}. \eea The mass parameter of the effective potential in the ferro- and the antiferromagnetic phase after the shift $\Phi\to\Phi+\varphi$ is given by (\ref{shmass}). 
\section{The renormalization in $d=4$} The divergences arising in the one-loop integral for the effective potential are isolated by expanding the logarithm in the integrand. We reintroduce the lattice spacing and use dimensional quantities in the rest of this paper. One finds three divergent integrals, \bea \cD_1&=&\sum_{\alpha=1}^{16}\int_{p\le{\pi\over2a}}{d^4p\over(2\pi)^4}G_\alpha(p)\nonumber\\ \cD_2&=&\sum_{\alpha=1}^{16}\int_{p\le{\pi\over2a}}{d^4p\over(2\pi)^4}G_\alpha(p)^2\nonumber\\ \bar\cD_2&=&\sum_{\alpha=1}^8\int_{p\le{\pi\over2a}}{d^4p\over(2\pi)^4} G_\alpha(p)G_{\bar\alpha}(p), \eea and the divergent part of $V^{AF(1)}(\Phi)$ turns out to be \bea \label{effpdiv} V^{AF(1)}_{div}(\Phi)&=&{C\over2}\cD_1-{C^2\over4}\cD_2-{B^2\over2}\bar\cD_2\nonumber\\ &=&{3\over2}\lr\cD_1[(\Phi_{16}+\varphi_{16})^2+\Phi_1^2]\\ &&-{1\over8}\lr^236\bar\cD_2[(\Phi_{16}+\varphi_{16})^4+\Phi_1^4 +6(\Phi_{16}+\varphi_{16})^2\Phi_1^2]\nonumber\\ &&-{9\over4}\lr^2\Delta\cD_2\biggl[(\Phi_{16}+\varphi_{16})^4+\Phi_1^4 -{4\over3}(\Phi_{16}+\varphi_{16})^2\Phi_1^2\biggr],\nonumber \eea where \be \Delta\cD_2=\cD_2-2\bar\cD_2, \ee and \bea B&=&6\lr\Phi_1(\varphi_{16}+\Phi_{16}),\nonumber\\ C&=&3\lr(2\varphi_{16}\Phi_{16}+\Phi_{16}^2+\Phi^2_1). \eea The corresponding expression for the ferromagnetic phase can be obtained by the exchange $1\leftrightarrow16$ of the internal index. The condensate has to be set to zero, $\varphi_\alpha=0$, in the expressions of the paramagnetic phase. The choice of the mass counterterm, \be \dml=-3\lr\cD_1-9\lr^2\varphi_{16}^2\cD_2\label{ctmm}, \ee is straightforward after comparing (\ref{effpdiv}) with (\ref{treepot}). But there is a problem with the counterterm $\dl$ because it cannot eliminate the divergences for both particles at the same time when $\Delta\cD_2\not=0$.
\subsection{Renormalization with chiral symmetry} The remedy for the problem of the $O(\Phi^4)$ divergences comes from the observation that the chiral symmetry protects against the unwanted divergences. In fact, the chiral transformation, (\ref{chtrrs}), acts as \bea \Phi_\alpha&\to&\Phi_{\bar\alpha},\nonumber\\ G_\alpha(p)&\to&G_{\bar\alpha}(p) \eea on the variables of the effective potential and the propagator, and the chiral symmetry requires \be G_{\alpha}(p)=G_{\bar\alpha}(p)\label{symtr} \ee which reduces the number of divergences since we gain the relation \be \Delta\cD_2=\cD_2-2\bar\cD_2=0. \ee The divergent part of the effective potential is now written as \bea V^{AF(1)}_{div}(\Phi)&=&3\lr\cD_1[(\Phi_{16}+\varphi_{16})^2+\Phi_1^2]\nonumber\\ &-&9\lr^2\bar\cD_2[(\Phi_{16}+\varphi_{16})^4+\Phi_1^4+ 6(\Phi_{16}+\varphi_{16})^2\Phi_1^2]. \eea Comparing it with (\ref{treepot}) we arrive at the choice \be \dl=18\lr^2\bar\cD_2\label{ctml}. \ee Thus one can eliminate the divergences of the chirally symmetric theory in either of the phases with the help of an appropriate fine-tuning of the parameters $m^2_B$ and $\lambda_B$ of the original lagrangian. The chirally invariant theory is invariant under the exchange of the two degenerate particles. Using the chiral fields, $\Phi_\pm=\Phi_1\pm\Phi_{16}$, one can decouple the two particle modes. Let us consider for example the case of the paramagnetic phase.
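The tree-level mechanism behind this decoupling is the algebraic identity $\Phi_1^4+\Phi_{16}^4+6\Phi_1^2\Phi_{16}^2={1\over2}[(\Phi_1+\Phi_{16})^4+(\Phi_1-\Phi_{16})^4]$, which splits the quartic interaction of (\ref{treepot}) into separate self-interactions of $\Phi_+$ and $\Phi_-$. A quick numerical check of the identity:

```python
import random

# Check of Phi1^4 + Phi16^4 + 6 Phi1^2 Phi16^2
#   = ((Phi1+Phi16)^4 + (Phi1-Phi16)^4)/2
# at random field values, which is why the quartic term splits into
# self-interactions of the chiral fields Phi_+ and Phi_-.
random.seed(0)
for _ in range(100):
    a, b = random.uniform(-3, 3), random.uniform(-3, 3)
    lhs = a**4 + b**4 + 6 * a**2 * b**2
    rhs = 0.5 * ((a + b)**4 + (a - b)**4)
    assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(lhs))
print("quartic term decouples in the chiral fields")
```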
Substituting the values of $\dml$ and $\dl$ given in (\ref{ctmm}) and (\ref{ctml}) into (\ref{treepot}) and (\ref{loopot}), we get for the effective potential along the chiral line $\chi_P$ (see Fig.1), \be V^P_{eff}(\Phi_1,\Phi_{16})=V_{eff}^{ch}(\Phi_+)+V_{eff}^{ch}(\Phi_-), \ee where \bea V_{eff}^{ch}(\Psi)&=&\hf \mr\Psi^2+{1\over4}\lr\Psi^4 +\hf\sum\limits_{\alpha=1}^{2^{d-1}}\int_{p\le{\pi\over2a}} {d^4p\over(2\pi)^4}\nonumber\\ &&\times\ln\biggl[(P(\alpha)+p)^2 \cK\biggl(-{(2\pi)^2\over\Lambda^2}(P(\alpha)+p)^2\biggr) +\mr+6\lr\Psi^2\biggr]\nonumber\\ &&-3\lr\Psi^2\int_{p\le{\pi\over2a}}{d^4p\over(2\pi)^4}{1\over p^2+\mr}\\ &&+9\lr^2\Psi^2\int_{p\le{\pi\over2a}}{d^4p\over(2\pi)^4}{1\over(p^2+\mr)^2}. \nonumber \eea The same is true along the chiral lines in the other phases. This decoupling arises because in either of the phases at the chiral line the lattice decouples into two different sublattices, as explained in Ref. \cite{bmp}. \subsection{Renormalization around the symmetrical point} In order to remove the symmetry with respect to the exchange of the two particles, we consider the four dimensional theory with the tree-level cut-off dependence \be \ml=\mr a^2,~~~c_2={1\over16}\biggl[1+\sigma(a\mu)^{2+\kappa}\biggr], \label{rtraj} \ee where $\sigma=\pm1$ and $\mu$ is a mass parameter to characterize the split of the degeneracy in the spectrum, \be \cm=-16\sigma(a\mu)^{2+\kappa}. \ee The quantities referring to the symmetrical theory, $\mu=0$, will be labelled with a star. Since there are several possibilities in reaching the continuum limits, as shown in Fig.1b, we collect the corresponding conditions for (\ref{rtraj}) in Table 2. We will find that the one-loop corrections do not qualitatively change the tree-level spectrum. The degeneracy, $\mr(2^d)=\mr(1)$, is achieved analytically at the chiral line, indicating that the chiral symmetry is not broken dynamically.
We find $\mr(1)<\mr(2^d)$ in the phase $F$ and in the region $P_F$ ($P_F$ is the region of the paramagnetic phase on the left side of the chiral line $\chi_P$, see Fig.1b). On the contrary, $\mr(2^d)<\mr(1)$ in the regions $AF$ and $P_{AF}$ (on the right side of the chiral line $\chi_P$ in Fig.1b). We find no singularities in the effective potential due to the discontinuity in the momentum of the condensate when the chiral line is reached from the phases $F$ or $AF$. The complication we face is that there will be finite $\mu$-dependent corrections from the counterterms in the vicinity of the symmetrical theory, $\cD_2\not=2\bar\cD_2$. The detailed study of the $\mu$ dependence in the limit $a\to0$ is presented in Appendix \ref{mudep}. One finds that the $\mu$-dependence drops out of the finite part of the effective potential and that the $\mu$-dependent shifts of $\cD_2$ and $\bar\cD_2$ are finite. By introducing \bea \cD_2&=&\cD_2^\star+\delta\cD_2,\nonumber\\ \bar\cD_2&=&\bar\cD^\star_2+\delta\bar\cD_2, \eea with $\cD_2^\star=2\bar\cD_2^\star$ one finds the finite expressions \bea \delta\cD_2&=&-{1\over16\pi^2}\ln{\tmr(16)\over\tmr(1)},\nonumber\\ \delta\bar\cD_2&=&-{1\over16\pi^2}{\tmr(1)\over\tmr(16)-\tmr(1)}\ln{\tmr(16)\over\tmr(1)}, \eea with \bea \tmr(1)=\cases{\mr&P\cr-2\mr&F\cr-2\mr+48\mu^2(a\mu)^\kappa&AF,} \eea and \bea \tmr(16)=\cases{\mr-16\sigma\mu^2(a\mu)^\kappa&P,\cr-2\mr+16\mu^2(a\mu)^\kappa&F,\cr -2\mr+32\mu^2(a\mu)^\kappa&AF.} \eea These expressions lead to the counterterms \bea \delta m^2&=&-3\lr\cD_1-9\lr^2\varphi_{16}^2\cD_2,\nonumber\\ \delta \lambda&=&18\lr^2\bar\cD_2^\star.\label{countrt} \eea It is well known that the spontaneous symmetry breaking in a ferromagnetic theory changes the counterterms by a cut-off independent finite piece and influences the renormalization group flow at finite energies only. One could, in principle, encounter a different situation in the antiferromagnetic phase because the condensate is formed at the cut-off scale.
Furthermore, one band of the elementary excitations, in ${\cal B}_{16}$, belongs to the staggered modes which show fast oscillation at the cut-off scale. It is the fine-tuning of the value of the minimum of the dispersion relation in the zone ${\cal B}_{16}$ which eliminates the divergent phase dependence in the counterterms and restricts the effects of the phase transitions to the infrared region. \subsection{Mass spectrum} We are now in a position to follow the renormalization in the vicinity of the critical system. The effective potential is written as the sum of the finite and the divergent part, \be V^{(1)}_{eff}=V^{(1)}_{fin}+V^{(1)}_{div}, \ee where the second term on the right-hand side is defined by (\ref{effpdiv}). One should bear in mind that starting with a single mass parameter in the bare lagrangian we have already introduced different masses for the propagators in the zones ${\cal B}_1$ and ${\cal B}_{16}$. The physical masses which contain the radiative corrections are given by the derivatives of the effective potential.
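To illustrate how masses are read off the potential at tree level, consider the single-field analogue $V(\phi)={1\over2}m^2\phi^2+{1\over4}\lambda\phi^4$ with $m^2<0$ (hypothetical numbers): the minimum sits at $\phi_0^2=-m^2/\lambda$ and the curvature there is $V''(\phi_0)=-2m^2$, which is the origin of the $-2\mr$ entries in the broken-phase mass formulas.

```python
import math

# Tree-level check: for V = (1/2) m2 phi^2 + (1/4) lam phi^4 with m2 < 0
# (hypothetical values), the minimum is phi0 = sqrt(-m2/lam) and the
# curvature there is V''(phi0) = m2 + 3 lam phi0^2 = -2 m2.
m2, lam = -1.0, 0.5
phi0 = math.sqrt(-m2 / lam)
mass_sq = m2 + 3.0 * lam * phi0**2
assert abs(mass_sq - (-2.0 * m2)) < 1e-12
print(f"phi0 = {phi0:.4f}, V''(phi0) = {mass_sq:.4f} = -2*m2")
```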
\underline{The Brillouin zone ${\cal B}_1$:} The squared mass of the excitations is given by \bea \partial^2_{\Phi_1}V^{AF}_{eff}(\Phi)\Big\vert_{\Phi=0} &=&\tmr+\delta m^2+3(\lr+\dl)\varphi_{16}^2\nonumber\\ &&+3\lr\cD_1-9\lr^2\varphi_{16}^2\cD_2\nonumber\\ &&-36\lr^2\varphi_{16}^2\bar\cD_2 +\partial^2_{\Phi_1}V_{fin}^{AF(1)}(\Phi)\Big\vert_{\Phi=0}\nonumber\\ &=&m_{ph}^2(1). \eea With the help of the counterterms (\ref{countrt}) we find \be m^2_{ph}(1)=\tmr+3\lr\varphi_{16}^2(1-3\lr\delta\cD_2-12\lr\delta\bar\cD_2) +\partial^2_{\Phi_1}V_{fin}^{AF(1)}(\Phi)\Big\vert_{\Phi=0}, \ee where \bea \delta\cD_2&=&-{1\over16\pi^2}\ln{-\tmr+16\mu^2(a\mu)^\kappa \over-\tmr+24\mu^2(a\mu)^\kappa},\\ \delta\bar\cD_2&=&{1\over16\pi^2}\biggl(1+{-\tmr+16\mu^2(a\mu)^\kappa \over-8\sigma\mu^2(a\mu)^\kappa} \ln{-\tmr+16\mu^2(a\mu)^\kappa\over-\tmr+24\mu^2(a\mu)^\kappa}\biggr).\nonumber\\ \eea The computation of the finite part of the effective potential in Appendix C yields a vanishing result for the second derivatives with respect to either field variable in all phases.
So we arrive at \be m^2_{ph}(1)=-2\mr+48\mu^2(\kappa)-18\lr^2\varphi_{16}^2\Delta\cD_2(\kappa), \ee in the continuum limit, with \be \mu^2(\kappa)=\lm\mu^2(a\mu)^\kappa=\cases{0&$\kappa>0$,\cr\mu^2&$\kappa=0$,\cr \infty&$\kappa<0$,} \ee and \bea \Delta\cD_2(\kappa)&=&\lm[\cD_2(\mu)-2\bar\cD_2(\mu)]\nonumber\\ &=&\lm[\delta\cD_2-2\delta\bar\cD_2]\\ &=&\cases{{\tmr(1)\over\tmr(1)-\tmr(16)}\ln{\tmr(16)\over\tmr(1)}&$\kappa=0$, \cr0&$\kappa>0$.}\nonumber \eea \underline{The Brillouin zone ${\cal B}_{16}$:} One finds \bea \partial^2_{\Phi_{16}}V^{AF}_{eff}(\Phi)\Big\vert_{\Phi=0} &=&G^{-1}_{16}(0)+\delta m^2+3\dl\varphi_{16}^2+3\lr\cD_1\nonumber\\ &&-36\lr^2\varphi_{16}^2\cD_2 +\partial^2_{\Phi_{16}}V_{fin}^{AF(1)}(\Phi)\Big\vert_{\Phi=0}\nonumber\\ &=&-2\tmr+32\mu^2(a\mu)^\kappa+27\lr^2\varphi_{16}^2(2\delta\bar\cD_2 -\delta\cD_2)\nonumber\\ &=&m_{ph}^2(16), \eea which yields \be m_{ph}^2(16)=-2\tmr+32\mu^2(\kappa)-27\lr^2\varphi_{16}^2\Delta\bar\cD_2(\kappa). \ee Our conclusion is that for $\kappa>0$, $\delta\bar\cD_2=\delta\cD_2=0$, so the two particles become degenerate and the chiral symmetry is restored in the continuum limit. For $\kappa=0$ the mass spectrum stays non-degenerate. Finally, the masses diverge as expected when $\kappa<0$. \underline{The ferromagnetic phase:} We have, in a similar manner, \bea m^2_{ph}(1)&=&-2\mr-18\lr^2\varphi_1^2\Delta\cD_2(\kappa)\\ m_{ph}^2(16)&=&-2\tmr+16\mu^2(\kappa)-27\lr^2\varphi_1^2\Delta\bar\cD_2(\kappa).\nonumber \eea \underline{The paramagnetic phase:} The renormalized masses for the line $\chi_P$ are \bea m^2_{ph}(1)&=&\mr,\nonumber\\ m_{ph}^2(16)&=&\mr, \eea showing the presence of the chiral symmetry. In the remaining part of the paramagnetic phase we find a non-degenerate spectrum, \bea m^2_{ph}(1)&=&\mr,\nonumber\\ m_{ph}^2(16)&=&\mr+16\mu^2(\kappa). \eea The $P-AF$ transition line corresponds to the spectrum \bea m^2_{ph}(1)&=&\mr,\nonumber\\ m_{ph}^2(16)&=&16\mu^2\lm(\mu a)^\kappa.
\eea \subsection{Coupling constant renormalization} \underline{The Brillouin zone ${\cal B}_1:$} The definition of the renormalized coupling constant is \bea \partial^4_{\Phi_1}V^{AF}_{eff}(\Phi)\Big\vert_{\Phi=0} &=&6(\lr+\dl)-54\lr^2\cD_2+\partial^4_{\Phi_1}V_{fin}^{AF(1)}(\Phi) \Big\vert_{\Phi=0}\nonumber\\ &=&6\lambda_{ph}(1), \eea giving \bea \lambda_{ph}(1)=(\lr+\dl)-9\lr^2(\cD^\star_2+\delta\cD_2) +{1\over6}\partial^4_{\Phi_1}V_{fin}^{AF(1)}(\Phi)\Big\vert_{\Phi=0}. \eea With our choice of the counterterms we have \be \lambda_{ph}(1)=\lr-9\lr^2\lm\delta\cD_2+{1\over 6}\lm\partial^4_{\Phi_1} V_{fin}^{AF(1)}(\Phi)\Big\vert_{\Phi=0} \ee in the continuum limit. \underline{The Brillouin zone ${\cal B}_{16}:$} The self-coupling constant for the field $\Phi_{16}$ is \be \lambda_{ph}(16)=\lr-9\lr^2\lm\delta\cD_2 +{1\over 6}\lm\partial^4_{\Phi_{16}}V_{fin}^{AF(1)}(\Phi)\Big\vert_{\Phi=0}. \ee For the coupling constant which mixes the two fields we have \bea \partial^2_{\Phi_1}\partial^2_{\Phi_{16}} V_{eff}^{AF}(\Phi)\Big\vert_{\Phi=0}&=&6(\lr+\dl)-18\lr^2(\cD^\star_2+\delta\cD_2)\nonumber\\ &&-72\lr^2(\bar\cD_2^\star+\delta\bar\cD_2) +\partial^2_{\Phi_1}\partial^2_{\Phi_{16}}V_{fin}^{AF(1)}(\Phi)\Big\vert_{\Phi=0}\nonumber\\ &=&6\lambda_{ph}(1,16). \eea In the continuum limit it is \be \lambda_{ph}(1,16)=\lr-3\lr^2\lm(\delta\cD_2-4\delta\bar\cD_2) +{1\over 6}\lm\partial^2_{\Phi_1}\partial^2_{\Phi_{16}} V_{fin}^{AF(1)}(\Phi)\Big\vert_{\Phi=0} \ee The finite part of the effective potential, $V^{AF(1)}_{fin}(\Phi)$, is computed in Appendix \ref{finap}. The corresponding expressions in the ferro- and the paramagnetic phases are formally the same. \section{A low energy effective theory} Our theory with a single quantum field contains two particles and its antiferromagnetic vacuum is in the ultraviolet regime. 
So it is not obvious that the evolution of the coupling constants for the two particle-like excitations obeys the renormalization group equations which hold for the usual para- or ferromagnetic theories. In order to obtain the renormalization group equation for the potential of the model we introduce the running cut-off, $k$, implemented in each restricted Brillouin zone in a spherically symmetric manner, \be {\cal D}_\alpha(k)=\biggl\{\biggl(p-{\Lambda\over2}n(\alpha)\biggr)^2\le k^2\biggr\}, \ee where $\Lambda=2\pi/a$ and the contributions coming from the edges of the toroidal Brillouin zones are left out. We approximate the dispersion relation in ${\cal D}_1(k)$ and ${\cal D}_{16}(k)$ with an $O(4)$ invariant parabola and neglect the non-particle-like excitations. These approximations involve irrelevant operators of the perturbative continuum limit which should not influence the finite-energy behavior. Thus the renormalization group equation \cite{senben} for the potential is \bea k\partial_kV^P_k(\Phi_1,\Phi_{2^d})&=&\hf\Omega_dk^d \ln\biggl\{\biggl[k^2+\tmr(1)+\partial^2_{\Phi_1}V^P_k(\Phi)\biggr]\nonumber\\ &&\biggl[k^2+\tmr(2^d)+\partial^2_{\Phi_{2^d}}V_k^P(\Phi)\biggr]\nonumber\\ &&-\bigl[\partial_{\Phi_{2^d}}\partial_{\Phi_1}V^P_k(\Phi)\bigr]^2\biggr\}\nonumber\\ V^F_k(\Phi)&=&V^P_k(\Phi+\varphi_F),\nonumber\\ V^{AF}_k(\Phi)&=&V^P_k(\Phi+\varphi_{AF}), \eea in the leading order of the gradient expansion, where $\Omega_d$ stands for the solid angle in $d$ dimensions. The coefficients of the higher order terms in $k^2$ in the logarithm are kept fixed in our approximation.
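The qualitative content of such a flow equation can be illustrated by projecting a single-field version, $k\partial_kV_k={1\over2}\Omega_4 k^4\ln[k^2+V_k''(\Phi)]$, onto a quartic ansatz $V_k={1\over2}m^2(k)\Phi^2+{1\over4}\lambda(k)\Phi^4$. Expanding the logarithm to $O(\Phi^4)$ at $\Phi=0$ gives two coupled ODEs for $m^2(k)$ and $\lambda(k)$. A schematic sketch with hypothetical initial couplings, integrating the equation as printed from $k=10$ down to $k=1$ (we absorb a conventional $(2\pi)^4$ normalization of the momentum integral into the prefactor $w$, which is our own assumption):

```python
import numpy as np

# Quartic truncation of k dV/dk = (1/2) Omega_4 k^4 ln[k^2 + V''(Phi)]:
# expanding ln at Phi = 0 with V'' = m2 + 3 lam Phi^2 gives
#   k dm2/dk  =  3 w k^4 lam /(k^2 + m2),
#   k dlam/dk = -9 w k^4 lam^2/(k^2 + m2)^2,
# with w = Omega_4/(2 pi)^4 (our normalization assumption).
# Hypothetical initial couplings; Euler stepping in ln k from k=10 to k=1.
w = 2 * np.pi**2 / (2 * np.pi) ** 4
m2, lam = 0.1, 0.3
k, dt = 10.0, -1e-4
while k > 1.0:
    denom = k**2 + m2
    m2 += dt * 3.0 * w * k**4 * lam / denom
    lam += dt * (-9.0) * w * k**4 * lam**2 / denom**2
    k *= np.exp(dt)

print(f"k = {k:.3f}: m2 = {m2:.4f}, lam = {lam:.4f}")
```

With these (made-up) initial values the running mass is driven negative over one decade of scale, while the quartic coupling changes only mildly, which is the kind of behavior the truncated flow is meant to exhibit.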
Consider now the following renormalizable continuum lagrangian for a scalar and a pseudoscalar field, $\tilde\phi_+(x)$, $\tilde\phi_-(x)$, respectively, with momentum space cut-off, \be L=\hf(\partial_\mu\tilde\phi_+)^2+\hf(\partial_\mu\tilde\phi_-)^2 +V(\tilde\phi_+,\tilde\phi_-),\label{anth} \ee whose renormalization group equation in the leading order of the gradient expansion is \bea k\partial_kV_k(\tilde\phi_+,\tilde\phi_-)&=&\hf\Omega_dk^d \ln\biggl\{\bigl[k^2+\partial^2_{\tilde\phi_+}V_k(\tilde\phi)\bigr] \bigl[k^2+\partial^2_{\tilde\phi_-}V_k(\tilde\phi)\bigr]\nonumber\\ &&-\bigl[\partial_{\tilde\phi_-}\partial_{\tilde\phi_+}V_k(\tilde\phi)\bigr]^2\biggr\}. \eea The renormalization group flow of this model agrees with that of our higher derivative theory at low energies when the initial condition \be V_{\Lambda}(\tilde\phi_+,\tilde\phi_-)={m^2_{B+}\over2}\tilde\phi_+^2 +{m^2_{B-}\over2}\tilde\phi_-^2 +{\lambda_B\over4}(\tilde\phi_+^4+\tilde\phi_-^4+6\tilde\phi_+^2\tilde\phi_-^2)\label{anpot} \ee is chosen. In other words, the model (\ref{anth}), (\ref{anpot}) is equivalent to (\ref{lagrc}) at low energy when the continuum limit is taken. The correspondence between the phases is the following, \bea P&\Longleftrightarrow<\tilde\phi_+>=0,&<\tilde\phi_->=0,\nonumber\\ F&\Longleftrightarrow<\tilde\phi_+>\not=0,&<\tilde\phi_->=0,\nonumber\\ AF&\Longleftrightarrow<\tilde\phi_+>=0,&<\tilde\phi_->\not=0. \eea The conserved momentum of the antiferromagnetic phase is (\ref{consm}) and the exchange of the momentum $\pi n(16)_\mu/2$ on the lattice with the vacuum corresponds to the exchange of the scalar and the pseudoscalar particle. Due to the vertex $\tilde\phi_+^2\tilde\phi_-^2$ in the lagrangian a pseudoscalar particle can decay into two scalar ones in the antiferromagnetic phase and parity is not conserved. The one-loop scaling laws of our theory agree with those of a usual two-component $\phi^4$ model up to irrelevant terms.
Thus one may suppose that our theory is not asymptotically free and consequently becomes trivial in the continuum limit. In this case, when the cut-off cannot be eliminated from the interacting theory, the irrelevant terms which were neglected in the comparison might be important and generate a different physical content. \section{Conclusions} The one-loop vacuum polarization effects were studied in the para-, ferro- and $(1,2)$ antiferromagnetic phases of the four dimensional $\phi^4$ model around the chiral invariant critical point. One can identify two particle-like excitations in each phase. The one-loop divergences were eliminated by an appropriate fine-tuning of the parameters of the bare lagrangian, and the resulting theory was found to be equivalent at low energies to a usual renormalizable model made of a scalar and a pseudoscalar field. In this continuum limit, where the length scale of the antiferromagnetic vacuum or the pseudoscalar staggered particle mode tends to zero, the well-known problems with unitarity \cite{unit} disappear. One should emphasize that even though the cut-off can be removed and the continuum limit can be taken at the one-loop level, the theory can be defined only by relying heavily on the regulator. The renormalized continuum theory exists only when the regulator is taken into account both at the tree- and loop-levels in a systematic manner. The possibility of removing the divergences in the presence of an apparently non-renormalizable term in the lagrangian is in principle a serious threat to the usual strategy of Particle Physics, where universality is used to limit our investigations to the class of renormalizable theories. But the result that our model reproduces the infrared structure of a conventional renormalizable one is reassuring because it indicates that no new universality class is encountered. The antiferromagnetic phase is certainly different compared to the usual $\phi^4$ model with $c_j=0$.
But the para- and the ferromagnetic phases of our higher derivative model become unusual as well in the vicinity of the chiral invariant critical point. This is because the dispersion relation develops a second minimum in all of these phases, which can be fine-tuned around this critical point in such a manner that another particle-like excitation, the analogue of the rotons of superfluids, appears. This particle has staggered excitation modes which allow us to introduce the chiral fields, which are exchanged with each other under the space-time inversions. These chiral fields decouple in the chirally symmetric theory. By considering field variables only on the even sublattice one can construct models with a single chiral boson. Such a decoupling of the modes is reminiscent of the fermion doubling on the lattice, and the resulting model with a single chiral boson is local and satisfies reflection positivity. Our computation was made at the one-loop level only. There is no conceptual problem in extending our work to higher loop orders, though the treatment of the overlapping divergences with an unconventional dispersion relation represents a challenging problem. It remains to be seen whether the perturbative elimination of the divergences can be achieved beyond the one-loop order. If a theory in the antiferromagnetic phase turns out to be renormalizable, then its vacuum appears homogeneous in physical measurements. It is only the structure of the excitations which betrays the non-trivial structure of the vacuum of such a theory. There are numerical indications of a continuum limit in the antiferromagnetic phase for other models with an antiferromagnetic vacuum \cite{lcm}. One should mention that there are other possible continuum limits in our model away from the chiral invariant critical region, when the mass parameter is kept at a cut-off independent value. This parameter plays a role analogous to that of the $\kappa$-parameter of the Wilson fermions.
In fact, the excitations of the restricted zone ${\cal B}_1$ decouple when $\ml=O(a^0)\not=0$. The only left-over excitations, in ${\cal B}_{16}$, become critical along the P-AF transition line. Thus the approach of the critical line with a fixed mass parameter results in a theory which contains a single pseudoscalar particle. In a similar manner, certain regions of the $c_4\not=0$ part of the phase diagram may become critical and offer a continuum limit. This is because the renormalizability is a rather straightforward issue when only one particle is left in our model. Finally we mention the problem of triviality. It is a frustrating experience that the simplest models, such as the $\phi^4$ model and QED, which are used in the textbooks to demonstrate the renormalization of Quantum Field Theories, might well be non-renormalizable if they are found to be trivial. In this case the study of their ultraviolet scaling behaviour serves phenomenological interest, and a real ultraviolet fixed point can be achieved by asymptotically free models only. The one-loop ultraviolet structure of our theory turned out to be similar to that of the conventional $\phi^4$ model. This leads us to assume that our theory is not asymptotically free and perhaps trivial, because its coupling constant, which is marginal at the tree level, becomes irrelevant due to the one-loop contributions. This suggests the extension of the investigation of the antiferromagnetic vacuum to other, more involved asymptotically free models, which may preserve their renormalizability and offer a more consistent example of a non-homogeneous vacuum which actually appears homogeneous in experiments.
\section{Introduction} Extragalactic jet sources associated with active galactic nuclei (AGN) are among the most powerful sites of radiation in the cosmos. The jets are highly-collimated outflows, imaged in radio, optical and X-ray in select examples, and can emit up to around \teq{10^{47}}erg/sec in highly variable radiation, spanning wavelength ranges from the radio to TeV gamma-rays. The sub-class of blazars, the most variable of AGN, is believed to have jets aligned near to the line of sight to Earth. Current understanding of blazar jets is limited by the fact that the physical phenomena of jet formation and collimation, particle acceleration, and radiation are usually treated as separate problems. The acceleration/radiation interface, the emphasis of this paper, can be suitably addressed using plasma (particle-in-cell; PIC \cite{Nishikawa-2005-ApJ,Sironi-2009-ApJ}) and diffusion (Monte Carlo) codes \cite{Ellison-1990-ApJ,SB12}. To help bridge the gap between science focal areas and theoretical techniques, we embarked upon a program to blend shock acceleration and radiation elements of blazar models \cite{BBS17}. The acceleration process was modeled using the relativistic Monte Carlo shock simulation study of \cite{SB12}, with the principal product being combined thermal plus non-thermal distributions spanning \teq{3-7} decades in particle momentum. These constitute leptons injected in a fairly confined acceleration zone in the jet, which are then subject to radiative dissipation in larger, neighboring jet regions. The radiation modules of \cite{Boettcher-2002-ApJ} and \cite{Boettcher-2013-ApJ} were then employed to characterize the multi-wavelength (MW) synchrotron and inverse Compton signatures.
Such broadband modeling affords tighter constraints on jet environmental parameters, including the magnetic field strength (i.e., \teq{\omega_{\hbox{\fiverm B}} = eB/m_ec}), the electron density \teq{n_e} and its plasma frequency \teq{\omega_{\hbox{\sixrm p}} = [4\pi n_e e^2/m_e]^{1/2}}, and the nature and spatial distribution of the field turbulence. To efficiently inject electrons from thermal energies into the acceleration process, \cite{BBS17} determined that the shock needed fairly turbulent fields on small scales, such as would be generated by the Weibel instability: see \cite{Nishikawa-2005-ApJ,Sironi-2009-ApJ} for PIC simulations. To position the synchrotron turnover in the optical, very large mean free paths and inefficient diffusion/acceleration were needed for the highest energy electrons (first concluded by \cite{Inoue-1996-ApJ}), {\it corresponding to very low levels of turbulence}. Deducing such disparate character in field turbulence spanning the large range in momenta of accelerated particles signals the power of this synergistic approach. Since the introductory steady-state foray of \cite{BBS17}, we have moved our program forward by exploring time-dependent properties of energetic electron populations in combination with MW radiation signals, encapsulating the essentials of the competition between shock acceleration and radiative cooling. This has culminated in an extensive offering \cite{BB19_ApJ} on evolving MW SEDs and light curves for blazar flares, of which this paper captures a glimpse pertaining to the famous flat spectrum radio quasar (FSRQ) 3C 279, a low synchrotron-energy-peaked (LSP) blazar. \section{Shock Acceleration and Multi-Wavelength Radiation Emission} Electron distributions for the generation of radiation signatures are obtained in our study using the shock acceleration Monte Carlo simulation of \cite{SB12}. 
This code tracks the diffusive elements of the first-order Fermi acceleration process (DSA), modeling stochastic pitch angle diffusion of charges convecting along magnetic field lines, yet it also includes episodes of shock drift energization (SDA). The mean-free path for pitch angle scattering is parameterized via \teq{\lambda_{\rm pas}= \eta (p) \, r_{\rm g}}, i.e. a momentum-dependent multiple \teq{\eta (p)} of the particle's gyro radius, \teq{r_{\rm g} = p c / (q B)}, where $p$ is the particle's momentum. A broadly applicable choice for the scaling is a power-law in the particle's momentum, \teq{\eta (p) = \eta_1 \, (p/mc)^{\alpha - 1}}, where \teq{\eta_1} describes the mean free path in the non-relativistic limit, \teq{\gamma \to 1}. Motivations for this form from quasi-linear MHD turbulence theory, hybrid plasma simulations, and {\it in-situ} spacecraft observations in the heliosphere are discussed in \cite{BBS17,SB12}. Examples of simulated distributions for strong, subluminal, mildly-relativistic shocks expected in blazar jets are provided in Fig.~1 of \cite{BBS17}, illustrating that shock acceleration leads to a non-thermal broken power-law tail of relativistic charges that have been accelerated out of the thermal pool. As a consequence of the \teq{\eta (p) \propto p^{\alpha -1}} form, the particle distribution is somewhat steep (\teq{dn/dp \sim p^{-2.2}}) at low momenta when DSA dominates, and much flatter (\teq{dn/dp \sim p^{-1}}) for much higher momenta when SDA is the more effective energization process. These thermal + non-thermal distributions, modulated by an exponential radiation-reaction-limited turnover at a maximum Lorentz factor \teq{\gamma_{\rm max}} (see \cite{BB19_ApJ}), serve as an injection \teq{Q_e(\gamma ,\, t)} into a time-dependent electron/pair evolution code. The electrons then diffuse in a larger radiation zone, surrounding the shock region (see Fig.~2 of \cite{BBS17}), cooling while radiating. 
This evolving distribution of relativistic electrons is simulated by numerically solving a Fokker-Planck equation of the form \begin{equation} {\partial n_e (\gamma_e, t) \over \partial t} = - {\partial \over \partial \gamma_e} \Bigl( \dot\gamma_e \, n_e [\gamma_e, t] \Bigr) - {n_e (\gamma_e, t) \over t_{\rm esc, e}} + Q_e (\gamma_e, t) \label{eq:FP} \end{equation} using an implicit Crank-Nicolson scheme \cite{Boettcher-2002-ApJ}. Here $\dot\gamma_e$ represents the combined radiative energy loss rate of the electrons. The relevant radiative mechanisms are synchrotron emission in a tangled magnetic field, synchrotron self-Compton (SSC) radiation, and inverse Compton scattering of external radiation fields (external inverse Compton = EIC), with various plausible target photon fields such as that from a dusty torus proximate to the jet. The electron distribution \teq{n_e (\gamma_e, t)} is considered in the co-moving frame of the jet, and is assumed isotropic therein, with an electron escape time scale parameterized via a multiple of the light-crossing time scale, \teq{t_{\rm esc, e} = \eta_{\rm esc} \, R/c}. The spatial transfer and temporal evolution of the jet frame photon distribution \teq{n_{\rm ph} (\epsilon, t)} is modeled by solving a continuity equation \begin{equation} {\partial n_{\rm ph} (\epsilon, t) \over \partial t} = {4 \, \pi \, j_{\epsilon} \over \epsilon \, m_e c^2} - c \, \kappa_{\epsilon} \, n_{\rm ph} (\epsilon, t) - {n_{\rm ph} (\epsilon, t) \over t_{\rm esc, ph}} \label{eq:radiation} \end{equation} that serves as an upgrade from the steady-state radiative transfer routines of \cite{Boettcher-2013-ApJ} that were employed in \cite{BBS17}. Here \teq{j_{\epsilon}} and \teq{\kappa_{\epsilon}} are the emissivity and absorption coefficients, respectively, \teq{\epsilon = h \nu / (m_e c^2)} is the dimensionless photon energy, and \teq{t_{\rm esc, ph}} is the photon escape time scale from the radiation emission region.
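The structure of Eq.~(\ref{eq:FP}) can be sketched with a toy solver: synchrotron-like cooling \teq{\dot\gamma_e=-b\gamma_e^2}, a power-law injection between \teq{\gamma=10^2} and \teq{10^4}, and a simple explicit upwind step standing in for the implicit Crank-Nicolson solver actually used. All numbers here are hypothetical (arbitrary units), chosen only to exhibit the competition between injection, cooling and escape.

```python
import numpy as np

# Toy version of the electron kinetic equation:
#   dn/dt = -d/dgamma(gdot * n) - n/t_esc + Q,  gdot = -b*gamma^2.
# Explicit upwind differencing (cooling moves particles toward low gamma,
# so information flows from higher gamma); hypothetical parameters.
n_bins = 200
gamma = np.logspace(1, 5, n_bins)
dg = np.diff(gamma)
b, t_esc, dt = 1e-7, 1e4, 1.0
Q = np.where((gamma > 1e2) & (gamma < 1e4), gamma**-2.2, 0.0)

n = np.zeros(n_bins)
for _ in range(5000):
    flux = b * gamma**2 * n                  # |gdot| * n, flowing toward low gamma
    dn = np.zeros(n_bins)
    dn[:-1] = (flux[1:] - flux[:-1]) / dg    # upwind difference of d(|gdot| n)/dgamma
    n += dt * (dn - n / t_esc + Q)

# particles cool out of the injection range and appear at lower gamma
print("fraction below gamma=100:", n[gamma < 1e2].sum() / n.sum())
```

The time step is chosen so that the advective Courant condition is satisfied on this grid; a production code uses the unconditionally stable implicit scheme instead.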
Note that \teq{\gamma\gamma} pair absorption is generally small in the jet for 3C 279 due to its low IC peak energy in the GeV band. Note also that we invoke a dusty torus seed for the external inverse Compton (EIC) component that is Doppler-boosted and highly anisotropic in the jet frame, thereby strongly enhancing the EIC emissivity. Corrections for \teq{\gamma\gamma} absorption by the Extragalactic Background Light are included but are small. The total observed flux is obtained by Doppler boosting \teq{n_{\rm ph} (\epsilon, t)} to the observer frame, with jet-frame and observer time intervals being related through \teq{\Delta t_{\rm obs} = \Delta t \, (1 + z) / \delta}, for a redshift of $z = 0.536$. Our radiation emission/transfer code outputs snap-shot SEDs and multi-wavelength light curves at select frequencies. \section{Modeling Two Contrasting Flares from 3C 279 in December 2013} Our focal blazar, 3C~279, is one of the brightest gamma-ray blazars detected by the Large Area Telescope (LAT) on board the {\it Fermi} Gamma-Ray Space Telescope (e.g., \cite{Abdo10}). It is one of only a few FSRQs also detected in very-high-energy (VHE: $E > 100$~GeV) gamma rays by ground-based Imaging Atmospheric \v{C}erenkov Telescopes (e.g., \cite{Albert-2008-Sci}). The active period of interest was December 2013 -- April 2014, during which extensive multi-wavelength observations of flaring activity were acquired, as reported in \cite{Hayashida15}. Figure 7 of that paper shows multi-wavelength light curves of 3C~279, where several gamma-ray flares (B, C, D) are identified, in addition to a quiescent period (A). In this presentation, we summarize our modeling of two of these flaring episodes: the modest Flare C and the strongly Compton-dominated Flare B, an investigation presented at length in \cite{BB19_ApJ} (see also \cite{BB19}).
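The time-dilation relation above is worth keeping in mind when reading the light curves: for 3C 279 (\teq{z = 0.536}) and a Doppler factor of order \teq{\delta = 15}, as adopted in the flare modeling, jet-frame intervals are compressed by roughly a factor of ten. A quick numerical illustration:

```python
# Observer-frame interval for a jet-frame interval, Dt_obs = Dt (1+z)/delta,
# evaluated for 3C 279 (z = 0.536) with a Doppler factor delta = 15.
def t_obs(dt_jet_hr, z=0.536, delta=15.0):
    return dt_jet_hr * (1.0 + z) / delta

# a ~10 hr jet-frame cooling timescale is seen as ~1 hr by the observer
print(f"10 hr (jet frame) -> {t_obs(10.0):.2f} hr (observer frame)")
```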
In the modeling, the jet Doppler factor was presumed to be $\delta \approx 15$, and the variability timescales of the order of hours imply that the active regions cannot be larger than $\sim 10^{16}$cm. Fig.~\ref{fig:FlareC_edist_spec} shows the electron distributions (at left) resulting from the solutions of the kinetic equation in Eq.~(\ref{eq:FP}), together with the MW radiation modeling (at right) associated with Eq.~(\ref{eq:radiation}) for Flare C. The spectra are temporal snap-shots at different times, as labeled, and are compared with observations extracted from \cite{Hayashida15}. The jet/shock parameters for this modeling are given in Tables~2 and~3 of \cite{BB19_ApJ}. We highlight here that the shock acceleration diffusion parameters for the Flare C models were \teq{\eta_1=100} and \teq{\alpha =3}, i.e. that the pitch-angle scattering mean free path scaled as \teq{\lambda_{\rm pas} = 100 \, r_g \, (p/m_ec)^2\, \propto p^3}. These parameters were realized throughout the flare, including at the outset, which is represented by the dark blue curve at right, and constitute the long-term equilibrium state prior to the flare (period A). Thus while the injection rate became enhanced during the flare, the character of the turbulence in the shock environs was not altered. In particular, the \teq{\lambda_{\rm pas}/r_g\gg 1} realization for essentially all electron momenta implies that MHD turbulence levels are extremely low on the pertinent diffusion scales, a necessary imposition for the synchrotron emission to emerge in the optical for this LSP blazar: see \cite{Inoue-1996-ApJ,BBS17} for explanation of this constraint. The connection between turbulence and polarization measures \cite{Hayashida15} is discussed in \cite{BB19_ApJ}. 
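The quoted scaling is easy to make concrete: with \teq{\eta_1=100} and \teq{\alpha=3}, the mean free path \teq{\lambda_{\rm pas}=\eta_1(p/m_ec)^{\alpha-1}r_{\rm g}} with \teq{r_{\rm g}=pc/(eB)} grows as the cube of the momentum, so doubling \teq{p} multiplies \teq{\lambda_{\rm pas}} by eight. A small check in CGS units (the field value \teq{B=0.65}Gauss is the Flare C jet-frame field):

```python
# Scaling check of the Flare C diffusion law: eta_1 = 100, alpha = 3 give
# lambda_pas = eta_1 (p/mc)^(alpha-1) r_g proportional to p^3,
# with r_g = p c/(e B) the electron gyroradius (CGS units).
m_e, c, e_cgs, B = 9.109e-28, 2.998e10, 4.803e-10, 0.65

def lambda_pas(p_over_mc, eta1=100.0, alpha=3.0):
    r_g = p_over_mc * m_e * c**2 / (e_cgs * B)   # gyroradius in cm
    return eta1 * p_over_mc ** (alpha - 1.0) * r_g

ratio = lambda_pas(2.0e3) / lambda_pas(1.0e3)
print(f"doubling p multiplies lambda_pas by {ratio:.1f}")
```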
\begin{figure}[hb] \centering \vspace{-12pt} \centerline{\includegraphics[width=7.8cm]{3C279_FlareC_edist.pdf} \hskip -8pt \includegraphics[width=8.0cm]{3C279_FlareC_spec.pdf}} \vspace{-15pt} \caption{{\it Left panel}: Simulated electron distribution sequence \teq{n_e(\gamma_e, \, t)} corresponding to the Flare C spectral sequence on the right. The diagonal line marks the approximate shock injection power law distribution that results primarily from the shock drift (SDA) mechanism: see \cite{BBS17}. {\it Right panel}: Snap-shot SEDs of 3C~279 modeling the moderate Flare C on December 31, 2013 at different times. Data are from \cite{Hayashida15}. Curves illustrate the spectral evolution during the rising part of Flare C, the injection phase at times \teq{t< t_i + 5.55}hr; the aqua diamond spectrum signals the evolution during the decaying part. See text for details. } \label{fig:FlareC_edist_spec} \end{figure} The electron distributions on the left of Fig.~\ref{fig:FlareC_edist_spec} evince flat \teq{n_e(\gamma ) \equiv dn/dp \sim p^{-1}} character at low momenta (\teq{\gamma\beta \lesssim 30}; note the \teq{\gamma^2 n_e(\gamma )} representation) due to the prominent action of SDA. The densities rise due to the cumulative injection during the 5.5 hour activation period. At high energies, acceleration first competes with cooling, generating electrons with \teq{\gamma\gtrsim 10^3}, and then succumbs to it at \teq{t>5.5}hrs when the injection is terminated. Thereafter the cooling evolution of \teq{n_e(\gamma)} is apparent, causing a pile-up of electrons at \teq{\gamma\sim 10^2}. The associated radiation signatures on the right of Fig.~\ref{fig:FlareC_edist_spec} consist of a simple rise during the acceleration epoch, followed by a cooling/spectral softening in the optical synchrotron and EIC GeV $\gamma$-ray bands on timescales of \teq{\sim 10}hr, consistent with the {\it Fermi}-LAT light curve in \cite{Hayashida15}. 
The EIC component seeded by IR radiation from a dusty torus is needed because the low \teq{\gamma_{\rm max}} required to generate a synchrotron peak in the optical produces an SSC peak in the hard X-rays. The radio synchrotron and SSC X-ray fluxes do not die off as quickly during the cooling relaxation phase, a signature that is borne out in the model light curves for Flare C displayed in Fig.~4 of \cite{BB19_ApJ} that were derived for the 4 frequencies marked in Fig.~\ref{fig:FlareC_edist_spec}. Note that the steady radio emission is believed to originate from a region much larger than that for the flares, and so does not provide a useful constraint for the MW data ``fitting'' protocol. Encouraged by this combined MW spectroscopy/temporal success for Flare C, we moved to the more challenging case of the powerful GeV-band Flare B, with results illustrated in Fig.~\ref{fig:FlareB_edist_spec}. At its peak, the $\gamma$-ray flux of this flare is 3-10 times higher than that for Flare C. While a simple increase in the shock acceleration injection rate \teq{Q_e(\gamma, \, t)} could yield this strong signal, it would also produce a similar flare enhancement in the optical synchrotron, which was clearly not seen. Thus, to generate the pseudo-orphan GeV flare with a strongly Compton-dominated model, we lowered the jet frame magnetic field to \teq{B=0.075}Gauss, (corresponding to a mildly-relativistic gyro-scale of \teq{m_ec^2/eB = 2.27 \times 10^4}cm), i.e. a factor of 8 or so below the \teq{B=0.65}Gauss employed for Flare C and the long-term equilibrium MW spectra (dark blue curves in both Figs.~\ref{fig:FlareC_edist_spec} and~\ref{fig:FlareB_edist_spec}). This field reduction was ephemeral, with an exponential recovery on a timescale of just over 3 days, and can be interpreted as being part of larger scale MHD turbulence in the jet. 
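The quoted gyro-scale and the field-reduction factor follow from simple cgs arithmetic. The sketch below is purely a consistency check using standard Gaussian-unit constants.

```python
# Consistency check of the quoted gyro-scale m_e c^2 / (e B) for the
# Flare B field, in cgs (Gaussian) units.
e_esu = 4.8032e-10     # electron charge [esu]
me_c2 = 8.1871e-7      # electron rest energy [erg]

def gyro_scale_cm(B_gauss):
    """Gyro-scale of a mildly relativistic electron, m_e c^2 / (e B), in cm."""
    return me_c2 / (e_esu * B_gauss)

print(gyro_scale_cm(0.075))   # ~2.27e4 cm, as quoted for Flare B
print(0.65 / 0.075)           # ~8.7: the "factor of 8 or so" field reduction
```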
Such magnetic rarefactions followed by some recovery are seen in active regions just downstream of non-relativistic interplanetary shocks in the solar wind \cite{BOEF97}, albeit by factors of around \teq{1.5-2} in field strength. \begin{figure}[ht] \centering \centerline{\includegraphics[width=8.0cm]{3C279_FlareB_edist.pdf} \hskip -8pt \includegraphics[width=8.2cm]{3C279_FlareB_spec.pdf}} \vspace{-15pt} \caption{{\it Left panel}: Simulated electron distribution sequence \teq{n_e(\gamma_e, \, t)} corresponding to the Flare B spectral sequence on the right. The solid blue curve is the long-term equilibrium distribution. As in Fig.~1, the energetic injection from particle acceleration and subsequent cooling are apparent. {\it Right panel}: Snap-shot SEDs of 3C~279 modeling the powerful Flare B on December 20, 2013. Data are again from \cite{Hayashida15}. As in Fig.~1, the curves illustrate the spectral evolution during the rising part of Flare B, the injection phase at times \teq{t< t_i + 5.25}hr, and the aqua diamond spectrum signals the evolution during the decaying part. } \label{fig:FlareB_edist_spec} \end{figure} The prominence of the EIC component in Flare B yielded a net flattening of the spectrum in the X-ray band as it dominates the SSC component. This evolutionary spectral trend was commensurate with that of the {\it Swift/NuSTAR} flare data. Another spectral characteristic was the blue-ward shift of the gamma-ray peak as the flare progresses, leading to distinctive spectral hysteresis properties that are depicted in \cite{BB19_ApJ} via familiar hardness-intensity diagrams. To generate this trend, the model required an alteration of the turbulence/diffusion parameters from the quiescent, equilibrium values, and so we employed $\eta_1 = 100 \to 10$ and $\alpha = 3 \to 2.3$. This modification is indicative of modest increases in turbulence levels in the post-shock region \cite{SB12}. The consequence was to raise the value of \teq{\gamma_{\rm max}} by a factor of a few. 
This larger maximum Lorentz factor is apparent in the left panel of Fig.~\ref{fig:FlareB_edist_spec} during the injection phase, as is the rapid cooling in the relaxation phase that spawns hard-to-soft spectral evolution in both the optical and $\gamma$-ray bands. Light curves reflecting this character for the frequencies marked on the SED panel are exhibited in Fig.~7 of \cite{BB19_ApJ}. As with Flare C, the Flare B light curves in the X-ray and {\sl mm} radio peak after the cessation of acceleration/injection and decay much more slowly than do the flux time profiles for the optical synchrotron and EIC gamma-ray bands. Our full paper \cite{BB19_ApJ} graphically illustrates and discusses such inter-band lags for both flares. No significant TeV emission is predicted by our modeling for either flare, in accordance with the finding by \cite{Boettcher09} that leptonic models have difficulties reproducing the VHE emission observed in several exceptional flare states of 3C~279. The cause of this is the low value of \teq{\gamma_{\rm max}}, imposed by the very strong Compton cooling in the emission region, and required to generate the low synchrotron peak frequency. No VHE gamma-ray emission was detected from 3C~279 during the 2013--2014 flaring episodes discussed in \cite{Hayashida15}. Yet there have been TeV-band detections by Atmospheric \v{C}erenkov Telescopes for this blazar, for example by MAGIC in 2006 \cite{Albert-2008-Sci}, data from which are included as archival VHE flux points (gray) in both Figs.~\ref{fig:FlareC_edist_spec} and~\ref{fig:FlareB_edist_spec}. Modeling of flares evincing emission above 100 GeV may demand more modest injection increases during flares concomitant with reduced Compton cooling, such as was explored for the HBL Mrk 501 in \cite{BBS17,BB19_ApJ}. 
The treatment of full thermal + non-thermal electron distributions in the shock acceleration modeling from \cite{SB12} employed in this investigation permits measures of the thermal electron number density \teq{n_e} in the jet plasma. This is not afforded by commonplace MW modeling that restricts considerations to purely non-thermal fiducial electron distribution functions. Accordingly we evaluate the plasma frequency \teq{\omega_{\hbox{\sixrm p}}} in our modeling, finding values of \teq{\omega_{\hbox{\sixrm p}} = 8.4}MHz for Flare C and \teq{\omega_{\hbox{\sixrm p}} = 8.3}MHz for Flare B. These values, which for the range of models attainable in our analysis possess a precision to within a factor of a few, establish the approximate energization rates for acceleration processes that connect to inertial effects such as relativistic magnetic reconnection or Weibel-instability instigated turbulent shock acceleration. The B-field values ascertained from the MW spectral fits yielded cyclotron frequencies of \teq{\omega_{\hbox{\fiverm B}} = 11.4}MHz for Flare C and \teq{\omega_{\hbox{\fiverm B}} = 1.3}MHz for Flare B. These measures define the rates \teq{d\gamma /dt \sim \omega_{\hbox{\fiverm B}}/\eta (p)} of cyclotronic acceleration processes such as DSA at relativistic shocks; comparison with the plasma frequencies indicates that reconnection and \underline{efficient} diffusive acceleration (i.e., for \teq{\eta \sim 1}) are more or less on a par for the plasma jet of 3C 279. Yet, our MW spectroscopy demands inefficient SDA shock acceleration conditions, and similar inefficiency would be imposed upon magnetic reconnection scenarios. \section{Conclusion} In this short cameo offering, we present some core results from our numerical analysis coupling Monte-Carlo simulations of diffusive shock acceleration with time-dependent radiation transfer in an internal-shock, leptonic scenario for flares from the FSRQ blazar 3C 279. 
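The cyclotron and plasma frequency estimates above can be reproduced with standard cgs formulae. In the sketch below we interpret the quoted "MHz" values as angular frequencies in units of \teq{10^6} rad/s (an assumption on our part, which reproduces the quoted numbers); the implied thermal electron density is our derived estimate, not a figure stated in the text.

```python
import math

# Electron cyclotron and plasma (angular) frequencies in cgs units,
# checked against the values quoted for the Flare C and Flare B fits.
e_esu = 4.8032e-10   # electron charge [esu]
m_e   = 9.1094e-28   # electron mass [g]
c     = 2.9979e10    # speed of light [cm/s]

def omega_B(B_gauss):
    """Electron cyclotron frequency e B / (m_e c), in rad/s."""
    return e_esu * B_gauss / (m_e * c)

def n_e_from_omega_p(omega_p):
    """Thermal density implied by omega_p = sqrt(4 pi n_e e^2 / m_e), in cm^-3."""
    return omega_p ** 2 * m_e / (4.0 * math.pi * e_esu ** 2)

print(omega_B(0.65) / 1e6)      # ~11.4  (Flare C field, B = 0.65 G)
print(omega_B(0.075) / 1e6)     # ~1.3   (Flare B field, B = 0.075 G)
print(n_e_from_omega_p(8.4e6))  # ~2e4 cm^-3: implied density (our estimate)
```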
Our two-zone picture, with a small acceleration zone, in which both diffusive shock and shock drift acceleration are active, and a larger radiation/electron cooling zone, extends our prior work in \cite{BBS17} that laid the foundation for this unique combination approach. Our exploration of distinctive 3C 279 flares here during its 2013-2014 active period generates MW spectroscopy and temporal evolution broadly consistent with the observations. Moreover, it attains a consistency with the general conclusions we obtained in \cite{BBS17} for blazars Mrk 501, BL Lac and AO 0235+164, in its ability to constrain core physics parameters of the jet, namely the cyclotron and plasma frequencies. More complete details are laid out in \cite{BB19_ApJ}, setting the scene for the next stage in our program which will include probes of more complex temporal injection profiles, and application to other bright, flaring blazars. \vspace{3pt} \acknowledgements{We thank NASA for support for early parts of this research program through the Astrophysics Theory Program, grant NNX10AC79G. MGB is also grateful for support from the NASA {\it Fermi} Guest Investigator Program through grant 80NSSC18K1711. The work of M. B\"ottcher is supported by the South African Research Chairs Initiative (grant no. 64789) of the Department of Science and Innovation and the National Research Foundation\footnote{Any opinion, finding and conclusion or recommendation expressed in this material is that of the authors and the NRF does not accept any liability in this regard.} of South Africa.}
\section{Introduction } \textcolor{black}{Mobile data traffic is undergoing an unprecedented growth, and it is being further propelled by the proliferation of smart mobile devices, e.g., smart phones and tablets. In particular, the data services subscribed by mobile users have gradually shifted from \textquotedblleft connection-centric\textquotedblright{} communications, e.g., phone calls and text messages, to \textquotedblleft content-centric\textquotedblright{} communications, e.g., multimedia file sharing and video streaming. One main effort to meet such a strong demand is to boost the network capacity via network densification, i.e., to deploy more access points. While this approach is expected to significantly increase the capacity in future 5G networks, it incurs a tremendous demand for backhaul links that connect the access points to the backbone network. Thus, it will cause a heavy financial burden for mobile operators who are required to upgrade the backhaul network, and such a comprehensive approach will not be cost-effective to handle content-centric mobile traffic, which may be bursty and regional. Consequently, a holistic approach is needed and }\textit{\textcolor{black}{cache-enabled content-centric wireless networking}}\textcolor{black}{{} emerges as an ideal solution.} \textcolor{black}{Nowadays, abundant caching storages are available at the }\textcolor{black}{\emph{wireless edge}}\textcolor{black}{, including both base stations (BSs) and user terminals (UTs), which can be used to store popular contents that will be repeatedly requested by users. Since the prices of caching devices, e.g., solid state drives (SSDs), have been coming down year after year, it has become more and more cost-effective to deploy caches instead of laying high-capacity backhaul links \cite{cachebenefit}. Moreover, the ample storages at mobile UTs, currently as large as hundreds of gigabytes, are also potential resources to be utilized for caching. 
Besides reducing the demand and deployment costs of backhaul links, caching popular content is also an effective technique to lower delays and reduce network congestion \cite{cachebenefit2}, since mobile users may acquire the required files from the serving BSs or the proximal UTs directly without connecting to the backbone network.} \textcolor{black}{The idea of content-centric networking has already been explored in wired networks, where named pieces of content are directly routed and delivered at the packet level, and content packets are automatically cached at routers along the delivery path. Accordingly, caching design at the routers, including content placement and update, is crucial to the system performance. Caching at the wireless edge can draw lessons from its wired counterpart, but it also enjoys new features. The broadcast nature of the radio propagation will fundamentally affect the content caching and file delivery, which has recently attracted significant attention. Another important feature of content-centric wireless networks (CCWNs) is user mobility, which has been less well studied. While mobility imposes additional difficulties on caching design in CCWNs, it also brings about new opportunities. User mobility has been proved to be a useful feature for wireless network design, e.g., it has been utilized to improve the routing protocol in wireless ad hoc networks \cite{exintercontactmodel}. Unfortunately, most previous studies on caching design in CCWNs ignored user mobility and assumed fixed network topologies, which cannot capture the actual scenario. There have been initial efforts on caching designs by incorporating user mobility }\cite{femtomobility}\textcolor{black}{. 
However, only some special properties of user mobility patterns were addressed and there is a lack of systematic investigation.} The main objective of this paper is to provide a systematic framework that can take advantage of user mobility to improve the caching efficiency in CCWNs. Specifically, a comprehensive discussion of spatial and temporal properties of user mobility patterns will firstly be provided, each of which will be linked to specific caching design problems. We will then propose mobility-aware caching strategies, with two typical design cases as examples. Finally, we will identify some future research directions. \begin{figure}[!t] \centering \includegraphics[width=4.5in]{model_overall} \caption{A sample \textcolor{black}{cache-enabled CCWN. }A mobile user may download the requested file from the BSs or UTs along its moving path that have this file in cache. Once the requested files match the cached data, transmissions over the backhaul network will be avoided. Otherwise, mobile users have to request from the central controller via backhaul links.} \label{model} \end{figure} \section{Exploiting User Mobility in Cache-Enabled CCWNs } \textcolor{black}{In this section, we will illustrate the importance of considering user mobility when designing caching strategies in CCWNs. A sample cache-enabled CCWN is shown in Fig. \ref{model}, where both BSs and UTs have cache storages and are able to cache some pieces of content from the file library. In the following, we will first introduce the main caching design problems in CCWNs, and then identify important properties of the user mobility patterns and associate them with different caching problems.} \subsection{\textcolor{black}{Key Design Problems of Caching in CCWNs}} \textcolor{black}{The fundamental problem in caching design for CCWNs is to determine where and what to cache. 
The design principles may depend on different types of side information, including }\textit{\textcolor{black}{long-term information}}\textcolor{black}{{} obtained from observations over a long period of time, such as the statistics of users' requests and average communication times with BSs and other UTs, and }\textit{\textcolor{black}{short-term information}}\textcolor{black}{{} generated by instant changes, e.g., instantaneous channel state information and real-time location information. The collection of long-term information incurs a low overhead, while the usage of short-term information can provide better performance but requires frequent update. In the following, we categorize different caching design problems in CCWNs according to the timeliness of the available information.} \subsubsection{\textcolor{black}{Caching Content Placement}} \textcolor{black}{Caching content placement typically relies on long-term system information and is used to determine how to effectively pre-cache content in the available storage. To reduce overhead, the update of side information and caching content will not be very frequent. It is normally assumed that the long-term file popularity distribution is known as a priori, and the network topology can either be fixed or subject to some assumptions in order to simplify the design.} \textcolor{black}{Previous works have provided some insights into caching content placement at BSs. In particular, without cooperation among BSs, the optimal caching strategy is to store the most popular files \cite{cache_coding_Placement}. However, if users are able to access several BSs, each user will see a different but correlated aggregate cache, and in this scenario, allocating files to different BSs becomes nontrivial. Moreover, the coded caching scheme, where segments of Fountain-encoded versions of the original file are cached \cite{cache_coding_Placement}, outperforms the uncoded caching scheme where only complete files are cached. 
By carefully designing the caching content placement via combining multiple files with a given logic operator, different requests can be served by a single multicast transmission \cite{Codedcaching}, which results in a significant performance improvement compared to the uncoded scheme. } \textcolor{black}{Meanwhile, caching content placement at UTs is also attracting noticeable attention. Caching at UTs may allow users to download requested content in a more efficient way with device-to-device (D2D) communications, where proximal users communicate with each other directly. Compared with caching at BSs, the advantages of caching at UTs come from the lower deployment costs and an automatic promotion of the storage capacity when the UT density increases, as the ensemble of UTs forms an aggregate cache; while the drawbacks include the difficulty of motivating UTs to join the aggregate cache, and the more complicated randomness in the D2D scenario. Pioneering works have shed light on caching content placement at UTs \cite{D2D}.} \textcolor{black}{However, it is noted that previous studies rarely considered user mobility, which can be tracked without much difficulty with today's technologies. If we could make use of long-term statistics of user mobility, such as the average steady-state probability distribution over BSs, the efficiency of content caching will be significantly improved.} \subsubsection{Caching Content Update} Though long-term information incurs a low overhead to obtain, it contains less fine grained information, which may also expire after a period of time and thus cannot assure accuracy. For example, the BS-UT or UT-UT connectivity topology may change quickly due to the movement of UTs. Consequently, it may cause significant errors by using the expired long-term information to design caching strategies. 
If short-term information is available, such as the real-time information of the file requests and transmission links, caching content can be updated to provide a better experience for mobile users. In the following, we will introduce two caching content update problems. \paragraph{Adaptive caching} Since caching storage is limited, it is critical to replace the stale caching content to improve caching efficiency. Common adaptive caching schemes to increase the cache hit ratio include replacing the least recently used content and replacing the least likely requested content \cite{videoAmazon}. Another typical application of adaptive caching is to serve the users that follow regular mobility patterns and have highly predictable requirements. When the mobility regularity and request preference of mobile users are known, BSs can update the caching content according to the estimation of future requests. The main challenges come from the accurate prediction of users' future positions and requirements, the frequency with which to conduct the adaptive caching strategy, as well as the replacement priorities for the caching content. \paragraph{Proactive caching} In practice, a user can only download a portion of its requested file rather than the entire file from a BS, as the moving user may not have enough communication time with the BS. Proactive caching aims at providing seamless handover and downloading for users by pre-fetching the requested content at the BSs that will be along the users' future paths with a high probability. Nevertheless, user requests and locations are usually unknown in realistic environments, and thus the accuracy of location prediction is critical to the performance. 
\subsection{Modeling User Mobility Patterns} \begin{figure} \centering \subfigure[User trajectory]{ % \begin{minipage}[b]{0.45\textwidth}% \includegraphics[width=1\textwidth]{Trajectory} % \end{minipage}\label{trace} } % \begin{minipage}[b]{0.45\textwidth}% \subfigure[Cell sojourn times]{ \includegraphics[width=1\textwidth]{sojourn_2} \label{sojourn} } \\ \subfigure[Timeline of two users]{ \includegraphics[width=1\textwidth]{intercontact} \label{intercontact} } % \end{minipage}\caption{The trajectories of two mobile users based on data collected on a university campus. The two users are moving within a $5000\text{ {m}}\times4000\text{ {m}}$ area. We assume that $20$ BSs are deployed regularly in the area, with the cell indices labeled in (a). The average cell sojourn times of these two users, which denote the duration of the users being connected to each BS, are shown in (b). The transmission ranges of two mobile users are assumed to be $200$ m and the timeline of users 1 and 2, including inter-contact times and contact times, is depicted in (c).} \end{figure} As can be inferred from the above discussions, taking user mobility into consideration is critical for the caching design in CCWNs. In this subsection, we will provide detailed descriptions of different user mobility properties, which can be classified into two categories, i.e., the spatial and temporal properties. The spatial properties contain the information of user mobility patterns related to the physical locations, while the temporal properties characterize the time-related features. \subsubsection{Spatial Properties} The mobility pattern of a mobile user can be visualized by the \emph{user trajectory}, i.e., the user's moving path. Crucial information for caching design in CCWNs, e.g., serving BSs, and distances between BSs and mobile users, can be obtained from the trajectories of the mobile users. 
It is an ongoing research topic to investigate realistic models for user trajectory, e.g., the random waypoint model in \cite{randomwaypoint}. As an example, the trajectories of two mobile users are shown in Fig. \ref{trace}, which are based on data collected on a university campus \footnote{I. Rhee, M. Shin, S. Hong, K. Lee, S. Kim, and S. Chong, \textquotedblleft CRAWDAD dataset ncsu/mobilitymodels (v. 2009-07-23),\textquotedblright{} Downloaded from http://crawdad.org/ncsu/mobilitymodels/20090723, Jul. 2009.}. The \emph{cell transition}, which denotes the transition pattern of a user moving from one cell to another, implies the information of serving BSs for each mobile user, which is one of the most critical pieces of information in caching design at BSs. Compared to the user trajectory, the cell transition contains less fine grained information as the moving path inside each cell cannot be specified. It is appropriate to capture the transition property using a Markov chain model \cite{MVchain}, where the number of states equals the number of BSs. In the Markov chain, one state denotes a specific user being served by a given BS, and the transition probabilities represent the probabilities for a specific user moving from the serving area of one BS to that of another BS. Recently, it has been found that user mobility patterns also largely depend on the social relations among mobile users. For example, it was claimed in \cite{sociallink} that the mobile users having relatively strong social ties are more likely to have similar trajectories. In \cite{musolesi2004ad}, Musolesi \emph{et al.} proposed a two-level mobility model, which first establishes a social graph, where the nodes represent mobile users and the weighted edges represent the strength of the social connection between mobile users. Then, \textit{social groups} are built and mobile users in each group move together. Such information will be useful for caching at UTs. 
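The Markov chain model described above amounts to counting empirical cell-to-cell transitions in a user's BS association history. A minimal sketch follows, using a short hypothetical visit sequence rather than the campus data set.

```python
from collections import Counter, defaultdict

# Estimating cell-transition probabilities from a user's BS association
# history, as in the Markov chain cell-transition model.  The visit
# sequence below is illustrative only (cell indices of successive BSs).
visits = [1, 2, 5, 2, 3, 2, 5, 5, 2, 1, 2, 3]

counts = defaultdict(Counter)
for a, b in zip(visits, visits[1:]):
    counts[a][b] += 1   # count transitions cell a -> cell b

# Normalize each row to obtain empirical transition probabilities.
P = {a: {b: n / sum(row.values()) for b, n in row.items()}
     for a, row in counts.items()}

print(P[2])  # empirical transition probabilities out of cell 2
```

In practice such estimates would be accumulated over long observation windows, which is exactly the kind of long-term side information discussed earlier.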
\subsubsection{Temporal Properties} To capture the information of the frequency and duration that two mobile users are connected with each other, the timeline of an arbitrary pair of mobile users is represented by \emph{contact times} and \emph{inter-contact times}, where the \emph{contact times} are defined as the time intervals during which the mobile users are within the transmission range, and the \emph{inter-contact times} are defined as the time intervals between two consecutive contact times. The timeline of two users shown in Fig. \ref{trace} is illustrated in Fig. \ref{intercontact}. Such a mobility model has been applied to routing problems in ad hoc networks. For instance, in \cite{exintercontactmodel}, Conan \emph{et al.} modeled locations of contact times in the timeline of each pair of mobile users as a Poisson Process so as to capture the average pairwise inter-contact times in an ad hoc network. The \emph{cell sojourn time} denotes the time duration of a specific user served by a given BS, which may affect the amount of data that this user can receive from the BS. Fig. \ref{sojourn} shows the cell sojourn times of the two users whose trajectories are shown in Fig. \ref{trace}. Specifically, in \cite{MVchain}, Lee \emph{et al.} provided an approach to obtain the sojourn time distributions according to the associated moving history of mobile users. The user mobility pattern often possesses a periodic property, which can be exploited to tackle the caching update problem. The \emph{return time}, which is defined as the time for an arbitrary mobile user to return to a previously visited location, is considered as a measure to reflect the periodic property and the frequency of mobile users to revisit a given area. In \cite{returntime}, Gonzales \emph{et al.} measured the distribution of the return time and found that the peaks of the return time probability are at $24$ h, $48$ h and $72$ h. 
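Under the Poisson contact model of \cite{exintercontactmodel}, the inter-contact times of a user pair are exponentially distributed, so their sample mean converges to the reciprocal of the contact rate. A small simulation sketch, with an assumed, purely illustrative contact rate:

```python
import random

# Sketch of the Poisson contact model: contacts between a user pair arrive
# as a Poisson process, so inter-contact times are exponential with mean
# 1/rate.  The rate here is an arbitrary illustrative choice.
random.seed(0)
rate = 1.0 / 6.0   # assumed: one contact every 6 hours on average

inter_contacts = [random.expovariate(rate) for _ in range(100000)]
mean_gap = sum(inter_contacts) / len(inter_contacts)
print(mean_gap)    # close to the 6-hour mean inter-contact time
```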
\subsection{Exploiting Mobility for Caching in CCWNs} Building upon the information given in the above two subsections, potential approaches will now be proposed to take advantage of user mobility patterns to resolve different caching design problems in CCWNs, as summarized in Table \ref{tab_mobility}. \subsubsection{Caching content placement at BSs} In CCWNs, as a user moves along a particular path, the user may download the requested file from all the BSs along this path, and different BSs may cooperatively cache this file to improve the efficiency. For this purpose, the statistical and predictive information of the BSs along the user trajectory, which can be obtained based on user trajectory or cell transition probabilities, will be needed. Compared to cell transition probabilities, the user trajectory provides additional information, i.e., different transmission distances from BSs in different cells, which can help better design the BS cooperative caching in CCWNs. For example, different transmission distances may result in different transmission rates, which will affect the amount of data that can be downloaded from different BSs. Furthermore, the cell sojourn time is also a critical factor to determine the amount of data that can be delivered, and thus will also affect the caching content placement at BSs. \subsubsection{Caching content placement at UTs} By enabling caching at UTs, mobile users may get the requested files via proximal D2D links. For caching design in such a setting, the information related to inter-user contacts is essential. In particular, inter-contact times and contact times will be valuable information, which will be further illustrated in the design examples in the next section. In addition, social relations may help to decompose a large network into several small social groups, and thus reduce the complexity of caching design. 
Meanwhile, social groups also imply some contact information, i.e., mobile users in the same social group are more likely to have more contacts \cite{social-allocation}. Thus, social group information can also be utilized to design caching content placement at UTs. \subsubsection{Adaptive caching} The caching content can be adjusted adaptively based on the periodical mobility pattern, for which the knowledge of return times will be very useful. Moreover, mobile users in different social groups may have different content preferences. Thus, the mobility pattern of each social group can be utilized to improve the adaptive caching design. For example, in a restaurant, there may be several customer groups with different content preferences during different time periods, e.g., elders may enjoy the morning tea, students will have lunch with friends, and office workers may have dinner together. The BSs around the restaurant may perform adaptive caching updates accordingly. \subsubsection{Proactive caching} If the user trajectory or cell transition property can be estimated based on past data, the future serving BSs for mobile users can be predicted. In this way, if a mobile user requests a certain file, the BSs that are predicted to be on its future path may proactively cache the requested file, each with a certain segment, and then the user can download the file when passing by. While it may slightly increase the backhaul traffic, such proactive caching can significantly improve the caching efficiency and reduce download latency. The above proposals are by no means complete. Nevertheless, they clearly indicate the great potential and importance of mobility-aware caching in CCWNs. We hope this discussion will inspire more follow-up investigations. 
\begin{table} \caption{Exploiting Mobility for Caching in CCWNs} \label{tab_mobility} \centering % \begin{tabular}{|p{1in}<{\centering}|p{0.69in}<{\centering}|p{0.69in}<{\centering}|p{0.69in}<{\centering}|p{0.69in}<{\centering}|p{0.69in}<{\centering}|p{0.69in}<{\centering}|} \hline & \multicolumn{3}{c|}{Spatial Properties} & \multicolumn{3}{c|}{Temporal Properties}\tabularnewline \cline{2-7} & User trajectory & Cell transition & Social group & User inter-contact time & Cell sojourn time & Return time \tabularnewline \hline Caching content placement at BSs & \ding{52} & \ding{52} & \textbf{---} & \textbf{---} & \ding{52} & \textbf{---} \tabularnewline \hline Caching content placement at UTs & \textbf{---} & \textbf{---} & \ding{52} & \ding{52} & \textbf{---} & \textbf{---} \tabularnewline \hline Adaptive caching & \textbf{---} & \textbf{---} & \ding{52} & \textbf{---} & \textbf{---} & \ding{52} \tabularnewline \hline Proactive caching & \ding{52} & \ding{52} & \textbf{---} & \textbf{---} & \textbf{---} & \textbf{---} \tabularnewline \hline \end{tabular} \begin{tablenotes} \item General: '\ding{52} ' means that the mobility property can be utilized in the corresponding caching design problem, and '\textbf{---}' means that the mobility property may not be utilized. \end{tablenotes} \end{table} \section{Mobility-Aware Caching Content Placement} In this section, we present two specific design examples for mobility-aware caching content placement, including caching at BSs and caching at UTs. Sample numerical results will be provided to validate the effectiveness of utilizing user mobility patterns in wireless caching design problems. 
\begin{figure} \centering \subfigure[Caching at BSs]{ % \begin{minipage}[b]{0.5\textwidth}% \includegraphics[width=1\textwidth]{Model_BS} % \end{minipage}\label{trace-1} }\centering \subfigure[Caching at UTs]{ % \begin{minipage}[b]{0.45\textwidth}% \includegraphics[width=1\textwidth]{model_D2D} % \end{minipage}\label{trace-1-1} } \caption{Wireless Caching Networks. BS caching is shown in (a), where a user requests a file and passes by BSs numbered \{1, 2, 5, 2, 3\} in sequence. The user can obtain the requested file by collecting data from these BSs. D2D caching is shown in (b), where UT 1 requests a file, and it has not stored the file in its own cache. After a period of time, UT 1 encounters UT 3 which stores the requested file, and it downloads the file from UT 3.} \end{figure} \subsection{Mobility-Aware Caching at BSs} \textcolor{black}{We first consider utilizing the cell sojourn time information to design caching content placement at BSs, which may be macro BSs or femto-cell BSs. A sample network is shown in Fig. \ref{trace-1}. For simplicity, we assume the downlink rate for each user is the same while passing by each BS, and cell sojourn times are estimated based on available data. Mobile users request files from the file library according to their demands, and the file popularity is assumed to follow a Zipf distribution. Both uncoded and coded caching schemes are considered. In the uncoded case, we assume that each file is either fully stored or not stored at each BS. In the coded case, rateless fountain codes are applied, where each BS may store part of a coded file, and the whole file can be recovered by collecting enough coded messages of that file \cite{cache_coding_Placement}. When a mobile user requests a file, the user will try to collect the requested file while passing by each BS. 
The proportion of the requested file that can be downloaded from a BS is limited by the transmission rate and the sojourn time in this cell, as well as the proportion of the requested file stored at this BS. We aim to minimize the cache failure probability, which is the probability that the mobile users cannot get the requested files from cached contents at BSs. The coded caching placement problem can be formulated as a convex optimization problem, while the uncoded caching placement can be obtained by solving a mixed integer programming (MIP) problem.} \textcolor{black}{We evaluate the performance of the proposed mobility-aware caching strategies based on a real-life data set of user mobility, which was obtained from the wireless network at Dartmouth College }\footnote{\textcolor{black}{D. Kotz, T. Henderson, I. Abyzov, and J. Yeo, \textquotedblleft CRAWDAD dataset dartmouth/campus (v. 2009-09-09),\textquotedblright{} Downloaded from http://crawdad.org/dartmouth/campus/20090909, Sept. 2009.}}\textcolor{black}{. The following caching placement strategies are compared: } \begin{itemize} \item \textcolor{black}{{Mobility-aware coded caching strategy}, which is the proposed coded caching strategy obtained by solving a convex optimization problem.} \item \textcolor{black}{{Mobility-aware uncoded caching strategy}, which is the proposed uncoded caching strategy obtained by solving an MIP problem.} \item \textcolor{black}{{MPC strategy}, which is a heuristic caching strategy, for which each BS stores the most popular contents \cite{videoAmazon}. } \end{itemize} \textcolor{black}{The comparison is shown in Fig. \ref{BScachingdesign}, where a larger value of the Zipf parameter $\gamma_{p}$ implies that the requests from mobile users are more concentrated on the popular files. We see that the mobility-aware caching strategies outperform the heuristic caching strategy, and the performance gap expands with $\gamma_{p}$, which demonstrates the value of the mobility information. 
Moreover, the coded caching strategy performs better than the uncoded caching strategy, which validates the advantage of coded caching.} There are many interesting problems for further investigation. For example, the user trajectory can be utilized to account for varying download rates, which affect the amount of data obtained in different cells. In addition, based on the user trajectory, it is possible to jointly address caching and interference management. Another challenge is that many BS caching problems are typically NP-hard, and thus time-efficient sub-optimal algorithms are needed. \begin{figure}[!t] \centering \includegraphics[width=3.4in]{BScaching} \caption{Comparison of different BS caching content placement strategies, with 6 BSs and a file library of 100 files, while the caching storage capacity of each BS is the size of one file.} \label{BScachingdesign} \end{figure} \subsection{Mobility-Aware Caching at UTs} In this subsection, we focus on caching at UTs. We consider taking advantage of average inter-contact times among mobile users to improve the caching efficiency at UTs. An illustrative example is shown in Fig. \ref{trace-1-1}. \textcolor{black}{The contact times of any two mobile users are modeled as a Poisson process, as in \cite{exintercontactmodel}, where the intensity is estimated from the historical data. For simplicity, the contact processes of different pairs of mobile users are assumed to be independent, and each file is assumed to be either completely stored or not stored at each UT. Mobile users request files from the file library according to their demands, and the file popularity is assumed to follow a Zipf distribution.} When a mobile user generates a request, it will first try to find the requested file in its own cache, and will then wait to encounter users storing the requested file. 
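Before moving on, the intuition behind the gap in Fig. \ref{BScachingdesign} can be illustrated with a deliberately simplified calculation (ours, not the optimization formulated above: it keeps only the Zipf demand and cache coverage, and ignores transmission rates, sojourn times, and coding):

```python
import numpy as np

def zipf_popularity(num_files, gamma_p):
    """Zipf request distribution: p_i proportional to i^(-gamma_p)."""
    ranks = np.arange(1, num_files + 1)
    weights = ranks ** (-float(gamma_p))
    return weights / weights.sum()

def failure_prob_mpc(pop, cache_size):
    """Every BS stores the same `cache_size` most popular files (MPC),
    so a request fails iff the file is outside this common cached set."""
    return 1.0 - pop[:cache_size].sum()

def failure_prob_path_diverse(pop, num_bs, cache_size):
    """If the user is known to pass all `num_bs` BSs, caching disjoint
    slices of the popularity ranking covers num_bs*cache_size files."""
    return 1.0 - pop[:num_bs * cache_size].sum()

pop = zipf_popularity(100, 1.0)
print(failure_prob_mpc(pop, 1), failure_prob_path_diverse(pop, 6, 1))
```

In this toy model, knowing that the trajectory visits all six BSs lets them cache disjoint slices of the popularity ranking, so the union of caches covers six files instead of one; the actual mobility-aware strategies above additionally optimize over sojourn times and coded fractions.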
The \emph{delay time} is defined as the time between when a user requests a file and when it encounters the first user storing the requested file. We assume that if the mobile user stores the requested file or its delay time is within a pre-determined delay threshold, it will be served via D2D links; otherwise, it will get the file from the BS. To offload the traffic from BSs and encourage proximal D2D transmissions, we set the objective to maximize the \textit{\textcolor{black}{data offloading ratio,}}\textcolor{black}{{} which is the fraction of users that can get requested files via D2D links. }This turns out to be a challenging problem and falls in the category of monotone submodular maximization over a matroid constraint, which can be solved by a greedy algorithm with an approximation ratio of $\frac{1}{2}$. The performance of mobility-aware caching at UTs is evaluated based on a real-life data set, which was collected at the INFOCOM conference\footnote{J. Scott, R. Gass, J. Crowcroft, P. Hui, C. Diot, and A. Chaintreau, CRAWDAD dataset cambridge/haggle (v. 2009\nobreakdash-05\nobreakdash-29), downloaded from http://crawdad.org/cambridge/haggle/20090529, doi:10.15783/C70011, May 2009. } \cite{reallifedata}. Considering that most requests may occur in the daytime, we generate average inter-contact times according to the daytime data during the first day of the conference. The following caching placement strategies are compared: \begin{itemize} \item {Mobility-aware greedy caching strategy}, which is the proposed caching strategy using a greedy algorithm. \item {Mobility-aware random caching strategy}, which is similar to the random caching strategy proposed in \cite{D2D}. In this strategy, each UT caches files according to a Zipf distribution with parameter $\gamma_{c}$. The optimal value of $\gamma_{c}$, which maximizes the expected fraction of users that can get requested files via D2D links, is obtained by a line search. 
\item {MPC strategy}, which is the same as the one used in Fig. \ref{BScachingdesign}. \end{itemize} Based on the data during the daytime of the second day of the conference, the performance of the three caching strategies is compared in Fig. \ref{D2Dcachingdesign} by varying the file request parameter. \textcolor{black}{It shows that both mobility-aware caching strategies significantly outperform the MPC strategy, and the performance gain increases as $\gamma_{c}$ increases. Furthermore, the mobility-aware greedy caching strategy has a better performance than the mobility-aware random caching strategy, since the former strategy incorporates average pairwise inter-contact times more explicitly and allows more optimization variables. Through extensive simulations, we also observe that, as the number of users increases, the data offloading ratio using mobility-aware caching strategies increases, while that of the MPC strategy remains the same. Meanwhile, using mobility-aware strategies, the data offloading ratio increases as the user mobility increases, and the greedy caching strategy always outperforms the random one. }This implies that a better utilization of user mobility patterns can further improve the caching efficiency. While this initial study provides promising results, many challenges remain. For example, since the number of mobile users in a CCWN is usually very large, collecting the pairwise inter-contact times will cause a high overhead. One potential solution is to decompose the large number of mobile users into several social groups, and then design caching content placement at UTs based on the inter-contact times of mobile users within the same social group. Moreover, coded caching strategies can also be applied, which is a promising approach to further optimize the caching efficiency. 
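A minimal executable sketch of the greedy placement described above (our own simplified instance, with uniform pairwise contact intensities, a capacity of one file per UT, and illustrative parameter values) shows the mechanics of the submodular objective and the greedy rule:

```python
import numpy as np

def offload_ratio(cache, rates, pop, threshold):
    """Expected data offloading ratio for a 0/1 placement `cache` (U x F):
    a request (user u, file f) is offloaded if u caches f, or if u meets
    some holder of f within `threshold` (superposed Poisson contacts give
    an exponential delay with rate = sum of pairwise intensities)."""
    total = 0.0
    for u in range(cache.shape[0]):
        lam = rates[u] @ cache                    # (F,) total contact rate to holders
        p_d2d = 1.0 - np.exp(-lam * threshold)    # served within the delay bound
        p_serve = np.where(cache[u] == 1, 1.0, p_d2d)
        total += pop @ p_serve
    return total / cache.shape[0]

def greedy_placement(rates, pop, capacity, threshold):
    """Greedy for this monotone submodular objective under the partition
    matroid 'each UT caches at most `capacity` files'; the standard
    guarantee is a 1/2-approximation."""
    U, F = rates.shape[0], pop.size
    cache = np.zeros((U, F), dtype=int)
    for _ in range(U * capacity):
        base = offload_ratio(cache, rates, pop, threshold)
        best, best_gain = None, 0.0
        for u in range(U):
            if cache[u].sum() >= capacity:
                continue
            for f in np.flatnonzero(cache[u] == 0):
                cache[u, f] = 1
                gain = offload_ratio(cache, rates, pop, threshold) - base
                cache[u, f] = 0
                if gain > best_gain:
                    best, best_gain = (u, f), gain
        if best is None:
            break
        cache[best] = 1
    return cache

U, F = 5, 8
rates = np.full((U, U), 0.3)
np.fill_diagonal(rates, 0.0)                      # no self-contacts
pop = np.arange(1, F + 1) ** -0.8
pop = pop / pop.sum()                             # Zipf demand
greedy = greedy_placement(rates, pop, 1, 2.0)
mpc = np.zeros((U, F), dtype=int); mpc[:, 0] = 1  # everyone caches the top file
print(offload_ratio(greedy, rates, pop, 2.0), offload_ratio(mpc, rates, pop, 2.0))
```

Under the Poisson contact model, the rate of meeting \emph{some} holder of a file is the sum of the pairwise intensities, so the within-deadline service probability $1-e^{-\lambda T}$ is concave in that rate; this is what makes the objective monotone submodular and the $\frac{1}{2}$-approximation of the greedy rule applicable.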
\begin{figure}[!t] \centering \includegraphics[width=3.4in]{D2Dcaching_3} \caption{Comparison of different caching content placement strategies at UTs with 78 mobile users and a file library consisting of 1000 files, while each UT can cache at most one file.} \label{D2Dcachingdesign} \end{figure} \section{Conclusions and Future Directions} In this paper, we conducted a systematic study that investigated the exploitation of user mobility information in cache-enabled CCWNs. Useful spatial and temporal mobility properties were identified and linked to key caching design problems. Through two design examples, the advantages and effectiveness of mobility-aware caching were demonstrated. To fully exploit mobility information in CCWNs, more work will be needed, and the following are some potential future research directions. \begin{itemize} \item \textit{Joint caching content placement at the wireless edge}: In practice, many caching systems consist of more than one layer of caches, which leads to a more complicated hierarchical caching architecture. In CCWNs, while most existing works, as well as our discussion in this paper, treated caching at BSs and UTs as separate problems, a joint design of caching at both BSs and UTs will be essential to further improve the system performance. \item \textit{Dynamic user caching capacities}: Unlike BSs, the caching capacities at UTs may not be fixed, since they are related to the storage usage of mobile users, which differs from user to user and changes over time. It is thus important to investigate how to adaptively cache according to the dynamic user caching capacities, while also taking user mobility into consideration. \item \textit{Big data analytics for mobility information extraction}: With the explosive growth of mobile devices, collecting user mobility information will generate huge amounts of data. 
Thus, big data analytics to extract the required mobility information is another challenge in mobility-aware caching. Meanwhile, accurate prediction is also critical. Though some existing user mobility models can predict the future mobility behavior via historical data, e.g., the Markov chain model in \cite{MVchain} can jointly predict the cell transition and cell sojourn time, more work will be needed, e.g., on how to predict the user trajectory. \textcolor{black}{It is also important to investigate how different mobility models will affect the performance of caching strategies.} \item \textit{Privacy issues}: In order to take advantage of the user mobility pattern, some personal information, e.g., home locations and workplace locations, may be divulged in the collected mobility information. This will certainly raise privacy concerns. Thus, how to extract the useful user mobility information without compromising individual privacy is important. Location obfuscation and fake location injection mechanisms may serve as potential approaches for anonymous traces. \end{itemize} \bibliographystyle{IEEEtran}
\section{Introduction}\label{sec:int} In this paper we consider circular and radial motions in combs of circular geometry, see Fig.~\ref{fig:fig1}, where the radii are continuously distributed over the circle, and the circular motion takes place only for the fixed radius $r=R$. Fractional diffusion in this geometry has been studied recently, where both outward and inward radial diffusion has been considered analytically \cite{FaLiZhLi2019} and numerically \cite{LiFaLi2020}. The finite time evolution of both the angular and radial probability distribution functions, as well as the mean squared displacement, has been obtained analytically \cite{FaLiZhLi2019} and numerically \cite{FaLiZhLi2019,LiFaLi2020} for different realizations of the boundary conditions for both the angular and radial motions. Further analytical study of the system is important for understanding the asymptotic transport in the system. We consider two possible boundary conditions for the angular diffusion. The first one corresponds to the reflecting boundary condition, and the second one corresponds to the periodic boundary condition. Our study of anomalous diffusion in this comb geometry is also motivated by the consideration of an idealized radial transport, which can be related to the radial transport model for the Tore Supra tokamak, considered in Ref. \cite{MiRa2018}. We, however, disregard the avalanche dynamics, described by L\'evy flights, and concentrate our attention on the geometry impact on the topologically restricted transport in the framework of the circular comb model, which we call here the ``umbrella comb model''. It is also related to circular anomalous diffusion in the presence of inhomogeneous magnetic fields \cite{Gr2008}. 
Anomalous transport in this umbrella comb is described by a probability distribution function (PDF) $P(r,\phi,t)$, in polar coordinates, to find a particle at the position $(r,\phi)$ at time $t$, in the framework of the Fokker-Planck equation \begin{subequations}\label{int-uc-1} \begin{align} \partial_tP=\Delta P\, , \label{int-uc-1a} \\ \Delta=D_{\phi}\frac{1}{r^2}\partial_{\phi}^2 +D_r\frac{1}{r}\partial_r r\partial_r\, . \label{int-uc-1b} \end{align} \end{subequations} Here $D_{\phi}(r)=D_1\delta(r-R)$ and $D_r=D_r(r)$ are the diffusion coefficients in the angular and radial directions, respectively. Note that, for the singular $D_{\phi}$, the transport in the angular direction takes place at $r=R$ only. The radial diffusion coefficient $D_r(r)$ is a function of the radius, and this dependence is specified for every scenario separately. We consider two scenarios with different realizations of the radial diffusion. In the first scenario we consider geometric Brownian diffusion along the radii, which results from the conformal map of normal diffusion in the $x-y$ plane comb to the circular motion, as shown in Fig.~\ref{fig:fig1}. In the general case of the conformal map realization, one imposes the periodic boundary conditions for the circular motion at $\phi=\pm\pi$, namely $P(R,\phi=\pi,t)=P(R,\phi=-\pi,t)$ for the PDF, and the shifting boundary conditions for the probability current, $\partial_{\phi}P(R,\phi=\pi,t)=-\partial_{\phi}P(R,\phi=-\pi,t)$. In this case, the amplitudes of the diffusive currents at $\phi=\pm\pi$, clockwise and counterclockwise, are equal to each other. Due to this symmetry at $\phi=\pm\pi$, a cut along the $\phi=\pi$ ray can be performed. Note that this scenario results from the possible symmetry with respect to the $x$ axis for the angular diffusion. It is supported by the symmetric diffusion obtained numerically in Ref. \cite{LiFaLi2020}. The radial diffusion coefficient is $D_r(r)=D_2r^2$ as the result of the conformal map. 
We, however, consider reflecting boundaries at $\phi=\pm\pi$, motivated by dynamical chaos \cite{LiLi83}. That is, there is an infinite wall\footnote{Note that this specific choice of the boundary condition can be replaced by a delta potential, which affects the circular diffusion as in the Azbel'-Kaner effect \cite{Io94}, or in the chaotic motion of a persistent current \cite{Io95}.} at $\phi=\pm \pi$, or a cut along the $\phi=\pi$ ray, where $\partial_{\phi}P(R,\phi=\pm \pi,t)=0$. The second scenario corresponds to the realization of normal Brownian diffusion along the radii, when the radial diffusion coefficient is taken to be a constant, $D_r=\mathrm{const}$. In this case, we consider the periodic boundary condition $P(R,\phi=\pi,t)=P(R,\phi=-\pi,t)$. The zero boundary conditions in the radial direction will be specified separately for each angular scenario. In the sequel, the section titles correspond to the different boundary conditions. However, one should bear in mind that the difference between these scenarios is due to their radial diffusivity. That is, the first one is a consequence of the conformal mapping of the $x-y$ comb, which leads to an inhomogeneous (space dependent) radial diffusivity, while the second one is due to the $r-\phi$ comb constraint with a constant diffusion coefficient in the radial direction. \begin{figure}[htbp] \includegraphics[width=1.0\hsize]{./Fig1.eps} \caption{Schematic picture of an umbrella comb (a) and its conformal mapping into the $x-y$ strip comb (b). The radii are continuously distributed over the circle of the radius $R$, and the angular motion is possible only at $r=R$. 
Correspondingly, the $x$ fingers are continuously distributed along the $y$ backbone.} \label{fig:fig1} \end{figure} \section{Reflecting boundary conditions}\label{sec:rbc} In this section, we concentrate our attention on the geometry impact in the framework of a standard comb model, which can be mapped onto the circle and vice versa in the conformal way. Then the complex plane $w=(r,\phi)$ is mapped on the complex plane $z=(x,y)$. The reflecting boundary conditions play an important role in this conformal mapping. Indeed, taking a cut along $\phi=\pi$ yields the reflecting boundaries at $\phi=\pm\pi$. Then we have $x=\ln (r/R)$ and $y=R\phi$ \cite{LavrShab87}; the map is shown in Fig. \ref{fig:fig1}. The comb model, which describes anomalous diffusion in the $x-y$ strip shown in Fig.~\ref{fig:fig1}, reads \begin{equation}\label{rbc-1} \partial_tP(x,y,t)=D_1\delta(x)\partial_y^2P(x,y,t) +D_2\partial_x^2P(x,y,t)\, . \end{equation} The boundary and initial conditions are $P(x=\pm\infty,y,t)=\partial_xP(x=\pm\infty,y,t)=0$, $\partial_yP(x,y=\pm \pi R,t)=0$, and $P(x,y,t=0)=\delta(x)\delta(y)$, respectively, and these conditions reflect the boundaries and the initial condition in the polar coordinates as well. The radial motion in the umbrella comb in Eq. \eqref{int-uc-1a} corresponds to the dilation (contraction) operator $D_2(r\partial_r)^2$, which results from the conformal map $D_2\partial_x^2\rightarrow D_2(r\partial_r)^2$ with the diffusion coefficient $D_2\rightarrow D_r=D_2r^2$. This inhomogeneous diffusion results from the \textit{conformal map}, corresponds to a multiplicative white noise, and is known as the so-called geometric, or exponential, Brownian motion\footnote{For $r>0$, it describes, \textit{e.g.}, a stock price behavior as a Wiener process for $x=\log(r)$, which is known as the Black-Scholes model \cite{BlSch73}. 
In the present consideration, it can be considered as an exponential instability of plasma in the radial direction in tokamaks \cite{MiRa2018}. Note that the dilation operator in dynamical systems relates to an inverted quartic potential, while in the diffusion equation it appears due to an inverted harmonic oscillator \cite{BhKhLa1995,NoVo1997,BeVi2003,Io2013}. This leads to the dilation-contraction operator in the radial diffusion equation in Ref. \cite{MiRa2018}, where it appears due to a sawtooth field.} \cite{Ross2019}. We solve Eq. \eqref{rbc-1} by standard procedures as follows. Performing the Laplace transformation ${\cal L}[P(t)](s)=\tilde{P}(s)$ and substituting it in Eq. \eqref{rbc-1}, one has \begin{subequations}\label{rbc-2} \begin{align} \tilde{P}(x,y,s)=e^{-|x|\sqrt{s/D_2}}f(y,s)\label{rbc-2a}\, , \\ D_1\partial_y^2f(y,s)-2\sqrt{sD_2}f(y,s)+\delta(y/R)=0 \label{rbc-2b}\, . \end{align} \end{subequations} Due to the reflecting boundary conditions $\partial_yf(y=\pm\pi R,s)=0$, the solution $f(y,s)$ of Eq. \eqref{rbc-2b} is considered as the superposition \begin{equation}\label{rbc-3} f(y,s)=\frac{1}{\sqrt{2\pi R}}\sum_{k=0}^{\infty}f_k(s)\cos(ky/R)\, . \end{equation} The initial time backbone dynamics is estimated in \ref{sec:app-A}, and the backbone PDF, due to Eq. \eqref{A5}, consists of two terms \begin{multline}\label{rbc-3-4} f(y,t)=P(x=0,y,t)=\frac{t^{-1/2}}{4\pi R\sqrt{\pi D_2}} \sum_{n=0}^{\infty}\cos(ny/R) - \\ -\frac{t^{-1/2}D_{\frac{1}{2}}}{4\pi R\sqrt{\pi D_2}} \sum_{n=1}^{\infty}\cos(ny/R)n^2e^{-\kappa\pi n^2} = \\ =\frac{t^{-1/2}}{4\pi R\sqrt{\pi D_2}}(\delta(y/R)+1) +\frac{t^{-1/2}D_{\frac{1}{2}}}{4\pi^2 R\sqrt{\pi D_2}} \frac{d}{d\kappa}\vartheta_3(y/R,\kappa)\, , \end{multline} where $\kappa=2D_{\frac{1}{2}}t^{\frac{1}{2}}/\pi^{3/2}$. The first term in Eq. \eqref{rbc-3-4} relates to the pinning initial condition, while the second term yields the stationary solution in the form of the theta function $\vartheta_3(y/R,\kappa)$ \cite{BaEr55}. 
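The stationary series entering Eq. \eqref{rbc-3-4} converges very fast because of the Gaussian factor and is easy to evaluate numerically (a sketch with our own truncation and parameter values; since theta-function conventions differ between references, we work with the bare series $\sum_{n\ge1}n^2e^{-\pi\kappa n^2}\cos(nv)$, to which $d\vartheta_3/d(\pi\kappa)$ is proportional):

```python
import numpy as np

def theta_series(v, kappa, nmax=100):
    """Truncated series sum_{n=1}^{nmax} n^2 exp(-pi*kappa*n^2) cos(n*v);
    the Gaussian factor makes the truncation error negligible for kappa > 0."""
    n = np.arange(1, nmax + 1)
    return float(np.sum(n**2 * np.exp(-np.pi * kappa * n**2) * np.cos(n * v)))

kappa = 0.5 / np.pi          # the value pi*kappa = 0.5 used in the figure below
phi = np.linspace(-np.pi, np.pi, 201)
profile = [theta_series(v, kappa) for v in phi]
```

The resulting profile is even in $v=\phi$ and peaked at $\phi=0$, in line with the stationary angular distribution discussed here.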
A typical behavior of $\frac{d}{d\kappa}\vartheta_3(y/R,\kappa)$ for $\pi\kappa=0.5$ is shown in Fig. \ref{fig:fig_theta}. \begin{figure}[ht] \includegraphics[width=0.7\hsize]{./fig_theta.eps} \caption{An example of the derivative of the theta function with respect to $\pi\kappa$ for $\pi\kappa=0.5$: $\frac{d\,\vartheta_3(v,\kappa)}{d(\pi\kappa)}$, where $v=y/R=\phi$. Here we also use the definition of the theta function $\vartheta_3(v,\kappa)$ \cite{BaEr55}. } \label{fig:fig_theta} \end{figure} It should be admitted that in the large time asymptotics, diffusion in the fingers strongly affects the anomalous diffusion in the backbone, and the former should be taken into account. Therefore, according to the Laplace inversion, the solution of Eq. \eqref{rbc-1} reads \begin{multline}\label{rbc-4} P(x,y,t)=\frac{1}{\sqrt{2\pi R}}\sum_{k=0}^{\infty}\cos(ky/R) {\cal L}^{-1}\left[f_k(s)e^{-|x|\sqrt{s/D_2}}\right] = \\ =\frac{1}{\sqrt{2\pi R}}\sum_{k=0}^{\infty}\cos(ky/R){\cal L}^{-1} \left[ \frac{e^{-|x|\sqrt{s/D_2}}/\sqrt{D_2}}{2\left(\sqrt{s} +k^2D_{\frac{1}{2}}\right)} \right] = \\ =\frac{1}{2\pi \sqrt{2RD_2 t}}e^{-\frac{x^2}{4D_2t}}+ \bar{P}(x,y,t)\, , \end{multline} where $D_{\frac{1}{2}}=\frac{D_1}{2\sqrt{D_2}}$. The term $\bar{P}(x,y,t)$ is estimated for the large time in \ref{sec:app-B} and reads \begin{equation}\label{rbc-5} \bar{P}(x,y,t)= \frac{(2-\sqrt{\pi})}{4\pi\sqrt{2 R D_2 t}} e^{-\frac{x^2}{4D_2t}}\delta(y/R) +\frac{\pi^{3/2}|x|}{4D_1\sqrt{2 D_2R t^3}} B_2\left(\frac{y}{2\pi R}\right)e^{-\frac{x^2}{4D_2t}}\, , \end{equation} where $B_2(z)$ is a shifted Bernoulli polynomial \cite{AbSt72} defined on $z\in (-1/2,\,1/2)$, see Fig. \ref{fig:fig_Bernoul1}. \begin{figure}[ht] \includegraphics[width=0.9\hsize]{./fig_Bernoul1} \caption{Bernoulli polynomial \cite{AbSt72} $B_2(z)=z^2-z+1/6$ for $z\in [0,\,1]$ on panel (a), which is also the result of the summation in Eq. \eqref{B4}. The result of the same summation in Eq. \eqref{B4} for $z\in [-1/2,\,1/2]$ is shown on panel (b). 
The part for $z\in [0,\,1/2]$ coincides with the left branch of $B_2(z)$, while the part for $z\in [-1/2,\,0]$ corresponds to the branch of $B_2(z)$ for $z\in [1/2,\,1]$ shifted by $1$. } \label{fig:fig_Bernoul1} \end{figure} In the polar coordinates the obtained result in Eqs. \eqref{rbc-4} and \eqref{rbc-5} reads \begin{multline}\label{rbc-6} P(r,\phi,t)=\frac{1}{2\pi \sqrt{2R D_2t}}e^{-\frac{\ln^2(r/R)}{4D_2t}} \left[1+\frac{2-\sqrt{\pi}}{2}\delta(\phi)\right] + \\ +\frac{\pi^{3/2}|\ln(r/R)|}{4D_1\sqrt{2 D_2R t^3}} B_2\left(\frac{\phi}{2\pi }\right)e^{-\frac{\ln^2(r/R)}{4D_2t}}\, . \end{multline} As obtained, the circular motion is not random due to reflections at the boundaries. It consists of two stationary distributions: the first one is the initial condition, which relaxes by a power law as $t^{-1/2}$, and the second one is given by the Bernoulli polynomial $B_2(\phi/2\pi)$, which interacts with the radial motion. The radial motion is random and corresponds to the geometric Brownian motion, which is described by the log-normal distribution \cite{Ross2019} and leads to the exponential spreading along the radii with the mean squared displacement (MSD) $\langle r^{2}(t)\rangle\sim e^{t}$. This dominant process is also corrected by the L\'evy-Smirnov distribution with respect to $\frac{t}{\ln(r/R)}$, see, \textit{e.g.}, \cite{UcSi13,MeFeHo10}. Concluding this section, it is worth stressing that the geometric Brownian motion is the geometry effect of the conformal mapping of the ($x-y$) comb model \eqref{rbc-1} onto the circular geometry comb by conformal gluing of the backbone ends. 
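The exponential radial spreading quoted above is easy to confirm by a short Monte Carlo experiment (a sketch with parameter values of our own choosing): in the variable $x=\ln(r/R)$ the radial motion is free diffusion with coefficient $D_2$ started at $x=0$, i.e., at $r=R$, so $r=Re^x$ is log-normal with $\langle r^2(t)\rangle=R^2e^{4D_2t}$:

```python
import numpy as np

rng = np.random.default_rng(1)
R, D2, t, n = 1.0, 0.5, 0.25, 400_000

# x = ln(r/R) performs free diffusion: x(t) ~ Normal(0, 2*D2*t)
x = rng.normal(0.0, np.sqrt(2.0 * D2 * t), size=n)
r = R * np.exp(x)                        # geometric Brownian radial coordinate

msd_mc = float(np.mean(r**2))
msd_exact = R**2 * np.exp(4.0 * D2 * t)  # log-normal second moment
print(msd_mc, msd_exact)
```

The Monte Carlo estimate agrees with the closed-form log-normal moment, and since the exponent is linear in $t$, the MSD indeed grows exponentially in time.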
\section{Periodic boundary conditions}\label{sec:pbc} \begin{figure}[htbp] \includegraphics[width=0.4\hsize]{./fig_pbc.eps} \caption{Schematic picture of the circular/umbrella comb with the periodic boundary conditions at $\phi=\pm\pi$ and outward diffusion along the radii with zero boundary conditions at infinity.} \label{fig:fig_pbc} \end{figure} In this section, we consider the second scenario with periodic circular motion at $r=R$ and the \textit{outward} radial motion with a constant diffusion coefficient $D_r=D$, see Fig.~\ref{fig:fig_pbc}. Equation \eqref{int-uc-1} now reads \begin{equation}\label{pbc-1} \partial_tP=D_1\delta(r-R)\frac{1}{r^2}\partial_{\phi}^2P +D\frac{1}{r}\partial_r r\partial_rP\, . \end{equation} The initial condition is $P_0=P(r,\phi,t=0)=\delta(r-R)\delta(\phi)$. The distribution function $P=P(r,\phi,t)$ is a convolution integral \begin{equation} \label{pbc-2} P(r,\phi,t)=\int_0^tG(r,t-t')F(\phi,t')dt'\, , \end{equation} which represents two independent motions in the Laplace space: \begin{equation}\label{pbc-3} \tilde{P}(r,\phi,s)=g(r,s)f(\phi,s)\, . \end{equation} This corresponds to Eq. \eqref{pbc-1} in the Laplace space, \begin{equation}\label{pbc-4} s\tilde{P}-P_0=D_1\delta(r-R)\frac{1}{r^2}\partial_{\phi}^2\tilde{P} +D\frac{1}{r}\partial_r r\partial_r\tilde{P}\, . \end{equation} Now, the boundary conditions for the radial motion can be easily specified for $\tilde{P}$, $g$ and $f$. These are $\tilde{P}(r=R,\phi,s)=f(\phi,s)$ and $g(r=R,s)=1$, and $\tilde{P}(r=\infty,\phi,s)=g(r=\infty,s)= \tilde{P}'(r=\infty,\phi,s)=g'(r=\infty,s)=0$, where the prime denotes differentiation with respect to $r$. Note also that $f(\phi,s)$ is a multivalued function, and the conformal map cannot be performed. Therefore, we treat the problem in the polar coordinates. First, let us consider diffusion in the radii (fingers). In the Laplace space, the diffusion equation from Eq. 
\eqref{pbc-4} leads to the equation \begin{equation}\label{pbc-5} rsg=Drg''+Dg' \end{equation} with the solution \begin{equation}\label{pbc-6} g(r,s)=A(s)K_0\left(r\sqrt{s/D}\right)\theta(r-R)\, , \end{equation} where $K_0(z)$ is the modified Bessel function of the second kind, which satisfies the boundary conditions at infinity. The second boundary condition $g(R,s)=1$ yields $A(s)=2\left[K_0\left(R\sqrt{s/D}\right)\right]^{-1}$, where $\theta(0)=1/2$ is taken into account. Substituting the solution \eqref{pbc-6} in Eq. \eqref{pbc-4}, one obtains, see \ref{sec:app-C}, \begin{equation}\label{pbc-7} \partial_{\phi}^2f+af-bs^{\frac{1}{2}}f+c\delta(\phi)=0\, , \end{equation} where $a=2RD/D_1$ and $bs^{1/2}= \frac{2R^2K_1(\lambda R)}{D_1K_0(\lambda R)} \sqrt{Ds}$, and $c=R^2/D_1$. \subsection{Initial time asymptotics} For the initial times, when $s\rightarrow \infty$ and $\lambda R=R\sqrt{s/D}\gg 1$, one obtains $\frac{K_1(\lambda R)}{K_0(\lambda R)}\approx 1$ and $a\ll bs^{\frac{1}{2}}$. Then, Eq. \eqref{pbc-7} is simplified, with the solution \begin{equation}\label{pbc-8} f(\phi,s)=\sum_{n=-\infty}^{\infty}\frac{ce^{i n\phi}}{n^2+b\sqrt{s}}\, . \end{equation} We also obtain that $g(r,s)\approx R^{\frac{1}{2}}e^{-\lambda(r-R)}/\sqrt{r}$ \cite{AbSt72}, and the PDF $P(r,\phi,t)$ leads to a chain of estimations in \ref{sec:app-C}. 
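As a quick numerical check (with arbitrary positive test values of $D$ and $s$ of our own choosing), the modified Bessel function in Eq. \eqref{pbc-6} indeed solves the radial equation \eqref{pbc-5}:

```python
import mpmath as mp

D, s = 0.7, 1.3                          # arbitrary positive test values
lam = mp.sqrt(s / D)
g = lambda rr: mp.besselk(0, lam * rr)   # g(r) = K_0(r*sqrt(s/D)) for r > R

def residual(r):
    """r*s*g - (D*r*g'' + D*g'), which vanishes for a solution of Eq. (pbc-5)."""
    return r * s * g(r) - (D * r * mp.diff(g, r, 2) + D * mp.diff(g, r))

print(residual(mp.mpf("1.5")))
```

The residual is zero up to the numerical differentiation error, and the monotonic decay of $K_0$ is consistent with the vanishing boundary condition at infinity.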
Therefore, the PDF reads \begin{subequations}\label{pbc-9} \begin{align} P(r,\phi,t)= & {\cal L}^{-1}\left[g(r,s)f(\phi,s)\right] \approx \sum_{n=-\infty}^{\infty}\frac{cR^{\frac{1}{2}}}{br^{\frac{1}{2}}}e^{i n\phi} \cdot {\cal L}^{-1}\left[\frac{e^{-\frac{\sqrt{s}(r-R)}{\sqrt{D}}}} {n^2/b+\sqrt{s}}\right] = \label{pbc-9a} \\ = & \frac{cR^{\frac{1}{2}}}{b\sqrt{r\pi t}}e^{-\frac{(r-R)^2}{4Dt}}[1+2\delta(\phi)] -\frac{2c(Dt/r)^{\frac{1}{2}}}{b(r-R)}e^{-\frac{(r-R)^2}{4Dt}} + \\ + & \frac{c\pi\sqrt{RD^{\frac{1}{2}}}}{\sqrt{rb(r-R)}} \frac{\cosh\left[(\pi-|\phi|)\sqrt{D^{\frac{1}{2}}t/b(r-R)}\right]} {\cosh\left[\pi\sqrt{D^{\frac{1}{2}}t/b(r-R)}\right]} e^{-\frac{(r-R)^2}{4Dt}} \label{pbc-9b} \, , \end{align} \end{subequations} where $\phi\in[-\pi\, , \pi]$. The modulus $|\phi|$ is due to the symmetry of Eq. \eqref{pbc-7}. We also stress that the solution \eqref{pbc-9b} is strictly valid for $r>R$. It should be noted that for $t\rightarrow 0$, the solution reduces to the transport for $r=R$, and Eq. \eqref{pbc-9a} reads \begin{multline}\label{pbc-9-2} P(R,\phi,t) \approx \frac{c}{b\sqrt{\pi t}}[1+2\delta(\phi)] -\frac{2c}{\sqrt{t}} \sum_{n=1}^{\infty}\frac{\cos(n\phi)}{n^2} = \\ \frac{c}{b\sqrt{\pi t}}[1+2\delta(\phi)] -\frac{2c\pi^2}{\sqrt{t}}B_2\left(\frac{\phi}{2\pi}\right) \, . \end{multline} The situation changes dramatically at long times. \subsection{Large time asymptotics} For the large times, when $\lambda R\ll 1$, we have $K_0(\lambda R)\approx \ln\frac{2}{\gamma\lambda R}$ and $K_1(\lambda R) \approx 1/\lambda R$, where $\gamma\approx 1.781$ is the exponential of the Euler constant \cite{AbSt72}. However, the asymptotic behavior of $K_0(\lambda r)$ must correspond to the boundary conditions at $r\rightarrow\infty$. 
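The two Bessel-function limits invoked in this section, $K_1(z)/K_0(z)\to 1$ for $z\gg1$ in the initial time asymptotics, and $K_0(z)\approx\ln[2/(\gamma z)]$, $K_1(z)\approx 1/z$ for $z\ll1$ with $\gamma=e^{0.5772\ldots}\approx1.781$ in the large time asymptotics, are easy to confirm numerically (the sample points are ours):

```python
import numpy as np
from scipy.special import k0, k1

gamma = np.exp(np.euler_gamma)     # = 1.781..., the constant denoted gamma in the text

# large argument: K_1(z)/K_0(z) -> 1
ratio = k1(50.0) / k0(50.0)

# small argument: K_0(z) ~ ln(2/(gamma*z)) and K_1(z) ~ 1/z
z = 1e-3
k0_small, k0_approx = k0(z), np.log(2.0 / (gamma * z))
k1_small = k1(z)
print(ratio, k0_small, k0_approx, z * k1_small)
```

Both limiting forms match the exact Bessel functions to the accuracy expected from the neglected correction terms.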
Therefore, we take the intermediate asymptotics, when $\lambda r>1$, which yields $$K_0(\lambda r)\approx \sqrt{\frac{\pi}{2\lambda r}}e^{-\lambda r}= \sqrt{\pi (D/s)^{\frac{1}{2}}/(2r) }e^{-r(s/D)^{\frac{1}{2}}}\, .$$ Correspondingly, in the limits $s\rightarrow 0$ and $rs^{\frac{1}{2}}D^{-\frac{1}{2}} >1$, the radial distribution $g(r,s)$ and the coefficient $b$ in Eq. \eqref{pbc-7} are functions of $s$ which are approximated as follows \begin{subequations}\label{pbc-10} \begin{align} & g(r,s)\approx \sqrt{2\pi D^{\frac{1}{2}}/rs^{\frac{1}{2}}} \frac{e^{-r(s/D)^{\frac{1}{2}}}} {\ln\left(4D/\gamma^2 R^2s\right)}\, , \label{pbc-11a} \\ & bs^{\frac{1}{2}}=b(s)s^{\frac{1}{2}}\approx \frac{4DR}{D_1} \ln^{-1}\left(4D/\gamma^2 R^2 s\right) \equiv b_1\left[\ln\left(4D/\gamma^2R^2s\right)\right]^{-1} \label{pbc-11b} \, . \end{align} \end{subequations} Again neglecting the parameter $a$ in Eq. \eqref{pbc-7}, we obtain the solution as follows \begin{equation}\label{pbs-11} f(\phi,s)=\sum_{n=-\infty}^{\infty} \frac{ce^{i n\phi}\ln\left(4D/\gamma^2R^2s\right)} {b_1 +n^2\ln\left(4D/\gamma^2R^2s\right)}\, . \end{equation} Then the PDF \eqref{pbc-9} for the large time asymptotics reads in the form of the inverse Laplace transformation \begin{multline}\label{pbc-12} P(r,\phi,t)= {\cal L}^{-1}\left[g(r,s)f(\phi,s)\right] \approx \\ \approx {\cal L}^{-1}\left[\sum_{n=-\infty}^{\infty} \frac{2ce^{i n\phi}}{n^2+b_1\ln\left(4D/\gamma^2R^2s\right)} \cdot \sqrt{2\pi (D/r^2s)^{\frac{1}{2}}} e^{-r(s/D)^{\frac{1}{2}}} \right] \, . 
\end{multline} The solution, obtained in \ref{sec:app-C}, reads \begin{multline}\label{pbc-13} P(r,\phi,t)\approx \frac{c\pi}{b_1\Gamma\left(\frac{1}{4}\right)} \frac{\sqrt{2\pi (D/t^3r^2)^{\frac{1}{2}}}} {\ln^{\frac{1}{2}}\left(4Dt/\gamma^2R^2\right)} e^{-3r^2/8Dt}\\ \times\Biggl\{\frac{\ln^{-\frac{1}{2}}\left(4Dt/\gamma^2R^2\right)}{\pi}+ \frac{\cosh\left[(\pi-|\phi|)\sqrt{b_1\ln\left(4Dt/\gamma^2R^2\right)}\right]} {\cosh\left[\pi\sqrt{b_1\ln\left(4Dt/\gamma^2R^2\right)}\right]}\Biggr\} \, . \end{multline} This result is valid for $r\gg R$ and $t\rightarrow\infty$, and the Tauberian theorem, applied in Eq. \eqref{C10}, captures exactly this intermediate asymptotic behavior of $K_0(\lambda r)\propto r^{-\frac{1}{2}}s^{-\frac{1}{4}}$ due to the power law. The logarithmic evolution in the backbone, shown in Fig.~\ref{fig:fig_Backbone}, relaxes into the radii (fingers). However, in the large time asymptotic calculation this backbone relaxation cannot be separated from the radial one. Note also that the approximate solution \eqref{pbc-13} for the PDF in the specific area does not conserve the probability ${\cal P}$, namely the latter is \begin{equation}\label{pbc-14} {\cal P}(t)=\int_0^{\infty}dr\int_{-\pi}^{\pi}d\phi P(r,\phi,t)\sim t^{-\frac{1}{2}}\, . \end{equation} \begin{figure}[ht] \includegraphics[width=0.7\hsize]{./fig_Backbone.eps} \caption{Evolution of the factor in braces, $F(\phi,t)$, in the PDF of Eq. \eqref{pbc-13}. The plots show the dependence of $F(\phi,t)$ on $\phi$ for three times: plot 1 corresponds to $t=10$, plot 2 to $t=100$, and plot 3 to $t=1000$. All parameters are taken to be one, while $\gamma=1.781$.} \label{fig:fig_Backbone} \end{figure} \subsection{The MSD in radii} This eventually results in normal radial diffusion with the mean squared displacement (MSD) of the order of $\langle r^2(t)\rangle \sim Dt$. Indeed, the relaxation process in the backbone contributes to the transport in the fingers, with the MSD defined from the inverse Laplace transformation in Eq. 
\eqref{pbc-12} as follows \begin{multline}\label{C13} \langle r^2(t)\rangle ={\cal P}^{-1}(t)\frac{1}{2\pi}\int_R^{\infty}\int_{-\pi}^{\pi} r^2P(r,\phi,t)dr d\phi= \\ ={\cal L}^{-1}\left[ \frac{2c}{b_1\ln\left(4D/\gamma^2R^2s\right)} \cdot\sqrt{2\pi (D/s)^{\frac{1}{2}}} \int_R^{\infty} r^{3/2} e^{-r(s/D)^{\frac{1}{2}}}dr\right] \propto \\ \propto D{\cal P}^{-1}(t){\cal L}^{-1}\Big[ s^{-3/2} \Gamma\left(\frac{3}{2},R\sqrt{s}\right)\Big](t) \, , \end{multline} where $\Gamma(\alpha,z)$ is the incomplete gamma function \cite{AbSt72} and the factor ${\cal P}^{-1}(t)$ is due to the non-conserved probability. Performing the Laplace inversion by means of the Tauberian theorem, see \ref{sec:app-C}, we obtain a subdiffusive growth of the order of $Dt^{1/2}$. However, the obtained expression should be normalized by the probability ${\cal P}(t)$. This eventually yields the MSD in the form of normal diffusion \[ \langle r^2(t)\rangle \sim D t\, .\] \section{Conclusion} A circular comb is considered, and two scenarios of anomalous transport in the circular comb geometry are studied. The first scenario corresponds to the conformal mapping of a comb Fokker-Planck equation on the umbrella comb. In this case, the reflecting boundary conditions are imposed on the circular (rotator) motion, while the radial motion corresponds to geometric Brownian motion with boundary conditions vanishing at infinity. The radial diffusion is described by the log-normal distribution, which corresponds to exponentially fast motion with the MSD of the order of $e^t$. The second scenario corresponds to circular diffusion with periodic boundary conditions and the outward Brownian radial diffusion with boundary conditions vanishing at infinity. In this case, the radial motion is normal diffusion with the MSD of the order of $t$.
However, in both scenarios the circular motion is a superposition of cosine functions, which results in a stationary distribution in the form of Bernoulli polynomials, with power-law relaxation.
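The Laplace pair behind the $Dt^{1/2}$ growth obtained above, namely ${\cal L}[t^{1/2}](s)=\Gamma(3/2)\,s^{-3/2}$, can be verified numerically. A minimal SciPy quadrature sketch (purely illustrative; the prefactors of the full MSD calculation are not reproduced):

```python
import numpy as np
from scipy.integrate import quad

# Forward check of the Tauberian step:
#   int_0^inf sqrt(t) e^{-s t} dt = Gamma(3/2) s^{-3/2} = sqrt(pi)/(2 s^{3/2}),
# i.e. an s^{-3/2} Laplace image corresponds to t^{1/2} growth in time.
for s in (0.5, 1.0, 2.0):
    val, _ = quad(lambda t: np.sqrt(t) * np.exp(-s * t), 0, np.inf)
    assert abs(val - np.sqrt(np.pi) / (2 * s**1.5)) < 1e-7
```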
\section{Introduction} Consider the model problem: \begin{equation} \label{A1} \min_{f, \, u} \, \@ifstar\@lrset\@set*{ \frac{1}{2} \@ifstar\@lrnorms\@norm{u-d}^2_{{L^2(\partial \Omega)}} + \frac{\alpha}{2} \@ifstar\@lrnorms\@norm{f}_{{L^2(\Omega)}}^2 }, \end{equation} on a Lipschitz domain $\Omega\subset \mathbb{R}^n$, subject to \begin{alignat}{2} \label{A2} - \laplace u + u + f &= 0 \quad &&\mbox{in } \Omega, \\ \label{A3} \frac{\partial u}{\partial \mathbf{n}} &= 0 \quad &&\mbox{on } \partial \Omega. \end{alignat} This minimization task is similar to the standard example considered in PDE-constrained optimization, but instead of assuming that observation data is available everywhere in $\Omega$, we consider the case where observations are only given at the boundary $\partial \Omega$ of $\Omega$, that is, $d \in {L^2(\partial \Omega)}$; see the first term in \eqref{A1}. For problems of the form \eqref{A1}-\eqref{A3}, in which \begin{equation} \label{A4} \frac{1}{2}\@ifstar\@lrnorms\@norm{u-d}^2_{{L^2(\partial \Omega)}} + \frac{\alpha}{2} \@ifstar\@lrnorms\@norm{f}_{{L^2(\Omega)}}^2 \end{equation} is replaced by \begin{equation} \frac{1}{2}\@ifstar\@lrnorms\@norm{u-d}^2_{{L^2(\Omega)}} + \frac{\alpha}{2} \@ifstar\@lrnorms\@norm{f}_{{L^2(\Omega)}}^2 \end{equation} very efficient preconditioners have been developed for the associated KKT system. In fact, by employing proper $\alpha$-dependent scalings of the involved Hilbert spaces \cite{s-z}, or by using a Schur complement approach \cite{Pea12}, methods that are robust with respect to the size of the regularization parameter $\alpha$ have been obtained. More specifically, the condition number of the preconditioned optimality system is small and bounded independently of $0 < \alpha \ll 1$ and the mesh size $h$. This ensures good performance for suitable Krylov subspace methods, e.g. the minimum residual method (\textsc{Minres}{}), independently of both parameters.
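As a brief illustration of why such block preconditioners work, recall the classical observation of Murphy, Golub and Wathen: for a saddle point matrix with SPD $(1,1)$ block $A$ and full-rank off-diagonal block $B$, the ideal block-diagonal preconditioner $\operatorname{diag}(A, BA^{-1}B^{\mathsf T})$ produces a preconditioned operator with only three distinct eigenvalues, $1$ and $(1\pm\sqrt{5})/2$, so \textsc{Minres}{} converges in at most three iterations. This can be checked with a few lines of NumPy on toy matrices (illustrative only; this is not the KKT system studied in this paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 3
G = rng.standard_normal((n, n))
A = G @ G.T + n * np.eye(n)          # SPD (1,1) block (toy data)
B = rng.standard_normal((m, n))      # full-rank constraint block (toy data)

K = np.block([[A, B.T], [B, np.zeros((m, m))]])
S = B @ np.linalg.solve(A, B.T)      # exact Schur complement B A^{-1} B^T
P = np.block([[A, np.zeros((n, m))],
              [np.zeros((m, n)), S]])

# eigenvalues of P^{-1} K lie in {1, (1 + sqrt(5))/2, (1 - sqrt(5))/2}
lam = np.linalg.eigvals(np.linalg.solve(P, K)).real
targets = np.array([1.0, (1 + np.sqrt(5)) / 2, (1 - np.sqrt(5)) / 2])
assert all(np.abs(l - targets).min() < 1e-6 for l in lam)
```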
These techniques have been extended to handle time-dependent problems \cite{Pea12_II} and PDE-constrained optimization with Stokes equations \cite{z-11}, but the rigorous analysis of $\alpha$-independent bounds always requires that observations are available throughout all of $\Omega$. For cases with limited observations, for example with cost-functionals of the form \eqref{A4}, efficient preconditioners are also available for a rather large class of PDE-constrained optimization problems, see \cite{Nie13,Nie10}. But these techniques do not yield convergence rates for the preconditioned KKT system that are completely robust with respect to the size of the regularization parameter $\alpha$. Instead, the number of preconditioned \textsc{Minres}{} iterations grows logarithmically\footnote{In \cite{Nie13,Nie10} it is proved that the number of needed preconditioned \textsc{Minres}{} iterations cannot grow faster than \[a + b \left[ \log_{10} \left( \alpha^{-1} \right) \right]^2.\] Furthermore, in \cite{Nie13} it is explained why iteration counts of the kind \eqref{A6} often will occur in practice.} with respect to the size of $\alpha^{-1}$, as $\alpha \rightarrow 0$: \begin{equation} \label{A6} a + b \log_{10} \left( \alpha^{-1} \right). \end{equation} According to the numerical experiments presented in \cite{Nie13}, the size of $b$ may become significant. More specifically, $b \in [5,50]$ for problems with simple elliptic state equations posed on rectangles. Thus, for small values of $\alpha$, \textsc{Minres}{} may require rather many iterations to converge - even though the growth in iteration numbers is only logarithmic. In practice, observations are rarely available throughout the entire domain of the state equation.
On the contrary, the purpose of solving an inverse problem is typically to use data recorded at the surface of an object to compute internal properties of that object: impedance tomography, the inverse problem of electrocardiography (ECG), computerized tomography (CT), etc. This fact, combined with the discussion above, motivates the need for further improving numerical methods for solving KKT systems arising in connection with PDE-constrained optimization. This paper is organized as follows. In the next section we derive the KKT system associated with the model problem \eqref{A1}-\eqref{A3}. Our $\alpha$ robust preconditioner is presented in Section \ref{Numerical_experiments}, along with a number of numerical experiments. Sections \ref{Analysis}-\ref{Preconditioning} contain our analysis, and the method is generalized in Sections \ref{Generalization}-\ref{Alternative}. Section \ref{Conclusions} provides a discussion of our findings, including their limitations. \section{KKT system} \label{KKT_system} Consider the PDE \eqref{A2} with the boundary condition \eqref{A3}. A solution $u$ to this elliptic PDE, with source term $f\in{L^2(\Omega)}$, is known to have improved regularity, i.e. $u\in H^{1+s}(\Omega)$, for some $s\in [0, 1]$, with $s$ depending on the domain $\Omega$. In the remainder of this paper we assume that $u$ has full regularity, i.e. $u\in {H^2(\Omega)}$. This is known to hold if $\Omega$ is convex or if $\partial\Omega$ is $C^2$, see e.g. \cite{grisvard1985, BHac92}.
When solutions to \eqref{A2} exhibit this improved regularity, we can write the problem on the non-standard variational form: Find $u \in {\bar{H}^2(\Omega)}$ such that \begin{equation} \label{B1} (- \laplace u + u,w)_{{L^2(\Omega)}} + (f,w)_{{L^2(\Omega)}} = 0\quad \forall w \in {L^2(\Omega)}, \end{equation} where \begin{equation*} {\bar{H}^2(\Omega)} = \bigg\{ \phi \in {H^2(\Omega)} \, \bigg| \, \frac{\partial \phi}{\partial \mathbf{n}} = 0 \mbox{ on } \partial \Omega \bigg\}, \end{equation*} equipped with the inner product \begin{equation} \label{eq:H2_inner} \begin{split} (u,v)_{{H^2(\Omega)}} &= \int_\Omega \hess u : \hess v + \grad u \cdot \grad v + uv \dx \\ &= \int_\Omega \laplace u \laplace v + \grad u \cdot \grad v + u v \dx. \end{split} \end{equation} Here $\hess u$ denotes the Hessian of $u$, and the second identity is due to the boundary condition $\frac{\partial u}{\partial \nvec} = 0$ imposed on the space ${\bar{H}^2(\Omega)}$. We will see below that, in order to design a regularization robust preconditioner for \eqref{A1}-\eqref{A3}, it is convenient to express the state equation in the form \eqref{B1}, instead of employing integration by parts/Green's formula to write it on the standard self-adjoint form. \subsection{Optimality system} We may express \eqref{A1}-\eqref{A3} in the form: \begin{equation} \label{B3} \min_{f \in {L^2(\Omega)}, \, u \in {\bar{H}^2(\Omega)}}\, \@ifstar\@lrset\@set*{ \frac{1}{2}\@ifstar\@lrnorms\@norm{u-d}_{{L^2(\partial \Omega)}}^2 + \frac{\alpha}{2} \@ifstar\@lrnorms\@norm{f}_{{L^2(\Omega)}}^2 } \end{equation} subject to \begin{equation} \label{B4} (- \laplace u + u,w)_{{L^2(\Omega)}} + (f,w)_{{L^2(\Omega)}} =0\quad \forall w \in {L^2(\Omega)}. 
\end{equation} The associated Lagrangian reads \begin{equation*} \begin{split} \mathcal{L}(f,u,w) &= \frac{1}{2} \@ifstar\@lrnorms\@norm{u-d}_{{L^2(\partial \Omega)}}^2 + \frac{\alpha}{2} \@ifstar\@lrnorms\@norm{f}_{{L^2(\Omega)}}^2 +(f - \Delta u + u,w)_{{L^2(\Omega)}}, \end{split} \end{equation*} with $f \in {L^2(\Omega)}$, $u \in {\bar{H}^2(\Omega)}$ and $w \in {L^2(\Omega)}$. From the first order optimality conditions \begin{equation*} \frac{\partial \mathcal{L}}{\partial f} = 0,\quad \frac{\partial \mathcal{L}}{\partial u} = 0,\quad \frac{\partial \mathcal{L}}{\partial w} = 0, \end{equation*} we obtain the optimality system: Determine $(f,u,w) \in {L^2(\Omega)} \times {\bar{H}^2(\Omega)} \times {L^2(\Omega)}$ such that \begin{alignat}{2} \label{B5.1} \alpha (f,\psi)_{{L^2(\Omega)}}+(\psi,w)_{{L^2(\Omega)}} &= 0 &&\quad \forall \psi \in {L^2(\Omega)}, \\ \label{B5.2} (u-d,\phi)_{{L^2(\partial \Omega)}} + (-\laplace \phi + \phi, w)_{{L^2(\Omega)}} &= 0 &&\quad \forall \phi \in {\bar{H}^2(\Omega)}, \\ \label{B5.3} (f,\xi)_{{L^2(\Omega)}} + (- \laplace u + u,\xi)_{{L^2(\Omega)}} &= 0 &&\quad \forall \xi \in {L^2(\Omega)}. \end{alignat} \section{Numerical experiments} \label{Numerical_experiments} Prior to analyzing our model problem, we will consider some numerical experiments. Discretization of \eqref{B5.1}-\eqref{B5.3} yields an algebraic system of the form \begin{equation} \label{N1} \underbrace{\left[ \begin{array}{ccc} \alpha M & \ze & M \\ \ze & M_{\partial} & A^T \\ M & A & 0 \end{array} \right]}_{\mathcal{A}_{\alpha}} \left[ \begin{array}{c} f \\ u \\ w \end{array} \right] = \left[ \begin{array}{c} 0 \\ \tilde M_{\partial}d \\ 0 \end{array} \right], \end{equation} where \begin{itemize} \item $M$ is a mass matrix, \item $M_{\partial}$ is a mass matrix associated with the boundary $\partial \Omega$ of $\Omega$, \item $A$ is a matrix that arises upon discretization of the operator $(1-\Delta)$.
Since we write the state equation in non-self-adjoint form, $A$ will not be the usual sum of the stiffness and mass matrices. Instead, equation \eqref{B1} is discretized with subspaces of ${\bar{H}^2(\Omega)}$ and ${L^2(\Omega)}$. \end{itemize} In the current numerical experiments, we employ the Bogner-Fox-Schmit (BFS) rectangle for discretizing the state variable $u \in {\bar{H}^2(\Omega)}$. That is, the finite element field consists of bicubic polynomials that are continuous and have continuous first order derivatives and mixed second order derivatives at each vertex of the mesh. BFS elements are $C^1$ on rectangles and therefore $H^2$-conforming. The control $f$ and Lagrange multiplier $w$ are discretized with discontinuous bicubic elements. We propose to precondition \eqref{N1} with the block-diagonal matrix \begin{equation} \label{N2} \mathcal{B}_{\alpha}= \left[ \begin{array}{ccc} \alpha M & \ze & \ze \\ \ze & \alpha R+M_\partial & \ze \\ \ze & \ze & \frac{1}{\alpha} M \end{array} \right]^{-1}, \end{equation} where $R$ results from a discretization of the bilinear form $b(\cdot,\cdot)$ on ${\bar{H}^2(\Omega)}$: \begin{equation} b( u, v) \label{N2.1} = (u,v)_{H^2(\Omega)} + \int_\Omega \grad u \cdot \grad v \, dx. \end{equation} In the experiments presented below, we used this bilinear form to construct a multigrid approximation of $\left( \alpha R+M_\partial \right)^{-1}$. \subsection*{Remark} The bilinear form \eqref{N2.1} is equivalent to the inner product on ${\bar{H}^2(\Omega)}$. The additional term stems from our choice of implementing a multigrid algorithm for the bilinear form associated with the operator $(\laplace - 1)^2 = \laplace^2 -2 \laplace +1$.
Indeed, the bilinear form $\alpha b(\subs, \subs) + (\subs,\subs)_{{L^2(\partial \Omega)}}$ can be seen to coincide with the variational form associated with the fourth order problem \begin{alignat*}{2} \alpha(\laplace-1)^2 u &= f \quad &&\mbox{in } \Omega, \\ \frac{\partial u}{\partial \mathbf{n}} &= 0 \quad &&\mbox{on } \partial \Omega, \\ \alpha\frac{\partial \laplace u}{\partial \mathbf{n}} &= u \quad &&\mbox{on } \partial \Omega. \end{alignat*} \subsection*{} To limit the technical complexity of the implementation, we considered the problem \eqref{A1}-\eqref{A3} on the unit square in two dimensions. The experiments were implemented in Python and SciPy. The meshes were uniform rectangular, with the coarsest level for the multigrid solver consisting of $8\times 8$ rectangles. Figure \ref{fig:2} shows an example of a solution of the optimality system \eqref{N1}. \begin{figure} \centering \begin{subfigure}[t]{.46\textwidth} \includegraphics[width=1.0\textwidth]{data.pdf} \captionsetup{font=scriptsize} \caption{Observation data $d$.
The forward model was solved for the control shown in (\subref{fig2:d}), but only the boundary values can be observed.} \label{fig2:b} \end{subfigure} \hfill \centering \begin{subfigure}[t]{.46\textwidth} \includegraphics[width=1.0\textwidth]{state.pdf} \captionsetup{font=scriptsize} \caption{Computed optimal state $u$ based on the observation data shown in (\subref{fig2:b})} \label{fig2:a} \end{subfigure} \centering \begin{subfigure}[t]{.46\textwidth} \includegraphics[width=1.0\textwidth]{opt_ctrl.pdf} \captionsetup{font=scriptsize} \caption{Computed optimal control $f$ based on the observation data in (\subref{fig2:b})} \label{fig2:c} \end{subfigure} \hfill \centering \begin{subfigure}[t]{.46\textwidth} \includegraphics[width=1.0\textwidth]{gen_ctrl.pdf} \captionsetup{font=scriptsize} \caption{The ``true'' control function used to generate the observation data in (\subref{fig2:b}).} \label{fig2:d} \end{subfigure} \caption{An example of a solution of \eqref{N1}. The observation data $d$ was generated with the forward model, using the ``true'' control $4x(1-x) + y$ shown in panel (\subref{fig2:d}). Solutions to the unregularized problem are non-unique, and the generating control cannot be (exactly) recovered. The figures were generated with mesh parameter $h = 1/128$ and regularization parameter $\alpha = 10^{-6}$.} \label{fig:2} \end{figure} \subsection{Eigenvalues} Let us first consider the exact preconditioner $\mathcal{B}_{\alpha}$ defined in \eqref{N2}. If $\mathcal{B}_{\alpha}$ is a good preconditioner for the discrete optimality system \eqref{N1}, then the spectral condition number of $\mathcal{B}_{\alpha} \mathcal{A}_{\alpha}$ should be small and bounded, independently of the size of both the regularization parameter $\alpha$ and the discretization parameter $h$. The eigenvalues of this preconditioned system were computed by solving the generalized eigenvalue problem \begin{equation*} \mathcal{A}_{\alpha} x = \lambda \mathcal{B}_{\alpha}^{-1} x. 
\end{equation*} We found that the absolute values of the eigenvalues $\lambda$ were bounded, with \begin{equation*} 0.445 \leq \vert\lambda\vert \leq 1.809, \end{equation*} uniformly in $\alpha\in\{1,10^{-1},\ldots, 10^{-10}\}$ and $h \in \{2^{-2},\ldots,2^{-5}\}$. This yields a uniform condition number $k(\mathcal{B}_{\alpha} \mathcal{A}_{\alpha}) \approx 4.05$. The spectra of the preconditioned systems are pictured in Figure \ref{fig:1} for some choices of $\alpha$. The spectra are clearly divided into three bounded intervals, and the eigenvalues are more clustered for $\alpha \approx 1$ and for very small $\alpha$. \begin{figure} \centering \begin{subfigure}[b]{.46\textwidth} \includegraphics[width=1.0\textwidth]{N16a1.pdf} \captionsetup{font=scriptsize} \label{fig:1a} \caption{$\alpha = 1$} \end{subfigure} \hfill \centering \begin{subfigure}[b]{.46\textwidth} \includegraphics[width=1.0\textwidth]{N16a1e-04.pdf} \captionsetup{font=scriptsize} \label{fig:1b} \caption{$\alpha = 10^{-4}$} \end{subfigure} \centering \begin{subfigure}[b]{.46\textwidth} \includegraphics[width=1.0\textwidth]{N16a1e-06.pdf} \captionsetup{font=scriptsize} \label{fig:1c} \caption{$\alpha = 10^{-6}$} \end{subfigure} \hfill \centering \begin{subfigure}[b]{.46\textwidth} \includegraphics[width=1.0\textwidth]{N16a1e-10.pdf} \captionsetup{font=scriptsize} \label{fig:1d} \caption{$\alpha = 10^{-10}$} \end{subfigure} \caption{Spectrum of $\mathcal{B}_{\alpha} \mathcal{A}_{\alpha}$ for different regularization parameters $\alpha$. The discretization parameter was $h=2^{-4}$ for all figures.} \label{fig:1} \end{figure} \subsection{Efficient preconditioning} In practice, the action of $\mathcal{B}_{\alpha}$ is replaced with a less computationally expensive operation $\widehat{\mathcal{B}_{\alpha}}$. Note that $\mathcal{B}_{\alpha}$ has a block structure, and that computationally efficient approximations can be constructed for the individual blocks.
Specifically, $\widehat{\mathcal{B}_{\alpha}}$ is constructed by employing \begin{itemize} \item 1 multigrid V-cycle for the (2,2) block of $\mathcal{B}_{\alpha}$, containing a symmetric $4\times 4$ block Gauss-Seidel smoother where the blocks contain the matrix entries corresponding to all degrees of freedom associated with a vertex in the mesh (see \cite{Zhang1994} for a theoretical analysis of the method). \item 2 symmetric Gauss-Seidel iterations for the (1,1) and (3,3) blocks. \end{itemize} We estimated condition numbers of the individual blocks of $\mathcal{B}_{\alpha}^{-1}$ preconditioned with their respective approximations. The results are reported in Tables \ref{tab:gs} and \ref{tab:multigrid}. A slight deterioration in the performance of the multigrid cycle can be seen for very small values of $\alpha > 0$. \begin{table}% \label{tab:block_conditon_numbers} \centering \begin{minipage}{.45\textwidth}% \begin{tabular}{c|ccc} Iterations & 1 & 2 & 3 \\ \hline ($h=2^{-8}$)& 1.931 & 1.303 & 1.126 \end{tabular} \caption{Condition numbers of $M$ preconditioned with symmetric Gauss-Seidel iterations.} \label{tab:gs} \qquad \end{minipage}% \qquad \begin{minipage}{.45\textwidth}% \begin{tabular}{l|ccc} $\alpha$\textbackslash$h$ & $2^{-4}$ & $2^{-6}$ & $2^{-8}$ \\ \hline $1$ & 1.130 & 1.136 & 1.140 \\ $10^{-4}$ & 1.129 & 1.135 & 1.139 \\ $10^{-8}$ & 1.237 & 1.150 & 1.149 \\ $10^{-12}$& 1.252 & 1.259 & 1.253 \end{tabular} \caption{Estimated condition numbers of $\alpha R + M_\partial$ preconditioned with one V-cycle multigrid iteration.} \label{tab:multigrid} \end{minipage}% \end{table} \subsection{Iteration numbers} To verify that $\widehat{\mathcal{B}_{\alpha}}$ is also an effective preconditioner for $\mathcal{A}_{\alpha}$, we applied the \textsc{Minres}{} scheme to the system \begin{equation*} \widehat{\mathcal{B}_{\alpha}} \mathcal{A}_{\alpha} x = \widehat{\mathcal{B}_{\alpha}} b.
\end{equation*} For the results presented in Table \ref{tab:MINRES_iterations_and_estimates}, the \textsc{Minres}{} iteration process was stopped as soon as \begin{equation} \label{N3} \frac{(r_k,\widehat{\mathcal{B}_{\alpha}} r_k)}{(r_0,\widehat{\mathcal{B}_{\alpha}} r_0)}= \frac{(\mathcal{A}_{\alpha} x_k - b, \widehat{\mathcal{B}_{\alpha}} \{\mathcal{A}_{\alpha} x_k - b\})}{(\mathcal{A}_{\alpha} x_0 - b, \widehat{\mathcal{B}_{\alpha}} \{\mathcal{A}_{\alpha} x_0 - b\})} \leq \varepsilon, \end{equation} which is the standard termination criterion for the preconditioned \textsc{Minres}{} scheme, provided that the preconditioner is SPD. A random initial guess $x_0$ was used, and the tolerance was set to $\varepsilon = 10^{-12}$. \begin{table} \centering \begin{tabular}{l|ccccccccccc} $\alpha$\textbackslash$h$ & $ 2^{-4}$&$2^{-5}$& $2^{-6}$& $2^{-7}$ \\ \hline 1 & 53(4.33) & 53(4.36) & 53(4.36) & 53(4.36) \\ $10^{-1}$ & 57(4.31) & 57(4.34) & 57(4.35) & 57(4.35) \\ $10^{-2}$ & 75(4.31) & 72(4.34) & 70(4.35) & 68(4.35) \\ $10^{-3}$ & 79(4.31) & 79(4.34) & 77(4.35) & 73(4.35) \\ $10^{-4}$ & 81(4.30) & 81(4.33) & 79(4.35) & 77(4.35) \\ $10^{-5}$ & 82(4.33) & 81(4.33) & 79(4.35) & 79(4.35) \\ $10^{-6}$ & 81(4.35) & 79(4.36) & 79(4.35) & 81(4.35) \\ $10^{-7}$ & 70(4.35) & 81(4.37) & 81(4.36) & 79(4.35) \\ $10^{-8}$ & 62(4.36) & 70(4.36) & 79(4.36) & 81(4.36) \\ $10^{-9}$ & 62(4.36) & 64(4.37) & 68(4.37) & 78(4.36) \\ $10^{-10}$& 62(4.36) & 63(4.36) & 64(4.37) & 67(4.37) \end{tabular} \caption{Number of preconditioned \textsc{Minres}{} iterations needed to solve the optimality system to a relative error tolerance $\varepsilon = 10^{-12}$. Estimated condition numbers in parentheses, computed from conjugate gradient iterations on the normal equations for the preconditioned optimality system. 
\label{tab:MINRES_iterations_and_estimates}} \end{table} \section{Analysis of the KKT system} \label{Analysis} Recall that our optimality system reads: \begin{alignat*}{2} \alpha (f,\psi)_{{L^2(\Omega)}}+(\psi,w)_{{L^2(\Omega)}} &= 0 &&\quad \forall \psi \in {L^2(\Omega)}, \\ (u-d,\phi)_{{L^2(\partial \Omega)}} + (- \Delta \phi + \phi, w)_{{L^2(\Omega)}} &= 0 &&\quad \forall \phi \in {\bar{H}^2(\Omega)}, \\ (f,\xi)_{{L^2(\Omega)}} + (- \Delta u + u,\xi)_{{L^2(\Omega)}} &= 0 &&\quad \forall \xi \in {L^2(\Omega)}, \end{alignat*} with unknowns $f \in {L^2(\Omega)}$, $u \in {\bar{H}^2(\Omega)}$ and $w \in {L^2(\Omega)}$. We may write this KKT system in the form: \\ Determine $(f,u,w) \in {L^2(\Omega)} \times {\bar{H}^2(\Omega)} \times {L^2(\Omega)}$ such that \begin{equation} \label{B5} \underbrace{\left[ \begin{array}{ccc} \alpha M & \ze & M' \\ \ze & M_{\partial} & A' \\ M & A & 0 \end{array} \right]}_{\mathcal{A}_{\alpha}} \left[ \begin{array}{c} f \\ u \\ w \end{array} \right] = \left[ \begin{array}{c} 0 \\ \tilde{M}_{\partial}d \\ 0 \end{array} \right], \end{equation} where \begin{align} \label{B6} M&: {L^2(\Omega)} \rightarrow {L^2(\Omega)}', & f &\mapsto (f,\subs)_{{L^2(\Omega)}}, \\ \label{B7} M_{\partial}&: {\bar{H}^2(\Omega)} \rightarrow {\bar{H}^2(\Omega)}', & u &\mapsto (u,\subs)_{{L^2(\partial \Omega)}}, \\ \label{B8} \tilde{M}_{\partial} &: {L^2(\partial \Omega)} \rightarrow {\bar{H}^2(\Omega)}', & d &\mapsto (d,\subs)_{{L^2(\partial \Omega)}}, \\ \label{B9} A &: {\bar{H}^2(\Omega)} \rightarrow {L^2(\Omega)}', & u &\mapsto (- \Delta u + u,\subs)_{{L^2(\Omega)}}, \end{align} and the notation ``$'$'' is used to denote dual operators and dual spaces. In the rest of this paper, the symbols $M$, $M_{\partial}$ and $A$ will represent the mappings defined in \eqref{B6}, \eqref{B7} and \eqref{B9}, respectively, and not (the associated) matrices, as was the case in Section \ref{Numerical_experiments}.
(We believe that this mild ambiguity improves the readability of the present text). By using standard techniques for saddle point problems, one can show that the system \eqref{B5} satisfies the Brezzi conditions~\cite{brezzi}, provided that $\alpha >0$. Therefore, for every $\alpha > 0$, this set of equations has a unique solution. Nevertheless, if the standard norms of ${L^2(\Omega)}$ and ${H^2(\Omega)}$ are employed in the analysis, then the constants in the Brezzi conditions will depend on $\alpha$. More specifically, the constant in the coercivity condition will be of order $O(\alpha)$, and thus becomes very small for $0 < \alpha \ll 1$. This property is consistent with the ill-posed nature of \eqref{A1}-\eqref{A3} for $\alpha=0$, and makes it difficult to design $\alpha$ robust preconditioners for the algebraic system associated with \eqref{B5}. Similar to the approach used in \cite{s-z, Mar11, Nie10}, we will now introduce weighted Hilbert spaces. The weights are constructed such that the constants appearing in the Brezzi conditions are independent of $\alpha$. Thereafter, in Section \ref{Preconditioning}, we will show how these scaled Hilbert spaces can be combined with simple maps to design $\alpha$ robust preconditioners for our model problem. \subsection{Weighted norms} Consider the $\alpha$-weighted norms: \begin{align} \label{B0.1} \@ifstar\@lrnorms\@norm{ f }_{L_{\alpha}^2(\Omega)}^2 &= \alpha \@ifstar\@lrnorms\@norm{f}_{{L^2(\Omega)}}^2, \\ \label{B0.2} \@ifstar\@lrnorms\@norm{ u }_{H_{\alpha}^2(\Omega)}^2 &= \alpha \@ifstar\@lrnorms\@norm{ u }_{{H^2(\Omega)}}^2 + \@ifstar\@lrnorms\@norm{u}_{{L^2(\partial \Omega)}}^2, \\ \label{B0.3} \@ifstar\@lrnorms\@norm{ w }_{L_{\alpha^{-1}}^2(\Omega)}^2 &= \frac{1}{\alpha} \@ifstar\@lrnorms\@norm{ w }_{{L^2(\Omega)}}^2, \end{align} applied to the control $f$, the state $u$ and the dual/Lagrange-multiplier $w$, respectively.
Note that these norms become ``meaningless'' for $\alpha = 0$, but are well defined for positive $\alpha$. \subsection{Brezzi conditions} We will now analyze the properties of \begin{equation*} \mathcal{A}_{\alpha}: L_{\alpha}^2(\Omega) \times H_{\alpha}^2(\Omega) \times L_{\alpha^{-1}}^2(\Omega) \rightarrow L_{\alpha}^2(\Omega)' \times H_{\alpha}^2(\Omega)' \times L_{\alpha^{-1}}^2(\Omega)', \end{equation*} defined in \eqref{B5}. More specifically, we will show that the Brezzi conditions are satisfied with constants that do not depend on the size of the regularization parameter $\alpha > 0$. Note that we use the scaled Hilbert norms \eqref{B0.1}-\eqref{B0.3}. \begin{lemma} \label{pre_inf-sup} For all $\alpha > 0$, the following ``inf-sup'' condition holds: \begin{equation*} \inf_{w \in L_{\alpha^{-1}}^2(\Omega)} \sup_{(f,u) \in L_{\alpha}^2(\Omega) \times H_{\alpha}^2(\Omega)} \frac{(f,w)_{{L^2(\Omega)}} + (- \laplace u + u,w)_{{L^2(\Omega)}}}{\@ifstar\@lrnorms\@norm{ (f,u) }_{L_{\alpha}^2(\Omega) \times H_{\alpha}^2(\Omega)} \@ifstar\@lrnorms\@norm{ w }_{L_{\alpha^{-1}}^2(\Omega)}} \geq 1. \end{equation*} \end{lemma} \subsubsection*{Proof} Note that $L_{\alpha}^2(\Omega)$ and $L_{\alpha^{-1}}^2(\Omega)$ contain the same functions, provided that $\alpha > 0$. Let $w \in L_{\alpha^{-1}}^2(\Omega)$ be arbitrary. 
By choosing $f=w$ and $u=0$ we find that \begin{equation*} \begin{aligned} \sup_{(f,u) \in L_{\alpha}^2(\Omega) \times H_{\alpha}^2(\Omega)} \frac{(f,w)_{{L^2(\Omega)}} + (- \Delta u + u,w)_{{L^2(\Omega)}}}{\@ifstar\@lrnorms\@norm{ (f,u) }_{L_{\alpha}^2(\Omega) \times H_{\alpha}^2(\Omega)} \@ifstar\@lrnorms\@norm{ w }_{L_{\alpha^{-1}}^2(\Omega)}} &\geq \frac{(w,w)_{{L^2(\Omega)}}}{\@ifstar\@lrnorms\@norm{ (w,0) }_{L_{\alpha}^2(\Omega) \times H_{\alpha}^2(\Omega)} \@ifstar\@lrnorms\@norm{ w }_{L_{\alpha^{-1}}^2(\Omega)}} \\ &= \frac{\@ifstar\@lrnorms\@norm{ w }_{{L^2(\Omega)}}^2}{\sqrt{\alpha}\@ifstar\@lrnorms\@norm{ w }_{{L^2(\Omega)}} (\sqrt{\alpha})^{-1} \@ifstar\@lrnorms\@norm{ w }_{{L^2(\Omega)}}} \\ &= 1. \end{aligned} \end{equation*} Since $w\in L_{\alpha^{-1}}^2(\Omega)$ was arbitrary, this completes the proof. \\ \rule{2mm}{2mm} \\ Expressed in terms of the operators that constitute $\mathcal{A}_{\alpha}$, Lemma \ref{pre_inf-sup} takes the form \begin{equation*} \inf_{w \in L_{\alpha^{-1}}^2(\Omega)} \sup_{(f,u) \in L_{\alpha}^2(\Omega) \times H_{\alpha}^2(\Omega)} \frac{\brack{ Mf,w } + \brack{ Au,w }}{\@ifstar\@lrnorms\@norm {(f,u) }_{L_{\alpha}^2(\Omega) \times H_{\alpha}^2(\Omega)} \@ifstar\@lrnorms\@norm{ w }_{L_{\alpha^{-1}}^2(\Omega)}} \geq 1, \end{equation*} see \eqref{B6} and \eqref{B9}. Recall that we decided to write our state equation \eqref{A2}-\eqref{A3} on the non-standard variational form \eqref{B1}. Throughout this paper we assume that problem \eqref{A2}-\eqref{A3} admits a unique solution $u \in {\bar{H}^2(\Omega)}$ for every $f \in {L^2(\Omega)}$, and that \begin{equation} \label{B2} \@ifstar\@lrnorms\@norm{ u }_{{H^2(\Omega)}} \leq c_1 \@ifstar\@lrnorms\@norm{ f }_{{L^2(\Omega)}}. \end{equation} This assumption is valid if $\Omega$ is convex or if $\Omega$ has a $C^2$ boundary, see e.g. \cite{grisvard1985, BHac92}. Inequality \eqref{B2} is a key ingredient of the proof of our next lemma. 
\begin{lemma} \label{pre_coercivity} There exists a constant $c_2$, which is independent of $\alpha > 0$, such that \begin{equation*} \begin{aligned} \alpha \@ifstar\@lrnorms\@norm{ f }_{{L^2(\Omega)}}^2 + \@ifstar\@lrnorms\@norm{ u }_{{L^2(\partial \Omega)}}^2 & \geq c_2 \left( \alpha \@ifstar\@lrnorms\@norm{ f }_{{L^2(\Omega)}}^2 + \alpha \@ifstar\@lrnorms\@norm{ u }_{{H^2(\Omega)}}^2 + \@ifstar\@lrnorms\@norm{ u }_{{L^2(\partial \Omega)}}^2 \right) \\ & = c_2 \@ifstar\@lrnorms\@norm{ (f,u) }_{L_{\alpha}^2(\Omega) \times H_{\alpha}^2(\Omega)}^2 \end{aligned} \end{equation*} for all $(f,u) \in {L^2(\Omega)}\times {\bar{H}^2(\Omega)}$ such that \begin{equation} \label{B10} (f,\phi)_{{L^2(\Omega)}} + (-\Delta u + u,\phi)_{{L^2(\Omega)}} = 0 \quad \forall \phi \in {L^2(\Omega)}. \end{equation} \end{lemma} \subsubsection*{Proof} If $(f,u)$ satisfies \eqref{B10}, then \begin{equation*} \@ifstar\@lrnorms\@norm{ u }_{{H^2(\Omega)}} \leq c_1 \@ifstar\@lrnorms\@norm{ f }_{{L^2(\Omega)}}, \end{equation*} see the discussion of \eqref{B2}. Let $\theta = (1+ c_1^2)^{-1} \in (0,1)$, and it follows that \begin{equation*} \begin{aligned} \alpha \@ifstar\@lrnorms\@norm{ f }_{{L^2(\Omega)}}^2 + \@ifstar\@lrnorms\@norm{ u }_{{L^2(\partial \Omega)}}^2 &\geq \alpha \theta \@ifstar\@lrnorms\@norm{ f }_{{L^2(\Omega)}}^2 +\alpha\frac{1-\theta}{c_1^2} \@ifstar\@lrnorms\@norm{ u }_{{H^2(\Omega)}}^2 + \@ifstar\@lrnorms\@norm{ u }_{{L^2(\partial \Omega)}}^2 \\ &\geq \frac{1}{1+ c_1^2} \left( \alpha \@ifstar\@lrnorms\@norm{ f }_{{L^2(\Omega)}}^2 + \alpha \@ifstar\@lrnorms\@norm{ u }_{{H^2(\Omega)}}^2 + \@ifstar\@lrnorms\@norm{ u }_{{L^2(\partial \Omega)}}^2 \right). 
\end{aligned} \end{equation*} \rule{2mm}{2mm} \\ This result may also be written in the form \begin{equation*} \begin{split} \brack*{ \left[ \begin{matrix} \alpha M & \ze \\ \ze & M_{\partial} \end{matrix} \right] \left[ \begin{matrix} f \\ u \end{matrix} \right] , \left[ \begin{matrix} f \\ u \end{matrix}\right] } &\geq c_2 \@ifstar\@lrnorms\@norm{ (f,u) }_{L_{\alpha}^2(\Omega) \times H_{\alpha}^2(\Omega)}^2 \\ \end{split} \end{equation*} for all $(f,u) \in L_{\alpha}^2(\Omega) \times H_{\alpha}^2(\Omega)$ satisfying \begin{equation*} Mf+Au=0, \end{equation*} where $M$, $M_{\partial}$ and $A$ are the operators defined in \eqref{B6}, \eqref{B7} and \eqref{B9}, respectively. \subsection{Boundedness} Having established that the Brezzi conditions hold, with constants that are independent of $\alpha$, we next explore the boundedness of $\mathcal{A}_{\alpha}$. \begin{lemma} \label{bounded_1} \begin{equation*} \@ifstar\@lrabs\@abs*{ \brack*{ \left[ \begin{matrix} \alpha M & \ze \\ \ze & M_{\partial} \end{matrix} \right] \left[ \begin{matrix} f \\ u \end{matrix} \right] , \left[ \begin{matrix} \psi \\ \phi \end{matrix} \right] } } \leq \sqrt{2} \@ifstar\@lrnorms\@norm{ (f,u) }_{L_{\alpha}^2(\Omega) \times H_{\alpha}^2(\Omega)} \@ifstar\@lrnorms\@norm{ (\psi,\phi) }_{L_{\alpha}^2(\Omega) \times H_{\alpha}^2(\Omega)} \end{equation*} for all $(f,u),(\psi, \phi) \in L_{\alpha}^2(\Omega) \times H_{\alpha}^2(\Omega)$. \end{lemma} \subsubsection*{Proof} Recall the definitions \eqref{B6} and \eqref{B7} of $M$ and $M_{\partial}$, respectively. 
Since \begin{equation*} \@ifstar\@lrnorms\@norm{ (f,u) }_{L_{\alpha}^2(\Omega) \times H_{\alpha}^2(\Omega)} = \sqrt{\alpha \@ifstar\@lrnorms\@norm{ f }_{{L^2(\Omega)}}^2 + \alpha \@ifstar\@lrnorms\@norm{ u }_{{H^2(\Omega)}}^2 + \@ifstar\@lrnorms\@norm{ u }_{{L^2(\partial \Omega)}}^2}, \end{equation*} we find, by employing the Cauchy-Schwarz inequality, that \begin{equation*} \begin{split} \@ifstar\@lrabs\@abs*{ \brack*{ \left[ \begin{matrix} \alpha M & \ze \\ \ze & M_{\partial} \end{matrix} \right] \left[ \begin{matrix} f \\ u \end{matrix} \right] , \left[ \begin{matrix} \psi \\ \phi \end{matrix} \right] } } &= \left| \alpha (f,\psi)_{{L^2(\Omega)}} + (u,\phi)_{{L^2(\partial \Omega)}} \right| \\ &\leq \@ifstar\@lrnorms\@norm{ f }_{L_{\alpha}^2(\Omega)} \@ifstar\@lrnorms\@norm{ \psi }_{L_{\alpha}^2(\Omega)} + \@ifstar\@lrnorms\@norm{ u }_{{L^2(\partial \Omega)}} \@ifstar\@lrnorms\@norm{ \phi }_{{L^2(\partial \Omega)}} \\ &\leq \sqrt{2}\sqrt{\@ifstar\@lrnorms\@norm{ f }_{L_{\alpha}^2(\Omega)}^2 \@ifstar\@lrnorms\@norm{ \psi }_{L_{\alpha}^2(\Omega)}^2 + \@ifstar\@lrnorms\@norm{ u }_{{L^2(\partial \Omega)}}^2 \@ifstar\@lrnorms\@norm{ \phi }_{{L^2(\partial \Omega)}}^2} \\ &\leq \sqrt{2} \@ifstar\@lrnorms\@norm{ (f,u) }_{L_{\alpha}^2(\Omega) \times H_{\alpha}^2(\Omega)} \@ifstar\@lrnorms\@norm{ (\psi,\phi) }_{L_{\alpha}^2(\Omega) \times H_{\alpha}^2(\Omega)}. \end{split} \end{equation*} \rule{2mm}{2mm} \begin{lemma} \label{bounded_2} \begin{equation*} \@ifstar\@lrabs\@abs*{ \brack*{ \left[ M \, \, A \right] \left[ \begin{matrix} f \\ u \end{matrix} \right] , w } } \leq \sqrt{3} \@ifstar\@lrnorms\@norm{(f,u) }_{L_{\alpha}^2(\Omega) \times H_{\alpha}^2(\Omega)} \@ifstar\@lrnorms\@norm{ w }_{L_{\alpha^{-1}}^2(\Omega)} \end{equation*} for all $(f,u) \in L_{\alpha}^2(\Omega) \times H_{\alpha}^2(\Omega)$, $w \in L_{\alpha^{-1}}^2(\Omega)$. 
\end{lemma} \subsubsection*{Proof} Again, we note that \begin{align*} \@ifstar\@lrnorms\@norm{ (f,u) }_{L_{\alpha}^2(\Omega) \times H_{\alpha}^2(\Omega)} &= \sqrt{\alpha \@ifstar\@lrnorms\@norm{ f }_{{L^2(\Omega)}}^2 + \alpha \@ifstar\@lrnorms\@norm{ u }_{{H^2(\Omega)}}^2 + \@ifstar\@lrnorms\@norm{ u }_{{L^2(\partial \Omega)}}^2}, \\ \@ifstar\@lrnorms\@norm{ w }_{L_{\alpha^{-1}}^2(\Omega)} &= \frac{1}{\sqrt{\alpha}} \@ifstar\@lrnorms\@norm{ w }_{{L^2(\Omega)}}. \end{align*} From the definitions of $M$ and $A$, see \eqref{B6} and \eqref{B9}, and the Cauchy-Schwarz inequality, it follows that \begin{equation*} \begin{split} \@ifstar\@lrabs\@abs*{ \brack*{ \left[ M \, \, A \right] \left[ \begin{matrix} f \\ u \end{matrix} \right] , w } } &= \@ifstar\@lrabs\@abs*{ \brack*{ Mf,w } + \brack*{ Au,w } } \\ &= \@ifstar\@lrabs\@abs*{ (f, w)_{{L^2(\Omega)}} + (-\Delta u + u, w)_{{L^2(\Omega)}} } \\ &\leq \left( \@ifstar\@lrnorms\@norm{ f }_{L_{\alpha}^2(\Omega)} + \@ifstar\@lrnorms\@norm{ \laplace u }_{L_{\alpha}^2(\Omega)} + \@ifstar\@lrnorms\@norm{ u }_{L_{\alpha}^2(\Omega)} \right)\@ifstar\@lrnorms\@norm{ w }_{L_{\alpha^{-1}}^2(\Omega)} \\ &\leq \sqrt{3} \@ifstar\@lrnorms\@norm{ (f,u) }_{L_{\alpha}^2(\Omega) \times H_{\alpha}^2(\Omega)} \@ifstar\@lrnorms\@norm{ w }_{L_{\alpha^{-1}}^2(\Omega)}. \end{split} \end{equation*} For the last inequality, recall from \eqref{eq:H2_inner} that $\@ifstar\@lrnorms\@norm{\laplace u}_{L^2(\Omega)} = \@ifstar\@lrnorms\@norm{ \hess u}_{L^2(\Omega)} \leq \@ifstar\@lrnorms\@norm{u}_{H^2(\Omega)}$ for all $u\in {\bar{H}^2(\Omega)}$. \\ \rule{2mm}{2mm} \subsection{Isomorphism} We have verified that the Brezzi conditions hold, and that $\mathcal{A}_{\alpha}$ is a bounded operator. Moreover, all constants appearing in the inequalities expressing these properties are independent of the regularization parameter $\alpha > 0$.
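As a quick numerical illustration of this $\alpha$-independence, consider the following sketch of the coercivity bound restated at the beginning of this section, for pairs satisfying $Mf + Au = 0$, cf. \eqref{B10}. The setup is a hypothetical 1D stand-in of our own choosing, not the setting of the paper: on $(0,1)$ the Neumann cosine basis $\phi_k(x) = \cos(k\pi x)$ diagonalizes $1-\laplace$, the endpoint values play the role of the boundary observations, and the constraint reduces to $f = -(1-\laplace)u$ coefficientwise.

```python
import numpy as np

# Hypothetical 1D stand-in: Neumann cosine basis phi_k(x) = cos(k*pi*x) on (0,1),
# for which (1 - Laplace) phi_k = (1 + k^2 pi^2) phi_k acts diagonally.
n = 16
k = np.arange(n)
w = np.where(k == 0, 1.0, 0.5)            # w_k = ||phi_k||_{L^2(0,1)}^2
lam = 1.0 + (k * np.pi) ** 2              # eigenvalues of 1 - Laplace

rng = np.random.default_rng(0)
worst = np.inf
for alpha in [1.0, 1e-3, 1e-6, 1e-9]:
    for _ in range(200):
        u = rng.standard_normal(n)        # random state coefficients
        f = -lam * u                      # constraint: f = -(1 - Laplace) u
        f2 = np.sum(w * f ** 2)                              # ||f||_{L^2}^2
        uH2 = np.sum(w * (1.0 + (k * np.pi) ** 4) * u ** 2)  # ||u||_{H^2}^2
        tr2 = u.sum() ** 2 + (u * (-1.0) ** k).sum() ** 2    # u(0)^2 + u(1)^2
        lhs = alpha * f2 + tr2
        rhs = alpha * f2 + alpha * uH2 + tr2  # squared weighted product norm
        worst = min(worst, lhs / rhs)

print(worst)  # stays above 1/2 for every alpha
```

The uniform lower bound $1/2$ in this toy arises exactly as in the analysis: along the constraint $\|f\|_{L^2} \geq \|u\|_{H^2}$, so the weighted product norm is at most twice the left-hand side.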
Let \begin{align} \mathcal{V} &= L_{\alpha}^2(\Omega) \times H_{\alpha}^2(\Omega) \times L_{\alpha^{-1}}^2(\Omega), \\ \mathcal{V} ' &= L_{\alpha}^2(\Omega)' \times H_{\alpha}^2(\Omega)' \times L_{\alpha^{-1}}^2(\Omega)'. \end{align} \begin{theorem} \label{main} The operator $\mathcal{A}_{\alpha}$, defined in \eqref{B5}, is bounded and continuously invertible for $\alpha > 0$ in the sense that for all nonzero $x\in \mathcal{V} $, \begin{equation} \label{Aa_continuous_estimates} c \leq \sup_{ 0\neq y\in\mathcal{V} } \frac{\brack{\mathcal{A}_{\alpha} x, y}}{\@ifstar\@lrnorms\@norm{y}_{\mathcal{V} }\@ifstar\@lrnorms\@norm{x}_{\mathcal{V} }} \leq C \end{equation} for some positive constants $c$ and $C$ that are independent of $\alpha > 0$. In particular, \begin{equation*} \@ifstar\@lrnorms\@norm{\mathcal{A}_{\alpha}^{-1}}_{\mathcal{L}(\mathcal{V} ',\mathcal{V} )} \leq c^{-1} \quad\mbox{and} \quad \@ifstar\@lrnorms\@norm{\mathcal{A}_{\alpha}}_{\mathcal{L}(\mathcal{V} , \mathcal{V} ')} \leq C. \end{equation*} \end{theorem} \subsubsection*{Proof} This result follows from Lemma \ref{pre_inf-sup}, Lemma \ref{pre_coercivity}, Lemma \ref{bounded_1}, Lemma \ref{bounded_2} and Brezzi theory for saddle point problems, see \cite{brezzi}. \\ \rule{2mm}{2mm} \subsection{Estimates for the discretized problem} The stability properties \eqref{Aa_continuous_estimates} are not necessarily inherited by discretizations. However, the structure used to prove the so-called ``inf-sup condition'' in Lemma \ref{pre_inf-sup} is preserved in the discrete system provided that the same discretization is employed for the control and the Lagrange multiplier. Furthermore, the boundedness properties, Lemma \ref{bounded_1} and Lemma \ref{bounded_2}, certainly also hold for conforming discretizations. It remains to address the coercivity condition, Lemma \ref{pre_coercivity}, for the discretized problem.
We consider finite-dimensional subspaces $U_h\subset U = {\bar{H}^2(\Omega)}$ and $W_h\subset W = {L^2(\Omega)}$. For certain choices of $U_h$ and $W_h$, the estimate of Lemma \ref{pre_coercivity} carries over to the finite-dimensional setting. \begin{lemma} \label{compatibility_lemma} Assume $U_h\subset U$ and $W_h\subset W$ such that $(1-\laplace) U_h \subset W_h$. Then \begin{equation}\label{discrete_coercivity} \alpha \@ifstar\@lrnorms\@norm{f_h}_{L^2(\Omega)}^2 + \@ifstar\@lrnorms\@norm{u_h}_{L^2(\partial \Omega)}^2 \geq c_2 \@ifstar\@lrnorms\@norm{(f_h,u_h)}_{L_{\alpha}^2(\Omega) \times H_{\alpha}^2(\Omega)}^2 \end{equation} for all $(f_h,u_h) \in W_h\times U_h$ such that \begin{equation}\label{B10_discrete} (f_h, \phi_h)_{L^2(\Omega)} + (u_h -\laplace u_h, \phi_h)_{L^2(\Omega)} = 0 \quad \forall \phi_h \in W_h. \end{equation} \end{lemma} \subsubsection*{Proof} Assume that $(1-\laplace) U_h \subset W_h$, and that \eqref{B10_discrete} holds for $(f_h,u_h)\in W_h\times U_h$. Then $f_h +(1-\laplace) u_h \in W_h$, and \eqref{B10_discrete} implies $f_h +(1-\laplace) u_h = 0$. Therefore, $(f_h, u_h)$ satisfies \eqref{B10} and the estimate \eqref{discrete_coercivity} follows from Lemma \ref{pre_coercivity}. \\ \rule{2mm}{2mm} If the discretization is chosen such that the assumption of Lemma \ref{compatibility_lemma} is satisfied, then the estimates \eqref{Aa_continuous_estimates} carry over to the discretized system.
More precisely, we have \begin{align} \@ifstar\@lrnorms\@norm{ \mathcal{A}_{\alpha,h} }_{\mathcal{L}(\mathcal{V} _h,\mathcal{V} _h')} \leq \@ifstar\@lrnorms\@norm{ \mathcal{A}_{\alpha} }_{\mathcal{L}(\mathcal{V} ,\mathcal{V} ')}, \quad\mbox{and}\quad \@ifstar\@lrnorms\@norm{ \mathcal{A}_{\alpha,h}^{-1} }_{\mathcal{L}(\mathcal{V} _h',\mathcal{V} _h)} \leq \@ifstar\@lrnorms\@norm{ \mathcal{A}_{\alpha}^{-1} }_{\mathcal{L}(\mathcal{V} ',\mathcal{V} )}, \label{Aa_discrete_estimates} \end{align} where $\mathcal{V} _h = W_h\times U_h \times W_h \subset \mathcal{V} $, equipped with the inner product of $\mathcal{V} $, and $\mathcal{A}_{\alpha,h}$ is the discrete counterpart of $\mathcal{A}_{\alpha}$, defined by setting $\brack{\mathcal{A}_{\alpha,h} x_h,y_h} = \brack{\mathcal{A}_{\alpha} x_h, y_h}$ for all $x_h,y_h\in \mathcal{V} _h$. If the state is discretized with $C^1$-conforming bicubic Bogner-Fox-Schmit rectangles, as in Section \ref{Numerical_experiments}, then the assumption of Lemma \ref{compatibility_lemma} is satisfied if the control and Lagrange multiplier are discretized with discontinuous bicubic elements on the same mesh. For triangular meshes, one could choose Argyris triangles for the state variable and piecewise quintic polynomials for the control and Lagrange multiplier variables. We remark that Lemma \ref{compatibility_lemma} provides a sufficient, but not necessary, criterion for stability of the discrete problem, and may imply far more degrees of freedom in the discrete space $W_h\subset W$ than are actually needed. The usefulness of Lemma \ref{compatibility_lemma} is that the estimates \eqref{Aa_discrete_estimates} can, in principle, always be obtained by choosing a sufficiently large space for the control and Lagrange multiplier. \section{Preconditioning}\label{Preconditioning} The linear problem \eqref{B5} is of the form \begin{equation} \label{P:Axeqb} \mathcal{A} x = b.
\end{equation} where $x$ is sought in a Hilbert space $\mathcal{V} $, the right-hand side $b$ is in the dual space $\mathcal{V} '$, and $\mathcal{A} $ is a self-adjoint continuous mapping of $\mathcal{V} $ onto $\mathcal{V} '$. Iterative methods for linear problems are most often formulated for operators mapping $\mathcal{V} $ into itself, and cannot be directly applied to the linear system \eqref{P:Axeqb}, as described in \cite{Mar11}. If we want to apply such methods to \eqref{P:Axeqb}, then we need to introduce a continuous operator mapping $\mathcal{V} '$ isomorphically back onto $\mathcal{V} $. More precisely, if we have a continuous operator \begin{equation*} \mathcal{B}: \mathcal{V} ' \rightarrow \mathcal{V} , \end{equation*} then $\mathcal{M} = \mathcal{B} \mathcal{A} :\mathcal{V} \rightarrow\mathcal{V} $ is continuous and has the desired mapping properties, and if $\mathcal{B} $ is an isomorphism, the solutions to \eqref{P:Axeqb} coincide with the solutions to the problem \begin{equation} \mathcal{M} x = \mathcal{B} \mathcal{A} x = \mathcal{B} b. \label{P:BAxeqBb} \end{equation} In this paper we shall call $\mathcal{B} \in \mathcal{L}(\mathcal{V} ',\mathcal{V} )$ a preconditioner if $\mathcal{B} $ is self-adjoint and positive definite. This implies that $\mathcal{B} ^{-1}$ is self-adjoint and positive definite as well, and hence $\mathcal{B} ^{-1}$ defines an inner product on $\mathcal{V} $ by setting \begin{equation} \label{P:Riesz} \@ifstar\@lrpp\@pp{x,y} = \brack{ \mathcal{B} ^{-1} x, y }, \quad\quad x,y \in \mathcal{V} . \end{equation} This inner product has the crucial property of making $\mathcal{M} $ self-adjoint, in the sense that \begin{equation} \@ifstar\@lrpp\@pp{\mathcal{M} x,y} = \brack{\mathcal{A} x,y} = \brack{\mathcal{A} y,x} = \@ifstar\@lrpp\@pp{\mathcal{M} y,x}.
\label{P:symmetric} \end{equation} Conversely, given any inner product $\@ifstar\@lrpp\@pp{\subs,\subs}$ on $\mathcal{V} $, the Riesz-Fr\'echet theorem provides a self-adjoint positive definite isomorphism $\mathcal{B} :\mathcal{V} '\rightarrow \mathcal{V} $ such that \eqref{P:Riesz} and \eqref{P:symmetric} hold, and we say that $\mathcal{B} $ is the Riesz operator induced by $\@ifstar\@lrpp\@pp{\subs,\subs}$. This establishes a one-to-one correspondence between preconditioners and Riesz operators on $\mathcal{V} '$. Since the Riesz operator is an isometric isomorphism, the operator norm of $\mathcal{B} \mathcal{A} $ coincides with the operator norm of $\mathcal{A} $. We formulate this well-known fact here in a lemma for the sake of completeness. We refer to \cite{Mar11, gunnel2014note} for a more in-depth discussion of preconditioning and its relation to Riesz operators. \begin{lemma} \label{precond:equivalence} Let $\mathcal{V} $ be a Hilbert space, and let $\mathcal{A} :\mathcal{V} \rightarrow \mathcal{V} '$ be a self-adjoint isomorphism, and assume that $\mathcal{B} $ is the Riesz operator induced by the inner product on $\mathcal{V} $, or equivalently, that the inner product on $\mathcal{V} $ is defined by the self-adjoint positive definite isomorphism $\mathcal{B} ^{-1}:\mathcal{V} \rightarrow \mathcal{V} '$. Then $\mathcal{B} \mathcal{A} : \mathcal{V} \rightarrow \mathcal{V} $ is an isomorphism, self-adjoint in the inner product on $\mathcal{V} $, with \begin{equation*} \@ifstar\@lrnorms\@norm{\mathcal{B} \mathcal{A} }_{\mathcal{L}(\mathcal{V} , \mathcal{V} )} = \@ifstar\@lrnorms\@norm{\mathcal{A} }_{\mathcal{L}(\mathcal{V} , \mathcal{V} ')} \quad \mbox{and}\quad \@ifstar\@lrnorms\@norm{(\mathcal{B} \mathcal{A} )^{-1}}_{\mathcal{L}(\mathcal{V} , \mathcal{V} )} = \@ifstar\@lrnorms\@norm{\mathcal{A} ^{-1}}_{\mathcal{L}(\mathcal{V} ', \mathcal{V} )}.
\end{equation*} In particular, the condition number of $\mathcal{B} \mathcal{A} $ is given by \begin{equation*} \cond(\mathcal{B} \mathcal{A} ) = \@ifstar\@lrnorms\@norm{\mathcal{A} ^{-1}}_{\mathcal{L}(\mathcal{V} ', \mathcal{V} )}\@ifstar\@lrnorms\@norm{\mathcal{A} }_{\mathcal{L}(\mathcal{V} , \mathcal{V} ')}. \end{equation*} \end{lemma} \subsubsection*{Proof} Since $\mathcal{A} $ is self-adjoint, $\mathcal{M} = \mathcal{B} \mathcal{A} $ is self-adjoint with respect to the inner product on $\mathcal{V} $. From the Riesz-Fr\'echet theorem we have $\@ifstar\@lrnorms\@norm{\mathcal{A} x}_{\mathcal{V} '} = \@ifstar\@lrnorms\@norm{\mathcal{B} \mathcal{A} x}_{\mathcal{V} } = \@ifstar\@lrnorms\@norm{\mathcal{M} x}_{\mathcal{V} }$, and we obtain the following identity for the operator norm of $\mathcal{M} $: \begin{equation*} \begin{split} \@ifstar\@lrnorms\@norm{\mathcal{M} }_{\mathcal{L}(\mathcal{V} , \mathcal{V} )} = \sup_{x\neq 0} \frac{ \@ifstar\@lrnorms\@norm{\mathcal{M} x}_\mathcal{V} }{\@ifstar\@lrnorms\@norm{x}_\mathcal{V} } &= \sup_{x\neq 0} \frac{ \@ifstar\@lrnorms\@norm{\mathcal{A} x}_{\mathcal{V} '}}{\@ifstar\@lrnorms\@norm{x}_\mathcal{V} } \\ &= \sup_{x\neq 0}\sup_{y\neq 0} \frac{\brack{\mathcal{A} x, y}}{\@ifstar\@lrnorms\@norm{x}_\mathcal{V} \@ifstar\@lrnorms\@norm{y}_\mathcal{V} } = \@ifstar\@lrnorms\@norm{\mathcal{A}}_{\mathcal{L}(\mathcal{V} , \mathcal{V} ')}.
\end{split} \end{equation*} A similar identity is obtained for the norm of the inverse operator, \begin{equation*} \begin{split} \@ifstar\@lrnorms\@norm{\mathcal{M} ^{-1}}_{\mathcal{L}(\mathcal{V} , \mathcal{V} )} &= \sup_{x\neq 0} \frac{ \@ifstar\@lrnorms\@norm{\mathcal{M} ^{-1} x}_\mathcal{V} }{\@ifstar\@lrnorms\@norm{x}_\mathcal{V} } \\ &= \left(\inf_{x\neq 0} \frac{ \@ifstar\@lrnorms\@norm{\mathcal{M} x}_\mathcal{V} }{\@ifstar\@lrnorms\@norm{x}_\mathcal{V} } \right)^{-1} \\ &= \left(\inf_{x\neq 0}\sup_{y\neq 0} \frac{\brack{\mathcal{A} x, y}}{\@ifstar\@lrnorms\@norm{x}_\mathcal{V} \@ifstar\@lrnorms\@norm{y}_\mathcal{V} }\right)^{-1} = \@ifstar\@lrnorms\@norm{\mathcal{A}^{-1}}_{\mathcal{L}(\mathcal{V} ', \mathcal{V} )}. \end{split} \end{equation*} \rule{2mm}{2mm} \\ We say that a preconditioner $\mathcal{B}_{\alpha}$ for $\mathcal{A}_{\alpha}$ is robust with respect to the parameter $\alpha$ if $\cond(\mathcal{B}_{\alpha}\mathcal{A}_{\alpha})$ is bounded uniformly in $\alpha$. The significance of Lemma \ref{precond:equivalence} is that such a robust preconditioner can be found by identifying (parameter-dependent) norms in which $\mathcal{A}_{\alpha}$ and $\mathcal{A}_{\alpha}^{-1}$ are both uniformly bounded. \subsection{Parameter-robust minimum residual method} In Section \ref{Analysis} stability of $\mathcal{A}_{\alpha}$ was shown in the $\alpha$-dependent norms defined in \eqref{B0.1}-\eqref{B0.3}. The preconditioner provided by Lemma \ref{precond:equivalence} is the Riesz operator induced by the weighted norms. This operator $\mathcal{B}_{\alpha}: \mathcal{V} ' \rightarrow \mathcal{V} $ takes the form \begin{equation} \label{def_Bca} \mathcal{B}_{\alpha}= \left[ \begin{matrix} \alpha M & \ze & \ze \\ \ze & \alpha R +M_{\partial} & \ze \\ \ze & \ze & \frac{1}{\alpha} M \end{matrix} \right]^{-1} \end{equation} where $R:{\bar{H}^2(\Omega)}\rightarrow {\bar{H}^2(\Omega)}'$ is the operator induced by the ${H^2(\Omega)}$ inner product, i.e.
$\brack{Ru,v} = (u,v)_{{H^2(\Omega)}}$. Since $\mathcal{A}_{\alpha}$ is self-adjoint, the preconditioned operator $\mathcal{B}_{\alpha}\mathcal{A}_{\alpha}:\mathcal{V} \rightarrow\mathcal{V} $ is self-adjoint in the inner product on $\mathcal{V} $. Consequently we can apply the minimum residual method (\textsc{Minres}) to the problem \begin{equation*} \mathcal{B}_{\alpha} \mathcal{A}_{\alpha} x = \mathcal{B}_{\alpha} b. \end{equation*} \begin{theorem} \label{number_of_iterations} Let $\mathcal{A}_{\alpha}$ be the operator defined in \eqref{B5} and $\mathcal{B}_{\alpha}$ the operator defined in \eqref{def_Bca}. Then there exists an upper bound, independent of $\alpha$, for the convergence rate of \textsc{Minres}{} applied to the preconditioned system \begin{equation*} \mathcal{B}_{\alpha} \mathcal{A}_{\alpha} x = \mathcal{B}_{\alpha} b. \end{equation*} In particular there exists an upper bound, independent of $\alpha$, for the number of iterations needed to reach the stopping criterion \eqref{N3}. \end{theorem} \subsubsection*{Proof} A crude upper bound for the convergence rate (more precisely, the two-step convergence rate) of \textsc{Minres}{} is given by \begin{equation*} \@ifstar\@lrnorms\@norm{\mathcal{B}_{\alpha}\mathcal{A}_{\alpha}(x-x_{2m})}_{\mathcal{V} } \leq \left(\frac{\cond-1}{\cond+1}\right)^{m} \@ifstar\@lrnorms\@norm{\mathcal{B}_{\alpha}\mathcal{A}_{\alpha}(x-x_{0})}_{\mathcal{V} } \end{equation*} where $\cond = \cond(\mathcal{B}_{\alpha}\mathcal{A}_{\alpha})$ is the condition number of $\mathcal{B}_{\alpha}\mathcal{A}_{\alpha}$, see e.g. \cite{Mar11}.
From Lemma \ref{precond:equivalence} and \eqref{Aa_continuous_estimates} we determine that $\cond$ is bounded independently of $\alpha$, with \begin{equation} \label{P:cond} \begin{split} \cond &= \@ifstar\@lrnorms\@norm{(\mathcal{B}_{\alpha}\mathcal{A}_{\alpha})^{-1}}_{\mathcal{L}(\mathcal{V} , \mathcal{V} )} \@ifstar\@lrnorms\@norm{\mathcal{B}_{\alpha}\mathcal{A}_{\alpha}}_{\mathcal{L}(\mathcal{V} , \mathcal{V} )} \\ &= \@ifstar\@lrnorms\@norm{\mathcal{A}_{\alpha}^{-1}}_{\mathcal{L}(\mathcal{V} ', \mathcal{V} )} \@ifstar\@lrnorms\@norm{\mathcal{A}_{\alpha}}_{\mathcal{L}(\mathcal{V} , \mathcal{V} ')} \\ &\leq c^{-1}C. \end{split} \end{equation} \rule{2mm}{2mm} \\ \\ In practical applications, the operator $\mathcal{B}_{\alpha}$ will be replaced with a less computationally expensive approximation $\widehat\mathcal{B}_{\alpha}$. Ideally $\widehat\mathcal{B}_{\alpha}$ will be spectrally equivalent to $\mathcal{B}_{\alpha}$, in the sense that the condition number of $\widehat\mathcal{B}_{\alpha} \mathcal{B}_{\alpha}^{-1}$ is bounded, independently of $\alpha$. Then the preconditioned system reads \begin{equation*} \widehat\mathcal{B}_{\alpha} \mathcal{A}_{\alpha} x = \widehat\mathcal{B}_{\alpha} b, \end{equation*} and the upper bound for the convergence rate is determined by the condition number $\cond(\widehat\mathcal{B}_{\alpha}\mathcal{A}_{\alpha}) \leq \cond(\widehat\mathcal{B}_{\alpha}\mathcal{B}_{\alpha}^{-1})\cond(\mathcal{B}_{\alpha}\mathcal{A}_{\alpha})$. \subsubsection*{Remark} In this paper we only consider the minimum residual method, and we therefore require that the preconditioner is self-adjoint and positive definite. More generally, if other Krylov subspace methods are to be applied to \eqref{B5}, then preconditioners lacking symmetry or definiteness may be considered. We mention in particular that a preconditioned conjugate gradient method for problems similar to \eqref{B5} was proposed in \cite{s-z}, based on a clever choice of inner product.
\section{Generalization} \label{Generalization} Is our technique applicable to problems other than \eqref{A1}-\eqref{A3}? We will now briefly explore this issue, and show that the preconditioning scheme derived above yields $\alpha$-robust methods for a class of problems. The scaling \eqref{B0.1}-\eqref{B0.3} was also investigated in \cite{Nie10}, but for a family of abstract problems posed in terms of Hilbert spaces; more specifically, for general PDE-constrained optimization problems subject to Tikhonov regularization and with linear state equations. In \cite{Nie10}, however, no assumptions about the control, state or observation spaces were made, except that they were Hilbert spaces. Under these circumstances, it was proved that the coercivity and the boundedness of the operator associated with the KKT system hold with $\alpha$-independent constants. Nevertheless, in this general setting, the inf-sup condition involved an $\alpha$-dependent constant, which eventually yielded theoretical iteration bounds of order $O([\log\left( \alpha^{-1} \right)]^2)$ for \textsc{Minres}{}. In the present paper we were able to prove an $\alpha$-robust inf-sup condition for the model problem \eqref{A1}-\eqref{A3}. This is possible because both the control $f$ and the dual/Lagrange-multiplier $w$ belong to ${L^2(\Omega)}$. From a more general perspective, it turns out that this is the property that must be fulfilled in order for our approach to be successful: the control space and the dual space associated with the state equation must coincide. This will usually lead to additional regularity requirements for the state space.
Motivated by this discussion, let us consider an abstract problem of the form: \begin{equation} \label{G1} \min_{f \in W, \, u \in U} \left\{ \frac{1}{2} \@ifstar\@lrnorms\@norm{ Tu - d }^2_O + \frac{1}{2} \alpha \@ifstar\@lrnorms\@norm{ f }^2_W \right\} \end{equation} subject to \begin{equation} \label{G2} \brack{ Au, w } + (f,w)_W =0, \quad \forall w \in W. \end{equation} Here, \begin{itemize} \item $W$ is the dual \underline{and} control space, \item $U$ is the state space, \item $O$ is the observation space, \item $W$, $U$ and $O$ are Hilbert spaces. \end{itemize} Let us assume that \begin{itemize} \labeleditem{(A1)}{G:A1} $A:U \rightarrow W'$ is a continuous linear operator with closed range. In particular, there is a constant $c_1$ such that for all $u \in U$, \begin{equation*} \@ifstar\@lrnorms\@norm{u}_{U/ \ker A} = \inf_{\tilde u \in \ker A} \@ifstar\@lrnorms\@norm{u-\tilde u}_U \leq c_1\@ifstar\@lrnorms\@norm{A u}_{W'}. \end{equation*} \labeleditem{(A2)}{G:A2} $T:U \rightarrow O$ is linear and bounded, and invertible on the kernel of $A$. That is, there is a constant $c_2$ such that for all $u\in \ker A$, \begin{equation*} \@ifstar\@lrnorms\@norm{u}_U \leq c_2 \@ifstar\@lrnorms\@norm{T u}_O. 
\end{equation*} \end{itemize} It then follows that the KKT system associated with \eqref{G1}-\eqref{G2} is well-posed for every $\alpha > 0$: Determine $(f,u,w) \in W \times U \times W$ such that \begin{equation} \label{G3} \underbrace{\left[ \begin{matrix} \alpha M & \ze & M' \\ \ze & K & A' \\ M & A & 0 \end{matrix} \right]}_{=\mathcal{A}_{\alpha}} \left[ \begin{matrix} f \\ u \\ w \end{matrix} \right] = \left[ \begin{matrix} 0 \\ \tilde K d \\ 0 \end{matrix} \right], \end{equation} where \begin{align} \label{G4} M&: W \rightarrow W', &f &\mapsto (f,\subs)_W, \\ \label{G5} K&: U \rightarrow U', &u &\mapsto (Tu,T\subs)_O, \\ \tilde K&: O \rightarrow U', &d &\mapsto (d,T\subs)_O, \end{align} Note that, compared with \eqref{N1}, the boundary observation matrix $M_{\partial}$ has been replaced with the general observation operator $K$ in \eqref{G3}. We introduce scaled norms as follows. \begin{align*} \@ifstar\@lrnorms\@norm{ f }_{W_{\alpha}}^2 &= \alpha \@ifstar\@lrnorms\@norm{ f }_W^2, \\ \@ifstar\@lrnorms\@norm{ u }_{U_{\alpha}}^2 &= \alpha \@ifstar\@lrnorms\@norm{Au}_{W'}^2 + \@ifstar\@lrnorms\@norm{ Tu }_O^2, \\ \@ifstar\@lrnorms\@norm{ w }_{W_{\alpha^{-1}}}^2 &= \frac{1}{\alpha} \@ifstar\@lrnorms\@norm{ w }_W^2. \end{align*} We first show that $\@ifstar\@lrnorms\@norm{\subs}_{U_\alpha}$ is indeed a norm on $U$ when assumptions \ref{G:A1} and \ref{G:A2} hold. It suffices to show that $\@ifstar\@lrnorms\@norm{\subs}_{U_\alpha}$ is a norm equivalent to $\@ifstar\@lrnorms\@norm{\subs}_U$ when $\alpha=1$. 
We have \begin{equation}\label{G:norm_euiv1} \@ifstar\@lrnorms\@norm{Tu}_O + \@ifstar\@lrnorms\@norm{Au}_{W'} \leq \big(\@ifstar\@lrnorms\@norm{T}_{\mathcal{L}( U,O)} + \@ifstar\@lrnorms\@norm{A}_{\mathcal{L}(U,W')}\big) \@ifstar\@lrnorms\@norm{u}_U, \end{equation} and letting $\pi$ denote the orthogonal projection of $U$ onto $\ker A$, \begin{equation} \begin{split} \@ifstar\@lrnorms\@norm{u}_U&\leq \@ifstar\@lrnorms\@norm{\pi u}_U + \@ifstar\@lrnorms\@norm{u-\pi u}_U \\ &\leq c_2 \@ifstar\@lrnorms\@norm{T\pi u}_O + \@ifstar\@lrnorms\@norm{u-\pi u}_U \\ &\leq c_2 \@ifstar\@lrnorms\@norm{T u}_O + \big(1+c_2 \@ifstar\@lrnorms\@norm{T}_{\mathcal{L}(U,O)}\big)\@ifstar\@lrnorms\@norm{u-\pi u}_U \\ & \leq c_2 \@ifstar\@lrnorms\@norm{T u}_O +c_1\big(1+c_2\@ifstar\@lrnorms\@norm{T}_{\mathcal{L}(U,O)}\big) \@ifstar\@lrnorms\@norm{Au}_{W'}. \end{split} \end{equation} Here the last inequality follows from $\@ifstar\@lrnorms\@norm{u-\pi u}_{U} = \inf_{\tilde u\in \ker A}\@ifstar\@lrnorms\@norm{u-\tilde u}_U$ and assumption \ref{G:A1}. We set $\mathcal{V} = W_\alpha \times U_{\alpha} \times W_{\alpha^{-1}}$. As in Section \ref{Analysis}, $\mathcal{A}_{\alpha}:\mathcal{V} \rightarrow \mathcal{V} '$ can be shown to be an isomorphism, with parameter-independent estimates obtained in the weighted norms. \begin{theorem} \label{GG:isomorphism} There exist positive constants $c$ and $C$, independent of $\alpha$, such that for all nonzero $x \in \mathcal{V} $, \begin{equation} \label{G:isomorphism} c \leq \sup_{0\neq y \in \mathcal{V} } \frac{\brack{\mathcal{A}_{\alpha} x, y}}{ \@ifstar\@lrnorms\@norm{x}_{\mathcal{V} } \@ifstar\@lrnorms\@norm{y}_{\mathcal{V} }} \leq C. \end{equation} \end{theorem} We omit the full proof, which is analogous to that of Theorem \ref{main}.
The crucial part is the ``inf-sup condition'' of Lemma \ref{pre_inf-sup}, which is easily shown to hold in the abstract setting: \begin{equation*} \begin{aligned} \sup_{(f,u) \in W_\alpha \times U_\alpha} \frac{(f,w)_{W} + \brack{A u,w}}{\@ifstar\@lrnorms\@norm{ (f,u) }_{W_\alpha\times U_\alpha} \@ifstar\@lrnorms\@norm{ w }_{W_{\alpha^{-1}}}} & \geq \frac{(w,w)_{W}}{\@ifstar\@lrnorms\@norm{ (w,0) }_{W_\alpha \times U_\alpha} \@ifstar\@lrnorms\@norm{ w }_{W_{\alpha^{-1}}}} = 1. \end{aligned} \end{equation*} The coercivity condition of Lemma \ref{pre_coercivity} naturally holds in the prescribed norm on $U_\alpha$, since for $(f,u)\in W\times U$ such that $Mf + Au = 0$, \begin{equation*} \alpha \@ifstar\@lrnorms\@norm{f}_W^2 + \@ifstar\@lrnorms\@norm{T u}_O^2 = \frac{\alpha}{2} \@ifstar\@lrnorms\@norm{f}_W^2 + \frac{\alpha}{2} \@ifstar\@lrnorms\@norm{Au}_{W'}^2 + \@ifstar\@lrnorms\@norm{T u}_O^2 \geq \frac{1}{2} \left(\@ifstar\@lrnorms\@norm{f}_{W_\alpha}^2 + \@ifstar\@lrnorms\@norm{u}_{U_\alpha}^2 \right). \end{equation*} Note that the weighted norm now depends on $A$, and as a consequence, the estimates become $A$-independent. In fact, we obtain bounds for the constants $c$ and $C$ which are independent of $\alpha$ as well as the operators appearing in \eqref{G1}-\eqref{G2}. This is postponed to the next section, where sharp estimates are obtained for \eqref{G:isomorphism}. With the estimates \eqref{G:isomorphism}, Lemma \ref{precond:equivalence} provides a preconditioner for the operator $\mathcal{A}_{\alpha}$, given as \begin{equation}\label{G:precond} \mathcal{B}_{\alpha}= \left[ \begin{matrix} \alpha M & \ze & \ze \\ \ze & \alpha A' M^{-1} A + K & \ze \\ \ze & \ze & \frac{1}{\alpha} M \end{matrix} \right]^{-1} . \end{equation} The condition number of $\mathcal{B}_{\alpha} \mathcal{A}_{\alpha}$ will be bounded independently of $\alpha$.
It is, however, not clear how to find a computationally efficient approximation of $\mathcal{B}_{\alpha}$ in the abstract setting of \eqref{G1}-\eqref{G2}. \subsubsection*{Example 1} The problem \eqref{A1}-\eqref{A3} fits in the abstract framework presented in this section when we assume that the state has ${H^2(\Omega)}$ regularity. We set $W= {L^2(\Omega)}$, $U={\bar{H}^2(\Omega)}$, $A = 1-\Delta$, and $T:{\bar{H}^2(\Omega)}\rightarrow {L^2(\partial \Omega)}$ is a trace operator, see (\ref{G5}). Since $A$ is a continuous isomorphism, assumptions \ref{G:A1} and \ref{G:A2} are both valid. The inner product on $U_\alpha$ takes the form \begin{equation*} \begin{split} (u, v)_{U_\alpha} &= \brack{K u, v} + \alpha \brack{A M^{-1} A u, v} \\ &= \int_{\partial\Omega} u v \, ds + \alpha\int_\Omega (u-\laplace u)(v-\laplace v) \, dx \\ &= \int_{\partial\Omega} u v \, ds + \alpha\int_\Omega D^2u : D^2v + 2 \grad u \cdot \grad v \, +uv \,dx, \end{split} \end{equation*} where $D^2 u$ denotes the Hessian of $u$, and the last equality follows from the boundary condition $\partial u /\partial \mathbf{n} = 0$ imposed on ${\bar{H}^2(\Omega)}$. The resulting preconditioner is the one that was used in the numerical experiments, detailed in Section \ref{Numerical_experiments}, and it is spectrally equivalent to the preconditioner defined in \eqref{def_Bca}. \subsubsection*{Example 2} Let $U$, $W$, and $K$ be as in Example 1, but let us set $A = -\laplace$. Now $A$ has non-trivial kernel, consisting of the a.e. constant functions, and for constant $u$ we have \begin{equation*} \@ifstar\@lrnorms\@norm{Tu}_{L^2(\partial \Omega)} = \sqrt{\frac{\@ifstar\@lrabs\@abs{\partial \Omega}}{\@ifstar\@lrabs\@abs{\Omega}}} \@ifstar\@lrnorms\@norm{u}_{\bar{H}^2(\Omega)}. \end{equation*} Since assumptions \ref{G:A1} and \ref{G:A2} are valid, the optimality system is still well-posed. 
In this case the inner product on $U_\alpha$ is given by \begin{equation*} \begin{split} (u, v)_{U_\alpha} = \int_{\partial\Omega} u v \, ds + \alpha\int_\Omega D^2u:D^2v \,dx. \end{split} \end{equation*} \subsubsection*{Example 3} Let us consider the ``prototype'' problem: \begin{equation*} \min_{f, \, u} \left\{ \frac{1}{2}\@ifstar\@lrnorms\@norm{ u - d }_{{L^2(\Omega)}}^2 + \frac{\alpha}{2}\@ifstar\@lrnorms\@norm{ f }_{{L^2(\Omega)}}^2 \right\} \end{equation*} subject to \begin{alignat*}{2} - \laplace u + u + f&= 0 &&\quad \mbox{in } \Omega, \\ \frac{\partial u}{\partial \mathbf{n}} &= 0 &&\quad \mbox{on } \partial \Omega. \end{alignat*} Note that we here consider the case in which observation data is assumed to be available throughout the entire domain $\Omega$ of the state equation. If the usual variational form of the PDE is used, i.e., \begin{equation}\label{G:ex_var1} (u,w)_{{H^1(\Omega)}} + (f,w)_{{L^2(\Omega)}} = 0, \quad\forall w \in {H^1(\Omega)}, \end{equation} then the control space equals ${L^2(\Omega)}$, whereas the dual space is ${H^1(\Omega)}$. The preconditioning strategy presented in this section is therefore not applicable. If instead we can assume ${H^2(\Omega)}$-regularity, we can use the variational form \begin{equation}\label{G:ex_var2} (- \laplace u +u,w)_{{L^2(\Omega)}} + (f,w)_{{L^2(\Omega)}} =0, \quad \forall w \in {L^2(\Omega)}. \end{equation} Now, the control and dual spaces both equal ${L^2(\Omega)}$. The methodology presented in this section can thus be applied, and a robust preconditioner is obtained. Compared with the preconditioner for the problem with boundary observations only, see Section \ref{Preconditioning}, equation (\ref{def_Bca}), the only change is the replacement of $M_{\partial}$, in the $(2,2)$ block of $\mathcal{B}_{\alpha}$ with $M$. 
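Whether the resulting preconditioner is actually robust can be checked directly on a small example. The sketch below assembles a finite-dimensional analogue of the ``prototype'' problem in the strong form \eqref{G:ex_var2} and computes the spectrum of $\mathcal{B}_{\alpha}\mathcal{A}_{\alpha}$, with $\mathcal{B}_{\alpha}$ as in \eqref{G:precond}. The matrices (a 1D finite-difference stand-in for $1-\laplace$ with a lumped mass matrix) are hypothetical toy choices of our own, not the discretization used in the paper.

```python
import numpy as np
from scipy.linalg import eigh

# Hypothetical 1D analogue of the "prototype" problem, full observation K = M.
m = 30
h = 1.0 / (m - 1)
Z = np.zeros((m, m))
M = h * np.eye(m)                          # lumped mass matrix
S = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
     - np.diag(np.ones(m - 1), -1)) / h
S[0, 0] = S[-1, -1] = 1.0 / h              # Neumann closure
A = M + S                                  # stand-in for 1 - Laplace (strong form)
R = A.T @ np.linalg.solve(M, A)            # A' M^{-1} A
K = M                                      # observations on all of Omega

conds = []
for alpha in [1.0, 1e-2, 1e-4, 1e-6]:
    Aa = np.block([[alpha * M, Z, M], [Z, K, A.T], [M, A, Z]])
    Binv = np.block([[alpha * M, Z, Z],
                     [Z, K + alpha * R, Z],
                     [Z, Z, M / alpha]])
    lam = eigh(Aa, Binv, eigvals_only=True)   # spectrum of B_alpha A_alpha
    conds.append(np.abs(lam).max() / np.abs(lam).min())

print(conds)  # bounded uniformly in alpha, consistent with (G:isomorphism)
```

In our runs the computed condition numbers stay below a modest constant (roughly $4$) for the whole range of $\alpha$.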
We remark that in \cite{s-z} and \cite{Pea12}, parameter-robust preconditioners were proposed for the ``prototype'' problem, using the standard variational formulation \eqref{G:ex_var1} of the PDE. Those methods do not require improved regularity for the state space. Instead, they require that observations are available throughout the computational domain. \section{Eigenvalue analysis} \label{Alternative} In Section \ref{Generalization} it was shown that the condition number of $\mathcal{B}_{\alpha}\mathcal{A}_{\alpha}$, with $\mathcal{A}_{\alpha}$ defined in \eqref{G3} and $\mathcal{B}_{\alpha}$ defined in \eqref{G:precond}, can be bounded independently of $\alpha$, as well as independently of the operators appearing in \eqref{G1}-\eqref{G2}. Moreover, the numerical experiments indicate that the eigenvalues are contained in three intervals, independently of the regularization parameter $\alpha$, see Figure \ref{fig:1}. In this section we detail the structure of the spectrum of the preconditioned system considered in Section \ref{Generalization}, and we obtain sharp estimates for the constants appearing in Theorem \ref{GG:isomorphism}. We consider self-adjoint linear operators $\mathcal{A}_{\alpha}$ and $\mathcal{B}_\alpha $, \begin{equation} \label{aa:systems} \mathcal{A}_{\alpha} = \left[ \begin{matrix} \alpha M & \ze & M' \\ \ze & K & A ' \\ M & A & 0 \end{matrix} \right] \quad\mbox{and}\quad \mathcal{B}_\alpha^{-1} = \left[ \begin{matrix} \alpha M & \ze & \ze \\ \ze & K + \alpha R & \ze \\ \ze & \ze & \alpha^{-1} M \end{matrix} \right] \end{equation} where $R$ is defined by \begin{equation} R = A ' M ^{-1} A . \label{aa:def_R} \end{equation} We assume that $A:U\rightarrow W'$ and $M:W\rightarrow W'$ are continuous operators, for some Hilbert spaces $U$ and $W$. In addition we will make use of the following assumptions. 
\begin{itemize} \labeleditem{(B1)}{AA:A1} $ M $ is self-adjoint and positive definite, \labeleditem{(B2)}{AA:A2} $ K + R$ is positive definite, \labeleditem{(B3)}{AA:A3} $ K $ is self-adjoint and positive semi-definite. \end{itemize} Assumptions \ref{AA:A1}-\ref{AA:A3} ensure that $\mathcal{B}_\alpha $ is self-adjoint and positive definite. In particular, assumptions \ref{AA:A1}-\ref{AA:A3} hold for $\mathcal{A}_{\alpha}$ as in \eqref{G3}, provided that the assumptions of Section \ref{Generalization} hold. For simplicity, we also assume that $\mathcal{A}_{\alpha}$ and $\mathcal{B}_\alpha $ are finite-dimensional operators. \begin{theorem} \label{aa:th} Let $p$, $q$, and $r$ be the polynomials \begin{equation*} p(\lambda) = 1-\lambda, \quad q(\lambda) = 1+\lambda p(\lambda), \quad r(\lambda) = p(\lambda) - \lambda q(\lambda). \end{equation*} Let $q_1<q_2$ and $r_1 < r_2 <r_3$ be the roots of $q$ and $r$, respectively. The spectrum of $\mathcal{B}_\alpha \mathcal{A}_{\alpha}$ is contained within three intervals, determined by the roots of $p$, $q$ and $r$, independently of $\alpha$: \begin{equation} \operatorname{sp}(\mathcal{B}_\alpha \mathcal{A}_{\alpha}) \subset [r_1,q_1] \cup [r_2,1] \cup [q_2,r_3]. \label{aa:spectral_bounds} \end{equation} Consequently, the spectral condition number of $\mathcal{B}_\alpha \mathcal{A}_{\alpha}$ is bounded, uniformly in $\alpha$, \begin{equation} k (\mathcal{B}_\alpha \mathcal{A}_{\alpha}) \leq \frac{r_3}{r_2} \approx 4.05. \label{aa:condition} \end{equation} If $ K $ has a nontrivial kernel, inequality \eqref{aa:condition} becomes an equality.
\end{theorem} \subsubsection*{Proof} Consider the equivalent generalized eigenvalue problem \begin{align} \left[ \begin{array}{ccc} \alpha M & \ze & M' \\ \ze & K & A ' \\ M & A & 0 \end{array} \right] \left[\begin{matrix} f \\ u \\ w \end{matrix} \right] \quad = \quad \lambda\, \left[ \begin{array}{ccc} \alpha M & \ze & \ze \\ \ze & K + \alpha R & \ze \\ \ze & \ze & \alpha^{-1} M \end{array} \right] \left[\begin{matrix} f \\ u \\ w \end{matrix} \right] \label{aa:gep} \end{align} We show that \eqref{aa:gep} admits no nontrivial solutions unless $\lambda$ is as in \eqref{aa:spectral_bounds}. Since $ M $ is a self-adjoint isomorphism, by assumption \ref{AA:A1}, we can rewrite \eqref{aa:gep} as the three identities \begin{align} \alpha p f + w & = 0, \label{aa:gep1}\\ p K u + A ' w - \lambda \alpha R u & = 0, \label{aa:gep2}\\ f + M ^{-1} A u -\lambda \alpha^{-1} w & = 0. \label{aa:gep3} \end{align} Assume that $\lambda$ is not contained within the three closed intervals of \eqref{aa:spectral_bounds}. Then $p \neq 0$, and we can use \eqref{aa:gep1} to eliminate $ f $ from \eqref{aa:gep3}: \begin{equation} \label{special1} \begin{split} 0 &= \alpha p ( f + M ^{-1} A u - \lambda \alpha^{-1} w ) = \alpha p M ^{-1} A u - (1 +\lambda p) w \\ &= \alpha p M ^{-1} A u - q w . \end{split} \end{equation} Since $q$ is nonzero, we can use \eqref{special1} to eliminate $ w $ from \eqref{aa:gep2}, \begin{equation} \begin{split} 0 &= q (p K u + A ' w - \lambda \alpha R u ) = qp K u + \alpha(p - \lambda q)R u \\ &= qp K u + \alpha r R u, \label{aa:x2} \end{split} \end{equation} where the identity \eqref{aa:def_R} was used. By assumption, $pq$ and $r$ are both nonzero. Moreover, it can be easily seen that $pq$ and $r$ have the same sign outside of the bounded intervals of \eqref{aa:spectral_bounds}. From assumptions \ref{AA:A1}-\ref{AA:A3}, and since $\alpha > 0$, we conclude that $qp K + \alpha r R$ is a self-adjoint, either positive or negative definite operator.
Then \eqref{aa:x2} only admits trivial solutions, hence $\lambda$ cannot be an eigenvalue of $\mathcal{B}_\alpha \mathcal{A}_{\alpha}$. The estimate \eqref{aa:condition} follows from \eqref{aa:spectral_bounds}, noting that $\vert \operatorname{sp}(\mathcal{B}_\alpha \mathcal{A}_{\alpha}) \vert \subset [r_2, r_3]$. From \eqref{aa:x2} it can be seen that the roots of $r$ are eigenvalues of $\mathcal{B}_\alpha \mathcal{A}_{\alpha}$ if $\ker K $ is nontrivial. \\ \rule{2mm}{2mm} \subsection*{Remark} If $A = (1 - \laplace):{\bar{H}^2(\Omega)}\rightarrow{L^2(\Omega)}'$, then $R = A' M^{-1} A$ is characterized by a bilinear form $b(\cdot,\cdot)$ as in \eqref{N2.1}: \begin{equation*} \begin{split} \brack[\big]{ A' M^{-1} A u, v } &= \int_\Omega \laplace u \laplace v + 2 \grad u \cdot \grad v + u v \dx \\ & = (u,v)_{{H^2(\Omega)}} + \int_\Omega \grad u \cdot \grad v \dx = b(u, v). \end{split} \end{equation*} For discretizations $U_h\subset U$ and $W_h\subset W$ of $A$ such that $A(U_h) \subset M(W_h)$, the discretization of $b$ coincides with $A_h'M_h^{-1} A_h$. This follows from an argument similar to that in the proof of Lemma \ref{compatibility_lemma}, and as a consequence, Theorem \ref{aa:th} can be applied to the preconditioned discrete systems considered in Section \ref{Numerical_experiments}. \section{Discussion} \label{Conclusions} Previously, parameter-robust preconditioners for PDE-constrained optimization problems have been successfully developed, provided that observation data is available throughout the entire domain of the state equation. For many important inverse problems arising in industry and science, this is an unrealistic requirement. On the contrary, observation data will typically only be available in subregions of the domain of the state variable, or at the boundary of this domain. We have therefore explored the possibility of constructing robust preconditioners for PDE-constrained optimization problems with limited observation data.
For an elliptic control problem, with boundary observations only, we have developed a regularization-robust preconditioner for the associated KKT system. Consequently, the number of \textsc{Minres}{} iterations required to solve the problem is bounded independently of both the regularization parameter $\alpha$ and the mesh size $h$. In order to achieve this, it was necessary to write the elliptic state equation in a non-standard, non-self-adjoint variational form. If this approach is employed, then the control and the Lagrange multiplier will belong to the same Hilbert space, which leads to extra regularity requirements for the state. This fact makes it possible to construct parameter-weighted metrics such that the constants appearing in the Brezzi conditions, as well as the constants in the inequalities expressing the boundedness of the KKT system, are independent of $\alpha$ and $h$. Consequently, the spectrum of the preconditioned KKT system is uniformly bounded with respect to $\alpha$ and $h$, which is ideal for the \textsc{Minres}{} scheme. These properties were illustrated through a series of numerical experiments, and the preconditioned \textsc{Minres}{} scheme handled our model problem excellently. The use of a non-self-adjoint form of the elliptic state equation leads to additional challenges for constructing discretization schemes and suitable multigrid methods. More specifically, it becomes necessary to implement an FE space approximating $H^2$. We accomplished this by a $C^1$ discretization that is conforming in $H^2$. The method employed does, however, have strong restrictions on the mesh, which seemingly must be composed of rectangles. \bibliographystyle{plain}
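Theorem \ref{aa:th} is easy to probe numerically. In the sketch below (an editor's check, not part of the paper) small random matrices stand in for $M$, $K$ and $A$: $M$ is symmetric positive definite, $A$ is invertible, and $K$ is symmetric positive semi-definite with a two-dimensional kernel, so that by the last claim of the theorem the bound \eqref{aa:condition} is attained. The eigenvalues of $\mathcal{B}_\alpha\mathcal{A}_\alpha$ are computed for $\alpha$ ranging over four orders of magnitude.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8

def spd(k):
    # random symmetric positive definite matrix
    G = rng.standard_normal((k, k))
    return G @ G.T + k * np.eye(k)

M = spd(n)                                        # stands in for the mass matrix M
A = rng.standard_normal((n, n)) + 3 * np.eye(n)   # invertible, stands in for the state operator A
G = rng.standard_normal((n, n - 2))
K = G @ G.T                                       # symmetric PSD with a 2-dimensional kernel
R = A.T @ np.linalg.solve(M, A)                   # R = A' M^{-1} A

# roots of q(l) = 1 + l - l^2 and of r(l) = l^3 - l^2 - 2l + 1
q1, q2 = np.sort(np.roots([-1.0, 1.0, 1.0]).real)
r1, r2, r3 = np.sort(np.roots([1.0, -1.0, -2.0, 1.0]).real)

Z = np.zeros((n, n))
all_in_bounds, max_cond = True, 0.0
for alpha in (1e-2, 1.0, 1e2):
    Aa = np.block([[alpha * M, Z, M.T],
                   [Z, K, A.T],
                   [M, A, Z]])
    Binv = np.block([[alpha * M, Z, Z],
                     [Z, K + alpha * R, Z],
                     [Z, Z, M / alpha]])
    # sp(B_a A_a) = generalized eigenvalues of (Aa, Binv); real for this pencil
    lam = np.linalg.eigvals(np.linalg.solve(Binv, Aa)).real
    tol = 1e-6
    ok = (((lam >= r1 - tol) & (lam <= q1 + tol))
          | ((lam >= r2 - tol) & (lam <= 1.0 + tol))
          | ((lam >= q2 - tol) & (lam <= r3 + tol)))
    all_in_bounds = all_in_bounds and bool(ok.all())
    max_cond = max(max_cond, float(np.abs(lam).max() / np.abs(lam).min()))
```

Up to floating-point tolerance, every eigenvalue lands in $[r_1,q_1]\cup[r_2,1]\cup[q_2,r_3]$ for all tested $\alpha$, and since $K$ has a nontrivial kernel the observed condition number matches the bound $r_3/r_2\approx 4.05$.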
\section{Introduction and preliminaries} \subsection*{Preamble} Extensions and liftings are classical notions in Operator Theory. To give some examples, we recall that a linear bounded Hilbert space operator is an isometry if and only if it is the restriction of a unitary operator to an invariant subspace. Also, it is known from the Sz.-Nagy-Foias dilation theory that an operator $C$ is a contraction if and only if it lifts to an isometry $V$; that is, if and only if its adjoint $C^*$ is the restriction of a coisometry $V^*$ to an invariant subspace (see \cite{FF,SFb}). In this paper, we prove some $2$-isometric lifting and extension results for Hilbert space concave operators, that is, for operators satisfying the inequality \eqref{eq11} below. A $2$-isometry is an operator for which the equality in \eqref{eq11} holds true. The notion of Cauchy dual operator for a left invertible operator is more recent, being introduced in $2001$ by Shimorin in his seminal study \cite{Sh} of Wold-type decompositions and wandering subspaces. Here we study the Cauchy dual operators for the special class of concave operators satisfying the condition \eqref{eq13} below. \subsection*{Notation and basic definitions} For a complex Hilbert space $\mathcal{H}$ we denote by $\mathcal{B(H)}$ the Banach algebra of all bounded linear operators on $\mathcal{H}$ with the unit element $I=I_{\mathcal{H}}$ (the identity operator). For $T\in \mathcal{B(H)}$ the kernel and the (closed) range of $T$ are denoted by $\mathcal{N}(T)$ and $\overline{\mathcal{R}(T)}$, respectively. Also, $T^*\in \mathcal{B(H)}$ stands for the adjoint operator of $T$, and the orthogonal projection in $\mathcal{B(H)}$ onto a closed subspace $\mathcal{M} \subset \mathcal{H}$ is denoted by $P_{\mathcal{M}}$. For $T\in \mathcal{B(H)}$ we consider the operator $\Delta_T:=T^*T-I$. The operator $T$ is called {\it expansive} (respectively {\it contractive}) if $\Delta_T\ge 0$ (respectively $\Delta_T\le 0$).
If $T$ is a contraction, then $D_T=-\Delta_T$ is the defect operator and $\mathcal{D}_T=\overline{\mathcal{R}(D_T)}$ is the defect space of $T$. Recall that an operator $T\in \mathcal{B(H)}$ is said to be \emph{normal} if $TT^*=T^*T$, \emph{quasinormal} if $TT^*T=T^*T^2$, \emph{hyponormal} if $TT^*\le T^*T$, an \emph{isometry} if $T^*T=I$ and, finally, $T$ is \emph{unitary} if it is a normal isometry. A (closed) subspace $\mathcal{H}_0\subset \mathcal{H}$ is \emph{invariant} for $T\in \mathcal{B(H)}$ if $T\mathcal{H}_0\subset \mathcal{H}_0$, and $\mathcal{H}_0$ is \emph{reducing} for $T$ if $T\mathcal{H}_0 \subset \mathcal{H}_0$ and $T^*\mathcal{H}_0 \subset \mathcal{H}_0$. If $T\in \mathcal{B(H)}$ and $\mathcal{H}$ is a closed subspace of another Hilbert space $\mathcal{K}$, then $S\in \mathcal{B}(\mathcal{K})$ is an {\it extension} of $T$ if $\mathcal{H}$ is invariant for $S$ and $T=S|_{\mathcal{H}}$. This definition can be rephrased as $J_{\mathcal{H},\mathcal{K}}T=SJ_{\mathcal{H},\mathcal{K}}$ where $J_{\mathcal{H},\mathcal{K}}:\mathcal{H}\to \mathcal{K}$ is the natural embedding of $\mathcal{H}$ into $\mathcal{K}$. We also say that $S$ is a {\it lifting} of $T$ if $S^*$ is an extension of $T^*$; that is if $P_{\mathcal{K},\mathcal{H}}S=TP_{\mathcal{K},\mathcal{H}}$ where $P_{\mathcal{K},\mathcal{H}}=J^*_{\mathcal{H},\mathcal{K}}$ is the projection of $\mathcal{K}$ onto $\mathcal{H}$. An operator $T$ on $\mathcal{H}$ is called \emph{subnormal} if it has a normal extension on a Hilbert space $\mathcal{K}\supset \mathcal{H}$. Recall (\cite{Sh}, \cite{Ch, CC}, \cite{KFAH}) that an operator $T$ on $\mathcal{H}$ is called {\it concave} if it satisfies the inequality \begin{equation}\label{eq11} T^{*2}T^2-2T^*T+I\le 0. \end{equation} The operator $T$ is said to be a 2-{\it isometry} whenever the equality in \eqref{eq11} holds true. 
In this case, according to \cite{Ag, AS1, AS2}, the above operator $\Delta_T = T^*T-I$ is called the {\it covariance operator} of $T$, while the scalar ${\rm cov}(T):=\|\Delta_T\|^{1/2}$ is called the {\it covariance} of $T$. It is obvious from the inequality \eqref{eq11} that $\Delta_T\ge 0$, i.e. $T$ is expansive, hence $T$ is left invertible. For a positive operator $A\in \mathcal{B(H)}$ and an integer $m\ge 1$ we define the operator \begin{equation}\label{eq12} B_A^m(T):=\sum_{j=0}^m (-1)^j\binom{m}{j} T^{*j}AT^j, \quad T\in \mathcal{B(H)}. \end{equation} According to \cite{JKKL} we say that $T$ is $(A,m)$-{\it expansive} (or $(A,m)$-{\it contractive}) if $B_A^m(T)\le 0$ ($B_A^m(T)\ge 0$). Also, $T$ is said to be $(A,m)$-{\it hyperexpansive} (or $(A,m)$-{\it hypercontractive}) if $B_A^n(T)\le 0$ ($B_A^n(T)\ge 0$) for all positive integers $n\le m$. Finally, $T$ is called {\it completely} $A$-{\it hyperexpansive} ({\it completely} $A$-{\it hypercontractive}) if $B_A^m(T)\le 0$ ($B_A^m(T)\ge 0$) for all $m\ge 1$. When $A=I$ we denote $B_m(T)=B_I^m(T)$ and we refer simply to $m$-(hyper)expansivity instead of $(I,m)$-(hyper)expansivity etc. In particular, following the terminology of \cite{CFM, MMS, S-2006, S-2009}, we say that $T$ is an $A$-{\it contraction} if $T$ is $(A,1)$-contractive, i.e. $T^*AT\le A$, and $T$ is an $A$-{\it isometry} when $T^*AT=A$. Every $A$-isometry is completely $A$-hypercontractive (and completely $A$-hyperexpansive). In this context, the inequality \eqref{eq11} can be written as $T^*\Delta_T T\le \Delta_T$. Hence, concave operators are $\Delta_T$-contractions, or $2$-hyperexpansive operators, while $2$-isometries are $\Delta_T$-isometries. Such general classes of operators were studied by many authors, from several points of view.
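A standard concrete $2$-isometry is the Dirichlet shift, i.e. the unilateral weighted shift with weights $w_k=\sqrt{(k+2)/(k+1)}$. The numerical sketch below (an editor's check, not part of the paper) evaluates the operators $B_m(T)$ of \eqref{eq12} (with $A=I$) on a finite truncation and verifies expansivity, $B_1(T)\le 0$, together with the $2$-isometry identity $B_2(T)=0$; since the truncation forces $Te_{N-1}=0$, the last two diagonal entries are excluded.

```python
import numpy as np
from math import comb

def B(m, T):
    # B_m(T) = sum_{j=0}^m (-1)^j binom(m, j) T*^j T^j, i.e. B_A^m(T) with A = I
    S, Tj = np.zeros_like(T), np.eye(T.shape[0])
    for j in range(m + 1):
        S += (-1) ** j * comb(m, j) * Tj.T @ Tj
        Tj = T @ Tj
    return S

N = 60
k = np.arange(N - 1)
w = np.sqrt((k + 2.0) / (k + 1.0))   # Dirichlet-shift weights w_k
T = np.diag(w, -1)                   # T e_k = w_k e_{k+1}, truncated to N dimensions

interior = slice(0, N - 2)           # the truncation spoils only the last rows
expansive = bool(np.all(np.diag(B(1, T))[interior] <= 1e-12))       # B_1(T) <= 0
two_isometric = bool(np.allclose(np.diag(B(2, T))[interior], 0.0))  # B_2(T) = 0
```

For a weighted shift every $B_m(T)$ is a diagonal matrix, so checking the (interior) diagonal entries settles the semi-definiteness.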
We refer the reader to \cite{Ag}, \cite{AS1, AS2}, \cite{Al}, \cite{ACJS}, \cite{At}, \cite{Ch, CC}, \cite{CFM}, \cite{Ex}, \cite{JKKL}, \cite{KFAH}, \cite{MMS, MS}, \cite{McC}, \cite{Ol}, \cite{R}, \cite{SS}, \cite{Sh}, \cite{S-2006, S-2009} for some of these contributions. \subsection*{Organization of the paper} In Section 2 we construct several $2$-isometric liftings for a given concave operator. Also, we show that concave operators have certain extensions with block matrices containing contractions and unitary operators on the main diagonal, hence having their spectrum in the closed unit disc (as in \cite{McC}, \cite{Ba}). Finally, we characterize concave operators $T$ which are $\Delta_T$-{\it regular}, that is, which satisfy the condition \begin{equation}\label{eq13} \Delta_TT=\Delta_T^{1/2}T\Delta_T^{1/2}. \end{equation} In Section 3 we study properties of the Cauchy dual $T'=T(T^*T)^{-1}$ associated to a given concave operator $T$. The Cauchy dual operator was studied in \cite{Sh}, \cite{Ch}, \cite{CC} and recently in \cite{ACJS}. We describe the $\Delta_T$-regular concave operators in terms of $T'$ and we solve the Cauchy dual subnormality problem (posed in \cite{ACJS} for $2$-isometries) for this class of operators. Recall that J. Agler showed in \cite{Ag} that a contraction $T$ is subnormal if and only if it is completely hypercontractive, while in \cite{ACJS} it was proved that if $T$ is a $\Delta_T$-regular $2$-isometry, then $T'$ is subnormal. Here we show that for $T$ concave and $\Delta_T$-regular the Cauchy dual contraction $T'$ is subnormal if and only if $T$ is completely hyperexpansive. In Section 4 we present several conditions which are equivalent to the quasinormality of the compression to $\overline{\mathcal{R}(\Delta_T)}$ of a $\Delta_T$-regular concave operator $T$. We do the same for the compression of $T'$ to $\overline{\mathcal{R}(\Delta_T)}$.
As usual, by the compression of $T$ to a subspace $\mathcal{H}_0\subset \mathcal{H}$ we mean the operator $P_{\mathcal{H}_0}T|_{\mathcal{H}_0}$. \medskip \section{Liftings and extensions of concave operators} \medskip We proceed to the construction of two special 2-isometric liftings for the class of concave operators. Recall that a lifting $S$ on $\mathcal{K} \supset \mathcal{H}$ is said to be \emph{minimal} if $\mathcal{K}=\bigvee_{n\ge 0} S^n\mathcal{H}.$ \begin{theorem}\label{te21} Let $T\in \mathcal{B(H)}$ be a concave operator. Then: \begin{itemize} \item[(i)] $T$ has a 2-isometric lifting $S$ on a Hilbert space $\mathcal{K}=\mathcal{H}^{\perp} \oplus \mathcal{H}$ with the covariance ${\rm cov}(S)=\sqrt{2}{\rm max}\{1,\|\Delta_T\|^{1/2}\}$ and having a block matrix of the form \begin{equation}\label{eq21} S= \begin{pmatrix} W & X\\ 0 & T \end{pmatrix} \begin{bmatrix} \mathcal{H}^{\perp}\\ \mathcal{H} \end{bmatrix}, \quad W= \begin{pmatrix} S_+^1 & \sqrt{2}E_0\\ 0 & S_+^0 \end{pmatrix} \begin{bmatrix} \mathcal{H}_1\\ \mathcal{H}_0 \end{bmatrix}, \quad W^*X=0, \end{equation} where $W$ is a 2-isometry on a nontrivial decomposition $\mathcal{H}^{\perp} = \mathcal{H}_1\oplus \mathcal{H}_0$ with $S_+^0,S_+^1$ unilateral shifts and $E_0$ is an isometry. \item[(ii)] $T$ has a minimal 2-isometric lifting $S$, with $W=S|_{\mathcal{H}^{\perp}}$ in \eqref{eq21} an isometry such that $W^*X=0$ and ${\rm cov}(S)=\|\Delta_T\|^{1/2}$. \end{itemize} \end{theorem} \begin{proof} Assume $T$ concave, that is $T^*\Delta_TT\le \Delta_T$. Let $\Omega_T:=\Delta_T-T^*\Delta_TT$. Clearly, we may assume $\Delta_T\neq 0$ and $\Omega_T\neq 0$, otherwise $T$ is an isometry or a 2-isometry, respectively. Then it is easy to see that for every $h\in \mathcal{H}$ and any integer $n\ge 1$ one has $$ \|\Delta_T^{1/2}h\|^2=\sum_{j=0}^n\|\Omega_T^{1/2}T^jh\|^2+\|\Delta_T^{1/2}T^{n+1}h\|^2. 
$$ Since $T^{*(n+1)}\Delta_TT^{n+1}\le T^{*n}\Delta_TT^n\le \Delta_T$, the sequence $\{T^{*n}\Delta_TT^n\}$ converges strongly to an operator $A_T\ge 0$ such that $T^*A_TT=A_T$. So, from the above equality we obtain the relation $$ \|\Delta_T^{1/2}h\|^2=\sum_{j=0}^{\infty} \|\Omega_T^{1/2}T^jh\|^2+\|A_T^{1/2}h\|^2, $$ whence we have $\overline{\Delta_T\mathcal{H}}\subset \mathcal{H}_0 \oplus \overline{A_T\mathcal{H}}$ where $\mathcal{H}_0=l_+^2(\overline{\Omega_T\mathcal{H}})$. For the construction of $S$ in (i) we use the operator $\Delta_T$. The previous equality suggests that one can first obtain a one step lifting $T_0$ of $T$ on $\mathcal{H}\oplus \mathcal{H}_0$ and later a second step lifting of $T$ (one step for $T_0$) on $\mathcal{H}\oplus \mathcal{H}_0 \oplus \mathcal{H}_1$ where $\mathcal{H}_1=l_+^2(\mathcal{H}_0\oplus \overline{(\Delta_T-\Omega_T)\mathcal{H}})$. Clearly, since $\Delta_T-\Omega_T=T^*\Delta_TT$, we have $(\Delta_T-\Omega_T)\mathcal{H} \subset \mathcal{H}_0\oplus A_T\mathcal{H}$. We define $S$ on $\mathcal{K}=\mathcal{H}_1\oplus (\mathcal{H}_0\oplus \mathcal{H})=\mathcal{H}_1\oplus \mathcal{H}_0\oplus \mathcal{H}$ by the block matrices \begin{equation}\label{eq22} S= \begin{pmatrix} S_+^1& E\\ 0 & T_0 \end{pmatrix} = \begin{pmatrix} S_+^1 & \sqrt{2}E_0 & J_1(\Delta_T-\Omega_T)^{1/2}\\ 0 & S_+^0 & J_0\Omega_T^{1/2}\\ 0 & 0 & T \end{pmatrix}. \end{equation} Here $S_+^0, S_+^1$ are the forward shifts on $\mathcal{H}_0$, respectively $\mathcal{H}_1$, while $E_0,J_1,J_0$ are the embedding mappings of $\mathcal{H}_0$ and $\overline{(\Delta_T-\Omega_T)\mathcal{H}}$ into $\mathcal{H}_1$, respectively of $\overline{\Omega_T \mathcal{H}}$ into $\mathcal{H}_0$. Also, the operator $E: \mathcal{H}_0 \oplus \mathcal{H}\to \mathcal{H}_1$ and the lifting $T_0$ of $T$ on $\mathcal{H}_0\oplus \mathcal{H}$ have the matrix representations which appear in the second matrix of $S$ in \eqref{eq22}, respectively.
Since $S_+^{1*}E=0$ and $S_+^{0*}J_0\Omega_T^{1/2}=0$, we have $$ \Delta_{T_0}=0\oplus (2\Delta_T-T^*\Delta_TT), \quad \Delta_S=0\oplus [(2I_{\mathcal{H}_0}\oplus T^*\Delta_TT)+\Delta_{T_0}]=0\oplus 2(I_{\mathcal{H}_0}\oplus \Delta_T). $$ Now a simple computation gives the equality $S^*\Delta_SS=\Delta_S$, i.e. $S$ is a 2-isometry and by \eqref{eq22}, $S$ is a lifting of $T$. Obviously, $S$ can be expressed in the terms of $W$ and $X$ as in \eqref{eq21} with $W^*X=0$, $W$ being a 2-isometry (as the restriction of $S$ to its invariant subspace $\mathcal{H}_1 \oplus \mathcal{H}_0$). Also, from the above representation of $\Delta_S$ we get ${\rm cov}(S)=\|\Delta_S\|^{1/2}=\sqrt{2} {\rm max}\{1, \|\Delta_T\|^{1/2}\}$ taking into account that $\Delta_T\ge 0$. The assertion (i) is now proved. To show the assertion (ii) we use that $T$ is expansive, i.e. $T^*T\ge I$. Thus, by a result of Treil and Volberg (see \cite{BFF}, \cite{TV}), there exist a Hilbert space $\mathcal{K}'\supset \mathcal{H}$, an isometry $V'$ on $\mathcal{K}'$ and an operator $B:\mathcal{H} \to \mathcal{K}'$ such that $BT=V'B$, $P_{\mathcal{H}}B=\Delta_T^{1/2}$ and $\|B\|=\|\Delta_T\|^{1/2}$. Then the operator $A=B^*B$ satisfies the conditions: $T^*AT=A$, $\Delta_T \le A$ and $\|A\|=\|\Delta_T\|$. Now we define the lifting $S_0$ of $T$ on the space $\mathcal{K}_0=\mathcal{H}^{\perp}\oplus \mathcal{H}$, where $\mathcal{H}^{\perp}=l_+^2(\overline{(A-\Delta_T)\mathcal{H}})$, by \begin{equation}\label{eq23} S_0= \begin{pmatrix} S_+ & J(A-\Delta_T)^{1/2}\\ 0 & T \end{pmatrix}. \end{equation} Here $S_+$ is the forward shift on $\mathcal{H}^{\perp}$ and $J$ is the embedding mapping of $\overline{(A-\Delta_T)\mathcal{H}}$ into $\mathcal{H}^{\perp}$. It is clear that $\Delta_{S_0}= 0\oplus A$, which immediately gives $S_0^*\Delta_{S_0}S_0=\Delta_{S_0}$. Thus $S_0$ is a $2$-isometry. Also, one has ${\rm cov}(S_0)=\|A\|^{1/2}=\|\Delta_T\|^{1/2}$. 
By a standard argument one can see that the $2$-isometric lifting $S_0$ for $T$ is minimal, that is, it satisfies the condition $\mathcal{K}_0=\bigvee_{n\ge 0} S_0^n\mathcal{H}.$ This ends the proof. \end{proof} The converses of the statements (i) and (ii) in Theorem \ref{te21} are not true. In other words, the class of operators which have 2-isometric liftings as in (i) and (ii) above is larger than that of concave operators. For example, let $T$ be a $T^*T$-isometry, i.e. $T^{*2}T^2=T^*T$. Let $S$ be the operator on $\widehat{\mathcal{H}}=l_+^2(\mathcal{H}) \oplus \mathcal{H}$ with the matrix representation $$ S= \begin{pmatrix} S_+ & J\\ 0 & T \end{pmatrix}, $$ where $S_+$ is the forward shift on $l_+^2(\mathcal{H})$ and $J$ is the canonical embedding of $\mathcal{H}$ into $l_+^2(\mathcal{H})$. It is clear that $S$ is a minimal 2-isometric lifting for $T$ as in Theorem \ref{te21} (ii). But $T$ is not necessarily concave. In fact, since $T|_{\overline{\mathcal{R}(T)}}$ is an isometry, one can easily see that $T$ is concave if and only if the operator $T_1=P_{\overline{\mathcal{R}(T)}}T|_{\mathcal{N}(T^*)}$ is expansive. We mention without further details that if $T$ satisfies the inequality $T^{*2}T^2\le T^*T$, then $T$ has a 2-isometric lifting as in Theorem \ref{te21}, (i). An interesting problem in this context is to describe the class of all operators having 2-isometric liftings; presently we do not know the answer. Returning to Theorem \ref{te21}, we remark that the lifting in (i) is not minimal, in general. In this case the lifts produced by (i) and (ii) are not unitarily equivalent. The minimal lifting from (ii) leads to an extension of $T$ with interesting properties.
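The preceding example can be made concrete (this numerical sketch is an editor's addition, not part of the paper): the unilateral weighted shift with weight sequence $(c,1,1,\dots)$ is a $T^*T$-isometry for every $c>0$, while here $\mathcal{N}(T^*)$ is spanned by $e_0$ and $T_1e_0=ce_1$, so by the criterion above $T$ is concave precisely when $c\ge 1$. A finite truncation (last two coordinates discarded, since the truncation forces $Te_{N-1}=0$) confirms both claims.

```python
import numpy as np

N = 30
results = {}
for c in (0.5, 2.0):
    w = np.r_[c, np.ones(N - 2)]      # weights (c, 1, 1, ...)
    T = np.diag(w, -1)                # T e_k = w_k e_{k+1}, truncated
    TT = T.T @ T
    T2T2 = T.T @ T.T @ T @ T
    interior = slice(0, N - 2)        # discard truncation artifacts
    # T*T-isometry: T*^2 T^2 = T*T away from the truncation boundary
    TstarT_isometry = bool(np.allclose(np.diag(T2T2)[interior],
                                       np.diag(TT)[interior]))
    B2 = np.eye(N) - 2 * TT + T2T2    # concavity of T means B2 <= 0
    concave = bool(np.all(np.diag(B2)[interior] <= 1e-12))
    results[c] = (TstarT_isometry, concave)
```

Both matrices compared above are diagonal for a weighted shift; the only nonzero interior entry of $B_2$ is the $(0,0)$ entry $1-c^2$, which is positive for $c=0.5$ and negative for $c=2$.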
\begin{proposition}\label{pr22} Every concave operator $T$ on $\mathcal{H}$ has an extension $\widetilde{T}$ on a Hilbert space $\mathcal{M}\supset \mathcal{H}$, which with respect to a decomposition $\mathcal{M}=\mathcal{M}_0\oplus \mathcal{M}_1$ has the form \begin{equation}\label{eq24} \widetilde{T}= \begin{pmatrix} C & \delta E\\ 0 & U \end{pmatrix}, \end{equation} where $C,E$ are contractions, $U$ is unitary and $\delta=\|\Delta_T\|^{1/2}$, such that there exist a Hilbert space $\mathcal{M}'$ and two isometries $J_C:\mathcal{D}_C\to \mathcal{M}'$, $J_E:\mathcal{D}_E\to \mathcal{M}'$ satisfying the condition \begin{equation}\label{eq25} C^*E+D_CJ_C^*J_ED_E=0. \end{equation} \end{proposition} \begin{proof} Clearly, one can assume $T$ non-isometric, i.e. $\delta=\|\Delta_T\|^{1/2}>0$. Let $S_0$ be the 2-isometric lifting of $T$ with ${\rm cov(S_0)}=\delta$ given by \eqref{eq23} on $\mathcal{K}_0=\mathcal{H}^{\perp}\oplus \mathcal{H}$, and let $\widetilde{S}$ be a Brownian unitary extension of the 2-isometry $S_0$ on $\mathcal{K}=\mathcal{K}_0\oplus \mathcal{K}_1$ with ${\rm cov}(\widetilde{S})=\delta$, obtained by \cite[Theorem 5.80]{AS2}. Using that $S_0$ is a lifting of $T$, as well as the canonical representation of $\widetilde{S}$ on $\mathcal{K}=\mathcal{N}(\Delta_{\widetilde{S}})\oplus \mathcal{R}(\Delta_{\widetilde{S}})$ given by \cite[Proposition 5.12]{AS2}, we obtain $\widetilde{S}$ in the form $$ \widetilde{S}= \begin{pmatrix} S_0 & \star\\ 0 & \star \end{pmatrix} \begin{bmatrix} \mathcal{K}_0 \\ \mathcal{K}_1 \end{bmatrix} = \begin{pmatrix} S_+ & \star & \star\\ 0 & T & \star\\ 0 & 0 & \star \end{pmatrix} \begin{bmatrix} \mathcal{H}^{\perp}\\ \mathcal{H}\\ \mathcal{K}_1 \end{bmatrix} = \begin{pmatrix} \widetilde{V} & \delta \widetilde{E}\\ 0 & U \end{pmatrix} \begin{bmatrix} \mathcal{N}(\Delta_{\widetilde{S}})\\ \mathcal{R}(\Delta_{\widetilde{S}}) \end{bmatrix}. 
$$ Here $S_+$ is from the matrix of $S_0$ in \eqref{eq23}, $\widetilde{V}$ and $\widetilde{E}$ are isometries with $\mathcal{N}(\widetilde{V}^*)=\mathcal{R}(\widetilde{E})$, $U$ is unitary and $\delta$ is as above. The subspace $\mathcal{H}^{\perp}$ is invariant for $S_0$ and so for $\widetilde{S}$ and $\widetilde{S}|_{\mathcal{H}^{\perp}}=S_0|_{\mathcal{H}^{\perp}}=S_+$. Hence $\mathcal{H}^{\perp} \subset \mathcal{N}(\Delta_{\widetilde{S}})$ (having in view that $\widetilde{S}$ is a 2-isometry) and $\widetilde{V}|_{\mathcal{H}^{\perp}}=S_+$. Then the operator $\widetilde{T}$ with $\widetilde{T}^*=\widetilde{S}^*|_{\mathcal{H}\oplus \mathcal{K}_1}$ is an extension of $T$ on $\mathcal{M}=\mathcal{H}\oplus \mathcal{K}_1$. Clearly, $\widetilde{T}$ has a block matrix on $\mathcal{M}=(\mathcal{N}(\Delta_{\widetilde{S}})\ominus \mathcal{H}^{\perp})\oplus \mathcal{R}(\Delta_{\widetilde{S}})=:\mathcal{M}_0\oplus \mathcal{M}_1$ of the form \eqref{eq24}, where $C=P_{\mathcal{M}_0}\widetilde{V}|_{\mathcal{M}_0}$, $E=P_{\mathcal{M}_0}\widetilde{E}|_{\mathcal{M}_1}$ and $U,\delta$ are as above. For the condition \eqref{eq25} we decompose $\widetilde{V}$ on $\mathcal{H}^{\perp}\oplus \mathcal{M}_0$ and $\widetilde{E}:\mathcal{R}(\Delta_{\widetilde{S}})\to \mathcal{H}^{\perp} \oplus \mathcal{M}_0$ as $$ \widetilde{V}= \begin{pmatrix} S_+ & C'\\ 0 & C \end{pmatrix}, \quad \widetilde{E}= \begin{pmatrix} E'\\ E \end{pmatrix}. $$ Here $C,C',E,E'$ are contractions ($C,E$ as above) with $S_+^*C'=0$, $C'^*C'=D_C^2$ and $E'^*E'=D_E^2$ (because $\widetilde{V}$ and $\widetilde{E}$ are isometries). In addition, as $\widetilde{V}^*\widetilde{E}=0$, it follows that $S_+^*E'=0$ and $C'^*E'+C^*E=0$. Thus, using the polar decomposition, we have $C'=J_CD_C$, $E'=J_ED_E$, where $J_C:\mathcal{D}_C\to \mathcal{N}(S_+^*)$ and $J_E:\mathcal{D}_E\to \mathcal{N}(S_+^*)$ are isometries with $\mathcal{R}(J_C)=\overline{\mathcal{R}(C')}$ and $\mathcal{R}(J_E)=\overline{\mathcal{R}(E')}$. 
We used here that $\mathcal{R}(C')\cup \mathcal{R}(E') \subset \mathcal{N}(S_+^*)$. Then the above condition becomes $D_CJ_C^*J_ED_E+C^*E=0$, i.e. the condition \eqref{eq25}. The proof is complete. \end{proof} Note that in \cite{McC} a Brownian extension for concave operators $T$ with $\|T\|\le \sqrt{2}$ was obtained in a different way. In fact, the Brownian operators from \cite{McC} are concave, and (up to a normalization) they do model all concave operators. Concerning the operator $\widetilde{T}$ in \eqref{eq24}, it is easy to see that it is concave if and only if $C$ is an isometry with $C^*E=0$ and $U^*E^*EU\le E^*E$. Next we turn to the class of concave operators $T$ which are $\Delta_T$-regular, that is, which satisfy the condition \eqref{eq13}. We have the following characterizations for these operators. \begin{theorem}\label{te23} For a non-isometric concave operator $T\in \mathcal{B(H)}$ the following statements are equivalent: \begin{itemize} \item[(i)] $T$ is $\Delta_T$-regular; \item[(ii)] $T$ has on the decomposition $\mathcal{H}=\mathcal{N}(\Delta_T)\oplus \overline{\mathcal{R}(\Delta_T)}$ the representation \begin{equation}\label{eq26} T= \begin{pmatrix} V & \sigma Z\\ 0 & \widehat{T} \end{pmatrix} \end{equation} where $V$ is an isometry, $Z$ is an injective contraction with $V^*Z=0$ and $\sigma^2=\|\Delta_T\|+1$, while $\widehat{T}$ is a contraction which commutes with $\sigma ^2 Z^*Z+\Delta_{\widehat{T}}$; \item[(iii)] $T$ has a 2-isometric lifting $S$ on $\mathcal{K} \supset \mathcal{H}$ with $S^*S \mathcal{H} \subset \mathcal{H}$ and $\mathcal{N}(\Delta_T)\subset \mathcal{N}(\Delta_S)$, such that $\sigma^{-2} \Delta_S$ is an orthogonal projection. \end{itemize} \end{theorem} \begin{proof} Assume that $T$ is $\Delta_T$-regular. Since $\delta:=\|\Delta_T\|^{1/2}>0$ and $T^*\Delta_T T\le \Delta_T$, the subspace $\mathcal{N}(\Delta_T)$ is invariant for $T$.
Therefore $T$ has a block matrix of the form \eqref{eq26} with $V$ an isometry and two appropriate operators $Z, \widehat{T}$, and some scalar $\sigma>0$ which will be determined. As $\Delta_T\ge 0$, we obtain by \eqref{eq26} that $V^*Z=0$ and so $\Delta_T=0\oplus \Delta_0$, where $\Delta_0=\sigma^2Z^*Z+\Delta_{\widehat{T}}$ is an injective positive operator. It follows from the assertion (i) that $\Delta_0\widehat{T}=\Delta_0^{1/2}\widehat{T}\Delta_0^{1/2}$, which means (by the injectivity of $\Delta_0$) that $\Delta_0^{1/2}\widehat{T}=\widehat{T}\Delta_0^{1/2}$. Also, from $T^*\Delta_TT\le \Delta_T$ one obtains $\widehat{T}^*\Delta_0\widehat{T}\le \Delta_0$. This inequality together with the previous equality imply that we have $$ \|\widehat{T}\Delta_0^{1/2}k\|=\|\Delta_0^{1/2}\widehat{T}k\|\le \|\Delta_0^{1/2}k\| $$ for $k\in \overline{\mathcal{R}(\Delta_T)}$. Hence $\widehat{T}$ is a contraction. On the other hand, if $D_{\widehat{T}}$ is the defect operator of $\widehat{T}$, we have $\sigma^2Z^*Z=\Delta_0+D_{\widehat{T}}^2$. We infer $\sigma^2\|Z\|^2\le \|\Delta_0\|+1=\delta^2+1$, so choosing $\sigma=\sqrt{\delta^2+1}$ we get that $Z$ is a contraction. Also, one has $\mathcal{N}(Z)=\mathcal{N}(\Delta_0) \cap \mathcal{N}(D_{\widehat{T}})=\{0\}$. Therefore $Z$ is injective and the statement (ii) is proved. Hence (i) implies (ii). Suppose now that $T$ has the form \eqref{eq26}. Let $\widehat{V}$ be the minimal isometric lifting of $\widehat{T}$ on $\mathcal{K}_1=\overline{\mathcal{R}(\Delta_T)} \oplus l_+^2(\mathcal{D}_{\widehat{T}})=:\mathcal{H}_1\oplus \mathcal{H}_2$ (see \cite{FF, SFb}). 
Consider the operators $$ Z_0:\mathcal{H}_1 \oplus \mathcal{H}_2 \to \mathcal{N}(\Delta_T)=:\mathcal{H}_0, \quad Z_0= \begin{pmatrix} \sigma Z & 0 \end{pmatrix} $$ and $$ Z_1:\mathcal{H}_1 \oplus \mathcal{H}_2 \to \mathcal{K}_0:= l_+^2(\mathcal{D}_Z\oplus \mathcal{H}_2), \quad Z_1= \sigma \begin{pmatrix} J_1D_Z & J_2 \end{pmatrix}, $$ where $Z$ is the contraction from \eqref{eq26}, while $J_1: \mathcal{D}_Z\to \mathcal{K}_0$ and $J_2: \mathcal{H}_2 \to \mathcal{K}_0$ are the corresponding embedding mappings. Define the operator $S_1$ on $\mathcal{K}=\mathcal{K}_0 \oplus \mathcal{H}_0 \oplus \mathcal{K}_1=(\mathcal{K}_0 \oplus \mathcal{H}_0)\oplus \mathcal{K}_1$ with the corresponding block matrices $$ S_1= \begin{pmatrix} S_+^0 & 0 & Z_1\\ 0 & V & Z_0\\ 0 & 0 & \widehat{V} \end{pmatrix} =\begin{pmatrix} V_1 & \sigma \widehat{Z}\\ 0 & \widehat{V} \end{pmatrix}. $$ Here $S_+^0$ is the forward shift on $\mathcal{K}_0$ and $V$ is as in \eqref{eq26}, while $V_1=S_+^0\oplus V$ on $\mathcal{K}_0 \oplus \mathcal{H}_0$ and the operator $\widehat{Z}: \mathcal{H}_1\oplus \mathcal{H}_2\to \mathcal{K}_0 \oplus \mathcal{H}_0$ is given by the column matrix $\widehat{Z}=\sigma^{-1} \begin{pmatrix} Z_1 & Z_0 \end{pmatrix} ^{\rm tr}$. It is easy to see that $\widehat{Z}$ is an isometry and $V_1^*\widehat{Z}=0$ because $V^*Z=0$ in \eqref{eq26}. As $V_1$ and $\widehat{V}$ are isometries too, it follows that $S_1$ is a 2-isometry with $\Delta_{S_1}=0\oplus \sigma ^2 I=\sigma ^2 P,$ where $P$ is the orthogonal projection onto $\mathcal{K}_1 =\mathcal{R}(\Delta_{S_1})$. To see that $S_1$ is a lifting of $T$, we write explicitly $\widehat{V}$ in the above $3\times 3$ matrix of $S_1$. 
So, by using \eqref{eq26}, we obtain the representation \begin{equation}\label{eq27} S_1= \begin{pmatrix} W & X\\ 0 & T \end{pmatrix} \begin{bmatrix} \mathcal{K}_0\oplus \mathcal{H}_2\\ \mathcal{H}_0 \oplus \mathcal{H}_1 \end{bmatrix}, \quad W= \begin{pmatrix} S_+^0 & \sigma J_2\\ 0 & S_+^1 \end{pmatrix} \begin{bmatrix} \mathcal{K}_0\\ \mathcal{H}_2 \end{bmatrix}, \quad X= \begin{pmatrix} 0 & \sigma J_1D_Z\\ 0 & \widetilde{D}_{\widehat{T}} \end{pmatrix} \begin{bmatrix} \mathcal{H}_0\\ \mathcal{H}_1 \end{bmatrix}, \end{equation} where $S_+^1$ is the forward shift on $\mathcal{H}_2$ and $\widetilde{D}_{\widehat{T}}=\widehat{J}D_{\widehat{T}}$ with $\widehat{J}$ the embedding mapping of $\mathcal{D}_{\widehat{T}}$ into $\mathcal{H}_2$. This shows that $S_1$ is a lifting of $T$ with $W^*X=0$ because $J_2^*J_1D_Z=0$, $\mathcal{R}(J_1)$ and $\mathcal{R}(J_2)$ being orthogonal in $\mathcal{K}_0$. Thus on $\mathcal{K}=\mathcal{K}_0 \oplus \mathcal{H}_2\oplus \mathcal{H}_0\oplus \mathcal{H}_1$ one obtains $$ S_1^*S_1=W^*W\oplus (X^*X+\Delta_T)=I_{\mathcal{K}_0}\oplus (\sigma^2+1)I_{\mathcal{H}_2} \oplus I_{\mathcal{H}_0}\oplus (\sigma ^2+1)I_{\mathcal{H}_1}. $$ Hence $\mathcal{N}(\Delta_T)=\mathcal{H}_0 \subset \mathcal{N}(\Delta_{S_1})$ and $\overline{\mathcal{R}(\Delta_T)}=\mathcal{H}_1\subset \mathcal{R}(\Delta_{S_1})$. In conclusion, $S_1$ has the properties from (iii). Therefore (ii) implies (iii). Finally, we assume that there exists a lifting $S_1$ for $T$ as in (iii). So $S_1$ has the form \eqref{eq27} for some 2-isometry $W$ on $\mathcal{H}^{\perp}=\mathcal{K} \ominus \mathcal{H}$ and an operator $X:\mathcal{H}\to \mathcal{H}^{\perp}$ with $W^*X=0$. Therefore $\Delta_{S_1}=\Delta_W\oplus (X^*X+\Delta_T)=\sigma^2P$ and $P=P_{\mathcal{R}(\Delta_{S_1})}$. Since $(\sigma^{-2}\Delta_{S_1})^2=\sigma^{-2}\Delta_{S_1}$, we infer $\Delta_W^2=\sigma^2\Delta_W$. Therefore $\Delta_W=\sigma^2P_0$, where $P_0=P_{\mathcal{R}(\Delta_W)}$. 
We used here that $W$ is a $2$-isometry, $\Delta_W\ge 0$, and that $\mathcal{N}(\Delta_W)\subset \mathcal{N}(\Delta_{S_1})$. On the other hand, we always have $\mathcal{N}(\Delta_{S_1}) \cap \mathcal{H} \subset \mathcal{N}(\Delta_T)$ because $$ 0\le \langle \Delta_T h,h \rangle=\langle S_1^*P_{\mathcal{H}}S_1h,h\rangle -\|h\|^2\le \langle \Delta_{S_1}h,h \rangle $$ for $h\in \mathcal{H}$. Since we have also $\mathcal{N}(\Delta_T)\subset \mathcal{N}(\Delta_{S_1})$ from (iii), it follows that $\mathcal{N}(\Delta_T)=\mathcal{N}(\Delta_{S_1})\cap \mathcal{H}= \mathcal{N}(X^*X+\Delta_T)$. Therefore $\mathcal{N}(\Delta_T)$ is invariant for $S_1$. On the other hand, the above inequality $\Delta_T \le \Delta_{S_1}|_{\mathcal{H}}$ gives $\mathcal{R}(\Delta_T)\subset \Delta_{S_1}\mathcal{H} \subset \mathcal{R}(\Delta_{S_1})\cap \mathcal{H}$ which by the previous equality becomes $\overline{\mathcal{R}(\Delta_T)}=\mathcal{R}(\Delta_{S_1})\cap \mathcal{H}=\overline{\mathcal{R}(X^*X+\Delta_T)}$. Thus $\overline{\mathcal{R}(\Delta_T)}$ reduces $S_1^*S_1$ and one has $\Delta_{S_1}=\sigma^2P_0\oplus \sigma^2 P_T$, where $P_T\in \mathcal{B(H)}$ is the orthogonal projection onto $\overline{\mathcal{R}(\Delta_T)}$. In addition, since $S_1$ is a 2-isometry, $T$ as a compression of $S_1$ to $\mathcal{H}$ is a $P_T$-contraction. As $\mathcal{N}(\Delta_T)$ is invariant for $T$ one also has the relation $P_TT=P_TTP_T$, i.e. $T$ is $P_T$-regular. Now by the inequality $T^*P_TT\le P_T$ there exists a contraction $T_0$ on $\mathcal{R}(P_T)=\overline{\mathcal{R}(\Delta_T)}$ satisfying the relation $T_0P_T=P_TT$. So $T_0$ is even the compression of $T$ to $\overline{\mathcal{R}(\Delta_T)}$. Then the usual representation of the concave operator $T$ on $\mathcal{H}=\mathcal{N}(\Delta_T)\oplus \overline{\mathcal{R}(\Delta_T)}$ is $$ T= \begin{pmatrix} V & F\\ 0& T_0 \end{pmatrix}, $$ with $V$ an isometry and some operator $F$ satisfying the condition $V^*F=0$ (as $\Delta_T\ge 0$). 
By $P_T$-regularity of $T$ we have $T_0P_Th=P_TTh=P_TT_0P_Th$ whence $T_0P_Th=P_TT_0h$ for $h\in \overline{\mathcal{R}(\Delta_T)}$. Since $\sigma ^2 P_T={\Delta_{S_1}}|_{\mathcal{H}}=0\oplus \Delta_0$, where $\Delta_0=F^*F+\Delta_{T_0}={\Delta_T}|_{\overline{\mathcal{R}(\Delta_T)}}$, it follows that $T_0\Delta_0=\Delta_0T_0$. Finally, as $\Delta_T=0\oplus \Delta_0$, we infer that $\Delta_TT=\Delta_{T}^{1/2}T\Delta_{T}^{1/2}$, i.e. $T$ is $\Delta_T$-regular. Thus we proved that (iii) implies (i), and this ends the proof. \end{proof} As a consequence, we re-obtain the equivalence of (i) with (ii) for a 2-isometry given in \cite[Proposition 5.1]{MMS}; see also \cite[Theorem 7.1]{ACJS}. Notice that in the terminology of \cite{ACJS} a $\Delta_T$-regular 2-isometry $T$ is called a {\it quasi-Brownian isometry}. \begin{corollary}\label{co24} A non-isometric operator $T\in \mathcal{B(H)}$ is a $\Delta_T$-regular 2-isometry if and only if $T$ has on $\mathcal{H}=\mathcal{N}(\Delta_T)\oplus \overline{\mathcal{R}(\Delta_T)}$ a representation \eqref{eq26} with $V$ and $\widehat{T}$ isometries, $Z$ an injective contraction such that $V^*Z=0$, $\widehat{T}Z^*Z=Z^*Z\widehat{T}$ and $\sigma^2=\|\Delta_T\|$. \end{corollary} \begin{remark}\label{re25} \rm In order to compare the 2-isometric liftings obtained in Theorem \ref{te23}, (iii) and in Theorem \ref{te21}, (i), we record the following remarks. For the 2-isometric lifting $S_1$ obtained in (the proof of) Theorem \ref{te23}, (iii), the subspaces $\mathcal{N}(\Delta_T)$ and $\overline{\mathcal{R}(\Delta_T)}$ reduce $S_1^*S_1$ (not just $\mathcal{H}$), while the covariance of $S_1$ is less than or equal to the covariance of the lifting from Theorem \ref{te21}, (i). It is easy to see that if the concave operator $T$ possesses a 2-isometric lifting $S$ which (only) satisfies the property that $\beta^{-2}\Delta_S$ is an orthogonal projection for some scalar $\beta >0$, then $\mathcal{R}(\Delta_T) \subset \mathcal{R}(\Delta_S)$.
Also, $\overline{\mathcal{R}(\Delta_T)}$ is invariant for $S^*$ and $T_*:=T^*|_{\overline{\mathcal{R}(\Delta_T)}}=S^*|_{\overline{\mathcal{R}(\Delta_T)}}$ is a contraction. But this does not ensure that $T_*$ commutes with $\Delta_0:={\Delta_T}|_{\overline{\mathcal{R}(\Delta_T)}}$, in general, so $T$ is not $\Delta_T$-regular in this case. \end{remark} On the other hand, for any contraction $T_0$ on $\mathcal{H}$ one can obtain a non-isometric concave lifting $T$ on $\mathcal{K}=l_+^2(\mathcal{H})\oplus \mathcal{H}$ which is $\Delta_T$-regular, so of the form \eqref{eq26} with $T_0$ instead of $\widehat{T}$, $V$ a shift operator, and $\sigma Z$ an isometry with $\sigma=1+\|T_0\|^2$. Therefore $\Delta_T$-regularity of $T$ does not force $\widehat{T}$ in \eqref{eq26} to belong to a restrictive class of contractions. \medskip \section{The Cauchy dual of a regular concave operator} \label{Sect:3} \medskip By definition, the {\it Cauchy dual} of a left invertible operator $T\in\mathcal{B(H)}$ is the operator $T'=T(T^*T)^{-1}$. Recall (see \cite{Sh}) that if $T$ is a left invertible operator on $\mathcal{H}$, then the operator $T^*T$ is invertible and $T'^*=(T^*T)^{-1}T^*$ is a left inverse of $T$, hence $T'$ is left invertible too. Also, $T'$ is a contraction if $T$ is expansive. It is known from \cite[Theorem 2.9]{Ch} that if $T$ is concave, then $T'$ is hyponormal, i.e. $T'T'^*\le T'^*T'$. This implies that $T'$ is a $D_{T'}^2$-contraction, that is a 2-hypercontraction. Indeed, one has $D_{T'}^2\le D_{T'^*}^2$. Therefore $$T'^*D_{T'}^2T'\le T'^*D_{T'^*}^2T'=T'^*T'-(T'^*T')^2 \le D_{T'}^2.$$ In this case, according to the terminology from $A$-contractions (see \cite{S-2006, S-2009}) we say that $T'$ is $D_{T'}^2$-{\it regular} if it satisfies the condition $D_{T'}^2T'=D_{T'}T'D_{T'}$. The next result shows that the mapping $T \to T'$ preserves the regularity condition.
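The elementary facts about the Cauchy dual recalled above (that $T'^*$ is a left inverse of $T$, that $T'$ is a contraction whenever $T$ is expansive, and that the transform $T\mapsto T'$ is involutive) are easy to check numerically. The following NumPy sketch does so for a randomly chosen injective expansive matrix; it is only an illustration of this algebra in finite dimensions, not an example of a non-isometric concave operator (a concave operator on a finite-dimensional space is automatically an isometry, so genuine examples act on infinite-dimensional spaces).

```python
import numpy as np

rng = np.random.default_rng(0)

# A random injective map T: C^3 -> C^5, rescaled so that its smallest
# singular value is 1; then T^*T >= I, i.e. T is expansive.
A = rng.standard_normal((5, 3)) + 1j * rng.standard_normal((5, 3))
T = A / np.linalg.svd(A, compute_uv=False)[-1]

gram = T.conj().T @ T                 # T^*T, invertible since T is injective
Tp = T @ np.linalg.inv(gram)          # Cauchy dual T' = T (T^*T)^{-1}

# T'^* = (T^*T)^{-1} T^* is a left inverse of T
assert np.allclose(Tp.conj().T @ T, np.eye(3))

# T'^*T' = (T^*T)^{-1}, so T' is a contraction whenever T is expansive
assert np.allclose(Tp.conj().T @ Tp, np.linalg.inv(gram))
assert np.linalg.norm(Tp, 2) <= 1 + 1e-10

# The Cauchy dual transform is an involution: (T')' = T
Tpp = Tp @ np.linalg.inv(Tp.conj().T @ Tp)
assert np.allclose(Tpp, T)
print("Cauchy dual identities verified")
```

The normalization by the smallest singular value is what guarantees $T^*T\ge I$ here; the involution $(T')'=T$ is the identity exploited in the surjectivity argument of the next proof.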
\begin{theorem}\label{te31} There is a bijective mapping between the set of all $\Delta_T$-regular concave operators $T$ on $\mathcal{H}$ and the set of all left invertible 2-hypercontractions $T'$ on $\mathcal{H}$ which are $D_{T'}^2$-regular, with $\|P_{\mathcal{D}_{T'}}T'h\|\le \|T'^*T'h\|$ for $h\in \mathcal{D}_{T'}$. \end{theorem} \begin{proof} The required mapping is given by $\varphi(T) = T'$, where $T'$ is the Cauchy dual of $T$. Assume firstly that $T$ is a $\Delta_T$-regular concave operator. So by Theorem \ref{te23} (ii), $T$ has the representation \eqref{eq26} with $V$ an isometry, $Z$ and $\widehat{T}$ contractions such that $V^*Z=0$ and $\widehat{T}\Delta=\Delta \widehat{T}$, where $\Delta:=\sigma ^2 Z^*Z+ \widehat{T}^*\widehat{T}$. Since $T^*T=I\oplus \Delta$ on $\mathcal{N}(\Delta_T) \oplus \overline{\mathcal{R}(\Delta_T)}$ and $T^*T$ is invertible it follows that $\Delta$ is invertible too, and so $\widehat{T}\Delta^{-1}=\Delta^{-1}\widehat{T}$. Now using \eqref{eq26} we get that $T'=T(T^*T)^{-1}$ has on $\mathcal{H}=\mathcal{N}(\Delta_T)\oplus \overline{\mathcal{R}(\Delta_T)}$ the block representation \begin{equation}\label{eq31} T'= \begin{pmatrix} V & Z'\Delta^{-1}\\ 0 & \widehat{T}\Delta^{-1} \end{pmatrix}, \quad Z'=\sigma Z. \end{equation} But $T'$ is a contraction and the representation \eqref{eq31} gives $D_{T'}^2=I-T'^*T'=0\oplus \Delta'$, where $$ \Delta'=I-\Delta^{-1}Z'^*Z'\Delta^{-1}-\Delta^{-1}\widehat{T}^*\widehat{T}\Delta^{-1}=I-\Delta^{-1}. $$ Since $\Delta-I=\Delta_T|_{\overline{\mathcal{R}(\Delta_T)}}$ is injective, it follows that $\Delta'=\Delta^{-1}(\Delta-I)$ is injective. Therefore $\mathcal{N}(D_{T'})=\mathcal{N}(\Delta_T)$, so $\overline{\mathcal{R}(D_{T'})}=\overline{\mathcal{R}(\Delta_T)}$. Hence the above representations of $T'$ and $D_{T'}^2$ are given on the decomposition $\mathcal{H}=\mathcal{N}(D_{T'})\oplus \overline{\mathcal{R}(D_{T'})}$. 
As $T'$ is a $D_{T'}^2$-contraction, the $D_{T'}^2$-regularity of $T'$ will mean that $\widehat{T}\Delta^{-1}\Delta'=\Delta'\widehat{T}\Delta^{-1}$ (taking into account that $\Delta'$ is injective). But this last relation holds because $\widehat{T}$ commutes with both $\Delta^{-1}$ and $\Delta'$, while $\Delta^{-1}$ and $\Delta'$ obviously commute. Hence $T'$ is $D_{T'}^2$-regular. Now denoting $T_0'=\widehat{T}\Delta^{-1}$ we have, for $h\in \mathcal{D}_{T'}$, $$ \|P_{\mathcal{D}_{T'}}T'h\|=\|T'_0h\|\le \|\Delta^{-1}h\|=\|(I-\Delta ')h\|=\|T'^*T'h\|. $$ Therefore $T'$ satisfies all the properties from the statement of the theorem, and so the correspondence $T \to T'$ induces a well-defined mapping $\varphi$ between the corresponding sets. To prove that this map $\varphi$ is onto, let $C$ be a left invertible 2-hypercontraction on $\mathcal{H}$ which is $D_C^2$-regular, with $C_0^*C_0\le D_0^2$, where $C_0:=P_{\mathcal{D}_C}C|_{\mathcal{D}_C}$ and $D_0:=C^*C|_{\mathcal{D}_C}$. Then $\mathcal{N}(D_C)$ is invariant for $C$ and $V:=C|_{\mathcal{N}(D_C)}$ is an isometry ($C$ being a contraction). Therefore $C$ has on the decomposition $\mathcal{H}=\mathcal{N}(D_C)\oplus \mathcal{D}_C$ the block matrix $$ C= \begin{pmatrix} V & C_1\\ 0 & C_0 \end{pmatrix} $$ with $V^*C_1=0$ ($C$ and $V$ being as above). Then $C^*C=I\oplus (C_1^*C_1+C_0^*C_0)=I\oplus D_0$. As $C$ is left invertible, it follows that $C^*C$ is invertible, hence $D_0$ is an invertible contraction. Also, we have $D_C^2=0\oplus (I-D_0)$. Let $T:=C'=C(C^*C)^{-1}$. Then $T^*T=(C^*C)^{-1}C^*C(C^*C)^{-1}=(C^*C)^{-1}$, whence $T'=T(T^*T)^{-1}=C$. To conclude the surjectivity of the map $\varphi$, we show that $T$ is a $\Delta_T$-regular concave operator. Firstly, we have $\Delta_T=(C^*C)^{-1}-I=(I-C^*C)(C^*C)^{-1}$. Hence $\mathcal{N}(\Delta_T)=\mathcal{N}(D_C)$ and $\mathcal{R}(\Delta_T)=\mathcal{R}(D_C)$.
Thus, using the block matrix of $C$, we obtain the following representation of $T$ on $\mathcal{H}=\mathcal{N}(\Delta_T)\oplus \overline{\mathcal{R}(\Delta_T)}$: $$ T= \begin{pmatrix} V & C_1D_0^{-1}\\ 0 & C_0D_0^{-1} \end{pmatrix}. $$ Now, since $C$ is $D_C^2$-regular, i.e. $D_C^2C=D_CCD_C$, we infer that $$ (I-D_0)C_0=(I-D_0)^{1/2}C_0(I-D_0)^{1/2}. $$ Since $I-D_0={D_C^2}|_{\mathcal{D}_C}$ is injective, we deduce that $(I-D_0)C_0=C_0(I-D_0)$. This also gives $D_0C_0=C_0D_0$ and so $C_0D_0^{-1}=D_0^{-1}C_0$. But from the above block matrix of $T$ we get $$ \Delta_T=T^*T-I=0\oplus [D_0^{-1}(C_1^*C_1+C_0^*C_0)D_0^{-1}-I]=0\oplus (D_0^{-1}-I). $$ Then, denoting $\widehat{T}=C_0D_0^{-1}$, we have $$ \widehat{T}(D_0^{-1}-I)=C_0(D_0^{-1}-I)D_0^{-1}=(D_0^{-1}-I)\widehat{T}. $$ This relation, together with the inequality $C_0^*C_0\le D_0^2$, implies that $\widehat{T}$ is a contraction and $T$ is concave. Indeed, since $C_0^*C_0\le D_0^2$, it follows that $D_0^{-1}C_0^*C_0D_0^{-1}\le I$, that is $\widehat{T}$ is a contraction. So, we get \begin{eqnarray*} T^*\Delta_TT&=& 0\oplus D_0^{-1}C_0^*(D_0^{-1}-I)C_0D_0^{-1}\\ &=& 0 \oplus (D_0^{-1}-I)^{1/2}D_0^{-1}C_0^*C_0D_0^{-1}(D_0^{-1}-I)^{1/2}\le 0 \oplus (D_0^{-1}-I)=\Delta_T, \end{eqnarray*} that is $T$ is concave. In addition, as $\widehat{T}$ commutes with $D_0^{-1}-I={\Delta_T}|_{\overline{\mathcal{R}(\Delta_T)}}$, we have by Theorem \ref{te23} (i) that $T$ is $\Delta_T$-regular. Thus $T$ has the required properties. We conclude that the mapping $\varphi$ from the set of $\Delta_T$-regular concave operators $T$ into the set of $D_C^2$-regular 2-hypercontractions $C$ given by $\varphi(T)=T'(=C)$ is surjective, and it remains to show that $\varphi$ is injective. Indeed, let us assume that $T'=T'_1$ for two regular concave operators $T$ and $T_1$. Then $T=T'T^*T=T'_1T^*T$, which gives $T^*T=T^*T'_1T^*T$ so $I=T^*T'_1=T'^*_1T$.
Using this and a previous relation, we obtain $I=T'^*_1T'_1T^*T$, whence $$ (T^*T)^{-1}=T'^*_1T'_1=(T^*_1T_1)^{-1}T_1^*T_1(T^*_1T_1)^{-1}=(T^*_1T_1)^{-1}, $$ that is $T^*T=T^*_1T_1$. Finally, as $T'=T'_1$, we get $ T=T'T^*T=T'_1T^*_1T_1=T_1. $ Therefore $\varphi$ is injective. This ends the proof. \end{proof} \begin{remark}\label{re32} \rm Assume that $T$ is a concave operator. Then $T'$ is a $D_{T'}^2$-isometry if and only if $T'$ is an isometry, and in this case $T=T'$. Therefore, even if $T$ is a $\Delta_T$-regular 2-isometry, we do not have more information about $T'$. Recall that sometimes concave operators are called 2-hyperexpansive. Also, it is a well-known fact that an operator $T$ is $m$-hyperexpansive for $m\ge 2$ if and only if $T$ is $(\Delta_T,m-1)$-hypercontractive (see Section 1 for terminology). Now, assuming $\Delta_T$-regularity, we can express this equivalence in terms of the contraction $\widehat{T}$ from \eqref{eq26}. \end{remark} \begin{proposition}\label{pr33} Let $T\in \mathcal{B(H)}$ be a $\Delta_T$-regular concave operator and let $m\ge 2$ be an integer. Then $T$ is $m$-hyperexpansive if and only if the compression $\widehat{T}$ of $T$ to $\overline{\mathcal{R}(\Delta_T)}$ is an $(m-1)$-hypercontraction. \end{proposition} \begin{proof} We use the representation \eqref{eq26} of $T$ on $\mathcal{H}=\mathcal{N}(\Delta_T) \oplus \overline{\mathcal{R}(\Delta_T)}$. Thus $\Delta_T=0\oplus \Delta_0$, where $\Delta_0=\sigma^2 Z^*Z+\Delta_{\widehat{T}}\ge 0$ is an injective operator, while $\widehat{T}=P_{\overline{\mathcal{R}(\Delta_T)}}T|_{\overline{\mathcal{R}(\Delta_T)}}$ is a contraction with $\widehat{T}\Delta_0=\Delta_0\widehat{T}$.
One can easily prove by induction that the relation $$ \sum_{j=0}^m(-1)^j\begin{pmatrix} m\\ j \end{pmatrix} T^{*j}T^j\le 0\quad {\rm i.e.} \hspace*{2mm} T \hspace*{2mm} {\rm is}\hspace*{2mm} m-{\rm expansive} $$ is equivalent to $$ \sum_{j=0}^{m-1} (-1)^j\begin{pmatrix} m-1\\ j \end{pmatrix} T^{*j}\Delta_TT^j\ge 0 \quad {\rm i.e.} \hspace*{2mm} T \hspace*{2mm} {\rm is} \hspace*{2mm} (\Delta_T,m-1)-{\rm contractive}. $$ Then the last relation can be expressed in terms of $\widehat{T}$ and $\Delta_0$ as $$ \sum_{j=0}^{m-1} (-1)^j\begin{pmatrix} m-1\\ j \end{pmatrix} \widehat{T}^{*j}\Delta_0\widehat{T}^j \ge 0. $$ Using that $\widehat{T}\Delta_0=\Delta_0\widehat{T}$, this is equivalent to $$ \Delta_0^{1/2}\left[\sum_{j=0}^{m-1} (-1)^j\begin{pmatrix} m-1\\ j \end{pmatrix} \widehat{T}^{*j}\widehat{T}^j\right]\Delta_0^{1/2} \ge 0. $$ Since $\Delta_0$ is injective, this inequality is equivalent to $$ \sum_{j=0}^{m-1} (-1)^j\begin{pmatrix} m-1\\ j \end{pmatrix} \widehat{T}^{*j}\widehat{T}^j \ge 0, $$ which means that $\widehat{T}$ is $(m-1)$-contractive. This argument shows that for $m\ge 2$ and for $2\le n\le m$, $T$ is $n$-expansive if and only if $\widehat{T}$ is $(n-1)$-contractive. In other words, $T$ is $m$-hyperexpansive if and only if $\widehat{T}$ is $(m-1)$-hypercontractive. This ends the proof. \end{proof} Now, for $\Delta_T$-regular concave operators we obtain an affirmative answer to the Cauchy dual problem (see \cite[Question 2.11]{Ch}). \begin{theorem}\label{te34} Let $T\in \mathcal{B(H)}$ be a $\Delta_T$-regular concave operator. The following statements are equivalent: \begin{itemize} \item[(i)] $T$ is completely hyperexpansive; \item[(ii)] The Cauchy dual $T'$ of $T$ is subnormal; \item[(iii)] The compression of $T$ to $\overline{\mathcal{R}(\Delta_T)}$ is subnormal; \item[(iv)] The compression of $T'$ to $\mathcal{D}_{T'}$ is subnormal.
\end{itemize} \end{theorem} \begin{proof} The assertion (i) means that $T$ is $m$-hyperexpansive for every integer $m\ge 2$ which by Proposition \ref{pr33} is equivalent to the fact that $\widehat{T}=P_{\overline{\mathcal{R}(\Delta_T)}}T|_{\overline{\mathcal{R}(\Delta_T)}}$ is an $m$-hypercontraction for any $m\ge 1$. According to a result of Agler from \cite{Ag}, the operator $\widehat{T}$ is subnormal. Hence (i) is equivalent to (iii). Next we take into account the representation \eqref{eq31} of $T'$ on $\mathcal{H}=\mathcal{N}(D_{T'})\oplus \mathcal{D}_{T'}$, where $\mathcal{N}(D_{T'})=\mathcal{N}(\Delta_T)$ (see the proof of Theorem \ref{te31}). Denote $\widehat{T}'=\widehat{T}\Delta^{-1}=P_{\mathcal{D}_{T'}}T'|_{\mathcal{D}_{T'}}$, where $\Delta=\Delta_0+I$ and $\Delta_0=\Delta_T|_{\overline{\mathcal{R}(\Delta_T)}}$. Then $D_{T'}^2=0\oplus (I-\Delta^{-1})$ and it is easy to see that $T'$ is $m$-hypercontractive if and only if $$ (I-\Delta^{-1})^{1/2}\left[\sum_{j=0}^{n} (-1)^j\begin{pmatrix} n\\ j \end{pmatrix} \widehat{T}'^{*j}\widehat{T}'^j\right](I-\Delta^{-1})^{1/2}\ge 0, \quad 1\le n\le m. $$ We use here that $\widehat{T}'(I-\Delta^{-1})=(I-\Delta^{-1})\widehat{T}'$ because $\widehat{T}\Delta=\Delta\widehat{T}$ by Theorem \ref{te23} (ii). Since $(I-\Delta^{-1})^{1/2}=D_{T'}|_{\mathcal{D}_{T'}}$ is injective, the previous inequality is equivalent to the fact that $\widehat{T}'$ is $n$-contractive for $1\le n\le m$, that is to the fact that $\widehat{T}'$ is $m$-hypercontractive. So, by Agler's result (see \cite{Ag}), we infer that the statements (ii) and (iv) are equivalent. Finally, if $\widehat{T}$ is subnormal then, because $\widehat{T}'=\widehat{T}\Delta^{-1}=\Delta^{-1}\widehat{T}$ and $\Delta^{-1}\ge 0$, it follows by Bram's result in \cite{B} that $\widehat{T}'$ is subnormal too. Conversely, if $\widehat{T}'$ is subnormal, then $\widehat{T}=\widehat{T}'\Delta=\Delta\widehat{T}'$ is subnormal by the same argument. 
Thus the statements (iii) and (iv) are equivalent. The proof is complete. \end{proof} In particular, for 2-isometries we re-obtain the main assertion of \cite[Theorem 7.5]{ACJS} which was proved there in a different manner. As we already mentioned, in \cite{ACJS} the regular 2-isometries are called quasi-Brownian isometries. Clearly such operators are completely hyperexpansive. Thus by Theorem \ref{te34} we have the following \begin{corollary}\label{co35} The Cauchy dual $T'$ of a $\Delta_T$-regular 2-isometry $T$ is a subnormal contraction. \end{corollary} As a direct consequence of \cite[Proposition 5.6]{AS2} and Theorem \ref{te34} we have the following result which generalizes \cite[Corollary 7.6]{ACJS}. \begin{corollary}\label{co36} If $T$ is a concave operator with $\Delta_T$ of rank one, then $T$ is a $\Delta_T$-regular completely hyperexpansive operator and $T'$ is a subnormal contraction. \end{corollary} \medskip \section{Quasinormality conditions} \medskip We study now when the compressions of $T$ and $T'$ from the statements (iii) and (iv) of Theorem \ref{te34} are quasinormal. \begin{theorem}\label{te41} Let $T\in \mathcal{B(H)}$ be a concave operator and let $m,n\ge 1$ be positive integers. Then $T^n$ is a $\Delta_{T^m}$-contraction and $\mathcal{N}(\Delta_T)=\mathcal{N}(\Delta_{T^n})$. 
Moreover, if $T$ is $\Delta_T$-regular, then the following statements are equivalent: \begin{itemize} \item[(i)] $T^n$ is $\Delta_{T^m}$-regular for every $m,n\ge 1$; \item[(ii)] The compression $\widehat{T}$ of $T$ to $\overline{\mathcal{R}(\Delta_T)}$ is quasinormal; \item[(iii)] $\widehat{T}$ commutes with $\widehat{Z}^*\widehat{Z}$ where $\widehat{Z}=P_{\mathcal{N}(\Delta_T)}T|_{\overline{\mathcal{R}(\Delta_T)}}$; \item[(iv)] The Cauchy dual $T'$ of $T$ is a regular $D_{T'^2}^2$-contraction; \item[(v)] The compression $T'_0$ of $T'$ to $\overline{\mathcal{R}(\Delta_T)}$ is quasinormal; \item[(vi)] $T'_0$ commutes with $T'^*_1T'_1$, where $T'_1=P_{\mathcal{N}(\Delta_T)}T'|_{\overline{\mathcal{R}(\Delta_T)}}$. \end{itemize} \end{theorem} \begin{proof} Assume that $T$ is concave, that is a $\Delta_T$-contraction. So, one has $T^{*2}T^2-T^*T\le T^*T-I$. Then $T^2$ is also a $\Delta_T$-contraction, therefore we have $T^{*3}T^3-T^{*2}T^2\le T^*T-I$. Both these relations and the fact that $T$ is expansive give that $T$ is a $\Delta_{T^2}$-contraction. Indeed, we have $$ T^{*3}T^3-T^{*2}T^2=T^*(T^{*2}T^2-T^*T)T\le T^*(T^*T-I)T=T^{*2}T^2-T^*T. $$ We obtain \begin{eqnarray*} T^*\Delta_{T^2}T &=& T^{*3}T^3-T^*T=T^{*3}T^3-T^{*2}T^2+T^{*2}T^2-T^*T\\ & \le & T^{*2}T^2-T^*T+T^*T-I=\Delta_{T^2}. \end{eqnarray*} Since $\Delta_{T^2}\ge 0$ ($T$ being expansive) we conclude that $T$ is a $\Delta_{T^2}$-contraction. We can now show by induction that $T$ is a $\Delta_{T^m}$-contraction for each integer $m\ge 2$. So, assuming that $T^*\Delta_{T^m}T \le \Delta_{T^m}$ for some $m\ge 2$, we have \begin{eqnarray*} T^*\Delta_{T^{m+1}}T&=& T^{*(m+2)}T^{m+2}-T^*T=T^{*2}\Delta_{T^m}T^2+T^{*2}T^2-T^*T\\ &\le & T^*\Delta_{T^m}T+\Delta_T=T^{*(m+1)}T^{m+1}-I=\Delta_{T^{m+1}}. \end{eqnarray*} Hence $T$ is a $\Delta_{T^m}$-contraction for any $m\ge 1$ and consequently $T^n$ is a $\Delta_{T^m}$-contraction for $m,n\ge 1$. In addition, because $T$ is expansive, one has $\Delta_T\le \Delta_{T^m}$.
Therefore $\mathcal{N}(\Delta_{T^m})\subset \mathcal{N}(\Delta_T)$. But $\mathcal{N}(\Delta_T)$ is invariant for $T$, so also for $T^m$ and $T^m|_{\mathcal{N}(\Delta_T)}$ is an isometry, $T|_{\mathcal{N}(\Delta_T)}$ being so. Since $\mathcal{N}(\Delta_T)$ is also invariant for $T^{*m}T^m$ it follows that $\mathcal{N}(\Delta_T)\subset \mathcal{N}(\Delta_{T^m})$. We conclude that $\mathcal{N}(\Delta_{T^m})=\mathcal{N}(\Delta_T)$. The first assertion of theorem is proved. Assume now that $T$ is $\Delta_T$-regular and that the statement (i) is true. Then $T$ is also $\Delta_{T^2}$-regular, as a $\Delta_{T^2}$-contraction, while $T$ and $T^2$ have on $\mathcal{H}=\mathcal{N}(\Delta_T)\oplus \overline{\mathcal{R}(\Delta_T)}=\mathcal{N}(\Delta_{T^2})\oplus \overline{\mathcal{R}(\Delta_{T^2})}$ the block matrices $$ T= \begin{pmatrix} V & \widehat{Z}\\ 0 & \widehat{T} \end{pmatrix}, \quad T^2= \begin{pmatrix} V^2 & V\widehat{Z}+\widehat{Z}\widehat{T}\\ 0 & \widehat{T}^2 \end{pmatrix} $$ with $V^*\widehat{Z}=0$. Consequently, $V^{*2}(V\widehat{Z}+\widehat{Z}\widehat{T})=0$. But, by Theorem \ref{te23} (ii), one has $\widehat{T}(\widehat{Z}^*\widehat{Z}+\widehat{T}^*\widehat{T})=(\widehat{Z}^*\widehat{Z}+\widehat{T}^*\widehat{T})\widehat{T}$. Also, since $T^{*2}T^2=I\oplus [\widehat{Z}^*\widehat{Z}+\widehat{T}^*(\widehat{Z}^*\widehat{Z}+\widehat{T}^*\widehat{T})\widehat{T}]$ and $T$ is $\Delta_{T^2}$-regular, i.e. $\Delta_{T^2}T=\Delta_{T^2}^{1/2}T\Delta_{T^2}^{1/2}$, we infer that $\widehat{T}$ is a $\Delta_1$-contraction and $\Delta_1$-regular, where $\Delta_1:=\Delta_{T^2}|_{\overline{\mathcal{R}(\Delta_{T^2})}}=\Delta_{T^2}|_{\overline{\mathcal{R}(\Delta_T)}}$. 
Having in view the above expression of $T^{*2}T^2$ and the fact that $\Delta_1$ is a positive injective operator, we infer that $$ \widehat{T}[\widehat{Z}^*\widehat{Z}+\widehat{T}^*(\widehat{Z}^*\widehat{Z}+\widehat{T}^*\widehat{T})\widehat{T}]=[\widehat{Z}^*\widehat{Z}+\widehat{T}^* (\widehat{Z}^*\widehat{Z}+\widehat{T}^*\widehat{T})\widehat{T}]\widehat{T}. $$ Using that $\widehat{T}\Delta=\Delta\widehat{T}$, where $\Delta:=T^*T|_{\overline{\mathcal{R}(\Delta_T)}}=\widehat{Z}^*\widehat{Z}+\widehat{T}^*\widehat{T}$, we can equivalently write the previous relation in the form $$ \widehat{T}\widehat{Z}^*\widehat{Z}-\widehat{Z}^*\widehat{Z}\widehat{T} + (\widehat{T} \widehat{T}^*\widehat{T}- \widehat{T}^*\widehat{T}^2)\Delta=0. $$ By the commutation $\widehat{T}\Delta=\Delta\widehat{T}$ we have $\widehat{T}\widehat{Z}^*\widehat{Z}-\widehat{Z}^*\widehat{Z}\widehat{T}= \widehat{T}^*\widehat{T}^2-\widehat{T} \widehat{T}^*\widehat{T}$, which together with the above relation leads to $$ (\widehat{T} \widehat{T}^*\widehat{T}- \widehat{T}^*\widehat{T}^2)(\Delta-I)=0. $$ Because $\Delta-I=\Delta_T|_{\overline{\mathcal{R}(\Delta_T)}}$ is positive and injective, one has $\overline{\mathcal{R}(\Delta-I)}=\overline{\mathcal{R}(\Delta_T)}$. So from the previous equality we infer that the contraction $\widehat{T}$ is quasinormal. Hence (i) implies (ii). Next, we assume (ii), that is, $\widehat{T}$ is quasinormal. Since $T$ is $\Delta_T$-regular we have $\widehat{T}(\widehat{Z}^*\widehat{Z}+\widehat{T}^*\widehat{T})=(\widehat{Z}^*\widehat{Z}+\widehat{T}^*\widehat{T})\widehat{T}$. By our assumption we get that $\widehat{T}\widehat{Z}^*\widehat{Z}=\widehat{Z}^*\widehat{Z}\widehat{T}$. Therefore (ii) implies (iii). Also, the last commutation relation of $\widehat{T}$ with $\widehat{Z}^*\widehat{Z}$ implies, using the $\Delta_T$-regularity of $T$, that $\widehat{T}$ is quasinormal. So the assertions (ii) and (iii) are equivalent.
To prove that (ii) implies (i) we represent $T^m$ on $\mathcal{H}=\mathcal{N}(\Delta_T)\oplus \overline{\mathcal{R}(\Delta_T)}$ using the above block matrix of $T$, in the form $$ T^m= \begin{pmatrix} V^m & \sum_{j=0}^{m-1}V^{m-j-1}\widehat{Z}\widehat{T}^j\\ 0 & \widehat{T}^m \end{pmatrix} = \begin{pmatrix} V^m & Z_m\\ 0 & \widehat{T}^m \end{pmatrix}. $$ Since $V^*\widehat{Z}=0$, we have $V^{*m}Z_m=0$. From this representation we get, using $V^*\widehat{Z}=0$, \begin{eqnarray*} T^{*m}T^m|_{\overline{\mathcal{R}(\Delta_T)}}&=& Z_m^*Z_m+\widehat{T}^{*m}\widehat{T}^m=\sum_{j=0}^{m-1}\widehat{T}^{*j}\widehat{Z}^*\widehat{Z}\widehat{T}^j+\widehat{T}^{*m}\widehat{T}^m\\ &=& \Delta + \sum_{j=1}^{m-1}\widehat{T}^{*j}(\Delta-I)\widehat{T}^j=:\Delta_m. \end{eqnarray*} Here, as above, $\Delta=\widehat{Z}^*\widehat{Z}+\widehat{T}^*\widehat{T}$. Now, as $T$ is $\Delta_T$-regular, we have $\widehat{T}^n\Delta=\Delta\widehat{T}^n$ for $n\ge 1$. Also, assuming that the assertion (ii) is true, i.e. $\widehat{T}$ is quasinormal, we infer that $\widehat{T}^n\Delta_m=\Delta_m \widehat{T}^n$ for $m,n\ge 1$. Recall that $\widehat{T}^n=T^n|_{\overline{\mathcal{R}(\Delta_m)}}$ is a contraction ($\widehat{T}$ being so) and $\Delta_m-I=\Delta_{T^m}|_{\overline{\mathcal{R}(\Delta_m)}}$ because $\overline{\mathcal{R}(\Delta_T)}=\overline{\mathcal{R}(\Delta_{T^m})}$. We also have that $T^n$ is a $\Delta_{T^m}$-contraction and from the above commutation of $\widehat{T}^n$ with $\Delta_m$ it follows that $T^n$ is $\Delta_{T^m}$-regular, for $m,n\ge 1$. Hence (ii) implies (i). Next we use the representation \eqref{eq31} of $T'$ and denote $T'_0=\widehat{T}\Delta^{-1}=P_{\overline{\mathcal{R}(\Delta_T)}}T'|_{\overline{\mathcal{R}(\Delta_T)}}$, where $\Delta$ is as above. We have that $T'_0$ is quasinormal if and only if $\widehat{T}$ is quasinormal, taking into account that $\widehat{T}\Delta^{-1}=\Delta^{-1}\widehat{T}$ by the $\Delta_T$-regularity of $T$.
So (ii) is equivalent to (v), and similarly (iii) is equivalent to (vi), where $T'_1=Z'\Delta^{-1}=\widehat{Z}\Delta^{-1}$ in \eqref{eq31}. To end the proof we show that (ii) is equivalent to (iv). Notice firstly that since $T'$ is a $D_{T'}^2$-contraction, $T'^2$ will also be a $D_{T'}^2$-contraction, i.e. $T'^{*2}D_{T'}^2T'^2\le D_{T'}^2$, because $T'^{*2}D_{T'}^2T'^2\le T'^*D_{T'}^2T'$. Moreover, we get that $$ T'^*D_{T'^2}^2T'= T'^*D_{T'}^2T'+ T'^{*2}D_{T'}^2T'^2 \le D_{T'}^2+T'^*D_{T'}^2T'=D_{T'^2}^2. $$ Hence $T'$ is a $D_{T'^2}^2$-contraction. Since $T'$ is a contraction, we have $D_{T'}^2\le D_{T'^2}^2$. Therefore $\mathcal{N}(D_{T'^2}^2)\subset \mathcal{N}(D_{T'})=\mathcal{N}(\Delta_T)$. But the last kernel is an invariant subspace for $T'$, so also for $T'^2$. From the block matrix \eqref{eq31} of $T'$ we have that $T'^2|_{\mathcal{N}(\Delta_T)}$ is an isometry. Thus we obtain that $\mathcal{N}(\Delta_T)=\mathcal{N}(D_{T'}^2)=\mathcal{N}(D_{T'^2}^2)$. Consequently one has $\overline{\mathcal{R}(\Delta_T)}=\mathcal{D}_{T'}=\mathcal{D}_{T'^2}$. Using the block matrix \eqref{eq31} of $T'$ on $\mathcal{H}=\mathcal{N}(\Delta_T)\oplus \overline{\mathcal{R}(\Delta_T)}$ we get $T'^2$ in the form $$ T'^2= \begin{pmatrix} V^2& V\widehat{Z}\Delta^{-1}+\widehat{Z}\Delta^{-1}\widehat{T}\Delta^{-1}\\ 0 & \widehat{T}\Delta^{-1}\widehat{T}\Delta^{-1} \end{pmatrix} = \begin{pmatrix} V^2 & (V\widehat{Z}+\widehat{Z}\Delta^{-1}\widehat{T})\Delta^{-1}\\ 0 & \widehat{T}^2\Delta^{-2} \end{pmatrix}. $$ We used here that $\widehat{T}\Delta^{-1}=\Delta^{-1}\widehat{T}$ by $\Delta_T$-regularity of $T$.
This representation of $T'^2$ gives immediately that \begin{eqnarray*} T'^{*2}T'^2 &= & I\oplus \Delta^{-1}[\widehat{Z}^*\widehat{Z}+ \widehat{T}^*\Delta^{-1}(\widehat{Z}^*\widehat{Z}+\widehat{T}^*\widehat{T})\Delta^{-1}\widehat{T}]\Delta^{-1}\\ &=& I\oplus \Delta^{-1}(\widehat{Z}^*\widehat{Z}+\widehat{T}^*\Delta^{-1}\widehat{T})\Delta^{-1}\\ &=& I\oplus \Delta^{-1}(\widehat{Z}^*\widehat{Z}+\widehat{T}^*(I-\Delta')\widehat{T})\Delta^{-1}\\ &=& I\oplus \Delta^{-1}(\Delta-\widehat{T}^*\Delta'\widehat{T})\Delta^{-1}\\ &=& I\oplus (I-\widehat{T}^*\widehat{T}\Delta^{-1}\Delta')\Delta^{-1}. \end{eqnarray*} Here $\Delta':=I-\Delta^{-1}$ and we used also the fact that $\widehat{T}$ commutes with both $\Delta^{-1}$ and $\Delta'$. We infer that $D_{T'^2}^2=0\oplus (I+\widehat{T}^*\widehat{T}\Delta^{-2})\Delta'$ and thus $D_{T'^2}^2$-regularity of $T'$ will mean that $T'_0=P_{\overline{\mathcal{R}(\Delta_T)}}T'|_{\overline{\mathcal{R}(\Delta_T)}}=\widehat{T}\Delta^{-1}$ commutes with $D_{T'^2}^2|_{\overline{\mathcal{R}(\Delta_T)}}=(I+\widehat{T}^*\widehat{T}\Delta^{-2})\Delta'$. Equivalently, this relation can be written as $\widehat{T}\widehat{T}^*\widehat{T}\Delta'\Delta^{-3}=\widehat{T}^*\widehat{T}^2\Delta'\Delta^{-3}$, which holds if and only if $\widehat{T}$ is quasinormal, since $\Delta'=I-\Delta^{-1}=(\Delta-I)\Delta^{-1}$ is injective by the injectivity of $\Delta-I=\Delta_T|_{\overline{\mathcal{R}(\Delta_T)}}$. This argument shows that $D_{T'^2}^2$-regularity of $T'$ is equivalent to the quasinormality of $\widehat{T}$. We conclude that the assertions (ii) and (iv) of the theorem are equivalent. The proof is complete. \end{proof} \begin{remark}\label{re42} \rm It is easily seen that if $T$ is a $\Delta_T$-regular concave operator, then $T'$ (and consequently $T$) is quasinormal if and only if $T$ is an isometry (and in this case $T=T'$).
On the other hand, by the first assertion of Theorem \ref{te41} a concave operator $T$ is a $\Delta_{T^2}$-contraction, while the assertion (i) of Theorem \ref{te41} contains, in particular, the $\Delta_{T^2}$-regularity of $T$. But $\Delta_{T^2}$-regularity of $T$ is equivalent to each of the other statements (i)-(vi), because it implies (ii), as we have seen in the previous proof. Thus the general statement (i) can be reduced to its particular case $n=1$ and $m=2$, which can be easily verified in applications. \end{remark} As a consequence of Theorem \ref{te41} and Theorem \ref{te34} we have the following result. \begin{corollary}\label{co43} Let $T$ be a $\Delta_T$-regular concave operator. If $T$ is $\Delta_{T^2}$-regular, then $T$ is completely hyperexpansive. In particular, if $\Delta_T$ is of rank one, then $T$ is $\Delta_{T^2}$-regular. \end{corollary} \begin{corollary}\label{co44} \hspace*{7cm} \begin{itemize} \item[(a)] If $T$ is a regular concave operator that satisfies one of (the equivalent) conditions of Theorem \ref{te41}, then $T^n$ is a $\Delta_{T^n}$-regular concave operator for every integer $n\ge 2$. \item[(b)] If $T$ is concave and $\sigma^{-1}P_{\mathcal{N}(\Delta_T)}T|_{\overline{\mathcal{R}(\Delta_T)}}$ is an isometry, where $\sigma^2=\|\Delta_T\|+1$, then $T$ is $\Delta_T$-regular if and only if $\widehat{T}= P_{\overline{\mathcal{R}(\Delta_T)}}T|_{\overline{\mathcal{R}(\Delta_T)}}$ is a quasinormal contraction. \end{itemize} \end{corollary} \begin{proof} The assertion (a) follows directly from the statement (i) of Theorem \ref{te41}. For the assertion (b) we assume that $Z=\sigma^{-1}P_{\mathcal{N}(\Delta_T)}T|_{\overline{\mathcal{R}(\Delta_T)}}$ is an isometry.
Then, by the matrix representation of the concave operator $T$ on $\mathcal{H}=\mathcal{N}(\Delta_T)\oplus \overline{\mathcal{R}(\Delta_T)}$ with $V=T|_{\mathcal{N}(\Delta_T)}$, $\widehat{Z}=\sigma Z$ and $\widehat{T}=P_{\overline{\mathcal{R}(\Delta_T)}}T|_{\overline{\mathcal{R}(\Delta_T)}}$, we have $\Delta_T|_{\overline{\mathcal{R}(\Delta_T)}}=(\sigma^2-1)I+\widehat{T}^*\widehat{T}$. Thus $\widehat{T}$ commutes with $\Delta_T|_{\overline{\mathcal{R}(\Delta_T)}}$ (i.e. $T$ is $\Delta_T$-regular) if and only if $\widehat{T}$ is quasinormal. \end{proof} The second statement of this corollary shows that if the operator $Z$ above is an isometry, then the condition of $\Delta_T$-regularity can be added to the equivalent statements of Theorem \ref{te41}. If $T$ is a $\Delta_T$-regular 2-isometry (i.e. a quasi-Brownian isometry), then $\widehat{T}$ is even an isometry. Therefore $T$ trivially satisfies the assertions of Theorem \ref{te41} in this case. Returning to Theorem \ref{te34}, we remark that for $T\in \mathcal{B(H)}$ the assertion (i) of Theorem \ref{te34} means that $T$ is an $A_n(T)$-contraction for any integer $n\ge 1$, where as in \eqref{eq12}, $$ A_n(T)=-B_n(T)\ge 0, \quad B_n(T)=B_I^n(T), \quad A_1(T)=-B_1(T)=\Delta_T. $$ Under the assumption of $\Delta_T$-regularity of $T$, this fact ensures that $\widehat{T}$ in \eqref{eq26} is subnormal, while $\Delta_{T^n}$-regularity of $T$ for any $n\ge 1$ means that $\widehat{T}$ is quasinormal, by Theorem \ref{te41}. We analyse now the case when $T$ is $A_n(T)$-regular, that is $A_n(T)T=A_n(T)^{1/2}TA_n(T)^{1/2}$. \begin{theorem}\label{te46} Let $T\in \mathcal{B(H)}$ be a $\Delta_T$-regular completely hyperexpansive operator.
With the above notation, the following statements are equivalent: \begin{itemize} \item[(i)] $T$ is $A_n(T)$-regular for every integer $n\ge 2$; \item[(ii)] $T$ is $A_j(T)$-regular for $j=2,3$; \item[(iii)] $T$ is $A_2(T)$-regular and the compression of $T$ to $\overline{A_2(T)\mathcal{H}}$ is a quasinormal contraction. \end{itemize} In addition, if (one of) the assertions (i)-(iii) hold, then $\mathcal{N}(A_2(T))=\mathcal{N}(A_n(T))$ for $n\ge3$. \end{theorem} \begin{proof} By hypothesis and the above remark, we have that $T$ is an $A_n$-contraction, where $A_n:=A_n(T)$, for $n\ge 1$. This ensures that $\mathcal{N}(A_n)$ is an invariant subspace for $T$. Obviously, (i) implies (ii). Assume now that (ii) holds, that is $T$ is $A_j$-regular for $j=2,3$, where $$ A_2=\Delta_T-T^*\Delta_TT=I-2T^*T+T^{*2}T^2\ge 0, $$ $$ A_3=A_2-T^*A_2T=I-3T^*T+3T^{*2}T^2-T^{*3}T^3\ge 0. $$ Since $T$ is $\Delta_T$-regular, it follows from \cite[Proposition 2.1 and Theorem 4.6]{S-2006} that $$ \mathcal{N}(A_2)=\mathcal{N}(\Delta_T)\oplus \mathcal{N}(I-S_{\widehat{T}}) $$ and that $\mathcal{N}(I-S_{\widehat{T}})$ is invariant for $\Delta_T$. Here $S_{\widehat{T}}$ is the asymptotic limit of the contraction $\widehat{T}=P_{\overline{\mathcal{R}(\Delta_T)}}T|_{\overline{\mathcal{R}(\Delta_T)}}$. But $\mathcal{N}(I-S_{\widehat{T}})$ is the maximum invariant subspace for $\widehat{T}$ on which $\widehat{T}$ is an isometry (see \cite{Ku}). In fact $\mathcal{N}(I-S_{\widehat{T}})=\mathcal{N}(I-\widehat{T}^*\widehat{T})$ because this last subspace is also invariant for $\widehat{T}$, $\widehat{T}$ being a $D_{\widehat{T}}^2$-contraction. 
Using the block matrix \eqref{eq26} of $T$ and denoting $\Delta_0=\Delta_T|_{\overline{\mathcal{R}(\Delta_T)}}$, we have $$ A_2=0\oplus (\Delta_0-\widehat{T}^*\Delta_0\widehat{T})=0\oplus (I-\widehat{T}^*\widehat{T})\Delta_0=0\oplus \Delta_0(I-\widehat{T}^*\widehat{T}), $$ taking into account that $\widehat{T}\Delta_0=\Delta_0\widehat{T}$ (by Theorem \ref{te23}). Next we represent $\widehat{T}$ and $\Delta_0$ on $\overline{\mathcal{R}(\Delta_T)}=\mathcal{N}(D_{\widehat{T}})\oplus \mathcal{D}_{\widehat{T}}$ in the form \begin{equation}\label{eq41} \widehat{T}= \begin{pmatrix} \widehat{V} & D\\ 0 & C \end{pmatrix}, \quad \Delta_0=\Delta_2\oplus \Delta_1, \end{equation} where $\widehat{V}$ is an isometry, $C, D$ are contractions, $\widehat{V}^*D=0$ and $\Delta_j\ge 0$ ($j=1,2$). Clearly, $\Delta_0$ can be written in this form because $\mathcal{N}(D_{\widehat{T}})$ is invariant for $\Delta_T=0\oplus \Delta_0$ on $\mathcal{H}=\mathcal{N}(\Delta_T)\oplus \overline{\mathcal{R}(\Delta_T)}$, so $\mathcal{N}(D_{\widehat{T}})$ reduces $\Delta_0$. Then we obtain from the relation $\widehat{T}\Delta_0=\Delta_0\widehat{T}$ that $C\Delta_1=\Delta_1C$. By the above expression of $A_2$ we get that $$ A_2=0\oplus (I-D^*D-C^*C)\Delta_1=0\oplus \Delta_1(I-D^*D-C^*C) $$ on $\mathcal{H}=\mathcal{N}(A_2)\oplus \mathcal{D}_{\widehat{T}}$, with $\mathcal{D}_{\widehat{T}}=\overline{\mathcal{R}(A_2)}$, because $\mathcal{N}(D_{\widehat{T}})=\mathcal{N}(A_2)\ominus \mathcal{N}(\Delta_T)$. Next we use the expression of $A_3$ in terms of $A_2$ and the fact that $\mathcal{N}(A_3)$ is invariant for $T$. Thus, as $T$ is $A_2$-regular by (ii), we have by \cite[Proposition 2.1 and Theorem 4.6]{S-2006} that $$ \mathcal{N}(A_3)=\mathcal{N}(A_2)\oplus \mathcal{N}(I-S_C), $$ $C$ being the compression of $T$ to $\mathcal{D}_{\widehat{T}}=\overline{\mathcal{R}(A_2)}$. By \eqref{eq41} and the condition $\widehat{V}^*D=0$, we get that $\mathcal{N}(I-C^*C)\subset \mathcal{N}(I-\widehat{T}^*\widehat{T})$. 
Hence $$ \mathcal{N}(I-S_C)=\mathcal{N}(I-C^*C)=\{0\}. $$ We obtain $\mathcal{N}(A_3)=\mathcal{N}(A_2)$ and so $\overline{\mathcal{R}(A_3)}=\overline{\mathcal{R}(A_2)}$. Also, since $T$ is $A_2$-regular, we have \begin{equation}\label{eq42} C(I-D^*D-C^*C)\Delta_1=(I-D^*D-C^*C)\Delta_1C. \end{equation} On the other hand, as $A_3=A_2-T^*A_2T$, we infer $$ A_3=0\oplus [(I-D^*D-C^*C)\Delta_1-C^*(I-D^*D-C^*C)\Delta_1C]=:0\oplus \Delta_3 $$ on $\mathcal{H}=\mathcal{N}(A_2) \oplus \mathcal{D}_{\widehat{T}}=\mathcal{N}(A_3)\oplus \mathcal{D}_{\widehat{T}}$. As $T$ is $A_3$-regular by (ii) and $C=P_{\overline{\mathcal{R}(A_3)}}T|_{\overline{\mathcal{R}(A_3)}}$, we have as in the proof of Theorem \ref{te23} that $C\Delta_3=\Delta_3C$. This relation, together with \eqref{eq42} and the fact that $C\Delta_1=\Delta_1C$, leads to the identity $$ CC^*C(I-D^*D-C^*C)\Delta_1=C^*C^2(I-D^*D-C^*C)\Delta_1. $$ This means that $CC^*CA_2|_{\overline{\mathcal{R}(A_2)}}=C^*C^2A_2|_{\overline{\mathcal{R}(A_2)}}$. Since $A_2|_{\overline{\mathcal{R}(A_2)}}$ is an injective positive operator, we conclude that $CC^*C=C^*C^2$, i.e. $C$ is quasinormal. Hence (ii) implies (iii). Next we assume that (iii) holds, that is, $T$ is $A_2$-regular and $C$ in \eqref{eq41} is quasinormal. We show that $\mathcal{N}(A_2)=\mathcal{N}(A_n)$ and that $T$ is $A_n$-regular for $n\ge 3$. Recall that $A_n=A_{n-1}-T^*A_{n-1}T$ and that $T$ is an $A_n$-contraction, so $\mathcal{N}(A_n)$ is invariant for $T$, for any $n\ge 2$. Since $T$ is also $A_2$-regular by (iii), it follows that $C$ commutes with $A_2|_{\overline{\mathcal{R}(A_2)}}=(I-D^*D-C^*C)\Delta_1$, with $D$ and with $\Delta_1$ from \eqref{eq41}. This gives that $CA_2^{1/2}h=A_2^{1/2}Th$ for $h\in \mathcal{H}$. Since $T$ is an $A_2$-contraction, $C$ is the (unique) contraction on $\overline{\mathcal{R}(A_2)}$ induced by $T$.
Then, using \cite[Proposition 2.1 and Theorem 4.1]{S-2006}, we have $$ \mathcal{N}(A_3)=\mathcal{N}(A_2-T^*A_2T)=(A_2^{1/2})^{-1}\mathcal{N}(I-S_C). $$ As we have seen before, $\mathcal{N}(I-S_C)=\{0\}$. Therefore we get $$ \mathcal{N}(A_3)=\{h\in \mathcal{H}:A_2^{1/2}h=0\}=\mathcal{N}(A_2), \quad \overline{\mathcal{R}(A_3)}=\overline{\mathcal{R}(A_2)}=\mathcal{D}_{\widehat{T}}. $$ Recall that $\Delta_3=A_3|_{\overline{\mathcal{R}(A_3)}}$. Therefore $ \Delta_3=A_2|_{\mathcal{D}_{\widehat{T}}}-C^*(A_2|_{\mathcal{D}_{\widehat{T}}})C. $ Taking into account that $C$ is quasinormal and that it commutes with $A_2|_{\mathcal{D}_{\widehat{T}}}$, it follows that $C\Delta_3=\Delta_3C$. This means that $T$ is $A_3$-regular. We show by induction that $\mathcal{N}(A_2)=\mathcal{N}(A_n)$ and that $T$ is $A_n$-regular for $n\ge 3$. We have proved this fact for $n=3$, and we assume now that $T$ is $A_j$-regular with $\mathcal{N}(A_j)=\mathcal{N}(A_2)$ for $3\le j\le n$. A simple computation shows that $$ \Delta_m:=A_m|_{\overline{\mathcal{R}(A_m)}}=\sum_{j=0}^{m-2}(-1)^j \begin{pmatrix} m-2\\ j \end{pmatrix} C^{*j}(A_2|_{\mathcal{D}_{\widehat{T}}})C^j $$ for $m\ge 3$. So, as $C$ is quasinormal and commutes with $A_2|_{\mathcal{D}_{\widehat{T}}}$, it follows that $C\Delta_m=\Delta_mC$ for $m\ge 3$. Furthermore, since for $m\le n$ one has $\overline{\mathcal{R}(A_m)}=\overline{\mathcal{R}(A_2)}=\mathcal{D}_{\widehat{T}}$, we have by this commutation relation that $C$ is the (unique) contraction on $\overline{\mathcal{R}(A_n)}$ induced by $T$ as an $A_n$-contraction. Then, by \cite[Proposition 2.1 and Theorem 4.1]{S-2006}, we get (as above) $$ \mathcal{N}(A_{n+1})=\mathcal{N}(A_n-T^*A_nT)=(A_n^{1/2})^{-1}\mathcal{N}(I-S_C)=\mathcal{N}(A_n). $$ Hence $C$ is the compression of $T$ to $\overline{\mathcal{R}(A_{n+1})}= \overline{\mathcal{R}(A_n)}$. Since $C\Delta_{n+1}=\Delta_{n+1}C$, we infer that $T$ is $A_{n+1}$-regular.
We conclude that $T$ is $A_m$-regular and that $\mathcal{N}(A_m) =\mathcal{N}(A_2)$ for every integer $m\ge 2$. Thus (iii) implies (i) and all assertions are proved. \end{proof} From the above proof we can extract more information about the operator $C$. \begin{corollary}\label{co47} If $T$ is a completely hyperexpansive operator which is $A_j$-regular for $j=1,2,3$, then the compression of $T$ to $\overline{\mathcal{R}(A_2)}$ is a completely non-isometric quasinormal contraction. \end{corollary} Note that if $T$ is a $\Delta_T$-regular 2-isometry, then $A_n=0$ for $n\ge 2$. So Theorem \ref{te46} and Corollary \ref{co47} are meaningful only for completely hyperexpansive operators which are not $\Delta_T$-regular 2-isometries. \subsection*{Acknowledgment} The authors gratefully thank the Referee for the constructive comments and recommendations which definitely helped to improve the readability and quality of the paper. The first named author was supported in part by the project FRONT of the French National Research Agency (grant ANR-17-CE40-0021) and by the Labex CEMPI (ANR-11-LABX-0007-01). The second named author was supported by a project financed by Lucian Blaga University of Sibiu research grant LBUS-IRG-2017-03. \bibliographystyle{amsalpha}
\section{Introduction} \label{1} The \textit{ratchet} effect, i.e., the possibility of obtaining directed transport by using zero-mean perturbations only, has attracted notable scrutiny over the last decade [1]. Initially, most studies focused on noisy overdamped models inspired by the realm of molecular motors [1]. The ratchet approach has since been applied to a broad class of physical systems in which inertia effects are essential [2]. Examples are Josephson junctions [3], cold-atom systems [4], and mechanical engines [5]. Recently it has been shown that the ratchet idea can be viewed as part of a general symmetry-breaking approach [6]. This approach is based on the analysis of all relevant symmetries which have to be broken in order to fulfill the necessary conditions for the appearance of a dc-output. The formalization of the symmetry analysis for single-particle dynamics has been addressed in Ref. [7]. In the present paper we aim at collective rectification effects which arise in a set of coupled single ratchet units. Although various examples of interacting ratchet systems have already been proposed in the context of molecular motors [8], here the emphasis is put on the weak-noise underdamped limit. We show that the coupling scheme determines a set of symmetries of the ratchet collective and hence defines necessary conditions for dc-current generation. The dynamical mechanisms of current rectification are connected with the coherence between units, which depends not only on the coupling scheme, but also on the strength of the interactions. \section{Coupling schemes and symmetries} \label{} Let us consider a set of $N$ identical dynamical units, $\mathbf{x}=\{x_{i}, i=1,...,N\}$, that are linearly and symmetrically coupled. The coupling scheme is described by a graph which can be encoded in the symmetric $N\times N$ binary matrix $G_{ij}$, $G_{ij}=G_{ji}$.
The equations of motion are the following: \begin{eqnarray} \ddot{x}_{i}=-\alpha\dot{x}_{i}+F(x_{i}-x_{i}^{0},t-t_{i}^{0})+ c\sum_{j=1}^{N}G_{ij}H(x_{i},x_{j},\dot{x}_{i},\dot{x}_{j})+\xi_{i}(t), \end{eqnarray} where $F(x+L,t)=F(x,t+T)=F(x,t)$ is the doubly periodic force function, $H$ is a coupling function, linear in all its arguments, and $c$ is the strength of the interactions. The stochastic terms $\xi_{i}(t)$ are mutually independent delta-correlated Gaussian white noises, $\langle \xi_{i}(t)\xi_{j}(s) \rangle=\sigma \delta_{ij}\delta(t-s)$, where $\sigma$ is the noise intensity. We assume also that the force function, $F$, is the same for all the units, but the phases, $x_{i}^{0}$ and $t_{i}^{0}$, can be different for different units. Finally, we are interested in the mean dc-current, \begin{equation}\label{2} J=\lim_{t\rightarrow \infty} \frac{1}{Nt}\sum_{i=1}^{N}x_{i}(t). \end{equation} Following the ideology of the symmetry analysis [6-7], in order to determine the necessary conditions for the appearance of a dc-output, we have to check whether there exist symmetry transformations which allow one to generate, for each trajectory of the system (1), another one with reversed velocity. The presence of the white noise, $\xi_{i}(t)$, does not change the symmetry properties of the system. Moreover, the coupling to a heat bath leads to an effective exploration of the whole phase space and produces an averaging in the case of several coexisting attractors [6,9]. For the one-particle case, $N=1$, a transformation of interest has to involve a change of the sign of $x$ (and thus of the direction of the current). It may also include shifts in the time and space domains [6-7]: \begin{equation}\label{3} \hat{S}_{single}: x\rightarrow-x+\lambda,~~~t\rightarrow t +\tau,~~~~x\in \textbf{R} . \end{equation} In the case of several coupled ratchets a symmetry operation should be performed in the global coordinate space $\textbf{R}^{N}$ and can also involve a permutation between different units.
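As an aside, the mean current (2) can be estimated by direct numerical integration of Eq. (1). The sketch below is not part of the original analysis; it assumes the concrete force $F(x,t)=\sin(x)\sin(\omega t)$ and the spring coupling $H=x_{j}-x_{i}$ used in the examples below, an Euler--Maruyama scheme, and purely illustrative parameter values.

```python
import numpy as np

def mean_dc_current(G, x0, t0, alpha=0.1, omega=0.3, c=0.1, sigma=0.0,
                    dt=0.01, n_steps=50_000, seed=0):
    """Euler-Maruyama integration of Eq. (1) with F(x,t) = sin(x) sin(omega t)
    and spring coupling H = x_j - x_i; returns an estimate of the mean
    dc-current J defined in Eq. (2)."""
    rng = np.random.default_rng(seed)
    N = G.shape[0]
    x = np.zeros(N)                    # coordinates x_i
    v = np.zeros(N)                    # velocities dx_i/dt
    for step in range(n_steps):
        t = step * dt
        F = np.sin(x - x0) * np.sin(omega * (t - t0))
        coupling = c * (G @ x - G.sum(axis=1) * x)   # c * sum_j G_ij (x_j - x_i)
        noise = np.sqrt(sigma * dt) * rng.standard_normal(N)
        v = v + dt * (-alpha * v + F + coupling) + noise
        x = x + dt * v
    return x.mean() / (n_steps * dt)   # time- and ensemble-averaged velocity
```

With zero phase shifts the two-unit dynamics stays symmetric and the estimated current vanishes identically, while shifted phases generically produce a drift, in line with the discussion below.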
The corresponding symmetry operation can be described as the linear transformation, \begin{equation}\label{2} \hat{S}_{network}: \mathbf{x}\rightarrow-\mathbf{S}\mathbf{x}+\mathbf{\lambda},~~~ t\rightarrow t+\tau,~~~~\mathbf{x}\in \textbf{R}^{N}, \end{equation} where $S_{ij}$ is an $N\times N$ binary matrix with exactly one non-zero element in each row and each column, and $\mathbf{\lambda}=\{\lambda_{i}\}, i=1,...,N$, is the vector of shifts. The matrix $\mathbf{S}$ encodes a permutation between units. The properties of the permutation matrix $S_{ij}$ (and the existence of such a matrix at all) depend strongly on the structure of the connections among the units. Below we illustrate this statement with simple examples. {\it Breaking symmetries by connections.} Let us first consider the case of two particles (rotators) in a standing-wave potential with modulated amplitude [10], coupled by a linear spring: \begin{eqnarray} \label{1} \ddot{x}_{1}=-\alpha\dot{x}_{1}+F(x_{1},t)+c(x_{2}-x_{1})+\xi_{1}(t) \\ \ddot{x}_{2}=-\alpha\dot{x}_{2}+F(x_{2}-x_{0},t-t_{0})+c(x_{1}-x_{2})+\xi_{2}(t), \end{eqnarray} where $F(x,t)=\sin(x)\sin(\omega t)$. In the uncoupled case, $c=0$, both systems possess symmetries of the type (3): \begin{eqnarray}\label{2} \hat{S}_{1}: x\rightarrow-x+ \pi, t\rightarrow t+T/2 \\ \hat{S}_{2}: x\rightarrow-x+\pi+x_{0}, t\rightarrow t+T/2+t_{0}, \end{eqnarray} with $x_{0}=t_{0}=0$ for the first rotator, Eq.(5). The symmetry transformations, Eqs.(7)-(8), are independent for each rotator. The whole system, Eqs.(5)-(6), is symmetric with respect to the transformation $\hat{S}_{1} \times \hat{S}_{2}$ for any choice of $t_{0}$ and $x_{0}$. So the mean dc-output for the uncoupled case is zero (line (1) in Fig.1a). In the case of coupled particles, $c>0$, the abovementioned transformations, Eqs.(7)-(8), have to be conjugated, and independent spatial and temporal shifts are now forbidden.
For $x_{0}\neq k\pi$ and $t_{0}\neq T/2$, the connection breaks both symmetries and we can expect the appearance of a nonzero current (see also Ref. [11] for another example of an overdamped dimer with an additive driving force). For the set of parameters $\alpha=0.1$, $w=0.3$, $x_{0}=\pi/2$, $\sigma=0.01$ and $t_{0}=T/4$, the symmetry violation is realized by an asymmetrical limit cycle with a negative winding number (line (2) in Fig.1a). So, in this case the connection between the units destroys all the symmetries and leads to dc-current generation. \begin{figure}[htbp] \centering{\resizebox{8cm}{!}{\includegraphics{symm.eps}}} \caption{(a) The dependence of $x_{1}(t)$ versus $t$ for different coupling schemes (see text for details) for the parameter values $\alpha=0.1$, $w=0.3$, $x_{0}=\pi/2$ and $t_{0}=T/4$ and noise intensity $\sigma=0.01$. The dependencies for the other rotators from the set are the same due to the homogeneity of the system, Eqs.(5)-(6) and Eqs.(9)-(10); (b) Poincar\'{e} sections for the first rotator for the third variant of the coupling scheme. The coupling to the heat bath leads to the averaging over two symmetry-related chaotic attractors (white dots) with opposite mean velocities.} \end{figure} {\it Restoring symmetries by connections.} Let us now consider a dimer identical to the previous one, Eqs.(5)-(6), but spatially shifted by $\pi$ (half of the period): \begin{eqnarray} \label{1} \ddot{x}_{3}=-\alpha\dot{x}_{3}+F(x_{3}+\pi, t)+c(x_{4}-x_{3})+\xi_{3}(t) \\ \ddot{x}_{4}=-\alpha\dot{x}_{4}+F(x_{4}-x_{0}+\pi, t-t_{0})+c(x_{3}-x_{4})+\xi_{4}(t). \end{eqnarray} The two systems, Eqs.(5)-(6) and Eqs.(9)-(10), can be transformed into one another by a simple coordinate shift. Thus, the system in Eqs.(9)-(10) produces the same mean current as the system in Eqs.(5)-(6). As the next step we couple both systems by the additional connections $((1)\leftrightarrow (4), (2) \leftrightarrow (3))$ (see the inset in Fig.1a).
The generalized symmetry transformation, Eq.(4), can be identified with the following permutation matrix: \begin{equation} S_{ij}=\left(\begin{array}{cccc}1&0&0&0\\0&0&0&1\\0&0&1&0\\0&1&0&0 \end{array}\right), \end{equation} and the shift vector $\mathbf{\lambda}=\{\pi\}$. Here, in contrast to the previous case, the introduction of additional connections leads to the appearance of a new symmetry and we may expect the nullification of the dc-output (line (3) in Fig.1a). For the above set of parameters we found that the symmetry is realized in the phase space by two symmetry-coupled chaotic attractors with opposite mean velocities (see Fig.1b). It is easy to check that any other type of connection with two additional links does not restore the system's symmetry. \section{Synchronization and current reversals} \label{} While the presence or absence of the dc-output is clearly connected to the absence/presence of the symmetry transformation, the value of the dc-current is determined by dynamical mechanisms. From previous studies [12] it is known that the global properties of the collective dynamics are defined by the coherence between units. Thus, we can expect that the efficiency of the current rectification is closely related to the degree of synchronization within the ratchet collective. As a natural example we consider an array of $N$ globally coupled underdamped Josephson junctions (JJ), subjected to an ac-current $E(t)$ [13]. The equation for the superconducting phase difference $x_{i}$ across a single junction is \begin{eqnarray} \label{1} \ddot{x}_{i}=-\alpha\dot{x}_{i}+\sin(x_{i})+E(t)+ c(\dot{x}_{i}-\langle \dot{x} \rangle_{A})+\xi_{i}(t) , \end{eqnarray} where $\langle \dot{x} \rangle_{A}=\frac{1}{N} \sum_{i=1}^{N}\dot{x}_{i}(t)$ is the instantaneous mean array current. As the driving force we used the two-harmonics combination, $E(t)=\cos(\omega t)+\cos(2\omega t+\pi/2)$, which ensures that all the relevant symmetries are broken [7].
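Returning for a moment to the four-unit network: a necessary ingredient of the restored symmetry is that the permutation encoded by the matrix (11) leaves the coupling graph invariant. This can be checked mechanically; the sketch below (added here for illustration, with 0-based indices) also shows that a different choice of the two extra links is not preserved by this permutation.

```python
import numpy as np

# Coupling graph of the four-unit network: the dimer bonds (1-2) and (3-4)
# plus the additional links (1-4) and (2-3); 0-based indices are used.
G = np.zeros((4, 4), dtype=int)
for i, j in [(0, 1), (2, 3), (0, 3), (1, 2)]:
    G[i, j] = G[j, i] = 1

# Permutation matrix of Eq. (11): units 1 and 3 are fixed, units 2 and 4 swapped.
S = np.array([[1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0],
              [0, 1, 0, 0]])

graph_invariant = np.array_equal(S @ G @ S.T, G)   # the connections are preserved

# A different pair of extra links, e.g. (1-3) and (2-4), is not preserved by S:
G_alt = np.zeros((4, 4), dtype=int)
for i, j in [(0, 1), (2, 3), (0, 2), (1, 3)]:
    G_alt[i, j] = G_alt[j, i] = 1
alt_invariant = np.array_equal(S @ G_alt @ S.T, G_alt)
```

Graph invariance is only a necessary ingredient here: the full symmetry (4) also involves the spatial and temporal shifts, which have to be checked at the level of the equations of motion.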
The numerically obtained dependence of the array dc-output on the strength of interaction $c$ is shown in Fig.2(a). For the set of parameters $\alpha=0.1, \omega=1, \sigma=0$, the dependence demonstrates the presence of two successive current reversals, at $c \approx 0.03$ and $c\approx 0.33$. In order to understand the dynamical mechanisms of these events, we introduce the mean decoherence, which we define as \begin{equation} D=\langle | \dot{x}_{i}(t)-\langle \dot{x} \rangle_{A}| \rangle_{A,T}, \end{equation} where $\langle ... \rangle_{A, T}$ denotes averaging over the array and over one period of the ac-driving. The first result from the comparison of the two dependencies (Fig.2(b)) is that the current reversal at $c \approx 0.33$ is connected with the transition to the regime of complete synchronization, $D=0$. \begin{figure}[t] \centering{\resizebox{8cm}{!}{\includegraphics{Network.eps}}} \caption{The dependence of the mean dc-output, $J$, and the degree of decoherence, $D$, on the strength of interaction $c$ for the system of $N=50$ coupled units from Eq.(12). The values of the parameters are $\alpha=0.1$, $w=1$, $\sigma=0$ (circles) and $\sigma=0.03$ (triangles).} \end{figure} Let us now briefly track the relation between the current reversal and the synchronization. Initially, in the uncoupled limit, $c=0$, all the JJs work independently, each one as a single one-dimensional rectifier [1,6]. The asymmetry of the ac-force $E(t)$ is realized through a chaotic attractor with a positive mean current (see Fig.3a) [14]. The local phase space of each unit, $(x_{i}, \dot{x}_{i},t)$, has an identical structure, but the relative phases of different units are randomly distributed (depending on the initial conditions). After the introduction of a nonzero coupling, $c>0$, some coherence between the units occurs. From the point of view of the dynamics of a single junction, this leads to changes in the attractor structure in the phase space $\textbf{R}^{3}$ (see Fig.3b).
This causes changes in the projection of the attractor's invariant density onto the velocity subspace $\dot{x}_{i}$ and, as a result, leads to the current reversal at $c\approx 0.03$. The further increase of the interaction strength up to $c \approx 0.33$ results in complete synchronization and in a shrinking of the global attractor in $\textbf{R}^{2N+1}$ to the hyperplane $\textbf{R}^{3}$, $(x_{i}=x,\dot{x}_{i}=\dot{x},t)$. The global attractor now has dimension equal to $3$. The attractor of a single unit is now the same as in the uncoupled limit, but all the units have the same phase. The mean dc-output returns to its value in the limit $c=0$. \begin{figure}[t] \centering{\resizebox{10cm}{!}{\includegraphics{Poinc.eps}}} \caption{Poincar\'{e} sections for a single junction from the array of $N=50$ coupled JJs, Eq.(12), for (a) $c=0$ and (b) $c=0.27$ (zero-noise case). Parameter values are the same as in Fig.2.} \end{figure} Here some analogy with the current reversal in a one-dimensional deterministic ratchet [14] can be drawn. In the one-dimensional case current reversals have been identified with tangent bifurcations from chaotic to regular (limit cycle) attractors [14]. In our case the current reversal at $c \approx 0.33$ corresponds to the transition ``hyperchaos - chaos'' [15], connected with the shrinking of the system attractor in the global phase space $\textbf{R}^{2N+1}$. It is interesting to note that, from the point of view of the local attractor of a single junction, this transition corresponds to a crisis, connected with an {\it expansion} of the attractor in the three-dimensional subspace $(x_{i}, \dot{x}_{i},t)$ (compare Fig.3a and Fig.3b). The presence of a weak noise (which is equivalent to a weak coupling to a heat bath) can strongly suppress correlations between units and may lead to a delayed synchronization transition [16].
Owing to the strong conjugation between the synchronization and the current rectification process, we can realize a noise-induced current reversal at a fixed strength of interaction (Fig.2(b)). Thus, the presence of a thermostat allows one to control the dc-output by changing the temperature of the system. \section{Concluding remarks} \label{} Finally, we have presented a symmetry approach to the problem of collective current rectification by a set of coupled dynamical units. This idea can be used for more general problems, such as obtaining a non-zero value of some relevant mean ensemble characteristic, $\langle A(\mathbf{x})\rangle_{t}$. This characteristic can correspond, for example, to the mean magnetization of a spin lattice with a complex geometry [17]. The proposed collective ratchet ideology may also be relevant in the context of the cooperative dynamics of neural networks, as an approach to visual processing with directional selectivity [18]. Relations between the symmetries of a collective ratchet and its coupling scheme, on the one hand, and relations between the topology of interactions and synchronization properties [19], on the other, may open an interesting perspective. It has been found that in the thermodynamic limit, $N \rightarrow \infty$, synchronization is impossible for nearest-neighbor coupled dynamical networks unless the number of sites connected to a given site, $N_{c}$, is a finite fraction of the total number of sites $N$, $\frac{N_{c}}{N}=const$ [20]. On the other hand, in the case of small-world networks, i.e. sets with long (in the sense of topological distance) connections, synchronization can be achieved in the thermodynamic limit through a small fraction of distant connections [20]. This nontrivial effect provides a tool for the control of the dc-output in a massive collective ratchet by changing a small number of relevant connections only. This work has been supported by the Emmy Noether-Programm of the DFG under contract LU1382/1-1
\section{Introduction} Let $G$ be an abelian group, written additively, and $A$ a subset of $G$. For a positive integer $h$ we denote by $hA$ the subset of $G$ consisting of all possible sums of $h$ not necessarily distinct element of $A$, i.e.: \be hA = \{a_1 + \cdots + a_h : a_i \in A \}. \ee This set is called the {\em $h$-fold sumset} of $A$. We say that $A$ is a {\em basis} for $G$ if $hA = G$ for some $h \in \mathbb{N}$. Define the function $\rho : 2^{G} \rightarrow \mathbb{N} \cup \{\infty\}$ as follows : \be \rho (A) := \left\{ \begin{array}{lr} \min \{h : hA = G\}, & {\hbox{if $A$ is a basis for $G$}}, \\ \infty, & {\hbox{otherwise}}. \end{array} \right. \ee In the case where $\rho (A) < \infty$, this invariant is usually referred to as the {\em order}{\footnote{In \cite{DH} the term {\em exponent} is used, while in \cite{KL} the term {\em positive diameter} appears. These authors also employ different notations.}} of the basis $A$. \\ \\ Now let us specialise to the case $G = \mathbb{Z}_n$, a finite cyclic group. Throughout this paper we will write $\rho_{n}(A)$ when referring to a subset $A$ of $\mathbb{Z}_n$. Clearly a subset $A \subseteq \mathbb{Z}_n$ is a basis if and only if the greatest common divisor of its elements is relatively prime to $n$. Also, it is easy to see that, if $\rho_{n}(A) < \infty$ then $\rho_{n}(A) \leq n-1$, with equality if and only if $A = \{a_1,a_2\}$ is a 2-element set with GCD$(a_2-a_1,n) = 1$. Hence the range of the function $\rho_{n}$ is contained inside $[1,n-1] \cup \{\infty\}$. It has been known for some time that, for large enough $n$, the range of $\rho_{n}$ does not contain the entire interval of integers $[1,n-1]$. In a recent paper, which also contains a summary of previously known results, Dukes and Herke \cite{DH} make a conjecture regarding gaps in the range of $\rho_{n}$. 
Their hypothesis may be stated precisely as follows : \begin{conj} For each $k \in \mathbb{N}$ there exists an absolute constant $c_k > 0$ such that the following holds : \par For any $n \in \mathbb{N}$, if $A$ is a basis for $\mathbb{Z}_n$ for which $\rho_{n}(A) \geq n/k$, then there is some integer $l \in [1,k]$ such that $|\rho_{n} (A) - n/l | < c_k$. \end{conj} Observe that the conjecture would imply the existence of arbitrarily long gaps in the range of $\rho_{n}$, for all sufficiently large $n$. The purpose of our note is to prove this conjecture, using results from additive number theory concerning the structure of sets with small doubling. The idea is roughly as follows : On the one hand, the conjecture says something about the possible orders of a basis for $\mathbb{Z}_n$ when that order is large, namely of order $n$. On the other hand, various results from additive number theory imply that if $A$ is a basis for $\mathbb{Z}_n$, then the iterated sumsets $hA$ cannot grow in size $\lq$too slowly' and, if the growth rate is close to the slowest possible, then $A$ has a very restricted structure. Putting these two things together allows us to describe closely the structure of (a small multiple of) a basis $A$ of large order, and from there we can establish the conjecture. \setcounter{equation}{0} \section{Preliminaries} Here we state three results from the additive number theory literature which will be used in our proof of Conjecture 1.1. The first result is part of Theorem 2.5 of \cite{KL} : \begin{thm} {\bf (Klopsch-Lev)} Let $n \in \mathbb{N}$ and $\rho \in [2,n-1]$. Let $A$ be a basis for $\mathbb{Z}_n$ such that $\rho_{n}(A) \geq \rho$. Then \be |A| \leq \max \left\{ \frac{n}{d} \left( \lfloor \frac{d-2}{\rho - 1} \rfloor + 1 \right) : d | n, \; d \geq \rho + 1 \right\}, \ee In particular, for each fixed $k \in \mathbb{N}$, if $\rho_{n}(A) \geq n/k$ and $n \gg 0$, then $|A| \leq 2k$. 
\end{thm} The second result concerns the structure of subsets of $\mathbb{Z}_n$ with small doubling and is Theorem 1 of \cite{DF} : \begin{thm} {\bf (Deshouillers-Freiman)} Let $n \in \mathbb{N}$ and $A$ a non-empty subset of $\mathbb{Z}_n$ such that $|A| < 10^{-9} n$ and $|2A| < 2.04 |A|$. Then there is a subgroup $H \subsetneqq G$ such that one of the following three cases holds : \par (i) if the number of cosets of $H$ met by $A$, let us call it $s$, is different from $1$ and $3$, then $A$ is included in an arithmetic progression of $l$ cosets modulo $H$ such that \be (l-1)|H| \leq |2A| - |A|. \ee \par (ii) if $A$ meets exactly three cosets of $H$, then (2.2) holds with $l$ replaced by $\min \{l,4\}$. \par (iii) if $A$ is included in a single coset of $H$, then $|A| > 10^{-9} |H|$. \\ \\ Furthermore, when $l \geq 2$, there exists a coset of $H$ which contains more than $\frac{2}{3} |H|$ elements from $A$, a relation superseded by (2.2) when $l \geq 4$. \end{thm} \begin{rek} In \cite{DF} the authors remark that they expect that the same structure theorem holds for larger constants than $2.04$ and $10^{-9}$ respectively. This is known to be the case when $n$ is prime, according to the so-called {\em Freiman-Vosper theorem}. For a proof of that $\lq$classical' result, see Theorem 2.10 in \cite{N}. \end{rek} The third and last result from the literature that we shall use is a special case of a result of Lev \cite{L}, generalising an earlier result of Freiman \cite{F}, concerning the growth of sumsets of a large subset of an arithmetic progression of integers : \begin{thm} {\bf (Freiman, Lev)} Let $A \subseteq \mathbb{Z}$ satisfy \be |A| = n, \;\;\; A \subseteq [0,l], \;\;\; \{0,l\} \subseteq A, \;\;\; {\hbox{gcd}}(A) = 1. \ee If $2n-3 \geq l$ then, for every $h \in \mathbb{N}$ one has \be |hA| \geq n + (h-1)l. \ee \end{thm} \setcounter{equation}{0} \section{Proof of Conjecture 1.1} First some notation. Let $G$ be an abelian group and $A \subseteq G$. 
For $g \in G$ we denote \be A + g := \{a+g : a \in A \}, \ee and for $h \in \mathbb{Z}$ we denote \be h \cdot A := \{ ha : a \in A \}. \ee \begin{lem} Let $A \subseteq \mathbb{Z}_n$ and $u,v \in \mathbb{Z}$ such that $(u,n) = 1$. Then $\rho_{n}(A) = \rho_{n}[(u \cdot A) + v]$. \end{lem} \begin{proof} This is clear. \end{proof} \begin{lem} Conjecture 1.1 holds for bases consisting of at most $3$ elements. \end{lem} \begin{proof} Let $n \in \mathbb{N}$ and $A$ be a basis for $\mathbb{Z}_n$ such that $|A| \leq 3$. If $|A| = 1$ then $n=1$, so the Conjecture is vacuous. If $|A| = 2$ then $\rho_{n}(A) = n-1$, as already noted in the Introduction. Suppose $|A| = 3$. From Lemma 3.1 it is easy to deduce that, without loss of generality, one of the following cases arises : \\ \\ (i) $A = \{0,1,t\}$ for some $t \in [2,n-1]$, \\ (ii) $A = \{0,a,b\}$ where $a \geq 2$, $a \mid n$ and $(a,b) = 1$. \\ \\ In case (ii) it is easy to see that \be \max \left\{ \frac{n}{a} - 1, a-1 \right\} \leq \rho_{n} (A) \leq \left( \frac{n}{a} - 1 \right) + (a-1), \ee which in turn is easily seen to imply Conjecture 1.1. It remains to deal with case (i). In what follows we adopt the following notation : If $x \in \mathbb{Z}$ and $n \in \mathbb{N}$ then $||x||_n$ denotes the numerically least residue of $x$ modulo $n$, that is, the unique integer $x_0 \in (-n/2,n/2]$ such that $x \equiv x_0 \; ({\hbox{mod $n$}})$. \\ \\ So fix $k,t \in \mathbb{N}_{> 1}$ and set $A = \{0,1,t\}$. Let $n \in \mathbb{N}$, which we think of as being very large. We suppose that $\rho_{n}(A) > n/k$ and shall show that Conjecture 1.1 holds. First of all, by a standard pigeonhole principle argument, there is some integer $c \in [1,k-1]$ such that $||ct||_{n} \leq n/k$. Put $r := ||ct||_{n}$ and $s := |r|$. Clearly, the order of the basis $\{0,1,s\}$ is at most $s + n/s$ (if $s = 0$ this quantity becomes infinite, which is consistent with our earlier notations).
In terms of $A$, this implies that \be \rho_{n}(A) \leq s + \frac{cn}{s}. \ee The function $f(s) = s + cn/s$ has a local minimum at $s = \sqrt{cn}$. Note also that $f(ck) = f(n/k) = n/k + ck$. It follows that, for $n \gg 0$, if $\rho_{n}(A) > n/k + ck$ then $s \leq ck$. In terms of $t$, the latter implies that \be t = \frac{dn+e}{c}, \ee for some integers $d \in [0,c)$, $e \in [-ck,ck]$. In this representation of $t$, we may assume that $(d,c) = 1$. The important point is that each of $c,d,e$ is $O(k)$. First suppose $e \geq 0$. Clearly then, the number of terms from $A$ needed to represent every number from $0$ through $n-1$ is at most $O(k)$ greater than the number of terms needed to represent every number from $0$ through $\lfloor n/c \rfloor$. But since $ct \equiv e \; ({\hbox{mod $n$}})$ it is easy to see in turn that the latter number of terms is within $O(k)$ of $n/l$, where $l = \max \{c,e\}$. Thus $| \rho_{n}(A) - n/l| = O(k)$, which implies Conjecture 1.1. \par If $e < 0$, then replace $A$ by $1-A = \{0,1,1-t\} \; ({\hbox{mod $n$}})$ and argue as before. This completes the proof of the lemma. \end{proof} We can now complete the proof of Conjecture 1.1. Fix $k \in \mathbb{N}$. All constants $c_{i,k}$ appearing below depend on $k$ only. Let $n$ be a positive integer which we think of as being very large. Let $A$ be a basis for $\mathbb{Z}_n$ such that $\rho_{n}(A) > n/k$. By Lemma 3.1 we may assume, without loss of generality, that $0 \in A$. This is a convenient assumption, as it implies that $hA \subseteq (h+1)A$ for every $h$. From Theorem 2.1 it is easy to deduce the existence of positive constants $c_{1,k}, c_{2,k}$, such that \be |A| \leq c_{1,k} \ee and, for some integer $j \in [1,c_{2,k}]$ one must have \be |2^{j+1} A| < 2.04 |2^{j} A|. \ee Set $h := 2^j$. For $n$ sufficiently large, we'll certainly have $|hA| < 10^{-9} n$ and so we can apply Theorem 2.2. 
Let $H$ be the corresponding subgroup of $\mathbb{Z}_n$ and $\pi : \mathbb{Z}_n \rightarrow \mathbb{Z}_n / H$ the natural projection. We can identify $H$ with $\mathbb{Z}_m$ for some proper divisor $m$ of $n$, and then identify $\mathbb{Z}_n / H$ with $\mathbb{Z}_{n/m}$. Let $B := h A$. Since $A$ is a basis for $\mathbb{Z}_n$, so is $B$, and hence $\pi(B)$ is a basis for $\mathbb{Z}_{n/m}$. This means that either case (i) or case (ii) of Theorem 2.2 must apply. Moreover, since some coset of $H$ contains at least $\frac{2}{3} |H|$ elements from $B$, it follows that $m = |H| = O(|B|) = O(k)$. Thus \be m \leq c_{3,k}, \ee say. Since \be \rho_{n/m}(\pi(A)) \leq \rho_{n}(A) \leq \rho_{n/m}(\pi(A)) + m, \ee this together with (3.7) and (3.8) implies that \be |\rho_{n}(A) - h \rho_{n/m}(\pi(B))| \leq c_{4,k}. \ee To prove Conjecture 1.1, it thus suffices to show that \be | \rho_{n/m}(\pi(B)) - n/q | \leq c_{5,k}, \;\;\; {\hbox{for some multiple $q$ of $h$}}. \ee Let $s$ be the number of cosets of $H$ met by $B$ and $s^{\prime}$ the number met by $A$. \\ \\ {\sc Case 1} : $s = 3$. \\ \\ Then $s^{\prime} \leq 3$. We don't need (3.11) in this case and can instead deduce Conjecture 1.1 directly from (3.9) and Lemma 3.2. \\ \\ {\sc Case 2} : $s \neq 3$. \\ \\ Then Case (i) of Theorem 2.2 must apply. Let $l$ be the minimum length of an arithmetic progression in $\mathbb{Z}_{n/m}$ containing $\pi(B)$. Note that $l \leq c_{6,k}$, by (2.1). By Lemma 3.1, there is no loss of generality in assuming that $\pi(B)$ is contained inside an interval of length $l-1$. Since $\pi(A) \subseteq \pi(B)$ and $l = O(k)$ we can now also see that $l-1$ is a multiple of $h$, provided $n \gg 0$. Thus it suffices to prove that \be \left| \rho_{n/m}(\pi(B)) - \frac{n}{l-1} \right| \leq c_{7,k}. \ee It is here that we use Theorem 2.4. Indeed (3.12) is easily seen to follow from that theorem provided that $2s-3 \geq l-1$.
But this inequality is in turn easily checked to result from (2.1) (as applied to $B$), (3.7) and the fact that $|B| \leq s|H|$. \par Thus the proof of Conjecture 1.1 is complete. \setcounter{equation}{0} \section{Concluding remarks} Explicit values for each of the constants $c_{i,k}$, $i = 1,...,7$, can easily be extracted from the argument given above. Similarly, one can extract explicit bounds for all the $O(k)$ terms in the proof of Lemma 3.2. All of this will in turn yield explicit constants $c_k$ in Conjecture 1.1. We refrain from carrying out this messy procedure, however, since the more interesting question is what the optimal values are for the $c_k$. Note that $c_k \geq (k-2) + \frac{1}{k}$, which can be seen by considering the basis $\{0,1,k\}$ for $\mathbb{Z}_n$, when $n \equiv -1 \; ({\hbox{mod $k$}})$. \section*{Acknowledgement} I thank Renling Jin for very helpful discussions.
\section{Introduction} Room-level occupancy in buildings provides crucial information to numerous use cases regarding, amongst others, building automation or building energy performance simulation. We differentiate between \textit{occupancy estimation}, i.e. the determination of an exact count of occupants in a room, and \textit{occupancy detection}, the binary discrimination between presence and absence. In both categories, a large number of models have been proposed over recent years. These include rule-based, stochastic and data-driven models. Recently, machine learning approaches have gained increasing interest. A recent study \cite{Carlucci.2020b} shows that 56\% of the models published between 2004 and 2019 are data-driven, and 15\% use neural network techniques. In addition to different model types, diverse sensing technologies can be found in the literature. We classify sensing methods into \textit{intrusive} and \textit{non-intrusive}. Intrusive methods include additional sensors, such as light barriers or thermal cameras. Non-intrusive methods use existing data to predict occupancy states. \textit{Environmental sensing} of carbon dioxide rates, temperature etc. is considered non-intrusive, as climate sensors are already widely installed for the purpose of automated climate control. Furthermore, climate sensors are more privacy-preserving than cameras, and, in contrast to transition-based measuring (e.g. with light barriers), they do not accumulate errors during the day. Chen et al. (2017) propose a first deep learning model for building occupancy estimation based on environmental factors, outperforming several state-of-the-art models \cite{Chen.2017b}. Deep learning allows the detection of occupancy at high precision without manual feature engineering. The challenge is that deep learning models require large quantities of room-specific labelled training data, which is rarely available in practice.
Due to the high effort for collecting ground truth occupancy data for several days or weeks, models that provide good performance in a few test rooms cannot easily be applied to a variety of other rooms. We propose to use \textit{transfer learning} to address this impediment. In our work, transfer learning is referred to in a transductive sense: Given equal learning tasks $T_s$ = $T_t$ for two domains (rooms) $R_s$ and $R_t$ with data drawn from different distributions within the same feature space, and given labeled data from the source domain $R_s$ and unlabeled or partially labeled data from the target domain $R_t$, transfer learning improves the training of $T_t$ using $R_s$ and $T_s$. Some publications have already reported benefits of a knowledge transfer between different rooms. Arief-Ang et al. (2017) extended a previously published seasonal decomposition model with a transfer approach, and used data recorded from a university office to effectively improve a model trained for a cinema hall \cite{AriefAng.2017}. In a later approach \cite{Zhang.2019b}, transfer learning was successfully applied with a recurrent neural network (RNN), which is favorable in terms of generalizability. To the best of our knowledge, no deep transfer learning approach for environmental sensing-based detection of occupancy has been investigated to date. Moreover, synthetic data has not yet been considered for transfer, although physical simulation is common practice in the field of building technology. Tools such as EnergyPlus, TRNSYS and IDA~ICE provide simulations for various environmental variables with high accuracy \cite{Mazzeo.2019}. Regarding synthetic occupancy data, Chen et al. (2018) proposed an agent-based occupancy simulator \cite{Chen.2018c} to generate stochastic occupancy data in office buildings. 
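In the transductive setting defined above, transfer amounts to pre-training on the source room $R_s$ and warm-starting the target-room training from the learned weights. The following is a minimal, hedged Python sketch of this idea with a toy logistic-regression "model" and synthetic stand-in data; the actual models used later in this work are deep networks, and all numbers here are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, w=None, lr=0.5, epochs=200):
    """Gradient-descent logistic regression; `w` warm-starts the weights."""
    if w is None:
        w = np.zeros(X.shape[1])          # training "from scratch"
    for _ in range(epochs):
        p = sigmoid(X @ w)
        w = w - lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return float(np.mean((sigmoid(X @ w) > 0.5) == y))

w_true = np.array([2.0, -1.0, 0.5])       # shared labelling rule (same task T)

# Source domain R_s: abundant (e.g. simulated) labelled data.
Xs = rng.normal(size=(1000, 3))
ys = (Xs @ w_true > 0).astype(float)

# Target domain R_t: same task, shifted feature distribution, few labels.
Xt = rng.normal(loc=0.3, size=(30, 3))
yt = (Xt @ w_true > 0).astype(float)

w_source = train(Xs, ys)                           # pre-train on R_s
w_transfer = train(Xt, yt, w=w_source, epochs=20)  # fine-tune on R_t
w_scratch = train(Xt, yt, epochs=20)               # no transfer, for comparison
```

The fine-tuned weights start from a solution that already captures the shared task, so few target-domain updates are needed; the deep-learning analogue is initializing the network from the pre-trained base model instead of a random initializer.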
\begin{figure*}[!hbt] \includegraphics[scale=0.365]{approach.pdf} \caption{Simulation-aided approach to occupancy detection} \label{fig:approach} \end{figure*} \section{Approach} We propose to leverage synthetic data generated from simulations to lower the need for costly real world data in the context of occupancy detection. We conduct two simulations: \begin{itemize} \item[(1.)] Occupancy Simulation: For occupant presence and possibly additional, relevant actions, such as window opening actions \item[(2.)] Physical Simulation: For environmental factors, such as CO$_2$ \end{itemize} A simulation of occupants, and possibly of their actions (1.), provides the ground truth data that also serves as an input to a simulation of the physical environment (2.). In this work, we focus on CO$_2$ dynamics. Other environmental factors, such as temperature or humidity, may also be considered. A physical simulation requires metadata about the room, such as its volume and infiltration rate. Occupant simulations need to take into account the type of room, e.g. 2-person office or lecture room. Otherwise, occupant behavior may not be comparable. \pagebreak \noindent When building a model for a concrete room of interest, we distinguish two approaches: \begin{itemize} \item[(a)] run a preceding simulation based on metadata of the room of interest \item[(b)] use a general base model built from simulations on a variety of virtual rooms of the same room type \end{itemize} In this work, we use alternative (a) for a first proof of concept. It is illustrated in Fig.~\ref{fig:approach}. The two consecutive simulations (occupancy and CO$_2$) generate a large-scale synthetic dataset. This dataset is used to pre-train a base model that is able to fit the general behavior of CO$_2$ dynamics under human presence or absence. Room-specific behavior due to certain conditions such as sensor placement, infiltration, window size etc. is then learned in a transfer step.
\section{Methods} This section describes the simulation methods we applied to simulate occupancy as well as CO$_2$ rates. In Subsec.~\ref{section:occupancyDetection} we then introduce the model architecture used to detect occupancy. \subsection{Occupancy Simulation}\label{section:occSim} We simulate human occupancy in a naturally ventilated single office space of the size of a selected real world office. Occupancy behavior in offices can be subdivided into status transition events (e.g. arrival or final departure from work), random moving events (e.g. going to the bathroom) and meeting events \cite{Chen.2018c}. As in \cite{Chen.2018c}, we combine the LIGHTSWITCH-2002 approach \cite{Reinhart.2004} for status transitions and Wang's (2011) Markov chain approach \cite{Wang.2011} for random moving.\newline \textbf{1. Status transition:} For each day of simulation, we place an arrival at 8:00, a lunch break of 1h at 12:00, a departure at 18:00 and two 15-minute breaks at 10:00 and 15:00, respectively, with a random shift of ${\pm}$15min for each event \cite{Reinhart.2004}. The timespans determine basic occupancy. Within these timespans, random moving events are simulated.\newline \textbf{2. Random moving:} Throughout all discrete time steps at basic occupancy, we use a Markov chain to successively determine the state of occupancy $occ[t]\in\{0, 1\}$ at time $t$, depending on the previous state $occ[t-1]$ and a transition probability \cite{Wang.2011}. Transition probabilities are picked from the transition matrix $P$ depicted in Eq.~\ref{eq:transitionProbabilities}. To consider a variety of different occupant types, we update $P$ for each simulated day. The sojourn times used, $s_0$ for transitions from absence (0) to presence (1), and $s_1$ vice versa, are randomly selected within the following bounds. \vspace{0.2mm} \begin{align} \label{eq:transitionProbabilities} \left.
\begin{aligned} P = \begin{bmatrix} 1-(\frac{1}{s_0}) & \frac{1}{s_0}\\ \frac{1}{s_1} & 1-(\frac{1}{s_1}) \end{bmatrix} \end{aligned} \right. \begin{aligned} \ &&~10min~\leq~s_0~\leq~60min~\\ \ &&30min~\leq~s_1~\leq~180min \end{aligned} \end{align} \noindent\textbf{3. Window opening behavior:} We decided to use a similar Markov chain to determine the window state $w[t]\in\{0, 1\}$, where 0 denotes that all windows or openings are closed, and 1 represents a state of ventilation. To reflect a variety of ventilation behaviors, we set the time bounds to $60min\leq s_0^w\leq8h$ and $5min \leq s_1^w \leq 30min$. Windows can remain open during absences caused by random moving. \subsection{Carbon Dioxide Simulation}\label{section:co2Sim} The change in CO$_2$ rate per time step within a room of volume $V$ is calculated using Eq.~\ref{eq:massBalance}. The formula is adopted from \cite{Parsons.2014}. \vspace{0.2mm} \begin{equation} \label{eq:massBalance} \frac{dc(t)}{dt}=\frac{\dot{m}(t)}{V} (c_{out} - c(t))+\frac{G(t)}{V} \end{equation} \noindent $c(t)$ is the indoor CO$_2$ concentration at time $t$, c$_{out}$ is the outdoor CO$_2$ concentration, $\dot{m}(t)$ is the current mass flow rate, and $G(t)$ is the amount of CO$_2$ generated by human occupants present at the moment. Applying Eq.~\ref{eq:massBalance}, we successively calculate CO$_2$ level values for time steps of one second, which can later be aggregated to the required granularity. For simplification purposes, air pressure and outdoor CO$_2$ are considered constant. The infiltration rate $\dot{m}(t)$ consists of a steady infiltration $\dot{m}_{inf}(t)$ independent of human occupancy, and a ventilation rate $\dot{m}_{vent}(t)$ determined by occupants' actions, especially window opening events: $\dot{m}(t) = \dot{m}_{inf}(t) + \dot{m}_{vent}(t)$. To simplify, we use a constant value $\dot{m}_{inf}(t)$ = $\dot{m}_{inf}$.
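For concreteness, here is a hedged Python sketch of one simulated interval: the random-moving Markov chain of Eq.~\ref{eq:transitionProbabilities} driving a forward-Euler integration of the mass balance in Eq.~\ref{eq:massBalance}. The balance is written in volumetric form for unit transparency, windows are kept closed for brevity, and the constants (room volume, infiltration, outdoor CO$_2$, per-occupant generation rate) are the illustrative values estimated or chosen elsewhere in this paper; sojourn times are a single draw from the bounds above rather than per-day redraws:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative constants (litres, seconds, ppm); windows closed throughout.
V     = 77.5e3             # room volume [l]
V_inf = 4.6                # natural infiltration [l/s] (0.0046 m^3/s)
c_out = 360.0              # outdoor CO2 concentration [ppm]
g_occ = 0.24 / 60.0        # CO2 generated by one occupant [l/s]
s0, s1 = 30 * 60, 90 * 60  # sojourn times [s]: absent -> present, present -> absent

def simulate(T=8 * 3600, dt=1.0):
    """Markov-chain occupancy plus forward-Euler CO2 update."""
    occ, c = 1, c_out
    occ_log, co2_log = [], []
    for _ in range(int(T / dt)):
        # one step of the transition matrix P (per-step probabilities 1/s)
        if occ == 1 and rng.random() < dt / s1:
            occ = 0
        elif occ == 0 and rng.random() < dt / s0:
            occ = 1
        # dc/dt = (V_inf / V) * (c_out - c) + occ * g_occ / V, scaled to ppm
        c += dt * ((V_inf / V) * (c_out - c) + occ * g_occ * 1e6 / V)
        occ_log.append(occ)
        co2_log.append(c)
    return np.array(occ_log), np.array(co2_log)

occ, co2 = simulate()
```

With these values the occupied-state equilibrium lies near 1230\,ppm, consistent with the CO$_2$ ranges reported for the datasets later in this paper.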
Moreover, we regard a detailed calculation of $\dot{m}_{vent}(t)$, accounting for varying window counts, types and sizes, as well as opening angles and air pressure differences, as unnecessarily complex for a first proof of concept. Hence, whenever the window state is 1, we use a multiple of $\dot{m}_{inf}$ instead: $\dot{m}(t) = \dot{m}_{inf} \cdot vm$. A multiplier $vm$ is randomly chosen from the range [10, 100] at each window opening event, and set to 1 if windows are closed. $\dot{m}_{inf}$ is calculated from the volumetric air flow rate $\dot{V}_{inf}$ by multiplication with the mass of air: $\dot{m}_{inf}$ = $\dot{V}_{inf} \cdot m_{air}$. The value of m$_{air}$ is set to 1.2754 g/l according to the IUPAC standard for the mass of dry air at standard temperature and pressure. $\dot{V}_{inf}$ can be estimated by using CO$_2$ as a tracer gas, fitting a theoretical decay curve to the actual decay in CO$_2$ during a period of non-occupancy \cite{Parsons.2014}. $G(t)$, defined in Eq.~\ref{eq:humanCO2Generation}, is the product of the number of present occupants $n(t)$ (in our case 0 or 1), an average CO$_2$ generation rate of a single occupant $g_{occ}$, the mass of CO$_2$ $m_{co2}$ and a unit translation term. For simulating intervals of one second, the term is divided by 60. \begin{equation} \label{eq:humanCO2Generation} G(t) = n(t) \cdot g_{occ} \cdot m_{co2} \cdot \frac{1000}{60}, \hspace{2mm} n(t) \in \{0, 1\} \end{equation} We choose $g_{occ}$ = 0.24 l/min, which is the mean value measured by \cite{Reiff.2012} for a person of an age between 21 and 28 standing at a desk. The other value reported in \cite{Reiff.2012}, 0.18 l/min, achieved lower results in our experiment. \vspace{-2mm} \subsection{Occupancy Detection} \label{section:occupancyDetection} To detect occupancy, we apply the model architecture from \cite{Chen.2017b}, see Fig.~\ref{fig:modelArchitectur}. It combines a convolutional network with a deep bidirectional long short-term memory (DBLSTM).
We downsample the data to a 1-min granularity, aggregating by the mean, and then use a sliding 15min-window on the input data stream. A one-dimensional convolutional layer (1D-Conv) and a max-pooling layer perform an automated feature extraction. As in \cite{Chen.2017b}, we use a filter size of 3, and a pooling factor of 2. We reduce the number of filters to 10, as we use CO$_2$ as a single input factor. The numbers of neurons in the following bidirectional long short-term memory (BLSTM) layers (200, 150 and 100 neurons) and the fully connected (FC) layers (300 and 200 neurons) are selected as proposed in \cite{Chen.2017b}, as well as the masking probabilities (0.5 and 0.3) for the dropout applied before the first and second FC layer for regularization purposes. A final softmax activation layer discriminates between presence (1) and absence (0). \begin{figure}[!hbt] \includegraphics[width=\linewidth]{CDBLSTM.pdf} \caption{Deep learning model architecture based on \cite{Chen.2017b}} \label{fig:modelArchitectur} \end{figure} \section{Experimental Setup} \subsection{Datasets} \textbf{Real world dataset:} For a preliminary study, we measured the CO$_2$ rate in a 2-person office of a German university building located in Munich throughout 7 complete working days. A \textit{Sensirion~SCD30} non-dispersive infrared (NDIR) sensor was placed in the center of the room, and a measured CO$_2$ value was stored every 5 seconds. According to the datasheet, the sensor provides a resolution of 1~ppm with an accuracy of ${\pm}$30 ppm +3$\%$. The CO$_2$ data was collected between 10 and 19 March 2020.
Occupancy ground truth data was manually recorded by the occupants over the observation period.\linebreak \textbf{Simulated dataset:} In addition to this, we conducted a CO$_2$ time series simulation according to the methods previously described in Sec.~\ref{section:occSim} and \ref{section:co2Sim} for a total of 500 artificial working days in a room with the same volume as the real world office ($V$ = 77.5 m$^3$). A natural infiltration rate and outdoor CO$_2$ concentration were estimated by fitting Eq.~\ref{eq:massBalance} to observed CO$_2$ values from one unoccupied night not used for the dataset by minimizing the mean squared error (MSE). Consequently, they were set to realistic values of $\dot{V}_{inf}$ = 0.0046 m$^3$/s and c$_{out}$ = 360 ppm. Table \ref{tab:datasetOverview} summarizes the properties of the two datasets. Note that two occupants share the office, and guests may visit. Our simulation is only intended to provide basic information for a single-person scenario. \begin{table}[!hbt] \caption{Dataset Overview} \label{tab:datasetOverview} \begin{tabular}{ccc} \toprule & Real World Dataset & Simulated Dataset\\ \midrule Dataset Size & 7 working days & 500 working days \\ Time Granularity & 5 sec & 1 sec \\ Occupancy Values & [0, 3], mostly [0, 2] & [0, 1] \\ Presence Rate & 29.28$\%$ & 25.73$\%$\\ CO$_2$ Value Range & [338, 1749] ppm & [360, 1483] ppm \\ \bottomrule \end{tabular} \end{table} \subsection{Experiments} We conducted a first study to show the positive effects of transfer learning from synthetic CO$_2$ data. First, 400 days of our simulated dataset were used to train a synthetic base model. The remaining 100 days were used to report the performance of the base model on simulated data.
Using only small amounts of real world training data, we then trained two models: (1) a conventional deep learning model as described in Sec.~\ref{section:occupancyDetection}, and (2) a transfer model with equal architecture and hyperparameters, using the same real world data, but based on the synthetic base model. The conventional model was trained using a uniform random initializer for model weights. \begin{table*}[ht] \caption{Comparison of transfer model, non-transferred model and LR baseline for different training data extents*} \label{tab:Results} \begin{tabular}{llccccc} \toprule & Training Data & 1 Day & 2 Days & 3 Days & 4 Days \\ \midrule Transfer Model & Accuracy & \textbf{0.875} \scriptsize{(${\pm}$\textbf{0.057})} & \textbf{0.915} \scriptsize{(${\pm}$\textbf{0.020})} & \textbf{0.929} \scriptsize{(${\pm}$\textbf{0.014})} & \textbf{0.933} \scriptsize{(${\pm}$\textbf{0.015})}\\ & F1 Score & 0.764 \scriptsize{(${\pm}$0.171)} & \textbf{0.863} \scriptsize{(${\pm}$\textbf{0.035})} & \textbf{0.888} \scriptsize{(${\pm}$\textbf{0.027})} & \textbf{0.898} \scriptsize{(${\pm}$\textbf{0.028})}\\ & Epochs & 27.2 \scriptsize{(${\pm}$16.6)} & 26.1 \scriptsize{(${\pm}$14.4)} & 22.6 \scriptsize{(${\pm}$14.1)} & 19.9 \scriptsize{(${\pm}$11.9)}\\ \cmidrule{2-6} Non-Transferred Model & Accuracy & 0.715 \scriptsize{(${\pm}$0.086)} & 0.874 \scriptsize{(${\pm}$0.040)} & 0.895 \scriptsize{(${\pm}$0.035)} & 0.914 \scriptsize{(${\pm}$0.029)}\\ & F1 Score & 0.565 \scriptsize{(${\pm}$0.162)} & 0.791 \scriptsize{(${\pm}$0.065)} & 0.821 \scriptsize{(${\pm}$0.071)} & 0.859 \scriptsize{(${\pm}$0.053)}\\ & Epochs & 91.3 \scriptsize{(${\pm}$54)} & 123 \scriptsize{(${\pm}$50.2)} & 100.8 \scriptsize{(${\pm}$46)} & 105.2 \scriptsize{(${\pm}$34.4)}\\ \cmidrule{2-6} Logistic Regression (LR) & Accuracy & 0.860 \scriptsize{(${\pm}$0.073)} & 0.879 \scriptsize{(${\pm}$0.058)} & 0.900 \scriptsize{(${\pm}$0.035)} & 0.920 \scriptsize{(${\pm}$0.016)}\\ & F1 Score & \textbf{0.767} 
\scriptsize{(${\pm}$\textbf{0.112})} & 0.803 \scriptsize{(${\pm}$0.080)} & 0.829 \scriptsize{(${\pm}$0.056)} & 0.866 \scriptsize{(${\pm}$0.038)}\\ \bottomrule \end{tabular}\\ \vspace{2mm} \small{*mean values (${\pm}$ standard deviations), best scores are highlighted in bold} \end{table*} Transfer learning was carried out by using the model weights of the base model for initialization instead, and retraining all layers. We evaluated with amounts of 1, 2, 3 and 4 days of training data. A cross validation was applied, using each of the 7 days in our dataset (and accordingly each 2, 3 or 4 consecutive days) for training in one iteration, and the remaining days as test data. Each iteration was repeated 10 times with a new seed value for randomization in initial weight generation and shuffling of input sequences. All experiments were carried out on an \textit{Nvidia GeForce Tesla V100 SXM2} GPU, using the TensorFlow framework with Keras. For optimizer, learning rate, batch size and validation split, we chose RMSprop, 0.001, 70 and 0.2. Early stopping was applied after 20 epochs of no improvement in validation loss. To compare the results, we calculate the detection accuracy, which is defined as the number of correctly predicted occupancy states divided by the total number of predictions. As a second metric, we use the F1 score. \section{Results} Table \ref{tab:Results} reports the mean and standard deviation values of accuracies, F1 scores and training epochs under different extents of training data, in both the transfer and non-transfer setting. Additionally, results are compared to a logistic regression (LR) baseline. The pre-trained transfer model clearly outperformed the non-transferred model, with a higher impact the less training data was used. Vice versa, only half of the training data was required to achieve a similar accuracy as in the non-transfer setting: 0.87 was reached with one instead of two days, and 0.91 with two instead of four days of training data. 
Besides a substantial improvement in accuracy and F1 score, the standard deviation was also reduced in most cases. Hence, the transfer approach can also improve model robustness. As data was scarce, the deep learning model was unable to reach its full potential, and performed even slightly worse than an LR classifier. The transfer model, in contrast, showed superior results in nearly all cases. Regarding training times, the transfer model was able to be trained in only a fraction of the training epochs. The number of epochs reported in Table \ref{tab:Results} indicates after how many training epochs validation loss reached a minimum with respect to all previous and 20 subsequent epochs. On average, 91 epochs were trained for the non-transferred model with one day of training data, and above 100 with multiple days of training data. In contrast to this, transfer learning allowed a validation loss convergence after less than 30 epochs. However, this reduction in training time comes at the cost of the additional upfront effort for pre-training. The effect is only beneficial if the base model is reused for multiple rooms. Pre-training required 30 epochs on data from 400 synthetic training days. The resulting base model achieved an accuracy of 0.981 and an F1 score of 0.963 on 100 simulated days. Without the subsequent transfer step, the base model by itself was not able to make accurate predictions on the real world test dataset. Simulations do not replace but reduce the need for data collection. \section{Conclusion \& Future Research} In our experiment, we have demonstrated that transfer from synthetic data can effectively improve model performance and robustness regarding occupancy detection. The results encourage the use of simulations in this field of limited real world data to enable deep learning in practice. We see future research in finding concrete methods for generating adequate synthetic data and for accomplishing the transfer.
It should also be investigated whether a large, generalized base model, as is common in the field of image processing, for instance, may be beneficial for a variety of different rooms. We aim to prepare a broader base model and evaluate on multiple room types. Furthermore, we are planning to investigate other model architectures and their ability to transfer knowledge. For the purpose of demonstration, several constant values and arbitrary bounds were used in the simulations in this work. We want to overcome these limitations, and also consider predicting the number of present occupants. \begin{acks} \begin{anonsuppress} Special thanks go to the \textit{OpenPower@TUM} project and the industry partner IBM for providing us with the computing infrastructure for our experiments. \end{anonsuppress} \end{acks} \bibliographystyle{ACM-Reference-Format}
\section{Motivation} Detached eclipsing binary stars (dEBs) are the primary source of fundamental stellar quantities: mass $M$ and radius $R$. Both components must be spectroscopically detectable for direct determination of these quantities. Modern astronomical instrumentation and analytical methods enable a high accuracy and precision to be reached, reducing errors in the masses and radii for the stars in binaries to 1\% or less (e.g.\ Pilecki \etal\ 2013). A critical examination of published analyses found 95 systems satisfying the criterion of 3\% in the uncertainties for $M$ and $R$ (Torres \etal\ 2010). The DEBCat catalogue lists 159 systems\footnote{\tt http://www.astro.keele.ac.uk/jkt/debcat/}. For a useful comparison with theoretical evolutionary models, further observables are needed: effective temperature, \Teff, and metallicity, [M/H]. Only the ratio of the \Teff s of the two stars is provided by the light curve analysis. The \Teff\ of one component must be determined from other sources, e.g.\ spectral energy distribution, colour indices, or spectral type estimates. Far worse is the situation with empirical determination of the components' metallicity. Of the 95 systems listed by Torres \etal\ (2010), fewer than half have an empirically estimated metallicity. Detailed abundance studies have been performed for only four systems, including two from our own work (see Sect.~5.). The method of Doppler tomography (Bagnuolo \& Gies 1991), and its generalisation as spectral disentangling (Simon \& Sturm 1994, Hadrava 1995), enables the study of the individual spectra of the components of binary star systems. This is particularly useful for the determination of \Teff\ and metallicity (c.f.\ Pavlovski \& Southworth 2012, and references therein). Since the structure and evolution of a star depends on its chemical composition, the degeneracy in the location of a star in evolutionary diagrams can only be broken if an empirical metallicity is known.
\section{Theoretical framework} In the last two decades a new generation of stellar evolutionary models has been developed (c.f.\ Maeder \& Meynet 2012, Langer 2012). The inclusion of stellar rotation has had profound effects on these models (Meynet \& Maeder 2000, Heger \& Langer 2000, Heger \etal\ 2000). Centrifugal force changes the stellar shape and structure, causes meridional circulation and induces turbulent mixing. In turn, the \Teff\ and luminosity of the rotating star change, affecting its lifetime. As a consequence, the surface abundance pattern changes too, heavily affecting the expected photospheric nitrogen abundance. The calculations predict an increase in nitrogen abundance with increasing stellar mass, increasing initial rotational velocity, and decreasing metallicity. This makes nitrogen abundance viable as a new observable (K\"{o}hler \etal\ 2012). However, caution is needed since Hunter \etal\ (2009) found, from observations of a large sample of Galactic and Magellanic Cloud B stars, that the relation between nitrogen enhancement and stellar parameters (mass and projected rotational velocity, $v\sin i$) is more complex than predicted by the models. Subsequently, Morel \etal\ (2008) did not find any tight correlation of the nitrogen abundance with the strength of the magnetic field. This was also supported by extension of the analysis to O stars (Martins \etal\ 2012). Recently, comprehensive grids of theoretical stellar evolutionary models incorporating rotation have been made available by the Utrecht (Brott \etal\ 2011) and Geneva (Ekstr\"{o}m \etal\ 2012) groups. These models differ in the treatment of rotational mixing, as well as in the calibration. Brott \etal\ (2011) tailored their grids of different initial composition to match the results of the {\sc vlt-flames} survey of massive stars in the Magellanic Clouds and the Galaxy (Evans \etal\ 2005).
In consequence, their models have no scaled-solar composition, and result in a rather low metallicity for Galactic stars, Z$ = 0.0088$. In contrast, Ekstr\"{o}m \etal\ (2012) use scaled-solar compositions, and Galactic models are represented with the metallicity Z$ = 0.014$. \section{Spectral disentangling} The complexity of composite spectra of close binaries makes the study of the individual stellar spectra extremely difficult. Shifts of the spectral lines due to the orbital motion of components in binary or multiple systems are essential for the determination of stellar masses. Spectral lines overlap for much of each orbital cycle, and the secondary star may be much fainter than the primary, and thus contribute only a small fraction of the total light of the system. To these problems should be added the intrinsic broadening of spectral lines in OB stars, and high $v\sin i$ in binaries. All these pose a difficulty for accurate measurements of radial velocities (RVs) for the components, lowering the accuracy of the derived stellar masses (c.f.\ Andersen 1991, Torres \etal\ 2010) or making the fainter star impossible to detect. Since the first spectroscopic detection of binary stars in the last decades of the 19th century, binary star spectra have been used almost solely for RV measurement. Important astrophysical information contained in these spectra was ignored due to the inability of researchers to extract individual spectra of the components. The method of {\em spectral disentangling} ({\sc spd}) is the way to avoid these obstacles. Bagnuolo \& Gies (1991) successfully applied the method of Doppler tomography and isolated component star spectra for the case when the RVs of the individual exposures were already known. Simon \& Sturm (1994) generalised tomographic separation by setting a matrix equation from a time-series of binary star spectra.
A set of linear equations is solved directly for the orbital elements and the component spectra, thus bypassing the determination of RVs entirely. An independent formulation of the equations of {\sc spd} in the Fourier space by Hadrava (1995) has been particularly advantageous since the calculations are more efficient and computationally less demanding. An overview of {\sc spd} and its variants is given by Pavlovski \& Hensberge (2010). \begin{figure}[h] \begin{center} \includegraphics[width=0.74\textwidth]{plot_sample.ps} \end{center} \caption{Location of the stars in our sample of eclipsing binary systems versus the theoretical evolutionary tracks and isochrones from Ekstr\"{o}m \etal\ (2012).} \end{figure} In comprehensive studies of V578\,Mon, Hensberge \etal\ (2000) and Pavlov\-ski \& Hensberge (2005) constructed a complete procedure for the analysis of spectroscopically double-lined eclipsing binary stars. At the core of the procedure is {\sc spd}, which yields the optimal set of orbital elements and isolated individual spectra of the components. Further, the analysis of these spectra provides the atmospheric parameters for the components: \Teff s, metallicity, detailed abundances and $v\sin i$ values. In turn, the atmospheric parameters are fed back into fine-tuning the light curve solution. In an iterative cycle an optimal set of stellar and binary parameters can be achieved. Further development and applications are given in Pavlovski \& Southworth (2009) and Pavlovski \etal\ (2009). This scheme makes possible the determination of the fundamental quantities -- $M$, $R$, \Teff\ -- with the accuracy needed for a useful comparison to theoretical models. Complementary observables offer important possibilities in the analysis of binary and multiple stars. Only a combined solution of astrometric and spectroscopic measurements can yield a complete set of the orbital elements and the orientation of the orbit in space.
With the recent tremendous development in interferometric capability, many more spectroscopic binaries have been spatially resolved. The stability and quality of the solution in {\sc spd} could be supported with constraints from external observables such as interferometric measurements (Kolbas \& Pavlovski, this proceedings). An important step in the analysis is the determination of the \Teff s for the stars in binary systems, directly from renormalised disentangled spectra. The traditional degeneracy between \Teff\ and surface gravity can be lifted for the stars in binary systems. The surface gravities for these stars can be determined with uncertainties below 0.01\,dex. A promising and important new development is the modelling of the composite spectra for tidally and rotationally distorted stars in binary systems given in Palate \etal\ (2013). \begin{figure}[h!] \begin{tabular}{cc} \includegraphics[width=0.49\textwidth]{plot.v380.prim.carb1.ps} & \includegraphics[width=0.49\textwidth]{plot.v380.prim.nitro1.ps} \end{tabular} \caption{Comparison of the renormalised disentangled spectrum (symbols) of the giant component in V380\,Cyg with the synthetic spectra (lines) computed assuming different abundances of C (left panel) and N (right panel). Dashed lines correspond to the 'present-day abundances' as reported by Nieva \& Przybilla (2012). \label{v380abund}} \end{figure} \section{Abundances for high-mass stars in binary systems} So far, we have observed and analysed a dozen binary stars with high-mass components. Their location in the Hertzsprung-Russell diagram (HRD) is shown in Fig.\,1 along with theoretical evolutionary tracks and isochrones. The majority of the stars are in early phases of their main-sequence evolution, with the youngest systems being V573\,Car and V578\,Mon. The stars in our sample are late-O to early-B, i.e.\ at the lower end of the high-mass range. The largest masses are 20--25\,M$_\odot$ for the components of V1034\,Sco, DN\,Cas and HD\,164258.
Their $v \sin i$ values range from $\sim$30\,km\,s$^{-1}$ (V621\,Per\,A; Southworth \etal\, in preparation) to 250\,km\,s$^{-1}$ (HD\,164258\,A; Mayer \etal\ 2013). Of the systems studied, V380\,Cyg, V453\,Cyg and V621\,Per are the most informative as their primary components have evolved either close to or beyond the terminal-age main sequence (TAMS). Here we describe some principal results and summarise published results for the most important systems.\\[-5pt] \begin{figure}[h!] \begin{center} \includegraphics[width=0.62\textwidth]{plot.abund.ratio.ps} \end{center} \caption{The trend in N/C versus N/O ratios found for B stars. Three samples of B stars are shown: single stars from Nieva \& Przybilla (2012) (blue circles), magnetic stars from Morel \etal\ (2008) (black circles) and stars in close binaries (this work, red circles).} \end{figure} {\bf V578\,Mon} (HD\,259135, BD\,+4$^\circ$1299) is an eccentric dEB, in the young open cluster NGC\,2244 embedded in the Rosette Nebula, with an orbital period of 2.408\,d. Hensberge \etal\/ (2000) determined the absolute dimensions of the components from $uvby$ photometry and high-resolution spectroscopy. They found the binary to consist of B1\,V + B2\,V stars with masses 14.5 and 10.3\,M$_\odot$, radii 5.2 and 4.3\,R$_\odot$, and $v \sin i$ values 114 and 98\,km\,s$^{-1}$. The age inferred from isochrone fitting is $\sim$3\,Myr. Garcia \etal\ (2011, 2013) provided the first measurement of the apsidal motion period and improved the orbital and stellar parameters. A new comprehensive analysis of all available photometric data plus new \'{e}chelle spectroscopy with Mercator/{\sc hermes} has allowed a fine-tuning of the physical properties and the internal structure of the stars (Garcia \etal\ 2014). The photospheric chemical composition of the components has been determined using disentangled spectra (Pavlovski \& Hensberge 2005).
Their analysis was done relative to the sharp-lined B1\,V star \#201 in NGC\,2244, for which accurate abundances are known (Vrancken \etal\ 1997). The abundance pattern for both stars is consistent with those found for several B1 stars in NGC\,2244 (Vrancken \etal\ 1997).\\[-5pt] {\bf V453\,Cyg} (HD\,227696) is a totally eclipsing dEB for which accurate physical properties have been measured (Southworth \etal\ 2004a). The system consists of two B stars with masses 14.4 and 11.1\,M$_\odot$. The more massive component is approaching the TAMS whilst its companion has completed about half of its main-sequence lifetime. These characteristics make V453\,Cyg ideal for probing the main-sequence chemical evolution of high-mass stars. The photospheric chemical composition of the components was determined by Pavlovski \& Southworth (2009). The more massive component is the only star in our sample which shows the trend in CNO abundances expected on theoretical grounds (Sect.\,2, see also Przybilla, this volume). In Fig.\,3 the observational results are presented for the three samples: single B stars (Nieva \& Przybilla 2012), single magnetic B stars (Morel \etal\ 2008) and B stars in close binaries (this work). The trend in the changes of the abundance ratios N/O and N/C is governed by the CNO cycle and stellar evolution (Przybilla \etal\ 2010; Przybilla, this volume).\\[-5pt] {\bf V380\,Cyg} (HD\,187879, KIC\,5385723) is an eccentric dEB with an evolved B-type component ($\log g = 3.1$). A large discrepancy between the dynamical and evolutionary masses for this component has been found. Guinan \etal\ (2000) removed this mass discrepancy by invoking extremely strong convective core overshooting. Pavlovski \etal\ (2009) tried unsuccessfully to match the giant component to the rotating evolutionary models of Ekstr\"{o}m \etal\ (2008).
As the binary is a potential candidate for stochastic pulsations, a concerted effort has been made to observe it photometrically with the {\em Kepler} satellite, and spectroscopically with Mercator/{\sc hermes} (Tkachenko \etal\ 2012, 2013). This has allowed revision of the absolute dimensions and atmospheric diagnostics for both components, even though the secondary contributes only about 6\% of the total light of the system. The measured chemical composition for the giant component corroborates our previous study (Pavlovski \etal\ 2009). Spectral disentangling of a large collection of high-resolution spectra also makes possible a study of the secondary component. Its composition agrees with the primary's to within the uncertainties. The N/C and N/O ratios are characteristic of unevolved B stars, despite one of the components having evolved to the giant stage.\\[-5pt] {\bf V621\,Per} (BD\,+56$^\circ$576) is a totally eclipsing binary system, and a member of the young open cluster $\chi$\,Per (NGC\,884). The primary star is an early-type subgiant (Southworth \etal\ 2004b). In an extensive photometric campaign on $\chi$\,Per, complete multicolour light curves of this long-period ($P = 25.5$ d) binary have been obtained (Saesen \etal\ 2010). An application of spectral disentangling reveals the secondary component, which makes possible the determination of the absolute dimensions of the components (Southworth \etal\, in preparation). It is found that the subgiant component has $M = 11.2$\,M$_\odot$ and $\log g = 3.5$. Its helpfully low rotation rate ($v\sin i = 32$\,km\,s$^{-1}$) allows an accurate determination of its atmospheric parameters and chemical composition. We find that the nitrogen and carbon abundances do not show the theoretically predicted excess, thus corroborating our results for the giant component of V380\,Cyg. \section{Conclusion} We have determined the photospheric abundances for helium and CNO elements in high-mass stars in close binaries.
We do not find any dependence on their masses, rotational velocities or evolutionary status, except for the marginal case of V453\,Cyg\,A. The obvious question is: are high-mass stars in detached binary systems fundamentally different to single high-mass stars, for which changes in the photospheric abundances have been disclosed observationally (Hunter \etal\ 2009, Przybilla \etal\ 2010, see also Przybilla this volume)? Tidal interactions clearly have some effect on stellar structure and evolution, and may hinder rotationally induced mixing in high-mass components of close binary systems. This was calculated and discussed by de Mink \etal\ (2009, 2013). However, a significantly larger sample is needed before firm conclusions can be drawn.
\section*{Acknowledgments} This research was supported by the NSF under Grant No. 1750499, a Google Focused Award, an AI2 KSC Award, a Workday Faculty Award, Unity, and an Amazon Cloud Credits Grant. This material is based on work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1650441. We thank Valts Blukis, Jin Sun, and Mark Yatskar for comments and suggestions, the workers who participated in our data collection, and the reviewers. \section{Discussion} \label{sec:discussion} Our human evaluation highlights several directions for future work. While human leaders adapt to the agent, scoring up to 14 points, there remains a significant gap to collaborations with human followers. Reported errors include getting stuck behind objects, selecting unmentioned cards, going in the wrong direction, and ignoring instructions. At least one worker developed a strategy that took advantage of the agent's full observability, writing instructions with only simple card references. An important direction for future work is to remove our full observability assumption. Other future directions include experimenting with using the interaction history, expanding the learning example aggregation to error cases beyond incorrect start positions, and making agent reasoning interpretable to reduce user frustration. \textsc{CerealBar}\xspace also provides opportunities to study pragmatic reasoning for language understanding~\cite{Andreas:16pragmatics,Fried:17pragmatic-models,Liang:19collaboration}. While we currently focus on language understanding by limiting the communication to be unidirectional, bidirectional communication would allow for more natural and efficient collaborations~\cite{Potts:12,Ilinykh:19meetup}. \textsc{CerealBar}\xspace could be easily adapted to allow bidirectional communication, and provide a platform to study challenges in language generation. 
\section{Cascaded Evaluation} \label{sec:cascaded} Sequential instruction scenarios are commonly evaluated using recorded interactions by executing individual instructions or executing complete interactions starting from their beginning~\cite[e.g.,][]{Chen:11,Long:16context}. Both have limitations. Instruction-level metrics ignore error propagation, and do not accurately reflect the system's performance. In contrast, interaction-level metrics do consider error propagation and capture overall system performance well. However, they poorly utilize the test data, especially when performance is relatively low. When early failures lead to unexpected world states, later instructions become impossible to follow, and measuring performance on them is meaningless. For example, with our best-performing model, 82\% of development instructions become impossible due to cascading errors when executing complete interactions. The two measures may also fail to distinguish models. For example, consider an interaction with three instructions. Two models, A and B, successfully execute the third instruction in isolation, but fail on the two others. They also both fail when executing the entire interaction starting from the beginning. According to common measures, the models are equal. However, if model B can actually recover from failing on the second instruction to successfully execute the third, it is better than model A. Both metrics fail to reflect this. We propose \emph{cascaded evaluation}, an evaluation protocol for sequential instruction following using static corpora. Our method utilizes all instructions during testing, while still accounting for the effect of error propagation. Unlike instruction-level evaluation, cascaded evaluation executes the instructions in sequence.
However, instead of starting only from the start state of the first instruction, we create a separate example starting from the start state of each instruction in the interaction and continuing until the end of the interaction. For example, given a sequence of three instructions $\langle1,2,3\rangle$ we will create three examples: $\langle 1, 2, 3\rangle$, $\langle 2,3\rangle$, and $\langle 3 \rangle$. To evaluate performance in \textsc{CerealBar}\xspace, we compute two statistics using cascaded evaluation: the proportion of the remaining instructions followed successfully, and the proportion of potential points scored. We only consider the remaining instructions and points left to achieve in the example. For example, for the sequence $\langle 2,3\rangle$, we will subtract any points achieved before the second instruction to compute the proportion of potential points scored. Appendix~\ref{sec:sup:cascaded} describes cascaded evaluation formally. \section{Experimental Setup} \paragraph{Data} We collect 1,202 human-human interactions using Mechanical Turk, split into train (960 games), development (120), and test (122). Appendix~\ref{sec:sup:data} details data collection and statistics. \paragraph{Recorded Interactions Metrics} We evaluate instruction-level, interaction-level, and cascaded (Section~\ref{sec:cascaded}) performance. We allow the follower ten steps per turn, and interleave the actions taken by the leader during each turn in the recorded interaction. Instruction execution often crosses turns. At the instruction level, we evaluate the mean \emph{card state accuracy} comparing the state of the cards after inference with the correct card state, \emph{environment state accuracy} comparing both cards and the agent's final position, and \emph{action sequence accuracy} comparing the generated action sequence with the correct action sequence. For complete interactions, we measure mean \emph{full game points}.
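The suffix construction and per-suffix metrics of cascaded evaluation can be sketched in a few lines (a simplified illustration with hypothetical helper names, not the released \textsc{CerealBar} code):

```python
def cascaded_examples(instructions):
    """Create one evaluation example per suffix of the instruction
    sequence: [1, 2, 3] yields [1, 2, 3], [2, 3], and [3], so every
    instruction is used while error propagation within each suffix is
    still measured."""
    return [instructions[i:] for i in range(len(instructions))]


def proportion_followed(suffix_results):
    """suffix_results[j] holds the 0/1 success flags obtained by
    executing the j-th suffix; report the mean success rate over all
    suffixes."""
    rates = [sum(flags) / len(flags) for flags in suffix_results]
    return sum(rates) / len(rates)
```

The proportion of potential points scored is computed analogously, subtracting any points achieved before the first instruction of the suffix.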
Finally, for cascaded evaluation, we measure the mean proportion of instructions correctly executed and of possible points scored. \paragraph{Human Evaluation} We perform evaluation with human leaders, comparing our model and human followers. Workers are told they will work with a human or an automated follower, but are not told which in each game. We evaluate both human (105 games) and automated agents at the same time (109 games). We evaluate the game scores, and also elicit free-form feedback. \begin{table*}[th!] \centering\footnotesize \begin{tabular}{|l||c|c|c||c||c|c|} \hline \multicolumn{1}{|c||}{\multirow{2}{*}{\textbf{System}}} & \textbf{Card} & \textbf{Env.} & \textbf{Action Seq.} & \textbf{Full Game} & \textbf{Prop. Instr.} & \textbf{Prop. Points}\\ & \textbf{State Acc.} & \textbf{State Acc.} & \textbf{Accuracy} & \textbf{Points} & \textbf{Followed} & \textbf{Scored} \\ \hline \hline \multicolumn{7}{|l|}{\textbf{Development Results \& Ablation Analysis}} \\ \hline Full model & \stdev{58.2}{0.5} & \stdev{32.6}{0.8} & \stdev{15.8}{0.5} & \stdev{0.66}{0.1} & \stdev{20.5}{1.2} & \stdev{18.1}{0.8} \\ \hdashline[0.5pt/1pt] -- Trajectory distribution & \stdev{38.5}{2.7} & \stdev{10.1}{2.7} & \stdev{\phantom{0}5.5}{2.6} & \stdev{0.29}{0.02} & \stdev{10.0}{0.9} & \stdev{\phantom{0}7.9}{0.7} \\ -- \textsc{GOAL} distribution & \stdev{56.2}{1.5} & \stdev{30.8}{0.4} & \stdev{14.9}{0.3} & \stdev{0.66}{0.09} & \stdev{17.9}{1.0} & \stdev{15.9}{1.3} \\ -- \textsc{AVOID} distribution & \stdev{57.0}{0.3} & \stdev{32.6}{1.6} & \stdev{15.4}{1.3} & \stdev{0.63}{0.04} & \stdev{18.8}{1.5} & \stdev{17.8}{0.7} \\ -- \textsc{NOPASS} distribution & \stdev{59.2}{0.5} & \stdev{32.0}{0.8} & \stdev{15.0}{0.5} & \stdev{0.70}{0.03} & \stdev{18.4}{0.9} & \stdev{16.6}{0.9} \\ -- Action recurrence & \stdev{42.3}{1.5} & \stdev{16.7}{1.2} & \stdev{10.0}{0.7} & \stdev{0.42}{0.03} & \stdev{12.8}{1.7} & \stdev{10.7}{0.5} \\ -- Fine-tuning & \stdev{43.6}{1.9} & \stdev{\phantom{0}8.5}{1.1} & 
\stdev{\phantom{0}4.5}{0.5} & \stdev{0.65}{0.09} & \stdev{14.1}{1.3} & \stdev{\phantom{0}9.2}{0.9} \\ -- Early goal auxiliary & \stdev{57.2}{2.3} & \stdev{31.2}{1.7} & \stdev{14.9}{1.6} & \stdev{0.65}{0.05} & \stdev{17.9}{1.1} & \stdev{16.5}{0.7} \\ -- Example aggregation & \stdev{59.4}{1.8} & \stdev{32.0}{1.0} & \stdev{15.7}{0.6} & \stdev{0.65}{0.09} & \stdev{20.4}{1.4} & \stdev{16.5}{0.4}\\ -- Implicit discriminator & \stdev{57.5}{2.1} & \stdev{32.7}{1.0} & \stdev{16.4}{0.3} & \stdev{0.70}{0.02} & \stdev{18.8}{1.8}& \stdev{16.7}{0.6} \\ \hdashline[0.5pt/1pt] -- Instructions & \stdev{15.5}{1.5} & \stdev{\phantom{0}2.7}{1.5} & \stdev{\phantom{0}1.2}{1.2} & \stdev{0.24}{0.07} & \stdev{\phantom{0}4.4}{1.0} & \stdev{\phantom{0}4.6}{0.7} \\ \hdashline[0.5pt/1pt] + Gold plan & \stdev{87.4}{0.5} & \stdev{80.2}{0.2} & \stdev{63.4}{0.2} & -- & -- & --\\ \hdashline[0.5pt/1pt] \system{Seq2seq+attn} & \stdev{35.3}{0.8} & \stdev{11.1}{0.5}& \stdev{\phantom{0}9.4}{0.5} & \stdev{0.20}{0.04} & \stdev{\phantom{0}8.8}{0.1}& \stdev{\phantom{0}6.3}{0.1} \\ \hdashline[0.5pt/1pt] Static oracle & 99.7 & 99.7 & 100.0 & 6.58 & 98.5 & 97.9 \\ \hline \hline \multicolumn{7}{|l|}{\textbf{Test Results}} \\ \hline Full model & 58.4 & 32.1 & 15.6 & 0.62 & 15.4 & 17.9 \\ \hdashline[0.5pt/1pt] \system{Seq2seq+attn} & 37.3 & 10.8 & \phantom{0}8.5 & 0.22 & \phantom{0}8.7 & \phantom{0}6.5 \\ \hdashline[0.5pt/1pt] Static oracle & 99.7 & 99.7 & 100.0 & 6.66 & 96.8 & 95.6 \\ \hline \end{tabular} \caption{Development and test results on all systems, including ablation results.} \label{tab:static_results} \end{table*} \paragraph{Systems} We evaluate three systems: (a) the full model; (b) \system{Seq2seq+attn}:\footnote{This baseline is similar to \citet{Mei:16neuralnavi}.} sequence-to-sequence with attention; and (c) a static oracle that executes the gold sequence of actions in the recorded interaction. We report mean and standard deviation across three trials for development results. 
We ablate model and learning components, and additionally evaluate the action generator with access to gold plans.\footnote{We do not measure interaction-level metrics with gold plans as they are only available for the gold start positions.} On the test set and for human evaluation, we use the model with the highest proportion of points scored. We provide implementation and learning details in Appendix~\ref{sec:sup:impl}. \section{Introduction} \label{sec:intro} \begin{figure}[t] \footnotesize \centering \includegraphics[trim={115 187 196 136}, clip, width=0.98\linewidth]{figs/cereal_bar.pdf} \fbox{\begin{minipage}[]{0.95\linewidth} \dots \\ $\bar{x}_3$: \nlstring{turn left and head toward the yellow hearts, but don't pick them up yet. I'll get the next card first.} \\ $\bar{x}_4$: \textbf{\nlstring{Okay, pick up yellow hearts and run past me toward the bush sticking out, on the opposite side is 3 green stars}}\\ \texttt{[Set made. New score: 4]} \\ \dots \end{minipage}} \caption{A snapshot from an interaction in \textsc{CerealBar}\xspace. The current instruction is in bold. The large image shows the entire environment. This overhead view is available only to the leader. The follower sees a first-person view only (bottom right). The zoom boxes (top) show the leader and follower. } \label{fig:intro} \end{figure} Sequential instruction scenarios commonly assume only the system performs actions, and therefore only its behavior influences the world state. This ignores the collaborative potential of such interactive scenarios and the challenges it introduces. When the user acts in the world as well, they can adapt to the system's abilities not only by adopting simpler language, but also by deciding to accomplish tasks themselves. The system must then recover from errors as new instructions arrive and be robust to changes in the environment that are not a result of its own actions.
In this paper, we introduce \textsc{CerealBar}\xspace, a collaborative game with natural language instruction, and design modeling, learning, and evaluation methods for the problem of sequential instruction following in collaborative interactions. In \textsc{CerealBar}\xspace, two agents, a leader and a follower, move in a 3D environment and collect valid sets of cards to earn points. A valid set is a set of three cards with distinct color, shape, and count. The game is turn-based, and only one player can act in each turn. In addition to collecting cards, the leader sends natural language instructions to the follower. The follower's role is to execute these instructions. Figure~\ref{fig:intro} shows a snapshot from the game where the leader plans to pick up a nearby card (red square) and delegates to the follower two cards, one close and the other much further away. Before that, the leader planned ahead and asked the follower to move in preparation for the next set. The agents have different skills to incentivize collaboration. The follower has more moves per turn, but can only see from first-person view, while the leader observes the entire environment but has fewer moves. This makes natural language interaction key to success. We address the problem of mapping the leader instructions to follower actions. In addition to the collaborative challenges, this requires grounding natural language to resolve spatial relations and references to objects, reason about dependencies on the interaction history, react to the changing environment as cards appear and disappear, and generate actions. \textsc{CerealBar}\xspace requires reasoning about the changing environment (e.g., when selecting cards) and instructions with multiple goals (e.g., selecting multiple cards). We build on the Visitation Prediction Network model~\cite[VPN;][]{Blukis:18visit-predict}, which casts planning as mapping instructions to the probability of visiting positions in the environment. 
Our new model generalizes the planning space of VPN to reason about intermediate goals and obstacles, and includes recurrent action generation for trajectories with multiple goals. We collect 1,202 human-human games for training and evaluation. While our model could be trained from these recorded games only, it would often fail when an instruction started at the wrong position because of an error in following the previous one. We design a learning algorithm that dynamically augments the data with examples that require recovering from such errors, and train our model to distinguish such recovery reasoning from regular instruction execution. Evaluation with recorded games poses additional challenges. As agent errors lead to unexpected states, later instructions become invalid. Because measuring task completion from such states is meaningless, we propose \emph{cascaded evaluation}, a new evaluation protocol that starts the agent at different points in the interaction and measures how much of the remaining instructions it can complete. In contrast to executing complete sequences or single instructions, this method allows evaluating all instructions while still measuring the effects of error propagation. We evaluate using both static recorded games and live interaction with human players. Our human evaluation shows users adapt to the system and use the agent effectively, scoring on average 6.2 points, compared to 12.7 for human players. Our data, code, and demo videos are available at \href{http://lil.nlp.cornell.edu/cerealbar/}{lil.nlp.cornell.edu/cerealbar/}. \section{Learning} \label{sec:learning} We assume access to a set of $N$ recorded interactions $\{ \bar{I}^{(i)} \}^N_{i=1}$. We generate instruction-level examples $\mathcal{D} = \bigcup^N_{i=1}\{ \bar{I}^{(i,j)}\}^{M^{(i)}}_{j=1}$, where $M^{(i)}$ is the number of examples from $\bar{I}^{(i)}$.
Each $\bar{I}^{(i,j)} = \langle (s^{(i,j)}_1, \gamma^{(i,j)}_1, a^{(i,j)}_1), \dots, (s^{(i,j)}_{k}, \gamma^{(i,j)}_{k}, a^{(i,j)}_{k}) \rangle$ is a subsequence of tuples in $\bar{I}^{(i)}$, where $a^{(i,j)}_1$ is the first action the follower takes after observing the $j$-th instruction in $\bar{I}^{(i)}$, and $a^{(i,j)}_{k}$ is the $\act{DONE}$ action completing that instruction. We first estimate the parameters for plan prediction $\theta_1$ and action generation $\theta_2$ separately (Section~\ref{sec:learn:pretrain}), and then fine-tune jointly with data augmentation (Section~\ref{sec:learn:finetune}). \subsection{Pretraining} \label{sec:learn:pretrain} \paragraph{Stage 1: Plan Prediction} The input of Stage 1 is the world state $s_1$ and the instruction $\bar{x}$ at the head of the queue $\bar{Q}$.\footnote{We omit example indices for succinctness.} We generate labels for the four output distributions using $\bar{I}^{(i,j)}$. The label for the visitation distribution $p(\rho \mid s_1, \bar{x})$ is proportional to the number of states $s_t \in \bar{I}^{(i,j)}$ where the follower is in position $\rho$. The goal and avoidance distributions model how the agent plans to manipulate parts of its environment to achieve the specified goals, but avoid manipulating other parts. In \textsc{CerealBar}\xspace, this translates to changing the status of cards, or avoiding doing so. For $p(\mathrm{GOAL} = 1 \mid \rho, s_1, \bar{x})$, we set the label to 1 for all $\rho$ containing a card whose selection status the follower changed in $\bar{I}^{(i,j)}$, and 0 for all other positions. Similarly, for the avoidance distribution $p(\mathrm{AVOID} = 1 \mid \rho, s_1, \bar{x})$, the label is 1 for all $\rho$ containing cards that the follower does not change during the interaction $\bar{I}^{(i,j)}$. Finally, for $p(\mathrm{NOPASS} = 1\mid \rho, s_1, \bar{x})$, the label is 1 for all positions the agent cannot move onto, and zero otherwise.
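The label generation above can be sketched with simplified data structures (positions as tuples, card locations as sets; this is an illustration, not the actual implementation):

```python
from collections import Counter

def stage1_labels(trajectory, changed_cards, untouched_cards, blocked):
    """Generate the four plan-prediction labels from one recorded
    instruction execution. `trajectory` is the follower's sequence of
    positions; the remaining arguments are sets of positions."""
    counts = Counter(trajectory)
    # visitation label: proportional to how often each position occurs
    visitation = {p: c / len(trajectory) for p, c in counts.items()}
    # GOAL = 1 where a card's selection status was changed
    goal = {p: 1.0 for p in changed_cards}
    # AVOID = 1 where a card was present but left untouched
    avoid = {p: 1.0 for p in untouched_cards}
    # NOPASS = 1 for positions the agent cannot move onto
    nopass = {p: 1.0 for p in blocked}
    return visitation, goal, avoid, nopass
```

Positions absent from a dictionary implicitly carry a zero label.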
We define four cross-entropy losses: visitation $\mathcal{L}_V$, goal $\mathcal{L}_G$, avoidance $\mathcal{L}_A$, and no passing $\mathcal{L}_P$. We also use an auxiliary cross-entropy goal-prediction loss $\mathcal{L}_{G'}$ using a probability $p'_G(\mathrm{GOAL} = 1 \mid \rho, s_1, \bar{x})$ we predict from the pre-$\system{LingUNet}\xspace$ representation $\mathbf{S}'$ by classifying each position. The complete loss is a weighted sum with coefficients:\footnote{Additional details are in Appendix~\ref{sec:sup:learn}.} \begin{small} \begin{align*} \mathcal{L}_1(\theta_1) = &\lambda_V \mathcal{L}_V(\theta_1) + \lambda_G \mathcal{L}_G(\theta_1) +\lambda_A \mathcal{L}_A(\theta_1) \\ &+ \lambda_P\mathcal{L}_P(\theta_1) + \lambda_{G'}\mathcal{L}_{G'}(\theta_1)\;\;. \end{align*} \end{small} \paragraph{Stage 2: Action Generation} We use the gold distribution to create the input $\mathbf{P}$, and optimize towards the annotated set of actions using teacher forcing~\cite{Williams:89teacherforcing}. We compute the loss only over actions taken by the follower: \begin{small} \begin{equation*} \mathcal{L}_2(\theta_2) = -\textstyle\sum_{t=1}^n \mathds{1}_{\alpha_t = \mathrm{Follower}}\log p(a_t)\;\;, \end{equation*} \end{small} \noindent where $p(a_t)$ is computed by Equation~\ref{eq:prob_act}. \subsection{Fine-tuning with Example Aggregation} \label{sec:learn:finetune} Simply combining the separately-trained networks together results in low performance. We perform additional fine-tuning with the two stages combined, and introduce a data augmentation method to learn to recover from error propagation. \paragraph{Error Propagation} Executing a sequence of instructions is susceptible to error propagation, where an agent fails to correctly complete an instruction, and because of it also fails on the following ones. 
While the collaborative, turn-switching setup allows the leader to adjust their plan following a follower mistake, leaders often strategically issue multiple instructions to use the available follower steps optimally. Given an agent failure, subsequent instructions may not align with the state of the world resulting from the follower's error. In supervised learning, we do not have the opportunity to learn to recover from such errors, even when recovery is relatively simple. This usually requires exploration. However, conventional frameworks like reinforcement learning (RL) or imitation learning (IL) are poorly suited here. In a live interaction, when an agent makes a mistake (e.g., selecting the wrong card), the leader is likely to adjust their actions. Because of this, in a recorded interaction, which contains the leader actions following a correct execution, it is not possible to reliably compute an RL reward for states following erroneous executions. For similar reasons, we cannot compute an IL oracle. We identify two classes of erroneous states in \textsc{CerealBar}\xspace: (a) not selecting the correct set of cards; and (b) finishing with the right card selection, but stopping at the wrong position.\footnote{See Appendix~\ref{sec:app:learning} for further discussion of the two cases.} Case (a) requires modifying the model, for example so it knows when to skip instructions that refer to a state that is no longer possible. We leave this case for future work. We address case (b) by augmenting the data with new examples that are aggregated during learning. Our process is similar to \textsc{DAgger}~\cite{Ross:11dagger}. We alternate between: (a) collecting new training examples using a heuristic oracle, and (b) performing model updates. We generate training examples that demonstrate recovery by starting in an incorrect initial position for an instruction, having arrived there by executing the previous instruction.
We train our model to distinguish between the reasoning required for generating implicit actions to correct errors, and explicit actions directly mentioned in the instruction. \paragraph{Learning with Example Aggregation} We alternate between aggregating a new set of recovery examples $\mathcal{D}'$ and updating our parameters. At each epoch, we first use the current policy to create new training examples. We run inference for each example $\bar{I}^{(i,j)}$ in $\mathcal{D}$, the original training set, using the current policy.\footnote{We do not perform inference for the last instruction in an interaction, as there is no subsequent example for which to generate a new example.} We compare the state $s'$ at the end of execution to the final state in $\bar{I}^{(i,j)}$ to generate an error-recovery example $\bar{I}'^{(i,j+1)}$ for the subsequent example $\bar{I}^{(i,j+1)}$. We only generate such examples if the position or rotation of the agent are different, and there are no other differences between the states. Starting from $s'$, we generate the shortest-path sequence of actions that: (a) changes the cards as specified in $\bar{I}^{(i,j+1)}$, and (b) executes $\act{DONE}$ in the same position as in $\bar{I}^{(i,j+1)}$. We then create $\bar{I}'^{(i,j+1)}$ using $\bar{I}^{(i,j+1)}$ and the new sequence of state-action pairs, and add it to $\mathcal{D}'$.\footnote{Appendix~\ref{sec:app:learning} describes this process.} Given the original set of examples $\mathcal{D}$ and the aggregated examples $\mathcal{D}'$, we update our model parameters. We randomly sample without replacement at most $\sum_{i=1}^N M^{(i)}$ examples, the size of $\mathcal{D}$, from $\mathcal{D}'$. We use all the examples in $\mathcal{D}$ and the sampled examples to do a single parameter update epoch. We limit the number of examples from $\mathcal{D}'$ to maintain the effect of the original data.
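The sampling step of the aggregation scheme can be sketched as follows (hypothetical interface; the recovery examples in $\mathcal{D}'$ come from the shortest-path heuristic described above):

```python
import random

def mix_for_epoch(original, recovered, seed=0):
    """Build one epoch's training set: all original demonstrations plus
    at most |original| recovery examples sampled without replacement, so
    the aggregated examples cannot swamp the original data."""
    rng = random.Random(seed)
    k = min(len(recovered), len(original))
    return original + rng.sample(recovered, k)
```

Capping the sampled recovery examples at the size of the original data is what preserves the influence of the human demonstrations as $\mathcal{D}'$ grows across epochs.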
\paragraph{Optimizing with Implicit Action Prediction} The examples we generate during aggregation often include sequences of state-action pairs that do not align with the instruction, for example when a mentioned spatial relation is incorrect from the new starting position. Such examples require reasoning differently about the text and world state than with the original data. We identify such examples in $\mathcal{D}'$ by comparing their follower starting position to the starting position in the original corresponding example in $\mathcal{D}$. If the distance is over two, we treat the example as requiring implicit actions~\cite{Artzi:13}. All other examples, including all original examples in $\mathcal{D}$, are considered as not requiring implicit reasoning. We encourage the model to reason differently about these examples with a discriminator that classifies whether the example requires implicit reasoning, using the internal activations of $\system{LingUNet}\xspace$. The discriminator classifies each of the $L$ layers in $\system{LingUNet}\xspace$ for implicit reasoning. The goal is to encourage implicit reasoning at all levels of reasoning in the first stage. The probability of implicit reasoning for each $\system{LingUNet}\xspace$ layer $l$ is: \begin{small} \begin{eqnarray*} p(\mathrm{IMPLICIT} = 1 \mid l, s_1, \bar{x}) =& \\ &\hspace{-2cm} \begin{cases} \sigma(\textsc{AvgPool}(\mathbf{G}_1 * \mathbf{K}^{\rm IMP}_1)) & l=1 \\ \sigma(\textsc{AvgPool}(\mathbf{H}_l * \mathbf{K}^{\rm IMP}_l)) & l>1\end{cases} \;\;, \end{eqnarray*} \end{small} \noindent where $\mathbf{K}^{\rm IMP}_l$ are $1 \times 1$ learned kernels and $\textsc{AvgPool}$ does average pooling. We define a cross-entropy loss $\mathcal{L}_{\rm IMP}$ that averages across the $L$ layers. The complete fine-tuning loss is: \begin{small} \begin{equation*} \mathcal{L}(\theta_1, \theta_2) = \mathcal{L}_1(\theta_1) + \mathcal{L}_2(\theta_2) + \lambda_{\rm IMP}\mathcal{L}_{\rm IMP}(\theta_1)\;\;.
\end{equation*} \end{small} \section{Model} \label{sec:model} \begin{figure*} \centering % \includegraphics[width=\textwidth,trim={92pt 274pt 99pt 72pt},clip=true]{figs/diagram-tall.pdf} \caption{Illustration of the model architecture. Given the instruction $\bar{x}$ and the world state $s$, we compute $\mathbf{F}_0$ from the embeddings of the instruction $\mathbf{\sentence}$ and environment $\mathbf{S}$. We use $\system{LingUNet}\xspace$ to predict four distributions, which are visualized over the map (grayscaled to emphasize the distributions). We show three action generation steps. Each step receives the map cropped around the agent and the previous action, and outputs the next action. } \label{fig:diagram} \end{figure*} We use a two-stage model for the follower policy $\pi(s_t, \gamma_t, \bar{I}_{<t})$, where $s_t$ is a world state, $\gamma_t$ is an interaction state, and $\bar{I}_{<t}$ is the interaction history. The instruction $\bar{x}$ that is first in the queue $\bar{Q}_t$, which is part of $\gamma_t$, is the currently executed instruction. In our model, we assume the follower observes the entire environment. First, we map $\bar{x}$ and $s_t$ to distributions over locations in the environment, including which locations to visit and which are goals. These distributions constitute an execution plan, and are used to generate a sequence of actions in the second stage. The distributions can also be used to easily visualize the agent's plan. The first stage is used when starting a new instruction, and the predicted distributions are re-used for all actions for that instruction. Figure~\ref{fig:diagram} illustrates the architecture and the visualization of the distributions. The two-stage approach was introduced by \citet{Blukis:18visit-predict}. We generalize its planning space and add a recurrent action generator for execution. \paragraph{Input Representation} The inputs to the first stage are the instruction $\bar{x}$ and the world state $s_t$.
We generate feature maps for both. We use a learned embedding function $\phi^{\mathcal{X}}$ and a bi-directional recurrent neural network~\cite[RNN;][]{Elman:90rnn} with a long short-term memory cell~\cite[LSTM;][]{Hochreiter:97lstm} $\textrm{RNN}^\mathcal{X}$ to map $\bar{x}$ to a vector $\mathbf{\sentence}$. The world state $s_t$ is a 3D tensor that encodes the properties of each position. The dimensions of $s_t$ are $P\times W\times H$, where $P$ is the number of properties, and $W$ and $H$ are the environment width and height. Each of the $W\times H$ positions is represented in $s_t$ as a binary vector of length $P$. For example, a position with a red hut will have 1's for the \emph{red} and \emph{hut} dimensions and 0's for all other dimensions. We map the world state to a tensor feature map $\mathbf{F}_0$ by embedding $s_t$ and processing it using the text representation $\mathbf{\sentence}$. We use a learned embedding function $\phi^{\mathcal{S}}$ to map each position vector to a dense embedding of size $N_s$ by summing embeddings of each of the position's properties. The embeddings are combined into a tensor $\mathbf{S}$ of dimension $N_s \times W \times H$ representing a featurized global view of the environment. We create a text-conditioned state representation by creating a kernel $\mathbf{K}_s$ and convolving with it over $\mathbf{S}$. We use a linear transformation to create $\mathbf{K}_s = \mathbf{W}_s \mathbf{\sentence} + \mathbf{b}_s$, where $\mathbf{W}_s$ and $\mathbf{b}_s$ are learned weights. We reshape $\mathbf{K}_s$ to a $1\times 1$ convolution kernel with $N_{s'}$ output channels, and compute $\mathbf{S}' = \mathbf{S} * \mathbf{K}_s$.
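The text-conditioned convolution just described can be sketched in a few lines of numpy. A $1 \times 1$ convolution over channels reduces to an einsum; the dimensions here are illustrative, and the random arrays stand in for learned parameters:

```python
# Minimal numpy sketch: the kernel K_s is a linear function of the
# instruction embedding, reshaped to a 1x1 convolution applied over S.
import numpy as np

rng = np.random.default_rng(0)
N_s, N_sp, W, H, d = 8, 4, 5, 5, 16      # channels in/out, map size, text dim

sentence = rng.normal(size=d)             # instruction embedding
S = rng.normal(size=(N_s, W, H))          # embedded global environment view

W_s = rng.normal(size=(N_s * N_sp, d))    # stand-ins for learned weights
b_s = rng.normal(size=N_s * N_sp)

K_s = (W_s @ sentence + b_s).reshape(N_sp, N_s)   # 1x1 kernel
S_prime = np.einsum('oc,cwh->owh', K_s, S)        # S' = S * K_s
F_0 = np.concatenate([S, S_prime], axis=0)        # channel-wise concat

assert S_prime.shape == (N_sp, W, H)
assert F_0.shape == (N_s + N_sp, W, H)
```

The rotation and centering applied to $\mathbf{F}_0$ is omitted here for brevity.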
We concatenate $\mathbf{S}$ and $\mathbf{S}'$ along the channel dimension and rotate and center the result so the follower's position is at the center pixel, generating $\mathbf{F}_0$.\footnote{Appendix~\ref{app:sec:lingunet} describes the relationship between the environment representations and the agent's initial and current orientation.} \paragraph{Stage 1: Plan Prediction} We treat plan generation as predicting distributions over positions $\rho$ in the environment. There are $W \times H$ possible positions. We predict four distributions: (a) \mbox{$p(\rho \mid s_t, \bar{x})$}, the probability of visiting $\rho$ while executing the instruction $\bar{x}$; (b) \mbox{$p(\mathrm{GOAL} = 1 \mid \rho, s_t, \bar{x})$}, the binary probability that $\rho$ is a goal (i.e., $\mathrm{GOAL} = 1$ when containing a card to select); (c) \mbox{$p(\mathrm{AVOID} = 1 \mid \rho, s_t, \bar{x})$}, the binary probability that the agent must not pass in $\rho$ (i.e., $\mathrm{AVOID} = 1$ when it contains a card that should not be touched); and (d) \mbox{$p(\mathrm{NOPASS} = 1\mid \rho, s_t, \bar{x})$}, the binary probability the agent cannot pass in $\rho$ (i.e., $\mathrm{NOPASS} = 1$ when it contains another object). We use $\system{LingUNet}\xspace$~\cite{Misra:18goalprediction} to predict the distributions. The inputs to $\system{LingUNet}\xspace$ are the instruction embedding $\mathbf{\sentence}$ and featurized world state $\mathbf{F}_0$, which is relative to the agent's frame of reference. The outputs are four matrices, each of dimension $W \times H$ corresponding to the environment. $\system{LingUNet}\xspace$ is formally defined in \citet{Misra:18goalprediction} and Appendix~\ref{app:sec:lingunet}. Roughly speaking, $\system{LingUNet}\xspace$ reasons about the environment representation $\mathbf{F}_0$ at $L$ levels. First, $\mathbf{F}_0$ is used to generate feature maps of decreasing size $\mathbf{F}_j$, $j=1,\dots,L$ using a series of convolutions.
We create convolution kernels from the instruction representation $\mathbf{\sentence}$, and apply them to the feature maps $\mathbf{F}_j$ to generate text-conditioned feature maps $\mathbf{G}_j$. Finally, feature maps of increasing size $\mathbf{H}_j$ are generated using a series of $L$ deconvolutions. The last deconvolution generates a tensor of size $4 \times W \times H$ with a channel for each of the four distributions. We use a softmax over one channel to compute $p(\rho \mid s_t, \bar{x})$. Because the other distributions are binary, we use a sigmoid on each value independently for the other channels. When computing $p(\mathrm{GOAL} = 1 \mid \rho, s_t, \bar{x})$ and $p(\mathrm{AVOID} = 1 \mid \rho, s_t, \bar{x})$ we mask positions without objects that can be changed (i.e., positions without cards) to assign them zero probability. \paragraph{Stage 2: Action Generation} We use the four distributions to generate a sequence of actions. We concatenate the distributions channel-wise to a tensor $\mathbf{P} \in \mathbb{R}^{4 \times W \times H}$. We use a forward LSTM RNN to predict a sequence of actions. At each prediction step $t$, we rotate, transform, and crop $\mathbf{P}$ to generate the egocentric tensor $\mathbf{P}'_t \in \mathbb{R}^{N' \times C \times C}$, where the agent is always at the center and facing in the same direction, such that $\mathbf{P}'_t$ is relative to the agent's current frame of reference. 
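The rotate-and-crop step that produces $\mathbf{P}'_t$ can be sketched with numpy. This is an illustrative simplification with our own names; the padding value and rotation convention are assumptions:

```python
# Hypothetical sketch of building the egocentric crop P'_t: pad the plan
# tensor, cut a C x C window centred on the agent, and rotate it so the
# agent faces a fixed direction.
import numpy as np

def egocentric_crop(P, agent_pos, agent_rot90, C):
    """P: (channels, W, H) plan tensor; agent_pos: (x, y) grid position;
    agent_rot90: number of 90-degree turns to a canonical heading;
    C: odd crop size. Returns a (channels, C, C) tensor."""
    r = C // 2
    padded = np.pad(P, ((0, 0), (r, r), (r, r)))          # zero padding
    x, y = agent_pos[0] + r, agent_pos[1] + r
    crop = padded[:, x - r:x + r + 1, y - r:y + r + 1]    # agent at centre
    return np.rot90(crop, k=agent_rot90, axes=(1, 2))     # fixed heading

P = np.zeros((4, 7, 7))
P[0, 2, 3] = 1.0                     # mark the agent's own cell
crop = egocentric_crop(P, (2, 3), 1, 5)
assert crop.shape == (4, 5, 5) and crop[0, 2, 2] == 1.0   # cell at centre
```

The centre pixel is invariant under the final rotation, so the agent always sits at the middle of $\mathbf{P}'_t$.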
The input to the action generation RNN at time $t$ is: \begin{small} \begin{eqnarray*} \mathbf{p}'_t &=& \vect(\textsc{Norm}(\textsc{ReLU}(\textsc{CNN}^P(\mathbf{P}'_t)))) \\ \mathbf{p}_t &=& \textsc{ReLU}(\mathbf{W}^P_2[\mathbf{p}'_t; \textsc{ReLU}(\mathbf{W}^P_1\mathbf{p}'_t + \mathbf{b}^P_1)] + \mathbf{b}^P_2)\;\;, \end{eqnarray*} \end{small} \noindent where $\textsc{CNN}^P$ is a convolutional layer, $\textsc{ReLU}$ is a non-linearity, $\textsc{Norm}$ is instance normalization~\cite{Ulyanov2017:instancenorm}, and $\mathbf{W}^P_1$, $\mathbf{W}^P_2$, $\mathbf{b}^P_1$, $\mathbf{b}^P_2$ are learned weights. The action probability is: \begin{small} \begin{eqnarray} \nonumber \mathbf{h}_t &=& \textrm{RNN}^\mathcal{A}(\mathbf{h}_{t-1}, [\phi^{\mathcal{A}}(a_{t-1}); \mathbf{p}_t]) \\ \label{eq:prob_act} p(a) &\propto & \exp(\mathbf{W}^\mathcal{A}[\mathbf{h}_t; \mathbf{p}_t] + \mathbf{b}^\mathcal{A})\;\;, \end{eqnarray} \end{small} \noindent where $\textrm{RNN}^\mathcal{A}$ is an LSTM RNN, $\phi^\mathcal{A}$ is a learned action embedding function, $a_0$ is a special $\mathtt{START}$ action, and $\mathbf{W}^\mathcal{A}$ and $\mathbf{b}^\mathcal{A}$ are learned. During inference, we assign zero probability to actions $a$ when $\mathcal{T}_w(s_t, a)$ is invalid (Appendix~\ref{sec:sup:transfunc}), for example when an agent would move into an obstacle. \section{Setup and Technical Overview} \label{sec:overview} We consider a setup where two agents, a leader and a follower, collaborate. Both execute actions in a shared environment. The leader, additionally, instructs the follower using natural language. The leader's goal is to maximize the task reward, and the follower's goal is to execute the leader's instructions. We consider a turn-based version, where at each turn only one agent acts.
% We instantiate this scenario in \textsc{CerealBar}\xspace, a navigation card game (Figure~\ref{fig:intro}), where a leader and follower move in an environment selecting cards to complete sets.\footnote{The name \textsc{CerealBar}\xspace does not carry special meaning. It was given to the project early on, and we came to like it. Our game is inspired by \href{https://en.wikipedia.org/wiki/Set_(card_game)}{the card game Set}.} \paragraph{\textsc{CerealBar}\xspace Overview} The objective of \textsc{CerealBar}\xspace is to earn points by selecting valid sets of cards. A valid set has three cards with distinct color, shape, and count. When the only cards selected in the world form a valid set, the players receive a point, the selected cards disappear, three new cards are added randomly, and the number of remaining turns increases. The increase in turns decays for each set completed. An agent stepping on a card flips its selection status. The players form sets together. The follower has more steps per turn than the leader. This makes using the follower critical for success. The follower only sees a first-person view of the environment, preventing them from planning on their own, and requiring instructions that are sensible from the follower's perspective. The leader chooses the next target set, plans which of the two players should get which card, and instructs the follower. The follower cannot respond to the leader, and should not plan on their own, or they risk sabotaging the leader's plan, wasting moves and lowering their potential score. Followers mark an instruction as finished before observing the next one. This provides alignment between instructions and follower actions. In contrast to the original setup that we use for data collection, in our model (Section~\ref{sec:model}), we assume the follower has full observability, leaving the challenge of partial observability for future work. Appendix~\ref{sec:sup:game} provides further game design details.
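The set-validity rule above can be stated compactly. A sketch, where cards are represented as illustrative (color, shape, count) tuples:

```python
# A valid set: exactly three cards, pairwise distinct in color, shape,
# and count. Card attributes here are illustrative stand-ins.

def is_valid_set(cards):
    """cards: iterable of (color, shape, count) tuples."""
    cards = list(cards)
    if len(cards) != 3:
        return False
    colors, shapes, counts = zip(*cards)
    # each attribute must take three distinct values across the set
    return all(len(set(attr)) == 3 for attr in (colors, shapes, counts))
```

For example, `[("red", "hut", 1), ("blue", "star", 2), ("green", "tree", 3)]` is valid, while any repeated color, shape, or count invalidates the set.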
\paragraph{Problem Setup} We distinguish between the world state and the interaction state. Let $\mathcal{S}$ be the set of all world states, $\Gamma$ be the set of all interaction states, and $\mathcal{X}$ be the set of all natural language instructions. A world state $s \in \mathcal{S}$ describes the current environment. In \textsc{CerealBar}\xspace, the world state describes the spatial environment, the location of cards, whether they are selected or not, and the location of the agents. An interaction state $\gamma \in \Gamma$ is a tuple $\langle \bar{Q}, \alpha, \psi \rangle$. The first-in-first-out queue $\bar{Q} = [\bar{x}_q,\dots,\bar{x}_{q'}]$ contains the instructions $\bar{x}_i \in \mathcal{X}$ available to execute. The current instruction is the left-most instruction $\bar{x}_q$. The current turn-taker $\alpha \in \{\mathrm{Leader}, \mathrm{Follower}\}$ indicates the agent currently executing actions, and $\psi \in {\rm I\!N}_{\geq 0}$ is the number of steps remaining in the current turn. At each time step, the current turn-taker agent takes an action. An action may be the leader issuing an instruction, or either agent performing an action in the environment. Let $\mathcal{A} = \actions_w \cup \{ \act{DONE} \}\cup \mathcal{X}$ be the set of all actions. The set $\actions_w$ includes the actions available to the agents in the environment. In \textsc{CerealBar}\xspace, this includes moving forward or backward, and turning left or right. Moving onto a card flips its selection status. $\act{DONE}$ indicates completing the current instruction for the follower or ending the turn for the leader. An instruction action $a = \bar{x} \in \mathcal{X}$ can only be taken by the leader and adds the instruction $\bar{x}$ to the queue $\bar{Q}$. The effect of each action is determined by the transition function $\mathcal{T} :\mathcal{S} \times \Gamma \times \mathcal{A} \rightarrow \mathcal{S} \times \Gamma$, which is formally defined in Appendix~\ref{sec:sup:transfunc}.
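The transition rules can be sketched in Python (an illustrative encoding with our own names; the appendix table gives the full formal definition, and the world transition $\mathcal{T}_w$ is left abstract):

```python
# Illustrative encoding of the CerealBar transition function T over
# gamma = (queue, turn_taker, steps_left). Rule numbers refer to the
# appendix table; action names are our stand-ins.
WORLD_ACTIONS = {"MF", "MB", "RL", "RR"}   # move fwd/back, turn left/right
PSI_L, PSI_F = 5, 10                        # steps per leader/follower turn

def transition(s, gamma, a, T_w):
    queue, alpha, psi = gamma
    if a == "DONE":
        if alpha == "Leader":
            if queue:                                  # Rule 2: follower's turn
                return s, (queue, "Follower", PSI_F)
            return s, (queue, "Leader", PSI_L)         # Rule 3: new leader turn
        if len(queue) > 1:                             # Rule 5: pop instruction
            return s, (queue[1:], "Follower", psi)
        return s, ([], "Leader", PSI_L)                # Rule 6: queue exhausted
    if a in WORLD_ACTIONS:
        if alpha == "Follower" and psi == 1:           # Rule 7: out of steps
            return T_w(s, a), (queue, "Leader", PSI_L)
        if alpha == "Leader" and psi == 1:             # Rule 4: may still instruct
            return T_w(s, a), (queue, "Leader", 0)
        return T_w(s, a), (queue, alpha, psi - 1)      # Rule 8: use one step
    # otherwise a is a natural language instruction (leader only)
    return s, (queue + [a], alpha, psi)                # Rule 1: costs no step
```

Note that issuing an instruction (Rule 1) leaves $\psi$ unchanged, matching the definition below.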
Only world actions $a \in \actions_w$ decrease the remaining steps $\psi$. The goal of the leader is to maximize the total reward of the interaction. An interaction $\bar{I} = \langle (s_1, \gamma_1, a_1), \dots, (s_{|\bar{I}|}, \gamma_{|\bar{I}|}, a_{|\bar{I}|}) \rangle$ is a sequence of state-action tuples, where $\mathcal{T}\left(s_{i}, \gamma_{i}, a_{i} \right)=(s_{i+1}, \gamma_{i+1})$. The reward function $R : \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$ assigns a numerical reward to a world state and an action. The total reward of an interaction $\bar{I}$ is $\sum_{i=1}^{|\bar{I}|}R(s_i, a_i)$. In \textsc{CerealBar}\xspace, the agents receive a reward of 1 when a valid set is selected. \paragraph{Task} Our goal is to learn a follower policy to execute the leader instructions. At time $t$, given the current world and interaction states $s_t$ and $\gamma_t$, and the interaction so far $\bar{I}_{<t}$, the follower policy $\pi(s_t, \gamma_t, \bar{I}_{<t})$ predicts the next action $a_t$. \paragraph{Model} We decompose the follower policy $\pi(s_t, \gamma_t, \bar{I}_{<t})$ into predicting a set of distributions over positions in the environment, including positions to visit, intermediate goals (e.g., cards to select), positions to avoid (e.g., cards not to touch), and positions that are not passable. These distributions are used in a second stage to generate a sequence of actions. Section~\ref{sec:model} describes the model. \paragraph{Learning} We assume access to a set of $N$ recorded interactions $\{ \bar{I}^{(i)} \}^N_{i=1}$, and create examples where each instruction is paired with a sequence of state-action tuples. We maximize the action-level cross entropy objective, and use two auxiliary objectives (Section~\ref{sec:learning}). We first train each stage of the model separately, and then fine-tune them jointly. During fine-tuning, we continuously generate additional examples using model failures.
These examples help the agent to learn how to recover from errors in prior instructions. \paragraph{Evaluation} We measure correct execution of instructions and the overall game reward. We assume access to a test set of $M$ recorded interactions $\{ \bar{I}^{(i)} \}^M_{i=1}$. We measure instruction-level and interaction-level performance, and develop \emph{cascaded evaluation}, an evaluation protocol that provides a more graded measure than treating each interaction as a single example, while still accounting for error propagation (Section~\ref{sec:cascaded}). Finally, we conduct online evaluation with human leaders. \section{Related Work} \label{sec:related} Goal-driven natural language interactions have been studied in various scenarios, including dialogue where only one side acts in the world~\cite{Anderson:91,Williams:13dstc,Vlachos:14,Devries:18,Kim:19codraw,Hu:19minirts}, coordination for agreed selection of an object~\cite{He:17dialogue, Udagawa:19}, and negotiation~\cite{Lewis:17negotiation,He:18negotiation}. We focus on collaborative interactions where both the user and the system perform sequences of actions in the same environment. This allows the user to adapt to the language understanding ability of the system and balance between delegating goals to it and accomplishing them themselves. For example, a user may decide to complete a short but hard-to-describe task and delegate to the system a long but easy-to-describe one. In prior work, in contrast, recovery is limited to users paraphrasing their requests. The Cards corpus~\cite{Djalali11:cards-qud,Djalali11:cards-preference,Potts:12} was used for linguistic analysis of collaborative bi-directional language interaction. The structure of collaborative interactions was also studied using Wizard-of-Oz studies~\cite{Lochbaum:98,Sidner:00,Koulouri:09}. In contrast, we focus on building agents that follow instructions. 
\citet{Ilinykh:19meetup} present a corpus for the related task of natural language coordination in navigation. Collaboration has also been studied for emergent communication~\cite[e.g.,][]{Andreas:17,Evtimova:17}. Understanding sequences of natural language utterances has been addressed using semantic parsing~\cite[e.g.,][]{Miller:96,MacMahon:06,Chen:11,Artzi:13,Artzi:14,Long:16context,Iyyer:17seq-qa,Suhr:18context,Arkin:17,Broad:17}. Interactions were also used for semantic parser induction~\cite{Artzi:11,Thomason:15robotdialog,Wang:16games}. These methods require hand-crafted symbolic meaning representation, while we use low-level actions~\cite{Suhr:18situated}. The interactions in our environment interleave actions of both agents with leader utterances, an aspect not addressed by these methods. Executing single instructions has been widely studied~\cite[e.g.,][]{Tellex:11,Duvallet:13,Misra:17instructions,Misra:18goalprediction,Anderson:17,Blukis:18drone,Blukis:18visit-predict,Chen:19touchdown}. The distinction we make between actions specified in the instruction and implicit recovery actions is similar to how \citet{Artzi:13} use implicit actions for single instructions. Finally, our model is based on the VPN model of \citet{Blukis:18visit-predict}. While we assume full observability, their original work did not. This indicates that our model is likely to generalize well to partially observable scenarios. \section{Results} \label{sec:results} Table~\ref{tab:static_results} shows development and test results, including ablations. We consider the proportion of points scored computed with cascaded evaluation as the main metric. Our complete approach significantly outperforms \system{Seq2seq+attn}. Key to this difference is the added structure within the model and the direct supervision on it. 
The results also show the large remaining gap to the static oracle.\footnote{Appendix~\ref{sec:sup:cascaded} explains the static oracle performance.} Our results show how considering error propagation for all available instructions in cascaded evaluation guides different design choices. For example, example aggregation and the implicit discriminator lower performance according to instruction-level metrics, which do not consider error propagation. We see a similar trend for the implicit discriminator when looking at full game points, an interaction-level metric that does not account for performance on over 80\% of the data because of error propagation. In contrast, the proportion of points scored computed using cascaded evaluation shows the benefit of both mechanisms. Our ablations demonstrate the benefit of each model component. All four distributions help. Without the trajectory distribution (-- Trajectory distribution), performance drops almost to the level of \system{Seq2seq+attn}. This indicates the action predictor is not robust enough to construct a path given only the three other disjoint distributions. While the predicted trajectory distribution contains all information necessary to reach the correct cards and goal location, the other three distributions further improve performance. This is likely because redundancy with the trajectory distribution makes the model more robust to noisy predictions in the trajectory distribution. For example, the GOAL distribution guides the agent to move towards goal cards even if the predicted trajectory is discontinuous. The action generation recurrence is also critical (-- Action recurrence), allowing the agent to keep track of which locations it already passed when navigating complex paths that branch, loop, or overlap with themselves. 
While we observe that each stage separately performs well after pretraining, combining them without fine-tuning (-- Fine-tuning) leads to low performance because of the shift in the second stage input. Providing the gold distributions to the action generator illustrates this (+ Gold plan). Removing early goal auxiliary loss $\mathcal{L}_{G'}$ (Section~\ref{sec:learn:pretrain}) leads to a slight drop in performance on all metrics (-- Early goal auxiliary). Learning with aggregated recovery examples helps the model to learn to recover from errors in previous instructions and increases the proportion of points scored (-- Example aggregation). However, without the implicit reasoning discriminator (\mbox{-- Implicit discriminator}), the additional examples make learning too difficult, and do not help. Finally, removing the language input (-- Instructions) significantly decreases performance, showing that the data is relatively robust to observational biases and language is necessary for the task. In the human evaluation, we observe a mean of 6.2 points (max of 14) with our follower model, compared to 12.7 (max of 20) with human followers. While this shows there is much room for improvement, it illustrates how human leaders adapt and use the agent effectively. One key strategy of adaptation is to use simplified language that fits the model better. This includes shorter instructions, with 8.5 tokens on average with automated followers compared to 12.3 with humans, and a smaller vocabulary, 578 word types with automated followers and 1037 with humans. In general, human leaders commented that they are able to easily distinguish between automated and human followers, and find working with the automated agent frustrating. \section{Data Collection Details} \label{sec:sup:data} Figures~\ref{fig:interface_leader} and~\ref{fig:interface_follower} show the leader's and follower's interfaces. 
\begin{figure*} \centering \fbox{\includegraphics[width=\textwidth]{figs/interface_leader.png}} \caption{The \textsc{CerealBar}\xspace leader gameplay interface.} \label{fig:interface_leader} \end{figure*} \begin{figure*} \centering \fbox{\includegraphics[width=\linewidth]{figs/interface_follower.png}} \caption{The \textsc{CerealBar}\xspace follower gameplay interface.} \label{fig:interface_follower} \end{figure*} \paragraph{Crowdsourcing Management} We use a qualification task to both teach workers how to play the game and to mark workers as qualified for our main task. We restrict those who can qualify to workers located in majority English-speaking countries with at least 90\% approved HITs and at least 100 completed HITs. The qualification task has three components: an interactive tutorial for the leader role, an interactive tutorial for the follower role, and a short quiz about the gameplay. In both tutorials, turn-switching is disabled and workers have an unlimited number of moves to use to complete the tutorial. Each tutorial uses the same map. This allows us to pre-program instructions for the tutorials. In the leader tutorial, the worker has access to the full game board. They are asked to send a command to the follower, and are instructed via in-game prompts to collect a specific set of cards. Finally, they are asked to collect two more sets in the environment that are valid. Workers who send a command and collect a total of three sets successfully complete this tutorial. In the follower tutorial, the worker has access only to the follower view. Pre-written commands are issued to the worker, and they must follow them one-by-one to complete a set. The commands include an example of the leader correcting a set-planning mistake. If the worker marks all commands as finished and successfully collects one set, the follower tutorial is complete. Finally, workers are asked to read the game instructions and complete a short quiz. 
They are asked questions regarding the validity of card sets, the responsibilities of both players, and how each game ends. We maintain two groups of workers split by experience with the game, and use separate pools of HITs for each. A worker can join the expert pool if they have shown they understand how to play as a leader and as a follower through at least one game each. This allows new players to learn the game rules without frustrating expert players. At the end of data collection, 95 workers were in the expert pool while 169 were in the non-expert pool, for a total of 264 participating workers. We pay workers a bonus per point they earn, increasing the bonus as more points are earned, in addition to a base pay per game. We do not pay leaders and followers differently. The median game cost was \$5.80. \paragraph{The \textsc{CerealBar}\xspace Dataset} In total, we collect 1,526 games played by both experts and non-experts. Of these, we keep 1,202 (78.8\%) games, comprising 23,979 total instructions, discarding those where no instructions were complete, or where alignment between instructions and actions was suspected to be low-quality. For example, we removed interactions with a low proportion of instructions being marked as complete, or very long action sequences from the follower, both of which indicate the follower did not properly complete instructions. When splitting the data, we ensured the mean score across the three splits was roughly the same. Table~\ref{tab:dataset_statistics} shows basic statistics of the data we collected after pruning. 82.6\% of post-pruning games are from the expert pool. In the training set, the mean number of completed instructions is 19.9 and the median is 24.0. 83.3\% of games have a score greater than zero. We include games with a score of zero if the alignment between instructions and actions is high-quality according to our pruning heuristics.
The vocabulary size is computed by lowercasing all word types and tokenizing using the NLTK word tokenizer. Our dataset contains longer interactions than several existing datasets for sequential instruction following and interaction~\cite[e.g.,][]{Chen:11,Long:16context,He:17dialogue,Devries:18,Kim:19codraw,Hu:19minirts,Udagawa:19}, though still shorter than the Cards corpus~\cite{Djalali11:cards-qud,Djalali11:cards-preference,Potts:12}. Individual sentences are also longer than several similar corpora~\cite[e.g.,][]{Chen:11,Djalali11:cards-qud,Long:16context,He:17dialogue,Hu:19minirts}. \begin{table}[t] \footnotesize \centering \begin{tabular}{|c|c|c|c|} \hline & \textbf{Mean} & \textbf{Median} & \textbf{Max} \\ \hline {Total Game Time (m:s)} & 16:28 & 18:40 & 31:31\\ \hline {Score / Interaction} & 7.9 & 9.0 & 19 \\ \hline {\# Instr. / Interaction} & 19.9 & 24.0 & 40 \\ \hline {\# Tokens / Instr.} & 14.0 & 13.0 & 55 \\ \hline {\# Follower Actions / Instr.} & 8.5 & 8.0 & 50 \\ \hline \hline {\# Interactions} & \multicolumn{3}{|c|}{1,202} \\ \hline {Vocabulary Size} & \multicolumn{3}{|c|}{3,641} \\ \hline \end{tabular} \caption{Human-human games data statistics. All statistics except the number of examples are computed on the training set only.} \label{tab:dataset_statistics} \end{table} \section{Evaluation Details} \label{sec:sup:cascaded} \paragraph{Cascaded Evaluation} To compute metrics using cascaded evaluation, we construct a set of cascaded evaluation examples from the original test set. We assume access to a test set of $M$ recorded interactions $\left\{ \bar{I}^{(i)} \right\}^M_{i=1}$, where each $\bar{I}^{(i)} = \left\langle \left(s_1^{(i)}, \gamma_1^{(i)}, a_1^{(i)}\right), \dots, \left(s_{|\bar{I}|}^{(i)}, \gamma_{|\bar{I}|}^{(i)}, a_{|\bar{I}|}^{(i)}\right) \right\rangle$. 
For each instruction $\bar{x}_j$ in $\bar{I}^{(i)}$, we create an example $\bar{I}_C^{(i,j)} = \left\langle \left(s_{j'}^{(i)}, \gamma_{j'}^{(i)}, a_{j'}^{(i)}\right), \dots, \left(s_{|\bar{I}|}^{(i)}, \gamma_{|\bar{I}|}^{(i)}, a_{|\bar{I}|}^{(i)}\right) \right\rangle$, where $j'$ is the first follower step of executing $\bar{x}_j$. We treat each $\bar{I}_C^{(i,j)}$ as a separate example. For each metric, we report the proportion of the maximum value possible for each $\bar{I}_C^{(i,j)}$, and average across all examples $\bar{I}_C^{(i,j)}$. When computing the proportion of instructions followed in cascaded evaluation, the maximum possible for example $\bar{I}_C^{(i,j)}$ is the number of remaining instructions $N-j$, where $N$ is the number of instructions in $\bar{I}^{(i)}$. When computing the proportion of points scored, we subtract the points scored in the game before step $j$ to only account for points possible in the instructions present in $\bar{I}_C^{(i,j)}$. \paragraph{Performance of the Static Oracle} The static oracle does not have perfect performance. This is because the follower's turn ended before all ten steps were used in some recorded interactions. During evaluation, however, we allow the follower to move for all ten available steps. This sometimes leads to misalignment between leader and follower actions. This means some expected sets cannot be completed. \section{\textsc{CerealBar}\xspace Game Design} \label{sec:sup:game} This appendix supplements Section~\ref{sec:overview} with further game design details and discussion of the reasoning behind them. \paragraph{World View} Figure~\ref{fig:interface_leader} shows the leader's point of view, and Figure~\ref{fig:interface_follower} shows the follower's. The leader observes the entire environment, while the follower only has access to a restricted first-person view.
The leader can also toggle to an overhead view to see obstructed cards using the camera button, and has access to the follower's current view to aid them in writing instructions that make sense to the follower. Selected cards are outlined in blue for both players. Invalid selections appear in red for the leader only. This setup makes the follower dependent on the leader, limits the follower's ability to plan the card collection strategy, and encourages collaboration. \paragraph{Game Progression} The two players switch control of the game by taking turns. During each turn, the follower can take ten ($\Psi_f = 10$) steps while the leader can take five ($\Psi_l = 5$). Allowing the follower more steps than the leader incentivizes delegating lengthier tasks to the follower, such as grabbing multiple cards per turn or moving further away. We do not count actions which do not change the player's location or rotation, such as moving forward into an obstacle, against this limit. We additionally limit the amount of time each player has per turn. This requires players to move quickly without frustrating their partner by taking a long time, and additionally limits the maximum time per game. Both players begin with six turns each. The game ends when the players run out of turns. The leader's turn ends once they press the end turn button or after 45 seconds. The end turn button is disabled as long as there are no instructions in the follower queue to nudge the leader to use the follower if time allows. The allotted 45 seconds allow the leader sufficient time to move, plan, and write instructions. During the leader's turn, they can add any number of new instructions to the queue. The follower only receives control if there are instructions in the queue. If the queue is empty when the leader finishes their turn, the follower's turn is skipped, but the number of turns remaining still decreases.
The follower's turn ends automatically when they run out of steps, after 15 seconds, or when they complete all instructions in the queue. During the follower's turn, they can mark any number of instructions as complete using the $\act{DONE}$ action. The follower sees the current and previous instructions, even if there are more instructions in the queue. They must mark the current instruction as complete before seeing the next. This is done to simplify the reasoning available to the follower. For example, to avoid cases where the follower skips a command based on future ones. Because there may be more future instructions in the queue, this incentivizes the follower to not waste moves in the current instruction and be as efficient as possible. During data collection, this provides alignment of actions to instructions because it prohibits a follower from taking actions aligning with a future instruction without marking the current instruction as complete. Without instruction completion annotation, the problem of alignment between instructions and actions becomes much more difficult when processing the recorded interactions. \paragraph{Scoring Points} When a valid set is made, the selected cards disappear, and three cards are randomly generated and placed on the grid such that the new grid contains at least one valid set. The two players earn a point, and are given extra turns. The number of added turns decays as they complete more sets, eventually reaching zero added turns. The maximum possible number of turns in a game is 65. In the training data, 454 games reached this number of turns. Adding extra turns when a set is made allows us to collect more data from games that are going well. It also allows us to pay players based on the number of sets completed, and incentivizes them to play as well as possible. If a game is going poorly, e.g., if the pair fails to earn a point in the first six turns, the game will end early.
However, if the game is going well, implying the pair is collaborating well, the game will continue for longer, and will contain a longer sequence of instructions. \section{\textsc{CerealBar}\xspace Transition Function} \label{sec:sup:transfunc} \begin{table*}[t] \centering\footnotesize \begin{tabular}{|c|r|rcl|} \hline \textbf{Rule No. } & \multicolumn{1}{|c|}{\textbf{Domain}} & \multicolumn{3}{|c|}{\textbf{Definition}} \\ \hline 1 & $\forall \bar{x} \in \mathcal{X}$, $s \in \mathcal{S}$ & $\mathcal{T}(s, \langle \bar{Q}, \mathrm{Leader}, \psi \rangle, \bar{x})$ & $=$ & $(s, \langle \bar{Q} \bar{x}, \mathrm{Leader}, \psi \rangle)$\\ \hline 2 & $\forall s \in \mathcal{S}$, $|\bar{Q}| \geq 1$ & $\mathcal{T}(s, \langle \bar{Q}, \mathrm{Leader}, \psi \rangle, \act{DONE})$ & $=$ & $(s, \langle \bar{Q}, \mathrm{Follower}, \Psi_f \rangle)$\\ \hline 3 & $\forall s \in \mathcal{S}$, $|\bar{Q}| = 0$ & $\mathcal{T}(s, \langle \langle~\rangle, \mathrm{Leader}, \psi \rangle, \act{DONE})$ & $=$ & $(s, \langle \bar{Q}, \mathrm{Leader}, \Psi_l \rangle)$\\ \hline 4 & $\forall a \in \actions_w$, $s \in \mathcal{S}$ & $\mathcal{T}(s, \langle \bar{Q}, \mathrm{Leader}, 1 \rangle, a)$ & $=$ & $(\mathcal{T}_w(s, a), \langle \bar{Q}, \mathrm{Leader}, 0 \rangle)$\\ \hline \hline 5 & $\forall s \in \mathcal{S}$, $|\bar{Q}| > 1$ & $\mathcal{T}(s, \langle \bar{x}\bar{Q}, \mathrm{Follower}, \psi \rangle, \act{DONE})$ & $=$ & $(s, \langle \bar{Q}, \mathrm{Follower}, \psi\rangle )$ \\ \hline 6 & $\forall s \in \mathcal{S}$, $|\bar{Q}| = 1$ & $\mathcal{T}(s, \langle \bar{Q}, \mathrm{Follower}, \psi \rangle, \act{DONE})$ & $=$ & $(s, \langle \langle~\rangle, \mathrm{Leader}, \Psi_l \rangle )$ \\ \hline 7 & $\forall a \in \actions_w$, $s \in \mathcal{S}$ & $\mathcal{T}(s, \langle \bar{Q}, \mathrm{Follower}, 1\rangle, a)$ & $=$ & $(\mathcal{T}_w(s, a), \langle \bar{Q}, \mathrm{Leader}, \Psi_l\rangle)$\\ \hline \hline \multirow{3}{*}{8} & $\forall a \in \actions_w$, $s \in \mathcal{S}$ &
\multirow{3}{*}{$\mathcal{T}\left(s, \langle \bar{Q}, \alpha, \psi \rangle, a\right)$} & \multirow{3}{*}{$=$} & \multirow{3}{*}{$\left(\mathcal{T}_w(s, a), \langle \bar{Q}, \alpha, \psi - 1\rangle\right)$} \\ & $\forall\psi \in {\rm I\!N}_{>1}$ & & & \\ & $\forall \alpha \in \{ \mathrm{Leader}, \mathrm{Follower}\}$ & & & \\ \hline \end{tabular} \caption{Definition of transition function $\mathcal{T}$. $\mathcal{T}_w$ is the world state transition function.} \label{tab:transition_func} \end{table*} The transition function in \textsc{CerealBar}\xspace $\mathcal{T}: \mathcal{S} \times \Gamma \times \mathcal{A} \rightarrow \mathcal{S} \times \Gamma$ is formally defined in Table~\ref{tab:transition_func}. Each of the rules in the table is additionally associated with a domain over which it is not defined; for example, when $\alpha = \mathrm{Follower}$ and $a \in \mathcal{X}$ (i.e., the follower cannot give instructions). The rules are: \begin{enumerate}[start=1,label={\bfseries Rule \arabic*:},labelwidth=3em, itemindent=2em] \item When an instruction is issued, it is added to the end of the queue. This action does not use a step, so the number of steps remaining $\psi$ does not decrease. This rule is not defined when $\alpha = \mathrm{Follower}$ because the follower cannot give an instruction. \item When the leader ends their turn, and the queue is not empty, control switches to the follower, and the number of steps remaining in the turn is the maximum number for the follower $\Psi_f$. \item When the leader ends their turn, and the queue is empty, control does not switch to the follower; instead, a new leader turn begins with $\Psi_l$ available steps. \item When the leader runs out of remaining steps, control does not immediately switch to the follower. This allows the leader to issue more instructions before manually ending their turn or when their time runs out.
\item When the follower marks an instruction as finished, and more instructions remain in the queue, the current instruction at the head of the queue is removed. This action does not use a step. \item When the follower marks an instruction as finished, if the finished instruction was the last in the queue, control automatically switches to the leader with $\Psi_l$ remaining steps. \item When the follower runs out of steps in their turn, control immediately switches to the leader with $\Psi_l$ remaining steps. \item Both agents can take actions which modify the world state $s$. Each such action $a \in \actions_w$ costs a step. We assume access to a domain-specific transition function, $\mathcal{T}_w : \mathcal{S} \times \actions_w \rightarrow \mathcal{S}$, that describes how an environment action modifies the environment. There may exist combinations of states and actions for which $\mathcal{T}_w$ is not defined; for example, an agent moving forward onto an obstacle. Additionally, $\forall s \in \mathcal{S}$ and $ a \in \actions_w$, $\mathcal{T}(s, \langle Q, \mathrm{Leader}, 0 \rangle, a )$ results in an invalid state because, while the leader can still issue instructions after running out of steps, they cannot move. \end{enumerate} \section{Implementation and Hyperparameters} \label{sec:sup:impl} \paragraph{Hyperparameters} We tune hyperparameters on the development set. We use a word embedding size of 64, and encode instructions into a vector of length 64 using a single-layer RNN with LSTM units. We lowercase words in the vocabulary and map all words with a frequency of one in the training set to a single out-of-vocabulary token. We use a hex property embedding size of 32. $\mathbf{S}'$ has four channels. The text-based kernels map to a feature map with 24 channels. The convolution and transpose convolution phases of $\system{LingUNet}\xspace$ use kernel sizes of three. 
The action generator uses a forward RNN with a single layer consisting of 128 LSTM hidden units. The action embedding size is 32. We rotate, transform, and crop the input plan distribution to a $4 \times 5 \times 5$ feature map around the agent's current position and rotation for each generated action. $\textsc{CNN}^P$ maps the cropped distributions to a feature map with eight channels, and has a kernel size of three and stride of one. During fine-tuning, each $\mathbf{K}^\mathrm{IMP}_l$ does not have biases. For all LSTMs, we initialize the hidden state $\mathbf{h}_0$ as a zero vector. For brevity, cell memory $\mathbf{c}^D$, also initialized as a zero vector, is omitted from RNN descriptions. \paragraph{Learning} The plan prediction stage (Stage 1) includes the following parameters and parameterized components: $\phi^{\mathcal{X}}$, $\textrm{RNN}^\mathcal{X}$, $\phi^{\mathcal{S}}$, $\mathbf{W}_s$, $\mathbf{b}_s$, and $\system{LingUNet}\xspace$. The action generation stage (Stage 2) includes the following parameters and parameterized components: $\textsc{CNN}^P$, $\mathbf{W}^P_1$, $\mathbf{W}^P_2$, $\mathbf{b}^P_1$, $\mathbf{b}^P_2$, $\phi^{\mathcal{A}}$, $\textrm{RNN}^\mathcal{A}$, $\mathbf{W}^\mathcal{A}$, and $\mathbf{b}^\mathcal{A}$. We add the following parameters for the early goal prediction auxiliary objective and implicit reasoning discriminator: $\mathbf{W}^{G'}$, $\mathbf{b}^{G'}$, and $\mathbf{K}^{\rm IMP}_l$, $1<l<L$. For pretraining Stage 1, we use a learning rate of 0.0075 using \textsc{Adam}~\cite{Kingma:14adam} and an L2 coefficient of $10^{-6}$. For pretraining Stage 2 and during fine-tuning, we use a learning rate of 0.001 and \textsc{Adam} with no L2 regularization. For pretraining Stage 1 and during fine-tuning, $\lambda_V = 0.04$, $\lambda_G = 1$, $\lambda_A = 0.1$, $\lambda_P = 0.1$, and $\lambda_{G'} = 1$. During fine-tuning, $\lambda_\mathrm{IMP} = 0.7$. During evaluation, we limit the maximum action sequence length to 25.
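Collected in one place, the hyperparameter values above can be sketched as a configuration object; the class and field names are ours for illustration, not taken from any released code:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class HParams:
    # Stage 1: plan prediction
    word_embedding_size: int = 64
    instruction_rnn_size: int = 64          # single-layer LSTM
    hex_property_embedding_size: int = 32
    env_embedding_channels: int = 4         # channels of S'
    text_kernel_channels: int = 24
    lingunet_kernel_size: int = 3
    # Stage 2: action generation
    action_rnn_size: int = 128              # single-layer LSTM
    action_embedding_size: int = 32
    crop_shape: tuple = (4, 5, 5)           # cropped plan distributions
    crop_cnn_channels: int = 8
    crop_cnn_kernel_size: int = 3
    crop_cnn_stride: int = 1
    # optimization
    stage1_learning_rate: float = 0.0075
    stage2_learning_rate: float = 0.001
    max_action_sequence_length: int = 25
```

Grouping the values this way makes it easy to see which settings belong to each stage of the model.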
For all experiments, we keep 5\% of the training data held out from parameter updates and use it as a validation set. We use patience for stopping during pretraining of the plan predictor and the action generator (Section~\ref{sec:learn:pretrain}). We start with a patience of 10, which increases by a factor of 1.01 each time the stopping metric improves on the validation set. For plan prediction training, we use patience on the validation set accuracy of predicted goal locations. We compute goal location predictions by finding all positions $\rho$ such that $p(\mathrm{GOAL} = 1 \mid \rho, s_1, \bar{x}) \geq 0.5$. For action generation, we stop when card-state accuracy reaches a maximum on the validation set. For fine-tuning (Section~\ref{sec:learn:finetune}), we stop training after $25$ epochs, and choose the epoch that maximizes the proportion of points scored computed using cascaded evaluation (Section~\ref{sec:cascaded}) on the validation set. \paragraph{\system{Seq2seq+attn} Baseline} We embed the sentence tokens into 64-dimensional vectors, and compute a sentence representation using a single-layer RNN with 64 LSTM hidden units. We embed each position in the environment with a learned embedding function $\phi^\mathcal{S}$ mapping to a vector of size 32. The resulting feature map is put through four convolutional layers separated by leaky ReLU non-linearities. Each convolutional layer has a stride of two and divides the number of channels in half. The output of the last convolutional layer is flattened to a vector. We initialize the decoder hidden state to a zero-vector. In each timestep we pass in the concatenation of the embedding of the previous output, the embedded environment vector, and the previous result of the attention computation on the sentence. We take the initial attention result to be a zero vector. We compute the attention over the sentence hidden states using the dot product of each sentence hidden state with the current decoder hidden state.
The resulting attention state is concatenated with the decoder hidden state and the embedded environment vector, put through a leaky ReLU non-linearity, and finally through a single fully-connected layer to predict probabilities over actions. We train the model using teacher forcing and apply the same learning rate, optimizer, and stopping criteria as the fine-tuning experiments. \section{Learning Details} \subsection{Stage 1 Loss Computation} \label{sec:sup:learn} This section provides formal details of the loss computation used in Section~\ref{sec:learn:pretrain}. For ease of notation, we consider a single example $\bar{I} = \langle (s_1, \gamma_1, a_1), \dots, (s_n, \gamma_n, a_n)\rangle$, where the instruction at the head of the queue $\bar{Q}$ is $\bar{x}$. The loss of the visitation distribution $p(\rho \mid s_1, \bar{x})$ is: \begin{small} \begin{equation*} \mathcal{L}_V(\theta_1) = -\sum_\rho p^*_V(\rho) \log p(\rho \mid s_1, \bar{x})\;\;, \end{equation*} \end{small} \noindent where the summation is over all positions $\rho$ in the environment and $p^*_V(\rho)$ is proportional to the number of states $s_t \in \bar{I}$ where the follower is in position $\rho$. We compute the goal and avoidance distribution losses only for positions that have cards: \begin{small} \begin{eqnarray*} \mathcal{L}_G (\theta_1) =& \\ &\hspace{-1cm} -\frac{1}{W\times H}\sum_{\rho \in C} p^*_G(\rho) \log p(\mathrm{GOAL} = 1 \mid \rho, s_1, \bar{x}) \\ \mathcal{L}_A(\theta_1) =& \\ &\hspace{-1cm} -\frac{1}{W\times H}\sum_{\rho \in C} p^*_A(\rho) \log p(\mathrm{AVOID} = 1 \mid \rho, s_1, \bar{x})\;\;, \end{eqnarray*} \end{small} \noindent where $C$ is the set of positions that contain cards, $W$ is the width of the environment, and $H$ is the height. We set $p^*_G(\rho)$ to 1 for all $\rho$ that contain a card that the follower changed its selection status in $\bar{I}$, and 0 for all other positions.
Similarly, we set $p^*_A(\rho)$ to 1 for all $\rho$ that have cards that the follower does not change during the interaction $\bar{I}$, but zero for the initial position regardless of whether it contains a card. The loss for the no passing distribution is: \begin{small} \begin{eqnarray*} \mathcal{L}_P(\theta_1) =& \\ &\hspace{-1cm} -\frac{1}{W\times H}\sum_{\rho} p^*_P(\rho)\log p(\mathrm{NOPASS} = 1\mid \rho, s_1, \bar{x})\;\;, \end{eqnarray*} \end{small} \noindent where $p^*_P(\rho)$ is 1 for all positions the agent cannot move onto, and zero otherwise. The auxiliary goal-prediction loss is: \begin{small} \begin{eqnarray*} \mathcal{L}_{G'}(\theta_1) =& \\ &\hspace{-1cm} -\frac{1}{W\times H}\sum_{\rho \in C} p^*_G(\rho) \log p'_G(\mathrm{GOAL} = 1 \mid \rho, s_1, \bar{x})\;\;. \end{eqnarray*} \end{small} \noindent We compute the goal probability with the learned parameters $\mathbf{W}^{G'}$ and $\mathbf{b}^{G'}$: \begin{small} \begin{equation*} p'_G(\mathrm{GOAL} = 1 \mid \rho, s_1, \bar{x} ) = \sigma(\mathbf{W}^{G'} \mathbf{S}'_\rho + \mathbf{b}^{G'})\;\;, \end{equation*} \end{small} \noindent where $\mathbf{S}'_\rho$ is the vector along the channel dimension for position $\rho$ in the environment embedding tensor $\mathbf{S}'$. \subsection{Example Aggregation} \label{sec:app:learning} \paragraph{Error Classes} We identify two classes of erroneous states in \textsc{CerealBar}\xspace: (a) not selecting the correct set of cards specified by the instruction; and (b) finishing with the right card selection, but stopping at the wrong position. To recover from case (a), the agent could unselect cards it shouldn't have selected, or select cards it missed. Alternatively, the agent could recognize it has made an error, and instead stop and wait for the next leader instruction, anticipating a correction. However, learning this requires access to previous world states and instructions. 
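All of the Stage 1 distribution losses above share one shape: a cross-entropy summed over a masked set of positions and normalized by the grid size $W \times H$. A minimal numpy sketch (the function name and array layout are ours):

```python
import numpy as np

def masked_ce(p, target, mask):
    """Cross-entropy -sum_{rho in mask} p*(rho) log p(rho), normalized
    by the grid size W x H, as in the goal/avoid/no-pass losses.

    p, target, mask: (W, H) arrays; mask selects the positions that
    enter the sum (e.g., the positions C that contain cards).
    """
    w, h = p.shape
    return -np.sum(mask * target * np.log(p)) / (w * h)
```

The goal, avoidance, no-pass, and auxiliary goal losses then differ only in which predicted probability map and which target/mask pair are passed in.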
We focus on modification of the learning algorithm using example aggregation, and leave this case for future work. We instead target class (b), and add a discriminator to the model to allow the model to learn different reasoning for examples that require implicit actions, as discussed in Section~\ref{sec:learn:finetune}. \paragraph{Creating Recovery Examples} The oracle generates a sequence of state-action pairs to go from $s'$, the incorrect initial state from the previous instruction, to state $s_t$ at index $t$ in the correct sequence such that $s_t$ is either the first state in the sequence where a card's state changes, or if no cards are changed, the final state $s_n$. The oracle finds a sequence of state-action pairs expressing the shortest path $s'$ to $s_t$. Finally, it appends the remainder of the correct state-action sequence starting from index $t$, $\langle (s_t, \gamma_t, a_t), \dots, (s_n, \gamma_n, a_n) \rangle$. If the correct sequence for $\bar{I}^{(i, {j+1})}$ is $\langle s_n, \gamma_n, \act{DONE} \rangle$ (i.e., no action was done in the original example), we do not generate a new path, but instead use the state-action sequence $\langle s', \gamma', \act{DONE} \rangle$ as annotation for $\bar{I}^{\prime(i,j+1)}$. These examples are annotated as not requiring implicit reasoning. During inference on the previous example $\bar{I}^{(i, j)}$, it is possible that some leader actions associated with that example may not be executed (i.e., if the follower predicted $\act{DONE}$ too soon). If this happens, the leader must execute actions to `catch up' to the follower in the generated recovery example. We first find the sequence of leader actions starting from the first leader turn associated with $\bar{I}^{(i, j)}$ that was not executed during inference, to the final leader turn associated with $\bar{I}^{(i, {j+1})}$. 
When generating the recovery sequence $\bar{I}^{\prime(i,j+1)}$, we take this leader action sequence into account, since it affects the world states $s$. For example, suppose that the agent stops a turn early during inference, and the final leader's turn consisting of actions $\langle \mathtt{FORWARD}, \mathtt{FORWARD}, \mathtt{FORWARD}, \act{DONE} \rangle$ was not executed. Instead of stopping in, for example, position $(3, 0)$, this may mean the leader has stopped in position $(0, 0)$. When creating the recovery example, the first world state $s_0$ shows the leader at position $(0, 0)$ rather than $(3, 0)$. To correct this, the recovery example will start with a leader turn, where the leader executes the sequence $\langle \mathtt{FORWARD}, \mathtt{FORWARD}, \mathtt{FORWARD}, \act{DONE} \rangle$. \section{Model Architecture Details} \label{app:sec:lingunet} \paragraph{\system{LingUNet}\xspace Formal Description} We provide a formal description of \system{LingUNet} for reference only. \system{LingUNet} was originally introduced by \citet{Misra:18goalprediction} and \citet{Blukis:18drone}. The inputs to $\system{LingUNet}\xspace$ are the environment representation $\mathbf{F}_0$ and the instruction representation $\mathbf{\bar{x}}$. $\system{LingUNet}\xspace$ consists of three major stages: a series of convolutions on $\mathbf{F}_0$, a series of text-based convolutions derived from $\mathbf{\bar{x}}$, and a series of transposed convolutions to form a final prediction. The output of the $\system{LingUNet}\xspace$ is a feature map with the same width and height as $\mathbf{F}_0$. Each stage has the same number of operations, which we refer to as the depth $L$. First, a series of $L$ convolutional layers is applied to $\mathbf{F}_0$.
Each layer at depth $l$ is a sequence of two convolution operations separated by a leaky ReLU non-linearity: \begin{small} \begin{equation*} \mathbf{F}_l = \textsc{Norm}(\textsc{ReLU}(\textsc{ReLU}(\mathbf{F}_{l-1} * \mathbf{K}^C_l)* \mathbf{K}^{C \prime}_l )) \;\; . \end{equation*} \end{small} \noindent We use a stride of two when convolving with $\mathbf{K}^{C\prime}_l$, and do not apply $\textsc{Norm}$ when $l = L$. In the second stage, the instruction representation $\mathbf{\bar{x}}$ is split into $L$ segments $\mathbf{\bar{x}}_l$ such that $\mathbf{\bar{x}} = [ \mathbf{\bar{x}}_1; \dots; \mathbf{\bar{x}}_L]$ and segments have equal length. Each segment is mapped to a $1 \times 1$ kernel $\mathbf{K}^I_l$ using learned weights $\mathbf{W}^I_l$ and biases $\mathbf{b}^I_l$. $\mathbf{K}^I_l$ is normalized and used to convolve over $\mathbf{F}_l$: \begin{small} \begin{equation*} \mathbf{G}_l = \textsc{Norm}(\mathbf{F}_l * ||\mathbf{K}^I_l||_2) \;\; . \end{equation*} \end{small} \noindent As before, we do not apply $\textsc{Norm}$ when $l = L$. In the last stage, a series of transposed convolutions\footnote{We use $*^\top$ to represent the transposed convolution operation.} are applied starting from the bottom layer and gradually synthesizing a larger feature map. For $l > 1$: \begin{small} \begin{equation*} \mathbf{H}_l = \textsc{Norm}(\textsc{ReLU}(\textsc{ReLU}([\mathbf{H}_{l+1}; \mathbf{G}_l] {*^\top} \mathbf{K}^T_l) *^\top \mathbf{K}^{T\prime}_{l})) \;\;, \end{equation*} \end{small} \noindent where $[\mathbf{H}; \mathbf{G}]$ indicates channel-wise concatenation of feature maps $\mathbf{H}$ and $\mathbf{G}$, $\mathbf{H}_{L+1}$ is a zero matrix, and $\textsc{Norm}$ is not applied when $l = L$. We use a stride of two when convolving with $\mathbf{K}^{T\prime}_l$.
At the topmost layer of $\system{LingUNet}\xspace$, a final transposed convolution is applied to form a feature map $\mathbf{H}'_1$: \begin{small} \begin{equation*} \mathbf{H}'_1 = [\mathbf{H}_2; \mathbf{G}_1] *^\top \mathbf{K}^T_1 \;\; . \end{equation*} \end{small} The top layer $\mathbf{H}'_1 \in \mathbb{R}^{4 \times W \times H}$ is split into the four planning distributions as the output of the $\system{LingUNet}\xspace$. \paragraph{Frames of Reference} The world state is first embedded using a feature lookup and a text-conditioned kernel (Section~\ref{sec:model}; Input Representation). This feature map is rotated and centered to create $\mathbf{F}_0$, so that the agent's location when beginning to follow the instruction is in the center, and the agent is facing in a consistent direction. Therefore, $\system{LingUNet}\xspace$ (Section~\ref{sec:model}; Stage 1: Plan Prediction) operates over a feature map relative to the agent's frame of reference at the time of starting to follow the instruction. The action generator (Section~\ref{sec:model}; Stage 2: Action Generation) also operates on feature maps relative to the agent's frame of reference, updated as the agent moves and turns in the environment, changing its location and orientation. At each action generation prediction step, the concatenated planning distributions $\mathbf{P}$ are rotated, centered, and cropped around the agent's current position and orientation. This orientation is determined by the orientation when starting the instruction and the actions the agent has executed so far for the current instruction.
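The text-conditioned convolutions in $\system{LingUNet}\xspace$'s second stage use $1 \times 1$ kernels, so each one reduces to a per-position channel mixing. A numpy sketch of mapping one instruction segment to a kernel and applying it (all sizes, and the function name, are hypothetical):

```python
import numpy as np

def text_kernel_conv(feat, x_seg, W, b, c_out):
    """Predict a 1x1 kernel K^I from an instruction segment and
    convolve it over a feature map, as in LingUNet's second stage.

    feat:  (c_in, H, W) feature map F_l
    x_seg: (d,) instruction segment
    W, b:  (c_out * c_in, d) weights and (c_out * c_in,) biases
    """
    c_in = feat.shape[0]
    k = (W @ x_seg + b).reshape(c_out, c_in)
    k = k / np.linalg.norm(k)               # kernel normalization
    # a 1x1 convolution is a matrix product over the channel axis
    return np.einsum("oc,chw->ohw", k, feat)
```

Because the kernel has no spatial extent, the operation mixes channels independently at every position, which is why the instruction can modulate the feature map without altering its width or height.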
\section{Introduction} Recent advances in neutrino physics, mainly from astrophysical observations, suggest the existence of tiny neutrino masses. The experiments and observations have shown evidence for neutrino oscillation. The solar neutrino deficit has been observed {[}1,2,3,4], the atmospheric neutrino anomaly has been found {[}5,6,7], and these results are by now essentially confirmed by KamLAND {[}8]. They indicate that neutrinos are massive and that there is mixing in the lepton sector, which suggests that CP violation may also occur in the lepton sector. Several physicists have considered whether CP violation effects in the lepton sector can be seen through long baseline neutrino oscillation experiments. The neutrino oscillation probabilities in general depend on six parameters: two independent mass squared differences $\Delta_{21}$ and $\Delta_{31}$, three mixing angles $\theta_{12}$, $\theta_{13}$, $\theta_{23}$, and one CP violating phase $\delta$. There are two large mixing angles $\theta_{12}$, $\theta_{23}$, one small angle ($\theta_{13}$), and two mass squared differences $\Delta_{ij}=m_{j}^{2}-m_{i}^{2}$, with $m_{i}$ the neutrino masses, where \begin{equation} \Delta_{21}=\Delta_{solar},\end{equation} \begin{equation} \Delta_{31}=\Delta_{atm}.\end{equation} The angles $\theta_{12}$ and $\theta_{23}$ represent the neutrino mixing angles corresponding to solar and atmospheric neutrino oscillations. Much progress has been made towards determining the values of the three mixing angles. In this paper we discuss the effect of the Planck scale on neutrino mixing and the neutrino oscillation probability. \section{Neutrino Mixing Angles and Mass Squared Differences due to Planck Scale Effects} To calculate the effects of the perturbation on neutrino observables, we follow the calculation developed in an earlier paper {[}12].
A natural assumption is that the unperturbed ($0^{th}$ order) mass matrix $M$ is given by \begin{equation} \mathbf{M}=U^{*}diag(M_{i})U^{\dagger},\end{equation} where $U_{\alpha i}$ is the usual mixing matrix and $M_{i}$, the neutrino masses, are generated by a Grand Unified Theory. Most of the parameters related to neutrino oscillation are known; the major exception is the mixing element $U_{e3}$. We adopt the usual parametrization: \begin{equation} \frac{|U_{e2}|}{|U_{e1}|}=\tan\theta_{12},\end{equation} \begin{equation} \frac{|U_{\mu3}|}{|U_{\tau3}|}=\tan\theta_{23},\end{equation} \begin{equation} |U_{e3}|=\sin\theta_{13}.\end{equation} In terms of the above mixing angles, the mixing matrix is \begin{equation} U=diag(e^{if_{1}},e^{if_{2}},e^{if_{3}})R(\theta_{23})\Delta R(\theta_{13})\Delta^{*}R(\theta_{12})diag(e^{ia_{1}},e^{ia_{2}},1).\end{equation} The matrix $\Delta=diag(e^{\frac{i\delta}{2}},1,e^{\frac{-i\delta}{2}})$ contains the Dirac phase $\delta$, which leads to CP violation in neutrino oscillations. $a_{1}$ and $a_{2}$ are the so-called Majorana phases, which affect neutrinoless double beta decay. $f_{1}$, $f_{2}$ and $f_{3}$ are usually absorbed as part of the definition of the charged lepton fields.
Planck scale effects add a further contribution to the mass matrix, so that the new mixing matrix can be written as {[}12] \[ U^{'}=U(1+i\delta\theta),\] \begin{equation} =\left(\begin{array}{ccc} U_{e1} & U_{e2} & U_{e3}\\ U_{\mu1} & U_{\mu2} & U_{\mu3}\\ U_{\tau1} & U_{\tau2} & U_{\tau3}\end{array}\right)+i\left(\begin{array}{ccc} U_{e2}\delta\theta_{12}^{*}+U_{e3}\delta\theta_{13}^{*} & U_{e1}\delta\theta_{12}+U_{e3}\delta\theta_{23}^{*} & U_{e1}\delta\theta_{13}+U_{e2}\delta\theta_{23}\\ U_{\mu2}\delta\theta_{12}^{*}+U_{\mu3}\delta\theta_{13}^{*} & U_{\mu1}\delta\theta_{12}+U_{\mu3}\delta\theta_{23}^{*} & U_{\mu1}\delta\theta_{13}+U_{\mu2}\delta\theta_{23}\\ U_{\tau2}\delta\theta_{12}^{*}+U_{\tau3}\delta\theta_{13}^{*} & U_{\tau1}\delta\theta_{12}+U_{\tau3}\delta\theta_{23}^{*} & U_{\tau1}\delta\theta_{13}+U_{\tau2}\delta\theta_{23}\end{array}\right),\end{equation} where $\delta\theta$ is a hermitian matrix that is first order in $\mu$ {[}12,13]. The first order mass squared differences $\Delta M_{ij}^{2}=M_{i}^{2}-M_{j}^{2}$ get modified {[}12,13] as \begin{equation} \Delta M_{ij}^{'^{2}}=\Delta M_{ij}^{2}+2(M_{i}Re(m_{ii})-M_{j}Re(m_{jj})),\end{equation} where \[ m=\mu U^{t}\lambda U,\] \[ \mu=\frac{v^{2}}{M_{pl}}=2.5\times10^{-6}eV.\] The change in the elements of the mixing matrix, which we parametrize by $\delta\theta$ {[}12], is given by \begin{equation} \delta\theta_{ij}=\frac{iRe(m_{ij})(M_{i}+M_{j})-Im(m_{ij})(M_{i}-M_{j})}{\Delta M_{ij}^{'^{2}}}.\end{equation} The above equation determines only the off diagonal elements of the matrix $\delta\theta_{ij}$. The diagonal elements of $\delta\theta_{ij}$ can be set to zero by phase invariance.
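As a numerical illustration of Eqs. (9) and (10), the sketch below evaluates $m$, $\Delta M_{ij}^{'^{2}}$ and $\delta\theta_{ij}$ for a nearly degenerate spectrum with a real tri-bimaximal $U$; the Planck-scale coupling matrix $\lambda$ is not fixed by the theory, so purely for illustration we take $\lambda_{ab}=1$:

```python
import numpy as np

MU = 2.5e-6                              # mu = v^2 / M_pl in eV
M = np.array([2.0, np.sqrt(4 + 8e-5), np.sqrt(4 + 2e-3)])   # masses, eV

# real tri-bimaximal mixing matrix (all phases set to zero)
U = np.array([[ np.sqrt(2/3),  1/np.sqrt(3), 0],
              [-1/np.sqrt(6),  1/np.sqrt(3), 1/np.sqrt(2)],
              [ 1/np.sqrt(6), -1/np.sqrt(3), 1/np.sqrt(2)]])

lam = np.ones((3, 3))                    # illustrative O(1) coupling
m = MU * U.T @ lam @ U                   # perturbation in the mass basis

def dM2(i, j):
    """Modified mass squared difference, Eq. (9)."""
    return M[i]**2 - M[j]**2 + 2*(M[i]*m[i, i].real - M[j]*m[j, j].real)

# off-diagonal elements of delta-theta, Eq. (10); diagonal set to zero
dtheta = np.zeros((3, 3), dtype=complex)
for i in range(3):
    for j in range(3):
        if i != j:
            dtheta[i, j] = (1j*m[i, j].real*(M[i] + M[j])
                            - m[i, j].imag*(M[i] - M[j])) / dM2(i, j)
```

With these inputs $|\delta\theta_{12}|$ comes out around $0.06$ rad ($\approx 3.5^{o}$), an order of magnitude larger than $|\delta\theta_{13}|$, reflecting the small $\Delta M_{21}^{'^{2}}$ denominator.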
Using Eq.(8), we can calculate the neutrino mixing angles including Planck scale effects: \begin{equation} \frac{|U_{e2}^{'}|}{|U_{e1}^{'}|}=\tan\theta_{12}^{'},\end{equation} \begin{equation} \frac{|U_{\mu3}^{'}|}{|U_{\tau3}^{'}|}=\tan\theta_{23}^{'},\end{equation} \begin{equation} |U_{e3}^{'}|=\sin\theta_{13}^{'}.\end{equation} For degenerate neutrinos, $M_{3}-M_{1}\cong M_{3}-M_{2}\gg M_{2}-M_{1}$, because $\Delta_{31}\cong\Delta_{32}\gg\Delta_{21}$. Thus, from the above set of equations, we see that the changes in $U_{e1}^{'}$ and $U_{e2}^{'}$ are much larger than those in $U_{e3}^{'}$, $U_{\mu3}^{'}$ and $U_{\tau3}^{'}$. Hence we can expect a much larger change in $\theta_{12}$ compared to $\theta_{13}$ and $\theta_{23}$. As one can see from the above expressions, the mixing angles including Planck scale effects depend on the new mixing matrix $U^{'}=U(1+i\delta\theta)$. We assume that, just above the electroweak breaking scale, the neutrino masses are nearly degenerate and the mixing is tri-bimaximal, with the values of the mixing angles as $\theta_{12}=35^{o}$, $\theta_{23}=\pi/4$ and $\theta_{13}=0$. We take the common degenerate neutrino mass to be 2 eV, which is the upper limit coming from tritium beta decay {[}9]. We compute the modified mixing angles using Eqs (11)-(13). We have taken $\Delta_{31}=0.002eV^{2}$ {[}10] and $\Delta_{21}=0.00008eV^{2}$ {[}11]. For simplicity we have set the charged lepton phases $f_{1}=f_{2}=f_{3}=0$. Since we have set $\theta_{13}=0$, the Dirac phase $\delta$ drops out of the zeroth order mixing matrix. In the next section, we discuss the neutrino oscillation probability under Planck scale effects. \section{Neutrino Oscillation Probability Under Planck Scale Effects} The flux of solar neutrinos observed by the Homestake detector was one third of that predicted by the Standard Solar Model (SSM). The phenomenon of neutrino oscillation can be used to explain this neutrino deficit.
Suppose an electron neutrino is produced at $t=0$. In terms of the mass eigenstates, at $t=0$ \begin{equation} |\nu(0)>=|\nu_{e}>=\cos\theta_{12}|\nu_{1}(0)>+\sin\theta_{12}|\nu_{2}(0)>.\end{equation} After a time $t$ it becomes \begin{equation} |\nu(t)>=\cos\theta_{12}e^{-iE_{1}t}|\nu_{1}(0)>+\sin\theta_{12}e^{-iE_{2}t}|\nu_{2}(0)>.\end{equation} Then the oscillation probability becomes \begin{equation} P(\nu_{e}\rightarrow\nu_{\mu})=\sin^{2}2\theta_{12}\sin^{2}\left(\frac{1.27\Delta_{21}L}{E}\right),\end{equation} and the survival probability is \begin{equation} P(\nu_{e}\rightarrow\nu_{e})=1-\sin^{2}2\theta_{12}\sin^{2}\left(\frac{1.27\Delta_{21}L}{E}\right).\end{equation} In the above two equations the unit of $\Delta_{21}=m_{2}^{2}-m_{1}^{2}$ is $eV^{2}$, L (the baseline length) is in meters and E is the neutrino energy in MeV. For maximal oscillation the phase term in Eq.(16), $\left(\frac{1.27\Delta_{21}L}{E}\right)$, equals $\frac{\pi}{2}$; the oscillation probability then depends only on $\theta_{12}$: \begin{equation} P(\nu_{e}\rightarrow\nu_{\mu})=\sin^{2}2\theta_{12}.\end{equation} The oscillation probability including Planck scale effects is \begin{equation} P(\nu_{e}\rightarrow\nu_{\mu})=\sin^{2}2\theta_{12}^{'},\end{equation} where $\theta_{12}^{'}$ is the mixing angle modified by Planck scale effects. \section{Numerical Results} We assume that, just above the electroweak breaking scale, the neutrino masses are nearly degenerate and the mixing is tri-bimaximal, with the values of the mixing angles as $\theta_{12}=35^{o}$, $\theta_{23}=\pi/4$ and $\theta_{13}=0$. We take the common degenerate neutrino mass to be 2 eV, which is the upper limit coming from tritium beta decay {[}9]. We compute the modified mixing angles using Eqs (11)-(13). We have taken $\Delta_{31}=0.002eV^{2}$ {[}10] and $\Delta_{21}=0.00008eV^{2}$ {[}11].
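The two-flavor formulas Eqs. (16)-(18) are straightforward to evaluate numerically; in the sketch below the baseline is chosen, purely as an illustration, so that the phase equals $\pi/2$ and Eq. (18) applies:

```python
import numpy as np

def p_transition(theta12_deg, dm2_ev2, L_m, E_mev):
    """nu_e -> nu_mu two-flavor probability, Eq. (16).
    dm2 in eV^2, baseline L in meters, energy E in MeV."""
    theta = np.radians(theta12_deg)
    phase = 1.27 * dm2_ev2 * L_m / E_mev
    return np.sin(2 * theta)**2 * np.sin(phase)**2

def p_survival(theta12_deg, dm2_ev2, L_m, E_mev):
    """nu_e survival probability, Eq. (17)."""
    return 1.0 - p_transition(theta12_deg, dm2_ev2, L_m, E_mev)

# baseline giving maximal oscillation (phase = pi/2) for E = 1 MeV
E_MEV, DM2 = 1.0, 8e-5
L_MAX = (np.pi / 2) * E_MEV / (1.27 * DM2)
```

At this baseline the transition probability reduces to $\sin^{2}2\theta_{12}$, which is the quantity shifted by the Planck scale correction to $\theta_{12}$.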
For simplicity we have set the charged lepton phases $f_{1}=f_{2}=f_{3}=0$. Since we have set $\theta_{13}=0$, the Dirac phase $\delta$ drops out of the zeroth order mixing matrix. We compute the modified mixing angles as functions of $a_{1}$ and $a_{2}$ using Eq.(11). In Table 1, we list the modified neutrino mixing angle $\theta_{12}^{'}$ and the maximum $P(\nu_{e}\rightarrow\nu_{\mu})$ oscillation probability for a sample of values of $a_{1}$ and $a_{2}$. \begin{table} \begin{tabular}{|c|c|c|c|} \hline $a_{1}$ & $a_{2}$ & $\theta_{12}^{'}$ & $P(\nu_{e}\rightarrow\nu_{\mu})=\sin^{2}2\theta_{12}^{'}$\tabularnewline \hline \hline $0^{o}$ & $0^{o}$ & $36.63^{o}$ & $0.94$\tabularnewline \hline $0^{o}$ & $45^{o}$ & $36.82^{o}$ & $0.92$\tabularnewline \hline $0^{o}$ & $90^{o}$ & $34.99^{o}$ & $0.88$\tabularnewline \hline $0^{o}$ & $135^{o}$ & $36.88^{o}$ & $0.87$\tabularnewline \hline $0^{o}$ & $180^{o}$ & $38.51^{o}$ & $0.94$\tabularnewline \hline $45^{o}$ & $0^{o}$ & $36.63^{o}$ & $0.91$\tabularnewline \hline $45^{o}$ & $45^{o}$ & $34.97^{o}$ & $0.96$\tabularnewline \hline $45^{o}$ & $90^{o}$ & $33.26^{o}$ & $0.84$\tabularnewline \hline $45^{o}$ & $135^{o}$ & $35.09^{o}$ & $0.88$\tabularnewline \hline $45^{o}$ & $180^{o}$ & $36.63^{o}$ & $0.91$\tabularnewline \hline $90^{o}$ & $0^{o}$ & $35^{o}$ & $0.88$\tabularnewline \hline $90^{o}$ & $45^{o}$ & $33.43^{o}$ & $0.84$\tabularnewline \hline $90^{o}$ & $90^{o}$ & $31.77^{o}$ & $0.80$\tabularnewline \hline $90^{o}$ & $135^{o}$ & $33.49^{o}$ & $0.84$\tabularnewline \hline $90^{o}$ & $180^{o}$ & $35^{o}$ & $0.88$\tabularnewline \hline $135^{o}$ & $0^{o}$ & $36.63^{o}$ & $0.91$\tabularnewline \hline $135^{o}$ & $45^{o}$ & $35.04^{o}$ & $0.89$\tabularnewline \hline $135^{o}$ & $90^{o}$ & $33.26^{o}$ & $0.84$\tabularnewline \hline $135^{o}$ & $135^{o}$ & $35.02^{o}$ & $0.88$\tabularnewline \hline $135^{o}$ & $180^{o}$ & $36.63^{o}$ & $0.91$\tabularnewline \hline $180^{o}$ & $0^{o}$ & $38.51^{o}$ & $0.94$\tabularnewline \hline $180^{o}$
& $45^{o}$ & $36.82^{o}$ & $0.92$\tabularnewline \hline $180^{o}$ & $90^{o}$ & $34.99^{o}$ & $0.92$\tabularnewline \hline $180^{o}$ & $135^{o}$ & $36.88^{o}$ & $0.87$\tabularnewline \hline $180^{o}$ & $180^{o}$ & $38.51^{o}$ & $0.94$\tabularnewline \hline \end{tabular} \caption{Modified mixing angles and maximum $P(\nu_{e}\rightarrow\nu_{\mu})$ oscillation probabilities for some sample of $a_{1}$ and $a_{2}$. Input values are $\Delta_{31}=0.002eV^{2}$, $\Delta_{21}=0.00008eV^{2}$, $\theta_{12}=35^{o}$, $\theta_{23}=45^{o}$, $\theta_{13}=0^{o}$.} \end{table} From Table 1, we see that Planck scale effects change $\theta_{12}$ from the tri-bimaximal value of $\theta_{12}=35^{o}$ to a value close to the best fit value of the data {[}15,16]. The Planck scale effects give rise to corrections to the neutrino mass matrix at electroweak symmetry breaking. It is imperative to check that these corrections do not spoil the good agreement between the experimental fits and the predictions of the tri-bimaximal mixing scenario. It is expected that dynamics at a higher scale generates the neutrino mass matrix, which eventually provides the presently observed neutrino masses and mixings. In an attractive scenario, the neutrino mixing pattern generated by the high scale dynamics is predicted to be tri-bimaximal. However, the solar neutrino data show that the mixing angle $\theta_{12}$ is substantially less than $35^{o}$. It is argued in the literature that renormalization group evolution effects from the higher scale to the electroweak scale can bring down the value of $\theta_{12}$ from $35^{o}$ to a value which is within the experimentally acceptable range. However, for a large range of neutrino parameters, the renormalization group evolution leads to negligible change in the neutrino mass matrix. It then becomes imperative to search for alternate mechanisms by which the necessary reduction in $\theta_{12}$ can be achieved.
\section{Conclusions} In this paper, we studied how Planck scale effects modify the neutrino mixing angles and oscillation probabilities. The effective dimension-5 operator from the Planck scale {[}12] leads to corrections to the neutrino mass matrix at the electroweak symmetry breaking scale. We computed the change in the mixing angles due to the additional mass terms for the case of tri-bimaximal mixing. The change in $\theta_{12}$ is more than $3^{\circ}$ from the tri-bimaximal value, and the corresponding maximum change in the oscillation probability is about 10\%. This change of $\theta_{12}$ occurs, of course, for degenerate neutrino masses with a common mass of about 2 eV. Cosmological constraints from the WMAP measurement {[}14] impose an upper limit of 0.7 eV on the neutrino mass; the change in the value of $\theta_{12}$ is then smaller. One summarizing statement of this work might be the following: due to Planck scale effects only $\theta_{12}$ deviates appreciably, by about $3.5^{\circ}$, the other mixing angles receive very small deviations, and the maximum change of the $P(\nu_{e}\rightarrow\nu_{\mu})$ oscillation probability is about 10\%; this follows from our calculation for {}``tri-bimaximal'' neutrino mixing.
\section{Introduction} The Tight Binding Model (TBM) \textrm{\cite{1,2,3,4}} is a particular lattice QFT \textrm{\cite{5,6,7,8}} modeling couplings between quantum states living at close neighboring sites. The interactions are modeled in terms of hops of crystal particles or holes, and bring together issues from High Energy Physics and Condensed Matter \textrm{\cite{9A,9B,10,11,12,13,14}}. Despite the restriction to first nearest neighbor interactions, studies based on the TBM have been shown to capture the main information on the physical properties of lattice systems; like in graphene \textrm{\cite{15,16,17}}, whose basic physical properties have been related to QED in $\left( 2+1\right) $ dimensions; for reviews see \textrm{\cite{9A,9B}, refs therein and \cite{18,19}}.\newline In this paper, we use the TBM to engineer a broad class of lattice QFTs that are based on ADE Lie algebras \textrm{\cite{191,192}} and their basic representations \textrm{\cite{193,194}}. These engineered lattice systems classify the electronic properties of the acetylene chain as a $su\left( 2\right) $ model and graphene as a $su\left( 3\right) $ lattice model, and may have applications in other fields; in particular in QFT on non commutative geometry \textrm{\cite{20,21A,21B}}, where space time is viewed as a crystal, and in the special subset of conformal field models based on affine Kac-Moody invariance and vertex operators \textrm{\cite{22,23,24}}. \newline To fix the ideas, let us describe briefly the main lines of the construction in the case of the series $A_{N-1}\simeq su\left( N\right) $ \textrm{\cite{231}}, which is generated by $\left( N-1\right) $ commuting Cartan generators $h^{i}$ and $N\left( N-1\right) $ step operators $E^{\pm \beta }$ where the vectors $\mathbf{\beta }=\left( \beta ^{1},...,\beta ^{N-1}\right) $ stand for the positive roots of $su\left( N\right) $.
As a QFT on a crystal, our $su\left( N\right) $ lattice model involves the following two basic ingredients:\newline (\textbf{1}) \emph{the lattice} $\mathcal{L}_{su\left( N\right) }$:\newline It is made by the superposition of two sublattices $\mathcal{A}_{su\left( N\right) }$ and $\mathcal{B}_{su\left( N\right) }$ generated by the $su\left( N\right) $ simple roots as in eqs(\ref{z1}). This $\left( N-1\right) $-dimensional lattice extends the \emph{1D} chain and the well known \emph{2D} honeycomb of graphene corresponding to $N=2$ and $N=3$ respectively; see figures (\ref{ch}), (\ref{co}) for illustration. \newline Each $\mathbf{r}_{m}$ site of $\mathcal{L}_{su\left( N\right) }$; say $\mathbf{r}_{m}\in \mathcal{A}_{su\left( N\right) }$, has $N$ first nearest neighbors at $\left( \mathbf{r}_{m}+\mathbf{v}_{i}\right) \in \mathcal{B}_{su\left( N\right) }$ and $N\left( N-1\right) $ second nearest neighbors at $\left( \mathbf{r}_{m}+\mathbf{V}_{ij}\right) \in \mathcal{A}_{su\left( N\right) }$ with the two remarkable relations \begin{equation} \begin{tabular}{lllll} {\small 1st nearest} & : & $\mathbf{v}_{0}+\mathbf{v}_{1}+\ldots +\mathbf{v}_{N-1}=0$ & , & {\small (a)} \\ {\small 2nd nearest} & : & $\mathbf{V}_{ij}=\mathbf{v}_{i}-\mathbf{v}_{j}$ & , & {\small (b)} \end{tabular} \label{bti} \end{equation} that respectively have an interpretation in terms of the weight vectors of the fundamental and the adjoint representations of $su\left( N\right) $. \newline Recall that the weight vectors $\mathbf{\mu }_{i}$ (resp. $\mathbf{\beta }_{ij}=\mathbf{\mu }_{i}-\mathbf{\mu }_{j}$) of the complex $N$-dimensional fundamental (resp.
adjoint) representations of $su\left( N\right) $ obey the following relations \begin{equation} \begin{tabular}{lllll} {\small fundamental} & : & $\mathbf{\mu }_{0}+\mathbf{\mu }_{1}+\ldots +\mathbf{\mu }_{N-1}=0$ & , & {\small (a)} \\ {\small adjoint} & : & $\mathbf{\beta }_{ij}=\mathbf{\mu }_{i}-\mathbf{\mu }_{j}$ & , & {\small (b)} \end{tabular} \label{bt} \end{equation} which are analogous to (\ref{bti}); we therefore solve the latter by taking $\mathbf{v}_{i}=a\mathbf{\mu }_{i}$ and $\mathbf{V}_{ij}=a\mathbf{\beta }_{ij}$ with a constant $a$ to be interpreted later on. \newline Notice that the constraint eqs $\mathbf{\mu }_{0}+\mathbf{\mu }_{1}+\ldots +\mathbf{\mu }_{N-1}=0$ and similarly $\sum \mathbf{\beta }_{ij}=0$ are also important from the physical side since they are interpreted in terms of the conservation of the total momentum of the outgoing and incoming waves at each site of $\mathcal{L}_{su\left( N\right) }$ \begin{equation} \begin{tabular}{ll} $\mathbf{p}_{1}+\ldots +\mathbf{p}_{N-1}+\mathbf{p}_{0}=0$ & \end{tabular} \end{equation} (\textbf{2}) \emph{the hamiltonian} $\mathcal{H}_{su\left( N\right) }$\newline Denoting by $F_{\mu _{i}}$ (resp. $G_{\beta _{ij}}$) the field operators generating the hops of the particles/holes between the site $\mathbf{r}_{m}$ and $\mathbf{r}_{m}+\mathbf{v}_{i}$ (resp. $\mathbf{r}_{m}+\mathbf{V}_{ij}$), the proposed hamiltonian $\mathcal{H}_{su\left( N\right) }$ describing the quantum state couplings up to second nearest neighbors on $\mathcal{L}_{su\left( N\right) }$ reads as follows \begin{equation} \begin{tabular}{lll} $\mathcal{H}_{su\left( N\right) }=$ & $-t_{1}\left( \dsum\limits_{\substack{ \text{weights of } \\ \text{fund of }su\left( N\right) }}F_{\mu _{k}}+hc\right) -t_{2}\left( \dsum\limits_{su\left( N\right) \text{ roots}}G_{\beta _{ij}}+hc\right) $ & \end{tabular} \label{A1} \end{equation} where $t_{1}$, $t_{2}$ are hop energies.
The fermionic field realisations of $F_{\mu _{k}}$, $G_{\beta _{ij}}$ are given by eqs(\ref{B1}). \newline The proposed $\mathcal{H}_{su\left( N\right) }$ depends on the $su\left( N\right) $ algebra representation quantities, namely the weight vectors $\mathbf{\mu }_{i}$ of the fundamental of $su\left( N\right) $ and its roots $\mathbf{\beta }_{ij}$. This property leads a priori to an energy spectrum of $\mathcal{H}_{su\left( N\right) }$ completely characterized by the wave vector $\mathbf{k}$, the weights $\mathbf{\mu }_{i}$ and the roots $\mathbf{\beta }_{ij}$; but as we will see, the $\mathbf{\mu }_{i}$ dependence is implicit and appears only through the roots. Such results are also shown to extend naturally to the $so\left( 2N\right) $ lattice models.\newline The presentation is as follows: In section 2, we develop our proposal for the case of lattice models based on $su\left( N\right) $ Lie algebras. In section 3, we consider the $su\left( 2\right) $ and $su\left( 3\right) $ models describing respectively the electronic properties of the acetylene chain and graphene. In section 4, we deepen the $su\left( 4\right) $ lattice model and in section 5 we give our conclusion and further comments regarding the DE extension. \section{The proposal: $su\left( N\right) $ model} In this section, we develop our proposal by first building the real lattice $\mathcal{L}_{su\left( N\right) }$ that is associated with the hamiltonian (\ref{A1}), referred to as the $su\left( N\right) $ lattice model. Then, we give a QFT realization of the field operators $F_{\mu _{i}}$ and $G_{\beta _{ij}}$ using free fermionic fields on $\mathcal{L}_{su\left( N\right) }$. We also give the energy dispersion relation $\varepsilon _{su\left( N\right) }\left( \mathbf{k}\right) $ in terms of the wave vector $\mathbf{k}$, the weights $\mathbf{\mu }_{i}$ and the roots $\mathbf{\beta }_{ij}$.
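The weight data entering (\ref{bti}) can be generated numerically; a minimal sketch, assuming the standard realisation of the $su\left( N\right) $ fundamental weights as projections of the $\mathbb{R}^{N}$ basis vectors onto the traceless hyperplane (so that the weights sum to zero and the roots $\mathbf{\beta }_{ij}=\mathbf{\mu }_{i}-\mathbf{\mu }_{j}$ have squared length $2$):

```python
import itertools

def fundamental_weights(N):
    """Weights mu_i of the su(N) fundamental: e_i minus the common mean,
    so that they sum to zero, as in eq. (bt)-a."""
    mean = 1.0 / N
    return [[(1.0 if i == j else 0.0) - mean for j in range(N)] for i in range(N)]

def roots(N):
    """Roots beta_ij = mu_i - mu_j of the adjoint, as in eq. (bt)-b (i != j)."""
    mu = fundamental_weights(N)
    return [[a - b for a, b in zip(mu[i], mu[j])]
            for i, j in itertools.permutations(range(N), 2)]

N = 4
mu = fundamental_weights(N)
# The weights sum to zero component by component.
print([round(sum(col), 12) for col in zip(*mu)])   # [0.0, 0.0, 0.0, 0.0]
# There are N(N-1) roots, all of squared length 2.
print(len(roots(N)), {round(sum(c * c for c in b), 12) for b in roots(N)})
```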
\subsection{Building the lattice $\mathcal{L}_{su\left( N\right) }$} The lattice $\mathcal{L}_{su\left( N\right) }$ is a real $\left( N-1\right) $-dimensional crystal with two superposed integral sublattices $\mathcal{A}_{su\left( N\right) }$ and $\mathcal{B}_{su\left( N\right) }$; each site $\mathbf{r}_{\mathbf{m}}$ of these sublattices is generated by the $su\left( N\right) $ simple roots $\mathbf{\alpha }_{1},...,\mathbf{\alpha }_{N-1}$; \begin{equation} \begin{tabular}{llll} $\mathbf{r}_{\mathbf{m}}$ & $=$ & $m_{1}\mathbf{\alpha }_{1}+m_{2}\mathbf{\alpha }_{2}+...+m_{N-1}\mathbf{\alpha }_{N-1}$ & \end{tabular} \label{z1} \end{equation} with $m_{i}$ integers; for illustration, see the schemas (a), (b), (c) of the figure (\ref{123}) corresponding respectively to $N=2,3,4$, which may be put in one-to-one correspondence with the $sp^{1}$, $sp^{2}$ and $sp^{3}$ hybridizations of the carbon atom orbitals $2s$ and $2p$. \begin{figure}[tbph] \begin{center} \hspace{0cm} \includegraphics[width=12cm]{1230} \end{center} \par \vspace{-0.5 cm} \caption{(\textbf{a}) 1A+2B lattice sites of $\mathcal{L}_{su\left( 2\right) }$; A-type in blue and B-type in red; the 2B form a $su\left( 2\right) $ doublet. (\textbf{b}) 1A+3B sites of $\mathcal{L}_{su\left( 3\right) }$; the 3B form a $su\left( 3\right) $ triplet. (\textbf{c}) 1A+4B sites of $\mathcal{L}_{su\left( 4\right) }$ with the 4B sites forming a regular tetrahedron. } \label{123} \end{figure} On each lattice site $\mathbf{r}_{m}$ of $\mathcal{L}_{su\left( N\right) }$, say of A-type, lives a quantum state $A\left( \mathbf{r}_{m}\right) $ coupled to the nearest neighbor states; in particular the first nearest states $B\left( \mathbf{r}_{m}+\mathbf{v}_{i}\right) $ and the second nearest ones $A\left( \mathbf{r}_{m}+\mathbf{V}_{ij}\right) $.
\newline Generally, generic sites in $\mathcal{L}_{su\left( N\right) }$ have the following properties:\newline (\textbf{1}) $N$ first nearest neighbors with relative position vectors $\mathbf{v}_{i}$ constrained as \begin{equation} \begin{tabular}{lll} $+\mathbf{v}_{0}+\mathbf{v}_{1}+\ldots +\mathbf{v}_{N-1}$ & $=0$ & \end{tabular} \label{21} \end{equation} or equivalently \begin{equation} \begin{tabular}{lll} $-\mathbf{v}_{0}-\mathbf{v}_{1}-\ldots -\mathbf{v}_{N-1}$ & $=0$ & \end{tabular} \end{equation} respectively related with the fundamental \underline{$\mathbf{N}$} and anti-fundamental $\mathbf{\bar{N}}$ representations of $su\left( N\right) $. Indeed, by using (\ref{bt}-a), these constraint relations are solved in terms of the $su\left( N\right) $ weight vectors $\mathbf{\mu }_{i}$ (resp. $-\mathbf{\mu }_{i}$) of the fundamental (anti-fundamental) representation as follows \begin{equation} \begin{tabular}{lllll} $\mathbf{v}_{i}$ & $=a\mathbf{\mu }_{i}$ & $\equiv $ & $d\frac{\mathbf{\mu }_{i}}{\left\Vert \mathbf{\mu }_{i}\right\Vert }$ & \end{tabular} \label{22} \end{equation} where $d$ is the relative distance between the closest $\mathcal{L}_{su\left( N\right) }$ sites. From the QFT view, this means that the quantum states at the $\mathbf{r}_{m}+\mathbf{v}_{i}$ sites are labeled by the $\mathbf{\mu }_{i}$ weights as \begin{equation} \begin{tabular}{lll} $B\left( \mathbf{r}_{m}+\mathbf{v}_{i}\right) $ & $\equiv B_{\mathbf{\mu }_{i}}\left( \mathbf{r}_{m}\right) $ & \end{tabular} \end{equation} and so the multiplet \begin{equation} \begin{tabular}{ll} $\left( \begin{array}{c} |B_{\mathbf{\mu }_{0}}> \\ \vdots \\ |B_{\mathbf{\mu }_{N-1}}> \end{array} \right) $ & \end{tabular} \end{equation} transforms in the fundamental representation of $su\left( N\right) $ and its conjugate in the anti-fundamental.
\newline (\textbf{2}) $N\left( N-1\right) $ second nearest neighbors of A-type with relative position vectors $\mathbf{V}_{ij}$ given by eq(\ref{bti}-b) and obeying the constraint relation \begin{equation} \begin{tabular}{lll} $\dsum\limits_{i,j}\mathbf{V}_{ij}$ & $=0$ & \end{tabular} \label{cd} \end{equation} This condition is naturally solved by (\ref{bt}-a) and (\ref{22}), showing that the relative vectors between second nearest neighbors are proportional to the $su\left( N\right) $ roots $\mathbf{\beta }_{ij}$ like \begin{equation} \begin{tabular}{lll} $\mathbf{V}_{ij}=a\mathbf{\beta }_{ij}$ & , & $\mathbf{\beta }_{ij}=\mathbf{\mu }_{i}-\mathbf{\mu }_{j}$ \end{tabular} \label{26} \end{equation} and so the condition (\ref{cd}) turns into a $su\left( N\right) $ property on its adjoint representation labeled by the roots. \subsection{More on $\mathcal{L}_{su\left( N\right) }$} To get more insight into the structure of the lattice $\mathcal{L}_{su\left( N\right) }$, it is useful to recall some basic results on $su\left( N\right) $ \textrm{\cite{231}}. \newline This algebra has $\frac{N\left( N-1\right) }{2}$ positive roots $\mathbf{\beta }_{ij}$ with $i>j$, which we denote collectively as $+\mathbf{\beta }$, and $\frac{N\left( N-1\right) }{2}$ negative ones $-\mathbf{\beta }$ so that the sum over all the roots is zero \begin{equation} \begin{tabular}{lll} $\dsum\limits_{\func{positive}\text{ roots}}\mathbf{\beta }+\dsum\limits_{\func{negative}\text{ roots}}\mathbf{\beta }$ & $=\mathbf{0}$ & \end{tabular} \end{equation} This property, which captures (\ref{cd}), is precisely the analog of eq(\ref{bti}-a) for the case of the adjoint representation of $su\left( N\right) $.
\newline Moreover, the $\pm \mathbf{\beta }$ roots have the same length $\mathbf{\beta }^{2}=2$ and are given by positive/negative integral combinations of the $\left( N-1\right) $ simple roots $\mathbf{\alpha }_{1},...,\mathbf{\alpha }_{N-1}$ \begin{equation} \begin{tabular}{llll} $\pm \mathbf{\beta }=\pm \dsum\limits_{i}l_{i}\mathbf{\alpha }_{i}$ & , & $l_{i}\in \mathbb{Z}_{+}$ & \end{tabular} \label{28} \end{equation} Notice that the simple roots $\mathbf{\alpha }_{i}$ are basic objects in Lie algebras; they capture several pieces of information. In particular, they allow one to define the fundamental weight vectors $\mathbf{\lambda }_{i}$ obeying \begin{equation} \begin{tabular}{ll} $\mathbf{\lambda }_{i}.\mathbf{\alpha }_{j}=\delta _{ij}$ & \end{tabular} \label{29} \end{equation} and give as well the intersection matrix \begin{equation} \begin{tabular}{llllll} $\mathbf{K}_{ij}=\frac{2\left( \mathbf{\alpha }_{i},\mathbf{\alpha }_{j}\right) }{\left( \mathbf{\alpha }_{i},\mathbf{\alpha }_{i}\right) }$ & $=$ & $\mathbf{\alpha }_{i}.\mathbf{\alpha }_{j}$ & , & $\mathbf{\alpha }_{i}.\mathbf{\alpha }_{i}=2$ & \end{tabular} \end{equation} encoding all the data on the Lie algebra properties of $su\left( N\right) $. This matrix is real and symmetric, reading as \begin{equation} \begin{tabular}{ll} $\mathbf{K}_{ij}=\left( \begin{array}{cccccc} 2 & -1 & 0 & \cdots & 0 & 0 \\ -1 & 2 & -1 & & 0 & 0 \\ 0 & -1 & 2 & & 0 & 0 \\ \vdots & & & \ddots & & \vdots \\ 0 & 0 & 0 & & 2 & -1 \\ 0 & 0 & 0 & \cdots & -1 & 2 \end{array} \right) _{r\times r}$ & \end{tabular} \end{equation} with rank $r=\left( N-1\right) $. \newline Notice also that $su\left( N\right) $ has $\left( N^{2}-1\right) $ dimensions, generated by the $r$ commuting Cartan operators $h^{1},...,h^{r}$ giving the charge vectors of the $su\left( N\right) $ states; and by the step operators $E^{\pm \beta }$ allowing one to hop between the states of a representation.
These operators obey the commutation relations \begin{equation} \begin{tabular}{lll} $\left[ h^{i},h^{j}\right] $ & $=$ & $0$ \\ $\left[ h^{i},E^{\beta }\right] $ & $=$ & $\beta ^{i}E^{\beta }$ \\ $\left[ E^{-\beta },E^{\beta }\right] $ & $=$ & $\frac{2}{\beta ^{2}}\beta .h $ \\ $\left[ E^{\alpha },E^{\beta }\right] $ & $=$ & $\varepsilon _{\alpha \beta }E^{\alpha +\beta }\text{ \ \ \ \ if \ }\alpha +\beta \ \text{is a root}$ \end{tabular} \end{equation} and are used to construct highest weight state representations (HWR) with highest state $|\phi _{\lambda }>$ and highest weight vector (dominant weight) $\mathbf{\lambda }$ solving the following constraint relations \begin{equation} \begin{tabular}{lll} $E^{+\beta }|\phi _{\lambda }>$ & $=0$ & , \\ $h^{i}|\phi _{\mathbf{\lambda }}>$ & $=\lambda ^{i}|\phi _{\mathbf{\lambda }}>$ & \end{tabular} \end{equation} The other $\left( N-1\right) $ states $|\phi _{\mu _{i}}>$ of the representation are obtained by successive actions on $|\phi _{\lambda }>$ of the typical monomials $E^{-\beta _{m}}\ldots E^{-\beta _{2}}E^{-\beta _{1}}$. \newline One of these HWRs is precisely the $N$-dimensional fundamental representation we are interested in here; it has $N$ states \begin{equation} \begin{tabular}{ll} $\left( \begin{array}{c} F_{\mu _{0}} \\ F_{\mu _{1}} \\ \vdots \\ F_{\mu _{N-1}} \end{array} \right) $ & \end{tabular} \end{equation} with weight vectors \begin{equation} \begin{tabular}{ll} $\mathbf{\mu }_{i}=\mathbf{\lambda }-\sum_{l=1}^{i}\mathbf{\beta }_{l}$ & \end{tabular} \label{ml} \end{equation} satisfying (\ref{bti}-a) with $\mathbf{\mu }_{0}=\mathbf{\lambda }$; and from which we learn that \begin{equation} \begin{tabular}{ll} $\mathbf{\mu }_{i+1}-\mathbf{\mu }_{i}$ & \end{tabular} \end{equation} is indeed an $su\left( N\right) $ root. For an illustration of (\ref{ml}), see the explicit analysis regarding the $su\left( 4\right) $ lattice model; in particular eq(\ref{mm}).
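The Cartan matrix $\mathbf{K}_{ij}$ above can be generated and checked for any rank; a small sketch (pure Python; the value $\det \mathbf{K}=N$ for type $A_{N-1}$ is a standard fact used here only as a sanity check):

```python
from fractions import Fraction

def cartan_su(N):
    """Cartan matrix K_ij of su(N) (type A_{N-1}): 2 on the diagonal,
    -1 on the sub/super-diagonals, of rank r = N-1."""
    r = N - 1
    return [[2 if i == j else (-1 if abs(i - j) == 1 else 0)
             for j in range(r)] for i in range(r)]

def det(M):
    """Determinant by fraction-exact Gaussian elimination."""
    M = [[Fraction(x) for x in row] for row in M]
    n, d = len(M), Fraction(1)
    for c in range(n):
        p = next(i for i in range(c, n) if M[i][c] != 0)
        if p != c:
            M[c], M[p] = M[p], M[c]
            d = -d
        d *= M[c][c]
        for i in range(c + 1, n):
            f = M[i][c] / M[c][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[c])]
    return d

K = cartan_su(4)
print(K)        # [[2, -1, 0], [-1, 2, -1], [0, -1, 2]]
print(det(K))   # det K = N = 4 for su(4)
```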
\subsection{Fermionic realization of $\mathcal{H}_{su\left( N\right) }$} Denoting by $A_{\mathbf{r}_{m}}^{\pm }$ (resp. $B_{\mathbf{r}_{m}+\mathbf{v}_{i}}^{\pm }$) the local fermionic creation and annihilation operators satisfying the usual anticommutation relations, the hamiltonian on $\mathcal{L}_{su\left( N\right) }$ reads as in (\ref{A1}) with the $F_{\mu _{i}}$ and $G_{\beta }$ operators given by \begin{equation} \begin{tabular}{lll} $F_{\mu _{i}}$ & $=\dsum\limits_{r_{m}\in \mathcal{A}_{su\left( N\right) }}A_{\mathbf{r}_{m}}^{-}B_{\mathbf{r}_{m}+a\mu _{i}}^{+}$ & , \\ $G_{\beta }$ & $=\dsum\limits_{r_{m}\in \mathcal{A}_{su\left( N\right) }}\left( A_{\mathbf{r}_{m}}^{-}A_{\mathbf{r}_{m}+a\beta }^{+}+B_{\mathbf{r}_{m}}^{-}B_{\mathbf{r}_{m}+a\beta }^{+}\right) $ & \end{tabular} \label{B1} \end{equation} where the $\mu _{i}$ are the weight vectors of the fundamental representation of $su\left( N\right) $ and $\beta $ is a generic root. Notice that the operators $F_{\mu _{i}}$ and their adjoints $F_{\mu _{i}}^{\dagger }$ transform respectively in the fundamental representation and its complex conjugate \begin{equation} \begin{tabular}{llllll} $F_{\mu _{i}}\sim $ & \underline{$N$} & , & $F_{\mu _{i}}^{\dagger }\sim $ & $\overline{N}$ & \end{tabular} \end{equation} By using the Fourier transform of the field operators $A_{\mathbf{r}_{m}}^{\pm }$ and $B_{\mathbf{r}_{m}+\mathbf{v}_{i}}^{\pm }$, namely \begin{equation} \begin{tabular}{llll} $A^{\pm }\left( \mathbf{r}_{m}\right) $ & $\sim $ & $\dsum\limits_{\text{wave vectors }\mathbf{k}}e^{\pm i\mathbf{k.r}_{m}}\tilde{A}\left( \mathbf{k}\right) $ & \\ $B^{\pm }\left( \mathbf{r}_{m}+\mathbf{v}_{i}\right) $ & $\sim $ & $\dsum\limits_{\text{wave vectors }\mathbf{k}}e^{\pm i\mathbf{k.}\left( \mathbf{r}_{m}+\mathbf{v}_{i}\right) }\tilde{B}\left( \mathbf{k}\right) $ & \end{tabular} \end{equation} we can put the hamiltonian $\mathcal{H}_{su\left( N\right) }$ in the form of a sum over the $\left( N-1\right) $-dimensional wave vectors $\mathbf{k}$ as follows,
\begin{equation} \begin{tabular}{ll} $\mathcal{H}_{su\left( N\right) }=\dsum\limits_{\text{wave vectors }\mathbf{k}}\tilde{H}_{\mathbf{k}}^{su\left( N\right) }$ & \end{tabular} \end{equation} where $\tilde{H}_{\mathbf{k}}^{su\left( N\right) }$ has dispersion relations depending, in addition to $\mathbf{k}$, on the weights $\mathbf{\mu }_{i}$, the roots $\mathbf{\beta }$ and the hop energies $t_{1}$, $t_{2}$. In the particular case where $t_{2}$ is set to zero, the hamiltonian (\ref{A1}) reduces to the leading term \begin{equation} \begin{tabular}{ll} $\mathcal{H}_{su\left( N\right) }^{1}=-t_{1}\left( \dsum\limits_{\text{weights }\mu _{i}}F_{\mu _{i}}+hc\right) $ & \end{tabular} \end{equation} and its dual Fourier transform simplifies as follows \begin{equation} \begin{tabular}{lll} $\tilde{H}_{\mathbf{k}}^{su\left( N\right) }=$ & $\left( \tilde{A}_{\mathbf{k}}^{-},\tilde{B}_{\mathbf{k}}^{-}\right) \left( \begin{array}{cc} 0 & \varepsilon _{su\left( N\right) }\left( \mathbf{k}\right) \\ \overline{\varepsilon _{su\left( N\right) }\left( \mathbf{k}\right) } & 0 \end{array} \right) \left( \begin{array}{c} \tilde{A}_{\mathbf{k}}^{+} \\ \tilde{B}_{\mathbf{k}}^{+} \end{array} \right) $ & \end{tabular} \end{equation} with \begin{equation} \begin{tabular}{ll} $\varepsilon _{su\left( N\right) }\left( \mathbf{k}\right) =-t_{1}\dsum\limits_{\text{weight vectors }\mu _{i}}e^{ia\mathbf{k}.\mathbf{\mu }_{i}}$ & \end{tabular} \end{equation} From these relations, we can compute the dispersion energies of the ``valence'' and ``conducting'' bands by diagonalizing $\tilde{H}_{\mathbf{k}}^{su\left( N\right) }$.
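The diagonalization of the $2\times 2$ Bloch matrix can be checked numerically; a minimal sketch for the $su\left( 3\right) $ (graphene) weight vectors, with the illustrative choices $t_{1}=1$ and $a=1$ (not outputs of the model itself):

```python
import cmath
import math

# su(3) weight vectors of the fundamental representation (graphene case).
mu = [(0.0, -math.sqrt(6) / 3),
      (math.sqrt(2) / 2, math.sqrt(6) / 6),
      (-math.sqrt(2) / 2, math.sqrt(6) / 6)]

def eps(k, t1=1.0, a=1.0):
    """Off-diagonal Bloch element eps(k) = -t1 * sum_i exp(i a k.mu_i)."""
    return -t1 * sum(cmath.exp(1j * a * (k[0] * m[0] + k[1] * m[1])) for m in mu)

def band_energies(k):
    """Eigenvalues of [[0, eps], [conj(eps), 0]] are -|eps| and +|eps|."""
    e = abs(eps(k))
    return (-e, +e)

print(band_energies((0.0, 0.0)))   # at k = 0 the bands sit at -N*t1 and +N*t1
```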
These energies are given by $\pm \left\vert \varepsilon _{su\left( N\right) }\left( \mathbf{k}\right) \right\vert $ with \begin{equation} \begin{tabular}{ll} $\left\vert \varepsilon _{su\left( N\right) }\left( \mathbf{k}\right) \right\vert =t_{1}\sqrt{N+2\dsum\limits_{i<j=0}^{N-1}\cos \left[ a\mathbf{k}.\left( \mathbf{\mu }_{i}\mathbf{-\mu }_{j}\right) \right] }$ & \end{tabular} \label{di} \end{equation} Notice that $\left\vert \varepsilon _{su\left( N\right) }\left( \mathbf{k}\right) \right\vert $ depends remarkably on the differences of the weights $\mathbf{\mu }_{i}\mathbf{-\mu }_{j}$; which, according to (\ref{28}), is just the sum over the $su\left( N\right) $ roots $\sum_{l=i}^{j-1}\mathbf{\beta }_{l}$ with $\mathbf{\beta }_{l}=\mathbf{\mu }_{l}\mathbf{-\mu }_{l+1}$. It follows then that the dispersion energies for the first nearest neighbors depend on the wave vector $\mathbf{k}$ and the roots of $su\left( N\right) $. \begin{equation} \begin{tabular}{ll} $\left\vert \varepsilon _{su\left( N\right) }\left( \mathbf{k}\right) \right\vert =t_{1}\sqrt{N+2\dsum\limits_{i<j=0}^{N-1}\cos \left( a\sum_{l=i}^{j-1}\mathbf{k}.\mathbf{\beta }_{l}\right) }$ & \end{tabular} \label{de} \end{equation} This result is expected from the group theory view since $\left\vert \varepsilon _{su\left( N\right) }\right\vert ^{2}=\varepsilon _{su\left( N\right) }\overline{\varepsilon _{su\left( N\right) }}$ should be put in correspondence with the tensor product of the fundamental representation \underline{$N$} and its complex conjugate $\bar{N}$ \begin{equation} \begin{tabular}{llllll} \underline{$N$}$\otimes \bar{N}$ & $=$ & $I_{id}\oplus adj_{SU\left( N\right) }$ & , & $tr\left( I_{id}\right) =N$ & \end{tabular} \end{equation} giving the adjoint representation of $U\left( N\right) \simeq U\left( 1\right) \times SU\left( N\right) $.\newline Notice finally that eq(\ref{de}) may be further explicited by first expressing $\mathbf{\beta }_{l}$ in terms of the simple roots as in (\ref{28}); that is $\mathbf{\beta
}_{l}=\sum_{m=1}^{N-1}M_{ml}\mathbf{\alpha }_{m}$ with $M_{ml}$ integers. Then expand the wave vector as $\mathbf{k}=\sum_{n=1}^{N-1}q_{n}\mathbf{\lambda }_{n}$ with $q_{n}$ real numbers; and use (\ref{29}) to put (\ref{de}) in the following handleable form \begin{equation} \begin{tabular}{ll} $\left\vert \varepsilon _{su\left( N\right) }\left( \mathbf{k}\right) \right\vert =t_{1}\sqrt{N+2\dsum\limits_{i<j=0}^{N-1}\cos \left( a\dsum\limits_{l=i}^{j-1}\dsum\limits_{n=1}^{N-1}q_{n}M_{nl}\right) }$ & \end{tabular} \end{equation} Below, we consider explicit examples. \section{Leading $su\left( N\right) $ lattice models} In this section, we illustrate the above study on the leading examples $N=2,3$. These two lattice models describe the electronic properties of the delocalized electrons in the infinite acetylene-type chain and in graphene. \subsection{the $su\left( 2\right) $ model} In this case, the lattice $\mathcal{L}_{su\left( 2\right) }$ depicted in the figure (\ref{ch}) is a one dimensional chain with coordinate positions $x_{m}=ma$ where $a$ is the site spacing and $m$ an integer. \begin{figure}[tbph] \begin{center} \hspace{0cm} \includegraphics[width=10cm]{A1} \end{center} \par \vspace{-0.5 cm} \caption{{\protect\small The lattice }$\mathcal{L}_{su\left( 2\right) }${\protect\small \ given by the superposition of two sublattices }$\mathcal{A}_{su\left( 2\right) }${\protect\small \ (in blue) and }$\mathcal{B}_{su\left( 2\right) }$ {\protect\small (in red).
The atoms may be thought of as carbons in the }${\protect\small sp}^{{\protect\small 1}}${\protect\small \ hybridization state.}} \label{ch} \end{figure} Each site of $\mathcal{L}_{su\left( 2\right) }$ has two first nearest neighbors forming an $su\left( 2\right) $ doublet; and two second nearest ones associated with the two roots $\pm \alpha $ of $su\left( 2\right) $, in agreement with the generic result summarized in the table \begin{equation} \begin{tabular}{l|l|l|l|l} {\small nearest neighbors} & $su\left( N\right) $ & $su\left( 2\right) $ & $su\left( 3\right) $ & $su\left( 4\right) $ \\ \hline {\small \ \ \ \ \ \ \ \ \ \ \ first } & $N$ & $2$ & $3$ & $4$ \\ {\small \ \ \ \ \ \ \ \ \ \ \ second} & $N\left( N-1\right) $ & $2$ & $6$ & $12$ \\ \hline \end{tabular} \end{equation} In this $N=2$ model, eqs(\ref{bti}) read as \begin{equation} \begin{tabular}{lll} $\mathbf{v}_{0}+\mathbf{v}_{1}=0$ & , & (a) \\ $\mathbf{V}_{01}=\mathbf{v}_{0}-\mathbf{v}_{1}$ & , & (b) \end{tabular} \end{equation} and are solved by the weights $\mu _{0}=+\frac{1}{2},$ $\mu _{1}=-\frac{1}{2}$ of the $su\left( 2\right) $ fundamental representation; i.e. the isodoublet. \newline The hamiltonian $\mathcal{H}_{su\left( 2\right) }$ of this model is given by \begin{equation} \begin{tabular}{lll} $\mathcal{H}_{su\left( 2\right) }$ & $=-t_{1}\left( F_{+\frac{1}{2}}+F_{-\frac{1}{2}}+hc\right) -t_{2}\left( G_{+1}+G_{-1}+hc\right) $ & \end{tabular} \end{equation} where $t_{1}$ and $t_{2}$ are hop energies.
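The $N=2$ case of the general dispersion (\ref{di}) can be checked numerically; a minimal sketch with the illustrative units $t_{1}=1$ and $a=1$, using the isodoublet weights $\mu _{0}=+\frac{1}{2}$, $\mu _{1}=-\frac{1}{2}$:

```python
import math

def eps_su2(k, t1=1.0, a=1.0):
    """|eps_su(2)(k)| = t1 * sqrt(2 + 2 cos(2 a k)), i.e. eq. (di) with N = 2."""
    return t1 * math.sqrt(max(0.0, 2.0 + 2.0 * math.cos(2.0 * a * k)))

# The square root collapses to 2 t1 |cos(a k)| for every k ...
for k in (0.0, 0.3, 1.0, 2.5):
    assert abs(eps_su2(k) - 2.0 * abs(math.cos(k))) < 1e-12
# ... and vanishes at the band-crossing points k = +/- pi/(2a) mod 2pi/a.
print(round(eps_su2(math.pi / 2), 12))   # 0.0
```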
The fermionic field realizations of the $F_{\pm \frac{1}{2}}$ and $G_{\pm 1}$ operators read, in terms of the creation and annihilation operators $A_{\mathbf{x}_{m}}^{\pm },$ $B_{\mathbf{x}_{m}}^{\pm }$, as follows \begin{equation} \begin{tabular}{lll} $F_{\pm \frac{1}{2}}$ & $=\dsum\limits_{m}A_{\mathbf{x}_{m}}^{-}B_{\mathbf{x}_{m}\pm a}^{+}$ & , \\ $G_{\pm 1}$ & $=\dsum\limits_{m}\left( A_{\mathbf{x}_{m}}^{-}A_{\mathbf{x}_{m}\pm 2a}^{+}+B_{\mathbf{x}_{m}}^{-}B_{\mathbf{x}_{m}\pm 2a}^{+}\right) $ & \end{tabular} \end{equation} Moreover, substituting $N=2$ in (\ref{di}), we get the dispersion energy \begin{equation} \left\vert \varepsilon _{su\left( 2\right) }\left( k\right) \right\vert =t_{1}\sqrt{2+2\cos \left( 2ak\right) } \end{equation} which is also equal to $2t_{1}\left\vert \cos \left( ka\right) \right\vert $ and from which we read that the $\left\vert \varepsilon _{su\left( 2\right) }\left( k\right) \right\vert $ zeros take place for the wave vectors $k_{n}=\pm \frac{\pi }{2a}$ $\func{mod}$ $\frac{2\pi }{a}$. \subsection{the $su\left( 3\right) $ model and graphene} The lattice $\mathcal{L}_{su\left( 3\right) }$ is precisely the $2D$ honeycomb of graphene; it is given by the superposition of two sublattices $\mathcal{A}_{su\left( 3\right) }$ and $\mathcal{B}_{su\left( 3\right) }$ as in the figure (\ref{co}). \begin{figure}[tbph] \begin{center} \hspace{0cm} \includegraphics[width=6cm]{A2} \end{center} \par \vspace{-1cm} \caption{{\protect\small sublattices A (in blue) and B (in red) of the honeycomb. The atoms may be thought of as carbons in the }$sp^{{\protect\small 2}}${\protect\small \ hybridization state.}} \label{co} \end{figure} Each site $\mathbf{r}_{m}$ in $\mathcal{L}_{su\left( 3\right) }$ has \emph{3} first nearest neighbors with relative vectors $\mathbf{v}_{i}$; and \emph{6} second neighbors $\pm \mathbf{V}_{ij}=\pm \varepsilon _{ijk}V_{k}$ in one-to-one correspondence with the weight vectors of the $su\left( 3\right) $ fundamental representation and its roots.
We have \begin{equation} \begin{tabular}{llll} $\mathbf{v}_{i}=a\mathbf{\mu }_{i}$ & , & $\pm \mathbf{V}_{i}=\pm a\mathbf{\alpha }_{i}$ & \end{tabular} \label{DR} \end{equation} where $a=d\sqrt{\frac{3}{2}}$. To fix the ideas, we give below the explicit expressions of the weight vectors $\mathbf{\mu }_{1},$ $\mathbf{\mu }_{2},$ $\mathbf{\mu }_{0}$ and the roots $\mathbf{\alpha }_{1}$, $\mathbf{\alpha }_{2}$, $\mathbf{\alpha }_{3}$, \begin{equation} \begin{tabular}{llllll} $\mathbf{\mu }_{1}=(\frac{\sqrt{2}}{2},\frac{\sqrt{6}}{6})$ & , & $\mathbf{\mu }_{2}=(-\frac{\sqrt{2}}{2},\frac{\sqrt{6}}{6})$ & , & $\mathbf{\mu }_{0}=-(0,\frac{\sqrt{6}}{3})$ & , \\ $\mathbf{\alpha }_{1}=(\sqrt{2},0)$ & , & $\mathbf{\alpha }_{2}=(-\frac{\sqrt{2}}{2},\frac{\sqrt{6}}{2})$ & , & $\mathbf{\alpha }_{3}=(\frac{\sqrt{2}}{2},\frac{\sqrt{6}}{2})$ & \end{tabular} \label{C1} \end{equation} from which we learn that $\mathbf{\alpha }_{3}=\mathbf{\alpha }_{1}+\mathbf{\alpha }_{2}$ as it should be. In addition to the feature $\mathbf{\mu }.\mathbf{\alpha }\in \mathbb{Z}$, these vectors obey the constraint relations \begin{equation} \begin{tabular}{lll} $\dsum\limits_{i=0}^{2}\mathbf{v}_{i}=0$ & , & $\dsum\limits_{\func{positive}\text{ roots}}\mathbf{\alpha }+\dsum\limits_{\func{negative}\text{ roots}}\mathbf{\alpha }=0$ \end{tabular} \end{equation} together with the following remarkable relations \begin{equation} \begin{tabular}{llllll} $\mathbf{\mu }_{1}-\mathbf{\mu }_{0}=\mathbf{\alpha }_{3}$ & , & $\mathbf{\mu }_{1}-\mathbf{\mu }_{2}=\mathbf{\alpha }_{1}$ & , & $\mathbf{\mu }_{2}\mathbf{-\mu }_{0}=\mathbf{\alpha }_{2}$ & \end{tabular} \end{equation} Substituting $N=3$ in (\ref{di}) and using the above equations, we get the dispersion energy \begin{equation} \begin{tabular}{ll} $\left\vert \varepsilon _{su\left( 3\right) }\left( \mathbf{k}\right) \right\vert =t_{1}\sqrt{3+2\left[ \cos \left( a\mathbf{k}.\mathbf{\alpha }_{1}\right) +\cos \left( a\mathbf{k}.\mathbf{\alpha }_{2}\right) +\cos \left( a\mathbf{k}.\mathbf{\alpha }_{3}\right) \right] }$ & \end{tabular}
\end{equation} depending on the wave vector $\mathbf{k}=\left( k_{x},k_{y}\right) $ and the $su\left( 3\right) $ roots. \section{the $su\left( 4\right) $ lattice model} To illustrate the general properties of the $su\left( 4\right) $ model, we first give some basic features on the connection between $\mathcal{L}_{su\left( 4\right) }$, depicted in the figure (\ref{4}), and the $su\left( 4\right) $ representations. Then, we develop a dynamical model based on the crystal $\mathcal{L}_{su\left( 4\right) }$. \begin{figure}[tbph] \begin{center} \hspace{0cm} \includegraphics[width=6cm]{SS} \end{center} \par \vspace{-1cm} \caption{the lattice $\mathcal{L}_{su\left( 4\right) }$ with {\protect\small sublattices }$\mathcal{A}_{su\left( 4\right) }${\protect\small \ (in blue) and }$\mathcal{B}_{su\left( 4\right) }${\protect\small \ (in red). Each atom has \emph{4} first nearest neighbors, forming a tetrahedron, and \emph{12} second nearest ones.}} \label{4} \end{figure} \subsection{structure properties of $\mathcal{L}_{su\left( 4\right) }$} The lattice $\mathcal{L}_{su\left( 4\right) }$ is a 3-dimensional crystal; it is made by the superposition of two isomorphic, but shifted, sublattices $\mathcal{A}_{su\left( 4\right) }$ and $\mathcal{B}_{su\left( 4\right) }$, following the same logic as in the case of the honeycomb, which may be recovered by projection on a 2D plane. \newline Each site $\mathbf{r}_{m}$ in $\mathcal{L}_{su\left( 4\right) }$ has \emph{4} first nearest neighbors at $\left( \mathbf{r}_{m}+\mathbf{v}_{i}\right) $ forming the vertices of a regular tetrahedron.
A way to parameterize the relative positions $\mathbf{v}_{i}$ with respect to the central position at $\mathbf{r}_{m}$ is to embed the tetrahedron inside a cube; in this case we have \begin{equation} \begin{tabular}{llll} $\mathbf{v}_{1}=\frac{d}{\sqrt{3}}\left( -1,-1,+1\right) $ & , & $\mathbf{v}_{2}=\frac{d}{\sqrt{3}}\left( -1,+1,-1\right) $ & \\ $\mathbf{v}_{3}=\frac{d}{\sqrt{3}}\left( +1,-1,-1\right) $ & , & $\mathbf{v}_{0}=\frac{d}{\sqrt{3}}\left( +1,+1,+1\right) $ & \end{tabular} \label{vi} \end{equation} Clearly, these vectors satisfy the constraint relation (\ref{bti}-a). Moreover, having these expressions, we can also build the explicit positions of the \emph{12} second nearest neighbors; these are given by eq(\ref{bti}-b), but are completely generated by the following basis vectors \begin{equation} \begin{tabular}{lllll} $\mathbf{R}_{1}=\frac{d}{\sqrt{3}}\left( 2,2,0\right) $ & , & $\mathbf{R}_{2}=\frac{d}{\sqrt{3}}\left( 0,-2,2\right) $ & , & $\mathbf{R}_{3}=\frac{d}{\sqrt{3}}\left( -2,2,0\right) $ \end{tabular} \label{RR} \end{equation} that are related to the $\mathbf{V}_{ij}$ as follows \begin{equation} \begin{tabular}{ll} $\mathbf{R}_{i}=\mathbf{V}_{\left( i-1\right) i}$ & \end{tabular} \end{equation} For later use, it is interesting to notice the following two points:\newline (\textbf{a}) the $\mathbf{R}_{i}$ vector basis has the following intersection matrix \begin{equation} \mathbf{R}_{i}.\mathbf{R}_{j}=\frac{4d^{2}}{3}\mathbf{K}_{ij} \label{kij} \end{equation} with $\mathbf{K}_{ij}$ and its inverse given by \begin{equation} \begin{tabular}{llll} $\mathbf{K}_{ij}=\left( \begin{array}{ccc} 2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 2 \end{array} \right) $ & , & $\mathbf{K}_{ij}^{-1}=\frac{1}{4}\left( \begin{array}{ccc} 3 & 2 & 1 \\ 2 & 4 & 2 \\ 1 & 2 & 3 \end{array} \right) $ & \end{tabular} \end{equation} (\textbf{b}) using (\ref{vi}) and (\ref{RR}), it is not difficult to check the following special relation linking the $\mathbf{R}_{i}$'s and $\mathbf{v}_{0}$ \begin{equation}
\begin{tabular}{ll}
$\frac{3}{4}\mathbf{R}_{1}+\frac{2}{4}\mathbf{R}_{2}+\frac{1}{4}\mathbf{R}_{3}=\mathbf{v}_{0}$ &
\end{tabular}
\label{v0}
\end{equation}
In fact this relation is a $su\left( 4\right) $ Lie algebra property following from $\mathbf{K}_{ij}^{-1}$ and (\ref{29}). \newline
Concerning the vector positions of the remaining \emph{9} second neighbors, \emph{3} of them are given by $-\mathbf{R}_{1},-\mathbf{R}_{2},-\mathbf{R}_{3}$ and the other \emph{6} by the linear combinations
\begin{equation}
\begin{tabular}{llllll}
$+\mathbf{R}_{4}=\mathbf{V}_{02}$ & $=+R_{1}+R_{2}$ & , & $+\mathbf{R}_{5}=\mathbf{V}_{13}$ & $=+R_{2}+R_{3}$ & \\
$-\mathbf{R}_{4}=\mathbf{V}_{20}$ & $=-R_{1}-R_{2}$ & , & $-\mathbf{R}_{5}=\mathbf{V}_{31}$ & $=-R_{2}-R_{3}$ & \\
$+\mathbf{R}_{6}=\mathbf{V}_{03}$ & $=+R_{1}+R_{2}+R_{3}$ & , & $-\mathbf{R}_{6}=\mathbf{V}_{30}$ & $=-R_{1}-R_{2}-R_{3}$ &
\end{tabular}
\label{R}
\end{equation}
From this construction, it follows that generic positions $\mathbf{r}_{\mathbf{m}}^{A}\equiv \mathbf{r}_{\mathbf{m}}$ and $\mathbf{r}_{\mathbf{m}}^{B}$ in the $\mathcal{A}_{su\left( 4\right) }$ and $\mathcal{B}_{su\left( 4\right) }$ sublattices are given by
\begin{equation}
\begin{tabular}{llllll}
$\mathcal{A}_{su_{4}}$ & : & $\mathbf{r}_{\mathbf{m}}$ & $=$ & $m_{1}\mathbf{R}_{1}+m_{2}\mathbf{R}_{2}+m_{3}\mathbf{R}_{3}$ & , \\
$\mathcal{B}_{su_{4}}$ & : & $\mathbf{r}_{\mathbf{m}}^{B}$ & $=$ & $\mathbf{r}_{\mathbf{m}}+\mathbf{v}$ &
\end{tabular}
\label{mr}
\end{equation}
where $\mathbf{m}=\left( m_{1},m_{2},m_{3}\right) $ is an integer vector and where the shift vector $\mathbf{v}=\mathbf{r}_{\mathbf{m}}^{B}-\mathbf{r}_{\mathbf{m}}^{A}$ is one of the $\mathbf{v}_{i}$'s in (\ref{vi}). \newline
Regarding the connection between $\mathcal{L}_{su\left( 4\right) }$ and the $su\left( 4\right) $ Lie algebra representations, we distinguish two kinds of relations:\newline
(\textbf{1}) \emph{a relation involving the roots}\newline
From (\ref{kij}), it follows that the
basis vectors $\mathbf{R}_{1}$, $\mathbf{R}_{2}$, $\mathbf{R}_{3}$ may be interpreted in terms of the simple roots $\mathbf{\alpha }_{1},$ $\mathbf{\alpha }_{2},$ $\mathbf{\alpha }_{3}$ of the $su\left( 4\right) $ Lie algebra. More precisely, we have
\begin{equation}
\begin{tabular}{lllll}
$\mathbf{R}_{1}=\frac{2d}{\sqrt{3}}\mathbf{\alpha }_{1}$ & , & $\mathbf{R}_{2}=\frac{2d}{\sqrt{3}}\mathbf{\alpha }_{2}$ & , & $\mathbf{R}_{3}=\frac{2d}{\sqrt{3}}\mathbf{\alpha }_{3}$
\end{tabular}
\label{ro}
\end{equation}
from which we learn that the matrix $K_{ij}=\mathbf{\alpha }_{i}.\mathbf{\alpha }_{j}$ is indeed the Cartan matrix of $su\left( 4\right) $. \newline
We also have the following relations, showing that the position vectors of the second nearest neighbors are indeed in one to one correspondence with the roots of $su\left( 4\right) $,
\begin{equation}
\begin{tabular}{lllll}
$\mathbf{R}_{4}=\frac{2d}{\sqrt{3}}\left( \mathbf{\alpha }_{1}+\mathbf{\alpha }_{2}\right) $ & , & $\mathbf{R}_{5}=\frac{2d}{\sqrt{3}}\left( \mathbf{\alpha }_{2}+\mathbf{\alpha }_{3}\right) $ & , & $\mathbf{R}_{6}=\frac{2d}{\sqrt{3}}\left( \mathbf{\alpha }_{1}+\mathbf{\alpha }_{2}+\mathbf{\alpha }_{3}\right) $
\end{tabular}
\label{ri}
\end{equation}
together with their opposites.\newline
(\textbf{2}) \emph{a relation involving the fundamental representation of su}$\left( 4\right) $\newline
As described for the generic $su\left( N\right) $, the four relative vectors $\mathbf{v}_{i}$ are, up to a global scale factor, nothing but the four weight vectors $\mathbf{\mu }_{i}$ of the fundamental representation of $su\left( 4\right) $. The highest weight $\mathbf{\lambda }$ of this representation, which we set as $\mathbf{\mu }_{0}$, is precisely given by eq(\ref{v0}).
Thus, we have
\begin{equation}
\begin{tabular}{lllllll}
$\mathbf{v}_{0}=\frac{2d}{\sqrt{3}}\mathbf{\mu }_{0}$ & , & $\mathbf{v}_{1}=\frac{2d}{\sqrt{3}}\mathbf{\mu }_{1}$ & , & $\mathbf{v}_{2}=\frac{2d}{\sqrt{3}}\mathbf{\mu }_{2}$ & , & $\mathbf{v}_{3}=\frac{2d}{\sqrt{3}}\mathbf{\mu }_{3}$
\end{tabular}
\label{am}
\end{equation}
which obviously obey the constraint relation
\begin{equation}
\begin{tabular}{ll}
$\mathbf{\mu }_{0}+\mathbf{\mu }_{1}+\mathbf{\mu }_{2}+\mathbf{\mu }_{3}=\mathbf{0}$ &
\end{tabular}
\label{m}
\end{equation}
This constraint equation is a typical vector relation for highest weight representations of Lie algebras; it extends the well known $su\left( 2\right) $ ones, whose leading terms are
\begin{equation}
\begin{tabular}{l|l}
${\small su}\left( {\small 2}\right) $ & {\small sum over weights} \\ \hline
{\small doublet} & $\frac{1}{2}-\frac{1}{2}=0$ \\
{\small triplet} & $1+0-1=0$ \\
{\small quartet} & $\frac{3}{2}+\frac{1}{2}-\frac{3}{2}-\frac{1}{2}=0$ \\ \hline
\end{tabular}
\end{equation}
To exhibit more explicitly the constraint relation (\ref{m}), it is interesting to express the weight vectors $\mathbf{\mu }_{i}$ in terms of the simple roots of su$\left( 4\right) $.
We have
\begin{equation}
\begin{tabular}{llll}
$\mathbf{\mu }_{0}$ & $=$ & $+\frac{3}{4}\mathbf{\alpha }_{1}+\frac{2}{4}\mathbf{\alpha }_{2}+\frac{1}{4}\mathbf{\alpha }_{3}$ & \\
$\mathbf{\mu }_{1}$ & $=$ & $-\frac{1}{4}\mathbf{\alpha }_{1}+\frac{2}{4}\mathbf{\alpha }_{2}+\frac{1}{4}\mathbf{\alpha }_{3}$ & \\
$\mathbf{\mu }_{2}$ & $=$ & $-\frac{1}{4}\mathbf{\alpha }_{1}-\frac{2}{4}\mathbf{\alpha }_{2}+\frac{1}{4}\mathbf{\alpha }_{3}$ & \\
$\mathbf{\mu }_{3}$ & $=$ & $-\frac{1}{4}\mathbf{\alpha }_{1}-\frac{2}{4}\mathbf{\alpha }_{2}-\frac{3}{4}\mathbf{\alpha }_{3}$ &
\end{tabular}
\label{mm}
\end{equation}
This analysis also teaches us two basic things:\newline
First, the number $N_{1}$ of first nearest neighbors in the lattice $\mathcal{L}_{su\left( 4\right) }$ is given by the dimension of the fundamental representation of su$\left( 4\right) $,
\begin{equation}
N_{1}=\dim \left( \underline{\mathbf{4}}\right) =4
\end{equation}
This means that the QFT on this lattice should capture some data on $su\left( 4\right) $.\newline
Second, the number $N_{2}$ of second nearest neighbors in $\mathcal{L}_{su\left( 4\right) }$ is also related to a su$\left( 4\right) $ quantity, namely
\begin{equation}
N_{2}=\dim \left[ {\small su}\left( 4\right) \right] -\text{rank}\left[ {\small su}\left( 4\right) \right] =15-3=12
\end{equation}
These two Lie algebra numbers may be used as an algorithm to extend this construction to the case of the $so\left( 2N\right) $ and the $E_{6}$, $E_{7}$, $E_{8}$ exceptional simply laced Lie algebras.
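The identities collected in this subsection are elementary enough to be machine-checked. The following short script is our own cross-check, not part of the construction; it sets $d=\sqrt{3}$ so that the vectors (\ref{vi}) become integer triples, and verifies eqs (\ref{RR}), (\ref{kij}), (\ref{v0}), the constraint (\ref{m}) and the counts $N_{1}$, $N_{2}$:

```python
from fractions import Fraction as F

def dot(u, w):
    return sum(a * b for a, b in zip(u, w))

# first-neighbor vectors v_0,...,v_3 in units where d/sqrt(3) = 1, eq. (vi)
v = [(1, 1, 1), (-1, -1, 1), (-1, 1, -1), (1, -1, -1)]
assert tuple(map(sum, zip(*v))) == (0, 0, 0)             # constraint (bti)-a

# second-neighbor basis R_i = V_{(i-1)i} = v_{i-1} - v_i, eq. (RR)
R = [tuple(a - b for a, b in zip(v[i - 1], v[i])) for i in (1, 2, 3)]
assert R == [(2, 2, 0), (0, -2, 2), (-2, 2, 0)]

# Gram matrix R_i.R_j = (4 d^2/3) K_ij with K the su(4) Cartan matrix, eq. (kij)
K = [[2, -1, 0], [-1, 2, -1], [0, -1, 2]]
assert [[dot(a, b) for b in R] for a in R] == [[4 * x for x in row] for row in K]

# highest-weight relation (v0): (3 R_1 + 2 R_2 + R_3)/4 = v_0
assert tuple(F(3 * a + 2 * b + c, 4) for a, b, c in zip(*R)) == v[0]

# weights (mm) on the root basis: they sum to zero, eq. (m)
mu = [(F(3, 4), F(1, 2), F(1, 4)), (F(-1, 4), F(1, 2), F(1, 4)),
      (F(-1, 4), F(-1, 2), F(1, 4)), (F(-1, 4), F(-1, 2), F(-3, 4))]
assert tuple(map(sum, zip(*mu))) == (0, 0, 0)

# neighbor counts: N1 = dim(fundamental) = 4, N2 = 15 - 3 = 12
assert (len(mu), 15 - 3) == (4, 12)
```

All assertions pass, confirming in particular that the Gram matrix of the $\mathbf{R}_{i}$'s is $\frac{4d^{2}}{3}$ times the $su\left( 4\right) $ Cartan matrix.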
\subsection{dynamical vacancy on lattice: a toy model}

We begin by noting that, as far as the electronic properties are concerned, the schemas (a), (b), (c) of figure (\ref{123}) may be respectively associated with the $sp^{1}$, $sp^{2}$ and $sp^{3}$ hybridizations of the atomic orbitals; i.e.
\begin{equation}
\begin{tabular}{l|l|l}
{\small \ \ \ figures} & {\small hybridization} & {\small example of molecules} \\ \hline
\ \ {\small (\ref{123}-a)} & ${\small \ \ \ \ sp}^{1}$ & {\small acetylene} \\
\ \ {\small (\ref{123}-b)} & ${\small \ \ \ \ sp}^{2}$ & {\small graphene} \\
\ \ {\small (\ref{123}-c)} & ${\small \ \ \ \ sp}^{3}$ & {\small diamond} \\ \hline
\end{tabular}
\end{equation}
In the first two examples, the atoms have delocalized pi-electrons that capture the electronic properties of the lattice atoms and have the following dispersion relation,
\begin{equation}
\begin{tabular}{ll}
$\left\vert \varepsilon _{su\left( N\right) }\left( \mathbf{k}\right) \right\vert =t_{1}\sqrt{N+2\dsum\limits_{i<j=0}^{N-1}\cos \left[ a\mathbf{k}.\left( \mathbf{\mu }_{i}\mathbf{-\mu }_{j}\right) \right] }$ &
\end{tabular}
\end{equation}
with $N=2,3$. \newline
In the case of $sp^{3}$, the atoms have no delocalized pi-electrons; they only have strongly correlated sigma-electrons, which make the electronic properties of systems based on $\mathcal{L}_{su\left( 4\right) }$ different from those based on $\mathcal{L}_{su\left( 3\right) }$ and $\mathcal{L}_{su\left( 2\right) }$. \newline
However, as far as the tight binding model idea is concerned, one may consider other applications; one of which concerns the following toy model describing a system based on the lattice $\mathcal{L}_{su\left( 4\right) }$ with dynamical vacancy sites.

\emph{Toy model} \newline
This is a lattice QFT on the $\mathcal{L}_{su\left( 4\right) }$ with dynamical particles and vacancies.
The initial state of the system corresponds to the configuration where the sites of the sublattice $\mathcal{A}_{su\left( 4\right) }$ are occupied by particles and those of the sublattice $\mathcal{B}_{su\left( 4\right) }$ are unoccupied,
\begin{equation}
\begin{tabular}{l|l|l}
{\small sublattice} & {\small initial configuration} & {\small quantum state} \\ \hline
$\mathcal{A}_{su_{4}}$ & {\small particles at }$\mathbf{r}_{m}$ & $\mathbf{A}\left( \mathbf{r}_{m}\right) $ \\
$\mathcal{B}_{su_{4}}$ & {\small vacancy at }$\mathbf{r}_{m}+\mathbf{v}$ & $\mathbf{B}\left( \mathbf{r}_{m}+\mathbf{v}\right) $ \\ \hline
\end{tabular}
\end{equation}
Then, the particles (vacancies) start to move towards the neighboring sites, with the movement modeled by hops to first nearest neighbors. \newline
Let $A\left( \mathbf{r}_{m}\right) $ and $B\left( \mathbf{r}_{m}+\mathbf{v}_{i}\right) $ be the quantum states describing the particle at $\mathbf{r}_{m}$ and the vacancy at $\mathbf{r}_{m}+\mathbf{v}_{i}$ respectively. Let also $A_{\mathbf{r}_{m}}^{\pm }$ and $B_{\mathbf{r}_{m}+\mu _{i}}^{\pm }$ be the corresponding creation and annihilation operators. The hamiltonian describing the hop of the vacancy/particle to the first nearest neighbors is given by
\begin{equation}
\begin{tabular}{ll}
$\mathcal{H}_{su_{4}}=$ & $-t_{1}\left( \dsum\limits_{i=0}^{3}A_{\mathbf{r}_{m}}^{-}B_{\mathbf{r}_{m}+\mu _{i}}^{+}+h.c.\right) $
\end{tabular}
\end{equation}
where the $\mu _{i}$'s are the weight vectors of the fundamental of $su\left( 4\right) $. By performing Fourier transforms of the $A_{\mathbf{r}_{m}}^{\pm }$, $B_{\mathbf{r}_{m}+\mu _{i}}^{\pm }$ field operators, we end up with the dispersion energy
\begin{equation}
\begin{tabular}{ll}
$\left\vert \varepsilon _{su_{4}}\left( k\right) \right\vert =t_{1}\sqrt{4+2\dsum\limits_{i<j}\cos \left( \mathbf{k.V}_{ij}\right) }$ &
\end{tabular}
\end{equation}
with $\mathbf{V}_{ij}$ as in (\ref{v0}-\ref{R}).
This relation depends on the wave vector $\mathbf{k}=\left( k_{x},k_{y},k_{z}\right) $ and the $su\left( 4\right) $ roots.

\section{Conclusion and comments}

In the present paper, we have used the TBM to engineer a broad class of systems that are based on $su\left( N\right) $ Lie algebras and their basic representations. These engineered systems classify the acetylene chain and graphene as $su\left( 2\right) $ and $su\left( 3\right) $ models respectively. Our construction may have other applications, in particular in QFT on non commutative geometry, where space time is a lattice, and in lattice QFT for condensed matter, as exemplified by the dynamical vacancy/particle toy model introduced in subsection \emph{4.2}.\newline
Our $su\left( N\right) $ lattice models rely on the two following basic ingredients:\newline
(\textbf{1}) the $N$ first nearest neighbors in $\mathcal{L}_{su\left( N\right) }$, with positions $\mathbf{v}_{i}=a\mathbf{\mu }_{i}$ and wave functions $\phi _{\mathbf{\mu }_{i}}\left( \mathbf{r}_{m}\right) $, transform in the fundamental representation of $su\left( N\right) $. The $\mathbf{v}_{i}$'s and the weight vectors $\mathbf{\mu }_{i}$ are as in eqs(\ref{bti}-a, \ref{bt}-a),\newline
(\textbf{2}) the $N\left( N-1\right) $ second nearest neighbors in $\mathcal{L}_{su\left( N\right) }$, with positions $\mathbf{V}_{ij}=\mathbf{v}_{i}-\mathbf{v}_{j}$, are, up to a global scale factor, given by the $N\left( N-1\right) $ roots $\mathbf{\beta }_{ij}$ of $su\left( N\right) $.\newline
Using these features, we have constructed a $su\left( N\right) $ lattice model with hamiltonian $\mathcal{H}_{su\left( N\right) }$ (\ref{A1}); it involves operator fields $F_{\mathbf{\mu }_{i}}$ and $G_{\beta }$ transforming respectively in the fundamental and adjoint representations of $su\left( N\right) $. This symmetry captures basic data on the energy spectrum of $\mathcal{H}_{su\left( N\right) }$, as shown by the dispersion energy given by the formulae (\ref{di}-\ref{de}).
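As a quick sanity check of the dispersion formulae just recalled, note that at $\mathbf{k}=0$ every cosine equals $1$, so $\left\vert \varepsilon _{su\left( N\right) }\left( 0\right) \right\vert =t_{1}\sqrt{N+N\left( N-1\right) }=Nt_{1}$, the coordination number of the first-neighbor shell. The short numeric sketch below is ours, not part of the paper's analysis:

```python
import math
from itertools import combinations

def eps_su_n(k, mu, t1=1.0, a=1.0):
    """|eps_su(N)(k)| = t1*sqrt(N + 2*sum_{i<j} cos(a k.(mu_i - mu_j)))."""
    s = sum(math.cos(a * sum(kc * (mi - mj) for kc, mi, mj in zip(k, mu_i, mu_j)))
            for mu_i, mu_j in combinations(mu, 2))
    return t1 * math.sqrt(len(mu) + 2 * s)

# vectors proportional to the su(4) fundamental weights (cube embedding)
mu4 = [(1, 1, 1), (-1, -1, 1), (-1, 1, -1), (1, -1, -1)]
assert abs(eps_su_n((0.0, 0.0, 0.0), mu4) - 4.0) < 1e-12   # N*t1 at k = 0
```

The same function reproduces $\left\vert \varepsilon \left( 0\right) \right\vert =2t_{1}$ and $3t_{1}$ for the $su\left( 2\right) $ and $su\left( 3\right) $ cases.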
\newline
Our proposal may be extended to the other DE simply laced Lie algebras. In the case of $D_{N}\sim so\left( 2N\right) $ for instance, the lattice $\mathcal{L}_{so\left( 2N\right) }$ is $N$-dimensional, generated by the simple roots $\mathbf{\alpha }_{1},\ldots ,\mathbf{\alpha }_{N}$ with intersection matrix
\begin{equation}
\begin{tabular}{llll}
$\mathbf{\alpha }_{i}.\mathbf{\alpha }_{j}$ & $=$ & $\left(
\begin{array}{cccccccc}
2 & -1 & 0 & \cdots & 0 & 0 & 0 & 0 \\
-1 & 2 & -1 & \cdots & 0 & 0 & 0 & 0 \\
0 & -1 & 2 &  & 0 & 0 & 0 & 0 \\
\vdots & \vdots &  & \ddots &  &  & \vdots & \vdots \\
0 & 0 & 0 &  & 2 & -1 & 0 & 0 \\
0 & 0 & 0 &  & -1 & 2 & -1 & -1 \\
0 & 0 & 0 & \cdots & 0 & -1 & 2 & 0 \\
0 & 0 & 0 & \cdots & 0 & -1 & 0 & 2
\end{array}
\right) $ &
\end{tabular}
\end{equation}
Here also, the $\mathcal{L}_{so\left( 2N\right) }$ lattice is made of the superposition of two sublattices $\mathcal{A}_{so\left( 2N\right) }$ and $\mathcal{B}_{so\left( 2N\right) }$. For each site at $\mathbf{r}_{m}$, we have $2N$ first nearest neighbors at $\mathbf{r}_{m}+\mathbf{v}_{I}$ with relative vectors $\mathbf{v}_{I}$, which may be split as
\begin{equation}
\begin{tabular}{llllll}
$\mathbf{v}_{i}$ & , & $\mathbf{v}_{i+N}=-\mathbf{v}_{i}$ & , & $1\leq i\leq N$ &
\end{tabular}
\end{equation}
satisfying the constraint relation
\begin{equation}
\begin{tabular}{llll}
$\dsum\limits_{I=1}^{2N}\mathbf{v}_{I}$ & $=$ & $\dsum\limits_{i=1}^{N}\left( \mathbf{v}_{i}+\mathbf{v}_{i+N}\right) =0$ &
\end{tabular}
\end{equation}
which extends the su$\left( N\right) $ one given by (\ref{bti}-a).
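The $D_{N}$ intersection matrix above can be checked by realizing the simple roots in an orthonormal basis as $\mathbf{\alpha }_{i}=\mathbf{e}_{i}-\mathbf{e}_{i+1}$ for $i<N$ and $\mathbf{\alpha }_{N}=\mathbf{e}_{N-1}+\mathbf{e}_{N}$; the following sketch (ours, for $N=4$, i.e. $so\left( 8\right) $) verifies that their Gram matrix has the announced shape, with $\mathbf{\alpha }_{N-1}.\mathbf{\alpha }_{N}=0$:

```python
# build the D_N simple roots in an orthonormal basis and compute their Gram
# matrix; here N = 4 (so(8)), an illustrative choice of ours
N = 4

def e(i):
    return tuple(1 if j == i else 0 for j in range(N))

def add(u, w, s=1):
    return tuple(a + s * b for a, b in zip(u, w))

def dot(u, w):
    return sum(a * b for a, b in zip(u, w))

alphas = [add(e(i), e(i + 1), -1) for i in range(N - 1)] + [add(e(N - 2), e(N - 1))]
cartan = [[dot(a, b) for b in alphas] for a in alphas]
assert cartan == [[2, -1, 0, 0],
                  [-1, 2, -1, -1],
                  [0, -1, 2, 0],
                  [0, -1, 0, 2]]       # the D_4 Cartan matrix
```

The last two rows show the characteristic $D_{N}$ fork: both $\mathbf{\alpha }_{N-1}$ and $\mathbf{\alpha }_{N}$ attach to $\mathbf{\alpha }_{N-2}$ while being mutually orthogonal.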
In the example of so$\left( 6\right) $, the first nearest neighbors form an octahedron, as depicted in figure (\ref{br}),
\begin{figure}[tbph]
\begin{center}
\hspace{0cm} \includegraphics[width=4cm]{S6}
\end{center}
\par
\vspace{-1cm}
\caption{the lattice $\mathcal{L}_{so\left( 6\right) }${\protect\small ; each (blue) atom has \emph{6} first nearest neighbors constituting an octahedron and transforming in the vector representation of so}$\left( {\protect\small 6}\right) .$ }
\label{br}
\end{figure}
The relative vectors $\mathbf{v}_{I}$ transform in the \underline{$\mathbf{2N}$} vector representation of $so\left( 2N\right) $; they allow one to define the second nearest neighbors with the help of (\ref{bti}-b), which reads in the $so\left( 2N\right) $ case as follows
\begin{equation}
\begin{tabular}{llll}
$\mathbf{V}_{IJ}$ & $=$ & $\left\{
\begin{array}{c}
\pm \left( \mathbf{v}_{i}-\mathbf{v}_{j}\right) \\
\pm \left( \mathbf{v}_{i}+\mathbf{v}_{j}\right)
\end{array}
\right. $ &
\end{tabular}
\label{vv}
\end{equation}
These $\mathbf{V}_{IJ}$'s should be put in one to one correspondence with the $2N\left( N-1\right) $ roots of $so\left( 2N\right) $. Recall that $so\left( 2N\right) $ is $N\left( 2N-1\right) $-dimensional and has rank $N$; that is, $N$ simple roots $\mathbf{\alpha }_{i}$ which read, in terms of the weight vectors of the \underline{$\mathbf{2N}$} representation, like $\mathbf{\alpha }_{i}=\mathbf{\mu }_{i}-\mathbf{\mu }_{i+1}$ for $1\leq i\leq N-1$ and $\mathbf{\alpha }_{N}=\mathbf{\mu }_{N-1}+\mathbf{\mu }_{N}$. The generic roots are given by $\pm \left( \mathbf{\mu }_{i}\pm \mathbf{\mu }_{j}\right) $ with $1\leq i<j\leq N$ and should be compared with (\ref{vv}).
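The counting behind this correspondence is easy to verify: with $\mathbf{\mu }_{i}=\mathbf{e}_{i}$, the vectors $\pm \mathbf{\mu }_{i}\pm \mathbf{\mu }_{j}$ with $i<j$ are exactly the $2N\left( N-1\right) $ roots of $so\left( 2N\right) $. A small enumeration of ours, for $so\left( 6\right) $, i.e. $N=3$:

```python
from itertools import combinations

N = 3  # so(6)

def e(i):
    return tuple(1 if j == i else 0 for j in range(N))

# enumerate +/- mu_i +/- mu_j with i < j, taking mu_i = e_i
roots = set()
for i, j in combinations(range(N), 2):
    for si in (1, -1):
        for sj in (1, -1):
            roots.add(tuple(si * a + sj * b for a, b in zip(e(i), e(j))))

assert len(roots) == 2 * N * (N - 1)   # 12 roots for so(6)
```

The count $2N\left( N-1\right) =\dim so\left( 2N\right) -\text{rank}\,so\left( 2N\right) $ matches the number of second nearest neighbors, in parallel with the $su\left( N\right) $ story.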
In the end, notice that the dispersion energy for the first nearest couplings reads as
\begin{equation}
\left\vert \varepsilon _{so\left( 2N\right) }\left( \mathbf{k}\right) \right\vert =t_{1}\sqrt{2N+2\dsum\limits_{i<j=1}^{N}\cos \left[ a\mathbf{k}.\left( \mathbf{\mu }_{i}\mathbf{-\mu }_{j}\right) \right] +2\dsum\limits_{i<j=1}^{N}\cos \left[ a\mathbf{k}.\left( \mathbf{\mu }_{i}\mathbf{+\mu }_{j}\right) \right] }
\end{equation}
and, like in the $su\left( N\right) $ case (\ref{di}), it also depends on the so$\left( 2N\right) $ roots $\mathbf{\beta }_{ij}=\pm \mathbf{\mu }_{i}\pm \mathbf{\mu }_{j}$. \newline
We end this conclusion by making one more comment concerning some related works on fermions living on a \emph{4d} \emph{hyperdiamond} lattice $\mathcal{H}_{4}$, which has been used in lattice QCD \textrm{\cite{BBTW}}; see also \cite{KM}-\cite{KM4} for extensions\footnote{we thank the referee for pointing out to us the relationship between our lattices $\mathcal{L}_{su\left( D+1\right) }$, based on Lie algebras and constructed in section 2, and the $D$-dimensional hyperdiamond $\mathcal{H}_{D}$ described in this section.}. It is interesting to note that the \emph{4d} \emph{hyperdiamond} lattice $\mathcal{H}_{4}$ used in \textrm{\cite{BBTW}} is precisely $\mathcal{L}_{su\left( 5\right) }$; and the higher dimensional diamonds $\mathcal{H}_{N}$ given in \textrm{\cite{KM}} are exactly the $\mathcal{L}_{su\left( N+1\right) }$ lattices we have discussed in section 2. Moreover, several features obtained for the \emph{N}-dimensional hyperdiamond $\mathcal{H}_{N}$ with $N\geq 2$ are just algebraic relations on the weight and root systems of the $su\left( N+1\right) $ Lie algebra, with discrete symmetries generated by Weyl group transformations given by the $\mathcal{S}_{N+1}$ permutation group.
This is the case, for instance, of the remarkable relation
\begin{equation}
\begin{tabular}{llll}
$\cos \vartheta _{ij}=\frac{\mathbf{a}_{i}.\mathbf{a}_{j}}{\left\Vert \mathbf{a}_{i}\right\Vert .\left\Vert \mathbf{a}_{j}\right\Vert }=\frac{1}{2}$ & , & $i\neq j=1,...,N$ &
\end{tabular}
\label{cos}
\end{equation}
derived in \textrm{\cite{KM}}, where $\vartheta _{ij}=\left( \widehat{\mathbf{a}_{i},\mathbf{a}_{j}}\right) $ are the angles between the primitive vectors $\mathbf{a}_{i}=\mathbf{e}_{i}-\mathbf{e}_{5}$ of the lattice $\mathcal{H}_{N}$ and where the $\mathbf{a}_{i}$'s stand for the generators of the sublattices $\mathcal{A}_{N}$ (resp. $\mathcal{B}_{N}$) of the \emph{N}-dimensional hyperdiamond $\mathcal{H}_{N}$. Notice that eq(\ref{cos}) is independent of the lattice dimension and of the orientation of the primitive vectors. A way to prove the universality of this relation is to relate it to basic relations of Lie algebras. A lengthy, but straightforward, analysis shows that the \emph{5} bond vectors $\mathbf{e}_{i}$ and the \emph{4} primitive vectors $\mathbf{a}_{i}$ used in \textrm{\cite{BBTW}} are respectively related to the \emph{5} weight vectors $\mathbf{\mu }_{i}$ of the fundamental representation of $su\left( 5\right) $ and its \emph{4} simple roots $\mathbf{\alpha }_{i}$ as follows
\begin{equation}
\begin{tabular}{llll}
$\mathbf{e}_{i}=\frac{\sqrt{5}}{2}\mathbf{\mu }_{i}$, & $\mathbf{\mu }_{i}.\mathbf{\mu }_{i}=\frac{4}{5}$, & $\mathbf{\mu }_{i}.\mathbf{\mu }_{j}=-\frac{1}{5},$ &
\end{tabular}
\end{equation}
and
\begin{equation}
\begin{tabular}{ll}
$\mathbf{a}_{1}$ & $=-\frac{\sqrt{5}}{2}\mathbf{\alpha }_{1}$ \\
$\mathbf{a}_{2}$ & $=-\frac{\sqrt{5}}{2}\left( \mathbf{\alpha }_{1}+\mathbf{\alpha }_{2}\right) $ \\
$\mathbf{a}_{3}$ & $=-\frac{\sqrt{5}}{2}\left( \mathbf{\alpha }_{1}+\mathbf{\alpha }_{2}+\mathbf{\alpha }_{3}\right) $ \\
$\mathbf{a}_{4}$ & $=-\frac{\sqrt{5}}{2}\left( \mathbf{\alpha }_{1}+\mathbf{\alpha }_{2}+\mathbf{\alpha }_{3}+\mathbf{\alpha }_{4}\right) $
\end{tabular}
\end{equation}
From these realizations, the constraint equation $\sum \mathbf{e}_{i}=0$ corresponds to the property $\sum \mathbf{\mu }_{i}=0$; see also (\ref{bt}). Moreover, eq(\ref{cos}) can be read in terms of the $su\left( 5\right) $ Cartan matrix $K_{ij}=\frac{2\mathbf{\alpha }_{i}.\mathbf{\alpha }_{j}}{\mathbf{\alpha }_{i}.\mathbf{\alpha }_{i}}$.

\begin{acknowledgement}
L.B Drissi would like to thank ICTP for the Associateship program.
\end{acknowledgement}
\section{Introduction} The elements of the Cabibbo-Kobayashi-Maskawa (CKM) matrix, which relates the mass eigenstates and the weak eigenstates in the Standard Model electroweak theory, are fundamental parameters of the theory. To determine each CKM matrix element, one requires both theoretical and experimental inputs. On the theoretical side, one needs to know the relevant hadronic amplitudes, which often contain nonperturbative QCD effects. A major role of lattice QCD is to calculate such hadronic amplitudes reliably and accurately, from first principles. One can then extract the CKM matrix elements by combining lattice QCD as the theoretical input with experimental input such as decay rates. In this paper, we show that it is now possible to {\it fully} determine the CKM matrix, for the first time, using lattice QCD. The result for the full CKM matrix with lattice QCD is: \begin{eqnarray} V_{\rm CKM} ~=~ \left( \begin{array}{ccc} {{|V_{ud}|}} & {|V_{us}|} & {|V_{ub}|} \\ 0.9744(5)(3) & ~~~0.225(2)(1)~~~ & 3.5(5)(5)\!\times\! 10^{-3} \\ { |V_{cd}| } & {|V_{cs}|} & {|V_{cb}|} \\ ~~~0.24(3)(2)~~~ & ~~~~0.97(10)(2)~~~~ & 3.9(1)(3)\!\times\! 10^{-2} \\ { |V_{td}| } & {|V_{ts}|} & {|V_{tb}|} \\ 8.1(2.7)\!\times\! 10^{-3} & 3.8(4)(3)\!\times\! 10^{-2} & 0.9992(0)(1) \\ \end{array} \right) \label{ckm} \end{eqnarray} where the first errors are from lattice calculations and the second are experimental, except the one for $|V_{td}|$ which is a combined lattice and experimental error. The results for the Wolfenstein parameters with lattice QCD are: \begin{eqnarray} \lambda = |V_{us}|,~~~~ A = 0.77(2)(7) ,~~~~ \rho = 0.16(28) ,~~~~ \eta = 0.36(11).
\label{Wolf} \end{eqnarray} To directly determine 5 CKM matrix elements ($|V_{cd}|$, $|V_{cs}|$, $|V_{ub}|$, $|V_{cb}|$ and $|V_{us}|$), we use 5 semileptonic decays ($D\rightarrow \pi l\nu$, $D\rightarrow K l\nu$, $B\rightarrow \pi l\nu$, $B\rightarrow D l\nu$ and $K\rightarrow \pi l\nu$), for which the techniques for lattice calculations are well established, and thus reliable calculations are possible. The accuracy of previous lattice calculations was limited by two large systematic uncertainties --- the error from the ``quenched'' approximation (neglect of virtual quark loop effects) and the error from the ``chiral'' extrapolation in the light quark mass ($m_l\rightarrow m_{ud}$). Both led to effects of around 10--20\%. Recent work by the Fermilab Lattice, MILC, and HPQCD Collaborations~\cite{Aubin:2004ej,Okamoto:2004xg} successfully reduces these two dominant uncertainties. The error from the quenched approximation is removed by using the MILC unquenched gauge configurations~\cite{milc}, where the effect of $u,d$ and $s$ quarks is included ($n_f=2+1$). The error from the chiral extrapolation is greatly reduced by using improved staggered quarks. With this improved approach, the accuracy of the 5 CKM matrix elements is comparable to that of the Particle Data Group~\cite{Eidelman:wy}. The results for $|V_{ub}|$, $|V_{cb}|$ and $|V_{us}|$ are preliminary. We then use CKM unitarity to determine the other 4 CKM matrix elements ($|V_{ud}|$, $|V_{tb}|$, $|V_{ts}|$ and $|V_{td}|$). In this way, we obtain all 9 CKM matrix elements and all the Wolfenstein parameters. The results for the $D$ and $B$ decays have been presented in Refs.~\cite{Aubin:2004ej,Okamoto:2004xg}. This work is part of an ongoing project on flavor physics with lattice QCD by the Fermilab Lattice, MILC, and HPQCD Collaborations; see also \cite{Simone:2004fr,diPierro:2003iw,Okamoto:2004df}.
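As an internal consistency check (ours, not part of the collaborations' analysis code), the Wolfenstein parameters in (\ref{Wolf}) can be compared with the matrix elements in (\ref{ckm}) through the leading-order relations $|V_{us}|=\lambda$, $|V_{cb}|=A\lambda^2$, $|V_{ub}|\simeq A\lambda^3\sqrt{\rho^2+\eta^2}$ and $|V_{td}|\simeq A\lambda^3\sqrt{(1-\rho)^2+\eta^2}$, with no error propagation attempted:

```python
import math

# central values quoted in eqs. (ckm) and (Wolf)
lam, A, rho, eta = 0.225, 0.77, 0.16, 0.36

V_cb = A * lam ** 2
V_ub = A * lam ** 3 * math.sqrt(rho ** 2 + eta ** 2)
V_td = A * lam ** 3 * math.sqrt((1 - rho) ** 2 + eta ** 2)

# compare with the directly determined central values (leading order only)
assert abs(V_cb - 3.9e-2) < 2e-3
assert abs(V_ub - 3.5e-3) < 1e-3
assert abs(V_td - 8.1e-3) < 1e-3
```

The agreement at the level of the quoted central values illustrates the internal consistency of the lattice determination.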
\section{5 CKM matrix elements from 5 semileptonic decays} \label{sec:SLdecay} \subsection{$D\rightarrow\pi(K)l\nu$, $|V_{cd(s)}|$ and $B\rightarrow\pi l\nu$, $|V_{ub}|$} The differential decay rate $d\Gamma/dq^2$ for the heavy-to-light semileptonic decay $H\rightarrow Pl\nu$ is proportional to $|V_{ij}|^2 |f_+(q^2)|^2$, where $f_+$ is a form factor of the relevant hadronic amplitude defined through \begin{eqnarray} \< P | V^\mu | H \> &=& f_+(q^2) (p_H+p_P-\Delta)^\mu + f_0(q^2) \Delta^\mu \label{eq:HLff}\\ &=& \sqrt{2m_H} \, \left[v^\mu \, f_\parallel(E) + p^\mu_\perp \, f_\perp(E) \right]. \nonumber \end{eqnarray} Here $q = p_H - p_P$, $\Delta^\mu=(m_H^2-m_P^2)\, q^\mu / q^2$, $v=p_H/m_H$, $p_\perp=p_P-Ev$ and $E=E_P$. To determine the CKM matrix element $|V_{ij}|$ with the experimental rate $\int^{q^2_{\rm max}}_{q^2_{\rm min}}dq^2\ (d\Gamma/dq^2)$, we calculate $f_{+,0}$ as a function of $q^2$. Below we briefly describe the analysis procedure in Refs.~\cite{Aubin:2004ej,Okamoto:2004xg}. We first extract the form factors $f_\parallel$ and $f_\perp$, and interpolate and extrapolate the results for $f_\parallel$ and $f_\perp$ to common values of $E$ using the parametrization of Becirevic and Kaidalov (BK) \cite{Becirevic:1999kt}. We then perform the chiral extrapolation ($m_l\rightarrow m_{ud}$) at each $E$ using the NLO correction in staggered chiral perturbation theory~\cite{Aubin:2004xd}. Finally we convert the results for $f_{\perp}$ and $f_{\parallel}$ at $m_l=m_{ud}$, to $f_+$ and $f_0$. The results for $f_+$ and $f_0$ are parameterized with the BK form~\cite{Becirevic:1999kt}, \begin{eqnarray}\label{eq:BK} f_+(q^2) = \frac{f_+}{(1-\tilde{q}^2)(1-\alpha\tilde{q}^2)},~~~ f_0(q^2) = \frac{f_+}{1-\tilde{q}^2/\beta}, \end{eqnarray} where $\tilde{q}^2=q^2/m_{H^{*}}^2$. 
We obtain~\cite{Aubin:2004ej,Okamoto:2004xg} $ f_+^{D\rightarrow \pi}=0.64(3),~ \alpha^{D\rightarrow \pi}=0.44(4),~ \beta^{D\rightarrow \pi}=1.41(6)$ and $f_+^{D\rightarrow K}=0.73(3),~ \alpha^{D\rightarrow K}=0.50(4),~ \beta^{D\rightarrow K}=1.31(7) $ for the $D$ decays, and $ f_+^{B\rightarrow\pi}=0.23(2),~ \alpha^{B\rightarrow\pi}=0.63(5),~ \beta^{B\rightarrow\pi}=1.18(5) $ for the $B$ decay, where the errors are statistical only. To estimate the error from the BK parameterization, {\it i.e.,} the error in the $q^2$ dependence, we also perform an alternative analysis, in which we carry out a 2-dimensional polynomial fit in $\left(m_l,E(q^2)\right)$. A comparison between the two analyses is shown in Fig.~\ref{fig:B2pi}. The results for the $D$ decays agree well with recent experimental results \cite{unknown:2004nn}. \begin{figure}[tb] \begin{center} \centerline{ \epsfig{file=f0+.D2pi.BK-noBK.eps,height=37mm} \epsfig{file=f0+.B2pi.BK-noBK.eps,height=37mm}} \vspace{-.3cm} \caption{Form factors from BK-based (filled symbols and curves) and non-BK-based (open symbols) analyses for $D\rightarrow\pi l\nu$ (left panel) and $B\rightarrow\pi l\nu$ (right) decays.} \label{fig:B2pi} \vspace{-1cm} \end{center} \end{figure} We then determine the CKM matrix elements by integrating $|f_+(q^2)|^2$ over $q^2$ and using the experimental decay rates~\cite{Eidelman:wy,Athar:2003yg,Belle:B2pi}. For $|V_{ub}|$ we use a combined average of the decay rate for $q^2 \ge 16$ GeV$^2$ from Refs.~\cite{Athar:2003yg} and \cite{Belle:B2pi}. We obtain \begin{eqnarray} ~~~|V_{cd}| = 0.239(10)(24)(20) \,\,\,\,\, ,\,\,\,\,\, |V_{cs}| = 0.969(39)(94)(24) \label{eq:VcdVcs} \end{eqnarray} from the $D$ decays, and \begin{eqnarray} ~~~|V_{ub}|\times 10^{3} = 3.48(29)(38)(47) \label{eq:Vub} \end{eqnarray} from the $B$ decay, where the first errors are statistical, the second systematic, and the third are experimental errors from the decay rates. 
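Since the rate extraction integrates $|f_+(q^2)|^2$, it is convenient to have the BK parameterization, Eq.~(\ref{eq:BK}), available as a function. The following is a minimal Python sketch (not part of the original analysis) using the $D\rightarrow\pi$ central fit values quoted above with errors dropped; it also checks the built-in kinematic constraint $f_+(0)=f_0(0)$:

```python
# Becirevic-Kaidalov (BK) parameterization of heavy-to-light form factors.
# qt2 is the scaled momentum transfer q^2 / m_{H*}^2.

def bk_form_factors(qt2, f_plus, alpha, beta):
    """Return (f_+, f_0) at qt2 = q^2 / m_{H*}^2 in the BK parameterization."""
    fp = f_plus / ((1.0 - qt2) * (1.0 - alpha * qt2))
    f0 = f_plus / (1.0 - qt2 / beta)
    return fp, f0

# D -> pi central values from the text (statistical errors dropped)
f_plus, alpha, beta = 0.64, 0.44, 1.41

# At q^2 = 0 both form factors reduce to the common normalization f_+(0) = f_0(0)
fp0, f00 = bk_form_factors(0.0, f_plus, alpha, beta)
print(fp0, f00)  # both equal 0.64
```

Both form factors rise monotonically toward $q^2_{\rm max}$, so the bulk of the rate integral comes from large $q^2$, which is where the lattice data are most precise.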
The systematic errors are dominated by finite lattice spacing effects, {\it i.e.,} the lattice discretization effects; see Table~\ref{tab:error}. The results for the CKM matrix elements agree with the Particle Data Group averages~\cite{Eidelman:wy} with comparable accuracy. \begin{table}[b] \begin{center} \caption{Systematic errors in lattice calculations. For comparison, the error for each CKM matrix element from the Particle Data Group~\protect\cite{Eidelman:wy} is shown in the last row.} {\begin{tabular}{l|rrrr} \hline semileptonic decay &$D\rightarrow \pi(K)l\nu$ & $B\rightarrow \pi l\nu$ & $B\rightarrow D l\nu$ & $K\rightarrow \pi l\nu $\\ CKM matrix element &$|V_{cd(s)}|$ &$|V_{ub}|$ &$|V_{cb}|$ & $|V_{us}|$\\ \hline $q^2$ dependence & 2\% & 4\% & & $<$1\%\\ $m_l\!\rightarrow\! m_{ud}$ extrapolation& 3\%(2\%) &4\% & 1\% & 1\%\\ operator matching & $<$1\% & 1\% & 1\% & $<$1\%\\ {discretization effects} & { 9\%}& { 9\%} & $<$1\% & \\ \hline total systematic error & { 10\%} & { 11\%} & { 2\%} & {1\%}\\ \hline \hline \vspace{-.4cm}\\ error in PDG average & 5\%(1\%) & 13\% & 4\% & 1\%\\ \hline \end{tabular}\label{tab:error}} \end{center} \end{table} \subsection{$B\rightarrow D l\nu$, $|V_{cb}|$ and $K\rightarrow \pi l\nu$, $|V_{us}|$} The differential decay rate of $B\rightarrow D l\nu$ is proportional to the square of $|V_{cb}| {\cal{F}}(w)$, where ${\cal{F}}(w)$ is the relevant form factor and $w=v\cdot v'$ with $v=p_B/m_B$, $v'=p_D/m_D$. To extract $|V_{cb}|$, we calculate the form factor at $w=1$, ${\cal{F}}(1)$, by employing the double ratio method \cite{Hashimoto:1999yp}. The light quark mass dependence of ${\cal{F}}(1)$ is mild, and by extrapolating the result linearly to $m_l\rightarrow 0$ we obtain~\cite{Okamoto:2004xg} $ {\cal{F}}^{B\rightarrow D}(1)=1.074(18)(16), $ where the first error is statistical, and the second is systematic. 
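Once ${\cal F}^{B\rightarrow D}(1)$ is known, $|V_{cb}|$ follows by dividing an experimentally measured product $|V_{cb}|{\cal F}(1)$ by the lattice value. A one-line sketch of this step (the experimental product $0.0420$ below is an illustrative placeholder, not a value quoted in this text):

```python
# |V_cb| from B -> D l nu: divide the measured |V_cb| * F(1) by the lattice F(1).
F1_lattice = 1.074      # F^{B->D}(1) central value quoted in the text
vcb_F1_exp = 0.0420     # illustrative |V_cb| * F(1) from experiment (placeholder)

vcb = vcb_F1_exp / F1_lattice
print(f"|V_cb| = {vcb:.4f}")   # ~ 0.0391
```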
Combining with the experimental result for $|V_{cb}|{\cal F}(1)$~\cite{HFAG}, we obtain \begin{eqnarray} |V_{cb}|\times 10^{2} = 3.91(07)(06)(34), \label{eq:Vcb} \end{eqnarray} where the first two errors, from the lattice calculation, are smaller than the experimental one (third). \begin{figure}[tb] \begin{center} \epsfig{file=am-df.k2p.eps,height=40mm} \vspace{-.5cm} \caption{$m_l$-dependence of $\Delta f$ for $K\rightarrow\pi l\nu$ decay, together with results from Refs.~\protect\cite{Becirevic:2004ya,Leutwyler:1984je}.} \label{fig:K2pi} \vspace{-.6cm} \end{center} \end{figure} Finally we study the $K\rightarrow\pi l\nu$ decay to determine $|V_{us}|$. The expression for the $K\rightarrow\pi$ decay amplitude is given in a way analogous to Eq.~(\ref{eq:HLff}). We calculate the $K^0\rightarrow\pi^-$ form factor at $q^2=0$, $f_+(0)=f_0(0)$, by employing the three-step method, as in Ref.~\cite{Becirevic:2004ya}. To perform the $m_l\!\rightarrow\! m_{ud}$ extrapolation for $f_+(0)$, we subtract the leading logarithmic correction $f_2$ in chiral perturbation theory, {\it i.e.,} we define $\Delta f\equiv 1+f_2-f_+(0)$. We fit $\Delta f$ adopting the ansatz $\Delta f = (A +B m_l)(m_s-m_l)^2$, where $A,B$ are fit parameters. The $m_l$-dependence of $\Delta f$ and the extrapolated result are shown in Fig.~\ref{fig:K2pi}, together with a recent quenched lattice result~\cite{Becirevic:2004ya} and an earlier result by Leutwyler and Roos~\cite{Leutwyler:1984je}. Our preliminary result~\cite{Okamoto:2004df} is $\Delta f = 0.015(6)(9)$, giving $ {f}_{+}^{K^0\rightarrow\pi^-}(0) = 0.962(6)(9), $ which agrees well with those of Refs.~\cite{Becirevic:2004ya,Leutwyler:1984je}. Combining with a recent experimental result for $|V_{us}|{f}_{+}(0)$~\cite{Alexopoulos:2004sw}, we obtain \begin{eqnarray} |V_{us}| = 0.2250(14)(20)(12). 
\label{eq:Vus} \end{eqnarray} \section{Other 4 CKM matrix elements using unitarity and Wolfenstein parameters}\label{sec:unitary} Having the 5 CKM matrix elements directly determined from the 5 semileptonic decays, we can check unitarity of the second row of the CKM matrix. Using Eqs.~(\ref{eq:VcdVcs}) and (\ref{eq:Vcb}), we get \begin{eqnarray} ({|V_{cd}|^2+|V_{cs}|^2+|V_{cb}|^2})^{1/2}={ 1.00(10)(2)}, \end{eqnarray} which is consistent with unitarity. Hereafter the first error is from the lattice calculation and the second is experimental, unless otherwise stated. We now use unitarity of the CKM matrix to determine the other 4 CKM matrix elements. $|V_{ud}|$, $|V_{tb}|$ and $|V_{ts}|$ are easily determined: \begin{eqnarray} |V_{ud}| &=& (1 - |V_{us}|^2 - |V_{ub}|^2)^{1/2} ~=~ 0.9744(5)(3), \\ |V_{tb}| &=& (1 - |V_{ub}|^2 - |V_{cb}|^2)^{1/2} ~=~ 0.9992(0)(1), \\ |V_{ts}| &=& |V_{us}^{*}V_{ub} + V_{cs}^{*}V_{cb}|\ /\ |V_{tb}| ~\simeq~ |V_{cs}^{*}V_{cb}|\ /\ |V_{tb}| ~=~ 3.8(4)(3)\!\times\! 10^{-2}. \end{eqnarray} Eqs.~(\ref{eq:Vus}), (\ref{eq:Vcb}) and (\ref{eq:Vub}) give some of the Wolfenstein parameters, \begin{eqnarray} \lambda &=& |V_{us}| ~~~~~~~~~~=\!\!\!\! \Vus\!\!\!\!\!\!, \\ A &=& |V_{cb}|/\lambda^2 ~~~~~~=~ 0.77(2)(7), \\ (\rho^2 + \eta^2)^{1/2} &=& |V_{ub}| / (A\lambda^3) ~=~ 0.40(6)(6). \label{eq:WR} \end{eqnarray} To extract $|V_{td}|$ and $(\rho,\eta)$, we use the experimental result for $\sin(2\beta)$ from $B\rightarrow (c\bar{c}) K^{(*)}$ decays. From a unitary triangle analysis with $\sin(2\beta)=0.726(37)$~\cite{HFAG} and Eq.~(\ref{eq:WR}), we obtain \begin{eqnarray} \rho ~=~ 0.16(28), &&~ \eta ~=~ 0.36(11), \\ |V_{td}| &=& 8.1(2.7)\!\times\! 10^{-3} \end{eqnarray} with a combined lattice and experimental error, completing the {full} CKM matrix. \section*{Acknowledgments} We thank all members of the Fermilab Lattice, MILC and HPQCD Collaborations, in particular, the authors in Ref.~\cite{Aubin:2004ej}. 
This work is supported by the Fermilab Computing Division, the SciDAC program, and FermiQCD~\cite{DiPierro:2003sz}. Fermilab is operated by Universities Research Association Inc., under contract with the U.S. Department of Energy.
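For readers who want to reproduce the unitarity determinations of Section~\ref{sec:unitary}, the arithmetic can be sketched in a few lines of Python using the five directly determined central values quoted above (central values only, no error propagation; the $|V_{us}^{*}V_{ub}|$ contribution to $|V_{ts}|$ is neglected, as in the text):

```python
import math

# Central values of the five directly determined elements (from the text)
Vus, Vub, Vcb = 0.2250, 3.48e-3, 3.91e-2
Vcd, Vcs = 0.239, 0.969

# First-row and third-column unitarity
Vud = math.sqrt(1 - Vus**2 - Vub**2)        # ~ 0.9744
Vtb = math.sqrt(1 - Vub**2 - Vcb**2)        # ~ 0.9992
Vts = Vcs * Vcb / Vtb                       # ~ 3.8e-2 (|V_us V_ub| term neglected)

# Second-row normalization check
row2 = math.sqrt(Vcd**2 + Vcs**2 + Vcb**2)  # ~ 1.00

# Wolfenstein parameters
lam = Vus
A = Vcb / lam**2                            # ~ 0.77
rho_eta = Vub / (A * lam**3)                # sqrt(rho^2 + eta^2) ~ 0.40

print(Vud, Vtb, Vts, row2, A, rho_eta)
```

These central values reproduce the entries of Eq.~(\ref{ckm}) and the Wolfenstein parameters of Eq.~(\ref{Wolf}) to the quoted precision.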
\section{Introduction} Non-Hermitian operators have been introduced phenomenologically as effective Hamiltonians to fit experimental data in various fields of physics \cite{Gamow,Dattoli,Okolowicz,Moiseyev,Muga}. In spite of the important role played by non-Hermitian operators in different branches of physics, they did not receive due attention from the physics community until the discovery of non-Hermitian Hamiltonians with parity-time symmetry, which have a real spectrum \cite{Bender}. This discovery has boosted research on the complex extension of quantum mechanics on a fundamental level \cite{Ann,JMP1,JPA1,JPA2,PRL1,JMP2,JMP3,JMP4,JPA3,JPA4,JPA5}. A non-Hermitian Hamiltonian can possess peculiar features that have no Hermitian counterpart. A typical one is the spectral singularity (or exceptional point for a finite system), which is a mathematical concept. It has gained a lot of attention recently \cite{PRA1,PRB1,Ali3,PRA3,JMP5,PRD1,PRA4,PRA5,PRA6}, motivated by the possible physical relevance of this concept since the pioneering work of Mostafazadeh \cite{PRL3}. The majority of previous works focus on non-Hermitian systems arising from complex potentials, mean-field nonlinearity \cite{PRA2,JPA6,Ali3,PRA13,prd2,prd3,prd4,prd5,prd6,prd7,prd8}, as well as imaginary hopping integrals \cite{PRA14}. In this paper, we investigate the physical relevance of spectral singularities for a non-Hermitian interacting many-particle system. The non-Hermiticity arises from an imaginary interaction strength. For the two-particle case, the exact solution shows that there exists a series of spectral singularities, forming a spectrum of singularities associated with the central momentum of the two particles. We consider the dynamics of two bosons as well as fermions in a one-dimensional system with imaginary delta interaction strength. It shows that the two-particle collision leads to an amplitude reduction of the wave function. 
For a fermion pair, the amplitude reduction depends on the spin configuration of the two particles. Remarkably, in both cases, the residual amplitude can vanish only when the relative group velocity of two single-particle Gaussian wave packets with equal width reaches the magnitude of the interaction strength. This phenomenon of complete particle-pair annihilation is a direct result of the spectral singularity. We also discuss the complete annihilation of a singlet fermion pair and of a maximally two-mode entangled boson pair based on the second quantization formalism. This paper is organized as follows. In Section \ref{Hamiltonian and solutions}, we present the model Hamiltonian and its exact solution. In Section \ref{Dynamical signature}, we construct a local boson-pair state as the initial state, which allows us to calculate the time evolution. Based on this, we reveal the connection between the phenomenon of complete pair annihilation and the spectral singularity. In Section \ref{Second quantization representation}, we extend our study to a singlet fermion pair and a maximally two-mode entangled boson pair based on the second quantization formalism. Finally, we give a summary in Section \ref{Summary}. \section{Hamiltonian and solutions} \label{Hamiltonian and solutions} We start with a one-dimensional system of two distinguishable particles with imaginary delta interaction. The solution can be used to construct the eigenstates of two-fermion and two-boson systems. The Hamiltonian has the form \begin{equation} H_{\mathrm{2p}}=-\frac{1}{2}\left( \frac{\partial ^{2}}{\partial x_{1}^{2}}+\frac{\partial ^{2}}{\partial x_{2}^{2}}\right) -i2\gamma \delta \left( x_{1}-x_{2}\right) , \end{equation} where $\gamma >0$ and we use dimensionless units $e=\hbar =m=1$ for simplicity. 
Introducing new variables $R$ and $r$, where \begin{eqnarray} R &=&(x_{1}+x_{2})/2, \\ r &=&x_{1}-x_{2}, \notag \end{eqnarray} we obtain the following Hamiltonian \begin{equation} H_{\mathrm{2p}}=H_{\mathrm{R}}+H_{\mathrm{r}}, \end{equation} with \begin{eqnarray} H_{\mathrm{R}} &=&-\frac{\partial ^{2}}{4\partial R^{2}}, \\ H_{\mathrm{r}} &=&-\frac{\partial ^{2}}{\partial r^{2}}-i2\gamma \delta \left( r\right) . \notag \end{eqnarray} Here $R$ is the center-of-mass coordinate and $r$ is the relative coordinate. The Hamiltonian separates into a center-of-mass part and a relative part, and can be solved exactly. The eigenfunctions of the center-of-mass motion $H_{\mathrm{R}}$ are simply plane waves, while the Hamiltonian $H_{\mathrm{r}}$ is equivalent to that of a single particle in an imaginary delta potential, which has been solved exactly in Ref.~\cite{Ali1}. The eigenfunctions of the original Hamiltonian can then be obtained and expressed as \begin{eqnarray} \psi _{+}\left( K,k,x_{1},x_{2}\right) =e^{iK\left( x_{1}+x_{2}\right) /2}\left\{ \cos \left[ k\left( x_{1}-x_{2}\right) \right] \right. && \label{WF_even} \\ \left. -\frac{i\gamma }{k}\sin \left[ k\left( x_{1}-x_{2}\right) \right] \text{\textrm{sign}}\left( x_{1}-x_{2}\right) \right\} , && \notag \end{eqnarray} in symmetrical form, and \begin{equation} \psi _{-}\left( K,k,x_{1},x_{2}\right) =e^{iK\left( x_{1}+x_{2}\right) /2}\sin \left[ k\left( x_{1}-x_{2}\right) \right] , \label{WF_odd} \end{equation} in antisymmetrical form. The corresponding energy is \begin{equation} E\left( K,k\right) =K^{2}/4+k^{2}, \end{equation} with the central and relative momenta $K,k\in \left( -\infty ,\infty \right) $. The symmetrical wavefunction $\psi _{+}\left( K,k,x_{1},x_{2}\right) $ is the spatial part of the wavefunction for two bosons or for two fermions in a singlet pair, while the antisymmetrical wavefunction $\psi _{-}\left( K,k,x_{1},x_{2}\right) $ applies only to two triplet fermions. 
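The even eigenfunctions can be checked directly: integrating the eigenvalue equation for $H_{\mathrm{r}}$ across $r=0$ gives the matching condition $\psi'(0^{+})-\psi'(0^{-})=-i2\gamma\,\psi(0)$ for the relative wavefunction. A small numerical sketch (with illustrative, hypothetical parameter values) verifies this for the relative part $f(r)=\cos(kr)-\frac{i\gamma}{k}\sin(k|r|)$ of the even solution:

```python
import cmath

gamma, k = 1.3, 0.7   # illustrative values, not taken from the text

def f(r):
    """Even relative eigenfunction of H_r = -d^2/dr^2 - 2i*gamma*delta(r)."""
    return cmath.cos(k * r) - (1j * gamma / k) * cmath.sin(k * abs(r))

# One-sided derivatives at the origin via finite differences
h = 1e-7
fp_right = (f(h) - f(0.0)) / h      # -> -i*gamma
fp_left  = (f(0.0) - f(-h)) / h     # -> +i*gamma

# Derivative jump matches the delta-function matching condition
jump = fp_right - fp_left
print(jump, -2j * gamma * f(0.0))   # both approximately -2.6j
```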
Before starting the investigation of the dynamics of two-particle collisions, we would like to point out that there exist spectral singularities in the present Hamiltonian. They arise from the same mechanism as that in single-particle systems \cite{Samsonov,Ali1}. We can see that the eigenfunctions with even parity and momentum $k=-\gamma $ can be expressed in the form \begin{eqnarray} \psi _{\text{ss}}\left( K\right) &\equiv &\psi _{+}\left( K,-\gamma ,x_{1},x_{2}\right) \\ &=&e^{iK\left( x_{1}+x_{2}\right) /2}e^{-i\gamma \left\vert x_{1}-x_{2}\right\vert }, \notag \end{eqnarray} with energy \begin{equation} E_{\text{ss}}\left( K\right) =K^{2}/4+\gamma ^{2}. \end{equation} We note that the function $\psi _{\text{ss}}\left( K\right) $ satisfies \begin{equation} \underset{x_{1}-x_{2}\rightarrow \pm \infty }{\lim }\left[ \frac{\partial \psi _{\text{ss}}\left( K\right) }{\partial \left( x_{1}-x_{2}\right) }\pm i\gamma \psi _{\text{ss}}\left( K\right) \right] =0, \end{equation} which accords with the definition of the spectral singularity in Ref.~\cite{Ali3}. It shows that there exists a series of spectral singularities associated with the energies $E_{\text{ss}}\left( K\right) $ for $K\in \left( -\infty ,\infty \right) $, which constitutes a spectrum of spectral singularities. We will demonstrate in the following section that such a singularity spectrum leads to a peculiar dynamical behavior of a local boson pair or, equivalently, a singlet fermion pair. \section{Dynamical signature} \label{Dynamical signature} \subsection{Construction of initial state} The emergence of the spectral singularity induces a mathematical obstruction for the calculation of the time evolution of a given initial state, since it spoils the completeness of the eigenfunctions and prevents the eigenmode expansion. Nevertheless, the completeness of the eigenfunctions is not necessary for the time evolution of a state with a given set of expansion coefficients. 
It does not cause any difficulty in deriving the time evolution of an initial state given by an arbitrary combination of the eigenfunctions. Namely, any linear combination of the function set $\left\{ \psi _{+}\left( K,k,x_{1},x_{2}\right) \right\} $ or $\left\{ \psi _{-}\left( K,k,x_{1},x_{2}\right) \right\} $ can serve as an initial state, and its time evolution can be obtained simply by adding the factor $e^{-iE\left( K,k\right) t}$. In order to investigate the dynamical consequence of the singularity spectrum, we consider the time evolution of an initial state of the form \begin{equation} \Psi \left( x_{1},x_{2},0\right) =\frac{1}{\sqrt{\Lambda }}\int_{-\infty }^{\infty }\int_{-\infty }^{\infty }G\left( K\right) g\left( k\right) \psi _{+}\left( K,k,x_{1},x_{2}\right) \text{\textrm{d}}K\text{\textrm{d}}k, \end{equation} where $\Lambda $ is the normalization factor, which will be given in the following, and \begin{eqnarray} G\left( K\right) &=&\exp \left[ -\frac{1}{2\alpha ^{2}}(K-K_{0})^{2}-iKR_{0}\right] , \\ g\left( k\right) &=&\exp \left[ -\frac{1}{2\beta ^{2}}(k-k_{0})^{2}-ikr_{0}\right] . \end{eqnarray} Here $\alpha ,\beta ,r_{0},k_{0}>0$ and $K_{0}$ is an arbitrary real number. We explicitly have \begin{equation} \Psi \left( x_{1},x_{2},0\right) =\frac{\pi \alpha \beta }{2\sqrt{\Lambda }k_{0}}\left[ \left( k_{0}+\gamma \right) \exp \theta _{+}+\left( k_{0}-\gamma \right) \exp \theta _{-}\right] , \end{equation} where \begin{eqnarray} \theta _{\pm } &=&-\frac{\alpha ^{2}}{2}\left( \frac{x_{1}+x_{2}}{2}-R_{0}\right) ^{2}-\frac{\beta ^{2}}{2}\left( \left\vert x_{1}-x_{2}\right\vert +r_{0}\right) ^{2} \\ &&+i\left( K_{0}\frac{x_{1}+x_{2}}{2}\mp k_{0}\left\vert x_{1}-x_{2}\right\vert -k_{0}r_{0}+K_{0}R_{0}\right) . \notag \end{eqnarray} Furthermore, from the identity \begin{eqnarray} &&\alpha ^{2}\left( x_{1}+x_{2}+A\right) ^{2}+4\beta ^{2}\left( x_{1}-x_{2}+B\right) ^{2}=\alpha ^{2}\left[ \left( x_{1}+A\right) ^{2}+\left( x_{2}+A\right) ^{2}-A^{2}\right] \\ &&+4\beta ^{2}\left[ \left( x_{1}+B\right) ^{2}+\left( x_{2}-B\right) ^{2}-B^{2}\right] +2\left( \alpha ^{2}-4\beta ^{2}\right) x_{1}x_{2}, \notag \end{eqnarray} we can see that the cross term $x_{1}x_{2}$ vanishes if we take $\alpha =2\beta $. The initial state can then be written in a separable form \begin{eqnarray} \Psi \left( x_{1},x_{2},0\right) &=&\frac{\pi \beta ^{2}}{\sqrt{\Lambda }k_{0}}\left\{ \left( k_{0}+\gamma \right) \left[ \varphi _{+}\left( x_{1}\right) \varphi _{-}\left( x_{2}\right) u\left( x_{2}-x_{1}\right) +\varphi _{+}\left( x_{2}\right) \varphi _{-}\left( x_{1}\right) u\left( x_{1}-x_{2}\right) \right] \right. \\ &&+\left. \left( k_{0}-\gamma \right) \left[ \varphi _{+}\left( x_{2}\right) \varphi _{-}\left( x_{1}\right) u\left( x_{2}-x_{1}\right) +\varphi _{+}\left( x_{1}\right) \varphi _{-}\left( x_{2}\right) u\left( x_{1}-x_{2}\right) \right] \right\} , \notag \end{eqnarray} where $u\left( x\right) $ is the Heaviside step function and \begin{equation} \varphi _{\pm }\left( x\right) =\exp \left[ -\beta ^{2}\left( x\mp \frac{r_{0}}{2}\right) ^{2}+\frac{i}{2}\left( K_{0}\pm 2k_{0}\right) x\right] . \label{Phi(+/-)} \end{equation} In this case, $\Lambda $ can be obtained as \begin{equation*} \Lambda =\frac{4\pi ^{3}\beta ^{2}\left( k_{0}-\gamma \right) ^{2}}{k_{0}^{2}}. \end{equation*} Without loss of generality we have set the initial center-of-mass coordinate $R_{0}=0$ and dropped an overall phase $k_{0}r_{0}$. We note that the functions $\varphi _{+}\left( x\right) $ and $\varphi _{-}\left( x\right) $ represent Gaussian functions with centers at $r_{0}/2$ and $-r_{0}/2$, respectively. 
Obviously, the probability contributions of $\varphi _{+}\left( x_{2}\right) \varphi _{-}\left( x_{1}\right) u\left( x_{2}-x_{1}\right) $ and $\varphi _{+}\left( x_{1}\right) \varphi _{-}\left( x_{2}\right) u\left( x_{1}-x_{2}\right) $ are negligible under the condition $\beta r_{0}\gg 1$. We then obtain \begin{equation} \Psi \left( x_{1},x_{2},0\right) \approx \beta \sqrt{\frac{1}{2\pi }}\left[ \varphi _{+}\left( x_{1}\right) \varphi _{-}\left( x_{2}\right) +\varphi _{+}\left( x_{2}\right) \varphi _{-}\left( x_{1}\right) \right] , \label{initial state_1} \end{equation} which represents a two-boson wavepacket state with equal widths, group velocities $K_{0}/2\pm k_{0}$, and locations $\mp r_{0}/2$. Here the renormalization factor has been calculated by a Gaussian integral. So far we have constructed the desired initial state without using the biorthogonal basis set. The dynamics of the two separated boson wavepackets can be described by the time evolution as in conventional quantum mechanics. \subsection{Annihilating collision} Before the bosons start to overlap, they move as free particles, with centers moving at the group velocities $K_{0}/2\pm k_{0}$ and widths spreading as a function of time as $\sqrt{\left( 4\beta ^{4}t^{2}+1\right) /\beta ^{2}}$. We are concerned with the dynamical behavior after the collision. 
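The quoted spreading law can be checked numerically for a single free packet. As a sketch (assuming the single-particle Hamiltonian $-\frac{1}{2}\partial _{x}^{2}$ in the units of the text), we evolve $\varphi (x,0)\propto e^{-\beta ^{2}x^{2}}$ with an FFT propagator and compare the variance of $|\varphi |^{2}$, which grows as $\sigma ^{2}(t)=(4\beta ^{4}t^{2}+1)/(4\beta ^{2})$, consistent with the width factor quoted above up to the overall normalization convention for the width:

```python
import numpy as np

beta, t = 1.0, 1.0
N, L = 4096, 80.0
dx = L / N
x = (np.arange(N) - N // 2) * dx

psi0 = np.exp(-beta**2 * x**2)           # Gaussian packet as in Eq. (Phi(+/-))
psi0 /= np.sqrt(np.sum(np.abs(psi0)**2) * dx)

# Free evolution under H = -(1/2) d^2/dx^2: multiply by exp(-i k^2 t / 2) in k-space
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
psi_t = np.fft.ifft(np.fft.fft(psi0) * np.exp(-0.5j * k**2 * t))

var_num = np.sum(x**2 * np.abs(psi_t)**2) * dx
var_exact = (4 * beta**4 * t**2 + 1) / (4 * beta**2)
print(var_num, var_exact)   # both ~ 1.25
```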
To this end, we calculate the time evolution of the given initial state, which can be expressed as \begin{figure}[tbp] \includegraphics[ bb=65 180 535 560, width=5.9 cm, clip]{SSa.eps} \includegraphics[ bb=65 180 535 560, width=5.9 cm, clip]{SSb.eps} \includegraphics[ bb=65 180 535 560, width=5.9 cm, clip]{SSc.eps} \caption{(Color online) The profiles of $\left\vert \protect\phi \left( r,t\right) \right\vert ^{2}$ of the two-boson Gaussian wavepackets are plotted for different values of $k_{0}$ and $\protect\alpha =2\protect\beta $: (a) $\protect\gamma =5.0$, $k_{0}=5.0$, $\protect\alpha =2\protect\beta =2.0$; (b) $\protect\gamma =5.0$, $k_{0}=5.0$, $\protect\alpha =2\protect\beta =3.0$; (c) $\protect\gamma =2.0$, $k_{0}=5.0$, $\protect\alpha =2\protect\beta =2.0$. One can see perfect pair annihilation in case (a), and imperfect pair annihilation in cases (b) and (c), where the width of the initial wavepackets becomes narrower and the relative group velocity deviates from $\protect\gamma $, respectively. It shows that perfect pair annihilation can serve as a signature of the singularity spectrum.} \label{fig1} \end{figure} \begin{equation} \Psi \left( x_{1},x_{2},t\right) =\frac{1}{\sqrt{\Lambda }}\int_{-\infty }^{\infty }\int_{-\infty }^{\infty }G\left( K\right) g\left( k\right) \psi _{+}\left( K,k,x_{1},x_{2}\right) e^{-i\left( K^{2}/4+k^{2}\right) t}\text{\textrm{d}}K\text{\textrm{d}}k. \end{equation} By a similar procedure as above, we find that the evolved wave function can always be written in the separated form \begin{equation} \Psi \left( x_{1},x_{2},t\right) =\Phi \left( R,t\right) \phi \left( r,t\right) , \end{equation} where \begin{equation} \Phi \left( R,t\right) =\sqrt[4]{\frac{4\beta ^{2}}{\pi \left( 1+4\beta ^{4}t^{2}\right) }}\exp \left[ -\frac{2\beta ^{2}\left( R-K_{0}t/2\right) ^{2}}{1+4\beta ^{4}t^{2}}+\frac{i\left( 16\beta ^{4}R^{2}t+4RK_{0}-tK_{0}^{2}\right) }{4+16\beta ^{4}t^{2}}\right] , \end{equation} and \begin{equation} \phi \left( r,t\right) =\frac{1}{\sqrt{\Omega }}\int_{-\infty }^{\infty }\exp \left[ -\frac{1}{2\beta ^{2}}(k-k_{0})^{2}-ikr_{0}\right] \left[ \cos \left( kr\right) -\frac{i\gamma }{k}\sin \left( k\left\vert r\right\vert \right) \right] \exp \left( -ik^{2}t\right) \text{\textrm{d}}k, \end{equation} where the normalization factor is \begin{equation} \Omega =\frac{\pi ^{3/2}\beta \left( k_{0}-\gamma \right) ^{2}}{k_{0}^{2}}. \end{equation} Straightforward algebra shows that \begin{equation} \phi \left( r,t\right) =\left( k_{0}+\gamma \right) \Theta _{+}+\left( k_{0}-\gamma \right) \Theta _{-}, \end{equation} where \begin{eqnarray} \Theta _{\pm } &=&\frac{\sqrt{\beta }\left\vert k_{0}-\gamma \right\vert ^{-1}}{\sqrt{2\pi ^{1/2}\left( 1+2i\beta ^{2}t\right) }}\exp \left\{ -\frac{\beta ^{2}\left[ \left\vert r\right\vert \pm \left( r_{0}-2k_{0}t\right) \right] ^{2}}{2\left( 4\beta ^{4}t^{2}+1\right) }+i\Delta _{\pm }\right\} , \\ \Delta _{\pm } &=&\frac{\beta ^{4}\left( \left\vert r\right\vert \pm r_{0}\right) ^{2}t-2k_{0}^{2}t\mp 2k_{0}\left( \left\vert r\right\vert \pm r_{0}\right) }{2\left( 4\beta ^{4}t^{2}+1\right) }. 
\end{eqnarray} In the case of $\beta ^{4}t^{2}\gg 1$, $k_{0}t\gg r_{0}$, the probability distribution is \begin{equation} \left\vert \phi \left( r,t\right) \right\vert ^{2}\approx \frac{\pi \left( k_{0}+\gamma \right) ^{2}}{4\Omega k_{0}^{2}t}\exp \left[ -\frac{\left( \left\vert r\right\vert -2k_{0}t\right) ^{2}}{4\beta ^{2}t^{2}}\right] -\frac{\pi \left( k_{0}^{2}-\gamma ^{2}\right) e^{-k_{0}^{2}/\beta ^{2}}}{2\Omega k_{0}^{2}t}\exp \left( -\frac{r^{2}}{4\beta ^{2}t^{2}}\right) , \end{equation} which leads to the total probability, in the case $k_{0}/\beta \gg 1$, \begin{equation} \int_{-\infty }^{\infty }\left\vert \phi \left( r,t\right) \right\vert ^{2}\text{\textrm{d}}r\approx \frac{\left( k_{0}+\gamma \right) ^{2}}{\left( k_{0}-\gamma \right) ^{2}}. \end{equation} We can see that after the collision the residual probability becomes a constant and vanishes when $k_{0}=-\gamma $. It shows that when the relative group velocity of the two single-particle Gaussian wave packets with equal width reaches the magnitude of the interaction strength, the dynamics exhibits complete particle-pair annihilation. In order to demonstrate such dynamical behavior and verify our approximate result, a numerical method is employed to simulate the time evolution for several typical situations. The profiles of $\left\vert \phi \left( r,t\right) \right\vert ^{2}$ are plotted in Fig.~\ref{fig1}. We would like to point out that the complete annihilation depends on the relative group velocity, which is a consequence of the singularity spectrum. This enhances the probability of pair annihilation for a cloud of bosons, which may provide a detection method for the spectral singularity in experiment. \section{Second quantization representation} \label{Second quantization representation} In this section, we investigate the two-particle collision process from another point of view and give a more extended example. By employing the second quantization representation, the initial state in Eq. 
(\ref{initial state_1}) can be expressed in the form $a_{1}^{\dag }a_{2}^{\dag }\left\vert 0\right\rangle $, where $a_{i}^{\dag }$ $\left( i=1,2\right) $ is the creation operator for a boson in the single-particle state with wavefunction \begin{figure}[tbp] \includegraphics[ bb=65 180 535 560, width=5.9 cm, clip]{SSSa.eps} \includegraphics[ bb=65 180 535 560, width=5.9 cm, clip]{SSSb.eps} \includegraphics[ bb=65 180 535 560, width=5.9 cm, clip]{SSSc.eps} \caption{(Color online) The profiles of $\left\vert \protect\phi \left( r,t\right) \right\vert ^{2}$ of a maximally two-mode entangled boson pair are plotted for different values of $k_{0}$ and $\protect\alpha =2\protect\beta $: (a) $\protect\gamma =10.0$, $k_{0}=10.0$, $\protect\alpha =2\protect\beta =1.0$; (b) $\protect\gamma =10.0$, $k_{0}=10.0$, $\protect\alpha =2\protect\beta =3.0$; (c) $\protect\gamma =4.0$, $k_{0}=10.0$, $\protect\alpha =2\protect\beta =1.0$. One can see perfect pair annihilation in case (a), and imperfect pair annihilation in cases (b) and (c), where the width of the initial wavepackets becomes narrower and the relative group velocity deviates from $\protect\gamma $, respectively.} \label{fig2} \end{figure} \begin{equation} \varphi _{+}\left( x\right) =\left\langle x\right\vert a_{1}^{\dag }\left\vert 0\right\rangle \text{, }\varphi _{-}\left( x\right) =\left\langle x\right\vert a_{2}^{\dag }\left\vert 0\right\rangle , \end{equation} and $\left\vert 0\right\rangle $ denotes the vacuum state of the particle operator. Similarly, if we consider a fermion pair, the initial state in Eq. 
(\ref{initial state_1}) can be written as \begin{equation} \frac{1}{\sqrt{2}}\left( c_{1,\uparrow }^{\dag }c_{2,\downarrow }^{\dag }-c_{1,\downarrow }^{\dag }c_{2,\uparrow }^{\dag }\right) \left\vert 0\right\rangle , \label{fermion pair} \end{equation} where $c_{i,\sigma }^{\dag }$ $\left( i=1,2;\sigma =\uparrow ,\downarrow \right) $ is the creation operator for a fermion in the single-particle state with wavefunction \begin{equation} \varphi _{+}\left( x\right) \chi _{\sigma }=\left\langle x\right\vert c_{1,\sigma }^{\dag }\left\vert 0\right\rangle \text{, }\varphi _{-}\left( x\right) \chi _{\sigma }=\left\langle x\right\vert c_{2,\sigma }^{\dag }\left\vert 0\right\rangle . \end{equation} Here \begin{equation} \chi _{\uparrow }=\binom{1}{0},\chi _{\downarrow }=\binom{0}{1}, \end{equation} are the spin parts of the wavefunction. We see that the initial state in Eq. (\ref{fermion pair}) is a singlet pair with maximal entanglement. In contrast, the triplet state $\frac{1}{\sqrt{2}}\left( c_{1,\uparrow }^{\dag }c_{2,\downarrow }^{\dag }+c_{1,\downarrow }^{\dag }c_{2,\uparrow }^{\dag }\right) \left\vert 0\right\rangle $ should not lose any amplitude after the collision. On the other hand, we can extend our conclusion to other types of initial states. For instance, we can construct the initial state with \begin{eqnarray} G\left( K\right) &=&\exp \left[ -\frac{1}{2\alpha ^{2}}\left( K-K_{0}\right) ^{2}-iKR_{0}\right] , \\ g\left( k\right) &=&\left( k-k_{0}\right) \exp \left[ -\frac{1}{2\beta ^{2}}\left( k-k_{0}\right) ^{2}-ikr_{0}\right] , \notag \end{eqnarray} which are also local states in $K$ and $k$ space, respectively. 
In coordinate space, the above wavefunction has the form \begin{eqnarray} \Psi \left( x_{1},x_{2},0\right) &=&\frac{2i\pi \beta ^{4}}{\sqrt{\Xi }k_{0}}\left\{ \left( k_{0}+\gamma \right) \left[ \left( \varphi _{+}^{(1)}\left( x_{1}\right) \varphi _{-}\left( x_{2}\right) -\varphi _{+}\left( x_{1}\right) \varphi _{-}^{\left( 1\right) }\left( x_{2}\right) \right) u\left( x_{2}-x_{1}\right) \right. \right. \notag \\ &&+\left. \left( x_{1}\rightleftarrows x_{2}\right) \right] \notag \\ &&-\left( k_{0}-\gamma \right) \left[ \left( \varphi _{-}^{\left( 1\right) }\left( x_{1}\right) \varphi _{+}\left( x_{2}\right) -\varphi _{+}^{\left( 1\right) }\left( x_{2}\right) \varphi _{-}\left( x_{1}\right) \right) u\left( x_{2}-x_{1}\right) \right. \notag \\ &&\left. \left. +\left( x_{1}\rightleftarrows x_{2}\right) \right] \right\} , \end{eqnarray} which can be reduced to \begin{equation} \Psi \left( x_{1},x_{2},0\right) \approx \frac{2i\pi \beta ^{4}\left( k_{0}-\gamma \right) }{\sqrt{\Xi }k_{0}}\left[ \left( \varphi _{+}^{(1)}\left( x_{1}\right) \varphi _{-}\left( x_{2}\right) -\varphi _{+}\left( x_{1}\right) \varphi _{-}^{(1)}\left( x_{2}\right) \right) +\left( x_{1}\rightleftarrows x_{2}\right) \right] , \label{initial state_2} \end{equation} under the approximation $\beta r_{0}\gg 1$. Here $\Xi $ is the normalization constant, and $\varphi _{\pm }^{(1)}\left( x\right) =\left( x\mp \frac{r_{0}}{2}\right) \varphi _{\pm }\left( x\right) $. 
By the same procedure, at time $t$ the evolved wavefunction is \begin{equation} \phi \left( r,t\right) =-\left( k_{0}+\gamma \right) \Theta _{+}+\left( k_{0}-\gamma \right) \Theta _{-}, \end{equation} with \begin{eqnarray} \Theta _{\pm } &=&\sqrt{\frac{\pi }{2\Omega ^{\prime }}}\frac{i\beta ^{3}\left[ \left\vert r\right\vert \pm \left( r_{0}-2k_{0}t\right) \right] }{k_{0}\left( 1+2i\beta ^{2}t\right) ^{3/2}}\exp \left\{ -\frac{\beta ^{2}\left[ \left\vert r\right\vert \pm \left( r_{0}-2k_{0}t\right) \right] ^{2}}{2\left( 4\beta ^{4}t^{2}+1\right) }+i\Delta _{\pm }\right\} , \\ \Delta _{\pm } &=&\frac{\beta ^{4}\left( \left\vert r\right\vert \pm r_{0}\right) ^{2}t-2k_{0}^{2}t\mp 2k_{0}\left( \left\vert r\right\vert \pm r_{0}\right) }{2\left( 4\beta ^{4}t^{2}+1\right) }, \end{eqnarray} where the normalization factor is \begin{equation*} \Omega ^{\prime }=\frac{\pi ^{3/2}\beta ^{3}\left( k_{0}-\gamma \right) ^{2}}{4k_{0}^{2}}. \end{equation*} In the case of $\beta ^{4}t^{2}\gg 1$, $k_{0}t\gg r_{0}$, the probability distribution is \begin{eqnarray} \left\vert \phi \left( r,t\right) \right\vert ^{2} &\approx &\frac{\pi \left( k_{0}+\gamma \right) ^{2}}{16\Omega ^{\prime }k_{0}^{2}t^{3}}\left( \left\vert r\right\vert -2k_{0}t\right) ^{2}\exp \left[ -\frac{\left( \left\vert r\right\vert -2k_{0}t\right) ^{2}}{4\beta ^{2}t^{2}}\right] \\ &&-\frac{\pi \left( k_{0}^{2}-\gamma ^{2}\right) e^{-k_{0}^{2}/\beta ^{2}}}{8\Omega ^{\prime }k_{0}^{2}t^{3}}\left( r^{2}-4k_{0}^{2}t^{2}\right) \exp \left( -\frac{r^{2}}{4\beta ^{2}t^{2}}\right) , \notag \end{eqnarray} which leads to the total probability \begin{equation} \int_{-\infty }^{\infty }\left\vert \phi \left( r,t\right) \right\vert ^{2}\text{\textrm{d}}r\approx \frac{\left( k_{0}+\gamma \right) ^{2}}{\left( k_{0}-\gamma \right) ^{2}}. \end{equation} The profiles of $\left\vert \phi \left( r,t\right) \right\vert ^{2}$ are plotted in Fig.~\ref{fig2}. We can see that the same behavior occurs in the present situation. 
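Both the boson-pair and the entangled-pair examples end with the same asymptotic residual weight $\left( k_{0}+\gamma \right) ^{2}/\left( k_{0}-\gamma \right) ^{2}$. A trivial sketch of this formula (with illustrative numbers), showing complete annihilation at the singularity momentum $k_{0}=-\gamma $ and a finite residual away from it:

```python
def residual_probability(k0, gamma):
    """Asymptotic total probability in the relative coordinate after collision."""
    return (k0 + gamma)**2 / (k0 - gamma)**2

gamma = 5.0
# Complete pair annihilation at the spectral-singularity momentum k0 = -gamma
print(residual_probability(-gamma, gamma))        # 0.0
# Away from the singularity a finite residual amplitude survives
print(residual_probability(-2 * gamma, gamma))    # 1/9
```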
In order to clarify the physical picture, we still employ the second quantization representation by introducing another type of boson creation operator $b_{i}^{\dag }$ $\left( i=1,2\right) $ with
\begin{eqnarray*}
\varphi _{+}^{(0)}\left( x\right) &=&\left\langle x\right\vert b_{1}^{\dag }\left\vert 0\right\rangle , \\
\varphi _{-}^{(0)}\left( x\right) &=&\left\langle x\right\vert b_{2}^{\dag }\left\vert 0\right\rangle .
\end{eqnarray*}
Then the initial state in Eq. (\ref{initial state_2}) can be expressed as
\begin{equation}
\frac{1}{\sqrt{2}}\left( a_{1}^{\dag }b_{2}^{\dag }+a_{2}^{\dag }b_{1}^{\dag }\right) \left\vert 0\right\rangle ,
\end{equation}
which is a maximally entangled two-mode state.

\bigskip

\section{Summary and discussion}
\label{Summary}

In summary, we have identified a connection between spectral singularities and the dynamical behavior of an interacting many-particle system. Based on the exact solution, we explored the collision process of two bosons, as well as two fermions, in a one-dimensional system with an imaginary delta interaction strength. We have shown that there is a spectral singularity which leads to complete particle-pair annihilation when the relative group velocity is resonant with the magnitude of the interaction strength. The result for this simple model implies that complete particle-pair annihilation can only occur for two distinguishable bosons, a maximally entangled two-mode boson pair, or singlet fermions, which may predict the existence of its counterpart in the theory of particle physics.

\acknowledgments We acknowledge the support of the National Basic Research Program (973 Program) of China under Grant No. 2012CB921900 and CNSF (Grant No. 11374163).
\section{Comparisons to related methods} \label{secComparison} The most similar existing method to ours is the sphering approach from \citet{allen2012inference}. Both methods use a preliminary demeaned version of the data to generate covariance estimates, then use these estimates to adjust the gene-wise $t$-tests. The largest difference between the procedures lies in this last step. The sphering approach produces an adjusted data set by decorrelating residuals from a preliminary mean estimate, and then performs testing and mean estimation on this adjusted data using traditional OLS techniques. Though their approach is well-motivated at the population level, they do not provide theoretical support for their plug-in procedure, and in particular do not explore how noise in the initial mean estimate may complicate their decorrelation procedure. In contrast, our approach uses generalized least squares, motivated by classical statistical results including the Gauss-Markov theorem. The sphering approach also involves decorrelating the data matrix along both axes. Our work, including the theoretical analysis in \citet{Zhou14a}, suggests that when the data matrix is non-square, attempting to decorrelate along the longer axis generally degrades performance. For genetics applications, where there are usually many more genes than samples, this suggests that decorrelating along the genes may hurt the performance of the sphering method. Fortunately, for gene-level analyses it is not necessary to decorrelate along the gene axis, since inference methods like false discovery rate control are robust to a certain level of dependence among the variables (genes)~\citep{benjamini2001control}. Therefore, we also consider a modification of the sphering algorithm that only decorrelates along the samples. Confounder adjustment is another related line of work that deals with similar issues when attempting to discover mean differences.
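Returning briefly to the sphering comparison: the effect of decorrelating along the samples can be sketched numerically. The snippet below is our own minimal illustration, not code from either paper; it assumes the row (person-person) covariance $B$ is known rather than estimated, and all variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, reps = 20, 50, 500

# Row (person-person) covariance: AR1(0.8), as in the simulations below.
B = 0.8 ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
L = np.linalg.cholesky(B)
Binv = np.linalg.inv(B)

# Two-group design with the -1/+1 coding of the two-group model.
D = np.r_[-np.ones(n // 2), np.ones(n // 2)][:, None]

ols, gls = [], []
for _ in range(reps):
    # Correlated rows, zero true mean difference between the groups.
    X = L @ rng.standard_normal((n, m))
    ols.append(np.linalg.solve(D.T @ D, D.T @ X).ravel())
    gls.append(np.linalg.solve(D.T @ Binv @ D, D.T @ Binv @ X).ravel())

# With known B, OLS after decorrelating the sample axis is exactly GLS.
Xd, Dd = np.linalg.solve(L, X), np.linalg.solve(L, D)
assert np.allclose(np.linalg.solve(Dd.T @ Dd, Dd.T @ Xd).ravel(), gls[-1])

# Gauss-Markov: the GLS coefficients have smaller sampling variance.
var_ols = np.var(np.concatenate(ols))
var_gls = np.var(np.concatenate(gls))
print(f"OLS variance {var_ols:.3f}, GLS variance {var_gls:.3f}")
```

With $B$ known, decorrelating the samples and then running OLS coincides exactly with GLS; the practical differences between the methods arise when a noisy plug-in $\widehat{B}$ replaces $B$.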
In particular, a part of that literature posits models where row-wise connections arise from the additive effects of potential latent variables. \citet{sun2012multiple} and \citet{wang2015confounder} use models of the form \begin{align*} X_{n \times m} &= D_{n \times 1} \beta_{m \times 1}^T + Z_{n \times r} \Gamma_{m \times r}^T + E_{n \times m} \\ Z_{n \times r} &= D_{n \times 1} \alpha_{r \times 1}^T + W_{n \times r} \end{align*} where $Z$ is an unobserved matrix of $r$ latent factors. Rewriting these equations into the following form lets us better contrast the confounder model with our matrix-variate setup: \begin{equation} \label{cateModelFactoredForm} X = D (\beta + \Gamma \alpha)^T + W \Gamma^T + E. \end{equation} These models are generally estimated by using some form of factor analysis to estimate $\Gamma$ and then using regression methods with additive outlier detection to identify $\beta$, methodology that is quite different from our GLS-based methods. For the two-group model, in the case of a globally centered data matrix $X$, the design matrix $D$ in \eqref{cateModelFactoredForm} takes the form \begin{equation} \label{cateModelGloballyCentered} D_{n \times 1}^T = \begin{bmatrix} -1 & \cdots & -1 & 1 & \cdots & 1 \end{bmatrix} = \begin{bmatrix} -1_{n_1}^T & 1_{n_2}^T \end{bmatrix}, \end{equation} and $2\beta$ represents the vector of true mean differences between the groups. The vector $\beta$ is estimated via OLS, yielding $\hat{\beta}_{\text{OLS}}$, and CATE considers whether the residual $X - D_{n \times 1} \hat{\beta}_{\text{OLS}}$ has a low-rank covariance structure plus noise. If so, $\hat{\Gamma} \hat \alpha$ aims to take out the residual low-rank structure through $D (\widehat{\Gamma \alpha})^T$. As illustrated in the simulations and data analysis, this improves upon inference based only on $\hat{\beta}_\text{OLS}$.
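A short simulation from the factored form \eqref{cateModelFactoredForm} illustrates why some correction of OLS is needed under this model. The dimensions and parameter values below are our own choices, not the settings of the cited papers:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, r = 40, 200, 2                      # samples, genes, latent factors

# Globally centered two-group design, eq. (cateModelGloballyCentered).
D = np.r_[-np.ones(n // 2), np.ones(n // 2)][:, None]

beta = np.zeros((m, 1)); beta[:10] = 0.5  # sparse true mean differences
Gamma = rng.standard_normal((m, r))       # latent factor loadings
alpha = np.array([[1.0], [-0.5]])         # confounder-group association
W = rng.standard_normal((n, r))
E = rng.standard_normal((n, m))

# X = D (beta + Gamma alpha)^T + W Gamma^T + E, eq. (cateModelFactoredForm).
X = D @ (beta + Gamma @ alpha).T + W @ Gamma.T + E

# Gene-wise OLS targets the confounded coefficient beta + Gamma alpha,
# not beta itself.
beta_ols = np.linalg.solve(D.T @ D, D.T @ X).T
err_confounded = np.mean((beta_ols - (beta + Gamma @ alpha)) ** 2)
err_true = np.mean((beta_ols - beta) ** 2)
print(err_confounded < err_true)
```

The OLS estimate is close to $\beta + \Gamma\alpha$ but far from $\beta$, which is precisely the bias that the estimated $\hat{\Gamma}\hat{\alpha}$ term in CATE/LEAPP is meant to remove.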
When applying CATE and related methods to data generated from the model described in the present paper, the CATE (and in particular the related LEAPP) method essentially seeks a sparse approximation of $\hat{\beta}_\text{OLS}$; in LEAPP, this is achieved via hard thresholding of the coefficients of $\hat{\beta}_\text{OLS}$, leading to improved variable selection and subsequent inference when the vector of true mean differences is presumed to be sparse. In our setting, we improve upon OLS using GLS. \subsection{Simulation results} \begin{figure}[tbp] \includegraphics[width=\textwidth]{ROCsimpanelAAR1BUC} \caption{Performance of Algorithm 2 (GLS) relative to sphering and confounder adjustment methods, labeled as \texttt{tsphere} and \texttt{cate}, respectively. These are ROC curves for identifying true mean differences. An implementation of the sphering algorithm that does not adjust for $A$ is also included, labeled as \texttt{tsphere\_noA}. Each panel shows the average ROC curves over 200 simulations. We simulate matrix variate data with gene correlations from an $\text{AR1}(0.8)$ model and let $s=10$ genes have true mean differences of $0.8$, $0.6$, and $0.4$ for the first, second and third rows, respectively. For all of these the true $B$ is set to $\widehat{B}$ from the ulcerative colitis data (using a repeated block structure for larger $n$ values), described in Section~\ref{sec::UCData} and evenly-sized groups are assigned randomly.} \label{fig::roc_comparison_UC} \end{figure} \begin{figure}[htbp] \includegraphics[width=\textwidth]{ROCsimpanelAIBUC} \caption{Performance of Algorithm 2 (GLS) relative to sphering and confounder adjustment, labeled as \texttt{tsphere} and \texttt{cate}, respectively. These are ROC curves for identifying true mean differences. An implementation of the sphering algorithm that does not adjust for $A$ is also included, labeled as \texttt{tsphere\_noA}.
Each panel shows the average ROC curves over 200 simulations. We simulate matrix variate data with no gene-wise correlations ($A=I$) and let $s=10$ genes have true mean differences of $0.8$, $0.6$, and $0.4$ for the first, second and third rows, respectively. For all of these the true $B$ is set to $\widehat{B}$ from the ulcerative colitis data (using a repeated block structure for larger $n$ values), described in Section~\ref{sec::UCData} and evenly-sized groups are assigned randomly.} \label{fig::roc_comparison_UC_baseline} \end{figure} Figure~\ref{fig::roc_comparison_UC} compares the performance of Algorithm 2 to the sphering method of \citet{allen2012inference} and the robust regression confounder adjustment method of \citet{wang2015confounder} on simulated matrix variate data motivated by the ulcerative colitis dataset described in Section~\ref{sec::UCData}. Note that this robust regression confounder adjustment is a minor modification of the LEAPP algorithm introduced in \citet{sun2012multiple}. As discussed above, we also consider a modification of \citet{allen2012inference} that only decorrelates along the rows. \begin{figure}[htb] \includegraphics[width=\textwidth]{ROCsimpanelOther} \caption{Performance of Algorithm 2 (GLS) relative to sphering and confounder adjustment, labeled as \texttt{tsphere} and \texttt{cate}, respectively. These are ROC curves for identifying true mean differences. An implementation of the sphering algorithm that does not adjust for $A$ is also included, labeled as \texttt{tsphere\_noA}. Each panel shows the average ROC curves over 200 simulations. We simulate matrix variate data with an $\text{AR1}(0.8)$ model for $A$ and let $s=10$ genes have true mean differences of $0.8$. $B$ is constructed according to a Star-Block model with blocks of size 4, an $\text{AR1}(0.8)$, and an Erd\H{o}s-R\'{e}nyi random graph with $d=n \log n$ edges. 
All of these use $n=20$ and randomly assign 10 observations to each group.} \label{fig::roc_comparison_other} \end{figure} \begin{figure}[htb] \includegraphics[width=\textwidth]{ROC_simpanel_A-I_B-other} \caption{Performance of Algorithm 2 (GLS) relative to sphering and confounder adjustment, labeled as \texttt{tsphere} and \texttt{cate}, respectively. These are ROC curves for identifying true mean differences. An implementation of the sphering algorithm that does not adjust for $A$ is also included, labeled as \texttt{tsphere\_noA}. Each panel shows the average ROC curves over 200 simulations. We simulate matrix variate data with no gene-wise correlations ($A=I$) and let $s=10$ genes have true mean differences of $0.6$. $B$ is constructed according to a Star-Block model with blocks of size 4, an $\text{AR1}(0.8)$, and an Erd\H{o}s-R\'{e}nyi random graph with $d=n \log n$ edges. All of these use $n=40$ and randomly assign 20 observations to each group.} \label{fig::roc_comparison_I_other} \end{figure} We can see that across a range of dataset sizes our method consistently outperforms sphering in terms of sensitivity and specificity for identifying mean differences. In some settings, CATE improves on Tsphere and $t$-statistics despite being applied to misspecified models, because CATE takes out the additional rank-two structure from the mean after OLS regression and does some approximate thresholding on the coefficients. Our method using GLS performs significantly better than CATE in the setting of non-identity $B$, with edges present both within and between groups. Figure~\ref{fig::roc_comparison_other} fixes the sample size and repeats these comparisons on different sample correlation structures (which are described in Section~\ref{sec::Simulations}). Figure~\ref{fig::roc_comparison_I_other} is analogous to Figure~\ref{fig::roc_comparison_other}, but with $A$ as the identity matrix.
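The sample-correlation structures referenced in these comparisons can be generated as follows. This is a sketch of the standard constructions under our own naming and edge-weight choices; the exact generators used in Section~\ref{sec::Simulations} may differ in such details:

```python
import numpy as np

def ar1_cov(n, rho=0.8):
    """AR1(rho) covariance: entry (i, j) equals rho^|i - j|."""
    idx = np.arange(n)
    return rho ** np.abs(np.subtract.outer(idx, idx))

def star_block_cov(n, block=4, rho=0.5):
    """Block-diagonal covariance; within each block the hub (first node)
    has correlation rho with every leaf and leaves have rho^2 pairwise
    (our reading of the star-block construction)."""
    S = np.eye(n)
    for s in range(0, n, block):
        for i in range(s + 1, min(s + block, n)):
            S[s, i] = S[i, s] = rho
            for j in range(s + 1, i):
                S[i, j] = S[j, i] = rho * rho
    return S

def er_precision(n, d, rng, w=0.3):
    """Precision matrix supported on roughly d Erdos-Renyi edges, shifted
    on the diagonal to be strictly diagonally dominant (hence positive
    definite)."""
    K = np.zeros((n, n))
    for e in rng.choice(n * n, size=d, replace=False):
        i, j = divmod(int(e), n)
        if i != j:
            K[i, j] = K[j, i] = w
    return K + np.eye(n) * (np.abs(K).sum(axis=1).max() + 0.1)

rng = np.random.default_rng(2)
n = 20
covs = [ar1_cov(n), star_block_cov(n),
        np.linalg.inv(er_precision(n, int(n * np.log(n)), rng))]
for S in covs:
    assert np.all(np.linalg.eigvalsh(S) > 0)  # each is a valid covariance
```

Setting $B$ (or its inverse) to one of these matrices and drawing $X = B^{1/2} Z A^{1/2}$ with i.i.d.\ standard normal entries in $Z$ then reproduces a matrix-variate sampling scheme of the kind used in the figures.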
Algorithm 2 is competitive or superior to the competing methods across a range of topologies. \begin{figure}[tb] \centering \includegraphics[width=0.65\textwidth]{scatterplotCateVsGLS} \caption{Scatterplot of $t$-statistics for CATE and Algorithm 2 applied to the ulcerative colitis data. The 45-degree line is included in black, while the red dashed line is the linear fit.} \label{fig::scatterplotCateAlg2} \end{figure} \subsection{Comparison on UC data} \label{secComparisonUC} We apply both Algorithm 2 and CATE to the ulcerative colitis data to compare their respective findings on real data. Figure~\ref{fig::scatterplotCateAlg2} presents the test statistics from these algorithms. The test statistics have a correlation of $0.75$. As expected, both methods find that the bulk of genes have small test statistics. Note that the regression line of the CATE test statistics on Algorithm 2's test statistics has a slope less than 1. This implies that Algorithm 2 generates more dispersed test statistics than CATE, and, given that we have shown in Figures~\ref{fig::QQAR} and~\ref{fig::qqdata} that Algorithm 2 provides well-calibrated test statistics, that it also has more power in this situation. Using a threshold of FDR adjusted p-values smaller than $0.1$, both methods find four genes with significant mean differences. However, there is only one gene (DPP10-AS1) that both methods identify. So, although there is significant correlation between the test statistics, the methods do not necessarily identify the same genes. \section{Additional simulation results} \label{sec::simulationAppend} Figure~\ref{fig::gemini_comp} demonstrates the effect of mean structure on covariance estimation. As expected, when there is no mean structure Gemini performs competitively. As more mean structure is added, however, its performance quickly decays and becomes worse than that of Algorithm 2.
This also provides evidence that the plug-in estimator $\widehat{\tau}_{\text{init}}$ used in Algorithm 2 is appropriately selecting genes to group center, since even when there are no or very few differentially expressed genes, Algorithm 2 is never worse than Gemini. Algorithm 1 does not perform as well as Algorithm 2 but still tends to eventually outperform Gemini as more mean structure is added. As the sample size increases, the difference between Algorithm 2 and Algorithm 1 decreases, as the added noise from group centering becomes less of a factor. We still recommend using Algorithm 2 in most realistic scenarios, but this reinforces our theoretical finding that the two algorithms have the same error rates. \begin{figure}[tb] \centering \includegraphics[width=0.9\textwidth]{geminiComp} \caption{Performance of Gemini, Algorithm 1, and Algorithm 2 for estimating $B$ under different mean and covariance structures. As the sample size increases, we can see that Algorithm 1 improves relative to Gemini and begins to catch up to Algorithm 2. Gemini's performance always degrades as the true differences grow or more differentially expressed genes are added, while Algorithms 1 and 2 are stable. We set $B^{-1}$ as Erd\H{o}s-R\'{e}nyi (ER) or star-block with blocks of size 4 (SB). All plots use $A$ from an AR1$(0.8)$ model with $m=2000$ and are averaged over 200 replications. In the left plot the first $50$ genes are differentially expressed at the level specified on the $x$-axis. As indicated, the three groups of lines correspond to $n=20$, $40$, and $80$. In the right two columns there are \texttt{m1} genes with exponentially decaying true differences between groups, scaled so that the largest difference is 5 (resulting in an average difference of approximately 1).
} \label{fig::gemini_comp} \end{figure} \section{Additional data analysis} \label{sec::dataAppend} \begin{figure} \centering \begin{subfigure}[b]{\textwidth} \includegraphics[width=1\linewidth]{inferenceplot} \caption{The first plot displays the estimated design effect vs. the penalty multiplier for Algorithm 2. Each curve corresponds to a different number of columns being group centered. As the curves go from top to bottom, the number of group centered columns increases from $10$ to $2000$. The second plot shows a quantile plot of test statistics from the data vs. simulated test statistics; in the simulation, the population person-person covariance matrix is $\widehat{B}$, as estimated from the UC data.} \label{fig::inference} \end{subfigure} \begin{subfigure}[b]{\textwidth} \includegraphics[width=1\linewidth]{qqcombined} \caption{Quantile plot and inverse covariance graphs. The first two plots correspond to $\lambda =0.4$ and $128$ group centered genes. The third plot corresponds to $\lambda = 0.5$ and $128$ group centered genes. Green circles correspond to twins with UC, orange circles to twins without UC. Twins are aligned vertically.} \label{fig::icovplot} \end{subfigure} \caption[]{} \end{figure} As discussed in Section~\ref{sec::GLSFixedBtheorem}, it is particularly important that the design effect is accurately estimated in order for the test statistics to be properly calibrated. The first plot of Figure~\ref{fig::inference} displays the sensitivity of the estimated design effect \eqref{designEffect} for Algorithm 2 to the GLasso penalty parameter and the number of group centered columns. In the case that all columns are group centered, Algorithm 2 reduces to Algorithm 1. If we group center all genes, the estimated design effect is sensitive to the penalty parameter, but if we group center a small proportion of genes, it is less sensitive to the penalty parameter. 
This is further evidence that it may be advantageous to avoid over-centering the data when the true mean difference vector $\gamma$ may be sparse. The second plot of Figure~\ref{fig::inference} shows a quantile plot comparing the distribution of test statistics from the UC data to test statistics from a simulation whose population correlation structure is matched to the UC data. The quantile plot reveals that we can reproduce the pattern of overdispersion in the test statistics using simulated data having person-person as well as gene-gene correlations. Such correlations therefore provide a possible explanation for the overdispersion of the test statistics. Figure \ref{fig::icovplot} displays a quantile plot and inverse covariance graph for $\lambda = 0.4$ and $128$ group centered genes. Under these settings the test statistics appear correctly calibrated, coinciding with the central portion of the reference line. Furthermore, the inverse covariance graph is sparse (38 edges). In the inverse covariance graph, there are more edges between subjects with UC than between the healthy subjects, which could be explained by the existence of subtypes of UC inducing correlations between subsets of subjects. The third plot of Figure~\ref{fig::icovplot} displays a sparser inverse covariance graph, corresponding to a larger penalty $\lambda = 0.5$. There are three edges between twin pairs, and there are more edges between subjects with UC than between those without UC. \subsection{Stability simulation} \label{sec::StabilitySim} Table \ref{StabilitySimulationUC} shows the results from a simulation analogous to Table \ref{topTenGenesInCommon}, demonstrating stability across iterations of the procedure. Iteration 1 begins by group centering $1280$ genes and this number is halved in each successive iteration. We can see from the table that the gene rankings generated by Algorithm 2 are robust to misspecifying the number of differentially expressed genes. 
When the number of group centered genes is $160$ or below (iterations 4 through 8), the commonly selected genes among the top $20$ genes are stable. Furthermore, the true positives remain stable as we decrease the amount of genes centered, while the false positives decrease. \begin{table}[tb] \centering \caption{Number of genes in common among genes ranked in the top $20$ when different numbers of genes are group centered. This simulation is analogous to Table \ref{topTenGenesInCommon}. Note that the maximum possible value for any entry of the table is $20$; if entry $(i, j)$ is $20$, then iterations $i$ and $j$ selected the same top twenty genes. The first $10$ genes have a difference of 1.5 and the second $10$ have a difference of $1$. All remaining genes have a true mean difference of zero. We use $B$ as estimated from the UC data, and $A$ is from an $\text{AR1}(0.8)$ model. These simulations have $n = 20$ individuals and 2000 genes and are averaged over $200$ replications. The last two rows display the average number of true and false positives among the genes ranked in the top 20 of each iteration.} \label{StabilitySimulationUC} \begin{tabular}{r|rrrrrrrr} \hline & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline 1 & 20.0 & 17.6 & 15.8 & 14.8 & 14.3 & 14.0 & 14.0 & 13.9 \\ 2 & 17.6 & 20.0 & 17.9 & 16.8 & 16.2 & 15.9 & 15.8 & 15.8 \\ 3 & 15.8 & 17.9 & 20.0 & 18.7 & 18.1 & 17.8 & 17.7 & 17.6 \\ 4 & 14.8 & 16.8 & 18.7 & 20.0 & 19.3 & 19.0 & 18.9 & 18.8 \\ 5 & 14.3 & 16.2 & 18.1 & 19.3 & 20.0 & 19.6 & 19.5 & 19.4 \\ 6 & 14.0 & 15.9 & 17.8 & 19.0 & 19.6 & 20.0 & 19.8 & 19.7 \\ 7 & 14.0 & 15.8 & 17.7 & 18.9 & 19.5 & 19.8 & 20.0 & 19.8 \\ 8 & 13.9 & 15.8 & 17.6 & 18.8 & 19.4 & 19.7 & 19.8 & 20.0 \\ \hline TP & 12.7 & 14.3 & 15.6 & 16.4 & 16.7 & 16.8 & 16.8 & 16.8 \\ FP & 7.3 & 5.7 & 4.4 & 3.6 & 3.3 & 3.2 & 3.2 & 3.2 \\ \hline \end{tabular} \end{table} \section{Proof of Theorem \ref{mainTheoremGroupCentering}} \label{sec::ProofMainThmPartI} Note that the proof in the current 
Section follows exactly the same steps as the proof of Theorems 3.1 and 3.2 in \citet{Zhou14a}. Theorem \ref{mainTheoremGroupCentering} {\bf Part II} is proved in Section~\ref{sec::ProofMainThmPartII}. To prove Theorem \ref{mainTheoremGroupCentering} {\bf Part I}, we first state Lemma~\ref{boundCovOpFro}, which establishes rates of convergence for estimating $A^{-1}$ and $B^{-1}$ in the operator and the Frobenius norm. We then state the auxiliary Lemma~\ref{lemma:absoluteErrorKroneckerProduct}, which is identical to that for Theorems 11.1 and 11.2 of~\cite{Zhou14a}, except that we plug in $\widetilde{\alpha}$ and $\widetilde{\eta}$ as defined in~\eqref{entrywiseRateBcorr}. Putting these results together proves Theorem~\ref{mainTheoremGroupCentering}, {\bf Part I.} We prove these auxiliary results in Section~\ref{sec::proofsforTheorem2}. Let $\mathcal{X}_0$ denote the event \begin{align} &\forall i, j \qquad \left| \frac{(e_i - p_i)^T XX^T (e_j - p_j)}{\text{tr}(A^*) \sqrt{b_{ii}^* b_{jj}^*}} - \rho_{ij}(B) \right| \leq \widetilde{\alpha} \label{eq:X_0_B} \\ &\forall i, j \qquad \left| \frac{X_i^T (I - P_2)X_j}{\text{tr}(B^*) \sqrt{a_{ii}^* a_{jj}^*}} - \rho_{ij}(A) \right| \leq \widetilde{\eta}, \label{eq:X_0_A} \end{align} with $\mathcal{X}_0(B)$ and $\mathcal{X}_0(A)$ denoting the events defined by equations~\eqref{eq:X_0_B} and~\eqref{eq:X_0_A}, respectively. Let $\widetilde{\alpha}$ and $\widetilde{\eta}$ be as defined in \eqref{entrywiseRateBcorr}. On event $\mathcal{X}_0(A)$, for all $j$, $\hat{\Gamma}_{jj}(A) = \rho_{jj}(A) = 1$ and \begin{eqnarray} \label{eq::delta-nf} \max_{j,k, j\not=k}|\hat{\Gamma}_{jk}(A) - \rho_{jk}(A)| \leq \frac{2\widetilde{\eta}}{1 - \widetilde{\eta}} \end{eqnarray} On event $\mathcal{X}_0(B)$, for all $j$, $\hat{\Gamma}_{jj}(B) = \rho_{jj}(B) = 1$ and \begin{eqnarray} \label{eq::delta-nm} \max_{j,k, j\not=k}|\hat{\Gamma}_{jk}(B) - \rho_{jk}(B)| \leq \frac{2\widetilde{\alpha}}{1 - \widetilde{\alpha}}. 
\end{eqnarray} \begin{lemma} \label{boundCovOpFro} Suppose (A1) and (A2) hold. Let $\widehat{W}_1$ and $\widehat{W}_2$ be as defined in \eqref{W1hatW2hat}. Let $\widehat{A}_{\rho}$ and $\widehat{B}_{\rho}$ be as defined in (\ref{geminiObjectiveFnA}) and (\ref{geminiObjectiveFnB}). For some absolute constants $18 < C, C' < 36$, the following events hold with probability at least $1 - 2/(n \vee m)^2$, \begin{align} \delta_{A, 2} &:= \lVert \widehat{W}_1 \widehat{A}_\rho \widehat{W}_1 / \operatorname{tr}(B) - A \rVert_2 \leq C a_{\max} \kappa(\rho(A))^2 \lambda_B \sqrt{|A^{-1}|_{0, \text{off}} \vee 1} \\ \delta_{B, 2} &:= \lVert \widehat{W}_2 \widehat{B}_\rho \widehat{W}_2 / \operatorname{tr}(A) - B \rVert_2 \leq C' b_{\max} \kappa(\rho(B))^2 \lambda_A \sqrt{|B^{-1}|_{0, \text{off}} \vee 1} \\ \delta_{A, F} &:= \lVert \widehat{W}_1 \widehat{A}_\rho \widehat{W}_1 / \operatorname{tr}(B) - A \rVert_F \leq C a_{\max} \kappa(\rho(A))^2 \lambda_B \sqrt{|A^{-1}|_{0, \text{off}} \vee m} \\ \delta_{B, F} &:= \lVert \widehat{W}_2 \widehat{B}_\rho \widehat{W}_2 / \operatorname{tr}(A) - B \rVert_F \leq C' b_{\max} \kappa(\rho(B))^2 \lambda_A \sqrt{\offzero{B^{-1}} \vee n}; \end{align} and for some $10 < C, C' < 19$, \begin{eqnarray*} \delta_{A, 2}^- &:= & \left \lVert \operatorname{tr}(B) \left( \widehat{W}_1 \widehat{A}_\rho \widehat{W}_1 \right)^{-1} - A^{-1} \right \rVert_2 \leq \frac{C \lambda_B \sqrt{\offzero{A^{-1}} \vee 1}}{a_{\min} \varphi^2_{\min}(\rho(A))} \label{rateCovAInvOp} \\ \delta_{B, 2}^- &:= & \left \lVert \operatorname{tr}(A) \left( \widehat{W}_2 \widehat{B}_\rho \widehat{W}_2\right)^{-1} - B^{-1} \right \rVert_2 \leq \frac{C' \lambda_A \sqrt{|B^{-1}|_{0, \text{off}} \vee 1}}{b_{\min} \varphi^2_{\min}(\rho(B))} \label{rateCovBInvOp} \\ \delta_{A, F}^- &:= & \left \lVert \operatorname{tr}(B) \left( \widehat{W}_1 \widehat{A}_\rho \widehat{W}_1 \right)^{-1} - A^{-1} \right \rVert_F \leq \frac{C \lambda_B \sqrt{|A^{-1}|_{0, \text{off}} \vee m}}{a_{\min} 
\varphi^2_{\min}(\rho(A))} \\ \delta_{B, F}^- &:= & \left \lVert \operatorname{tr}(A) \left( \widehat{W}_2 \widehat{B}_\rho \widehat{W}_2 \right)^{-1} - B^{-1} \right \rVert_F \leq \frac{C' \lambda_A \sqrt{|B^{-1}|_{0, \text{off}} \vee n}}{b_{\min} \varphi^2_{\min}(\rho(B))}. \end{eqnarray*} \end{lemma} Lemma~\ref{lemma:absoluteErrorKroneckerProduct} follows from Theorems 11.1 and 11.2 of~\cite{Zhou14a,Zhou14supp}, where we now plug in $\widetilde{\alpha}$ and $\widetilde{\eta}$ as defined in~\eqref{entrywiseRateBcorr}. For completeness, we provide a sketch in Section~\ref{sec::proof-intermediate-Kron-prod}. \begin{lemma} \label{lemma:absoluteErrorKroneckerProduct} Suppose (A1) and (A2) hold. For $\varepsilon_1, \varepsilon_2 \in (0, 1)$, let \[ \lambda_A = \widetilde{\eta} / \varepsilon_1, \quad \lambda_B = \widetilde{\alpha} / \varepsilon_2, \] for $\widetilde{\alpha}$, $\widetilde{\eta}$ as defined in (\ref{entrywiseRateBcorr}), and suppose $\lambda_A, \lambda_B < 1$. Then on event $\mathcal{X}_0$, for $18 < C, C' < 36$, \begin{eqnarray*} && \lVert \widehat{A \otimes B} - A \otimes B \rVert_2 \leq \frac{\lambda_A \wedge \lambda_B}{2} \lVert A \rVert_2 \lVert B \rVert_2 + C \lambda_B a_{\max} \lVert B \rVert_2 \kappa(\rho(A))^2 \sqrt{|A^{-1}|_{0, \text{off}} \vee 1} \\ & & + C' \lambda_A b_{\max} \lVert A \rVert_2 \kappa(\rho(B))^2 \sqrt{|B^{-1}|_{0, \text{off}} \vee 1} \\ && + 2\left[ C' \lambda_A b_{\max} \kappa(\rho(B))^2 \sqrt{|B^{-1}|_{0, \text{off}} \vee 1} \right] \left[ C \lambda_B a_{\max} \kappa(\rho(A))^2 \sqrt{|A^{-1}|_{0, \text{off}} \vee 1} \right], \end{eqnarray*} and for $10 < C, C' < 19$, \begin{align*} &\lVert \widehat{A \otimes B}^{-1} - A^{-1} \otimes B^{-1} \rVert_2 \leq \frac{\lambda_A \wedge \lambda_B}{3} \lVert A^{-1} \rVert_2 \lVert B^{-1} \rVert_2 + C \lambda_B \lVert B^{-1} \rVert_2 \frac{\sqrt{|A^{-1}|_{0, \text{off}} \vee 1}}{a_{\min} \varphi^2_{\min}(\rho(A))} \\ &\quad + C' \lambda_A \lVert A^{-1} \rVert_2 \frac{\sqrt{|B^{-1}|_{0, 
\text{off}} \vee 1}}{b_{\min} \varphi^2_{\min}(\rho(B))} + \frac{3}{2} \left[C \lambda_B \frac{\sqrt{|A^{-1}|_{0, \text{off}} \vee 1}}{a_{\min} \varphi^2_{\min}(\rho(A))} \right] \left[ C' \lambda_A \frac{\sqrt{|B^{-1}|_{0, \text{off}} \vee 1}}{b_{\min} \varphi^2_{\min}(\rho(B))} \right]; \end{align*} For $18 < C, C' < 36$, \begin{eqnarray*} & &\lVert \widehat{A \otimes B} - A \otimes B \rVert_F \leq \frac{\lambda_A \wedge \lambda_B}{2} \lVert A \rVert_F \lVert B \rVert_F + C \lambda_B a_{\max} \lVert B \rVert_F \kappa(\rho(A))^2 \sqrt{|A^{-1}|_{0, \text{off}} \vee m} \\ & & + C' \lambda_A b_{\max} \lVert A \rVert_F \kappa(\rho(B))^2 \sqrt{|B^{-1}|_{0, \text{off}} \vee n} \\ && + 2\left[ C' \lambda_A b_{\max} \kappa(\rho(B))^2 \sqrt{|B^{-1}|_{0, \text{off}} \vee n} \right] \left[ C \lambda_B a_{\max} \kappa(\rho(A))^2 \sqrt{|A^{-1}|_{0, \text{off}} \vee m} \right], \end{eqnarray*} and for $10 < C, C' < 19$, \begin{align*} &\lVert \widehat{A \otimes B}^{-1} - A^{-1} \otimes B^{-1} \rVert_F \leq \frac{\lambda_A \wedge \lambda_B}{3} \lVert A^{-1} \rVert_2 \lVert B^{-1} \rVert_F + C \lambda_B \lVert B^{-1} \rVert_F \frac{\sqrt{|A^{-1}|_{0, \text{off}} \vee m}}{a_{\min} \varphi^2_{\min}(\rho(A))} \\ &\quad + C' \lambda_A \lVert A^{-1} \rVert_F \frac{\sqrt{|B^{-1}|_{0, \text{off}} \vee n}}{b_{\min} \varphi^2_{\min}(\rho(B))} + \frac{7}{5} \left[C \lambda_B \frac{\sqrt{|A^{-1}|_{0, \text{off}} \vee m}}{a_{\min} \varphi^2_{\min}(\rho(A))} \right] \left[ C' \lambda_A \frac{\sqrt{|B^{-1}|_{0, \text{off}} \vee n}}{b_{\min} \varphi^2_{\min}(\rho(B))} \right]. 
\end{align*} \end{lemma} \subsection{Proof of Theorem \ref{mainTheoremGroupCentering}, Part I} \label{app::proofKroneckerProduct_details} We state additional helpful bounds: \begin{align} &(a_{\min} \vee \varphi_{\min}(A)) \sqrt{m} \leq \lVert A \rVert_F = \left( \sum_{i = 1}^m \varphi_i^2(A) \right)^{1/2} \leq \sqrt{m} \lVert A \rVert_2, \label{boundsAFro} \\ &(b_{\min} \vee \varphi_{\min}(B)) \sqrt{n} \leq \lVert B \rVert_F = \left( \sum_{i = 1}^m \varphi_i^2(B) \right)^{1/2} \leq \sqrt{n} \lVert B \rVert_2, \label{boundsBFro} \\ & \sqrt{m} / a_{\max} = \left( \frac{1}{a_{\max}} \vee \frac{1}{\varphi_{\max}(A)} \right) \sqrt{m} \leq \lVert A^{-1} \rVert_F \leq \sqrt{m} \lVert A^{-1} \rVert_2, \label{boundsAinvFro} \end{align} and \begin{align} & \sqrt{n} / b_{\max} = \left( \frac{1}{b_{\max}} \vee \frac{1}{\varphi_{\max}(B)} \right) \sqrt{n} \leq \lVert B^{-1} \rVert_F \leq \sqrt{n} \lVert B^{-1} \rVert_2. \label{boundsBinvFro} \end{align} \begin{proofof}{Theorem \ref{mainTheoremGroupCentering}, {\bf Part I}} We plug in bounds as in \eqref{boundOpNormAandB} and \eqref{boundOpNormAandBII} into Lemma \ref{lemma:absoluteErrorKroneckerProduct} to obtain under (A1) and (A2), $\left \lVert \widehat{A \otimes B} - A \otimes B \right \rVert_2 \leq \lVert A \rVert_2 \lVert B \rVert_2 \delta$, where \begin{align*} \delta &= \frac{\lambda_A \wedge \lambda_B}{2} + \frac{C r_a \kappa(\rho(A))}{\varphi_{\min}(\rho(A))} \lambda_B \sqrt{ |A^{-1}|_{0, \text{off}} \vee 1} + \frac{C' r_b \kappa(\rho(B))}{\varphi_{\min}(\rho(B))} \lambda_A \sqrt{ |B^{-1}|_{0, \text{off}} \vee 1} \\ &+ 2\left[ \frac{C r_a \kappa(\rho(A))}{\varphi_{\min}(\rho(A))} \lambda_B \sqrt{ |A^{-1}|_{0, \text{off}} \vee 1} \right] \left[ \frac{C' r_b \kappa(\rho(B))}{\varphi_{\min}(\rho(B))} \lambda_A \sqrt{ |B^{-1}|_{0, \text{off}} \vee 1} \right] \\ &= \frac{\lambda_A \wedge \lambda_B}{2} + \log^{1/2}(m \vee n)\left( \sqrt{\frac{ |A^{-1}|_{0, \text{off}} \vee 1}{m}}+ \sqrt{\frac{ |B^{-1}|_{0, \text{off}} \vee 
1}{n}} \right) + o(1). \end{align*} For the inverse, we plug in bounds as in \eqref{ReciprocalMinEvalCorrA} and \eqref{ReciprocalMinEvalCorrB} into Lemma \ref{lemma:absoluteErrorKroneckerProduct} to obtain under (A1) and (A2), $\left \lVert \widehat{A \otimes B}^{-1} - A^{-1} \otimes B^{-1} \right \rVert_2 \leq \lVert A^{-1} \rVert_2 \lVert B^{-1} \rVert_2 \delta'$, where \begin{align*} \delta' &= \frac{\lambda_A \wedge \lambda_B}{3} + \frac{C r_a \lambda_B \sqrt{ |A^{-1}|_{0, \text{off}} \vee 1}}{\varphi_{\min}(\rho(A))} + \frac{C' r_b \lambda_A \sqrt{ |B^{-1}|_{0, \text{off}} \vee 1}}{\varphi_{\min}(\rho(B))} \\ &+ \frac{3}{2} \left[ \frac{C r_a \lambda_B \sqrt{ |A^{-1}|_{0, \text{off}} \vee 1}}{\varphi_{\min}(\rho(A))} \right] \left[ \frac{C' r_b \lambda_A \sqrt{ |B^{-1}|_{0, \text{off}} \vee 1}}{\varphi_{\min}(\rho(B))} \right] \\ &\asymp \frac{\lambda_A \wedge \lambda_B}{3} + \log^{1/2}(m \vee n) \left( \sqrt{\frac{ |A^{-1}|_{0, \text{off}} \vee 1}{m}} + \sqrt{\frac{ |B^{-1}|_{0, \text{off}} \vee 1}{n}} \right) + o(1). \end{align*} The bounds in the Frobenius norm are proved in a similar manner; see~\cite{Zhou14a} to finish. 
\end{proofof} \silent{ To bound the error in Frobenius norm, we plug in bounds as in \eqref{boundsAFro} and \eqref{boundsBFro} into Lemma \ref{lemma:absoluteErrorKroneckerProduct} to obtain under (A1) and (A2), $\left \lVert \widehat{A \otimes B} - A \otimes B \right \rVert_F \leq \lVert A \rVert_F \lVert B \rVert_F \delta$, where \begin{eqnarray*} \delta &= & \frac{\lambda_A + \lambda_B}{2} + C r_a \kappa(\rho(A))^2 \lambda_B \sqrt{ \frac{ |A^{-1}|_{0, \text{off}} \vee m}{m} } + C' r_b \kappa(\rho(B))^2 \lambda_A \sqrt{ \frac{ |B^{-1}|_{0, \text{off}} \vee n}{n}} \\ & & + 2\left[ C r_a \kappa(\rho(A))^2 \lambda_B \sqrt{ \frac{ |A^{-1}|_{0, \text{off}} \vee m}{m} } \right] \left[ C' r_b \kappa(\rho(B))^2 \lambda_A \sqrt{ \frac{ |B^{-1}|_{0, \text{off}} \vee n}{n}} \right] \\ &= & O\left( \lambda_B \sqrt{ \frac{ |A^{-1}|_{0, \text{off}} \vee m}{m} } + \lambda_A \sqrt{ \frac{ |B^{-1}|_{0, \text{off}} \vee n}{n} } \right). \end{eqnarray*} To bound the error in Frobenius norm of the inverse, we plug in bounds as in \eqref{boundsAinvFro} and \eqref{boundsBinvFro} into Lemma \ref{lemma:absoluteErrorKroneckerProduct} to obtain under (A1) and (A2), $\left \lVert \widehat{A \otimes B}^{-1} - A^{-1} \otimes B^{-1} \right \rVert_F \leq \lVert A^{-1} \rVert_F \lVert B^{-1} \rVert_F \delta'$, where \begin{align*} \delta' &= \frac{\lambda_A + \lambda_B}{3} + \frac{C \lambda_B \sqrt{|A^{-1}|_{0, \text{off}} \vee m}}{a_{\min} \lVert A^{-1} \rVert_F \varphi_{\min}^2(\rho(A))} + \frac{C' \lambda_A \sqrt{|B^{-1}|_{0, \text{off}} \vee n}}{b_{\min} \lVert B^{-1} \rVert_F \varphi_{\min}^2(\rho(B))} \\ &+ \frac{7}{5} \left[ \frac{C \lambda_B \sqrt{|A^{-1}|_{0, \text{off}} \vee m}}{a_{\min} \lVert A^{-1} \rVert_F \varphi_{\min}^2(\rho(A))} \right] \left[ \frac{C' \lambda_A \sqrt{|B^{-1}|_{0, \text{off}} \vee n}}{b_{\min} \lVert B^{-1} \rVert_F \varphi_{\min}^2(\rho(B))} \right] \\ &= O\left( \lambda_B \sqrt{ \frac{ |A^{-1}|_{0, \text{off}} \vee m}{m} } + \lambda_A \sqrt{ \frac{ 
|B^{-1}|_{0, \text{off}} \vee n}{n} } \right) \end{align*} Thus \begin{align*} \delta' &\leq \frac{\lambda_A + \lambda_B}{3} + \frac{C \lambda_B \sqrt{|A^{-1}|_{0, \text{off}} \vee m}}{\varphi_{\min}^2(\rho(A))} + \frac{C' \lambda_A \sqrt{|B^{-1}|_{0, \text{off}} \vee n}}{\varphi_{\min}^2(\rho(B))} \\ &+ \frac{7}{5} \left[ \frac{C \lambda_B \sqrt{|A^{-1}|_{0, \text{off}} \vee m}}{\varphi_{\min}^2(\rho(A))} \right] \left[ \frac{C' \lambda_A \sqrt{|B^{-1}|_{0, \text{off}} \vee n}}{\varphi_{\min}^2(\rho(B))} \right]. \end{align*} The result follows because $\delta' = O(\delta)$. } \subsection{Proof of Theorem \ref{mainTheoremGroupCentering}, Part II} \label{sec::ProofMainThmPartII} \begin{proofof2} Let $\widehat{B}^{-1} = \widehat{W}_2 \widehat{B}_{\rho} \widehat{W}_2$. Let $\widehat{\Delta} = \widehat{B}^{-1} - B^{-1}$. Let $\mathcal{E}_0(B)$ denote the event given by equations (\ref{rateCovAInvOp}) and (\ref{rateCovBInvOp}), which we know has probability at least $1 - 2/(n \vee m)^2$ from Lemma~\ref{boundCovOpFro}, and define the event \begin{equation} \mathcal{E}_4 = \left\{ \lVert \widehat{\beta}_j(\widehat{B}^{-1}) - \beta_j^* \rVert_2 \leq s_{n, m} + t_{n, m}' \right\}, \end{equation} where $s_{n, m}$ is as defined in (\ref{def:stFixedDelta}) and \begin{equation} t_{n, m}' := C \lambda_A \sqrt{ \frac{n_{\text{ratio}} \left(|B_0^{-1}|_{0, \text{off}} \vee 1\right)}{n_{\min}}}. \end{equation} Under $\mathcal{E}_0(B)$, we see that \begin{equation} \label{eq:Delta_o_const} \lVert \widehat{\Delta} \rVert_2 \leq \frac{C' \lambda_A \sqrt{|B^{-1}|_{0, \text{off}} \vee 1}}{b_{\min} \varphi^2_{\min}(\rho(B))} = o(1). 
\end{equation} Using Proposition \ref{L2boundInverseOfDtransBinvD} and the fact that $\lVert D \rVert_2 = \sqrt{n_{\max}}$, we get that \begin{equation} \label{eq:D_delta_D_intermediate} \lVert \Omega D^T \widehat{\Delta} D \rVert_2 \leq n_{\text{ratio}} \lVert B \rVert_2 \lVert \widehat{\Delta} \rVert_2. \end{equation} From \eqref{eq:Delta_o_const} we know that $\lVert \widehat{\Delta} \rVert_2 \leq 1 / (n_{\text{ratio}} \lVert B \rVert_2)$ for $n$ and $m$ sufficiently large, which we can plug into \eqref{eq:D_delta_D_intermediate} to show that $\lVert \Omega D^T \widehat{\Delta} D \rVert_2 < 1$. Moreover, \eqref{eq:Delta_o_const} implies that $\widetilde{C} n_{\min}^{-1/2} \lVert \widehat{\Delta} \rVert_2 \leq t_{n, m}'$. Therefore, we can apply Theorem~\ref{thm::GLSFixedB} to get that the conditional probability of $\mathcal{E}_4$ given $\mathcal{E}_0(B)$ is at least $1-4/(n \vee m)^2$. We can then bound the unconditional probability: \begin{eqnarray*} P(\mathcal{E}_4^c) &\leq & P\left( \mathcal{E}_4^c \mid \mathcal{E}_0(B) \right) P\left( \mathcal{E}_0(B) \right) + P\left( \mathcal{E}_0(B)^c \right) \\ &\leq & P\left( \mathcal{E}_4^c \mid \mathcal{E}_0(B) \right) + P\left( \mathcal{E}_0(B)^c \right) \\ &\leq & \frac{4}{(n \vee m)^2} + \frac{2}{(n \vee m)^2}. \end{eqnarray*} \end{proofof2} \section{Preliminary results} \label{sec::apppreliminary} In this section, we recall notation and collect propositions that are shared by the proofs of the theorems. For convenience, we first restate some notation.
\begin{align} &D = \begin{bmatrix} 1_{n_1} & 0 \\ 0 & 1_{n_2} \end{bmatrix} \in \mathbb{R}^{n \times 2} \label{def:D} \\ &\Omega = (D^T B^{-1} D)^{-1} \text{ and } \Omega_{n, m} = (D^T B_{n, m}^{-1} D)^{-1} \label{def:Omega_app}\\ &\Delta = B_{n, m}^{-1} - B^{-1} \\ &\widehat{\beta}(\widehat{B}^{-1}) = (D^T \widehat{B}^{-1} D)^{-1} D^T \widehat{B}^{-1} X \in \mathbb{R}^{2 \times m} \end{align} When $D$ has the form (\ref{def:D}), the singular values are $\sigma_{\max}(D) = \sqrt{n_{\max}}$ and $\sigma_{\min}(D) = \sqrt{n_{\min}}$. The condition number is $\kappa(D) = \sigma_{\max}(D) / \sigma_{\min}(D) = \sqrt{n_{\text{ratio}}}$ where $n_{\text{ratio}} = \max(n_1, n_2) / \min(n_1, n_2)$. We first state some convenient notation and bounds. \begin{align} & r_a := a_{\max} / a_{\min} \text{ and } r_b := b_{\max} / b_{\min}; \notag \\ & 1/ \varphi_{\min}(A) = \lVert A^{-1} \rVert_2 \leq \lVert \rho(A)^{-1} \rVert_2 / a_{\min} = \frac{1}{a_{\min} \varphi_{\min}(\rho(A))}, \label{ReciprocalMinEvalA} \\ & 1/ \varphi_{\min}(B) = \lVert B^{-1} \rVert_2 \leq \lVert \rho(B)^{-1} \rVert_2 / b_{\min} = \frac{1}{b_{\min} \varphi_{\min}(\rho(B))}, \label{ReciprocalMinEvalB} \\ &1/ \varphi_{\min}(\rho(A)) = \lVert \rho(A)^{-1} \rVert_2 \leq a_{\max} \lVert A^{-1} \rVert_2, \label{ReciprocalMinEvalCorrA} \\ &1/ \varphi_{\min}(\rho(B)) = \lVert \rho(B)^{-1} \rVert_2 \leq b_{\max} \lVert B^{-1} \rVert_2 \label{ReciprocalMinEvalCorrB} \\ & \lVert A \rVert_2 \leq a_{\max} \lVert \rho(A) \rVert_2, \quad \lVert B \rVert_2 \leq b_{\max} \lVert \rho(B) \rVert_2, \label{boundOpNormAandB} \\ & \lVert \rho(A) \rVert_2 \leq \lVert A \rVert_2 / a_{\min}, \quad \text{ and } \quad \lVert \rho(B) \rVert_2 \leq \lVert B \rVert_2 / b_{\min}.\label{boundOpNormAandBII} \end{align} The eigenvalues of the correlation matrices satisfy \begin{eqnarray} \label{evalsInequalityCorrA} 0 < \varphi_{\min}(\rho(A)) \leq 1 \leq \varphi_{\max}(\rho(A)) \; \text{ and } \; 0 < \varphi_{\min}(\rho(B)) \leq 1 \leq 
\varphi_{\max}(\rho(B)). \end{eqnarray} In the remainder of this section, we state preliminary results and highlight important intermediate steps that are used in the proofs of Theorems \ref{thm::GLSFixedB} and \ref{mainTheoremGroupCentering}. \subsection{Propositions} We now state propositions used in the proofs of Lemmas \ref{HansonWrightBetaBinvBetaStar} and \ref{BetaBtildeBetaBStarHansonWright}. We defer the proof of Proposition~\ref{L2boundInverseOfDtransBinvD} to Section~\ref{sec::proofofBinvD}. \begin{proposition} \label{L2boundInverseOfDtransBinvD} For $\Omega$ as defined in (\ref{def:Omega_app}) and a general design matrix $D$, \[ \lVert \Omega \rVert_2 \leq \lVert B \rVert_2 / \sigma_{\min}^2(D). \] In the case that $D$ is defined as in (\ref{def:D}), we have $\lVert \Omega \rVert_2 \leq \lVert B \rVert_2 / n_{\min}$. Furthermore, \begin{equation} \label{minEvalOmega} \lambda_{\min}(\Omega) \geq \frac{\lambda_{\min}(B) }{n_{\max}}. \end{equation} \end{proposition} We state the following perturbation bound. \begin{theorem}[Golub \& Van Loan, Theorem 2.3.4] \label{GolubInversePerturbation} If $A$ is invertible and $\lVert A^{-1} E \rVert_p < 1$, then $A + E$ is invertible and \[ \lVert (A + E)^{-1} - A^{-1} \rVert_p \leq \frac{\lVert E \rVert_p \lVert A^{-1} \rVert_p^2}{1 - \lVert A^{-1} E \rVert_p} \leq \frac{\lVert E \rVert_p \lVert A^{-1} \rVert_p^2}{1 - \lVert A^{-1} \rVert_p \lVert E \rVert_p}. \] \end{theorem} In Proposition \ref{L2BoundDTBiHatDinvMinusDTBiDInv}, we provide auxiliary upper bounds that depend on $\lVert \Delta \rVert_2$, $\lVert B \rVert_2$, $\kappa(D)$, and $\sigma_{\min}(D)$. We defer the proof of Proposition \ref{L2BoundDTBiHatDinvMinusDTBiDInv} to the end of this section, for clarity of presentation. \begin{proposition} \label{L2BoundDTBiHatDinvMinusDTBiDInv} Let $\Delta = B_{n, m}^{-1} - B^{-1}$.
\begin{align} \delta_0(\Delta) &:= \lVert \Omega_{n, m} - \Omega \rVert_2 \leq \frac{1}{\sigma_{\min}^2(D)} \frac{\lVert B \rVert_2^2 \lVert \Delta \rVert_2}{1/ \kappa^2(D) - \lVert B \rVert_2 \lVert \Delta \rVert_2} \label{def:delta0} \\ \delta_1(\Delta) &:= \left \lVert \Omega D^T \Delta \right \rVert_2 \leq \sigma_{\max}(D) \lVert B \rVert_2 \lVert \Delta \rVert_2 / \sigma_{\min}^2(D) = \frac{\sqrt{n_{\max}}}{n_{\min}} \lVert B \rVert_2 \lVert \Delta \rVert_2. \label{def:delta1} \end{align} If $\lVert (D^T B^{-1}D)^{-1} D^T \Delta D \rVert_2 < 1$, then \begin{align} \delta_2(\Delta) &:= \left \lVert \left(\Omega_{n, m} - \Omega \right) D^T \Delta \right \rVert_2 \leq \frac{\kappa(D)}{\sigma_{\min}(D)} \frac{\lVert B \rVert_2^2 \lVert \Delta \rVert_2^2}{1/ \kappa^2(D) - \lVert B \rVert_2 \lVert \Delta \rVert_2} \label{L2BoundOmegaTildeMinusOmegaDTDelta} \\ \delta_3(\Delta) &:= \left \lVert \left(\Omega_{n, m} - \Omega \right) D^T B^{-1} \right \rVert_2 \leq \frac{\kappa(D)}{\sigma_{\min}(D)} \frac{\lVert B \rVert_2^2 \lVert B^{-1} \rVert_2 \lVert \Delta \rVert_2}{1/ \kappa^2(D) - \lVert B \rVert_2 \lVert \Delta \rVert_2} \label{L2BoundOmegaTildeMinusOmegaDTBinv} \end{align} \end{proposition} The following proposition is a corollary of Proposition \ref{L2BoundDTBiHatDinvMinusDTBiDInv}.
\begin{proposition} \label{L2BoundsOmegaTwoGroup} When $D$ has the form (\ref{def:D}), and $\Omega$ is as defined in \eqref{def:Omega_app}, \begin{align*} \delta_0(\Delta) &= \lVert \Omega_{n, m} - \Omega \rVert_2 \leq \frac{1}{n_{\min}} \frac{\lVert B \rVert_2^2 \lVert \Delta \rVert_2}{1/n_{\text{ratio}} - \lVert B \rVert_2 \lVert \Delta \rVert_2} \\ \delta_1(\Delta) &= \left \lVert \Omega D^T \Delta \right \rVert_2 \leq \frac{\sqrt{n_\text{ratio}}}{\sqrt{n_{\min}}} \lVert B \rVert_2 \lVert \Delta \rVert_2 \\ \delta_2(\Delta) &= \left \lVert \left(\Omega_{n, m} - \Omega \right) D^T \Delta \right \rVert_2 \leq \frac{\sqrt{n_\text{ratio}}}{\sqrt{n_{\min}}} \frac{\lVert B \rVert_2^2 \lVert \Delta \rVert_2^2}{1/n_{\text{ratio}} - \lVert B \rVert_2 \lVert \Delta \rVert_2} \end{align*} \end{proposition} Let $K$ be defined as in Theorem \ref{thm::GLSFixedB}. We express the entrywise rates of convergence of the sample correlation matrices $\widehat{\Gamma}(B)$ and $\widehat{\Gamma}(A)$, respectively, in terms of the following quantities: \begin{eqnarray} \label{entrywiseRateBcorr} \widetilde{\alpha} = C_A K \frac{\log^{1/2}(m)}{\sqrt{m}} \left(1 + \frac{\norm{B}_1}{n} \right) + \frac{\lVert B \rVert_1}{n_{\min}} \; \text{ and } \; \widetilde{\eta} = C_B K \frac{\log^{1/2}(m \vee n)}{\sqrt{n}} + \frac{\lVert B \rVert_1}{n}. \end{eqnarray} \section{Proof of Theorem \ref{thm::GLSFixedB} and Corollary \ref{theoremInference}} \label{sec::proofsOfTheorems} \subsection{Proof of Theorem \ref{thm::GLSFixedB}} \label{sec::ProofThm1} \begin{proofof2} Let $B_{n, m} \in \mathbb{R}^{n \times n}$ denote a fixed positive definite matrix. Let $D$ be defined as in \eqref{meanMatrixTwoGroups}. Define $\Delta_{n, m} = B_{n, m}^{-1} - B^{-1}$ and \begin{equation} \label{def:Omega} \Omega = (D^T B^{-1} D)^{-1} \text{ and } \Omega_{n, m} = (D^T B_{n, m}^{-1} D)^{-1}.
\end{equation} Note that we can decompose the error for all $j$ as \begin{equation} \label{GLSDecompTriangleIneq} \lVert \widehat{\beta}_j(B_{n, m}^{-1}) - \beta_j^* \rVert_2 \leq \lVert \widehat{\beta}_j(B^{-1}) - \beta_j^* \rVert_2 + \lVert \widehat{\beta}_j(B_{n, m}^{-1}) - \widehat{\beta}_j(B^{-1}) \rVert_2 =: \text{I} + \text{II}. \end{equation} We will use the following lemmas, which are proved in subsections \ref{sec::rateBetaHatBinvBetaStar} and \ref{sec::rateBetaHatBHatinvBetaHatBinv}, to bound these two terms on the right-hand side, respectively. \begin{lemma} \label{HansonWrightBetaBinvBetaStar} Let $\mathcal{E}_2$ denote the event \begin{equation} \label{eventE2} \mathcal{E}_2 = \left\{ \lVert \widehat{\beta}_j(B^{-1}) - \beta_j^* \rVert_2 \leq s_{n, m} \right \}, \quad \text{with} \quad s_{n, m} = C_3 d^{1/2} \sqrt{\frac{\log(m) \lVert B \rVert_2}{n_{\min}}}. \end{equation} Then $P(\mathcal{E}_2) \geq 1 - 2 / m^d$. \end{lemma} \begin{lemma} \label{BetaBtildeBetaBStarHansonWright} Let $B_{n, m} \in \mathbb{R}^{n \times n}$ denote a fixed matrix such that $B_{n, m} \succ 0$. Let $X_j \in \mathbb{R}^n$ denote the $j$th column of $X$, where $X$ is a realization of model (\ref{modelMeanCov}). Let $\mathcal{E}_3$ denote the event \begin{equation} \mathcal{E}_3 = \left \{ \lVert \widehat{\beta}_j(B_{n, m}^{-1}) - \widehat{\beta}_j(B^{-1}) \rVert_2 \leq t_{n, m} \right\}, \quad \text{ with} \quad t_{n, m} = \widetilde{C} n_{\min}^{-1/2} \lVert \Delta_{n, m} \rVert_2, \end{equation} for some absolute constant $\widetilde{C}$. Then $P(\mathcal{E}_3) \geq 1 - 2 / m^d$. \end{lemma} The proof of \eqref{rateBetaHatFixedB1} follows from the union bound $P(\mathcal{E}_2 \cap \mathcal{E}_3) \geq 1 - P(\mathcal{E}_2^c) - P(\mathcal{E}_3^c) \geq 1 - 4 / m^d$. Next we prove \eqref{rateGammaHatFixedDelta}. Let $r_{n, m} = s_{n, m} + t_{n, m}$, as defined in \eqref{rateBetaHatFixedB1}. Let $\delta = (1, -1) \in \mathbb{R}^2$.
Then \[ | \widehat{\gamma}_j(B^{-1}_{n, m}) - \gamma_j | = \left | \delta^T \left( \widehat{\beta}_j(B^{-1}_{n, m}) - \beta_j^* \right) \right | \leq \lVert \delta \rVert_2 \lVert \widehat{\beta}_j(B^{-1}_{n, m}) - \beta_j^* \rVert_2 = \sqrt{2} \lVert \widehat{\beta}_j(B^{-1}_{n, m}) - \beta_j^* \rVert_2, \] where we used the Cauchy-Schwarz inequality. Hence if $\lVert \widehat{\beta}_j(B^{-1}_{n, m}) - \beta_j^* \rVert_2 \leq r_{n, m}$, it follows that $| \widehat{\gamma}_j(B^{-1}_{n, m}) - \gamma_j | \leq \sqrt{2} r_{n, m}$. The result holds by applying a union bound over the variables $j = 1, \ldots, m$. \end{proofof2} This completes the proof of Theorem \ref{thm::GLSFixedB}. \subsection{Proof of Corollary \ref{theoremInference} and Corollary \ref{corDesignEffectAlg2}} \label{proofTheoremInference} First note that by Proposition~\ref{L2BoundsOmegaTwoGroup}, \begin{align} \abs{\delta^T (D^T \widehat{B}^{-1} D)^{-1} \delta - \delta^T (D^T B^{-1} D)^{-1} \delta} &= \abs{\delta^T \Parens{(D^T \widehat{B}^{-1} D)^{-1} - (D^T B^{-1} D)^{-1}} \delta} \notag \\ &\leq \twonorm{\delta}^2 \twonorm{(D^T \widehat{B}^{-1} D)^{-1} - (D^T B^{-1} D)^{-1}} \notag \\ &= 2 \twonorm{(D^T \widehat{B}^{-1} D)^{-1} - (D^T B^{-1} D)^{-1}} \notag \\ &\leq 2 \frac{\twonorm{B}^2 \twonorm{\Delta} }{n_{\min}}. \label{numeratorCor2} \end{align} Note that by Proposition~\ref{L2boundInverseOfDtransBinvD}, \begin{equation} \label{denomCor2} |\delta^T \Omega \delta| \geq \frac{\lambda_{\min}(B)}{n_{\max}}. \end{equation} Corollary \ref{theoremInference} follows from \eqref{numeratorCor2} and \eqref{denomCor2}, which provide an upper bound on the numerator and lower bound on the denominator, respectively.
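As an aside, the two bounds from Proposition \ref{L2boundInverseOfDtransBinvD} that drive \eqref{numeratorCor2} and \eqref{denomCor2} can be sanity-checked numerically. The following sketch (illustrative only; the group sizes and the random positive definite $B$ are arbitrary choices, not from the manuscript) verifies $\lVert \Omega \rVert_2 \leq \lVert B \rVert_2 / n_{\min}$ and $\lambda_{\min}(\Omega) \geq \lambda_{\min}(B)/n_{\max}$ on a random instance.

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2 = 6, 4
n = n1 + n2
n_min, n_max = min(n1, n2), max(n1, n2)

# Two-group design matrix D, as in (def:D).
D = np.zeros((n, 2))
D[:n1, 0] = 1.0
D[n1:, 1] = 1.0

# A random positive definite row covariance B.
G = rng.standard_normal((n, n))
B = G @ G.T + n * np.eye(n)

B_inv = np.linalg.inv(B)
Omega = np.linalg.inv(D.T @ B_inv @ D)  # (D^T B^{-1} D)^{-1}

op = lambda M: np.linalg.norm(M, 2)  # spectral norm

# ||Omega||_2 <= ||B||_2 / n_min  (two-group case of the proposition)
assert op(Omega) <= op(B) / n_min + 1e-10
# lambda_min(Omega) >= lambda_min(B) / n_max  (minEvalOmega)
assert np.linalg.eigvalsh(Omega).min() >= np.linalg.eigvalsh(B).min() / n_max - 1e-10
```

Both assertions hold for any positive definite $B$, consistent with the proposition.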
Corollary \ref{corDesignEffectAlg2} holds because by \eqref{mainThmErrorCov} of Theorem \ref{mainTheoremModSel}, \begin{equation} \abs{\delta^T \Parens{\widehat{\Omega} - \Omega} \delta} \leq 2 \frac{\twonorm{B}^2 }{n_{\min}} \Parens{ \frac{C' \lambda_A \sqrt{\abs{B^{-1}}_{0, \operatorname{off}} \vee 1 } }{b_{\min} \varphi_{\min}^2(\rho(B))} } \leq 2 C' \frac{\kappa(B)}{n_{\min}} \lambda_A \sqrt{\abs{B^{-1}}_{0, \operatorname{off}} \vee 1} \end{equation} \subsection{Proof of Lemma~\ref{HansonWrightBetaBinvBetaStar}} \label{sec::rateBetaHatBinvBetaStar} \begin{proofof2} First, we show that \begin{equation} \label{snmBound} \lVert \Omega^{1/2} \rVert_F + d^{1/2} K^2 \sqrt{\log(m)} \lVert \Omega \rVert_2^{1/2} / \sqrt{c} \leq s_{n, m}, \end{equation} with $s_{n, m}$ as defined in \eqref{def:stFixedDelta}. Because $\lVert \Omega^{1/2} \rVert_F \leq \sqrt{2} \lVert \Omega^{1/2} \rVert_2$, it follows that \begin{eqnarray*} \lVert \Omega^{1/2} \rVert_F + d^{1/2} K^2 \sqrt{\log(m)} \lVert \Omega \rVert_2^{1/2} / \sqrt{c} &\leq & \left(\sqrt{2} + d^{1/2} K^2 \sqrt{\log(m)} / \sqrt{c} \right) \lVert \Omega \rVert_2^{1/2} \\ &\leq & C_3 d^{1/2} \sqrt{\log(m)} \lVert \Omega \rVert_2^{1/2} \leq C_3 d^{1/2} \sqrt{ \frac{\log(m) \lVert B \rVert_2}{n_{\min}}}, \end{eqnarray*} where the last step follows from Proposition \ref{L2boundInverseOfDtransBinvD}. Next, we express $\widehat{\beta}_j(B^{-1}) - \beta_j^*$ as \begin{eqnarray*} \widehat{\beta}_j(B^{-1}) - \beta_j^* = \Omega^{1/2} \eta_j, \quad\text{where } \quad \eta_j = \Omega^{-1/2} \left(\widehat{\beta}_j(B^{-1}) - \beta_j^* \right). \end{eqnarray*} By the bound (\ref{snmBound}), event $\mathcal{E}_2^c$ implies $\{ \lVert \Omega^{1/2} \eta_j \rVert_2 > \lVert \Omega^{1/2} \rVert_F + d^{1/2} K^2 \sqrt{\log(m)} \lVert \Omega \rVert_2^{1/2} / \sqrt{c} \}$. 
Therefore, \begin{eqnarray*} P\left( \lVert \Omega^{1/2} \eta_j \rVert_2 \geq s_{n, m} \right) &\leq & P\left( \lVert \Omega^{1/2} \eta_j \rVert_2 > \lVert \Omega^{1/2} \rVert_F + d^{1/2} K^2 \sqrt{\log(m)} \lVert \Omega \rVert_2^{1/2} / \sqrt{c} \right) \\ &\leq & P\left( \left | \lVert \Omega^{1/2} \eta_j \rVert_2 - \lVert \Omega^{1/2} \rVert_F \right| > d^{1/2} K^2 \sqrt{\log(m)} \lVert \Omega \rVert_2^{1/2} / \sqrt{c} \right) \\ &\leq & 2 \exp\left( \frac{-c \left( d^{1/2} K^2 \sqrt{\log(m)} \lVert \Omega \rVert_2^{1/2} / \sqrt{c} \right)^2}{K^4 \lVert \Omega^{1/2} \rVert_2^2 } \right) \\ &= & 2\exp\left( \frac{- d \log(m) \lVert \Omega \rVert_2}{ \lVert \Omega^{1/2} \rVert_2^2} \right) = 2\exp\left( - d \log(m) \right) = 2/m^d. \end{eqnarray*} \end{proofof2} \subsection{Proof of Lemma~\ref{BetaBtildeBetaBStarHansonWright}} \label{sec::rateBetaHatBHatinvBetaHatBinv} \begin{proofof2} The proof will proceed in the following steps. First, we show that $\widehat{\beta}_j(B^{-1}_{n, m}) - \widehat{\beta}_j(B^{-1})$ can be expressed as $V Z_j$, where \[ V = \left( \Omega_{n, m} D^T B_{n, m}^{-1} - \Omega D^T B^{-1} \right)B^{1/2} \in \mathbb{R}^{2 \times n} \] is a fixed matrix, and $Z_j = B^{-1/2}X_j$. Second, we show that \[ \lVert V \rVert_F + d^{1/2} K^2 \log^{1/2}(m) \lVert V \rVert_2 / \sqrt{c} \leq \widetilde{C} n_{\min}^{-1/2} \lVert \Delta \rVert_2. \] Third, we use the first and second steps combined with the Hanson-Wright inequality to show that with high probability, $\lVert VZ_j \rVert_2$ is at most $\widetilde{C} n_{\min}^{-1/2} \lVert \Delta \rVert_2$.
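The identity in the first step is purely algebraic, so it can be checked numerically. The sketch below (an illustration with arbitrary matrices and group sizes of our choosing, not part of the formal argument) confirms on a random instance that the difference of the two GLS estimators equals $VZ_j$.

```python
import numpy as np

rng = np.random.default_rng(1)
n1, n2 = 5, 3
n = n1 + n2

# Two-group design matrix.
D = np.zeros((n, 2))
D[:n1, 0] = 1.0
D[n1:, 1] = 1.0

def spd(k, seed):
    """A random symmetric positive definite k x k matrix."""
    G = np.random.default_rng(seed).standard_normal((k, k))
    return G @ G.T + k * np.eye(k)

B = spd(n, 2)      # true row covariance
B_nm = spd(n, 3)   # an arbitrary fixed positive definite plug-in
X_j = rng.standard_normal(n)

def gls(Binv, x):
    # (D^T Binv D)^{-1} D^T Binv x
    return np.linalg.solve(D.T @ Binv @ D, D.T @ Binv @ x)

B_inv, Bnm_inv = np.linalg.inv(B), np.linalg.inv(B_nm)
Omega = np.linalg.inv(D.T @ B_inv @ D)
Omega_nm = np.linalg.inv(D.T @ Bnm_inv @ D)

# Symmetric square root of B via its eigendecomposition.
w, U = np.linalg.eigh(B)
B_half = U @ np.diag(np.sqrt(w)) @ U.T

# V is 2 x n; Z_j = B^{-1/2} X_j.
V = (Omega_nm @ D.T @ Bnm_inv - Omega @ D.T @ B_inv) @ B_half
Z_j = np.linalg.solve(B_half, X_j)

diff = gls(Bnm_inv, X_j) - gls(B_inv, X_j)
assert np.allclose(diff, V @ Z_j)
```

The check also makes the dimensions explicit: $V$ has $n$ columns, matching $Z_j \in \mathbb{R}^n$.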
For the first step of the proof, let $Z_j = B^{-1/2} X_j$, and note that $\widehat{\beta}_j(B^{-1}_{n, m}) - \widehat{\beta}_j(B^{-1}) = VZ_j$, where $V \in \mathbb{R}^{2 \times n}$ is a fixed matrix, because \begin{align*} \widehat{\beta}_j(B_{n, m}^{-1}) - \widehat{\beta}_j(B^{-1}) &= \left[ (D^T B_{n, m}^{-1} D)^{-1} D^T B_{n, m}^{-1} - \Omega D^T B^{-1} \right] B^{1/2} (B^{-1/2} X_j) \\ &= \left[ (D^T B_{n, m}^{-1} D)^{-1} D^T B_{n, m}^{-1} - \Omega D^T B^{-1} \right] B^{1/2} Z_j. \end{align*} For the second step of the proof, we show that $\lVert V \rVert_F + d^{1/2} K^2 \log^{1/2}(m) \lVert V \rVert_2 / \sqrt{c} \leq \widetilde{C} n_{\min}^{-1/2} \lVert \Delta \rVert_2$. First we obtain an upper bound on $\lVert V \rVert_2$. By the triangle inequality, \begin{eqnarray*} \lefteqn{ \left \lVert \Omega_{n, m} D^T B_{n, m}^{-1} - \Omega D^T B^{-1} \right \rVert_2 } \\ &\leq & \left \lVert \left(\Omega_{n, m} - \Omega \right) D^T (B_{n, m}^{-1} - B^{-1}) \right \rVert_2 + \left \lVert \left(\Omega_{n, m} - \Omega \right) D^T B^{-1} \right \rVert_2 + \left \lVert \Omega D^T \Delta \right \rVert_2 \\ &= & \delta_2(\Delta) + \delta_3(\Delta) + \delta_1(\Delta).
\end{eqnarray*} We bound each of the three terms using Proposition \ref{L2BoundDTBiHatDinvMinusDTBiDInv}, \begin{align*} \delta_2(\Delta) &= \left \lVert \left(\Omega_{n, m} - \Omega \right) D^T \Delta \right \rVert_2 \leq \frac{\sqrt{n_\text{ratio}}}{\sqrt{n_{\min}}} \frac{\lVert B \rVert_2^2 \lVert \Delta \rVert_2^2}{1/n_{\text{ratio}} - \lVert B \rVert_2 \lVert \Delta \rVert_2} \\ \delta_3(\Delta) &= \left \lVert \left(\Omega_{n, m} - \Omega \right) D^T B^{-1} \right \rVert_2 \leq \frac{\sqrt{n_\text{ratio}}}{\sqrt{n_{\min}}} \frac{\lVert B \rVert_2^2 \lVert B^{-1} \rVert_2 \lVert \Delta \rVert_2}{1/n_{\text{ratio}} - \lVert B \rVert_2 \lVert \Delta \rVert_2} \\ \delta_1(\Delta) &= \left \lVert \Omega D^T \Delta \right \rVert_2 \leq \frac{\sqrt{n_\text{ratio}}}{\sqrt{n_{\min}}} \lVert B \rVert_2 \lVert \Delta \rVert_2. \end{align*} Applying the above bounds yields \begin{eqnarray*} \lVert V \rVert_2 &\leq & \frac{\sqrt{n_\text{ratio}}}{\sqrt{n_{\min}}} \lVert \Delta \rVert_2 \lVert B \rVert_2^{1/2} \left( \frac{\lVert B \rVert_2^2 \lVert \Delta \rVert_2}{1/ \kappa^2(D) - \lVert B \rVert_2 \lVert \Delta \rVert_2} + \frac{\lVert B \rVert_2^2 \lVert B^{-1} \rVert_2}{1/ \kappa^2(D) - \lVert B \rVert_2 \lVert \Delta \rVert_2} + \lVert B \rVert_2 \right) \\ & \leq & \widetilde{C} n_{\min}^{-1/2} \lVert \Delta \rVert_2.
\end{eqnarray*} For the third step of the proof, we use the Hanson-Wright inequality to bound $\lVert V Z_j \rVert_2$: \begin{eqnarray*} \lefteqn{ P\left( \lVert V Z_j \rVert_2 > \widetilde{C} n_{\min}^{-1/2} \lVert \Delta \rVert_2 \right) \leq P\left( \lVert V Z_j \rVert_2 > \lVert V \rVert_F + d^{1/2} K^2 \log^{1/2}(m) \lVert V \rVert_2 / \sqrt{c} \right) } \\ &= & P\left( \lVert V Z_j \rVert_2 - \lVert V \rVert_F > d^{1/2} K^2 \log^{1/2}(m) \lVert V \rVert_2 / \sqrt{c} \right) \\ &\leq & P\left( \left | \lVert V Z_j \rVert_2 - \lVert V \rVert_F \right | > d^{1/2} K^2 \log^{1/2}(m) \lVert V \rVert_2 / \sqrt{c} \right) \\ &\leq & 2\exp\left(- \frac{c\left( d^{1/2} K^2 \log^{1/2}(m) \lVert V \rVert_2 / \sqrt{c} \right)^2}{K^4 \lVert V \rVert_2^2} \right) \qquad \text{(Hanson-Wright inequality)} \\ &= & 2 \exp\left( -d \log(m) \right) = 2 / m^d. \end{eqnarray*} \end{proofof2} \subsection{Proof of Proposition~\ref{L2boundInverseOfDtransBinvD}} \label{sec::proofofBinvD} \begin{proofof2} Let $D = U \Psi V^T$ be the singular value decomposition of $D$, with $U \in \mathbb{R}^{n \times 2}$, $\Psi \in \mathbb{R}^{2 \times 2}$, and $V \in \mathbb{R}^{2 \times 2}$. Then $(D^T B^{-1} D)^{-1} = (V \Psi U^T B^{-1} U \Psi V^T)^{-1} = V \Psi^{-1} (U^T B^{-1} U)^{-1} \Psi^{-1} V^T.$ Thus \begin{align*} \lVert (D^T B^{-1} D)^{-1} \rVert_2 &= \lVert \Psi^{-1} (U^T B^{-1} U)^{-1} \Psi^{-1} \rVert_2 \qquad \text{(because $V$ is square, orthonormal)} \\ &\leq \lVert \Psi^{-1} \rVert_2^2 \lVert (U^T B^{-1} U)^{-1} \rVert_2 \qquad \text{(sub-multiplicative property)} \\ &= \sigma_{\max}^2(\Psi^{-1}) \lVert (U^T B^{-1} U)^{-1} \rVert_2 \\ &= \lVert (U^T B^{-1} U)^{-1} \rVert_2 / \sigma_{\min}^2(\Psi) = \lVert (U^T B^{-1} U)^{-1} \rVert_2 / \sigma_{\min}^2(D), \end{align*} where $\sigma_{\min}(D) = \sigma_{\min}(\Psi)$, because $\Psi$ is the diagonal matrix of singular values of $D$. 
Next, note that $ \lVert (U^T B^{-1} U)^{-1} \rVert_2 = 1 / \varphi_{\min}(U^T B^{-1} U)$ and \begin{eqnarray*} \varphi_{\min}(U^T B^{-1} U)& = &\min_{\eta \in \mathbb{R}^2, \eta \neq 0} \eta^T U^T B^{-1} U \eta / \eta^T \eta. \end{eqnarray*} We perform the change of variables $\gamma = U \eta$, under which $\gamma^T \gamma = \eta^T U^T U \eta = \eta^T \eta$ (that is, $U$ preserves the length of $\eta$ because the columns of $U$ are orthonormal). Hence \begin{align*} \varphi_{\min}(U^T B^{-1} U) &= \min_{\gamma \in \text{col}(U), \gamma \neq 0} \gamma^T B^{-1} \gamma / \gamma^T \gamma \\ &\geq \min_{\gamma \neq 0} \gamma^T B^{-1} \gamma / \gamma^T \gamma \\ &= \varphi_{\min}(B^{-1}) = 1 / \lVert B \rVert_2. \end{align*} We have shown that $1 / \varphi_{\min}(U^T B^{-1} U) \leq \lVert B \rVert_2$, which implies that \[ \lVert (U^T B^{-1} U)^{-1} \rVert_2 \leq \lVert B \rVert_2. \] Therefore \[ \lVert (D^T B^{-1} D)^{-1} \rVert_2 \leq \lVert B \rVert_2 / \sigma_{\min}^2(D). \] In the special case of the two-group design matrix, $\sigma_{\min}^2(D) = n_{\min}$, so \\ $ \lVert (D^T B^{-1} D)^{-1} \rVert_2 \leq \lVert B \rVert_2 / n_{\min}.$ The proof of \eqref{minEvalOmega} is as follows: \begin{align*} \lambda_{\min}(\Omega) &= \frac{1}{\lambda_{\max}\Parens{\Omega^{-1}} } = \frac{1}{\lambda_{\max}\Parens{D^T B^{-1} D} } \geq \frac{1}{\twonorm{D}^2 \lambda_{\max}(B^{-1}) } = \frac{\lambda_{\min}(B) }{\twonorm{D}^2} = \frac{\lambda_{\min}(B) }{n_{\max}}.
\end{align*} \end{proofof2} \subsection{Proof of Proposition~\ref{L2BoundDTBiHatDinvMinusDTBiDInv}} \begin{proofof2} By the definitions of $\Omega_{n, m}$ in \eqref{def:Omega_app} and $\Delta = B_{n, m}^{-1} - B^{-1}$, we have \begin{align*} \lVert \Omega_{n, m} - \Omega \rVert_2 &= \lVert (D^T B_{n, m}^{-1} D)^{-1} - \Omega \rVert_2 \\ &= \left \lVert \left(D^T B_{n, m}^{-1} D- D^T B^{-1} D + D^T B^{-1} D \right)^{-1} - \Omega \right \rVert_2 \\ &= \left \lVert \left( D^T B^{-1} D + D^T \Delta D \right)^{-1} - \Omega \right \rVert_2 \\ &\leq \frac{\lVert D^T \Delta D \rVert_2 \lVert \Omega \rVert_2^2}{1 - \lVert \Omega \rVert_2 \lVert D^T \Delta D \rVert_2} \qquad \text{(by Theorem \ref{GolubInversePerturbation})} \\ &\leq \frac{\left( \sigma_{\max}^2(D) / \sigma_{\min}^4(D) \right) \lVert B \rVert_2^2 \lVert \Delta \rVert_2 } {1 - \kappa^2(D) \lVert B \rVert_2 \lVert \Delta \rVert_2 }. \end{align*} In the last step we apply Proposition \ref{L2boundInverseOfDtransBinvD}. Thus \begin{eqnarray*} \lVert \Omega_{n, m} - \Omega \rVert_2 &\leq & \frac{1}{\sigma_{\min}^2(D)} \frac{\kappa^2(D) \lVert B \rVert_2^2 \lVert \Delta \rVert_2} {1 - \kappa^2(D) \lVert B \rVert_2 \lVert \Delta \rVert_2} \\ & = & \frac{1}{\sigma_{\min}^2(D)} \frac{\lVert B \rVert_2^2 \lVert \Delta \rVert_2}{(1/ \kappa^2(D)) - \lVert B \rVert_2 \lVert \Delta \rVert_2}. \end{eqnarray*} We prove \eqref{def:delta1} using the submultiplicative property of the operator norm and Proposition \ref{L2boundInverseOfDtransBinvD}: \begin{eqnarray*} \left \lVert \Omega D^T \Delta \right \rVert_2 \leq \frac{\lVert B \rVert_2 }{\sigma_{\min}^2(D)} \sigma_{\max}(D) \lVert \Delta \rVert_2 = \frac{\kappa(D)}{\sigma_{\min}(D)} \lVert B \rVert_2 \lVert \Delta \rVert_2.
\end{eqnarray*} We prove (\ref{L2BoundOmegaTildeMinusOmegaDTDelta}) as follows: \begin{align*} \left \lVert \left(\Omega_{n, m} - \Omega \right) D^T \Delta \right \rVert_2 &\leq \left \lVert \Omega_{n, m} - \Omega \right \rVert_2 \left \lVert D^T \right \rVert_2 \left \lVert \Delta \right \rVert_2 \\ &\leq \left[ \frac{1}{\sigma_{\min}^2(D)} \frac{\lVert B \rVert_2^2 \lVert \Delta \rVert_2}{\left( 1/ \kappa^2(D) \right) - \lVert B \rVert_2 \lVert \Delta \rVert_2} \right] \sigma_{\max}(D) \lVert \Delta \rVert_2 \qquad \text{(by \eqref{def:delta0})} \\ &= \frac{\kappa(D)}{\sigma_{\min}(D)} \frac{\lVert B \rVert_2^2 \lVert \Delta \rVert_2^2}{\left( 1/\kappa^2(D) \right) - \lVert B \rVert_2 \lVert \Delta \rVert_2}. \end{align*} The proof of (\ref{L2BoundOmegaTildeMinusOmegaDTBinv}) is analogous. \end{proofof2} \section{Proof of Theorem \ref{mainTheoremModSel} } \label{sec::proofTheorem3} \subsection{Notation} \begin{center} \begin{tabular}{| c c c |} \hline Notation & Meaning & \\ \hline \hline Mean structure & & \\ \hline $\mu \in \mathbb{R}^m$ & Vector of grand means of each gene & \\ $\gamma \in \mathbb{R}^m$ & Vector of mean differences for each gene & \\ $\nu = \inv{2} \begin{bmatrix} \inv{n_1} \Ones{n_1}^T & \inv{n_2} \Ones{n_2}^T \end{bmatrix}^T \in \mathbb{R}^n$ & Inner product with $\nu$ computes global mean & \\ \hline Outcome of model selection step & & \\ \hline $J_0 \subset \Braces{ 1, 2, \ldots, m}$ & Indices selected for group centering & \\ $J_1 \subset \Braces{ 1, 2, \ldots, m}$ & Indices selected for global centering & \\ \hline Sizes of gene subsets & & \\ \hline $m_0 = \abs{J_0}$ & Number of group centered genes & \\ $m_1 = \abs{J_1}$ & Number of globally centered genes & \\ \hline Projection matrices & & \\ \hline $P_1 = \Ones{n} \nu^T$ & Projection matrix that performs global centering & \\ $P_2$ (as in \eqref{suppPGroup})& Projection matrix that performs group centering & \\ \hline Sample covariance matrices & & \\
\hline $S(B, J_0, J_1) = \frac{m_1}{m} S_1(B) + \frac{m_0}{m} S_{2}(B)$ & Model selection sample covariance matrix & \\ $S_1(B, J_1) = \frac{1}{m_1} \sum_{j \in J_1} (I - \PGlobal) X_j X_j^T (I - \PGlobal)$ & Globally centered sample covariance matrix & \\ $S_2(B, J_0) = \frac{1}{m_0} \sum_{j \in J_0} (I - \PGroup) X_j X_j^T (I - \PGroup)$ & Group centered sample covariance matrix & \\ \hline Decomposition of $S(B, J_0, J_1)$ & & \\ \hline $S_{\operatorname{I}} = S(B, J_0, J_1) - \mathbb{E}\Brackets{S(B, J_0, J_1)}$ & Bias & \\ $S_{\operatorname{II}} = \inv{m} (I - \PGlobal) M_{J_1} M_{J_1}^T (I - \PGlobal)$ & False negatives (deterministic) & \\ $S_{\operatorname{III}} = \inv{m} (I - \PGlobal) M_{J_1} \varepsilon^T (I - \PGlobal)$ & False negatives (random) & \\ $S_{\operatorname{IV}} = m^{-1} (I - P_2) \varepsilon_{J_0} \varepsilon_{J_0}^T (I - P_2) + $ & True negatives & \\ \qquad \qquad $m^{-1} (I - P_1) \varepsilon_{J_1} \varepsilon_{J_1}^T (I - P_1)$ & & \\ \hline \end{tabular} \end{center} \subsection{Two-Group Model and Centering} We begin by introducing some relevant notation for the two-group model and centering. Define the group membership vector $\delta_n \in \mathbb{R}^n$ as \begin{equation} \label{def:GroupIndicators} \delta_n := \begin{bmatrix} \Ones{n_1}^T & -\Ones{n_2}^T \end{bmatrix}^T \in \mathbb{R}^n. \end{equation} In the two-group model, the mean matrix $M$ can be expressed as \begin{equation} \label{meanDecomposition} M = \Ones{n} \mu^T + (1/2) \delta_n \gamma^T, \end{equation} where $\mu \in \mathbb{R}^m$ is a vector of grand means, and $\gamma \in \mathbb{R}^m$ is the vector of mean differences. According to \eqref{meanDecomposition}, the $(i, j)$th entry of $M$ can be expressed as \begin{equation} m_{ij} = \begin{cases} \mu_j + \gamma_j / 2 & \text{if sample $i$ is in group one} \\ \mu_j - \gamma_j / 2 & \text{if sample $i$ is in group two}. 
\end{cases} \end{equation} Define the vector $\nu \in \mathbb{R}^n$ as \begin{equation} \label{def:VecAvgSampleMeans} \nu = \inv{2} \begin{bmatrix} \inv{n_1} \Ones{n_1}^T & \inv{n_2} \Ones{n_2}^T \end{bmatrix}^T \in \mathbb{R}^n, \end{equation} so that for the $j$th column of the data matrix $X_j \in \mathbb{R}^n$, \begin{equation} \expct{\nu^T X_j} = \inv{2} \expct{ \inv{n_1} \sum_{k = 1}^{n_1} X_{jk} + \inv{n_2} \sum_{k = n_1 + 1}^{n} X_{jk} } = \mu_j. \end{equation} Note that \begin{equation} \label{innerProductsNu} \nu^T \Ones{n} = (1/2) ( 1 + 1) = 1, \quad \text{ and } \quad \nu^T \delta_n = (1/2) (1 - 1) = 0. \end{equation} Next we define a projection matrix that performs global centering. Define the non-orthogonal projection matrix \begin{equation} \label{def:PGlobal} P_1 := \Ones{n} \nu^T \in \mathbb{R}^{n \times n}. \end{equation} Applying the projection matrix to the mean matrix yields \begin{equation} P_1 M = \Ones{n} \nu^T \left( \Ones{n} \mu^T + (1/2) \delta_n \gamma^T \right) = \Ones{n} \mu^T + (1/2) (\nu^T \delta_n ) \Ones{n} \gamma^T = \Ones{n} \mu^T, \end{equation} with residuals \begin{equation} \label{ResdualsM} (I - \PGlobal) M = M - P_1 M = M - \Ones{n} \mu^T = (1/2) \delta_n \gamma^T. \end{equation} Define \begin{equation} \label{suppPGroup} P_2 = \begin{bmatrix} n_1^{-1} \Ones{n_1} \Ones{n_1}^T & \\ & n_2^{-1} \Ones{n_2} \Ones{n_2}^T \end{bmatrix}. \end{equation} Note that $P_2 \Ones{n} = \Ones{n}$ and $P_2 \delta_n = \delta_n$, so \begin{equation} P_2 M = P_2 \Ones{n} \mu^T + (1/2) P_2 \delta_n \gamma^T = \Ones{n} \mu^T + (1/2) \delta_n \gamma^T = M, \end{equation} and therefore $(I - P_2)M = 0$. Define \begin{align} \widecheck{B} &= (I - P_1) B (I - P_1) \label{BP1P1} = \Parens{ \widecheck{b}_{ij}} \\ \widetilde{B} &= (I - P_2) B (I - P_2) \label{BP2P2} = \Parens{ \widetilde{b}_{ij}} \\ \breve{B} &= (I - P_1) B (I - P_2) \label{BP1P2} = \Parens{ \breve{b}_{ij}}. 
\end{align} Let $\widecheck{b}_{\max}$, $\widetilde{b}_{\max}$, and $\breve{b}_{\max}$ denote the maximum diagonal entries of $\widecheck{B}$, $\widetilde{B}$, and $\breve{B}$, respectively. \subsection{Model Selection Centering} For a subset $J \subset \{ 1, \ldots, m\}$, let $X_J$ denote the submatrix of $X$ consisting of columns indexed by $J$. For the fixed sets of genes $J_0$ and $J_1$, define the sample covariance \begin{equation} \label{SampleCovFixedGeneSets} S(B, J_0, J_1) = m^{-1} \sum_{k \in J_0} (I - P_2) X_k X_k^T (I - P_2)^T + m^{-1} \sum_{k \in J_1} (I - P_1) X_k X_k^T (I - P_1)^T =: \operatorname{I} + \operatorname{II}. \end{equation} Note that $\mathbb{E}\Brackets{S(B, J_0, J_1)} = B^\sharp$, with \begin{equation} \label{expectedValueModSelSB} B^\sharp = \frac{\Trace{A_{J_0}}}{m} (I - P_2) B(I - P_2) + \frac{\Trace{A_{J_1}}}{m} (I - P_1) B (I - P_1). \end{equation} Define the sample correlation matrix, \begin{equation} \label{defGammaBModSel} \widehat{\Gamma}_{ij}(B) = \frac{(S(B, J_0, J_1))_{ij} }{ \sqrt{(S(B, J_0, J_1))_{ii}(S(B, J_0, J_1))_{jj}}}. \end{equation} The baseline Gemini estimators \cite{Zhou14a} are then defined as follows, using a pair of penalized estimators for the correlation matrices $\rho(A) = (a_{ij}/\sqrt{a_{ii} a_{jj}})$ and $\rho(B) = (b_{ij}/\sqrt{b_{ii} b_{jj}})$: \begin{subeqnarray} \label{geminiObjectiveFnASupplement} \widehat{A}_\rho &= & \mathop{\text{arg\,min}\kern.2ex}_{A_\rho \succ 0} \left\{ \tr{\widehat{\Gamma}(A) A_\rho^{-1}} + \log |A_\rho| + \lambda_B |A_\rho^{-1}|_{1, \text{off}} \right\}, \\ \label{geminiObjectiveFnBSupplement} \widehat{B}_\rho &= & \mathop{\text{arg\,min}\kern.2ex}_{B_\rho \succ 0} \left\{ \tr{\widehat{\Gamma}(B) B_\rho^{-1}} + \log |B_\rho| + \lambda_A |B_\rho^{-1}|_{1, \text{off}} \right\}. \end{subeqnarray} We will focus on $\widehat{B}_\rho$ using the input as defined in \eqref{defGammaBModSel}. The proof proceeds as follows. 
Lemma~\ref{entrywiseErrorFixedSetGenes}, the equivalent of Proposition~\ref{rateConvergenceSampleCovToB} for Algorithm 1, establishes entry-wise convergence rates of the sample covariance matrix for fixed sets of group and globally centered genes. We use this to prove Theorem~\ref{mainTheoremFixedGenes} below in Section~\ref{MainTheoremFixedGenes} and to prove Theorem~\ref{mainTheoremModSel} in Section~\ref{proofMainThmAlgTwo}. \subsection{Convergence for fixed gene sets} \label{MainTheoremFixedGenes} We first state a standalone result, Theorem~\ref{mainTheoremFixedGenes}, which provides rates of convergence when $S(B, J_0, J_1)$ as in \eqref{SampleCovFixedGeneSets} is calculated using fixed sets of group centered and globally centered genes, $J_0$ and $J_1$, respectively. This result shows how the algorithm used in the preliminary step to choose which genes to group center can be decoupled from the rest of the estimation procedure. The proof is presented below in Section~\ref{proofMainTheoremFixedGenes}. \begin{theorem} \label{mainTheoremFixedGenes} Suppose that (A1), (A2'), and (A3) hold. Let $J_0$ and $J_1$ denote sets such that $J_0 \cap J_1 = \emptyset$ and $J_0 \cup J_1 = \{ 1, \ldots, m \}$. Let $m_0 = \abs{J_0}$ and $m_1 = \abs{J_1}$ denote the sizes of the sets. Let $\tau_{\text{global}} > 0$ satisfy \begin{equation} \label{smallFalseNegatives} \max_{j \in J_1} \abs{\gamma_j} \leq \tau_{\text{global}} , \end{equation} for $\tau_{\text{global}} = C \sqrt{\log(m)} \lVert (D^T B^{-1}D)^{-1} \rVert_2^{1/2} \asymp \sqrt{\frac{\log(m)}{n}}$. Consider the data as generated from model \eqref{meanDecomposition} with $\varepsilon = B^{1/2} Z A^{1/2}$, where $A \in \mathbb{R}^{m \times m}$ and $B \in \mathbb{R}^{n \times n}$ are positive definite matrices, and $Z$ is an $n \times m$ random matrix as defined in Theorem \ref{thm::GLSFixedB}. Let $\lambda_A$ denote the penalty parameter for estimating $B$. 
Suppose the penalty parameter $\lambda_A$ in \eqref{geminiObjectiveFnBSupplement} satisfies \begin{equation} \lambda_A \geq C'' \Brackets{ C_A K \frac{\log^{1/2}(m \vee n)}{\sqrt{m}} + \frac{\lVert B \rVert_1}{n_{\min}} }, \end{equation} where $C''$ is an absolute constant. \textbf{(I)} Let $\mathcal{E}_4(J_0, J_1)$ denote the event on which \begin{equation} \label{GoodAlgTwoOpErrorBinv} \twonorm{ \Trace{A} \Parens{\widehat{W}_2 \widehat{B}_{\rho} \widehat{W}_2}^{-1} - B^{-1} } \leq \frac{C' \lambda_A \sqrt{\abs{B^{-1}}_{0, \operatorname{off}} \vee 1 } }{b_{\min} \varphi_{\min}^2(\rho(B))}. \end{equation} Then $P(\mathcal{E}_4(J_0, J_1)) \geq 1 - C / m^d$. \textbf{(II)} With probability at least $1 - C' / m^d$, for all $j$, \begin{equation} \label{mainThmBoundTailCutpointSimplified} \lVert \widehat{\beta}_j(\widehat{B}^{-1}) - \beta_j^* \rVert_2 \leq C_1 \lambda_A \sqrt{ \frac{n_{\text{ratio}} \left(|B^{-1}|_{0, \text{off}} \vee 1\right)}{n_{\min}}} + C_2 \sqrt{\log(m)} \lVert (D^T B^{-1} D)^{-1} \rVert_2^{1/2}. \end{equation} \end{theorem} \subsubsection{Decomposition of sample covariance matrix} The error in the sample covariance $S(B, J_0, J_1)$ can be decomposed as \begin{equation} S(B, J_0, J_1) - B = \Brackets{B^\sharp - B} + \Brackets{S(B, J_0, J_1) - B^\sharp}, \end{equation} where the first term corresponds to bias and the second term to variance. We now further decompose the variance term.
The first term of $S(B, J_0, J_1)$ in \eqref{SampleCovFixedGeneSets} can be decomposed as, \begin{align} \operatorname{I} &= m^{-1} (I - P_2) X_{J_0} X_{J_0}^T (I - P_2) \notag \\ &= m^{-1} (I - P_2) (M_{J_0} + \varepsilon_{J_0})(M_{J_0} + \varepsilon_{J_0})^T (I - P_2) \notag \\ &= m^{-1} (I - P_2) \varepsilon_{J_0} \varepsilon_{J_0}^T (I - P_2) + m^{-1} (I - P_2) M_{J_0} \varepsilon_{J_0}^T (I - P_2) \notag \\ &\qquad + m^{-1} (I - P_2) \varepsilon_{J_0} M_{J_0}^T (I - P_2) + m^{-1} (I - P_2) M_{J_0} M_{J_0}^T (I - P_2), \label{SampleCovGroupDecomp} \end{align} and the second term can be decomposed analogously, as \begin{align} \operatorname{II} &= m^{-1} (I - P_1) \varepsilon_{J_1} \varepsilon_{J_1}^T (I - P_1) + m^{-1} (I - P_1) M_{J_1} \varepsilon_{J_1}^T (I - P_1) \notag \\ &\qquad + m^{-1} (I - P_1) \varepsilon_{J_1} M_{J_1}^T (I - P_1) + m^{-1} (I - P_1) M_{J_1} M_{J_1}^T (I - P_1). \label{SampleCovGlobalDecomp} \end{align} By the above decompositions, it follows that $S(B, J_0, J_1)$ can be expressed as \begin{equation} \label{eq::modSelSampleCovDecomposition} S(B, J_0, J_1) = S_{\operatorname{II}} + S_{\operatorname{III}} + S_{\operatorname{III}}^T + S_{\operatorname{IV}}, \end{equation} with \begin{align} S_{\operatorname{II}} &= m^{-1} (I - P_2) M_{J_0} M_{J_0}^T (I - P_2) + m^{-1} (I - P_1) M_{J_1} M_{J_1}^T (I - P_1). \label{SampleCovTermThreeFull} \\ S_{\operatorname{III}} &= m^{-1} (I - P_2) M_{J_0} \varepsilon_{J_0}^T (I - P_2) + m^{-1} (I - P_1) M_{J_1} \varepsilon_{J_1}^T (I - P_1) \label{SampleCovTermTwoFull} \\ S_{\operatorname{IV}} &= m^{-1} (I - P_2) \varepsilon_{J_0} \varepsilon_{J_0}^T (I - P_2) + m^{-1} (I - P_1) \varepsilon_{J_1} \varepsilon_{J_1}^T (I - P_1) \label{SampleCovTermOne} \end{align} For each of $S_{\operatorname{II}}$, $S_{\operatorname{III}}$, and $S_{\operatorname{IV}}$, the first term comes from \eqref{SampleCovGroupDecomp} and the second term comes from \eqref{SampleCovGlobalDecomp}. 
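The decomposition \eqref{eq::modSelSampleCovDecomposition} is an exact algebraic identity, and can be checked on a random instance. In the sketch below (an illustration, not the model: $J_0$ is taken to be the first $m_0$ columns, the right-hand centering matrices carry explicit transposes as in \eqref{SampleCovFixedGeneSets}, and the noise is generic) the identity holds up to floating-point tolerance.

```python
import numpy as np

rng = np.random.default_rng(1)
n1, n2, m, m0 = 3, 4, 6, 3              # m0 = |J0|; J0 = first m0 columns (assumed)
n = n1 + n2
ones = np.ones(n)
delta = np.concatenate([np.ones(n1), -np.ones(n2)])
nu = 0.5 * np.concatenate([np.full(n1, 1 / n1), np.full(n2, 1 / n2)])
Q1 = np.eye(n) - np.outer(ones, nu)     # I - P_1
P2 = np.zeros((n, n)); P2[:n1, :n1] = 1 / n1; P2[n1:, n1:] = 1 / n2
Q2 = np.eye(n) - P2                     # I - P_2

mu, gamma = rng.normal(size=m), rng.normal(size=m)
M = np.outer(ones, mu) + 0.5 * np.outer(delta, gamma)
eps = rng.normal(size=(n, m))           # any noise works: the identity is algebraic
X = M + eps
J0, J1 = slice(0, m0), slice(m0, m)

S = (Q2 @ X[:, J0] @ X[:, J0].T @ Q2.T + Q1 @ X[:, J1] @ X[:, J1].T @ Q1.T) / m
S2 = Q1 @ M[:, J1] @ M[:, J1].T @ Q1.T / m          # S_II (group part vanishes)
S3 = Q1 @ M[:, J1] @ eps[:, J1].T @ Q1.T / m        # S_III
S4 = (Q2 @ eps[:, J0] @ eps[:, J0].T @ Q2.T
      + Q1 @ eps[:, J1] @ eps[:, J1].T @ Q1.T) / m  # S_IV
assert np.allclose(S, S2 + S3 + S3.T + S4)          # eq. (modSel decomposition)
```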
The terms $S_{\operatorname{II}}$ and $S_{\operatorname{III}}$ can be simplified, as follows. Because $(I - P_2)M_{J_0} = 0$, it follows that the first term of $S_{\operatorname{II}}$ is zero, \[ m^{-1} (I - P_2) M_{J_0} M_{J_0}^T (I - P_2) = 0, \] and the first term of $S_{\operatorname{III}}$ is also zero, \[ m^{-1} (I - P_2) M_{J_0} \varepsilon_{J_0}^T (I - P_2) = 0. \] Therefore the terms $S_{\operatorname{II}}$ and $S_{\operatorname{III}}$ are equal to \begin{align} S_{\operatorname{II}} &= m^{-1} (I - P_1) M_{J_1} M_{J_1}^T (I - P_1), \label{SampleCovTermThree} \\ S_{\operatorname{III}} &= m^{-1} (I - P_1) M_{J_1} \varepsilon_{J_1}^T (I - P_1). \label{SampleCovTermTwo} \end{align} Let $S_{\operatorname{I}} = B^\sharp - B$. We have thus decomposed the error in the sample covariance as \begin{equation} \label{ModSelBiasVarDecomp} S(B, J_0, J_1) - B = \underbrace{S_{\operatorname{I}}}_{\text{bias}} +\underbrace{\Brackets{ \Parens{S_{\operatorname{IV}} - B^{\sharp}} + S_{\operatorname{III}} + S_{\operatorname{III}}^T + S_{\operatorname{II}}}}_{\text{variance}}. \end{equation} In Lemma \ref{LemmaEntrywiseSBModSel}, we provide an error bound for each term in the decomposition \eqref{ModSelBiasVarDecomp}. We next state Lemma~\ref{entrywiseErrorFixedSetGenes}, which establishes a bound on the maximum entry-wise error in estimating $B$ by the sample covariance for fixed gene sets, as decomposed in \eqref{ModSelBiasVarDecomp}. Lemma~\ref{entrywiseErrorFixedSetGenes} is used in the proof of Theorem \ref{mainTheoremFixedGenes}. We then state Lemma~\ref{LemmaEntrywiseSBModSel}, which is used in the proof of Lemma~\ref{entrywiseErrorFixedSetGenes}. \begin{lemma} \label{entrywiseErrorFixedSetGenes} Suppose the conditions of Theorem \ref{mainTheoremFixedGenes} hold.
Let $\mathcal{E}_6(J_0, J_1)$ denote the event \begin{equation} \label{entrywiseBoundModSel} \mathcal{E}_6(J_0, J_1) = \Braces{ \InfNorm{S(B, J_0, J_1) - B} \leq C_A K \frac{\log^{1/2}(m \vee n)}{\sqrt{m}} + \frac{\lVert B \rVert_1}{n_{\min}} }. \end{equation} Then $\mathcal{E}_6(J_0, J_1)$ holds with probability at least $1 - \frac{8}{(m \vee n)^2}$. \end{lemma} \begin{lemma} \label{LemmaEntrywiseSBModSel} Let the model selection-based sample covariance $S(B, J_0, J_1)$ be as defined in \eqref{SampleCovFixedGeneSets}, where $J_1$ and $J_0$ are fixed sets of variables that are globally centered, and group centered, respectively. Let $m_0 = \abs{J_0}$ and $m_1 = \abs{J_1}$. Define the rates \begin{align} r_1 &= \frac{3 \OneNorm{B}}{n_{\min}}, \\ r_2 &= (4 m)^{-1} \twonorm{\gamma_{J_1}}^2, \label{r2Rate} \\ r_3 &= C_3 d^{1/2} K^2 \log^{1/2}(m) m^{-1} \Parens{ \gamma_{J_1}^T A_{J_1} \gamma_{J_1}}^{1/2} \widecheck{b}_{\max}^{1/2}, \label{r3Rate} \\ r_4 &= C_4 d^{1/2} K \log^{1/2}(m) m^{-1} \fnorm{A} \twonorm{B}. \end{align} \textbf{(I)} Deterministically, \begin{align} \label{entrywiseBoundSThreeSFour} \InfNorm{B^\sharp - B} \leq r_1\quad \text{ and } \quad \InfNorm{S_{\operatorname{II}}} &\leq r_2. \end{align} \textbf{(II)} Define the events \begin{equation} \label{entrywiseProbs} \mathcal{E}_{\operatorname{I}} = \Braces{ \InfNorm{S_{\operatorname{IV}} - B^{\sharp}} \leq r_4 } \quad \text{ and } \quad \mathcal{E}_{\operatorname{II}} = \Braces{ \InfNorm{S_{\operatorname{III}}} \leq r_3}. \end{equation} Then $\mathcal{E}_{\operatorname{I}}$ and $\mathcal{E}_{\operatorname{II}}$ occur with probability at least $1 - 2/m^d$. \end{lemma} Lemmas~\ref{entrywiseErrorFixedSetGenes} and~\ref{LemmaEntrywiseSBModSel} are proved in Section~\ref{sec::LemmasForTheorem3}. 
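The two deterministic bounds in part (I) of Lemma \ref{LemmaEntrywiseSBModSel} can be illustrated numerically. The sketch below assumes the normalization $\Trace{A} = m$ used in the final step of the bias bound; matrices, dimensions, and the split of $J_0$ and $J_1$ are otherwise arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
n1, n2, m, m0 = 3, 4, 6, 3
n = n1 + n2
n_min = min(n1, n2)
ones = np.ones(n)
delta = np.concatenate([np.ones(n1), -np.ones(n2)])
nu = 0.5 * np.concatenate([np.full(n1, 1 / n1), np.full(n2, 1 / n2)])
Q1 = np.eye(n) - np.outer(ones, nu)
P2 = np.zeros((n, n)); P2[:n1, :n1] = 1 / n1; P2[n1:, n1:] = 1 / n2
Q2 = np.eye(n) - P2

G = rng.normal(size=(n, n)); B = G @ G.T + n * np.eye(n)   # positive definite
H = rng.normal(size=(m, m)); A = H @ H.T + m * np.eye(m)
A *= m / np.trace(A)                    # normalize so tr(A) = m (assumed)
gamma = rng.normal(size=m)
J0, J1 = slice(0, m0), slice(m0, m)

# Bias bound: || B^sharp - B ||_max <= r_1 = 3 ||B||_1 / n_min.
Bsharp = (np.trace(A[J0, J0]) / m) * Q2 @ B @ Q2 \
       + (np.trace(A[J1, J1]) / m) * Q1 @ B @ Q1
r1 = 3 * np.abs(B).sum(axis=0).max() / n_min               # ||B||_1 = max col sum
assert np.abs(Bsharp - B).max() <= r1

# S_II bound: || S_II ||_max = r_2 = ||gamma_J1||^2 / (4m), from (ResdualsM).
r2 = (gamma[J1] @ gamma[J1]) / (4 * m)
S2 = np.outer(delta, delta) * r2                           # S_II in closed form
assert np.abs(S2).max() <= r2 + 1e-12
```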
We analyze term $S_{\operatorname{I}}$ in Section \ref{entrywiseTermI}, term $S_{\operatorname{II}}$ in Section \ref{entrywiseTermII}, term $S_{\operatorname{III}}$ in Section \ref{entrywiseTermIII}, and term $S_{\operatorname{IV}}$ in Section \ref{entrywiseTermIV}. \subsubsection{Proof of Theorem \ref{mainTheoremFixedGenes}} \label{proofMainTheoremFixedGenes} Let us first define the event $\mathcal{E}_{\text{global}}$, that is, the GLS error based on the true $B^{-1}$ is small: \begin{equation} \label{EGlobal} \mathcal{E}_{\text{global}} = \Braces{ \InfNorm{\widehat{\gamma}(B^{-1}) - \gamma} < \sqrt{\log(m)} \lVert (D^T B^{-1}D)^{-1} \rVert_2^{1/2}}. \end{equation} Let $\mathcal{E}_4(J_0, J_1)$ be defined as in \eqref{GoodAlgTwoOpErrorBinv}, denoting small operator norm error in estimating $B^{-1}$: \begin{equation} \mathcal{E}_4(J_0, J_1) = \Braces{ \twonorm{ \Trace{A} \Parens{\widehat{W}_2 \widehat{B}_{\rho} \widehat{W}_2}^{-1} - B^{-1} } \leq \frac{C' \lambda_A \sqrt{\abs{B^{-1}}_{0, \operatorname{off}} \vee 1 } }{b_{\min} \varphi_{\min}^2(\rho(B))} }. \end{equation} Note that $\mathcal{E}_4(J_0, J_1)$ holds deterministically under event $\mathcal{E}_{6}(J_0, J_1)$ as defined in \eqref{entrywiseBoundModSel} of Lemma~\ref{entrywiseErrorFixedSetGenes}. Define the event bounding the perturbation in mean estimation due to error in estimating $B^{-1}$: \begin{equation} \label{GoodMeanFixedB} \mathcal{E}_5(J_0, J_1) = \Braces{ \InfNorm{ \widehat{\gamma}(\widehat{B}^{-1}) - \widehat{\gamma}(B^{-1}) } < C n_{\min}^{-1/2} \twonorm{\widehat{B}^{-1} - B^{-1}} }. \end{equation} Conditional on a fixed matrix $\widehat{B}^{-1}$ that satisfies $\mathcal{E}_4(J_0, J_1)$, event $\mathcal{E}_5(J_0, J_1)$ holds with probability at least $1 - C / m^d$, by Lemma \ref{BetaBtildeBetaBStarHansonWright} (used in the proof of Theorem \ref{thm::GLSFixedB}). 
The overall rate of convergence follows by applying the union bound to the events $\mathcal{E}_{\text{global}} \cap \mathcal{E}_4(J_0, J_1) \cap \mathcal{E}_5(J_0, J_1)$, as follows: \begin{align*} &P(\mathcal{E}_{\text{global}}^c \cup \mathcal{E}_4(J_0, J_1)^c \cup \mathcal{E}_5(J_0, J_1)^c) \\ &\leq P(\mathcal{E}_{\text{global}}^c) + P(\mathcal{E}_4(J_0, J_1)^c) + P(\mathcal{E}_5(J_0, J_1)^c \mid \mathcal{E}_4(J_0, J_1))P(\mathcal{E}_4(J_0, J_1)) \\ &\qquad + P(\mathcal{E}_5(J_0, J_1)^c \mid \mathcal{E}_4(J_0, J_1)^c) P(\mathcal{E}_4(J_0, J_1)^c) \\ &\leq P(\mathcal{E}_{\text{global}}^c) + P(\mathcal{E}_4(J_0, J_1)^c) + P(\mathcal{E}_4(J_0, J_1)^c) + P(\mathcal{E}_5(J_0, J_1)^c \mid \mathcal{E}_4(J_0, J_1)) \\ &= P(\mathcal{E}_{\text{global}}^c) + 2P(\mathcal{E}_4(J_0, J_1)^c) + P(\mathcal{E}_5(J_0, J_1)^c \mid \mathcal{E}_4(J_0, J_1)), \end{align*} where $P(\mathcal{E}_{\text{global}}^c)$ and $P(\mathcal{E}_5(J_0, J_1)^c \mid \mathcal{E}_4(J_0, J_1))$ are bounded in Theorem \ref{thm::GLSFixedB}, and $P(\mathcal{E}_4(J_0, J_1)^c)$ has high probability under Lemma \ref{entrywiseErrorFixedSetGenes}. \subsection{Proof of Theorem \ref{mainTheoremModSel}} \label{proofMainThmAlgTwo} Let $\widehat{\gamma}^{\operatorname{init}}$ denote the output from Algorithm 1. By our choice of the threshold parameter $\tau_{\text{init}}$ as in \eqref{modSelThresh}, that is, \[ \tau_{\text{init}} =C \left( \frac{\log^{1/2}(m)}{\sqrt{m}} + \frac{\lVert B \rVert_1}{n_{\min}} \right) \sqrt{ \frac{n_{\text{ratio}} \left(|B^{-1}|_{0, \text{off}} \vee 1\right)}{n_{\min}}} + C\sqrt{\log(m)} \lVert (D^T B^{-1}D)^{-1} \rVert_2^{1/2}, \] we have a partition $( \widetilde{J}_0, \widetilde{J}_1)$ such that $\widetilde{J}_0$ is the set of variables selected for group centering and $\widetilde{J}_1$ is the set of variables selected for global centering. The partition results in a sample covariance matrix $S(B, \widetilde{J_0}, \widetilde{J_1})$ as defined in \eqref{SampleCovFixedGeneSets}. 
Define the event that the Algorithm 1 estimate $\widehat{\gamma}^{\operatorname{init}}$ is close to $\gamma$ in the sense that \begin{equation} \label{AlgOneGood} \mathcal{E}_{A1} = \Braces{ \InfNorm{\widehat{\gamma}^{\operatorname{init}} - \gamma} < \tau_{\text{init}}}. \end{equation} Note that the event $\mathcal{E}_{A1}$ implies that the false negatives have small true mean differences. That is, on event $\mathcal{E}_{A1}$, by the triangle inequality, \begin{align} \InfNorm{\gamma_{\widetilde{J}_1}} \leq \InfNorm{\gamma_{\widetilde{J}_1} - \widehat{\gamma}^{\operatorname{init}}_{\widetilde{J}_1} } + \InfNorm{\widehat{\gamma}^{\operatorname{init}}_{\widetilde{J}_1}} \leq \tau_{\text{init}} + \tau_{\text{init}} = 2\tau_{\text{init}}, \end{align} where $\InfNorm{\widehat{\gamma}^{\operatorname{init}}_{\widetilde{J}_1}} \leq \tau_{\text{init}}$ by definition of the thresholding set $\widetilde{J}_1$, and $\InfNorm{\gamma_{\widetilde{J}_1} - \widehat{\gamma}^{\operatorname{init}}_{\widetilde{J}_1} } < \tau_{\text{init}}$ on the event $\mathcal{E}_{A1}$. Under the assumptions of Theorem \ref{mainTheoremFixedGenes}, $2\tau_{\text{init}} \leq \tau_{\text{global}}$ with $\tau_{\text{global}}$ as defined in \eqref{smallFalseNegatives}, so condition \eqref{smallFalseNegatives} of Theorem \ref{mainTheoremFixedGenes} is satisfied. Under the conditions of Theorem \ref{mainTheoremFixedGenes}, event $\mathcal{E}_6(J_0, J_1)$ as defined in Lemma \ref{entrywiseErrorFixedSetGenes} holds with high probability; that is, the entrywise error in the sample covariance matrix is small. Let $\mathcal{E}_B$ denote event \eqref{mainThmErrorCov} in Theorem \ref{mainTheoremModSel}. In view of Theorem \ref{thm::frob} and Lemma \ref{boundW1W2errorOpFro}, event $\mathcal{E}_B$ holds on $\mathcal{E}_6(J_0, J_1)$.
Hence, since $\mathcal{E}_6(J_0, J_1) \subseteq \mathcal{E}_B$, \begin{align*} P\Parens{ \mathcal{E}_B^c } &\leq P\Parens{ \mathcal{E}_6(J_0, J_1)^c \mid \mathcal{E}_{A1}} P\Parens{ \mathcal{E}_{A1} } + P\Parens{ \mathcal{E}_6(J_0, J_1)^c \mid \mathcal{E}_{A1}^c } P\Parens{ \mathcal{E}_{A1}^c } \\ &\leq P\Parens{ \mathcal{E}_6(J_0, J_1)^c \mid \mathcal{E}_{A1}} + P\Parens{ \mathcal{E}_{A1}^c } \\ &\leq 2 / m^d + 2 / m^d, \end{align*} where the first term is bounded in Lemma \ref{entrywiseErrorFixedSetGenes} and the second in Theorem \ref{mainTheoremGroupCentering}. Recall the event $\mathcal{E}_{\text{global}}$ as defined in \eqref{EGlobal}. Event \eqref{mainThmBoundTailCutpoint} in Theorem \ref{mainTheoremModSel} holds under the intersection of events $\mathcal{E}_{\text{global}} \cap \mathcal{E}_5(\widetilde{J}_0, \widetilde{J}_1) \cap \mathcal{E}_B \cap \mathcal{E}_{A1}$. Hence the probability of \eqref{mainThmBoundTailCutpoint} can be bounded as follows: \begin{align*} &P(\mathcal{E}_{\text{global}}^c \cup \mathcal{E}_5(\widetilde{J}_0, \widetilde{J}_1)^c \cup \mathcal{E}_B^c \cup \mathcal{E}_{A1}^c ) \\ &\leq P(\mathcal{E}_{\text{global}}^c) + P(\mathcal{E}_B^c) + P(\mathcal{E}_5(\widetilde{J}_0, \widetilde{J}_1)^c \mid \mathcal{E}_B)P(\mathcal{E}_B) \\ &\qquad + P(\mathcal{E}_5(\widetilde{J}_0, \widetilde{J}_1)^c \mid \mathcal{E}_B^c) P(\mathcal{E}_B^c) + P\Parens{ \mathcal{E}_{A1}^c} \\ &\leq P(\mathcal{E}_{\text{global}}^c) + P(\mathcal{E}_B^c) + P(\mathcal{E}_B^c) + P(\mathcal{E}_5(\widetilde{J}_0, \widetilde{J}_1)^c \mid \mathcal{E}_B) + P\Parens{ \mathcal{E}_{A1}^c} \\ &= P(\mathcal{E}_{\text{global}}^c) + 2P(\mathcal{E}_B^c) + P(\mathcal{E}_5(\widetilde{J}_0, \widetilde{J}_1)^c \mid \mathcal{E}_B) + P\Parens{ \mathcal{E}_{A1}^c}, \end{align*} where $P(\mathcal{E}_{\text{global}}^c)$ and $P(\mathcal{E}_5(\widetilde{J}_0, \widetilde{J}_1)^c \mid \mathcal{E}_B)$ are bounded in Theorem \ref{thm::GLSFixedB}, $P(\mathcal{E}_B^c)$ is bounded above, and $P\Parens{ \mathcal{E}_{A1}^c}$ is bounded in Theorem \ref{mainTheoremGroupCentering}.
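The thresholding argument behind event $\mathcal{E}_{A1}$ (false negatives have true mean differences at most $2\tau_{\text{init}}$) can be illustrated directly; the sparsity level, threshold, and noise below are arbitrary choices for the illustration, with the noise scaled so that the event in \eqref{AlgOneGood} holds by construction.

```python
import numpy as np

rng = np.random.default_rng(3)
m, tau = 200, 0.3                       # tau plays the role of tau_init
gamma = np.where(rng.random(m) < 0.1,
                 rng.normal(scale=2.0, size=m), 0.0)       # sparse mean differences
gamma_hat = gamma + 0.99 * rng.uniform(-tau, tau, size=m)  # forces event E_A1

J1_sel = np.abs(gamma_hat) <= tau       # variables sent to global centering
assert np.abs(gamma_hat - gamma).max() < tau               # event E_A1 holds
# Triangle inequality: any false negative satisfies |gamma_j| <= 2 tau.
assert np.abs(gamma[J1_sel]).max(initial=0.0) <= 2 * tau
```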
\section{Proof of Lemmas \ref{entrywiseErrorFixedSetGenes} and \ref{LemmaEntrywiseSBModSel}} \label{sec::LemmasForTheorem3} We first prove Lemma~\ref{entrywiseErrorFixedSetGenes} in Section~\ref{proofEntrywiseErrorFixedSetGenes}. The rest of the section contains the proof of Lemma \ref{LemmaEntrywiseSBModSel}, where part I is proved in Sections \ref{entrywiseTermI} and \ref{entrywiseTermII} and part II in Sections \ref{entrywiseTermIII} and \ref{entrywiseTermIV}. \subsection{Proof of Lemma \ref{entrywiseErrorFixedSetGenes}} \label{proofEntrywiseErrorFixedSetGenes} The entrywise error in the sample covariance matrix \eqref{SampleCovFixedGeneSets} can be decomposed as \begin{align} \InfNorm{ S(B, J_0, J_1) - B} &\leq \InfNorm{S(B, J_0, J_1) - B^\sharp} + \InfNorm{B^\sharp - B} \\ &\leq \InfNorm{ S_{\operatorname{IV}} - B^\sharp} + 2\InfNorm{ S_{\operatorname{III}}} + \InfNorm{S_{\operatorname{II}}} + \InfNorm{B^\sharp - B}. \label{SampleCovDecomp} \end{align} Let $r_{n, m} = r_1 + r_2 + 2 r_3 + r_4$. By parts I and II of Lemma \ref{LemmaEntrywiseSBModSel}, \begin{align*} &P\Parens{ \InfNorm{ S(B, J_0, J_1) - B} \geq r_{n, m} } \\ &\leq P\Parens{ \InfNorm{ S_{\operatorname{IV}} - B^\sharp} + 2\InfNorm{ S_{\operatorname{III}}} + \InfNorm{S_{\operatorname{II}}} + \InfNorm{B^\sharp - B} \geq r_{n, m} } \quad \text{(by \eqref{SampleCovDecomp})} \\ &\leq P\Parens{ \InfNorm{ S_{\operatorname{IV}} - B^\sharp} + 2\InfNorm{ S_{\operatorname{III}}} + r_2 + r_1 \geq r_{n, m} } \quad \text{(by \eqref{entrywiseBoundSThreeSFour})} \\ &= P\Parens{ \InfNorm{ S_{\operatorname{IV}} - B^\sharp} + 2\InfNorm{ S_{\operatorname{III}}} \geq r_4 + 2r_3 } \\ &\leq P\Parens{ \InfNorm{ S_{\operatorname{IV}} - B^\sharp} \geq r_4} + P\Parens{2\InfNorm{ S_{\operatorname{III}}} \geq 2r_3} \quad \text{(by \eqref{entrywiseProbs})} \\ &\leq \frac{2}{m^d} + \frac{2}{m^d} = \frac{4}{m^d}. 
\end{align*} We show that under the assumptions of Theorem \ref{mainTheoremFixedGenes}, the entrywise error in terms $S_{\operatorname{II}}$ and $S_{\operatorname{III}}$ is $O\Parens{C_A \sqrt{\frac{\log(m)}{m}}}$. Recall that the entrywise rates of convergence of $S_{\operatorname{II}}$ and $S_{\operatorname{III}}$ are stated in equations \eqref{r2Rate} and \eqref{r3Rate}, respectively. Let $s = \abs{\text{supp}(\gamma)}$ denote the sparsity of $\gamma$. Let $m_{01} = \abs{\text{supp}\Parens{ \gamma_{J_1} }}$ denote the number of false negatives. First, we express the entrywise rate of convergence of $S_{\operatorname{II}}$ in terms of $\tau_{\text{global}}$. By \eqref{smallFalseNegatives}, $\InfNorm{\gamma_{J_1}} \leq \tau_{\text{global}}$, which implies that $\twonorm{\gamma_{J_1}}^2 \leq m_{01} \tau_{\text{global}}^2 \leq s \tau_{\text{global}}^2$, where the last inequality holds because $m_{01} \leq s$ by definition. Therefore, \begin{equation} \label{r2BoundtauGlobal} r_2 = (4 m)^{-1} \twonorm{\gamma_{J_1}}^2 \leq \frac{s \tau_{\text{global}}^2 }{4m} \leq C \frac{s \log(m)}{4 n m} \twonorm{B} , \end{equation} where the last step holds because $\tau_{\text{global}} = C \sqrt{\log(m)} \lVert (D^T B^{-1}D)^{-1} \rVert_2^{1/2} \asymp \sqrt{\frac{\log(m)}{n}} \twonorm{B}^{1/2}$ by assumption. Applying (A3) to the right-hand side of \eqref{r2BoundtauGlobal} implies that $r_2 = O\Parens{C_A \sqrt{\frac{\log(m)}{m}}}$. Next, consider term $S_{\operatorname{III}}$. First note that \begin{equation} \gamma_{J_1}^T A_{J_1} \gamma_{J_1} \leq \twonorm{\gamma_{J_1}}^2 \twonorm{A_{J_1}} \leq m_{01} \tau_{\text{global}}^2 \twonorm{A_{J_1}}, \end{equation} where the last inequality holds by \eqref{smallFalseNegatives}. 
This implies that $r_3$ is on the order \begin{align} \frac{\log^{1/2}(m)}{m} \Parens{ \widecheck{b}_{\max} \gamma_{J_1}^T A_{J_1} \gamma_{J_1}}^{1/2} &\leq \widecheck{b}_{\max}^{1/2} \twonorm{A_{J_1}}^{1/2} \Parens{ \frac{\log^{1/2}(m) m_{01}^{1/2}}{m} } \tau_{\text{global}} \notag \\ &\leq C \frac{\log(m)}{\sqrt{n}} \frac{\sqrt{s}}{m} \twonorm{A_{J_1}}^{1/2} \twonorm{B}^{1/2} \widecheck{b}_{\max}^{1/2}, \label{entrywiseSThreeUnderA2primeBound} \end{align} where the last inequality holds because $m_{01} \leq s \leq m$ and $\tau_{\text{global}} \asymp \sqrt{\frac{\log(m)}{n}} \twonorm{B}^{1/2}$. Under (A2'), the right-hand side of \eqref{entrywiseSThreeUnderA2primeBound} satisfies \begin{equation} \frac{\log(m)}{\sqrt{n}} \frac{\sqrt{s}}{m} \twonorm{A_{J_1}}^{1/2} \twonorm{B}^{1/2} \widecheck{b}_{\max}^{1/2} \leq \sqrt{\log(m)} \frac{\sqrt{s}}{m} C_A \frac{\twonorm{A_{J_1}}^{1/2} }{\twonorm{A}^{1/2} } \leq C_A \sqrt{ \frac{\log(m)}{m} }, \end{equation} where the last inequality holds because $s \leq m$. \subsection{Proof of part I of Lemma \ref{LemmaEntrywiseSBModSel}, term I} \label{entrywiseTermI} We bound the entrywise bias, \begin{align} \MaxNorm{B^\sharp - B} &= \MaxNorm{ \frac{\Trace{A_{J_0}}}{m} \widetilde{B} + \frac{\Trace{A_{J_1}} }{m} \widecheck{B} - B} \notag \\ &\leq \frac{\Trace{A_{J_0}}}{m} \MaxNorm{ \widetilde{B} - B} + \frac{\Trace{A_{J_1}}}{m} \MaxNorm{\widecheck{B} - B}.\label{modSelBBiasMaxNorm} \end{align} Note that \begin{align} \MaxNorm{ \widecheck{B} - B } &= \MaxNorm{ (I - P_1) B (I - P_1) - B } = \MaxNorm{P_1 B P_1 - P_1 B - BP_1} \notag \\ &\leq \MaxNorm{P_1 B P_1} + \MaxNorm{P_1 B} + \MaxNorm{BP_1}. \label{maxEntryBGlobalCen} \end{align} We bound the first term of \eqref{maxEntryBGlobalCen} as follows: \begin{align*} \abs{ \Parens{ P_1 B P_1 }_{ij} } &\leq \twonorm{p^{(1)}_i } \twonorm{ p^{(1)}_j } \twonorm{ B } \leq \frac{\twonorm{ B }}{n_{\min}}. 
\end{align*} For the second term of \eqref{maxEntryBGlobalCen}, \begin{align*} \abs{ \Parens{P_1 B}_{ij} } &= \abs{ b_i^T p_j^{(1)} } \leq \OneNorm{ b_i } \InfNorm{p_j^{(1)}} \leq \OneNorm{ B } \InfNorm{p_j^{(1)}} \leq \frac{\OneNorm{ B }}{n_{\min}}, \end{align*} where $\InfNorm{p_j^{(1)}} \leq \frac{1}{n_{\min}}$ by the definition of $P_1$ in \eqref{def:PGlobal}. We have shown $\MaxNorm{ P_1 B } \leq \frac{\OneNorm{ B }}{n_{\min}}$. Likewise, $\MaxNorm{ B P_1 } \leq \frac{\OneNorm{ B }}{n_{\min}}$. Therefore, \begin{equation} \label{biasGlobalMaxNormNmin} \MaxNorm{ \widecheck{B} - B} \leq 3 \frac{ \OneNorm{B} }{n_{\min}}. \end{equation} Because the projection matrix $P_2$ satisfies $\InfNorm{p_j^{(2)}} \leq \frac{1}{n_{\min}}$, an analogous proof shows that \begin{equation} \label{biasGroupMaxNormNmin} \MaxNorm{\widetilde{B} - B} \leq \frac{3 \OneNorm{B}}{n_{\min}}. \end{equation} Substituting \eqref{biasGlobalMaxNormNmin} and \eqref{biasGroupMaxNormNmin} into \eqref{modSelBBiasMaxNorm} yields \begin{align} \MaxNorm{B^\sharp - B} &\leq \frac{\Trace{A_{J_0}}}{m} \MaxNorm{ \widetilde{B} - B} + \frac{\Trace{A_{J_1}}}{m} \MaxNorm{\widecheck{B} - B} \notag \\ &\leq \Parens{\frac{\Trace{A_{J_0}}}{m} + \frac{\Trace{A_{J_1}}}{m} } \frac{3 \OneNorm{B}}{n_{\min}} \notag \\ &= \frac{\Trace{A}}{m} \frac{3 \OneNorm{B}}{n_{\min}} \notag \\ &= \frac{3 \OneNorm{B}}{n_{\min}}. \end{align} \subsection{Proof of part I of Lemma \ref{LemmaEntrywiseSBModSel}, term II} \label{entrywiseTermII} In this section we prove a deterministic entrywise bound on $S_{\operatorname{II}}$. By \eqref{ResdualsM}, it follows that \begin{align*} (I - \PGlobal) M_{J_1} M_{J_1}^T (I - \PGlobal) = (1/4) \twonorm{\gamma_{J_1}}^2 \delta_n \delta_n ^T, \end{align*} which implies \[ \InfNorm{ (I - \PGlobal) M_{J_1} M_{J_1}^T (I - \PGlobal) } = \InfNorm{ (1/4) \twonorm{\gamma_{J_1}}^2 \delta_n \delta_n ^T } = (1/4) \twonorm{\gamma_{J_1}}^2.
\] Therefore $S_{\operatorname{II}}$ satisfies the maximum entrywise bound \[ \InfNorm{S_{\operatorname{II}}} = \InfNorm{ m^{-1} (I - \PGlobal) M_{J_1} M_{J_1}^T (I - \PGlobal) } = \InfNorm{ (4 m)^{-1} \twonorm{\gamma_{J_1}}^2 \delta_n \delta_n ^T } = (4 m)^{-1} \twonorm{\gamma_{J_1}}^2, \] so \[ \InfNorm{S_{\operatorname{II}}} = r_2. \] Note that if $J_1$ is chosen so that $\InfNorm{\gamma_{J_1}} \leq \tau$, then $\twonorm{\gamma_{J_1}}^2 \leq m_{01} \tau^2$, where $m_{01}$ is the number of false negatives, so \begin{equation} \label{boundTauGlobalThree} \frac{\twonorm{\gamma_{J_1}}^2}{4 m} \leq \frac{m_{01}}{4 m} \tau^2 \leq \frac{\tau^2}{4}, \end{equation} which implies that the entrywise rate of convergence of $S_{\operatorname{II}}$ is $O(\tau^2)$. \subsection{Proof of part II of Lemma \ref{LemmaEntrywiseSBModSel}, term III} \label{entrywiseTermIII} Let $p_i$ denote the $i$th column of $P_1^T$, for $i = 1, \ldots, n$. Let $m_k$ denote the $k$th column of $M$. Let $\varepsilon_k$ denote the $k$th column of $\varepsilon$. The term $S_{\operatorname{III}}$ can be expressed as \begin{align*} (S_{\operatorname{III}})_{ij} &= m^{-1} (e_i - p_i)^T M_{J_1} \varepsilon_{J_1}^T (e_j - p_j) \\ &= m^{-1} \Trace{\varepsilon_{J_1}^T (e_j - p_j)(e_i - p_i)^T M_{J_1}} \\ &= m^{-1} \sum_{k \in J_1} \varepsilon_k^T (e_j - p_j)(e_i - p_i)^T m_k \\ &= m^{-1} \Vectorize{\varepsilon_{J_1}}^T \Parens{ \Identity{m_1} \otimes (e_j - p_j)(e_i - p_i)^T } \Vectorize{M_{J_1}} \\ &= m^{-1} \Vectorize{Z}^T \Parens{ A_{J_1}^{1/2} \otimes B^{1/2} (e_j - p_j)(e_i - p_i)^T } \Vectorize{M_{J_1}} \\ &= \Vectorize{Z}^T \psi_{ij}, \end{align*} where \begin{equation} \label{def:VecGlobalTwo} \psi_{ij} := m^{-1} \Parens{ A_{J_1}^{1/2} \otimes B^{1/2} (e_j - p_j)(e_i - p_i)^T } \Vectorize{M_{J_1}}.
\end{equation} The squared Euclidean norm of $\psi_{ij}$ is \begin{align} \twonorm{\psi_{ij}}^2 &= \Vectorize{M_{J_1}}^T \Parens{ A_{J_1} \otimes (e_i - p_i) (e_j - p_j)^T B (e_j - p_j)(e_i - p_i)^T } \Vectorize{M_{J_1}} / m^2 \notag \\ &= \Vectorize{M_{J_1}}^T \Parens{A_{J_1} \otimes \widecheck{b}_{jj} (e_i - p_i) (e_i - p_i)^T} \Vectorize{M_{J_1}} / m^2 \notag \\ &= \widecheck{b}_{jj} \sum_{k \in J_1} \sum_{\ell \in J_1} \EntryTrueA{k \ell} m_k^T (e_i - p_i)(e_i - p_i)^T m_{\ell} / m^2 \notag \\ &= \widecheck{b}_{jj} \sum_{k \in J_1} \sum_{\ell \in J_1} \EntryTrueA{k \ell} (\delta_n )_i \gamma_k (\delta_n )_i \gamma_\ell / \Parens{4 m^2} \notag \\ &= \widecheck{b}_{jj} \sum_{k \in J_1} \sum_{\ell \in J_1} \EntryTrueA{k \ell} \gamma_k \gamma_\ell / \Parens{ 4 m^2} \notag \\ &= \widecheck{b}_{jj} \gamma_{J_1}^T A_{J_1} \gamma_{J_1} / \Parens{4 m^2}. \label{squaredNormPsiijRate3} \end{align} By the Hanson-Wright inequality (Theorem 2.1), \begin{equation} \label{HansonWrightGlobalTwo} \prob{ \abs{ \Vectorize{Z}^T \psi_{ij} - \twonorm{\psi_{ij}} } > d^{1/2} K^2 \sqrt{\log(m)} \twonorm{\psi_{ij}} } \leq 2 \expf{ -d \log(m)} = 2 / m^d. \end{equation} Therefore \begin{align*} \prob{ \abs{ \EntryGlobalThree{ij} } > \Parens{1 + d^{1/2} K^2 \sqrt{\log(m)} } \twonorm{\psi_{ij}} } &= \prob{ \abs{ \Vectorize{Z}^T \psi_{ij} } > \twonorm{\psi_{ij}} + d^{1/2} K^2 \sqrt{\log(m)} \twonorm{\psi_{ij}}} \\ &\leq \prob{ \abs{ \Vectorize{Z}^T \psi_{ij} - \twonorm{\psi_{ij}} } > d^{1/2} K^2 \sqrt{\log(m)} \twonorm{\psi_{ij}} } \\ &\leq 2/m^d, \end{align*} where the last step follows from \eqref{HansonWrightGlobalTwo}. 
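The norm identity \eqref{squaredNormPsiijRate3} is exact and can be checked directly. The sketch below builds $\psi_{ij}$ from its definition \eqref{def:VecGlobalTwo}, with a symmetric square root as an implementation choice, and compares $\twonorm{\psi_{ij}}^2$ against $\widecheck{b}_{jj}\,\gamma_{J_1}^T A_{J_1} \gamma_{J_1} / (4m^2)$, where $\widecheck{b}_{jj} = (e_j - p_j)^T B (e_j - p_j)$ as in the derivation above; all dimensions are arbitrary.

```python
import numpy as np

def sym_sqrt(S):
    """Symmetric positive-definite square root via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.sqrt(w)) @ V.T

rng = np.random.default_rng(4)
n1, n2, m, m0 = 3, 4, 6, 3
n = n1 + n2
ones = np.ones(n)
delta = np.concatenate([np.ones(n1), -np.ones(n2)])
nu = 0.5 * np.concatenate([np.full(n1, 1 / n1), np.full(n2, 1 / n2)])
J1 = slice(m0, m)

G = rng.normal(size=(n, n)); B = G @ G.T + n * np.eye(n)
H = rng.normal(size=(m, m)); A = H @ H.T + m * np.eye(m)
mu, gamma = rng.normal(size=m), rng.normal(size=m)
M = np.outer(ones, mu) + 0.5 * np.outer(delta, gamma)

i, j = 1, 4                             # an arbitrary entry (i, j)
u = np.eye(n)[j] - nu                   # e_j - p_j  (p_j = nu for every j)
v = np.eye(n)[i] - nu                   # e_i - p_i
psi = np.kron(sym_sqrt(A[J1, J1]),
              sym_sqrt(B) @ np.outer(u, v)) @ M[:, J1].flatten(order="F") / m

lhs = psi @ psi                         # ||psi_ij||^2
b_check_jj = u @ B @ u                  # (e_j - p_j)^T B (e_j - p_j)
rhs = b_check_jj * (gamma[J1] @ A[J1, J1] @ gamma[J1]) / (4 * m**2)
assert np.isclose(lhs, rhs)
```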
By \eqref{squaredNormPsiijRate3}, it follows that \begin{equation} \label{r3UpperBoundEntrywise} \Parens{ 1 + d^{1/2} K^2 \sqrt{\log(m)} } \twonorm{\psi_{ij}} \leq r_3, \end{equation} so \begin{equation} \prob{ \abs{ \EntryGlobalThree{ij} } > r_3 } \leq \prob{ \abs{ \EntryGlobalThree{ij} } > \Parens{1 + d^{1/2} K^2 \sqrt{\log(m)} } \twonorm{\psi_{ij}} } \leq 2 / m^d, \end{equation} by \eqref{r3UpperBoundEntrywise}. By the union bound, \begin{equation*} \prob{ \InfNorm{ S_{\operatorname{III}} } > r_3 } \leq \sum_{i = 1}^m \sum_{j = 1}^m \prob{ \abs{ \EntryGlobalThree{ij} } > r_3 } \leq 2 / m^{d - 2}. \end{equation*} \subsection{Proof of part II of Lemma \ref{LemmaEntrywiseSBModSel}, term IV} \label{entrywiseTermIV} We now analyze term $S_{\operatorname{IV}}$. To do so, we express $S_{\operatorname{IV}}$ as a quadratic form in order to apply the Hanson-Wright inequality. Let $p_i^{(1)}$ denote the $i$th column of $P_1^T$. Let $p_i^{(2)}$ denote the $i$th column of $P_2^T$. Define \begin{equation} H_{\operatorname{group}}^{ij} = I_{m_0} \otimes \Parens{ e_j - p_j^{(2)} } \Parens{ e_i - p_i^{(2)} } ^T \quad \text{ and } \quad H_{\operatorname{global}}^{ij} = I_{m_1} \otimes \Parens{ e_j - p_j^{(1)} } \Parens{ e_i - p_i^{(1)} } ^T, \end{equation} and let \begin{equation} H^{ij}(J_0, J_1) = \begin{bmatrix} H_{\operatorname{group}}^{ij} \\ & H_{\operatorname{global}}^{ij} \end{bmatrix}, \end{equation} where $H_{\operatorname{group}}^{ij} \in \mathbb{R}^{m_0 n \times m_0 n}$, $H_{\operatorname{global}}^{ij} \in \mathbb{R}^{m_1 n \times m_1 n}$, and $H^{ij}(J_0, J_1) \in \mathbb{R}^{mn \times mn}$. Recall that \begin{equation*} S_{\operatorname{IV}} = m^{-1} (I - P_2) \varepsilon_{J_0} \varepsilon_{J_0}^T (I - P_2) + m^{-1} (I - P_1) \varepsilon_{J_1} \varepsilon_{J_1}^T (I - P_1).
\end{equation*} The $(i, j)$th entry of the second term of $S_{\operatorname{IV}}$ can be expressed as a quadratic form, as follows (where $\varepsilon_k$ denotes the $k$th column of $\varepsilon \in \mathbb{R}^{n \times m}$): \begin{align} \Brackets{ m^{-1} (I - P_1) \varepsilon_{J_1} \varepsilon_{J_1}^T (I - P_1) }_{ij} &= m^{-1} \sum_{k \in J_1} \Parens{ e_i - p_i^{(1)} }^T \varepsilon_k \varepsilon_k^T \Parens{ e_j - p_j^{(1)} } \notag \\ &= m^{-1} \sum_{k \in J_1} \Trace{ \Parens{ e_i - p_i^{(1)} }^T \varepsilon_k \varepsilon_k^T \Parens{ e_j - p_j^{(1)} } } \notag \\ &= m^{-1} \sum_{k \in J_1} \varepsilon_k^T \Parens{ e_j - p_j^{(1)} } \Parens{ e_i - p_i^{(1)} }^T \varepsilon_k \notag \\ &= m^{-1} \Vectorize{\varepsilon_{J_1}}^T \Parens{ I_{m_1} \otimes \Parens{ e_j - p_j^{(1)} } \Parens{ e_i - p_i^{(1)} } ^T } \Vectorize{\varepsilon_{J_1}} \notag \\ &= m^{-1} \Vectorize{\varepsilon_{J_1}}^T H_{\operatorname{global}}^{ij} \Vectorize{\varepsilon_{J_1}}. \label{termOneGlobalQuadFrom} \end{align} Analogously, the $(i, j)$th entry of the first term of $S_{\operatorname{IV}}$ can be expressed as a quadratic form: \begin{align} \Brackets{ m^{-1} (I - P_2) \varepsilon_{J_0} \varepsilon_{J_0}^T (I - P_2) }_{ij} &= m^{-1} \sum_{k \in J_0} \Parens{ e_i - p_i^{(2)} }^T \varepsilon_k \varepsilon_k^T \Parens{ e_j - p_j^{(2)} } \notag \\ &= m^{-1} \Vectorize{\varepsilon_{J_0}}^T H_{\operatorname{group}}^{ij} \Vectorize{\varepsilon_{J_0}}. \label{termOneGroupQuadFrom} \end{align} We now express $S_{\operatorname{IV}}$ as a quadratic form. Let $\pi(X)$ denote the matrix $X$ with reordered columns: \begin{equation} \label{permuted} \pi( X ) = \begin{bmatrix} X_{J_0} & X_{J_1} \end{bmatrix} \quad \text{ and } \quad \pi(A) = \operatorname{Cov}\Parens{ \Vectorize{\pi(X)}}.
\end{equation} Then by \eqref{termOneGlobalQuadFrom} and \eqref{termOneGroupQuadFrom}, \begin{align*} (S_{\operatorname{IV}})_{ij} &= m^{-1} \Vectorize{\varepsilon_{J_0}}^T H_{\operatorname{group}}^{ij} \Vectorize{\varepsilon_{J_0}} + m^{-1} \Vectorize{\varepsilon_{J_1}}^T H_{\operatorname{global}}^{ij} \Vectorize{\varepsilon_{J_1}} \\ &= m^{-1} \Vectorize{\pi \Parens{ \varepsilon }}^T H^{ij}(J_0, J_1) \Vectorize{\pi \Parens{ \varepsilon }} \\ &= m^{-1} \Vectorize{Z}^T \Parens{ \Parens{ \pi(A)^{1/2} \otimes B^{1/2}} H^{ij}(J_0, J_1) \Parens{ \pi(A)^{1/2} \otimes B^{1/2}}} \Vectorize{Z}, \end{align*} where the last step holds by decorrelation, with $Z \in \mathbb{R}^{n \times m}$ as a random matrix with independent subgaussian entries. Note that the $(i, j)$th entry of $S_{\operatorname{IV}}$ can be expressed as \begin{equation} \label{SOneQuadraticForm} \Parens{S_{\operatorname{IV}}}_{ij} = \Vectorize{Z}^T \Phi_{i, j} \Vectorize{Z}, \end{equation} with \begin{equation} \label{DecorrHij} \Phi_{i, j} = m^{-1} \Parens{ \pi(A)^{1/2} \otimes B^{1/2}} H^{ij}(J_0, J_1) \Parens{ \pi(A)^{1/2} \otimes B^{1/2}}. \end{equation} Having expressed $\Parens{S_{\operatorname{IV}}}_{ij}$ as a quadratic form in \eqref{SOneQuadraticForm}, we find the trace and Frobenius norm of $\Phi_{i, j}$, then apply the Hanson-Wright inequality. First we find the trace of $\Phi_{i, j}$. Let \begin{equation} \mathcal{I}_0 = \begin{bmatrix} I_{m_0 \times m_0} & 0_{m_0 \times m_1} \\ 0_{m_1 \times m_0} & 0_{m_1 \times m_1} \end{bmatrix} \quad \text{ and } \quad \mathcal{I}_1 = \begin{bmatrix} 0_{m_0 \times m_0} & 0_{m_0 \times m_1} \\ 0_{m_1 \times m_0} & I_{m_1 \times m_1} \end{bmatrix}.
\end{equation} Note that $H^{ij}(J_0, J_1)$ can be written as a sum of Kronecker products, \begin{equation} \label{HijSumKronecker} H^{ij}(J_0, J_1) = \mathcal{I}_0 \otimes \Parens{ e_j - p_j^{(2)} } \Parens{ e_i - p_i^{(2)} } ^T + \mathcal{I}_1 \otimes \Parens{ e_j - p_j^{(1)} } \Parens{ e_i - p_i^{(1)} }^T, \end{equation} hence \eqref{DecorrHij} can be expressed as \begin{align} &m^{-1} \Parens{ \pi(A)^{1/2} \otimes B^{1/2}} \Parens{\mathcal{I}_0 \otimes \Parens{ e_j - p_j^{(2)} } \Parens{ e_i - p_i^{(2)} } ^T } \Parens{ \pi(A)^{1/2} \otimes B^{1/2}} \label{DecorrHijTermGlobal} \\ &\qquad + m^{-1} \Parens{ \pi(A)^{1/2} \otimes B^{1/2}} \Parens{\mathcal{I}_1 \otimes \Parens{ e_j - p_j^{(1)} } \Parens{ e_i - p_i^{(1)} }^T } \Parens{ \pi(A)^{1/2} \otimes B^{1/2}} \label{DecorrHijTermGroup}. \end{align} The trace of the term \eqref{DecorrHijTermGlobal} is \begin{align*} & m^{-1} \Trace{\Parens{\pi(A)^{1/2} \otimes B^{1/2}} \Parens{\mathcal{I}_0 \otimes \Parens{ e_j - p_j^{(2)} } \Parens{ e_i - p_i^{(2)} } ^T} \Parens{\pi(A)^{1/2} \otimes B^{1/2}}} \\ &= m^{-1} \Trace{ \pi(A)^{1/2} \mathcal{I}_0 \pi(A)^{1/2} \otimes B^{1/2} \Parens{ e_j - p_j^{(2)} } \Parens{ e_i - p_i^{(2)} }^T B^{1/2}} \\ &= m^{-1} \Trace{\pi(A)^{1/2} \mathcal{I}_0 \pi(A)^{1/2}} \Trace{B^{1/2} \Parens{ e_j - p_j^{(2)} } \Parens{ e_i - p_i^{(2)} }^T B^{1/2} } \\ &= m^{-1} \Trace{\mathcal{I}_0 \pi(A)} \Parens{ \Parens{ e_i - p_i^{(2)} }^T B \Parens{ e_j - p_j^{(2)} } } \\ &= m^{-1} \Trace{A_{J_0} } \Brackets{ (I - P_2) B (I - P_2) }_{ij} \\ &= m^{-1} \Trace{A_{J_0} } \widetilde{b}_{ij}. \end{align*} Analogously, the trace of the term \eqref{DecorrHijTermGroup} is \begin{align*} &m^{-1} \Trace{\Parens{\pi(A)^{1/2} \otimes B^{1/2}} \Parens{\mathcal{I}_1 \otimes \Parens{ e_j - p_j^{(1)} } \Parens{ e_i - p_i^{(1)} } ^T} \Parens{\pi(A)^{1/2} \otimes B^{1/2}}} \\ &= m^{-1} \Trace{A_{J_1}} \Brackets{ (I - P_1) B (I - P_1) }_{ij} \\ &= m^{-1} \Trace{A_{J_1}} \widecheck{b}_{ij}.
\end{align*} Let $b^\sharp_{ij}$ denote the $(i, j)$th entry of $B^\sharp$ defined in \eqref{expectedValueModSelSB}. We have shown that the trace of $\Phi_{i, j}$ (as defined in \eqref{DecorrHij}) is \begin{equation} \label{traceQuadFromTermOne} \Trace{\Phi_{i, j}} = m^{-1} \Trace{A_{J_0} } \widetilde{b}_{ij} + m^{-1} \Trace{A_{J_1}} \widecheck{b}_{ij} = b^\sharp_{ij}. \end{equation} Next, we find the Frobenius norm of $\Phi_{i, j}$. For convenience, define \begin{align} \mathcal{A}_{0} = \pi(A)^{1/2} \mathcal{I}_0 \pi(A)^{1/2} \quad &\text{ and } \quad \mathcal{A}_{1} = \pi(A)^{1/2} \mathcal{I}_1 \pi(A)^{1/2} \\ \mathcal{B}_{2, ij} = B^{1/2} \Parens{ e_j - p_j^{(2)} } \Parens{ e_i - p_i^{(2)} }^T B^{1/2} \quad &\text{ and } \quad \mathcal{B}_{1, ij} = B^{1/2} \Parens{ e_j - p_j^{(1)} } \Parens{ e_i - p_i^{(1)} }^T B^{1/2}. \end{align} Then \begin{align} \FroNorm{ \Phi_{i, j} }^2 &= \FroNorm{ m^{-1} \Parens{ \pi(A)^{1/2} \otimes B^{1/2}} H^{ij}(J_0, J_1) \Parens{ \pi(A)^{1/2} \otimes B^{1/2}} }^2 \notag \\ &= m^{-2} \FroNorm{ \mathcal{A}_0 \otimes \mathcal{B}_{2, ij} + \mathcal{A}_1 \otimes \mathcal{B}_{1, ij} }^2 \notag \\ &= m^{-2} \Trace{\Parens{\mathcal{A}_0 \otimes \mathcal{B}_{2, ij} + \mathcal{A}_1 \otimes \mathcal{B}_{1, ij} }^T \Parens{\mathcal{A}_0 \otimes \mathcal{B}_{2, ij} + \mathcal{A}_1 \otimes \mathcal{B}_{1, ij} } } \notag \\ &= m^{-2} \Trace{ \mathcal{A}_0^T \mathcal{A}_0 \otimes \mathcal{B}_{2, ij} ^T \mathcal{B}_{2, ij} } + m^{-2} \Trace{ \mathcal{A}_1^T \mathcal{A}_1 \otimes \mathcal{B}_{1, ij} ^T \mathcal{B}_{1, ij} } \notag \\ &\qquad + m^{-2} \Trace{ \mathcal{A}_0^T \mathcal{A}_1 \otimes \mathcal{B}_{2, ij} ^T \mathcal{B}_{1, ij} } + m^{-2} \Trace{ \mathcal{A}_1^T \mathcal{A}_0 \otimes \mathcal{B}_{1, ij} ^T \mathcal{B}_{2, ij} }. \label{TermOneFroDecomp} \end{align} We now find the traces of each of the terms in \eqref{TermOneFroDecomp}.
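For ease of reference, we record the standard Kronecker-product identities used repeatedly above and below: for conformable matrices, \begin{equation*} (A \otimes B)^T = A^T \otimes B^T, \qquad (A \otimes B)(C \otimes D) = AC \otimes BD, \qquad \Trace{A \otimes B} = \Trace{A}\Trace{B}, \end{equation*} and consequently \begin{equation*} \Trace{ (A \otimes B)^T (C \otimes D) } = \Trace{A^T C}\, \Trace{B^T D}. \end{equation*}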
First, note that \begin{align} \label{A0A0} \Trace{ \mathcal{A}_0^T \mathcal{A}_0 } &= \Trace{ \mathcal{I}_0 \pi(A) \mathcal{I}_0 \pi(A) } = \Trace{ A_{J_0}^2 } = \FroNorm{ A_{J_0} }^2. \end{align} Analogously, \begin{equation} \label{A1A1} \Trace{ \mathcal{A}_1^T \mathcal{A}_1} = \FroNorm{ A_{J_1} }^2. \end{equation} For the cross-term, let $A_{J_0 J_1}$ denote the $m_0 \times m_1$ submatrix of $\pi(A)$ given by the rows of $A$ indexed by $J_0$ and the columns of $A$ indexed by $J_1$. Then \begin{align} \Trace{ \mathcal{A}_0^T \mathcal{A}_1} &= \Trace{ \mathcal{I}_0 \pi(A) \mathcal{I}_1 \pi(A) } \notag \\ &= \Trace{ \begin{bmatrix} 0_{m_0 \times m_0} & A_{J_0 J_1} \\ 0_{m_1 \times m_0} & 0_{m_1 \times m_1} \end{bmatrix} \pi(A) } \notag \\ &= \Trace{ A_{J_0 J_1}^T A_{J_0 J_1}} \notag \\ &= \FroNorm{ A_{J_0 J_1}}^2. \label{A0A1} \end{align} Next, \begin{align} \Trace{ \mathcal{B}_{1, ij} ^T \mathcal{B}_{1, ij} } &= \Trace{B^{1/2} \Parens{ e_i - p_i^{(1)} } \Parens{ e_j - p_j^{(1)} }^T B\Parens{ e_j - p_j^{(1)} } \Parens{ e_i - p_i^{(1)} }^T B^{1/2} } \notag \\ &= \Parens{ \Parens{ e_j - p_j^{(1)} }^T B\Parens{ e_j - p_j^{(1)} } } \Parens{ \Parens{ e_i - p_i^{(1)} }^T B\Parens{ e_i - p_i^{(1)} } } \notag \\ &= \widecheck{b}_{jj} \widecheck{b}_{ii}. \label{B1B1} \end{align} Analogously, \begin{align} \Trace{ \mathcal{B}_{2, ij} ^T \mathcal{B}_{2, ij} } &= \Parens{ \Parens{ e_j - p_j^{(2)} }^T B\Parens{ e_j - p_j^{(2)} } } \Parens{ \Parens{ e_i - p_i^{(2)} }^T B\Parens{ e_i - p_i^{(2)} } } \notag \\ &= \widetilde{b}_{jj} \widetilde{b}_{ii}. \label{B2B2} \end{align} The cross-terms yield \begin{align} \label{B1B2} \Trace{ \mathcal{B}_{1, ij} ^T \mathcal{B}_{2, ij} } &= \Parens{ \Parens{ e_j - p_j^{(1)} }^T B\Parens{ e_j - p_j^{(2)} } } \Parens{ \Parens{ e_i - p_i^{(2)} }^T B\Parens{ e_i - p_i^{(1)} } } = \breve{b}_{ii} \breve{b}_{jj}.
\end{align} The squared Frobenius norm of $\Phi_{i, j}$ is \begin{align*} \FroNorm{\Phi_{i, j}}^2 &= \frac{1}{m^2} \Parens{ \FroNorm{A_{J_0}}^2 \widetilde{b}_{ii} \widetilde{b}_{jj} + \FroNorm{A_{J_1}}^2 \widecheck{b}_{ii} \widecheck{b}_{jj} + 2 \FroNorm{A_{J_0 J_1}}^2 \breve{b}_{ii} \breve{b}_{jj} } \\ &\leq \frac{1}{m^2} C \Parens{ \FroNorm{A_{J_0}}^2 + \FroNorm{A_{J_1}}^2 + 2 \FroNorm{A_{ J_0 J_1 }}^2 } \twonorm{B}^2 \\ &= C \frac{1}{m^2} \FroNorm{A}^2 \twonorm{B}^2. \end{align*} We now apply the Hanson-Wright inequality, \begin{align*} \prob{ \abs{ \EntryGlobalOne{ij} - b^\sharp_{ij} } > r_4} &= \prob{ \abs{ \Vectorize{Z}^T \Phi_{i, j} \Vectorize{Z} - \Trace{\Phi_{i, j}} } > r_4} \\ &\leq 2 \exp\left( -c \min\left\{ d \log(m) , d^{1/2} \sqrt{\log(m)} \frac{\fnorm{\Phi_{i, j}} }{ \twonorm{\Phi_{i, j}} } \right\} \right) \\ &\leq 2 \max\left( m^{-d} , \exp\left( -c\, d^{1/2} \sqrt{\log(m)} \, r^{1/2}(\Phi_{i, j}) \right) \right). \end{align*} The first step holds by \eqref{SOneQuadraticForm} and \eqref{traceQuadFromTermOne}, and $r(\Phi_{i, j}) = \FroNorm{\Phi_{i, j}}^2 / \twonorm{\Phi_{i, j}}^2$ denotes the stable rank of $\Phi_{i, j}$. \section{Conclusion} \label{sec::conclude} It has long been known that heteroscedasticity and dependence between observations impact the precision and degree of uncertainty for estimates of mean values and regression coefficients. Further, data that are modeled for convenience as being independent observations may in fact show unanticipated dependence~\citep{kruskal1988miracles}. This has motivated the development of numerous statistical methods, including generalized/weighted least squares (GLS/WLS), mixed effect models, and generalized estimating equations (GEE). Our approach utilizes recent advances in high dimensional statistics to permit estimation of an inter-observation dependence structure (reflected in the matrix $B$ in our model).
Like GLS/GEE, we use an approach that alternates between mean and covariance estimation, but limit it in Algorithm 1 to a mean estimation step, followed by a covariance update, followed by a mean update, with an additional covariance and mean update if Algorithm 2 is used. We provide convergence guarantees and rates for both algorithms. Estimation of dependence or covariance structures usually requires some form of replication, and/or strong models. We require a relatively weak form of replication and a relatively weak model. In our framework, the dependence among observations must be common (up to proportionality) across a set of ``quasi-replicates'' (the columns of $X$, or the genes in our UC example). These quasi-replicates may be statistically dependent, and may have different means. We also require the precision matrices for the dependence structures to be sparse, which is a commonly used condition in recent high-dimensional analyses. In addition to providing theoretical guarantees, we also show through simulations and a genomic data analysis that the approach improves estimation accuracy for the mean structure, and appears to mitigate test statistic overdispersion, leading to test statistics that do not require post-hoc correction. The latter observation suggests that undetected dependence among observations may be one reason that genomic analyses are sometimes less reproducible than traditional statistical methods would suggest, an observation made previously by \citet{Efr09} and others. Although our theoretical analysis guarantees the convergence of our procedure even with a single observation of the random matrix $X$, there are reasons to expect this estimation problem to be fundamentally challenging.
One reason for this, as pointed out by \citet{Efr09} and subsequently explored by \citet{Zhou14a}, is that the row-wise and column-wise dependence structures are somewhat non-orthogonal, in that row-dependence can ``leak'' into the estimates of column-wise dependence, and vice-versa. Our results suggest that while row-wise correlations make it more difficult to estimate column-wise correlations (and vice-versa), when the emphasis is on mean structure estimation, even a somewhat rough estimate of the dependence structure ($B$) can substantially improve estimation and inference. \section{Simulations} \label{sec::Simulations} We present simulations to compare Algorithms 1 and 2 to both sample mean based analysis and oracle algorithms that use knowledge of the true correlation structures $A$ and $B$. We show these results for a variety of population structures and sample sizes. We construct covariance matrices for $A$ and $B$ from one of: \begin{itemize} \item AR1$(\rho)$ model. The covariance matrix is of the form $B =\{\rho^{|i-j|}\}_{i,j}$, and the graph corresponding to $B^{-1}$ is a chain. \item Star-Block model. The covariance matrix is block-diagonal with equal-sized blocks whose inverses correspond to star-structured graphs, where $B_{ii} = 1$, for all $i$. In each subgraph, a central hub node connects to all other nodes in the subgraph, with no additional edges. The covariance matrix for each block $S$ in $B$ is generated as in \cite{RWRY08}: $S_{ ij} = \rho = 0.5$ if $(i,j) \in E$ and $S_{ij} = \rho^2$ otherwise. \item Erd\H{o}s-R\'{e}nyi model. We use the random concentration matrix model in \cite{ZLW08}. The graph is generated according to a type of Erd\H{o}s-R\'{e}nyi random graph. Initially we set $B^{-1} = 0.25 I_{n \times n}$.
Then, we randomly select $d$ edges and update $B^{-1}$ as follows: for each new edge $(i, j)$, a weight $w >0$ is chosen uniformly at random from $[w_{\min}, w_{\max}]$ where $w_{\min} = 0.6$ and $w_{\max} = 0.8$; we subtract $w$ from $B^{-1}_{ij}$ and $B^{-1}_{ji}$, and increase $B^{-1}_{ii}$ and $B^{-1}_{jj}$ by $w$. This keeps $B^{-1}$ positive definite. We then rescale so that $B^{-1}$ is an inverse correlation matrix. \end{itemize} \subsection{Accuracy of $\widehat{\gamma}$ and its implication for variable ranking} Table~\ref{CovMeanMetrics} displays metrics that reflect how the choice of different population structures $B$ can affect the difficulty of the mean and covariance estimation problems. Column 2 is a measure discussed by \citet{efron2007correlation}. Column 3 appears directly in the theoretical analysis, reflecting the entry-wise error in the sample correlation $\widehat{\Gamma}(B)$. Column 4 analogously reflects the entry-wise error for the Flip-Flop procedure in \citet{Zhou14a}, and is included here for completeness. Column 5 displays the value of $\sqrt{\delta^T (D^T B^{-1} D)^{-1} \delta} $, where $\delta = (1, -1) \in \mathbb{R}^2$, which represents the standard deviation of the difference in means estimated using GLS with the true $B^{-1}$. Column 6 displays what we call the standard deviation ratio, namely \begin{equation} \label{sdRatio} \sqrt{\frac{u^TBu}{\delta^T (D^T B^{-1} D)^{-1} \delta}}, \end{equation} where $u = ( \underbrace{1/n_1, \ldots, 1/n_1}_{n_1}, \underbrace{-1/n_2, \ldots, -1/n_2}_{n_2}) \in \mathbb{R}^n$ and $\delta = (1, -1) \in \mathbb{R}^2$, which reflects the potential efficiency gain for GLS over the sample mean based method~\eqref{eq::sampleMean} for estimating $\gamma$. Note that the standard deviation ratio depends on the relationship between the covariance matrix $B$ and the design matrix $D$. In Table~\ref{CovMeanMetrics}, the first $n/2$ individuals are in group one, and the following $n/2$ are in group two.
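The standard deviation ratio \eqref{sdRatio} is straightforward to compute for a given $B$ and design. The following sketch (our own illustration, not the paper's simulation code; the helper `ar1_cov` and all sizes are hypothetical choices) computes it for an AR1$(\rho)$ covariance with a balanced two-group design in which the first $n/2$ rows form group one:

```python
import numpy as np

def ar1_cov(n, rho):
    """AR1 covariance matrix B with entries rho^|i-j|."""
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

n, rho = 40, 0.8
B = ar1_cov(n, rho)
Binv = np.linalg.inv(B)

# Two-group design: first n/2 rows in group one, rest in group two.
D = np.zeros((n, 2))
D[: n // 2, 0] = 1.0
D[n // 2 :, 1] = 1.0

delta = np.array([1.0, -1.0])
# Contrast vector u = (1/n1, ..., 1/n1, -1/n2, ..., -1/n2).
u = np.concatenate([np.full(n // 2, 2.0 / n), np.full(n // 2, -2.0 / n)])

var_ols = u @ B @ u                                       # variance of the difference in sample means
var_gls = delta @ np.linalg.inv(D.T @ Binv @ D) @ delta   # GLS variance (design effect)
sd_ratio = np.sqrt(var_ols / var_gls)
```

By the Gauss-Markov theorem, GLS with the true $B^{-1}$ is at least as efficient as the sample-mean contrast, so the ratio is at least one, consistent with the last column of Table~\ref{CovMeanMetrics}.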
The values in Column 6 show that substantial improvement is possible in mean estimation. For an AR1 covariance matrix, the standard deviation ratio increases as the AR1 parameter increases; as the correlations get stronger, the potential improvement in mean estimation due to GLS grows. For the Star Block model with fixed block size, the standard deviation ratio is stable as $n$ increases. \begin{table}[tb] \centering \begin{tabular}{rlrrrrr} \hline & $B$ & $\rho_B^2$ & $\lVert B \rVert_F / \text{tr}(B)$ & $\left| \rho(B)^{-1} \right |_{1, \text{off}}$ & sd GLS & sd ratio \\ \hline \multicolumn{7}{l}{$n = 80$} \\ \hline 1 & AR1(0.2) & 0.00 & 0.12 & 32.92 & 0.27 & 1.00 \\ 2 & AR1(0.4) & 0.00 & 0.13 & 75.24 & 0.33 & 1.02 \\ 3 & AR1(0.6) & 0.01 & 0.16 & 148.12 & 0.40 & 1.07 \\ 4 & AR1(0.8) & 0.04 & 0.24 & 351.11 & 0.46 & 1.32 \\ 5 & StarBlock(4, 20) & 0.02 & 0.18 & 101.33 & 0.35 & 1.51 \\ 6 & ER(0.6, 0.8) & 0.01 & 0.14 & 92.75 & 0.17 & 1.21 \\ \hline \multicolumn{7}{l}{$n = 40$} \\ \hline 1 & AR1(0.2) & 0.00 & 0.16 & 16.25 & 0.38 & 1.01 \\ 2 & AR1(0.4) & 0.01 & 0.19 & 37.14 & 0.45 & 1.03 \\ 3 & AR1(0.6) & 0.03 & 0.23 & 73.12 & 0.53 & 1.12 \\ 4 & AR1(0.8) & 0.08 & 0.33 & 173.33 & 0.53 & 1.47 \\ 5 & StarBlock(2, 20) & 0.04 & 0.25 & 50.67 & 0.50 & 1.51 \\ 6 & ER(0.6, 0.8) & 0.02 & 0.21 & 47.24 & 0.25 & 1.23 \\ \hline \end{tabular} \caption{Assessment of the difficulty of estimating $B^{-1}$ and the potential gain from GLS. The total correlation $\rho_B$ is the average squared off-diagonal value of the correlation matrix $\rho(B)$. The fourth column is the design effect as defined in \eqref{designEffect}. The last column (sd ratio) presents the ratio of the standard deviation of the difference in sample means in~\eqref{eq::sampleMean} to the standard deviation of the GLS estimator of the difference in means. 
The first three columns of the table reflect the difficulty of estimating $B$, whereas the last two columns reflect the potential improvement of GLS over the sample mean based method~\eqref{eq::sampleMean}. In the notation StarBlock$(a, b)$, $a$ refers to the number of blocks, and $b$ refers to the block size. } \label{CovMeanMetrics} \end{table} \begin{figure}[h!] \centering \includegraphics[width=0.9 \textwidth]{ROC} \caption{ROC curves. For each plot, the horizontal axis is false positive rate (FPR) and the vertical axis is true positive rate (TPR), as we vary a threshold for classifying variables as null or non-null. The covariance matrices $A$ and $B$ are both AR1 with parameter $0.8$, with $m = 2000$ and $n = 40$, $80$, and $160$ in column one, two, and three, respectively. Ten variables in $\gamma$ have nonzero entries. On each trial, the group labels are randomly assigned, with equal sample sizes. The marginal variance of each entry of the data matrix is equal to one. For the first row of plots, the magnitude of each nonzero entry of $\gamma$ is $0.2$, and for the second and third rows of plots, the magnitude of each nonzero entry of $\gamma$ is $0.3$. In the first two rows we display ROC curves for Algorithms 1 and 2 with penalty parameters chosen to maximize area under the curve. The third row displays ROC curves for Algorithm 1, sweeping out penalty parameters.} \label{fig::ROC} \end{figure} In Figure \ref{fig::ROC}, we use ROC curves to illustrate the sensitivity and specificity for variable selection, i.e., how well we can identify the support $\{ i: \gamma_i \neq 0\}$ when we threshold $\hat{\gamma}_i$ at various values. To evaluate and compare different methods, we let $\widehat{\gamma}$ be the output of Algorithm 1, Algorithm 2, the oracle GLS, and the sample mean based method~\eqref{eq::sampleMean}. These correspond to the four curves on each plot of the top two rows of plots.
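Each ROC curve is traced out by sweeping the threshold applied to $|\widehat{\gamma}_j|$. A minimal sketch of the TPR/FPR computation at one threshold (toy values of our own, not the simulated data):

```python
import numpy as np

def roc_point(gamma_hat, support, t):
    """TPR and FPR when variable j is declared non-null iff |gamma_hat_j| > t."""
    selected = np.abs(gamma_hat) > t
    tpr = np.mean(selected[support])    # fraction of true non-nulls selected
    fpr = np.mean(selected[~support])   # fraction of true nulls selected
    return tpr, fpr

# Toy estimate: variables 0, 1, 5 are the true non-nulls.
gamma_hat = np.array([0.9, 0.7, 0.05, -0.1, 0.02, 0.6])
support = np.array([True, True, False, False, False, True])

tpr, fpr = roc_point(gamma_hat, support, t=0.5)
# At threshold 0.5, all three true non-nulls exceed it and no null does.
```

Sweeping `t` from the largest $|\widehat{\gamma}_j|$ down to zero yields the full curve.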
We find that Algorithm 1 and Algorithm 2 perform better than the sample mean based method~\eqref{eq::sampleMean}, and in some cases perform comparably to the oracle GLS. Plots in the third row of Figure \ref{fig::ROC} illustrate the sensitivity of Algorithm 1 to the choice of the graphical lasso (GLasso) penalty parameter~\eqref{GLassoPenaltyB}; the simulations are run using the \texttt{glasso} R package \citep{FHT07} to estimate $B$ via \eqref{geminiObjectiveFnB}. The performance can degenerate to that of the sample mean based method~\eqref{eq::sampleMean}, if the penalty is too high. \begin{figure}[h] \centering \includegraphics[width=.95\textwidth]{sixPlotPanel} \caption{Performance of centering methods as $n$ and $m$ are varied, with $n$ shown on the horizontal axis. In the first column of plots, the number of edges is proportional to $\sqrt{m / \log(m)}$. In the second and third columns of plots, the number of edges is proportional to $m$. In the first two columns of plots, $B^{-1}$ is an Erd\H{o}s-R\'{e}nyi inverse covariance matrix. In the third column, $B^{-1}$ is star block with blocks of size $10$. The first row of plots shows RMSE for estimating $\gamma$, whereas the second row shows average relative Frobenius error in estimating $B^{-1}$. All panels are based on 250 simulation replications.} \label{fig::sixPlotPanel} \end{figure} In the top row of Figure~\ref{fig::sixPlotPanel} we plot the root mean squared error (RMSE) when estimating the mean differences $\gamma$ for Algorithm 1, Algorithm 2, OLS (i.e. sample means) and the oracle GLS estimate. The population structures for $B$ are Erd\H{o}s-R\'{e}nyi and Star Block. Both Algorithms 1 and 2 consistently outperform the sample mean based method~\eqref{eq::sampleMean} for mean estimation, and Algorithm 2 even achieves comparable performance to the oracle GLS in some settings. The bottom row displays the relative Frobenius error for estimating $B^{-1}$. 
Algorithm 2 outperforms Algorithm 1 in terms of covariance estimation and is comparable to oracle model selection, which only centers the columns with a true mean difference. \begin{figure}[tb] \centering \includegraphics[width=0.9\textwidth]{RankCorrplugin} \caption{This figure displays the correlation between the rankings of the components of $\gamma$ and $\widehat{\gamma}$, sorted by magnitude, denoted Corr(Ranks$(\gamma)$, Ranks$(\widehat{\gamma}))$ in the axis label. The vector of mean differences is chosen as $\gamma_j = C \exp(- (3/2000) j)$, for $j = 1, \ldots, 2000$. We also present the Algorithm 2 results with a multiplier on the threshold as described in Section \ref{sec::ModelBasedCenter}. In the top row, the true $B$ is AR1(0.8), with $n = 40$ and $m = 2000$. In the bottom row, the true $B$ is chosen as an estimate from the UC data, with $n = 20$ and $m = 2000$. For the top row, the group labels are randomly assigned; for the bottom row, the first ten rows of the data are in group one, and the other ten are in group two. The figure is averaged over $200$ replications. The top and bottom horizontal lines represent GLS with true $B$ and OLS, respectively. The vertical axis displays the correlation of ranks between $\widehat{\gamma}$ and $\gamma$, and the horizontal axis displays the GLasso penalty parameter.} \label{RankCorrplugin_decayExp_m1_2e3_v1} \end{figure} In Figure \ref{RankCorrplugin_decayExp_m1_2e3_v1}, we illustrate that Algorithm 2 can perform well using a plug-in estimator $\widehat{\tau}_{\text{\rm init}}$ as in \eqref{modSelThresh}. We compare the methods when the true mean structure is a decaying exponential; we display the correlation of the ranks of the entries of $\gamma$ to the ranks of the estimates of $\gamma$. Algorithm 2 with a plugin estimator $\widehat{\tau}_{\text{\rm init}}$ can nearly reach the performance of GLS with the true $B$. Furthermore, the plug-in version of Algorithm 2 also consistently outperforms Algorithm 1. 
We also assess sensitivity to the choice of threshold: the curve labeled ``Algorithm 2'' uses the plug-in estimate $\widehat{\tau}_{\text{\rm init}}$, whereas ``Algorithm 2 with threshold multiplier'' uses a plug-in estimate of the lower bound given in \eqref{threshTheorem3} in Theorem \ref{mainTheoremModSel}. These two plug-in estimators exhibit similar performance, showing robustness of Algorithm 2 to the choice of the threshold parameter. We validate this further in the real data analysis. For the top row (AR1), the ratio of thresholds \eqref{threshTheorem3} to \eqref{modSelThresh} is $0.75$, and for the bottom row (UC), the ratio is $0.17$. In Web Supplement Section~\ref{secComparison}, we perform additional simulations to compare Algorithm 2 to two similar methods using ROC curves, namely, the sphering method of \citet{allen2012inference}, which uses a matrix-variate model similar to ours, and the confounder adjustment method of \citet{wang2015confounder}, which uses a latent factor model. Our simulations show that Algorithm 2 consistently outperforms these competing methods in a variety of simulation settings using matrix-variate data. \subsection{Inference for the mean difference $\widehat{\gamma}$} \label{inferenceGamma} Two basic approaches to conducting inference for mean differences are paired and unpaired t statistics. The unpaired t statistic is defined as follows. Let $X = (X_{ij})$.
Then the $j$th unpaired t statistic is \begin{eqnarray} \label{unpairedTstat} T_j & = & \left( \widetilde{\beta}_j^{(1)} - \widetilde{\beta}_j^{(2)} \right) \widehat{\sigma}_j^{-1} (n_1^{-1} + n_2^{-1})^{-1/2}, \text{ where} \; \; \\ \nonumber \widehat{\sigma}_j^2 & = & (n_1 + n_2 - 2)^{-1} \sum_{k = 1}^{2} \sum_{i \in \mathcal{G}_k} \left(X_{ij} - \widetilde{\beta}_j^{(k)} \right)^2, \end{eqnarray} where $\widetilde{\beta}_j^{(k)}$, $k = 1, 2$, and $j = 1, \ldots, m$, denotes the sample mean of group $k$ and variable $j$ as defined in \eqref{eq::sampleMean}, and $\mathcal{G}_k$ is the set of indices corresponding to group $k$. When there is a natural basis for pairing the observations, and paired units are anticipated to be positively correlated, we can calculate paired t statistics. For the paired t statistic, suppose observations $i$ and $i' = i + n/2$ are paired, for $i \in \{1, \ldots, n/2\}$. Note that samples can always be permuted so as to be paired in this way. Define the paired differences $d_{ij} = X_{ij} - X_{i'j}$, for $i \in \{1, \ldots, n/2\}$. Then the paired t statistic is $\overline{d}_j \, (n/2)^{1/2} (n/2 - 1)^{1/2} / \left( \sum_{i = 1}^{n/2} (d_{ij} - \overline{d}_j)^2 \right)^{1/2}$, where $\overline{d}_j = (n/2)^{-1} \sum_{i = 1}^{n/2} d_{ij}$. \begin{figure}[tb] \centering \includegraphics[width=0.95\textwidth]{boxplots} \caption{Ratio of estimated design effect to true design effect when $B^{-1}$ is Erd\H{o}s-R\'{e}nyi, and $A$ is AR1$(0.8)$. Figures (A) and (B) correspond to sample size $n = 80$; (C) and (D) correspond to $n = 40$. Figures (A) and (C) correspond to Algorithm 1; Figures (B) and (D) correspond to Algorithm 2, with ten columns group centered.
These results are based on dimension parameter $m=2000$ and $250$ simulation replications.} \label{fig::boxplots} \end{figure} Figure \ref{fig::boxplots} considers estimation of the ``design effect'' $\delta^T (D^T B^{-1} D)^{-1} \delta$, as previously defined in \eqref{designEffect}, with $\delta = (1, -1)^T$. The importance of this object is discussed in Sections~\ref{sec::GLSFixedBtheorem} and \ref{sec::designEffectRate}. The design effect is estimated via $\delta^T (D^T \widehat{B}^{-1} D)^{-1} \delta$, with $\widehat{B}^{-1}$ from Algorithm 1 or 2. The GLasso penalty parameters are chosen as \begin{align} \lambda_A &= f_A \left( C_A K \frac{\log^{1/2}(m \vee n)}{\sqrt{m}} + \frac{\lVert B \rVert_1}{n_{\min}} \right) \label{MultiplierGLassoPenaltyB} \end{align} where we sweep over the factor $f_A$, referred to as the penalty multiplier. Figure \ref{fig::boxplots} displays boxplots of the ratio $\delta^T (D^T \widehat{B}^{-1} D)^{-1} \delta / \delta^T (D^T B^{-1} D)^{-1} \delta$ over $250$ replications for each setting of the penalty multiplier $f_A$. In Figure \ref{fig::boxplots}, $B^{-1}$ follows the Erd\H{o}s-R\'{e}nyi model, and $A$ is AR1$(0.8)$, with $m = 2000$, and $n = 40$ and $80$. Figure \ref{fig::boxplots} shows that Algorithm 2 (plots B and D) estimates the design effect to high accuracy and is quite insensitive to the penalty multiplier as long as it is less than $1$, as predicted by the theoretical analysis. Algorithm 1 also estimates the design effect with high accuracy, but with somewhat greater sensitivity to the tuning parameter. The best penalty parameter for Algorithm 1 is around $0.1$, whereas reasonable penalty parameters for Algorithm 2 are in the range $0.01$ to $0.1$. This is consistent with smaller entrywise error in the sample covariance for model selection centering than for group centering. \begin{figure}[h!] \centering \includegraphics[width=0.9 \textwidth]{qqAR} \caption{Quantile plots of test statistics. 
Ten genes have nonzero mean differences equal to $2$, $0.8$, and $1$ in the three plots, respectively. In each plot $A$ is AR1$(0.8)$. Covariance structures for $B$ are as indicated. In the third plot, the true $B$ is set to $\widehat{B}$ for the ulcerative colitis data, described in Section \ref{sec::UCData}. For the first two plots there are $n = 40$ samples and $m = 2000$ variables. For the third plot there are $n=20$ samples and $m = 2000$ variables. Each plot has $250$ simulation replications.} \label{fig::QQAR} \end{figure} We next compare the results from Algorithm 2 to results obtained using paired and unpaired t statistics. Figure \ref{fig::QQAR} illustrates the calibration and power of plug-in Z-scores, $\widehat{\gamma}_j / \widehat{\rm SE}(\hat{\gamma}_j)$ derived from Algorithm 2 for three population settings. The standard error is calculated as $\sqrt{\delta^T (D^T \widehat{B}^{-1} D)^{-1} \delta}$, with $\delta = (1, -1)$. In the first and second plots, the data was simulated from AR1$(0.8)$ and Erd\H{o}s-R\'{e}nyi, respectively. In the third plot, the data was simulated from $\widehat{B}$ for the ulcerative colitis data described in Section \ref{sec::UCData}. To obtain $\widehat{B}$, we apply Algorithm 2 to the ulcerative colitis data, using a GLasso penalty of $\lambda \approx 0.5 [(\log(m) / m) + 3/n]$ in step 1, followed by group centering the top ten genes in step 2, and using a GLasso penalty of $\lambda \approx 0.1 [(\log(m) / m) + 3/n]$ in step 4. In all cases $A$ is AR1(0.8). In each case, we introduce 10 variables with different population means in the two groups, by setting $\gamma=0.8$ for those variables, with the remaining $\gamma$ values equal to zero. The ideal Q-Q plot would follow the diagonal except at the upper end of the range, as do our plug-in GLS test statistics. The t statistics (ignoring dependence) are seen to be overly dispersed throughout the range, and are less sensitive to the real effects.
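For concreteness, the unpaired statistic \eqref{unpairedTstat} and the paired statistic (with the standard $\sqrt{n/2}$ normalization) can be computed columnwise as in the following sketch; the simulated inputs, seed, and injected shift are our own illustrative choices:

```python
import numpy as np

def unpaired_t(X, g1, g2):
    """Pooled-variance two-sample t statistic for each column of X."""
    n1, n2 = len(g1), len(g2)
    m1, m2 = X[g1].mean(axis=0), X[g2].mean(axis=0)
    ss = ((X[g1] - m1) ** 2).sum(axis=0) + ((X[g2] - m2) ** 2).sum(axis=0)
    sigma = np.sqrt(ss / (n1 + n2 - 2))
    return (m1 - m2) / (sigma * np.sqrt(1.0 / n1 + 1.0 / n2))

def paired_t(X):
    """Paired t statistic, pairing row i with row i + n/2."""
    n = X.shape[0]
    d = X[: n // 2] - X[n // 2 :]
    dbar = d.mean(axis=0)
    ss = ((d - dbar) ** 2).sum(axis=0)
    return dbar * np.sqrt((n / 2) * (n / 2 - 1)) / np.sqrt(ss)

rng = np.random.default_rng(1)
n, m = 40, 50
X = rng.normal(size=(n, m))
X[: n // 2, 0] += 3.0          # inject a large mean difference in column 0
g1, g2 = np.arange(n // 2), np.arange(n // 2, n)

T = unpaired_t(X, g1, g2)
P = paired_t(X)
```

With independent rows these statistics are well calibrated; the overdispersion in Figure \ref{fig::QQAR} arises precisely because the rows are dependent.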
\subsection{Covariance estimation for $A$} \begin{figure} \includegraphics[width=0.95\textwidth]{froberror} \caption{Relative Frobenius error in estimating $A^{-1}$, as $n$ varies. In each plot the matrix $B$ is AR1$(0.8)$ and $A$ is as indicated. The vertical axis is relative Frobenius error, and the horizontal axis $n / (d \log(m))$, where $d$ is the maximum node degree. The GLasso penalty is chosen to minimize the relative Frobenius error. Each point is based on $250$ Monte Carlo replications.} \label{fig::froberror} \end{figure} Figure \ref{fig::froberror} shows the relative Frobenius error in estimating $A^{-1}$ as $n$ grows, for fixed $m$. The horizontal axis is $n / (d\log(m))$, scaled so that the curves align, where $d$ is the maximum node degree. Because $\lVert A^{-1} \rVert_F$ is of order $\sqrt{m}$, the vertical axis essentially displays $\lVert \widehat{A}^{-1} - A^{-1} \rVert_F / \sqrt{m}$. For estimating $A^{-1}$, the rate of convergence is of order $\sqrt{\log(m) / n}$. For each of the three population structures, accuracy increases with respect to $n$. \section{Introduction} \label{sec:intro} Understanding how changes in gene expression are related to changes in biological state is one of the fundamental tasks in genomics research, and is a prototypical example of ``large scale inference'' \citep{efron2010large}. While some genomics datasets have within-subject replicates or other known clustering factors that could lead to dependence among observations, most are viewed as population cross-sections or convenience samples, and are usually analyzed by taking observations (biological samples) to be statistically independent of each other. Countering this conventional view, \citet{Efr09} proposed that there may be unanticipated correlations between samples even when the study design would not suggest it. 
To identify and adjust for unanticipated sample-wise correlations, \citet{Efr09} proposed an empirical Bayes approach utilizing the sample moments of the data. In particular, sample-wise correlations may lead to inflated evidence for mean differences, and could be one explanation for the claimed lack of reproducibility in genomics research ~\citep{leek2010tackling, allen2012inference, sugden2013assessing}. A persistent problem in genomics research is that test statistics for mean parameters (e.g. t-statistics for two-group comparisons) often appear to be incorrectly calibrated \citep{efron:05, allen2012inference}. When this happens, for example when test statistics are uniformly overdispersed relative to their intended reference distribution, this is usually taken to be an indication of miscalibration, rather than reflecting a nearly global pattern of differential effects ~\citep{efron2007correlation}. Adjustments such as genomic control ~\citep{devlin1999genomic} can be used to account for this; a related approach is that of~\citet{allen2012inference}. In this work we address unanticipated sample-wise dependence, which can exhibit a strong effect on statistical inference. We propose a new method to jointly estimate the mean and covariance with a single instance of the data matrix, as is common in genetics. The basic idea of our approach is to alternate for a fixed number of steps between mean and covariance estimation. We exploit recent developments in two-way covariance estimation for matrix-variate data \citep{Zhou14a}. We crucially combine the classical idea of generalized least squares (GLS) ~\citep{aitken1936iv} with thresholding for model selection and estimation of the mean parameter vector. Finally, we use Wald-type statistics to conduct inference. 
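The alternation between mean and covariance estimation can be sketched schematically as follows. This is an illustration only: we substitute a simple ridge-regularized inverse for the GLasso step, and the helper names (`estimate_precision`, `gls_mean`) are ours, not from an existing package:

```python
import numpy as np

def estimate_precision(R, lam=0.5):
    """Stand-in for the penalized covariance step: a ridge-regularized
    inverse of the sample row-wise covariance of the residual matrix R."""
    n, m = R.shape
    S = (R @ R.T) / m                  # n x n sample covariance across rows
    return np.linalg.inv(S + lam * np.eye(n))

def gls_mean(X, D, Binv):
    """GLS estimate of the group-mean matrix for each column of X."""
    G = np.linalg.inv(D.T @ Binv @ D) @ D.T @ Binv
    return G @ X                       # 2 x m matrix of group means

rng = np.random.default_rng(2)
n, m = 20, 200
X = rng.normal(size=(n, m))
D = np.zeros((n, 2))
D[: n // 2, 0] = 1.0
D[n // 2 :, 1] = 1.0

# Step 1: initial (sample-mean) fit.  Step 2: covariance update on the
# centered residuals.  Step 3: GLS mean update with the estimated precision.
beta0 = np.linalg.pinv(D) @ X
Binv_hat = estimate_precision(X - D @ beta0)
beta1 = gls_mean(X, D, Binv_hat)
```

A further covariance and mean update, after thresholding the estimated mean differences, corresponds to the model-selection refinement described below.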
We motivate this approach using differential expression analysis in a genomics context, but the method is broadly applicable to matrix-variate data having unknown mean and covariance structures, with or without replications. We illustrate, using theory and data examples, including a genomic study of ulcerative colitis, that estimating and accounting for the sample-wise dependence can systematically improve the calibration of test statistics, therefore reducing or eliminating the need for certain post-hoc adjustments. With regard to variable selection, one major challenge we face is that variables (e.g. genes or mRNA transcripts) have a complex dependency structure that exists together with any dependencies among observations. As pointed out by \citet{Efr09} and others, the presence of correlations among the samples makes it more difficult to estimate correlations among variables, and vice versa. A second major challenge is that due to dependence among both observations and variables, there is no independent replication in the data, that is, we have a single matrix to conduct covariance estimation along both axes. This challenge is addressed in \citet{Zhou14a} when the mean structure is taken to be zero. A third major challenge that is unique to our framework is that covariance structures can only be estimated after removing the mean structure, a fact that is generally not considered in most work on high dimensional covariance and graph estimation, where the population mean is taken to be zero. We elaborate on this challenge next. \subsection{Our approach and contributions} Two obvious approaches for removing the mean structure in our setting are to globally center each column of the data matrix (containing the data for one variable), or to center each column separately within each group of sample points to be compared (subsequently referred to as ``group centering'').
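The two centering schemes can be made concrete with a small noiseless example of our own construction, in which one column carries a group mean difference and the other does not:

```python
import numpy as np

def global_center(X):
    """Subtract the overall column means."""
    return X - X.mean(axis=0)

def group_center(X, groups):
    """Subtract the within-group column means, separately for each group."""
    Xc = X.astype(float).copy()
    for g in np.unique(groups):
        Xc[groups == g] -= X[groups == g].mean(axis=0)
    return Xc

# 4 samples, 2 columns; column 0 has group means 1 and 3, column 1 is constant.
groups = np.array([0, 0, 1, 1])
X = np.array([[1.0, 5.0],
              [1.0, 5.0],
              [3.0, 5.0],
              [3.0, 5.0]])

Xg = group_center(X, groups)   # removes the mean structure entirely
Xo = global_center(X)          # leaves a +/-1 group pattern in column 0
```

In the noiseless example, group centering annihilates the mean structure, while global centering leaves a residual group pattern that contaminates the subsequent covariance estimate, which is the inconsistency discussed next.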
Globally centering each column ignores the group mean structure, and may result in an estimated covariance matrix that is not consistent. Group centering all genes, by contrast, leads to consistent covariance estimation, as shown in Theorem \ref{mainTheoremGroupCentering} with regard to Algorithm 1. However, group centering all genes introduces extraneous noise when the true vector of mean differences is sparse. We find that there is a complex interplay between the mean and covariance estimation tasks, such that overly flexible modeling of the mean structure can introduce large systematic errors in the mean structure estimation. To mitigate this effect, we aim to center the data using a model selection strategy. More specifically, we adopt a model selection centering approach in which only mean parameters having a sufficiently large effect size (relative to the dimension of the data) are targeted for removal. This refined approach has theoretical guarantees and performs well in simulations. The estimated covariance matrix can be used in uncertainty assessment and formal testing of mean parameters, thereby improving the calibration of the inference. In Section~\ref{sec::covEstimation}, we define the two-group mean model, which is commonly used in the genomics literature, and introduce the GLS algorithm in this context. We bound the statistical error for estimating each column of the mean matrix using the GLS procedure, so long as each column of $X$ shares the same covariance matrix $B$, for which we have a close approximation. It is commonly known that genes are correlated, so correlations exist across columns as well as rows of the data matrix.
In particular, in Theorem~\ref{thm::GLSFixedB} in Section~\ref{sec::GLSFixedBtheorem}, we establish consistency for the GLS estimator given a deterministic $\widehat{B}$ which is close to $B$ in the operator norm, and present the rate of convergence for mean estimation for data generated according to a subgaussian model to be defined in Definition~\ref{defSubgaussian}. Moreover, there we do not impose a separable covariance model in the sense of \eqref{eq::matrix-normal-rep-intro}. What distinguishes our model from those commonly used in the genomics literature is that we do not require that individuals are independent. Our approach to covariance modeling builds on the Gemini method \citep{Zhou14a}, which is designed to estimate a separable covariance matrix for data with two-way dependencies. For matrices $A \in \mathbb{R}^{m \times m}$ and $B \in \mathbb{R}^{n \times n}$, the Kronecker product $A \otimes B \in \mathbb{R}^{mn \times mn}$ is the block matrix whose $(i, j)$th block is $a_{ij}B$, for $i, j \in \{ 1, \ldots, m\}$. We say that an $n \times m$ random matrix $X$ follows a matrix variate distribution with mean $M \in \mathbb{R}^{n \times m}$ and a separable covariance matrix \begin{eqnarray} \label{eq::matrix-normal-rep-intro} X_{n\times m} \sim \mathcal{L}_{n,m}(M, A_{m \times m} \otimes B_{n \times n}), \end{eqnarray} if $\mvec{X}$ has mean $\mvec{M}$ and covariance $\Sigma = A \otimes B$. Here $\mvec{X}$ is formed by stacking the columns of $X$ into a vector in $\mathbb{R}^{mn}$. For the mean matrix $M$, we focus on the two-group setting to be defined in \eqref{meanMatrixTwoGroups}. Intuitively, $A$ describes the covariance between columns while $B$ describes the covariance between rows of $X$.
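To make the Kronecker product and the column-stacking convention concrete, here is a minimal numerical sketch (plain numpy, with small hypothetical dimensions; it is not part of the estimation procedure). It checks that the $(i,j)$th block of $A \otimes B$ is $a_{ij}B$, and verifies the identity $\mvec{PXQ} = (Q^T \otimes P)\mvec{X}$ that underlies $\mathrm{Cov}(\mvec{X}) = A \otimes B$ for separable models.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 2, 3                              # toy dimensions (hypothetical)

A = np.array([[2.0, 0.5],
              [0.5, 1.0]])               # column-wise covariance (m x m)
B = np.array([[1.0, 0.3, 0.0],
              [0.3, 1.0, 0.2],
              [0.0, 0.2, 1.5]])          # row-wise covariance (n x n)

K = np.kron(A, B)                        # A kron B is mn x mn
# The (i, j)th n x n block of A kron B equals a_ij * B:
assert np.allclose(K[0:n, n:2 * n], A[0, 1] * B)

# vec(X) stacks the columns of X; in numpy this is flatten(order="F").
X = rng.standard_normal((n, m))
vecX = X.flatten(order="F")

# vec(P X Q) = (Q^T kron P) vec(X): the identity behind Cov(vec X) = A kron B.
P = rng.standard_normal((n, n))
Q = rng.standard_normal((m, m))
lhs = (P @ X @ Q).flatten(order="F")
rhs = np.kron(Q.T, P) @ vecX
assert np.allclose(lhs, rhs)
```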
Even with perfect knowledge of $M$, we can only estimate $A$ and $B$ up to a scaling factor, as $\eta A \otimes \inv{\eta}B = A \otimes B$ for any $\eta > 0$; estimation up to this scaling is therefore our goal, and is precisely what we mean when we say we are interested in estimating the covariances $A$ and $B$. However, this lack of identifiability does not affect the GLS estimate, because the GLS estimate is invariant to rescaling the estimate of $B^{-1}$. \subsection{Related work} \label{sec::RelatedWork} \citet{Efr09} introduced an approach for inference on mean differences in data with two-way dependence. His approach uses empirical Bayes ideas and tools from large scale inference, and also illustrates how challenging it is to conduct inference on mean parameters when there are uncharacterized dependences among samples. We combine GLS and variable selection with matrix-variate techniques. \citet{allen2012inference} also consider this question and develop a different iterative approach, which first decorrelates the residuals and then applies ordinary least squares (OLS) to the adjusted dataset. The confounder adjustment literature in genomics, including \citet{sun2012multiple} and \citet{wang2015confounder}, can also be used to perform large-scale mean comparisons in similar settings that include similarity structures among observations. These methods use the same general matrix decomposition framework, in which the mean and noise are separated. They exploit low-rank structure in the mean matrix, and use sparse approximations of OLS estimates, for example via thresholding. Our model introduces row-wise dependence through matrix-variate noise, while the confounder adjustment literature instead assumes that a small number of latent factors also affect the mean expression, resulting in additional low-rank structure in the mean matrix.
Web Supplement Section \ref{secComparison} contains detailed comparisons between our approach and these alternative methods. Our inference procedures are based on Z-scores and associated FDR values for mean comparisons of individual variables. While we account for sample-wise correlations, gene-gene correlations remain, which we regard as a nuisance parameter. Our estimated correlation matrix among the genes can be used in future work in combination with the line of work that addresses FDR in the presence of gene correlations. This relies on earlier work for false discovery rate estimation using correlated data, including \citet{owen2005variance, benjamini2001control, tony2011optimal, li2014rate, benjamini1995controlling, storey2003positive}. Taking a different approach, \citet{hall2010innovated} develop the innovated higher criticism test statistics to detect differences in means in the presence of correlations between genes. Our estimated gene-gene correlation matrix can be used in combination with this approach; we leave this as future work. Another line of relevant research has focused on hypothesis testing of high-dimensional means, exploiting assumed sparsity of effects, and developing theoretical results using techniques from high dimensional estimation theory. Work of this type includes \citet{caihigh, chen2014twoSampleThresh, bai1996effect,chen2010two}. \citet{Hoff11} adopts a Bayesian approach, using a model that is a generalization of the matrix-variate normal distribution. Our method builds on the Gemini estimator introduced by \citet{Zhou14a}, which estimates covariance matrices when both rows and columns of the data matrix are dependent. In the setting where correlations exist along only one axis of the array, researchers have proposed various covariance estimators and studied their theoretical and numerical properties \citep{BGA08, FFW09, FHT07, LF07, MB06, PWZZ09, RWRY08, RBLZ08, YL07, ZLW08}. 
Although we focus on the setting of Kronecker products, or separable covariance structures, \citet{cai2015joint} proposed a covariance estimator for a model with several populations, each of which may have a different variable-wise covariance matrix. Our methods can be generalized to this setting. \citet{tan2014sparse} use a similar matrix-variate data setting as in~\eqref{eq::matrix-normal-rep-intro}, but perform biclustering instead of considering a regression problem with a known design matrix. \subsection{Notation and organization} Before we leave this section, we introduce the notation needed for the technical sections. Let $e_1,\ldots, e_p$ be the canonical basis of $\mathbb{R}^p$. For a matrix $A = (a_{ij})_{1\le i,j\le m}$, let $|A|$ denote the determinant and ${\rm tr}(A)$ be the trace of $A$. Let $\norm{A}_{\max} = \max_{i,j} |a_{ij}|$ denote the entry-wise max norm. Let $\norm{A}_{1} = \max_{j}\sum_{i=1}^m\abs{a_{ij}}$ denote the matrix $\ell_1$ norm. The Frobenius norm is given by $\norm{A}^2_F = \sum_i\sum_j a_{ij}^2$. Let $\varphi_{i}(A)$ denote the $i$th largest eigenvalue of $A$, with $\varphi_{\max}(A)$ and $\varphi_{\min}(A)$ denoting the largest and smallest eigenvalues, respectively. Let $\kappa(A)$ be the condition number for matrix $A$. Let $| A |_{1, \text{off}} = \sum_{i \neq j} |a_{ij} |$ denote the sum of the absolute values of the off-diagonal entries and let $| A |_{0, \text{off}}$ denote the number of non-zero off-diagonal entries. Let $a_{\max} = \max_i a_{ii}$. Denote by $r(A)$ the stable rank ${\fnorm{A}^2 }/{\twonorm{A}^2}$. We write $\text{diag}(A)$ for a diagonal matrix with the same diagonal as $A$. Let $I$ be the identity matrix. We let $C, C_1, c, c_1, \ldots$ be positive constants which may change from line to line. For two numbers $a, b$, $a \wedge b := \min(a, b)$ and $a \vee b := \max(a, b)$. Let $(a)_+ := a \vee 0$. 
For sequences $\{ a_n \}, \{ b_n\}$, we write $a_n =O(b_n)$ if $|a_n| \le C |b_n|$ for some positive absolute constant $C$ which is independent of $n$ and $m$ or sparsity parameters, and write $a_n \asymp b_n$ if $c|a_n| \le |b_n| \le C|a_n|$. We write $a_n = \Omega(b_n)$ if $|a_n| \geq C|b_n|$ for some positive absolute constant $C$ which is independent of $n$ and $m$ or sparsity parameters. We write $a_n = o(b_n)$ if $\lim_{n \rightarrow \infty} a_n/b_n = 0$. For random variables $X$ and $Y$, let $X \sim Y$ denote that $X$ and $Y$ follow the same distribution. The remainder of the paper is organized as follows. In Section~\ref{sec::covEstimation}, we present our matrix-variate modeling framework and methods on joint mean and covariance estimation. In particular, we propose two algorithms for testing mean differences based on two centering strategies. In Section~\ref{sec::mainTheorem}, we present convergence rates for these methods. In Theorems~\ref{mainTheoremGroupCentering} and~\ref{mainTheoremModSel}, we provide joint rates of convergence for mean and covariance estimation using Algorithms 1 and 2, respectively. We also emphasize the importance of the design effect (c.f.\ equation \eqref{glsTestStats}) in testing and present theoretical results for estimating this quantity in Corollary~\ref{theoremInference} and Corollary~\ref{corDesignEffectAlg2}. In Section~\ref{sec::Simulations}, we demonstrate through simulations that our algorithms can outperform OLS estimators in terms of accuracy and variable selection consistency. In Section~\ref{sec::UCData}, we analyze a gene expression dataset, and show that our method corrects test statistic overdispersion that is clearly present when using sample mean based methods (c.f.\ Section \ref{inferenceGamma}). We conclude in Section~\ref{sec::conclude}. We place all technical proofs and additional simulation and data analysis results in the Web Supplement, which is organized as follows. 
Sections~\ref{sec::simulationAppend} and ~\ref{sec::dataAppend} contain additional simulation and data analysis results. Section~\ref{sec::apppreliminary} contains some preliminary results and notation. In Section~\ref{sec::proofsOfTheorems}, we prove Theorem~\ref{thm::GLSFixedB}. In Sections~\ref{sec::ProofMainThmPartI} and~\ref{sec::proofsforTheorem2} we prove Theorem~\ref{mainTheoremGroupCentering}. In Section~\ref{app::entrywise_sample_corr}, we derive entry-wise rates of convergence for the sample covariance matrices. In Sections~\ref{sec::proofTheorem3} and \ref{sec::LemmasForTheorem3} we prove Theorem~\ref{mainTheoremModSel} and its auxiliary results. In Section~\ref{secComparison} we provide additional comparisons between our method and some related methods on both simulated and real data. \section{Models and methods} \label{sec::covEstimation} In this section we present our model and method for joint mean and covariance estimation. Our results apply to subgaussian data. Before we present the model, we define subgaussian random vectors and the $\psi_2$ norm. The $\psi_2$ condition on a scalar random variable $V$ is equivalent to the subgaussian tail decay of $V$, which means $P(|V| >t) \leq 2 \exp(-t^2/c^2), \; \; \text{for all} \; \; t>0.$ For a vector $y = (y_1, \ldots, y_p) \in \mathbb{R}^p$, denote by $\lVert y \rVert_2 = \sqrt{\sum_{i = 1}^p y_i^2}$. \begin{definition} \label{defSubgaussian} Let $Y$ be a random vector in $\mathbb{R}^p$. (a) $Y$ is called isotropic if for every $y \in \mathbb{R}^p$, $E[ | \langle Y, y \rangle |^2] = \lVert y \rVert_2^2$. 
(b) $Y$ is $\psi_2$ with a constant $\alpha$ if for every $y \in \mathbb{R}^p$, $$\lVert \langle Y, y \rangle \rVert_{\psi_2} := \; \inf \{t: E[\exp( \langle Y,y \rangle^2/t^2) ] \leq 2 \} \; \leq \; \alpha \lVert y \rVert_2.$$ \end{definition} Our goal is to estimate the group mean vectors $\beta^{(1)}, \beta^{(2)}$, the vector of mean differences between the two groups $\gamma = \beta^{(1)} - \beta^{(2)} \in \mathbb{R}^m$, the row-wise covariance matrix $B \in \mathbb{R}^{n \times n}$, and the column-wise covariance matrix $A \in \mathbb{R}^{m \times m }$. In our motivating genomics applications, the people-by-people covariance matrix $B$ is often incorrectly anticipated to have a simple known structure; for example, $B$ is taken to be diagonal if observations are assumed to be uncorrelated. However, we show by example in Section \ref{sec::UCData} that departures from the anticipated diagonal structure may occur, corroborating earlier claims of this type by \citet{Efr09} and others. Motivated by this example, we define the two-group mean model and the GLS algorithm, which takes advantage of the covariance matrix $B$. \noindent{\bf The model.} Our model for the matrix-variate data $X$ can be expressed as a mean matrix plus a noise term, \begin{equation} \label{modelMeanCov} X = M + \varepsilon, \end{equation} where the columns (and rows) of $\varepsilon$ are subgaussian. Let $u, v \in \mathbb{R}^n$ be defined as \begin{equation} \label{groupIndicators} u = ( \underbrace{1, \ldots, 1}_{n_1}, \underbrace{0, \ldots, 0}_{n_2}) \in \mathbb{R}^n \quad \text{and} \quad v = ( \underbrace{0, \ldots, 0}_{n_1}, \underbrace{1, \ldots, 1}_{n_2}) \in \mathbb{R}^n. \end{equation} Let $\mathbf{1}_n \in \mathbb{R}^n$ denote a vector of ones.
For the two-group model, we take the mean matrix to have the form \begin{equation} \label{meanMatrixTwoGroups} M = D \beta = \begin{bmatrix} \mathbf{1}_{n_1} \beta^{(1)T} \\ \mathbf{1}_{n_2} \beta^{(2)T} \end{bmatrix} \in \mathbb{R}^{n \times m}, \quad \text{where} \quad D = \begin{bmatrix} u & v \end{bmatrix} \in \mathbb{R}^{n \times 2} \end{equation} is the design matrix and $\beta = (\beta^{(1)}, \beta^{(2)})^T \in \mathbb{R}^{2 \times m}$ is a matrix of group means. Let $\gamma = \beta^{(1)} - \beta^{(2)} \in \mathbb{R}^m$ denote the vector of mean differences. Let $d_0 = |\operatorname{supp}(\gamma)| = | \{ j: \gamma_j \neq 0 \}|$ denote the size of the support of $\gamma$. To estimate the group means, we use a GLS estimator, \begin{equation} \label{GLSestimator} \widehat{\beta}(\widehat{B}^{-1}) := (D^T \widehat{B}^{-1} D)^{-1} D^T \widehat{B}^{-1} X \in \mathbb{R}^{2 \times m}, \end{equation} where $\widehat{B}^{-1}$ is an estimate of the observation-wise inverse covariance matrix. Throughout the paper, we denote by $\widehat{\beta}(B^{-1})$ the oracle GLS estimator, since it depends on the unknown true covariance $B$. Also, we denote the estimated vector of mean differences as $\widehat{\gamma}(\widehat{B}^{-1}) = \delta^T \widehat{\beta}(\widehat{B}^{-1}) \in \mathbb{R}^m$, where $\delta = (1, -1) \in \mathbb{R}^2$. \subsection{Matrix-variate covariance modeling} In the previous section, we have not yet explicitly constructed an estimator of $B^{-1}$. To address this need, we model the data matrix $X$ with a matrix-variate distribution having a separable covariance matrix, namely, the covariance of $\mvec{X}$ follows a Kronecker product covariance model. 
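As a concrete illustration of the two-group design \eqref{meanMatrixTwoGroups} and the GLS estimator \eqref{GLSestimator}, consider the following numpy sketch (toy sizes, all values hypothetical). The plug-in for $B^{-1}$ is taken to be the identity here, which reduces GLS to OLS; the final assertion checks the scale invariance of GLS with respect to rescaling the plug-in, as discussed above.

```python
import numpy as np

rng = np.random.default_rng(1)
n1, n2, m = 4, 3, 5                      # toy sizes (hypothetical)
n = n1 + n2

# Two-group design D = [u v]: group indicators as columns
u = np.r_[np.ones(n1), np.zeros(n2)]
v = np.r_[np.zeros(n1), np.ones(n2)]
D = np.column_stack([u, v])              # n x 2

beta = np.array([[1.0, 0.0, 0.0, 2.0, 0.0],   # beta^(1), group-one means
                 [1.0, 0.0, 0.0, 0.0, 0.0]])  # beta^(2), group-two means
X = D @ beta + 0.1 * rng.standard_normal((n, m))

def gls(Binv, X, D):
    """GLS estimate (D^T Binv D)^{-1} D^T Binv X for a plug-in Binv."""
    return np.linalg.inv(D.T @ Binv @ D) @ (D.T @ Binv @ X)

Binv = np.eye(n)                         # identity plug-in: GLS reduces to OLS
beta_hat = gls(Binv, X, D)               # 2 x m matrix of group mean estimates
gamma_hat = np.array([1.0, -1.0]) @ beta_hat   # estimated mean differences

# GLS is invariant to rescaling the plug-in for B^{-1} (the eta factor):
assert np.allclose(gls(2.5 * Binv, X, D), beta_hat)
```

With the identity plug-in, the first row of `beta_hat` is exactly the vector of group-one column means, matching the OLS estimator.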
When $\varepsilon$ in \eqref{modelMeanCov} follows a matrix-variate normal distribution $\mathcal{N}_{n, m}(0, A \otimes B)$, as considered in~\citet{Zhou14a}, the support of $B^{-1}$ encodes conditional independence relationships between samples, and likewise, the support of $A^{-1}$ encodes conditional independence relationships among genes. The inverse covariance matrices $A^{-1}$ and $B^{-1}$ have the same supports as their respective correlation matrices, so edges of the dependence graphs are identifiable under the model $\text{Cov}(\text{vec}(\varepsilon)) = A \otimes B$. When the data is subgaussian, the method is still valid for obtaining consistent estimators of $A$, $B$, and their inverses, but the interpretation in terms of conditional independence does not hold in general. Our results do not assume normally distributed data; we instead analyze the subgaussian analogue of the matrix-variate normal model. In the Kronecker product covariance model we consider in the present work, the noise term has the form $ \varepsilon = B^{1/2} Z A^{1/2}$ for a mean-zero random matrix $Z$ with independent subgaussian entries satisfying $1 = {\mathbb E} Z_{ij}^2 \le \norm{Z_{ij}}_{\psi_2} \leq K$. Clearly, $\mathrm{Cov}(\mvec{\varepsilon}) = A \otimes B$. Here, the matrix $A$ represents the shared covariance among variables for each sample, while $B$ represents the covariance among observations, which in turn is shared by all genes. For identifiability and convenience, we define \begin{equation} \label{KroneckerIdentifiability} A^* = \frac{m}{\operatorname{tr}(A)}A \quad \text{ and } \quad B^* = \frac{\operatorname{tr}(A)}{m} B, \end{equation} where the scaling factor is chosen so that $A^*$ has trace $m$. For the rest of the paper $A$ and $B$ refer to $A^*$ and $B^*$, as defined in \eqref{KroneckerIdentifiability}. Let $S_A$ and $S_B$ denote sample covariance matrices to be specified.
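The rescaling in \eqref{KroneckerIdentifiability} and the separable noise construction $\varepsilon = B^{1/2} Z A^{1/2}$ can be checked numerically. The following sketch uses hypothetical small $A$ and $B$ and Gaussian $Z$ for simplicity; it verifies that the rescaled pair yields the same Kronecker product, and that the Monte Carlo covariance of $\mvec{\varepsilon}$ approximates $A \otimes B$.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 3, 4                                # toy dimensions (hypothetical)

A = np.diag([1.0, 2.0, 3.0, 2.0])          # column-wise covariance (m x m)
B = 0.5 * np.eye(n) + 0.5                  # row-wise covariance (n x n)

# Identifiability rescaling: A* = (m / tr(A)) A, B* = (tr(A) / m) B
Astar = (m / np.trace(A)) * A
Bstar = (np.trace(A) / m) * B
assert np.isclose(np.trace(Astar), m)
assert np.allclose(np.kron(Astar, Bstar), np.kron(A, B))

# eps = B^{1/2} Z A^{1/2} with iid standard normal Z has Cov(vec(eps)) = A kron B.
Asq = np.sqrt(A)                           # A is diagonal, so sqrt is entry-wise
Bsq = np.linalg.cholesky(B)                # any square root of B works here
reps = 20000
Z = rng.standard_normal((reps, n, m))
eps = Bsq @ Z @ Asq                        # broadcasts over the replication axis
vecs = eps.transpose(0, 2, 1).reshape(reps, n * m)  # vec = column stacking
emp = np.cov(vecs, rowvar=False)
assert np.max(np.abs(emp - np.kron(A, B))) < 0.2
```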
Let the corresponding sample correlation matrices be defined as \begin{eqnarray} \label{defGammaAGammaB} \widehat{\Gamma}_{ij}(A) = \frac{(S_A)_{ij} }{ \sqrt{(S_A)_{ii}(S_A)_{jj}}} \quad \text{ and } \quad \widehat{\Gamma}_{ij}(B) = \frac{(S_B)_{ij} }{ \sqrt{(S_B)_{ii}(S_B)_{jj}}}. \end{eqnarray} The baseline Gemini estimators~\citep{Zhou14a} are defined as follows, using a pair of penalized estimators for the correlation matrices $\rho(A) = (a_{ij}/\sqrt{a_{ii} a_{jj}})$ and $\rho(B) = (b_{ij}/\sqrt{b_{ii} b_{jj}})$, \begin{subeqnarray} \label{geminiObjectiveFnA} \widehat{A}_\rho &= & \mathop{\text{arg\,min}\kern.2ex}_{A_\rho \succ 0} \left\{ \tr{\widehat{\Gamma}(A) A_\rho^{-1}} + \log |A_\rho| + \lambda_B |A_\rho^{-1}|_{1, \text{off}} \right\}, \; \text{ and } \\ \label{geminiObjectiveFnB} \widehat{B}_\rho &= & \mathop{\text{arg\,min}\kern.2ex}_{B_\rho \succ 0} \left\{ \tr{\widehat{\Gamma}(B) B_\rho^{-1}} + \log |B_\rho| + \lambda_A |B_\rho^{-1}|_{1, \text{off}} \right\}, \end{subeqnarray} where the inputs are the pair of sample correlation matrices defined in \eqref{defGammaAGammaB}. Let $\widehat{M}$ denote the estimator of the mean matrix $M$ in \eqref{eq::matrix-normal-rep-intro}. Denote the centered data matrix and the sample covariance matrices by \begin{eqnarray} \nonumber X_{\operatorname{cen}} & = & X - \widehat{M}, \quad \text{for $\widehat{M}$ to be specified in Algorithms 1 and 2,} \\ \label{SampleCovAB} S_B & = & X_{\operatorname{cen}} X_{\operatorname{cen}}^T / m, \quad \text{ and } \quad S_A = X_{\operatorname{cen}}^T X_{\operatorname{cen}} / n.
\end{eqnarray} Define the diagonal matrices of sample standard deviations as \begin{eqnarray} \label{W1hatW2hat} && \widehat{W}_1 = \sqrt{n} \text{diag}(S_A)^{1/2} \in \mathbb{R}^{m \times m}, \quad \widehat{W}_2 = \sqrt{m} \text{diag}(S_B)^{1/2} \in \mathbb{R}^{n \times n},\\ && \label{estimatorAKroneckerB} \text{ and } \; \widehat{A \otimes B} = \left( \widehat{W}_1 \widehat{A}_{\rho} \widehat{W}_1 \right) \otimes \left( \widehat{W}_2 \widehat{B}_{\rho} \widehat{W}_2 \right) / \lVert X_{\operatorname{cen}} \rVert_F^2. \end{eqnarray} \subsection{Group-based centering method} \label{sec::GroupCenter} We now discuss our first method for estimation and inference with respect to the vector of mean differences $\gamma = \beta^{(1)} - \beta^{(2)}$, for $\beta^{(1)}$ and $\beta^{(2)}$ as in \eqref{meanMatrixTwoGroups}. Our approach in Algorithm 1 is to remove all possible mean effects by centering each variable within every group. \noindent{\textbf{Algorithm 1: GLS-Global group centering} }\\ \noindent\rule{16cm}{0.4pt} \\ Input: $X$; and $\mathcal{G}(1), \mathcal{G}(2)$: indices of group one and two, respectively. \\ Output: $\widehat{A}^{-1}$, $\widehat{B}^{-1}$, $\widehat{A \otimes B}$, $\widehat{\beta}(\widehat{B}^{-1})$, $\widehat{\gamma}$, $T_j$ for all $j$ \\ \noindent\rule{16cm}{0.4pt} \begin{description} \item [1. Group center the data.] Let $Y_i$ denote the $i$th row of the data matrix. To estimate the group mean vectors $\beta^{(1)}, \beta^{(2)} \in \mathbb{R}^m$: Compute the sample mean vectors \begin{eqnarray} \label{eq::sampleMean} && \widetilde{\beta}^{(1)} = \frac{1}{n_1} \sum_{i \in \mathcal{G}(1)} Y_i \quad \text{ and } \quad \widetilde{\beta}^{(2)} = \frac{1}{n_2} \sum_{i \in \mathcal{G}(2)} Y_i; \quad \text{set} \quad \widehat{\gamma}^{\operatorname{OLS}} = \widetilde{\beta}^{(1)} - \widetilde{\beta}^{(2)}.
\\ \nonumber && \text{Center the data by } X_{\text{cen}} = X - \widehat{M}, \text{ with } \widehat{M} = \begin{bmatrix} 1_{n_1} \widetilde{\beta}^{(1)T} \\ 1_{n_2} \widetilde{\beta}^{(2)T} \end{bmatrix}. \notag \end{eqnarray} \item [2. Obtain regularized correlation estimates.] \begin{enumerate} \item[(2a)] The centered data matrix used to calculate $S_A$ and $S_B$ for Algorithm 1 is $X_{\operatorname{cen}} = (I - P_2)X$, where $P_2$ is the projection matrix that performs within-group centering, \begin{equation} \label{def:withinGroupProjection} P_2 = \begin{bmatrix} n_1^{-1} 1_{n_1} 1_{n_1}^T & 0 \\ 0 & n_2^{-1} 1_{n_2} 1_{n_2}^T \end{bmatrix} = uu^T / n_1 + vv^T / n_2, \end{equation} with $u$ and $v$ as defined in \eqref{groupIndicators}. Compute the sample covariance matrices based on group-centered data: $S_A= \inv{n}X_{\text{cen}}^T X_{\text{cen}} = \inv{n}X^T (I - P_2) X$\; \text{ and} \\ $S_B = \inv{m} X_{\text{cen}} X_{\text{cen}}^T = \inv{m}(I - P_2)XX^T(I - P_2)$. \item[(2b)] Compute the sample correlation matrices in~\eqref{defGammaAGammaB}, and obtain the penalized correlation matrices $\widehat{A}_\rho$ and $\widehat{B}_\rho$ using the Gemini estimators as defined in (\ref{geminiObjectiveFnA}) and (\ref{geminiObjectiveFnB}), with tuning parameters to be defined in~\eqref{GLassoPenaltyB}. \end{enumerate} \item [3. Rescale the estimated correlation matrices to obtain penalized covariance estimates.] \begin{eqnarray} \label{BiHat} \widehat{B}^{-1} = m \widehat{W}_2^{-1} \widehat{B}_{\rho} \widehat{W}_2^{-1} \; \text{ and } \; \widehat{A}^{-1} = (\lVert X_{\operatorname{cen}} \rVert_F^2 / m) \widehat{W}_1^{-1} \widehat{A}_{\rho} \widehat{W}_1^{-1}. \end{eqnarray} \item [4. Estimate the group mean matrix] using the GLS estimator as defined in \eqref{GLSestimator}. \item [5. Obtain test statistics.]
The $j$th test statistic is defined as \begin{equation} \label{glsTestStats} T_j = \frac{\widehat{\gamma}_j(\widehat{B}^{-1}) }{ \sqrt{\delta^T (D^T \widehat{B}^{-1} D)^{-1} \delta} }, \qquad \text{with } \delta = (1, -1) \in \mathbb{R}^2, \end{equation} and $\widehat{\gamma}_j(\widehat{B}^{-1}) = \delta^T \widehat{\beta}_j(\widehat{B}^{-1})$, for $j = 1, \ldots, m$. Note that $T_j$ as defined in \eqref{glsTestStats} is essentially a Wald statistic, where the denominator is a plug-in standard error of $\widehat{\gamma}_j(B^{-1})$. \end{description} \subsection{Model selection centering method} \label{sec::ModelBasedCenter} In this section we present Algorithm 2, which aims to remove only those mean effects that are strong enough to have an impact on covariance estimation. The strategy here is to use a model selection step to identify variables with strong mean effects. {\noindent \textbf{Algorithm 2: GLS-Model selection centering} }\\ \noindent\rule{16cm}{0.4pt} \\ Input: $X$, and $\mathcal{G}(1), \mathcal{G}(2)$: indices of group one and two, respectively. \\ Output: $\widehat{A}^{-1}$, $\widehat{B}^{-1}$, $\widehat{A \otimes B}$, $\widehat{\beta}(\widehat{B}^{-1})$, $\widehat{\gamma}$, $T_j$ for all $j$ \\ \noindent\rule{16cm}{0.4pt} \begin{description} \item[1. Run Algorithm 1.] Use the group centering method to obtain initial estimates $\widehat{\gamma}^{\text{init}}_j = \widehat{\beta}_j^{(1)} - \widehat{\beta}_j^{(2)}$ for all $j = 1, \ldots, m$. Let $\hat{B}^{-1}_{\text{\rm init}}$ and $\hat{B}_{\text{\rm init}}$ be as obtained in \eqref{BiHat}. \item[2. Select genes with large estimated differences in means.]
Let $\widetilde{J}_0 = \{j: | \widehat{\gamma}^{\text{init}}_j | > 2\widehat{\tau}_{\text{init}} \}$ denote the set of genes which we consider as having strong mean effects, where \begin{equation} \label{modSelThresh} \widehat{\tau}_{\text{init}} \asymp \left( \frac{\log^{1/2}m}{\sqrt{m}} + \frac{\lVert \hat{B}_{\text{\rm init}} \rVert_1}{n_{\min}} \right) \sqrt{ \frac{n_{\text{ratio}} |\hat{B}_{\text{\rm init}}^{-1}|_{0, \text{off}} }{n_{\min}}} + \sqrt{\log m } \lVert (D^T \hat{B}_{\text{\rm init}}^{-1}D)^{-1} \rVert_2^{1/2}, \end{equation} with $n_{\min} = n_1 \wedge n_2$, $n_{\max} =n_1 \vee n_2$, and $n_{\operatorname{ratio}} = n_{\max} / n_{\min}$. \item[3. Calculate Gram matrices based on model selection centering.] Global centering can be expressed in terms of the projection matrix $P_1 = n^{-1} \mathbf{1}_n \mathbf{1}_n^T$. Compute the centered data matrix \[ X_{\operatorname{cen}, j} = \begin{cases} X_j - P_2 X_j & \text{if $j \in \widetilde{J}_0$} \\ X_j - P_1 X_j & \text{if $j \in \widetilde{J}_0^c$}, \end{cases} \] where $X_{\operatorname{cen}, j}$ denotes the $j$th column of the centered data matrix $X_{\operatorname{cen}}$. Compute the sample covariance and correlation matrices with $X_{\operatorname{cen}}$ following \eqref{SampleCovAB} and \eqref{defGammaAGammaB}. \item[4. Estimate covariances and means.] \begin{enumerate} \item[(4a)] Obtain the penalized correlation matrices $\widehat{B}_{\rho}$ and $\widehat{A}_{\rho}$ using Gemini estimators as defined in (\ref{geminiObjectiveFnA}) and \eqref{geminiObjectiveFnB} with tuning parameters of the same order as those in \eqref{GLassoPenaltyB}. \item[(4b)] Obtain inverse covariance estimates $\widehat{B}^{-1}$, $\widehat{A}^{-1}$ using \eqref{BiHat}. 
\item[(4c)] Calculate the GLS estimator $\widehat{\beta}(\widehat{B}^{-1})$ as in \eqref{GLSestimator}, as well as the vector of mean differences $\widehat{\gamma}(\widehat{B}^{-1}) = \delta^T \widehat{\beta}(\widehat{B}^{-1})$, for $\delta = (1, -1) \in \mathbb{R}^2$. \end{enumerate} \item [5. Obtain test statistics.] Calculate test statistics as in \eqref{glsTestStats}, now using $\widehat{B}^{-1}$ as estimated in Step 4. \end{description} \noindent{\bf Remarks.} In the case that $\gamma$ is sparse, we show that this approach can perform better than the approach in Section \ref{sec::GroupCenter}, in particular when the sample size is small. The expression for $\widehat{\tau}_{\text{init}}$ in \eqref{modSelThresh} serves as an upper bound on the threshold, in the sense that it is chosen to tightly control false positives. In Section~\ref{inferenceGamma} we show in simulations that with this plug-in estimate $\widehat{\tau}_{\text{init}}$, Algorithm 2 can nearly reach the performance of GLS with the true $B$. Since this choice of $\widehat{\tau}_{\text{init}}$ only determines the order of the threshold we need, the plug-in method can also be applied with a multiplier between $0$ and $1$. When we set $\widehat{\tau}_{\text{init}}$ at its lower bound, namely, \[ \sqrt{\log m } \lVert (D^T \hat{B}_{\text{\rm init}}^{-1}D)^{-1} \rVert_2^{1/2}, \; \; \text{where} \; \; \hat{B}_{\text{\rm init}}^{-1} \; \; \text{is obtained as in Step 3 of Algorithm 1,} \] we anticipate many false positives. In Figure~\ref{RankCorrplugin_decayExp_m1_2e3_v1}, we show that the performance of Algorithm 2 is stable in the setting of small $n$ and sparse $\gamma$ for different values of $\widehat{\tau}_{\text{init}}$, demonstrating robustness of our method to the multiplier; there we observe that the performance can degrade if the threshold is set too small, eventually reaching the performance of Algorithm 1.
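For concreteness, the column-wise centering rule in Step 3 of Algorithm 2 can be sketched as follows. This is a minimal numpy sketch with hypothetical values; the threshold below is a fixed illustrative number, not the plug-in $\widehat{\tau}_{\text{init}}$ of \eqref{modSelThresh}.

```python
import numpy as np

rng = np.random.default_rng(3)
n1, n2, m = 5, 5, 6                     # toy sizes (hypothetical)
n = n1 + n2

# P1 centers globally; P2 centers within each group.
P1 = np.ones((n, n)) / n
u = np.r_[np.ones(n1), np.zeros(n2)]
v = np.r_[np.zeros(n1), np.ones(n2)]
P2 = np.outer(u, u) / n1 + np.outer(v, v) / n2

gamma_init = np.array([3.0, 0.1, 0.0, 2.5, 0.0, 0.2])  # initial mean-difference estimates
tau = 1.0                                              # illustrative threshold
J0 = np.abs(gamma_init) > 2 * tau                      # genes with strong mean effects

X = rng.standard_normal((n, m))
# Group center the selected columns; globally center the rest.
X_cen = np.where(J0, X - P2 @ X, X - P1 @ X)

# Selected columns have zero mean within each group; others have zero overall mean.
assert np.allclose(X_cen[:n1, J0].mean(axis=0), 0)
assert np.allclose(X_cen[:, ~J0].mean(axis=0), 0)
```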
Second, if an upper bound on the number of differentially expressed genes is known a priori, one can instead select a set of genes $\widecheck{J}_0$ to group center, with the cardinality $|\widecheck{J}_0|$ chosen as an upper bound on $d_0 =|\operatorname{supp}(\gamma)|$ based on prior knowledge. We select the set $\widecheck{J}_0$ by ranking the components of the estimated vector of mean differences $\widehat{\gamma}$. In the data analysis in Section~\ref{sec::UCData} we adopt this strategy in an iterative manner by successively halving the number of selected genes, choosing at each step the genes with the largest estimated mean differences from the previous step. We show in this data example and through simulation that the proposed method is robust to the choice of $|\widecheck{J}_0|$. Finally, it is worth noting that these algorithms readily generalize to settings with more than two groups, in which case we simply group center within each group. This is equivalent to applying the method with a different design matrix $D$. In fact, we can move beyond group-wise mean comparisons to a regression analysis with a fixed design matrix $D$, which includes the $k$-group mean analysis as a special case. \section{Theoretical results} \label{sec::mainTheorem} We first state Theorem~\ref{thm::GLSFixedB}, which provides the rate of convergence of the GLS estimator \eqref{GLSestimator} when we use a fixed approximation of the covariance matrix $B$. We then provide in Theorems~\ref{mainTheoremGroupCentering} and~\ref{mainTheoremModSel} the convergence rates for estimating the group mean matrix $\beta \in \mathbb{R}^{2 \times m}$ for Algorithms 1 and 2, respectively.
In Theorem~\ref{mainTheoremGroupCentering} we state rates of convergence for the Gemini estimators of $B^{-1}$ and $A^{-1}$ when the input sample covariance matrices use the group centering approach as defined in Algorithm 1, while in Theorem~\ref{mainTheoremModSel}, which uses the model selection centering approach as defined in Algorithm 2, we state only the rate of convergence for estimating $B^{-1}$, anticipating that the rate for $A^{-1}$ can be obtained similarly. \subsection{GLS under fixed covariance approximation} \label{sec::GLSFixedBtheorem} We now state a theorem on the rate of convergence of the GLS estimator \eqref{GLSestimator} when we use a fixed approximation $B^{-1}_{n, m}$ to $B^{-1}$ such that the operator norm of $\Delta_{n, m} = B^{-1}_{n, m} - B^{-1}$ is small in the sense of \eqref{DeltaConditionTwoGroupD}. We will specialize Theorem~\ref{thm::GLSFixedB} to the case where $B^{-1}$ is estimated using the baseline method in~\cite{Zhou14a} when $X$ follows a subgaussian matrix-variate distribution as in \eqref{eq::matrix-normal-rep-intro}. We prove Theorem \ref{thm::GLSFixedB} in Web Supplement Section~\ref{sec::proofsOfTheorems}. \begin{theorem} \label{thm::GLSFixedB} Let $Z$ be an $n \times m$ random matrix with independent entries $Z_{ij}$ satisfying ${\mathbb E} Z_{ij} = 0$ and $1 = {\mathbb E} Z_{ij}^2 \le \norm{Z_{ij}}_{\psi_2} \leq K$. Let $Z_1, \ldots, Z_m \in \mathbb{R}^n$ be the columns of $Z$. Suppose the $j$th column of the data matrix satisfies $X_j \sim B^{1/2} Z_j$. Suppose $B_{n, m} \in \mathbb{R}^{n \times n}$ is a positive definite symmetric matrix. Let $\Delta_{n, m} := B^{-1}_{n, m} - B^{-1}$. Suppose \begin{equation} \label{DeltaConditionTwoGroupD} \lVert \Delta_{n, m} \rVert_2 < \frac{1}{\left( n_{\max} / n_{\min} \right) \lVert B \rVert_2}, \text{ where } n_{\min} = n_1 \wedge n_2 \; \; \text{ and } \; \; n_{\max} =n_1 \vee n_2.
\end{equation} Then with probability at least $1 - 8 / (m \vee n)^2$, for some absolute constants $C$, $C'$, \begin{eqnarray} \label{rateBetaHatFixedB1} && \forall j, \quad \lVert \widehat{\beta}_j(B^{-1}_{n, m}) - \beta_j^* \rVert_2 \; \leq \; r_{n, m} := s_{n, m} + t_{n, m}, \quad \text{ where } \\ \label{def:stFixedDelta} && s_{n, m} = C \sqrt{{\log m \lVert B \rVert_2}/{n_{\min}}} \quad \text{ and} \quad t_{n, m} = C' {\lVert \Delta_{n, m} \rVert_2}/{n_{\min}^{1/2}}; \\ \label{rateGammaHatFixedDelta} \text{ and } && \lVert \widehat{\gamma}(B_{n, m}) - \gamma \rVert_\infty \leq \sqrt{2} \left( C \sqrt{\frac{\log m \lVert B \rVert_2}{n_{\min}}} + C' n_{\min}^{-1/2} \lVert \Delta_{n, m} \rVert_2 \right). \end{eqnarray} \end{theorem} \noindent{\textbf{Remarks.}} If the operator norm of $B$ is bounded, that is, $\lVert B \rVert_2 < W$, then condition \eqref{DeltaConditionTwoGroupD} is implied by $\lVert \Delta_{n, m} \rVert_2 < 1 / (W n_{\operatorname{ratio}})$, where $n_{\operatorname{ratio}} = n_{\max} / n_{\min}$. The term $t_{n, m}$ in \eqref{def:stFixedDelta} reflects the error due to approximating $B^{-1}$ with $B_{n,m}^{-1}$, whereas $s_{n, m}$ reflects the error in estimating the mean matrix \eqref{GLSestimator} using GLS with the true $B^{-1}$ for the random design $X$. The term $s_{n, m}$ is $O( \sqrt{\log m / n} )$, whereas $t_{n, m}$ is $O(1/\sqrt{n})$. The dominating term $s_{n, m}$ in \eqref{def:stFixedDelta} can be replaced by the tighter bound, namely, $s_{n, m}' = C' \log^{1/2} (m) \sqrt{\delta^T (D^T B^{-1}D)^{-1} \delta}$, with $\delta = (1, -1) \in \mathbb{R}^2$. This bound correctly drops the factor of $\lVert B \rVert_2$ present in \eqref{def:stFixedDelta} and \eqref{rateGammaHatFixedDelta}, while revealing that variation aligned with the column space of $D$ is especially important in mean estimation. Note that the condition \eqref{DeltaConditionTwoGroupD} is not stringent, and that the estimates $\hat{B}$ used in Algorithms 1 and 2 attain much smaller errors than this threshold.
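As a concrete illustration of the estimator analyzed above, the following sketch (our construction on synthetic data, not the paper's code) computes the standard GLS fit $\widehat{\beta}(B^{-1}_{n,m}) = (D^T B^{-1}_{n,m} D)^{-1} D^T B^{-1}_{n,m} X$, vectorized over the columns $j$; the AR(1) choice of $B$ is an assumption made only for the example:

```python
import numpy as np

def gls_means(X, D, B_inv):
    """GLS estimate of the 2 x m group-mean matrix: (D' B_inv D)^{-1} D' B_inv X."""
    DtW = D.T @ B_inv
    return np.linalg.solve(DtW @ D, DtW @ X)

rng = np.random.default_rng(1)
n1, n2, m = 6, 4, 5
n = n1 + n2
D = np.zeros((n, 2))
D[:n1, 0] = 1.0                 # group 1 indicator column
D[n1:, 1] = 1.0                 # group 2 indicator column
# Toy AR(1) between-sample covariance B and zero-mean correlated data.
B = 0.5 ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
X = np.linalg.cholesky(B) @ rng.normal(size=(n, m))

beta_hat = gls_means(X, D, np.linalg.inv(B))   # oracle B^{-1}
gamma_hat = beta_hat[0] - beta_hat[1]          # delta = (1, -1)
```

With $B^{-1}_{n,m} = I_n$ the same function returns the OLS fit, i.e., the per-group sample means.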
When $M = 0$ is known, $S_A$ and $S_B$ can be the usual Gram matrices, and the theory in~\cite{Zhou14a} guarantees that $\lVert \Delta_{n, m} \rVert_2$ appearing in \eqref{def:stFixedDelta} has rate $C_A \sqrt{\log m / m}$, with $C_A = \sqrt{m} \lVert A \rVert_F / \operatorname{tr}(A)$. In our setting, however, $M$ is in general nonzero. In Sections \ref{sec::GroupCenter} and \ref{sec::ModelBasedCenter} we provide two constructions for $S_A$ and $S_B$, which differ in how the data are centered. These constructions lead to different bounds on $t_{n, m}$, as we discuss in Theorems \ref{mainTheoremGroupCentering} and \ref{mainTheoremModSel}. In Section \ref{sec::Simulations}, we present simulation results that demonstrate the advantage of the oracle GLS and GLS with estimated $\hat{B}$ \eqref{GLSestimator} over the sample mean based (OLS) method (cf.\ \eqref{eq::sampleMean} and \eqref{unpairedTstat}) for mean estimation as well as the related variable selection problem with respect to $\gamma$. There we also scrutinize the design effect, defined next, and its estimation in detail. \noindent{\textbf{Design effect.}} The ``design effect'' is the variance of the ``oracle'' GLS estimator \eqref{GLSestimator} of $\gamma_j$ using the true $B$, that is, \begin{equation} \label{designEffect} \delta^T (D^T B^{-1}D)^{-1} \delta = \operatorname{Var}(\widehat{\gamma}_j(B^{-1})), \; \forall j = 1, \ldots, m. \end{equation} The design effect reflects the potential improvement of GLS over OLS. It appears as a factor in $s_{n, m}'$ above, so it contributes to the rate of mean parameter estimation as characterized in Theorem \ref{thm::GLSFixedB}. Lower variance in the GLS estimator of the mean difference contributes to greater power of the test statistics relative to OLS.
The design effect also appears as a scale factor in the test statistics for $\hat{\gamma}$ \eqref{glsTestStats}, and therefore it is particularly important that the design effect is accurately estimated in order for the test statistics to be properly calibrated. In a study focusing on mean differences, it may be desirable to assess the sample size needed to detect a given effect size using our methodology. Given the design effect, our tests for differential expression are essentially Z-tests based on the GLS fits, followed by some form of multiple comparisons adjustment. \begin{corollary} \label{theoremInference} Let $\Omega = (D^T B^{-1} D)^{-1}$, $\widehat{\Omega} = (D^T \widehat{B}^{-1} D)^{-1}$, and $\Delta = \widehat{\Omega} - \Omega$. Under the conditions of Theorem \ref{thm::GLSFixedB}, the relative error in estimating the design effect is bounded as \begin{equation} \label{eq::designEffectEstRateCorollaryThm1} \frac{\abs{ \delta^T \widehat{\Omega} \delta - \delta^T \Omega \delta}}{ \delta^T \Omega \delta } \leq 2 C' \frac{\kappa(B) \twonorm{B} \twonorm{\Delta} }{n_{\operatorname{ratio}}}, \end{equation} with probability $1 - C / (m \vee n)^d$, for some absolute constants $C, C'$. \end{corollary} We prove Corollary \ref{theoremInference} in Web Supplement Section~\ref{proofTheoremInference}. Corollary \ref{theoremInference} implies that given an accurate estimator of $B^{-1}$, the design effect is accurately estimated, and it therefore suggests that traditional techniques can be used to gain an approximate understanding of the power of our methods. We show that $B^{-1}$ can be accurately estimated under the conditions of Theorems~\ref{mainTheoremGroupCentering} and~\ref{mainTheoremModSel}. If pilot data are available that are believed to have similar between-sample correlations to the data planned for collection in a future study, Corollary~\ref{theoremInference} also justifies using this pilot data to estimate the design effect.
If no pilot data are available, it is possible to conduct power analyses based on various plausible specifications for the $B$ matrix. \subsection{Rates of convergence for Algorithms 1 and 2} We state the following assumptions. \\ \noindent{\bf (A1)} The numbers of nonzero off-diagonal entries of $A^{-1}$ and $B^{-1}$ satisfy \begin{eqnarray*} \left| A^{-1} \right|_{0, \text{off}} &= & o(n / \log(m \vee n)) \qquad \qquad \qquad \quad (n, m \rightarrow \infty ) \quad \text{and} \\ \left| B^{-1} \right|_{0, \text{off}} &= & o\left( \frac{m}{\log(m \vee n)} \wedge \frac{n_{\min}^2}{\lVert B \rVert_1^2} \right) \qquad (n, m \rightarrow \infty ). \end{eqnarray*} \noindent{\bf (A2)} The eigenvalues of $A$ and $B$ are bounded away from 0 and $+\infty$. We assume that the stable ranks satisfy $r(A), r(B) \geq 4 \log(m \vee n)$, where $r(A) = \FroNorm{A}^2 / \twonorm{A}^2$. \begin{theorem} \label{mainTheoremGroupCentering} Suppose that (A1) and (A2) hold. Consider the data as generated from model \eqref{modelMeanCov} with $\varepsilon = B^{1/2} Z A^{1/2}$, where $A \in \mathbb{R}^{m \times m}$ and $B \in \mathbb{R}^{n \times n}$ are positive definite matrices, and $Z$ is an $n \times m$ random matrix as defined in Theorem~\ref{thm::GLSFixedB}. Let $C, C', C_1, C_2, C'', C'''$ be some absolute constants. Let $C_A = \sqrt{m} \lVert A \rVert_F / \operatorname{tr}(A)$ and $C_B = \sqrt{n} \lVert B \rVert_F / \operatorname{tr}(B)$. \textbf{(I)} Let $\lambda_A$ and $\lambda_B$ denote the penalty parameters for \eqref{geminiObjectiveFnB} and \eqref{geminiObjectiveFnA} respectively. Suppose \begin{eqnarray} \label{GLassoPenaltyB} \lambda_A \geq C\left( C_A K \frac{\log^{1/2}(m \vee n)}{\sqrt{m}} + \frac{\lVert B \rVert_1}{n_{\min}} \right) \;\text{ and } \; \lambda_B \geq C' \left( C_B K \frac{\log^{1/2}(m \vee n)}{\sqrt{n}} + \frac{\lVert B \rVert_1}{n_{\min}}\right).
\end{eqnarray} Then with probability at least $1 - C'' / (m \vee n)^2$, for $\widehat{A \otimes B}$ as defined in \eqref{estimatorAKroneckerB}, \begin{eqnarray*} && \lVert \widehat{A \otimes B} - A \otimes B \rVert_2 \leq \lVert A \rVert_2 \lVert B \rVert_2 \delta,\\ && \lVert \widehat{A \otimes B}^{-1} - A^{-1} \otimes B^{-1} \rVert_2 \leq \lVert A^{-1} \rVert_2 \lVert B^{-1} \rVert_2 \delta',\\ \text{ where } \quad && \delta, \delta' = O\left(\lambda_A \sqrt{|B^{-1}|_{0, \text{off}} \vee 1} + \lambda_B \sqrt{|A^{-1}|_{0, \text{off}} \vee 1} \right). \end{eqnarray*} Furthermore, with probability at least $1 - C'''/(m \vee n)^2$, \begin{eqnarray} \ && \lVert \widehat{A \otimes B} - A \otimes B \rVert_F \leq \lVert A \rVert_F \lVert B \rVert_F \eta, \\ \text{ where } && \label{eq::eta} \eta = O\left(\lambda_A \sqrt{|B^{-1}|_{0, \text{off}} \vee n} / \sqrt{n} + \lambda_B \sqrt{|A^{-1}|_{0, \text{off}} \vee m} / \sqrt{m} \right). \end{eqnarray} The same conclusions hold for the inverse estimate, with $\eta$ being bounded in the same order as in \eqref{eq::eta}. \textbf{(II)} Let $\widehat{\beta}$ be defined as in (\ref{GLSestimator}) with $\widehat{B}^{-1}$ being defined as in \eqref{BiHat} and $D$ as in (\ref{meanMatrixTwoGroups}). Then, with probability at least $1 - C/m^d$, the following holds for all $j$: \begin{equation} \label{mainThmBoundTailCutpointGroup} \lVert \widehat{\beta}_j(\widehat{B}^{-1}) - \beta_j^* \rVert_2 \leq C_1 \lambda_A \sqrt{ \frac{n_{\operatorname{ratio}} \left(|B^{-1}|_{0, \text{off}} \vee 1\right)}{n_{\min}}} + C_2 \sqrt{\log m} \lVert (D^T B^{-1} D)^{-1} \rVert_2^{1/2}. \end{equation} \end{theorem} We prove Theorem \ref{mainTheoremGroupCentering} part I in Web Supplement Section \ref{sec::ProofMainThmPartI}; this relies on rates of convergence of $\widehat{B}^{-1}$ and $\widehat{A}^{-1}$ in the operator and Frobenius norms, which are established in Lemma \ref{boundCovOpFro}.
We prove part II in Web Supplement Section \ref{sec::ProofMainThmPartII}. \noindent{\bf Remarks.} Estimating the mean matrix introduces an additional additive term of order $1 / n$ in the convergence rates for covariance estimation for $B$ and $A$. In part I of Theorem~\ref{mainTheoremGroupCentering}, $\lambda_A$ is decomposed into two terms, one term reflecting the variance of $S_B$, and one term reflecting the bias due to group centering. The variance term goes to zero as $m$ increases, and the bias term goes to zero as $n$ increases. To analyze the error in the GLS estimator based on $\widehat{B}^{-1}$, we decompose $\lVert \widehat{\beta}_j(\widehat{B}^{-1}) - \beta_j^* \rVert_2$ as \begin{equation*} \lVert \widehat{\beta}_j(\widehat{B}^{-1}) - \beta_j^* \rVert_2 \leq \lVert \widehat{\beta}_j(\widehat{B}^{-1}) - \widehat{\beta}_j(B^{-1}) \rVert_2 + \lVert \widehat{\beta}_j(B^{-1}) - \beta_j^* \rVert_2, \end{equation*} where the first term is the error due to not knowing $B^{-1}$, and the second term is the estimation error of the oracle GLS estimator that uses the true $B^{-1}$. The rate of convergence given in (\ref{mainThmBoundTailCutpointGroup}) reflects this decomposition. For Algorithm 2, we have analogous rates of convergence for both mean and covariance estimation. Simulations suggest that the constants in the rates for Algorithm 2 are smaller than those in \eqref{mainThmBoundTailCutpointGroup}. We state the following additional assumptions, under which Theorem \ref{mainTheoremModSel} holds for Algorithm 2. \textbf{(A2')} Suppose (A2) holds, and $n = \Omega\left( (\log m) \Parens{ {\twonorm{A} \twonorm{B} b_{\max}}/{C_A^2} } \right)$. \textbf{(A3)} Let $\text{supp}(\gamma) = \{ j : \gamma_j \neq 0 \}$. Let $s = \abs{\text{supp}(\gamma)}$ denote the sparsity of $\gamma$. Assume that $s = O\Parens{\frac{C_A}{\twonorm{B} } n \sqrt{\frac{m}{\log m}}}$.
\textbf{Remarks.} Condition (A2') is mild, because the condition on the stable rank of $B$ already implies that $n \geq \log m$. \begin{theorem} \label{mainTheoremModSel} Suppose that (A1), (A2'), and (A3) hold. Consider the data as generated from model \eqref{modelMeanCov} with $\varepsilon = B^{1/2} Z A^{1/2}$, where $A \in \mathbb{R}^{m \times m}$ and $B \in \mathbb{R}^{n \times n}$ are positive definite matrices, and $Z$ is an $n \times m$ random matrix as defined in Theorem \ref{mainTheoremGroupCentering}. Let $\lambda_A$ denote the penalty parameter for estimating $B$. Suppose $\lambda_A$ is as defined in \eqref{GLassoPenaltyB}. Let \begin{equation} \label{threshTheorem3} \tau_{\operatorname{init}} \asymp \sqrt{\log m} \lVert (D^T B^{-1}D)^{-1} \rVert_2^{1/2}. \end{equation} Then with probability at least $1 - C'' / (m \vee n)^2$, for the output of Algorithm 2, \begin{align} \label{mainThmErrorCov} \twonorm{ \Trace{A} \Parens{\widehat{W}_2 \widehat{B}_{\rho} \widehat{W}_2}^{-1} - B^{-1} } \leq \frac{C' \lambda_A \sqrt{\abs{B^{-1}}_{0, \operatorname{off}} \vee 1 } }{b_{\min} \varphi_{\min}^2(\rho(B))}, \quad \text{and} \end{align} \begin{equation} \label{mainThmBoundTailCutpoint} \lVert \widehat{\beta}_j(\widehat{B}^{-1}) - \beta_j^* \rVert_2 \leq C_2 \sqrt{\log m} \lVert (D^T B^{-1} D)^{-1} \rVert_2^{1/2}, \end{equation} for all $j$, for absolute constants $C$, $C_2$, $C'$, and $C''$. \end{theorem} We prove Theorem \ref{mainTheoremModSel} in Web Supplement Section~\ref{proofMainThmAlgTwo}. In Web Supplement Section~\ref{MainTheoremFixedGenes} we also show a standalone result, namely Theorem~\ref{mainTheoremFixedGenes}, for the case of fixed sets of group and globally centered genes. This result shows how the algorithm used in the preliminary step to choose which genes to group center can be decoupled from the rest of the estimation procedure in Algorithm 2, so long as certain conditions hold.
The proof of Theorem \ref{mainTheoremModSel} indeed validates that such conditions hold for the output of Algorithm 1. It is worth noting that a similar rate of convergence for estimating $A$ could also be derived, but we focus on $B$ in our methodology and applications, and therefore leave this as an exercise for interested readers. We specialize Corollary \ref{theoremInference} to the case where $B^{-1}$ is estimated using Algorithm 2. \begin{corollary} \label{corDesignEffectAlg2} Under the conditions of Theorem \ref{mainTheoremModSel}, we have, with probability $1-C/m^2$, \begin{equation} \label{eq::designEffectEstRate} \frac{\abs{ \delta^T \widehat{\Omega} \delta - \delta^T \Omega \delta}}{ \delta^T \Omega \delta } \leq 2 C' \frac{n_{\operatorname{ratio}}}{ \lambda_{\min}(B) } \kappa(B) \lambda_A \sqrt{\abs{B^{-1}}_{0, \operatorname{off}} \vee 1}, \end{equation} for some absolute constants $C$ and $C'$. \end{corollary} \textbf{Remarks.} \label{sec::designEffectRate} The right-hand side of \eqref{eq::designEffectEstRate} goes to zero because of the assumptions (A1), (A2'), and (A3), which ensure that the factor $\lambda_A \sqrt{\abs{B^{-1}}_{0, \operatorname{off}} \vee 1}$ goes to zero. We conduct simulations to assess the accuracy of estimating the design effect in Section~\ref{inferenceGamma}. \section{Proof sketch of Theorem 1} \label{sec::ProofThm1} \begin{proofof2} Let $B_{n, m} \in \mathbb{R}^{n \times n}$ denote a fixed positive definite matrix. Let $D$ be defined as in \eqref{meanMatrixTwoGroups}. Define $\Delta_{n, m} = B_{n, m}^{-1} - B^{-1}$ and \begin{equation} \label{def:Omega} \Omega = (D^T B^{-1} D)^{-1} \text{ and } \Omega_{n, m} = (D^T B_{n, m}^{-1} D)^{-1}.
\end{equation} Note that we can decompose the error for all $j$ as \begin{equation} \label{GLSDecompTriangleIneq} \lVert \widehat{\beta}_j(B_{n, m}^{-1}) - \beta_j^* \rVert_2 \leq \lVert \widehat{\beta}_j(B^{-1}) - \beta_j^* \rVert_2 + \lVert \widehat{\beta}_j(B_{n, m}^{-1}) - \widehat{\beta}_j(B^{-1}) \rVert_2 =: \text{I} + \text{II}. \end{equation} We will use the following lemmas, which are proved in Sections \ref{sec::rateBetaHatBHatinvBetaHatBinv} and \ref{sec::rateBetaHatBinvBetaStar}, to bound these two terms on the right-hand side, respectively. \begin{lemma} \label{HansonWrightBetaBinvBetaStar} Let $\mathcal{E}_2$ denote the event \begin{equation} \label{eventE2} \mathcal{E}_2 = \left\{ \lVert \widehat{\beta}_j(B^{-1}) - \beta_j^* \rVert_2 \leq s_{n, m} \right \}, \quad \text{with} \quad s_{n, m} = C_3 d^{1/2} \sqrt{\frac{\log(m) \lVert B \rVert_2}{n_{\min}}}. \end{equation} Then $P(\mathcal{E}_2) \geq 1 - 2 / m^d$. \end{lemma} \begin{lemma} \label{BetaBtildeBetaBStarHansonWright} Let $B_{n, m} \in \mathbb{R}^{n \times n}$ denote a fixed matrix such that $B_{n, m} \succ 0$. Let $X_j \in \mathbb{R}^n$ denote the $j$th column of $X$, where $X$ is a realization of model (\ref{modelMeanCov}). Let $\mathcal{E}_3$ denote the event \begin{equation} \mathcal{E}_3 = \left \{ \lVert \widehat{\beta}_j(B_{n, m}^{-1}) - \widehat{\beta}_j(B^{-1}) \rVert_2 \leq t_{n, m} \right\}, \quad \text{with} \quad t_{n, m} = \widetilde{C} n_{\min}^{-1/2} \lVert \Delta_{n, m} \rVert_2 \end{equation} for some absolute constant $\widetilde{C}$. Then $P(\mathcal{E}_3) \geq 1 - 2 / m^d$. \end{lemma} The proof of \eqref{rateBetaHatFixedB1} follows from the union bound $P(\mathcal{E}_2 \cap \mathcal{E}_3) \geq 1 - P(\mathcal{E}_2^c) - P(\mathcal{E}_3^c) \geq 1 - 4 / m^d$. Next we prove \eqref{rateGammaHatFixedDelta}. Let $r_{n, m} = s_{n, m} + t_{n, m}$, as defined in \eqref{rateBetaHatFixedB1}, and let $\delta = (1, -1) \in \mathbb{R}^2$.
Then \[ | \widehat{\gamma}_j(B^{-1}_{n, m}) - \gamma_j | = \left | \delta^T \left( \widehat{\beta}_j(B^{-1}_{n, m}) - \beta_j^* \right) \right | \leq \lVert \delta \rVert_2 \lVert \widehat{\beta}_j(B^{-1}_{n, m}) - \beta_j^* \rVert_2 = \sqrt{2} \lVert \widehat{\beta}_j(B^{-1}_{n, m}) - \beta_j^* \rVert_2, \] where we used the Cauchy-Schwarz inequality. Hence if $\lVert \widehat{\beta}_j(B^{-1}_{n, m}) - \beta_j^* \rVert_2 \leq r_{n, m}$, it follows that $| \widehat{\gamma}_j(B^{-1}_{n, m}) - \gamma_j | \leq \sqrt{2} r_{n, m}$. The result holds by applying a union bound over the variables $j = 1, \ldots, m$. \end{proofof2} \section{More proofs for Theorem~\ref{mainTheoremGroupCentering}} \label{sec::proofsforTheorem2} The proof of Lemma~\ref{boundCovOpFro} appears in Section~\ref{sec::proof-lemma-cov-opFro}. The proofs of auxiliary lemmas appear in Section~\ref{sec::proof-intermediate-Kron-prod}. \subsection{Proof of Lemma \ref{boundCovOpFro}} \label{sec::proof-lemma-cov-opFro} In order to prove Lemma \ref{boundCovOpFro}, we need Theorem~\ref{thm::frob}, which shows explicit non-asymptotic convergence rates in the Frobenius norm for estimating $\rho(A)$, $\rho(B)$, and their inverses. Theorem~\ref{thm::frob} follows from a standard argument; see~\cite{RBLZ08,ZRXB11}. We also need Proposition~\ref{sandwichBound} and Lemma~\ref{boundW1W2errorOpFro}, which are stated below and proved in Section~\ref{sec::proof-intermediate-Kron-prod}. \begin{theorem} \label{thm::frob} Suppose that (A2) holds. Let $\hat{A}_{\rho}$ and $\hat{B}_{\rho}$ be the unique minimizers defined by \eqref{geminiObjectiveFnA} and~\eqref{geminiObjectiveFnB} with sample correlation matrices $\hat\Gamma(A)$ and $\hat\Gamma(B)$ as their input. Suppose that event $\mathcal{X}_0$ holds, with \begin{eqnarray} \nonumber & & \widetilde{\eta} \sqrt{\offzero{A^{-1}} \vee 1} = o(1) \quad \text{and } \quad \widetilde{\alpha} \sqrt{\offzero{B^{-1}} \vee 1}= o(1).
\\ \label{eq::lambda-choir} & & \text{Set for some } \; 0< \epsilon, \varepsilon < 1, \; \; \lambda_{B} = {\widetilde{\alpha} }/{\varepsilon} \; \mbox{ and } \; \lambda_{A} = {\widetilde{\eta} }/{\epsilon}. \end{eqnarray} Then on event $\mathcal{X}_0$, we have for $9< C < 18$ \begin{eqnarray} & & \nonumber \twonorm{\hat{A}_{\rho} - \rho(A)} \le \fnorm{\hat{A}_{\rho} - \rho(A)} \leq C \kappa(\rho(A))^2 \lambda_{B} \sqrt{\offzero{A^{-1}} \vee 1}, \\ \nonumber & & \twonorm{\hat{B}_{\rho} - \rho(B)} \le \fnorm{\hat{B}_{\rho} - \rho(B)} \leq C \kappa(\rho(B))^2 \lambda_{A}\sqrt{\offzero{B^{-1}} \vee 1}, \end{eqnarray} and \begin{eqnarray} & & \label{eq::eventAop} \twonorm{\hat{A}_{\rho}^{-1} - \rho(A)^{-1}} \le \fnorm{\hat{A}_{\rho}^{-1} - \rho(A)^{-1}} < \frac{C \lambda_{B} \sqrt{\offzero{A^{-1}} \vee 1}}{2\varphi^2_{\min}(\rho(A))}, \\ \label{eq::eventBop} & & \twonorm{\hat{B}_{\rho}^{-1} - \rho(B)^{-1}} \le \fnorm{\hat{B}_{\rho}^{-1} - \rho(B)^{-1}} \leq \frac{C \lambda_{A} \sqrt{\offzero{B^{-1}} \vee 1}}{2\varphi^2_{\min}(\rho(B))}. \end{eqnarray} \end{theorem} We now state an auxiliary result, Lemma \ref{boundW1W2errorOpFro}, where we prove a bound on the error in the diagonal entries of the covariance matrices, and on their reciprocals. The following Lemma provides bounds analogous to those in Claim 15.1~\cite{Zhou14a,Zhou14supp}. \begin{lemma} \label{boundW1W2errorOpFro} Let $\widehat{W}_1$ and $\widehat{W}_2$ be as defined in \eqref{W1hatW2hat}. Let $W_1 = \sqrt{\operatorname{tr}(B)} \operatorname{diag}(A)^{1/2}$ and $W_2 = \sqrt{\operatorname{tr}(A)} \operatorname{diag}(B)^{1/2}$. Suppose event $\mathcal{X}_0$ holds, as defined in \eqref{eq:X_0_B}, \eqref{eq:X_0_A}. 
For $\eta' := \frac{\widetilde{\eta}}{\sqrt{1 - \widetilde{\eta}}} \leq \frac{\lambda_B}{6}$ and $\alpha' := \frac{\widetilde{\alpha}}{\sqrt{1 - \widetilde{\alpha}}} \leq \frac{\lambda_A}{6}$, \begin{align*} \left \lVert \widehat{W}_1 - W_1 \right \rVert_2 &\leq \widetilde{\eta} \sqrt{\tr{B}} \sqrt{a_{\max}}, \qquad \left \lVert \widehat{W}_1^{-1} - W_1^{-1} \right \rVert_2 \leq \frac{\widetilde{\eta}}{(1 - \widetilde{\eta}) \sqrt{\tr{B}} \sqrt{a_{\min}}}, \\ \left \lVert \widehat{W}_2 - W_2 \right \rVert_2 &\leq \widetilde{\alpha} \sqrt{\tr{A}} \sqrt{b_{\max}}, \text{ and } \left \lVert \widehat{W}_2^{-1} - W_2^{-1} \right \rVert_2 \leq \frac{\widetilde{\alpha}}{(1 - \widetilde{\alpha}) \sqrt{\tr{A}} \sqrt{b_{\min}}}. \end{align*} \end{lemma} \begin{proposition} \label{sandwichBound} \citep{Zhou14a}. Let $\widehat{W}$ and $W$ be diagonal positive definite matrices. Let $\widehat{\Psi}$ and $\Psi$ be symmetric positive definite matrices. Then \begin{align*} \left \lVert \widehat{W} \widehat{\Psi} \widehat{W} - W\Psi W \right \rVert_2 &\leq \left( \left \lVert \widehat{W} - W \right \rVert_2 + \lVert W \rVert_2 \right)^2 \left \lVert \widehat{\Psi} - \Psi \right \rVert_2 \\ &\qquad + \left \lVert \widehat{W} - W \right \rVert_2 \left( \left \lVert \widehat{W} - W \right \rVert_2 + 2 \right) \lVert \Psi \rVert_2 \\ \left \lVert \widehat{W} \widehat{\Psi} \widehat{W} - W\Psi W \right \rVert_F &\leq \left( \left \lVert \widehat{W} - W \right \rVert_2 + \lVert W \rVert_2 \right)^2 \left \lVert \widehat{\Psi} - \Psi \right \rVert_F \\ &\qquad + \left \lVert \widehat{W} - W \right \rVert_2 \left( \left \lVert \widehat{W} - W \right \rVert_2 + 2 \right) \lVert \Psi \rVert_F. \end{align*} \end{proposition} \begin{proofof}{ Lemma \ref{boundCovOpFro}} Assume that event $\mathcal{X}_0$ holds.
The proof follows exactly that of Lemma 15.3 in~\cite{Zhou14a, Zhou14supp}, in view of Theorem~\ref{thm::frob}, Lemma \ref{boundW1W2errorOpFro} and Proposition 15.2 from~\cite{Zhou14a,Zhou14supp}, which is restated immediately above in Proposition~\ref{sandwichBound}. \end{proofof} \silent{ By Proposition \ref{sandwichBound} and Lemma \ref{boundW1W2errorOpFro}, \begin{eqnarray*} \delta_{A, 2} &:= & \lVert \widehat{W}_1 \widehat{A}_\rho \widehat{W}_1 / \text{tr}(B) - \text{diag}(A)^{1/2} \rho(A) \text{diag}(A)^{1/2} \rVert_2 \\ &\leq & (1 + \widetilde{\eta})^2 a_{\max} \lVert \widehat{A}_\rho - \rho(A) \rVert_2 + (\widetilde{\eta}^2 + 2 \widetilde{\eta}) a_{\max} \lVert \rho(A) \rVert_2 \\ &\leq & C \lambda_B a_{\max} \kappa(\rho(A))^2 \sqrt{|A^{-1}|_{0, \text{off}} \vee 1}, \end{eqnarray*} where we used that $\kappa(\rho(A)) \geq \lVert \rho(A) \rVert_2$. Likewise, \begin{align*} \delta_{A, F} &\leq (1 + \widetilde{\eta})^2 a_{\max} \lVert \widehat{A}_\rho - \rho(A) \rVert_F + (\widetilde{\eta}^2 + 2 \widetilde{\eta}) a_{\max} \lVert \rho(A) \rVert_F \\ &\leq C \lambda_B a_{\max} \kappa(\rho(A))^2 \sqrt{|A^{-1}|_{0, \text{off}} \vee m}.
\end{align*} For $\eta' = \widetilde{\eta}/ \sqrt{1 - \widetilde{\eta}} \leq \lambda_B / \sqrt{6}$, where $\widetilde{\eta} < 1/3$, we have by Theorem \ref{thm::frob}, Proposition \ref{sandwichBound}, and Lemma \ref{boundW1W2errorOpFro}, \begin{eqnarray*} \delta_{A, 2}^- &\leq & \frac{(1 + \eta')^2}{a_{\min}} \lVert \widehat{A}_\rho^{-1} - \rho(A)^{-1} \rVert_2 + \frac{(\eta' + 2) \eta'}{a_{\min}} \lVert \rho(A)^{-1} \rVert_2 \\ &\leq & (2C + 1) \lambda_B \sqrt{|A^{-1}|_{0, \text{off}} \vee 1} / \left( a_{\min} \varphi_{\min}^2(\rho(A)) \right), \; \text { and } \;\\ \delta_{A, F}^- &\leq & \frac{(1 + \eta')^2}{a_{\min}} \lVert \widehat{A}_\rho^{-1} - \rho(A)^{-1} \rVert_F + \frac{(\eta' + 2) \eta'}{a_{\min}} \lVert \rho(A)^{-1} \rVert_F \\ &\leq & (2C + 1) \lambda_B \sqrt{|A^{-1}|_{0, \text{off}} \vee m} / \left( a_{\min} \varphi_{\min}^2(\rho(A)) \right), \end{eqnarray*} where $9/2 < C < 9$. The bounds for $B$ can be derived analogously.} It remains to prove Lemma \ref{boundW1W2errorOpFro}. \begin{proofof}{Lemma \ref{boundW1W2errorOpFro}} Suppose that event $\mathcal{X}_0$ holds. Then \begin{equation*} \max_{i = 1, \ldots, m} \left| \frac{\sqrt{X_i^T (I - P_2)X_i}}{\sqrt{a_{ii} \operatorname{tr}(B)}} - 1 \right| \leq \left(1 - \sqrt{1 - \widetilde{\eta}} \right) \bigvee \left( \sqrt{1 + \widetilde{\eta}} - 1 \right) \leq \widetilde{\eta}. \end{equation*} Thus for all $i$, \begin{equation*} \frac{1}{\sqrt{1 + \widetilde{\eta}}} \leq \frac{ \sqrt{a_{ii} \operatorname{tr}(B)}}{\sqrt{X_i^T (I - P_2)X_i}} \leq \frac{1}{\sqrt{1 - \widetilde{\eta}}}, \end{equation*} so \begin{equation*} \left| \frac{ \sqrt{a_{ii} \operatorname{tr}(B)}}{\sqrt{X_i^T (I - P_2)X_i}} - 1 \right| \leq \left(\frac{1 - \sqrt{1 - \widetilde{\eta}}}{\sqrt{1 - \widetilde{\eta}}} \right) \bigvee \left( \frac{\sqrt{1 + \widetilde{\eta}} - 1}{\sqrt{1 + \widetilde{\eta}}} \right) \leq \frac{\widetilde{\eta}}{\sqrt{1 - \widetilde{\eta}}}. 
\end{equation*} \end{proofof} \subsection{Proof of Lemma \ref{lemma:absoluteErrorKroneckerProduct}} \label{sec::proof-intermediate-Kron-prod} In order to prove Lemma~\ref{lemma:absoluteErrorKroneckerProduct}, we state Lemma \ref{intermediateKroneckerProdInTermsDelta}, Lemma~\ref{A1KronB1minusAKronB}, and Proposition~\ref{trAtrB_estimator}. Let $\lVert \cdot \rVert$ denote a matrix norm such that $\lVert A \otimes B \rVert = \lVert A \rVert \lVert B \rVert$. Let \begin{align} \Delta &:= \widehat{W}_1 \widehat{A}_{\rho} \widehat{W}_1 \otimes \widehat{W}_2 \widehat{B}_{\rho} \widehat{W}_2 / \operatorname{tr}(A) \operatorname{tr}(B) - A \otimes B, \label{DeltaKronProd} \\ \Delta' &:= \operatorname{tr}(A) \operatorname{tr}(B) \left( \widehat{W}_1 \widehat{A}_{\rho} \widehat{W}_1\right)^{-1} \otimes \left( \widehat{W}_2 \widehat{B}_{\rho} \widehat{W}_2 \right)^{-1} - A^{-1} \otimes B^{-1}. \label{DeltaPrimeKronProd} \end{align} Lemma~\ref{intermediateKroneckerProdInTermsDelta} is identical to Lemma 15.5 of~\citet{Zhou14a}, except that we now plug in the quantities $\widetilde{\alpha}$ and $\widetilde{\eta}$ as defined in \eqref{entrywiseRateBcorr}. Likewise, Proposition \ref{trAtrB_estimator} is analogous to (20) in Theorem 4.1 of \citet{Zhou14a}, except that we now use the centered data matrix $(I - P_2) X$, together with the rates $\widetilde{\alpha}$, $\widetilde{\eta}$. \begin{lemma} \label{intermediateKroneckerProdInTermsDelta} Let $\widehat{A \otimes B}$ be as in (\ref{estimatorAKroneckerB}).
Then for $\Sigma = A \otimes B$, \begin{align} &\left \lVert \widehat{A \otimes B}^{-1} - \Sigma^{-1} \right \rVert \leq (\widetilde{\alpha} \wedge \widetilde{\eta}) \lVert A^{-1} \rVert \lVert B^{-1} \rVert + (1 + \widetilde{\alpha} \wedge \widetilde{\eta}) \lVert \Delta' \rVert \label{intermediateBoundKronInvKnownTrace} \\ &\left \lVert \widehat{A \otimes B} - \Sigma \right \rVert \leq \frac{\lambda_A \wedge \lambda_B}{2} \lVert A \rVert \lVert B \rVert + (1 + \frac{\lambda_A \wedge \lambda_B}{2}) \lVert \Delta \rVert. \label{intermediateBoundKronKnownTrace} \end{align} \end{lemma} Lemma \ref{A1KronB1minusAKronB} is a helpful bound on the difference of Kronecker products. \begin{lemma} \label{A1KronB1minusAKronB} ~\citep{Zhou14a}. For matrices $A_1$ and $B_1$, let $\Delta_A := A_1 - A$ and $\Delta_B := B_1 - B$. Then \[ \lVert A_1 \otimes B_1 - A \otimes B \rVert \leq \lVert \Delta_A \rVert \lVert B \rVert + \lVert \Delta_B \rVert \lVert A \rVert + \lVert \Delta_A \rVert \lVert \Delta_B \rVert. \] \end{lemma} \begin{proposition} \label{trAtrB_estimator} Under the event $\mathcal{X}_0$, as defined in \eqref{eq:X_0_B}, \eqref{eq:X_0_A}, \begin{equation*} \left| \lVert (I - P_2)X \rVert_F^2 - \text{\normalfont tr}(A) \text{\normalfont tr}(B) \right| \leq (\widetilde{\alpha} \wedge \widetilde{\eta}) \text{\normalfont tr}(A) \text{\normalfont tr}(B). \end{equation*} \end{proposition} \begin{proofof}{Lemma \ref{lemma:absoluteErrorKroneckerProduct}} Assume that event $\mathcal{X}_0$ as defined in (\ref{eq:X_0_B}), (\ref{eq:X_0_A}) holds. The proof follows exactly the steps in Theorems 11.1 and 11.2 in the Supplementary Material of~\cite{Zhou14a, Zhou14supp}. \end{proofof} \silent{ Let $\widehat{W}_1$ and $\widehat{W}_2$ be as defined in \eqref{W1hatW2hat}.
First we bound $\lVert \Delta' \rVert_2$ using Lemmata \ref{A1KronB1minusAKronB} and \ref{boundCovOpFro}; for $C > 10$, $C' < 19$, \begin{align} \lVert \Delta' \rVert_2 &\leq \delta_{A, 2}^- \lVert B^{-1} \rVert_2 + \lVert A^{-1} \rVert_2 \delta_{B,2}^- + \delta_{A,2}^- \delta_{B,2}^- \notag \\ &\leq \frac{C \lambda_B \sqrt{|A^{-1}|_{0, \text{off}} \vee 1}}{a_{\min} \varphi^2_{\min}(\rho(A)) \varphi_{\min}(B)} + \frac{C' \lambda_A \sqrt{|B^{-1}|_{0, \text{off}} \vee 1}}{b_{\min} \varphi^2_{\min}(\rho(B)) \varphi_{\min}(A)} \label{DeltaPrimeOpNorm} \\ &+ \frac{C C' \lambda_A \lambda_B}{a_{\min} b_{\min} \varphi^2_{\min}(\rho(A)) \varphi^2_{\min}(\rho(B)) } \sqrt{|A^{-1}|_{0, \text{off}} \vee 1} \sqrt{|B^{-1}|_{0, \text{off}} \vee 1}, \notag \end{align} Now by (\ref{evalsInequalityCorrA}), (\ref{ReciprocalMinEvalA}), (\ref{ReciprocalMinEvalB}), and for $\lVert \Delta' \rVert_2$ as bounded in (\ref{DeltaPrimeOpNorm}), \begin{align} (\widetilde{\alpha} \wedge \widetilde{\eta}) \lVert \Delta' \rVert_2 &\leq \frac{C C' \lambda_A \lambda_B \sqrt{|A^{-1}|_{0, \text{off}} \vee 1} \sqrt{|B^{-1}|_{0, \text{off}} \vee 1}}{3 a_{\min} b_{\min} \varphi^2_{\min}(\rho(A)) \varphi^2_{\min}(\rho(B))} \times \notag \\ &\left( \frac{\varphi_{\min}(\rho(B))}{C' \sqrt{|B^{-1}|_{0, \text{off}} \vee 1}} + \frac{\varphi_{\min}(\rho(A))}{C \sqrt{|A^{-1}|_{0, \text{off}} \vee 1}} + \lambda_A \wedge \lambda_B \right) \notag \\ &\leq \frac{2C C' \lambda_A \lambda_B}{5a_{\min} b_{\min} } \left( \frac{\sqrt{|A^{-1}|_{0, \text{off}} \vee 1}}{\varphi^2_{\min}(\rho(A))} \frac{\sqrt{|B^{-1}|_{0, \text{off}} \vee 1}}{ \varphi^2_{\min}(\rho(B))} \right). \label{minRateDeltaPrimeOpNorm} \end{align} Next we bound the error term $\lVert \Delta \rVert_2$. 
By Lemmata \ref{A1KronB1minusAKronB} and \ref{boundCovOpFro}, \begin{eqnarray*} \lVert \Delta \rVert_2 &= & \left \lVert \left( \frac{\widehat{W}_1}{\sqrt{\operatorname{tr}(A)}} \right) \widehat{A}_{\rho} \left( \frac{\widehat{W}_1}{\sqrt{\operatorname{tr}(A)}} \right) \otimes \left( \frac{\widehat{W}_2}{\sqrt{\operatorname{tr}(B)}} \right) \widehat{B}_{\rho} \left( \frac{\widehat{W}_2}{\sqrt{\operatorname{tr}(B)}} \right) - A \otimes B \right \rVert_2 \\ &\leq & \delta_{A, 2} \lVert B \rVert_2 + \lVert A \rVert_2 \delta_{B,2} + \delta_{A,2} \delta_{B,2} \leq C \lambda_B a_{\max} \lVert B \rVert_2 \kappa(\rho(A))^2 \times \notag \\ & & \sqrt{|A^{-1}|_{0, \text{off}} \vee 1} + C' \lambda_A b_{\max} \lVert A \rVert_2 \kappa(\rho(B))^2 \sqrt{|B^{-1}|_{0, \text{off}} \vee 1} \\ & & + C C' \lambda_A \lambda_B a_{\max} b_{\max} \kappa(\rho(A))^2 \kappa(\rho(B))^2 \sqrt{|A^{-1}|_{0, \text{off}} \vee 1} \sqrt{|B^{-1}|_{0, \text{off}} \vee 1}, \end{eqnarray*} where $18 < C, C' < 36$, and hence by (\ref{evalsInequalityCorrA}) \begin{align*} &\frac{\lambda_A + \lambda_B}{2} \lVert \Delta \rVert_2 \leq (CC' / 2) \lambda_A \lambda_B a_{\max} b_{\max} \kappa(\rho(A))^2 \kappa(\rho(B))^2 \times \\ & \left( \sqrt{|A^{-1}|_{0, \text{off}} \vee 1} \sqrt{|B^{-1}|_{0, \text{off}} \vee 1} \right) \times \left( \frac{1}{C} + \frac{1}{C'} + \lambda_A \wedge \lambda_B \right) \\ &\leq \frac{5CC'}{9} \lambda_A \lambda_B a_{\max} b_{\max} \kappa(\rho(A))^2 \kappa(\rho(B))^2 \sqrt{|A^{-1}|_{0, \text{off}} \vee 1} \sqrt{|B^{-1}|_{0, \text{off}} \vee 1}. 
\end{align*} Insert the above bound into \eqref{intermediateBoundKronKnownTrace}, and insert bounds \eqref{DeltaPrimeOpNorm} and \eqref{minRateDeltaPrimeOpNorm} into \eqref{intermediateBoundKronInvKnownTrace}, where \eqref{intermediateBoundKronInvKnownTrace} and \eqref{intermediateBoundKronKnownTrace} are from Lemma \ref{intermediateKroneckerProdInTermsDelta}.} \begin{proofof}{Lemma \ref{intermediateKroneckerProdInTermsDelta}} By the triangle inequality and the sub-multiplicativity of the norm $\lVert \cdot \rVert$, with $\Delta$ and $\Delta'$ as defined in (\ref{DeltaKronProd}) and (\ref{DeltaPrimeKronProd}), \begin{align} &\operatorname{tr}(A) \operatorname{tr}(B) \left \lVert \left( \widehat{W}_1^{-1} \widehat{A}_{\rho}^{-1} \widehat{W}_1^{-1} \right) \otimes \left( \widehat{W}_2^{-1} \widehat{B}_{\rho}^{-1} \widehat{W}_2^{-1} \right) \right \rVert \leq \lVert A^{-1} \rVert \lVert B^{-1} \rVert + \lVert \Delta' \rVert \label{BoundEstKroneckerInvDenomTrATrB} \\ &\left \lVert \left( \widehat{W}_1 \widehat{A}_{\rho} \widehat{W}_1 \right) \otimes \left( \widehat{W}_2 \widehat{B}_{\rho} \widehat{W}_2 \right) / \operatorname{tr}(A) \operatorname{tr}(B) \right \rVert \leq \lVert A \rVert \lVert B \rVert + \lVert \Delta \rVert. \label{BoundEstKroneckerDenomTrATrB} \end{align} Following proof of Lemma 15.5~\cite{Zhou14a,Zhou14supp}, we have by definition of $\Delta'$, and Proposition \ref{trAtrB_estimator}, and (\ref{BoundEstKroneckerInvDenomTrATrB}), \begin{eqnarray*} \norm{\widehat{A \otimes B}^{-1} - A^{-1} \otimes B^{-1} } &\leq & (\widetilde{\alpha} \wedge \widetilde{\eta}) \left( \lVert A^{-1} \rVert \lVert B^{-1} \rVert + \lVert \Delta' \rVert \right) + \lVert \Delta' \rVert. 
\end{eqnarray*} By Proposition \ref{trAtrB_estimator}, we have for $\lambda_A \geq 3 \widetilde{\alpha}$, $\lambda_B \geq 3 \widetilde{\eta}$, where $\widetilde{\alpha} \wedge \widetilde{\eta} \leq \frac{\lambda_A \wedge \lambda_B}{3}$, \begin{align} &\left| \frac{1}{\lVert (I - P_2) X \rVert_F^2} - \frac{1}{\operatorname{tr}(A) \operatorname{tr}(B)} \right| = \left| \frac{\lVert (I - P_2) X \rVert_F^2 - \operatorname{tr}(A) \operatorname{tr}(B)}{ \lVert (I - P_2) X \rVert_F^2 \operatorname{tr}(A) \operatorname{tr}(B) } \right| \notag\\ &\leq \left| \frac{\widetilde{\alpha} \wedge \widetilde{\eta} }{\lVert (I - P_2) X \rVert_F^2} \right| \leq \frac{\widetilde{\alpha} \wedge \widetilde{\eta}}{\operatorname{tr}(A) \operatorname{tr}(B) (1 - \widetilde{\alpha} \wedge \widetilde{\eta})} \notag \\ &\text{ thus } \left| \frac{\operatorname{tr}(A) \operatorname{tr}(B)}{\lVert (I - P_2) X \rVert_F^2} - 1 \right| \leq \frac{\widetilde{\alpha} \wedge \widetilde{\eta}}{1 - \widetilde{\alpha} \wedge \widetilde{\eta}} \leq \frac{\lambda_A \wedge \lambda_B}{2}. \label{BoundTrATrBMinusFroOverFro} \end{align} By the triangle inequality, the definition of $\Delta$ in (\ref{DeltaKronProd}), and (\ref{BoundEstKroneckerDenomTrATrB}) and (\ref{BoundTrATrBMinusFroOverFro}), \begin{eqnarray*} \left \lVert \widehat{A \otimes B} - A \otimes B \right \rVert & \le & \frac{\lambda_A + \lambda_B}{2} \lVert A \rVert \lVert B \rVert + (1 + \frac{\lambda_A + \lambda_B}{2}) \lVert \Delta \rVert; \end{eqnarray*} See the proof of Lemma 15.5~\cite{Zhou14a,Zhou14supp}. \end{proofof} \begin{proofof}{Proposition \ref{trAtrB_estimator}} Suppose event $\mathcal{X}_0$ holds. 
Note that \[ E[ \lVert (I - P_2)X \rVert_F^2 ] = \text{tr}\left((I - P_2)E[XX^T](I - P_2) \right) = \text{tr}(A) \text{tr}(\widetilde{B}) \] Decomposing by columns, we obtain the inequality, \begin{align*} & \left| \lVert (I - P_2)X \rVert_F^2 - \text{tr}(A) \text{tr}(B) \right| = \left| \sum_{j = 1}^m \lVert (I - P_2) X_j \rVert_2^2 - a_{jj} \text{tr}(B) \right| \\ &\leq \sum_{j = 1}^m \left| X_j^T (I - P_2) X_j - a_{jj} \text{tr}(B) \right| \leq \sum_{j = 1}^m \widetilde{\eta}_{jj} a_{jj} \text{tr}(B) \leq \widetilde{\eta} \text{tr}(A) \text{tr}(B). \end{align*} Decomposing by rows, we obtain the inequality, \begin{align*} & \left| \lVert (I - P_2)X \rVert_F^2 - \text{tr}(A) \text{tr}(B) \right| = \left| \sum_{i = 1}^n \lVert (e_i - p_i)^T X \rVert_2^2 - b_{ii} \text{tr}(A) \right| \\ &\leq \sum_{i = 1}^n \left| (e_i - p_i)^T XX^T (e_i - p_i) - b_{ii} \text{tr}(A) \right| \leq \sum_{i = 1}^n \widetilde{\alpha}_{ii} b_{ii} \text{tr}(A) \leq \widetilde{\alpha} \text{tr}(A) \text{tr}(B). \end{align*} Therefore $\left| \lVert (I - P_2)X \rVert_F^2 - \text{tr}(A) \text{tr}(B) \right| \leq (\widetilde{\alpha} \wedge \widetilde{\eta}) \text{tr}(A) \text{tr}(B)$. \end{proofof} \section{Entrywise convergence of sample correlations} \label{app::entrywise_sample_corr} In this section we prove entrywise rates of convergence for the sample correlation matrices in Theorem \ref{thm::large-devi-cor-multiple}. The theorem applies to the Kronecker product model, $\operatorname{Cov}(\operatorname{vec}(X)) = A^* \otimes B^*$, where for identifiability we define the sample covariance matrices as \begin{equation*} A^* = \frac{m}{\operatorname{tr}(A)}A \quad \text{ and } \quad B^* = \frac{\operatorname{tr}(A)}{m} B, \end{equation*} with the scaling chosen so that $A^*$ has trace $m$. Let $\rho(A) \in \mathbb{R}^{m \times m}$ and $\rho(B) \in \mathbb{R}^{n \times n}$ denote the correlation matrices corresponding to covariance matrices $A^*$ and $B^*$, respectively. 
Assume that that the mean of $X$ satisfies the two-group model \eqref{meanMatrixTwoGroups}. Let $P_2$ be as defined in \eqref{def:withinGroupProjection}. The matrix $I - P_2$ is a projection matrix of rank $n - 2$ that performs within-group centering. The sample covariance matrices are defined as \begin{align} S(B^*) &= \frac{1}{m} \sum_{j = 1}^m (I - P_2) X_j X_j^T (I - P_2), \label{sampleCovGroupCenter} \\ S(A^*) &= X^T(I - P_2)X / n, \label{colSampleCov} \end{align} where $S(B^*)$ has null space of dimension two. \begin{theorem} \label{thm::large-devi-cor-multiple} Consider a data generating random matrix as in (\ref{modelMeanCov}). Let $C$ be some absolute constant. Let $\widetilde{\alpha}$ and $\widetilde{\eta}$ be as defined in \eqref{entrywiseRateBcorr}. Let $m \vee n \geq 2$. Then with probability at least $1- \frac{3}{(m \vee n)^2}$, for $\widetilde{\alpha}, \widetilde{\eta} < 1/3$, and $\hat\Gamma(A)$ and $\hat\Gamma(B)$ as in (\ref{defGammaAGammaB}), \begin{eqnarray*} \nonumber & & \forall i \not=j, \; \abs{\hat\Gamma_{ij}(B) -\rho_{ij}(B)} \leq \frac{\widetilde{\alpha}}{1-\widetilde{\alpha}} + \abs{\rho_{ij}(B)} \frac{\widetilde{\alpha}}{1-\widetilde{\alpha}} \leq 3 \widetilde{\alpha}, \\ \nonumber & & \forall i \not=j, \; \abs{\hat\Gamma_{ij}(A) -\rho_{ij}(A)} \leq \frac{\widetilde{\eta}}{1-\widetilde{\eta}} +\abs{ \rho_{ij}(A)}\frac{\widetilde{\eta}}{1-\widetilde{\eta}} \leq 3 \widetilde{\eta}. \label{eq::zero-wei-bound} \end{eqnarray*} \end{theorem} We state three results used in the proof of Theorem \ref{thm::large-devi-cor-multiple}: Proposition \ref{rateConvergenceSampleCovToB} provides an entrywise rate of convergence of $S(B^*)$, Proposition \ref{rateConvergenceSampleCovToA} provides an entrywise rate of convergence of $S(A^*)$, and Lemma \ref{lemma::large-devi-rep} states that these entrywise rates imply $\mathcal{X}_0$. 
Let \begin{equation} \widetilde{B} := (I - P_2)B^*(I - P_2) = \operatorname{Cov}( (I - P_2) X_j), \end{equation} where $X_j$ is the $j$th column of $X$. Let $\widetilde{b}_{ij}$ denote the $(i, j)$th entry of $\widetilde{B}$. \begin{proposition} \label{rateConvergenceSampleCovToB} Let $d > 2$. Then with probability at least $1 - 2/m^{d - 2}$, \begin{equation} \label{eventB_cov} \forall i, j \, \left| S_{ij}(B^*) - b_{ij}^* \right| \leq \phi_{B, ij}, \end{equation} with \begin{equation} \label{tailCutpointSampleCovGroupCenToB} \phi_{B, ij} = C \frac{\log^{1/2}(m)}{\sqrt{m}} \frac{\lVert A^* \rVert_F}{\sqrt{m}} \sqrt{\widetilde{b}_{ii} \widetilde{b}_{jj}} + \frac{3 \lVert B^* \rVert_1}{n_{\min}}. \end{equation} \end{proposition} \begin{proposition} \label{rateConvergenceSampleCovToA} Let $d > 2$. Then with probability at least $1 - 2/n^{d - 2}$, \begin{equation} \label{eventA_cov} \forall i, j \, \left| S_{ij}(A^*) - a_{ij}^* \tr{B^*} / n \right| > \phi_{A, ij}, \end{equation} with \begin{equation} \label{rateA} \phi_{A, ij} = (a_{ij}^* / n) \left| \tr{\widetilde{B}} - \tr{B^*} \right| + d^{1/2} K \log^{1/2}(n \vee m) (1/n) \sqrt{a_{ij}^{*2} + a_{ii}^* a_{jj}^*} \lVert \widetilde{B} \rVert_F. \end{equation} \end{proposition} \begin{lemma} \label{lemma::large-devi-rep} Suppose that (A2) holds and that $m \vee n \geq 2$. The event \eqref{eventA_cov} defined in Proposition~\ref{rateConvergenceSampleCovToA} implies that $\mathcal{X}_0(A)$ holds. Similarly, the event \eqref{eventB_cov} defined in Proposition~\ref{rateConvergenceSampleCovToB} implies $\mathcal{X}_0(B)$. Hence $\prob{{\mathcal X}_0} \geq 1- \frac{3}{(m \vee n)^2}$. \end{lemma} Proposition \ref{rateConvergenceSampleCovToB} is proved in section \ref{rateConvergenceGroupCenterGemini:Bhat:proof}. Proposition \ref{rateConvergenceSampleCovToA} is proved in section \ref{rateConvergenceGroupCenterGemini:Ahat:proof}. Lemma \ref{lemma::large-devi-rep} is proved in section \ref{proofSampleCorrRate}. 
Note that Lemma \ref{lemma::large-devi-rep} follows from Propositions \ref{rateConvergenceSampleCovToB} and \ref{rateConvergenceSampleCovToA}. We now prove Theorem \ref{thm::large-devi-cor-multiple}, which follows from Lemma \ref{lemma::large-devi-rep}. \begin{proofof}{Theorem \ref{thm::large-devi-cor-multiple}} Let $q_i$ denote the $i$th column of $I - P_2$, so that $q_i^T XX^T q_j$ is the $(i, j)$th entry of $(I - P_2)XX^T(I - P_2)$. Under $\mathcal{X}_0(B)$, the sample correlation $\widehat{\Gamma}(B)$ satisfies the following bound: \begin{eqnarray*} \left| \widehat{\Gamma}_{ij}(B) - \rho_{ij}(B) \right| &= & \left | \frac{q_i^T XX^T q_j}{\sqrt{q_i^T XX^T q_i} \sqrt{q_j^T XX^T q_j}} - \rho_{ij}(B) \right | \\ &= &\left | \frac{q_i^T XX^T q_j / \left( \text{tr}(A^*) \sqrt{b_{ii}^* b_{jj}^* } \right)}{\sqrt{q_i^T XX^T q_i / \left( b_{ii}^* \text{tr}(A^*) \right)} \sqrt{q_j^T XX^T q_j / \left( b_{jj}^* \text{tr}(A^*) \right)} } - \rho_{ij}(B) \right | \\ &\le & \left | \frac{q_i^T XX^T q_j / \left( \text{tr}(A^*) \sqrt{b_{ii}^* b_{jj}^* } \right) - \rho_{ij}(B)}{\sqrt{q_i^T XX^T q_i / \left( b_{ii}^* \text{tr}(A^*) \right)} \sqrt{q_j^T XX^T q_j / \left( b_{jj}^* \text{tr}(A^*) \right)} } \right | \\ &+ & \left | \frac{ \rho_{ij}(B)}{\sqrt{q_i^T XX^T q_i / \left( b_{ii}^* \text{tr}(A^*) \right)} \sqrt{q_j^T XX^T q_j / \left( b_{jj}^* \text{tr}(A^*) \right)} } - \rho_{ij}(B) \right | \\ &\leq & \frac{\widetilde{\alpha} }{1 - \widetilde{\alpha} } + |\rho_{ij}(B)| \left| \frac{1}{1 - \widetilde{\alpha}} - 1 \right| \\ &\leq & 3 \widetilde{\alpha}, \end{eqnarray*} where the first inequality holds by $\mathcal{X}_0(B)$ and the second inequality holds for $\widetilde{\alpha} \leq 1/3$. 
Similarly, under $\mathcal{X}_0(A)$ we obtain an entrywise bound on the sample correlation $\widehat{\Gamma}(A)$: \begin{eqnarray*} \left| \widehat{\Gamma}_{ij}(A) - \rho_{ij}(A) \right| &= & \left | \frac{X_i^T (I - P_2) X_j}{\sqrt{X_i^T (I - P_2) X_i} \sqrt{X_j^T (I - P_2) X_j}} - \rho_{ij}(A) \right | \\ &= & \left | \frac{X_i^T (I - P_2) X_j / \left( \text{tr}(B^*) \sqrt{a_{ii}^* a_{jj}^* } \right)}{\sqrt{X_i^T (I - P_2) X_i / \left( a_{ii}^* \text{tr}(B^*) \right)} \sqrt{X_j^T (I - P_2) X_j / \left( a_{jj}^* \text{tr}(B^*) \right)} } - \rho_{ij}(A) \right | \\ &\le & \left | \frac{X_i^T (I - P_2) X_j / \left( \text{tr}(B^*) \sqrt{a_{ii}^* a_{jj}^* } \right) - \rho_{ij}(A)}{\sqrt{X_i^T (I - P_2) X_i / \left( a_{ii}^* \text{tr}(B^*) \right)} \sqrt{X_j^T (I - P_2) X_j / \left( a_{jj}^* \text{tr}(B^*) \right)} } \right | \\ &+ & \left | \frac{ \rho_{ij}(A)}{\sqrt{X_i^T (I - P_2) X_i / \left( a_{ii}^* \text{tr}(B^*) \right)} \sqrt{X_j^T (I - P_2) X_j / \left( a_{jj}^* \text{tr}(B^*) \right)} } - \rho_{ij}(A) \right | \\ &\leq & \frac{\widetilde{\eta} }{1 - \widetilde{\eta} } + |\rho_{ij}(A)| \left| \frac{1}{1 - \widetilde{\eta}} - 1 \right| \leq 3 \widetilde{\eta}, \end{eqnarray*} where the first inequality holds by $\mathcal{X}_0(A)$, and the second inequality holds for $\widetilde{\eta} < 1/3$. By Lemma~\ref{lemma::large-devi-rep}, the event $\mathcal{X}_0 = \mathcal{X}_0(B) \cap \mathcal{X}_0(A)$ holds with probability at least $1 - 3 / (n \vee m)^2$, which completes the proof. \end{proofof} \subsection{Proof of Proposition \ref{rateConvergenceSampleCovToB}} \label{rateConvergenceGroupCenterGemini:Bhat:proof} We first present Lemma~\ref{lemma::BiasGroupCentering} and Lemma~\ref{SampleCovProjectionToB}, which decompose the rate of convergence into a bias term and a variance term, respectively. We then combine the rates for the bias and variance terms to prove the entrywise rate of convergence for the sample covariance. 
Define \begin{align} \mathcal{B}(B^*) &:= E[S(B^*)] - B^* \quad \text{ and } \label{BiasMatrixB} \\ \sigma(B^*) &:= S(B^*) - E[S(B^*)]. \label{VarianceMatrixB} \end{align} We state maximum entrywise bounds on $\mathcal{B}(B^*)$ and $\sigma(B^*)$ in Lemma \ref{lemma::BiasGroupCentering} and Lemma \ref{SampleCovProjectionToB}, respectively. Proofs for these lemmas are provided in Section~\ref{sec::proofofBiasGroupCentering} and~\ref{sec::proofofSampleCovProjectionToB} respectively. \begin{lemma} \label{lemma::BiasGroupCentering} For $\mathcal{B}(B^*)$ as defined in \eqref{BiasMatrixB}, \begin{equation} \lVert \mathcal{B}(B^*) \rVert_{\max} \leq \frac{3 \lVert B^* \rVert_1}{n_{\min}}. \end{equation} \end{lemma} \begin{lemma} \label{SampleCovProjectionToB} Let $\sigma(B^*)$ be as defined in \eqref{VarianceMatrixB}. With probability at least $1 - 2 / m^d$, \[ |\sigma_{ij}(B^*)| = \left| S_{ij}(B^*) - b_{ij}^* \right| < C \log^{1/2}(m) \frac{\lVert A^* \rVert_F}{\operatorname{tr}(A^*)} \sqrt{\widetilde{b}_{ii} \widetilde{b}_{jj}}. \] \end{lemma} We now prove the entrywise rate of convergence for the sample covariance $S(B^*)$. \begin{proofof}{Proposition \ref{rateConvergenceSampleCovToB}} By the triangle inequality, \begin{align*} \left| S_{ij}(B^*) - b_{ij}^* \right| &\leq \left| S_{ij}(B^*) - E[S_{ij}(B^*)] \right| + \left|E[S_{ij}(B^*)] - b_{ij}^* \right| \\ &= |\mathcal{B}_{ij}(B^*)| + |\sigma_{ij}(B^*)| \\ &\leq \phi_{B, ij}, \end{align*} where the last step follows from Lemmas \ref{lemma::BiasGroupCentering} and \ref{SampleCovProjectionToB}. \end{proofof} \textbf{Remark.} Note that the first term of (\ref{tailCutpointSampleCovGroupCenToB}) is of order $\log^{1/2}(m) / \sqrt{m}$, and the second term is of order $\lVert B^* \rVert_1 / n_{\min}$. 
\subsection{Proof of Proposition \ref{rateConvergenceSampleCovToA}} \label{rateConvergenceGroupCenterGemini:Ahat:proof} \begin{proofof2} We express the $(i, j)$th entry of $S(A^*)$ as a quadratic form in order to apply the Hanson-Wright inequality to obtain an entrywise large deviation bound. Without loss of generality, let $i = 1$, $j = 2$. The $(1, 2)$ entry of $S(A^*)$ can be expressed as a quadratic form, as follows, \begin{align*} S_{12}(A^*) &= X_1^T (I - P_2) X_2 / n \\ &= (1/2) \begin{bmatrix} X_1^T & X_2^T \end{bmatrix} \begin{bmatrix} 0 & (I - P_2) \\ (I - P_2) & 0 \end{bmatrix} \begin{bmatrix} X_1 \\ X_2 \end{bmatrix} / n \\ &= (1/2) \begin{bmatrix} X_1^T & X_2^T \end{bmatrix} \left( \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \otimes (I - P_2) \right) \begin{bmatrix} X_1 \\ X_2 \end{bmatrix} / n. \end{align*} We decorrelate the random vector $(X_1, X_2) \in \mathbb{R}^{2n}$ so that we can apply the Hanson-Wright inequality. The covariance matrix used for decorrelation is \[ \text{Cov}\left( \begin{bmatrix} X_1 \\ X_2 \end{bmatrix} \right) = \begin{bmatrix} a_{11}^* & a_{12}^* \\ a_{21}^* & a_{22}^* \end{bmatrix} \otimes B^* =: A_{\{1, 2\}}^* \otimes B^*, \] with \[ A_{\{1, 2\}}^* = \begin{bmatrix} a_{11}^* & a_{12}^* \\ a_{21}^* & a_{22}^* \end{bmatrix} \in \mathbb{R}^{2 \times 2}. \] Decorrelating the quadratic form yields \[ S_{12}(A^*) = Z^T \Phi Z, \] where $Z \in \mathbb{R}^{2n}$, with $E[Z] = 0$ and $\text{Cov}(Z) = I_{2n \times 2n}$, and \begin{equation} \label{matrixHWforA} \Phi = (1 / 2n) \left( (A_{\{1, 2\}}^*)^{1/2} \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} (A_{\{1, 2\}}^*)^{1/2} \right) \otimes B^{1/2}(I - P_2)B^{1/2}. \end{equation} To apply the Hanson-Wright inequality, we first find the trace and Frobenius norm of $\Phi$. 
For the trace, note that \begin{equation} \text{tr}\left( (A_{\{ 1, 2 \}}^*)^{1/2} \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} (A_{\{ 1, 2 \}}^*)^{1/2} \right) = \text{tr}\left( \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} A_{\{ 1, 2 \}}^* \right) = 2a_{12}^*. \end{equation} For the Frobenius norm, note that \begin{align*} \left \lVert (A_{\{ 1, 2 \}}^*)^{1/2} \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} (A_{\{ 1, 2 \}}^*)^{1/2} \right \rVert_F^2 &= \text{tr}\left( \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} A_{\{ 1, 2 \}}^* \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} A_{\{ 1, 2 \}}^* \right) \\ &= \text{tr}\left( \begin{bmatrix} a_{12}^{*2} + a_{11}^* a_{22}^* & 2 a_{12}^* a_{22}^* \\ 2 a_{12}^* a_{22}^* & a_{12}^{*2} + a_{11}^* a_{22}^* \end{bmatrix} \right) \\ &= 2 a_{12}^{*2} + 2a_{11}^* a_{22}^*, \end{align*} Therefore the trace of $\Phi$ is \begin{equation} \tr{\Phi } = a_{12}^* \tr{\widetilde{B}} / n, \end{equation} and the Frobenius norm of $\Phi$ is \begin{equation} \lVert \Phi \rVert_F = (1/n) \sqrt{a_{12}^{*2} + a_{11}^* a_{22}^*} \lVert \widetilde{B} \rVert_F. \end{equation} Applying the Hanson-Wright inequality yields \begin{align*} &P\left( \left| S_{12}(A^*) - a_{12}^* \tr{B^*} / n \right| > \phi_{A, 12} \right) \\ &\leq P\left( \left| S_{12}(A^*) - a_{12}^* \tr{\widetilde{B}} / n \right| + (a_{12}^* / n) \left| \tr{\widetilde{B}} - \tr{B^*} \right| > \phi_{A, 12} \right) \\ &= P\left( \left| S_{12}(A) - a_{12}^* \tr{\widetilde{B}} / n \right| > d^{1/2} K \log^{1/2}(n \vee m) \lVert \Phi \rVert_F \right) \\ &\leq 2/ (n \vee m)^d. \end{align*} By the union bound, \begin{eqnarray*} \lefteqn{P\left( \forall i, j \left| S_{ij}(A^*) - a_{ij} \tr{B^*} / n \right| < \phi_{A, ij} \right) } \\ &\geq & 1 - \sum_{i = 1}^m \sum_{j = 1}^m P\left( \left| S_{ij}(A^*) - a_{ij} \tr{B^*} / n \right| > \phi_{A, ij} \right) \\ &\geq & 1 - 2m^2 / (n \vee m)^d \geq 2/(n \vee m)^{d - 2}. 
\end{eqnarray*} \end{proofof2} \subsection{Proof of Lemma \ref{lemma::large-devi-rep}} \label{proofSampleCorrRate} \begin{proofof2} For the event \eqref{eventB_cov} from Proposition~\ref{rateConvergenceSampleCovToB}, \begin{equation*} \left| S_{ij}(B^*) - b_{ij}^* \right| < \phi_{B, ij} = K^2 d \frac{\log^{1/2}(m)}{\sqrt{m}} C_A \sqrt{\widetilde{b}_{ii} \widetilde{b}_{jj}} + \left| b_{ij}^* - \widetilde{b}_{ij} \right|, \end{equation*} dividing by $\sqrt{b_{ii}^* b_{jj}^*}$ yields \begin{align} \label{intermediateX0B} \left| \frac{q_i XX^T q_j}{\text{tr}(A^*) \sqrt{b_{ii}^* b_{jj}^* }} - \rho_{ij}(B) \right| < K^2 d C_A \frac{\log^{1/2}(m) }{\sqrt{m}} \sqrt{ \frac{\widetilde{b}_{ii} \widetilde{b}_{jj}}{b_{ii}^* b_{jj}^*} } + \frac{\left| b_{ij} - \widetilde{b}_{ij} \right|}{\sqrt{ b_{ii}^* b_{jj}^* }}. \end{align} By Lemma \ref{lemma::BiasGroupCentering}, \[ \widetilde{b}_{ij} = b_{ij}\left[ 1 + O\left( \frac{\lVert B \rVert_1}{n} \right) \right], \] so the right-hand side of \eqref{intermediateX0B} is less than or equal to $\widetilde{\alpha}$. Hence event $\eqref{eventB_cov}$ implies ${\mathcal X}_0(B)$. Therefore, we know that $P(\mathcal{X}_0(B)) \geq 1 - 2/m^{d-2}$. 
Similarly, event \eqref{eventA_cov} in Proposition~\ref{rateConvergenceSampleCovToA}: \begin{align*} &\left| S_{ij}(A^*) - a_{ij}^* \tr{B^*} / n \right| < \phi_{A, ij} \\ &= (a_{ij}^* / n) \left| \tr{\widetilde{B}} - \tr{B} \right| + d^{1/2} K \log^{1/2}(n \vee m) (1/n) \sqrt{a_{ij}^{*2} + a_{ii}^* a_{jj}^*} \lVert \widetilde{B} \rVert_F, \end{align*} implies that \begin{align*} &\left| \frac{X_j^T (I - P_2) X_t}{\text{tr}(B^*) \sqrt{a_{jj}^* a_{tt}^* }} - \rho_{jt}(A) \right| \\ &< |\rho_{jt}(A)| \frac{\left| \tr{\widetilde{B}} - \tr{B^*} \right|}{\text{tr}(B^*)} + d^{1/2} K \log^{1/2}(n \vee m) \sqrt{\rho_{jt}(A)^2 + 1} \frac{\lVert \widetilde{B} \rVert_F}{\text{tr}(B^*)} \\ &= |\rho_{jt}(A)| \frac{\left| \tr{\widetilde{B}} - \tr{B^*} \right|}{\text{tr}(B^*)} + d^{1/2} K C_{B} \frac{\lVert \widetilde{B} \rVert_F}{\lVert B^* \rVert_F} \sqrt{\rho_{jt}(A)^2 + 1} \frac{\log^{1/2}(n \vee m)}{\sqrt{n}} \\ &\leq \widetilde{\eta}, \end{align*} which is the event ${\mathcal X}_0(A)$. Therefore, we get that $P({\mathcal X}_0(A)) \geq 1 - 2/(n \vee m)^d$. We can obtain the $P({\mathcal X}_0)$ by using a union bound put together $P({\mathcal X}_0(B))$ and $P({\mathcal X}_0(A))$, completing the proof. \end{proofof2} \subsection{Proof of Lemma \ref{lemma::BiasGroupCentering}} \label{sec::proofofBiasGroupCentering} \begin{proofof2} Recall that $\widetilde{B} = (I - P_2)B^*(I - P_2)$. The matrix $\widetilde{B} - B^*$ can be expressed as \[ \widetilde{B} - B^* = (I - P_2)B^*(I - P_2) - B^* = - P_2 B^* - B^* P_2 + P_2 B^* P_2. \] By the triangle inequality, $\lVert \widetilde{B} - B^* \rVert_{\max} \leq \lVert P_2 B^* \rVert_{\max} + \lVert B^* P_2 \rVert_{\max} + \lVert P_2 B^* P_2 \rVert_{\max}$. We bound each term on the right-hand side. First we bound $\lVert P_2 B^* \rVert_{\max}$ and $\lVert B^* P_2 \rVert_{\max}$. Let $p_i$ denote the $i$th column of $P_2$. 
The $(i,j)$th entry satisfies \[ |p_i^T b_j^*| \leq \lVert B^*p_i \rVert_\infty \leq \lVert B^* \rVert_\infty \lVert p_i \rVert_\infty = \lVert B^* \rVert_1 \lVert p_i \rVert_\infty = \lVert B^* \rVert_1 / n_{\min}, \] so $\lVert P_2 B^* \rVert_{\max} \leq \lVert B^* \rVert_1 / n_{\min}$. Because $P_2$ and $B^*$ are symmetric, $\lVert P_2 B^* \rVert_{\max} = \lVert B^* P_2 \rVert_{\max}$. We now bound $\lVert P_2 B^* P_2 \rVert_{\max}$. Let $B^{1/2}$ denote the symmetric square root of $B^*$. We can express $p_i^T B^* p_j$ as an inner product $( B^{1/2} p_i)^T (B^{1/2} p_j)$, so \begin{align} |(P_2 B^* P_2)_{ij}| &= | ( B^{1/2} p_i)^T (B^{1/2} p_j) | \leq \left( p_i^T B^* p_i \right)^{1/2} \left( p_j^T B^* p_j \right)^{1/2} \label{applyCauchySchwarzPBP} \\ &\leq \lVert p_i \rVert_2 \lVert p_j \rVert_2 \lVert B \rVert_2 \leq \lVert B^* \rVert_2 / n_{\min}, \label{boundPBP} \end{align} where \eqref{applyCauchySchwarzPBP} follows from the Cauchy Schwarz inequality, and \eqref{boundPBP} holds because \[ \lVert p_i \rVert_2 = \begin{cases} 1 / \sqrt{n_1} & \text{if $i \in \{ 1, \ldots, n_1 \}$} \\ 1 / \sqrt{n_2} & \text{if $i \in \{ n_1 + 1, \ldots, n \}$}. \end{cases} \] \end{proofof2} \subsection{Proof of Lemma~\ref{SampleCovProjectionToB}} \label{sec::proofofSampleCovProjectionToB} \begin{proofof2} Let $B^{1/2}$ denote the symmetric square root of $B^*$. Let $Z_j = (a_{jj}^*B^*)^{-1/2}X_j$. We express $S_{ij}(B^*)$ as a quadratic form in order to use the Hanson-Wright inequality to prove a large deviation bound. That is, we show that $S_{ij}(B^*) = \operatorname{vec}(Z)^T \Phi^{ij} \operatorname{vec}(Z)$, with \begin{eqnarray} \label{HWMatrixSampleCovProjection} \Phi^{ij} & =& (1/m)A^* \otimes B^{1/2} (e_j - p_j) (e_i - p_i)^T B^{1/2}. 
\end{eqnarray} We express $S_{ij}(B^*)$ as a quadratic form, as follows: \begin{eqnarray*} S_{ij}(B^*) &= & \frac{1}{m} \sum_{k = 1}^m (e_i - p_i)^T X_k X_k^T (e_j - p_j) = \frac{1}{m} \sum_{k = 1}^m \operatorname{tr}\left[ (e_i - p_i)^T X_k X_k^T (e_j - p_j) \right] \\ &= & \frac{1}{m} \sum_{k = 1}^m X_k^T (e_j - p_j)(e_i - p_i)^T X_k \\ &= & \frac{1}{m} \operatorname{vec}(X)^T \left( I_{m \times m} \otimes (e_j - p_j)(e_i - p_i)^T \right) \operatorname{vec}(X) \\ &= & \operatorname{vec}(Z)^T \Phi^{ij} \operatorname{vec}(Z) \end{eqnarray*} where \begin{eqnarray} \label{TraceHWMatrixSampleCovProjection} \operatorname{tr}(\Phi^{ij}) &= & \operatorname{tr}(B^{1/2}(e_j - p_j) (e_i - p_i)^T B^{1/2}) = (e_i - p_i)^T B^* (e_j - p_j) = \widetilde{b}_{ij}, \\ \label{FroHWMatrixSampleCovProjection} \lVert \Phi^{ij} \rVert_F &= &\frac{1}{m} \lVert A^* \rVert_F \lVert B^{1/2} (e_j - p_j) (e_i - p_i)^T B^{1/2} \rVert_F \\ \nonumber &= & \frac{1}{m} \lVert A^* \rVert_F \left((e_i - p_i)^T B^* (e_i - p_i) \right)^{1/2} \left((e_j - p_j)^T B^* (e_j - p_j) \right)^{1/2} = \frac{1}{m} \lVert A^* \rVert_F \sqrt{\widetilde{b}_{ii} \widetilde{b}_{jj}}. \end{eqnarray} Therefore, we get that \begin{eqnarray*} \lefteqn{ P\left( \forall i, j \, \left| S_{ij}(B^*) - \widetilde{b}_{ij} \right| \leq K^2 d \log^{1/2}(m) \lVert \Phi^{ij} \rVert_F / c' \right)} \\ &= & P\left( \forall i, j \, \left| \operatorname{vec}(Z)^T \Phi^{ij} \operatorname{vec}(Z) - \tr{\Phi^{ij}} \right| \leq K^2 d \log^{1/2}(m) \lVert \Phi^{ij} \rVert_F / c'\right)\\ &\geq & 1 - 2 m^2 \exp\left( -c \min\left( d^2 \log(m) / c'^2, \frac{d \log^{1/2}(m) \lVert \Phi^{ij} \rVert_F / c' }{\lVert \Phi^{ij} \rVert_2} \right) \right) \\ &\geq & 1 - 2 / m^{d - 2}. 
\label{ProbEntrywiseErrorBTilde} \end{eqnarray*} If the event $\left\{\forall i, j \, \left| S_{ij}(B^*) - \widetilde{b}_{ij} \right| \leq K^2 d \log^{1/2}(m) \lVert \Phi^{ij} \rVert_F / c'\right\}$ holds, it follows that \begin{eqnarray*} \left| S_{ij}(B^*) - b_{ij}^* \right| \leq \left| S_{ij}(B^*) - \widetilde{b}_{ij} \right| + | b_{ij}^* - \widetilde{b}_{ij} | \leq K^2 d \log^{1/2}(m) \lVert \Phi^{ij} \rVert_F / c' + | b_{ij} - \widetilde{b}_{ij} |. \label{equivalentToX0B} \end{eqnarray*} The Lemma is thus proved. \end{proofof2} \section{Genomic study of ulcerative colitis} \label{sec::UCData} Ulcerative colitis (UC) is a chronic form of inflammatory bowel disease (IBD), resulting from inappropriate immune cell infiltration of the colon. As part of an effort to better understand the molecular pathology of UC, \citet{lepage2011twin} reported on a study of mRNA expression in biopsy samples of the colon mucosal epithelium, with the aim of being able to identify gene transcripts that are differentially expressed between people with UC and healthy controls. The study subjects were discordant identical twins, that is, monozygotic twins such that one twin has UC and the other does not. This allows us to simultaneously explore dependences among samples (both within and between twins), dependences among genes, and mean differences between the UC and non-UC subjects. The data set is available on the Gene Expression Omnibus, GEO accession GDS4519 \citep{edgar2002gene}. The data consist of 10 discordant twin pairs, for a total of 20 subjects. Each subject's biopsy sample was assayed for mRNA expression, using the Affymetrix UG 133 Plus 2.0 array, which has 54,675 distinct transcripts. Previous analyses of this data did not consider twin correlations or unanticipated non-twin correlations, and used very different methodology (e.g.\ Wilcoxon testing). Roughly $70$ genes were found to be differentially expressed \citep{lepage2011twin}. 
We applied our Algorithm 2 to the UC genomics data as follows. First we selected the $2000$ most variable genes based on marginal variance and then rescaled each gene to have unit marginal variance. We then applied step 1 of Algorithm 2, setting $\lambda = 0.1 \approx 0.5 \left( \sqrt{\frac{\log(m)}{m}} + \frac{3}{n} \right)$, with $m = 2000$ and $n = 20$. For step 2 of the algorithm, we ranked the estimated mean differences, group centered the top ten, and globally centered the remaining genes. We then re-calculated the Gram matrix $S_B$ using the centered data. In step 3, following the Gemini approach, we applied the GLasso to $S_B$ using a regularization parameter $\lambda \approx 0.25(\sqrt{\log(m) / m} + 3/n)$. We obtain estimated differences in means and test statistics via steps 4 through 6. \begin{figure}[h!] \includegraphics[width=0.95\textwidth]{corrBplot} \caption{Estimated person-person correlation matrix and its inverse, estimated using the $2000$ genes with largest marginal variance.} \label{fig::corrBHat} \end{figure} A natural analysis of these data using more standard methods would be a paired t-test for each mRNA transcript (paired by twin pair). Such an approach is optimized for the situation where there is a constant level of correlation within all of the twin pairs, with no non-twin correlations. However as in Efron (2008), we wish to accommodate unexpected correlations, which in this case would be correlations between non-twin subjects or a lack of correlation between twin subjects. Our approach, developed in Section~\ref{sec::covEstimation}, does not require pre-specification or parameterization of the dependence structure, thus we were able to consider twin and non-twin correlations simultaneously. Lepage et al. note that UC has lower heritability than other forms of IBD. If UC has a relatively stronger environmental component, this could explain the pattern of correlations that we uncovered, as shown in Figure~\ref{fig::corrBHat}. 
The samples are ordered so that twins are adjacent, corresponding to 2 by 2 diagonal blocks. The penalized inverse sample correlation matrix contains nonzero entries both within twin pairs and between twin pairs. To also handle these unexpected non-twin correlations, we performed testing using Algorithm 2. We found only a small amount of evidence for differential gene expression between the UC and non-UC subjects. Four of the adjusted p-values fell below a threshold of $0.1$, using the Benjamini-Hochberg adjustment; that is, four genes satisfied $2000 \widehat{p}_{(i)} / i < 0.1$, where $\widehat{p}_{(i)}$ is the $i^{th}$ order statistic of the p-values calculated using Algorithm 2, for $i = 1, \ldots, 2000$. Based on our theoretical and simulation work showing that our procedure can successfully recover and accommodate dependence among samples, we argue that this is a more meaningful representation of the evidence in the data for differential expression compared to methods that do not adapt to dependence among samples. Specifically, in Section~\ref{subsec::calib_test_stats} we demonstrate that our test statistics are properly calibrated and as a result have weaker (but more accurate) evidence for differential expression results. Below we argue that the sample-wise correlations detected by our approach would be expected to artificially inflate the evidence for differential expression. \subsection{Calibration of test statistics} \label{subsec::calib_test_stats} As noted above, based on the test statistics produced by Algorithm 2, we find evidence for only a small number of genes being differentially expressed. This conclusion, however, depends on the test statistics conforming to the claimed null distribution whenever the group-wise means are equal. In this section, we consider this issue in more detail. 
\begin{figure}[tb] \centering \includegraphics[width=0.9\textwidth]{qqdataplot} \caption{Quantile plots of test statistics for three disjoint gene sets, each consisting of $2000$ genes. The genes are partitioned based on marginal variance. GLS statistics are taken from step 5 of Algorithm 2; in step 2, the ten genes with greatest mean differences are selected for group centering.} \label{fig::qqdata} \end{figure} The first plot of Figure \ref{fig::qqdata} compares the empirical quantiles of $\Phi^{-1}(T_j)$ to the corresponding quantiles of a standard normal distribution, where $\Phi$ is the standard normal cdf and the $T_j$s are as defined in \eqref{unpairedTstat}. Plots 2 and 3 show the same information for successive non-overlapping blocks of two thousand genes sorted by marginal variance. Since this is a discordant twins study, we also show results for the standard paired t statistics, pairing by twin. In all cases, the paired and unpaired statistics are more dispersed relative to the reference distribution. By contrast, the central portion of the GLS test statistics coincide with the reference line. Overdispersion of test statistics throughout their range is often taken to be evidence of miscalibration~\citep{devlin1999genomic}. In this setting the GLS statistics are calibrated correctly under the null hypothesis, but the paired and unpaired t statistics are not. \subsection{Stability of gene sets} The motivation of our Algorithm 2 is that in many practical settings a relatively small fraction of variables may have differential means, and therefore it is advantageous to avoid centering variables presenting no evidence of a strong mean difference. Here we assess the stability of the estimated mean differences as we vary the number of group centered genes in Algorithm 2. To do so, we successively group center fewer genes, globally centering the remaining genes. The iterative process is as follows. 
Let $\hat{B}^{-1}_{(i)} \in \mathbb{R}^{n\times n}$ denote the estimate of $B^{-1}$ at iteration $i$, let $\hat{\beta}_{(i)} \in \mathbb{R}^{2 \times m}$ denote the estimate of the group means $\beta$ on the $i$th iteration, let $\hat{\gamma}_{(i)} \in {\mathbb R}^m$ denote the vector of differences in group means between the two groups, and let $\hat{\mu}_{(i)} \in \mathbb{R}^m$ denote the vector of global mean estimates. Let $\hat{\mu}(B^{-1}) \in \mathbb{R}^m$ denote the result of applying GLS with design matrix $D = 1_n$ to estimate the global means. Initialize $\hat{\beta}_{(1)}$, $\hat{\mu}_{(1)}$, and $\hat{\gamma}_{(1)}$ using the sample means. On the $i$th iteration, \begin{enumerate} \item Rank the genes according to $|\hat{\gamma}_{(i-1)}|$. Center the highest-ranked $n_i'$ genes around $\hat{\beta}_{(i-1)}$. Center the remaining genes around $\hat{\mu}_{(i-1)}$. \item Obtain $\hat{B}^{-1}_{(i)}$ by applying GLasso to the centered data matrix from step 1. \item Set $\hat{\beta}_{(i)} = \hat{\beta}(\hat{B}^{-1}_{(i)})$, $\hat{\mu}_{(i)} = \hat{\mu}(\hat{B}^{-1}_{(i)})$, and $\hat{\gamma}_{(i)} = (1, -1) \hat{\beta}_{(i)}$. \end{enumerate} We assess the stability of the mean estimates by comparing the rankings of the genes across iterations of the algorithm. Table \ref{topTenGenesInCommon} displays the number of genes in common among the top ten genes on each pair of iterations of the algorithm. For example, three genes ranked in the top ten on the first iteration of the algorithm are also ranked in the top ten on the last iteration. Iterations six through nine produce the same ranking of the top ten genes. Three genes are ranked among the top ten on every iteration of the algorithm: DPP10-AS1, OLFM4, and PTN. Web Supplement Table \ref{StabilitySimulationUC} shows simulations confirming these results. \begin{table}[t] \centering \caption{Each iteration $k$ of the algorithm produces a ranking of all $2000$ genes.
For the top ten genes on each iteration, entry $(i, j)$ of the table shows the number of genes in common between iterations $i$ and $j$ of the algorithm. Note that the maximum possible value for any entry of the table is $10$; if entry $(i, j)$ is $10$, then iterations $i$ and $j$ selected the same top ten genes.} \label{topTenGenesInCommon} \begin{tabular}{r|rrrrrrrrr} \hline & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ \hline 1 & 10 & 10 & 7 & 5 & 5 & 3 & 3 & 3 & 3 \\ 2 & 10 & 10 & 7 & 5 & 5 & 3 & 3 & 3 & 3 \\ 3 & 7 & 7 & 10 & 6 & 5 & 3 & 3 & 3 & 3 \\ 4 & 5 & 5 & 6 & 10 & 8 & 5 & 5 & 5 & 5 \\ 5 & 5 & 5 & 5 & 8 & 10 & 7 & 7 & 7 & 7 \\ 6 & 3 & 3 & 3 & 5 & 7 & 10 & 10 & 10 & 10 \\ 7 & 3 & 3 & 3 & 5 & 7 & 10 & 10 & 10 & 10 \\ 8 & 3 & 3 & 3 & 5 & 7 & 10 & 10 & 10 & 10 \\ 9 & 3 & 3 & 3 & 5 & 7 & 10 & 10 & 10 & 10 \\ \hline \end{tabular} \end{table} \subsection{Stability analysis} Table \ref{fdrStability} shows the number of genes that fall below an FDR threshold of $0.1$ on each iteration, for several values of the GLasso penalty $\lambda$. The number of genes below the threshold is more sensitive to the number of group-centered genes than to the GLasso penalty parameter. This is consistent with the first plot of Web Supplement Figure~\ref{fig::inference}, where the design effect (in the denominator of the test statistics) is likewise more sensitive to the number of group centered genes than to the GLasso penalty. When fewer than $128$ genes are group centered, the number of genes below an FDR threshold of $0.1$ is stable across the penalty parameters from $\lambda = 0.1$ to $\lambda = 0.8$. \begin{table}[tb] \centering \caption{This table shows the number of genes that are significant at an FDR level of $0.1$ on each iteration of the algorithm, for different values of the GLasso penalty $\lambda$.
The top row shows the number of genes group centered on each iteration.} \label{fdrStability} \begin{tabular}{|r|rrrrrrrrr|} \hline n.group & 2000 & 1024 & 512 & 256 & 128 & 64 & 32 & 16 & 8 \\ \hline $\lambda = 0.1$ & 1006 & 913 & 327 & 14 & 3 & 1 & 1 & 1 & 1 \\ $\lambda = 0.2$ & 865 & 806 & 262 & 2 & 1 & 1 & 1 & 1 & 0 \\ $\lambda = 0.3$ & 778 & 789 & 303 & 3 & 1 & 1 & 0 & 0 & 0 \\ $\lambda = 0.4$ & 706 & 774 & 452 & 3 & 1 & 0 & 0 & 0 & 0 \\ $\lambda = 0.6$ & 657 & 751 & 587 & 19 & 1 & 1 & 0 & 0 & 0 \\ $\lambda = 0.8$ & 628 & 699 & 493 & 30 & 1 & 1 & 1 & 1 & 1 \\ \hline \end{tabular} \end{table} \section*{Outline} We provide additional simulation and data analysis results in Sections \ref{sec::simulationAppend} and \ref{sec::dataAppend}. We state some preliminary results and notation in Section \ref{sec::apppreliminary}. We prove Theorem \ref{thm::GLSFixedB} in Section \ref{sec::proofsOfTheorems} and Corollary \ref{theoremInference} in Section~\ref{proofTheoremInference}. We prove Theorem \ref{mainTheoremGroupCentering} in Section \ref{sec::ProofMainThmPartI}, with additional lemmas proved in Section~\ref{sec::proofsforTheorem2}. We prove entrywise convergence of the sample correlation matrices for Algorithm 1 in Section~\ref{app::entrywise_sample_corr}. We prove Theorem \ref{mainTheoremModSel} in Section \ref{sec::proofTheorem3}, and we prove additional lemmas used in the proof of Theorem \ref{mainTheoremModSel} in Section \ref{sec::LemmasForTheorem3}. In Section~\ref{secComparison} we provide additional comparisons between our method and some related methods on both simulated and real data. \input{appendadditionalsims} \input{appendproofmain} \input{appendproofkron} \input{supp} \input{appendproofModelSelection} \input{appcomparison} \clearpage \bibliographystyle{ims}
\section{Preliminaries and notations} \label{prel} In this section we introduce the notation that we use throughout the paper. For the general theory of Riemannian symmetric spaces we refer to \cite{H01} and \cite{Wo84}. Let $M$ be a connected Riemannian symmetric space and $o \in M$. We denote by $n = \dim(M)$ the dimension of $M$ and by $r = \rk(M)$ the rank of $M$. The isometry group of $M$ is denoted by $I(M)$ and the connected component of $I(M)$ containing the identity transformation is denoted by $G = I(M)^o$. We denote by $K$ the isotropy group of $G$ at $o$. Then $M$ can be identified in the canonical way with the homogeneous space $G/K$ equipped with a suitable $G$-invariant Riemannian metric. We denote by ${\mathfrak{g}}$ and ${\mathfrak{k}}$ the Lie algebras of $G$ and $K$, respectively. The induced Cartan decomposition of ${\mathfrak{g}}$ is ${\mathfrak{g}} = {\mathfrak{k}} \oplus {\mathfrak{p}}$. We identify ${\mathfrak{p}}$ with the tangent space $T_oM$ of $M$ at $o$ in the usual way. Let $\Sigma$ be a connected complete totally geodesic submanifold of $M$. Since $G$ acts transitively on $M$, we can always assume without loss of generality that $o \in \Sigma$. The tangent space $T_o\Sigma$ is a Lie triple system in ${\mathfrak{p}}$, that is, $[[T_o\Sigma,T_o\Sigma],T_o\Sigma] \subseteq T_o\Sigma$. We define ${\mathfrak{k}}' = [T_o\Sigma,T_o\Sigma] \subseteq {\mathfrak{k}}$ and ${\mathfrak{g}}' = {\mathfrak{k}}' \oplus T_o\Sigma \subseteq {\mathfrak{g}}$ and denote by $K'$ and $G'$ the connected closed subgroups of $K$ and $G$ with Lie algebras ${\mathfrak{k}}'$ and ${\mathfrak{g}}'$, respectively. Then $\Sigma$ can be identified with the homogeneous space $G'/K'$. The group $G'$ is known as the group of glide transformations of $\Sigma$ and $K'$ as the glide isotropy group of $\Sigma$ at $o$. The normal space of $\Sigma$ at $o$ is denoted by $\nu_o\Sigma$.
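As a standard illustration of these notions (assumed background, not specific to the spaces studied below), one may keep the round sphere in mind.

```latex
% Standard example: for the round sphere M = S^n = SO_{n+1}/SO_n of
% curvature 1 we have g = so_{n+1}, k = so_n, and p may be identified
% with R^n, with
\[
[[u,v],w] \;=\; \langle u,w \rangle v - \langle v,w \rangle u
\qquad (u,v,w \in {\mathfrak{p}}),
\]
% up to the usual sign convention relating [[u,v],w] to the curvature
% tensor R_{u,v}w. Since [[u,v],w] always lies in the span of u and v,
% every linear subspace V of p is a Lie triple system; the associated
% totally geodesic submanifolds are the great subspheres of S^n.
```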
A Lie triple system $V$ in ${\mathfrak{p}}$ is said to be a reflective Lie triple system if the orthogonal complement of $V$ in ${\mathfrak{p}}$ is a Lie triple system. The totally geodesic submanifolds corresponding to reflective Lie triple systems are called reflective submanifolds. Let $M = G/K$ be a Riemannian symmetric space of compact type and consider the complexification ${\mathfrak{g}}^\mathbb{C}$ of ${\mathfrak{g}}$. Using the Cartan decomposition ${\mathfrak{g}} = {\mathfrak{k}} \oplus {\mathfrak{p}}$, we define a subalgebra ${\mathfrak{g}}^*$ of ${\mathfrak{g}}^\mathbb{C}$ by ${\mathfrak{g}}^* = {\mathfrak{k}} \oplus i{\mathfrak{p}}$. Let $G^*$ be the connected closed subgroup of $G^\mathbb{C}$ with Lie algebra ${\mathfrak{g}}^*$. Then $G^*/K$ is a Riemannian symmetric space of noncompact type. If we start with a Riemannian symmetric space of noncompact type and perform the analogous construction, we end up with a Riemannian symmetric space of compact type. This process is known as duality between Riemannian symmetric spaces of compact type and of noncompact type. It essentially says that, up to possible finite subcoverings in the compact case, there is a one-to-one correspondence between Riemannian symmetric spaces of compact type and of noncompact type. If $V$ is a Lie triple system in ${\mathfrak{p}}$, then $iV$ is a Lie triple system in $i{\mathfrak{p}}$. Therefore, duality preserves totally geodesic submanifolds. For this reason we sometimes switch between symmetric spaces of compact type and of noncompact type, which has the advantage that we can apply methods that are specifically designed to the compact or the noncompact situation. \section{Reducible totally geodesic submanifolds with rank one factors} \label{tgsrk1} In this section we investigate the codimension of locally reducible totally geodesic submanifolds with a factor of rank $1$. The following result was proved in \cite{BOR19}.
\begin{prop} \cite[Proposition 5.6]{BOR19} \label{hyperbolicfactors} Let $\Sigma$ be a reducible maximal totally geodesic submanifold of an irreducible Riemannian symmetric space $M$ of noncompact type. Assume that the de Rham decomposition of $\Sigma$ contains a real hyperbolic space $\mathbb{R} H^k$ ($k \geq 2$), a complex hyperbolic space $\mathbb{C} H^k$ ($k \geq 2$), the symmetric space $SL_3(\mathbb{R})/SO_3$, or the symmetric space $SO^o_{2,2+k}/SO_2SO_{2+k}$ ($k \geq 1$ odd). Then either $\Sigma = \mathbb{R} H^{k_1} \times \mathbb{R} H^{k_2}$ for some $k_1,k_2 \geq 2$, or there exists a reflective submanifold $\Sigma'$ of $M$ with $\dim(\Sigma') \geq \dim(\Sigma)$. \end{prop} The purpose of this section is to prove the following result: \begin{thm} \label{rankonefactors} Let $\Sigma$ be a locally reducible maximal totally geodesic submanifold of an irreducible Riemannian symmetric space $M$. Assume that the de Rham decomposition of the Riemannian universal covering space $\tilde\Sigma$ of $\Sigma$ contains a symmetric space of rank $1$ and is not equal to the Riemannian product of two spaces of nonzero constant curvature. Then $\codim(\Sigma) \geq i_r(M)$. \end{thm} Using duality between symmetric spaces of compact type and of noncompact type, we can restrict to the case that $M$ is of noncompact type. Then we have $\tilde\Sigma = \Sigma$. Taking into account Proposition \ref{hyperbolicfactors}, it remains to consider the quaternionic hyperbolic space $\mathbb{H} H^k$ ($k \geq 2$) and the Cayley hyperbolic plane $\mathbb{O} H^2$ as possible rank $1$ factors. We start with the quaternionic case. \begin{lm} \label{QPS} Let $M = \mathbb{H} H^n = Sp_{1,n}/Sp_1Sp_n$ be the $n$-dimensional quaternionic hyperbolic space and consider the reflective submanifold $\Sigma = \mathbb{H} H^k = Sp_{1,k}/Sp_1Sp_k$, $1 \leq k < n$, of $M$. Let $\rho : Sp_1Sp_k \to SO(\nu_o\Sigma)$ be the slice representation of $\Sigma$.
Then the Lie algebra of $\rho(Sp_1Sp_k)$ is isomorphic to ${\mathfrak{s}}{\mathfrak{o}}_3$. \end{lm} \begin{proof} The Lie algebra ${\mathfrak{s}}{\mathfrak{p}}_1 \oplus {\mathfrak{s}}{\mathfrak{p}}_k$ of the isotropy group $Sp_1Sp_k$ is linearly generated by the curvature endomorphisms $R_{u,v}$ with $u,v \in T_o\Sigma$. Hence the Lie algebra of $\rho(Sp_1Sp_k)$ is linearly generated by the restrictions $R_{u,v}\vert_{\nu_o\Sigma} \in {\mathfrak{s}}{\mathfrak{o}}(\nu_o\Sigma)$ with $u,v \in T_o\Sigma$. It is well known that, up to a positive scalar multiple, the curvature tensor $R$ of $\mathbb{H} H^n$ is given by \begin{equation} \label{CurQPS} R_{u,v}w = - \langle v,w \rangle u + \langle u,w\rangle v - \sum_{\nu=1}^3 \left(\langle J_\nu v,w \rangle J_\nu u - \langle J_\nu u,w \rangle J_\nu v - 2\langle J_\nu u,v \rangle J_\nu w \right), \end{equation} where $J_1,J_2,J_3$ is a canonical basis of the quaternionic K\"{a}hler structure of $\mathbb{H} H^n$ at $o$. For $u,v \in T_o\Sigma$ and $w \in \nu_o\Sigma$ we have $\langle u,w \rangle = \langle v,w \rangle = 0$ and, since $T_o\Sigma$ is invariant under $J_1,J_2,J_3$, also $\langle J_\nu u,w \rangle = \langle J_\nu v,w \rangle = 0$. Hence (\ref{CurQPS}) reduces to $R_{u,v}w = 2\sum_{\nu=1}^3 \langle J_\nu u,v \rangle J_\nu w$, and so the Lie algebra of $\rho(Sp_1Sp_k)$ is linearly generated by the restrictions of $J_1,J_2,J_3$ to $\nu_o\Sigma$ and isomorphic to ${\mathfrak{s}}{\mathfrak{o}}_3$. \end{proof} \begin{lm} \label{QPS2} Let $M = \mathbb{H} H^n = G/K = Sp_{1,n}/Sp_1Sp_n$, $\Sigma = \mathbb{H} H^{n-1} = Sp_{1,n-1}/Sp_1Sp_{n-1}$ and $\Sigma^\perp = H/L = Sp_{1,1}/Sp_1Sp_1 \cong \mathbb{H} H^1$ be the reflective submanifold of $M$ perpendicular to $\Sigma$ at $o$. Let $\tau \in I(M)$ be the geodesic reflection of $M$ in $\Sigma$. Then $\tau \in L \subset K$. \end{lm} \begin{proof} Since $\Sigma$ is a reflective submanifold of $M$, $\tau$ is an isometry. The full isometry group of $\mathbb{H} H^n$ is connected and therefore $\tau \in K$. By construction, the restriction $\tau|_{\Sigma^\perp}$ is the geodesic symmetry of $\Sigma^\perp \cong \mathbb{H} H^1$ at $o$. Since $\mathbb{H} H^1 \cong \mathbb{R} H^4$, the geodesic symmetry $\tau|_{\Sigma^\perp}$ is an inner isometry of $\Sigma^\perp$.
The slice representation of $L \cong Sp_1Sp_1$ on $\nu_o\Sigma^\perp = T_o\Sigma$ is $(z,w) \cdot \xi = \xi z^{-1}$ with $(z,w) \in L \cong Sp_1Sp_1$ and $\xi \in \nu_o\Sigma^\perp$. The isotropy representation of $L \cong Sp_1Sp_1$ on $T_o\Sigma^\perp = \nu_o\Sigma$ is $(z,w) \cdot X = wXz^{-1}$ with $(z,w) \in L \cong Sp_1Sp_1$ and $X \in T_o\Sigma^\perp$. Since $\tau$ is the geodesic reflection of $M$ in $\Sigma$, we have $d_o\tau|_{T_o\Sigma} = \id|_{T_o\Sigma}$ and $d_o\tau|_{T_o\Sigma^\perp} = -\id|_{T_o\Sigma^\perp}$. Comparing with the representations above, it follows that $\tau \in L$, corresponding to the element $(1,-1) \in Sp_1Sp_1$. \end{proof} We now prove Theorem \ref{rankonefactors} for the case that $\Sigma$ contains a quaternionic hyperbolic space as a de Rham factor. \begin{prop} \label{quat} Let $\Sigma$ be a reducible maximal totally geodesic submanifold of an irreducible Riemannian symmetric space $M$ of noncompact type. Assume that the de Rham decomposition of $\Sigma$ contains a quaternionic hyperbolic space $\mathbb{H} H^k$, $k \geq 2$. Then $\codim(\Sigma) \geq i_r(M)$. \end{prop} \begin{proof} We can assume that $o \in \Sigma$ and write $M = G/K$ and $\Sigma = G'/K'$ as in Section \ref{prel}. If $\Sigma$ is nonsemisimple, then $\Sigma$ is reflective by \cite[Theorem 1.2]{BO16} and hence $\codim(\Sigma) \geq i_r(M)$. We can therefore assume that $\Sigma$ is semisimple. By assumption, we have $\Sigma = \Sigma_1 \times \Sigma_2$ with $\Sigma_1 = G'_1/K'_1 \cong \mathbb{H} H^k = Sp_{1,k}/Sp_1Sp_k$. We fix a totally geodesic $\tilde\Sigma \cong \mathbb{H} H^{k-1}$ in $\Sigma_1$ with $o \in \tilde\Sigma$. This is a reflective submanifold of $\Sigma_1$ and there exists a reflective submanifold $\tilde\Sigma^\perp \cong \mathbb{H} H^1$ of $\Sigma_1$ with $o \in \tilde\Sigma^\perp$ that is perpendicular to $\tilde\Sigma$ at $o$. Let $\tau \in I(\Sigma_1)$ be the geodesic reflection of $\Sigma_1$ in $\tilde\Sigma$. By Lemma \ref{QPS2}, $\tau \in K'_1 \subset K' \subset K$.
With the same arguments as in the proof of Proposition \ref{hyperbolicfactors}, by replacing $\tau$ with a suitable odd power of $\tau$, we may assume that $\tau$ is an involutive isometry of $M$. Moreover, analogously to the proof of Proposition \ref{hyperbolicfactors}, if the set \[ V = \Fix_{\nu_o\Sigma}(d_o\tau) = \{v \in \nu_o\Sigma : d_o\tau(v) = v\} \] of fixed vectors of $d_o\tau$ in $\nu_o\Sigma$ is trivial, then $\tilde\Sigma \times \Sigma_2 \subset \Sigma$ is a reflective submanifold of $M$, and it follows from \cite[Corollary 2.9]{BOR19} that $\Sigma$ is a reflective submanifold of $M$, which implies $\codim(\Sigma) \geq i_r(M)$. Thus we can assume $\dim(V) \geq 1$. Since $\tau$ is involutive, the totally geodesic submanifold $\Sigma'$ of $M$ with $T_o\Sigma' = T_o\tilde\Sigma \oplus T_o\Sigma_2 \oplus V$ is reflective. If $\dim(V) \geq 4$, then $\dim(\Sigma) \leq \dim(\Sigma')$ and thus $\codim(\Sigma) \geq i_r(M)$. If $\dim(V) = 1$, we obtain, by an argument similar to the one used in the proof of Proposition \ref{hyperbolicfactors}, that $\Sigma$ has only one other de Rham factor, which is isometric to a real hyperbolic space. Then, again by Proposition \ref{hyperbolicfactors}, $\Sigma$ is a product of real hyperbolic spaces, which is a contradiction. Thus we are left with the two possibilities $\dim(V) \in \{2, 3\}$. \begin{lm} \label{lemmaA} $T_o\Sigma_2 \oplus V$ is a Lie triple system. \end{lm} \begin{proof} Consider the slice representation $\bar\rho$ of the isotropy group $Sp_1Sp_{k-1}$ of $\tilde\Sigma$ on its normal space $T_o\Sigma_2 \oplus V$ in $\Sigma'$. The isotropy group $Sp_1Sp_{k-1}$ acts trivially on $T_o\Sigma_2$, since $\Sigma_1 \times \Sigma_2 = \Sigma$ and $\tilde\Sigma \subset \Sigma_1$. Since $\dim(Sp_1Sp_{k-1}) \geq 6$ and $\dim(SO(V)) \leq 3$, $\ker(\bar\rho)$ is a nontrivial normal subgroup of $Sp_1Sp_{k-1}$. The set $F$ of fixed vectors of this normal subgroup in $T_o\tilde\Sigma$ is trivial.
(Note that $F \neq T_o\tilde\Sigma$, since $Sp_1Sp_{k-1}$ acts almost effectively.) In fact, if $F$ is nontrivial, $F$ must be invariant under $Sp_1Sp_{k-1}$ and so $\tilde\Sigma$ would be reducible, which is a contradiction. Therefore the set of fixed vectors of $\ker(\bar \rho)$ in $T_o\Sigma'$ is exactly $T_o\Sigma_2 \oplus V$ and so this subspace is a Lie triple system. \end{proof} \begin{lm} \label{lemmaB} The slice representation $\bar\rho$ of the isotropy group $Sp_1Sp_{k-1}$ of $\tilde\Sigma$ on its normal space $T_o\Sigma_2 \oplus V$ in $\Sigma'$ is trivial. \end{lm} \begin{proof} Since $Sp_1Sp_{k-1}$ acts trivially on $T_o\Sigma_2$, we only need to show that $Sp_1Sp_{k-1}$ acts trivially on $V$. Assume that $\bar\rho$ is nontrivial. Since $Sp_1Sp_{k-1}$ has no normal subgroups of codimension $1$, we must have $\dim(\bar\rho(Sp_1Sp_{k-1})) > 1$. This proves our assertion for $\dim(V) = 2$. Assume that $\dim(V)=3$ and $\bar\rho(Sp_1Sp_{k-1}) = SO(V)$. Then there are no nonzero fixed vectors by $\bar\rho(Sp_1Sp_{k-1})$ in $V$. From Lemma \ref{lemmaA} we know that $\tilde\Sigma$ is a reflective submanifold of $\Sigma'$. So the set of fixed vectors of $\bar\rho$, which coincides with $T_o\Sigma_2$, is invariant under the isotropy group of the perpendicular reflective submanifold $P$, where $T_oP = T_o\Sigma_2 \oplus V$. Then $\Sigma_2$ is a de Rham factor of $P$ and so $[T_o\Sigma_2 , V]=\{0\}$. This implies that the centralizer ${\mathfrak{z}}_{\mathfrak{p}}(T_o\Sigma_2)$ of $T_o\Sigma_2$ in ${\mathfrak{p}}$ contains $T_o\Sigma_1 \oplus V$. Then ${\mathfrak{z}}_{\mathfrak{p}}(T_o\Sigma_2) + T_o\Sigma_2$ is a proper Lie triple system in $T_oM$ containing $T_o\Sigma$ properly. This contradicts the maximality of $\Sigma$ and so the assertion follows. \end{proof} \begin{lm} \label{lemmaC} $\tilde\Sigma$ is a de Rham factor of $\Sigma'$. 
\end{lm} \begin{proof} Since $\bar\rho$ is trivial by Lemma \ref{lemmaB}, it follows from \cite[Proposition 3.8]{BOR19} that $\tilde\Sigma$ is either a de Rham factor of $\Sigma'$ or it is contained in a de Rham factor $M_1$ of $\Sigma'$ of constant curvature. Assume the latter holds. Note that $M_1$ is strictly contained in $\Sigma'$. In fact, we have $\rk(\Sigma') \geq 2$ since $[T_o\tilde\Sigma, T_o\Sigma_2] = \{0\}$. Let us write $\Sigma' = M_1\times M_2$, where $M_2$ is not necessarily irreducible. Note that $\Sigma_2 \subset M_2$. In fact, if $v \in T_o\Sigma_2$, then its orthogonal projection onto $T_oM_1$ must be trivial since $\rk(M_1) = 1$, $\tilde\Sigma \subset M_1$ and $[v, T_o\tilde\Sigma] = \{0\}$. Then $[T_o\Sigma_2, T_oM_1] = \{0\}$ and, as in the proof of Lemma \ref{lemmaB}, ${\mathfrak{z}}_{\mathfrak{p}}(T_o\Sigma_2) + T_o\Sigma_2$ is a proper Lie triple system in $T_oM$ that properly contains $T_o\Sigma$. This contradicts the maximality of $\Sigma$ and the assertion follows. \end{proof} We continue with the proof of Proposition \ref{quat}. Let $\Sigma'^\perp$ be the reflective submanifold of $M$ which is perpendicular to $\Sigma'$ at $o$. Then, by construction, $T_o\Sigma'^\perp$ is the $(-1)$-eigenspace of $d_o\tau$. Note that $T_o\tilde\Sigma^\perp \subset T_o\Sigma'^\perp$. According to Lemma \ref{QPS}, the kernel $H$ of the representation of $Sp_1Sp_{k-1}$ (as in Lemma \ref{lemmaB}) on $T_o\tilde\Sigma^\perp$ must be isomorphic to $Sp_{k-1}$ (and hence of dimension $\geq 3$). It follows from Lemma \ref{lemmaC} that $H$ acts trivially on $T_o\Sigma_2 \oplus V$. Since $H$ is a normal subgroup of the isotropy group at $o$ of the de Rham factor $\tilde\Sigma$ of $\Sigma'$ (see Lemma \ref{lemmaC}), $H$ is a normal subgroup of the isotropy group of $\Sigma'$. 
Then, the set $W$ of fixed vectors of $H$ on the normal space $\nu _o\Sigma'$, which contains $T_o\tilde\Sigma^\perp$, is invariant under the isotropy group at $o$ of the complementary reflective submanifold $\Sigma'^\perp$. Then $W =T_oQ$, where $Q$ is a de Rham factor of $\Sigma'^\perp$. Assume that $W = T_o\Sigma'^\perp$. Using Lemma \ref{lemmaB} we see that $H$ acts trivially on $T_o\Sigma_2 \oplus V$. Then $H$ acts trivially on $\nu _o\tilde\Sigma = T_o\Sigma_2 \oplus V \oplus W$. Using \cite[Proposition 2.8]{BOR19} we obtain that $\tilde\Sigma$ is reflective. Then, using \cite[Corollary 2.9]{BOR19} and the fact that $\tilde\Sigma \subset \Sigma$, we see that $\Sigma$ is reflective and hence $\codim(\Sigma) \geq i_r(M)$. Next, assume that $W$ is a proper subspace of $T_o\Sigma'^\perp$, or equivalently, $Q$ is properly contained in $\Sigma'^\perp$. Let us write, as a nontrivial Riemannian product, $\Sigma'^\perp = Q \times Q'$, where $o \in Q'$ and $T_oQ'$ is the orthogonal complement of $W$ in $T_o\Sigma'^\perp$. Then $\dim(Q') \geq 3$, because otherwise $H \cong Sp_{k-1}$ would act trivially on $T_oQ'$. Let us consider the involutive isometry $\tau \in L$, where $L \cong Sp_1Sp_1\subset K$ is the glide isotropy group of $\tilde\Sigma^\perp$. Recall that $T_o\Sigma'$ is the $(+1)$-eigenspace of $d_o\tau$ and $T_o\tilde\Sigma^\perp$ is contained in the $(-1)$-eigenspace of $d_o\tau$. \begin{lm} \label{lemmaD} The involution $\tau$ commutes with every isometry in the glide isotropy group $L \cong Sp_1Sp_1$ of $\tilde\Sigma^\perp$. \end{lm} \begin{proof} Consider the set $A =\{k\circ \tau \circ k^{-1} \circ \tau^{-1}: k\in L\} \subset L$ of isometries of $M$. Note that $A$ is connected, since $L$ is connected. Note also that any isometry in $A$ acts trivially on $\tilde\Sigma^\perp$. Since $L$ acts almost effectively on $\tilde\Sigma^\perp$, $A$ must be discrete and thus $A = \{\id_M\}$, since $A$ is connected. 
\end{proof} From Lemma \ref{lemmaD} we see that $L$, via the isotropy representation, leaves the $(+1)$-eigenspace $T_o\Sigma'$ of $d_o\tau$ invariant. Consequently, $L$ leaves $\Sigma'$ invariant. According to Lemma \ref{QPS} there exists a nontrivial normal subgroup $\tilde L \cong Sp_1$ of $L$ such that $\tilde L$ acts trivially on $T_o\tilde\Sigma$. Note that $\tilde L$ also acts trivially on $T_o\Sigma_2$, which follows from the fact that $\Sigma = \Sigma_1 \times \Sigma_2$ (note that $\tilde\Sigma^\perp$ is a totally geodesic submanifold of $\Sigma_1$). \begin{lm} \label{lemmaE} The normal subgroup $\tilde L$ of $L$ acts trivially on $V$. \end{lm} \begin{proof} We have $\tilde L(V) \subset V$, because $\tilde L(T_o\Sigma') = T_o\Sigma' = T_o\tilde\Sigma \oplus T_o\Sigma_2 \oplus V$ and $\tilde L$ acts trivially on both $T_o\tilde\Sigma$ and $T_o\Sigma_2$. If $\dim(V) = 2$, then $\tilde L$ acts trivially on $V$, since $\tilde L \cong Sp_1 \cong Spin_3$ is simple and thus admits no nontrivial representation on $\mathbb{R}^2$. Let $\dim(V) = 3$. Assume that $\tilde L$ acts nontrivially on $V$ and let $g \in \tilde L$ be such that $h = g|_V \neq \id$. Since $\tilde L$ is connected, $+1$ is an eigenvalue of $h$ with multiplicity $1$. Let $\mathbb{R} v$ with $0 \neq v \in V$ be the corresponding eigenspace. Recall from Lemma \ref{lemmaA} that $T_o\Sigma_2 \oplus V$ is a Lie triple system. This Lie triple system is invariant under $\tilde L$. Then $T_o\Sigma_2 \oplus \mathbb{R} v$ is also a Lie triple system, since it coincides with the set of fixed vectors of $g$ in $T_o\Sigma_2 \oplus V$. Let $X$ be the totally geodesic submanifold of $M$ with $T_oX = T_o\Sigma_2 \oplus \mathbb{R} v$. Then $\Sigma_2$ is a semisimple totally geodesic hypersurface of $X$. Then, by \cite[Lemma 5.5]{BOR19}, either there exists an irreducible de Rham factor $\Sigma_2'$ of $\Sigma_2$ with constant curvature, or $X$ is a Riemannian product $X = \Sigma_2 \times \mathbb{R}$.
In the first case, by Proposition \ref{hyperbolicfactors}, $\Sigma$ is a product of spaces of constant curvature, which contradicts our assumption. In the second case, ${\mathfrak{z}}_{\mathfrak{p}}(T_o\Sigma_2) + T_o\Sigma_2$ is a proper Lie triple system in $T_oM$ that properly contains $T_o\Sigma$, since it also contains $v$. This contradicts the maximality of $\Sigma$. \end{proof} From Lemma \ref{lemmaE} and its preceding paragraph we see that $\tilde L$ acts trivially on $T_o\Sigma'$. Since $\tilde L \subset L$ and $L$ is included in the glide isotropy group of $Q$ at $o$, $\tilde L$ acts trivially on $T_oQ'$. Let $U \subset T_oM$ be the subspace of fixed vectors of $\tilde L$. Then $T_o\Sigma' \oplus T_oQ' \subset U$ and so \[ \dim(U) \geq \dim(T_o\Sigma') + \dim(T_oQ') \geq \dim(\Sigma) - 2 + 3 > \dim(\Sigma). \] Then the totally geodesic submanifold $S$ of $M$ with $T_oS = U$ satisfies $\dim(S) > \dim(\Sigma)$. Moreover, $S$ contains the reflective submanifold $\Sigma'$. Then $S$ is reflective by \cite[Corollary 2.9]{BOR19} and it follows that $\codim(\Sigma) > \codim(S) \geq i_r(M)$. This finishes the proof of Proposition \ref{quat}. \end{proof} We now consider the Cayley hyperbolic plane $\mathbb{O} H^2$ as a possible rank $1$ factor. \begin{prop} \label{Cayley} Let $\Sigma$ be a reducible maximal totally geodesic submanifold of an irreducible Riemannian symmetric space $M = G/K$ of noncompact type. Assume that the de Rham decomposition of $\Sigma$ contains the Cayley hyperbolic plane $\mathbb{O} H^2$ as a factor. Then $\codim(\Sigma) \geq i_r(M)$. \end{prop} \begin{proof} The full isometry group of $\mathbb{O} H^2$ is connected and isomorphic to the noncompact real simple Lie group $F_4^{-20}$. The isotropy group at $o$ is isomorphic to $Spin_9$ and thus we can write $\mathbb{O} H^2 = F_4^{-20}/Spin_9$. The isotropy representation of $Spin_9$ on $T_o\mathbb{O} H^2$ is equivalent to the spin representation of $Spin_9$ on $\mathbb{R}^{16}$.
As usual, we can assume $o \in \Sigma$ and write $\Sigma = G'/K'$ as in Section \ref{prel}. If $\Sigma$ is nonsemisimple, then $\Sigma$ is reflective by \cite[Theorem 1.2]{BO16} and thus $\codim(\Sigma) \geq i_r(M)$. We can therefore assume that $\Sigma$ is semisimple. By assumption, we have $\Sigma = \mathbb{O} H^2 \times \bar\Sigma$, where $\bar\Sigma$ is a semisimple Riemannian symmetric space of noncompact type. The Cayley hyperbolic plane admits only one type of polars, namely Cayley hyperbolic lines $\mathbb{O} H^1$, all of which are congruent to each other in $\mathbb{O} H^2$ and isometric to the real hyperbolic space $\mathbb{R} H^8$. We choose a Cayley hyperbolic line $P \cong \mathbb{O} H^1$ in $\mathbb{O} H^2$ with $o \in P$ and denote by $Q \cong \mathbb{O} H^1$ the Cayley hyperbolic line in $\mathbb{O} H^2$ with $o \in Q$ that is perpendicular to $P$ at $o$. Note that $P$ and $Q$ form a pair of complementary reflective submanifolds of $\mathbb{O} H^2$ and $T_o\mathbb{O} H^2 = T_oP \oplus T_oQ$. The subgroup of the isotropy group $Spin_9$ leaving this decomposition invariant is (isomorphic to) $Spin_8$. The restriction to $Spin_8$ of the isotropy representation of $Spin_9$ is equivalent to the direct sum of the two inequivalent spin representations of $Spin_8$ on $\mathbb{R}^8$. The subgroup $Spin_8$ is the isotropy group of each of the two groups of glide transformations of $P$ and $Q$. We denote by $\tau_P,\tau_Q \in Spin_8$ the geodesic reflections of $\mathbb{O} H^2$ in $P$ and $Q$, respectively. Since $Spin_8 \subset Spin_9 \subset K' \subset K$, both $\tau_P$ and $\tau_Q$ can be viewed as isometries of $M$. Note that $\tau_P$ and $\tau_Q$ both lie in the (finite) center $Z_{Spin(8)}$ of $Spin_8$, since $\tau_P|_Q$ and $\tau_Q|_P$ are the geodesic symmetries of $Q$ and $P$, respectively (and $Spin_8$ acts almost effectively on both $P$ and $Q$).
Since every nontrivial element in the center $Z_{Spin(8)}$ of $Spin_8$ has order $2$, $\tau_P$ and $\tau_Q$ both have order $2$ as elements of $K$. We define $V_P = \Fix_{\nu_o\Sigma}(d_o\tau_P) = \{v \in \nu_o\Sigma : d_o\tau_P(v) = v\}$. Then $T_oP \oplus T_o\bar{\Sigma} \oplus V_P = \Fix_{T_oM}(d_o\tau_P)$. Evidently, $\Fix_{T_oM}(d_o\tau_P)$ is a reflective Lie triple system in $T_oM$. Let $\Sigma_P$ be the reflective submanifold of $M$ with $T_o\Sigma_P = \Fix_{T_oM}(d_o\tau_P)$ and $\Sigma_P^\perp$ be the reflective submanifold of $M$ with $T_o\Sigma_P^\perp = \nu_o\Sigma_P$. Note that $Q \subseteq \Sigma_P^\perp$. If $\dim(V_P) = 0$, then $P \times \bar{\Sigma}$ is a reflective submanifold of $M$. It then follows from \cite[Corollary 2.9]{BOR19} that $\Sigma$ is a reflective submanifold of $M$ and hence $\codim(\Sigma) \geq i_r(M)$. If $\dim(V_P) \geq 8$, then $\dim(\Sigma) \leq \dim(\Sigma_P)$ and hence $\codim(\Sigma) \geq \codim(\Sigma_P) \geq i_r(M)$. If $\dim(V_P) \in \{1,\ldots,7\}$, then the isotropy group $Spin_8$ acts trivially on $V_P$, since the smallest nontrivial real representation of $Spin_8$ has dimension $8$. Since $Spin_8$ also acts trivially on $T_o\bar\Sigma$, it follows that the slice representation of $Spin_8$ on the normal space $T_o\bar{\Sigma} \oplus V_P$ of $P$ at $o$ in $\Sigma_P$ is trivial. An argument analogous to that for the quaternionic case in the proof of Lemma \ref{lemmaC} shows that $P$ is a de Rham factor of $\Sigma_P$. The set $\Fix_{T_o\Sigma_P^\perp}(d_o\tau_Q)$ is a Lie triple system containing $T_oQ$ and invariant under the glide isotropy group $H$ of $\Sigma_P^\perp$ at $o$. In fact, $H$ leaves invariant the factor $P$ of $\Sigma_P$ and so it must leave invariant the finite center $Z_{Spin(8)}$ of $Spin_8$. Then the identity component $H^o$ of $H$ must commute with $Z_{Spin(8)}$ and in particular with $\tau_Q$. Thus there exists a Riemannian factor $\tilde\Sigma_P^\perp$ of $\Sigma_P^\perp$ such that $T_o\tilde\Sigma_P^\perp = \Fix_{T_o\Sigma_P^\perp}(d_o\tau_Q)$.
Note that $Q \subseteq \tilde\Sigma_P^\perp$. We have $d_o\tau_Q|_{T_oP} = -\id|_{T_oP}$ and, since $\tau_Q \in Spin_8$ and the slice representation of $Spin_8$ on the normal space $T_o\bar\Sigma \oplus V_P$ of $P$ at $o$ in $\Sigma_P$ is trivial, $d_o\tau_Q|_{T_o\bar\Sigma \oplus V_P} = \id|_{T_o\bar\Sigma \oplus V_P}$. Therefore, if $\tilde\Sigma_P^\perp = \Sigma_P^\perp$, then $P$ is a reflective submanifold of $M$ and \cite[Corollary 2.9]{BOR19} implies that $\Sigma$ is a reflective submanifold of $M$ and hence $\codim(\Sigma) \geq i_r(M)$. Otherwise, we get a proper Riemannian product decomposition $\Sigma_P^\perp = \tilde\Sigma_P^\perp \times \bar\Sigma_P^\perp$. Since $Q \subset \tilde\Sigma_P^\perp$, the isotropy group $Spin_8$ of $Q$ acts trivially on $T_o\bar\Sigma_P^\perp$. As $\tau_Q \in Spin_8$, this implies $d_o\tau_Q|_{T_o\bar\Sigma_P^\perp} = \id_{T_o\bar\Sigma_P^\perp}$, which contradicts $T_o\tilde\Sigma_P^\perp = \Fix_{T_o\Sigma_P^\perp}(d_o\tau_Q)$. This finishes the proof. \end{proof} \section{General structure results} \label{gsr} In this section we prove some general results about totally geodesic submanifolds in symmetric spaces, which will be useful for later purposes. We start by investigating reflection hyperplanes of totally geodesic submanifolds. \begin{prop} \label{cardinal} Let $M = G/K$ be a simply connected irreducible Riemannian symmetric space and $\Sigma = G'/K'$ be a totally geodesic submanifold. Let $W$ and $W'$ be the Weyl groups associated with $M$ and $\Sigma$, respectively. Let $b$ and $b'$ be the number of reflection hyperplanes of $W$ and $W'$, respectively. Then $b' \leq b$. \end{prop} \begin{proof} By duality, we can assume that $M$ is of noncompact type. Then $\Sigma$ is simply connected and hence $K'$ is connected.
Let ${\mathfrak{g}} = {\mathfrak{k}} \oplus {\mathfrak{p}}$ and ${\mathfrak{g}}' = {\mathfrak{k}}' \oplus {\mathfrak{p}}'$ be the Cartan decompositions associated with $(G,K)$ and $(G',K')$, respectively. As usual, we identify $T_oM$ with ${\mathfrak{p}}$ and $T_o\Sigma$ with ${\mathfrak{p}}'$. Let ${\mathfrak{a}}'$ be a maximal abelian subspace of ${\mathfrak{p}}'$ and ${\mathfrak{a}}$ be a maximal abelian subspace of ${\mathfrak{p}}$ with ${\mathfrak{a}}' \subseteq {\mathfrak{a}}$. We consider $W$ and $W'$ as reflection groups of ${\mathfrak{a}}$ and ${\mathfrak{a}}'$, respectively. Let $H'_1,\ldots,H'_{b'} \subset {\mathfrak{a}}'$ and $H_1,\ldots ,H_b \subset {\mathfrak{a}}$ be the distinct reflection hyperplanes associated with $W'$ and $W$, respectively. We define the set \[ J = \{j \in \{1,\ldots,b\} : {\mathfrak{a}}' \subseteq H_j\}, \] which may be empty. Since the intersection of all reflection hyperplanes is $\{0\}$, $J$ is properly contained in $\{1, \ldots , b\}$. By a suitable relabelling of the reflection hyperplanes we can assume that there exists $j_0\in \{1,\ldots,b\}$ such that \[ j \notin J \iff j\geq j_0. \] Then $H_j \cap {\mathfrak{a}}' = {\mathfrak{a}}'$ if $j < j_0$ and $H_j \cap {\mathfrak{a}}'$ is a hyperplane of ${\mathfrak{a}}'$ if $j \geq j_0$. Note that any two such hyperplanes $H_j \cap {\mathfrak{a}}'$ may coincide. Suppose that there exists a reflection hyperplane $H'_d \subset {\mathfrak{a}}'$ such that $H_d' \neq H_j \cap {\mathfrak{a}}'$ for every reflection hyperplane $H_j \subset {\mathfrak{a}}$. This is always true for $j < j_0$, and for $j \geq j_0$ it means that $H_d' \cap H_j$ is a hyperplane of $H'_d$. Note that $H_d' \cap H_i'$ is also a hyperplane of $H'_d$ for all $i \neq d$. Therefore we can find $0 \neq u \in H'_d$ so that $u \notin H_i'$ for all $i \neq d$ and $u \notin H_j$ for all $j \geq j_0$. 
There exists $\epsilon > 0$ so that the open ball $B_\epsilon(u)$ in ${\mathfrak{a}}'$ with radius $\epsilon$ and center $u$ does not intersect $H_i'$ for all $i \neq d$ and $H_j$ for all $j \geq j_0$. We now choose a point $v \in B_\epsilon(u) \subset {\mathfrak{a}}'$ that is not contained in $H_d'$ and define the curve $\gamma : [0,1] \to B_\epsilon(u),\ t \mapsto v + t(u-v)$, which parametrizes the line segment from $v$ to $u$. By construction, we have $\{ j \in \{1,\ldots,b\} : \gamma(t) \in H_j\} = J$ for each $t \in [0,1]$. According to the Slice Theorem of Hsiang, Palais and Terng (see \cite[Section 2]{HPT88} and \cite[Section 6.5]{PT88}), the dimension of the isotropy orbit $K \cdot \gamma(t)$ satisfies \[ \dim (K\cdot \gamma (t)) = m - \sum_{j\in J} m_j, \] where $m$ is the dimension of a principal $K$-orbit in ${\mathfrak{p}}$ and $m_j$ is the multiplicity of a focal point in $H_j$ which is not in any other reflection hyperplane $H_\nu$ for $\nu \neq j$. It follows that $\dim(K \cdot \gamma(t))$ is independent of the choice of $t \in [0,1]$. On the one hand, this implies that the identity components of the isotropy groups $K_{\gamma (t)}$ ($t \in [0,1]$) coincide, or equivalently, that the isotropy algebras ${\mathfrak{k}}_{\gamma (t)}$ ($t \in [0,1]$) coincide. On the other hand, by the choice of $u$ and $v$, the orbit $K' \cdot v \subset {\mathfrak{p}}'$ is a principal orbit of the isotropy action of $K'$ on ${\mathfrak{p}}'$ and $K' \cdot u$ is a parallel focal orbit of $K' \cdot v$. Therefore the isotropy algebra ${\mathfrak{k}}'_v$ is strictly contained in the isotropy algebra ${\mathfrak{k}}'_u$. Thus there exists $z \in {\mathfrak{k}}'$ such that $\ad(z)u = 0$ and $\ad(z)v \neq 0$. Since ${\mathfrak{k}}' \subset {\mathfrak{k}}$, this implies $z \in {\mathfrak{k}}_u$ and $z \notin {\mathfrak{k}}_v$, which contradicts ${\mathfrak{k}}_u = {\mathfrak{k}}_v$. 
It follows that for every reflection hyperplane $H'_i \subset {\mathfrak{a}}'$ there exists a reflection hyperplane $H_j \subset {\mathfrak{a}}$ such that $H_i' = H_j \cap {\mathfrak{a}}'$, which implies $b' \leq b$. \end{proof} Let $M = G/K$ be a simply connected irreducible Riemannian symmetric space and consider the marked Dynkin diagram associated with $G/K$, which is the Dynkin diagram associated with $G/K$ together with the dimensions of the corresponding root spaces. The dimension of a root space is also called the multiplicity of the root. Here we make the convention that the multiplicity of a non-reduced root $\alpha$ is obtained by adding up the dimensions of the root spaces of $\alpha$ and $2\alpha$. We denote by $\Phi$ the corresponding root system and by $\Phi^+$ the positive roots. If $\Phi$ is reduced and all roots have the same length, we call all roots long. If $\Phi$ is reduced and there are roots of different length, then there are exactly two different lengths and we can naturally distinguish between long and short roots. If $\Phi$ is non-reduced, we call the non-reduced roots short and the other roots long. Recall that the Weyl group acts transitively on the sets of long and short roots and so the multiplicities of any two long (resp.\ short) roots are the same. This implies that all long roots have the same multiplicity $m_1$, and all short roots have the same multiplicity $m_2$. If there are no short roots, our convention is $m_1 = m_2$. We denote by $\bar{l}$ (resp.\ $\bar{s}$) the number of positive long (resp.\ short) roots in $\Phi^+$. Then we have \begin{equation}\label{dim3} \dim(M) = m_1 \bar l + m_2 \bar s + \rk(M). \end{equation} We call $m_1$ and $m_2$ the {\it associated multiplicities} of $M$. \begin{prop}\label{multcomp} Let $M = G/K$ be an irreducible Riemannian symmetric space of noncompact type with associated multiplicities $m_1$ and $m_2$. 
Let $\Sigma = G'/K'$ be a totally geodesic submanifold of $M$ with $\rk(\Sigma) = \rk(M)$. Let $\Sigma_1$ be a de Rham factor of $\Sigma$ with associated multiplicities $m'_1$ and $m'_2$. Then we have \[ \max\{m'_1, m'_2\}\leq \max\{m_1, m_2\}. \] \end{prop} \begin{proof} Let ${\mathfrak{a}}$ be a maximal abelian subspace of ${\mathfrak{p}}' \cong T_o\Sigma$. Since $\rk(\Sigma) = \rk(M)$, ${\mathfrak{a}}$ is also a maximal abelian subspace of ${\mathfrak{p}} \cong T_oM$. We choose $v \in {\mathfrak{a}}$ so that $K' \cdot v$ is a principal orbit of the $K'$-action on ${\mathfrak{p}}'$ and $K \cdot v$ is a principal orbit of the $K$-action on ${\mathfrak{p}}$. Every normal vector $\xi \in \nu_v(K\cdot v)$ of $K \cdot v$ at $v$ extends uniquely to a $K$-invariant normal vector field $\tilde\xi$ of $K\cdot v$. The restriction $\bar \xi = \tilde \xi|_{K'\cdot v}$ of $\tilde\xi$ to $K' \cdot v$ is a $K'$-invariant normal vector field of $K'\cdot v$. The actions of $K$ on ${\mathfrak{p}}$ and of $K'$ on ${\mathfrak{p}}'$ are polar (see \cite{BCO16,PT88}). This implies that $\tilde\xi$ is parallel with respect to the normal connection of $K\cdot v$ in ${\mathfrak{p}}$ and $\bar \xi$ is parallel with respect to the normal connection of $K'\cdot v$ in ${\mathfrak{p}}'$. Let $A$ (resp.\ $A'$) be the shape operator of $K\cdot v$ in ${\mathfrak{p}}$ (resp.\ of $K'\cdot v$ in ${\mathfrak{p}}'$). For $X \in {\mathfrak{k}}'$ we have \begin{equation}\label{shape2} A_\xi (\ad(X)v) = -\left.\textstyle{\frac{d}{dt}}\right|_{t=0} \tilde\xi_{c_v(t)} = -\left.\textstyle{\frac{d}{dt}}\right|_{t=0} \bar\xi_{c_v(t)} = A'_\xi (\ad(X)v), \end{equation} where $c_v(t) = \Exp(tX)v \in K' \cdot v \subset K \cdot v$. This shows that the tangent space $T_v(K'\cdot v)$ of $K' \cdot v$ at $v$ is invariant under the shape operator $A_\xi$ of $K\cdot v$ with respect to $\xi$. 
Therefore, each common eigenspace of the (commuting) family of shape operators of $K'\cdot v$ at $v$ is contained in a common eigenspace of the (commuting) family of shape operators of $K\cdot v$ at $v$. Moreover, any curvature normal of $K' \cdot v$ at $v$ is a curvature normal of $K \cdot v$ at $v$. According to \cite [page 63]{BCO16}, the common eigenspaces of the family of shape operators at $v$ of the principal orbit $K \cdot v$ in ${\mathfrak{p}}$ are given by $E_{\alpha} = {\mathfrak{p}}_\alpha$ for $\alpha \in \Phi^+$ reduced and $E_{\alpha} = {\mathfrak{p}}_\alpha \oplus {\mathfrak{p}}_{2\alpha}$ for $\alpha \in \Phi^+$ with $2\alpha \in \Phi^+$, where ${\mathfrak{p}}_\alpha = ({\mathfrak{g}}_\alpha \oplus {\mathfrak{g}}_{-\alpha}) \cap {\mathfrak{p}}$. Then \begin{equation}\label {dimE3} \dim (E_\alpha) = m_i \end{equation} with $i=1$ or $i=2$, depending on whether $\alpha$ is a long root or a short root. The analogous statement holds for the common eigenspaces of the family of shape operators at $v$ of the principal orbit $K' \cdot v$ in ${\mathfrak{p}}'$. This finishes the proof. \end{proof} Recall that a Riemannian symmetric space $M=G/K$ of compact type is inner if the geodesic symmetry $\sigma_o$ of $M$ at $o$ belongs to $K$. It is known (see e.g.\ \cite[Theorem 8.6.7]{Wo84}) that $G/K$ is inner if and only if $\rk(G) = \rk(K)$. A non-inner symmetric space is also called an outer symmetric space. \begin {prop} \label {innSigma} Let $M=G/K$ be an outer irreducible Riemannian symmetric space of compact type and let $\Sigma = G'/K'$ be a maximal totally geodesic submanifold of $M$. If $\Sigma$ is an inner symmetric space, then $\Sigma$ is a reflective submanifold. \end{prop} \begin{proof} Let $\tau \in K'$ be the geodesic symmetry of $\Sigma$ at $o$. Using the same arguments as in the proof of Proposition \ref{hyperbolicfactors}, we may assume that $\tau$ is an involutive isometry of $M$. 
Since $M$ is an outer symmetric space, $\tau$ must be different from the geodesic symmetry $\sigma_o$ of $M$. Let $\tilde\Sigma$ be the connected component containing $o$ of the fixed point set of the involutive isometry $\sigma_o \circ \tau$. Then $\tilde\Sigma$ is a reflective totally geodesic submanifold of $M$ containing $\Sigma$. Since $\Sigma$ is maximal, $\tilde\Sigma = \Sigma$ and thus $\Sigma$ is reflective. \end{proof} For details on the following constructions we refer to \cite{NT00} and the references therein. Let $M = G/K$ be a Riemannian symmetric space of compact type. The point $o$ is an isolated fixed point of the geodesic symmetry $\sigma_o$ of $M$ at $o$. The connected components different from $\{o\}$ of the fixed point set $\Fix_M(\sigma_o) = \{p \in M : \sigma_o(p) = p\}$ are the so-called polars of $M$. A polar consisting of a single point is also called a pole of $o$. Every antipodal point on a closed geodesic through $o$ lies on a polar. More precisely, if $\gamma : [0,1] \to M$ is a geodesic with $\gamma(0) = o = \gamma(1)$, then $\gamma(\frac{1}{2})$ lies in a polar of $o$. In fact, the set $\Fix_M(\sigma_o) \setminus \{o\}$ coincides with the set of antipodal points of $o$. Let $o \neq p \in \Fix_M(\sigma_o)$ and denote by $M^+(p)$ the polar containing $p$. Then $M^+(p) = K \cdot p$ (even if $K$ is not connected, the orbit $K \cdot p$ is connected). Thus the polars of $o$ are the orbits of the isotropy group $K$ at $o$ through the antipodal points of $o$. Every polar $M^+(p)$ is a reflective submanifold of $M$. In fact, we have $T_pM^+(p) = \{v \in T_pM : d_p\sigma_o(v) = v\}$ and $\nu_pM^+(p) = \{v \in T_pM : d_p\sigma_o(v) = -v\}$. It follows that $\nu_pM^+(p)$ is the fixed point set of $d_p(\sigma_o \circ \sigma_p) = d_p(\sigma_p \circ \sigma_o)$ and therefore the connected component $M^-(p)$ of $\Fix_M(\sigma_o \circ \sigma_p)$ containing $p$ is a totally geodesic submanifold of $M$ with $T_pM^-(p) = \nu_pM^+(p)$, so that $T_pM = T_pM^+(p) \oplus T_pM^-(p)$. 
It follows that $M^+(p)$ and $M^-(p)$ form a complementary pair of reflective submanifolds. Any such submanifold $M^-(p)$ is called a meridian. Every meridian $M^-(p)$ contains $o$ and has the same rank as $M$. We assume from now on that $M$ is simply connected and irreducible. Then the isotropy group $K$ is connected. Of particular interest to us will be the so-called bottom space or adjoint space $\bar{M}$ of $M$. The bottom space $\bar{M}$ of $M$ is characterized by the property that every Riemannian symmetric space $M'$ that is locally isometric to $M$ is a Riemannian covering space of $\bar{M}$. It is constructed from $M$ by identifying all points with the same isotropy groups. The bottom space $\bar{M}$ has no poles and its geodesic symmetries are pairwise distinct isometries (see \cite[Lemma 2.1]{BOR19}). Let $\pi : M \to \bar{M}$ be the canonical projection. We put $\bar{p} = \pi(p) \in \bar{M}$ for $p \in M$ and $\bar{v} = d_p\pi(v) \in T_{\bar{p}}\bar{M}$ for $v \in T_pM$. The geodesic symmetry of $\bar{M}$ at $\bar{p}$ is denoted by $\bar{\sigma}_{\bar{p}}$. Let $\gamma_{\bar{v}} : [0,1] \to \bar{M}$ be a closed geodesic in $\bar{M}$ with period $1$ and $\gamma_{\bar{v}}(0) = \bar{o} = \gamma_{\bar{v}}(1)$. Then $\bar{p} = \gamma_{\bar{v}}(\frac{1}{2})$ is an antipodal point of $\bar{o}$ and $\bar{M}^+(\bar{p})$ is a polar of $\bar{M}$ with $\dim(\bar{M}^+(\bar{p})) \geq 1$. The isometry $g^{\bar{v}} = \bar{\sigma}_{\bar{p}} \circ \bar{\sigma}_{\bar{o}} = \bar{\sigma}_{\bar{o}} \circ \bar{\sigma}_{\bar{p}} \in I(\bar{M})$ is involutive, nontrivial and fixes every point on $\gamma_{\bar{v}}([0,1])$. The linear isometry $\ell^{\bar{v}} = d_{\bar{o}}g^{\bar{v}}$ of $T_{\bar{o}}\bar{M}$ is involutive, nontrivial and coincides with parallel transport along $\gamma_{\bar{v}}$ from $\bar{o} = \gamma_{\bar{v}}(0)$ to $\bar{o} = \gamma_{\bar{v}}(1)$. In particular, $\ell^{\bar{v}}(\bar{v}) = \bar{v}$. 
Note that $g^{\bar{v}} = g^{\bar{w}}$ for any $\bar{w} \in T_{\bar{o}}\bar{M}$ with $\gamma_{\bar{w}}(\frac{1}{2}) = \bar{p}$. The bottom space $\bar{M}$ can be written as $\bar{M} = G/\bar{K}$ with $\bar{K} = G_{\bar{o}}$. The identity component $\bar{K}^o$ of $\bar{K}$ is (isomorphic to) $K$. We now consider the isotropy representation of $\bar{K}^o \cong K$ on $T_{\bar{o}}\bar{M}$. For each $\bar{v} \in T_{\bar{o}}\bar{M}$, the linear isometry $\ell^{\bar{v}}$ leaves the isotropy orbit $K \cdot \bar{v} = K/K_{\bar{v}}$ invariant (see \cite[Proposition 2.4]{BOR19} and its proof). For all $k \in K$ we have $g^{\overline{d_ok(v)}} = kg^{\bar{v}}k^{-1}$ and therefore $g^{\bar{v}} = kg^{\bar{v}}k^{-1}$ for all $k \in K_{\bar{v}}$. This implies $\ell^{\bar{v}} \circ d_{\bar{o}}k = d_{\bar{o}}k \circ \ell^{\bar{v}}$ for all $k \in K_{\bar{v}}$. The subspace $\Fix_{T_{\bar{o}}\bar{M}}(\ell^{\bar{v}})$ of fixed vectors of $\ell^{\bar v}$ in $T_{\bar{o}}\bar{M}$ always contains the normal space $\nu_{\bar v}(K\cdot {\bar v})$. Moreover, if $\rk(M) \geq 2$, then $\Fix_{T_{\bar{o}}\bar{M}}(\ell^{\bar{v}}) = \nu_{\bar v}(K\cdot {\bar v})$ if and only if $K \cdot {\bar v}$ is an extrinsically symmetric orbit. For any Riemannian symmetric space $M' = G/K'$ that is locally isometric to $M$ we can make similar constructions, using the canonical projection $\pi' : M \to M'$ and defining $o', v'$ and other entities analogously. The subspace $\Fix_{T_{o'}M'}(\ell^{v'})$ of fixed vectors of $\ell^{v'}$ in $T_{o'}M'$ always contains the normal space $\nu_{v'}(K\cdot {v'})$. However, it may happen that $\Fix_{T_{o'}M'}(\ell^{v'}) = T_{o'}M'$. Let ${\mathfrak{a}}$ be a maximal abelian subspace of ${\mathfrak{p}} \cong T_oM$ and let $\alpha_1, \ldots,\alpha _r \in {\mathfrak{a}}^*$ be simple roots for the corresponding root system. Let $H^1,\ldots, H^r \in {\mathfrak{a}}$ be the dual basis of $\alpha_1,\ldots,\alpha_r$. We fix an index $i \in \{1,\ldots,r\}$. 
Note that $H^i$ belongs to a simplex of dimension $1$ in the closure $\bar C$ of the Weyl chamber $C$ that corresponds to the choice of the simple roots $\alpha_1,\ldots,\alpha_r$. In fact, $H^i$ belongs to each of the reflection hyperplanes associated with $\alpha_j$, $j\neq i$. Then the abelian part of the normal space $\nu_{H^i} (K \cdot H^i) = {\mathfrak{z}}_{\mathfrak{p}}(H^i)$ is the real span of $H^i$ and hence of dimension $1$. This means that $K\cdot H^i$ is a most singular (or focal) orbit of $K$. Conversely, if $K \cdot v$ is a most singular orbit, then $K \cdot v = K\cdot H^i$ for some $i\in \{1, \ldots , r\}$ with a suitable rescaling of $v$. In fact, by applying a suitable transformation in the Weyl group to $v$, we can assume that $v$ belongs to $\bar C$ and so $\alpha_j(v) \geq 0$ for all $j \in \{1, \ldots , r\}$. Then, since $K \cdot v$ is most singular, $\alpha_j(v) = 0$ for all $j$ except one index $i \in \{1, \ldots , r\}$. Since the real span of $H^i$ is the abelian part of ${\mathfrak{z}}_{\mathfrak{p}}(H^i)$, it is the tangent space at $o$ of a $1$-dimensional flat $S^1$ of $M$. After a suitable rescaling of $H^i$ to some vector $v$ we can assume that $\gamma_{v}: [0,1] \to M$ is a closed geodesic with period $1$. The same can be done if we replace $M$ by an arbitrary globally symmetric quotient $M'$ of $M$ and, in particular, by the bottom space $\bar{M}$. After replacing $\overline{H^i} = d_o\pi(H^i)$ by a scalar multiple $\bar{v}$, we obtain a closed geodesic $\gamma_{\bar{v}}: [0,1] \to \bar{M}$ of period $1$. Let $\alpha = \delta_1 \alpha_1 + \ldots + \delta_r \alpha_r$ be the highest root and assume that $\delta_i > 1$. Then the orbit $K \cdot H^i \cong K \cdot \overline{H^i}$ is not an extrinsically symmetric orbit (see \cite[page 199]{BO16} or \cite{KN64}). Then the tangent space $T_{\bar o}\bar{M}^-(\bar{p})$ properly contains $\nu_{\bar v}(K \cdot \bar{v}) = {\mathfrak{z}}_{\mathfrak{p}}(\bar{v})$, where $\bar{p} = \gamma_{\bar{v}}(1/2)$. 
Moreover, the meridian $\bar{M}^-(\bar{p})$ is semisimple. In fact, if it were not semisimple, we could choose $0 \neq \bar{w} \in T_{\bar{o}}\bar{M}^-(\bar{p})$ with $[\bar w, T_{\bar o}\bar{M}^-(\bar p)]=\{0\}$. Then $[\bar w, {\mathfrak{z}}_{\mathfrak{p}}(\bar v)] = \{0\}$ and so $\bar w=\bar v$ up to rescaling, since the abelian part of ${\mathfrak{z}}_{\mathfrak{p}}(\bar v)$ is $1$-dimensional. Then \[ T_{\bar o}\bar{M}^-(\bar p) \subset {\mathfrak{z}}_{\mathfrak{p}}(\bar w) = {\mathfrak{z}}_{\mathfrak{p}}(\bar v) \] and therefore $T_{\bar o}\bar{M}^-(\bar p) = {\mathfrak{z}}_{\mathfrak{p}}(\bar v)$, which is a contradiction. We can replace $\bar M$ by any globally symmetric quotient $M'$ of $M$ and obtain a similar result, but in this situation it may happen that $M'^-(p') = M'$ (namely, if $p'$ is a pole of $o'$). So we have the following result that will be useful for our purposes, since totally geodesic submanifolds of simply connected symmetric spaces are not in general simply connected, but globally symmetric. \begin{prop} \label{ssm} Let $M=G/K$ be an irreducible Riemannian symmetric space of compact type and $M' = G/K'$ be a symmetric quotient of $M$. Let $0 \neq v' \in T_{o'}M'$ be such that the isotropy orbit $(K')^o \cdot v'$ is a most singular and not extrinsically symmetric orbit. Then $\gamma_{v'}$ is a closed geodesic, which we may assume to be of period $1$. Moreover, if $p'= \gamma_{v'}(1/2)$, then the tangent space $T_{o'}M'^-(p')$ of the meridian $M'^-(p')$ is a semisimple Lie triple system that properly contains the normal space $\nu_{v'}((K')^o\cdot v') = {\mathfrak{z}}_{\mathfrak{p}}(v')$. \end{prop} We now turn our attention to symmetric spaces whose root system is of type $A$. For $r \geq 2$, the irreducible simply connected Riemannian symmetric spaces of compact type whose root system is of type $A_r$ are $SU_{r+1}/SO_{r+1}$, $SU_{r+1}$, $SU_{2r+2}/Sp_{r+1}$ and $E_6/F_4$ (for which $r = 2$). 
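For these four spaces all roots are long; the root system $A_r$ has $\frac{r(r+1)}{2}$ positive roots and the common multiplicity is $m = 1, 2, 4$ and $8$, respectively. As a consistency check, formula \eqref{dim3} then recovers the dimensions of these spaces:
\[
\dim(M) = m\,\frac{r(r+1)}{2} + r =
\begin{cases}
\frac{r(r+3)}{2} & \text{for } SU_{r+1}/SO_{r+1},\\[1mm]
r(r+2) & \text{for } SU_{r+1},\\[1mm]
r(2r+3) & \text{for } SU_{2r+2}/Sp_{r+1},\\[1mm]
26 & \text{for } E_6/F_4\ (r=2).
\end{cases}
\]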
\begin{prop} \label{irr} Let $M=G/K$ be an irreducible Riemannian symmetric space with root system of type $A_r$, $r \geq 2$. Let $v \in T_oM \cong {\mathfrak{p}}$ be such that $K \cdot v$ is a principal orbit in ${\mathfrak{p}}$, and hence isoparametric. Let $E(v)$ be the tangent space at $v$ of a curvature sphere $S(v)$ of $K \cdot v$. Then the connected isotropy group $(K_v)^o$ restricted to $E(v)$ acts irreducibly. \end{prop} \begin{proof} The key fact for the proof is that the subgroup of $K$ that acts on the curvature sphere $S(v)$, when restricted to $S(v)$, is the full group of isometries of $S(v)$. Since the isotropy representations of dual symmetric spaces are equivalent, we may assume that $M$ is of noncompact type. The normal space $\nu_v (K\cdot v)$ of $K \cdot v$ at $v$ is the maximal abelian subspace of ${\mathfrak{p}}$ that contains the regular tangent vector $v$. Moreover, the Weyl group $W$ associated with the isoparametric submanifold $K\cdot v$ coincides with the Weyl group of $M$ (corresponding to the root system determined by the maximal abelian subspace $\nu_v (K\cdot v)$). Let $\xi \in \nu_v(K\cdot v)$ be such that $E(v)$ coincides with the $(+1)$-eigenspace of the shape operator $A_\xi$ of $K\cdot v$. Equivalently, $u = v + \xi$ belongs to the reflection hyperplane $H$ of $\nu_v(K\cdot v)$ associated with $E(v)$, and does not belong to any other reflection hyperplane of the Weyl group. The tangent vector $u$ belongs to a simplex of dimension $r-1$ of the closure of a Weyl chamber (such a simplex is an open subset of $H$). Note that the curvature normal $\eta(v)$ associated with $E(v)$ is perpendicular to the hyperplane $H$ of $\nu_v (K\cdot v)$. The parallel focal orbit $K\cdot u$ of $K \cdot v$ is a subprincipal orbit and \[ \nu_u (K\cdot u) ={\mathfrak{z}}_{\mathfrak{p}}(u), \] where ${\mathfrak{z}}_{\mathfrak{p}}(u)= \{z\in {\mathfrak{p}}: [u, z]=0\}$ is the centralizer of $u$ in ${\mathfrak{p}} \cong T_oM$. 
The Lie triple system $\nu_u (K\cdot u)$ splits as \[ \nu_u (K\cdot u) = H \oplus (E(v) \oplus \mathbb{R} \eta (v)), \] where $H$ is the abelian part of the Lie triple system and $E(v) \oplus \mathbb{R} \eta (v)$ is a nonabelian Lie triple system of rank $1$. Note that $v \in \nu_u (K\cdot u)$, since $H \oplus \mathbb{R}\eta (v) = \nu_v(K\cdot v)$, and so the rank of $\nu_u(K\cdot u)$ is $r$. Let $N \subset M$ be the symmetric space of rank $1$ associated with $E(v)\oplus \mathbb{R} \, \eta (v)$. The marked Dynkin diagram of $N$ consists of one of the nodes of the Dynkin diagram of $M$ with corresponding multiplicity $m$ (see e.g.\ \cite{HO92}). In our particular situation there are no double roots and $m \in \{1,2,4,8\}$. Therefore, $N = G'/K'$ is an $(m+1)$-dimensional real hyperbolic space, where $G' \subset G$ are the glide transformations of $N$ (and so $K' \subset K$). If $X \subset M$ is the totally geodesic submanifold with $T_oX = \nu_u(K\cdot u)$, then \[ X = \mathbb{R}^{r-1} \times N = (\mathbb{R}^{r-1}\times G')/K'. \] The image of the representation of $(K_u)^o$ on $T_oX = \nu_u(K \cdot u)$ coincides with the image of the isotropy representation of $K'$ on $T_oX$ (see \cite[Theorem 2]{HO92}). Note that $K_v \subset K_u$, since $K \cdot v$ is a principal orbit and $u= v + \xi \in \nu_v(K\cdot v)$. Hence $(K_u)_v = K_v$ and then \[ \left.(K_v)^o\right|_{\nu_u(K\cdot u)} = \left.((K_u)_v)^o\right|_{\nu_u(K\cdot u)} = \left.(K'_v)^o\right|_{\nu_u(K\cdot u)}, \] where we regard $K'$ as the isotropy group at $o$ of $X$, acting on $T_oX = H \oplus (E(v) \oplus \mathbb{R} \eta (v))$. In particular, \begin{equation}\label{1} \left.(K_v)^o\right|_{E(v)} = \left.(K'_v)^o\right|_{E(v)}. \end{equation} Note that $\langle \eta (v), -v \rangle =1$ since $A_{-v}= \id$. We write $v = c\eta (v) + w$ with $c \neq 0$ and $w\in H$. 
Since $K'$ acts trivially on $H$, we get $K'_v = K'_{\eta (v)}$ and so $E(v)$ may be regarded as the tangent space of the unit sphere of $T_oN$ at $\frac{1}{\Vert \eta (v)\Vert}\eta (v)$. Since $N$ is an $(m+1)$-dimensional real hyperbolic space, we get $K' \cong SO_{m+1}$ and so $\left.K'_v\right|_{E(v)} \cong SO_m$, which acts irreducibly on $E(v)$. Then, from (\ref{1}), $(K_v)^o$ acts irreducibly on $E(v)$. \end{proof} The following proposition follows from the classification of polars (see \cite{NT00}, though the meridians are not explicitly listed there). Nevertheless, we include a direct proof here based on general arguments. Let $\Phi$ be a root system in $\mathbb{R}^{r+1}$ of type $A_r$. We can assume that $\Phi = \{e_i - e_j : i,j \in \{1, \ldots , r+1\}, \ i\neq j\}$, where $e_1, \ldots , e_{r+1}$ is the canonical basis of $\mathbb{R}^{r+1}$. Then $\Delta = \{ e_i - e_{i+1}: 1 \leq i \leq r\}$ is a set of simple roots of $\Phi$ and $\Phi^+ = \{e_i - e_j : 1 \leq i < j\leq r +1\}$ is the resulting set of positive roots. Note that $e_1-e_2, \ldots , e_r-e_{r+1}$ is a basis of the hyperplane $\mathbb{R}_0^{r+1} = \{x \in \mathbb{R}^{r+1} : x_1 + \ldots + x_{r+1} = 0\}$ of $\mathbb{R}^{r+1}$. The Weyl group $W$ generated by the hyperplane reflections associated with $\Phi$ acts irreducibly on $\mathbb{R}_0^{r+1}$. Let $\tilde{\Phi}$ be a nonempty proper root subsystem of $\Phi$. Then, up to a suitable relabelling, $\tilde{\Phi}$ is characterized in the following way: there exist integers $0 = d_0 < d_1< \ldots < d_k \leq r+1$ with $d_{\nu+1} - d_\nu \geq 2$ such that \[ \tilde{\Phi} = \{e_i-e_j : i,j \in \{d_\nu +1, \ldots , d_{\nu+1}\}, \ i\neq j,\ 0 \leq \nu \leq k-1\}. \] The key fact for proving this well-known characterization is that if $e_i-e_j$ and $e_j -e_l$ belong to $\tilde{\Phi}$, then $e_i-e_l$ belongs to $\tilde{\Phi}$ as well, which follows by applying to $e_j-e_l$ the hyperplane reflection of $\mathbb{R}_0^{r+1}$ determined by $e_i-e_j$. 
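As a simple illustration of this characterization, take $r = 3$, $k = 2$, $d_1 = 2$ and $d_2 = 4$. Then
\[
\tilde{\Phi} = \{\pm(e_1-e_2)\} \cup \{\pm(e_3-e_4)\},
\]
a root subsystem of type $A_1 \times A_1$. The vector $z = (1,1,-1,-1) \in \mathbb{R}_0^{4}$ satisfies $\langle \alpha , z \rangle = 0$ for all $\alpha \in \tilde{\Phi}$, whereas $\langle \alpha , z \rangle = \pm 2$ for all $\alpha \in \Phi \setminus \tilde{\Phi}$, so $\tilde{\Phi}$ is recovered from $\Phi$ as the set of roots orthogonal to $z$.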
Note that $\tilde{\Phi}$ is the direct sum of root systems of type $A_{d_{\nu+1}-(d_\nu +1)}$, $\nu =0, \ldots , k-1$. Consider the following set ($I$) of $k + r -d_k + 1 \geq 2$ linearly independent linear equations of $\mathbb{R}^{r+1}$: \[ (I) = \begin{cases} x_{d_\nu +1} + x_{d_\nu + 2} + \ldots + x_{d_{\nu+1}} = 0 & ,\ \nu = 0, \ldots , k-1, \\ x_{\mu+1} = 0 & ,\ \mu = d_k,\ldots,r. \end {cases} \] If $V \subset \mathbb{R}^{r+1}_0$ is the subspace determined by (I), then \[ \tilde{\Phi} = \Phi \cap V. \] Any $\alpha \in \Phi \setminus \tilde{\Phi}$ projects nontrivially onto $V^\perp \cap \mathbb{R}^{r+1}_0$. Then, since $\Phi \setminus \tilde{\Phi}$ is finite, there exists $z\in V^\perp \cap \mathbb{R}^{r+1}_0$ such that \begin {equation} \label{3} \tilde{\Phi} = \{\alpha \in \Phi : \langle \alpha , z \rangle = 0\}. \end {equation} We will use this equality in the proof of the following proposition. \begin{prop} \label{A_r} Let $M = G/K$ be a simply connected Riemannian symmetric space of compact type with $r = \rk(M) \geq 2$ and root system of type $A_r$. Let $\bar M = G/\bar K$ be the bottom space of $M$. Then $V \subset T_{\bar o}\bar M$ is the tangent space to a meridian of $\bar M$ if and only if $V$ is the normal space to an extrinsically symmetric isotropy orbit of $\bar K^o \cong K$ (or equivalently, for our spaces, to a most singular orbit). In particular, $V$ is a nonsemisimple Lie triple system of $T_{\bar o}\bar M$. \end{prop} \begin{proof} Let $\bar{M}$ be the bottom space of $M$ and $\gamma_{\bar{v}}: [0,1] \to \bar{M}$ be a closed geodesic with period $1$ and $\gamma_{\bar{v}}(0)= \bar{o} = \gamma_{\bar{v}}(1)$. Consider the polar $\bar{M}^+(\bar{p})$ and the meridian $\bar{M}^-(\bar{p})$ through the antipodal point $\bar{p} = \gamma_{\bar{v}}(\frac{1}{2})$ of $\bar{o}$. 
The meridian $\bar{M}^-(\bar{p})$ is the connected component containing $\bar{o}$ of $\Fix_{\bar{M}}(g^{\bar{v}})$ and $T_{\bar{o}}\bar{M}^-(\bar{p}) = \Fix_{T_{\bar{o}}\bar{M}}(\ell^{\bar{v}})$. Recall that $\bar{M} = G/\bar{K}$ with $\bar{K} = G_{\bar{o}}$, $\bar{K}^o \cong K$, $g^{\bar{v}} = kg^{\bar{v}}k^{-1}$ and $\ell^{\bar{v}} \circ d_{\bar{o}}k = d_{\bar{o}}k \circ \ell^{\bar{v}}$ for all $k \in (\bar{K}^o)_{\bar{v}}$, and $\ell^{\bar{v}}(\bar{K} \cdot \bar{v}) = \bar{K} \cdot \bar{v}$. We now choose $\bar{w} \in \bar{v} + \nu_{\bar{v}}(\bar{K}^o \cdot \bar{v})$ such that the orbit $\bar{K}^o \cdot \bar{w}$ is principal, and thus isoparametric. According to \cite[Proposition 2.4]{BOR19} and its proof, the linear isometry $\ell^{\bar v}$ fixes pointwise the normal space $ \nu_{\bar{v}}(\bar{K}^o \cdot \bar{v})$. In fact, the parallel transport $\ell^{\bar{v}}$ along $\gamma_{\bar{v}}$ must be trivial when restricted to any abelian subspace of $T_{\bar{o}}\bar{M}$ containing $\bar{v}$. We have $(\bar{K}^o)_{\bar{w}} \subseteq (\bar{K}^o)_{\bar{v}}$ since $\bar{K}^o \cdot \bar{w}$ is a principal orbit. Moreover, we have $\nu_{\bar{w}}(\bar{K}^o \cdot \bar{w}) \subseteq \nu_{\bar{v}}(\bar{K}^o \cdot \bar{v})$ by construction. Then $\ell^{\bar{v}}(\bar{K}^o \cdot \bar{w}) = \bar{K}^o \cdot \bar{w}$, $\ell^{\bar{v}} \circ d_{\bar{o}}k = d_{\bar{o}}k \circ \ell^{\bar{v}}$ for all $k \in (\bar{K}^o)_{\bar{w}}$, and $\ell^{\bar{v}}$ is the identity on $\nu_{\bar{w}}(\bar{K}^o \cdot \bar{w})$. Let $S$ be the connected component of $\Fix_{T_{\bar{o}}\bar{M}}(\ell^{\bar{v}}) \cap (\bar{K}^o \cdot \bar{w})$ containing $\bar{w}$. Since $\ell^{\bar{v}}$ is an isometry of the orbit $\bar{K}^o\cdot \bar{w}$, we see that $S$ is a totally geodesic submanifold of $\bar{K} ^o \cdot \bar{w}$. Let $W$ be the orthogonal complement of $T_{\bar{w}}S$ in $T_{\bar{w}}(\bar{K}^o \cdot \bar{w})$ and $\alpha$ be the second fundamental form of $\bar{K}^o \cdot \bar{w}$. 
For $X \in W$ and $Y \in T_{\bar{w}}S$ we have \[ \alpha (X,Y) = \ell^{\bar{v}} (\alpha(X,Y)) = \alpha (\ell^{\bar{v}}(X),\ell^{\bar{v}}(Y)) = \alpha(-X,Y) = -\alpha (X,Y), \] and hence $\alpha (X,Y)=0$. It follows that $T_{\bar{w}}S$ is invariant under the shape operator $A_{\bar{\xi}}$ of $\bar{K}^o \cdot \bar{w}$ for any normal vector $\bar{\xi}$ of $S$ at $\bar{w}$. This implies that $S$ is an isoparametric submanifold of $T_{\bar{o}}\bar{M}$ (which is contained in $\bar{w} + W^\perp$). If $E'(\bar{w})$ is the tangent space at $\bar{w}$ of a curvature sphere of $S$, then $E'(\bar{w})$ is contained in the tangent space $E(\bar{w})$ of some curvature sphere of $\bar{K}^o \cdot \bar{w}$. Since $S$ is invariant under the action of $(\bar{K}^o)_{\bar{w}}$, the subspace $E'(\bar{w})$ is also invariant under $(\bar{K}^o)_{\bar{w}}$. From Proposition \ref{irr} we know that $(\bar{K}^o)_{\bar{w}}$ acts irreducibly on $E(\bar{w})$, and consequently \[ E'(\bar{w}) = E(\bar{w}). \] Therefore, any curvature sphere of $S$ is a curvature sphere of $\bar{K}^o \cdot \bar{w}$. This implies that the Weyl group of $S$ is generated by the reflections in some of the reflection hyperplanes of the Weyl group associated with $\bar{K}^o \cdot \bar{w}$. Note that the set $J' = \{\eta'_1(\bar{w}),\ldots,\eta'_d(\bar{w})\}$ of the curvature normals of $S$ at $\bar{w}$ is a subset of the set $J$ of the curvature normals of $\bar{K}^o \cdot \bar{w}$ at $\bar{w}$. From (\ref{3}) and its preceding paragraph, there exists $\bar{z} \in \nu_{\bar{w}}(\bar{K}^o \cdot \bar{w})$ such that $\eta \in J$ satisfies $\langle \eta , \bar{z} \rangle = 0$ if and only if $\eta \in J'$. This implies that $T_{\bar{w}}S$ coincides with the $(+1)$-eigenspace of the shape operator of $\bar{K}^o \cdot \bar{w}$ with respect to $-\bar{w}+\bar{z}$. 
Note that $\bar{z} = \bar{w} + (-\bar{w} +\bar{z})$ and so $\bar{K}^o \cdot \bar{z}$ is a parallel focal orbit of $\bar{K}^o \cdot \bar{w}$ and $S$ is the connected component containing $\bar{w}$ of a fiber of the parallel map from $\bar{K}^o \cdot \bar{w}$ onto $\bar{K}^o \cdot \bar{z}$. Then \[ \nu_{\bar{z}}(\bar{K}^o \cdot \bar{z}) = T_{\bar{w}}S \oplus \nu_{\bar{w}}(\bar{K}^o \cdot \bar{w}). \] Note that $\ell^{\bar{v}}(\bar{z}) = \bar{z}$ and $\ell^{\bar{v}}(\bar{K}^o \cdot \bar{z}) = \bar{K}^o \cdot \bar{z}$. Moreover, $\ell^{\bar{v}}$ is the identity on $\nu_{\bar{z}}(\bar{K}^o \cdot \bar{z})$ and minus the identity on $T_{\bar{z}}(\bar{K}^o \cdot \bar{z})$. Thus we see that $\bar{K}^o \cdot \bar{z}$ is extrinsically symmetric. Since the set of fixed vectors of $\ell^{\bar{v}}$ coincides with $\nu_{\bar{z}} (\bar{K}^o \cdot \bar{z})$, the assertion follows. \end{proof} The next two results are not related to the above, but will be useful later. \begin{prop} \label{indexSigmaM} Let $M$ be an irreducible Riemannian symmetric space and $\Sigma$ be a connected totally geodesic submanifold of $M$ with $\dim(\Sigma) < \dim(M)$. Then $i(\Sigma) \leq i(M)$. \end{prop} \begin{proof} Let $\Sigma'$ be a maximal totally geodesic submanifold of $M$ with $d = \codim(\Sigma') = i(M)$. We can assume that $o \in \Sigma \cap \Sigma'$. By assumption, the isotropy group $K$ of $M = G/K$ acts irreducibly on $T_oM$. Therefore, for all $0 \neq X \in T_oM$ the span of $\{d_ok(X) : k \in K\}$ is equal to $T_oM$. It follows that there exists $k \in K$ such that $T_o\Sigma$ is not contained in $d_ok(T_o\Sigma')$, or equivalently, $\Sigma$ is not contained in $k(\Sigma')$. Then $\Sigma \cap k(\Sigma')$ is a proper totally geodesic submanifold of $\Sigma$ and the codimension of $\Sigma \cap k(\Sigma')$ in $\Sigma$ is less than or equal to $d$. Consequently, $i(\Sigma) \leq d = i(M)$. 
\end{proof} \begin{re} \label{obstruction} {\rm Let $M = G/K$ and $\Sigma = G'/K'$ be irreducible Riemannian symmetric spaces of noncompact type. We have the following obvious necessary conditions for the existence of a totally geodesic embedding of $\Sigma$ into $M$: \begin{align*} & \dim(\Sigma) < \dim(M),\ \dim(G') < \dim(G),\ \dim(K') < \dim(K), \\ & \rk(\Sigma) \leq \rk(M),\ \rk(G') \leq \rk(G),\ \rk(K') \leq \rk(K). \end{align*} Proposition \ref{indexSigmaM} shows that the index imposes further necessary conditions: \[ i(\Sigma) \leq i(M) \leq \dim(M) - \dim(\Sigma). \] Choose for example $M = SO^o_{d, d +k}/SO_d SO_{d+k}$. For given $\Sigma$, if we choose $k$ sufficiently large and $d \geq \rk(\Sigma)$, then the first set of necessary conditions is satisfied. We know from \cite{BO18} that $\rk(\Sigma) \leq i(\Sigma)$, and from \cite{BO16} that equality holds only for $SL_{r+1}(\mathbb{R})/SO_{r+1}$ and $SO^o_{r, r +l}/SO_r SO_{r+l}$. If $\Sigma$ is different from these symmetric spaces, then we can choose $d$ with $\rk(\Sigma) \leq d < i(\Sigma)$. Then $i(\Sigma) > d = i(M)$ and therefore the second set of necessary conditions tells us that there cannot be a totally geodesic embedding of $\Sigma$ into $SO^o_{d, d +k}/SO_d SO_{d+k}$ for any $k \geq 0$. Thus the index gives a useful additional obstruction for the existence of totally geodesic embeddings (or immersions in the compact case) in addition to the standard obstructions given by dimensions and ranks.} \end{re} \begin{re} {\rm It is known that every Riemannian symmetric space $M$ of noncompact type admits a totally geodesic embedding into $SL_n(\mathbb{R})/SO_n$ for some $n \in \mathbb{N}$. This is a consequence of the well-known unitary trick (and can also be seen as a particular case of Karpelevich's Theorem by embedding $I(M)^o$ into the special linear group via the adjoint representation). 
By duality, any symmetric space $M$ of compact type admits a totally geodesic immersion into $SU_n/SO_n$ for some $n \in \mathbb{N}$. The symmetric space $SU_n/SO_n$ is a totally geodesic hypersurface of the symmetric space $U_n/SO_n$. The symmetric space $U_n/SO_n$ is a symmetric $R$-space, arising as a symmetric orbit of the isotropy representation of the symmetric space $Sp_n/U_n$. Hence $U_n/SO_n$ admits an isometric immersion into $\mathbb{R}^{n(n+1)}$ with parallel second fundamental form. Then, by a well-known result of Vilms \cite{V72}, the corresponding Gauss map is totally geodesic. Consequently, every Riemannian symmetric space of compact type admits a totally geodesic immersion into some real Grassmannian $SO_{2r+k}/SO_rSO_{r+k}$. Then $i(M) \leq i (SO_{2r+k}/SO_rSO_{r+k}) = r$. Thus the index of $M$ is a lower bound for the rank $r$ of a real Grassmannian into which $M$ can be totally geodesically immersed (where the index of $M$ is defined to be the sum of the indices of the locally irreducible components of $M$).} \end{re}  The next result is useful for the investigation of totally geodesic submanifolds in reducible Riemannian symmetric spaces. \begin{prop} \label{productestimate} Let $M = M_1 \times M_2$ be the Riemannian product of two irreducible Riemannian symmetric spaces $M_1,M_2$ and $\Sigma = G'/K'$ be a totally geodesic submanifold of $M$. Let $o = (o_1,o_2) \in \Sigma$ and assume that $T_{o_1}M_1 \times \{0\}$ and $\{0\} \times T_{o_2}M_2$ are not contained in $T_o\Sigma \subseteq T_oM = T_{o_1}M_1 \times T_{o_2}M_2$. Then $\codim(\Sigma) \geq i(M_1) + i(M_2)$. \end{prop} \begin{proof} Let $\pi_j : M = M_1 \times M_2 \to M_j$ be the canonical projection. 
If $\pi _1(\Sigma) \subsetneq M_1$ and $\pi_2(\Sigma) \subsetneq M_2$, then $\Sigma$ is contained in the totally geodesic submanifold $\pi_1(\Sigma) \times \pi _2(\Sigma)$ of $M$ and \begin{align*} \codim_M(\Sigma) & \geq \codim_M(\pi_1(\Sigma) \times \pi _2(\Sigma)) = \codim_{M_1}(\pi_1(\Sigma)) + \codim_{M_2}(\pi_2(\Sigma)) \\ & \geq i (M_1) + i(M_2). \end{align*} We can therefore assume, without loss of generality, that $\pi _1(\Sigma) = M_1$. We define two subspaces $V_j = \ker(d_o\pi_j|_{T_o\Sigma})$ ($j =1,2$) of $T_o\Sigma$ and note that both are $K'$-invariant. {\sc Case 1: $V_1 = \{0\}$.} Then $\pi_1|_\Sigma : \Sigma \to M_1$ is a totally geodesic local diffeomorphism and hence an affine map. It follows that $\pi_1|_\Sigma$ is a homothety and therefore $\Sigma$ is an irreducible symmetric space. If $\dim (M_1) \leq \dim (M_2)$, then $\dim(\Sigma) = \dim (M_1) \leq \frac{1}{2}\dim (M)$ and \[ \codim(\Sigma) \geq \textstyle{\frac{1}{2}}\dim(M) = \textstyle{\frac{1}{2}}\dim(M_1) + \textstyle{\frac{1}{2}}\dim(M_2) \geq i(M_1) + i(M_2), \] where the last inequality follows from the fact that every irreducible symmetric space contains at least one pair of perpendicular reflective submanifolds (\cite{L75}, \cite{L79}). If $\dim(M_1) > \dim(M_2)$, then $V_2 \neq \{0\}$ because of $\dim(\Sigma) = \dim(M_1)$. Since $V _2$ is $K'$-invariant and $\Sigma$ is irreducible, it follows that $V_2 = T_o\Sigma$. Then $\Sigma = M_1$ and hence $T_o\Sigma = T_{o_1}M_1 \times \{0\}$, which contradicts the assumption. {\sc Case 2: $V_1 \neq \{0\}$.} Then $\Sigma$ is a Riemannian product $\Sigma = \Sigma _1 \times \Sigma _2$, where $\Sigma_1$ is homothetic to $M_1$ and $\pi_2|_{\Sigma_2} : \Sigma _2 \to M_2$ is a totally geodesic immersion with $T_o\Sigma _2 = V_1$. If $\pi_2|_{\Sigma_1} : \Sigma_1 \to M_2$ is constant, then $\Sigma_1 = M_1$, which contradicts the assumption. Let us consider the totally geodesic map $\pi_2|_{\Sigma} : \Sigma \to M_2$. Assume that $V _2 \neq \{0\}$. 
Since $(\pi_1,\pi _2)|_\Sigma$ is the inclusion map of $\Sigma$ into $M$, we have $V_1 \cap V_2=\{0\}$. Then, as $V _2$ is $K'$-invariant, we must have $V_2= T_o\Sigma _1$, which implies $T_o\Sigma _1 = T_{o_1}M_1$ and contradicts the assumption. Therefore $\pi_2|_\Sigma : \Sigma \to M_2$ is a totally geodesic immersion. If $\pi_2(\Sigma) = M_2$, then $\pi_2|_\Sigma : \Sigma \to M_2$ is an affine local diffeomorphism and so $M_2 = \Sigma = \Sigma_1\times \Sigma_2$ up to rescaling of the metric in each irreducible factor. This is a contradiction since $M_2$ is irreducible. Thus $\pi_2(\Sigma)$ is strictly contained in $M_2$ and so \[ \dim(\Sigma) \leq \dim (M_2) - i(M_2) \leq \dim (M_1) - i(M_1) + \dim(M_2) - i(M_2), \] which implies $\codim(\Sigma) \geq i(M_1) + i (M_2)$. \end{proof} \section{Lagrangian Grassmannians} \label{LG} The complex $2$-plane Grassmannian $G_2(\mathbb{C}^{2r+2}) = SU_{2r+2}/S(U_2U_{2r})$ is the complexification of the quaternionic projective space $\mathbb{H} P^r = Sp_{r+1}/Sp_1Sp_r$, or equivalently, $\mathbb{H} P^r$ is a real form of $G_2(\mathbb{C}^{2r+2})$. The symmetric space $M = SU_{2r+2}/Sp_{r+1}$ is the Lagrangian Grassmannian of all real forms of $G_2(\mathbb{C}^{2r+2})$ that are congruent to $\mathbb{H} P^r$. We have $\rk(M) = r$ and $\dim(M) = r(2r+3)$. The associated root system is of type $A_r$ and all roots have multiplicity $4$. The symmetric space $M = SU_4/Sp_2 \cong Spin_6/Spin_5$ is isometric to $S^5$ and hence $i(M) = 1$. Oni\v{s}\v{c}ik (\cite{O80}) proved that $i(M) = 6$ for $M = SU_6/Sp_3$. In this section we will prove that $i(SU_8/Sp_4) = 11$ and $i(SU_{2r+2}/Sp_{r+1}) = 4r$ for $r \geq 4$. Throughout this section we assume that $r \geq 3$. We know from \cite{L79} that $i_r(SU_8/Sp_4) = 11$ and $i_r(SU_{2r+2}/Sp_{r+1}) = 4r$ for $r \geq 4$. \begin{lm} \label {SigmaA_r} Let $\Sigma$ be a maximal, locally reducible, totally geodesic submanifold of $M = SU_{2r+2}/Sp_{r+1}$. 
Assume that $\Sigma$ has a local de Rham factor $\Sigma^1 = G^1/K^1 \subset M$ whose root system is not of type $A_s$, where $s = \rk(\Sigma^1)$. Then $\Sigma$ is nonsemisimple. \end{lm} \begin {proof} Let $\bar{M}$ be the bottom space of $M$ and $\pi : M \to \bar{M}$ be the canonical projection. We use the notations introduced in Section \ref{gsr}. Let $\bar{\Sigma}$ be the maximal totally geodesic submanifold of $\bar{M}$ with $T_{\bar{o}}\bar{\Sigma} = d_o\pi(T_o\Sigma)$. We denote by $\bar{\Sigma}^1 = G^1/\bar{K}^1 \subset \bar{M}$ the corresponding local de Rham factor of $\bar{\Sigma}$ whose root system is not of type $A_s$. Locally, around $\bar{o}$, we can write $\bar{\Sigma} = \bar{\Sigma}^1 \times \bar{\Sigma}'$. Assume that $\Sigma$ is semisimple. Then $\bar{\Sigma}$ is semisimple. Let $\alpha_1, \ldots, \alpha_s$ be a set of simple roots of the root system of $\bar{\Sigma}^1$ and $\bar{H}^1,\ldots,\bar{H}^s \in T_{\bar{o}}\bar{\Sigma}^1$ be the dual basis of $\alpha_1, \ldots, \alpha_s$. Let $\alpha = \delta _1\alpha _1 + \ldots + \delta_s\alpha _s$ be the highest root. Since the root system of $\bar{\Sigma}^1$ is not of type $A_s$, we have $\delta_i > 1$ for some $i \in \{1, \ldots , s \}$. Then $(\bar{K}^1)^o \cdot \bar{H}^i$ is a most singular and not extrinsically symmetric orbit in $T_{\bar{o}}\bar{\Sigma}^1$. Since $\bar{H}^i$ is a most singular vector, we can rescale $\bar{H}^i$ to a vector $\bar{v} \in T_{\bar{o}}\bar{\Sigma}^1$ so that the closed geodesic $\gamma_{\bar{v}} : [0,1] \to \bar{\Sigma}^1$ has period $1$ (see e.g.\ proof of \cite[Proposition 2.4]{BOR19}). Let $\bar{p} = \gamma_{\bar{v}}(\frac{1}{2})$ be the antipodal point of $\bar{o}$ on $\gamma_{\bar{v}}$. It follows from Proposition \ref{ssm} that the tangent space $T_{\bar{o}}(\bar{\Sigma}^1)^-(\bar{p})$ of the meridian $(\bar{\Sigma}^1)^-(\bar{p})$ is a semisimple Lie triple system. 
It may happen that $T_{\bar{o}}(\bar{\Sigma}^1)^-(\bar{p}) = T_{\bar{o}}\bar{\Sigma}^1$ if $\bar{\Sigma}^1$ has poles. The meridian $(\bar{\Sigma}^1)^-(\bar{p})$ of $\bar\Sigma^1$ is contained in the meridian $\bar{M}^-(\bar{p})$ of $\bar{M}$. By Proposition \ref{A_r}, $V = T_{\bar{o}}\bar{M}^-(\bar{p})$ is a nonsemisimple Lie triple system of $T_{\bar{o}}\bar{M}$. Moreover, as explained in Section \ref{gsr}, $V$ contains the centralizer ${\mathfrak{z}}_{T_{\bar{o}}\bar{M}}(\bar{v})$ of $\bar{v}$ in $T_{\bar{o}}\bar{M}$. Note that $T_{\bar{o}}\bar{\Sigma}' \subset {\mathfrak{z}}_{T_{\bar{o}}\bar{M}}(\bar{v})$. Note that \[ W = T_{\bar{o}}(\bar{\Sigma}^1)^-(\bar{p}) \oplus T_{\bar{o}}\bar{\Sigma}' \] is a semisimple Lie triple system in $T_{\bar{o}}\bar{M}$ which is contained in the nonsemisimple Lie triple system $V$. This implies that there exists $\bar{z} \in V$ with $\bar{z} \notin W$ so that $[\bar{z}, W] = \{0\}$. In particular, $[\bar{z}, T_{\bar{o}}\bar{\Sigma}'] = \{0\}$. Then \[ U = {\mathfrak{z}}_{T_{\bar{o}}\bar{M}}(T_{\bar{o}}\bar{\Sigma}') + T_{\bar{o}}\bar{\Sigma}' \] is a proper Lie triple system in $T_{\bar{o}}\bar{M}$. Since $\bar{z} \in U$, we see that $T_{\bar{o}}\bar{\Sigma}$ is properly contained in $U$. This is a contradiction to the maximality of $\bar{\Sigma}$. It follows that $\Sigma$ is nonsemisimple. \end{proof} \begin {cor} \label {corSigmaA_r} Let $M = SU_{2r +2}/Sp_{r+1}$ and $\Sigma$ be a maximal totally geodesic submanifold of $M$. Assume that $\Sigma$ has a local de Rham factor whose root system is not of type $A$. Then $\Sigma$ is locally irreducible. \end{cor} \begin {proof} Assume that $\Sigma$ is locally reducible. Then $\Sigma$ is nonsemisimple by Lemma \ref {SigmaA_r}. From \cite[Theorem 1.2]{BO16} we see that $T_o\Sigma$ coincides with the normal space of an extrinsically symmetric isotropy orbit. 
However, from \cite[Table 3]{BO16} we know that the root system of any irreducible factor of a maximal nonsemisimple totally geodesic submanifold of $M$ is of type $A$. It follows that $\Sigma$ must be locally irreducible. \end {proof} From the classification of Riemannian symmetric spaces and their root systems with multiplicities (see e.g.\ \cite[Section 13.1]{BCO16}) we immediately get the following result. \begin {lm}\label{AAA} Let $\Sigma = G'/K'$ be an irreducible simply connected Riemannian symmetric space of compact type whose root system is not of type $A_s$, $s = \rk(\Sigma)$. Let $m_1$ and $m_2$ be the associated multiplicities of $\Sigma$. Then one of the following statements holds: \begin{itemize} \item[({\rm i})] $\Sigma = G'/K'$ is inner, that is, $\rk(G') = \rk(K')$; \item [({\rm ii})] $\max\{m_1, m_2\}\leq 2$; \item [({\rm iii})] $\Sigma = SO_{2s+n}/SO_sSO_{s+n}$ with $s \geq 3$ odd and $n \geq 4$ even. {\rm [}In this case $\Sigma$ is outer, $\rk(\Sigma) = s$ and $(m_1,m_2) = (1,n)$.{\rm ]} \end{itemize} \end{lm} We will now investigate these three possibilities in more detail. \begin{prop} \label{grassmannian} Let $\Sigma$ be a totally geodesic submanifold of $M = SU_{2r +2}/Sp_{r+1}$ and assume that $\Sigma$ is locally isometric to $SO_{2s+n}/SO_sSO_{s+n}$ with $s \geq 3$ odd and $n \geq 4$ even. Then $\codim(\Sigma) > i_r(M)$. \end{prop} \begin {proof} From \cite[Table 5]{BO16} we know that \[ \Sigma' = SO_{2s+n-1}/SO_sSO_{s+n-1} \] is a maximal totally geodesic submanifold of $SO_{2s+n}/SO_sSO_{s+n}$ for which the codimension is equal to the index of $SO_{2s+n}/SO_sSO_{s+n}$. Furthermore, $\Sigma'$ is a reflective submanifold and its complementary reflective submanifold $(\Sigma')^\perp$ is locally isometric to an $s$-dimensional sphere $S^s$. Note that $\Sigma'$ is an inner symmetric space because $s$ is odd and $n$ is even, and hence its geodesic symmetry $\tau$ at $o$ is in $SO_sSO_{s+n-1}$. 
Moreover, we have $d_o\tau = I_s \otimes (-I_{s+n-1})$. Note that $\tau$ is in the center $\{(I_s,\pm I_{s+n-1})\}$ of $SO_sSO_{s+n-1}$, because $s \geq 3$ is odd and $n \geq 4$ is even. Therefore $\tau$ can be considered as an involutive isometry of $SO_{2s+n}/SO_sSO_{s+n}$ with $d_o\tau(X) = X$ for all $X \in T_o(\Sigma')^\perp$. Geometrically, $\tau$ is the isometric reflection of $SO_{2s+n}/SO_sSO_{s+n}$ in the reflective submanifold $(\Sigma')^\perp$. We now consider this setup in the bottom space $\bar{M}$ of $M$ via the canonical projection $\pi : M \to \bar{M}$ with the corresponding totally geodesic submanifolds $\bar{\Sigma}$, $\bar{\Sigma}'$, $(\bar{\Sigma}')^\perp$ and involution $\bar{\tau}$. If $\bar{\Sigma}$ is a reflective submanifold of $\bar{M}$, then $\Sigma$ is a reflective submanifold of $M$ and the assertion is obvious. So let us assume that $\bar{\Sigma}$ is not a reflective submanifold of $\bar{M}$. Since $\bar{\tau}$ is an involutive isometry in the center of the identity component $(\bar{K}')^o$ of the isotropy group $\bar{K}'$ of $\bar{\Sigma}$, we can consider $\bar{\tau}$ as an involutive isometry of $\bar{M}$. Let $V^+$ and $V^- $ be the $(+1)$- and $(-1)$-eigenspaces of $d_{\bar{o}}\bar{\tau}$, respectively. Note that $V^+$ and $V^- $ are complementary reflective Lie triple systems. Moreover, we have $T_{\bar{o}}\bar{\Sigma}' \subset V^-$ and $T_{\bar{o}}(\bar{\Sigma}')^\perp \subset V^+$ by construction of $\bar{\tau}$. If $T_{\bar{o}}\bar{\Sigma}'= V^- $, then $\bar{\Sigma}'$ is reflective and hence $\bar{\Sigma}$ is reflective by \cite[Corollary 2.9]{BOR19} (since $\bar{\Sigma}$ contains $\bar{\Sigma}'$), which contradicts the assumption that $\bar{\Sigma}$ is not a reflective submanifold of $\bar{M}$. Thus we have a nontrivial orthogonal decomposition \[ V^- = T_{\bar{o}}\bar{\Sigma}' \oplus V^-_1. \] If $\dim(V^-_1) \geq s$, then $V^-$ is a reflective Lie triple system with $\dim(V^-) \geq \dim(\bar{\Sigma})$. 
The reflective submanifold $\tilde{\Sigma}$ of $M$ with $d_o\pi(T_o\tilde{\Sigma}) = V^-$ then satisfies $\dim(\tilde{\Sigma}) \geq \dim(\bar{\Sigma}) = \dim(\Sigma)$. If $\dim (V^-_1) < s$, then the isotropy group $SO_s SO_{s+n-1}$ acts trivially on $V^-_1$. Since $\bar{\tau}$ belongs to this isotropy group by construction, $d_{\bar{o}}\bar{\tau}$ is the identity on $V^-_1$, which is a contradiction to $V^-_1 \subset V^-$. Since $n\geq 4$, $SO_{s+n-1}$ (which contains the symmetry $\tau$) acts trivially on $\mathbb{R}^d$ for all $d \leq s+2$. This implies that $\dim(\tilde{\Sigma}) \geq \dim(\Sigma) + 2 > \dim (\Sigma)$, and so $\codim(\Sigma) > i_r(M)$. \end{proof} \begin{prop} \label{mleq2} Let $\Sigma = G'/K'$ be a locally irreducible totally geodesic submanifold of $M= SU_{2r +2}/Sp_{r+1}$ ($r \geq 3$) with associated multiplicities $m_1$ and $m_2$. If $\max \{m_1, m_2\} \leq 2$, then $\codim(\Sigma) \geq 4r \geq i_r(M)$. \end{prop} \begin{proof} Recall that the root system associated with $M$ is of type $A_r$ and all roots have multiplicity $4$. The number of reflection hyperplanes of a Weyl group of type $A_r$ is $\frac {r(r+1)}{2}$ and coincides with the number $\vert \Phi ^+\vert$ of positive roots in the corresponding root system. Since every root has multiplicity $4$, we get $\dim(M) = 4\vert \Phi ^+\vert +r = 2r^2 +3r$. Let $l$ be the number of reflection hyperplanes of the Weyl group of the universal covering space $\tilde{\Sigma}$ of $\Sigma$. Then $l = \bar{l} + \bar{s}$, where $\bar{l}$ is the number of long positive roots and $\bar{s}$ is the number of short positive roots in the root system associated with $\tilde{\Sigma}$, taking into account our conventions made near the beginning of Section \ref{gsr}. Then, using the assumption that $\max \{m_1, m_2\} \leq 2$ and the fact that $\rk(\Sigma) \leq \rk(M)$, we get \[ \dim (\Sigma) \leq l \cdot \max \{m_1, m_2\} + \rk(\Sigma) \leq 2l + r . 
\] From Proposition \ref{cardinal} we know that $l \leq \frac {r(r+1)}{2}$ and hence \[ \dim (\Sigma) \leq 2\frac {r(r+1)}{2} + r = r^2 + 2r. \] This implies \[ \codim(\Sigma) = \dim(M) - \dim(\Sigma) \geq 2r^2 + 3r - r^2 - 2r = r^2 + r = r(r+1) \geq 4r, \] since $r\geq 3$. \end{proof} \begin{prop} \label{factorsAAA} Let $\Sigma$ be a maximal semisimple totally geodesic submanifold of $M= SU_{2r +2}/Sp_{r+1}$. If the root system of every local de Rham factor of $\Sigma$ is of type $A$, then $\codim(\Sigma) > 4r$. \end{prop} \begin{proof} We denote by $\tilde{\Sigma}$ the Riemannian universal covering space of $\Sigma$ and consider its de Rham decomposition $\tilde{\Sigma} = \tilde{\Sigma}_1 \times \ldots \times \tilde{\Sigma}_b$, $b \geq 1$. {\sc Case 1.} Assume that $\tilde{\Sigma}_1 = S^{k_1}$ for some $k_1 \geq 2$ (after a suitable relabelling of the factors). If $\tilde{\Sigma}$ is reducible, it follows from (the dual version of) Proposition \ref{hyperbolicfactors} that $\tilde{\Sigma} = S^{k_1} \times S^{k_2}$ with $k_2 \geq 2$. The rank of the isotropy group of $\tilde{\Sigma}$ must satisfy \[ \rk(SO_{k_1} \times SO_{k_2}) \leq \rk(Sp_{r+1}) = r+1. \] Since \[ \frac{1}{2} \dim(\Sigma) -1 = \frac{k_1 -1}{2} + \frac{k_2 -1}{2} \leq \rk(SO_{k_1}\times SO_{k_2}), \] we obtain \[ \dim (\Sigma)\leq 2\rk(SO_{k_1}\times SO_{k_2}) + 2 \leq 2r +4. \] From this we get \[ \codim(\Sigma) = \dim(M) - \dim (\Sigma) \geq (2r^2 + 3r) - (2r+4) = 2r^2 + r - 4 > 4r \] since $r \geq 3$, which proves the assertion in this case. If $\tilde{\Sigma}$ is irreducible, we have $\tilde{\Sigma} = S^{k_1}$. We must have $\left[\frac{k_1}{2}\right] = \rk(SO_{k_1}) \leq \rk(Sp_{r+1}) = r + 1$, which gives $k_1 \leq 2r+3$ and hence \[ \codim(\Sigma) = r(2r+3) - k_1 \geq r(2r+3) - (2r+3) = 2r^2 + r - 3 > 4r. \] Since the assertion holds whenever one of the de Rham factors of $\tilde{\Sigma}$ is a sphere, we may assume from now on that none of them is a sphere. {\sc Case 2.} Assume that $\tilde{\Sigma}_1 = E_6/F_4$ (after a suitable relabelling of the factors). 
Recall that $\rk(E_6/F_4) = 2$, $\dim(E_6/F_4) = 26$, and the associated multiplicities of $E_6/F_4$ are $m_1 = 8 = m_2$. It follows from Proposition \ref{multcomp} that $\rk(\tilde{\Sigma}) < \rk(M) = r$. If $r = 3$, then $\tilde{\Sigma} = E_6/F_4$, $\dim(\Sigma) = 26$ and $\dim(SU_8/Sp_4) = 27$. Since $SU_8/Sp_4$ does not admit a totally geodesic hypersurface, this case cannot occur. If $r = 4$, then $\tilde{\Sigma} = E_6/F_4$ since $\tilde{\Sigma}$ cannot have a rank $1$ factor by Case 1. Then $\dim(\Sigma) = 26$, $\dim(SU_{10}/Sp_5) = 44$, and hence $\codim(\Sigma) = 18 > 16 = 4r$. If $r \geq 5$, then $\tilde{\Sigma} = E_6/F_4 \times \tilde{\Sigma}'$, where $\tilde{\Sigma}' = \{0\}$ or $\tilde{\Sigma}' = \tilde{\Sigma}_2 \times \ldots \times \tilde{\Sigma}_b$ and each $\tilde{\Sigma}_i$ is an irreducible, simply connected, Riemannian symmetric space with $\rk(\tilde{\Sigma}_i) \geq 2$ and root system of type $A$. The isotropy group $\tilde{K}$ of $\tilde{\Sigma}$ must satisfy $\rk(\tilde{K}) \leq \rk(Sp_{r+1}) = r+1$, and thus the isotropy group $\tilde{K}'$ of $\tilde{\Sigma}'$ must satisfy $\rk(\tilde{K}') \leq r-3$. From the list of symmetric spaces with root system of type $A$ we can easily find the symmetric spaces $\tilde{\Sigma}'$ of maximal possible dimension with $\rk(\tilde{\Sigma}') \leq r-3$ and $\rk(\tilde{K}') \leq r-3$. They are: \begin{itemize} \item[(i)] $\tilde{\Sigma}' = SU_3$ if $r = 5$; \item[(ii)] $\tilde{\Sigma}' = SU_4$ if $r = 6$; \item[(iii)] $\tilde{\Sigma}' = SU_{2(r-3)}/Sp_{r-3}$ if $r \geq 7$. \end{itemize} In particular, such a $\tilde{\Sigma}'$ is always irreducible. If $r = 5$, then $\dim(\Sigma) \leq \dim(E_6/F_4) + \dim(SU_3) = 34$. Since $\dim(SU_{12}/Sp_6) = 65$, this gives $\codim(\Sigma) \geq 31 > 20 = 4r$. If $r = 6$, then $\dim(\Sigma) \leq \dim(E_6/F_4) + \dim(SU_4) = 41$. Since $\dim(SU_{14}/Sp_7) = 90$, this gives $\codim(\Sigma) \geq 49 > 24 = 4r$. 
If $r \geq 7$, then $\dim(\Sigma) \leq \dim(E_6/F_4) + \dim(SU_{2(r-3)}/Sp_{r-3}) = 26 + (r-4)(2r-5)$. Since $\dim(SU_{2r+2}/Sp_{r+1}) = r(2r+3)$, this gives $\codim(\Sigma) \geq r(2r+3) - 26 - (r-4)(2r-5) = 14r - 46 > 4r$. {\sc Case 3.} $\tilde{\Sigma}$ is a product of factors of the form $SU_{k_i+1}/SO_{k_i+1}$, $SU_{k_i+1}$ or $SU_{2k_i+2}/Sp_{k_i+1}$ ($2 \leq k_i \leq r$) with $k_1 + \ldots + k_b \leq r$. First assume that $\rk(\Sigma) = \rk(M) = r$. Let ${\mathfrak{a}} \subset T_o\Sigma \subset T_oM$ be a maximal abelian subspace. Then, by Proposition \ref{cardinal} and its proof, the root system $\tilde{\Phi}$ associated with the maximal abelian subspace ${\mathfrak{a}}$ of $T_o\Sigma$ is a root subsystem of the root system $\Phi$ associated with the maximal abelian subspace ${\mathfrak{a}}$ of $T_oM$. From (\ref {3}) it follows that there exists $0 \neq z \in {\mathfrak{a}}$ such that the Weyl group $\tilde{W}$ associated with $\tilde{\Phi}$ fixes $z$. This implies that $\Sigma$ is nonsemisimple, which contradicts the assumption that $\Sigma$ is semisimple. Thus we have $\rk(\Sigma) < \rk(M) = r$. From the particular product form of $\tilde{\Sigma}$ it follows easily that $\dim(\Sigma) = \dim(\tilde{\Sigma}) \leq \dim(SU_{2r}/Sp_r) = (r-1)(2r+1)$ and therefore $\codim(\Sigma) = \dim(M) - \dim(\Sigma) \geq r(2r+3) - (r-1)(2r+1) = 4r + 1 > 4r$. \end{proof} We can now state the main result of this section. \begin{thm} \label{LagGras} For $M = SU_{2r+2}/Sp_{r+1}$ we have $i(M) = 4r = i_r(M)$ if $r \geq 4$ and $i(M) = 11 = i_r(M)$ if $r = 3$. \end{thm} \begin{proof} We already know that $i_r(M) = 4r$ if $r \geq 4$ and $i_r(M) = 11$ if $r = 3$. Let $\Sigma$ be a maximal totally geodesic submanifold of $M$. If $\Sigma$ is nonsemisimple, then we have $\codim(\Sigma) \geq 4r$ by \cite[Theorem 4.2]{BO16}. Assume that $\Sigma$ is semisimple. 
If $\Sigma$ is locally reducible, it follows from Corollary \ref{corSigmaA_r} that every local de Rham factor of $\Sigma$ must have a root system of type $A$, which then implies $\codim(\Sigma) \geq 4r$ by Proposition \ref{factorsAAA}. Thus we can assume that $\Sigma$ is locally irreducible. If the root system of $\Sigma$ is not of type $A$, then we have three possibilities by Lemma \ref{AAA}: (i): $\Sigma = G'/K'$ is inner, that is, $\rk(G') = \rk(K')$. Since $M = SU_{2r+2}/Sp_{r+1}$ is an outer symmetric space, $\Sigma$ is a reflective submanifold by Proposition \ref{innSigma} and therefore $\codim(\Sigma) \geq i_r(M)$. (ii): $\max\{m_1, m_2\}\leq 2$. Then $\codim(\Sigma) \geq 4r$ by Proposition \ref{mleq2}. (iii): $\Sigma = SO_{2s+n}/SO_sSO_{s+n}$ with $s \geq 3$ odd and $n \geq 4$ even. Then $\codim(\Sigma) \geq 4r$ by Proposition \ref{grassmannian}. If the root system of $\Sigma$ is of type $A$, then $\codim(\Sigma) > 4r$ by Proposition \ref{factorsAAA}. \end{proof} \section{Quaternionic Grassmannians} In this section we determine the index of the quaternionic Grassmann manifold $M = Sp_{2r+k}/Sp_rSp_{r+k}$, $r \geq 1$, $k \geq 0$. We already know the index for some values of $r$ and $k$ from results in \cite{BO16} (Table 4, Corollary 7.2 and Corollary 7.7). More precisely: For $r = 1$ and $k = 0$ we have $i(M) = 1 (\neq 4r)$. For $r = 1$ and $k > 0$ we have $i(M) = 4 (= 4r)$. For $r = 2$ and $k = 0$ we have $i(M) = 6 (\neq 4r)$. For $r \geq 2$ and $k \geq r-1$ we have $i(M) = 4r$. It therefore remains to determine the index for $Sp_{2r+k}/Sp_rSp_{r+k}$ with $r \geq 3$ and $0 \leq k \leq r-2$. \begin{lm} \label{spreduction} Let $r \geq 3$. If $i(Sp_{2r}/Sp_rSp_r) = 4r$, then $i(Sp_{2r+k}/Sp_rSp_{r+k}) = 4r$ for all $k \geq 0$. \end{lm} \begin{proof} The canonical inclusion $Sp_{2r} \subset Sp_{2r+k}$ leads to a canonical totally geodesic embedding of $\Sigma = Sp_{2r}/Sp_rSp_r = Sp_{2r} \cdot o$ into $M = Sp_{2r+k}/Sp_rSp_{r+k}$. 
From Proposition \ref{indexSigmaM} we obtain $4r = i(\Sigma) \leq i(M)$. On the other hand, from \cite{BO16} we know that $i_r(M) = 4r$. Since we always have $i(M) \leq i_r(M)$, we obtain $4r = i(\Sigma) \leq i(M) \leq i_r(M) = 4r$ and hence $i(M) = 4r$. \end{proof} It follows from Lemma \ref{spreduction} that it suffices to prove $i(Sp_{2r}/Sp_rSp_r) = 4r$ for $r \geq 3$. As we mentioned above, this equality does not hold for $r \in \{1,2\}$. We will prove this equality first for $r \in \{3,4,5\}$ and then for arbitrary $r \geq 6$ by an inductive argument. The next result provides useful bounds for the index of $Sp_{2r}/Sp_rSp_r$. \begin{lm} \label{lowbound} We have $4r-4 \leq i(Sp_{2r}/Sp_rSp_r) \leq 4r$ for all $r \geq 3$. \end{lm} \begin{proof} The second inequality follows from the fact that the reflective index of $M = Sp_{2r}/Sp_rSp_r$ is equal to $4r$ for $r \geq 3$ (see \cite{BO16}). For the first inequality, consider the action of the isotropy group $K = Sp_rSp_r$ on the quaternionic Grassmannian $Sp_{2r}/Sp_rSp_r \cong G_r(\mathbb{H}^{2r})$ of $r$-dimensional quaternionic subspaces of $\mathbb{H}^{2r}$. This action induces a decomposition $\mathbb{H}^{2r} = \mathbb{H}^r \times \mathbb{H}^r$. Define an $r$-dimensional quaternionic subspace $V$ of $\mathbb{H}^{2r} = \mathbb{H}^r \times \mathbb{H}^r$ by $V = \{ (z,z) : z \in \mathbb{H}^r \}$. The isotropy group of $K$ at $V$ is the diagonal subgroup $\Delta Sp_r$ and therefore the orbit $\Sigma = K \cdot V$ of $K$ containing $V$ is isometric to $Sp_r \cong Sp_rSp_r/\Delta Sp_r$. One can show that $\Sigma$ is a totally geodesic submanifold of $M$. In fact, if we consider the base point $o \in M$ as an $r$-dimensional quaternionic subspace of $\mathbb{H}^{2r}$, then the orthogonal complement $o^\perp$ of $o$ in $\mathbb{H}^{2r}$ is also a fixed point of the $K$-action on $M$. Thus $o^\perp$ is a pole of $o$ in $M$. 
The orbit of $K$ through the midpoint of a geodesic in $M$ connecting $o$ and $o^\perp$ is $\Sigma$. Thus $\Sigma \cong Sp_r$ is a centrosome of $M$ and therefore totally geodesic in $M$ (see \cite{NT95} for details). In \cite{BO17} we proved that $i(Sp_r) = 4r-4$. Using Proposition \ref{indexSigmaM} we then obtain $4r-4 = i(Sp_r) = i(\Sigma) \leq i(M)$. \end{proof} \begin{lm} \label{isotropybounds} Let $\Sigma = G'/K'$ be a totally geodesic submanifold of $M = Sp_{2r}/Sp_rSp_r$ and $H$ be a (locally) irreducible factor of $K'$. Then the following inequalities hold: \begin{align} \rk(H) & \leq r, \label{bound1} \\ \dim(H) & \leq 2r^2 - 3r + 4 \label{bound2} \mbox{ if ${\mathfrak{h}} \not\cong {\mathfrak{s}}{\mathfrak{p}}_r$},\\ \rk(K') & \leq 2r, \label{bound3} \\ \dim(K') & < 2r(2r+1). \label{bound4} \end{align} \end{lm} \begin{proof} We prove these inequalities at the Lie algebra level. Denote by $\pi$ the projection from ${\mathfrak{h}}$ into one of the two ${\mathfrak{s}}{\mathfrak{p}}_r$-factors. Then $\pi({\mathfrak{h}})$ is isomorphic to ${\mathfrak{h}}/\ker(\pi)$. Since ${\mathfrak{h}}$ is simple and $\pi({\mathfrak{h}}) \neq \{0\}$, it follows that $\pi({\mathfrak{h}})$ is isomorphic to ${\mathfrak{h}}$ and $\rk({\mathfrak{h}}) = \rk(\pi({\mathfrak{h}})) \leq \rk({\mathfrak{s}}{\mathfrak{p}}_r) = r$. Since $\pi({\mathfrak{h}}) \neq \{0\}$ for at least one of the two projections, this proves (\ref{bound1}). If ${\mathfrak{h}} \not\cong {\mathfrak{s}}{\mathfrak{p}}_r$, then $\dim({\mathfrak{h}}) = \dim(H) \leq \dim(Sp_r) - i_r(Sp_r) = (2r^2+r) - 4(r-1) = 2r^2-3r+4$, since the subgroup of $Sp_r$ with Lie algebra $\pi({\mathfrak{h}})$ is a totally geodesic submanifold of $Sp_r$ and $i_r(Sp_r) = 4(r-1)$ by \cite{BO17}. 
Since ${\mathfrak{k}}'$ is a subalgebra of ${\mathfrak{s}}{\mathfrak{p}}_r \oplus {\mathfrak{s}}{\mathfrak{p}}_r$, we obviously have $\rk({\mathfrak{k}}') \leq \rk({\mathfrak{s}}{\mathfrak{p}}_r \oplus {\mathfrak{s}}{\mathfrak{p}}_r) = 2r$ and $\dim({\mathfrak{k}}') < \dim({\mathfrak{s}}{\mathfrak{p}}_r \oplus {\mathfrak{s}}{\mathfrak{p}}_r) = 2(2r^2 + r) = 2r(2r+1)$. \end{proof} \begin{prop} \label{isp6sp3sp3} For $M = Sp_6/Sp_3Sp_3$ we have $i(M) = 12$. \end{prop} \begin{proof} We know from Table 4 in \cite{BO16} that $i_r(M) = 12$ and that $Sp_5/Sp_2Sp_3$ is a reflective submanifold of $M$ whose codimension is equal to $12$. Assume that there exists a maximal totally geodesic submanifold $\Sigma$ of $M$ with $d = \codim(\Sigma) < 12$. From Lemma \ref{lowbound} we obtain $d \in \{8,9,10,11\}$. We can slightly improve this. From Theorem 4.2 in \cite{BO16} we know that $\Sigma$ must be semisimple. The classification in \cite{BO16} of symmetric spaces with index $\leq 6$ tells us that $d \geq 7$. From Proposition 7.4 in \cite{BO16}, using $\dim(M) = 36$ and $\rk(M) = 3$, we then get that $d(d-1) \geq 2(\dim(M) - \rk(M) - 1) = 64$, that is, $d \geq 9$. It follows that $d \in \{9,10,11\}$, or equivalently, $\dim(\Sigma) \in \{25,26,27\}$. We write $\Sigma = G'/K'$ with ${\mathfrak{k}}' = [T_o\Sigma,T_o\Sigma]$ and ${\mathfrak{g}}' = [T_o\Sigma,T_o\Sigma] \oplus T_o\Sigma$. Let $H$ be a (locally) irreducible factor of $K'$. From Lemma \ref{isotropybounds} we know that \begin{align} \rk({\mathfrak{h}}) & \leq 3 \label{3bound1} ,\\ \rk({\mathfrak{k}}') & \leq 6 \label{3bound3} , \\ \dim({\mathfrak{k}}') & < 42 \label{3bound4} . \end{align} {\sc Case 1: $\rk(\Sigma) = 1$.} Then $\tilde\Sigma \in \{S^{25},S^{26},S^{27},\mathbb{C} P^{13}\}$ and so ${\mathfrak{k}}' \in \{{\mathfrak{s}}{\mathfrak{o}}_{25},{\mathfrak{s}}{\mathfrak{o}}_{26},{\mathfrak{s}}{\mathfrak{o}}_{27},{\mathfrak{u}}_{13}\}$. In all cases we have $\rk({\mathfrak{k}}') > 6$, which contradicts (\ref{3bound3}). 
{\sc Case 2: $\rk(\Sigma) = 2$.} By Theorem \ref{rankonefactors}, $\tilde\Sigma$ is irreducible or $\tilde\Sigma = S^{k_1} \times S^{k_2}$ with $k_1 \geq k_2 \geq 2$. If $\tilde\Sigma = S^{k_1} \times S^{k_2}$, then $\dim({\mathfrak{k}}') = \dim({\mathfrak{s}}{\mathfrak{o}}_{k_1}) + \dim({\mathfrak{s}}{\mathfrak{o}}_{k_2}) = \frac{1}{2}(k_1(k_1-1) + k_2(k_2-1))$. Since $\dim(\Sigma) \in \{25,26,27\}$, we have $k_1 \geq 13$ and thus $\dim({\mathfrak{k}}') \geq 78$, which contradicts (\ref{3bound4}). Thus $\tilde\Sigma$ is irreducible. Since $\dim(\Sigma) \in \{25,26,27\}$ and $\rk(\Sigma) = 2$, we have only two possibilities, namely $\tilde\Sigma = SO_{15}/SO_2SO_{13}$ and $\tilde\Sigma = E_6/F_4$. Since $\rk({\mathfrak{s}}{\mathfrak{o}}_{13}) = 6$ and $\rk({\mathfrak{f}}_4) = 4$, we can exclude both possibilities using (\ref{3bound1}). {\sc Case 3: $\rk(\Sigma) = 3$.} By Theorem \ref{rankonefactors}, $\tilde\Sigma$ is irreducible. Since $\dim(\Sigma) \in \{25,26,27\}$ and $\rk(\Sigma) = 3$, we have only one possibility, namely $\tilde\Sigma = SO_{12}/SO_3SO_9$. Since $\rk({\mathfrak{s}}{\mathfrak{o}}_9) = 4$, we can exclude this possibility using (\ref{3bound1}). Altogether it now follows that there exists no maximal totally geodesic submanifold $\Sigma$ of $M$ with $\codim(\Sigma) < 12 = i_r(M)$, and therefore $i(M) = 12$. \end{proof} \begin{prop} \label{isp8sp4sp4} For $M = Sp_8/Sp_4Sp_4$ we have $i(M) = 16$. \end{prop} \begin{proof} We know from Table 4 in \cite{BO16} that $i_r(M) = 16$ and that $Sp_7/Sp_3Sp_4$ is a reflective submanifold of $M$ whose codimension is equal to $16$. Assume that there exists a maximal totally geodesic submanifold $\Sigma$ of $M$ with $d = \codim(\Sigma) < 16$. From Lemma \ref{lowbound} we obtain $d \in \{12,13,14,15\}$, or equivalently, $\dim(\Sigma) \in \{49,50,51,52\}$. We write $\Sigma = G'/K'$ with ${\mathfrak{k}}' = [T_o\Sigma,T_o\Sigma]$ and ${\mathfrak{g}}' = [T_o\Sigma,T_o\Sigma] \oplus T_o\Sigma$. 
Let $H$ be a (locally) irreducible factor of $K'$. From Lemma \ref{isotropybounds} we know that \begin{align} \rk({\mathfrak{h}}) & \leq 4 \label{4bound1} ,\\ \dim({\mathfrak{h}}) & \leq 24 \label{4bound2} \mbox{ if ${\mathfrak{h}} \not\cong {\mathfrak{s}}{\mathfrak{p}}_4$} ,\\ \rk({\mathfrak{k}}') & \leq 8 \label{4bound3} , \\ \dim({\mathfrak{k}}') & < 72 \label{4bound4} . \end{align} {\sc Case 1: $\rk(\Sigma) = 1$.} Then $\tilde\Sigma \in \{S^{49},S^{50},S^{51},S^{52},\mathbb{C} P^{25},\mathbb{C} P^{26},\mathbb{H} P^{13}\}$ and so ${\mathfrak{k}}' \in \{{\mathfrak{s}}{\mathfrak{o}}_{49},{\mathfrak{s}}{\mathfrak{o}}_{50},{\mathfrak{s}}{\mathfrak{o}}_{51},{\mathfrak{s}}{\mathfrak{o}}_{52},{\mathfrak{u}}_{25},{\mathfrak{u}}_{26},{\mathfrak{s}}{\mathfrak{p}}_{13}\oplus {\mathfrak{s}}{\mathfrak{p}}_1\}$. In all cases we have $\rk({\mathfrak{k}}') > 8$, which contradicts (\ref{4bound3}). {\sc Case 2: $\rk(\Sigma) = 2$.} By Theorem \ref{rankonefactors}, $\tilde\Sigma$ is irreducible or $\tilde\Sigma = S^{k_1} \times S^{k_2}$ with $k_1 \geq k_2 \geq 2$. If $\tilde\Sigma = S^{k_1} \times S^{k_2}$, then $\dim({\mathfrak{k}}') = \dim({\mathfrak{s}}{\mathfrak{o}}_{k_1}) + \dim({\mathfrak{s}}{\mathfrak{o}}_{k_2}) = \frac{1}{2}(k_1(k_1-1) + k_2(k_2-1))$. Since $\dim(\Sigma) \in \{49,50,51,52\}$, we have $k_1 \geq 25$ and thus $\dim({\mathfrak{k}}') \geq 300$, which contradicts (\ref{4bound4}). Thus $\tilde\Sigma$ is irreducible. Since $\rk(\Sigma) = 2$ and $\dim(\Sigma) \in \{49,50,51,52\}$, we have only three possibilities, namely $\tilde\Sigma = SO_{27}/SO_2SO_{25}$, $\tilde\Sigma = SO_{28}/SO_2SO_{26}$ and $\tilde\Sigma = SU_{15}/S(U_2U_{13})$. In all cases we have $\rk({\mathfrak{k}}') > 8$, which contradicts (\ref{4bound3}). {\sc Case 3: $\rk(\Sigma) = 3$.} By Theorem \ref{rankonefactors}, $\tilde\Sigma$ is irreducible. Since $\rk(\Sigma) = 3$ and $\dim(\Sigma) \in \{49,50,51,52\}$, we have only one possibility, namely $\tilde\Sigma = SO_{20}/SO_3SO_{17}$. 
In this case we have $\rk({\mathfrak{k}}') > 8$, which contradicts (\ref{4bound3}). {\sc Case 4: $\rk(\Sigma) = 4$.} By Theorem \ref{rankonefactors}, $\tilde\Sigma$ is irreducible or the product of two symmetric spaces of rank $2$. Firstly, assume that $\tilde\Sigma$ is irreducible. Since $\dim(\Sigma) \in \{49,50,51,52\}$ and $\rk(\Sigma) = 4$, we have only one possibility, namely $\tilde\Sigma = SO_{17}/SO_4SO_{13}$. In this case we have ${\mathfrak{h}} = {\mathfrak{s}}{\mathfrak{o}}_{13}$ and $\rk({\mathfrak{h}}) = 6$, which contradicts (\ref{4bound1}). Next, assume that $\tilde\Sigma = \tilde\Sigma_1 \times \tilde\Sigma_2$, where $\tilde\Sigma_1$ and $\tilde\Sigma_2$ are irreducible symmetric spaces of rank $2$. We can assume that $\dim(\tilde\Sigma_1) \geq \dim(\tilde\Sigma_2)$. Then $25 \leq \dim(\tilde\Sigma_1) \leq 47$, using the fact that $5$ is the lowest dimension of an irreducible symmetric space of rank $2$. We discuss the various possibilities. If $\tilde\Sigma_1 = SO_{2+k}/SO_2SO_k$, $13 \leq k \leq 23$, then ${\mathfrak{h}} = {\mathfrak{s}}{\mathfrak{o}}_k$ with $k \in \{13,\ldots,23\}$ and thus $\rk({\mathfrak{h}}) > 4$, which contradicts (\ref{4bound1}). If $\tilde\Sigma_1 = SU_{2+k}/S(U_2U_k)$, $7 \leq k \leq 11$, then ${\mathfrak{h}} = {\mathfrak{s}}{\mathfrak{u}}_k$ with $k \in \{7,\ldots,11\}$ and thus $\rk({\mathfrak{h}}) > 4$, which contradicts (\ref{4bound1}). If $\tilde\Sigma_1 = Sp_{2+k}/Sp_2Sp_k$, then $4 \leq k \leq 5$. For $k = 5$ we have ${\mathfrak{h}} = {\mathfrak{s}}{\mathfrak{p}}_5$ and thus $\rk({\mathfrak{h}}) > 4$, which contradicts (\ref{4bound1}). If $k = 4$, then $\dim(\tilde\Sigma_1) = 32$ and thus $17 \leq \dim(\tilde\Sigma_2) \leq 20$. Moreover, (\ref{4bound3}) implies that the rank of the isotropy group of $\tilde\Sigma_2$ must be $\leq 2$. It is easy to check that there does not exist an irreducible symmetric space of rank $2$ with these properties.
If $\tilde\Sigma_1 = E_6/F_4$, then ${\mathfrak{h}} = {\mathfrak{f}}_4$ and $\dim({\mathfrak{h}}) = 52$, which contradicts (\ref{4bound2}). If $\tilde\Sigma_1 = E_6/Spin_{10}U_1$, then ${\mathfrak{h}} = {\mathfrak{s}}{\mathfrak{o}}_{10}$ and $\rk({\mathfrak{h}}) = 5$, which contradicts (\ref{4bound1}). Altogether it now follows that there exists no maximal totally geodesic submanifold $\Sigma$ of $M$ with $\codim(\Sigma) < 16 = i_r(M)$, and therefore $i(M) = 16$. \end{proof} \begin{prop} \label{isp10sp5sp5} For $M = Sp_{10}/Sp_5Sp_5$ we have $i(M) = 20$. \end{prop} \begin{proof} We know from Table 4 in \cite{BO16} that $i_r(M) = 20$ and that $Sp_9/Sp_4Sp_5$ is a reflective submanifold of $M$ whose codimension is equal to $20$. Assume that there exists a maximal totally geodesic submanifold $\Sigma$ of $M$ with $d = \codim(\Sigma) < 20$. From Lemma \ref{lowbound} we obtain $d \in \{16,17,18,19\}$, or equivalently, $\dim(\Sigma) \in \{81,82,83,84\}$. We write $\Sigma = G'/K'$ with ${\mathfrak{k}}' = [T_o\Sigma,T_o\Sigma]$ and ${\mathfrak{g}}' = [T_o\Sigma,T_o\Sigma] \oplus T_o\Sigma$. Let $H$ be a (locally) irreducible factor of $K'$. From Lemma \ref{isotropybounds} we know that \begin{align} \rk({\mathfrak{h}}) & \leq 5 \label{5bound1} ,\\ \dim({\mathfrak{h}}) & \leq 39 \label{5bound2} \mbox{ if ${\mathfrak{h}} \not\cong {\mathfrak{s}}{\mathfrak{p}}_5$},\\ \rk({\mathfrak{k}}') & \leq 10 \label{5bound3} , \\ \dim({\mathfrak{k}}') & < 110 \label{5bound4} . \end{align} {\sc Case 1: $\rk(\Sigma) = 1$.} Then $\tilde\Sigma \in \{S^{81},S^{82},S^{83},S^{84},\mathbb{C} P^{41},\mathbb{C} P^{42},\mathbb{H} P^{21}\}$ and so ${\mathfrak{k}}' \in \{{\mathfrak{s}}{\mathfrak{o}}_{81},{\mathfrak{s}}{\mathfrak{o}}_{82},{\mathfrak{s}}{\mathfrak{o}}_{83},{\mathfrak{s}}{\mathfrak{o}}_{84},{\mathfrak{u}}_{41},{\mathfrak{u}}_{42},{\mathfrak{s}}{\mathfrak{p}}_{21}\oplus {\mathfrak{s}}{\mathfrak{p}}_1\}$. In all cases we have $\rk({\mathfrak{k}}') > 10$, which contradicts (\ref{5bound3}). 
{\sc Case 2: $\rk(\Sigma) = 2$.} By Theorem \ref{rankonefactors}, $\tilde\Sigma$ is irreducible or $\tilde\Sigma = S^{k_1} \times S^{k_2}$ with $k_1 \geq k_2 \geq 2$. If $\tilde\Sigma = S^{k_1} \times S^{k_2}$, then $\dim({\mathfrak{k}}') = \dim({\mathfrak{s}}{\mathfrak{o}}_{k_1}) + \dim({\mathfrak{s}}{\mathfrak{o}}_{k_2}) = \frac{1}{2}(k_1(k_1-1) + k_2(k_2-1))$. Since $\dim(\Sigma) \in \{81,82,83,84\}$, we have $k_1 \geq 41$ and thus $\dim({\mathfrak{k}}') \geq 820$, which contradicts (\ref{5bound4}). Thus $\tilde\Sigma$ is irreducible. Since $\rk(\Sigma) = 2$ and $\dim(\Sigma) \in \{81,82,83,84\}$, we have only three possibilities, namely $\tilde\Sigma = SO_{43}/SO_2SO_{41}$, $\tilde\Sigma = SO_{44}/SO_2SO_{42}$ and $\tilde\Sigma = SU_{23}/S(U_2U_{21})$. In all cases we have $\rk({\mathfrak{k}}') > 10$, which contradicts (\ref{5bound3}). {\sc Case 3: $\rk(\Sigma) = 3$.} By Theorem \ref{rankonefactors}, $\tilde\Sigma$ is irreducible. Since $\rk(\Sigma) = 3$ and $\dim(\Sigma) \in \{81,82,83,84\}$, we have four possibilities, namely $\tilde\Sigma = SO_{30}/SO_3SO_{27}$, $\tilde\Sigma = SO_{31}/SO_3SO_{28}$, $\tilde\Sigma = SU_{17}/S(U_3U_{14})$ and $\tilde\Sigma = Sp_{10}/Sp_3Sp_7$. In the first three cases we have $\rk({\mathfrak{k}}') > 10$, which contradicts (\ref{5bound3}). In the last case we can choose ${\mathfrak{h}} = {\mathfrak{s}}{\mathfrak{p}}_7$, then $\rk({\mathfrak{h}}) = 7$, which contradicts (\ref{5bound1}). {\sc Case 4: $\rk(\Sigma) = 4$.} By Theorem \ref{rankonefactors}, $\tilde\Sigma$ is irreducible or the product of two symmetric spaces of rank $2$. Firstly, assume that $\tilde\Sigma$ is irreducible. Since $\dim(\Sigma) \in \{81,82,83,84\}$ and $\rk(\Sigma) = 4$, we have only one possibility, namely $\tilde\Sigma = SO_{25}/SO_4SO_{21}$. In this case we can choose ${\mathfrak{h}} = {\mathfrak{s}}{\mathfrak{o}}_{21}$ and so $\rk({\mathfrak{h}}) = 10$, which contradicts (\ref{5bound1}). 
Next, assume that $\tilde\Sigma = \tilde\Sigma_1 \times \tilde\Sigma_2$, where $\tilde\Sigma_1$ and $\tilde\Sigma_2$ are irreducible symmetric spaces of rank $2$. We can assume that $\dim(\tilde\Sigma_1) \geq \dim(\tilde\Sigma_2)$. Then $41 \leq \dim(\tilde\Sigma_1) \leq 79$, using the fact that $5$ is the lowest dimension of an irreducible symmetric space of rank $2$. We discuss the various possibilities. If $\tilde\Sigma_1 = SO_{2+k}/SO_2SO_k$, $21 \leq k \leq 39$, then ${\mathfrak{h}} = {\mathfrak{s}}{\mathfrak{o}}_k$ with $k \in \{21,\ldots,39\}$ and thus $\rk({\mathfrak{h}}) > 5$, which contradicts (\ref{5bound1}). If $\tilde\Sigma_1 = SU_{2+k}/S(U_2U_k)$, $11 \leq k \leq 19$, then ${\mathfrak{h}} = {\mathfrak{s}}{\mathfrak{u}}_k$ with $k \in \{11,\ldots,19\}$ and thus $\rk({\mathfrak{h}}) > 5$, which contradicts (\ref{5bound1}). If $\tilde\Sigma_1 = Sp_{2+k}/Sp_2Sp_k$, $6 \leq k \leq 9$, then ${\mathfrak{h}} = {\mathfrak{s}}{\mathfrak{p}}_k$ with $k \in \{6,\ldots,9\}$ and thus $\rk({\mathfrak{h}}) > 5$, which contradicts (\ref{5bound1}). {\sc Case 5: $\rk(\Sigma) = 5$.} By Theorem \ref{rankonefactors}, $\tilde\Sigma$ is irreducible or the product of a symmetric space of rank $2$ and a symmetric space of rank $3$. However, there are no irreducible symmetric spaces of rank $5$ and dimension in $\{81,82,83,84\}$. Consequently, $\tilde\Sigma = \tilde\Sigma_1 \times \tilde\Sigma_2$, where $\tilde\Sigma_1$ and $\tilde\Sigma_2$ are irreducible symmetric spaces of rank $2$ or $3$ and $\rk(\tilde\Sigma_1) + \rk(\tilde\Sigma_2) = 5$. We can assume that $\dim(\tilde\Sigma_1) \geq \dim(\tilde\Sigma_2)$. Then $41 \leq \dim(\tilde\Sigma_1) \leq 79$, using the fact that $5$ is the lowest dimension of an irreducible symmetric space of rank $\geq 2$. If $\rk(\tilde\Sigma_1) = 2$, we can use the arguments given in the previous Case 4. Assume that $\rk(\tilde\Sigma_1) = 3$. We discuss the various possibilities.
If $\tilde\Sigma_1 = SO_{3+k}/SO_3SO_k$, $14 \leq k \leq 26$, then ${\mathfrak{h}} = {\mathfrak{s}}{\mathfrak{o}}_k$ with $k \in \{14,\ldots,26\}$ and thus $\rk({\mathfrak{h}}) > 5$, which contradicts (\ref{5bound1}). If $\tilde\Sigma_1 = SU_{3+k}/S(U_3U_k)$, $7 \leq k \leq 13$, then ${\mathfrak{h}} = {\mathfrak{s}}{\mathfrak{u}}_k$ with $k \in \{7,\ldots,13\}$ and thus $\rk({\mathfrak{h}}) > 5$, which contradicts (\ref{5bound1}). If $\tilde\Sigma_1 = Sp_{3+k}/Sp_3Sp_k$, $4 \leq k \leq 6$, we need different arguments. If $k = 6$, then we can choose ${\mathfrak{h}} = {\mathfrak{s}}{\mathfrak{p}}_6$ and thus $\rk({\mathfrak{h}}) > 5$, which contradicts (\ref{5bound1}). If $k = 5$, then $\dim(\tilde\Sigma_1) = 60$ and thus $21 \leq \dim(\tilde\Sigma_2) \leq 24$. It follows that $\tilde\Sigma_2 = SO_{13}/SO_2SO_{11}$, or $\tilde\Sigma_2 = SO_{14}/SO_2SO_{12}$, or $\tilde\Sigma_2 = SU_8/S(U_2U_6)$, or $\tilde\Sigma_2 = Sp_5/Sp_2Sp_3$. Since the isotropy algebra of $\tilde\Sigma_1$ has rank $8$, the isotropy algebra of $\tilde\Sigma_2$ must have rank $\leq 2$ by (\ref{5bound3}). However, in all four cases the isotropy algebra of $\tilde\Sigma_2$ has rank $> 2$, which gives a contradiction. If $k = 4$, then $\dim(\tilde\Sigma_1) = 48$ and thus $33 \leq \dim(\tilde\Sigma_2) \leq 36$. It follows that $\tilde\Sigma_2 = SO_{19}/SO_2SO_{17}$, or $\tilde\Sigma_2 = SO_{20}/SO_2SO_{18}$, or $\tilde\Sigma_2 = SU_{11}/S(U_2U_9)$. Since the isotropy algebra of $\tilde\Sigma_1$ has rank $7$, the isotropy algebra of $\tilde\Sigma_2$ must have rank $\leq 3$ by (\ref{5bound3}). However, in all three cases the isotropy algebra of $\tilde\Sigma_2$ has rank $> 3$, which gives a contradiction. If $\tilde\Sigma_1 = SO_{14}/U_7$, then ${\mathfrak{h}} = {\mathfrak{s}}{\mathfrak{u}}_7$ satisfies $\rk({\mathfrak{h}}) = 6$, which contradicts (\ref{5bound1}). 
If $\tilde\Sigma_1 = E_7/E_6U_1$, then ${\mathfrak{h}} = {\mathfrak{e}}_6$ satisfies $\rk({\mathfrak{h}}) = 6$, which contradicts (\ref{5bound1}). Altogether it now follows that there exists no maximal totally geodesic submanifold $\Sigma$ of $M$ with $\codim(\Sigma) < 20 = i_r(M)$, and therefore $i(M) = 20$. \end{proof} \begin{prop} \label{isp2rsprspr} For $M = Sp_{2r}/Sp_rSp_r$ ($r \geq 3$) we have $i(M) = 4r$. \end{prop} \begin{proof} Consider the symmetric space $N = Sp_{2(r+3)}/Sp_{r+3}Sp_{r+3}$. We already know from \cite{BO16} that $i_r(N) = 4(r+3)$. The symmetric space $N^+ = Sp_{r+3}/Sp_3Sp_r \times Sp_{r+3}/Sp_rSp_3$ is a polar of $N$ with corresponding meridian $N^- = Sp_{2r}/Sp_rSp_r \times Sp_6/Sp_3Sp_3$ (see \cite{CN78} for details). Both $N^+$ and $N^-$ are reflective submanifolds of $N$ and $o \in N^-$. Let $\Sigma$ be a totally geodesic submanifold of $N$ with $\codim(\Sigma) = i(N)$ and $o \in \Sigma$. Then $\Sigma' = \Sigma \cap N^-$ is a totally geodesic submanifold of $N^-$. By construction, the codimension $\codim_N(\Sigma)$ of $\Sigma$ in $N$ and the codimension $\codim_{N^-}(\Sigma')$ of $\Sigma'$ in $N^-$ satisfy \[ \codim_N(\Sigma) \geq \codim_{N^-}(\Sigma'). \] We define subspaces $V_1,V_2,W$ of $T_oN$ by $V_1 = T_o(Sp_{2r}/Sp_rSp_r)$, $V_2 = T_o(Sp_6/Sp_3Sp_3)$ and $W = T_o\Sigma$. Then, by construction, $T_oN^- = V_1 \oplus V_2$. For $j \in \{1,2\}$ we define $K^j = \{k \in K : V_j \subseteq d_ok(W)\}$, where $K = Sp_{r+3}Sp_{r+3}$. If $k \in K \setminus K^j$, then $V_j$ is not a subspace of $d_ok(W)$. By continuity of the action of $K$ on $T_oN$, there exists an open neighborhood ${\mathcal U}_j$ of $k$ in $K$ such that $V_j$ is not a subspace of $d_ok'(W)$ for all $k' \in {\mathcal U}_j$. Thus $K \setminus K^j$ is an open subset of $K$. Let $0 \neq u \in T_oN$, $0 \neq v_j \in V_j$, and assume that $u$ is perpendicular to $W$. We define the analytic function \[ f_{v_j,u} : K \to \mathbb{R}\ , k \mapsto \langle v_j, d_ok(u) \rangle.
\] Assume that $K^j$ contains a nonempty open subset $\Omega$ of $K$. Then $f_{v_j,u}|_\Omega = 0$ and thus $f_{v_j,u} = 0$ by analyticity of $f_{v_j,u}$ and since $K$ is connected. It follows that $\mathbb{R} v_j \subseteq d_ok(W)$ for all $k \in K$. This is a contradiction since $\bigcap_{k \in K} d_ok(W) = \{0\}$. It follows that $K^j$ does not contain any nonempty open subsets of $K$. Altogether we now see that $K \setminus K^j = \{ k \in K : V_j \not\subseteq d_ok(W)\}$ is an open and dense subset of $K$. It follows that \[ (K \setminus K^1) \cap (K \setminus K^2) = \{ k \in K : V_1 \not\subseteq d_ok(W) \mbox{ and } V_2 \not\subseteq d_ok(W) \} \] is an open and dense subset of $K$. We can therefore assume, without loss of generality, that $V_1$ and $V_2$ are not contained in $T_o\Sigma'$. Using Proposition \ref{productestimate} we then obtain \begin{align*} i(Sp_{2(r+3)}/Sp_{r+3}Sp_{r+3}) & = \codim_N(\Sigma) \geq \codim_{N^-}(\Sigma') \\ & \geq i(Sp_{2r}/ Sp_rSp_r) + i(Sp_6/ Sp_3Sp_3). \end{align*} By induction over $r$ we can now prove the assertion. For $r \in \{3,4,5\}$ we already know that $i(Sp_{2r}/ Sp_rSp_r) = 4r$ by Propositions \ref{isp6sp3sp3}, \ref{isp8sp4sp4} and \ref{isp10sp5sp5}. For $r \geq 6$ we can then use the previous inequality and the induction hypothesis and obtain \begin{align*} 4(r+3) & = i_r(Sp_{2(r+3)}/Sp_{r+3}Sp_{r+3}) \\ & \geq i(Sp_{2(r+3)}/Sp_{r+3}Sp_{r+3}) \\ & \geq i(Sp_{2r}/ Sp_rSp_r) + i(Sp_6/ Sp_3Sp_3) = 4r + 12 = 4(r+3). \end{align*} This finishes the proof. \end{proof} \begin{re} \rm Proposition \ref{productestimate} can be generalized to the case that $M = M_1 \times \ldots \times M_k$ is the Riemannian product of a finite number of irreducible factors. 
Using similar arguments as in the proof of Proposition \ref{isp2rsprspr}, we can then show the following: Let $\Sigma = \Sigma_1\times \ldots \times \Sigma_k$ be a totally geodesic submanifold of a Riemannian symmetric space $M$ of noncompact type, where $\Sigma_1, \ldots, \Sigma _k$ are irreducible factors of $\Sigma$. Then \[ i(\Sigma _1) + \ldots + i(\Sigma _k) \leq i(M). \] In particular, if $i(M) = \rk(M)$, then $i(\Sigma_1) + \ldots + i(\Sigma _k) \leq \rk(M)$. Therefore, if $\rk(\Sigma) = \rk(M)$, then $i(\Sigma _\nu) = \rk(\Sigma _\nu)$ for all $\nu \in \{1, \ldots , k\}$. The possible factors are then known from \cite{BO16}. \end{re} \begin{thm} \label{quatGras} We have $i(Sp_{2r+k}/Sp_rSp_{r+k}) = 4r$ for all $r \geq 3$ and $k \geq 0$. \end{thm} \begin{proof} This now follows from Lemma \ref{spreduction} and Proposition \ref{isp2rsprspr}. \end{proof} \section{Irreducible Hermitian symmetric spaces} In this section we study the index of irreducible Hermitian symmetric spaces. Our first result states that a maximal totally geodesic submanifold of sufficiently small codimension in an irreducible Hermitian symmetric space is a complex submanifold. \begin{prop} \label{maxHerm} Let $M = G/K$ be an irreducible Hermitian symmetric space. Every maximal totally geodesic submanifold $\Sigma$ of $M$ with $\codim(\Sigma) < \frac{1}{2}\dim(M)$ is a (semisimple) complex submanifold. \end{prop} \begin{proof} By duality, we can assume that $M$ is of noncompact type. Let $\Sigma$ be a maximal totally geodesic submanifold of $M$ with $\codim(\Sigma) < \frac{1}{2}\dim(M)$. We can assume that $o \in \Sigma$. Let ${\mathfrak{g}} = {\mathfrak{k}} + {\mathfrak{p}}$ be the corresponding Cartan decomposition of ${\mathfrak{g}}$. The center ${\mathfrak{z}}({\mathfrak{k}})$ of ${\mathfrak{k}}$ is $1$-dimensional and there exists $Z \in {\mathfrak{z}}({\mathfrak{k}})$ such that $J = \ad(Z)$ is the complex structure on ${\mathfrak{p}} \cong T_oM$. 
The differential $d_oz$ at $o$ of the isometry $z = \Exp(\frac{\pi}{2}Z) \in K$ of $M$ is $d_oz : T_oM \to T_oM,\ X \mapsto JX$. Then $\Sigma^J = z(\Sigma)$ is also a maximal totally geodesic submanifold of $M$ with $o \in \Sigma^J$. By construction, we have $T_o\Sigma^J = JT_o\Sigma$. As usual, we write $\Sigma = G'/K'$ with ${\mathfrak{g}}' = {\mathfrak{k}}' + {\mathfrak{p}}' \subset {\mathfrak{k}} + {\mathfrak{p}} = {\mathfrak{g}}$, where ${\mathfrak{p}}' = T_o\Sigma$ and ${\mathfrak{k}}' = [{\mathfrak{p}}',{\mathfrak{p}}']$. Then, since $Z \in {\mathfrak{z}}({\mathfrak{k}})$, we have $\Sigma^J = G''/K'$ with $G'' = zG'z^{-1}$. Now consider the de Rham decomposition $\Sigma = \Sigma_0 \times \Sigma_1 \times \ldots \times \Sigma_l$ of $\Sigma$, where $\Sigma_0$ is the, possibly $0$-dimensional, Euclidean factor. Then, by construction, the de Rham decomposition of $\Sigma^J$ is $\Sigma^J = \Sigma_0^J \times \Sigma_1^J \times \ldots \times \Sigma_l^J$ with $\Sigma_i^J = z(\Sigma_i)$. The intersection ${\mathfrak{q}} = T_o\Sigma \cap T_o\Sigma^J = {\mathfrak{p}}' \cap J{\mathfrak{p}}'$ is a $J$-invariant Lie triple system in ${\mathfrak{p}}$. Since $\dim(\Sigma^J) = \dim(\Sigma) > \frac{1}{2}\dim(M)$, we have $\dim({\mathfrak{q}}) > 0$. As both ${\mathfrak{p}}'$ and $J{\mathfrak{p}}'$ are $\Ad(K')$-invariant, the intersection ${\mathfrak{q}} = {\mathfrak{p}}' \cap J{\mathfrak{p}}'$ is also $\Ad(K')$-invariant. Since maximal flats of irreducible Hermitian symmetric spaces are totally real submanifolds, the Euclidean factor $\Sigma_0$ is a totally real submanifold of $M$ and therefore $T_o\Sigma_0 \cap T_o\Sigma_0^J = \{0\}$. Since $\Ad(K')$ acts irreducibly on each tangent space $T_o\Sigma_j$ (and $T_o\Sigma_j^J$) for $1 \leq j \leq l$, we see that ${\mathfrak{q}} = \bigoplus_{i \in I} T_o\Sigma_i$ for some nonempty subset $I$ of $\{1,\ldots,l\}$ and $T_o\Sigma_i^J = T_o\Sigma_i$ for all $i \in I$. 
Let ${\mathfrak{z}}_{\mathfrak{p}}({\mathfrak{q}}) = \{ U \in {\mathfrak{p}} : [U,{\mathfrak{q}}] = \{0\}\}$ be the centralizer of ${\mathfrak{q}}$ in ${\mathfrak{p}}$ and put ${\mathfrak{r}} = {\mathfrak{z}}_{\mathfrak{p}}({\mathfrak{q}}) + {\mathfrak{q}}$. We claim that ${\mathfrak{r}}$ is a Lie triple system in ${\mathfrak{p}}$ containing both $T_o\Sigma$ and $T_o\Sigma^J$. If $U,V,W \in {\mathfrak{z}}_{\mathfrak{p}}({\mathfrak{q}})$, then $[[U,V],{\mathfrak{q}}] = \{0\}$ by the Jacobi identity, which implies $[[[U,V],W],{\mathfrak{q}}] = \{0\}$ by the Jacobi identity. Thus $[[U,V],W] \in {\mathfrak{z}}_{\mathfrak{p}}({\mathfrak{q}})$, which shows that ${\mathfrak{z}}_{\mathfrak{p}}({\mathfrak{q}})$ is a Lie triple system in ${\mathfrak{p}}$. For $U,V,W \in {\mathfrak{z}}_{\mathfrak{p}}({\mathfrak{q}})$ and $U',V',W' \in {\mathfrak{q}}$ we get $[[U+U',V+V'],W+W'] = [[U,V],W] + [[U',V'],W'] \in {\mathfrak{z}}_{\mathfrak{p}}({\mathfrak{q}}) + {\mathfrak{q}} = {\mathfrak{r}}$ by a straightforward calculation. Thus ${\mathfrak{r}}$ is a Lie triple system. Since $\Sigma$ is a Riemannian product $\Sigma = \Sigma_0 \times \Sigma_1 \times \ldots \times \Sigma_l$, we have $[T_o\Sigma_i,T_o\Sigma_j] = \{0\}$ for all $0 \leq i < j \leq l$. As ${\mathfrak{q}} = \bigoplus_{i \in I} T_o\Sigma_i$, it follows that $T_o\Sigma_i \subseteq {\mathfrak{z}}_{\mathfrak{p}}({\mathfrak{q}})$ for all $i \notin I$. Similarly, we have $T_o\Sigma_i^J \subseteq {\mathfrak{z}}_{\mathfrak{p}}({\mathfrak{q}})$ for all $i \notin I$. Altogether we see that ${\mathfrak{r}}$ is a Lie triple system in ${\mathfrak{p}}$ containing $T_o\Sigma$ and $T_o\Sigma^J$. Assume that ${\mathfrak{r}} = {\mathfrak{p}}$. Then we can write ${\mathfrak{r}} = {\mathfrak{p}} = {\mathfrak{q}} \oplus {\mathfrak{q}}^\perp$ with $[{\mathfrak{q}},{\mathfrak{q}}^\perp] = \{0\}$. This implies that $M$ is reducible, which is a contradiction. Thus ${\mathfrak{r}}$ is properly contained in ${\mathfrak{p}}$. 
Since $\Sigma$ (and $\Sigma^J$) is a maximal totally geodesic submanifold of $M$, we must have ${\mathfrak{r}} = T_o\Sigma = T_o\Sigma^J$, which means that $\Sigma = \Sigma^J$ is a complex submanifold of $M$. In particular, the de Rham decomposition of $\Sigma$ has no Euclidean factor and thus $\Sigma$ is semisimple. \end{proof} Our next result states that a maximal totally geodesic submanifold with sufficiently small codimension in an irreducible Hermitian symmetric space must be a reflective submanifold. \begin{prop} \label{estimate} Let $M$ be an irreducible Hermitian symmetric space of noncompact type with $\rk(M) \geq 2$ and let $\Sigma$ be a maximal totally geodesic submanifold of $M$ with $\codim(\Sigma) < \frac{1}{2}\dim(M)$. If \[ \textstyle{\frac{1}{4}}\codim(\Sigma)^2 + \codim(\Sigma) +\rk(\Sigma) < \dim(M), \] then $\Sigma$ is a reflective submanifold of $M$. In particular, if \[ \textstyle{\frac{1}{4}}\codim(\Sigma)^2 + \codim(\Sigma) < \dim(M) - \rk(M), \] then $\Sigma$ is a reflective submanifold of $M$. \end{prop} \begin{proof} By Proposition \ref{maxHerm}, $\Sigma$ is a semisimple complex submanifold of $M$. Therefore the slice representation $\rho : K' \to SO(\nu_o\Sigma)$ acts by unitary transformations. As usual, we write $\Sigma = G'/K'$ with ${\mathfrak{k}}' = [T_o\Sigma,T_o\Sigma]$ and ${\mathfrak{g}}' = {\mathfrak{k}}' + T_o\Sigma$. Then $\rho(K') \subseteq U(\nu_o\Sigma)$. If $\dim(K') > \dim(U(\nu_o\Sigma)) = \frac{1}{4}\codim(\Sigma)^2$, then the kernel of the slice representation $\rho$ must have positive dimension and therefore $\Sigma$ is a reflective submanifold of $M$ by Proposition 3.4 in \cite{BO16}. A principal $K'$-orbit on $\Sigma$ has dimension $\dim(M)-\codim(\Sigma)-\rk(\Sigma)$ and thus $\dim(K') \geq \dim(M) - \codim(\Sigma) - \rk(\Sigma)$. Consequently, if $\frac{1}{4}\codim(\Sigma)^2 < \dim(M) - \codim(\Sigma) - \rk(\Sigma)$, then $\Sigma$ is a reflective submanifold of $M$. 
The last statement follows from the fact that $\rk(\Sigma) \leq \rk(M)$. \end{proof} We now apply the previous two results to irreducible Hermitian symmetric spaces $M$ with $r = \rk(M) \geq 2$. We put $d = \codim(\Sigma)$ and $n = \dim(M)$. \smallskip For $M = SO^o_{2,2+k}/SO_{2}SO_{2+k}$ ($k \geq 1$) we have $n = 2k+4$, $r = 2$ and $i_r(M) = 2$. By Proposition \ref{maxHerm}, a maximal totally geodesic submanifold $\Sigma$ of $M$ with $d < k+2$ must be complex. Since a complex submanifold has even codimension, this immediately implies $i(M) = 2 = i_r(M)$. \smallskip For $M = Sp_r(\mathbb{R})/U_r$ we have $n = r^2+r$ and $i_r(M) = 2r-2$. By Proposition \ref{maxHerm}, a maximal totally geodesic submanifold $\Sigma$ of $M$ with $d < \frac{1}{2}(r^2+r)$ must be complex. Assume that $d < i_r(M)$. Then $d \in \{2,4,\ldots,2r-4\}$. We have \[ \frac{1}{4}d^2 + d \leq \frac{1}{4}(2r-4)^2 + (2r-4) = r^2 -2r < r^2 = n-r. \] It follows from Proposition \ref{estimate} that $\Sigma$ is reflective, which contradicts $d < i_r(M)$. Thus we must have $i(M) = i_r(M)$. \smallskip For $M = SU_{r,r+k}/S(U_rU_{r+k})$ ($k \geq 0$) we have $n = 2r(r+k)$ and $i_r(M) = 2r$. By Proposition \ref{maxHerm}, a maximal totally geodesic submanifold $\Sigma$ of $M$ with $d < r(r+k)$ must be complex. Assume that $d < i_r(M)$. Then $d \in \{2,4,\ldots,2r-2\}$. We have \[ \frac{1}{4}d^2 + d \leq \frac{1}{4}(2r-2)^2 + (2r-2) = r^2-1 < 2r^2+2rk-r = n-r. \] It follows from Proposition \ref{estimate} that $\Sigma$ is reflective, which contradicts $d < i_r(M)$. Thus we must have $i(M) = i_r(M)$. \smallskip For $M = SO^*_{4r}/U_{2r}$ we have $n = 4r^2-2r$ and $i_r(M) = 4r-2$. By Proposition \ref{maxHerm}, a maximal totally geodesic submanifold $\Sigma$ of $M$ with $d < 2r^2-r$ must be complex. Assume that $d < i_r(M)$. Then $d \in \{2,4,\ldots,4r-4\}$. We have \[ \frac{1}{4}d^2 + d \leq \frac{1}{4}(4r-4)^2 + (4r-4) = 4r^2 -4r < 4r^2 - 3r = n-r. \] It follows from Proposition \ref{estimate} that $\Sigma$ is reflective, which contradicts $d < i_r(M)$.
Thus we must have $i(M) = i_r(M)$. \smallskip For $M = SO^*_{4r+2}/U_{2r+1}$ we have $n = 4r^2+2r$ and $i_r(M) = 4r$. By Proposition \ref{maxHerm}, a maximal totally geodesic submanifold $\Sigma$ of $M$ with $d < 2r^2+r$ must be complex. Assume that $d < i_r(M)$. Then $d \in \{2,4,\ldots,4r-2\}$. We have \[ \frac{1}{4}d^2 + d \leq \frac{1}{4}(4r-2)^2 + (4r-2) = 4r^2 - 1 < 4r^2 + r = n-r. \] It follows from Proposition \ref{estimate} that $\Sigma$ is reflective, which contradicts $d < i_r(M)$. Thus we must have $i(M) = i_r(M)$. \smallskip From these calculations we conclude: \begin{thm} \label{Hermsymm} For an irreducible Hermitian symmetric space of classical type we have $i(M) = i_r(M)$. \end{thm} \begin{re} \rm It is somewhat surprising that this argument is inconclusive for the irreducible Hermitian symmetric spaces of exceptional type. For $M = E_6^{-14}/Spin_{10}U_1$ we have $n = 32$, $r = 2$ and $i_r(M) = 12$. By Proposition \ref{maxHerm}, a maximal totally geodesic submanifold $\Sigma$ of $M$ with $d < 16$ must be complex. Assume that $d < i_r(M)$. Then $d \in \{2,4,6,8,10\}$. We have \[ \frac{1}{4}d^2 + d < 30 = n-r \quad \Longleftrightarrow \quad d \in \{2,4,6,8\}. \] It follows from Proposition \ref{estimate} that the index of $M$ is either $10$ or $12$. We proved in \cite{BOR19}, with different methods, that $i(M) = 12 = i_r(M)$. For $M = E_7^{-25}/E_6U_1$ we have $n = 54$, $r = 3$ and $i_r(M) = 22$. By Proposition \ref{maxHerm}, a maximal totally geodesic submanifold $\Sigma$ of $M$ with $d < 27$ must be complex. Assume that $d < i_r(M)$. Then $d \in \{2,4,\ldots,20\}$. We have \[ \frac{1}{4}d^2 + d < 51 = n-r \quad \Longleftrightarrow \quad d \in \{2,4,6,8,10,12\}. \] It follows from Proposition \ref{estimate} that $i(M) \in \{14,16,18,20,22\}$. We proved in \cite{BOR19}, with different methods, that $i(M) = 22 = i_r(M)$. \end{re}
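The family-by-family computations above lend themselves to a mechanical re-check. The script below is an illustrative verification, not part of the argument; the helper `inequality_holds` is a hypothetical name, and the $(n, r, i_r(M))$ data are copied from the dimension, rank, and reflective-index values quoted in the text. The two final checks reproduce the inconclusiveness for the exceptional spaces noted in the remark.

```python
# Mechanical re-check of the inequality (1/4)d^2 + d < n - r for every even
# d with 2 <= d < i_r(M), for the classical Hermitian families treated above.
# The data (n, r, i_r) are taken from the computations in the text.

def inequality_holds(n, r, ir):
    """True if d^2/4 + d < n - r for every even d with 2 <= d < ir."""
    return all(d * d // 4 + d < n - r for d in range(2, ir, 2))

families = []
for r in range(2, 11):
    families.append(("Sp_r(R)/U_r", r * r + r, r, 2 * r - 2))
    families.append(("SO*_{4r}/U_{2r}", 4 * r * r - 2 * r, r, 4 * r - 2))
    families.append(("SO*_{4r+2}/U_{2r+1}", 4 * r * r + 2 * r, r, 4 * r))
    for k in range(0, 6):
        families.append(("SU_{r,r+k}/S(U_r U_{r+k})", 2 * r * (r + k), r, 2 * r))

all_ok = all(inequality_holds(n, r, ir) for (_, n, r, ir) in families)
```

Running the same check on the exceptional data $(n,r,i_r) = (32,2,12)$ and $(54,3,22)$ returns `False`, matching the remark that the argument is inconclusive for $E_6^{-14}/Spin_{10}U_1$ and $E_7^{-25}/E_6U_1$.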
\section{\label{Intro} Introduction} Standard internal combustion engines generate torque by burning fuel in the combustion chambers of cylinders. The kinetic energy of the expanding gases is applied to move a piston, which in turn is connected to a crankshaft to produce rotation and do work. The performance of an engine is specified by the maximum output power and torque and its energy (fuel) efficiency under certain conditions. Such characterizations can also be applied to biological molecular motors, a subject that has been of growing interest in recent biophysical research. V- and F-type ATPases are examples of rotary molecular motors, which perform proton pumping or ATP synthesis to maintain cell activity (for a recent review, see Ref.~\citen{NakanishiMatsui20101343}). Surprisingly, they are similar in appearance to a Wankel engine, which mainly consists of a cylinder, a rotor, and an eccentric shaft and has three moving chambers for each stroke of a combustion cycle (intake, compression, ignition, and exhaust).\cite{hege2001wankel} In the F$_1$ domain of ATPase, the so-called $\gamma$-shaft rotates inside a cylinder consisting of three symmetrically arranged, paired $\alpha$- and $\beta$-subunits. ATP is hydrolyzed to ADP and phosphate, with the released chemical bonding energy being spent to perform the rotation. \cite{Boyer1993215,Abrahams1994,Noji1997} The conversion is known to be highly energy efficient.\cite{Kinosita29042000,PhysRevLett.104.198103} Apart from their energy sources, a basic difference between biological and man-made engines may lie in the stiffness of their architectural components. For molecular motors, recent single-molecule analyses have begun to discover mechanisms involving local deformations in the cylinder unit caused by ATP hydrolysis that generate torque, which rotates the $\gamma$-shaft.\cite{Arai2013} In contrast to such deformable components, the piston and cylinder in man-made engines are made of harder materials. 
For the latter, it is also known that efficient operation requires completely sealed combustion chambers, as well as bearings and lubricant to maintain smooth mechanical movement. For biological engines, however, how such deformations of components bear on the efficiency of force conversion remains mysterious. \begin{figure}[tb] \begin{center} \includegraphics[height=6.0cm,keepaspectratio,clip]{fig1.eps} \end{center} \caption{ (Color online) Sketch of an AMM. The rod is loosely positioned inside the cylinder and can freely rotate around the $z$-axis. During rotation, its tilt with respect to the $z$-axis is maintained by contacts on the cylinder. The cylinder has three attachment points for the rod and works as a three-tooth ratchet. The cylinder is embedded in the membrane of a vesicle. When the rod rotates in a certain direction, the system can pump certain ions up and across the membrane. The $z$-axis of the Cartesian coordinate system is fixed to the axis of the cylinder. The arrows at right denote a linearly polarized ac field. } \label{fig:rotor} \end{figure} Our motivation is to understand the properties of the force-to-torque conversion in (artificial) molecular motors with deformable components when certain stimuli are applied, i.e., properties independent of energy sources, and our approach is based on mathematical modeling. In addition to biological molecular motors, artificial molecular motors (AMMs)\cite{doi:10.1021/cr0300993,Browne2006,ANIE:ANIE200504313,ijms11062453} represent good objects of study in this context. AMMs (or synthetic molecular motors) are small devices whose rotor and stator are made of (supra-)molecules, the rotor being capable of rotation relative to the stator under certain stimuli. Such rotation is largely due to noncovalent interactions between the rotor and stator.
In particular, the recently described rotational, propeller-shaped supramolecules confined in nanopores\cite{Kuhne14122010} can be considered to be an example of an AMM made of deformable units. One significant advantage of studying AMMs is their well-characterized symmetry and responses to external stimuli. For more detailed information on AMMs, see the reviews in Refs.~\citen{doi:10.1021/cr0300993,Browne2006,ANIE:ANIE200504313,ijms11062453}. Ratchet models \cite{FeynmannLecI} provide a basis for the theoretical study of molecular motors.\cite{RevModPhys.69.1269,Reimann200257,ANIE:ANIE200504313,RevModPhys.81.387,PhysRevE.69.021102,Kawaguchi20142450} In particular, a variety of one-dimensional (1D) piecewise linear ratchet models plays an important role in determining energy efficiency.\cite{PhysRevLett.71.1477,PhysRevLett.72.1766,PhysRevLett.72.2652,PhysRevLett.72.2984,Rousselet1994,Astumian09051997,PhysRevE.75.061115} In the context of molecular rotary motors, these models treat the rotation of the rotor as the 1D motion of a particle in a sawtooth-type potential, and they demonstrate that a particle can move unidirectionally as a result of certain stimuli or modulations of the potential. Thus, ratchet models partly account for deformation of the cylinder subunit through modulation of the potential. However, realistic deformations are more complex than potential modulations in 1D space and involve richer dynamics. It therefore seems natural for our purpose to investigate the effects of two-dimensional potential modulations on efficiency with 2D ratchet models, as a minimal system of deformable units. Figure~\ref{fig:rotor} shows a schematic of the three-tooth rotary ratchet system that we consider as an AMM, which is composed of a rod (rotor) and cylinder (stator) and is anchored in and crosses a membrane. The system is perturbed by a heat bath and exposed to electromagnetic fields. 
The rod can respond to such fields and be driven by a linearly polarized ac field, which temporally modulates an effective potential for the rod--cylinder interaction. Here we assume that the polarization axis lies in the $xy$-plane (see Fig.~\ref{fig:rotor} for the coordinate system). Under certain conditions, the driving field can induce unidirectional rotation of the rotor in the stator. This can be used to generate work when a load is applied. As an example, we suppose that the system functions as a pump of ions across the membrane, against the concentration gradient. We focus on two main questions: How great a load can the driving field bear in performing productive work? How can the efficiency of the conversion of power from the ac field's input to the output work be estimated? Such systems have been studied in Refs.~\citen{PhysRevE.84.061119} and \citen{PhysRevE.87.022144}, where the rotor--stator interaction was described with 2D ratchet potentials having either twofold or threefold symmetry (two- or three-tooth rotary ratchet models) and the dynamics were analyzed using the Langevin equation for a particle in such potentials. The main interest was the robustness of the unidirectional rotation induced by a linearly polarized ac field. One result was that, unlike the two-tooth structure, the three-tooth ratchet allows robust unidirectional rotation for any polarization. However, loads and energy efficiency were not considered in those studies. Here, to target these two questions, we develop a coarse-grained kinetic description that incorporates the deformational properties of 2D ratchet systems, through an analysis of the efficiency of force conversion from the ac driving field to the torque under load in the three-tooth rotary ratchet model. As a part of this framework, we propose a master equation, which is extended by taking into account the motion of boundaries between coarse-grained states. 
This enables us to estimate expectation values for the time derivatives of physical variables and to extract characteristic quantities related to the force conversion. The analytic expressions obtained for the mean angular momentum, power, and efficiency agree qualitatively, with only a few adjustable parameters, with the numerical simulation data. We describe our model in Sect.~\ref{sec:model} and present its characteristic dynamics in Sect.~\ref{sec:Output_Torque}. We propose the coarse-grained dynamical description in Sect.~\ref{sec:theory} and show the results for the energetics in Sect.~\ref{sec:Energetics}. In Sect.~\ref{discuss}, we discuss the relationship between mean angular momentum and output power and propose a characteristic feature of 2D ratchet systems. \section{\label{sec:model} Model} \begin{figure}[tbh] \centering \begin{tabular}{ll} (a)&(b) \\ \includegraphics[height=5.3cm,keepaspectratio,clip] {fig2a.eps} & \includegraphics[height=5.3cm,keepaspectratio,clip] {fig2b.eps} \end{tabular} \begin{tabular}{ll} (c)&(d) \\ \includegraphics[height=2.9cm,keepaspectratio,clip] {fig2c.eps} & \includegraphics[height=2.9cm,keepaspectratio,clip] {fig2d.eps} \end{tabular} \caption{ (Color online) Contour graphs of $V_0(\boldsymbol{x})$ and potential profiles. Panels (a) and (b) show the contour graphs of $V_0(\boldsymbol{x})$ at (a) $(a,b,c,d) = (-0.1, 0.3, 0.15, -0.1)$ and (b) $(-0.1, 0, 0.15, 0)$, where the horizontal (vertical) direction corresponds to the $x$ ($y$) direction, and the dashed curves indicate contour levels. Panels (c) and (d) show the curves of $\Phi(r,\theta)$ for the azimuthal angle $\theta \in [0,2\pi]$ at three radii $r\in \{0.8,1.0,1.1\}$. The values for $(a,b,c,d)$ of (c) and (d) are those of (a) and (b), respectively. The potential in panel (a) [(c)] is chiral with a ratchet structure, and that in panel (b) [(d)] is achiral without a ratchet structure.
The ratchet direction of the potential in panel (a) [or (c)] is defined as anticlockwise (positive). } \label{fig:ContAndPol} \end{figure} The rotational motion of the rotor tip in the stator (Fig.~\ref{fig:rotor}) is described as the motion of a particle in a 2D ratchet potential. Consider the projection of the rotor tip onto the $xy$-plane and denote its position at time $t$ by $\boldsymbol{X}(t)\equiv [X(t),Y(t)]^{\mathrm{T}}$. The movement of $\boldsymbol{X}(t)$ (${\equiv} \boldsymbol{X}$) is described by the Langevin equation \begin{equation} \gamma\dot{\boldsymbol{X}}(t) = -\partial_{\boldsymbol{X}} V(\boldsymbol{X},t) + \boldsymbol{R}(t) \quad (\gamma=1), \label{LEQ} \end{equation} where $\partial_{\boldsymbol{x}}V \equiv (\partial V/\partial x, \partial V/\partial y)^{\mathrm{T}}$, $\gamma$ is the viscous damping coefficient, which is set to unity, and $\boldsymbol{R}(t)\equiv [R_{x}(t), R_{y}(t)]^{\mathrm{T}}$ is white Gaussian noise characterized by the ensemble averages $\langle R_{j}(t)\rangle=0$ and $\langle R_{j}(t)R_{k}(t') \rangle = 2D \delta_{j,k}\delta(t-t')$, $j,k\in \{x,y\}$, with $D$ the strength of the noise. We regard $\boldsymbol{R}$ as thermal noise and impose $D=\gamma k_{\mathrm{B}}T$, where $k_{\mathrm{B}}$ and $T$ are the Boltzmann constant and the temperature. $V(\boldsymbol{x},t)$ [$ {=} V_0(\boldsymbol{x}) + V_h(\boldsymbol{x},t) + V_{I}(\boldsymbol{x})$] is the potential function. $V_0(\boldsymbol{x})$ represents a three-tooth ratchet potential [Figs.~\ref{fig:ContAndPol}(a) and \ref{fig:ContAndPol}(b)]: in the 2D polar representation $\boldsymbol{x}^{\mathrm{T}}=(r\cos\theta,r\sin\theta)$, $V_0(\boldsymbol{x})\equiv \Phi(r,\theta)$ reads \begin{equation} \Phi(r,\theta) =\Phi_{0}(r) -\frac{a}{4}r^3\cos 3\theta -\frac{b}{4}r^5\sin 3\theta +\frac{d}{6}r^6\sin 6\theta, \label{Phi:expand_reduce} \end{equation} where $\Phi_{0}(r)=(1-r^{2})^{2}(1 +c r^2)/4$ [Figs.~\ref{fig:ContAndPol}(c) and \ref{fig:ContAndPol}(d)].
$\Phi_{0}(r)$ builds a potential valley; it is modified from that in Ref.~\citen{PhysRevE.87.022144} for better confinement of the motion within the valley. The second and third terms in Eq.~(\ref{Phi:expand_reduce}) create the threefold symmetry, and the fourth term introduces a ratchet structure by adding asymmetry in azimuth. Below, we treat only potentials with three minima and saddles on the valley, as in Figs.~\ref{fig:ContAndPol}(a) and \ref{fig:ContAndPol}(b). $V_h(\boldsymbol{x},t)$ [${\equiv}\,\mathord{-}H(t)\boldsymbol{N}\boldsymbol{\oldcdot}\boldsymbol{x}$] (``$\,\boldsymbol{\oldcdot}\,$'' denotes the inner product) is the electric (or magnetic) interaction energy of the rotor in a linearly polarized ac field $H(t)\boldsymbol{N}$, where $H(t) = h\cos \Omega t$, and $\boldsymbol{N}=(\cos\phi,\sin\phi)^{\mathrm{T}}$ denotes the polarization (vector) with polarization angle $\phi$. $V_I(\boldsymbol{x})$ [${\equiv} (I/2\pi)\tan^{-1}(y/x)$] represents a function to generate a load with strength $I$ (the load torque); it is distinguished from the other potentials in that it is multivalued. The potential structure is classified as achiral, for $b=d=0$, and chiral, for $b\ne 0$ or $d\ne 0$. Under the mirror transformations $\theta\rightarrow -\theta$ and $\theta\rightarrow \pm 2\pi/3-\theta$ in Eq.~(\ref{Phi:expand_reduce}), the achiral potentials are invariant, whereas each chiral potential is mapped to its mirror image. The chiral potentials are distinguished as either clockwise or anticlockwise. Specifically, the direction of a ratchet potential is anticlockwise or positive (clockwise or negative) if, around each potential minimum, the direction from the steeper side of the slope to the more gradual side is anticlockwise (clockwise) (see Fig.~\ref{fig:ContAndPol}). The ac field can induce a torque that rotates the particle either clockwise or anticlockwise depending on the ratchet direction.
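The symmetry statements above can be verified directly from Eq.~(\ref{Phi:expand_reduce}). The following minimal sketch is our own illustration (the function name and test point are ours; the default parameters are those quoted for the chiral potential of Fig.~\ref{fig:ContAndPol}(a)): every term of $\Phi$ is invariant under $\theta\to\theta+2\pi/3$, the achiral case $b=d=0$ is invariant under the mirror $\theta\to-\theta$, and the chiral case is not.

```python
import numpy as np

# Three-tooth ratchet potential in polar form:
# Phi(r, theta) = Phi0(r) - (a/4) r^3 cos(3 theta)
#                         - (b/4) r^5 sin(3 theta) + (d/6) r^6 sin(6 theta),
# with the confining valley Phi0(r) = (1 - r^2)^2 (1 + c r^2) / 4.
def Phi(r, theta, a=-0.1, b=0.3, c=0.15, d=-0.1):
    Phi0 = (1.0 - r**2)**2 * (1.0 + c * r**2) / 4.0
    return (Phi0
            - (a / 4.0) * r**3 * np.cos(3.0 * theta)
            - (b / 4.0) * r**5 * np.sin(3.0 * theta)
            + (d / 6.0) * r**6 * np.sin(6.0 * theta))

r, th = 1.0, 0.7  # an arbitrary test point near the valley

# threefold symmetry: every angular term has period 2*pi/3
threefold = abs(Phi(r, th) - Phi(r, th + 2.0 * np.pi / 3.0))

# achiral case (b = d = 0): invariant under the mirror theta -> -theta
mirror_achiral = abs(Phi(r, th, b=0.0, d=0.0) - Phi(r, -th, b=0.0, d=0.0))

# chiral case (b, d != 0): the mirror image is a different potential
mirror_chiral = abs(Phi(r, th) - Phi(r, -th))

print(threefold, mirror_achiral, mirror_chiral)
```

The first two differences vanish to machine precision while the third does not, which is exactly the achiral/chiral classification used in the text.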
As mentioned in Sect.~\ref{Intro}, we suppose that this torque can be applied to drive the pumping function. Here, such a function is realized with the load force given by the gradient of $V_I(\boldsymbol{x})$ as \begin{equation} \boldsymbol{f}_{I}(\boldsymbol{x}) = -\partial_{\boldsymbol{x}} V_I(\boldsymbol{x}) = \frac{I}{2\pi} \left( \frac{y}{|\boldsymbol{x}|^2}, -\frac{x}{|\boldsymbol{x}|^2}\right)^{\mathrm{T}}. \label{load_force} \end{equation} This force field circulates about the origin. To limit our scope, we impose the following conditions on the driving field: I. Letting $\Delta V$ be the potential difference between the minimum and the saddle of $V_0(\boldsymbol{x})$, the typical magnitudes of $V_h(\boldsymbol{x},t)$ and $V_I(\boldsymbol{x})$, denoted by $O(h)$ and $O(I)$ \bibnote[Note2]{ Although the original dimension of $h$ is the energy divided by the dimension of $|\boldsymbol{x}|$ from $V_h(\boldsymbol{x},t)$, $h$ is also regarded as an energetic quantity as well as $I$ and $\Delta V$, because the typical magnitude of $\boldsymbol{x}$ is normalized to be a dimensionless number of $O(1)$ for the radius of the potential valley [See Eq.~(\ref{Phi:expand_reduce})].} [$O(\boldsymbol{\oldcdot})$ and $o(\boldsymbol{\oldcdot})$ denote the Landau symbols (Big- and Little-O)], are both smaller than $\Delta V$. Below, we assume $O(I)\sim O(h)$. II. The period of the ac field, $T_{p}\equiv 2\pi/\Omega$, is much longer than a typical relaxation time to the potential minima, denoted by $T_r$, where we may take $T_r \sim O(1)$; i.e., $\Omega T_r \ll 1$. These settings are relevant to the stochastic resonance (SR) phenomenon \cite{PhysRevA.39.4854,RevModPhys.70.223} and may be reasonable assumptions for the (artificial) molecular motor system. We denote by $p(\boldsymbol{x},t)\mathrm{d}\boldsymbol{x}$ the probability of the event $\boldsymbol{X}(t)\in [x,x+\mathrm{d}x)\times [y,y+\mathrm{d}y)$.
From Eq.~(\ref{LEQ}), the time evolution of the probability density function (PDF) $p(\boldsymbol{x},t)$ obeys the Fokker--Planck equation: \begin{gather} \partial_{t} p(\boldsymbol{x},t) = - \partial_{\boldsymbol{x}}\boldsymbol{\oldcdot} \boldsymbol{J}(\boldsymbol{x},t), \label{FPE} \\ \boldsymbol{J}(\boldsymbol{x},t) \equiv \left\{ - \partial_{\boldsymbol{x}} V(\boldsymbol{x},t) \right\} p(\boldsymbol{x},t) - D \partial_{\boldsymbol{x}} p(\boldsymbol{x},t), \label{FPE:J} \end{gather} where $\partial_t\equiv \partial /\partial t$, $\partial_{\boldsymbol{x}}\boldsymbol{\oldcdot} \boldsymbol{J}$ denotes the divergence of a vector field $\boldsymbol{J}$, and $\boldsymbol{J}(\boldsymbol{x},t)$ represents the probability current density. In the absence of the fields ($h=0$ and $I=0$), the PDF approaches the canonical distribution function, which satisfies $\boldsymbol{J}(\boldsymbol{x},t)=\boldsymbol{0}$ with the relation $D=\gamma k_{\mathrm{B}}T$ ($\gamma=1$). As shown in Ref.~\citen{PhysRevE.87.022144}, for $h\ne 0$ and $I=0$, unidirectional rotation of the particle can be induced by an ac driving field. In addition, when the load is applied ($I > 0$), Eq.~(\ref{load_force}) adds a competing bias circulation to $\boldsymbol{J}(\boldsymbol{x},t)$; the induced rotational motion is nevertheless expected to persist if the load is sufficiently weak. \section{\label{sec:Output_Torque} Mean Angular Momentum} \begin{figure}[t] \centering \includegraphics[height=5.5cm,keepaspectratio,clip] {fig3.eps} \caption{ (Color online) Scaled MAM $L/h^2$ versus noise intensity $D$.
The symbols and curves correspond to the results of the numerical simulation and the approximations of Eqs.~(\ref{LI:1}) and (\ref{Lh_final}) for $(I,\Omega)=(0,0.0025)$ (filled circles and thin solid curve), $(0.001,0.0025)$ (filled triangles and thin dashed curve), $(0.002,0.0025)$ (filled squares and thin dashed-dotted curve), $(0,0.005)$ (open circles and thick solid curve), $(0.001,0.005)$ (open triangles and thick dashed curve), and $(0.002,0.005)$ (open squares and thick dashed-dotted curve) with $(a,b,c,d,h,\phi) = (-0.1, 0.3, 0.15,-0.1,0.05,0)$. The adjustable parameters in Eqs.~(\ref{LI:1}) and (\ref{Lh_final}) are set to $g_{L}= 1.25$ and $g_{L}'/g_{L}=0.95$ for all. } \label{fig:DvsL} \end{figure} First, we give an overview of the dynamics of Eq.~(\ref{LEQ}). The numerical simulation for the model was performed using the second-order stochastic Runge--Kutta method \cite{PhysRevA.45.600,ruemelin:604}. To quantify the circulation of the trajectory, we define the mean angular momentum (MAM): \begin{equation} L = \overline{ X(t)\dot{Y}(t)-Y(t)\dot{X}(t)}, \label{MAM} \end{equation} where $\overline{A(t)} \equiv \int_{0}^{T_{\mathrm{tot}}} \mathrm{d}t\, A(t)/T_{\mathrm{tot}}$ denotes the mean of a dynamical variable $A(t)$ over the observation time $T_{\mathrm{tot}}$ (${\gg} T_{p}$). Anticlockwise (clockwise) rotation corresponds to $L>0$ ($L<0$). \begin{figure}[t] \centering \includegraphics[height=5.5cm,keepaspectratio,clip] {fig4.eps} \caption{ (Color online) Typical time series of $\theta(t)$ and its dependence on $D$. The abscissa indicates the elapsed time. The positive side of $\theta(t)$ corresponds to anticlockwise rotation. } \label{fig:TS} \end{figure} Figure~\ref{fig:DvsL} shows graphs of $L$ with respect to the noise intensity; the symbols and curves indicate the results from numerical simulations and theoretical analysis.
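The time integration and the MAM estimator can be sketched in a few lines. The code below is our own minimal illustration, not the authors' implementation: it uses the Euler--Heun predictor--corrector (a second-order stochastic Runge--Kutta scheme valid for additive noise) for an overdamped Langevin equation with $\gamma=1$, and a midpoint discretization of Eq.~(\ref{MAM}); the two sanity checks use analytically known cases rather than the ratchet potential itself.

```python
import numpy as np

def heun_step(x, force, dt, dW):
    # Euler--Heun predictor-corrector (2nd-order stochastic Runge--Kutta for
    # additive noise): the same Wiener increment dW enters both stages.
    xp = x + force(x) * dt + dW                      # Euler predictor
    return x + 0.5 * (force(x) + force(xp)) * dt + dW

def simulate(force, x0, dt, nsteps, D, rng):
    # overdamped Langevin dynamics: dX = force(X) dt + sqrt(2 D dt) N(0,1)
    x = np.array(x0, dtype=float)
    traj = np.empty((nsteps + 1, x.size))
    traj[0] = x
    for n in range(nsteps):
        dW = rng.normal(0.0, np.sqrt(2.0 * D * dt), size=x.size)
        x = heun_step(x, force, dt, dW)
        traj[n + 1] = x
    return traj

def mean_angular_momentum(traj, dt):
    # time average of X dY/dt - Y dX/dt, midpoint-discretized
    x, y = traj[:, 0], traj[:, 1]
    xm, ym = 0.5 * (x[1:] + x[:-1]), 0.5 * (y[1:] + y[:-1])
    return np.mean(xm * np.diff(y) - ym * np.diff(x)) / dt

# sanity check 1: noiseless relaxation dx/dt = -x; Heun error is O(dt^2)
traj = simulate(lambda x: -x, [1.0, 0.0], dt=0.01, nsteps=100, D=0.0,
                rng=np.random.default_rng(0))
print(traj[-1, 0], np.exp(-1.0))

# sanity check 2: a uniformly rotating trajectory gives MAM = omega
t = np.arange(0, 5000) * 0.01
circle = np.stack([np.cos(0.3 * t), np.sin(0.3 * t)], axis=1)
print(mean_angular_momentum(circle, 0.01))
```

In practice one would pass the negative gradient of the full potential $V(\boldsymbol{x},t)$ as `force` and average the MAM over many runs, as described in the text.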
In the numerical simulations, $\overline{A(t)}$ is obtained by averaging over 35 computational runs in addition to the long-time average over $T_{\mathrm{tot}}=10^8$. The ratchet potential used here is that shown in Fig.~\ref{fig:ContAndPol}(a), whose direction is classified as anticlockwise (positive). Without the load, $I=0$ (open and closed circles), the MAM exhibits a bell-shaped curve with respect to $D$, which implies that the magnitude of the MAM is maximized by SR. The sign of the MAM in SR depends on the ratchet direction. As the clockwise load is increased ($I>0$), the negative region of the MAM expands. This behavior indicates that the MAM consists of a component from $H(t)\boldsymbol{N}$ and one from $\boldsymbol{f}_{I}(\boldsymbol{X})$, and that these are in competition. It also implies that, for noise intensities beyond the SR peak, the rotation forced by $\boldsymbol{f}_{I}(\boldsymbol{X})$ is more robust against noise than that induced by the ac driving field. Figure~\ref{fig:TS} shows typical time series of the angular displacement defined by \begin{equation} \theta(t) = \int_{0}^{t}\mathrm{d}s \left\{ \frac{X(s)\dot Y(s)-Y(s)\dot X(s) }{|\boldsymbol{X}(s)|^2} \right\} \label{def:theta} \end{equation} for several noise intensities, $D\in\{0.01,0.03,0.04,0.05,0.08\}$, taken from the points on the curve for $(\Omega, I) = (0.005, 0.002)$ in Fig.~\ref{fig:DvsL}, where the SR peaks at $D\approx 0.04$. We see that the mean angular velocity, $\overline{\Dot\theta}=\theta(T_{\mathrm{tot}})/T_{\mathrm{tot}}$, increases with $D$ below the SR peak and decreases beyond it. There is also a turning point at which the rotational direction switches from anticlockwise to clockwise (see the curve for $D=0.07$). \begin{figure}[t] \centering \includegraphics[height=5.3cm,keepaspectratio,clip] {fig5.eps} \caption{ (Color online) $L/h^2$ versus load torque $I$.
The symbols and curves represent numerical and theoretical results for $D=0.03$ (diamonds and solid curve), $0.04$ (circles and dashed curve), $0.05$ (triangles and dashed-dotted curve), and $0.06$ (squares and dashed-double-dotted curve) in the chiral case of Fig.~\ref{fig:ContAndPol}(a) with $(\Omega, h,\phi) = (0.005, 0.05, 0)$. } \label{fig:PvsL} \end{figure} Figure~\ref{fig:PvsL} shows the $I$-dependence of the MAM at several noise intensities around the SR point. We see that the sign of the MAM reverses to negative values as $I$ increases. This is because the component of the MAM from the load torque increases with $I$ and eventually dominates that from the ac driving field. In addition to the above results, we note that the MAM does not depend significantly on $\phi$. As suggested in Ref.~\citen{PhysRevE.87.022144}, this property provides robustness in the sense that the rotary system always performs unidirectional rotation regardless of the polarization angle. \section{\label{sec:theory} Theory} We now develop a coarse-grained description of the dynamics. After introducing notation in Sect.~\ref{subsec:geom}, we obtain a master equation for coarse-grained states in Sect.~\ref{sec:markov} and analyze it in Sect.~\ref{sec:LRT}. In Sect.~\ref{sec:MAM}, we establish a formalism to estimate the time derivatives of energetic quantities. \subsection{\label{subsec:geom} Definitions} \begin{figure}[t] \centering \begin{tabular}{ll} (a)&(b) \\ \includegraphics[height=5.7cm,keepaspectratio,clip] {fig6a.eps} & \includegraphics[height=5.7cm,keepaspectratio,clip] {fig6b.eps} \end{tabular} \caption{ (Color online) (a) Definition of symbols for locally maximal and minimal points $\{\mathrm{O},\boldsymbol{x}_{\sigma}\}$, saddle points $\{\boldsymbol{x}^{\mu}\}$, ridge curves $\{\mathrm B_{\sigma}, \mathrm B^{\mu}\}$ (thick solid curves), potential valley C, and domains $\{\mathrm D_{\sigma}^{\mu}\}$ on $V_0(\boldsymbol{x})$ [$(a,b,c,d)=(0.1,-0.15,0.1,-0.05)$].
Thick and thin dashed curves correspond to the ridge curves $\{\Tilde{\mathrm B}_{\sigma}, \Tilde{\mathrm B}^{\mu}\}$ on $V(\boldsymbol{x},t)$ [$H(t)=0.05$ (thick) and $-0.05$ (thin)]. Here, $\sigma,\mu\in \{0,1,2\}$. (b) Definition of symbols for locally maximal and minimal points $\{\Tilde{\mathrm{O}},\Tilde{\boldsymbol{x}}_{\sigma}\}$, saddle points $\{\Tilde{\boldsymbol{x}}^{\mu}\}$, ridge curves $\{\Tilde{\mathrm B}_{\sigma}, \Tilde{\mathrm B}^{\mu}\}$ (thick dashed curves), and domains $\{\Tilde{\mathrm D}_{\sigma}^{\mu}\}$ on $V(\boldsymbol{x},t)$ [$H(t)=0.05$]. Unit tangential and normal vectors $\{\boldsymbol{\tau}_{\sigma}^{\mu}(\boldsymbol{x}), \boldsymbol{n}_{\sigma}^{\mu}(\boldsymbol{x})\}$ or $\{\Tilde{\boldsymbol{\tau}}_{\sigma}^{\mu}(\boldsymbol{x}), \Tilde{\boldsymbol{n}}_{\sigma}^{\mu}(\boldsymbol{x})\}$ are defined on the boundary of $\mathrm{D}_{\sigma}^{\mu}$ or $\Tilde{\mathrm{D}}_{\sigma}^{\mu}$. The vectors are the eigenvectors of the Hessian matrix $\Hat{G}(\boldsymbol{x})$ on the ridge curves. The tip of the unit normal vector is directed toward the interior of the specified domain, and the associated unit tangential vector is oriented to its right. $\mathrm{B}_{\epsilon}^{\mu}$ is a domain of width $2\epsilon$ covering $\Tilde{\mathrm{B}}^{\mu}$, which is indicated by a hatched region. $\Delta\mathrm{D}_{\sigma\ast}^{\mu}$ and $\Delta\mathrm{D}_{\sigma}^{\mu\ast}$ are the differential domains from $\mathrm{D}_{\sigma}^{\mu}$ to $\Tilde{\mathrm{D}}_{\sigma}^{\mu}$. } \label{fig:PotGeom} \end{figure} Figure~\ref{fig:PotGeom}(a) shows the notation used to describe the structure of the potential function $V_{0}(\boldsymbol{x})$. $\mathrm{C}$ denotes the potential valley. $\boldsymbol{x}_{\sigma}$ and $\boldsymbol{x}^{\mu}$ ($\sigma,\mu=0,1,2$) denote the minimal and saddle points of $V_{0}(\boldsymbol{x})$, which satisfy $\partial_{\boldsymbol{x}}V_{0}(\boldsymbol{x})=\boldsymbol{0}$.
Ridge curves of $V_{0}(\boldsymbol{x})$ are denoted by $\mathrm{B}_{\sigma}$ and $\mathrm{B}^{\mu}$, where $\mathrm{B}_{\sigma}$ ($\mathrm{B}^{\mu}$) is the curve running from the origin $\mathrm O$ toward infinity through the minimal point $\boldsymbol{x}_{\sigma}$ (the saddle point $\boldsymbol{x}^{\mu}$). Each domain surrounded by the neighboring curves $\mathrm{B}_{\sigma}$ and $\mathrm{B}^{\mu}$ is denoted by $\mathrm{D}_{\sigma}^{\mu}$ (${\in} \{\mathrm{D}_{0}^{0}, \mathrm{D}_{1}^{0}, \mathrm{D}_{1}^{1}, \mathrm{D}_{2}^{1}, \mathrm{D}_{2}^{2}, \mathrm{D}_{0}^{2}\}$). In our coarse-grained description, the 2D space is divided into the six domains $\mathrm{D}_{\sigma}^{\mu}$; the six events $\boldsymbol{X}(t)\in\mathrm{D}_{\sigma}^{\mu}$ constitute a state space. Although in Ref.~\citen{PhysRevE.87.022144} the master equation for these six states was obtained under the static boundaries $\mathrm{B}_{\sigma}$ and $\mathrm{B}^{\mu}$, here we develop another approach based on the moving ridge curves of $V(\boldsymbol{x},t)$. We therefore supplement the notation above with a corresponding notation based on $V(\boldsymbol{x},t)$: $\Tilde{\boldsymbol{x}}_{\sigma}$, $\Tilde{\boldsymbol{x}}^{\mu}$ ($\sigma,\mu=0,1,2$), and $\Tilde{\mathrm O}$ denote the minimal, saddle, and maximal points of $V(\boldsymbol{x},t)$, respectively, which satisfy $\partial_{\boldsymbol{x}}V(\boldsymbol{x},t)=\boldsymbol{0}$; $\Tilde{\mathrm{B}}_{\sigma}$ ($\Tilde{\mathrm{B}}^{\mu}$) denotes the ridge curve of $V(\boldsymbol{x},t)$ that runs from $\Tilde{\mathrm {O}}$ toward infinity through the minimal point $\Tilde{\boldsymbol{x}}_{\sigma}$ (the saddle point $\Tilde{\boldsymbol{x}}^{\mu}$); $\Tilde{\mathrm{D}}_{\sigma}^{\mu}$ denotes a domain surrounded by the curves $\Tilde{\mathrm{B}}_{\sigma}$ and $\Tilde{\mathrm{B}}^{\mu}$. As shown in Fig.~\ref{fig:PotGeom}, the boundaries $\Tilde{\mathrm{B}}_{\sigma}$ and $\Tilde{\mathrm{B}}^{\mu}$ vary with $H(t)$.
Furthermore, we define $\boldsymbol{\tau}_{\sigma}^{\mu}(\boldsymbol{x})$ and $\boldsymbol{n}_{\sigma}^{\mu}(\boldsymbol{x})$ as unit tangential and normal vectors to the boundary of $\mathrm{D}_{\sigma}^{\mu}$ at $\boldsymbol{x}$ ($\boldsymbol{x}\in\mathrm{B}_{\sigma}$ or $\boldsymbol{x}\in\mathrm{B}^{\mu}$). Here, we orient the tip (or referential direction) of the normal vector $\boldsymbol{n}_{\sigma}^{\mu}(\boldsymbol{x})$ inside $\mathrm{D}_{\sigma}^{\mu}$, and orient the tip of the tangential vector $\boldsymbol{\tau}_{\sigma}^{\mu}(\boldsymbol{x})$ to the right-hand side (RHS) of $\boldsymbol{n}_{\sigma}^{\mu}(\boldsymbol{x})$ [Fig.~\ref{fig:PotGeom}(b)]. Similarly, corresponding to $\boldsymbol{\tau}_{\sigma}^{\mu}(\boldsymbol{x})$ and $\boldsymbol{n}_{\sigma}^{\mu}(\boldsymbol{x})$, we define the unit tangential and normal vectors $\Tilde{\boldsymbol{\tau}}_{\sigma}^{\mu}(\boldsymbol{x})$ and $\Tilde{\boldsymbol{n}}_{\sigma}^{\mu}(\boldsymbol{x})$ on the boundary of $\Tilde{\mathrm{D}}_{\sigma}^{\mu}$, respectively. Curvatures of the potential at $\boldsymbol{x}_{\sigma}$ and $\boldsymbol{x}^{\mu}$ are defined as follows. Near an extremum $\boldsymbol{x}_{\ast}$ (${\in} \{\boldsymbol{x}_{\sigma}, \boldsymbol{x}^{\mu}\}$), we expand $V(\boldsymbol{x},t)$ as \begin{equation} V(\boldsymbol{x},t) \approx V(\boldsymbol{x}_{\ast},t) - \left\{ \boldsymbol{f}_{I}(\boldsymbol{x}_{\ast}) +H(t)\boldsymbol{N} \right\} \boldsymbol{\oldcdot} \delta\boldsymbol{x} + \tfrac{1}{2} \delta\boldsymbol{x}^{\mathrm{T}}\Hat{G}(\boldsymbol{x}_{\ast}) \delta\boldsymbol{x}, \label{PotExpand} \end{equation} where $\delta\boldsymbol{x} \equiv \boldsymbol{x}-\boldsymbol{x}_{\ast}$ and $ \Hat{G}(\boldsymbol{x}_{\ast})\equiv \partial_{\boldsymbol{x}}\partial_{\boldsymbol{x}}^{\mathrm{T}} V(\boldsymbol{x},t) \bigr|_{\boldsymbol{x}=\boldsymbol{x}_{\ast}} $ is the $2\times 2$ Hessian matrix at $\boldsymbol{x}_{\ast}$.
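The critical points $\boldsymbol{x}_{\ast}$ and the Hessian $\Hat{G}$ can also be obtained purely numerically. The sketch below is our own illustration, not part of the paper's formalism: it uses finite-difference derivatives and a plain Newton iteration on $\partial_{\boldsymbol{x}}V_0=\boldsymbol{0}$, with the achiral parameter set $(a,b,c,d)=(-0.1,0,0.15,0)$ assumed as a convenient example (its minima and saddles then lie on known symmetry axes). At a minimum both Hessian eigenvalues are positive; at a saddle they have opposite signs.

```python
import numpy as np

a, b, c, d = -0.1, 0.0, 0.15, 0.0   # achiral example (b = d = 0)

def V0(p):
    # three-tooth potential in Cartesian coordinates p = (x, y)
    x, y = p
    r2 = x * x + y * y
    r = np.sqrt(r2)
    th = np.arctan2(y, x)
    return ((1 - r2)**2 * (1 + c * r2) / 4
            - (a / 4) * r**3 * np.cos(3 * th)
            - (b / 4) * r**5 * np.sin(3 * th)
            + (d / 6) * r**6 * np.sin(6 * th))

def grad(f, p, h=1e-5):
    # central-difference gradient
    g = np.zeros(2)
    for i in range(2):
        e = np.zeros(2); e[i] = h
        g[i] = (f(p + e) - f(p - e)) / (2 * h)
    return g

def hessian(f, p, h=1e-4):
    # central-difference Hessian (the matrix G-hat of the text)
    G = np.zeros((2, 2))
    for i in range(2):
        for j in range(2):
            ei = np.zeros(2); ei[i] = h
            ej = np.zeros(2); ej[j] = h
            G[i, j] = (f(p + ei + ej) - f(p + ei - ej)
                       - f(p - ei + ej) + f(p - ei - ej)) / (4 * h * h)
    return G

def newton(f, p, iters=30):
    # Newton iteration on grad f = 0: converges to the nearby critical point
    p = np.array(p, dtype=float)
    for _ in range(iters):
        p = p - np.linalg.solve(hessian(f, p), grad(f, p))
    return p

x_min = newton(V0, [-1.0, 0.0])   # starts near a minimum (theta = pi)
x_sad = newton(V0, [1.0, 0.0])    # starts near a saddle (theta = 0)

eig_min = np.linalg.eigvalsh(hessian(V0, x_min))
eig_sad = np.linalg.eigvalsh(hessian(V0, x_sad))
print(eig_min, eig_sad)
```

The eigenvectors returned by the same diagonalization play the roles of the tangential and normal basis vectors defined above.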
We define a local coordinate system as $\boldsymbol{x} = \boldsymbol{x}_{\ast}+ \xi\boldsymbol{\tau}_{\sigma}^{\mu}(\boldsymbol{x}_{\ast}) +\eta\boldsymbol{n}_{\sigma}^{\mu}(\boldsymbol{x}_{\ast})$ with coordinates $(\xi,\eta)$. From the nature of the ridge curves and the valley, the basis vectors $\boldsymbol{\tau}_{\sigma}^{\mu}(\boldsymbol{x}_{\ast})$ and $\boldsymbol{n}_{\sigma}^{\mu}(\boldsymbol{x}_{\ast})$ satisfy \begin{align} \Hat{G}(\boldsymbol{x}_{\ast}) \boldsymbol{\tau}_{\sigma}^{\mu}(\boldsymbol{x}_{\ast}) &=\Lambda_{\tau}(\boldsymbol{x}_{\ast}) \boldsymbol{\tau}_{\sigma}^{\mu}(\boldsymbol{x}_{\ast}), \label{eigen_tau} \\ \Hat{G}(\boldsymbol{x}_{\ast}) \boldsymbol{n}_{\sigma}^{\mu}(\boldsymbol{x}_{\ast}) &=\Lambda_{n}(\boldsymbol{x}_{\ast}) \boldsymbol{n}_{\sigma}^{\mu}(\boldsymbol{x}_{\ast}), \label{eigen_n} \end{align} where $\Lambda_{\tau}(\boldsymbol{x}_{\ast})$ and $\Lambda_{n}(\boldsymbol{x}_{\ast})$ are the eigenvalues corresponding to $\boldsymbol{\tau}_{\sigma}^{\mu}(\boldsymbol{x}_{\ast})$ and $\boldsymbol{n}_{\sigma}^{\mu}(\boldsymbol{x}_{\ast})$, respectively; they are also the curvatures along the ridge curve and along the valley, respectively. We have $\Lambda_{\tau}(\boldsymbol{x}_{\sigma})>0$ and $\Lambda_{n}(\boldsymbol{x}_{\sigma})>0$ at the minimal points, and $\Lambda_{\tau}(\boldsymbol{x}^{\mu})>0$ and $\Lambda_{n}(\boldsymbol{x}^{\mu})<0$ at the saddle points. In the local coordinate system, the third term in Eq.~(\ref{PotExpand}) is transformed to $ \{ \Lambda_{\tau}(\boldsymbol{x}_{\ast})\xi^{2} + \Lambda_{n}(\boldsymbol{x}_{\ast}) \eta^{2}\}/2 $. \subsection{\label{sec:markov} Master equation} We denote by $P(\sigma,\mu,t)$ ($\sigma,\mu \in \{0, 1,2\}$) the probability of finding the trajectory $\boldsymbol{X}(t)$ in the domain $\mathrm{D}_{\sigma}^{\mu}$ at time $t$.
$P(\sigma,\mu,t)$ is related to $p(\boldsymbol{x},t)$ as \begin{equation} P(\sigma,\mu,t) \equiv \left( \delta_{\sigma,\mu}^{(3)}+\delta_{\sigma,\mu+1}^{(3)} \right) \int_{\boldsymbol{x}\in \mathrm{D}_{\sigma}^{\mu}} \mathrm{d}\boldsymbol{x} \,p(\boldsymbol{x},t), \label{Markov:Psig_mu} \end{equation} where $\delta_{j,k}^{(3)}$ denotes the Kronecker delta, which is $1$ if $j=k$ and $0$ otherwise for integers $j$ and $k$, and with periodic boundary conditions $\delta_{j+3,k}^{(3)}=\delta_{j,k+3}^{(3)}=\delta_{j,k}^{(3)}$ and $\mathrm{D}_{\sigma+3}^{\mu}= \mathrm{D}_{\sigma}^{\mu+3} =\mathrm{D}_{\sigma}^{\mu}$ imposed. Hereafter, quantities with a suffixed $\sigma$ or $\mu$, such as $\boldsymbol{x}_{\sigma}$, $\boldsymbol{x}^{\mu}$, $\boldsymbol{n}_{\sigma}^{\mu}(\boldsymbol{x})$, and $\boldsymbol{\tau}_{\sigma}^{\mu}(\boldsymbol{x})$, obey these boundary conditions. The factor $\delta_{\sigma,\mu}^{(3)}+\delta_{\sigma,\mu+1}^{(3)}$ in Eq.~(\ref{Markov:Psig_mu}) is 1 only if a specified domain $\mathrm{D}_{\sigma}^{\mu}$ is of type $\mathrm{D}_{\mu}^{\mu}$ or type $\mathrm{D}_{\mu+1}^{\mu}$. $P(\sigma,\mu,t)$ is thus nonzero only for allowed pairs of $\sigma$ and $\mu$. Likewise, we denote by $P(\sigma,t)$ the probability of finding the trajectory $\boldsymbol{X}(t)$ in the domain $\mathrm{D}_{\sigma}^{\sigma}\cup \mathrm{D}_{\sigma}^{\sigma+2} \equiv \mathrm{D}_{\sigma}$ ($\sigma \in \{0,1,2\}$), i.e., the attractive region for $\boldsymbol{x}_{\sigma}$, and by $Q(\mu,t)$ the probability of finding $\boldsymbol{X}(t)$ in the domain $\mathrm{D}_{\mu}^{\mu}\cup \mathrm{D}_{\mu+1}^{\mu} \equiv \mathrm{D}^{\mu}$ ($\mu$ $\in$ $\{0,1,2\}$), i.e., the united domain on both sides of $\mathrm{B}^{\mu}$. 
Specifically, \begin{align} P(\sigma,t) &\equiv \sum_{\mu} P(\sigma,\mu,t)= \int_{\boldsymbol{x}\in \mathrm{D}_{\sigma}} \mathrm{d}\boldsymbol{x} \,p(\boldsymbol{x},t), \label{P_sig} \\ Q(\mu,t) &\equiv \sum_{\sigma} P(\sigma,\mu,t)= \int_{\boldsymbol{x}\in \mathrm{D}^{\mu}} \mathrm{d}\boldsymbol{x} \,p(\boldsymbol{x},t). \label{Q_mu} \end{align} Using $P(\sigma,\mu,t)$, $P(\sigma,t)$, and $Q(\mu,t)$, we define the conditional probabilities $P(\sigma\mid\mu,t)$ and $Q(\mu\mid\sigma,t)$ as \begin{equation} P(\sigma\mid\mu,t) \equiv \frac{P(\sigma,\mu,t)}{Q(\mu,t)}, \quad Q(\mu\mid\sigma,t) \equiv \frac{P(\sigma,\mu,t)}{P(\sigma,t)}. \label{COND_PQ} \end{equation} Now let us consider a master equation for $P(\sigma,\mu,t)$. From Eqs.~(\ref{FPE}) and (\ref{Markov:Psig_mu}), we have \begin{equation} \partial_t P(\sigma,\mu,t) = \int_{\boldsymbol{x}\in \mathrm{D}_{\sigma}^{\mu}} \mathrm{d}\boldsymbol{x} \left\{ - \partial_{\boldsymbol{x}}\boldsymbol{\oldcdot} \boldsymbol{J}(\boldsymbol{x},t) \right\}. \label{D_P_sig_mu} \end{equation} Dividing the domain of integration into $\Tilde{\mathrm D}_{\sigma}^{\mu}$ and $\Delta \mathrm D_{\sigma}^{\mu} \equiv \mathrm D_{\sigma}^{\mu}-\Tilde{\mathrm D}_{\sigma}^{\mu} $, we rewrite the RHS as \begin{equation} \int_{\boldsymbol{x}\in \mathrm{D}_{\sigma}^{\mu}} \mathrm{d}\boldsymbol{x} \left\{ \boldsymbol{\oldcdot} \right\} = \int_{\boldsymbol{x}\in \Tilde{\mathrm{D}}_{\sigma}^{\mu}} \mathrm{d}\boldsymbol{x} \left\{ \boldsymbol{\oldcdot} \right\} + \int_{\boldsymbol{x}\in \Delta\mathrm{D}_{\sigma}^{\mu}} \mathrm{d}\boldsymbol{x} \left\{ \boldsymbol{\oldcdot} \right\}, \label{D_P_sig_mu:current} \end{equation} where ``$\,\boldsymbol{\oldcdot}\,$'' denotes $ - \partial_{\boldsymbol{x}} \boldsymbol{\oldcdot}\boldsymbol{J}(\boldsymbol{x},t) $ [${=} \partial_{t} p(\boldsymbol{x},t)$]. 
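The state bookkeeping behind $P(\sigma,\mu,t)$, its marginals, and the conditional probabilities can be made concrete with a toy joint distribution over the six allowed $(\sigma,\mu)$ pairs. The sketch below is our own illustration; the weights are arbitrary placeholders, not model output:

```python
import numpy as np

def delta3(j, k):
    # Kronecker delta with the period-3 condition delta_{j+3,k} = delta_{j,k}
    return 1 if (j - k) % 3 == 0 else 0

# D_sigma^mu exists only for sigma = mu or sigma = mu + 1 (mod 3): the factor
# delta3(s, m) + delta3(s, m + 1) selects exactly the six allowed states
states = [(s, m) for s in range(3) for m in range(3)
          if delta3(s, m) + delta3(s, m + 1) == 1]

# arbitrary normalized joint distribution P(sigma, mu) over the six states
rng = np.random.default_rng(1)
w = rng.random(len(states))
P = dict(zip(states, w / w.sum()))

# marginals: P(sigma) sums over barriers mu, Q(mu) sums over basins sigma
P_sigma = {s: sum(p for (s2, m), p in P.items() if s2 == s) for s in range(3)}
Q_mu = {m: sum(p for (s, m2), p in P.items() if m2 == m) for m in range(3)}

# conditional probabilities P(sigma | mu) = P(sigma, mu) / Q(mu)
P_cond = {(s, m): P[(s, m)] / Q_mu[m] for (s, m) in P}
```

Each barrier domain $\mathrm{D}^{\mu}$ receives contributions from exactly two basins, so each conditional distribution $P(\sigma\mid\mu,t)$ has two nonzero entries summing to one.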
The difference region $\Delta\mathrm{D}_{\sigma}^{\mu}$ consists of domains $\{\boldsymbol{x}\mid\boldsymbol{x} \in \mathrm D_{\sigma}^{\mu}, \boldsymbol{x} \notin \Tilde{\mathrm D}_{\sigma}^{\mu}\}$ and $\{\boldsymbol{x}\mid\boldsymbol{x} \in \Tilde{\mathrm D}_{\sigma}^{\mu}, \boldsymbol{x} \notin \mathrm D_{\sigma}^{\mu}\}$, which we refer to as ``positive'' and ``negative'' domains, respectively. For the latter, we invert the sign of integration. To employ Eq.~(\ref{D_P_sig_mu:current}), we assume that the noise intensity $D$ is much smaller than the potential difference $\Delta V$ ($\Delta V/D\gg 1$) and that $h$, $\Omega$, and $I$ are very small. These assumptions are often used in studies of SR\cite{PhysRevA.39.4854,RevModPhys.70.223}. In this situation, the probability density of $\boldsymbol{X}(t)$ is localized at the minima of $V(\boldsymbol{x},t)$ and can be regarded as near thermal equilibrium around them. We thus assume that thermal equilibrium for the PDF, $\boldsymbol{J}(\boldsymbol{x},t)= \boldsymbol{0}$, approximately holds along the curve $\Tilde{\mathrm{B}}_{\sigma}$. Applying this to the first term in Eq.~(\ref{D_P_sig_mu:current}), we have \begin{align} \int_{\boldsymbol{x}\in \Tilde{\mathrm{D}}_{\sigma}^{\mu}} \mathrm{d}\boldsymbol{x} \left\{ - \partial_{\boldsymbol{x}} \boldsymbol{\oldcdot} \boldsymbol{J}(\boldsymbol{x},t) \right\} &\approx \int_{\boldsymbol{x}\in \Tilde{\mathrm{B}}^{\mu}} \mathrm{d}\boldsymbol{x} \,\Tilde{\boldsymbol{n}}_{\sigma}^{\mu}(\boldsymbol{x})\boldsymbol{\oldcdot} \boldsymbol{J}(\boldsymbol{x},t) \nonumber \\ &\equiv \left( \delta_{\sigma,\mu+1}^{(3)}- \delta_{\sigma,\mu}^{(3)} \right) J^{\mu}(t), \label{def:J_mu} \end{align} where $J^{\mu}(t)$ is the probability current, i.e., the transition rate, from $\Tilde{\mathrm{D}}_{\mu}^{\mu}$ to $\Tilde{\mathrm{D}}_{\mu+1}^{\mu}$ induced by thermal activation and is positive for anticlockwise rotations. 
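To see how the currents $J^{\mu}(t)$ alone move probability between the basins when the boundaries are static, consider a deliberately crude rate sketch of our own. The constant forward and backward rates `kp` and `km` are illustrative assumptions, not the thermal-activation rates of the model; a ratchet bias is mimicked simply by `kp != km`:

```python
import numpy as np

# Toy coarse-grained kinetics for three basins sigma = 0, 1, 2 with static
# boundaries: the net current over barrier mu (from basin mu to basin mu + 1)
# is modeled as J^mu = kp * P_mu - km * P_{mu+1}.
kp, km = 1.0, 0.6   # illustrative rate constants (assumptions)

def J(P, mu):
    return kp * P[mu % 3] - km * P[(mu + 1) % 3]

def step(P, dt):
    # probability enters basin sigma through barrier sigma - 1 and leaves
    # through barrier sigma, so dP_sigma/dt = J^{sigma-1} - J^{sigma}
    return P + dt * np.array([J(P, s - 1) - J(P, s) for s in range(3)])

P = np.array([1.0, 0.0, 0.0])   # all probability initially in basin 0
for _ in range(20000):          # relax with forward Euler, dt = 1e-3
    P = step(P, 1e-3)

print(P, J(P, 0))   # stationary occupations and the persistent cyclic current
```

Total probability is conserved exactly because the currents telescope around the ring; the stationary state is uniform, yet it carries the nonzero cyclic current $(k_{+}-k_{-})/3$, i.e., steady unidirectional rotation.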
Note that $\Tilde{\mathrm{B}}^{\mu}$ lies on the moving potential barrier. It is reasonable to expect that the magnitude of $\boldsymbol{J}(\boldsymbol{x},t)$ reflects the degree of deviation from thermal equilibrium and to assume that $|\boldsymbol{J}(\boldsymbol{x},t)|$ is locally maximal (minimal) at $\Tilde{\boldsymbol{x}}^{\mu}$ ($\Tilde{\boldsymbol{x}}_{\sigma}$), and that $|\boldsymbol{J}(\boldsymbol{x},t)|$ increases as $\boldsymbol{x}$ nears the boundary $\Tilde{\mathrm{B}}^{\mu}$ along the valley. Thus, $\Tilde{\mathrm{B}}^{\mu}$ can be taken as a natural boundary between states. Indeed, the current density may have an $O(I)$ bias due to the load force such that $\boldsymbol{J}(\boldsymbol{x},t)\sim O(I)$ everywhere, although it is assumed to vanish along $\Tilde{\mathrm{B}}_{\sigma}$. We expect that this bias smoothly vanishes as $I\rightarrow 0$ and contributes a meaningful effect only to states near thermal equilibrium. This bias is integrated into $J^{\mu}(t)$ at the boundary $\Tilde{\mathrm{B}}^{\mu}$. Because $h$ is small and $\Delta V/D \gg 1$, the PDF nearly vanishes around the origin O and the instantaneous maximum $\Tilde{\mathrm O}$. We can thus regard O and $\Tilde{\mathrm O}$ as essentially the same point and all the curves $\mathrm{B}_{\sigma}$, $\Tilde{\mathrm{B}}_{\sigma}$, $\mathrm{B}^{\mu}$, and $\Tilde{\mathrm{B}}^{\mu}$ as starting at O. This allows us to consider the difference domain $\Delta\mathrm{D}_{\sigma}^{\mu}$ as being composed of one domain surrounded by $\mathrm{B}_{\sigma}$ and $\Tilde{\mathrm{B}}_{\sigma}$ and another surrounded by $\mathrm{B}^{\mu}$ and $\Tilde{\mathrm{B}}^{\mu}$, denoted $\Delta\mathrm{D}_{\sigma\ast}^{\mu}$ and $\Delta\mathrm{D}_{\sigma}^{\mu\ast}$, respectively.
With $\Delta\mathrm{D}_{\sigma}^{\mu}$ separated into $\Delta\mathrm{D}_{\sigma\ast}^{\mu}$ and $\Delta\mathrm{D}_{\sigma}^{\mu\ast}$, the second term in Eq.~(\ref{D_P_sig_mu:current}) reads \begin{equation} \int_{\Delta\mathrm{D}_{\sigma}^{\mu}} \mathrm{d}\boldsymbol{x} \left\{ \boldsymbol{\oldcdot} \right\} = \int_{\Delta\mathrm{D}_{\sigma}^{\mu\ast}} \mathrm{d}\boldsymbol{x} \left\{ \boldsymbol{\oldcdot} \right\} + \int_{\Delta\mathrm{D}_{\sigma\ast}^{\mu}} \mathrm{d}\boldsymbol{x} \left\{ \boldsymbol{\oldcdot} \right\}. \label{DD_int} \end{equation} Using the notation $\partial_t P(\sigma,\mu,t)\bigr|_{Q} \equiv -\int_{\Delta\mathrm{D}_{\sigma}^{\mu\ast}} \mathrm{d}\boldsymbol{x} \left\{\boldsymbol{\oldcdot} \right\}$ and $\partial_t P(\sigma,\mu,t)\bigr|_{P} \equiv \int_{\Delta\mathrm{D}_{\sigma\ast}^{\mu}} \mathrm{d}\boldsymbol{x} \left\{\boldsymbol{\oldcdot}\right\}$, from Eqs.~(\ref{D_P_sig_mu:current})--(\ref{DD_int}) we express Eq.~(\ref{D_P_sig_mu}) as \begin{align} \partial_t P(\sigma,\mu,t) \approx& \left( \delta_{\sigma,\mu+1}^{(3)} -\delta_{\sigma,\mu}^{(3)} \right) J^{\mu}(t) \nonumber \\ & - \partial_t P(\sigma,\mu,t)\bigr|_{Q} + \partial_t P(\sigma,\mu,t)\bigr|_{P}. \label{DP:express} \end{align} Under the assumptions $O(h) \ll \Delta V$ and $\Omega T_r \ll 1$, the displacement and velocity of the movement of the boundaries, $\Tilde{\mathrm{B}}_{\sigma}$ and $\Tilde{\mathrm{B}}^{\mu}$, can be regarded as sufficiently small and sufficiently slow, respectively, in the following arguments. In this case, we consider the roles of the current $J^{\mu}(t)$ and the two following terms in Eq.~(\ref{DP:express}) individually, by applying virtual variations of the boundaries under certain conditions. 
For $J^{\mu}(t)$, when there is no boundary variation, i.e., $\Tilde{\mathrm{B}}_{\sigma}=\mathrm{B}_{\sigma}$ and $\Tilde{\mathrm{B}}^{\mu}=\mathrm{B}^{\mu}$, we can ignore the last two terms in Eq.~(\ref{DP:express}) and thus have $\partial_t P(\sigma,t)\approx J^{\sigma-1}(t)-J^{\sigma}(t)$ and $\partial_t Q(\mu,t)\approx 0$. This implies that the time evolution of $P(\sigma,t)$ is dominated by $J^{\mu}(t)$; that is, the action of $J^{\mu}(t)$ is tied to the time evolution of $P(\sigma,t)$. For $\partial_t P(\sigma,\mu,t)\bigr|_{Q}$ and $\partial_t P(\sigma,\mu,t)\bigr|_{P}$, we consider the variation of $\Tilde{\mathrm{B}}^{\mu}$ (or $\Tilde{\mathrm{B}}_{\sigma}$) under the conditions that the other boundaries are fixed to their reference states $\mathrm{B}_{\sigma'}$ and $\mathrm{B}^{\mu'}$ ($\sigma'\ne \sigma$, $\mu'\ne \mu$) at $H(t)=0$, and that $J^{\mu'}(t)=0$ for $\mu' \in \{\mu-1,\mu+1\}$ (or $\mu' \in \{\sigma-1,\sigma,\sigma+1\}$). Under these conditions, with the influence of the other boundaries ignored, we can isolate the effect of the specified boundary variation and clarify the respective roles of $\partial_t P(\mu,\mu,t)\bigr|_{Q}$ and $\partial_t P(\mu,\mu,t)\bigr|_{P}$ as follows.
For simplicity, considering only the case of $\sigma=\mu$ in $\partial_t P(\sigma,\mu,t)\bigr|_{Q}$ and $\partial_t P(\sigma,\mu,t)\bigr|_{P}$ we have \begin{align} \partial_t P(\mu,\mu,t)\bigr|_{Q} & = \int_{\boldsymbol{x}\in\Tilde{\mathrm{B}}^{\mu}} \,\mathrm{d}\boldsymbol{x}\, \Tilde{\boldsymbol{n}}_{\mu}^{\mu}(\boldsymbol{x}) \boldsymbol{\oldcdot} \boldsymbol{J}(\boldsymbol{x},t) - \int_{\boldsymbol{x}\in\mathrm{B}^{\mu}} \,\mathrm{d}\boldsymbol{x}\, \boldsymbol{n}_{\mu}^{\mu}(\boldsymbol{x}) \boldsymbol{\oldcdot} \boldsymbol{J}(\boldsymbol{x},t) \nonumber \\ &\approx \int_{\boldsymbol{x}\in\mathrm{B}^{\mu}} \,\mathrm{d}\boldsymbol{x}\, \boldsymbol{n}_{\mu}^{\mu}(\boldsymbol{x}) \boldsymbol{\oldcdot} \left\{ \boldsymbol{J}(\Tilde{\boldsymbol{x}}(\boldsymbol{x}),t) - \boldsymbol{J}(\boldsymbol{x},t) \right\} , \label{Q_RJ} \\ \partial_t P(\mu,\mu,t)\bigr|_{P} & \approx \int_{\boldsymbol{x}\in\mathrm{B}_{\mu}} \,\mathrm{d}\boldsymbol{x}\, \boldsymbol{n}_{\mu}^{\mu}(\boldsymbol{x}) \boldsymbol{\oldcdot} \boldsymbol{J}(\boldsymbol{x},t), \label{P_RJ} \end{align} where $\Tilde{\boldsymbol{x}}=\Tilde{\boldsymbol{x}}(\boldsymbol{x})$ is a mapping from $\boldsymbol{x}$ on $\mathrm{B}^{\mu}$ to its nearest point, $\Tilde{\boldsymbol{x}}$, on $\Tilde{\mathrm{B}}^{\mu}$ and the approximation $\boldsymbol{n}_{\sigma}^{\mu}(\boldsymbol{x})\approx \Tilde{\boldsymbol{n}}_{\sigma}^{\mu}(\Tilde{\boldsymbol{x}}(\boldsymbol{x}))$ is applied (the boundaries $\Tilde{\mathrm{B}}^{\mu}$ and $\mathrm{B}^{\mu}$ [$\Tilde{\mathrm{B}}_{\mu}$ and $\mathrm{B}_{\mu}$] are assumed to be sufficiently close). 
We can regard $\partial_t P(\mu,\mu,t)\bigr|_{Q}$ [$\partial_t P(\mu,\mu,t)\bigr|_{P}$] as a current relative to $J^{\mu}(t)$ because the RHS of Eq.~(\ref{Q_RJ}) [(\ref{P_RJ})] represents the integral of the flux through $\Tilde{\mathrm{B}}^{\mu}$ ($\mathrm{B}_{\mu}$) relative to that through $\mathrm{B}^{\mu}$ ($\Tilde{\mathrm{B}}_{\mu}$), where $\boldsymbol{J}(\boldsymbol{x},t)=\boldsymbol{0}$ on $\Tilde{\mathrm{B}}_{\mu}$ in Eq.~(\ref{P_RJ}). In Eq.~(\ref{Q_RJ}), since we have assumed that the current attains a local maximum on $\Tilde{\mathrm{B}}^{\mu}$, so that $|\boldsymbol{J}(\Tilde{\boldsymbol{x}}(\boldsymbol{x}),t) \boldsymbol{\oldcdot}\boldsymbol{n}_{\mu}^{\mu}(\boldsymbol{x})| \geq |\boldsymbol{J}(\boldsymbol{x},t) \boldsymbol{\oldcdot}\boldsymbol{n}_{\mu}^{\mu}(\boldsymbol{x})|$, $\partial_t P(\mu,\mu,t)\bigr|_{Q}$ represents the incoming relative current into the domain $\mathrm{D}_{\mu}^{\mu}$. For the virtual variation of $\Tilde{\mathrm{B}}^{\mu}$, ignoring the current into the domain $\mathrm{D}^{\mu}$, we regard $Q(\mu,t)$ as a constant in Eq.~(\ref{Q_RJ}). Then, using the conditional probability in Eq.~(\ref{COND_PQ}), we have \begin{equation} \int_{\boldsymbol{x}\in\mathrm{B}^{\mu}} \mathrm{d}\boldsymbol{x}\, \frac{ \boldsymbol{n}_{\mu}^{\mu}(\boldsymbol{x}) \boldsymbol{\oldcdot} \left\{ \boldsymbol{J}(\Tilde{\boldsymbol{x}}(\boldsymbol{x}),t) - \boldsymbol{J}(\boldsymbol{x},t) \right\} }{Q(\mu,t)} = \partial_t P(\mu\mid\mu,t) \end{equation} and hence $\partial_t P(\mu,\mu,t)\bigr|_{Q}=Q(\mu,t) \partial_t P(\mu\mid\mu,t)$. Similarly, by applying such a virtual variation to $\Tilde{\mathrm{B}}_{\mu}$ in Eq.~(\ref{P_RJ}), and ignoring the current into the domain $\mathrm{D}_{\mu}$, we regard $P(\mu,t)$ as a constant and have $\partial_t P(\mu,\mu,t)\bigr|_{P} =P(\mu,t)\partial_t Q(\mu\mid\mu,t)$.
As a result of the above approximation and simplification, Eq.~(\ref{DP:express}) reads \begin{gather} \partial_t P(\sigma,\mu,t) \approx \left( \delta_{\sigma,\mu+1}^{(3)} -\delta_{\sigma,\mu}^{(3)} \right) J^{\mu}(t)+ J_{\sigma}^{\mu}(t), \label{DP} \\ J_{\sigma}^{\mu}(t) \equiv \left( \delta_{\sigma,\mu+1}^{(3)} +\delta_{\sigma,\mu}^{(3)} \right) \left\{ P(\sigma,t) \partial_t Q(\mu\mid\sigma,t) - Q(\mu,t) \partial_t P(\sigma\mid\mu,t) \right\}. \label{def:J'} \end{gather} Because, as mentioned above, we are treating the currents in Eq.~(\ref{DP:express}) separately, the total current in Eq.~(\ref{DP}) can be read as a superposition of currents that cause independent actions; the current $J^{\mu}(t)$ is relevant only to the evolution of $P(\sigma,t)$ without affecting $Q(\mu\mid\sigma,t)$, whereas the two (relative) currents in $J_{\sigma}^{\mu}(t)$ are related to the change in the ratios of $P(\sigma,\mu,t)$ to $P(\sigma,t)$ and to $Q(\mu,t)$. In Sect.~\ref{sec:MAM}, we shall see $J_{\sigma}^{\mu}(t)$ is indispensable in explaining the circulation induced by the ac driving field. To complete the master equation, we have to express $J^{\mu}(t)$ with known quantities. With this, we can approximately solve Eqs.~(\ref{DP}) and (\ref{def:J'}) by regarding $J_{\sigma}^{\mu}(t)$ as a small quantity, which, as shown below, enters at the level of $O(h^2)$. We first analyze the linearized master equation, \begin{equation} \partial_t P(\sigma,\mu,t) \approx \delta_{\sigma,\mu+1}^{(3)} J^{\mu}(t) - \delta_{\sigma,\mu}^{(3)} J^{\mu}(t), \label{P_sig_mu} \end{equation} within a linear response treatment in Sect.~\ref{sec:LRT}, in which $P(\sigma,t)$ and $J^{\mu}(t)$ are related to the driving field. 
\subsection{\label{sec:LRT} Linear response treatment} By applying reaction rate theory~\cite{RevModPhys.62.251} or Langer's method~\cite{PhysRevLett.21.973} for $J^{\mu}(t)$ in Eq.~(\ref{def:J_mu}), we obtain \begin{gather} J^{\mu}(t) \approx W(\mu,\mu,t) P(\mu,t)- W(\mu+1,\mu,t)P(\mu+1,t), \label{J_mu} \\ W(\sigma,\mu,t) \equiv \frac{1}{2\pi} e^{-\{ V(\boldsymbol{x}^{\mu},t) -V(\boldsymbol{x}_{\sigma},t)\}/D} \sqrt{ \frac{H_{\tau}H_{n}|G_n|}{G_{\tau}}}, \label{W_sig_mu} \end{gather} where $W(\mu+1,\mu,t)$ [$W(\mu,\mu,t)$] is the transition rate from the state $\boldsymbol{X}(t)\in \mathrm{D}_{\mu+1}$ to the state $\boldsymbol{X}(t)\in \mathrm{D}_{\mu}^{\mu}$ [from $\boldsymbol{X}(t)\in \mathrm{D}_{\mu}$ to $\boldsymbol{X}(t)\in \mathrm{D}_{\mu+1}^{\mu}$] \bibnote[Note1]{Supplemental material for the derivation of Eqs.~(\ref{J_mu})--(\ref{eq:Q_sig_mu}) is provided online.}. $H_{\tau}$ and $H_{n}$ ($G_n$ and $G_{\tau}$) are the eigenvalues of the Hessian matrix, as defined in Eqs.~(\ref{eigen_tau}) and (\ref{eigen_n}), at the potential minimum (saddle), for which we have $H_{\tau}\equiv\Lambda_{\tau}(\boldsymbol{x}_{\sigma})$ and $H_{n}\equiv\Lambda_{n}(\boldsymbol{x}_{\sigma})$ [$G_{n}\equiv\Lambda_{n}(\boldsymbol{x}^{\mu})< 0$ and $G_{\tau}\equiv\Lambda_{\tau}(\boldsymbol{x}^{\mu})$] from the threefold symmetry. Also, we obtain the relationship between $P(\sigma,\mu,t)$ and $Q(\mu\mid\sigma,t)$ in Eq.~(\ref{COND_PQ}) as \begin{gather} P(\sigma,\mu,t) \approx \left( \delta_{\sigma,\mu}^{(3)}+\delta_{\sigma+2,\mu}^{(3)} \right) Q(\mu\mid\sigma,t) P(\sigma,t), \\ Q(\mu\mid\sigma,t)\approx\frac{1}{2} \left\{ 1 + \frac{2\boldsymbol{f}_{\sigma}\boldsymbol{\oldcdot}\boldsymbol{n}_{\sigma}^{\mu}(\boldsymbol{x}_{\sigma}) } {\sqrt{2\pi D H_{n}}} \right\}, \label{eq:Q_sig_mu} \end{gather} where $\boldsymbol{f}_{\sigma}\equiv \boldsymbol{f}_{I}(\boldsymbol{x}_{\sigma}) +H(t)\boldsymbol{N}$. 
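The rate formula in Eq.~(\ref{W_sig_mu}) can be coded directly. In the sketch below the barrier height and the Hessian eigenvalues are illustrative placeholders, not values computed from the model potential:

```python
import math

def transition_rate(dV, D, H_tau, H_n, G_n, G_tau):
    """Langer-type rate, Eq. (W_sig_mu):
    (1/2pi) exp(-dV/D) sqrt(H_tau H_n |G_n| / G_tau).

    dV          : barrier height V(x^mu,t) - V(x_sigma,t)
    D           : noise intensity
    H_tau, H_n  : Hessian eigenvalues at the minimum (positive)
    G_n, G_tau  : Hessian eigenvalues at the saddle (G_n < 0)
    """
    return math.exp(-dV / D) / (2.0 * math.pi) \
        * math.sqrt(H_tau * H_n * abs(G_n) / G_tau)

# Illustrative numbers only: a higher barrier gives a much smaller rate.
w_low  = transition_rate(dV=1.0, D=0.1, H_tau=1.0, H_n=2.0, G_n=-1.5, G_tau=1.0)
w_high = transition_rate(dV=1.2, D=0.1, H_tau=1.0, H_n=2.0, G_n=-1.5, G_tau=1.0)
print(w_low, w_high)
```

The Arrhenius factor $e^{-\Delta V/D}$ dominates for $\Delta V/D\gg 1$, which is the regime assumed throughout Sect.~\ref{sec:LRT}.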
This derivation is based on the condition of local thermal equilibrium around the potential minima~\cite{Note1}. To obtain the relationships of $P(\sigma,t)$, $Q(\mu,t)$, and $J^{\mu}(t)$ to the driving fields in $O(h)$ and $O(I)$, we expand $P(\sigma,t)$ and $W(\sigma,\mu,t)$ in Eqs.~(\ref{J_mu}) and (\ref{W_sig_mu}) as \begin{gather} P(\sigma,t) \approx P_0(\sigma) + P_{1}(\sigma,t), \label{P_div} \\ W(\sigma,\mu,t) \approx W_{0} \left\{ 1 + \frac{H(t)}{D} \boldsymbol{N}\boldsymbol{\oldcdot} (\boldsymbol{x}^{\mu}-\boldsymbol{x}_{\sigma}) - \frac{ V_I(\boldsymbol{x}^{\mu}) -V_I(\boldsymbol{x}_{\sigma}) }{D} \right\}, \label{W_sig_mu2} \end{gather} where the first terms are of zeroth order and the second [and the third in Eq.~(\ref{W_sig_mu2})] terms are of first order in $h$ [and in $I$], we assume $\sum_{\sigma} P_0(\sigma) = 1$ and $\sum_{\sigma} P_1(\sigma,t) = 0$ for the normalization, and the transition rate \begin{equation} W_0 \equiv \frac{1}{2\pi} e^{ -\{ V_{0}(\boldsymbol{x}^{0}) -V_{0}(\boldsymbol{x}_{0})\}/D } \sqrt{ \frac{H_{\tau}H_{n}|G_n|}{G_{\tau}}} \label{W_sig_mu0} \end{equation} results from the thermal activation without load and ac driving fields. Here we neglect the $I$- and $h$-dependence of $H_{\tau}$, $H_{n}$, $G_{\tau}$, and $G_{n}$ for simplicity, i.e., we replace $\Hat{G}(\boldsymbol{x}_{\ast})$ in Eqs.~(\ref{eigen_tau}) and (\ref{eigen_n}) with $ \partial_{\boldsymbol{x}}\partial_{\boldsymbol{x}}^{\mathrm{T}} V_{0}(\boldsymbol{x}) \bigr|_{\boldsymbol{x}=\boldsymbol{x}_{\ast}} $. Note that we have used the threefold symmetry in $V_{0}(\boldsymbol{x})$, e.g., $V_{0}(\boldsymbol{x}_{\sigma})=V_{0}(\boldsymbol{x}_{\sigma+1})$, for $W_0$.
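The expansion in Eq.~(\ref{W_sig_mu2}) amounts to writing the time-dependent barrier as $\Delta V_0 + \Delta V_I - H(t)\boldsymbol{N}\boldsymbol{\oldcdot}(\boldsymbol{x}^{\mu}-\boldsymbol{x}_{\sigma})$ and expanding the exponential to first order in $h$ and $I$. A quick numerical comparison, with made-up small values for the field and load terms:

```python
import math

D = 1.0
dV0 = 2.0        # zeroth-order barrier (illustrative)
H = 0.01         # instantaneous driving field, O(h)
dVI = 0.005      # V_I(x^mu) - V_I(x_sigma), O(I)
N_dx = 0.7       # N.(x^mu - x_sigma), geometry-dependent placeholder
prefac = 1.0     # sqrt(H_tau H_n |G_n| / G_tau), absorbed

W0 = prefac * math.exp(-dV0 / D) / (2 * math.pi)
# exact rate with field and load included in the exponent
W_exact = prefac * math.exp(-(dV0 + dVI - H * N_dx) / D) / (2 * math.pi)
# first-order expansion, Eq. (W_sig_mu2)
W_lin = W0 * (1 + H * N_dx / D - dVI / D)
print(abs(W_exact - W_lin) / W_exact)   # relative error is O((h/D)^2, (I/D)^2)
```

The neglected terms are quadratic in the small exponent shift, consistent with the linear-response treatment that follows.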
Substituting Eqs.~(\ref{P_div}) and (\ref{W_sig_mu2}) into Eq.~(\ref{J_mu}), the zeroth-order equality of $J^{\mu}(t)=0$ reads $P_{0}(\sigma) = 1/3$, and, up to $O(h)$ and $O(I)$, $J^{\mu}(t)$ reads \begin{align} J^{\mu}(t) \approx W_{0} \biggl\{& P_{1}(\mu,t) - P_{1}(\mu+1,t) + \frac{H(t)}{3D} \boldsymbol{N}\boldsymbol{\oldcdot} \left( \boldsymbol{x}_{\mu+1}-\boldsymbol{x}_{\mu} \right) \nonumber \\ &+ \frac{ V_I(\boldsymbol{x}_{\mu})- V_I(\boldsymbol{x}_{\mu+1}) }{3D} \biggr\}. \label{eq:J_mu_1} \end{align} Applying this to $\partial_t P(\sigma,t) \approx J^{\sigma-1}(t)-J^{\sigma}(t)$ from Eq.~(\ref{P_sig_mu}), we find $P_1(\sigma,t)\approx (h\boldsymbol{N}\boldsymbol{\oldcdot}\boldsymbol{x}_{\sigma}/3) \operatorname{Re} \left[ \Tilde\chi(\Omega)e^{i\Omega t} \right]$ with $\Tilde\chi(\Omega)=3W_{0}/\{D(i\Omega+3W_{0})\}$\cite{PhysRevE.87.022144}. Note that we have $\boldsymbol{x}_{\sigma+1}+\boldsymbol{x}_{\sigma-1}=-\boldsymbol{x}_{\sigma}$ and $V_{I}(\boldsymbol{x}_{\mu+1})-V_{I}(\boldsymbol{x}_{\mu})= I/3$ from the threefold symmetry. Thus, up to $O(h)$, we obtain $P(\sigma,t)$ as \begin{align} P(\sigma,t)&\approx \frac{1}{3} \left\{1+ h \frac{\boldsymbol{N}\boldsymbol{\oldcdot}\boldsymbol{x}_{\sigma}}{D} \operatorname{Re} \left[ \frac{3W_{0}e^{i\Omega t}} {i\Omega + 3W_{0}} \right] \right\}. 
\label{P_sig_app} \end{align} Also, substituting Eqs.~(\ref{eq:Q_sig_mu}) and (\ref{P_sig_app}) into $Q(\mu,t)=\sum_{\sigma\in\{\mu,\mu+1\}} Q(\mu\mid\sigma,t)P(\sigma,t)$, we get \begin{align} Q(\mu,t) \approx \frac{1}{3} \biggl\{& 1+ \frac{ H(t)\boldsymbol{N}\boldsymbol{\oldcdot} \left(\boldsymbol{n}_{\mu}^{\mu} -\boldsymbol{n}_{\mu+1}^{\mu+1}\right) }{\sqrt{2\pi D H_{n}}} \nonumber \\ & \quad + \frac{h}{2D} \boldsymbol{N}\boldsymbol{\oldcdot}\left( \boldsymbol{x}_{\mu}+\boldsymbol{x}_{\mu+1} \right) \operatorname{Re}\left[ \frac{3W_{0} e^{i\Omega t}} {i\Omega + 3W_{0}} \right] \biggr\}, \label{Q_mu_app} \end{align} where $\boldsymbol{n}_{\mu}^{\mu}(\boldsymbol{x}_{\mu})\equiv \boldsymbol{n}_{\mu}^{\mu}$, $\boldsymbol{n}_{\mu+1}^{\mu}(\boldsymbol{x}_{\mu+1})= -\boldsymbol{n}_{\mu+1}^{\mu+1}$, and we have used $\boldsymbol{f}_{I}(\boldsymbol{x}_{\mu})\boldsymbol{\oldcdot} \boldsymbol{n}_{\mu}^{\mu}= \boldsymbol{f}_{I}(\boldsymbol{x}_{\mu+1})\boldsymbol{\oldcdot}\boldsymbol{n}_{\mu+1}^{\mu+1}$ from the threefold symmetry. From Eqs.~(\ref{P_sig_app}) and (\ref{eq:J_mu_1}), we find \begin{equation} J^{\mu}(t) \approx \frac{h W_0 }{3D} \boldsymbol{N}\boldsymbol{\oldcdot} ( \boldsymbol{x}_{\mu+1} - \boldsymbol{x}_{\mu} ) \operatorname{Re} \left[ \frac{i\Omega e^{i\Omega t}}{i\Omega + 3W_0} \right] -\frac{W_0 I}{9D}, \label{J_mu_app} \end{equation} where the first and second terms are the respective currents driven by $H(t)$ and the load. \subsection{\label{sec:MAM} Coarse-grained kinetics} We next develop a method to estimate kinetic quantities in terms of a coarse-grained description. For a comparable argument in the case of 1D ratchet models, see Ref.~\citen{JPSJ.66.1234}. 
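Before developing the coarse-grained description, the linear-response results above can be verified directly. The sketch below integrates the linearized master equation $\partial_t P_1(\sigma,t)=J^{\sigma-1}(t)-J^{\sigma}(t)$, with $J^{\mu}(t)$ taken from Eq.~(\ref{eq:J_mu_1}) at $I=0$, and compares the long-time oscillation with Eq.~(\ref{P_sig_app}); the parameter values and the threefold-symmetric geometry are illustrative, not those of the model potential:

```python
import numpy as np

W0, h, D, Omega = 1.0, 0.1, 1.0, 2.0
N = np.array([1.0, 0.0])
# threefold-symmetric minima x_sigma (sum_sigma x_sigma = 0)
ang = np.pi / 2 + 2 * np.pi * np.arange(3) / 3
xs = np.stack([np.cos(ang), np.sin(ang)], axis=1)
Nx = xs @ N                                    # N.x_sigma

def rhs(t, P1):
    # J^mu from Eq. (eq:J_mu_1) with I = 0
    H = h * np.cos(Omega * t)
    dN = (np.roll(xs, -1, axis=0) - xs) @ N    # N.(x_{mu+1} - x_mu)
    J = W0 * (P1 - np.roll(P1, -1) + H * dN / (3 * D))
    return np.roll(J, 1) - J                   # J^{sigma-1} - J^{sigma}

# RK4 integration from P1 = 0; the transient decays like e^{-3 W0 t}
t, dt, t_end = 0.0, 1e-3, 20.0
P1 = np.zeros(3)
for _ in range(int(round(t_end / dt))):
    k1 = rhs(t, P1)
    k2 = rhs(t + dt / 2, P1 + dt * k1 / 2)
    k3 = rhs(t + dt / 2, P1 + dt * k2 / 2)
    k4 = rhs(t + dt, P1 + dt * k3)
    P1 = P1 + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    t += dt

# closed form: P_1(sigma,t) = (h N.x_sigma / 3) Re[chi(Omega) e^{i Omega t}]
chi = 3 * W0 / (D * (1j * Omega + 3 * W0))
P1_pred = (h * Nx / 3) * np.real(chi * np.exp(1j * Omega * t))
print(np.max(np.abs(P1 - P1_pred)))            # small residual
```

Once the $e^{-3W_0t}$ transient has decayed, the residual is limited only by the integrator step size, and $\sum_\sigma P_1(\sigma,t)$ stays at zero as required by normalization.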
The expectation value for the time derivative of a quantity $A\{\boldsymbol{X}(t)\}\equiv A$ reads \begin{align} \langle \dot{A} \rangle &= \int \mathrm{d}\boldsymbol{x}\, \partial_{\boldsymbol x} A \boldsymbol{\oldcdot} \boldsymbol{J} \quad [\boldsymbol{J}\equiv\boldsymbol{J}(\boldsymbol{x},t)] \nonumber \\ &= \sum_{\mu} \int_{\Tilde{\mathrm{D}}^{\mu}_{\mu}\cup \Tilde{\mathrm{D}}^{\mu}_{\mu+1}} \mathrm{d}\boldsymbol{x}\, \partial_{\boldsymbol x} A \boldsymbol{\oldcdot} \boldsymbol{J} + \sum_{\sigma,\mu} \int_{\Delta \mathrm{D}^{\mu}_{\sigma}} \mathrm{d}\boldsymbol{x}\, \partial_{\boldsymbol x} A \boldsymbol{\oldcdot} \boldsymbol{J} \nonumber \\ &\approx \sum_{\mu} \Delta A^{\mu} J^{\mu}(t) + \sum_{\sigma,\mu} \Delta A_{\sigma}^{\mu} J_{\sigma}^{\mu}(t), \label{PhysObs} \end{align} with the two types of current as in Eqs.~(\ref{DP}) and (\ref{def:J'}). Assuming that $\boldsymbol{J}$ lies along the potential valley (see Sect.~\ref{sec:markov}), for example, the integral over $\Delta \mathrm{D}^{\mu\ast}_{\mu}$ [see Eqs.~(\ref{DD_int})--(\ref{P_RJ})] in the second term in the second line can be approximated as \begin{align} \int_{\Delta \mathrm{D}^{\mu\ast}_{\mu}}& \mathrm{d}\boldsymbol{x}\, \partial_{\boldsymbol x} A \boldsymbol{\oldcdot} \boldsymbol{J} \nonumber\\ & \approx - \int_{\boldsymbol{x}\in \mathrm{C}} \mathrm{d}\boldsymbol{x} \left(\partial_{\boldsymbol x} A\right)_{\mathrm{C}} \int_{\boldsymbol{x}\in\mathrm{B}^{\mu}} \mathrm{d}\boldsymbol{x}\, \boldsymbol{n}_{\mu}^{\mu}(\boldsymbol{x}) \boldsymbol{\oldcdot} \left\{ \boldsymbol{J}(\Tilde{\boldsymbol{x}}(\boldsymbol{x}),t) - \boldsymbol{J}(\boldsymbol{x},t) \right\}, \label{app_in_DotA} \end{align} where $\mathrm{C}$ denotes the curve along the valley in the related domain and $(\partial_{\boldsymbol x} A)_{\mathrm{C}}$ the tangential derivative along the curve. 
In other words, each double integral over the 2D domain is converted into repeated integrals over $\mathrm{C}$ and its orthogonal curves nearly parallel to $\mathrm{B}^{\mu}$ (or $\mathrm{B}_{\sigma}$ for $\Delta \mathrm{D}^{\mu}_{\sigma\ast}$) and then decoupled into independent integrals as in Eq.~(\ref{app_in_DotA}). A similar procedure is applied to the other integrals in Eq.~(\ref{PhysObs}). Thus, we regard $\Delta A^{\mu}$ as a representative difference of $A$ between the domains $\mathrm{D}^{\mu}_{\mu+1}$ and $\mathrm{D}^{\mu}_{\mu}$ and $\Delta A_{\sigma}^{\mu}$ as that between the boundaries $\mathrm{B}_{\sigma}$ and $\mathrm{B}^{\mu}$ of $\mathrm{D}^{\mu}_{\sigma}$. Recall here the (anticlockwise) currents $J^{\mu}(t)$, $-Q(\mu,t)\partial_t P(\mu\mid\mu,t)$, $P(\mu,t)\partial_t Q(\mu\mid\mu,t)$, $Q(\mu,t)\partial_t P(\mu+1\mid\mu,t)$, and $-P(\mu+1,t)\partial_t Q(\mu\mid\mu+1,t)$, each of which increases $A$ on the downstream side of the specified boundary and decreases it on the upstream side. For each of these currents, there is a possible coupling with one of the characteristic differences $A(\boldsymbol{x}_{\mu+1})-A(\boldsymbol{x}_{\mu})$, $A(\boldsymbol{x}_{\mu+1})-A(\boldsymbol{x}^{\mu})$, and $A(\boldsymbol{x}^{\mu})-A(\boldsymbol{x}_{\mu})$ as $\Delta A^{\mu}$ or $\Delta A_{\sigma}^{\mu}$ in Eq.~(\ref{PhysObs}). Each product of a current and the characteristic difference represents transport of $A$ through the specified boundary. In Eq.~(\ref{PhysObs}), $\langle \dot{A} \rangle$ is expressed as a superposition of such transports. However, there is no unique prescription relating $\Delta A^{\mu}$ ($\Delta A_{\sigma}^{\mu}$) to the characteristic differences. We therefore determine these relationships empirically, by comparison with the results of numerical simulations.
For instance, by applying Eq.~(\ref{PhysObs}) to the velocity we obtain \begin{align} \langle \dot{\boldsymbol{X}}(t)\rangle &\approx g_{V} \sum_{\mu} (\boldsymbol{x}_{\mu+1}-\boldsymbol{x}_{\mu}) J^{\mu}(t) + g_{V}' \sum_{\sigma,\mu} (\boldsymbol{x}^{\mu}-\boldsymbol{x}_{\sigma}) \left( \delta_{\sigma,\mu}^{(3)}+\delta_{\sigma,\mu+1}^{(3)} \right) \nonumber \\ &\quad \times \left\{ P(\sigma,t) \partial_t Q(\mu\mid\sigma,t) - Q(\mu,t) \partial_t P(\sigma\mid\mu,t) \right\}. \label{Ex_V} \end{align} In the first term, $\boldsymbol{x}_{\mu+1}-\boldsymbol{x}_{\mu}$ gives the representative difference in the position vector between $\mathrm{D}^{\mu}_{\mu+1}$ and $\mathrm{D}^{\mu}_{\mu}$. In the second term, with $\sigma$ set to equal $\mu$ in the summation, $(\boldsymbol{x}^{\mu}-\boldsymbol{x}_{\mu}) P(\mu,t)\partial_t Q(\mu\mid\mu,t)$ and $(\boldsymbol{x}_{\mu}-\boldsymbol{x}^{\mu}) Q(\mu,t)\partial_t P(\mu\mid\mu,t)$ give the components of the velocity caused by variations in $\Tilde{\mathrm B}_{\mu}$ and $\Tilde{\mathrm B}^{\mu}$, respectively. We use the adjustable parameters $g_{V}$ and $g_{V}'$ to absorb errors arising from the approximation in Eq.~(\ref{PhysObs}) and determine these by fits to the data. Such adjustable parameters, introduced here and below, are dimensionless, and we regard them as $O(1)$. 
For the expectation value for the MAM in Eq.~(\ref{MAM}), assuming $L \approx \langle L\rangle$ for sufficiently large $T_{\mathrm{tot}}$, we have $L= L^{(I)}+ L^{(h)}$ with \begin{align} L^{(I)} &\approx \frac{g_{L}}{2} \sum_{\mu} \left\{ \boldsymbol{x}^{\mu}\times (\boldsymbol{x}_{\mu+1}-\boldsymbol{x}_{\mu}) \right\}_{z} \overline{ J^{\mu}(t) }, \label{LIdef:LI} \\ L^{(h)} &\approx g_{L}' \sum_{\sigma,\mu} (\boldsymbol{x}_{\sigma}\times \boldsymbol{x}^{\mu})_{z} \left( \delta_{\sigma,\mu}^{(3)}+\delta_{\sigma,\mu+1}^{(3)} \right) \nonumber \\ &\quad \times \overline{ \left\{ P(\sigma,t) \partial_t Q(\mu\mid\sigma,t) - Q(\mu,t) \partial_t P(\sigma\mid\mu,t) \right\} }, \label{L_expect} \end{align} where $L^{(I)}$ and $L^{(h)}$ come from the two types of current. Each summand in Eq.~(\ref{LIdef:LI}) represents the $z$-component of the angular momentum at $\boldsymbol{x}^{\mu}$, i.e., the vector product between $\boldsymbol{x}^{\mu}$ and $(\boldsymbol{x}_{\mu+1}-\boldsymbol{x}_{\mu})J^{\mu}(t)/2$, where the latter is the mean of $(\boldsymbol{x}_{\mu+1}-\boldsymbol{x}^{\mu})J^{\mu}(t)$ and $(\boldsymbol{x}^{\mu}-\boldsymbol{x}_{\mu})J^{\mu}(t)$. Applying Eqs.~(\ref{P_sig_app})--(\ref{J_mu_app}) to Eqs.~(\ref{LIdef:LI})--(\ref{L_expect}), we obtain \begin{equation} L^{(I)} \approx - \frac{ g_{L} W_{0}I}{6D} \{ (\boldsymbol{x}_{0}-\boldsymbol{x}_{1}) \times\boldsymbol{x}^{0} \}_{z} \label{LI:1} \end{equation} and $L^{(h)}$ as in Eq.~(\ref{Lh_final}) in Appendix~\ref{App:AC-induced-quanta}. Note that because of the threefold symmetry, $\{ (\boldsymbol{x}_{\mu+1}-\boldsymbol{x}_{\mu}) \times \boldsymbol{x}^{\mu} \}_{z}$ is independent of $\mu$. 
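The $\mu$-independence of $\{(\boldsymbol{x}_{\mu+1}-\boldsymbol{x}_{\mu})\times\boldsymbol{x}^{\mu}\}_{z}$ holds because the $z$-component of a cross product is invariant when both factors are rotated by the same angle. A short check with an arbitrary threefold-symmetric geometry (random positions, not the model's):

```python
import numpy as np

def rot(a):
    # 2D rotation matrix by angle a
    return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])

def cross_z(a, b):
    # z-component of the cross product of two 2D vectors
    return a[0] * b[1] - a[1] * b[0]

rng = np.random.default_rng(0)
x0_min = rng.normal(size=2)   # arbitrary minimum position x_0
x0_sad = rng.normal(size=2)   # arbitrary saddle position x^0
x_min = [rot(2 * np.pi * m / 3) @ x0_min for m in range(3)]
x_sad = [rot(2 * np.pi * m / 3) @ x0_sad for m in range(3)]

# {(x_{mu+1} - x_mu) x x^mu}_z for mu = 0, 1, 2: all three coincide
vals = [cross_z(x_min[(m + 1) % 3] - x_min[m], x_sad[m]) for m in range(3)]
print(vals)
```

The same invariance collapses the sum in Eq.~(\ref{LIdef:LI}) to three identical terms, which is how the factor of $3$ arises in Eq.~(\ref{LI:1}).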
Using Eqs.~(\ref{LI:1}) and (\ref{Lh_final}), we rewrite $L$ as \begin{gather} L \approx \frac{ g_{L} W_{0}}{6D} \{ (\boldsymbol{x}_{0}-\boldsymbol{x}_{1}) \times\boldsymbol{x}^{0} \}_{z}\, \{I_0(D)-I\} , \label{LwithI_0} \\ I_{0}(D) \equiv - \frac{9 g_{L}' h^2 \Omega^{2}}{2 g_{L} \sqrt{2\pi D H_{n}} } \frac{ \boldsymbol{x}_{0}\boldsymbol{\oldcdot} \boldsymbol{n}_{0}^{0} }{\Omega^2 + (3W_0)^2}. \label{TorqBalance} \end{gather} For the mirror image of the potential, the sign of $ \left\{ (\boldsymbol{x}_{0}-\boldsymbol{x}_{1}) \times \boldsymbol{x}^{0} \right\}_{z} $ is inverted, but $\boldsymbol{x}_{0}\boldsymbol{\oldcdot} \boldsymbol{n}_{0}^{0}$ remains unchanged. One can check that $ \left\{ (\boldsymbol{x}_{0}-\boldsymbol{x}_{1}) \times \boldsymbol{x}^{0} \right\}_{z}\geq 0 $ for a positive ratchet potential. When $I=I_{0}(D)$, the load balances the ac-induced torque. The expression for $I_0(D)$ in Eq.~(\ref{TorqBalance}) implies that, for the ac driving field to exert a stronger torque and thus cope with a heavier load, the ratchet potential must have a greater asymmetry, since $\boldsymbol{x}_{0}\boldsymbol{\oldcdot} \boldsymbol{n}_{0}^{0}$ is related to the degree of asymmetry.\cite{PhysRevE.87.022144} $I_{0}(D)$ indicates the minimal load strength, or coercive (load) torque, a term borrowed from the coercive field in magnetic terminology; the load torque for $I > I_{0}(D)$ overwhelms the ac-induced torque. The curves in Fig.~\ref{fig:DvsL} are plots of Eq.~(\ref{LwithI_0}); they agree qualitatively with the numerical results. The adjustable parameters are set to $g_{L}= 1.25$ and $g_{L}'/g_{L}=0.95$ throughout this paper. The peak of the curves with respect to $D$ identifies SR and mainly comes from the factor $W_0\Omega^2/\{\Omega^2 + (3W_0)^2\}$ in Eq.~(\ref{Lh_final}), which is maximized when $\Omega = 3W_0$. In Fig.~\ref{fig:PvsL}, the value of $I$ at $L=0$ tends to increase as $D$ decreases.
This is explained by Eq.~(\ref{TorqBalance}), because $I_{0}(D)$ is a monotonically decreasing function of $D$. This implies that, to gain a stronger coercive load torque, SR should occur in a smaller-$D$ region: the coercive torque increases as the peak point of SR ($\Omega\approx 3W_0$) shifts to smaller $D$. We describe a related implication of the $D^{-1/2}$ factor in $I_{0}(D)$ in Sect.~\ref{DvsOmg}. \section{\label{sec:Energetics} Energetics} We consider the energetics and the efficiency\cite{JPSJ.66.1234,Sekimoto01011998,PhysRevLett.95.130602,Sekimoto2010} in the force conversion from the linearly polarized ac field to the torque for the load. Our approach follows the methods developed in Refs.~\citen{PhysRevLett.83.903,PhysRevE.68.021906,PhysRevE.70.061105}, and adds two-dimensional characteristics to them. We separate the slowly varying part $\boldsymbol{V}$ from $\Dot{\boldsymbol{X}}$ as $\Dot{\boldsymbol{X}}\equiv \boldsymbol{V}+\delta\Dot{\boldsymbol{X}}$, where $\delta\Dot{\boldsymbol{X}}$ denotes the fluctuating part whereas $\boldsymbol{V}$ has a long-term correlation with the driving field. Furthermore, $\boldsymbol{V}$ is decomposed as $ \boldsymbol{V} \equiv \langle \dot{\boldsymbol{X}} \rangle +\boldsymbol{V}_{\theta} $, where $\langle \dot{\boldsymbol{X}} \rangle$ is regarded as a translational mode, which is in fact an oscillation in the direction along the driving field $H(t)\boldsymbol{N}$ [see the argument below Eq.~(\ref{av_X})], and $\boldsymbol{V}_{\theta}$ represents a steady rotational mode around the origin. For simplicity, we approximate $\langle \dot{\boldsymbol{X}} \rangle$ as $ \langle \dot{\boldsymbol{X}} \rangle \approx \operatorname{Re}[\Tilde{\boldsymbol{V}}_{h}e^{i\Omega t} ] $ with the Fourier coefficient of the fundamental harmonic (or the linear-response part), $\Tilde{\boldsymbol{V}}_h = (2/T_{p}) \int_{0}^{T_{p}} \mathrm{d}t\, \langle \dot{\boldsymbol{X}}(t) \rangle e^{-i\Omega t}$.
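The coefficient $\Tilde{\boldsymbol{V}}_h$ is the standard projection onto the fundamental harmonic; the sketch below (one Cartesian component, with a synthetic signal) shows that the projection recovers a known amplitude and discards a higher harmonic:

```python
import numpy as np

Omega = 2.0
Tp = 2 * np.pi / Omega
t = np.linspace(0.0, Tp, 20001)
dt = t[1] - t[0]

V_true = 0.3 - 0.4j                  # assumed complex amplitude of e^{i Omega t}
v = np.real(V_true * np.exp(1j * Omega * t)) + 0.05 * np.cos(3 * Omega * t)

# V_h = (2/T_p) int_0^{T_p} dt v(t) e^{-i Omega t}  (trapezoidal rule)
g = v * np.exp(-1j * Omega * t)
V_est = (2 / Tp) * dt * (g.sum() - 0.5 * (g[0] + g[-1]))
print(V_est)                          # close to 0.3 - 0.4j
```

Over a full period, the $3\Omega$ component integrates to zero, so only the linear-response part survives.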
For the energetics on the rotational mode, the force $\boldsymbol{F}=-\partial_{\boldsymbol{X}}V(\boldsymbol{X},t)$ is also decomposed as \begin{equation} \boldsymbol{F}\equiv\gamma \langle \dot{\boldsymbol{X}} \rangle + \Tilde{\boldsymbol{F}}, \label{def:ti_F} \end{equation} where $ \gamma \langle \dot{\boldsymbol{X}} \rangle =\langle \boldsymbol{F}\rangle $ is the mean frictional force, and $\Tilde{\boldsymbol{F}}$ involves the force relevant to the rotational mode. This corresponds to the decomposition $\dot{\boldsymbol{X}}\equiv\langle \dot{\boldsymbol{X}} \rangle +\Tilde{\dot{\boldsymbol{X}}}$. With the component $\Tilde{\dot{\boldsymbol{X}}}$, which removes the bias of the translational mode while retaining the rotational mode, we define the relative angular momentum and angular velocity as \begin{align} L'(t) &\equiv X(\dot{Y}-\langle\dot{Y}\rangle ) -Y (\dot{X}-\langle \dot{X}\rangle), \label{def:ti_L} \\ \omega'(t) &\equiv \frac{ X(\dot{Y}-\langle\dot{Y}\rangle ) -Y (\dot{X}-\langle \dot{X}\rangle) }{X^2+Y^2}. \label{def:ti_omg} \end{align} Now, let us consider the energy (power) balance equation (EBE). The derivation of the EBE involves calculating the long-time average of the inner product of Eq.~(\ref{LEQ}) with $\boldsymbol{F}$, i.e., $ \gamma\overline{\dot{\boldsymbol X}\boldsymbol{\oldcdot} \boldsymbol F} =\overline{\left|\boldsymbol{F}\right|^2} +\overline{\boldsymbol F\boldsymbol{\oldcdot}\boldsymbol R} $; details are given in Appendix~\ref{App:EBE}, which contains the decomposition of $\overline{\left|\boldsymbol{F}\right|^2}$ into terms relevant to the two modes and the estimation of $\overline{\boldsymbol F\boldsymbol{\oldcdot}\boldsymbol R}$.
We thus find the EBE as \begin{equation} \overline{ \dot{\boldsymbol{X}}\boldsymbol{\oldcdot} \boldsymbol{f}_{h} } = \overline{ (-\dot{\boldsymbol{X}}\boldsymbol{\oldcdot}\boldsymbol{f}_{I}) } + \gamma \left( \overline{|\langle\Dot{\boldsymbol{X}}\rangle |^2} + \overline{L'}\overline{\omega'} \right) +Q_T, \label{EnergyBalanceEq} \end{equation} where $\boldsymbol{F}=\boldsymbol{f}_{h}+\boldsymbol{f}_{I}$ [$\boldsymbol{f}_{h}\equiv H(t)\boldsymbol{N}$, $\boldsymbol{f}_{I}\equiv \boldsymbol{f}_{I}(\boldsymbol{X}(t))$], and \begin{align} \gamma Q_T \equiv &\;k_{\mathrm B}T(\overline{\partial_xF_x +\partial_yF_y}) + \overline{\left(\frac{X\Tilde F_x+Y\Tilde F_y}{\sqrt{X^2+Y^2}}\right)^2} \nonumber \\ &+ \gamma^2 \overline{\left\{\gamma^{-1}(X\Tilde F_y-Y\Tilde F_x) - \overline{L'}\right\}\left(\frac{1}{\gamma}\frac{X\Tilde F_y-Y\Tilde F_x}{X^2+Y^2} - \overline{\omega'}\right)}\,. \label{def:Qt} \end{align} The left-hand side (LHS) in Eq.~(\ref{EnergyBalanceEq}) represents the input power of the driving field $\boldsymbol{f}_{h}$ into the rotary system, and is denoted by $P_{h}\equiv \overline{\dot{\boldsymbol{X}}\boldsymbol{\oldcdot} \boldsymbol{N}H(t)}$. The first term on the RHS represents an output power of the system for the load: \begin{equation} P_I \equiv - \overline{ \dot{\boldsymbol{X}}\boldsymbol{\oldcdot} \boldsymbol{f}_{I}(\boldsymbol{X}) } =\overline{\dot V_I(\boldsymbol{X})} = \frac{I}{2\pi} \overline{\dot\theta(t)}. \label{def:P_I} \end{equation} The second term on the RHS, $P_d \equiv \gamma \left( \overline{|\langle\Dot{\boldsymbol{X}}\rangle|^2} + \overline{L'}\overline{\omega'} \right) $, represents the energy dissipation rate of the two modes (Supposing the rotor drags and rotates the surrounding molecules, this power is spent to retain such a movement). However, we replace it with \begin{equation} P_d \approx \frac{\gamma}{2} |\Tilde{\boldsymbol{V}}_{h}|^{2} + \gamma L \overline{\dot\theta} \label{def:P_d} \end{equation} for simplicity. 
Here, as shown in Eqs.~(\ref{eq:av_L'}) and (\ref{eq:av_omg'}) in Appendix~\ref{App:EBE}, the difference between $\overline{L'}$ and $L$ (also that between $\overline{\omega'}$ and $\overline{\Dot{\theta}}$) can be regarded as $o(h^2)$. The last term, $Q_T$, in Eq.~(\ref{EnergyBalanceEq}) represents the power of the thermally activated fluctuations. In particular, the second term in Eq.~(\ref{def:Qt}) is the mean of the squared radial component of $\Tilde{\boldsymbol{F}}$, which excludes the two modes, and the third is the covariance of $L'(t)$ and $\omega'(t)$. The latter reflects the fact that the relationship between $L'(t)$ and $\omega'(t)$ is not constant but fluctuates. Thus, the last two terms in Eq.~(\ref{def:Qt}) represent the fluctuations enhanced by the additional degrees of freedom of the rotational orbit. Here, we consider two types of output/input power ratio, $\rho$ and $\eta$: \begin{gather} \rho = \frac{ P_d + P_I }{P_h}, \label{def:eta} \\ \eta = \frac{ P_d^{\prime} + P_I }{P_h}, \quad P_d^{\prime}\equiv \gamma L\overline{\Dot\theta}, \label{def:eta1} \end{gather} where $\rho$ denotes the ratio of the total output power of the slowly varying component to the input power, and characterizes the preservation of the powers of motion in the time scale ${\sim}\Omega^{-1}$, and $\eta$ denotes the power-conversion efficiency of the ac driving field to the rotational motion subject to a load. In the latter, $P_d$ is replaced with $P_d'$, so that the numerator of $\eta$ consists only of the output powers of the rotational mode. This corresponds to the so-called rectification efficiency (or generalized efficiency) in the 1D ratchet models in Refs.~\citen{PhysRevE.68.021906,PhysRevLett.83.903,PhysRevE.70.061105,PhysRevE.75.061115}. An advantage of the generalized efficiency is that it gives nonvanishing values even in the absence of loads. Below, we show both numerical simulation and approximation results for the above-mentioned powers as well as for $\rho$ and $\eta$.
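The factor $1/2$ multiplying $|\Tilde{\boldsymbol{V}}_h|^2$ in Eq.~(\ref{def:P_d}) is the period average of a squared harmonic, $\overline{(\operatorname{Re}[\Tilde{V}e^{i\Omega t}])^2}=|\Tilde{V}|^2/2$; a quick numerical confirmation with an arbitrary complex amplitude:

```python
import numpy as np

Omega = 2.0
t = np.linspace(0.0, 2 * np.pi / Omega, 200001)
dt = t[1] - t[0]

V = 0.3 - 0.4j                                   # arbitrary complex amplitude
sq = np.real(V * np.exp(1j * Omega * t)) ** 2
# period average via the trapezoidal rule
mean_sq = dt * (sq.sum() - 0.5 * (sq[0] + sq[-1])) / (t[-1] - t[0])
print(mean_sq, abs(V) ** 2 / 2)                  # both approximately 0.125
```

The cross terms at frequency $2\Omega$ average to zero over one period, leaving $|\Tilde{V}|^2/2$.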
\begin{figure}[t] \centering \begin{tabular}{ll} (a) & (b) \\ \includegraphics[height=5.3cm,keepaspectratio,clip] {fig7a.eps} & \includegraphics[height=5.3cm,keepaspectratio,clip] {fig7b.eps} \end{tabular} \caption{ (Color online) (a) Scaled output power $P_{I}/h^2$ versus $D$. (b) $P_{I}/h^2$ versus $I$. The graph settings are the same as those in Fig.~\ref{fig:DvsL} [panel (a)] and Fig.~\ref{fig:PvsL} [panel (b)], respectively. The curves indicate Eq.~(\ref{PI}), for which the adjustable parameters are set to $g_{O}= 0.47g_{L}$ and $g_{O}'/g_{O} =g_{L}'/g_{L}=0.95$. } \label{fig:DvsPI} \end{figure} First, let us consider the expectation value for $P_{I}$ in Eq.~(\ref{def:P_I}). Hereafter, we assume $P_I = \langle P_I\rangle$ by the ergodic hypothesis, and similarly for the other powers. In a similar way to $L$ in Sect.~\ref{sec:MAM}, partitioning $P_{I}$ into $P_{I}^{(I)}$ and $P_{I}^{(h)}$ ($ P_{I}= P_{I}^{(I)}+P_{I}^{(h)}$) related to the currents $J^{\mu}(t)$ and $J_{\sigma}^{\mu}(t)$, we obtain the following estimates: \begin{align} P_{I}^{(I)} \approx& \;g_{O} \sum_{\mu} \{ V_{I}(\boldsymbol{x}_{\mu+1}) - V_{I}(\boldsymbol{x}_{\mu}) \} \overline{J^{\mu}(t)}, \label{Ex_PI|I} \\ P_{I}^{(h)} \approx& \;g_{O}' \sum_{\sigma,\mu} \left\{ V_{I}(\boldsymbol{x}_{\mu+1})- V_{I}(\boldsymbol{x}_{\mu}) \right\} \left( \delta_{\sigma,\mu}^{(3)}-\delta_{\sigma,\mu+1}^{(3)} \right) \nonumber \\ &\times \overline{ \left\{ P(\sigma,t)\partial_t Q(\mu\mid\sigma,t) - Q(\mu,t) \partial_t P(\sigma\mid\mu,t) \right\} }, \label{Ex_PI|h} \end{align} where $g_{O}$ and $g_{O}'$ are adjustable parameters. In Eq.~(\ref{Ex_PI|I}), each summand represents the rate of energy change for the transition $\boldsymbol{x}_{\mu}\rightarrow \boldsymbol{x}_{\mu+1}$ due to thermal activation. From Eq.~(\ref{J_mu_app}), we get $ P_{I}^{(I)}\approx - g_{O} W_{0}I^2/(9D)$.
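The closed form $P_I^{(I)}\approx -g_OW_0I^2/(9D)$ follows by inserting $V_I(\boldsymbol{x}_{\mu+1})-V_I(\boldsymbol{x}_{\mu})=I/3$ and the time average $\overline{J^{\mu}(t)}=-W_0I/(9D)$ into Eq.~(\ref{Ex_PI|I}); a minimal check with arbitrary parameter values:

```python
# Check P_I^(I) = g_O * sum_mu (I/3) * Jbar, with Jbar = -W0*I/(9*D),
# against the closed form -g_O*W0*I**2/(9*D); numbers are arbitrary.
g_O, W0, I, D = 0.47, 1.3, 0.2, 0.8
Jbar = -W0 * I / (9 * D)
lhs = g_O * sum((I / 3) * Jbar for _ in range(3))   # three equal summands
rhs = -g_O * W0 * I**2 / (9 * D)
print(lhs, rhs)   # agree
```

Since the oscillating part of $J^{\mu}(t)$ averages to zero, only the load-driven term survives in $\overline{J^{\mu}(t)}$, and $P_I^{(I)}$ is negative: thermal hopping down the load tilt dissipates energy.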
In Eq.~(\ref{Ex_PI|h}), each summand represents the energy consumption for the movement in the direction $\boldsymbol{x}_{\mu}\rightarrow \boldsymbol{x}_{\mu+1}$ induced by the deformation of $\Tilde{\mathrm D}_{\mu}^{\mu}$ and $\Tilde{\mathrm D}_{\mu+1}^{\mu}$. Using Eq.~(\ref{App:PI_h_1}) in Appendix~\ref{App:AC-induced-quanta}, we obtain \begin{align} P_{I} & \approx \frac{ g_{O} W_0 I}{9D} \left\{ I_{0}(D) -I \right\}, \label{PI} \end{align} where $I_{0}(D)$ is given in Eq.~(\ref{TorqBalance}), and $g_{O}'/g_{O} =g_{L}'/g_{L}$ is assumed so that $P_{I}$ is proportional to $L$ for $I_{0}(D)\geq I$. Figure~\ref{fig:DvsPI} shows graphs of $P_{I}$ with respect to (a) $D$ and (b) $I$. In panel (b), $P_I$ is approximately parabolic in $I$, taking positive values for $0 < I < I_0(D)$, with a maximum at $I= I_0(D)/2$. The maximum output power is estimated as $g_{O} W_0 \{I_{0}(D)\}^2 /(36 D)$. \begin{figure}[t] \def5.5cm{5.5cm} \centering \includegraphics[height=5.5cm,keepaspectratio,clip] {fig8.eps} \caption{ (Color online) Scaled input power $P_{h}/h^2$ versus $D$. The graph settings are the same as those in Fig.~\ref{fig:DvsL}. The curves indicate Eq.~(\ref{Pac}), whose adjustable parameter is set to $g_{V}= 0.75$ throughout this paper. } \label{fig:DvsPh} \end{figure} Next, we estimate the expectation value for $P_{h}$. 
From Eqs.~(\ref{J_mu_app}) and (\ref{Ex_V}), keeping terms up to $O(h^2)$, we obtain \begin{align} P_h &\approx g_{V} \sum_{\mu} \left\{ ( \boldsymbol{x}_{\mu+1} - \boldsymbol{x}_{\mu} ) \boldsymbol{\oldcdot} \boldsymbol{N} \right\} \overline{ H(t)J^{\mu}(t) } \nonumber \\ &= \frac{ g_{V} h^2}{6D} \frac{ W_0 \Omega^2 }{\Omega^2 + (3W_0)^2} \sum_{\mu} \left\{ ( \boldsymbol{x}_{\mu+1} - \boldsymbol{x}_{\mu} ) \boldsymbol{\oldcdot} \boldsymbol{N} \right\}^2 \nonumber \\ &= \frac{3 g_{V} h^2 |\boldsymbol{x}_{0}|^2 }{4D} \frac{ W_0\Omega^2 }{\Omega^2 + (3W_0)^2}, \label{Pac} \end{align} where, between the second and third lines, we have used Eq.~(\ref{App:J1}) in Appendix~\ref{App:AC-induced-quanta} and $ \sum_{\mu} \left\{ ( \boldsymbol{x}_{\mu+1} - \boldsymbol{x}_{\mu} ) \boldsymbol{\oldcdot} \boldsymbol{N} \right\}^2= 9|\boldsymbol{x}_{0}|^2/2 $ [see Eqs.~(\ref{App:vec}) and (\ref{App:Nxx})]. Figure~\ref{fig:DvsPh} shows graphs of $P_h$ with respect to $D$ for $g_{V}= 0.75$. The peak in $P_h$ is due to SR. $P_h$ has no strong dependence on $I$ or $\phi$. \begin{figure}[t] \def5.5cm{5.3cm} \centering \begin{tabular}{ll} (a)&(b) \\ \includegraphics[height=5.5cm,keepaspectratio,clip] {fig9a.eps} & \includegraphics[height=5.5cm,keepaspectratio,clip] {fig9b.eps} \end{tabular} \caption{ (Color online) Energy dissipation rates of (a) the slowly varying modes $P_{d}$ and (b) the rotational mode $P_{d}'$ versus $D$. The graph settings are the same as those in Fig.~\ref{fig:DvsL}. The curves indicate Eqs.~(\ref{Pd}) [panel (a)] and (\ref{Pd'}) [panel (b)]. } \label{fig:DvsPd} \end{figure} \begin{figure}[t] \def5.5cm{5.3cm} \centering \begin{tabular}{ll} (a) & (b) \\ \includegraphics[height=5.5cm,keepaspectratio,clip] {fig10a.eps} & \includegraphics[height=5.5cm,keepaspectratio,clip] {fig10b.eps} \end{tabular} \caption{(Color online) (a) Power ratio $\rho$ and (b) efficiency $\eta$ versus $D$. The graph settings are the same as those in Fig.~\ref{fig:DvsL}. 
The curves indicate Eq.~(\ref{rho}) [panel (a)] and, for $\eta$ [panel (b)], the combination of Eqs.~(\ref{def:eta1}), (\ref{PI}), (\ref{Pac}) and (\ref{Pd'}). In panel (a), the $I$-dependence of the curves is slight. } \label{fig:DvsRho} \end{figure} For $\Tilde{\boldsymbol{V}}_h$, using the first term in Eq.~(\ref{Ex_V}) for the $O(h)$ approximation, we have \begin{equation} \Tilde{\boldsymbol{V}}_h\approx \frac{ g_{V} h W_0}{3 D} \frac{i\Omega}{i\Omega + 3W_0} \sum_{\mu} \left\{ (\boldsymbol{x}_{\mu+1}-\boldsymbol{x}_{\mu})\boldsymbol{\oldcdot}\boldsymbol{N} \right\} (\boldsymbol{x}_{\mu+1}-\boldsymbol{x}_{\mu}). \label{Vh} \end{equation} Substituting this and Eq.~(\ref{PI}) into Eq.~(\ref{def:P_d}), we obtain \begin{gather} P_d \approx \frac{9 g_{V}^2 \gamma h^2 \left| \boldsymbol{x}_{0} \right|^4 }{8D^2} \frac{ \Omega^2 W_0^2 }{\Omega^2 + (3W_0)^2} + P_d', \label{Pd} \\ P_d' \approx \frac{\pi g_{L}g_{O} \gamma W_0^2}{27D^2} \{ (\boldsymbol{x}_{0}-\boldsymbol{x}_{1}) \times \boldsymbol{x}^{0} \}_{z} \left\{ I_{0}(D) -I \right\}^2, \label{Pd'} \end{gather} where, in the calculation of $|\Tilde{\boldsymbol{V}}_{h}|^{2}$, we have used \begin{equation} \biggl| \sum_{\mu} \{ (\boldsymbol{x}_{\mu+1}-\boldsymbol{x}_{\mu})\boldsymbol{\oldcdot}\boldsymbol{N} \} (\boldsymbol{x}_{\mu+1}-\boldsymbol{x}_{\mu}) \biggr|^2 =\frac{3^4}{4}|\boldsymbol{x}_{0}|^4, \label{P_dh} \end{equation} which is obtained in terms of Eqs.~(\ref{App:vec}) and (\ref{App:Nxx}) in Appendix~\ref{App:AC-induced-quanta} by noting that the vector in $|\cdots|$ on the LHS is collinear with $\boldsymbol{N}$. Figure~\ref{fig:DvsPd}(a) shows graphs of $P_{d}$ with respect to $D$. We see that the curve is similar to that of $P_{h}$, because the first term in Eq.~(\ref{def:P_d}) is the dominant contribution. Figure~\ref{fig:DvsPd}(b) shows graphs of $P_{d}'$ with respect to $D$. The maximum and minimum of the curve correspond to the SR peak and the zero point where $I=I_{0}(D)$, respectively. 
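The two geometric sums quoted above, $\sum_{\mu}\{(\boldsymbol{x}_{\mu+1}-\boldsymbol{x}_{\mu})\boldsymbol{\oldcdot}\boldsymbol{N}\}^2 = 9|\boldsymbol{x}_{0}|^2/2$ and Eq.~(\ref{P_dh}), can be checked numerically under the assumption, suggested by the threefold symmetry of the potential, that the minima are related by rotations of $2\pi/3$; the concrete $\boldsymbol{x}_0$ and field direction $\boldsymbol{N}$ below are arbitrary test values, not parameters from the paper.

```python
# Numerical check of the geometric sums used in Eqs. (Pac) and (P_dh),
# assuming (from the threefold symmetry of the potential) that the minima
# satisfy x_mu = R(2*pi*mu/3) x_0 with indices mod 3.  x0 and the field
# direction N below are arbitrary test values.
import math

def rot(v, a):
    c, s = math.cos(a), math.sin(a)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

dot = lambda u, v: u[0] * v[0] + u[1] * v[1]

x0 = (0.7, -0.3)                              # arbitrary minimum position
xs = [rot(x0, 2.0 * math.pi * m / 3.0) for m in range(3)]
N = (math.cos(0.4), math.sin(0.4))            # arbitrary unit direction

d = [(xs[(m + 1) % 3][0] - xs[m][0],
      xs[(m + 1) % 3][1] - xs[m][1]) for m in range(3)]

S1 = sum(dot(dm, N) ** 2 for dm in d)         # should equal 9|x0|^2/2
S2x = sum(dot(dm, N) * dm[0] for dm in d)     # vector inside |...| of
S2y = sum(dot(dm, N) * dm[1] for dm in d)     # Eq. (P_dh); collinear with N
r2 = dot(x0, x0)
```

Both identities hold for any $\boldsymbol{x}_0$ and any unit $\boldsymbol{N}$, since the three difference vectors are equally spaced at $120^\circ$ with length $\sqrt{3}|\boldsymbol{x}_0|$.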
$P_d$ and $P_{d}'$ are quantities of order $O(h^2)$ and $O(h^4)$, respectively, and $P_{d}'$ is much smaller than $P_{d}$. Although Eq.~(\ref{Pd}) agrees well with the numerical result, the minimum point of Eq.~(\ref{Pd'}) differs somewhat from it. This deviation is believed to stem from the several approximations made, in particular, estimating the transition rate with the saddle-point approximation and neglecting the $I$-dependence of the curvatures (the Hessian matrix). \begin{figure}[t] \def5.5cm{5.3cm} \centering \begin{tabular}{ll} (a)&(b) \\ \includegraphics[height=5.5cm,keepaspectratio,clip] {fig11a.eps} & \includegraphics[height=5.5cm,keepaspectratio,clip] {fig11b.eps} \end{tabular} \caption{ (Color online) (a) $\rho$ and (b) $\eta$ versus $I$. The graph settings are the same as those in Fig.~\ref{fig:PvsL}. } \label{fig:IvsRho} \end{figure} From Eqs.~(\ref{Pac}) and (\ref{Pd}), we obtain $\rho$ and $\eta$ as \begin{equation} \rho = \eta + \frac{3 g_{V} \gamma \left|\boldsymbol{x}_{0}\right|^2 }{2D} W_0, \label{rho} \end{equation} where $\eta$ is determined from Eqs.~(\ref{PI}), (\ref{Pac}), and (\ref{Pd'}). Figure~\ref{fig:DvsRho} shows graphs of $\rho$ and $\eta$ with respect to $D$. The behavior around $D=0$ in Fig.~\ref{fig:DvsRho}(a), in which $\rho$ quickly drops from $\rho=1$, is due to a minor oscillation caused by the ac field around a potential minimum; this oscillation is irrelevant to the unidirectional rotation and must be excluded from consideration. $P_d$ is dominated by the energy dissipation of the translational mode, and adds a much larger contribution to the numerator of $\rho$ than $P_I$ does. In contrast to $\rho$, $\eta$ in Fig.~\ref{fig:DvsRho}(b) involves the characteristic points of SR and $I=I_{0}(D)$. Although $\eta$ is very small, we believe it can be made larger by improving the potential shape. Figure~\ref{fig:IvsRho} shows graphs of $\rho$ and $\eta$ with respect to $I$. 
$\rho$ and $\eta$ are positive for a finite range of $I$, although not all of this range is displayed. For small $|I|$, the analytical results agree relatively well with the numerical results except for their magnitudes. The deviation may be large depending on $D$ and the setting of the adjustable parameters. \section{\label{discuss} Discussion} \subsection{\label{d1} Relationship between $P_{I}$ and $L$} \begin{figure}[t] \def5.5cm{5.5cm} \centering \includegraphics[height=5.5cm,keepaspectratio,clip] {fig12.eps} \caption{ (Color online) Relationship between $P_{I}$ and $L$. The symbols and curves indicate the numerical and the theoretical results for $I=0$ (filled circles, thick solid curve), $0.001$ (filled triangles, thick dashed curve), $0.002$ (filled squares, thick dashed-dotted curve), $0.005$ (filled diamonds, thin solid curve), $0.01$ (open circles, dotted curve) and $0.02$ (open triangles, dashed-double-dotted curve) at $(a,b,c,d, h,\Omega, \phi) = (-0.1, 0.3, 0.15,-0.1,0.05,0.005, 0)$. } \label{fig:PIvsL} \end{figure} In the strongly dissipative system, in which inertia is neglected as in Eq.~(\ref{LEQ}), the MAM is proportional to the (mean) viscous torque, i.e., $\gamma L$ ($\gamma=1$), cf. terminal velocity in viscous media. From Eq.~(\ref{LwithI_0}), for $I_{0}(D) > I$, the viscous torque is an excess product of the applied ac field. Both $L$ and $P_{I}$ depend on the angular velocity, and the two quantities are expected to be connected by a simple relation. Here, in terms of these quantities, let us discuss a characteristic of the motor other than the efficiencies. Figure~\ref{fig:PIvsL} shows the relationship between $P_{I}$ and $L$, parameterized by $D$. We see that $L$ is a single-valued function of $P_{I}$. Furthermore, although $L$ is a nonlinear function of $P_{I}$ on the whole, we can approximate them as being proportional within the first quadrant. 
Indeed, the hypothetical expressions for $L$ in Eqs.~(\ref{LIdef:LI}) and (\ref{L_expect}) and those for $P_{I}$ in Eqs.~(\ref{Ex_PI|I}) and (\ref{Ex_PI|h}) are arranged so as to be proportional. Consequently, from Eqs.~(\ref{LwithI_0}) and (\ref{PI}), we have \begin{equation} L= \frac{3 g_{L} }{2 g_{O} I} \{ (\boldsymbol{x}_{0}-\boldsymbol{x}_{1}) \times \boldsymbol{x}^{0} \}_{z} P_I. \label{relation:LandP} \end{equation} From $ P_I = I/(2\pi)\,\overline{\Dot\theta(t)}$, we can regard $ \left\{ (\boldsymbol{x}_{0}-\boldsymbol{x}_{1}) \times \boldsymbol{x}^{0} \right\}_{z} $ as a moment of inertia. For synthetic or natural molecular motor systems, if it is possible to experimentally measure the MAM (viscous torque) and the angular velocity $\overline{\Dot\theta(t)}$ for a sufficiently wide range of temperatures under conditions of constant load, we may obtain results comparable to the graph in Fig.~\ref{fig:PIvsL}, although the obtained result may not necessarily obey Eq.~(\ref{relation:LandP}). Regarding the measurement of the torque of biological molecular motors, a method based on the fluctuation theorem\cite{PhysRevLett.71.2401,PhysRevE.61.2361} and the Jarzynski equality\cite{PhysRevLett.78.2690} has been proposed in Ref.~\citen{PhysRevLett.104.218103}. \subsection{\label{DvsOmg} $D^{-3/2}$ scaling of SR peaks as a characteristic of 2D ratchet systems} \begin{figure}[t] \def5.5cm{5.1cm} \centering \begin{tabular}{cc} $S=1.25$ & $S=1.50$ \\ \includegraphics[height=5.5cm,keepaspectratio,clip] {fig13a.eps} & \includegraphics[height=5.5cm,keepaspectratio,clip] {fig13b.eps} \\ $S=1.75$& $S=2.00$ \\ \includegraphics[height=5.5cm,keepaspectratio,clip] {fig13c.eps} & \includegraphics[height=5.5cm,keepaspectratio,clip] {fig13d.eps} \end{tabular} \caption{ (Color online) Contour graphs of $V_0(\boldsymbol{x})$ parameterized as $(a,b,c,d) = (-0.1S, 0.3S, 0.15S^2,-0.1S)$ with $S\in \{1.25, 1.50, 1.75, 2.00\}$. For $S=1.00$, see Fig.~\ref{fig:ContAndPol}(a). 
The settings are the same as those in Fig.~\ref{fig:ContAndPol}(a). } \label{fig:4Cont} \end{figure} \begin{figure}[t] \def5.5cm{5.5cm} \centering \begin{tabular}{ll} (a)&(b) \\ \includegraphics[height=5.5cm,keepaspectratio,clip] {fig14a.eps} & \includegraphics[height=5.5cm,keepaspectratio,clip] {fig14b.eps} \end{tabular} \caption{ (Color online) (a) Scaled MAM and (b) mean angular velocity $\omega/\Omega$ versus $D$ for a series of potentials parameterized by $S$ as $(a,b,c,d) = (-0.1S, 0.3S, 0.15S^2,-0.1S)$. The symbols indicate the results of numerical simulations for $S=1.00$ (diamonds), $1.25$ (downward-pointing triangles), $1.50$ (squares), $1.75$ (circles), and $2.00$ (upward-pointing triangles). The other parameters are $(I, h,\Omega, \phi) = (0,0.05,0.0025,0)$. The axes of the main and inset plots are on log--log and linear scales, respectively. The additional curves represent (a) $L = C_{L} D^{-\alpha}$ with $(C_{L},\alpha)= (0.00136, 1)$ (dashed curve), $(0.00022,1.5)$ (solid curve), and $(0.000062, 2)$ (dotted curve) and (b) $\omega/\Omega = C_{\omega} D^{-\alpha}$ with $(C_{\omega},\alpha)= (0.00136, 1)$ (dashed curve), $(0.00026,1.5)$ (solid curve), and $(0.000066, 2)$ (dotted curve). } \label{fig:Dvsomega} \end{figure} The factor of $D^{-1/2}$ in the expression for $I_0(D)$ in Eq.~(\ref{TorqBalance}) stems from the current $J_{\sigma}^{\mu}(t)$, which is caused by the deformation of the state boundaries. The factor can be regarded as a characteristic of 2D ratchet systems driven by external fields, because it arises from the first term in Eq.~(\ref{def:LJ}), which involves the basic property in two dimensions that the driving field will not always lie along the rotational direction (or the potential valley), i.e., $\boldsymbol{N}\boldsymbol{\oldcdot}( \boldsymbol{n}_{\mu+1}^{\mu+1}- \boldsymbol{n}_{\mu}^{\mu})$. 
(We exclude the possibility of cases with such tight coupling that the directions of the driving force and the motion are always parallel, which may be more appropriately described as 1D ratchet systems.) Here we present evidence of the $D^{-1/2}$ dependence with numerical simulations for the MAM and the mean angular velocity $\omega \equiv \overline{\Dot{\theta}} = 2\pi P_I/I$ at $I=0$. Consider a trace of SR peaks in a series of potentials parameterized as $(a,b,c,d) = (-0.1S, 0.3S, 0.15S^2,-0.1S)$ by $S\in \{1.00, 1.25, 1.50, 1.75, 2.00\}$, which are shown in Figs.~\ref{fig:ContAndPol}(a) and \ref{fig:4Cont}. This parameterization makes the contour plots similar (compare the shapes of the potential valleys) but controls the potential differences, with $\Delta V \approx 0.209$ ($S=1.00$), $0.285$ ($1.25$), $0.358$ ($1.50$), $0.423$ ($1.75$), and $0.479$ ($2.00$). As mentioned at the end of Sect.~\ref{sec:MAM}, the SR peak lies near the point of $D$ that satisfies $\Omega = 3W_0$ for $I=0$. The peak position, $D_{\mathrm{SR}}$, increases with $\Delta V$, i.e., $D_{\mathrm{SR}} \approx C_{D}\Delta V$, as follows from taking the logarithm of the condition $\Omega = 3W_0$ together with Eq.~(\ref{W_sig_mu0}), where $C_{D}$ may also depend on $\Delta V$ through the curvature of the potential. From Eq.~(\ref{LwithI_0}) or Eq.~(\ref{Lh_final}), the peak height depends on $D$ as $L \approx C_{L} D_{\mathrm{SR}}^{-3/2}$, where $C_{L}$ involves geometric information about the potential, i.e., $C_{L}\propto \left\{ (\boldsymbol{x}_{0}-\boldsymbol{x}_{1}) \times\boldsymbol{x}^{0} \right\}_{z} \boldsymbol{x}_{0}\boldsymbol{\oldcdot} \boldsymbol{n}_{0}^{0} /\sqrt{H_n}$. Similarly, from Eq.~(\ref{App:PI_h_1}), the SR peak of $\omega$ has the form $\omega/\Omega \approx C_{\omega}D_{\mathrm{SR}}^{-3/2}$ with $C_{\omega}\propto \boldsymbol{x}_{0}\boldsymbol{\oldcdot} \boldsymbol{n}_{0}^{0} /\sqrt{H_n}$. 
One can thus see the $D_{\mathrm{SR}}^{-3/2}$ scaling for the peaks in the plot of $L$ or $\omega$ versus $D$ as a manifestation of the factor of $D^{-1/2}$ in the expression for $I_0(D)$, within a range of $S$ such that the factor $C_{L}$ (or $C_{\omega}$) does not significantly change. Figure~\ref{fig:Dvsomega} shows $L/h^2$ and $\omega/\Omega$ as a function of $D$ for the series of potentials. In the additional curves (for $C_L D^{-\alpha}$ and $C_{\omega} D^{-\alpha}$, $\alpha \in \{1,1.5,2\}$), for which the values of $C_L$ and $C_{\omega}$ were determined by eye, it can be seen that the $D^{-3/2}$ curve is the closest to a tangent to the envelopes of the peaks. This result is consistent with the above argument. Deviations here between the curve and envelope may be caused by the dependence of $C_L$ or $C_{\omega}$ on the details of the shape of the potential. Also note that this scaling does not hold when $\Delta V$ is so small that SR is replaced by another behavior. \section{\label{summary} Summary} An artificial molecular rotary system driven by linearly polarized ac fields, which can generate a unidirectional rotation under a load, was studied using the three-tooth Brownian rotary ratchet model. The dynamics are described by the Langevin equation for a particle in the 2D three-tooth ratchet potential with threefold symmetry. To determine how large a load the ac-induced torque can perform positive work against (the coercive load torque), and to estimate the efficiency of the power conversion from the ac-field input to the output under the load, we have developed an approach that treats these questions with coarse-grained variables. As part of our coarse-grained kinetic description, we have proposed a master equation extended to incorporate the dynamical effects of oscillating boundaries between states. Here, the oscillation is assumed to be sufficiently small and slow. 
In addition to the normal current over the potential barrier under thermal activation, the master equation involves a current induced by moving boundaries (the ridge curves), which is applied to explain the circulation induced by the driving force. This also enables us to estimate expectation values for the time derivatives of physical quantities. Using this, we have obtained approximate expressions for the MAM and the powers composing the energy balance equation. From the MAM result, we have obtained the coercive torque against the torque induced by the ac driving field with $I_{0}(D)$ given in Eq.~(\ref{TorqBalance}). The factor $D^{-1/2}$ in $I_{0}(D)$ is associated with the feature that the driving field is not always aligned with the trajectory of the motion, and can be regarded as a characteristic of 2D ratchet systems possessing such driving forces. The coercive torque is also relevant to the maximum output power for the load as $P_I \propto W_0 \{I_{0}(D)\}^2/D$ at $I=I_{0}(D)/2$. We have also suggested determining the linear relationship between the MAM and the angular velocity as another characterization of the molecular motor. We have characterized the energetics with the two types of output/input power ratio: the numerator of $\rho$ is the output power of the slowly varying component of the motion, that of $\eta$ is the output power of the rotational mode, and their common denominator is the input power of the driving field. Because only the rotational mode produces useful work for the load, $\eta$ measures the efficiency of the force conversion into torque. In the present design of the potential, the linear response part (the translational mode) dominated the slowly varying components, and provided the main contribution to the energy dissipation rate for the viscous resistance. Accordingly, $\rho$ was dominated by the energy dissipation, and the magnitude of $\eta$ was small. 
However, our main purpose in this paper was not to demonstrate models with larger $\eta$, but to construct an analytical framework for the performance estimation of 2D ratchet models. In fact, our approach has incorporated several 2D properties into the kinetic description in Sect.~\ref{sec:MAM} and the EBE [Eq.~(\ref{EnergyBalanceEq}) and, especially, $Q_T$ in Eq.~(\ref{def:Qt})]. Regarding larger efficiency, we consider that the ratio between the translational and rotational modes depends on the potential structure, and that we can increase the relative magnitude of the rotational mode by making the best use of the ratchet effect. Designing models that demonstrate an efficient force conversion may be an underlying theme of research on molecular motor systems. A possible approach is to make the potential shape much harder for motions other than rotational motion, because the presented potential may be too soft for radial motion, and to improve the potential design to increase $I_{0}(D)$ by optimizing the relevant geometrical factors. Although we have not deeply investigated how the fluctuation $Q_T$ influences the efficiency, this may also provide important information for the design, especially if an analytic expression for $Q_T$ is obtained. These remain problems for future study.
\section{Posterior means of the patient fixed effects in the Australian GP data} \label{app:estimation result patient} \begin{table}[ht!] \centering \begin{tabular}{|c|rrrrrrrr|} \hline \multirow{2}{*}{Variable} & \multicolumn{8}{c|}{Product} \\ \cline{2-9} & \multicolumn{1}{c}{1} & \multicolumn{1}{c}{2} & \multicolumn{1}{c}{3} & \multicolumn{1}{c}{4} & \multicolumn{1}{c}{5} & \multicolumn{1}{c}{6} & \multicolumn{1}{c}{7} & \multicolumn{1}{c|}{8} \\ \hline Intercept & \cellcolor{gray!25}1.4026 & \cellcolor{gray!25}-1.2448 & -0.3887 & \cellcolor{gray!25}1.1099 & \cellcolor{gray!25}-2.4041 & -0.1309 & \cellcolor{gray!25}-1.7113 & 0.6777 \\ dagegp1 & \cellcolor{gray!25}0.1974 & -0.1334 & 0.0110 & 0.0752 & \cellcolor{gray!25}-0.5025 & -0.0202 & \cellcolor{gray!25}-0.2892 & 0.0644 \\ dagegp3 & -0.1309 & 0.0614 & -0.0620 & -0.0003 & \cellcolor{gray!25}0.3161 & -0.0032 & 0.0857 & 0.0109 \\ dagegp4 & \cellcolor{gray!25}-0.3881 & \cellcolor{gray!25}0.1814 & \cellcolor{gray!25}-0.2384 & -0.1042 & \cellcolor{gray!25}0.8045 & -0.0268 & \cellcolor{gray!25}0.3775 & 0.0017 \\ drfe2 & -0.0430 & 0.0008 & -0.0372 & -0.0140 & 0.0434 & -0.0189 & -0.0471 & 0.0070 \\ drfe3 & \cellcolor{gray!25}-0.2479 & -0.0529 & 0.0285 & 0.0762 & 0.0914 & 0.1058 & -0.0275 & \cellcolor{gray!25}0.1353 \\ drfe4 & -0.0199 & 0.1012 & -0.0094 & 0.0513 & 0.0666 & 0.0723 & -0.0719 & 0.0054 \\ dbleed1 & 0.0500 & \cellcolor{gray!25}-0.1327 & 0.0604 & -0.0887 & \cellcolor{gray!25}0.3982 & -0.0249 & \cellcolor{gray!25}-0.5218 & \cellcolor{gray!25}-0.2285 \\ dbleed2 & 0.0163 & -0.0748 & 0.0217 & -0.0227 & 0.0068 & 0.0397 & -0.0821 & -0.0248 \\ dbp1 & -0.0591 & 0.0000 & -0.0303 & 0.0288 & 0.0034 & 0.0306 & -0.0235 & \cellcolor{gray!25}-0.1427 \\ dbp3 & \cellcolor{gray!25}-0.9902 & \cellcolor{gray!25}0.2436 & 0.0060 & 0.0128 & \cellcolor{gray!25}0.2350 & \cellcolor{gray!25}-0.2954 & \cellcolor{gray!25}0.2541 & 0.0355 \\ drel1 & 0.0426 & -0.0107 & -0.0933 & -0.0022 & \cellcolor{gray!25}0.1565 & 0.0293 & 0.0290 & 
\cellcolor{gray!25}-0.3945 \\ drel3 & -0.0144 & 0.0272 & -0.0206 & 0.0004 & -0.0261 & 0.0086 & -0.0185 & 0.0190 \\ drel4 & -0.0916 & 0.0869 & 0.0691 & -0.0006 & -0.0094 & 0.0285 & 0.0033 & \cellcolor{gray!25}-0.2036 \\ dchild1 & \cellcolor{gray!25}-1.7332 & \cellcolor{gray!25}1.3001 & -0.0078 & -0.0903 & \cellcolor{gray!25}0.9163 & \cellcolor{gray!25}-0.9867 & \cellcolor{gray!25}0.5286 & -0.0371 \\ dchild2 & -0.0462 & 0.0350 & -0.0626 & -0.0410 & \cellcolor{gray!25}0.9787 & -0.0504 & \cellcolor{gray!25}0.5930 & -0.0539 \\ dfut1 & \cellcolor{gray!25}-0.3189 & -0.0043 & \cellcolor{gray!25}0.1956 & 0.0249 & \cellcolor{gray!25}0.6284 & -0.0775 & \cellcolor{gray!25}0.2036 & -0.1144 \\ dfut2 & \cellcolor{gray!25}-0.2846 & \cellcolor{gray!25}0.1912 & \cellcolor{gray!25}-0.2151 & \cellcolor{gray!25}-0.1990 & -0.0067 & 0.0366 & -0.1458 & 0.0115 \\ dfut4 & \cellcolor{gray!25}-0.3581 & 0.0496 & 0.0475 & 0.0106 & \cellcolor{gray!25}0.2882 & 0.0059 & 0.0128 & 0.0312 \\ dpil1 & \cellcolor{gray!25}0.4713 & \cellcolor{gray!25}0.3618 & -0.0934 & \cellcolor{gray!25}-0.2596 & -0.0112 & -0.0316 & -0.0441 & -0.0278 \\ dpil3 & \cellcolor{gray!25}-0.1822 & \cellcolor{gray!25}-0.2448 & 0.0300 & 0.0612 & 0.0552 & 0.0351 & 0.0469 & 0.0816 \\ dwt1 & 0.0815 & 0.0368 & \cellcolor{gray!25}-0.2582 & -0.0616 & 0.0318 & 0.0640 & -0.0127 & 0.0833 \\ dcomp2 & \cellcolor{gray!25}-0.3389 & \cellcolor{gray!25}-0.1969 & \cellcolor{gray!25}0.2133 & 0.0628 & \cellcolor{gray!25}0.2300 & -0.0034 & \cellcolor{gray!25}0.3100 & -0.0164 \\ dpay2 & -0.0245 & -0.0540 & -0.0202 & -0.0029 & 0.0080 & 0.0603 & 0.0072 & 0.0075 \\ dpay3 & 0.0321 & -0.0631 & -0.0683 & -0.0167 & -0.0365 & \cellcolor{gray!25}0.2909 & -0.0173 & -0.0046 \\ dsmk2 & \cellcolor{gray!25}-0.2666 & -0.0118 & -0.0266 & -0.0129 & -0.0038 & 0.0422 & 0.0867 & 0.0308 \\ dsmk3 & \cellcolor{gray!25}-0.5183 & -0.0143 & 0.0125 & 0.0248 & 0.0144 & -0.0531 & 0.0461 & 0.0332 \\ \hline \end{tabular} \caption{Regression coefficient posterior mean estimates for 
the attributes of a female patient for various products in the contraceptive discussion data. Parameters whose 90\% credible interval does not include 0 are shown in grey.} \label{table:estimation result for women attributes} \end{table} \section{Posterior means of the GP fixed effects in the Australian GP data} \label{app:estimation result GP} \begin{table}[ht!] \centering \begin{tabular}{|c|rrrrrrrr|} \hline \multirow{2}{*}{Variable} & \multicolumn{8}{c|}{Product} \\ \cline{2-9} & \multicolumn{1}{c}{1} & \multicolumn{1}{c}{2} & \multicolumn{1}{c}{3} & \multicolumn{1}{c}{4} & \multicolumn{1}{c}{5} & \multicolumn{1}{c}{6} & \multicolumn{1}{c}{7} & \multicolumn{1}{c|}{8} \\ \hline Female & -0.0643 & 0.0254 & \cellcolor{gray!25}-0.4392 & 0.0716 & 0.0375 & \cellcolor{gray!25}0.6089 & \cellcolor{gray!25}-0.4569 & -0.0248 \\ \hline Fellow & -0.0182 & -0.0990 & 0.0682 & 0.0375 & 0.2107 & 0.1086 & -0.1452 & -0.0091 \\ \hline \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Family\\ planning\end{tabular}} & \multirow{2}{*}{-0.0004} & \multirow{2}{*}{-0.0160} & \multirow{2}{*}{-0.1220} & \multirow{2}{*}{0.2292} & \multirow{2}{*}{0.0446} & \multirow{2}{*}{0.0360} & \multirow{2}{*}{-0.0321} & \multirow{2}{*}{-0.0133} \\ & & & & & & & & \\ \hline \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Bulk-\\ bill\end{tabular}} & \multirow{2}{*}{-0.0199} & \multirow{2}{*}{-0.0372} & \multirow{2}{*}{0.0425} & \multirow{2}{*}{-0.0389} & \multirow{2}{*}{-0.0594} & \multirow{2}{*}{ 0.0035} & \multirow{2}{*}{0.0503} & \multirow{2}{*}{0.0060} \\ & & & & & & & & \\ \hline Age & 0.0087 & 0.0079 & \cellcolor{gray!25}0.0205 & -0.0064 & \cellcolor{gray!25}0.0176 & -0.0043 & 0.0090 & -0.0101 \\ \hline \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Australian\\ graduate\end{tabular}} & \multirow{2}{*}{0.0835} & \multirow{2}{*}{0.0569} & \multirow{2}{*}{-0.0080} & \multirow{2}{*}{0.3457} & \multirow{2}{*}{0.0989} & \multirow{2}{*}{-0.2348} & \multirow{2}{*}{-0.0933} & \cellcolor{gray!25} \\ & & & & & & & & 
\multirow{-2}{*}{\cellcolor{gray!25}0.5586} \\ \hline Urban & -0.0947 & 0.0072 & 0.0693 & -0.0073 & 0.0112 & 0.0040 & -0.0285 & 0.1814 \\ \hline \end{tabular} \caption{Regression coefficient posterior mean estimates for the GP fixed effects for various products in the contraceptive discussion data. Parameters whose 90\% credible interval does not include 0 are shown in grey.} \label{table:estimation result for doctors fixed effects} \end{table} \newpage \section{Posterior means of the parameters in \texorpdfstring{$\bm{R}_{\boldsymbol \epsilon}$}{bReps}} \label{app:correlation} \begin{table}[ht!] \centering \begin{tabular}{|c|c|c|} \hline Parameter & Posterior Mean & 95\% Credible Interval \\ \hline $r_{21}$ & -0.1157 & (-0.2058, -0.0251) \\ $r_{31}$ & -0.0502 & (-0.1446, { }0.0451) \\ $r_{32}$ & { }0.1617 & ({ }0.0707, { }0.2506) \\ $r_{41}$ & -0.0481 & (-0.1512, { }0.0532) \\ $r_{42}$ & { }0.0454 & (-0.0578, { }0.1512) \\ $r_{43}$ & { }0.5891 & ({ }0.5151, { }0.6577) \\ $r_{51}$ & -0.2354 & (-0.3223, -0.1467) \\ $r_{52}$ & -0.0255 & (-0.1203, { }0.0695) \\ $r_{53}$ & { }0.1759 & ({ }0.0822, { }0.2657) \\ $r_{54}$ & { }0.2424 & ({ }0.1431, { }0.3398) \\ $r_{61}$ & { }0.4721 & ({ }0.3930, { }0.5472) \\ $r_{62}$ & -0.2702 & (-0.3572, -0.1810) \\ $r_{63}$ & { }0.0144 & (-0.0753, { }0.1038) \\ $r_{64}$ & { }0.0358 & (-0.0646, { }0.1367) \\ $r_{65}$ & -0.0695 & (-0.1629, { }0.0266) \\ $r_{71}$ & -0.2054 & (-0.3068, -0.1028) \\ $r_{72}$ & -0.0540 & (-0.1559, { }0.0499) \\ $r_{73}$ & { }0.1849 & ({ }0.0810, { }0.2877) \\ $r_{74}$ & { }0.1877 & ({ }0.0747, { }0.2973) \\ $r_{75}$ & { }0.5185 & ({ }0.4288, { }0.6016) \\ $r_{76}$ & -0.0051 & (-0.1138, { }0.1014) \\ $r_{81}$ & -0.0196 & (-0.1147, { }0.0749) \\ $r_{82}$ & -0.0491 & (-0.1412, { }0.0440) \\ $r_{83}$ & { }0.0205 & (-0.0704, { }0.1091) \\ $r_{84}$ & { }0.1044 & ({ }0.0059, { }0.2009) \\ $r_{85}$ & -0.0788 & (-0.1701, { }0.0131) \\ $r_{86}$ & { }0.1841 & ({ }0.0930, { }0.2753) \\ $r_{87}$ & { }0.2046 & ({ }0.1036, 
{ }0.3040) \\ \hline \end{tabular} \caption{Posterior mean estimates of the elements in the correlation matrix $\bm{R}_{\boldsymbol \epsilon}$ of the error terms with their 95\% credible intervals.} \label{table:correlation} \end{table} \newpage \section{Posterior means of the parameters in \texorpdfstring{$\boldsymbol \Sigma_{\boldsymbol \alpha}$}{Sigba}} \label{app:covariance} \begin{table}[ht!] \centering \begin{tabular}{|c|c|c|} \hline Parameter & Posterior Mean & 95\% Credible Interval \\ \hline $\sigma^2_{\alpha_1}$ & { }0.5147 & ({ }0.3630, 0.7095) \\ $\sigma^2_{\alpha_2}$ & { }0.6492 & ({ }0.4745, 0.8690) \\ $\sigma^2_{\alpha_3}$ & { }1.3021 & ({ }0.9554, 1.7634) \\ $\sigma^2_{\alpha_4}$ & { }1.5329 & ({ }1.0905, 2.1402) \\ $\sigma^2_{\alpha_5}$ & { }1.0586 & ({ }0.7785, 1.4112) \\ $\sigma^2_{\alpha_6}$ & { }1.4533 & ({ }1.0695, 1.9627) \\ $\sigma^2_{\alpha_7}$ & { }1.9449 & ({ }1.3741, 2.7053) \\ $\sigma^2_{\alpha_8}$ & { }1.2432 & ({ }0.8962, 1.6945) \\ $\sigma_{(\alpha_2, \alpha_1)}$ & { }0.3009 & ({ }0.1813, 0.4417) \\ $\sigma_{(\alpha_3, \alpha_1)}$ & { }0.2733 & ({ }0.1148, 0.4542) \\ $\sigma_{(\alpha_3, \alpha_2)}$ & { }0.3026 & ({ }0.1314, 0.5004) \\ $\sigma_{(\alpha_4, \alpha_1)}$ & { }0.2481 & ({ }0.0731, 0.4489) \\ $\sigma_{(\alpha_4, \alpha_2)}$ & { }0.2659 & ({ }0.0681, 0.4870) \\ $\sigma_{(\alpha_4, \alpha_3)}$ & { }0.2525 & (-0.0200, 0.5434) \\ $\sigma_{(\alpha_5, \alpha_1)}$ & { }0.0764 & (-0.0677, 0.2272) \\ $\sigma_{(\alpha_5, \alpha_2)}$ & { }0.1866 & ({ }0.0322, 0.3566) \\ $\sigma_{(\alpha_5, \alpha_3)}$ & -0.0177 & (-0.2376, 0.2022) \\ $\sigma_{(\alpha_5, \alpha_4)}$ & { }0.5141 & ({ }0.2705, 0.8058) \\ $\sigma_{(\alpha_6, \alpha_1)}$ & { }0.2012 & ({ }0.0305, 0.3969) \\ $\sigma_{(\alpha_6, \alpha_2)}$ & { }0.2195 & ({ }0.0360, 0.4232) \\ $\sigma_{(\alpha_6, \alpha_3)}$ & { }0.1101 & (-0.1427, 0.3672) \\ $\sigma_{(\alpha_6, \alpha_4)}$ & { }0.2469 & (-0.0342, 0.5528) \\ $\sigma_{(\alpha_6, \alpha_5)}$ & { }0.2856 & ({ }0.0539, 0.5430) 
\\ $\sigma_{(\alpha_7, \alpha_1)}$ & { }0.0611 & (-0.1420, 0.2722) \\ $\sigma_{(\alpha_7, \alpha_2)}$ & { }0.2400 & ({ }0.0277, 0.4768) \\ $\sigma_{(\alpha_7, \alpha_3)}$ & { }0.2573 & (-0.0378, 0.5871) \\ $\sigma_{(\alpha_7, \alpha_4)}$ & -0.2164 & (-0.5979, 0.1113) \\ $\sigma_{(\alpha_7, \alpha_5)}$ & { }0.0148 & (-0.2596, 0.2956) \\ $\sigma_{(\alpha_7, \alpha_6)}$ & { }0.2915 & (-0.0194, 0.6297) \\ $\sigma_{(\alpha_8, \alpha_1)}$ & { }0.2570 & ({ }0.0982, 0.4404) \\ $\sigma_{(\alpha_8, \alpha_2)}$ & { }0.3344 & ({ }0.1562, 0.5423) \\ $\sigma_{(\alpha_8, \alpha_3)}$ & { }0.0750 & (-0.1594, 0.3157) \\ $\sigma_{(\alpha_8, \alpha_4)}$ & { }0.2315 & (-0.0290, 0.5223) \\ $\sigma_{(\alpha_8, \alpha_5)}$ & { }0.2537 & ({ }0.0449, 0.4889) \\ $\sigma_{(\alpha_8, \alpha_6)}$ & { }0.4904 & ({ }0.2432, 0.7802) \\ $\sigma_{(\alpha_8, \alpha_7)}$ & { }0.4139 & ({ }0.1235, 0.7563) \\ \hline \end{tabular} \caption{Posterior mean estimates of the elements in the covariance matrix $\boldsymbol \Sigma_{\boldsymbol \alpha}$ of the random effects with their 95\% credible intervals. The variance parameter in $\boldsymbol \Sigma_{\boldsymbol \alpha}$ is denoted by $\sigma^2_{\alpha_i}$ while the $(i,j)$-th covariance parameter is given by $\sigma_{(\alpha_i, \alpha_j)}$.} \label{table:covariance} \end{table} \section{Introduction} \label{sec:introduction} Bayesian inference for the multivariate probit (MVP) model is usually performed using the data augmentation representation of \cite{chib1998analysis}, whereby the latent variables indicating the observed outcomes are normally distributed. For unique identification of the regression parameters, the covariance matrix of these latent normal random variates is assumed to be a correlation matrix $\bm{R}_{\boldsymbol \epsilon}$. 
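To make the unit-diagonal and positive-definiteness constraints on $\bm{R}_{\boldsymbol \epsilon}$ concrete, the following is a hedged sketch of one standard construction of a valid correlation matrix from unconstrained parameters, via a lower-triangular factor with unit-norm rows; this illustrates the constraint set and is not necessarily the exact parameterisation adopted later in the paper.

```python
# Hedged sketch: map unconstrained reals to a valid correlation matrix by
# filling a lower-triangular factor and normalising each row to unit
# length, so R = L L^T has unit diagonal and is positive semi-definite by
# construction.  (One standard device, not necessarily this paper's.)
import numpy as np

def corr_from_unconstrained(theta, p):
    """Map p*(p-1)/2 unconstrained reals to a p x p correlation matrix."""
    L = np.eye(p)
    L[np.tril_indices(p, k=-1)] = theta
    L /= np.linalg.norm(L, axis=1, keepdims=True)  # unit-norm rows
    return L @ L.T

R = corr_from_unconstrained(np.array([0.5, -1.2, 0.3]), p=3)
```

Because the map is unconstrained in its input, gradient-based samplers such as HMC can explore it freely, which is the motivation behind reparameterising $\bm{R}_{\boldsymbol \epsilon}$.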
However, Monte Carlo sampling for $\bm{R}_{\boldsymbol \epsilon}$ in a Bayesian context is difficult due to the restrictions on the diagonal entries and the requirement that the matrix $\bm{R}_{\boldsymbol \epsilon}$ must be positive definite. This article presents three contributions: two methodological and one related to the subject matter. The first methodological contribution provides an improved method for sampling the potentially high-dimensional correlation matrix $\bm{R}_{\boldsymbol \epsilon}$ within a Markov chain Monte Carlo (MCMC) algorithm. In order to circumvent the positive definiteness restriction imposed on a correlation matrix, we adopt the reparameterisation strategy of \cite{smith2013bayesian}, which re-expresses $\bm{R}_{\boldsymbol \epsilon}$ as an unconstrained Cholesky factor $\bm{\mathcal{L}}_{\boldsymbol \epsilon}$. This maps the manifold space of a correlation matrix to a Euclidean space, which improves posterior simulation while keeping the number of unknown parameters the same. A prior distribution is then specified on $\bm{\mathcal{L}}_{\boldsymbol \epsilon}$ such that the implied marginal densities of the correlation coefficients are uniform on $(-1,1)$. We employ the Hamiltonian Monte Carlo (HMC) algorithm \citep{neal2011mcmc} to sample the high-dimensional $\bm{\mathcal{L}}_{\boldsymbol \epsilon}$ efficiently, thereby avoiding the slow exploration of parameter space by random walk updates as in \cite{smith2013bayesian}. The second methodological contribution is to introduce antithetic sampling, based on the work of \cite{hammersley1956new}, into the Metropolis-Hastings (MH) literature. In order to implement this idea, we specify the proposal distribution for the parameter update as a deterministic function. The generated samples are then super-efficient, in the sense that the variance of the resulting Monte Carlo estimates is reduced relative to the same estimates constructed from uncorrelated samples. 
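The flavour of such a deterministic antithetic update can be illustrated with a minimal sketch in the spirit of Adler's over-relaxation taken to its deterministic limit (this is an illustration of the general idea, not the exact algorithm proposed in this paper): in a bivariate Gaussian Gibbs sampler, one coordinate is updated by reflecting it about its conditional mean, $x_2' = 2\mu_{2|1} - x_2$, which leaves the conditional distribution invariant and induces negative lag-one autocorrelation.

```python
# Minimal sketch (assumed illustrative example, not the paper's sampler):
# Gibbs sampling of a bivariate N(0, [[1, r], [r, 1]]) where x1 is drawn
# stochastically from its full conditional and x2 is updated by the
# deterministic reflection x2' = 2 * r * x1 - x2 about its conditional
# mean.  The reflection preserves the conditional N(r*x1, 1 - r^2), and
# the stochastic x1 step keeps the chain ergodic.
import math
import random

random.seed(0)
r = 0.6                      # target correlation
s = math.sqrt(1 - r * r)     # conditional standard deviation
x1, x2 = 0.0, 0.0
draws = []
for _ in range(20000):
    x1 = r * x2 + s * random.gauss(0.0, 1.0)  # stochastic Gibbs update
    x2 = 2.0 * r * x1 - x2                    # deterministic reflection
    draws.append(x2)

n = len(draws)
mean2 = sum(draws) / n                        # should be near 0
var2 = sum(v * v for v in draws) / n          # should be near 1
# Composing the two steps gives x2' = (2 r^2 - 1) x2 + noise, an AR(1)
# with coefficient 2 * 0.36 - 1 = -0.28: successive draws are antithetic.
acf1 = sum(a * b for a, b in zip(draws, draws[1:])) / ((n - 1) * var2)
```

The negative lag-one autocorrelation is what drives the variance reduction of ergodic averages relative to independent draws.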
Although the chain update proposal is deterministic, the convergence properties are not compromised when this is embedded within a larger system of MCMC sampling. Our proposed methodology is motivated by the over-relaxation algorithm \citep{adler1981over, barone1990improving}, and is similar to the idea built within the framework of HMC in \cite{pakman2014exact}. However, our proposed sampler is different from these methods in two main aspects. First, there is no randomness in the proposal distribution for parameter updates in our method, whereas theirs still retain a certain degree of stochasticity. Second, we introduce perfect negative correlation between successive MCMC samples via the deterministic proposal, while they suggest partial or zero dependence between the samples. Results from our real data application document an improvement of up to sixteen times in the mixing behaviour of the Markov chain, substantially lowering the autocorrelation between the iterates. The computing time of the algorithm is also marginally reduced due to the deterministic sampling. Our methodological development is motivated by the staged stated preference panel data collection described in \cite{fiebig2017consideration}, which is used to study the decision-making of Australian general practitioners (GPs) about female contraceptive products. Here, the authors used the data from the third and final stage, whereas we explore outcomes from the second stage. This second stage relates to the question of which particular contraceptive products GPs would discuss with a female patient, defined by a vignette that is part of the experimental design. Separate univariate analyses on each product would ignore possible complex dependence structures that are useful in exploring which particular bundles of products are discussed with patients.
This is important here because in any correlated choice problem there may be multiple close substitutes, which makes joint rather than marginal probabilities more relevant. Therefore, we model the GPs' choices by an MVP model. Inspection of the resulting graphical model describing this interaction between products lends support to the suitability of a multivariate approach. By using the MVP model, we are able to compute the joint probability of specific product bundles being discussed with a patient. Posterior estimation of this probability, based on a patient with certain socio-economic and clinical characteristics, reveals differing views among the GPs in the sample on the suitability of long acting contraceptive choices. This variability is known as medical practice variation in the health industry, whereby the decision making of GPs is influenced by both their personal characteristics such as gender, age and qualifications, as well as other unobservables that we model as random effects. The rest of the paper is organised as follows. Section~\ref{sec:multivariate} describes the MVP model with random effects and reviews previous research associated with sampling $\bm{R}_{\boldsymbol \epsilon}$. Section~\ref{sec:efficient} presents our proposed methodology of sampling $\bm{R}_{\boldsymbol \epsilon}$, and Section~\ref{sec:deterministic} outlines the antithetic sampling technique whose efficiency is illustrated via simulation studies in Section~\ref{sec:simulation}. Section~\ref{sec:application} provides our analysis of the discussion preference data of contraceptive products by Australian GPs, and Section~\ref{sec:conclusion} concludes. Appendices~\ref{app:sampling scheme}--\ref{app:covariance} provide further details on the contraceptive product data analysis. \section{Multivariate probit model with random effects} \label{sec:multivariate} The MVP model has been used extensively to model correlated binary data \citep{gibbons1998health, buchmueller2013preference}. 
Let $\bm{y}_{it}=(y_{1,it}, \dotsc, y_{D,it})^\top$ be a vector of $D$ correlated binary outcomes for individual $i=1,\dotsc,P$ at time period $t$, for $t=1, \dotsc, T$. The latent variable representation of the MVP model, using the data augmentation approach of \cite{albert1993bayesian}, is given by \begin{gather} \bm{y}^\ast_{it} = \boldsymbol \alpha_i + \bm{B} \bm{x}_{it} + \boldsymbol \epsilon_{it}, \label{eqn:MVP} \\ \boldsymbol \alpha_i = (\alpha_{1,i}, \dotsc, \alpha_{D,i})^\top \stackrel{iid}{\sim} \mathcal{N}({\bf 0}, \boldsymbol \Sigma_{\boldsymbol \alpha}), \\ \boldsymbol \epsilon_{it} = (\epsilon_{1,it}, \dotsc, \epsilon_{D,it})^\top \stackrel{iid}{\sim} \mathcal{N}({\bf 0}, \bm{R}_{\boldsymbol \epsilon}), \end{gather} for $i=1,\dotsc,P, t=1,\dotsc,T,$ where $\bm{y}^\ast_{it}=(y^\ast_{1,it}, \dotsc, y^\ast_{D,it})^\top$ is a continuous latent variable, $\boldsymbol \alpha_i$ is a $D$-vector of outcome-specific random effects for individual $i$ allowing for heterogeneity between individuals, $\bm{x}_{it}=(1, x_{1,it}, \dotsc, x_{K-1,it})^\top$ is a $K$-vector of exogenous covariates, $\bm{B}$ is a $D \times K$ matrix of regression coefficients and $\boldsymbol \epsilon_{it}$ is a $D$-vector of correlated errors which models the dependence structure between outcomes. The variable $\bm{x}_{it}$ is assumed to be uncorrelated with both $\boldsymbol \alpha_i$ and $\boldsymbol \epsilon_{it}$. This is entirely appropriate in the stated preference case that is our motivating analysis, but relaxing the assumption of exogenous $\bm{x}_{it}$ represents a useful extension. In order for $\bm{B}$ to be uniquely identified \citep{chib1998analysis}, $\bm{R}_{\boldsymbol \epsilon}$ is set to be a correlation matrix.
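For concreteness, the data generating process in (\ref{eqn:MVP}) can be simulated directly. The sketch below uses small illustrative dimensions and parameter values (not estimates from the paper) and applies the usual probit thresholding $y_{d,it}=\mathds{1}(y^\ast_{d,it}>0)$ to obtain the binary outcomes.

```python
import numpy as np

rng = np.random.default_rng(0)
D, K, P, T = 3, 2, 50, 4             # outcomes, covariates, individuals, periods

# Illustrative parameter values (not estimates from the paper)
B = rng.normal(size=(D, K))           # D x K regression coefficients
Sigma_alpha = 0.5 * np.eye(D)         # random-effects covariance
R_eps = np.array([[1.0, 0.4, 0.2],    # error correlation matrix
                  [0.4, 1.0, 0.3],
                  [0.2, 0.3, 1.0]])

y = np.empty((P, T, D), dtype=int)
for i in range(P):
    # outcome-specific random effects, shared across periods for individual i
    alpha_i = rng.multivariate_normal(np.zeros(D), Sigma_alpha)
    for t in range(T):
        x_it = np.concatenate(([1.0], rng.normal(size=K - 1)))  # intercept first
        eps_it = rng.multivariate_normal(np.zeros(D), R_eps)    # correlated errors
        y_star = alpha_i + B @ x_it + eps_it                    # latent utilities
        y[i, t] = y_star > 0                                    # observed outcomes
```

The correlation in $\bm{R}_{\boldsymbol \epsilon}$ induces dependence between the $D$ binary outcomes within each $(i,t)$ cell, which is what separate univariate probit models would miss.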
The observed outcome $\bm{y}_{it}$ is defined to be dependent on the latent variable $\bm{y}^\ast_{it}$ via the relationship \begin{equation} y_{d,it} = \mathds{1}(y^\ast_{d,it}>0), \quad d=1, \dotsc, D, \label{eqn:latent} \end{equation} where $\mathds{1}(\bm E)$ is an indicator function which takes value 1 if the event $\bm E$ occurs and 0 otherwise. Let $\bm{y}=\{\bm{y}_{it}; i=1,\dotsc,P, t=1,\dotsc,T\}$ be the set of observed discrete outcomes. The density of the latent variables $\bm{y}^\ast$ conditional on the random effects $\boldsymbol \alpha_{1:P}=(\boldsymbol \alpha_1,\dotsc,\boldsymbol \alpha_P)$ is given by \begin{equation} p(\bm{y}^\ast|\boldsymbol \alpha_{1:P}, \boldsymbol \theta) = \prod_{i=1}^P \prod_{t=1}^T \phi(\bm{y}^\ast_{it}; \boldsymbol \mu_{it}, \bm{R}_{\boldsymbol \epsilon}), \label{eqn:likelihood} \end{equation} where $\boldsymbol \theta:=(\bm{B}, \bm{R}_{\boldsymbol \epsilon}, \boldsymbol \Sigma_{\boldsymbol \alpha})$ denotes the vector of model parameters, $\boldsymbol \mu_{it}=\boldsymbol \alpha_i + \bm{B} \bm{x}_{it}$ and $\phi$ is the multivariate normal density function. Following the specification of the MVP model in (\ref{eqn:MVP})--(\ref{eqn:latent}), the posterior density is \begin{equation} \pi(\bm{y}^\ast, \boldsymbol \alpha_{1:P}, \boldsymbol \theta|\bm{y}) = \frac{p(\bm{y}|\bm{y}^\ast, \boldsymbol \alpha_{1:P}, \boldsymbol \theta)p(\bm{y}^\ast|\boldsymbol \alpha_{1:P},\boldsymbol \theta)p(\boldsymbol \alpha_{1:P}|\boldsymbol \theta)p(\boldsymbol \theta)}{p(\bm{y})}, \label{eqn:posterior} \end{equation} where $p(\bm{y})$ is the marginal likelihood, $p(\boldsymbol \theta)$ is the prior on the model parameters $\boldsymbol \theta$ and \begin{equation} p(\bm{y}|\bm{y}^\ast, \boldsymbol \alpha_{1:P}, \boldsymbol \theta) = \prod_{i=1}^P \prod_{t=1}^T \prod_{d=1}^D \bigg(\mathds{1}(y_{d,it}=0)\mathds{1}(y^\ast_{d,it} \leq 0) + \mathds{1}(y_{d,it}=1)\mathds{1}(y^\ast_{d,it} > 0)\bigg). 
\label{eqn:augmented posterior} \end{equation} Useful conjugate priors are available for $\bm{B}$ (or $\boldsymbol \beta=\text{vec}(\bm{B})$) and $\boldsymbol \Sigma_{\boldsymbol \alpha}$ which simplify MCMC sampling, but it is difficult to posit a suitable prior for $\bm{R}_{\boldsymbol \epsilon}$. \subsection{Prior choice for the correlation matrix \texorpdfstring{$\bm{R}_{\boldsymbol \epsilon}$}{bReps}} \label{subsec:prior} \cite{barnard2000modeling} decompose a covariance matrix $\Sig_{\beps}$ as $\bm{S}\bm{R}_{\boldsymbol \epsilon}\bm{S},$ where $\bm{S}$ is a diagonal matrix of standard deviations and $\bm{R}_{\boldsymbol \epsilon}$ is a correlation matrix. They show that if $\Sig_{\beps} \sim \mathcal{IW}(\nu, \bm{I})$, i.e. an inverse-Wishart distribution with degrees of freedom $\nu$ and scale matrix $\bm{I}$, then the density of $\bm{R}_{\boldsymbol \epsilon}$ is \begin{equation} p(\bm{R}_{\boldsymbol \epsilon}) \propto |\bm{R}_{\boldsymbol \epsilon}|^{\frac{1}{2}(\nu-1)(D-1)-1}\Bigg(\prod_{i=1}^D|\bm{R}_{\boldsymbol \epsilon}(-i;-i)|\Bigg)^{-\frac{\nu}{2}}, \label{eqn:marginally uniform prior} \end{equation} where $\bm{R}_{\boldsymbol \epsilon}(-i;-i)$ denotes the $i$-th principal submatrix of $\bm{R}_{\boldsymbol \epsilon}$, that is $\bm{R}_{\boldsymbol \epsilon}$ with its $i$-th row and column removed. We follow \cite{barnard2000modeling} and take (\ref{eqn:marginally uniform prior}) as the prior for $\bm{R}_{\boldsymbol \epsilon}$, which induces a modified Beta distribution on each off-diagonal element $r_{ij}$ of $\bm{R}_{\boldsymbol \epsilon}, i \neq j$. In particular, the marginal densities of the $r_{ij}$ are uniform on $(-1,1)$ when $\nu=D+1$, which means that posterior inference is invariant to the ordering of the binary outcomes $\bm{y}$.
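Draws from (\ref{eqn:marginally uniform prior}) with $\nu=D+1$ can be generated by sampling $\Sig_{\beps} \sim \mathcal{IW}(D+1, \bm{I})$ and normalising it to a correlation matrix. A minimal Python sketch (with illustrative dimension and sample size), which also checks empirically that each $r_{ij}$ is close to uniform on $(-1,1)$:

```python
import numpy as np

rng = np.random.default_rng(1)
D = 4
nu = D + 1            # this choice gives uniform marginals on the r_ij

def sample_R(rng, D, nu):
    # W ~ Wishart(nu, I) built from outer products of standard normals,
    # so Sigma = W^{-1} ~ InverseWishart(nu, I)
    Z = rng.normal(size=(nu, D))
    Sigma = np.linalg.inv(Z.T @ Z)
    s = np.sqrt(np.diag(Sigma))
    return Sigma / np.outer(s, s)   # normalise to a correlation matrix

draws = np.array([sample_R(rng, D, nu)[0, 1] for _ in range(20000)])
# For a Uniform(-1, 1) marginal: mean 0 and P(|r_12| < 0.5) = 0.5
print(round(draws.mean(), 3), round(np.mean(np.abs(draws) < 0.5), 3))
```

The same construction, scaled up, underlies the prior visualisations discussed next.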
Furthermore, recent results in \cite{wang2018on} establish that for such a choice of $\nu$, the corresponding matrix of partial correlations $\rho_{kl}$ has the LKJ distribution of \cite{lewandowski2009generating} with unit shape parameter. This means that all $\rho_{kl}$ are marginally distributed according to a $\textrm{Beta}(\frac{D}{2}, \frac{D}{2})$ distribution over $(-1,1)$, which is informative in high dimensions because the Beta density increasingly concentrates around zero. This informativeness is useful in practical applications, where a sparse structure on the partial correlation matrix, suggesting conditional independence, is often desirable. The dependence structures imposed by the marginally uniform prior are less studied in the literature. Since analytical results for these properties are limited \citep{tokuda2011visualizing}, we briefly illustrate them graphically instead. The results obtained are based on correlation matrices of dimension $D=4$, but they can be generalised to higher dimensions. We generate $10^7$ samples from (\ref{eqn:marginally uniform prior}) with $\nu=D+1$ by normalising the covariance matrices drawn from an $\mathcal{IW}(D+1, \bm{I})$ distribution. Figure~\ref{fig:corr_vs_par_corr} illustrates the pairwise dependence structures among the correlations $r_{ij}$ and the partial correlations $\rho_{kl}$ when the pairs share (top panels) or do not share (bottom panels) common indices. When there is a shared index, the density on $(r_{12}, r_{13})$ tends to support similar values in absolute terms (the visible cross pattern), which is less apparent when there is no common index in $(r_{12}, r_{34})$. However, both distributions have most of their density on the vertices corresponding to $|r_{ij}|\approx1$.
This means that inference for all pairs of $r_{ij}$ is skewed towards jointly extreme values a priori (the univariate margin for each $r_{ij}$ is still uniform on $(-1,1)$), although this effect diminishes with an increase in the number of observations. In contrast, pairs of partial correlations $\rho_{kl}$ exhibit no dependence structure regardless of whether or not there is a common index. Independence is also observed between $r_{ij}$ and $\rho_{kl}$, except when both parameters have the same indices, as for $(r_{12},\rho_{12})$, in which case they are strongly positively correlated. \begin{figure}[t!] \centering \includegraphics[trim= 1cm 3.5cm 1.25cm 5cm,clip,width=\textwidth]{corr_vs_par_corr} \caption{Bivariate density plots showing the dependence structures associated with the marginally uniform prior (\ref{eqn:marginally uniform prior}) on $\bm{R}_{\boldsymbol \epsilon}$ with $\nu=D+1$, for pairs of parameters sharing common indices (top panels) and without a common index (bottom panels).} \label{fig:corr_vs_par_corr} \end{figure} We now discuss related work on priors for $\bm{R}_{\boldsymbol \epsilon}$. Let $\mathcal{R}^D$ be the space of all valid correlation matrices. \cite{barnard2000modeling} also suggest a uniform prior over all correlation matrices in $\mathcal{R}^D$, which is equivalent to the LKJ prior with unit shape, as recommended by \cite{stan2017stan}. Note that the induced prior on the partial correlation matrix is the marginally uniform prior in (\ref{eqn:marginally uniform prior}) with $\nu=D+1$ (Figure \ref{fig:corr_vs_par_corr}). This might not be a suitable prior for $\rho_{kl}$ since, as discussed above, the implied joint distribution of the $\rho_{kl}$ exhibits dependence and places large mass on extreme values.
\cite{chib1998analysis} propose using a multivariate normal prior on the $r_{ij}$, with the support of the prior restricted to values of $r_{ij}$ which give a correlation matrix in $\mathcal{R}^D$, while \cite{liechty2004bayesian} introduce a mixture of normals prior on the $r_{ij}$ to express a priori knowledge of blocked structure in $\bm{R}_{\boldsymbol \epsilon}$. However, these choices of normal priors do not imply that all marginal densities of the $r_{ij}$ are the same, due to the constraints imposed on the $r_{ij}$ for the resulting $\bm{R}_{\boldsymbol \epsilon}$ to be in $\mathcal{R}^D$. \subsection{Posterior sampling of \texorpdfstring{$\bm{R}_{\boldsymbol \epsilon}$}{bReps}} \label{subsec:posterior} Posterior simulation for $\bm{R}_{\boldsymbol \epsilon}$ is challenging for two reasons: (i) the diagonal elements of $\bm{R}_{\boldsymbol \epsilon}$ must be 1, and (ii) $\bm{R}_{\boldsymbol \epsilon}$ must be positive definite. \cite{chib1998analysis} suggest sampling the $r_{ij}$ elements of $\bm{R}_{\boldsymbol \epsilon}$ in blocks using a random walk Metropolis-Hastings (RWMH) algorithm with a multivariate $t$ proposal density. However, the matrix obtained after each proposal is not guaranteed to be a valid correlation matrix, and the RWMH algorithm is notorious for its slow exploration of the parameter space. Tuning the parameters of this proposal distribution also requires finding an approximate mode of the log posterior distribution and computing the observed Fisher information at every iteration, resulting in a high computational overhead. In the setting of hierarchical regression models, \cite{barnard2000modeling} adopt the Griddy-Gibbs sampler of \cite{ritter1992facilitating} to sample $\bm{R}_{\boldsymbol \epsilon}$. Here, prior to the Gibbs step, one needs to solve a quadratic equation to determine the support for a single $r_{ij}$ (while keeping the rest fixed) which results in a valid correlation matrix.
The authors document the clear inefficiency of this sampling scheme when the prior in (\ref{eqn:marginally uniform prior}) is used, due to its tendency to place more weight near the boundary of $\mathcal{R}^D$. Moreover, the design of drawing one $r_{ij}$ at a time becomes computationally prohibitive when $D$ is large. \section{Efficient sampling for \texorpdfstring{$\bm{R}_{\boldsymbol \epsilon}$}{bReps} when using a marginally uniform prior} \label{sec:efficient} This section describes an efficient way of sampling $\bm{R}_{\boldsymbol \epsilon}$ by utilising Hamiltonian dynamics \citep{duane1987hybrid}. This involves reparameterising $\bm{R}_{\boldsymbol \epsilon}$ to enable sampling of parameters in an unconstrained space. Due to the attractive properties of the marginally uniform prior in (\ref{eqn:marginally uniform prior}) with $\nu=D+1$ discussed in Section~\ref{subsec:prior}, we will use this prior hereafter. Inference for the posterior distribution in (\ref{eqn:posterior}) can be performed using a Gibbs sampler (see Chapter 10 of \cite{greenberg2012introduction} for details). Our focus here is on the following non-standard conditional posterior distribution \begin{equation} \pi(\bm{R}_{\boldsymbol \epsilon}|\bm{y}, \bm{y}^\ast, \boldsymbol \alpha_{1:P}, \boldsymbol \theta_{-\bm{R}_{\boldsymbol \epsilon}}) \propto \prod_{i=1}^P \prod_{t=1}^T \phi(\bm{y}^\ast_{it}; \boldsymbol \mu_{it}, \bm{R}_{\boldsymbol \epsilon}) \cdot p(\bm{R}_{\boldsymbol \epsilon}), \label{eqn:conditional posterior} \end{equation} where $\boldsymbol \theta_{-\bm{\mathcal{S}}}$ is defined as $\boldsymbol \theta$, but excluding the parameters $\bm{\mathcal{S}}$.
\subsection{An unconstrained parameterisation} \label{subsec:unconstrained} Because sampling correlation coefficients is restricted to a constrained space, we adopt the reparameterisation strategy in \cite{smith2013bayesian}, which re-expresses $\bm{R}_{\boldsymbol \epsilon}$ via a positive definite matrix $\Sig_{\beps}$ as \begin{equation} \bm{R}_{\boldsymbol \epsilon} = \boldsymbol \Lambda_{\boldsymbol \epsilon}^{-1/2} \Sig_{\beps} \boldsymbol \Lambda_{\boldsymbol \epsilon}^{-1/2}, \label{eqn:reparameterisation} \end{equation} where $\boldsymbol \Lambda_{\boldsymbol \epsilon}=\textrm{diag}(\Sig_{\beps})$. The covariance matrix $\Sig_{\beps}$ can then be written in terms of its Cholesky factorisation $\Sig_{\beps}=\bm{\mathcal{L}}_{\boldsymbol \epsilon}\Leps^\top,$ where $\bm{\mathcal{L}}_{\boldsymbol \epsilon}$ is a lower triangular matrix. The diagonal elements of $\bm{\mathcal{L}}_{\boldsymbol \epsilon}$ are set to 1 so that the transformation of $\bm{R}_{\boldsymbol \epsilon}$ to $\bm{\mathcal{L}}_{\boldsymbol \epsilon}$ is one-to-one. We define an operator $\textrm{vechL}$ which vectorises the strict lower triangle of a matrix by row. The unknown parameter $\textrm{vechL}(\bm{\mathcal{L}}_{\boldsymbol \epsilon})=\{L_{ij};i=2, \dotsc, D, j<i\}$ lies in $\mathbb{R}^{D(D-1)/2}$ and is therefore unconstrained. \cite{lindstrom1988newton} also implement the Cholesky factorisation on a covariance matrix to optimise the log-likelihood function of a linear mixed effects model. Other possible reparameterisation methods for $\bm{R}_{\boldsymbol \epsilon}$ include using polar coordinates \citep{rapisarda2007parameterizing} and partial autocorrelations \citep{daniels2009modeling}, but we adopt the representation in (\ref{eqn:reparameterisation}) due to its computational tractability.
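A short sketch of this map from the unconstrained vector $\textrm{vechL}(\bm{\mathcal{L}}_{\boldsymbol \epsilon})$ back to a valid correlation matrix (hypothetical inputs; Python used for illustration), together with a finite-difference check of the identity $\partial \log|\bm{R}_{\boldsymbol \epsilon}|/\partial L_{ij} = -2L_{ij}/\sum_{k=1}^i L_{ik}^2$ stated later in Lemma~\ref{lemma:derivatives for gradient}:

```python
import numpy as np

D = 4
rng = np.random.default_rng(2)

def corr_from_vechL(v, D):
    # Unit-diagonal lower-triangular factor L, strict lower triangle
    # filled row by row from the unconstrained vector v (the vechL order)
    L = np.eye(D)
    L[np.tril_indices(D, -1)] = v
    Sigma = L @ L.T                        # Sigma = L L^T
    lam = np.sqrt(np.diag(Sigma))
    return Sigma / np.outer(lam, lam)      # R = Lambda^{-1/2} Sigma Lambda^{-1/2}

v = rng.normal(size=D * (D - 1) // 2)      # any point in R^{D(D-1)/2} is valid
R = corr_from_vechL(v, D)

# Finite-difference check of d log|R| / dL_21 against the closed form
# -2 L_ij / sum_k L_ik^2 (indices are zero-based below)
L = np.eye(D)
L[np.tril_indices(D, -1)] = v
analytic = -2 * L[1, 0] / np.sum(L[1, :2] ** 2)
h = 1e-6
vp, vm = v.copy(), v.copy()
vp[0] += h                                 # v[0] is the (2,1) entry of L
vm[0] -= h
numeric = (np.linalg.slogdet(corr_from_vechL(vp, D))[1]
           - np.linalg.slogdet(corr_from_vechL(vm, D))[1]) / (2 * h)
```

Any real vector `v` produces a symmetric, unit-diagonal, positive definite matrix, which is what makes the parameterisation convenient for HMC.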
By using a change of variables, we can rewrite the density function in (\ref{eqn:conditional posterior}) in terms of $\bm{\mathcal{L}}_{\boldsymbol \epsilon}$ as \begin{equation} \pi(\bm{\mathcal{L}}_{\boldsymbol \epsilon}|\bm{y}, \bm{y}^\ast, \boldsymbol \alpha_{1:P}, \boldsymbol \theta_{-\bm{\mathcal{L}}_{\boldsymbol \epsilon}}) \propto \pi(\bm{R}_{\boldsymbol \epsilon}|\bm{y}, \bm{y}^\ast, \boldsymbol \alpha_{1:P}, \boldsymbol \theta_{-\bm{R}_{\boldsymbol \epsilon}}) \cdot |{\bf J}|, \label{eqn:unconstrained conditional posterior} \end{equation} where $|{\bf J}|=|\partial \textrm{vechL}(\bm{R}_{\boldsymbol \epsilon}) / \partial \textrm{vechL}(\bm{\mathcal{L}}_{\boldsymbol \epsilon})^\top|$ is the determinant of the Jacobian for the transformation. We now note that, under the transformation from $\bm{R}_{\boldsymbol \epsilon}$ to $\bm{\mathcal{L}}_{\boldsymbol \epsilon}$, the prior on the lower triangular Cholesky factor $\bm{\mathcal{L}}_{\boldsymbol \epsilon}$, whose diagonal entries are all fixed at one, given by \begin{equation} p(\bm{\mathcal{L}}_{\boldsymbol \epsilon}) \propto p(\bm{R}_{\boldsymbol \epsilon}) \cdot |{\bf J}|, \label{eqn:prior cholesky} \end{equation} induces a marginally uniform prior on all $r_{ij}$ for $\nu=D+1$. \subsection{Sampling the Cholesky factor using HMC} \label{subsec:sampling} HMC, popularised by \cite{neal2011mcmc}, has enjoyed considerable recent interest within the statistical literature due to its ability to generate credible but distant candidate parameters for the MH algorithm, thereby reducing autocorrelation in the posterior samples. It does so by exploiting gradient information of the log posterior density to simulate a trajectory according to physical dynamics.
Given a target distribution of interest $\pi(\boldsymbol \vartheta)$, which in our case is the density in (\ref{eqn:unconstrained conditional posterior}), HMC introduces a fictitious momentum variable $\bm{u}$ into the physical system, which is assumed to follow a $\mathcal{N}({\bf 0}, \bm{M})$ pseudo-prior and targets the augmented distribution \begin{equation} \pi(\boldsymbol \vartheta, \bm{u}) \propto \exp(-\mathcal{H}(\boldsymbol \vartheta, \bm{u})), \label{eqn:HMC} \end{equation} where $\mathcal{H}(\boldsymbol \vartheta, \bm{u}) = -\log \pi(\boldsymbol \vartheta) + \frac{1}{2} \bm{u}^\top \bm{M} ^{-1} \bm{u}$ is termed the Hamiltonian which is made up of potential energy and kinetic energy components. The potential energy is derived from minus the log density of $\boldsymbol \vartheta$ under the target distribution while the kinetic energy is due to the movement of the momentum variable $\bm{u}$. The Hamiltonian system is used to describe the evolution of $\boldsymbol \vartheta$ and $\bm{u}$ over time $t$ via the differential equations \begin{equation} \frac{d \boldsymbol \vartheta}{dt} = \frac{\partial \mathcal{H}}{\partial \bm{u}} \quad \textrm{and} \quad \frac{d \bm{u}}{dt} = -\frac{\partial \mathcal{H}}{\partial \boldsymbol \vartheta}. 
\label{eqn:hamiltonian dynamics} \end{equation} The dynamics in (\ref{eqn:hamiltonian dynamics}) can be implemented in practice using the leapfrog method \citep{neal2011mcmc} and discretising continuous time by a stepsize $\varepsilon$ so that \begin{equation} \begin{aligned} \bm{u}(t + \varepsilon/2) &= \bm{u}(t) - (\varepsilon/2) \frac{\partial \mathcal{H}}{\partial \boldsymbol \vartheta}(\boldsymbol \vartheta(t)) \\ \boldsymbol \vartheta(t+\varepsilon) &= \boldsymbol \vartheta(t) + \varepsilon \frac{\partial \mathcal{H}}{\partial \bm{u}}(\bm{u}(t+\varepsilon/2)) \\ \bm{u}(t+\varepsilon) &= \bm{u}(t+\varepsilon/2) - (\varepsilon/2) \frac{\partial \mathcal{H}}{\partial \boldsymbol \vartheta}(\boldsymbol \vartheta(t+\varepsilon)). \end{aligned} \label{eqn:leapfrog} \end{equation} \cite{neal2011mcmc} shows that properties of the Hamiltonian such as reversibility and volume preservation are maintained under the symplectic integrator in (\ref{eqn:leapfrog}). Proposed values $\boldsymbol \vartheta'$ and $\bm{u}'$ obtained after a trajectory length of $\mathcal{T}=n\varepsilon$ by iterating the updates in (\ref{eqn:leapfrog}) $n$ times are then accepted with probability $\min \{1, \exp(\mathcal{H}(\boldsymbol \vartheta, \bm{u})-\mathcal{H}(\boldsymbol \vartheta', \bm{u}')) \}$. The invariant distribution of the Markov chain generated from the HMC algorithm is $\pi(\boldsymbol \vartheta, \bm{u})$, and samples from $\pi(\boldsymbol \vartheta)$ can be obtained by marginalising out the momentum $\bm{u}$. In order to implement the HMC algorithm as described above, computation of the derivatives of (\ref{eqn:unconstrained conditional posterior}) with respect to the $L_{ij}$ is required for the leapfrog update. Lemma~\ref{lemma:derivatives for gradient} derives the expressions for these gradients. \begin{lemma} Let $\bm{E}_k$ denote the matrix obtained by removing column $k$ from an identity matrix $\bm{I}$.
For the parameterisation of $\bm{R}_{\boldsymbol \epsilon}$ in (\ref{eqn:reparameterisation}), \begin{enumerate}[font=\normalfont, label=(\roman*)] \item $\displaystyle \frac{\partial \bm{R}_{\boldsymbol \epsilon}^{-1}}{\partial L_{ij}}=-\boldsymbol \Lambda_{\boldsymbol \epsilon}^{1/2} \bigg( \Sig_{\beps}^{-1} \frac{\partial \Sig_{\beps}}{\partial L_{ij}} \Sig_{\beps}^{-1} + \frac{\partial \boldsymbol \Lambda_{\boldsymbol \epsilon}^{-1/2}}{\partial L_{ij}} \boldsymbol \Lambda_{\boldsymbol \epsilon}^{1/2} \Sig_{\beps}^{-1} + \Sig_{\beps}^{-1} \boldsymbol \Lambda_{\boldsymbol \epsilon}^{1/2} \frac{\partial \boldsymbol \Lambda_{\boldsymbol \epsilon}^{-1/2}}{\partial L_{ij}} \bigg) \boldsymbol \Lambda_{\boldsymbol \epsilon}^{1/2}$. \item $\displaystyle \frac{\partial \log|\bm{R}_{\boldsymbol \epsilon}(-k;-k)|}{\partial L_{ij}}=\text{tr}\bigg(\bm{R}_{\boldsymbol \epsilon}^{-1}(-k;-k) \bm{E}_k^\top \frac{\partial \bm{R}_{\boldsymbol \epsilon}}{\partial L_{ij}} \bm{E}_k\bigg)$. \item $\displaystyle \frac{\partial \log |\bm{R}_{\boldsymbol \epsilon}|}{\partial L_{ij}}= - \frac{2L_{ij}}{\sum_{k=1}^i L^2_{ik}}$. \end{enumerate} \label{lemma:derivatives for gradient} \end{lemma} \begin{proof} Lemma 1(i) and (ii) are respectively obtained using Theorems 1 and 2 in Chapter 8 of \cite{magnus1999matrix}, by expressing $\frac{\partial \Sig_{\beps}^{-1}}{\partial L_{ij}}$ in terms of $\frac{\partial \bm{R}_{\boldsymbol \epsilon}^{-1}}{\partial L_{ij}}$ using the chain rule, and writing $\bm{R}_{\boldsymbol \epsilon}(-k;-k)$ as $\bm{E}_k^\top \bm{R}_{\boldsymbol \epsilon} \bm{E}_k$. Lemma 1(iii) is straightforward by noting that $|\bm{R}_{\boldsymbol \epsilon}|=|\boldsymbol \Lambda_{\boldsymbol \epsilon}|^{-1}$ since $|\Sig_{\beps}|=1$ from its Cholesky decomposition.
\end{proof} \section{A deterministic proposal distribution} \label{sec:deterministic} Various strategies have been proposed to reduce the variability in the Monte Carlo estimate of the expectation $\mathbb{E}[f(\boldsymbol \vartheta)]$ of a scalar function $f$ of parameter $\boldsymbol \vartheta$ with respect to some posterior distribution $\pi(\boldsymbol \vartheta)$, including Rao-Blackwellisation \citep{robert2004monte} and control variates \citep{dellaportas2012control, oates2017control}. These techniques produce an efficient estimator of $\mathbb{E}[f(\boldsymbol \vartheta)]$ based on samples of $\boldsymbol \vartheta$ generated by an MCMC sampler. Here, we focus on a particular class of methods which integrate variance reduction techniques dynamically within an MCMC sampling algorithm. Let $\boldsymbol \vartheta=(\vartheta_1, \dotsc, \vartheta_n)^\top$ be a parameter vector with normal full conditional distributions $\vartheta_i|\boldsymbol \vartheta_{-i} \sim \mathcal{N}(\mu_i,\sigma_i^2)$, where the conditional mean $\mu_i$ and the conditional variance $\sigma^2_i$ may depend on $\boldsymbol \vartheta_{-i} = \{\vartheta_j:j=1, \dotsc,n, j \not = i\}$. \cite{adler1981over} and \cite{barone1990improving} introduce an over-relaxation method where the update on $\boldsymbol \vartheta$ is performed using Gibbs sampling, and where the new value $\vartheta'_i$ for each margin of $\boldsymbol \vartheta$ is generated as \begin{equation} \vartheta'_i = (1+\kappa)\mu_i - \kappa \vartheta_i + u \sigma_i \sqrt{1-\kappa^2}, \quad i=1,\dotsc,n, \label{eqn:over-relaxation} \end{equation} with $u\sim \mathcal{N}(0,1)$ being a standard normal random variable.
Equation (\ref{eqn:over-relaxation}) allows for the introduction of dependence between successive samples via the constant antithetic parameter $\kappa$, which is required to be in the open interval $(-1,1)$ so that the Markov chain is ergodic and produces $\pi(\boldsymbol \vartheta)$ as its stationary distribution. This scheme is exactly the conventional Gibbs sampler when $\kappa=0$. Variance reduction in estimating $\mathbb{E}[f(\boldsymbol \vartheta)]$ is achieved through the antithetic variable method \citep{hammersley1956new} by setting $\kappa>0$ so that the estimation bias in the previous sample is corrected in the opposite direction. The rate of convergence for the over-relaxation method in (\ref{eqn:over-relaxation}) is studied in \cite{barone1990improving}, while \cite{green1992metropolis} establish that the asymptotic variance of the estimator for $\mathbb{E}[f(\boldsymbol \vartheta)]$ using this strategy for linear $f$ is proportional to $\frac{1-\kappa}{1+\kappa}$. The inefficiency of an MCMC sampler in estimating $\mathbb{E}[f(\boldsymbol \vartheta)]$ is usually measured by the integrated autocorrelation time \citep{roberts2009examples}, which is defined as \begin{equation*} \textrm{IACT}_f = 1 + \sum_{j=1}^\infty \rho_{j,f}, \end{equation*} where $\rho_{j,f}$ is the lag $j$ autocorrelation function of the MCMC iterates of $f(\boldsymbol \vartheta)$ after convergence. Alternatively, one can measure the efficiency of the sampler by computing the effective sample size per MCMC iteration, which by definition is the reciprocal of the IACT. A small value of the IACT is desirable in practice as it indicates that the Markov chain mixes well. 
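The effect of over-relaxation on the IACT can be illustrated on a toy target. The sketch below uses a hypothetical bivariate normal with correlation $0.99$ (echoing Example 1 of Section~\ref{sec:simulation}), implements the update (\ref{eqn:over-relaxation}), and estimates the IACT by a truncated sum of sample autocorrelations:

```python
import numpy as np

rng = np.random.default_rng(3)

def iact(x, max_lag=200):
    # IACT_f = 1 + sum of lag-j autocorrelations, truncated at max_lag
    x = np.asarray(x) - np.mean(x)
    n = len(x)
    denom = np.dot(x, x)
    return 1.0 + sum(np.dot(x[:n - j], x[j:]) / denom
                     for j in range(1, max_lag + 1))

def over_relaxed_gibbs(kappa, n_iter=20000, r=0.99):
    # Target: bivariate normal with unit variances and correlation r;
    # conditionals are theta_i | theta_{-i} ~ N(r * theta_{-i}, 1 - r^2)
    theta = np.array([2.0, 2.0])
    sd = np.sqrt(1 - r ** 2)
    out = np.empty(n_iter)
    for s in range(n_iter):
        for i in range(2):
            mu_i = r * theta[1 - i]
            theta[i] = ((1 + kappa) * mu_i - kappa * theta[i]
                        + rng.normal() * sd * np.sqrt(1 - kappa ** 2))
        out[s] = theta[0]
    return out

plain = over_relaxed_gibbs(kappa=0.0)   # conventional Gibbs sampler
over = over_relaxed_gibbs(kappa=0.9)    # over-relaxed update
print(iact(plain[1000:]), iact(over[1000:]))
```

On this strongly correlated target the plain Gibbs chain has a large IACT, while the over-relaxed chain's negative autocorrelations reduce it substantially, consistent with the $\frac{1-\kappa}{1+\kappa}$ factor of \cite{green1992metropolis}.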
The IACT can be less than 1 if some of the autocorrelations are negative, in which case the resulting Monte Carlo estimator is super-efficient. Motivated by this observation and the over-relaxation sampler, we introduce into the MH literature a deterministic design of the proposal distribution for $\boldsymbol \vartheta$ \begin{equation} q(\boldsymbol \vartheta'|\boldsymbol \vartheta) = \delta_{\psi(\boldsymbol \vartheta)}(\boldsymbol \vartheta'), \label{eqn:deterministic proposal} \end{equation} where $\psi$ is a mapping function which introduces negative correlation between samples and $\delta_{\psi(\boldsymbol \vartheta)}$ is the Dirac delta function at $\psi(\boldsymbol \vartheta)$. In this case, the MH acceptance probability involves the ratio of $\pi(\boldsymbol \vartheta)$ evaluated at $\boldsymbol \vartheta'$ and $\boldsymbol \vartheta$. When $\pi(\boldsymbol \vartheta)$ is a normal distribution, we propose setting \begin{equation} \psi(\boldsymbol \vartheta) = 2 \boldsymbol \mu_{\boldsymbol \vartheta} - \boldsymbol \vartheta, \label{eqn:antithetic} \end{equation} where $\boldsymbol \mu_{\boldsymbol \vartheta}$ is the mean of $\pi(\boldsymbol \vartheta)$. It is clear that (\ref{eqn:antithetic}) represents an example of an antithetic variable with perfect negative correlation, and also an instance of the over-relaxation method in (\ref{eqn:over-relaxation}) with $\kappa=1$, which is outside the range of values for which the Markov chain is ergodic. Symmetry of the normal density gives $\pi(\boldsymbol \vartheta')=\pi(\boldsymbol \vartheta)$, which in turn translates to an acceptance probability of one. Clearly, our proposed antithetic sampling will only yield an ergodic Markov chain when it is coupled with stochastic simulation of additional parameters that affect the value of the deterministic proposal $\psi(\boldsymbol \vartheta)$, in particular $\boldsymbol \mu_{\boldsymbol \vartheta}$.
Under this condition, the value of $\boldsymbol \mu_{\boldsymbol \vartheta}$ changes in every iteration of the update and this drives the exploration of $\boldsymbol \vartheta$ in the parameter space. Furthermore, the dependence between $\boldsymbol \vartheta$ and other model parameters prevents exact periodicity from occurring, and thus the Markov chain is aperiodic. The conditional posterior distribution of the random effects $\boldsymbol \alpha_{1:P}$ in our MVP model is normal, and likewise for the regression parameters $\boldsymbol \beta$ when using a conjugate prior. Therefore, we can employ the antithetic sampling method in (\ref{eqn:antithetic}) to improve the IACTs of $\boldsymbol \alpha_{1:P}$ and $\boldsymbol \beta$. In fact, antithetic sampling of normal random variables can also be understood in terms of an HMC update. Suppose that $\boldsymbol \vartheta \sim \mathcal{N}(\boldsymbol \mu_{\boldsymbol \vartheta}, \boldsymbol \Sigma_{\boldsymbol \vartheta})$, and the prior on the momentum variable $\bm{u}$ is chosen as $\mathcal{N}({\bf 0}, \boldsymbol \Sigma^{-1}_{\boldsymbol \vartheta})$. \cite{pakman2014exact} show that the resulting Hamiltonian system can be solved analytically, with solution given by \begin{equation} \boldsymbol \vartheta(t) = \boldsymbol \mu_{\boldsymbol \vartheta} + \boldsymbol \Sigma_{\boldsymbol \vartheta} \bm{u}(0) \sin(t) + (\boldsymbol \vartheta(0)-\boldsymbol \mu_{\boldsymbol \vartheta}) \cos(t), \label{eqn:solution for Hamilton} \end{equation} which is a linear combination of $\boldsymbol \mu_{\boldsymbol \vartheta}$, the initial value $\boldsymbol \vartheta(0)$ of $\boldsymbol \vartheta$ and the initial momentum $\bm{u}(0)$. Note that (\ref{eqn:solution for Hamilton}) is a multivariate generalisation of (\ref{eqn:over-relaxation}) with $t=\cos^{-1}(-\kappa)$. Equation~(\ref{eqn:solution for Hamilton}) is thus equivalent to the antithetic sampler in (\ref{eqn:antithetic}) when setting $t=\pi$ radians.
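This equivalence can be verified numerically. For a hypothetical mean and covariance, the exact solution (\ref{eqn:solution for Hamilton}) evaluated at $t=\pi$ recovers the antithetic proposal $2\boldsymbol \mu_{\boldsymbol \vartheta} - \boldsymbol \vartheta$ in (\ref{eqn:antithetic}), since $\sin(\pi)=0$ removes the momentum contribution and $\cos(\pi)=-1$ reflects the position through the mean. A short sketch:

```python
import numpy as np

rng = np.random.default_rng(5)
mu = np.array([1.0, -2.0])                 # mean of pi(theta), illustrative
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])             # covariance of pi(theta), illustrative

theta0 = rng.normal(size=2)                # initial position theta(0)
u0 = rng.multivariate_normal(np.zeros(2),
                             np.linalg.inv(Sigma))  # momentum ~ N(0, Sigma^{-1})

def theta_t(t):
    # Exact solution of the Hamiltonian dynamics for a normal target
    return mu + Sigma @ u0 * np.sin(t) + (theta0 - mu) * np.cos(t)

# t = pi reproduces the deterministic antithetic proposal 2*mu - theta0
print(np.allclose(theta_t(np.pi), 2 * mu - theta0))   # -> True
```

Setting $t=\frac{\pi}{2}$ instead gives $\boldsymbol \mu_{\boldsymbol \vartheta} + \boldsymbol \Sigma_{\boldsymbol \vartheta}\bm{u}(0)$, a draw that does not depend on $\boldsymbol \vartheta(0)$ at all.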
Since there is no approximation error in the Hamiltonian dynamics for a normal distribution, an MH accept-reject step is not required in the HMC sampler, and the proposed value of $\boldsymbol \vartheta$ will always be accepted. This equivalence relation was first observed by \cite{pakman2014exact}, but was not particularly useful in their framework of sampling from a truncated multivariate normal distribution. Our proposal for antithetic sampling is different from theirs in the sense that it is entirely deterministic, and we choose $t=\pi$ radians to induce a perfect negative proposal correlation. \cite{pakman2014exact}, on the other hand, suggest setting $t=\frac{\pi}{2}$ radians, which is equivalent to drawing a fresh sample from a random number generator when it is applied to the setting of a normal distribution. We refer to this approach as the independent sampler hereafter. So far, our discussion has mainly focused on normal $\pi(\boldsymbol \vartheta)$. This is because an analytic solution to the Hamiltonian system is only available for a normal distribution. It is possible to extend the proposed antithetic sampler to more general distributions by obtaining an approximation of $\boldsymbol \mu_{\boldsymbol \vartheta}$ in order to propose a new value of $\boldsymbol \vartheta$, and then accept or reject the proposal in an MH algorithm to target the true $\pi(\boldsymbol \vartheta)$, as suggested in \cite{green1992metropolis}. However, the application of this generalisation and its variants (e.g.~\cite{creutz1987overrelaxation}) is somewhat limited due to high rejection rates in the accept-reject step \citep{neal1998suppressing}. In this case, the HMC algorithm provides a way to overcome this shortcoming. \section{Simulation studies} \label{sec:simulation} We now study the efficiency of the antithetic variable technique described in Section~\ref{sec:deterministic}. Two examples are presented. 
The first examines the antithetic sampler in a more general setting, while the second is specific to the application in Section~\ref{sec:application}. Reported IACT values of the parameters are computed using the \verb|coda| package \citep{plummer2006coda} in \verb|R|. \vspace{\baselineskip} \noindent {\bf Example 1. }The stationary distribution $\pi(\boldsymbol \theta)$ is specified as a bivariate normal distribution with high correlation (0.99) between the variables. We investigate the performance of three sampling schemes: the independent sampler, the over-relaxation algorithm with $\kappa=0.9$, and a coupling of the over-relaxation algorithm (on the first margin) with the antithetic sampler (on the second margin). Note that this coupling strategy introduces stochasticity into the antithetic sampler, which is essential to produce an ergodic Markov chain. The samplers are each run for 10\,000 iterations from the same initialised value $(2,2)$, and the update on each margin is performed conditional on the other. Figure~\ref{fig:correlated normal} illustrates the trajectories of the first 50 samples generated. Exploration of the target space is reduced to a random walk under the independent sampler. In contrast, the other two samplers move between different contours of the density and explore the full support of the distribution in an elliptical manner, thereby reducing the IACT significantly. The IACT decreases further when the over-relaxation algorithm on the second margin is replaced by antithetic sampling. In this analysis, the mixing of both margins is improved by a factor of 1.75. \begin{figure}[h!] \centering \includegraphics[trim= 0.5cm 1cm 6.5cm 9.5cm,clip,width=0.6\textwidth]{normal} \caption{Trajectories of the first 50 samples generated from the independent sampler (left), the over-relaxation algorithm with $\kappa=0.9$ (middle), and the over-relaxation algorithm coupled with the antithetic sampler (right).
The blue solid lines represent the 95\% confidence region of the bivariate normal distribution.} \label{fig:correlated normal} \end{figure} \vspace{\baselineskip} \noindent {\bf Example 2. }A simulated dataset is generated following the MVP model given in (\ref{eqn:MVP})--(\ref{eqn:latent}), with $D=8, P=162, T=16$ and values of the parameters $\boldsymbol \theta=(\boldsymbol \beta, \bm{R}_{\boldsymbol \epsilon}, \boldsymbol \Sigma_{\boldsymbol \alpha})$ set to be the posterior mean estimates of the parameters in Model 1 of the female contraceptive product analysis of Section~\ref{sec:application}. To avoid hand-tuning the stepsize $\varepsilon$ and the trajectory length $\mathcal{T}$ for the HMC update of $\bm{\mathcal{L}}_{\boldsymbol \epsilon}$, we utilise the No-U-Turn Sampler (NUTS) with the dual averaging scheme of \cite{hoffman2014no}. We use the following non-informative prior distributions: $\boldsymbol \beta \sim \mathcal{N}({\bf 0}, 100\bm{I})$, $\boldsymbol \Sigma_{\boldsymbol \alpha} \sim \mathcal{IW}(9, \bm{I})$ and the prior distribution on the lower triangular Cholesky factor $\bm{\mathcal{L}}_{\boldsymbol \epsilon}$ given in (\ref{eqn:prior cholesky}). The sampling scheme is run for 30\,000 iterations, with the first 5\,000 samples discarded as burn-in. Appendix~\ref{app:sampling scheme} details the Gibbs sampling scheme. Figure~\ref{fig:autocorrelation} compares graphically the marginal posterior densities and sample autocorrelations of randomly sampled random effects $\boldsymbol \alpha_{1:P}$ and the regression parameter $\boldsymbol \beta$ between independent and antithetic sampling. Despite the absence of a stochastic component in the updates of $\boldsymbol \alpha_{1:P}$ and $\boldsymbol \beta$, the kernel density plots of these parameters indicate that the coupling of a stochastic MCMC scheme for the remaining parameters with the antithetic variable technique gives the same posterior distributions as those under independent sampling. 
The autocorrelation plots show that the samples generated from antithetic sampling have positive dependence with a higher rate of decay over the number of lags, thereby demonstrating the superior mixing of the Markov chain. The IACT values of the randomly sampled parameters are significantly lower, with improvement factors of 3.72 and 2.10 observed for $\alpha_{3,80}$ and $\beta_{182}$ respectively. The box plot showing the distribution of the IACT values of $\bm{\alpha}_{1:P}$ also indicates that some of these parameters are super-efficient. Furthermore, the log IACT ratios of the independent sampler compared to the antithetic sampler are well above 0, suggesting that all $\boldsymbol \alpha_{1:P}$ and $\boldsymbol \beta$ parameters experience efficiency gains. Although perfect negative correlation is induced between successive samples by the deterministic proposal, this does not necessarily translate to an equivalent autocorrelation in the posterior samples. Rather, the negative relationship is used to reduce the magnitude of positive autocorrelation present in the MCMC samples. Note that convergence to the posterior distribution might be slow for poorly initialised values under antithetic sampling so we suggest using independent sampling during the burn-in period and later switching to the deterministic proposal. \begin{figure}[t!] \onehalfspacing \centering \includegraphics[trim= 0.2cm 0.75cm 1.25cm 0.75cm,clip,width=0.7\textwidth]{IS_vs_AS_ratio} \caption{Marginal posterior densities of a randomly selected random effects term (top panel) and regression coefficient (bottom panel), and their sample autocorrelation plots under independent sampling (IS) and antithetic sampling (AS). 
Rightmost column gives the distributions of the IACT values and the element-wise IACT ratios of IS to AS for $\boldsymbol \alpha_{1:P}$ and $\boldsymbol \beta$ on the log scale.} \label{fig:autocorrelation} \end{figure} The remaining simulation experiments investigate the performance of the MVP model in the context of recovering the true parameters of the data generating process under different specifications of prior distribution on $\boldsymbol \theta$. We use the posterior root-mean-square error (RMSE) defined by \begin{equation} \textrm{RMSE}(\boldsymbol \theta) = \sqrt{\frac{1}{N}\sum_{j=1}^N (\boldsymbol \theta^{[j]}-\boldsymbol \theta_{\textrm{true}})^2}, \label{eqn:RMSE} \end{equation} as the performance measure, where $\boldsymbol \theta^{[j]}$ is the $j$-th iterate from the $N$ posterior samples and $\boldsymbol \theta_{\textrm{true}}$ is the true value of $\boldsymbol \theta$. The measure in (\ref{eqn:RMSE}) is defined for univariate $\boldsymbol \theta$. For a multivariate $\boldsymbol \theta$, the posterior RMSE is calculated for each margin of $\boldsymbol \theta$. All the results shown are based on 1\,000 different replicate sets of simulated data with the same true parameter values. We first consider the conditionally conjugate hierarchical inverse-Wishart $\mathcal{HIW}(\lambda, \bm{A})$ prior of \cite{huang2013simple} with degrees of freedom $\lambda$ and positive scale parameter $\bm{A}=(A_1, \dotsc, A_D)^\top$ as an alternative to the inverse-Wishart prior on the $D \times D$ covariance matrix $\boldsymbol \Sigma_{\boldsymbol \alpha}$, \begin{equation*} \begin{gathered} \boldsymbol \Sigma_{\boldsymbol \alpha}|a_1,\dotsc,a_D \sim \mathcal{IW}\Bigg(\lambda + D - 1, 2 \lambda\textrm{diag}\bigg(\frac{1}{a_1}, \dotsc, \frac{1}{a_D}\bigg)\Bigg), \\ a_i \stackrel{iid}{\sim} \mathcal{IG}(0.5, A^{-2}_i), \quad i=1,\dotsc, D, \end{gathered} \end{equation*} where $\mathcal{IG}(a, b)$ is an inverse-Gamma distribution with shape $a$ and scale $b$. 
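Sampling $\boldsymbol \Sigma_{\boldsymbol \alpha}$ from this hierarchical prior is a two-step procedure: first the auxiliary scales $a_i$, then the conditional inverse-Wishart draw. The following is a minimal sketch of ours (not the paper's sampler), using SciPy's \verb|invgamma| and \verb|invwishart| and the scale values used later in this section.

```python
import numpy as np
from scipy.stats import invgamma, invwishart

def sample_hiw_prior(lam, A, rng):
    # one draw of Sigma_alpha from the HIW(lam, A) prior:
    #   a_i ~ IG(0.5, A_i^{-2}),
    #   Sigma | a ~ IW(lam + D - 1, 2 * lam * diag(1/a_1, ..., 1/a_D))
    D = len(A)
    a = invgamma.rvs(0.5, scale=1.0 / np.asarray(A) ** 2,
                     size=D, random_state=rng)
    scale = 2.0 * lam * np.diag(1.0 / a)
    return invwishart.rvs(df=lam + D - 1, scale=scale, random_state=rng)

rng = np.random.default_rng(0)
A = np.array([0.23, 0.23] + [0.46] * 6)   # scales used in the simulation
Sigma = sample_hiw_prior(lam=2, A=A, rng=rng)   # 8 x 8 positive definite draw
```

With $\lambda=2$ and $D=8$ the inverse-Wishart degrees of freedom equal $9$, which satisfies the requirement $\textrm{df} \geq D$.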
The marginal prior of the standard deviation in $\boldsymbol \Sigma_{\boldsymbol \alpha}$ is a half-$t(\lambda, A_i)$ distribution, as suggested in \cite{gelman2006prior}. In the simulation, we select $\lambda=2$ and choose a weakly informative scale parameter whereby $A_1 = A_2 = 0.23$ and $A_3 = \cdots = A_8 = 0.46$ so that approximately 95\% of the half-$t$ density is below 1 and 2 respectively. This specification is relevant to the real data application in Section~\ref{sec:application}, where our prior belief is that the variability in the tendency of GPs to discuss pill contraceptives is lower compared to non-pill alternatives. In contrast, the inverse-Wishart prior assumes the same variability for all variance parameters $\sigma^2_{\alpha_i}$ in $\boldsymbol \Sigma_{\boldsymbol \alpha}$. Figure~\ref{fig:IW vs HIW} shows the distribution of the average RMSE ratio of each type of parameter in $\boldsymbol \Sigma_{\boldsymbol \alpha}$, based on 1\,000 replicate simulations, for the hierarchical inverse-Wishart prior versus the inverse-Wishart prior. Although the hierarchical inverse-Wishart prior is flexible enough to specify different strengths of prior on each $\sigma^2_{\alpha_i}$, Figure~\ref{fig:IW vs HIW} shows that in this case its performance is similar to the more restrictive inverse-Wishart prior. This result is somewhat unsurprising considering that the estimated $\sigma^2_{\alpha_i}$ in the application example are more or less similar across the different contraceptive products (see Appendix~\ref{app:covariance}). 
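The claim above about the scale choice can be checked directly: for $\lambda=2$ the Student-$t$ CDF has the closed form $F(x)=\tfrac{1}{2}+x/(2\sqrt{x^2+2})$, so the half-$t(2,A)$ mass below a cutoff $c$ is $2F(c/A)-1$. This is our check, not the paper's code.

```python
import math

def half_t2_cdf(c, A):
    # P(A * |T| <= c) for T ~ Student-t with 2 degrees of freedom,
    # i.e. the half-t(2, A) prior on a standard deviation;
    # uses the closed-form t_2 CDF  F(x) = 1/2 + x / (2 * sqrt(x^2 + 2))
    x = c / A
    return 2.0 * (0.5 + x / (2.0 * math.sqrt(x * x + 2.0))) - 1.0

p_pill = half_t2_cdf(1.0, 0.23)    # mass below 1 for the pill products
p_other = half_t2_cdf(2.0, 0.46)   # mass below 2 for the non-pill products
# both are approximately 0.951, i.e. about 95% of the half-t density
```

Since $1/0.23 = 2/0.46$, the two probabilities coincide exactly, which is why a single degrees-of-freedom choice delivers the stated 95\% coverage at both cutoffs.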
The distributions for the posterior RMSE ratio of the correlation coefficients and the partial correlations are concentrated around 1 since both the hierarchical inverse-Wishart prior with $\lambda=2$ and the inverse-Wishart prior with $D+1$ degrees of freedom and scale matrix $\bm{I}$ induce the same marginally uniform prior, i.e.~(\ref{eqn:marginally uniform prior}) with $\nu = D+1$, on the resulting correlation matrix $\bm{R}_{\boldsymbol \alpha}$, which in turn gives the same implied LKJ distribution on the partial correlations. \begin{figure}[t!] \centering \begin{subfigure}{.475\textwidth} \centering \includegraphics[trim= 2.25cm 1.75cm 12.25cm 11.5cm,clip,width=0.8\textwidth]{hiw_vs_iw} \caption{Hierarchical inverse-Wishart prior versus inverse-Wishart prior on $\boldsymbol \Sigma_{\boldsymbol \alpha}$.} \label{fig:IW vs HIW} \end{subfigure} \hfill \begin{subfigure}{.475\textwidth} \centering \includegraphics[trim= 2.25cm 1.75cm 12.25cm 11.5cm,clip,width=0.8\textwidth]{hs_vs_n} \caption{Horseshoe shrinkage prior versus normal prior on $\boldsymbol \beta$.} \label{fig:horseshoe vs normal} \end{subfigure} \caption{Distributions of the average posterior RMSE ratio of all parameters in (a) $\boldsymbol \Sigma_{\boldsymbol \alpha}$ or (b) $\boldsymbol \beta$, based on 1\,000 replicate analyses, under different prior choices. (a) Standard deviations, correlations and partial correlations for parameters in $\boldsymbol \Sigma_{\boldsymbol \alpha}$ for the hierarchical inverse-Wishart prior versus the inverse-Wishart prior on $\boldsymbol \Sigma_{\boldsymbol \alpha}$. (b) Sparse regression coefficients $\beta_i=0$ and non-sparse coefficients $\beta_i\neq 0$ for the horseshoe prior versus the $\mathcal{N}(\boldsymbol{0},100\boldsymbol{I})$ prior on $\boldsymbol \beta$. 
} \label{fig:prior beta and sigma} \end{figure} To identify sparse signals (coefficients which are significant) in the regression parameter $\boldsymbol \beta$, we employ the horseshoe shrinkage prior \citep{carvalho2010horseshoe} given by \begin{equation*} \beta_i|\lambda_i, \tau \sim \mathcal{N}(0, \tau^2 \lambda_i^2), \quad \lambda_i \sim \mathcal{C}^+(0,1), \quad \tau \sim \mathcal{C}^+(0,1), \end{equation*} where $\mathcal{C}^+(0,1)$ is a half-Cauchy distribution with location 0 and scale 1 restricted to positive support. The simulation is carried out by setting 75\% of the smallest non-intercept regression coefficients (in absolute value) in $\boldsymbol \beta$ to 0, from which we generate the simulated datasets. We model the prior on each intercept separately by a flat $\mathcal{N}(0, 100)$ distribution to avoid heavily penalising these parameters. Gibbs sampling from the posterior distribution of $\boldsymbol \beta$ is implemented by adopting the latent variable formulation in \cite{makalic2016simple}. Figure~\ref{fig:horseshoe vs normal} displays the results of comparing this prior specification for $\boldsymbol \beta$ to a $\mathcal{N}({\bf 0}, 100\bm{I})$ prior, again in terms of the average RMSE ratio over all regression parameters. The horseshoe prior performs as well as the $\mathcal{N}({\bf 0}, 100\bm{I})$ prior on non-zero entries of $\boldsymbol \beta$, although the variability in the RMSE ratio is large. On the other hand, the horseshoe prior outperforms the normal prior for those parameters whose true values are zero, reducing the RMSE by half. This occurs as the horseshoe prior places a greater density around zero, which results in a more concentrated posterior distribution for parameters which are truly zero. Therefore, it is an attractive default option when we expect sparsity in the regression parameters, as is the case for our analysis of the characteristics affecting the decision-making behaviour of GPs in the next section. 
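For illustration, a joint draw from the horseshoe prior can be generated by sampling the half-Cauchy scales and then the conditionally normal coefficients. This is a sketch of ours; the paper's Gibbs sampler instead works through the latent variable formulation of \cite{makalic2016simple}.

```python
import numpy as np

def sample_horseshoe_prior(p, rng):
    # one joint draw (beta, lambda, tau) from the horseshoe prior:
    #   tau ~ C+(0,1), lambda_i ~ C+(0,1),
    #   beta_i | lambda_i, tau ~ N(0, tau^2 * lambda_i^2)
    tau = abs(rng.standard_cauchy())       # global shrinkage scale
    lam = np.abs(rng.standard_cauchy(p))   # local shrinkage scales
    beta = rng.normal(0.0, tau * lam)      # conditionally normal coefficients
    return beta, lam, tau

beta, lam, tau = sample_horseshoe_prior(p=5, rng=np.random.default_rng(1))
```

The global scale $\tau$ pulls all coefficients towards zero, while the heavy-tailed local scales $\lambda_i$ let individual signals escape the shrinkage, which is the mechanism behind the halved RMSE on the truly zero coefficients.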
\section{Discussion of female contraceptive products by Australian GPs} \label{sec:application} \subsection{Background and aims of study} \label{subsec:background} In order to study the decision-making behaviour of Australian GPs, we obtain data from \cite{fiebig2017consideration}, who design a stated preference experiment in which GPs are asked to select the contraceptive products that they would consider discussing with hypothetical female patients. The GPs evaluate a sequence of vignettes where patients are defined in terms of socio-economic and clinical characteristics that are varied as part of the experimental design. Table~\ref{table:attributes} in Appendix~\ref{app:attributes} contains the attributes of the patients with a description for each level of the categorical variables. The GPs choose from a set of 9 products that they would discuss with the patient before deciding upon their most preferred product to be subsequently prescribed to the patient. A sample of 162 GPs participated in the experiment, with each subject making choices for 16 different patients, resulting in 2\,592 observations. The following covariate information is collected on the GPs themselves: age, gender, whether they are registered as a Fellow of the Royal Australian College of GPs, whether they have a certificate in family planning, whether they are an Australian medical graduate, whether their location of practice is in an urban area and whether they bulk-bill patients. Analysis of this panel data is based on the set of binary outcomes as to whether or not to discuss each of the contraceptive products. Due to the low occurrence of prescriptions of the hormonal patch, which was yet to be released in the Australian market, we removed this product from the dataset, leaving observations on the 8 remaining products. The experiment is designed to mimic the choice problem faced by GPs in a consultation where they need to match a product with a particular patient.
In characterising such a decision problem, \cite{frank2007custom} distinguish between ``custom-made'' and ``ready-to-wear'' (or norm-based) choices. A custom-made choice involves the GP undertaking a careful evaluation of the patient and then matching her to an appropriate product. However, as new products are introduced, GPs face considerable costs in the process of gaining the knowledge and expertise required to discuss and prescribe these products. This is particularly the case when more familiar products are available even though they may be somewhat inferior to the new products; an especially salient situation in the market for contraceptive products. In such cases, some GPs will tend to adopt norms (here particular products) that work well for a broad class of patients and to place less weight on certain patient attributes that would indicate a different product that is potentially a better match. Of particular interest is the dependence between the products, that is, which products tend to be discussed together and which tend to form distinct clusters. If GPs pursue custom-made strategies, then a considerable portion of the dependence between products will be explained by the attributes of the patient. Conditional on the observable features of the patient and characteristics of the GPs, remaining dependencies will reflect the relationship between unobservables related to evaluations of the suitability of certain products for a particular patient, and how individual GPs' product effects are correlated across products. The proposed model is designed to capture these forms of heterogeneity and will permit a detailed analysis of the choices. The prevalence of ready-to-wear choices is one possible explanation for the relatively low uptake of long acting reversible contraceptive (LARC) methods in Australia \citep{black2013australian}.
LARC methods are contraceptives that are administered less frequently than monthly and include hormonal implants, intrauterine contraception (IUC), both hormonal and copper-bearing, and contraceptive injections. There is increasing support for the greater use of these more effective methods to reduce unintended pregnancies and abortion rates. In our analysis below, we will use the model to explore a case where there is no clinical reason why at least one of these LARC methods should not be considered for discussion by GPs. For ease of presentation, we will use the subscripts in Table~\ref{table:products} to denote the products. \begin{table}[ht!] \centering \onehalfspacing \begin{tabular}{|c|l|} \hline Subscript & \multicolumn{1}{c|}{Product} \\ \hline 1 & Combined pill \\ 2 & Mini-pill \\ \cellcolor{gray!25}3 & \cellcolor{gray!25}Hormonal injection \\ \cellcolor{gray!25}4 & \cellcolor{gray!25}Hormonal implant \\ \cellcolor{gray!25}5 & \cellcolor{gray!25}Hormonal IUD \\ 6 & Vaginal ring \\ \cellcolor{gray!25}7 & \cellcolor{gray!25}Copper IUD \\ 8 & Condom \\ \hline \end{tabular} \caption{Correspondence of parameter subscripts to each female contraceptive product. Long acting reversible contraceptive methods are shown in grey.} \label{table:products} \end{table} \subsection{Analysis and results} \label{subsec:discussion} We consider two different models for the data: \begin{align} \text{Model 1: }& \bm{y}^\ast_{it} = \boldsymbol \alpha_i + \bm{B} \bm{x}_{it} + \boldsymbol \epsilon_{it}, \label{eqn:model 1}\\ \text{Model 2: }& \bm{y}^\ast_{it} = \boldsymbol \alpha_i + \bm{B} \bm{x}_{it} + \bm{C} \bm{z}_i + \boldsymbol \epsilon_{it}, \label{eqn:model 2} \end{align} for $i=1,\dotsc,P=162$ GPs and $t=1,\dotsc,T=16$ patients. Here $\boldsymbol \alpha_i$ and $\bm{C} \bm{z}_i$ respectively represent GP-specific random and fixed effects with $\bm{z}_i$ being a vector of GP characteristics, and $\bm{B} \bm{x}_{it}$ represents fixed effects of the patient. 
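A minimal generator for data from Model 1 can be written down directly from the latent-variable form above, assuming the standard MVP link $y_{it} = \mathbb{1}(\bm{y}^\ast_{it} > 0)$ applied elementwise. This is our sketch; the paper's simulated datasets may differ in details such as the covariate design.

```python
import numpy as np

def simulate_mvp(B, Sigma_alpha, R_eps, X, rng):
    # simulate binary discussion indicators from Model 1:
    #   y*_it = alpha_i + B x_it + eps_it,   y_it = 1(y*_it > 0),
    # with alpha_i ~ N(0, Sigma_alpha) and eps_it ~ N(0, R_eps);
    # X has shape (P, T, k); the returned y has shape (P, T, D)
    P, T, k = X.shape
    D = B.shape[0]
    alpha = rng.multivariate_normal(np.zeros(D), Sigma_alpha, size=P)
    eps = rng.multivariate_normal(np.zeros(D), R_eps, size=(P, T))
    y_star = alpha[:, None, :] + X @ B.T + eps
    return (y_star > 0).astype(int)

rng = np.random.default_rng(0)
D, k, P, T = 3, 2, 4, 5   # toy dimensions, far smaller than the study's
y = simulate_mvp(np.ones((D, k)), np.eye(D), np.eye(D),
                 rng.normal(size=(P, T, k)), rng)
```

Model 2 only differs by adding the GP-level term $\bm{C}\bm{z}_i$ to the latent mean, so the same sketch applies with one extra summand.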
We select a horseshoe prior on $\boldsymbol \beta=\textrm{vec}(\bm{B})$ and model the covariance matrix $\boldsymbol \Sigma_{\boldsymbol \alpha}$ of the random effects by the $\mathcal{HIW}(2,\bm{A})$ prior in Section~\ref{sec:simulation} where $\bm{A}=(0.23, 0.23, 0.46, \dotsc, 0.46)^\top$. The scale is chosen to express the prior information that the variances of the random effects are expected to be small, with those for the pill products being less variable compared to the non-pill alternatives. The difference between these two models is the presence of the GP-specific fixed effects in Model 2, which explain some of the relationships in the random effects of Model 1. Let $\bm{X}=(X_1, \dotsc, X_D)^\top$ be a vector of normal random variables with covariance matrix given by $\boldsymbol \Sigma_{\bm{X}}$. Recall that $X_i$ and $X_j$ are conditionally independent given the other random variables if the $(i,j)$-th entry of the precision matrix $\boldsymbol \Sigma^{-1}_{\bm{X}}$ is zero. Figures~\ref{fig:correlation graph} and \ref{fig:random effect graph} give graphical summaries of the posterior distribution of the dependence structures of the latent variable $\bm{y}^\ast_{it}$ conditional on $\boldsymbol \alpha_i$ and $\bm{x}_{it}$ (as well as $\bm{z}_i$ for Model 2), and the random effects $\boldsymbol \alpha_i$ respectively. All graphs are obtained by computing the 95\% credible interval of the posterior distribution for each entry of $\bm{R}_{\boldsymbol \epsilon}^{-1}$ and $\boldsymbol \Sigma^{-1}_{\boldsymbol \alpha}$, where an edge is formed between two nodes if the credible interval does not include 0. The absence of an edge between any two nodes indicates a potential conditional independence between the two variables given the rest. The dependence structures associated with the latent variables are the same for both models. 
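The edge-selection rule can be stated compactly: keep edge $(i,j)$ exactly when the central 95\% credible interval of the posterior draws of the $(i,j)$-th precision entry excludes 0. The following sketch is ours, with a toy check in place of actual MCMC output.

```python
import numpy as np

def edges_from_precision_draws(draws, level=0.95):
    # draws: (N, D, D) array of posterior precision-matrix samples;
    # edge (i, j) is kept iff the central credible interval of the
    # (i, j)-th entry excludes 0
    lo = np.quantile(draws, (1 - level) / 2, axis=0)
    hi = np.quantile(draws, 1 - (1 - level) / 2, axis=0)
    D = draws.shape[1]
    return [(i, j) for i in range(D) for j in range(i + 1, D)
            if lo[i, j] > 0 or hi[i, j] < 0]

# toy check: entry (0, 1) is bounded away from 0, entry (0, 2) straddles 0
draws = np.zeros((200, 3, 3))
draws[:, 0, 1] = draws[:, 1, 0] = np.linspace(0.5, 1.5, 200)
draws[:, 0, 2] = draws[:, 2, 0] = np.linspace(-1.0, 1.0, 200)
edges = edges_from_precision_draws(draws)   # only the (0, 1) edge survives
```

Applied to the posterior draws of $\bm{R}_{\boldsymbol \epsilon}^{-1}$ and $\boldsymbol \Sigma^{-1}_{\boldsymbol \alpha}$, this rule produces exactly the graphs in Figures~\ref{fig:correlation graph} and \ref{fig:random effect graph}.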
This supports the use of the MVP model in order to capture the complex dependencies between different products that would otherwise be ignored in separate univariate analyses on each product. \begin{figure}[ht!] \centering \begin{tikzpicture}[>=stealth',shorten >=1pt,auto,thick,main node/.style={circle,draw}] \draw (67.5:2cm) node[main node, scale=0.9] (1) {$y^\ast_1$}; \draw (22.5:2cm) node[main node, scale=0.9] (2) {$y^\ast_2$}; \draw (337.5:2cm) node[main node, scale=0.9] (3) {$y^\ast_3$}; \draw (292.5:2cm) node[main node, scale=0.9] (4) {$y^\ast_4$}; \draw (247.5:2cm) node[main node, scale=0.9] (5) {$y^\ast_5$}; \draw (202.5:2cm) node[main node, scale=0.9] (6) {$y^\ast_6$}; \draw (157.5:2cm) node[main node, scale=0.9] (7) {$y^\ast_7$}; \draw (112.5:2cm) node[main node, scale=0.9] (8) {$y^\ast_8$}; \path[every node/.style={font=\sffamily\small}] (1) edge[red, line width=0.08cm] node [left] {} (5) edge[blue, line width=0.16cm] node [left] {} (6) (2) edge[blue, line width=0.055cm] node [left] {} (3) edge[red, line width=0.091cm] node [left] {} (6) (3) edge[blue, line width=0.055cm] node [left] {} (2) edge[blue, line width=0.2cm] node [left] {} (4) (4) edge[blue, line width=0.2cm] node [left] {} (3) edge[blue, line width=0.082cm] node [left] {} (5) (5) edge[red, line width=0.08cm] node [left] {} (1) edge[blue, line width=0.082cm] node [left] {} (4) edge[blue, line width=0.176cm] node [left] {} (7) edge[red, line width=0.026cm] node [left] {} (8) (6) edge[blue, line width=0.16cm] node [left] {} (1) edge[red, line width=0.091cm] node [left] {} (2) edge[blue, line width=0.062cm] node [left] {} (8) (7) edge[blue, line width=0.176cm] node [left] {} (5) edge[blue, line width=0.07cm] node [left] {} (8) (8) edge[red, line width=0.026cm] node [left] {} (5) edge[blue, line width=0.062cm] node [left] {} (6) edge[blue, line width=0.07cm] node [left] {} (7); \end{tikzpicture} \caption{Graphical model illustrating substantial dependence structure of the latent variables 
$\bm{y}^\ast$ conditional on the random effects and the covariates in both Model 1 and 2. Edges between $y^\ast_i$ and $y^\ast_j$ are included if the 95\% credible interval of the marginal posterior distribution of the $(i,j)$-th entry of $\bm{R}_{\boldsymbol \epsilon}^{-1}$ does not contain 0. Blue edges represent positive dependence while red edges represent negative dependence. The thickness of the edges is proportional to the strength of the dependence.} \label{fig:correlation graph} \vspace{10pt} \begin{subfigure}[b]{0.45\textwidth} \begin{tikzpicture}[>=stealth',shorten >=1pt,auto,thick,main node/.style={circle,draw}] \draw (67.5:2cm) node[main node, scale=0.9] (1) {$\alpha_1$}; \draw (22.5:2cm) node[main node, scale=0.9] (2) {$\alpha_2$}; \draw (337.5:2cm) node[main node, scale=0.9] (3) {$\alpha_3$}; \draw (292.5:2cm) node[main node, scale=0.9] (4) {$\alpha_4$}; \draw (247.5:2cm) node[main node, scale=0.9] (5) {$\alpha_5$}; \draw (202.5:2cm) node[main node, scale=0.9] (6) {$\alpha_6$}; \draw (157.5:2cm) node[main node, scale=0.9] (7) {$\alpha_7$}; \draw (112.5:2cm) node[main node, scale=0.9] (8) {$\alpha_8$}; \path[every node/.style={font=\sffamily\small}] (1) edge[blue, line width=0.17cm] node [left] {} (2) edge[blue, line width=0.114cm] node [left] {} (3) (2) edge[blue, line width=0.17cm] node [left] {} (1) (3) edge[blue, line width=0.114cm] node [left] {} (1) edge[blue, line width=0.074cm] node [left] {} (7) (4) edge[blue, line width=0.141cm] node [left] {} (5) edge[red, line width=0.063cm] node [left] {} (7) (5) edge[blue, line width=0.141cm] node [left] {} (4) (6) edge[blue, line width=0.098cm] node [left] {} (8) (7) edge[blue, line width=0.074cm] node [left] {} (3) edge[red, line width=0.063cm] node [left] {} (4) edge[blue, line width=0.068cm] node [left] {} (8) (8) edge[blue, line width=0.098cm] node [left] {} (6) edge[blue, line width=0.068cm] node [left] {} (7); \end{tikzpicture} \centering \caption{Model 1.} \label{fig:model 1 graph} 
\end{subfigure} \centering \begin{subfigure}[b]{0.45\textwidth} \begin{tikzpicture}[>=stealth',shorten >=1pt,auto,thick,main node/.style={circle,draw}] \draw (67.5:2cm) node[main node, scale=0.9] (1) {$\alpha_1$}; \draw (22.5:2cm) node[main node, scale=0.9] (2) {$\alpha_2$}; \draw (337.5:2cm) node[main node, scale=0.9] (3) {$\alpha_3$}; \draw (292.5:2cm) node[main node, scale=0.9] (4) {$\alpha_4$}; \draw (247.5:2cm) node[main node, scale=0.9] (5) {$\alpha_5$}; \draw (202.5:2cm) node[main node, scale=0.9] (6) {$\alpha_6$}; \draw (157.5:2cm) node[main node, scale=0.9] (7) {$\alpha_7$}; \draw (112.5:2cm) node[main node, scale=0.9] (8) {$\alpha_8$}; \path[every node/.style={font=\sffamily\small}] (1) edge[blue, line width=0.165cm] node [left] {} (2) (2) edge[blue, line width=0.165cm] node [left] {} (1) (4) edge[blue, line width=0.134cm] node [left] {} (5) edge[red, line width=0.042cm] node [left] {} (7) (5) edge[blue, line width=0.134cm] node [left] {} (4) (6) edge[blue, line width=0.119cm] node [left] {} (8) (7) edge[red, line width=0.042cm] node [left] {} (4) edge[blue, line width=0.086cm] node [left] {} (8) (8) edge[blue, line width=0.119cm] node [left] {} (6) edge[blue, line width=0.086cm] node [left] {} (7); \end{tikzpicture} \centering \caption{Model 2.} \label{fig:model 2 graph} \end{subfigure} \caption{Graphical models illustrating substantial dependence structure of the GP-specific random effects $\boldsymbol \alpha$ in each model. Edges between $\alpha_i$ and $\alpha_j$ are included if the 95\% credible interval of the marginal posterior distribution of the $(i,j)$-th entry of $\boldsymbol \Sigma_{\boldsymbol \alpha}^{-1}$ does not contain 0. Blue edges represent positive dependence while red edges represent negative dependence. 
The thickness of the edges is proportional to the strength of the dependence.} \label{fig:random effect graph} \end{figure} Figure~\ref{fig:correlation graph} is also instrumental in explaining the suitability of the contraceptive products for a patient in terms of substitute goods, which in consumer theory are defined as products with similar functions that can be used in place of each other. For conciseness, we only focus on some important relationships illustrated in the graphical model. The propensities to discuss the pill products $(y^\ast_1, y^\ast_2)$ are conditionally independent of each other given the hormonal IUD and the vaginal ring $(y^\ast_5, y^\ast_6)$ by the Markov property since all paths from $y^\ast_1$ to $y^\ast_2$ pass through $(y^\ast_5, y^\ast_6)$, reflecting the use of these non-pill contraceptives as pill alternatives dictated by particular clinical conditions. The clique formed between $(y^\ast_5, y^\ast_7, y^\ast_8)$ suggests dependence in the propensity to discuss the hormonal IUD, the copper IUD and the condoms. In fact, the posterior correlation between the propensity scores for both the IUD methods $(y^\ast_5, y^\ast_7)$ is around 0.52 on average (see Appendix~\ref{app:correlation}), suggesting a high tendency of these products to be discussed together. This also reflects the fact that these IUD methods are substitutes. Noticeably, the propensities to discuss the hormonal injection and the hormonal implant $(y^\ast_3, y^\ast_4)$ exhibit the highest level of association as indicated by our model, with a mean posterior correlation of 0.59. This points to the likelihood of these two prominent LARC products being included together in discussions, and it is consistent with them being close substitutes for each other for many patients. Figure~\ref{fig:random effect graph} can be interpreted in the same way as Figure~\ref{fig:correlation graph}, regarding the substitutability of different products but in the context of ready-to-wear choices.
This is because the random effects in (\ref{eqn:model 1}) characterise the persistence of GPs in discussing a particular product after observing the patient's attributes. There are clear differences in the graphical structure when comparing Figures~\ref{fig:model 1 graph} and \ref{fig:model 2 graph}. The changes in the dependence structure of the GP random effects arise because some of the persistence in product choices can be explained by GP characteristics. For example, the tendency of GPs to include both the hormonal injection and the copper IUD $(\alpha_3, \alpha_7)$ as ready-to-wear choices is due to their age (see significance of GP characteristics in Appendix~\ref{app:estimation result patient}). The posterior structure also provides some confidence that the random effects specification is useful in capturing important GP characteristics that are not directly observed. Three clusters of products with substantial dependence in ready-to-wear choices are identified from the model after accounting for the observed GP characteristics. Particularly relevant is the dependence between the hormonal IUD and the implant $(\alpha_4, \alpha_5)$. There is positive correlation between these two LARCs, indicating the tendency for GP attitudes (either positive or negative) to be aligned. A second cluster includes both of the pills $(\alpha_1, \alpha_2)$ which is consistent with these products being used as a ready-to-wear default. GPs who are more likely to discuss the combined pill after conditioning on the patient's attributes behave similarly when considering the mini-pill. Contraceptives that are not pill- or hormone-based form the final bundle. Our models allow us to examine posterior predictions for a range of patients. Since we are interested in the uptake of LARC products, we specify a particular female patient where there is no clinical reason why a LARC should not be considered for discussion. 
Table~\ref{table:attributes} of Appendix~\ref{app:attributes} gives the attributes of this base-case patient. Figure~\ref{fig:marginal probability} summarises the estimate of the predictive probability of a GP discussing a particular product, where the range of predictions shown is generated for all GPs in the sample based on Model 2. For this particular base-case patient, there is considerable agreement amongst all GPs in the sample that the combined pill (product 1) is one of the most suitable products to be discussed, but they have much more variable views on the other products. Amongst the LARCs (products 3, 4, 5 and 7), the hormonal injection (product 3) and the implant (product 4) are the products which are the most likely to be discussed, with the variability across GPs perhaps simply reflecting a view that they are good substitutes for each other, which is in fact what we find in Figure~\ref{fig:correlation graph}. GPs could indeed have consistent views about the need to discuss LARCs, as they do with the combined pill, but they are divided on which of the LARC products to discuss. To explore this possibility, the final column in Figure~\ref{fig:marginal probability} shows the predicted probability of the GPs discussing at least one of these two products, that is $\mathbb{P}(y_3 + y_4 \geq 1)$. The results suggest that the GPs will discuss either product 3 or 4 (or both) with a probability similar to that of the combined pill. While this joint probability does indicate a median that is similar to that of discussing the combined pill, the variability across GPs remains much larger than that associated with the combined pill. This evidence is consistent with the hypothesised resistance amongst some GPs to even discuss LARCs, let alone recommend them. \begin{figure}[t!]
\onehalfspacing \centering \includegraphics[trim= 2cm 1.5cm 2cm 3.5cm,clip,width=0.6\textwidth]{marginal_probability} \caption{Predicted probability of a GP discussing each product for a base-case patient for each of the 162 Australian GPs.} \label{fig:marginal probability} \end{figure} \subsection{Comparing sampling schemes} In order to investigate the performance of the antithetic sampler, Figure~\ref{fig:posterior application} illustrates marginal posterior distributions of those Model 2 parameters whose densities demonstrate the greatest visual differences between independent and antithetic sampling of the random effects $\boldsymbol \alpha_{1:P}$ and regression parameters $\boldsymbol \beta$. The marginal posterior distributions of $\alpha_{7,110}$ and $\beta_{236}$ are effectively the same under both updating approaches. This occurs because the mean of the conditional posterior distribution, which is a key ingredient in the deterministic antithetic sampler proposal, changes between iterations; a change largely driven by the stochastic update of the latent variable $\bm{y}^\ast$. This outcome suggests that the posterior distribution of the other parameters remains adequately explored by the antithetic sampler. \begin{figure}[t!] \centering \includegraphics[trim= 0.5cm 2.25cm 0.5cm 8cm,clip,width=0.8\textwidth]{posterior_application} \caption{Marginal posterior density estimates of those Model 2 parameters with the greatest visual differences between using independent sampling (IS) and antithetic sampling (AS) for $\boldsymbol \alpha_{1:P}$ and $\boldsymbol \beta$.} \label{fig:posterior application} \end{figure} Table~\ref{table:IACT model 2} compares the performance between independent and antithetic sampling schemes when estimating Model 2. The antithetic variable method generates samples marginally faster than independent sampling because it is deterministic. 
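The gain from the deterministic proposal can be seen on a toy example (a sketch, not the sampler used in the paper): in a two-block Gibbs sampler for a bivariate Gaussian, replacing the exact conditional draw of one block by a reflection about its conditional mean is an involution that leaves the conditional density invariant, so the Metropolis-Hastings acceptance probability is one, while the stochastic update of the other block keeps the chain ergodic:

```python
import numpy as np

rng = np.random.default_rng(0)
rho, n = 0.7, 20000              # toy target: bivariate N(0, [[1, rho], [rho, 1]])
s = np.sqrt(1 - rho ** 2)

def gibbs(antithetic):
    x1 = x2 = 0.0
    draws = np.empty(n)
    for t in range(n):
        x2 = rho * x1 + s * rng.standard_normal()  # stochastic draw of x2 | x1
        if antithetic:
            # deterministic reflection about the conditional mean of x1 | x2;
            # the conditional N(rho*x2, 1-rho^2) is symmetric about rho*x2,
            # so the move is always accepted
            x1 = 2 * rho * x2 - x1
        else:
            x1 = rho * x2 + s * rng.standard_normal()  # ordinary conditional draw
        draws[t] = x1
    return draws

def lag1_autocorr(x):
    x = x - x.mean()
    return float(x[:-1] @ x[1:] / (x @ x))

a_is = lag1_autocorr(gibbs(antithetic=False))
a_as = lag1_autocorr(gibbs(antithetic=True))
print(f"lag-1 autocorrelation: independent {a_is:.2f}, antithetic {a_as:.2f}")
```

In this toy chain the ordinary update has lag-1 autocorrelation near $\rho^2 \approx 0.49$, while the reflected update has autoregressive coefficient $2\rho^2 - 1 \approx -0.02$, the same mechanism behind the IACT reductions reported in Table~\ref{table:IACT model 2}.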
Based on the results shown, we observe average performance gains of 4.86 and 3.31 times in the mixing of $\boldsymbol \alpha_{1:P}$ and $\boldsymbol \beta$, respectively. As a result, the mean IACT of $\bm{y}^\ast$ is also improved. \begin{table}[t!] \centering \doublespacing \begin{tabular}{|c|r|r|r|r|r|} \hline \multirow{2}{*}{Parameter} & \multicolumn{2}{c|}{Mean IACT} & \multicolumn{3}{c|}{IACT Ratio} \\ \cline{2-6} & \multicolumn{1}{c|}{IS} & \multicolumn{1}{c|}{AS} & \multicolumn{1}{c|}{Min} & \multicolumn{1}{c|}{Max} & \multicolumn{1}{c|}{Mean} \\ \hline $\bm{y}^\ast$ & 3.6387 & 2.6686 & 0.8242 & 3.1419 & 1.2127 \\ $\boldsymbol \alpha_{1:P}$ & 16.8872 & 4.6456 & 1.4857 & 13.3424 & 4.8632 \\ $\boldsymbol \beta$ & 15.0446 & 4.0105 & 1.4566 & 16.0173 & 3.3111 \\ $\textrm{vechL}(\bm{\mathcal{L}}_{\boldsymbol \epsilon})$ & 14.8292 & 14.5422 & 0.9338 & 1.1737 & 1.0191 \\ $\textrm{vechL}(\bm{R}_{\boldsymbol \epsilon})$ & 12.7311 & 12.5170 & 0.9147 & 1.1509 & 1.0180 \\ $\textrm{diag}(\boldsymbol \Sigma_{\boldsymbol \alpha})$ & 24.8056 & 14.6929 & 1.3130 & 2.0651 & 1.7222 \\ $\textrm{vechL}(\bm{R}_{\boldsymbol \alpha})$ & 9.5025 & 5.1716 & 1.4599 & 2.3336 & 1.8424 \\ \hline Time per iteration & 0.0243 & 0.0239 & \multicolumn{1}{c|}{-} & \multicolumn{1}{c|}{-} & \multicolumn{1}{c|}{-} \\ \hline \end{tabular} \caption{Comparison of the performance between independent sampling (IS) and antithetic sampling (AS) for the contraceptive product preference data in terms of the speed (seconds per iteration), the mean IACT and the IACT ratio for each block of parameters.} \label{table:IACT model 2} \end{table} \section{Conclusion} \label{sec:conclusion} Many methods exist for fitting a multinomial logit model with random effects, such as simulated maximum likelihood \citep{gong2004mobility}, quadrature \citep{hartzel2001multinomial,hedeker2003mixed}, multinomial-Poisson transformation \citep{lee2017poisson}, and moment-based estimation
\citep{perry2017fast}, among others. Computational strategies for the MVP model, on the other hand, are less well studied. In this article, we introduce an HMC sampling approach to generate the posterior samples of $\bm{R}_{\boldsymbol \epsilon}$. This method requires reparameterising $\bm{R}_{\boldsymbol \epsilon}$ into an unconstrained Cholesky factor in order to circumvent the restrictive properties of a correlation matrix having diagonal entries of 1 and being positive definite. Furthermore, we propose a novel antithetic variable technique to accelerate the mixing of the random effects and the regression parameters, where significant gains in efficiency are observed in our application. Although our antithetic sampling deterministically specifies the proposal distribution within the Metropolis-Hastings update, the ergodicity of the Markov chain is unaffected when it is embedded within a larger system of stochastic updates. Our application considers the discussion of female contraceptive products by Australian GPs based on outcomes from the second stage of the stated preference data from \cite{fiebig2017consideration}. An examination of the correlation matrix underlying the choices reveals a complex dependence structure between the products, hence indicating the suitability of our formulation for modelling these choices in a multivariate setting. Our empirical study also suggests evidence of medical practice variation among the GPs, especially with regard to the inclusion of LARCs in the discussion with patients. The combined pill was the most popular contraceptive choice among the patients, and it represented a likely ready-to-wear default for many GPs. Without GPs even discussing LARCs, their uptake was likely to remain relatively constrained in such a context.
\section*{Acknowledgements} David Gunawan, Denzil Fiebig and Robert Kohn were partially supported by the Australian Research Council Discovery Project scheme DP150104630 and Scott Sisson was partially supported by the Discovery Project grant DP160102544. Vincent Chin, David Gunawan, Robert Kohn and Scott Sisson were also partially supported by the Australian Centre of Excellence for Mathematical \& Statistical Frontiers (ACEMS) grant CE140100049. \bibliographystyle{chicago} \interlinepenalty=10000 \section{Parameter expansion for the MVP model} \label{app:parameter} We first describe the parameter expansion (PX) strategy in \cite{talhouk2012efficient} for the MVP model based on the marginally uniform prior on $\bm{R}_{\boldsymbol \epsilon}$ of \cite{barnard2000modeling}. We then illustrate the extension of this method to the panel data setting where the resulting sampler fails to converge to the correct posterior distribution. Consider the MVP model as in Section~\ref{sec:multivariate}, but without the random effects, \begin{gather} \bm{y}^*_{i} = \bm{B} \bm{x}_{i} + \boldsymbol \epsilon_{i}, \\ \boldsymbol \epsilon_{i} = (\epsilon_{1,i}, \dotsc, \epsilon_{D,i})^\top \stackrel{iid}{\sim} \mathcal{N}({\bf 0}, \bm{R}_{\boldsymbol \epsilon}), \end{gather} for $i=1,\dotsc,n$. The posterior density of interest given the set of observed discrete outcomes $\bm{y}=(\bm{y}_1,\dotsc,\bm{y}_n)$ and prior distribution $p(\bm{B}, \bm{R}_{\boldsymbol \epsilon})$ is \begin{equation} \pi(\bm{y}^*, \bm{B}, \bm{R}_{\boldsymbol \epsilon}|\bm{y}) \propto p(\bm{y}|\bm{y}^*, \bm{B}, \bm{R}_{\boldsymbol \epsilon})p(\bm{y}^*|\bm{B}, \bm{R}_{\boldsymbol \epsilon})p(\bm{B}, \bm{R}_{\boldsymbol \epsilon}). 
\label{eqn:MVP posterior} \end{equation} By taking advantage of the non-identifiability of the regression parameter $\bm{B}$ when $\bm{R}_{\boldsymbol \epsilon}$ is not set to be a correlation matrix, \cite{talhouk2012efficient} introduce an expansion parameter $\bm{\mathcal{D}}=\textrm{diag}(\delta_1, \dotsc, \delta_D)$ and define a new set of latent variables $\bm{z}^*_i=\bm{\mathcal{D}}\bm{y}^*_i, i=1, \dotsc, n$ such that the posterior density under this transformation is \begin{equation} \pi(\bm{z}^*, \bm{B}, \bm{R}_{\boldsymbol \epsilon}, \bm{\mathcal{D}}|\bm{y}) \propto p(\bm{y}|\bm{z}^*, \bm{B}, \bm{R}_{\boldsymbol \epsilon}, \bm{\mathcal{D}})p(\bm{z}^*|\bm{B}, \bm{R}_{\boldsymbol \epsilon}, \bm{\mathcal{D}})p(\bm{B}, \bm{R}_{\boldsymbol \epsilon},\bm{\mathcal{D}}), \label{eqn:MVP PX posterior} \end{equation} where $\bm{z}^*=(\bm{z}^*_1, \dotsc, \bm{z}^*_n)$. The marginal posterior distribution of $(\bm{B}, \bm{R}_{\boldsymbol \epsilon})$ in (\ref{eqn:MVP PX posterior}) is the same as (\ref{eqn:MVP posterior}) provided that the marginal of the prior $p(\bm{B}, \bm{R}_{\boldsymbol \epsilon},\bm{\mathcal{D}})$ is $p(\bm{B}, \bm{R}_{\boldsymbol \epsilon})$. Using the prior distribution on $\bm{R}_{\boldsymbol \epsilon}$ in (\ref{eqn:marginally uniform prior}) with $\nu=D+1$ and choosing \begin{equation} \delta_i^2 \stackrel{iid}{\sim} \mathcal{IG}((D+1)/2, \bm{R}_{\boldsymbol \epsilon}^{-1}(i,i)/2), \label{eqn:inverse gamma} \end{equation} where $\bm{R}_{\boldsymbol \epsilon}^{-1}(i,i)$ is the $i$-th diagonal entry of $\bm{R}_{\boldsymbol \epsilon}^{-1}$, the covariance matrix $\boldsymbol \Sigma_{\boldsymbol \epsilon}=\bm{\mathcal{D}} \bm{R}_{\boldsymbol \epsilon} \bm{\mathcal{D}}$ is shown to have an $\mathcal{IW}(D+1, \bm{I})$ distribution.
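This correspondence can be checked numerically (a sketch using only \texttt{numpy}; the Wishart draw uses the sum-of-outer-products construction, valid for integer degrees of freedom): drawing $\boldsymbol \Sigma_{\boldsymbol \epsilon} \sim \mathcal{IW}(D+1, \bm{I})$ and rescaling it to a correlation matrix should make each off-diagonal entry of $\bm{R}_{\boldsymbol \epsilon}$ marginally uniform on $(-1,1)$, as in the prior of \cite{barnard2000modeling}:

```python
import numpy as np

rng = np.random.default_rng(0)
D, nu, N = 3, 4, 4000                    # nu = D + 1
r12 = np.empty(N)
for t in range(N):
    X = rng.standard_normal((nu, D))
    Sigma = np.linalg.inv(X.T @ X)       # Sigma ~ IW(D + 1, I)
    d = np.sqrt(np.diag(Sigma))
    R = Sigma / np.outer(d, d)           # implied correlation matrix
    r12[t] = R[0, 1]

# mean near 0 and about half the mass in (-0.5, 0.5), as for Uniform(-1, 1)
print(round(float(r12.mean()), 3), round(float((np.abs(r12) < 0.5).mean()), 3))
```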
With this reparameterisation of the constrained $\bm{R}_{\boldsymbol \epsilon}$ into an unconstrained $\boldsymbol \Sigma_{\boldsymbol \epsilon}$, posterior sampling from the conditional posterior distribution $\pi(\bm{B}, \bm{R}_{\boldsymbol \epsilon}, \bm{\mathcal{D}}|\bm{z}^*, \bm{y})$ can be circumvented by a change of variables technique. That is, by letting $\boldsymbol \Sigma_{\boldsymbol \epsilon}=\bm{\mathcal{D}} \bm{R}_{\boldsymbol \epsilon} \bm{\mathcal{D}}$ and $\bm{\gamma}=\bm{\mathcal{D}} \bm{B}$, iterates from the posterior distribution in (\ref{eqn:MVP posterior}) can be obtained by the following steps: \begin{enumerate} \item Sample $\bm{y}^*$ from a truncated multivariate normal distribution (Step 1 in Appendix~\ref{app:sampling scheme}). \item Sample $\bm{\mathcal{D}}$ following (\ref{eqn:inverse gamma}) and set $\bm{z}_i^* = \bm{\mathcal{D}} \bm{y}_i^*, i=1,\dotsc,n$. \item Sample $(\bm{\gamma}, \boldsymbol \Sigma_{\boldsymbol \epsilon}) \sim \pi(\bm{\gamma}, \boldsymbol \Sigma_{\boldsymbol \epsilon}|\bm{z}^*, \bm{y})$. \item Set $\bm{B}=\bm{\mathcal{D}}^* \bm{\gamma}, \bm{R}_{\boldsymbol \epsilon}=\bm{\mathcal{D}}^* \boldsymbol \Sigma_{\boldsymbol \epsilon} \bm{\mathcal{D}}^*$ and $\bm{y}_i^*=\bm{\mathcal{D}}^* \bm{z}_i^*, i=1, \dotsc, n$ where $\bm{\mathcal{D}}^* = \textrm{diag}(\boldsymbol \Sigma_{\boldsymbol \epsilon})^{-1/2}$. 
\end{enumerate} The posterior density of the MVP model with random effects in (\ref{eqn:MVP})--(\ref{eqn:latent}), on the other hand, is \begin{equation} \begin{aligned} \pi(\bm{y}^*, \bm{B}, \bm{R}_{\boldsymbol \epsilon}, \boldsymbol \alpha_{1:P}, \boldsymbol \Sigma_{\boldsymbol \alpha}|\bm{y}) \propto & p(\bm{y}|\bm{y}^*, \bm{B}, \bm{R}_{\boldsymbol \epsilon}, \boldsymbol \alpha_{1:P}, \boldsymbol \Sigma_{\boldsymbol \alpha}) \times \\ & p(\bm{y}^*|\bm{B}, \bm{R}_{\boldsymbol \epsilon}, \boldsymbol \alpha_{1:P}, \boldsymbol \Sigma_{\boldsymbol \alpha})p(\boldsymbol \alpha_{1:P}|\boldsymbol \Sigma_{\boldsymbol \alpha})p(\bm{B}, \bm{R}_{\boldsymbol \epsilon}, \boldsymbol \Sigma_{\boldsymbol \alpha}). \end{aligned} \label{eqn:MVP with RE posterior} \end{equation} Following \cite{talhouk2012efficient}, we define new sets of latent variables $\bm{z}^*_{it} = \bm{\mathcal{D}} \bm{y}^*_{it}$ and $\bm{w}_i = \bm{\mathcal{D}} \boldsymbol \alpha_i$ for $i=1,\dotsc,P$ and $t=1,\dotsc,T$, which gives the new posterior density as \begin{equation} \begin{aligned} \pi(\bm{z}^*, \bm{B}, \bm{R}_{\boldsymbol \epsilon}, \bm{w}_{1:P}, \boldsymbol \Sigma_{\boldsymbol \alpha}, \bm{\mathcal{D}}|\bm{y}) \propto & p(\bm{y}|\bm{z}^*, \bm{B}, \bm{R}_{\boldsymbol \epsilon}, \bm{w}_{1:P}, \boldsymbol \Sigma_{\boldsymbol \alpha}, \bm{\mathcal{D}}) p(\bm{z}^*|\bm{B}, \bm{R}_{\boldsymbol \epsilon}, \bm{w}_{1:P}, \boldsymbol \Sigma_{\boldsymbol \alpha}, \bm{\mathcal{D}}) \times \\ &p(\bm{w}_{1:P}|\boldsymbol \Sigma_{\boldsymbol \alpha}, \bm{\mathcal{D}})p(\bm{B}, \bm{R}_{\boldsymbol \epsilon}, \boldsymbol \Sigma_{\boldsymbol \alpha}, \bm{\mathcal{D}}), \end{aligned} \label{eqn:MVP with RE PX posterior} \end{equation} where $\bm{z}^*=(\bm{z}^*_{it};i=1, \dotsc,P, t=1, \dotsc, T)$ and $\bm{w}_{1:P}=(\bm{w}_1, \dotsc, \bm{w}_P)$.
Assuming a conjugate prior on $\boldsymbol \Sigma_{\boldsymbol \alpha}$, we then generate the samples from the posterior distribution in (\ref{eqn:MVP with RE PX posterior}) by a change of variables ($\boldsymbol \Sigma_{\boldsymbol \epsilon}=\bm{\mathcal{D}} \bm{R}_{\boldsymbol \epsilon} \bm{\mathcal{D}}$ and $\bm{\gamma}=\bm{\mathcal{D}} \bm{B}$) according to \begin{enumerate} \item Sample $\bm{y}^*$ from a truncated multivariate normal distribution (Step 1 in Appendix~\ref{app:sampling scheme}). \item Sample $\boldsymbol \alpha_{1:P}$ from a multivariate normal distribution (Step 4 in Appendix~\ref{app:sampling scheme}). \item Sample $\boldsymbol \Sigma_{\boldsymbol \alpha}$ from an inverse-Wishart distribution (Step 6 in Appendix~\ref{app:sampling scheme}). \item Sample $\bm{\mathcal{D}}$ following (\ref{eqn:inverse gamma}) and set $\bm{z}_{it}^* = \bm{\mathcal{D}} \bm{y}_{it}^*$ and $\bm{w}_i = \bm{\mathcal{D}} \boldsymbol \alpha_i$ for $i=1,\dotsc,P$ and $t=1,\dotsc,T$. \item Sample $(\bm{\gamma}, \boldsymbol \Sigma_{\boldsymbol \epsilon}) \sim \pi(\bm{\gamma}, \boldsymbol \Sigma_{\boldsymbol \epsilon}|\bm{z}^*, \bm{y})$. \item Set $\bm{B}=\bm{\mathcal{D}}^* \bm{\gamma}, \bm{R}_{\boldsymbol \epsilon}=\bm{\mathcal{D}}^* \boldsymbol \Sigma_{\boldsymbol \epsilon} \bm{\mathcal{D}}^*$ and $\bm{y}_{it}^*=\bm{\mathcal{D}}^* \bm{z}_{it}^*, i=1, \dotsc, P, t=1,\dotsc,T$ where $\bm{\mathcal{D}}^* = \textrm{diag}(\boldsymbol \Sigma_{\boldsymbol \epsilon})^{-1/2}$. \end{enumerate} We choose the conjugate matrix normal prior on $\bm{B}$, the marginally uniform prior in (\ref{eqn:marginally uniform prior}) on $\bm{R}_{\boldsymbol \epsilon}$ and the hierarchical inverse-Wishart prior on $\boldsymbol \Sigma_{\boldsymbol \alpha}$. Kernel density plots comparing the posterior distribution of the variance parameter in $\boldsymbol \Sigma_{\boldsymbol \alpha}$ between the HMC algorithm and the parameter expansion technique are given in Figure~\ref{fig:hmc vs px}. 
Using the posterior distribution from the HMC algorithm as a benchmark, we notice that the parameter expansion method tends to overestimate the variance of the random effects and the posterior densities are flatter. The observed differences are due to the PX technique, not the choice of kernel density bandwidth. \begin{figure}[h!] \onehalfspacing \centering \includegraphics[trim= 0.5cm 1cm 1.5cm 1.75cm,clip,width=0.75\textwidth]{hmc_vs_px_hiw} \caption{Kernel density plots comparing the posterior density of the variance parameter $\sigma^2_{\alpha_i}$ in $\boldsymbol \Sigma_{\boldsymbol \alpha}$ between the HMC algorithm (blue solid) and the parameter expansion technique (red dashed).} \label{fig:hmc vs px} \end{figure}
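For concreteness, the change of variables in the final step of the algorithm above can be sketched as follows (toy dimensions; the expanded-scale draws are purely illustrative stand-ins for the conjugate updates). Because $\bm{\mathcal{D}}^*$ has a positive diagonal, the rescaling returns $\bm{R}_{\boldsymbol \epsilon}$ to the correlation scale without changing the signs of the latent utilities, so the implied discrete outcomes are untouched:

```python
import numpy as np

rng = np.random.default_rng(1)
D, k = 3, 2                                  # toy sizes: D outcomes, k covariates

# illustrative expanded-scale draws (stand-ins for the conjugate update step)
A = rng.standard_normal((D, D))
Sigma = A @ A.T + D * np.eye(D)              # an SPD covariance draw
gamma = rng.standard_normal((D, k))
z = rng.standard_normal(D)                   # one expanded latent vector z*

# undo the expansion with D* = diag(Sigma)^(-1/2)
Dstar = np.diag(1.0 / np.sqrt(np.diag(Sigma)))
R = Dstar @ Sigma @ Dstar                    # back on the correlation scale
B = Dstar @ gamma
y_star = Dstar @ z

print(np.round(np.diag(R), 12))              # unit diagonal: a valid correlation matrix
```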
\begin{center} {\bf \S 1 Introduction} \end{center} The notion of the {\em Holonomy} of an affine connection was introduced originally by \'Elie Cartan in the 1920s who used it as an important tool in his attempt to classify all locally symmetric manifolds. Over time, the holonomy group proved to be one of the most informative and useful characteristics of an affine connection and found many applications in both mathematics and physics. By definition, the holonomy of an affine connection on a connected manifold $M$ is the subgroup of all linear automorphisms of $T_pM$ which are induced by parallel translation along $p$-based loops. In 1955, Berger \cite{Berger} showed that the list of irreducibly acting matrix Lie groups which can, in principle, occur as the holonomy of a torsion-free affine connection is very restricted. Berger presented his classification of all possible candidates for irreducible holonomies in two parts. The first part contains all possible groups preserving a non-degenerate symmetric bilinear form, the second part consists of those groups which do {\em not} preserve such a form; the latter part was stated to be complete up to a finite number of missing terms and was given without a proof. Bryant \cite{Bryant2} was the first to discover the incompleteness of the second part of Berger's list, and referred to the missing entries as {\em exotic holonomies}. Since then, several other families of exotic holonomies have been found \cite{Bryant3,CS,CMS}. In this paper we present one more family of exotic holonomies associated with various real forms of the complex 56-dimensional representation of $\mbox{E}_7^{{\Bbb C}}$.
\paragraph{Main Theorem.}{\em \begin{itemize} \item[(i)] All representations in the following table {\em \begin{center} \begin{tabular}{|c|c|c|c|} \hline \begin{tabular}{c} \vspace{-3mm} \\ Group: \\ \vspace{-3 mm} \end{tabular} & $\mbox{E}_7^{(5)}$ & $\mbox{E}_7^{(7)}$ & $\mbox{E}_7^{{\Bbb C}}$ \\ \hline \begin{tabular}{c}\vspace{-3mm}\\ Representation space: \\ \vspace{-3mm} \end{tabular} & ${\Bbb R}^{56}$ & ${\Bbb R}^{56}$ & ${\Bbb R}^{112}$ \\ \hline \end{tabular} \end{center} } occur as holonomies of torsion-free affine connections. \item[(ii)] Any torsion-free affine connection with one of these holonomies is analytic. \item[(iii)] The moduli space of torsion-free affine connections with one of these holonomies is finite dimensional. \item[(iv)] Any such connection has a (local) symmetry group of positive dimension. \end{itemize} } This theorem is proved by combining twistor techniques of \cite{Me1} used to compute all the necessary $\mbox{E}_7$-modules $K({\frak e}_7)$, $K^1({\frak e}_7)$ and ${\cal P}^{(1)}({\frak e}_7)$ with the construction of torsion-free affine connections with prescribed holonomy via deformations of a certain linear Poisson structure \cite{CMS}. \begin{center} {\bf \S 2 \, Borel-Weil approach to $\mbox{E}_7^{(a)}$ } \end{center} Let $V$ be a vector space and ${\frak g}$ an irreducible Lie subalgebra of ${\frak gl}(V)\simeq V\otimes V^*$. 
In the holonomy group context, one is interested in the following three ${\frak g}$-modules: \begin{itemize} \item[(i)] ${\frak g}^{(1)}:= ({\frak g}\otimes V^*)\cap (V\otimes \odot^2 V^*)$, \item[(ii)] the {\em curvature space} $K({\frak g}):=\ker i_1$, where $i_1$ is the composition $$ i_1: {\frak g}\otimes \Lambda^2 V^* \mbox{$\longrightarrow$} V\otimes V^*\otimes \Lambda^2 V^* \mbox{$\longrightarrow$} V\otimes \Lambda^3 V^* , $$ \item[(iii)] the {\em 2nd curvature space} $K^1({\frak g}):=\ker i_2$, where $i_2$ is the composition $$ i_2: K({\frak g})\otimes V^* \mbox{$\longrightarrow$} {\frak g}\otimes \Lambda^2 V^* \otimes V^* \mbox{$\longrightarrow$} V \otimes\Lambda^3 V^*. $$ \end{itemize} Note that if $\partial$ is the composition $$ \partial: {\frak g}^{(1)}\otimes V^* \rightarrow {\frak g}\otimes V^*\otimes V^* \rightarrow {\frak g}\otimes \Lambda^2 V^* $$ then $\partial({\frak g}^{(1)}\otimes V^*)\subset K({\frak g})$. The geometric meaning of ${\frak g}^{(1)}$ is that if there exists a (local) torsion-free affine connection $\nabla$ on a manifold $M$ with holonomy algebra ${\frak g}$ then, for any (local) function $\Gamma: M\rightarrow {\frak g}^{(1)}$, the affine connection $\nabla + \Gamma$ is again torsion-free and has holonomy algebra ${\frak g}$; thus, in some sense, ${\frak g}^{(1)}$ measures the non-uniqueness of torsion-free affine connections with holonomy ${\frak g}$ on a fixed manifold. The significance of $K({\frak g})$ and $K^1({\frak g})$ is that the curvature tensor (the covariant derivative of the curvature tensor respectively) of a torsion-free affine connection $\nabla$ with holonomy ${\frak g}$ at a point $p \in M$ is represented by an element of $K({\frak g})$ ($K^1({\frak g})$ respectively). Therefore, ${\frak g}$ can be a candidate for the holonomy algebra of a torsion-free affine connection only if $K({\frak g})\neq 0$. The question then remains how to compute $K({\frak g})$.
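Before turning to the Borel-Weil machinery, note that for a small subalgebra the kernel defining $K({\frak g})$ can be computed by bare linear algebra. A sketch (a toy illustration with ${\frak g}={\frak so}(3)\subset{\frak gl}(3,{\Bbb R})$, not one of the exotic candidates of this paper) writes the matrix of $i_1$ in coordinates and recovers $\dim K({\frak so}(3))=6$, the dimension of the space of algebraic Riemannian curvature tensors in dimension 3:

```python
import numpy as np
from itertools import combinations

n = 3
e = np.eye(n)

# basis of so(3): the elementary skew-symmetric matrices E_ij - E_ji
g_basis = []
for i, j in combinations(range(n), 2):
    A = np.zeros((n, n)); A[i, j] = 1.0; A[j, i] = -1.0
    g_basis.append(A)

pairs = list(combinations(range(n), 2))      # indexing a basis of Lambda^2 V^*
triples = list(combinations(range(n), 3))    # indexing a basis of Lambda^3 V^*

# i_1 maps R in g (x) Lambda^2 V^* to its first Bianchi sum in V (x) Lambda^3 V^*
rows = []
for A in g_basis:
    for (i, j) in pairs:
        # basis element R = A (x) (e_i ^ e_j)^*, so R(e_a, e_b) is +-A or 0
        def R(a, b):
            if (a, b) == (i, j):
                return A
            if (b, a) == (i, j):
                return -A
            return np.zeros((n, n))
        img = np.concatenate([R(a, b) @ e[c] + R(b, c) @ e[a] + R(c, a) @ e[b]
                              for (a, b, c) in triples])
        rows.append(img)

M = np.array(rows)                           # matrix of i_1 in these bases
dim_K = len(rows) - np.linalg.matrix_rank(M)
print(dim_K)  # -> 6
```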
With any real irreducible representation of a real reductive Lie algebra one may associate an irreducible complex representation of a complex reductive Lie algebra. Since all the above ${\frak g}$-modules behave reasonably well under this association, we may assume from now on that $V$ is a finite dimensional complex vector space and ${\frak g}\subset {\frak gl}(V)$ is an irreducible representation of a complex reductive Lie algebra. Clearly, $G=\exp({\frak g})$ acts irreducibly in $V^*$ via the dual representation. Let $\tilde{X}$ be the $G$-orbit of a highest weight vector in $V^*\setminus 0$. Then the quotient $X:= \tilde{X}/{\Bbb C}^*$ is a compact complex homogeneous-rational manifold canonically embedded into ${\P}(V^*)$, and there is a commutative diagram $$ \begin{array}{ccc} \tilde{X} & \hookrightarrow & V^* \setminus 0 \\ \downarrow & & \downarrow \\ X & \hookrightarrow & \P(V^*) \end{array} $$ In fact, $X=G_s/ P $, where $G_s$ is the semisimple part of $G$ and $P$ is the parabolic subgroup of $G_s$ leaving a highest weight vector in $V^*$ invariant up to a scalar. Let $L$ be the restriction of the hyperplane section bundle ${\cal O}(1)$ on $\P(V^*)$ to the submanifold $X$. Clearly, $L$ is an ample homogeneous line bundle on $X$. We call $(X,L)$ the {\em Borel-Weil data}\, associated with $({\frak g},V)$. According to Borel-Weil, the representation space $V$ can be easily reconstructed from $(X,L)$ as $V = \H^0(X,L)$. What about ${\frak g}$? The Lie algebra of the Lie group of all global biholomorphisms of the line bundle $L$ which commute with the projection $L\rightarrow X$ is isomorphic to $H^0(X,L\otimes (J^1 L)^*)$ --- a central extension of the Lie algebra $\H^0(X,TX)$. 
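As a toy illustration of this dictionary (the standard ${\frak sl}(2,{\Bbb C})$ example, not one of the cases treated below): for the irreducible module $V=\odot^n{\Bbb C}^2$, the projectivized orbit of highest weight vectors in $V^*$ is the rational normal curve, and the Borel-Weil data are

```latex
X = \P^1 \hookrightarrow \P(V^*), \qquad
L = {\cal O}(1)\big|_X \simeq {\cal O}_{\P^1}(n), \qquad
V = \H^0(\P^1,{\cal O}_{\P^1}(n)) \simeq \odot^n {\Bbb C}^2 ,
```

while $\H^0(X,TX)=\H^0(\P^1,{\cal O}(2))\simeq{\frak sl}(2,{\Bbb C})$ recovers the algebra, up to the one-dimensional central extension $\H^0(X,L\otimes(J^1L)^*)\simeq{\frak gl}(2,{\Bbb C})$.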
Whence, as a complex Lie algebra, $\H^0(X,L\otimes (J^1 L)^*)$ has a natural complex irreducible representation in $\H^0(X,L)=V$; with very few (and well studied in the holonomy context) exceptions \cite{Ah}, this representation is, up to a central extension, isomorphic to the original ${\frak g}$. Remarkably enough, the basic ${\frak g}$-modules defined above fit nicely into the Borel-Weil pa\-ra\-digm as well. \begin{prop}{\em\cite{Me1}}\label{Spencer} For a compact complex homogeneous-rational manifold $X$ and an ample line bundle $L\rightarrow X$, there is an isomorphism $$ {\frak g}^{(1)} = \mbox{\em H}^0\left(X, L\otimes \odot^{2} N^*\right), $$ and an exact sequence of ${\frak g}$-modules, $$ 0\mbox{$\longrightarrow$} \frac{K({\frak g})}{\partial({\frak g}^{(1)}\otimes V^*)} \mbox{$\longrightarrow$} \mbox{\em H}^1\left(X,L\otimes \odot^{3} N^*\right) \mbox{$\longrightarrow$} \mbox{\em H}^1\left(X,L\otimes \odot^{2}N^*\right) \otimes V^* , $$ where ${\frak g}$ is $\mbox{\em H}^0(X,L\otimes N^*)$ represented in $V=\mbox{\em H}^0(X,L)$. \end{prop} {\em Proof}. The result follows easily from the exact sequences $$ 0\mbox{$\longrightarrow$} L\otimes \odot^2 N^* \mbox{$\longrightarrow$} L\otimes N^* \otimes V^* \mbox{$\longrightarrow$} L\otimes \Lambda^2V^* $$ and $$ 0\mbox{$\longrightarrow$} L\otimes \odot^3 N^* \mbox{$\longrightarrow$} L\otimes \odot^2 N^* \otimes V^* \mbox{$\longrightarrow$} L\otimes N^*\otimes \Lambda^2V^* \mbox{$\longrightarrow$} L\otimes \Lambda^3V^*, $$ where arrows are a combination of a natural monomorphism $N^* \mbox{$\longrightarrow$} V^*\otimes {\cal O}_X$ (which holds due to ampleness of $L$) with the antisymmetrization. $\Box$ It is well known that the complex exceptional Lie algebra ${\frak e}_7^{{\Bbb C}}$ has four real forms ${\frak e}_7^{(4)}$, ${\frak e}_7^{(5)}$, ${\frak e}_7^{(6)}$ and ${\frak e}_7^{(7)}$ with signatures 0, 54, 64 and 70 respectively (see, e.g., \cite{HS,On}). 
Two of these, ${\frak e}_7^{(5)}$ and ${\frak e}_7^{(7)}$, can be irreducibly represented in ${\Bbb R}^{56}$. Let $\rho$ denote the irreducible real representation ${\frak e}_7^{(a)}\rightarrow{\frak gl}(V)$, where $V$ is ${\Bbb R}^{56}$ for $a=5,7$ and $V={\Bbb C}^{56}\simeq {\Bbb R}^{112}$ for $(a)={\Bbb C}$. Let $\mbox{Ad}: {\frak e}_7^{(a)}\rightarrow {\frak gl}({\frak e}_7^{(a)})$ denote the adjoint representation. \begin{theorem}\label{T1} $$ K(\rho({\frak e}_7^{(a)})) \simeq {\em\mbox{Ad}} ({\frak e}_7^{(a)}), \ \ \ \ \ K^1(\rho({\frak e}_7^{(a)})) \simeq V^*. $$ \end{theorem} {\em Proof}. We shall prove this statement for the complex representation only. That it is true for real representations as well will follow from the invariance of all the constructions under the associated real structures in ${\frak e}_7^{{\Bbb C}}$. Let $(X,L)$ be the Borel-Weil data associated to $\rho:{\frak e}_7^{{\Bbb C}} \rightarrow {\frak gl}(V)$, $V\simeq{\Bbb C}^{56}$. Then $X=\mbox{E}_7^{{\Bbb C}}/P$ is a 27-dimensional compact complex homogeneous-rational manifold whose tangent bundle has, as an irreducible homogeneous vector bundle, the Dynkin diagram representation \cite{BE} $$ TX=\ \ \TXX \ \ \ , \vspace{7 mm} $$ while $L\rightarrow X$ is given by $$ \ \ \, L=\ \ \LL \ \ \ . \vspace{7 mm} $$ Here and below the weights of irreducible homogeneous vector bundles are given in the basis of fundamental weights. Using Kostant's formula and Table 5 in the reference chapter of \cite{On} to find irreducible decompositions of tensor powers of the simplest 27-dimensional irreducible representation of $E_{6}^{{\Bbb C}}$ (which, in our case, is isomorphic to the semisimple part of the parabolic $P$), one obtains $$ \odot^2 TX\otimes L^* = \ \ \TXO \ \ \ + \ \ \ \TXP \vspace{7 mm} $$ and $$ \odot^3 TX\otimes L^{*2} = \ \ \, \TXL \ \ \, + \ \ \, \TXM \ \ \, + \ \, \ \TXN.
\vspace{7 mm} $$ Then, using the long exact sequences of the extensions $$ 0\mbox{$\longrightarrow$} N^* \mbox{$\longrightarrow$} L\otimes \odot^2 N^* \mbox{$\longrightarrow$} \odot^2 TX \otimes L^* \mbox{$\longrightarrow$} 0, $$ $$ 0\mbox{$\longrightarrow$} \odot^2 N^* \mbox{$\longrightarrow$} L\otimes \odot^3 N^* \mbox{$\longrightarrow$} \odot^3 TX \otimes L^{*2} \mbox{$\longrightarrow$} 0, $$ the Bott-Borel-Weil theorem and Proposition~\ref{Spencer}, one easily finds $$ \H^0(X,L\otimes \odot^2 N^*) = \H^1(X,L\otimes \odot^2 N^*) = 0 $$ and $$ K(\rho({\frak e}_7^{{\Bbb C}} + {\Bbb C})) \simeq \H^1(X,L\otimes \odot^3 N^*)= \ \ \TT \ \ \simeq \mbox{Ad}({\frak e}_{7}^{{\Bbb C}}). \vspace{7 mm} $$ Let us find next the explicit form of $K(\rho({\frak e}_7^{{\Bbb C}} + {\Bbb C}))$ as a subset of all elements in $\rho({\frak e}_7^{{\Bbb C}} + {\Bbb C})\otimes \Lambda^2 V^*$ satisfying the first Bianchi identities. Recall \cite{Adams} that $\rho: {\frak e}_7^{{\Bbb C}} \rightarrow {\frak gl}(V)$ enjoys a non-zero invariant skew-symmetric product \begin{eqnarray*} \Lambda^2 V & \mbox{$\longrightarrow$} & {\Bbb C} \\ u\otimes v & \mbox{$\longrightarrow$} & <\,u,v\,>, \end{eqnarray*} and a non-zero invariant symmetric map \begin{eqnarray*} \odot^2 V & \mbox{$\longrightarrow$} & {\frak e}_7^{{\Bbb C}} \\ u\otimes v & \mbox{$\longrightarrow$} & u\circ v, \end{eqnarray*} which are unique up to a non-zero scalar factor and satisfy \begin{eqnarray*} <\rho(A)u, v> &=& \lambda B(A,u\circ v)\\ B(u\circ v, s\circ t) - B(u\circ t, s\circ v)&=& \mu \left(2<\!u,s\!><\!v,t\!> - <\!u,t\!><\!v,s\!> - <\!u,v\!><\!s,t\!>\right) \end{eqnarray*} for all $A\in {\frak e}_7^{{\Bbb C}}$ and $u,v,s,t\in V$. Here $\lambda$ and $\mu$ are fixed non-zero constants and $B(\ , \ )$ is the Killing form.
Then it is not hard to check that, for any fixed $A\in {\frak e}_7^{{\Bbb C}}$, the following map \begin{eqnarray} \Lambda^2 V &\mbox{$\longrightarrow$} & \rho({\frak e}_7^{{\Bbb C}} + {\Bbb C}) \nonumber\\ u\otimes v & \mbox{$\longrightarrow$} & 2\lambda \mu <\!u,v\!> \rho(A) + \rho(u\circ \rho(A)v) - \rho(v\circ \rho(A)u) \label{K} \end{eqnarray} defines an element of $\rho({\frak e}_7^{{\Bbb C}} + {\Bbb C})\otimes \Lambda^2 V^*$ which lies in the kernel of the composition $$ \rho({\frak e}_7^{{\Bbb C}} + {\Bbb C})\otimes \Lambda^2 V^*\mbox{$\longrightarrow$} V\otimes V^* \otimes \Lambda^2 V^* \mbox{$\longrightarrow$} V\otimes \Lambda^3 V^*. $$ Thus, the above formula gives an explicit realization of the isomorphism $K(\rho({\frak e}_7^{{\Bbb C}} + {\Bbb C}))=\mbox{Ad}({\frak e}_7^{{\Bbb C}})$. In particular, it shows that $K(\rho({\frak e}_7^{{\Bbb C}} + {\Bbb C}))=K(\rho({\frak e}_7^{{\Bbb C}}))$. Having obtained an explicit structure of $K(\rho({\frak e}_7^{{\Bbb C}}))$, it is straightforward to show that a generic element of $K^1(\rho({\frak e}_7^{{\Bbb C}})) \subset \rho({\frak e}_7^{{\Bbb C}})\otimes V^*\otimes \Lambda^2 V^*$ is of the form \begin{eqnarray*} V\otimes \Lambda^2 V &\mbox{$\longrightarrow$} & \rho({\frak e}_7^{{\Bbb C}}) \nonumber\\ s\otimes u\otimes v & \mbox{$\longrightarrow$} & 2\lambda\mu <\!u,v\!> \rho(s\circ w) + \rho(u\circ \rho(s\circ w)v) - \rho(v\circ \rho(s\circ w)u) \end{eqnarray*} for some fixed $w\in V\simeq V^*$. This establishes the isomorphism $K^1(\rho({\frak e}_7^{{\Bbb C}}))=V^*$. $\Box$ \begin{center} {\bf \S 3 A construction of torsion-free connections} \end{center} We briefly describe here the construction of torsion-free connections with prescribed holonomy which was presented in \cite{CMS}. Let ${\frak g} \subset {\frak gl}(V)$ be a Lie subalgebra, where $V$ is a finite-dimensional vector space.
A $G$-equivariant $C^\infty$-map $\phi: {\frak g}^* \rightarrow \Lambda^2 V^*$ is called {\em admissible} if for every $p \in {\frak g}^*$, the map $d\phi_p^*: \Lambda^2 V \mbox{$\longrightarrow$} \mbox{T}_p^*{\frak g}^* \simeq {\frak g}$ lies in $K({\frak g})$. For a given admissible map $\phi$, one may define the following Poisson structure on the dual $W^*$ of the semi-direct Lie algebra $W = {\frak g} \oplus V$: \begin{equation} \label{Poisson} \{f,g\}_{p+\nu} = p ([A,B]) + \nu(A \cdot y - B \cdot x) + \phi(p)(x,y), \end{equation} where $df = A + x$ and $dg = B + y$ are the decompositions of $df, dg \in T^*W^* \simeq {\frak g} \oplus V$, and $p \in {\frak g}^*, \nu \in V^*$. This Poisson structure may be regarded as a deformation of the natural linear Poisson structure on $W^*$. Let $\pi: S \rightarrow U$ be a {\em symplectic realization} of an open subset $U \subset W^*$, i.e. $\pi$ is a submersion from a symplectic manifold $S$ with symplectic 2-form $\Omega$ such that $$ \{\pi^*(f), \pi^*(g)\}_S = \pi^*(\{f,g\}) \ \ \ \mbox{for all $f,g \in C^\infty(U,{\Bbb R})$,} $$ where $\{\ ,\ \}_S$ is the Poisson structure on $S$ induced by the symplectic structure. At those points where the rank of the Poisson structure is maximal, such a symplectic realization exists at least locally. Regarding each element $w \in W \simeq T^* W^*$ as a 1-form on $W^*$, we define the distribution $$ {\cal D}= \{ \xi_w := \#(\pi^*(w)) \mid w\in W\} \subset TS $$ on $S$, where $\#$ is the index-raising map induced by $\Omega$. Since $\Omega$ is non-degenerate, $\mbox{rank\ } {\cal D} = \dim W$. Moreover, for the bracket relations one calculates \begin{equation}\label{brackets} \begin{array}{llll} [\xi_A, \xi_B] = \xi_{[A,B]}, & [\xi_A, \xi_x] = \xi_{A \cdot x}, & \mbox{and} & [\xi_x, \xi_y](s) = \xi_{d\phi_p^* (x,y)}, \end{array} \end{equation} where $A,B \in {\frak g}$, $x,y \in V$ and $p = \pi(s)$. Let $F\subset S$ be an integral leaf of ${\cal D}$.
By the very definition of ${\cal D}$, $F$ comes equipped with a $W$-valued coframe $\theta + \omega$, where $\theta$ and $\omega$ take values in $V$ and ${\frak g}$ respectively, defined by the equation $$ (\omega + \theta) (\xi_w) = w. $$ Note that by the first equation in (\ref{brackets}), the vector fields $\xi_A$, $A \in {\frak g}$, induce a free local group action of $G$ on $F$, where $G \subset Gl(V)$ is the connected Lie subgroup corresponding to ${\frak g} \subset {\frak gl}(V)$. After shrinking $F$ as necessary, we may assume that $M := F/G$ is a manifold. Standard arguments then imply that there is a unique embedding $\imath: F \hookrightarrow {\frak F}_V$, where ${\frak F}_V$ denotes the $V$-valued coframe bundle of $M$, and a torsion-free connection on $M$ such that $\imath^*(\underline\theta + \underline\omega) = \theta + \omega$, where $\underline\theta$ and $\underline\omega$ denote the tautological and the connection 1-form on ${\frak F}_V$, respectively. Clearly, the holonomy of this connection is contained in $G$; in fact, by the {\em Ambrose-Singer-Holonomy Theorem}, the holonomy algebra is generated by $\{ d\phi^*_p(x,y)\ |\ x,y \in V, p \in \pi(F)\}$. A connection which comes from this construction is called a {\em Poisson connection}. This leads to the following \begin{theorem}{\em \cite{CMS}}\label{T2} Let ${\frak g} \subset {\frak gl}(V)$ be a Lie sub-algebra where $V$ is a finite-dimensional vector space, and let $$ K_0({\frak g}) = \left\{ R \in K({\frak g}) \mid \mbox{\em span}\{R(x,y), \ \mbox{\em all}\ x,y\in V\}= {\frak g}\right\}. $$ If $\phi: {\frak g}^* \rightarrow \Lambda^2 V^*$ is admissible, and if the open set $U_0 \subset {\frak g}^*$ given by \[ U_0 := (d\phi^*)^{-1} (K_0({\frak g})) \] is non-empty, then there exist Poisson connections induced by $\phi$ whose holonomy representations are equivalent to ${\frak g}$. Moreover, if $\phi|_{U_0}$ is not affine, then not all of these connections are locally symmetric.
\end{theorem} It is not clear at present how general the class of Poisson connections is, nor how many irreducible Lie subalgebras ${\frak g}\subset {\frak gl}(V)$ admit admissible maps $\phi: {\frak g}^*\rightarrow \Lambda^2V^*$ which are not affine. However, there is a class of Lie subalgebras for which the above construction exhausts {\em all possible torsion-free connections} with this holonomy. Namely, we define the ${\frak g}$-module $$ {\cal P}^{(1)}({\frak g}) = (\odot^2 {\frak g}\otimes \Lambda^2 V^*) \cap ({\frak g}\otimes K({\frak g})) \subset V\otimes K^1({\frak g}) $$ and regard elements $\phi_2 \in {\cal P}^{(1)}({\frak g})$ as polynomial maps ${\frak g}^* \rightarrow \Lambda^2 V^*$ of degree $2$. It is then obvious that each $G$-invariant $\phi_2 \in {\cal P}^{(1)}({\frak g})$ is admissible, and we have the following result. \begin{theorem}{\em \cite{CMS}} \label{T3} Let ${\frak g} \subset {\frak gl}(V)$ be an irreducibly acting subalgebra, and suppose that there is an invariant element $\phi_2 \in {\cal P}^{(1)}({\frak g})$ such that the associated $G$-equivariant linear maps \begin{equation} \label{isomorphisms} \begin{array}{rrcl} \phi_2': & {\frak g}^* & \longrightarrow & K({\frak g})\\ \phi_2'': & V^* & \longrightarrow & K^1({\frak g}) \end{array} \end{equation} are isomorphisms. Then every torsion-free affine connection whose holonomy algebra is contained in ${\frak g}$ is a Poisson connection induced by an admissible map $$ \phi = \phi_2 + \tau, $$ where $\tau \in \Lambda^2 V^*$ is a (possibly vanishing) ${\frak g}$-invariant 2-form. In particular, the moduli space of such connections is finite dimensional, and each such connection is analytic. Also, the dimension of the symmetry group of this connection equals $\dim W^* - 2k$ where $k$ is the half-rank of the Poisson structure on $W^*$ induced by $\phi$ in (\ref{Poisson}).
\end{theorem} At first sight, the premise that the maps (\ref{isomorphisms}) be isomorphisms looks like an unreasonably strong condition in order to utilize this Theorem. Nevertheless, this premise {\em does} hold for the exotic holonomies $\mbox{SO}(p,q)\mbox{SL}(2,{\Bbb R})$ and $\mbox{SO}(n,{\Bbb C})\mbox{SL}(2,{\Bbb C})$ which were discovered in \cite{CMS}. We will show in \S 4 that it also holds for the representations $E_7^{(a)}$ from the main theorem. For the proof, we shall need the following version of Schur's Lemma: \begin{lemma} \label{L4} Let ${\frak g}$ be a reductive Lie algebra, and suppose that ${\frak g}$ acts irreducibly on the finite dimensional vector spaces $V$ and $W$. If $\rho: V \rightarrow W$ is a linear map satisfying \begin{eqnarray} \label{Schur} A\ \rho\ B = B\ \rho\ A & \mbox{for all $A, B \in {\frak g}$,} \end{eqnarray} then $\rho = 0$. \end{lemma} \medskip \noindent {\bf Proof of Theorem \ref{T3}} Let $F \subset {\frak F}_V$ be a $G$-structure on the manifold $M$ where ${\frak F}_V \rightarrow M$ is the $V$-valued coframe bundle of $M$, and denote the tautological $V$-valued 1-form on $F$ by $\theta$. Suppose that $F$ is equipped with a torsion-free connection, i.e. a ${\frak g}$-valued 1-form $\omega$ on $F$. Since $\phi_2'$ is an isomorphism, the {\em first and second structure equations} read \begin{equation} \label{eq:struct} \begin{array}{ll} d\theta & = - \omega \wedge \theta\\ d\omega & = - \omega \wedge \omega - 2 (\phi_2'({\bf a})) \circ (\theta \wedge \theta),\end{array} \end{equation} where ${\bf a}: F \rightarrow {\frak g}^*$ is a $G$-equivariant map.
Differentiating (\ref{eq:struct}) and using that $\phi_2''$ is an isomorphism yields the {\em third structure equation} for the differential of ${\bf a}$: \begin{equation} \label{eq:struct3} d{\bf a} = -\omega \cdot {\bf a} + \jmath({\bf b} \otimes \theta), \end{equation} for some $G$-equivariant map ${\bf b}: F \rightarrow V^*$, where $\jmath: V^* \otimes V \rightarrow {\frak g}^*$ is the natural projection. The multiplication in the first term refers to the coadjoint action of ${\frak g}$ on ${\frak g}^*$. Let us define the map ${\bf c}: F \rightarrow V^* \otimes V^*$ by \begin{equation} \label{eq:struct4} {\bf c}_p (x,y) := d{\bf b}(\xi_x) (y) - \phi_2({\bf a}_p, {\bf a}_p, x, y). \end{equation} Differentiation of (\ref{eq:struct3}) yields \begin{equation} \label{eq:cinvariance} \begin{array}{ll} {\bf c}_p(x, Ay) = {\bf c}_p(y, Ax) & \mbox{for all $x,y \in V$ and all $A \in {\frak g}$.} \end{array} \end{equation} If we let $\rho: V \rightarrow V^*, x \mapsto {\bf c}_p(x,\underline{\ \ }) + {\bf c}_p(\underline{\ \ }, x)$, then (\ref{eq:cinvariance}) and Lemma \ref{L4} imply that $\rho = 0$, i.e. ${\bf c}_p$ must be skew-symmetric and $G$-invariant. This and differentiation of (\ref{eq:struct4}) imply that \[ d{\bf c} = 0, \] i.e. ${\bf c}_p \equiv \tau \in \Lambda^2 V^*$ is {\em constant}. Thus, the $G$-equivariance of ${\bf b}$ and (\ref{eq:struct4}) yield \begin{equation} \label{eq:newstruct4} d{\bf b} = -\omega \cdot {\bf b} + \left( {\bf a}_p^2 \contr \phi_2 + \tau \right) \circ \theta, \end{equation} where $\contr$ refers to the contraction of ${\bf a}_p^2 \in \odot^2({\frak g}^*)$ with $\phi_2 \in \odot^2({\frak g}) \otimes \Lambda^2 V^*$. Let us now define the Poisson structure on $W^* = {\frak g}^* \oplus V^*$ induced by $\phi := \phi_2 + \tau$, and let $\pi := {\bf a} + {\bf b}: F \rightarrow W^*$.
From (\ref{eq:struct3}) and (\ref{eq:newstruct4}), one can now show that, at least locally, the connection is indeed a Poisson connection induced by $\phi$. Let ${\frak s} \subset {\frak X}(F)$ be the Lie algebra of infinitesimal symmetries. Let $f: W^* \supset U \rightarrow {\Bbb F}$ be a local function which is constant on the symplectic leaves. Then it is easy to see that $\# \pi^*(df)$ is an infinitesimal symmetry. It follows that $\dim {\frak s} \geq \dim W^* - 2k$. On the other hand, if $X \in {\frak s}$ then $\pi_*(X) = 0$, hence $\dim {\frak s} \leq \dim W^* - 2k$. The statements about analyticity and the moduli space are now immediate. {\hfill \rule{.5em}{1em}\mbox{}\bigskip} \medskip \noindent {\bf Proof of Lemma \ref{L4}} Throughout the proof, we make the simplifying assumption that $\mbox{rank\ } {\frak g} >1$, as the case $\mbox{rank\ } {\frak g} = 1$ is straightforward. Let $P \subset V^* \otimes W$ be the subspace of all maps $\rho: V \rightarrow W$ satisfying (\ref{Schur}). It is easy to verify that $P$ is ${\frak g}$-invariant. We complexify ${\frak g}$, $V$ and $W$ and pick Cartan and weight space decompositions \[ \begin{array}{llll} {\frak g} = {\frak t} \oplus \bigoplus_\alpha {\frak g}_\alpha, & V = \bigoplus_{\mu} V_\mu & \mbox{and} & W = \bigoplus_{\mu} W_\mu. \end{array} \] Let $\rho \in P$, and let $x_\mu \in V_\mu$ with $\mu \neq 0$. Then choosing $A, B \in {\frak t}$, with $\mu(A) = 0,\ \mu(B) \neq 0$, (\ref{Schur}) implies that $A \rho x_\mu = 0$, and therefore, \begin{equation} \label{**} \rho x_\mu \in \sum_k W_{k \mu}, \end{equation} where the sum is taken over all weights of $W$ which are scalar multiples of $\mu$. Now let $\rho_\lambda \in P$ be an element of weight $\lambda \neq 0$. 
Then $\rho_\lambda x_\mu \in W_{\lambda + \mu}$, and thus from (\ref{**}) we conclude: \begin{equation} \label{indep} \mbox{$\rho_\lambda x_\mu = 0$ whenever $\lambda, \mu$ are linearly independent.} \end{equation} Let $x_{k \lambda} \in V_{k \lambda}$ be a weight vector with $k \neq 0$, and let $A_\alpha \in {\frak g}_\alpha$ where $\alpha$ is a root independent of $\lambda$. Then, using (\ref{indep}) twice, we get \[ \begin{array}{ll} 0 & = (A_\alpha \cdot \rho_\lambda) x_{k \lambda}\\ & = A_\alpha (\rho_\lambda x_{k \lambda}) - \rho_\lambda (A_\alpha x_{k \lambda})\\ & = A_\alpha (\rho_\lambda x_{k \lambda}) - 0. \end{array} \] Next, note that $V_0$ is spanned by elements of the form $A_\alpha x_{-\alpha}$ with $A_\alpha \in {\frak g}_\alpha$ and $x_{-\alpha} \in V_{-\alpha}$. If $\alpha, \lambda$ are independent, pick $A_0 \in {\frak t}$ with $\alpha(A_0) = 0$ and $\lambda(A_0) \neq 0$. Then (\ref{Schur}) implies that $\rho_\lambda (A_\alpha x_{-\alpha}) = 0$. Finally, if $\alpha, \lambda$ are dependent, then for $\beta \neq \pm \alpha$, we get from (\ref{Schur}) and (\ref{indep}) that $A_\beta (\rho_\lambda (A_\alpha x_{-\alpha})) = 0$. In either case, we get that for any $\mu$, $A_\alpha (\rho_\lambda x_\mu) = 0$ whenever $\lambda, \alpha$ are independent, and hence \[ {\frak g} \cdot (\rho_\lambda V) \subset \sum_k W_{k \lambda}. \] Since there must be weights in $W$ independent of $\lambda$, and since $W$ is irreducible, we conclude that $\rho_\lambda = 0$, contradicting $\lambda \neq 0$. Thus, $P$ has no weights $\neq 0$, i.e. $P$ is acted on trivially by ${\frak g}$, and from there it is easy to conclude that $P = 0$. $\Box$ \begin{center} {\bf \S 4 Proof of the main theorem} \end{center} Let ${\frak g} \subset {\frak gl}(V)$ be one of the representations in the Main Theorem. 
Evidently, the Main Theorem will follow from Theorems~\ref{T2} and \ref{T3} if we can find an element $\phi_2 \in {\cal P}^{(1)}({\frak g})$ such that $K_0({\frak g})$ is dense in $K({\frak g})$, and the corresponding maps in (\ref{isomorphisms}) are isomorphisms. In particular, (iv) of the main Theorem follows since, in each case, $\dim W^* = \dim V + \dim {\frak g} = 56 + 133$ is odd. The density of $K_0({\frak g})$ in $K({\frak g})$ follows immediately from (\ref{K}). To compute ${\cal P}^{(1)}({\frak g})$, first note that the ${\frak g}$-module $K({\frak g})\otimes {\frak g}\simeq {\frak g}\otimes {\frak g}$ has only one 1-dimensional ${\frak g}$-submodule. So, if there is an invariant element $\phi_2$ in ${\cal P}^{(1)}({\frak g})$, it is unique up to a non-zero scalar factor. Since $\phi_2': {\frak g}^* \mbox{$\longrightarrow$} K({\frak g})$ must be an isomorphism, the formula (\ref{K}) leaves no choice but the following element of ${\frak g}\otimes K({\frak g})$ as a candidate for $\phi_2$: $$ \phi_2(C,D, u,v) = 2\lambda\mu<\!u,v\!>B(C,D) + B(u\circ \rho(C)v, D) - B(v\circ \rho(C)u, D), $$ where $C,D\in {\frak g}$, $u,v\in V$ and where we identify ${\frak g}={\frak g}^*$ via the Killing form. Clearly, this element is ${\frak g}$-invariant. Since $B(u\circ \rho(C)v, D) = \lambda <\!\rho(D)u, \rho(C)v\!>$, we have \begin{eqnarray*} \phi_2(C,D, u,v) & = & 2\lambda\mu<\!u,v\!>B(C,D) + \lambda <\!\rho(D)u, \rho(C)v\!> - \lambda <\!\rho(D)v, \rho(C)u\!> \\ &=& 2\lambda\mu<\!u,v\!>B(C,D) - \lambda <\!\rho(C)v, \rho(D)u\!> - \lambda <\!\rho(D)v, \rho(C)u\!> \\ \end{eqnarray*} which makes it evident that $\phi_2\in (\odot^2{\frak g}\otimes \Lambda^2 V^*) \cap ({\frak g}\otimes K({\frak g}))$. That $\phi_2': {\frak g}^* \mbox{$\longrightarrow$} K({\frak g})$ is an isomorphism follows from the very definition of $\phi_2$. 
Since $\phi_2'': V^* \mbox{$\longrightarrow$} K^1({\frak g})$ is evidently non-zero, by Theorem~\ref{T1} and the ${\frak g}$-invariance of $\phi_2$, $\phi''_2$ must be an isomorphism as well. $\Box$ \vspace{5 mm} \noindent{\small {\em Acknowledgement}. One of the authors (SM) would like to thank Andrew Swann for valuable discussions and for drawing his attention to the very informative lecture notes \cite{Adams}.} \pagebreak {\small
\section{Introduction} \label{Scdrum1} Two years before the publication of Kac's famous paper \cite{Kac} `Can one hear the shape of a drum' Milnor \cite{Milnor} gave a counter example showing that one cannot hear the shape of a compact Riemannian manifold. Milnor presented two 16-dimensional Riemannian manifolds for which the associated Laplace--Beltrami operators have the same spectrum, i.e.\ are isospectral. The latter is equivalent to the existence of a unitary operator $U$ which intertwines the heat semigroups on the compact manifolds. The heat semigroups are positive, which means that they map positive functions (i.e.\ positive heat) to positive functions on the $L_2$-spaces of the compact manifolds. In this paper we replace the unitary operator by an order isomorphism, i.e.\ a linear bijective mapping $U$ such that $U \varphi \geq 0$ if and only if $\varphi \geq 0$. Then we show that the manifolds are indeed isomorphic. This may be interpreted in the following way. The heat semigroups are positive, which means that positive functions (heat densities) are mapped to positive functions. The orbit corresponding to a positive initial value describes the propagation of the heat density, i.e.\ the diffusion. Thus to say that an order isomorphism intertwines between two heat semigroups means that the positive orbits are mapped to positive orbits. So our result may be rephrased by saying that diffusion determines the compact manifold. For open connected subsets of ${\bf R}^d$ satisfying a weak smoothness condition, Arendt \cite{Are3} proved that diffusion determines the body (see also \cite{Are4}). In a recent paper \cite{ABE} this was extended to connected Riemannian manifolds satisfying the same smoothness condition. Every compact connected Riemannian manifold satisfies this smoothness condition. The aim of this paper is to give a direct and short proof that diffusion determines the body for compact Riemannian manifolds.
The compact Riemannian manifolds do not have to be connected. \smallskip Let $(M,g)$ be a compact Riemannian manifold of dimension $d$. Then $M$ has a natural Radon measure with respect to which we define the $L_p$-spaces on $M$. Set \[ H^1(M) = \{ \varphi \in L_2(M) : \varphi \circ x^{-1} \in H^1(x(V)) \mbox{ for every chart } (V,x) \} \;\;\; . \] If $\varphi \in H^1(M)$ and $(V,x)$ is a chart on $M$ then set $\frac{\partial}{\partial x^i} \varphi = (D_i (\varphi \circ x^{-1})) \circ x \in L_2(V)$, where $D_i$ denotes the partial derivative in ${\bf R}^d$. Moreover, for all $\varphi,\psi \in H^1(M)$ there exists a unique element $\nabla \varphi \cdot \nabla \psi \in L_1(M)$ such that \[ \nabla \varphi \cdot \nabla \psi \Big|_V = \sum_{i,j=1}^d g^{ij} \Big( \frac{\partial}{\partial x^i} \varphi \Big) \Big( \frac{\partial}{\partial x^j} \psi \Big) \] for every chart $(V,x)$ on $M$. Set $|\nabla \varphi| = (\nabla \varphi \cdot \nabla \varphi)^{1/2}$. We provide $H^1(M)$ with the norm $\varphi \mapsto ( \|\varphi\|_2^2 + \| \, |\nabla \varphi| \, \|_2^2 )^{1/2}$. Then $H^1(M)$ is a Hilbert space. Define the bilinear form $a \colon H^1(M) \times H^1(M) \to {\bf R}$ by $a(\psi,\varphi) = \int \nabla \psi \cdot \nabla \varphi$. Then $a$ is a closed and positive form in $L_2(M)$. The {\bf Dirichlet Laplace--Beltrami operator} $\Delta$ on $M$ is the associated self-adjoint operator. If $(V,x)$ is a chart on $M$ then \[ \Delta \, \varphi = - \sum_{i,j=1}^d \frac{1}{\sqrt{g}} \, \frac{\partial}{\partial x^i} \, g^{ij} \, \sqrt{g} \, \frac{\partial}{\partial x^j} \, \varphi \] for all $\varphi \in C_c^\infty(V)$. Let $S$ be the semigroup on $L_2(M)$ generated by $-\Delta$ and let $p \in [1,\infty)$. By the Beurling--Deny criteria the operator $S_t|_{L_2(M) \cap L_p(M)}$ extends to a positive contraction operator $S^{(p)}_t$ on $L_p(M)$ for all $t > 0$. Moreover, $S^{(p)}$ is a $C_0$-semigroup.
Since the semigroup $S$ has a smooth kernel satisfying Gaussian bounds (\cite{Sal5} Theorem~5.4.12), it follows that $S_t C(M) \subset C(M)$ and $S|_{C(M)}$ is a $C_0$-semigroup on $C(M)$. If $(M_1,g_1)$ and $(M_2,g_2)$ are two compact Riemannian manifolds then a map $\tau \colon M_1 \to M_2$ is called an {\bf isometry} if it is a $C^\infty$-diffeomorphism and \[ g_2|_{\tau(p)}(\tau_*(v), \tau_*(w)) = g_1|_p(v,w) \] for all $p \in M_1$ and $v,w \in T_p M_1$. The Riemannian manifolds $(M_1,g_1)$ and $(M_2,g_2)$ are called {\bf isomorphic} if there exists an isometry from $M_1$ onto $M_2$. If $\tau \colon M_1 \to M_2$ is an isometry and $p \in [1,\infty]$ then $\varphi \circ \tau \in L_p(M_1)$ and \begin{equation} \|\varphi \circ \tau\|_{L_p(M_1)} = \|\varphi\|_{L_p(M_2)} \label{eSdrum1;1} \end{equation} for all $\varphi \in L_p(M_2)$. A linear operator $U \colon E \to F$ between two Riesz spaces is said to be a {\bf lattice homomorphism} if \[ U (\varphi \wedge \psi) = (U \varphi) \wedge (U \psi) \] for all $\varphi,\psi \in E$. For alternative equivalent definitions see \cite{AB} Theorem~7.2. Each lattice homomorphism $U$ is positive, i.e.\ $\varphi \geq 0$ implies $U \varphi \geq 0$. An {\bf order isomorphism} $U \colon E \to F$ is a bijective mapping such that $U \varphi \geq 0$ if and only if $\varphi \geq 0$. Equivalently, $U$ is an order isomorphism if and only if $U$ is a bijective lattice homomorphism. Then also $U^{-1}$ is an order isomorphism. Recall also that each positive operator between $L_p$-spaces, or from $C(M_1)$ into $C(M_2)$ where $M_1$ and $M_2$ are compact Hausdorff spaces, is continuous by \cite{AB} Theorem~12.3. The main theorem of this paper is the following. \begin{thm} \label{tcdrum101} Let $(M_1,g_1)$ and $(M_2,g_2)$ be two compact Riemannian manifolds. Let $p \in [1,\infty)$. 
For all $j \in \{ 1,2 \} $ let $\Delta_j$ be the Laplace--Beltrami operator on $M_j$ and let $S^{(j)}$ and $T^{(j)}$ be the associated semigroups on $L_p(M_j)$ and $C(M_j)$. Then the following three conditions are equivalent. \begin{tabel} \item \label{tcdrum101-1} $(M_1,g_1)$ and $(M_2,g_2)$ are isomorphic. \item \label{tcdrum101-3} There exists an order isomorphism $U \colon L_p(M_1) \to L_p(M_2)$ such that \[ U S^{(1)}_t = S^{(2)}_t U \] for all $t > 0$. \item \label{tcdrum101-4} There exists an order isomorphism $U \colon C(M_1) \to C(M_2)$ such that \[ U T^{(1)}_t = T^{(2)}_t U \] for all $t > 0$. \end{tabel} Moreover, if the manifolds are connected and if $U$ is an order isomorphism as in Condition~{\rm \ref{tcdrum101-3}} or {\rm \ref{tcdrum101-4}} then there exist $c>0$ and a $($surjective$)$ isometry $\tau \colon M_2 \to M_1$ such that $U \varphi = c \, \varphi \circ \tau$ for all $\varphi \in L_p(M_1)$. \end{thm} The implications \ref{tcdrum101-1}$\Rightarrow$\ref{tcdrum101-3} and \ref{tcdrum101-1}$\Rightarrow$\ref{tcdrum101-4} are an easy consequence of (\ref{eSdrum1;1}). \section{Proof of Theorem~\ref{tcdrum101}} The first part in the proof of Theorem~\ref{tcdrum101} is the observation that $C^\infty$-functions are invariant under intertwining operators. \begin{lemma} \label{lcdrum201} Let $(M_1,g_1)$ and $(M_2,g_2)$ be two compact Riemannian manifolds. Let $p \in [1,\infty)$. For all $j \in \{ 1,2 \} $ let $\Delta_j$ be the Laplace--Beltrami operator on $M_j$ and let $S^{(j)}$ and $T^{(j)}$ be the associated semigroups on $L_p(M_j)$ and $C(M_j)$. Let either $U \colon L_p(M_1) \to L_p(M_2)$ be an order isomorphism such that \[ U S^{(1)}_t = S^{(2)}_t U \] for all $t > 0$, or $U \colon C(M_1) \to C(M_2)$ be an order isomorphism such that \begin{equation} U T^{(1)}_t = T^{(2)}_t U \label{elcdrum201;2} \end{equation} for all $t > 0$. Then \begin{subtabel} \item \label{lcdrum201-1} $U C^\infty(M_1) = C^\infty(M_2)$. 
\item \label{lcdrum201-2} $U \varphi \geq 0$ if and only if $\varphi \geq 0$, for all $\varphi \in C^\infty(M_1)$. \item \label{lcdrum201-3} $(U \varphi) (U \psi) = 0$ for all $\varphi,\psi \in C^\infty(M_1)$ with $\varphi \, \psi = 0$. \item \label{lcdrum201-4} $\Delta_2 U \varphi = U \Delta_1 \varphi$ for all $\varphi \in C^\infty(M_1)$. \end{subtabel} \end{lemma} \proof\ Suppose $U$ is an order isomorphism from $C(M_1)$ onto $C(M_2)$. Let $H_j$ be the generator of $T^{(j)}$ for all $j \in \{ 1,2 \} $. If $\varphi \in D(H_1)$ then it follows from (\ref{elcdrum201;2}) that \[ \tfrac{1}{t} (I - T_t^{(2)}) U \varphi = \tfrac{1}{t} U (I - T^{(1)}_t) \varphi \] for all $t > 0$. Since $U$ is continuous one deduces that $U \varphi \in D(H_2)$. So $U D(H_1) \subset D(H_2)$ and $H_2 U \varphi = U H_1 \varphi$ for all $\varphi \in D(H_1)$. Similarly $U^{-1} D(H_2) \subset D(H_1)$ and therefore $U D(H_1) = D(H_2)$. Hence by iteration $U \bigcap_{n=1}^\infty D(H_1^n) = \bigcap_{n=1}^\infty D(H_2^n)$. But $C^\infty(M_j) = \bigcap_{n=1}^\infty D(H_j^n)$ for all $j \in \{ 1,2 \} $ by elliptic regularity. Here we use that the manifolds are compact. This shows \ref{lcdrum201-1} and \ref{lcdrum201-4}. Property~\ref{lcdrum201-2} follows since $U$ is an order isomorphism. Moreover, $|U \varphi| = U |\varphi|$ for all $\varphi \in C(M_1)$. Hence if $\varphi,\psi \in C(M_1)$ and $\varphi \, \psi = 0$ then $|\varphi| \wedge |\psi| = 0$ and $|U \varphi| \wedge |U \psi| = U |\varphi| \wedge U |\psi| = U(|\varphi| \wedge |\psi|) = 0$. Therefore $|(U \varphi) (U \psi)| = |U \varphi| \, |U \psi| = 0$ and $(U \varphi) (U \psi) = 0$. This implies Property~\ref{lcdrum201-3}. The proof on the $L_p$-spaces is similar.\hfill$\Box$ \vskip10.0pt plus 4.0pt minus 6.0pt The next lemma is a $C^\infty$-version of the Riesz representation theorem. (Cf.\ \cite{EvG} Corollary~1.8.1.)
\begin{lemma} \label{lcdrum201.3} Let $M$ be a compact Riemannian manifold and $F \colon C^\infty(M) \to {\bf R}$ a positive linear functional such that \begin{equation} F(\varphi) \, F(\psi) = 0 \mbox{ for all } \varphi,\psi \in C^\infty(M) \mbox{ with } \varphi \, \psi = 0 . \label{elcdrum201.3;1} \end{equation} Then there exist $c \in [0,\infty)$ and $p \in M$ such that $F(\varphi) = c \, \varphi(p)$ for all $\varphi \in C^\infty(M)$. \end{lemma} \proof\ Let $\varphi \in C^\infty(M)$. Then $\|\varphi\|_\infty \, \mathds{1} - \varphi \geq 0$, so it follows from positivity that $F(\varphi) \leq F(\mathds{1}) \, \|\varphi\|_\infty$. Since $C^\infty(M)$ is dense in $C(M)$ one can extend $F$ to a continuous linear function from $C(M)$ into ${\bf R}$. This extension is again positive since positive functions in $C(M)$ can be approximated uniformly by positive functions in $C^\infty(M)$. By the Riesz representation theorem there exists a unique Radon measure $\mu$ on $M$ such that $F(\varphi) = \int \varphi \, d \mu$ for all $\varphi \in C^\infty(M)$. Then it follows from (\ref{elcdrum201.3;1}) that $\mu$ is a point measure. Hence there exist $p \in M$ and $c \in [0,\infty)$ such that $F(\varphi) = c \, \varphi(p)$ for all $\varphi \in C^\infty(M)$.\hfill$\Box$ \begin{prop} \label{pdrum202} Let $(M_1,g_1)$ and $(M_2,g_2)$ be two compact Riemannian manifolds. Suppose there exists a linear bijection $U \colon C^\infty(M_1) \to C^\infty(M_2)$ such that \begin{subtabel} \item \label{pdrum202-1} $U \varphi \geq 0$ if and only if $\varphi \geq 0$, for all $\varphi \in C^\infty(M_1)$. \item \label{pdrum202-2} $(U \varphi) (U \psi) = 0$ if and only if $\varphi \, \psi = 0$, for all $\varphi,\psi \in C^\infty(M_1)$. \item \label{pdrum202-3} $\Delta_2 U \varphi = U \Delta_1 \varphi$ for all $\varphi \in C^\infty(M_1)$. \end{subtabel} Then the Riemannian manifolds $(M_1,g_1)$ and $(M_2,g_2)$ are isomorphic. \end{prop} \proof\ Let $q \in M_2$. 
Then the map $\varphi \mapsto (U \varphi)(q)$ from $C^\infty(M_1)$ into ${\bf R}$ is linear, positive and non-zero. So by Lemma~\ref{lcdrum201.3} there exist $\tau(q) \in M_1$ and $h(q) \in (0,\infty)$ such that \begin{equation} (U \varphi)(q) = h(q) \, \varphi(\tau(q)) \label{epdrum202;1} \end{equation} for all $\varphi \in C^\infty(M_1)$. So one obtains functions $\tau \colon M_2 \to M_1$ and $h \colon M_2 \to (0,\infty)$. Similarly, there exist $\widetilde \tau \colon M_1 \to M_2$ and $\tilde h \colon M_1 \to (0,\infty)$ such that $(U^{-1} \psi)(p) = \tilde h(p) \, \psi(\widetilde \tau(p))$ for all $\psi \in C^\infty(M_2)$ and $p \in M_1$. Then $\varphi(p) = \tilde h(p) \, h(\widetilde \tau(p)) \, \varphi(\tau( \widetilde \tau(p)))$ for all $\varphi \in C^\infty(M_1)$ and $p \in M_1$. Choosing $\varphi = \mathds{1}$ gives $\tilde h(p) \, h(\widetilde \tau(p)) = 1$. Hence $\varphi = \varphi \circ \tau \circ \widetilde \tau$ for all $\varphi \in C^\infty(M_1)$ and $\tau \circ \widetilde \tau = I$. Similarly $\widetilde \tau \circ \tau = I$ and $\tau$ is a bijection. Choosing again $\varphi = \mathds{1}$ in (\ref{epdrum202;1}) gives $h = U \mathds{1} \in C^\infty(M_2)$. Hence $\varphi \circ \tau = h^{-1} U \varphi \in C^\infty(M_2)$ for all $\varphi \in C^\infty(M_1)$ and $\tau$ is a $C^\infty$-function. Thus $\tau$ is a $C^\infty$-diffeomorphism and the two manifolds have the same dimension. Let $d = \dim M_1 = \dim M_2$. It follows from Property~\ref{pdrum202-3} that \begin{equation} \Delta_2( h \cdot (\varphi \circ \tau)) = h \cdot ( (\Delta_1 \varphi) \circ \tau) \label{epdrum202;2} \end{equation} for all $\varphi \in C^\infty(M_1)$. Let $q \in M_2$. There exists a chart $(V,x)$ on $M_1$ such that $\tau(q) \in V$ and $x(\tau(q)) = 0$. Let $\Omega \subset M_1$ be open such that $\tau(q) \in \Omega \subset \overline\Omega \subset V$. Let $\lambda_1,\ldots,\lambda_d \in {\bf R}$. 
For all $t > 0$ there exists a $\varphi_t \in C^\infty(M_1)$ such that \[ \varphi_t |_\Omega = e^{t \sum_{k=1}^d \lambda_k x^k} |_\Omega . \] Since \[ \Delta_1 = \sum_{i,j=1}^d \frac{1}{\sqrt{g_1}} \, \frac{\partial}{\partial x^i} \, g_1^{ij} \, \sqrt{g_1} \, \frac{\partial}{\partial x^j} \] on $V$ it follows that \[ \Delta_1 \varphi_t = \sum_{i,j=1}^d t^2 g_1^{ij} \, \lambda_i \, \lambda_j \, \varphi_t - t \, \frac{\lambda_j}{\sqrt{g_1}} \, \varphi_t \, \frac{\partial}{\partial x^i} ( g_1^{ij} \, \sqrt{g_1} ) \] on $\Omega$. Hence \[ \lim_{t \to \infty} t^{-2} \Big( h \cdot ( (\Delta_1 \varphi_t) \circ \tau) \Big) (q) = h(q) \sum_{i,j=1}^d g_1^{ij}(\tau(q)) \, \lambda_i \, \lambda_j . \] Next, $(\tau^{-1}(V),y)$ is a chart on $M_2$, where $y = x \circ \tau$. Then it follows similarly that \begin{eqnarray*} \lim_{t \to \infty} t^{-2} \Big( \Delta_2( h \cdot (\varphi_t \circ \tau)) \Big) (q) & = & \sum_{i,j=1}^d h(q) \, g_2^{ij}(q) \Big( \frac{\partial}{\partial y_i} \sum_{k=1}^d \lambda_k x^k \circ \tau \Big)(q) \Big( \frac{\partial}{\partial y_j} \sum_{l=1}^d \lambda_l x^l \circ \tau \Big)(q) \\ & = & \sum_{i,j=1}^d h(q) \, g_2^{ij}(q) \Big( \frac{\partial}{\partial y_i} \sum_{k=1}^d \lambda_k y^k \Big)(q) \Big( \frac{\partial}{\partial y_j} \sum_{l=1}^d \lambda_l y^l \Big)(q) \\ & = & \sum_{i,j=1}^d h(q) \, g_2^{ij}(q) \, \lambda_i \, \lambda_j . \end{eqnarray*} But then (\ref{epdrum202;2}) gives \[ \sum_{i,j=1}^d g_1^{ij}(\tau(q)) \, \lambda_i \, \lambda_j = \sum_{i,j=1}^d g_2^{ij}(q) \, \lambda_i \, \lambda_j \] for all $\lambda_1,\ldots,\lambda_d \in {\bf R}$ and $(g_1^{ij} \circ \tau)(q) = g_2^{ij}(q)$ for all $i,j \in \{ 1,\ldots,d \} $. Hence $g_{1 \, ij}|_{\tau(q)} = g_{2 \, ij}|_q$. 
In particular, \[ g_1|_{\tau(q)}(\frac{\partial}{\partial x^i}, \frac{\partial}{\partial x^j}) = g_2|_q(\frac{\partial}{\partial y^i}, \frac{\partial}{\partial y^j}) = g_2|_q(\tau_* \frac{\partial}{\partial x^i}, \tau_* \frac{\partial}{\partial x^j}) \] for all $i,j \in \{ 1,\ldots,d \} $. Hence $\tau$ is an isomorphism from $(M_2,g_2)$ onto $(M_1,g_1)$.\hfill$\Box$ \vskip10.0pt plus 4.0pt minus 6.0pt Now the implications \ref{tcdrum101-3}$\Rightarrow$\ref{tcdrum101-1} and \ref{tcdrum101-4}$\Rightarrow$\ref{tcdrum101-1} in Theorem~\ref{tcdrum101} follow easily from Lemma~\ref{lcdrum201} and Proposition~\ref{pdrum202}. Substituting $\varphi = \mathds{1}$ in (\ref{epdrum202;2}) gives $\Delta_2 h = 0$ in the proof of Proposition~\ref{pdrum202}. If $M_2$ is connected this implies that $h$ is constant. Then the last part in Theorem~\ref{tcdrum101} is obvious. \subsection*{Acknowledgements} The second named author is most grateful for the hospitality extended to him during a fruitful stay at the University of Ulm. He wishes to thank the University of Ulm for financial support. Part of this work is supported by the Marsden Fund Council from Government funding, administered by the Royal Society of New Zealand.
\section{Introduction} General nuclear many-body theory states that optical potentials should be nonlocal \cite{Feshbach}. Today several groups are investing significant effort in the development of these potentials based on modern ab-initio microscopic approaches \cite{Idi16,Rot17,Idi17,Rot18,Bur19}. However, most phenomenological studies employ a local optical model to analyse elastic scattering data, providing either individual or global sets of local optical potential parameters. Elastic scattering cross sections depend on the scattering phase shifts only, which means that they are not sensitive to the scattering wave functions in the nuclear interior, so elastic data cannot distinguish between local and non-local parameterizations. However, when optical potentials are used to calculate non-elastic scattering cross sections the role of the internal part of the wave function becomes more important and differences arise between the use of local and nonlocal potentials to generate scattering waves in the entrance and/or exit channels. To account for the effects of nonlocality in non-elastic nuclear reactions, described by models with two-body scattering wave functions, Perey introduced a smooth function $f$ of the projectile-target distance $r$ that multiplies the two-body scattering wave functions \cite{Perey}. At large $r$, $f(r) \rightarrow 1$ and it gradually decreases to 0.7-0.8 for small $r$ associated with the nuclear interior. The introduction of the Perey-factor $f(r)$ is justified by the small range of nonlocality, of the order of 1 fm, which allows for the nonlocal Schr\"odinger equation to be localized using the local energy approximation. Such an approximation results in a Schr\"odinger equation with a local optical potential and a gradient term (or momentum-dependent term). The latter could easily be removed using the Perey factorization.
Its role in most cases reduces to decreasing reaction cross sections that are sensitive to contributions from the nuclear interior. The Perey factor is included in some reaction codes (DWUCK4 and DWUCK5 \cite{DWUCKs}, TWOFNR \cite{twofnr}) so that many reaction calculations, including $(d,p)$ reactions, reported over the last 50 years, have the Perey effect accounted for. Very recently, the accuracy of the Perey-effect has been investigated for the proton channel of $(d,p)$ and $(p,d)$ reactions \cite{Tit14,Ros15} by comparing it with calculations that used the exact solution for the proton scattering wave obtained from the nonlocal Schr\"odinger equation directly. However, the situation with the deuteron channel for these reactions depends on the reaction model used. The distorted-wave Born approximation (DWBA) employs deuteron scattering waves calculated from the two-body Schr\"odinger equation and thus the same Perey-factor approximation approach is justified. However, when deuteron breakup is taken into account, the deuteron-channel wave function is obtained from the $A+n+p$ three-body Schr\"odinger equation, in which case the nonlocality for optical potentials needs a special treatment. Exact calculations of this three-body equation within the Faddeev method showed a difference between nonlocal calculations and local-equivalent calculations \cite{Del09}. Since the Faddeev approach is not easily adapted for the analysis of $(d,p)$ experiments, an approach to treat the nonlocality in the deuteron channel has been introduced in \cite{Tim13a} within the adiabatic distorted wave approximation (ADWA), which is a popular method for $(d,p)$ data analysis. This method produced a nonlocal deuteron adiabatic model which can either be solved via localization with the introduction of a local-equivalent potential and the Perey factor \cite{Tim13b,Wal16} or can be solved exactly \cite{Tit16,Bai17}.
At low deuteron incident energies the former method can give results accurate to 3$\%$, which is demonstrated in Fig. 2b of Ref. \cite{Bai17} by the curves labelled ``Hulthen, full'' and ``Hulthen, TJ-NLO''. It was recently noticed that ADWA gives rise to a dependence of $(d,p)$ cross sections on high-momentum components in the $n$-$p$ motion in a deuteron when nonlocal nucleon optical potentials are used \cite{Bai16}. Exact Faddeev calculations with a wide range of nucleon-nucleon (NN) interactions and the corresponding high $n$-$p$ momentum content revealed a very limited sensitivity to the choice of deuteron model \cite{Del18}, pointing out that this dependence should be spurious. A leading-order local-equivalent continuum-discretized coupled channel (CDCC) method, developed to take nonlocality of optical potentials into account, has also shown reduced sensitivity to high $n$-$p$ momenta \cite{Gom18}. Extension of this local-equivalent CDCC beyond the leading order requires the development of coupled-channel methods with first derivatives both in diagonal and non-diagonal coupling potentials, in which case the generalization of the Perey factors may not be possible. However, it was shown in \cite{Tim19} that using local-equivalent nucleon potentials with first derivatives in a three-body problem involves a product of Perey factors for the neutron and the proton. Such a problem was solved for the $^{40}$Ca$(d,p)^{41}$Ca reaction both in an adiabatic approximation and in the Watanabe folding model \cite{Tim19}, and the Perey-effect was shown to affect the cross section by about 9$\%$. In this paper, we investigate the Perey-effect in CDCC calculations of $(d,p)$ reactions, which has not been explored previously.
We expect the Perey-effect in CDCC with local-equivalent potentials to be similar to that in the (not-yet-developed) next-to-leading-order local-equivalent CDCC with fully nonlocal potentials, so the study of the magnitude of this effect with local-equivalent potentials will be a useful precursor to solving the nonlocal Schr\"odinger equation in a practical way. On the other hand, the Perey-effect can modify the information extracted from analysis of experimental $(d,p)$ data within the local CDCC and, therefore, the relevance of this effect must be explored. In section 2 we remind the reader of the formulation of the local-equivalent model for nonlocal potentials, while in section 3 we discuss the CDCC approach to solve the three-body $A+n+p$ model with these potentials. Results of the CDCC calculations for $(d,p)$ reactions on two chosen targets are given in section 4 and conclusions are drawn in section 5. \section{Nonlocal two-body problem and its localized version} The nonlocal two-body $A+N$ problem is described by the equation \beq (T -E)\Psi(\ve{r}) = -\int d\ve{r}' \, V(\ve{r},\ve{r}') \Psi(\ve{r}'), \eeqn{NLE} where $\ve{r} = \ve{r}_A - \ve{r}_N$ is the radius-vector between $A$ and $N$, $T$ is the corresponding kinetic energy operator and $E$ is the energy of the $N$-$A$ system in the centre of mass. An approximate local-equivalent two-body problem is easily constructed if the nonlocal potential $V_{NA}$ has the Perey-Buck form \cite{PB}, \beq V_{NA}(\ve{r},\ve{r}')=H(\ve{r}-\ve{r}')U_{NA}((\ve{r}+\ve{r}')/2), \eeqn{PB} with the nonlocality factor $H$ of range $\beta$, \beq H(\ve{x}) = \pi^{-3/2} \beta^{-3} e^{-\left(\frac{\ve{x}}{\beta}\right)^2}. \eeqn{H} In the leading order the local potential $U_{loc}^0$ can be obtained as the solution of the transcendental equation \cite{PB} \beq U_{loc}^{0}(r)=U_{NA}(r)\exp\left[-\frac{\mu_N\beta^2}{2\hbar^2}(E-U_{loc}^{0}(r))\right], \eeqn{Uloc0} where $\mu_N$ is the reduced mass of the $N+A$ system.
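The transcendental equation (\ref{Uloc0}) is easily solved numerically by fixed-point iteration, since the exponential damping factor makes the iteration map a mild contraction for typical potential depths. The following minimal sketch keeps only a real central part; the Woods-Saxon parameters mimic, but are not exactly, the values quoted in section 4, and the helper names are ours:

```python
import numpy as np

HBARC = 197.327  # hbar*c in MeV fm

def u_loc0(r, E, beta, mu_c2, U_NA, tol=1e-10, max_iter=200):
    """Fixed-point solution of the transcendental equation
       U0(r) = U_NA(r) * exp[-mu*beta^2/(2 hbar^2) * (E - U0(r))]
    on a radial grid r (real central part only; illustrative)."""
    c = mu_c2 * beta**2 / (2.0 * HBARC**2)  # mu*beta^2/(2 hbar^2), in MeV^-1
    u = U_NA(r)                             # starting guess: the nonlocal depth
    for _ in range(max_iter):
        u_new = U_NA(r) * np.exp(-c * (E - u))
        if np.max(np.abs(u_new - u)) < tol:
            break
        u = u_new
    return u_new

def U_NA(r, V0=-88.6, R0=1.25 * 40.0 ** (1.0 / 3.0) - 0.282, a=0.57):
    """Woods-Saxon shape with parameters close to those quoted in section 4,
    here taken for A = 40 (an assumption of this sketch)."""
    return V0 / (1.0 + np.exp((r - R0) / a))

r = np.linspace(0.1, 12.0, 120)
u0 = u_loc0(r, E=10.0, beta=0.97, mu_c2=938.9 * 40.0 / 41.0, U_NA=U_NA)
```

For a positive centre-of-mass energy the converged $U^0_{loc}$ is everywhere shallower than $U_{NA}$, as expected from Eq. (\ref{Uloc0}).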
Equation (\ref{Uloc0}) must be corrected for proton scattering by reducing the centre-of-mass energy $E$ in its r.h.s. by the local Coulomb interaction $V_{coul}(r)$, which can be represented by a constant, for example, by $\bar{V}_{coul} = -1.08 + 1.35 ((Z-1)/A^{1/3})$ MeV as given in \cite{Gia76}. Taking the next step beyond the leading order, the wave function $\Psi$ is found from the following equation derived in \cite{Fie66}, \beq (T+{\tilde U}_{loc}+ \nabla F\cdot \nabla -E) \Psi =0. \eeqn{BLO} Here \beq {\tilde U}_{loc} = U_{loc} - \frac{\hbar^2}{2\mu}\frac{\nabla^2f}{f}, \eeqn{tUloc} \beq U_{loc} = U_{loc}^0 - \frac{ \beta^2}{16} (U_{loc}^0)^{\prime \prime} -\frac{\beta^2}{8} \frac{(U_{loc}^0)^{\prime}}{r} - \frac{ \mu\beta^4}{32\hbar^2} \left[\left( U_{loc}^0\right)^{\prime}\right]^2 \left(1 - \frac{ \mu\beta^2}{2\hbar^2} U_{loc}^0\right )^{-1}, \eeqn{Uloc} \beq f(r) = \exp \left( \frac{ \mu\beta^2}{4\hbar^2}U_{loc}^0(r) \right), \eeqn{Perey} and \beq \nabla F = - \frac{\hbar^2}{ \mu}\frac{\nabla f}{f} . \eeqn{} In this case $\Psi(\ve{r}) = f(r) \varphi(\ve{r})$ and $\varphi$ is found from the local Schr\"odinger equation $(T+U_{loc}-E)\varphi=0$. Both $\Psi$ and $\varphi$ are identical in the asymptotic region, giving the same scattering cross sections. The Schr\"odinger equation (\ref{BLO}) does not include the spin-orbit interaction. Since our aim is to use ${\tilde U}_{loc}$ in a three-body problem solved within the CDCC approach, and the implementation of the formalism used does not include spin-orbit terms, we neglect them everywhere below.
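Given $U^0_{loc}$ on a radial grid, the Perey factor (\ref{Perey}) and the next-to-leading-order potential (\ref{Uloc}) follow by differentiation. A finite-difference sketch (real part only; function and variable names are ours):

```python
import numpy as np

HBARC = 197.327  # hbar*c in MeV fm

def perey_and_nlo(r, u0, beta, mu_c2):
    """From the leading-order local potential u0(r), build the Perey factor
    f(r) = exp[mu*beta^2/(4 hbar^2) u0] and the next-to-leading-order local
    potential of Eq. (Uloc); radial derivatives by finite differences."""
    c = mu_c2 * beta**2 / (2.0 * HBARC**2)   # mu*beta^2/(2 hbar^2)
    f = np.exp(0.5 * c * u0)                 # Perey factor, Eq. (Perey)
    du = np.gradient(u0, r)
    d2u = np.gradient(du, r)
    u_loc = (u0
             - beta**2 / 16.0 * d2u
             - beta**2 / 8.0 * du / r
             - c * beta**2 / 16.0 * du**2 / (1.0 - c * u0))
    return f, u_loc
```

For a $\sim 50$ MeV deep well this $f$ drops to about 0.75 in the interior and returns to 1 outside, consistent with the 0.7-0.8 values quoted in the Introduction.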
\section{Three-body model with first derivatives and CDCC approach} Let us consider the three-body Schr\"odinger equation for the three-body wave function $\Psi(\ve{R},\ve{r})$ with two-body interactions ${\tilde U}_{nA}$ and ${\tilde U}_{pA}$ given by Eq.(\ref{tUloc}): \beq ( T_3 + V_{np}(\ve{r}) &+& {\tilde U}_{nA}(\ve{r}_n)+\nabla_n{F_n(\ve{r}_n)} \cdot \nabla_n +V^c_{pA}(r_{p}) \eol &+& {\tilde U}_{pA}(\ve{r}_p)+ \nabla_p{F_p(\ve{r}_p)} \cdot \nabla_p - E ) \Psi(\ve{R},\ve{r})=0, \eeqn{3bSE1} where $T_3 $ is the three-body kinetic energy operator, which is a sum of the operators of the $n$-$p$ relative kinetic energy and of the deuteron kinetic energy in the centre of mass, $\ve{r}_n$ and $\ve{r}_p$ are the coordinate-vectors of the neutron and proton with respect to the target $A$, while $\ve{r} = \ve{r}_n - \ve{r}_p$ and $\ve{R} = (\ve{r}_n+\ve{r}_p)/2$. The gradients $\nabla_n$ and $\nabla_p$ are with respect to the variables $\ve{r}_n$ and $\ve{r}_p$, respectively. Equation (\ref{3bSE1}) can be transformed to a form that does not contain first derivatives with respect to the variable $\ve{R}$ \cite{Tim19}, which is convenient for expanding the wave function $\Psi(\ve{R},\ve{r})$ over the CDCC basis. This is achieved by introducing the representation \beq \Psi(\ve{R},\ve{r}) = P_n(\ve{r}_n) P_p(\ve{r}_p) \varphi (\ve{R},\ve{r}) \eeqn{psiPP} and requiring that $P_i(r_i) \rightarrow 1$ for $r_i \rightarrow \infty$, which gives for the nucleon Perey factor $P_N$ ($N$ is $n$ for neutrons and $p$ for protons) the first-order differential equation \beq \frac{\nabla_N P_N}{P_N}= \frac{\mu_{dA}}{2\hbar^2} \nabla_N F_N .
\eeqn{peq} Then $\varphi(\ve{R},\ve{r})$ satisfies the Schr\"odinger equation \beq [T_3 - E + V_{np}(\ve{r}) &+& {\tilde U}^{\rm eff} _{nA}(\ve{r}_n)+{\tilde U}^{\rm eff}_{pA}(\ve{r}_p) +V^c_{pA}(r_{p}) \eol &+& \Delta U_1 (\ve{r}_n,\ve{r}_p) + \Delta U_2(\ve{r}_n,\ve{r}_p) ] \varphi(\ve{R},\ve{r})=0 \eeqn{3bSEIa} with effective $N$-$A$ potentials given by \beq {\tilde U}^{\rm eff}_{NA} = {\tilde U}_{NA} -\frac{1}{2} \frac{A+1}{A+2} \nabla^2_N F_N+ \left( 1 - \frac{1}{2} \frac{A+1}{A+2} \right) \frac{\mu_{dA}}{2\hbar^2} (\nabla_NF_N)^2, \\ \Delta U_1 (\ve{r}_n,\ve{r}_p) = \frac{2}{A+2}( \nabla_n{F_n}-\nabla_p{F_p} )\cdot \nabla_r, \\ \Delta U_2(\ve{r}_n,\ve{r}_p) =-\frac{\mu^2_{dA}}{4\hbar^2M_A} \nabla_nF_n\cdot\nabla_pF_p \eeqn{} and additional contributions that can be considered a three-body force since they depend on the positions of both the neutron and the proton at the same time. One of these contributions has $n$-$p$ velocity-dependence that comes through $\nabla_r$. It is worth noticing that the nature of this three-body contribution is the recoil of the target $A$. For infinitely large $A$ it vanishes. We will use the CDCC expansion \cite{Raw74,Aus87} to solve Eq. (\ref{3bSEIa}): \beq \varphi(\ve{R},\ve{r}) = \sum_{i=0}^{n_{\max}} \chi_i(\ve{R}) \phi_i (\ve{r}), \eeqn{wf1} where $\phi_0$ is the deuteron bound state wave function $\phi_d$ and $\phi_i$ ($i\ne0$) are continuum bins. The channel distorted waves $\chi_i$ are found from the coupled set of differential equations \beq (T_R + U_C(R) &-& E_d) \chi_i(\ve{R}) = - \sum_{i'=0}^{n_{\max}} U _{ii'}(\ve{R}) \,\chi_{i'}(\ve{R}), \eeqn{nleq2} where the coupling potentials $U _{ii'}(\ve{R})$ are the matrix elements \beq U _{ii'}(\ve{R}) = \la \phi_i | {\tilde U}^{\rm eff} _{nA}(\ve{r}_n)+{\tilde U}^{\rm eff}_{pA}(\ve{r}_p) +\Delta U_1 (\ve{r}_n,\ve{r}_p) + \Delta U_2(\ve{r}_n,\ve{r}_p) |\phi_{i'} \ra.
\eeqn{} The contribution from the effective $p$-$A$ and $n$-$A$ potentials ${\tilde U}^{\rm eff} _{NA}$ to $U_{ii'}$ is standard and its calculation is built into the computer code FRESCO \cite{FRESCO}. The two new terms have been calculated in \cite{Tim19} but only for a different basis expansion, namely over Weinberg states \cite{JT}, in which only the first component is retained. The CDCC expressions for $U_{ii'}^{(1)}(R)$ and $U_{ii'}^{(2)}(R)$ arising from the $\Delta U_1$ and $\Delta U_2$ terms, respectively, are easily obtained from generalizations of those in \cite{Tim19}: \beq \Delta U_{ii'}^{(1)}(R) &=& \frac{2}{A+2} \int d\ve{r} \, \phi_i(r) \left[\frac{r}{2}\left(\frac{F^{\prime}_n(r_n)}{r_n} +\frac{F^{\prime}_p(r_p)}{r_p}\right) \right. \eol &+& R\nu \left. \left(\frac{F^{\prime}_n(r_n)}{r_n} -\frac{F^{\prime}_p(r_p)}{r_p}\right)\right] \phi^{\prime}_{i'}(r), \label{delU1} \\ \Delta U_{ii'}^{(2)}(R) &=& - \frac{\mu^2_{dA}}{4 \hbar^2 M_A} \int d\ve{r} \,\phi^*_i(\ve{r}) \frac{F^{\prime}_n(r_n) F^{\prime}_p(r_p)}{r_nr_p} \left( R^2 - \frac{1}{4}r^2\right) \,\phi_{i'}(\ve{r}) \label{deladpots2} \eeq in which $\nu = \cos(\widehat{\ve{R},\ve{r}})$, $r_n = \sqrt{R^2+\nu r R + r^2/4}$ and $r_p = \sqrt{R^2-\nu r R + r^2/4}$. In these equations $\phi_i$ includes $Y_{00}(\ve{r})$. In all numerical calculations of $U_{ii'}^{(1)}(R)$ and $U_{ii'}^{(2)}(R)$ below we also assumed $s$-wave functions $\phi_i$ only. These terms, originating from the target recoil, are already small, and employing bins with non-zero angular momentum $l$, which behave as $r^l$ at small $r$, should not change our conclusions much.
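Since $\phi_i$ includes $Y_{00}$, the three-dimensional integral in Eq. (\ref{deladpots2}) reduces to $\frac{1}{2}\int dr\, u_i(r)\, u_{i'}(r) \int_{-1}^{1} d\nu\, g(r,\nu)$ with $u(r)=\sqrt{4\pi}\,r\,\phi(r)$ and $g$ the bracketed kernel. A quadrature sketch along these lines (the names and the normalisation convention for $u$ are our assumptions):

```python
import numpy as np

HBARC = 197.327  # hbar*c in MeV fm

def delta_u2(R, u_i, u_ip, dFn, dFp, mu_dA_c2, MA_c2, r_grid, n_nu=48):
    """Recoil-induced coupling potential Delta U^(2)_{ii'}(R) of
    Eq. (deladpots2) for s-wave bins: Gauss-Legendre in nu = cos(R^,r),
    trapezoid in r.  u_i, u_ip: radial functions u(r) = sqrt(4 pi) r phi(r);
    dFn, dFp: callables returning F'_N(x); masses in MeV/c^2."""
    nu, w = np.polynomial.legendre.leggauss(n_nu)    # nodes/weights on [-1, 1]
    r = r_grid[None, :]
    nu2 = nu[:, None]
    rn = np.sqrt(R**2 + nu2 * r * R + r**2 / 4.0)    # neutron-target distance
    rp = np.sqrt(R**2 - nu2 * r * R + r**2 / 4.0)    # proton-target distance
    g = dFn(rn) * dFp(rp) / (rn * rp) * (R**2 - r**2 / 4.0)
    ang = w @ g                                       # integral over nu
    integrand = u_i(r_grid) * u_ip(r_grid) * ang
    rad = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r_grid))
    return -mu_dA_c2**2 / (4.0 * HBARC**2 * MA_c2) * 0.5 * rad
```

The same reduction applies to Eq. (\ref{delU1}), with the additional $\phi^{\prime}_{i'}$ handled by differentiating the bin functions.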
The amplitude of the $A(d,p)B$ reaction is now obtained as \beq T_{(d,p)}= \la \psi_B \psi_p \chi(\ve{R}_{p}) | V_{pn} |\psi_A P_n(\ve{r}_n) P_p(\ve{r}_p) \varphi (\ve{R},\ve{r}) \ra, \eeqn{Tdp} where $\ve{R}_p$ is the radius-vector of the proton in the exit channel with respect to the centre-of-mass of the $p+A$ system. Because $V_{np}$ does not involve any internal coordinates of the target $A$, this amplitude contains the overlap function $I_{AB}(\ve{r}_n)= \la \psi_A| \psi_B\ra$. The neutron Perey factor $P_n(\ve{r}_n)$ depends on the same coordinate $\ve{r}_n$. Therefore, to take this Perey factor into account, the overlap function should be modified. Moreover, due to the short range of the interaction $V_{np}$, only those values of $\ve{r}_p$ which are very close to $\ve{r}_n$ will contribute to the amplitude $(\ref{Tdp})$. To check this assumption we have performed a series of simple finite- and zero-range ADWA calculations for all the cases considered below and found that finite-range effects in the first maximum do not exceed 2.5$\%$. Thus, to take the Perey-effect into account it is sufficient to use a modified overlap function $I_{AB}^{\rm mod} (\ve{r}_n)=P_n(r_n)P_p(r_n) I_{AB}(\ve{r}_n)$ in a standard CDCC calculation (either zero-range or finite-range) with effective nucleon optical potentials ${\tilde U}^{\rm eff}_{NA}$ supplemented by the three-body recoil terms $\Delta U_1$ and $\Delta U_2$. Below, all the CDCC calculations treat the finite range exactly. \section{Perey-effect study for nonlocal optical potentials} We have performed numerical calculations for the $^{12}$C($d,p)^{13}$C reaction populating the lowest 1/2$^-$, $1/2^+$ and $5/2^+$ states at $E_d = 30$ MeV and for the $^{40}$Ca($d,p)^{41}$Ca(g.s.) reaction at three incident deuteron energies, $E_d= 10$, 20 and 56 MeV.
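Equation (\ref{peq}) integrates in closed form: choosing the integration constant so that $F_N\rightarrow 0$, and hence $P_N\rightarrow 1$, at large $r_N$ gives $P_N(r)=\exp[\mu_{dA}F_N(r)/(2\hbar^2)]$, so the modification of the overlap function above is a pointwise multiplication. A minimal sketch (the function names and the test overlap are illustrative):

```python
import numpy as np

HBARC = 197.327  # hbar*c in MeV fm

def perey_factor_3b(F, mu_dA_c2):
    """Closed-form solution of the first-order equation for the nucleon
    Perey factor: P_N(r) = exp[(mu_dA/2 hbar^2) F_N(r)], with F_N -> 0
    (and so P_N -> 1) at infinity."""
    return lambda r: np.exp(mu_dA_c2 / (2.0 * HBARC**2) * F(r))

def modified_overlap(r, overlap, Fn, Fp, mu_dA_c2):
    """I_AB^mod(r) = P_n(r) P_p(r) I_AB(r); both factors evaluated at
    r = r_n, as justified by the short range of V_np."""
    Pn = perey_factor_3b(Fn, mu_dA_c2)
    Pp = perey_factor_3b(Fp, mu_dA_c2)
    return Pn(r) * Pp(r) * overlap(r)
```

In a standard CDCC code this amounts to replacing the input overlap function by its rescaled version before the transfer step.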
The nonlocal energy-independent optical potential $U_{NA}$ from \cite{Gia76} was used to construct $U^{0}_{loc}$, ${\tilde U}^{\rm eff}_{NA}$, $\nabla F_{N}$ and $P_N$ in the entrance deuteron channel, and the same parameterization was used to construct the optical potentials (\ref{tUloc}) and (\ref{Uloc}) in the exit proton channel. In \cite{Gia76} the nonlocality range is $\beta = 0.9552 \left(\frac{A+1}{A}\right)^{1/2}$ fm and the real part of $U_{NA}$ has a Woods-Saxon form with a depth of 88.6 MeV, a radius of $1.25A^{1/3} - 0.282$ fm and a diffuseness $a$ of 0.57 fm. The imaginary potential is of the surface type with a depth of 23.7 MeV and with the same radius and diffuseness as for the real part. The spin-orbit potential is neglected in the present work. The standard Perey factors (\ref{Perey}) were applied in the outgoing proton channels as well. In the case of the $^{12}$C target the imaginary part of the nonlocal potential from \cite{Gia76} was reduced to 13 MeV according to \cite{Del09} to prevent unphysically large absorption. We have also used the overlap between $^{12}$C and $^{13}$C represented by a single-particle wave function obtained in the Woods-Saxon potential well of radius $1.1359A^{1/3}$ fm and diffuseness $0.57$ fm as suggested in \cite{Del09}. The spin-orbit potential's geometry was the same and its depth was 5.5 MeV. For $^{40}$Ca we adopted the overlap function from \cite{Wal16}, where the radius $1.252 A^{1/3}$ fm and diffuseness 0.718 fm were used both for the central and spin-orbit Woods-Saxon potentials, and the depth of the spin-orbit potential was 6.25 MeV. In both cases the depth of the central potential was adjusted to reproduce the bound neutron separation energy. The spectroscopic factors were kept equal to one everywhere.
The finite-range CDCC calculations were done using the code FRESCO \cite{FRESCO} considering only the $s$-wave continuum, which was discretized in 8 bins equispaced in momentum with energies from 0 to 24 MeV for the $^{12}$C($d,p)^{13}$C reaction, 7 bins from 0 to 7 MeV for the $^{40}$Ca($d,p)^{41}$Ca reaction at 10 MeV, 8 bins from 0 to 16 MeV for the reaction at 20 MeV and 10 bins between 0 and 50 MeV for the reaction at 56 MeV. In these calculations we used the Hulth\'en model of the $n$-$p$ interaction \cite{Hulthen}. This potential gives the same low-momentum $n$-$p$ behaviour as all the modern NN models do \cite{Bai16}, and the dominance of the low $n$-$p$ momentum contribution to the $(d,p)$ reactions has been confirmed by rigorous nonlocal Faddeev calculations in \cite{Del18}. We should note that the derivatives in Eqs. (\ref{delU1})-(\ref{deladpots2}) may suggest an enhanced sensitivity to the $n$-$p$ interaction model; however, the contributions $\Delta U^{(1)}_{ii'}$ and $\Delta U^{(2)}_{ii'}$ are recoil-induced and enter the CDCC equations with a scaling factor of $1/A$, thus weakening a possible $n$-$p$ model dependence. In these calculations we neglected the contribution from the $d$-wave continuum. Test calculations for $^{40}$Ca at $E_d = 20$ MeV showed that this contribution does not exceed 3\% at the maximum of the angular distributions. \begin{figure}[t] \centering \includegraphics[width=0.9\textwidth]{Fig1.eps} \caption{Cross section ratios $\sigma_0/\sigma_{\rm eff}$ ($a$) and $\sigma_{\rm eff}/\sigma_{\rm all}$ ($b$) for the $^{12}$C($d,p)^{13}$C reaction, and ($c$) and ($d$), respectively, for the $^{40}$Ca($d,p)^{41}$Ca reaction. For the $^{12}$C($d,p)^{13}$C reaction, results are shown for the population of the $1/2^-$, $1/2^+$ and $5/2^+$ final states, while for $^{40}$Ca($d,p)^{41}$Ca, results are shown for three different reaction energies: 10, 20 and 56 MeV. See text for further explanations.
} \label{fig:c12ca40} \end{figure} We have compared several CDCC calculations for $(d,p)$ cross sections without Perey factors. First of all, differential cross sections $\sigma_0$ were calculated using the leading-order local-equivalent neutron and proton potentials $U_{loc}^0$ only. Then we calculated differential cross sections $\sigma_{\rm eff}$ with the modified next-to-leading order two-body nucleon potentials ${\tilde U}^{\rm eff}_{NA}$. Finally, we added the recoil-induced three-body terms $\Delta U_1$ and $\Delta U_2$ and obtained the differential cross sections $\sigma_{\rm all}$. We found that it is more informative to plot ratios between $\sigma_0$, $\sigma_{\rm eff}$ and $ \sigma_{\rm all}$, as the cross sections themselves look rather similar. Results are shown in Fig.~\ref{fig:c12ca40}. The top panels correspond to the ratios $\sigma_0/\sigma_{\rm eff}$ ($a$) and $\sigma_{\rm eff}/\sigma_{\rm all}$ ($b$) for the $^{12}$C$(d,p)^{13}$C reaction populating the lowest $1/2^-$, $1/2^+$ and $5/2^+$ final states, while the bottom panels ($c$), ($d$) show the same quantities for $^{40}$Ca$(d,p)^{41}$Ca at three deuteron incident energies: 10, 20 and 56 MeV. The values of these ratios in the first peak are also given in Table \ref{tab:ratios}. It can be seen that the modification of $U^0_{loc}$ can affect $\sigma_0$ by up to 7$\%$ if the final neutron bound state does not have a node. For the population of the final $1s_{1/2}$ state in $^{13}$C this effect may be stronger, up to 11$\%$. Adding the recoil-induced three-body force has in most cases a smaller effect, and it is much smaller for the heavier target $^{40}$Ca, as expected. We would also like to note that in the particular case of populating the weakly-bound $s$-wave $^{13}$C(1/2$^+$) state the differential cross sections decrease fast with the scattering angle.
Adding small corrections to $U_{loc}^0$ changes the radius of the local optical potentials in the CDCC equations and, therefore, slightly shifts the position of the first minimum of the angular distribution, thus creating a sharp jump of the ratio $\sigma_0/\sigma_{\rm eff}$ in the vicinity of this minimum. This jump therefore does not carry any important information. The role of the Perey factor is demonstrated in Fig. \ref{fig:xsecs} for the same reactions. Only for one case, $^{40}$Ca$(d,p)^{41}$Ca at $E_d = 10$ MeV, does the introduction of the Perey factor not affect the cross sections in the main maximum, which is most likely caused by the more peripheral nature of this reaction due to its low incident energy. Indeed, as discussed in section 3, the Perey factors can be applied directly to the overlap function; in other words, the Perey-effect reduces this function in the nuclear interior to 70-80$\%$ of its value. Insensitivity of the peak cross section to this reduction is a sign of the peripherality of this nuclear reaction. In all other cases, the cross section at the first peak is affected by a significant factor, displayed in Table \ref{tab:ratios}, showing that the internal contributions become more important. For those cases where the overlap function of the transferred nucleon does not have any nodes the peak cross section decreases by 7-19$\%$. For the transfer of a neutron to the $1s_{1/2}$ state with one node the Perey factor redistributes the cross section between the first and the second maxima, leading to differences of 23$\%$ and 47$\%$, respectively. In general, the cross sections outside the first maxima seem to be affected by the Perey effect more strongly. \begin{table}[t] \caption{Ratios of various first-peak differential cross sections for two reactions, $^{12}$C$(d,p)^{13}$C and $^{40}$Ca$(d,p)^{41}$Ca. For $^{12}$C$(d,p)^{13}$C($1/2^+$), these ratios are shown at the second peak as well.
The angles at which the ratios have been calculated are shown in the third column. See text for detailed notation. } \centering \begin{tabular} {p {4 cm} p{ 1. cm} p{ 1. cm} ccc c } \hline reaction & $E_d$ & $\theta_{\rm peak}$ & $\sigma_0/\sigma_{\rm eff}$ & $\sigma_{\rm eff}/\sigma_{\rm all}$ & $\sigma_{\rm all}/\sigma_{\rm all}^{\rm P}$ & $\sigma_{\rm all}/\sigma_{\rm LECDCC}$\\ \hline \hline $^{12}$C$(d,p)^{13}$C($1/2^-$) & 30 & 6 & 0.942 & 1.03 & 1.15 & 0.978 \\ $^{12}$C$(d,p)^{13}$C($1/2^+$) & 30 & 0 & 0.89 & 0.984 & 0.77 & 1.046\\ & & 32 & 1.02 & 1.03 & 1.47 & 0.782\\ $^{12}$C$(d,p)^{13}$C($5/2^+$) & 30 & 5 & 0.936 & 1.12 & 1.19 & 0.986\\ $^{40}$Ca$(d,p)^{41}$Ca($7/2^-$) & 10 & 37 & 0.973 & 0.995 & 1.005 & 0.93 \\ $^{40}$Ca$(d,p)^{41}$Ca($7/2^-$) & 20 & 25.5 & 0.979 & 1.006 & 1.07 & 0.93 \\ $^{40}$Ca$(d,p)^{41}$Ca($7/2^-$) & 56 & 0 & 0.946 & 0.987 & 1.34 & 1.21 \\ \hline\hline \end{tabular} \label{tab:ratios} \end{table} \section{Comparison to Local-Equivalent CDCC} So far we have considered the CDCC method based on nucleon optical potentials in which nonlocality is contained in the first-order derivatives only. Such potentials arise in the process of localization of the nonlocal two-body problem. No methods have yet been developed to solve the CDCC equations for nonlocal optical potentials. However, a leading-order local-equivalent CDCC (LECDCC) has been derived in \cite{Gom18} for nonlocal potentials of the Perey-Buck type (\ref{PB}). We can now compare the CDCC with local potentials $U^0_{loc}$ with the LECDCC results for the same underlying nonlocal potential.
In the LECDCC, with the assumption of just the $s$-wave continuum of the deuteron, the channel functions $\chi_i(R)$ are computed from a coupled set of differential equations (\ref{nleq2}) with local-equivalent coupling potentials $U_{ii'}^{\rm loc}$ that satisfy a system of transcendental matrix equations written for $X_{ii'} \equiv (E_{i'}-U_C) \delta_{ii'} - U^{\rm loc}_{ii'}$: \beq f_{ii'}^{(0)} - (E_{i'}-U_C) \delta_{ii'} &+& \sum_k (f_{ik}^{(1)} +\delta_{ik})X_{ki'} + \sum_{kl}f_{ik}^{(2)}X_{kl}X_{li'} +\dots=0, \eeqn{nl1} where \beq f_{ij}^{(n)}= \frac{(-)^n}{n!(2n+1)!!} \left(\frac{\mu_d \alpha_2^2 \beta^2}{4\hbar^2} \right)^n U^{(n)}_{ij} \eeqn{} and \beq U^{(n)}_{ii'}(\ve{R}) = \int d\ve{x}\left[{\bar \phi}_i^{(n)}(\ve{x}) \right]^{*} \left[ \sum_{N} U_{NA}\left( \frac{\ve{x}}{2}-\ve{R}\right)\right] \phi_{i'}(\ve{x}). \eeqn{} The coupling potentials $U^{(n)}_{ii'}$ are constructed using the usual bin functions $\phi_i$ and the continuum bin functions modified by nonlocality, \beq {\bar \phi}_i^{(n)}(\ve{x}) = \int d\ve{s}\, H(s) \left(\frac{s}{\beta}\right)^{2n} \phi_i(\ve{x}+\alpha_1\ve{s}). \eeqn{modbins} The transcendental equations (\ref{nl1}) have been solved in \cite{Gom18} using the Newton method, with the obtained $U_{ii'}^{\rm loc}$ being read into the CDCC reaction code FRESCO. Although the left-hand side of (\ref{nl1}) contains an infinite sum of terms, it was found that truncating it at cubic terms in $X_{il}$ was sufficient to get converged solutions $U_{ii'}^{\rm loc}$ with good accuracy. The difference between the cubic and quadratic approximations was less than 0.5$\%$ for the values of $R$ most important for reaction calculations.
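In \cite{Gom18} Eq. (\ref{nl1}) is solved with the Newton method; since the nonlinear terms carry small factors $\propto (\mu_d\alpha_2^2\beta^2/4\hbar^2)^n$, plain successive substitution on the quadratic truncation already converges. A schematic matrix-form sketch of this iteration (our simplification of the index structure; the matrices in the test are purely illustrative):

```python
import numpy as np

def solve_truncated(F0, F1, F2, D, tol=1e-12, max_iter=500):
    """Solve the quadratically truncated matrix equation
       F0 - D + (F1 + I) X + F2 X X = 0
    for X by successive substitution; D is the diagonal matrix of
    channel energies minus the Coulomb potential at a given R."""
    n = D.shape[0]
    A = F1 + np.eye(n)
    X = np.linalg.solve(A, D - F0)       # linear (leading-order) guess
    for _ in range(max_iter):
        X_new = np.linalg.solve(A, D - F0 - F2 @ X @ X)
        if np.max(np.abs(X_new - X)) < tol:
            return X_new
        X = X_new
    return X
```

Retaining cubic terms only adds an `F3 @ X @ X @ X` contribution inside the iteration; as noted above, the quadratic and cubic solutions differ by well below a percent in the relevant range of $R$.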
\begin{figure}[t] \centering \includegraphics[scale=0.30]{JphysG-c12-a_MGR.eps} \includegraphics[scale=0.30]{JPhysG-ca40-a_MGR.eps} \includegraphics[scale=0.30]{JphysG-c12-b_MGR.eps} \includegraphics[scale=0.30]{JPhysG-ca40-b_MGR.eps} \includegraphics[scale=0.30]{JPhysG-c12-c_MGR.eps} \includegraphics[scale=0.30]{JPhysG-ca40_c_MGR.eps} \caption{Cross sections of the $^{12}$C($d,p)^{13}$C reaction populating the lowest $1/2^-$ ($a$), $1/2^+$ ($b$) and $5/2^+$ ($c$) states and cross sections of the $^{40}$Ca($d,p)^{41}$Ca reaction at 10 ($d$), 20 ($e$) and 56 ($f$) MeV calculated with effective local potentials ${\tilde U}^{\rm eff}_{NA}$ and induced three-body terms $\Delta U_1$ and $\Delta U_2$ without (dashed lines) and with (dots on solid lines) Perey factors, in comparison with the LECDCC calculations (solid line). } \label{fig:xsecs} \end{figure} The LECDCC calculations are shown in Fig. \ref{fig:xsecs} together with the local-equivalent CDCC results obtained with ($\sigma_{\rm all}^{\mathrm{P}}$) and without ($\sigma_{\rm all}^\mathrm{noP}$) the deuteron-channel Perey factor. In all cases they are close to $\sigma_{\rm all}$. The deviation in the first peak is 3-7$\%$ for $E_d \leq 30$ MeV but can reach 20$\%$ at 56 MeV. This is an encouraging result, suggesting that the next-to-leading order LECDCC may be similar to $\sigma_{\rm all}^{\mathrm{noP}}$ for deuteron energies below 30 MeV, typical of transfer reaction experiments. \section {Conclusion} An exact treatment of the nonlocal nucleon-target interactions in $(d,p)$ reactions within the CDCC has not yet been achieved because no methods to calculate the relevant matrix elements have been developed. We have proposed a CDCC model of $(d,p)$ reactions that includes the nonlocality of the nucleon optical potentials via the Perey-effect.
This model assumes that a nonlocal optical two-body model of the Perey-Buck type has an approximately equivalent representation in terms of a local optical potential supplemented by a velocity-dependent part. Including velocity-dependent potentials in the three-body $A+p+n$ model leads to factorization of the total wave function via a product of two nucleon Perey factors and a solution of a Schr\"odinger equation that does not contain velocity-dependence in the coordinate $\ve{R}$. It is well known that in the leading order the local-equivalent potentials $U^0_{loc}$ are found from the transcendental equation (\ref{Uloc0}). The local potentials used in the CDCC differ from $U^0_{loc}$ and this difference can affect the maxima of CDCC cross sections by 2-10$\%$. In addition, the velocity-dependence generates a three-body potential whose contribution is proportional to $1/A$, so for most nuclei it can be neglected. The Perey-effect can be easily incorporated into available CDCC codes as long as the remnant term in the $(d,p)$ amplitude is not included. In this case, due to the short range of the interaction $V_{np}$, one can simply multiply the overlap function for the transferred nucleon by the nucleon Perey factors. Their influence on CDCC cross sections is largest for reactions where the contribution from the nuclear interior is not suppressed. Also, their importance increases with the incoming deuteron energy. While for energies typical for $(d,p)$ reaction experiments the Perey-effect could affect the main peak, used for the determination of spectroscopic factors, by 3-7$\%$, for higher energies it can reach up to 20$\%$, and thus must be taken into account when extracting spectroscopic factors. On the other hand, for peripheral reactions, in particular at low incident energies, the Perey-effect can be neglected. In this case nonlocality can be mainly described through the modification of $U^0_{loc}$.
We have checked that the CDCC with velocity-dependent potentials without the Perey-effect gives cross sections similar to those obtained in the CDCC based on nonlocal optical potentials treated within the leading-order local-energy approximation. This gives an encouraging indication of the possibility that including the Perey-effect within the CDCC, as has been developed here, may represent the full nonlocal problem in a satisfactory way. The estimates shown here can give an idea of the uncertainties arising due to nonlocality in spectroscopic factors and asymptotic normalization coefficients extracted using the CDCC. If they are found by comparing the cross sections in the main peak only, then these uncertainties could be comparable with those of experimental measurements. However, in some cases the introduction of the Perey-effect affects the slope of the angular distributions as well. More research is needed for such cases. Finally, the accounting for nonlocality in the CDCC proposed in this paper is based on the analytical representation (\ref{PB}) of the nonlocal potentials. The optical potentials obtained in many-body approaches, in general, do not have this form \cite{Rot17}. Approximating such potentials by analytical functions of $\ve{r}-\ve{r}'$ and $(\ve{r}+\ve{r}')/2$ would make possible the approximate treatment of their nonlocality along the lines suggested here. Otherwise, new methods for calculating the CDCC matrix elements should be developed. \section*{Acknowledgements} This work was supported by the United Kingdom Science and Technology Facilities Council (STFC) under Grant No. ST/P005314/1. M.G.-R. acknowledges funding from the Spanish Government under projects No. FIS2014-53448-C2-1-P and FIS2017-88410-P and by the European Union's Horizon 2020 research and innovation programme under grant agreement No. 654002. He also acknowledges partial support from the Nuclear.Theory.Vision$@$UK visitors program. \section*{References}
\section{Introduction} Classic online prediction with expert advice \cite{vovk1998game} is a competition between the learner and the adversary that consists of $T$ sequential steps. At each step of the game the learner has to efficiently aggregate the predictions of a given (fixed) pool of experts (algorithms, machine learning models, human experts). The described scenario is formalized in Protocol \ref{protocol}. At the beginning of each step $t=1,2,\dots,T$ experts $1,2,3,\dots,N$ from a given (fixed) pool of experts $\mathcal{N}=\{1,2,\dots,N\}$ output their predictions $\gamma_{t}^{n}\in\Gamma$ of the yet unknown value $\omega_{t}\in\Omega$. Next, the learner has to combine (merge, mix, average) these forecasts into a single forecast $\overline{\gamma}_{t}\in\Gamma$. At the end of the step the true outcome is revealed, and both the learner and the experts suffer their losses by using a loss function $\lambda:\Gamma\times \Omega\rightarrow \mathbb{R}_{+}$. The loss of expert $n\in\mathcal{N}$ is denoted by $l_{t}^{n}=\lambda(\gamma_{t}^{n},\omega_{t})$; the loss of the learner is denoted by $h_{t}=\lambda(\overline{\gamma_{t}},\omega_{t})$. We use $H_{t}=\sum_{\tau=1}^{t}h_{\tau}$ and $L_{t}^{n}=\sum_{\tau=1}^{t}l_{\tau}^{n}$ to denote the cumulative losses of the learner and of expert $n\in\mathcal{N}$, respectively. \begin{algorithm} \SetAlgorithmName{Protocol}{empty}{Empty} \SetKwInOut{Parameters}{Parameters} \Parameters{Pool of experts $\mathcal{N}=\{1,2,3,\dots,N\}$; Game length $T$; Loss function $\lambda:\Gamma\times \Omega\rightarrow \mathbb{R}_{+}$} \For{$t=1,2,\dots,T$}{ 1. Experts $n\in\mathcal{N}$ provide forecasts $\gamma_{t}^{n}\in \Gamma$\; 2. Learner combines forecasts $\gamma_{t}^{n}$ into forecast $\overline{\gamma_{t}}\in\Gamma$\; 3. Nature reveals true outcome $\omega_{t}\in\Omega$\; 4. Experts $n\in\mathcal{N}$ suffer losses $l_{t}^{n}=\lambda(\gamma_{t}^{n},\omega_{t})$\; 5.
Learner suffers loss $h_{t}=\lambda(\overline{\gamma_{t}},\omega_{t})$\; } \caption{Online Prediction with Expert Advice} \label{protocol} \end{algorithm} The goal of the learner is to perform as efficiently as possible w.r.t. the best expert\footnote{In the online scenario, the best expert is unknown beforehand.} in the pool, i.e. to minimize the regret $$R_{T}=\sum_{t=1}^{T}h_{t}-\min_{n\in\mathcal{N}} \sum_{t=1}^{T}l_{t}^{n}=H_{T}-\min_{n\in\mathcal{N}} L_{T}^{n}.$$ Among the many existing learner strategies for combining experts' predictions \cite{cesa-bianchi,hazan2016introduction}, the aggregating algorithm (AA) by \cite{vovk1998game} is typically considered to be the best. For a wide class of $\eta$-mixable ($\eta$-exponentially concave) loss functions $\lambda$ it provides a way to efficiently combine experts' predictions $\{\gamma_{t}^{n}\}_{n=1}^{N}$ into a single prediction $\overline{\gamma_{t}}$ so that the learner achieves a small regret bound. More precisely, if the learner follows the AA strategy, the regret w.r.t. the best expert will not exceed $\frac{\ln N}{\eta}$, i.e. $R_{T}\leq \frac{\ln N}{\eta}$. Here $\eta$ is the maximal constant for which the loss function $\lambda$ is $\eta$-mixable ($\eta$-exp-concave). The bound does not depend on the game length $T$. Besides, knowledge of $T$ is not required before the game. Many existing loss functions $\lambda:\Gamma\times \Omega \rightarrow \mathbb{R}_{+}$ are mixable (exp-concave). Thus, the AA can be efficiently applied to combine experts' predictions. Below we briefly review the two most common practical online learning problems and the two corresponding (mixable) loss functions which are typically used. The most straightforward practical application of online prediction with expert advice is online \textbf{time-series forecasting} or online \textbf{regression}.
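The AA bound above can be verified numerically in the simplest mixable case, the logarithmic loss $\lambda(\gamma,\omega)=-\log \gamma(\omega)$ with $\eta=1$ (reviewed below), where the AA substitution is exactly the mixture of the experts' distributions under exponential weights. A self-contained sketch with synthetic data:

```python
import numpy as np

def aggregate_log_loss(expert_probs, outcomes, eta=1.0):
    """Aggregating Algorithm for the (1-mixable) logarithmic loss.
    expert_probs: array (T, N, K) of experts' distributions over K classes;
    outcomes: array (T,) of true classes in {0,...,K-1}.
    Returns the learner's cumulative loss H_T and the experts' losses L_T^n.
    For eta = 1 the prediction is the exponentially weighted mixture,
    and the regret H_T - min_n L_T^n is at most ln(N)/eta."""
    T, N, K = expert_probs.shape
    L = np.zeros(N)                        # experts' cumulative losses
    H = 0.0                                # learner's cumulative loss
    for t in range(T):
        w = np.exp(-eta * (L - L.min()))   # stabilised exponential weights
        w /= w.sum()
        pred = w @ expert_probs[t]         # mixture prediction in the simplex
        H += -np.log(pred[outcomes[t]])
        L += -np.log(expert_probs[t, :, outcomes[t]])
    return H, L

# synthetic check of the regret bound
rng = np.random.default_rng(1)
probs = rng.dirichlet(np.ones(4), size=(50, 5))   # T=50 steps, N=5 experts, K=4
y = rng.integers(0, 4, size=50)
H, L = aggregate_log_loss(probs, y)
```

The bound $H_T \leq \min_n L_T^n + \ln N$ holds deterministically here, since the mixture loss telescopes into $-\ln \sum_n e^{-L_T^n}/N$.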
Typical examples include electricity price and load forecasting \cite{gaillard2015forecasting}, thermal load forecasting \cite{geysen2018operational}, etc. Predictions $\gamma^{n}\in\Gamma$ and the outcome $\omega\in\Omega$ are usually assumed to be real-valued, i.e. $\Omega, \Gamma\subset \mathbb{R}$. Naturally, the \textbf{squared loss function} $\lambda(\gamma,\omega)=(\gamma-\omega)^{2}$ is used to compare the prediction with the true output. It is known to be mixable \cite{vovk1998game} and exponentially concave \cite{hazan2016introduction} under certain boundedness conditions. Another practical scenario is online \textbf{classification with expert advice}. In this case the goal of the learner is to predict probabilities $\gamma\in\Gamma=\Delta_{K}=\{\gamma\in\mathbb{R}^{K}:\|\gamma\|_{1}=1,\ \gamma>0\}$ for a given list of events $\Omega=\{1,2,\dots,K\}$ based on experts' predictions. After the true outcome $\omega$ is revealed, the forecast is typically assessed by using the \textbf{logarithmic loss}, i.e. $\lambda(\gamma,\omega)=-\log \gamma(\omega)$. It is mixable and exp-concave, see \cite{vovk1998competitive,adamskiy2012putting}. Such a multi-class classification scenario may, for example, refer to the prediction of sports results \cite{boulier2003predicting} or the forecasting of precipitation occurrence \cite{kumar1999operational}. Other classification losses such as the \textbf{Brier score} $\lambda(\gamma,\omega)=\sum_{k=1}^{K}(\gamma_{k}-\mathbb{I}[\omega=k])^{2}$ are also known to be mixable, see \cite{vovk2009prediction}. Mixability holds true even if the output is itself a distribution $\omega\in \Omega=\Delta_{K}$ and the score is given by $\lambda(\gamma,\omega)=\|\omega-\gamma\|^{2}$, see \cite{zhdanov2012universal}. A more advanced problem is the prediction of \textbf{vector-valued outcomes}. In this case both the experts and the learner output a finite-dimensional vector, e.g. a weekly weather forecast \cite{khotanzad1996artificial}.
For example, for $D$-dimensional outputs in regression, it is natural to take $\Omega^{D}$ and $\Gamma^{D}$ as the outcome and prediction spaces. Naturally, the vector-input square loss \begin{equation}\lambda_{D}(\gamma,\omega)=\frac{1}{D}\sum_{d=1}^{D}\lambda(\gamma_{d},\omega_{d})=\frac{1}{D}\sum_{d=1}^{D}(\omega_{d}-\gamma_{d})^{2}=\frac{1}{D}\|\omega-\gamma\|^{2}\label{vector-square-loss}\end{equation} can be used to assess the quality of the forecasts. Until recently, it was unknown whether \textbf{vectorized loss functions} such as \eqref{vector-square-loss} are mixable and whether AA can be efficiently applied. The notion of vector mixability was introduced by \cite{Kaln2017}. They proved that every vector loss function of the form $\lambda_{D}(\gamma,\omega)=\frac{1}{D}\sum_{d=1}^{D}\lambda(\gamma_{d},\omega_{d})$ is $\eta$-mixable (exp-concave) if the corresponding $1$-dimensional-input $\lambda$ is $\eta$-mixable (exp-concave). In this case, the aggregated forecast is built by coordinate-wise aggregation of experts' forecasts. In this paper, we introduce the general notion of \textbf{integral mixability} (\textbf{exp-concavity}). We consider the online scenario of predicting a function: at each step $t$ the experts output functions $\gamma_{t}^{n}:\mathcal{X}\rightarrow \Gamma$ (i.e. $\gamma_{t}^{n}\in\Gamma^{\mathcal{X}}$) and the learner has to combine these functions into a single function $\overline{\gamma}_{t}:\mathcal{X}\rightarrow \Gamma$. The true output is a function $\omega_{t}:\mathcal{X}\rightarrow \Omega$. For function-valued forecasting it is reasonable to measure the loss via \textbf{integral loss functions} which naturally arise from loss functions used for comparing one-dimensional outcomes. \begin{definition}[Integral loss function]Let $(\Gamma,\sigma_{\Gamma}),(\Omega,\sigma_{\Omega}),(\mathcal{X},\sigma_{\mathcal{X}})$ be measurable spaces. Assume that ${\lambda:\Gamma\times\Omega\rightarrow\mathbb{R}_{+}}$ is a loss function measurable w.r.t.
$\sigma_{\Gamma}\times\sigma_{\Omega}$. Let $\mu_{\omega}$ be an $\omega$-dependent $\sigma$-finite measure on $(\mathcal{X},\sigma_{\mathcal{X}})$ and let $u_{\omega}$ be an $\omega$-dependent non-negative measurable function satisfying $$\int_{\mathcal{X}}u_{\omega}(x)d\mu_{\omega}(x)=1$$ for all $\omega$. Then the function $\lambda_{u,\mu}:\mathcal{M}(\Gamma^{\mathcal{X}})\times\mathcal{M}(\Omega^{\mathcal{X}})\rightarrow\mathbb{R}_{+}$ defined by \begin{equation}\lambda_{u,\mu}(\gamma,\omega)=\int_{\mathcal{X}}\lambda\big(\gamma(x),\omega(x)\big) u_{\omega}(x)d\mu_{\omega}(x)\label{integral-loss}\end{equation} is called an $\mathcal{X}$-integral $\lambda$-loss function.\footnote{The usage of the weight function $u_{\omega}$ together with the measure $\mu_{\omega}$ is redundant. One may naturally eliminate it by changing variables: $(u_{\omega},\mu_{\omega})\mapsto (u_{\omega}',\mu_{\omega}')=(1,\mu_{\omega}')$, with $d\mu_{\omega}':=u_{\omega}d\mu_{\omega}$. However, we keep the notation over-parametrized to be compatible with all the losses discussed in Section \ref{sec-forecasting-probs}.} Here we use $\mathcal{M}(\Gamma^{\mathcal{X}}),\mathcal{M}(\Omega^{\mathcal{X}})$ to denote the sets of all measurable functions $\mathcal{X}\rightarrow\Gamma$ and $\mathcal{X}\rightarrow\Omega$ respectively. \end{definition} Clearly, such a general scenario of forecasting under integral loss functions extends the vector-valued forecasting scenario of \cite{Kaln2017}. Indeed, if ${|\mathcal{X}|=D}$, one may use $u_{\omega}(x)\equiv 1$ and $\mu_{\omega}(\{x\})\equiv \frac{1}{|\mathcal{X}|}=\frac{1}{D}$ and obtain a vectorized loss. In real life, function-valued predictions can be used for forecasting physical processes for a period ahead, e.g. temperature distribution \cite{chen2012high}, ocean wave prediction \cite{rusu2013evaluation}, etc. Besides, every \textbf{probabilistic forecast} (see \cite{gneiting2014probabilistic}) is actually a function, e.g.
density or cumulative distribution function, and classical function-based losses can be used to assess the quality. \vspace{2mm}\noindent\textbf{The main contributions of the paper are:} \begin{enumerate} \item We introduce the concept of function-valued forecasting and related concept of integral mixability (exponential concavity). \item We prove that for every $\eta$-mixable (exp-concave) measurable loss function ${\lambda:\Gamma\times\Omega\rightarrow \mathbb{R}}$ its corresponding $\mathcal{X}$-integral $\lambda$-loss function $\lambda_{u,\mu}:\mathcal{M}(\Gamma^{\mathcal{X}})\times \mathcal{M}(\Omega^{\mathcal{X}})\rightarrow \mathbb{R}$ is $\eta$-mixable (exp-concave) for every admissible $u,\mu$. The aggregated forecast is built point-wise according to the aggregating rule for $\lambda$. \item We demonstrate applications of our results to probabilistic forecasting. We derive mixability (exp-concavity) for Sliced Continuous Ranking Probability Score, Energy-Based Distance, Beta-2 and Kullback-Leibler Divergences, Optimal transport costs \& Sliced Wasserstein-2 Distance, Characteristic Function and Maximum Mean Discrepancies. The results are summarised in Table 1 of Subsection \ref{sec-table}. \end{enumerate} Although our paper is mainly built around online learning framework of prediction with expert advice, we emphasize that the properties of mixability and exponential concavity that we study are extremely useful in other areas of machine learning. In \textbf{statistical machine learning}, mixability typically guarantees faster convergence, see e.g. \cite{van2015fast,erven2012mixability}. In \textbf{online convex optimization} exponential concavity usually leads to better regret bounds, see e.g. \cite{hazan2016introduction}. \vspace{2mm}\noindent\textbf{The article is structured as follows.} In Section \ref{sec-preliminaries}, we recall the definitions of mixability and exponential concavity of loss functions. 
In Section \ref{sec-int-mix}, we state the theorem on mixability (exp-concavity) of integral loss functions and prove it. In Section \ref{sec-forecasting-probs}, we apply our result to prove mixability of different loss functions used for comparing probability distributions. The results are summarised in Table 1 of Subsection \ref{sec-table}. In \ref{sec-complex-mix}, we give minor technical details. In \ref{sec-aa}, we review the strategy of AA and recall the derivation of the algorithm's constant regret bound. \section{Preliminaries} \label{sec-preliminaries} In this section we recall the definitions of mixability and exponential concavity of loss functions. \begin{definition}[Mixable loss function]A function $\lambda:\Gamma\times\Omega\rightarrow \mathbb{R}$ is called $\eta$-mixable if for all $N=1,2,\dots$, probability vectors $(w^{1},\dots,w^{N})$ and vectors of forecasts $(\gamma^{1},\dots,\gamma^{N})\in \Gamma^{N}$ there exists an aggregated forecast $\overline{\gamma}\in\Gamma$ such that for all $\omega\in\Omega$ the following holds true: \begin{equation}\exp\big[-\eta\lambda(\overline{\gamma}, \omega)\big]\geq \sum_{n=1}^{N}w^{n}\exp\big[-\eta \lambda(\gamma^{n},\omega)\big]. \label{def-formula-mixability} \end{equation} \end{definition} If a function $\lambda$ is $\eta$-mixable, then it is also $\eta'$-mixable for all $0<\eta'\leq \eta$. The \textbf{maximal} $\eta$ (for which $\lambda$ is mixable) is always used in order to obtain the lowest regret bound for AA. For an $\eta$-mixable function $\lambda$ there exists a \textbf{substitution function} $${\Sigma:\Gamma^{N}\times\Delta_{N}\rightarrow\Gamma}$$ which performs the aggregation \eqref{def-formula-mixability} of forecasts $\gamma^{n}$ w.r.t. weights $w^{n}$ and outputs the aggregated forecast $\overline{\gamma}$. Such a function may be non-unique. For common loss functions, specific substitution functions (given by exact formulas) are usually used.
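To make the role of the substitution function concrete, here is a minimal Python sketch (our own illustration with synthetic data, not part of the paper) of the AA loop for the $1$-mixable logarithmic loss, whose substitution function is simply the weighted average of the experts' distributions (see the examples below). The final printout illustrates the constant regret bound $R_{T}\leq \frac{\ln N}{\eta}$ discussed in the introduction.

```python
import numpy as np

def aggregating_algorithm(expert_preds, outcomes, eta=1.0):
    """Vovk's Aggregating Algorithm for the 1-mixable logarithmic loss.

    expert_preds: array of shape (T, N, K) with experts' probability
                  forecasts; outcomes: array (T,) of true class indices.
    For the log loss the substitution function is simply the weighted
    average (mixture) of the experts' distributions.
    """
    T, N, _ = expert_preds.shape
    w = np.full(N, 1.0 / N)                  # uniform prior weights
    learner_loss, expert_losses = 0.0, np.zeros(N)
    for t in range(T):
        gamma_bar = w @ expert_preds[t]      # substitution: mixture of forecasts
        k = outcomes[t]
        learner_loss += -np.log(gamma_bar[k])
        losses = -np.log(expert_preds[t][:, k])
        expert_losses += losses
        w = w * np.exp(-eta * losses)        # exponential weights update
        w /= w.sum()
    return learner_loss, expert_losses

rng = np.random.default_rng(0)
T, N, K = 200, 5, 3
preds = rng.dirichlet(np.ones(K), size=(T, N))   # synthetic expert forecasts
outs = rng.integers(0, K, size=T)                # synthetic outcomes
H, L = aggregating_algorithm(preds, outs)
print(H - L.min(), np.log(N))  # regret never exceeds ln(N) / eta = ln 5
```

For the log loss the mixability inequality holds with equality for the mixture, so the telescoping argument gives the regret bound exactly, independently of $T$.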
\begin{example}[Square loss] The function $\lambda(\gamma, \omega)=(\gamma-\omega)^{2}$ with $\Omega=\Gamma=[l,r]$ is $\frac{2}{(r-l)^{2}}$-mixable, see \cite{vovk1998game}. Its substitution function $\Sigma_{L^{2}}^{[l,r]}$ is defined by \begin{equation}\Sigma_{L^{2}}^{[l,r]}\big(\{\gamma^{n},w^{n}\}_{n=1}^{N}\big)=\frac{r+l}{2}+\frac{(r-l)}{4}\log \frac{\sum_{n=1}^{N}w^{n}\exp[-2\big(\frac{r-\gamma^{n}}{r-l}\big)^{2}]}{\sum_{n=1}^{N}w^{n}\exp[-2\big(\frac{\gamma^{n}-l}{r-l}\big)^{2}]}. \label{squared-loss-substitution} \end{equation} \end{example} \begin{example}[Logarithmic loss] The function $\lambda(\gamma,\omega)=-\log\gamma(\omega)$ with $\Omega=\{1,\dots,K\}$ and $\Gamma=\Delta_{K}$ is $1$-mixable, see \cite{adamskiy2012putting}. The substitution function $\Sigma_{log}$ is defined by $$\big[\Sigma_{log}\big(\{\gamma^{n},w^{n}\}_{n=1}^{N}\big)\big](k)=\sum_{n=1}^{N}w^{n}\gamma^{n}(k).$$ \end{example} \begin{definition}[Exponentially concave loss function] Let $\Gamma$ be a \textbf{convex subset of a linear space} over $\mathbb{R}$. A function $\lambda:\Gamma\times\Omega\rightarrow \mathbb{R}$ is called $\eta$-exponentially concave if for all $N=1,2,\dots$, probability vectors $(w^{1},\dots,w^{N})$ and vectors of forecasts $(\gamma^{1},\dots,\gamma^{N})\in \Gamma^{N}$ the following holds for all $\omega\in\Omega$: $$\exp\big[-\eta\lambda(\overline{\gamma}, \omega)\big]\geq \sum_{n=1}^{N}w^{n}\exp\big[-\eta \lambda(\gamma^{n},\omega)\big],$$ where $\overline{\gamma}=\sum_{n=1}^{N}w^{n}\gamma^{n}$. Note that $\overline{\gamma}\in\Gamma$ due to the convexity of $\Gamma$.
\end{definition} Clearly, exponential concavity implies mixability, but the converse is not always true, see the discussion in \cite{cesa-bianchi}.\footnote{In specific cases one may apply an \textbf{exp-concavifying} transform to reparametrize the loss to make it exponentially concave, see \cite{kamalaruban2015exp}.} Also, exp-concavity is naturally defined only for convex subsets of linear spaces, while mixability can be defined on arbitrary sets. The square loss function $\lambda(\gamma, \omega)=(\gamma-\omega)^{2}$ with ${\Gamma=\Omega=[l,r]}$ is $\frac{1}{2(r-l)^{2}}$-exponentially concave, see \cite{kivinen1999averaging}.\footnote{The maximal exponential concavity rate $\eta$ of the squared loss is $4$ times lower than the corresponding mixability rate.} Also, from the examples above we see that the logarithmic loss is $1$-exponentially concave. \section{Integral Mixability (Exp-Concavity)} \label{sec-int-mix} In the framework of Protocol \ref{protocol} we consider function-valued forecasting. We prove that an $\eta$-mixable (exp-concave) loss $\lambda$ for comparing single-value outcomes admits an $\eta$-mixable (exp-concave) integral extension for comparing function-valued outcomes. \begin{theorem}[Integral Mixability \& Exp-concavity] Let $(\Gamma,\sigma_{\Gamma})$, $(\Omega,\sigma_{\Omega})$, $(\mathcal{X},\sigma_{\mathcal{X}})$ be measurable spaces. Assume that ${\lambda:\Gamma\times\Omega\rightarrow\mathbb{R}_{+}}$ is an $\eta$-mixable loss function measurable w.r.t. the product $\sigma_{\Gamma}\times\sigma_{\Omega}$, and let $\lambda_{u,\mu}$ be an $\mathcal{X}$-integral $\lambda$-loss function. Assume that the substitution function $\Sigma_{\lambda}$ of $\lambda$ is measurable.
Then the function $\lambda_{u,\mu}$ is $\eta$-mixable, and as its substitution function (for $N$ experts) we can use $$\Sigma_{\lambda_{u,\mu}}:\big(\mathcal{M}(\Gamma^{\mathcal{X}})\big)^{N}\times \Delta_{N}\rightarrow \Gamma^{\mathcal{X}}$$ defined by the point-wise ($x\in \mathcal{X}$) application of the substitution function $\Sigma_{\lambda}$ for $\lambda$: $$\Sigma_{\lambda_{u,\mu}}\big[\{\gamma^{n}, w^{n}\}_{n=1}^{N}\big](x):=\Sigma_{\lambda}\big(\{\gamma^{n}(x), w^{n}\}_{n=1}^{N}\big).$$ If, in addition, $\lambda$ is $\eta$-exp-concave, then $\lambda_{u,\mu}$ is $\eta$-exp-concave as well: the point-wise weighted average $\overline{\gamma}(x)=\sum_{n=1}^{N}w^{n}\gamma^{n}(x)$ serves as the aggregated forecast. \label{theorem-main} \end{theorem} We emphasize that the suggested substitution function $\Sigma_{\lambda_{u,\mu}}$ is \textbf{independent} of both $u$ and $\mu$. Thus, the same substitution function attains efficient prediction for all possible $\mathcal{X}$-integral $\lambda$-loss functions (all admissible $\mu$ and $u$), which may even be chosen by an adversary after the prediction at each step is made. To prove our main Theorem \ref{theorem-main}, we will need the following \begin{theorem}[Generalized Holder Inequality] \label{generalized-holder} Let $(\mathcal{X},\mu)$ and $(\mathcal{Y},\nu)$ denote two $\sigma$-finite measure spaces. Let $f(x,y)$ be a positive measurable function on $(\mathcal{X}\times \mathcal{Y}, \mu\times \nu)$, and let $u(x), v(y)$ be non-negative weight functions with ${\int_{\mathcal{X}}u(x)d\mu(x)=1}$. Then \begin{eqnarray}\int_{\mathcal{Y}}\exp\bigg(\int_{\mathcal{X}}\log f(x,y)u(x)d\mu(x)\bigg)v(y)d\nu(y)\leq \nonumber \\ \exp\bigg(\int_{\mathcal{X}}\log \bigg[\int_{\mathcal{Y}}f(x,y)v(y)d\nu(y)\bigg]u(x)d\mu(x)\bigg). \label{generalized-holder-inequality} \end{eqnarray} \label{theorem-generalized-holder-inequality} \end{theorem} \noindent An explicit discussion of inequality \eqref{generalized-holder-inequality} is provided in \cite{nikolova2017new}. Inequality \eqref{generalized-holder-inequality} is also known as the \textbf{Continuous Form of Holder Inequality} by \cite{dunford1958linear} and the \textbf{Extended Holder Inequality} by \cite{kwon1995extension}.
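The inequality can be sanity-checked numerically on finite spaces, where both sides reduce to weighted geometric and arithmetic means. The following self-contained Python snippet (our own illustration, not part of the proof) verifies the discrete analogue on random positive matrices $f$.

```python
import numpy as np

rng = np.random.default_rng(1)

def holder_sides(f, p, q):
    """Finite-space version of the continuous Holder inequality:
    sum_j q_j * prod_i f_ij^{p_i}  <=  prod_i (sum_j f_ij q_j)^{p_i},
    where p is a probability vector over X and q >= 0 are weights over Y."""
    lhs = np.sum(q * np.exp(p @ np.log(f)))   # weighted geometric mean over X
    rhs = np.exp(np.sum(p * np.log(f @ q)))   # mix over Y first, then over X
    return lhs, rhs

for _ in range(100):
    nx, ny = rng.integers(2, 6, size=2)
    f = rng.uniform(0.1, 5.0, size=(nx, ny))  # positive kernel f(x, y)
    p = rng.dirichlet(np.ones(nx))            # u(x) dmu(x), total mass 1
    q = rng.uniform(0.0, 2.0, size=ny)        # v(y) dnu(y), arbitrary >= 0
    lhs, rhs = holder_sides(f, p, q)
    assert lhs <= rhs + 1e-9
print("generalized Holder inequality verified on random instances")
```

Intuitively, the discrete statement says that the weighted geometric mean of sums dominates the sum of weighted geometric means.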
The proof can be found within the mentioned works. Simplified versions of the inequality appear in \cite{nikolova1991some,nikolova2016continuous}. Now we prove our main Theorem \ref{theorem-main}. Consider the pool $\mathcal{N}$ of experts. Let $\gamma^{n}:\mathcal{X}\rightarrow \Gamma$ be their measurable forecasts and $w^{1},\dots, w^{N}$ be the experts' weights. We denote the forecast aggregated according to $\Sigma_{\lambda_{u,\mu}}$ by $\overline{\gamma}\in\mathcal{M}(\Gamma^{\mathcal{X}})$. In the following proof we will directly check that for every $\omega\in\mathcal{M}(\Omega^{\mathcal{X}})$ it holds true (in fact, for all admissible $u$ and $\mu$) that \begin{equation} \exp\big[-\eta \lambda_{u,\mu}(\overline{\gamma},\omega)\big]\geq \sum_{n=1}^{N}w^{n}\exp(-\eta \lambda_{u,\mu}(\gamma^{n},\omega)), \label{integral-mixability-requirement} \end{equation} so that $\Sigma_{\lambda_{u,\mu}}$ is a proper substitution function and $\lambda_{u,\mu}$ is indeed $\eta$-mixable. \begin{proof} Choose any $\omega\in \mathcal{M}(\Omega^{\mathcal{X}})$. Since $\lambda$ is $\eta$-mixable with substitution function $\Sigma_{\lambda}$, for all $x\in \mathcal{X}$ we have $$\exp\big[-\eta\lambda\big(\overline{\gamma}(x), \omega(x)\big)\big]\geq \sum_{n=1}^{N}w^{n}\bigg[\exp\big[-\eta \lambda\big( \gamma^{n}(x), \omega(x)\big)\big]\bigg] .$$ We take the logarithm of both parts of the inequality and for every $x\in \mathcal{X}$ obtain $$ -\eta\lambda\big(\overline{\gamma}(x), \omega(x)\big)\geq \log\sum_{n=1}^{N}w^{n} \bigg[\exp \big[-\eta \lambda\big(\gamma^{n}(x), \omega(x)\big)\big]\bigg]. $$ We multiply both sides by $u_{\omega}(x)\geq 0$ and integrate over all $x\in \mathcal{X}$ w.r.t. 
measure $\mu_{\omega}$: \begin{eqnarray}\underbrace{\int_{\mathcal{X}}\big[-\eta \lambda\big(\overline{\gamma}(x), \omega(x)\big)\big]u_{\omega}(x)d\mu_{\omega}(x)}_{-\eta \lambda_{u,\mu}(\overline{\gamma},\omega)}\geq \nonumber \\ \int_{\mathcal{X}}\log\sum_{n=1}^{N}w^{n}\bigg[\exp \big[-\eta \lambda\big(\gamma^{n}(x), \omega(x)\big)\big]\bigg]u_{\omega}(x)d\mu_{\omega}(x). \label{intermediate-mix} \end{eqnarray} The left-hand side of inequality \eqref{intermediate-mix} equals $-\eta \lambda_{u,\mu}(\overline{\gamma},\omega)$. Next, for $x\in\mathcal{X}$ and $n\in\mathcal{N}$ we define $$f(x,n):=\exp \big[-\eta \lambda\big(\gamma^{n}(x), \omega(x)\big)\big].$$ With this notation, taking the exponential of both sides of \eqref{intermediate-mix}, we obtain \begin{equation} \exp\big[-\eta \lambda_{u,\mu}(\overline{\gamma},\omega)\big]\geq \exp\bigg(\int_{\mathcal{X}}\log \big[\sum_{n=1}^{N}w^{n}\cdot f(x,n)\big] u_{\omega}(x)d\mu_{\omega}(x)\bigg). \label{exp-intemediate-mix} \end{equation} The final step is to apply Theorem \ref{theorem-generalized-holder-inequality}, i.e. the Generalized Holder inequality \eqref{generalized-holder-inequality}.
In the notation of Theorem \ref{theorem-generalized-holder-inequality} we use $\mathcal{Y}:=\mathcal{N}$, $v(y)\equiv 1$, $\nu(y):=w^{y}$ and obtain \begin{eqnarray}\exp\bigg(\int_{\mathcal{X}}\log \big[\sum_{n=1}^{N}w^{n}\cdot f(x,n)\big] u_{\omega}(x)d\mu_{\omega}(x)\bigg)\geq \nonumber \\ \sum_{n=1}^{N}w^{n}\exp\bigg(\int_{\mathcal{X}}\log\big(f(x,n)\big)u_{\omega}(x)d\mu_{\omega}(x)\bigg)= \nonumber \\ \sum_{n=1}^{N}w^{n}\bigg[\exp\bigg(\int_{\mathcal{X}}\underbrace{\log\big(f(x,n)\big)}_{-\eta \lambda(\gamma^{n}(x),\omega(x))}u_{\omega}(x)d\mu_{\omega}(x)\bigg)\bigg]= \nonumber \\ \sum_{n=1}^{N}w^{n}\bigg[\exp\bigg(-\eta\underbrace{\int_{\mathcal{X}}\lambda(\gamma^{n}(x),\omega(x))u_{\omega}(x)d\mu_{\omega}(x)}_{\lambda_{u,\mu}(\gamma^{n},\omega)}\bigg)\bigg]= \nonumber \\ \sum_{n=1}^{N}w^{n}\exp(-\eta \lambda_{u,\mu}(\gamma^{n},\omega)). \label{holder-to-mix} \end{eqnarray} Now we combine \eqref{holder-to-mix} with \eqref{exp-intemediate-mix} and obtain the desired inequality \eqref{integral-mixability-requirement}. \end{proof} \section{Forecasting of Probability Distributions} \label{sec-forecasting-probs} In this section, we consider online probabilistic forecasting. At each step of the game the experts provide forecasts as probability distributions on $\mathcal{X}$. The learner has to aggregate the forecasts into a single forecast which is again a probability distribution on $\mathcal{X}$. Next, the true probability distribution, possibly empirical, is revealed. Both the experts and the learner suffer losses using a loss function. We analyse loss functions that are widely used for comparing probability distributions, show that they are actually integral loss functions, and prove their mixability (exp-concavity). The results are summarised in Table 1 of Subsection \ref{sec-table}. Each of the following subsections is devoted to a particular loss function. \subsection{Mixability \& Exp-concavity Table} \label{sec-table} Our results on mixability (exp-concavity) of common loss functions used to compare probability distributions are summarised in Table 1.
The column "\textbf{empirical outcomes}"\ indicates whether it is possible to use the loss function to compare the predicted distribution $\gamma$ (typically continuous) with a discrete outcome $\omega$ (empirical distribution). \begin{table}[h] \tiny \centering \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{\textbf{Loss Function}} & \multirow{2}{*}{\textbf{Specifications}} & \multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}Empirical\\ Outcomes\end{tabular}}} & \multicolumn{2}{c|}{\textbf{Aggregating Rule}} \\ \cline{4-5} & & & \textbf{Exp-concavity} & \textbf{Mixability} \\ \hline \begin{tabular}[c]{@{}c@{}}Continuous Ranking\\ Probability Score\\ (CRPS)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Borel distributions on\\ $\mathcal{X}=\prod_{d=1}^{D}[a_{d},b_{d}]$\end{tabular} & Possible & \begin{tabular}[c]{@{}c@{}}Mixture for\\ $\eta=\frac{1}{2\prod_{d=1}^{D}(b_{d}-a_{d})}$\end{tabular} & \begin{tabular}[c]{@{}c@{}}Formula for\\ $\eta=\frac{2}{\prod_{d=1}^{D}(b_{d}-a_{d})}$\end{tabular} \\ \hline \begin{tabular}[c]{@{}c@{}}Sliced Continuous\\ Ranking Probability\\ Score (SCRPS)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Borel distributions on\\ $\mathcal{X}\subset \text{Ball}_{\mathbb{R}^{D}}(0, R)$\end{tabular} & Possible & \begin{tabular}[c]{@{}c@{}}Mixture for\\ $\eta=\frac{1}{8R}$\end{tabular} & \begin{tabular}[c]{@{}c@{}}No closed form,\\ $\eta=\frac{1}{2R}$\end{tabular} \\ \hline \begin{tabular}[c]{@{}c@{}}Energy-Based\\ Distance ($\mathcal{E}$)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Borel distributions on\\ $\mathcal{X}\subset \text{Ball}_{\mathbb{R}^{D}}(0, R)$\end{tabular} & Possible & \begin{tabular}[c]{@{}c@{}}Mixture for\\ $\eta=\frac{S_{D-2}}{8R(D-1)S_{D-1}}$\end{tabular} & \begin{tabular}[c]{@{}c@{}}No closed form,\\ $\eta=\frac{S_{D-2}}{2R(D-1)S_{D-1}}$\end{tabular} \\ \hline \begin{tabular}[c]{@{}c@{}}Kullback–Leibler\\ divergence (KL)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Distributions on\\ $(\mathcal{X},\sigma,\mu)$ with\\ non-zero 
density\end{tabular} & \begin{tabular}[c]{@{}c@{}}Reduces to\\ Log-loss\end{tabular} & \multicolumn{2}{c|}{Mixture for $\eta=1$} \\ \hline \begin{tabular}[c]{@{}c@{}}Beta-2 divergence\\ ($\mathcal{B}_{2}$)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Distributions on\\ $(\mathcal{X},\sigma,\mu)$ with $\|\mu\|_{1}<\infty$\\ and $M$-bounded density\end{tabular} & If $|\mathcal{X}|<\infty$ & \begin{tabular}[c]{@{}c@{}}Mixture for\\ $\eta=\frac{1}{2\|\mu\|_{1}M^{2}}$\end{tabular} & \begin{tabular}[c]{@{}c@{}}No closed form,\\ $\eta=\frac{2}{\|\mu\|_{1}M^{2}}$\end{tabular} \\ \hline \begin{tabular}[c]{@{}c@{}}Characteristic\\ function\\ discrepancy (CFD)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Borel distributions\\ on $\mathcal{X}\subset\mathbb{R}^{D}$\end{tabular} & Possible & \begin{tabular}[c]{@{}c@{}}Mixture for\\ $\eta=\frac{1}{8}$\end{tabular} & \begin{tabular}[c]{@{}c@{}}No closed form,\\ $\eta=\frac{1}{4}$\end{tabular} \\ \hline \begin{tabular}[c]{@{}c@{}}Maximum mean\\ discrepancy (MMD)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Borel distributions\\ on $\mathcal{X}\subset\mathbb{R}^{D}$,\\Positive definite kernel\\$k(x,y)=\psi(x-y)$ for\\positive definite function\\$\psi(x)=\int_{\mathbb{R}^{D}}e^{-i\langle x,t\rangle}d\mu(t)$\end{tabular} & Possible & \begin{tabular}[c]{@{}c@{}}Mixture for\\ $\eta=\frac{1}{8\|\mu\|_{1}}$\end{tabular} & \begin{tabular}[c]{@{}c@{}}No closed form,\\ $\eta=\frac{1}{4\|\mu\|_{1}}$\end{tabular} \\ \hline \begin{tabular}[c]{@{}c@{}}1-dimensional\\ optimal transport\\ cost (OT)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Borel distributions\\ on $\mathcal{X}\subset\mathbb{R}$;\\ $\eta$-mixable\\ (exp-concave)\\ cost $c:\mathcal{X}\times\mathcal{X}\rightarrow \mathbb{R}$\\ satisfying $\frac{\partial^{2} c}{\partial x\partial x'}<0$\end{tabular} & Possible & \begin{tabular}[c]{@{}c@{}}Wasserstein-2\\ barycenter\end{tabular} & \begin{tabular}[c]{@{}c@{}}Mixed quantile\\for monotone\\ substitution $\Sigma_{c}$;\\No closed form\\for
arbitrary\\substitution,\\but can be modeled\\implicitly (Lemma \ref{lemma-implicit})\end{tabular} \\ \hline \begin{tabular}[c]{@{}c@{}}Sliced \\ Wasserstein-2\\ distance ($\text{SW}_{2}$)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Radon distributions on\\ $\mathcal{X}\subset \text{Ball}_{\mathbb{R}^{D}}(0, R)$,\\scaled and translated\\copies of each other.\end{tabular} & Possible & \begin{tabular}[c]{@{}c@{}}Sliced\\ Wasserstein-2\\ barycenter for\\ $\eta=\frac{1}{8R^{2}}$\\(Lemma \ref{lemma-scale-rot-bar})\end{tabular} & \begin{tabular}[c]{@{}c@{}}Unknown,\\ $\eta=\frac{1}{2R^{2}}$?\\(see Subsection \ref{sec-sliced-ot})\end{tabular} \\ \hline \end{tabular} \label{mix-table} \caption{Mixability \& exp-concavity of various loss functions used for assessing probabilistic forecasts.} \end{table} \subsection{Continuous Ranking Probability Score} \label{sec-crps} \subsubsection{One-Dimensional Case} Let $\mathcal{X}=[a,b]\subset \mathbb{R}$ and assume that $\Gamma=\Omega$ is the space of all probability measures over the Borel field of $\mathcal{X}$. In this case the \textbf{Continuous Ranking Probability Score} (CRPS) by \cite{matheson1976scoring,von1977continuous} is widely used for comparing probability distributions: \begin{equation} \text{CRPS}(\gamma,\omega)=\int_{a}^{b}|\text{CDF}_{\gamma}(x)-\text{CDF}_{\omega}(x)|^{2}dx, \label{crps-definition} \end{equation} where by $\text{CDF}_{\upsilon}:\mathcal{X}\rightarrow[0,1]$ we denote the cumulative distribution function of the probability distribution $\upsilon\in\Gamma$, i.e. ${\text{CDF}_{\upsilon}(x)=\upsilon([a,x])}$ for all $x\in[a,b]$. We visualize CRPS in Figure \ref{fig:crps}.
\begin{figure}[!h] \centering \begin{subfigure}[b]{0.48\columnwidth} \centering \includegraphics[width=\linewidth]{CRPS} \caption{Arbitrary distribution $\omega$.} \end{subfigure} \begin{subfigure}[b]{0.48\columnwidth} \centering \includegraphics[width=\linewidth]{CRPS-E} \caption{Empirical (Dirac) distribution $\omega$.} \end{subfigure} \caption{Visualization of the comparison of CDFs of distributions $\gamma,\omega$ on $[a,b]$ by using the Continuous Ranking Probability Score.} \label{fig:crps} \end{figure} In practice, CRPS is used, for example, for assessing the quality of forecasts of different weather-related variables \cite{zamo2018estimation}. To the best of our knowledge, CRPS is so far the only non-trivial loss on continuous outcomes that was already known to be mixable. Indeed, \cite{v2019online,dzhamtyrova2019competitive} proved the mixability of CRPS, but only for the case when the outcome $\omega$ is a Dirac distribution $\omega=\delta_{x_{0}}$ for some $x_{0}\in[a,b]$. Within our integral mixability framework, CRPS equals the integral squared loss on $[a,b]$ with density $u_{\omega}(x)\equiv \frac{1}{b-a}$ and Lebesgue measure $\mu_{\omega}$, multiplied by $(b-a)$: \begin{equation}\text{CRPS}(\gamma,\omega)=(b-a)\underbrace{\int_{a}^{b}|\text{CDF}_{\gamma}(x)-\text{CDF}_{\omega}(x)|^{2}\overbrace{\frac{1}{b-a}}^{u_{\omega}(x)}dx}_{\text{Integral squared loss}}. \label{crps-to-integral} \end{equation} Since for all $x\in[a,b]$ it holds true that $\text{CDF}_{\gamma}(x), \text{CDF}_{\omega}(x)\in [0, 1]$, the point-wise loss function ${|\text{CDF}_{\gamma}(x)-\text{CDF}_{\omega}(x)|^{2}}$ is $2$-mixable ($\frac{1}{2}$-exp-concave). Thus, by Theorem \ref{theorem-main} the corresponding integral loss on $[a,b]$ is also $2$-mixable ($\frac{1}{2}$-exp-concave). We conclude that CRPS is mixable (exp-concave) but with a $(b-a)$ times lower learning rate, i.e. $\eta=\frac{2}{(b-a)}$ (or $\eta=\frac{1}{2(b-a)}$ for exp-concavity).
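The last conclusion is easy to check numerically. The following Python sketch (our own toy CDFs and grid discretization, purely illustrative) approximates CRPS by the mean squared CDF difference on a grid and verifies the exp-concavity inequality for the mixture aggregate on an empirical (Dirac) outcome.

```python
import numpy as np

def crps(cdf_g, cdf_w, a, b):
    """Grid approximation of CRPS: mean squared CDF
    difference times the interval length (b - a)."""
    return (b - a) * np.mean((cdf_g - cdf_w) ** 2)

a, b = 0.0, 1.0
x = np.linspace(a, b, 501)
# Two toy expert forecasts given by logistic CDFs (hypothetical data).
cdf1 = 1.0 / (1.0 + np.exp(-20.0 * (x - 0.3)))
cdf2 = 1.0 / (1.0 + np.exp(-20.0 * (x - 0.7)))
w = np.array([0.4, 0.6])
# Exp-concave aggregation: mixture of the distributions, i.e. the
# weighted average of the CDFs, valid at rate eta = 1 / (2 (b - a)).
cdf_bar = w[0] * cdf1 + w[1] * cdf2

eta = 1.0 / (2.0 * (b - a))
cdf_omega = (x >= 0.5).astype(float)   # empirical (Dirac) outcome at 0.5
lhs = np.exp(-eta * crps(cdf_bar, cdf_omega, a, b))
rhs = (w[0] * np.exp(-eta * crps(cdf1, cdf_omega, a, b))
       + w[1] * np.exp(-eta * crps(cdf2, cdf_omega, a, b)))
print(lhs >= rhs)  # the exp-concavity inequality holds for the mixture
```

The same check can be run with any measurable outcome CDF in place of the Dirac one; the inequality holds for all of them by the theorem above.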
For $\frac{2}{(b-a)}$-mixable CRPS the aggregated prediction $\overline{\gamma}$ is given by its CDF: \begin{equation}\overline{\text{CDF}}(x)=\frac{1}{2}+\frac{1}{4}\log \frac{\sum_{n=1}^{N}w^{n}\exp[-2\big(1-\text{CDF}_{\gamma^{n}}(x)\big)^{2}]}{\sum_{n=1}^{N}w^{n}\exp[-2\big(\text{CDF}_{\gamma^{n}}(x)\big)^{2}]}, \label{crps-substitution} \end{equation} for weights ${(w^{1},\dots,w^{N})\in\Delta_{N}}$ and experts' predictions $\gamma^{1},\dots,\gamma^{N}\in\Gamma$. It equals the point-wise application of $\Sigma_{L^{2}}^{[0,1]}$ (defined by \eqref{squared-loss-substitution}) to the experts' forecast CDFs. One may perform a straightforward check to verify that the resulting aggregated function \eqref{crps-substitution} is indeed a CDF. For $\frac{1}{2(b-a)}$-exponentially concave CRPS we have \begin{equation} \overline{\text{CDF}}(x)={\sum_{n=1}^{N}w^{n}\cdot \text{CDF}_{\gamma^{n}}(x)}, \label{crps-wa-substitution} \end{equation} which means that the aggregated prediction $\overline{\gamma}$ is a mixture of the probability distributions $\gamma^{n}$ w.r.t. the weights $w^{n}$. \subsubsection{Multi-Dimensional Case} \label{sec-scrps} Let $\mathcal{X}\subset \mathbb{R}^{D}$ be a compact subset. Assume that $\Gamma=\Omega$ is the space of all probability measures over the Borel field of $\mathcal{X}$. One may naturally extend the CRPS formula \eqref{crps-definition} for comparing multi-dimensional distributions $\gamma,\omega$ on $\mathcal{X}$: \begin{equation}\text{CRPS}(\text{CDF}_{\gamma},\text{CDF}_{\omega})=\int_{\mathcal{X}}|\text{CDF}_{\gamma}(x)-\text{CDF}_{\omega}(x)|^{2}dx, \label{crps-multidim-definition} \end{equation} where $\text{CDF}_{\upsilon}:\mathcal{X}\rightarrow [0,1]$ denotes the CDF of a multi-dimensional $\upsilon\in\Gamma$.
Similarly to the analysis of the previous subsection, one may prove that if $\mathcal{X}\subset \prod_{d=1}^{D}[a_{d},b_{d}]$, then the loss \eqref{crps-multidim-definition} is ${\big[2\cdot \big(\prod_{d=1}^{D}(b_{d}-a_{d})\big)^{-1}\big]}$-mixable (and exp-concave with a $4$ times lower rate), and obtain an analogue of the aggregation rule \eqref{crps-substitution} for mixability and of \eqref{crps-wa-substitution} for exp-concavity. The analogue of \eqref{crps-wa-substitution} is straightforward: the aggregated distribution is a mixture. However, the substitution analogue of \eqref{crps-substitution} is hard to use in high-dimensional spaces, e.g. to sample from the distribution or to calculate moments, although modern Markov Chain Monte Carlo methods can be used, see \cite{brooks2011handbook}. Besides, definition \eqref{crps-multidim-definition} is not invariant under rotations of the probability measure and is sensitive to the choice of the coordinate system. To overcome the above-mentioned issues, we propose to use the \textbf{Sliced Continuous Ranking Probability Score} (SCRPS) which reduces to the estimation of multiple one-dimensional CRPS scores and is invariant to the choice of the coordinate system. First, for all $\upsilon\in\Gamma$ we define $\text{SCDF}: \mathbb{S}^{D-1}\times \mathbb{R}\rightarrow [0,1]$ by $$\text{SCDF}_{\upsilon}(\theta,t)=\upsilon\big(\{x\in\mathcal{X}\text{ }|\text{ }\langle x,\theta\rangle\leq t\}\big),$$ where $\theta$ belongs to the unit sphere $\mathbb{S}^{D-1}=\{x\in \mathbb{R}^{D}:\|x\|_{2}=1\}$. The function $\text{SCDF}_{\upsilon}(\theta,\cdot)$ is the CDF of the distribution $\upsilon$ projected onto the line orthogonal to the hyperplane $\langle x,\theta\rangle=0$. Let $S_{D-1}$ be the surface area of the $(D-1)$-dimensional unit sphere. Now we define SCRPS: \begin{equation}\text{SCRPS}(\gamma,\omega)=\frac{1}{S_{D-1}}\int_{\mathbb{S}^{D-1}}\bigg[\int_{-\infty}^{\infty}|\text{SCDF}_{\gamma}(\theta,t)-\text{SCDF}_{\omega}(\theta,t)|^{2}dt\bigg]d\theta.
\label{scrps-definition} \end{equation} From the definition we see that SCRPS is the average over all the sliced CRPS scores. Thus, similarly to \textbf{Sliced Wasserstein distances} \cite{kolouri2018sliced}, its stochastic computation in practice can be efficiently performed via projections on random directions ${\theta\in \mathbb{S}^{D-1}}$. Let us prove that SCRPS is mixable (exp-concave) for bounded $\mathcal{X}\subset \text{Ball}_{\mathbb{R}^{D}}(0,R)$. In this case, the limits $\pm \infty$ of the inner integral in \eqref{scrps-definition} are replaced by $\pm R$, respectively. We have \begin{eqnarray}\text{SCRPS}(\gamma,\omega)= \nonumber \\ 2R\underbrace{\int_{\mathbb{S}^{D-1}\times [-R,R]}|\text{SCDF}_{\gamma}(\theta,t)-\text{SCDF}_{\omega}(\theta,t)|^{2}\frac{1}{2RS_{D-1}}d\theta dt}_{\text{Integral squared loss}}, \label{scrps-to-integral} \end{eqnarray} which, similarly to \eqref{crps-to-integral}, reduces SCRPS to a multiple integral of the squared loss, which is $2$-mixable ($\frac{1}{2}$-exp-concave) on $[0,1]$. Thus, the integral squared loss is $2$-mixable ($\frac{1}{2}$-exp-concave) by Theorem \ref{theorem-main}. We conclude that SCRPS is also mixable (exp-concave) but with a $2R$ times lower learning rate. For $\frac{1}{2R}$-exp-concave SCRPS the aggregated prediction $\overline{\gamma}$ is given by its sliced CDF: \begin{equation}\overline{\text{SCDF}}(\theta,t)=\sum_{n=1}^{N}w^n \cdot\text{SCDF}_{\gamma^{n}}(\theta,t) \label{scrps-exp-substitution}\end{equation} for experts' predictions $\gamma^{1},\dots,\gamma^{N}\in\Gamma$ and weights ${(w^{1},\dots,w^{N})\in\Delta_{N}}$. The mixture $\overline{\gamma}$ of the probability distributions $\gamma^{n}$ w.r.t. the weights $w^{n}$ can be used as the aggregated prediction because its sliced CDF equals \eqref{scrps-exp-substitution}.
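In line with the remark on stochastic computation via random projections, here is a Python sketch (our own illustrative estimator, with synthetic samples; not part of the paper's algorithms) of a Monte-Carlo SCRPS estimate for empirical distributions: it averages discretized one-dimensional CRPS scores of the projections over random directions $\theta$.

```python
import numpy as np

rng = np.random.default_rng(2)

def scrps_mc(xs, ys, R=1.0, n_dirs=64, n_grid=256):
    """Monte-Carlo estimate of SCRPS between two empirical distributions
    given by sample arrays xs, ys of shape (n, D) supported in Ball(0, R):
    average the discretized 1-d CRPS of projections on random directions."""
    D = xs.shape[1]
    t = np.linspace(-R, R, n_grid)
    dt = t[1] - t[0]
    total = 0.0
    for _ in range(n_dirs):
        theta = rng.normal(size=D)
        theta /= np.linalg.norm(theta)        # uniform direction on the sphere
        px, py = xs @ theta, ys @ theta       # 1-d projections of the samples
        cdf_x = (px[None, :] <= t[:, None]).mean(axis=1)   # empirical SCDFs
        cdf_y = (py[None, :] <= t[:, None]).mean(axis=1)
        total += np.sum((cdf_x - cdf_y) ** 2) * dt
    return total / n_dirs

xs = rng.uniform(-0.3, 0.3, size=(500, 3))
ys = rng.uniform(-0.3, 0.3, size=(500, 3)) + 0.2   # shifted copy
s_same, s_diff = scrps_mc(xs, xs.copy()), scrps_mc(xs, ys)
print(s_same, s_diff)  # zero for identical samples, positive for shifted ones
```

The estimator is unbiased in $\theta$ up to the grid discretization of the inner integral; more directions and a finer grid trade computation for accuracy.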
For the $\frac{1}{R}$-mixable SCRPS the aggregated prediction's sliced CDF should satisfy \begin{equation}\overline{\text{SCDF}}(\theta,t)=\frac{1}{2}+\frac{1}{4}\log \frac{\sum_{n=1}^{N}w^{n}\exp[-2\big(1-\text{SCDF}_{\gamma^{n}}(\theta,t)\big)^{2}]}{\sum_{n=1}^{N}w^{n}\exp[-2\big(\text{SCDF}_{\gamma^{n}}(\theta,t)\big)^{2}]}. \label{scrps-substitution} \end{equation} However, we do not know whether $\overline{\text{SCDF}}(\theta,t)$ is necessarily a \textbf{sliced CDF} of some distribution $\overline{\gamma}$. To fix this issue, one may consider the \textbf{projection trick} and define $$\overline{\gamma}=\argmin_{\gamma\in\Gamma}\bigg(\frac{1}{S_{D-1}}\int_{\mathbb{S}^{D-1}}\bigg[\int_{-\infty}^{\infty}|\text{SCDF}_{\gamma}(\theta,t)-\overline{\text{SCDF}}(\theta,t)|^{2}dt\bigg]d\theta\bigg).$$ SCRPS is the squared norm of the difference of SCDFs in the space ${\mathcal{L}^{2}:=\mathcal{L}^{2}(\mathbb{S}^{D-1}\times [-R,R])}$ of square-integrable functions (w.r.t. the product of the uniform probability measure on $\mathbb{S}^{D-1}$ and the Lebesgue measure on $[-R,R]$). Thus, $\text{SCDF}_{\overline{\gamma}}$ can be viewed as the projection of $\overline{\text{SCDF}}$ onto the convex subset of sliced cumulative distribution functions. Hence, for all $\omega\in\Omega$ it satisfies $$\|\text{SCDF}_{\omega}-\overline{\text{SCDF}}\|_{\mathcal{L}^{2}}^{2}\geq \|\text{SCDF}_{\omega}-\text{SCDF}_{\overline{\gamma}}\|_{\mathcal{L}^{2}}^{2}=\text{SCRPS}(\omega,\overline{\gamma}),$$ i.e. the distribution $\overline{\gamma}$ can be used as the aggregated prediction. Although the projection trick potentially allows one to obtain an admissible aggregated SCDF and the corresponding distribution, we do not know whether the computation of $\overline{\gamma}$ is feasible for SCRPS in a general practical case. Thus, from the computational point of view, formula \eqref{scrps-exp-substitution} should be preferred even though it corresponds to a $4$-times lower learning rate. 
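To make the point-wise rule \eqref{scrps-substitution} concrete, here is a small sketch (the function name is ours) of the underlying substitution for the $2$-mixable squared loss on $[0,1]$, applied to the SCDF values of the experts at a fixed pair $(\theta,t)$.

```python
import numpy as np

def squared_loss_substitution(preds, weights):
    """Substitution function for the 2-mixable squared loss on [0, 1].

    preds: expert predictions g^1, ..., g^N in [0, 1] (e.g. the values
    SCDF_{gamma^n}(theta, t) at a fixed (theta, t)); weights: a point of
    the simplex Delta_N.  Returns the aggregated prediction.
    """
    g, w = np.asarray(preds, float), np.asarray(weights, float)
    a1 = np.sum(w * np.exp(-2.0 * (1.0 - g) ** 2))
    a0 = np.sum(w * np.exp(-2.0 * g ** 2))
    return 0.5 + 0.25 * np.log(a1 / a0)
```

One can verify numerically that $\exp(-2(\overline{g}-\omega)^{2})\geq\sum_{n}w^{n}\exp(-2(g^{n}-\omega)^{2})$ for all outcomes $\omega\in[0,1]$, which is exactly the $2$-mixability inequality applied point-wise.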
\subsection{Energy-Based Distance} Let $\mathcal{X}\subset \mathbb{R}^{D}$ be a compact subset and let $\Gamma=\Omega$ be the space of all probability measures over the Borel field of $\mathcal{X}$. For $\gamma,\omega\in\Gamma$ we consider the \textbf{Energy-Based Distance}: \begin{equation} \mathcal{E}(\gamma,\omega)=2\cdot \mathbb{E}_{\gamma\times\omega}\|x-y\|_{2}-\mathbb{E}_{\gamma\times\gamma}\|x-x'\|_{2}-\mathbb{E}_{\omega\times\omega}\|y-y'\|_{2}. \label{energy-based-distance} \end{equation} It is far from trivial to show that $\mathcal{E}$ actually defines a metric on the space of probability distributions, see e.g. \cite{rizzo2016energy} for explanations. Formula \eqref{energy-based-distance} naturally admits unbiased estimates from empirical samples, which makes the Energy-Based distance attractive for use in generative models, see \cite{bellemare2017cramer}. In practice, the Energy-Based distance is also used for statistical hypothesis testing, see \cite{rizzo2016energy}. Note that at first glance the Energy-Based distance defines an $\mathcal{L}^{1}$-like metric because it operates with distances in $\mathcal{X}\subset \mathbb{R}^{D}$ rather than their squares. Since the $\mathcal{L}^{1}$-loss is not mixable (unlike the squared $\mathcal{L}^{2}$-loss), it is reasonable to expect that $\mathcal{E}$ is also not mixable (exp-concave). Surprisingly, the Energy-Based distance equals SCRPS up to a constant which depends on the dimension $D$. Thus, the Energy-Based distance is mixable (exp-concave). The equivalence is known for $D=1$, see \cite{rizzo2016energy}. We prove it for arbitrary ${D>1}$. \begin{theorem}[Equivalence of SCRPS and Energy-Based Distance]Let $\gamma,\omega$ be two Borel probability measures on $\mathcal{X}=\mathbb{R}^{D}$ with finite first moments, i.e. $\mathbb{E}_{X\sim\gamma}\|X\|, \mathbb{E}_{Y\sim\omega}\|Y\|<\infty$. 
Then for $D>1$ $$\mathcal{E}(\gamma,\omega)=(D-1)\frac{S_{D-1}}{S_{D-2}}\text{\normalfont SCRPS}(\gamma,\omega).$$ \label{thm-energy-scrps} \end{theorem} \begin{proof} For $x\in\mathcal{X}$ consider the value $s(x)=\int_{\theta\in \mathbb{S}^{D-1}}|\langle x,\theta\rangle| d\theta$. Note that by rotational symmetry $s(x)$ depends only on $\|x\|$, i.e. $$s(x)=\int_{\theta\in \mathbb{S}^{D-1}}|\langle x,\theta\rangle| d\theta=\|x\|\cdot \int_{\theta\in \mathbb{S}^{D-1}}|\theta_{1}| d\theta=\|x\|\cdot s(1),$$ where $\theta_{1}$ is the first coordinate of $\theta=(\theta_{1},\dots,\theta_{D})\in\mathbb{S}^{D-1}\subset\mathbb{R}^{D}$. Thus, \begin{eqnarray}\mathbb{E}_{\gamma\times\gamma}\|x-x'\|=\mathbb{E}_{\gamma\times\gamma}\frac{s(x-x')}{s(1)}=\nonumber \\ \frac{1}{s(1)}\mathbb{E}_{\gamma\times\gamma}\big[\int_{\theta\in \mathbb{S}^{D-1}}|\langle x-x',\theta\rangle| d\theta\big]= \nonumber \\ \frac{1}{s(1)}\int_{\theta\in \mathbb{S}^{D-1}}\big[\mathbb{E}_{\gamma\times\gamma}|\langle x-x',\theta\rangle|\big] d\theta= \nonumber \\ \frac{1}{s(1)}\int_{\theta\in \mathbb{S}^{D-1}}\big[\mathbb{E}_{\gamma\times\gamma}|\langle x,\theta\rangle-\langle x',\theta\rangle|\big] d\theta. \label{abs-to-slice} \end{eqnarray} We derive analogues of \eqref{abs-to-slice} for the other terms of \eqref{energy-based-distance} and obtain \begin{eqnarray}\mathcal{E}(\gamma,\omega)=\frac{1}{s(1)}\int_{\theta\in \mathbb{S}^{D-1}}\bigg[2\mathbb{E}_{\gamma\times\omega}|\langle x,\theta\rangle-\langle y,\theta\rangle| \nonumber \\ -\mathbb{E}_{\gamma\times\gamma}|\langle x,\theta\rangle-\langle x',\theta\rangle|-\mathbb{E}_{\omega\times\omega}|\langle y,\theta\rangle-\langle y',\theta\rangle|\bigg] d\theta. \label{energy-to-scrps-int} \end{eqnarray} The expression within the large square brackets of \eqref{energy-to-scrps-int} equals the Energy-Based distance between the $1$-dimensional projections of $\gamma$ and $\omega$ onto the direction $\theta$. 
According to \cite{szekely2003statistics,rizzo2016energy}, it equals CRPS multiplied by $2$, i.e. \eqref{energy-to-scrps-int} turns into \begin{eqnarray}\mathcal{E}(\gamma,\omega)=\frac{1}{s(1)}\int_{\theta\in \mathbb{S}^{D-1}}\bigg[2\int_{-\infty}^{\infty}|\text{SCDF}_{\gamma}(\theta,t)-\text{SCDF}_{\omega}(\theta,t)|^{2}dt\bigg] d\theta= \nonumber \\ \frac{2S_{D-1}}{s(1)}\cdot \text{SCRPS}(\gamma,\omega). \nonumber \end{eqnarray} Now we compute $s(1)$. Let $\mathbb{B}^{D}=\text{Ball}_{\mathbb{R}^{D}}(0,1)$ be the $D$-dimensional unit ball (whose boundary is $\mathbb{S}^{D-1}$) and let $V^{D}$ be the volume of $\mathbb{B}^{D}$. We note that \begin{eqnarray} \int_{\mathbb{B}^{D}}|\theta_{1}|d\theta=\int_{-1}^{1}|\theta_{1}|\cdot \big[(\sqrt{1-\theta_{1}^{2}})^{D-1}\cdot V^{D-1}\big]d\theta_{1}= \nonumber \\ 2\int_{0}^{1}\theta_{1}\cdot \big[(\sqrt{1-\theta_{1}^{2}})^{D-1}\cdot V^{D-1}\big]d\theta_{1}=\frac{2V^{D-1}}{D+1}, \label{int-using-slices} \end{eqnarray} where, to compute the integral, we decompose the ball into slices orthogonal to the first axis. Now we compute the same integral again, this time decomposing it into integrals over spheres: \begin{eqnarray} \int_{\mathbb{B}^{D}}|\theta_{1}|d\theta=\int_{0}^{1}\big[\int_{r\cdot\mathbb{S}^{D-1}}|\theta_{1}|d\theta\big]dr=\int_{0}^{1}r^{D}\underbrace{\big[\int_{\mathbb{S}^{D-1}}|\theta_{1}|d\theta\big]}_{s(1)}dr=\frac{s(1)}{D+1}. \label{int-using-spheres} \end{eqnarray} Finally, by matching \eqref{int-using-slices} with \eqref{int-using-spheres} and using the equality $V^{D-1}=\frac{S_{D-2}}{D-1}$, we conclude that $s(1)=2V^{D-1}=\frac{2S_{D-2}}{D-1}$ and finish the proof. \end{proof} From Theorem \ref{thm-energy-scrps} we immediately conclude that the Energy-Based distance is mixable (exp-concave). If $\mathcal{X}\subset\text{Ball}_{\mathbb{R}^{D}}(0,R)$, then the learning rate for mixability (exp-concavity) is $(D-1)\frac{S_{D-1}}{S_{D-2}}$ times lower than the analogous rate for SCRPS. 
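Formula \eqref{energy-based-distance} is straightforward to estimate from samples, which is one of the reasons for its popularity. A minimal plug-in (V-statistic) estimator might look as follows; the function names are ours, and an unbiased U-statistic version would exclude the diagonal pairs.

```python
import numpy as np

def energy_distance(x, y):
    """Plug-in (V-statistic) estimate of E(gamma, omega) from samples.

    x: (n, D) samples from gamma, y: (m, D) samples from omega.
    Implements 2 E||x - y|| - E||x - x'|| - E||y - y'|| with empirical
    expectations over all pairs (including the zero diagonal terms).
    """
    def mean_dist(a, b):
        # mean pairwise Euclidean distance between rows of a and rows of b
        return np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1).mean()
    return 2.0 * mean_dist(x, y) - mean_dist(x, x) - mean_dist(y, y)
```

The estimate is exactly zero when the two sample sets coincide and strictly positive for well-separated samples.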
The aggregated prediction is computed in exactly the same way as for SCRPS, see the discussion in Subsection \ref{sec-scrps}. \subsection{Density-Based Losses} \label{sec-density-based} Let $(\mathcal{X},\sigma,\mu)$ be a measure space with a $\sigma$-finite measure $\mu$. Denote by $\Omega=\Gamma$ the set of all probability measures on $(\mathcal{X},\sigma)$ that are absolutely continuous w.r.t. $\mu$. For every $\upsilon\in\Gamma$ we denote its density by $p_{\upsilon}(x)=\frac{d\upsilon(x)}{d\mu(x)}$. \subsubsection{Kullback-Leibler Divergence} \label{sec-rkl} Consider the \textbf{Kullback-Leibler (KL) Divergence} between the outcome and the predicted distribution: \begin{eqnarray}\text{KL}(\omega||\gamma)=-\int_{\mathcal{X}}\log\bigg[\frac{p_{\gamma}(x)}{p_{\omega}(x)}\bigg]p_{\omega}(x)d\mu(x)= \nonumber \\ \underbrace{-\int_{\mathcal{X}}\log [p_{\gamma}(x)] \cdot p_{\omega}(x)d\mu(x)}_{-\int_{\mathcal{X}}\log [p_{\gamma}(x)] d\omega(x)}-H_{\mu}(p_{\omega}). \label{kl-divergence} \end{eqnarray} The KL divergence is a key tool in Bayesian machine learning. It is probably the best-known representative of the class of $\alpha$-divergences \cite{amari2009alpha}, which is a subclass of $f$-divergences \cite{nowozin2016f}. By skipping the prediction-independent entropy term $-H_{\mu}(p_{\omega})$ one may clearly see that the resulting loss is an integral loss with density $u_{\omega}(x)=p_{\omega}(x)$ and $\mu_{\omega}\equiv \mu$ for the logarithmic loss function. The logarithmic loss is $1$-mixable (exp-concave); thus, from Theorem \ref{theorem-main} we conclude that the KL divergence is also $1$-mixable (exp-concave). For the $1$-mixable (exp-concave) KL divergence the aggregated prediction $\overline{\gamma}$ is given by its density $$\overline{p}(x)=\sum_{n=1}^{N}w^{n}p_{\gamma^{n}}(x)$$ for experts' predictions $\gamma^{1},\dots,\gamma^{N}\in\Gamma$ and weights ${(w^{1},\dots,w^{N})\in\Delta_{N}}$. 
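The mixture rule above can be checked directly: by Jensen's inequality, $-\log\sum_{n}w^{n}p_{\gamma^{n}}(x)\leq\sum_{n}w^{n}\big(-\log p_{\gamma^{n}}(x)\big)$ point-wise, and integrating over the outcome preserves the bound. A small numeric sketch (our own; the function name and inputs are hypothetical):

```python
import numpy as np

def logloss_of_mixture(expert_densities, weights):
    """Compare the log loss of a density mixture with the weighted expert losses.

    expert_densities: (N, K) array of positive density values p_{gamma^n}(x_k);
    weights: a point of the simplex Delta_N.  Returns (mixture loss, weighted
    expert loss) averaged over the points x_k; by Jensen's inequality the
    first never exceeds the second, which is 1-exp-concavity in action.
    """
    p = np.asarray(expert_densities, float)
    w = np.asarray(weights, float)
    mix_loss = np.mean(-np.log(w @ p))                     # loss of the mixture density
    expert_loss = np.sum(w * np.mean(-np.log(p), axis=1))  # weighted expert losses
    return mix_loss, expert_loss
```

Running the check on arbitrary positive density values always yields a mixture loss no larger than the weighted expert loss.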
The resulting $\overline{p}$ is the density of the mixture of probability distributions $\gamma^{n}$ w.r.t. weights $w^{n}$. The KL divergence (and the related log-loss) is known to have the \textbf{mode-seeking property}, i.e. the divergence $\text{KL}(p_{\omega}||p_{\gamma})$ is small when $p_{\gamma}$ attains large values in the areas where $p_{\omega}$ does. Such behaviour is known to be sensitive to \textbf{outliers}, and in some problems KL may not be a good measure of dissimilarity of distributions. Although some other representatives of the $f$-divergence class are known to be robust, e.g. the Reverse KL divergence, see \cite{regli2018alpha}, we do not know whether they are mixable. \subsubsection{Beta-2 Divergence} \label{sec-beta-2} A well-known representative of the class of \textbf{$\beta$-divergences} (\cite{cichocki2010families,cichocki2011generalized}) used to compare probability distributions is the \textbf{Beta-2 divergence}: $$\mathcal{B}_{2}(p_{\gamma}, p_{\omega})=\int_{\mathcal{X}}|p_{\gamma}(x)-p_{\omega}(x)|^{2}d\mu(x).$$ While the KL divergence is mode-seeking, Beta-2 is more \textbf{robust}, see \cite{regli2018alpha}. This property is useful when comparing distributions with outliers. When $|\mathcal{X}|<\infty$ and $\mu(x)\equiv 1$ for all $x\in\mathcal{X}$, the values $p_{\gamma}(x),p_{\omega}(x)$ become probabilities. In this case, within the framework of online learning, the loss is known as the \textbf{Brier score}. In practice it is widely used (similarly to CRPS) to assess the quality of weather forecasts \cite{rufibach2010use}. The Brier score between distributions on finite sets is mixable, see e.g. \cite{vovk2009prediction}. We consider a more general case. Assume that $\|\mu\|_{1}<\infty$ and $\Omega=\Gamma$ is the set of all probability measures $\upsilon$ on $(\mathcal{X},\sigma)$ with $p_{\upsilon}(x)\in [0, M]$ for all $x\in \mathcal{X}$. 
In this case, the point-wise quadratic loss function $|p_{\gamma}(x)-p_{\omega}(x)|^{2}$ is $\frac{2}{M^{2}}$-mixable ($\frac{1}{2M^{2}}$-exp-concave). The divergence $\mathcal{B}_{2}$ is a multiple of an integral squared loss: $$\mathcal{B}_{2}(p_{\gamma}, p_{\omega})=\|\mu\|_{1}\underbrace{\int_{\mathcal{X}}|p_{\gamma}(x)-p_{\omega}(x)|^{2}\frac{1}{\|\mu\|_{1}}d\mu(x)}_{\text{Integral squared loss.}}.$$ We conclude that $\mathcal{B}_{2}$ is $\frac{2}{\|\mu\|_{1}M^{2}}$-mixable ($\frac{1}{2\|\mu\|_{1}M^{2}}$-exp-concave). For the $\frac{1}{2\|\mu\|_{1}M^{2}}$-exp-concave Beta-2 divergence the aggregated prediction $\overline{\gamma}$ is given by its density: \begin{eqnarray} \overline{p}(x)=\sum_{n=1}^{N}w^{n}\cdot p_{\gamma^{n}}(x) \label{beta-2-exp-substitution} \end{eqnarray} for weights ${(w^{1},\dots,w^{N})\in\Delta_{N}}$ and experts' predictions $\gamma^{1},\dots,\gamma^{N}\in\Gamma$. The resulting $\overline{p}$ is the density of the mixture of probability distributions $\gamma^{n}$ w.r.t. weights $w^{n}$. For the $\frac{2}{\|\mu\|_{1}M^{2}}$-mixable Beta-2 divergence the density of the aggregated prediction $\overline{\gamma}$ should satisfy: \begin{equation} \overline{p}(x)=\frac{M}{2}+\frac{M}{4}\log \frac{\sum_{n=1}^{N}w^{n}\exp[-2\big(\frac{M-p_{\gamma^{n}}(x)}{M}\big)^{2}]}{\sum_{n=1}^{N}w^{n}\exp[-2\big(\frac{p_{\gamma^{n}}(x)}{M}\big)^{2}]}. \label{beta-2-pre-substitution} \end{equation} Similarly to SCRPS in Subsection \ref{sec-scrps}, the result of \eqref{beta-2-pre-substitution} may not be a density function w.r.t. $\mu$. This issue can be solved by the projection trick, i.e. by projecting $\overline{p}$ onto the convex subset of $\mathcal{L}^{2}(\mathcal{X},\mu)$ consisting of non-negative functions that integrate to one, i.e. densities of distributions w.r.t. $\mu$. The resulting projection will by definition be the density function of some distribution $\overline{\gamma}$. 
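For illustration, the point-wise rule \eqref{beta-2-pre-substitution} can be implemented as follows (a sketch with hypothetical names). A useful sanity property: when all experts agree, the rule returns their common density exactly.

```python
import numpy as np

def beta2_substitution(densities, weights, M):
    """Point-wise mixable substitution (rescaled squared-loss rule) sketch.

    densities: (N, K) array of expert density values p_{gamma^n}(x_k),
    each in [0, M]; weights: a point of Delta_N.  Returns the point-wise
    aggregate; in general it need not integrate to one w.r.t. mu, so a
    projection step should follow.
    """
    p = np.asarray(densities, float)
    w = np.asarray(weights, float)[:, None]
    num = np.sum(w * np.exp(-2.0 * ((M - p) / M) ** 2), axis=0)
    den = np.sum(w * np.exp(-2.0 * (p / M) ** 2), axis=0)
    return M / 2.0 + (M / 4.0) * np.log(num / den)
```

For coinciding experts the logarithm collapses to $\frac{4p-2M}{M}$, so the output equals the shared density $p$; for maximally disagreeing experts with equal weights the output is the midpoint $M/2$.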
Whereas for finite $\mathcal{X}$ formulas for the projection are tractable and given in e.g. \cite{vovk2009prediction}, in the case of continuous $\mathcal{X}$ analytic formulas for the projection become intractable for arbitrary distributions, and finite approximations should be used. Similarly to SCRPS, formula \eqref{beta-2-exp-substitution} for the aggregation of predictions is preferable from the computational point of view. \subsection{Characteristic Function Discrepancy} \label{sec-cfd} Let $\mathcal{X}=\mathbb{R}^{D}$ and assume that $\Gamma=\Omega$ is the set of all probability measures over its Borel field. For a fixed $\sigma$-finite measure $\mu$ on $\mathcal{X}$ and a measurable non-negative function $u:\mathcal{X}\rightarrow \mathbb{R}_{+}$ satisfying $\int_{\mathbb{R}^{D}}u(t)d\mu(t)=1$, we consider the \textbf{Characteristic Function Discrepancy} (CFD): \begin{equation}\text{CFD}_{u,\mu}(\gamma, \omega)=\int_{\mathbb{R}^{D}}\|\phi_{\gamma}(t)-\phi_{\omega}(t)\|^{2}_{\mathbb{C}}u(t)d\mu(t), \label{cfd} \end{equation} where $\phi_{\upsilon}(t)=\mathbb{E}_{x\sim\upsilon}e^{i\langle x,t\rangle}$ denotes the characteristic function of a distribution $\upsilon\in\Gamma$. This measure of dissimilarity is a useful tool for two-sample testing \cite{epps1986omnibus}. In particular, in tests for normality of random variables it is known as the BHEP statistic \cite{tenreiro2009choice}. CFD is closely related to the \textbf{Maximum Mean Discrepancy}, which we discuss in the next Subsection \ref{sec-mmd}, yet in practice it admits faster stochastic computation, as noted by \cite{fatir2019characteristic}. By varying $u$ and $\mu$ it is possible to assign different importance to different frequencies of the compared probability distributions. For all $ t\in \mathbb{R}^{D}$ we have $\phi_{\omega}(t),\phi_{\gamma}(t)\in \mbox{Ball}_{\mathbb{C}}(0, 1)$. 
Thus, $\lambda: \text{Ball}_{\mathbb{C}}(0, 1)^{2}\rightarrow \mathbb{R}_{+}$ defined by $\lambda(z,z')=(z-z')\cdot \overline{(z-z')}=\|z-z'\|^{2}_{\mathbb{C}}$ is $\frac{1}{4}$-mixable and $\frac{1}{8}$-exp-concave, see \ref{sec-complex-mix}. By Theorem \ref{theorem-main} the same applies to CFD. For the $\frac{1}{8}$-exp-concave CFD the aggregated prediction $\overline{\gamma}$ is given by its CF \begin{eqnarray} \overline{\phi}(x)=\sum_{n=1}^{N}w^{n}\cdot \phi_{\gamma^{n}}(x) \label{cf-exp-substitution} \end{eqnarray} for weights ${(w^{1},\dots,w^{N})\in\Delta_{N}}$ and experts' predictions $\gamma^{1},\dots,\gamma^{N}\in\Gamma$. The resulting $\overline{\phi}$ is the CF of the mixture of probability distributions $\gamma^{n}$ w.r.t. weights $w^{n}$. Although we pointed out that CFD is $\frac{1}{4}$-mixable, we note that the aggregated result obtained by the point-wise substitution function (provided in \ref{sec-complex-mix}) will not necessarily be a CF of some distribution $\overline{\gamma}$. Thus, we have a situation similar to SCRPS and the Beta-2 divergence, see Subsections \ref{sec-scrps}, \ref{sec-beta-2}. \subsection{Maximum Mean Discrepancy} \label{sec-mmd} Let $\mathcal{X}=\mathbb{R}^{D}$ and assume that $\Gamma=\Omega$ is the set of all probability measures over its Borel field. Let $k:\mathcal{X}\times\mathcal{X}\rightarrow \mathbb{R}$ be a symmetric positive definite kernel, i.e. for all $N=1,2,\dots$, points $x_{1},\dots,x_{N}\in\mathcal{X}$ and $\alpha_{1},\dots,\alpha_{N}\in \mathbb{R}$ it satisfies \begin{equation} \sum_{n,n'=1}^{N}\alpha_{n}\alpha_{n'} k(x_{n},x_{n'})\geq 0. \label{psd-kernel} \end{equation} Consider (the square of) the \textbf{Maximum Mean Discrepancy} (MMD): \begin{eqnarray}\text{MMD}_{k}^{2}(\gamma,\omega)=\mathbb{E}_{\gamma\times\gamma}\big[k(x,x')\big]-2\cdot \mathbb{E}_{\gamma\times\omega}\big[k(x,y)\big]+\mathbb{E}_{\omega\times\omega}\big[k(y,y')\big]. 
\label{mmd-def} \end{eqnarray} It is known that \eqref{mmd-def} is non-negative and its square root satisfies the triangle inequality. For characteristic kernels $k$ it also holds that $\text{MMD}_{k}(\gamma,\omega)=0\iff \gamma=\omega$. Thus, for such kernels $\text{MMD}_{k}$ turns out to be a metric on the space of probability distributions, see \cite{sriperumbudur2010hilbert} for detailed explanations. Maximum mean discrepancy is used to compare probability distributions in two-sample hypothesis testing, see \cite{gretton2007kernel}. Also, useful properties of $\text{MMD}_{k}$, e.g. the fact that it naturally admits unbiased estimates from empirical samples, have made it widely applicable to generative modeling problems of machine learning, see \cite{li2015generative,li2017mmd}. In this paper, we consider only symmetric positive definite kernels of the form $k(x,y)=\psi(x-y)$, where $\psi$ is a bounded and continuous function. Such kernels are usually called \textbf{translation-invariant}. We note that the majority of kernels used in practice are actually translation-invariant (Gaussian, Laplacian, Sine, etc.), see e.g. Table 2 in \cite{sriperumbudur2010hilbert}. In the translation-invariant case, the positive definiteness of the kernel $k$ \eqref{psd-kernel} reduces to the positive definiteness of the function $\psi$, i.e. for all $N=1,2,\dots$, points $x_{1},\dots,x_{N}\in\mathcal{X}$ and $\alpha_{1},\dots,\alpha_{N}\in \mathbb{R}$ it holds that \begin{equation} \sum_{n,n'=1}^{N}\alpha_{n}\alpha_{n'} \psi(x_{n}-x_{n'})\geq 0. 
\label{psd-function} \end{equation} According to the celebrated \textbf{Bochner's Theorem}, see \cite[Theorem 3]{sriperumbudur2010hilbert}, a function $\psi$ is positive definite if and only if it is the Fourier transform of a finite non-negative Borel measure $\mu$ on $\mathbb{R}^{D}$, that is, $$\psi(x)=\int_{\mathbb{R}^{D}}e^{-i\langle x,t\rangle}d\mu(t).$$ Using this correspondence between the kernel $k$, the function $\psi$ and the measure $\mu$, \cite[Corollary 4]{sriperumbudur2010hilbert} proves that \begin{eqnarray} \text{MMD}_{k}^{2}(\gamma,\omega)=\int_{\mathbb{R}^{D}}\|\phi_{\gamma}(t)-\phi_{\omega}(t)\|^{2}_{\mathbb{C}}d\mu(t), \label{mmd-to-cfd} \end{eqnarray} where $\phi_{\gamma},\phi_{\omega}$ are the characteristic functions of $\gamma$ and $\omega$ respectively. From \eqref{mmd-to-cfd} we see that $\text{MMD}_{k}^{2}$ turns out to be a multiple of the Characteristic Function Discrepancy \eqref{cfd}. Indeed, by introducing $u(t)\equiv \frac{1}{\|\mu\|_{1}}$ we obtain \begin{eqnarray} \text{MMD}_{k}^{2}(\gamma,\omega)=\|\mu\|_{1}\int_{\mathbb{R}^{D}}\|\phi_{\gamma}(t)-\phi_{\omega}(t)\|^{2}_{\mathbb{C}}u(t)d\mu(t)= \nonumber \\ \|\mu\|_{1}\cdot \text{CFD}_{u,\mu}(\gamma,\omega). \label{mmd-to-cfd-final} \end{eqnarray} Equation \eqref{mmd-to-cfd-final} immediately implies that $\text{MMD}_{k}^{2}$ is $\frac{1}{8\|\mu\|_{1}}$-exponentially concave. Analogously to \eqref{cf-exp-substitution}, the aggregated predictive distribution is the mixture of the input distributions. We also conclude that $\text{MMD}_{k}^{2}$ is $\frac{1}{4\|\mu\|_{1}}$-mixable, although the aggregated prediction is infeasible in general (analogously to CFD, see the discussion in Subsection \ref{sec-cfd}). \subsection{Optimal Transport Costs} \label{sec-ot-costs} \subsubsection{1-Dimensional Optimal Transport} \label{sec-ot-1d} Let $\mathcal{X}=\mathbb{R}^{1}$ and assume that $\Gamma=\Omega$ is the space of all probability measures over its Borel field. 
For a cost function $c:\mathcal{X}\times\mathcal{X}\rightarrow\mathbb{R}$ consider the optimal transport cost \cite{kantorovitch1958translocation} between distributions ${\gamma,\omega\in\Gamma}$: \begin{equation} C(\gamma,\omega)=\min_{\mu\in\Pi(\gamma,\omega)}\int_{\mathcal{X}\times\mathcal{X}}c(x,x')d\mu(x, x'), \label{ot-cost} \end{equation} where $\Pi(\gamma,\omega)$ is the set of all probability distributions (transport plans) on $\mathcal{X}\times\mathcal{X}$ whose left and right marginals are the distributions $\gamma$ and $\omega$ respectively. In contrast to the losses considered in Subsection \ref{sec-density-based}, the optimal transport cost is defined for arbitrary distributions which may not have densities. Thus, similarly to CRPS and CFD, it can be used to compare a predicted distribution with discrete outcomes. Besides, optimal transport costs are widely used in many machine learning and image-processing problems, see \cite{peyre2019computational}. In particular, they are applied to generative modeling \cite{arjovsky2017wasserstein,korotin2019wasserstein}. If the cost function is the $p$-th power of the $\ell_{p}$-norm, i.e. $c(x,x')=\|x-x'\|_{p}^{p}$, the resulting cost $C(\gamma,\omega)$ is the $p$-th power of the \textbf{Wasserstein-$p$ distance} and is denoted by $\mathbb{W}_{p}^{p}$. In this section, we show that under specific conditions the optimal transport cost is mixable (exp-concave). To begin with, we recall that for $\mathcal{X}=\mathbb{R}$ the optimal transport has a remarkable property which brings a linear structure to the space of $1$-dimensional probability distributions. 
\begin{lemma}[Explicit 1D Optimal Transport] \label{lemma-ot-1d} If the transport cost $c:\mathcal{X}\times\mathcal{X}\rightarrow\mathbb{R}$ is twice differentiable and ${\frac{\partial^{2}c(x, x')}{\partial x\partial x'}<0}$, then the optimal transport cost between $\gamma,\omega$ is given by \begin{equation}C(\gamma,\omega)=\int_{0}^{1}c\big(\text{\normalfont Q}_{\gamma}(t),\text{\normalfont Q}_{\omega}(t)\big)dt, \end{equation} where $\text{\normalfont Q}_{\upsilon}:[0,1]\rightarrow\mathbb{R}$ is the quantile function of $\upsilon\in\Gamma$ defined by $${\text{\normalfont Q}_{\upsilon}(t)=\inf\{x\in\mathbb{R}:t\leq \text{\normalfont CDF}_{\upsilon}(x)\}}.$$ \label{ot-quantile-cost} \end{lemma} The result was initially proved by \cite{lorentz1953inequality} and then rediscovered several times, see \cite{becker1973theory,mirrlees1971exploration,spence1978job}. The lemma makes it possible to work with the probability distribution's quantile function (the inverse cumulative distribution function) instead of the distribution itself when computing the $1$-dimensional optimal transport cost. Unlike CRPS (see Figure \ref{fig:crps}), optimal transport compares CDFs not vertically but horizontally, see the illustration in Figure \ref{fig:ot}. 
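For empirical measures with the same number of atoms, the quantile coupling of the lemma reduces to matching sorted samples, so the optimal cost is computable in $O(n\log n)$ time. A minimal sketch (our own names; it assumes equal sample sizes and a cost satisfying the lemma's condition, e.g. $c(x,x')=(x-x')^{2}$):

```python
import numpy as np

def ot_cost_1d(x, y, cost=lambda a, b: (a - b) ** 2):
    """1D optimal transport cost between equal-size empirical samples.

    By the quantile representation (monotone coupling), the optimal plan
    matches the sorted samples: C = mean of cost(x_(i), y_(i)).
    """
    x, y = np.sort(np.asarray(x, float)), np.sort(np.asarray(y, float))
    if x.shape != y.shape:
        raise ValueError("equal sample sizes are assumed in this sketch")
    return float(cost(x, y).mean())
```

For instance, shifting every sample by a constant $c$ gives a squared cost of exactly $c^{2}$ (and $|c|$ for the absolute-value cost), as the monotone coupling moves each atom by $c$.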
\begin{figure}[!h] \centering \begin{subfigure}[b]{0.48\columnwidth} \centering \includegraphics[width=\linewidth]{OT} \caption{Arbitrary distribution $\omega$.} \end{subfigure} \begin{subfigure}[b]{0.48\columnwidth} \centering \includegraphics[width=\linewidth]{OT-E} \caption{Empirical (Dirac) distribution $\omega$.} \end{subfigure} \caption{Visualization of the comparison of CDFs of distributions $\gamma,\omega$ on $[a,b]$ by using the Optimal Transport Cost.} \label{fig:ot} \end{figure} From Lemma \ref{lemma-ot-1d} we conclude that if the transport cost $c:\mathcal{X}\times \mathcal{X}\rightarrow\mathbb{R}_{+}$ is $\eta$-mixable (exp-concave), then the corresponding $C(\gamma,\omega)$ is an $\eta$-mixable (exp-concave) $\mathcal{X}$-integral $c$-loss function by Theorem \ref{theorem-main}. For an $\eta$-mixable transport cost the aggregated prediction's quantile function should satisfy $$\overline{\text{Q}}(t)=\Sigma_{c}\big(\{\text{Q}_{\gamma^{n}}(t),w^{n}\}_{n=1}^{N}\big)$$ for experts' predictions $\gamma^{1},\dots,\gamma^{N}\in\Gamma$ and weights ${(w^{1},\dots,w^{N})\in\Delta_{N}}$. In general, $\overline{\text{Q}}(t)$ may not be a quantile function of some distribution. It necessarily is one if the substitution function $\Sigma_{c}$ for the transport cost $c$ is monotone. However, even if $\Sigma_{c}$ is not monotone, $\overline{\text{Q}}(t)$ can be used to implicitly model the aggregated prediction. To show this, we state and prove \begin{lemma}[Implicit Aggregation for 1D Optimal Transport] Let $\zeta$ be the uniform probability measure on $[0,1]$. Consider the pushforward probability measure $\overline{\gamma}=\overline{\text{Q}}\circ \zeta$, i.e. $\overline{\gamma}$ is the distribution of $\overline{\text{Q}}(t)$ for $t\sim \zeta$. Then for all $\omega\in\Omega$ it satisfies \begin{equation}\exp\big(-\eta C(\overline{\gamma}, \omega)\big)\geq \sum_{n=1}^{N}w^{n}\exp\big[-\eta C(\gamma^{n},\omega)\big]. 
\label{implicit-lemma-expr} \end{equation} \label{lemma-implicit} \end{lemma} Lemma \ref{lemma-implicit} establishes the mixability of the optimal transport cost $C$ even when the substitution $\Sigma_{c}$ is not monotone. Additionally, it provides a natural way to sample from the aggregated distribution $\overline{\gamma}$: sample $t\sim\text{Uniform}[0,1]$ and apply $\overline{\text{Q}}$. Now we prove the lemma. \begin{proof} Consider any $\omega\in\Omega$ and let $\xi\in\Pi(\overline{\gamma},\omega)$ be the transport plan (between $\overline{\gamma}$ and $\omega$) given by $\xi=[\overline{\text{Q}}, \text{Q}_{\omega}]\circ \zeta=[\overline{\text{Q}}\circ \zeta, \text{Q}_{\omega}\circ \zeta]$. Since $\xi$ is not necessarily the optimal transport plan, its cost is not smaller than the cost of the optimal one (which is given by Lemma \ref{lemma-ot-1d}): \begin{eqnarray}\int_{0}^{1}c(\overline{\text{Q}}(t),\text{Q}_{\omega}(t))dt=\int_{\mathcal{X}\times\mathcal{X}}c(x,x')d\xi(x,x')\geq \nonumber \\ \int_{0}^{1}c(\text{Q}_{\overline{\gamma}}(t),\text{Q}_{\omega}(t))dt=C(\overline{\gamma},\omega), \label{non-optimal-plan} \end{eqnarray} where $\text{Q}_{\overline{\gamma}}$ is the quantile function of $\overline{\gamma}$. Since $\overline{\text{Q}}$ is obtained by the substitution function $\Sigma_{c}$, we have \begin{equation}\exp\big[-\eta \int_{0}^{1}c(\overline{\text{Q}}(t),\text{Q}_{\omega}(t))dt\big]\geq \sum_{n=1}^{N}w^{n}\exp\big[-\eta C(\gamma^{n},\omega)\big]. \label{mixability-implicit-quantile} \end{equation} We combine \eqref{non-optimal-plan} with \eqref{mixability-implicit-quantile} and obtain the desired \eqref{implicit-lemma-expr}. \end{proof} For an $\eta$-exp-concave cost the aggregated prediction's quantile function is given by $$\overline{\text{Q}}(t)=\sum_{n=1}^{N}w^{n}\cdot \text{Q}_{\gamma^{n}}(t).$$ The obtained function (for all admissible costs $c$!) 
is a quantile function of a \textbf{Wasserstein-2 barycenter} $\overline{\gamma}$ of the distributions $\gamma^{1},\dots,\gamma^{N}$ w.r.t. weights ${(w^{1},\dots,w^{N})\in\Delta_{N}}$, see \cite[Corollary 1]{bonneel2015sliced}. \subsubsection{Sliced Wasserstein-2 Distance} \label{sec-sliced-ot} Definition \eqref{ot-cost} has a natural multidimensional extension to $\mathcal{X}=\mathbb{R}^{D}$. However, in this case the resulting optimal transport cost does not admit a representation analogous to the one provided in Lemma \ref{lemma-ot-1d}. Even for the squared cost $c(x,x')=\|x-x'\|^{2}_{2}$ the Wasserstein-2 metric space of distributions is highly non-linear and non-negatively curved, see e.g. \cite{ambrosio2013user}. Instead, we prove that the \textbf{Sliced Wasserstein-2 distance} is mixable under certain conditions. Following \cite{bonneel2015sliced}, we assume that $\Gamma=\Omega$ is the set of Radon (locally finite Borel) probability measures on $\mathcal{X}\subset \mathbb{R}^{D}$ with finite second moment. For all $\upsilon\in\Gamma$ we define the sliced quantile function by $$\text{SQ}_{\upsilon}(\theta,t)=\inf\{s\in\mathbb{R}:t\leq \text{SCDF}_{\upsilon}(\theta,s)\}.$$ Next, we define the sliced quadratic transport cost (the square of the Sliced $\mathbb{W}_{2}$ distance): \begin{equation}\text{S}\mathbb{W}_{2}^{2}(\gamma,\omega)=\frac{1}{S_{D-1}}\int_{\mathbb{S}^{D-1}}\bigg[\int_{0}^{1}\big(\text{SQ}_{\gamma}(\theta,t)-\text{SQ}_{\omega}(\theta,t)\big)^{2}dt\bigg]d\theta. \label{scost-definition} \end{equation} From the definition we see that $\text{S}\mathbb{W}_{2}^{2}$ is the average over all the sliced quadratic transport costs. If $\gamma,\omega$ have supports $\subset \text{Ball}_{\mathbb{R}^{D}}(0,R)$, then $\text{SQ}(\theta,t)\in[-R,R]$ and the point-wise squared loss is $(2R^{2})^{-1}$-mixable ($(8R^{2})^{-1}$-exp-concave). 
We use Theorem \ref{theorem-main} and (similarly to CRPS) conclude that $\text{S}\mathbb{W}^{2}_{2}$ is $(2R^{2})^{-1}$-mixable ($(8R^{2})^{-1}$-exp-concave). For the $(8R^{2})^{-1}$-exp-concave sliced cost the aggregated prediction's sliced quantile should satisfy \begin{equation}\overline{\text{SQ}}(\theta,t)=\sum_{n=1}^{N}w^{n}\cdot \text{SQ}_{\gamma^{n}}(\theta,t) \label{sc-exp-substitution} \end{equation} for experts' predictions $\gamma^{1},\dots,\gamma^{N}\in\Gamma$ and weights ${(w^{1},\dots,w^{N})\in\Delta_{N}}$. Unfortunately, for $D>1$ the function $\overline{\text{SQ}}(\theta,t)$ \textbf{is not} necessarily a sliced quantile function of some distribution, see the discussion in \cite{bonneel2015sliced}. In particular, the image of the map $\gamma\mapsto \text{SQ}_{\gamma}$ is not necessarily convex (as a subset of the space $\mathcal{L}^{2}(\mathbb{S}^{D-1}\times[0,1])$). As a corollary, the projection trick (the one we used for other losses, see e.g. Subsection \ref{sec-scrps}) is \textbf{not} applicable due to the mentioned non-convexity. We recall the \textbf{sufficient condition} of \cite{bonneel2015sliced} for $\overline{\text{SQ}}(\theta,t)$ being a sliced quantile of some distribution. For a probability measure $\gamma$ and $(s,u)\in\mathbb{R}_{+}\times \mathbb{R}^{D}$ we use $\psi_{s,u}\circ\gamma$ to denote the probability measure obtained from $\gamma$ by pushing it forward with $\psi_{s,u}(x)=sx+u$, i.e. $\psi_{s,u}\circ\gamma$ is a scaled and translated copy of $\gamma$. \begin{lemma}[Barycenter of scaled and translated distributions] Assume that all the predictions $\gamma^{1},\dots,\gamma^{N}$ are copies of some reference probability measure $\gamma^{0}$ scaled and translated by $\psi_{s^{1},u^{1}},\dots,\psi_{s^{N},u^{N}}$. 
Then the aggregated $\overline{\text{SQ}}$ (given in \eqref{sc-exp-substitution}) is the sliced quantile of $\psi_{\overline{s},\overline{u}}\circ\gamma^{0}$, where $$\overline{s}=\big(\sum_{n=1}^{N}\frac{w^{n}}{s^{n}}\big)^{-1}\qquad\text{and}\qquad\overline{u}=\frac{\sum_{n=1}^{N}\frac{w^{n}u^{n}}{s^{n}}}{\sum_{n=1}^{N}\frac{w^{n}}{s^{n}}}.$$ The measure $\psi_{\overline{s},\overline{u}}\circ\gamma^{0}$ is called the \textbf{Sliced Wasserstein-2 Barycenter} of $\gamma^{1},\dots,\gamma^{N}$ w.r.t. weights $w^{1},\dots,w^{N}$. \label{lemma-scale-rot-bar} \end{lemma} The proof of this fact is given in \cite[Proposition 12]{bonneel2015sliced}. Lemma \ref{lemma-scale-rot-bar} provides an explicit formula for computing the aggregated prediction when all the predictive distributions are scaled and translated copies of each other. We do not know whether a tractable \textbf{necessary condition} for $\overline{\text{SQ}}$ being a sliced quantile of some distribution exists. We also leave open the question of the existence of the aggregated prediction for the $(2R^{2})^{-1}$-mixability property (even in the case of scaled and translated copies that we considered for exp-concavity). These open questions are a challenge for our further research. \section{Conclusion} In this paper we defined the notion of integral mixability (exp-concavity) and proved that natural integral losses arising from (one-dimensional) mixable (exp-concave) losses are also mixable (exp-concave). As a consequence of our main result, we demonstrated that a wide range of losses for comparing probability distributions are indeed mixable (exp-concave). In most cases the substitution function for mixability is computationally heavy and practically inapplicable. Yet for exp-concavity the aggregated prediction simply reduces to the computation of a mixture of the predictions or a Wasserstein-2 barycenter of the predictions. 
\section*{Acknowledgements} \noindent This work was supported by the Ministry of Education and Science of Russian Federation (Grant no. 14.756.31.0001).
\section{Supplementary information} \subsection{Extraction of the exciton population from measured data} \label{population} The exciton population created by a pump pulse tuned into the phonon sideband can be extracted from our two-pulse spectra (see Fig.~2) in three different ways by analyzing the peak heights of the three transitions: $|0\rangle \rightarrow |X\rangle$, $|0\rangle \rightarrow |\bar X\rangle$ and $|X\rangle \rightarrow |2X\rangle$. The relation between the photocurrent ($PC$) signal and the exciton population is as follows. The $PC$ measured in our experiment is determined by the number of electron-hole pairs in the sample, including pairs in the quantum dot (QD) and pairs excited in the surrounding material by the pump and probe pulses, according to: \begin{equation} PC=\alpha(C^{'}_X+C^{'}_{\bar{X}})+2\beta C^{'}_{2X}+\gamma N^{'}_\text{s}, \label{Eq:PC} \end{equation} where $C^{'}_X$ and $C^{'}_{\bar{X}}$ are the populations of the two exciton states with opposite spins that are created in the QD after circularly polarized pump and probe pulses, while $C^{'}_{2X}$ is the biexciton population. $N^{'}_\text{s}$ is the number of electron-hole pairs created in the surrounding material which scales linearly with the laser intensity. $\alpha$, $\beta$ and $\gamma$ are the detection efficiencies of our photocurrent measurement for each type of charge complexes. 
Specifically, for our pump-probe measurement, $PC$ can be further subdivided into two parts: a reference level $PC_\text{R}$ and the change of the photocurrent signal originating from the QD induced by the probe pulse, $\Delta PC$, according to: \begin{subequations} \begin{align} PC&=PC_\text{R} + \Delta PC,\\ PC_\text{R} &= (\alpha(C_X+C_{\bar{X}})+ 2\beta C_{2X})e^{-\tau_\text{delay}/T_1} + \gamma N^{'}_\text{s},\\ \Delta PC &= \alpha (\Delta C_X +\Delta C_{\bar{X}})+2\beta \Delta C_{2X}, \end{align} \label{Eq:C} \end{subequations} where $C_{X,\bar{X},2X}$ are the exciton and biexciton populations created immediately after the pump pulse and $\Delta C_{X,\bar{X},2X}$ are the changes of these populations induced by the probe pulse in the QD. Since the electron can tunnel out of the QD, the exciton and biexciton populations decay exponentially with time. Thus, $C_{X,\bar{X},2X}e^{-\tau_\text{delay}/T_1}=C^{'}_{X,\bar{X},2X}$ represent the pump-created exciton and biexciton populations that have remained in the QD until the arrival of the probe pulse. Here, $\tau_\text{delay}$ is the delay time between the pump and probe pulses and $T_1$ is the electron tunnelling time determined using an inversion recovery measurement \cite{Kolodka2007}. In the above discussion, we have neglected the loss of the exciton population due to radiative decay, since the radiative decay time ($\sim 600$~ps) is much longer than our typical delay times $\tau_\text{delay}$ (10--30~ps). $PC_\text{R}$ is determined by measuring the photocurrent signal in an off-resonant measurement where the pump pulse is tuned into the phonon sideband and the probe frequency is far from any of the resonances of the dot or the surrounding material, such that $\Delta PC$ becomes negligible and the photocurrent signal coincides with $PC_\text{R}$.
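The bookkeeping of Eqs.~\eqref{Eq:C} can be traced with a short numerical sketch; all efficiencies, populations and times below are made-up illustrative values, not those of the experiment:

```python
import math

# Illustrative (made-up) parameters -- not the values of the experiment.
alpha, beta, gamma = 1.0, 0.6, 0.1   # detection efficiencies per charge complex
C_X, C_Xbar, C_2X = 0.4, 0.0, 0.0    # QD populations right after the pump
N_s = 2.0                            # electron-hole pairs in the surrounding material
tau_delay, T1 = 20.0, 50.0           # pump-probe delay and tunnelling time (ps)

decay = math.exp(-tau_delay / T1)    # fraction of QD charges surviving until the probe

# Reference level, Eq. (2b).
PC_R = (alpha * (C_X + C_Xbar) + 2 * beta * C_2X) * decay + gamma * N_s

# Co-polarized pi probe on |0> -> |X>: the probe swaps |0> and |X>,
# so Delta C_X = (1 - C_X) - C_X * decay = 1 - (1 + decay) * C_X.
dC_X = 1 - (1 + decay) * C_X
dPC = alpha * dC_X                   # Eq. (2c) with Delta C_Xbar = Delta C_2X = 0

PC = PC_R + dPC                      # total photocurrent, Eq. (2a)

# Inverting dPC recovers the pump-created population, using that a single
# resonant pi pulse gives dPC(pi) = alpha.
C_X_rec = (1 - dPC / alpha) / (1 + decay)
```

The last line reproduces the extraction formula for $C_X$ used in the co-polarized configuration discussed below.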
$\Delta PC$ is the differential photocurrent signal that is discussed in the paper; it is determined from our experiment by subtracting the $PC_\text{R}$ obtained from the off-resonant measurement from the total photocurrent signal $PC$. The detection efficiency $\alpha$ can be extracted from a single $\pi$ pulse experiment. Denoting by $\Delta PC(\pi)$ the maximum of the differential photocurrent signal reached with a single $\pi$ pulse we find: \begin{equation} \Delta PC(\pi)=\alpha C_X(\pi)=\alpha. \label{Eq:alpha} \end{equation} Here, we have used that, according to our path-integral simulations, the phonon-induced deviation of $C_X(\pi)$ from the ideal value of 1 is negligible at low temperatures. Now let us derive the relation between the differential PC signal and the exciton population for the case when pump and probe pulses are co-circularly polarized and the probe pulse is resonant to the $|0\rangle \rightarrow |X\rangle$ transition. In this case, firstly, the $\sigma^{+}$ polarized pump pulse creates a certain exciton population $C_X$ and consequently the ground-state occupation after the pump pulse is given by $C_{0}=1-C_{X}$. Then, in the time interval until the probe pulse arrives, the exciton population is reduced to $C_Xe^{-\tau_\text{delay}/T_1}$ due to the tunneling. The ground-state occupation, on the other hand, is not affected by the tunneling and thus stays at the value of $C_{0}=1-C_{X}$ until the arrival of the probe. Finally, the $\sigma^{+}$ polarized $\pi$ probe pulse exchanges the populations of the states $|0\rangle$ and $|X\rangle$, resulting in an exciton population after the probe of $C^{'}_{X}=1-C_{X}$. Therefore, the change of the $X$-exciton occupation induced by the probe pulse with the energy of $\hbar \omega_X$ is $\Delta C_X = 1-(1+ e^{-\tau_\text{delay}/T_1})\,C_{X}$.
Using this result together with the fact, that for co-polarized $\sigma^{+}$-pulses the populations of the $\bar X$-exciton and the biexciton never build up, we find from Eq.~2(c): \begin{equation} \Delta PC_{0-X}=\alpha[1-(1+ e^{-\tau_\text{delay}/T_1})\,C_{X}] \label{Eq:Hph} \end{equation} and thus with the help of Eq.~(\ref{Eq:alpha}) we eventually end up with: \begin{equation} C_X=\dfrac{1}{1+e^{-\tau_\text{delay}/T_1}}\left(1-\dfrac{\Delta PC_{0-X}}{\Delta PC(\pi)}\right). \end{equation} The situation is different when cross-circularly polarized pulses are used. Let us first discuss the case where the $\bar X$ polarized probe pulse is resonant to the $|0\rangle \rightarrow |\bar X\rangle$ transition. After the action of the circularly $\sigma^{+}$ polarized pump pulse, the QD again has a certain probability $C_X$ to be in the $|X\rangle$ state and the probability to find the dot in the ground-state is $C_{0}=1-C_{X}$. The $\sigma^{-}$ polarized probe pulse induces transitions from the ground-state to the $\bar X$ exciton. Since the probe pulse has a pulse area of $\pi$ and the occupation of $|\bar X\rangle$ is zero before the arrival of the probe, the probe pulse fully converts the occupation that was left in the ground-state after the pump pulse into an occupation of the $\bar X$ exciton, i.e., $C^{'}_{\bar X}=C_{0}=1-C_{X}$. Again, the ground-state occupation is not affected by the electron tunnelling and therefore no correction involving the tunnelling time $T_{1}$ should be applied. Since the probe pulse is off-resonant to the $X-2X$ transition we can neglect the probe induced change of the $|X\rangle$ and $|2X\rangle$ occupations. 
Recalling that $|\bar X\rangle$ is unoccupied before the probe, we find for the resulting differential photocurrent signal $\Delta PC_{0-\bar{X}}$ with the probe pulse being in resonance with the exciton transition: \begin{equation} \Delta PC_{0-\bar{X}}=\alpha (1-C_X), \end{equation} which yields: \begin{equation} C_X=1-\Delta PC_{0-\bar{X}}/ \Delta PC(\pi). \end{equation} Besides the data measured at the $|0\rangle \rightarrow |X\rangle$ and $|0\rangle \rightarrow |\bar X\rangle$ transitions, the exciton population created by the pump can also be extracted from the exciton to biexciton transition. A $\sigma^{+}$ polarized pump pulse tuned into the high-energy phonon sideband of the neutral exciton transition again creates a certain exciton population $C_X$ which evolves into $C_Xe^{-\tau_\text{delay}/T_1}$ until the arrival of the probe. Biexcitons are not created, i.e., we have $C_{2X}=0$. A cross-polarized probe pulse with pulse area $\pi$ resonant to the $|X\rangle \rightarrow |2X\rangle$ transition converts the $|X\rangle$ population completely into a $|2X\rangle$ population, which gives $C^{'}_X=0$ and $C^{'}_{2X}=C_Xe^{-\tau_\text{delay}/T_1}$. Since the probe is now off-resonant to the $|0\rangle \rightarrow |\bar X\rangle$ transition, the occupation of the $|\bar X\rangle$ exciton induced by the probe is negligible and the ground-state occupation is not affected. Thus, the $\Delta PC$ signal resulting from a probe pulse in resonance with the $|X\rangle \rightarrow |2X\rangle$ transition is given by: \begin{equation} \Delta PC_{X-2X}=2\beta \Delta C_{2X}+\alpha \Delta C_X=(2\beta - \alpha)C_Xe^{-\tau_\text{delay}/T_1}. \label{Eq:PCbiexciton} \end{equation} $\beta$ can be determined from a separate experiment in which the pump is a $\pi$ pulse resonant with $X$ and the probe is a $\pi$ pulse resonant with $2X$.
According to Eqs.~\eqref{Eq:alpha} and \eqref{Eq:PCbiexciton}, we have: \begin{equation} \beta=0.5(e^{\tau_\text{delay}/T_1}\Delta PC_{X-2X}(\pi)+\Delta PC(\pi)). \label{Eq:beta} \end{equation} Inserting Eqs.~\eqref{Eq:alpha} and \eqref{Eq:beta} into Eq.~\eqref{Eq:PCbiexciton} we can extract the exciton population after the pump from: \begin{equation} C_X= \dfrac{\Delta PC_{X-2X}}{\Delta PC_{X-2X}(\pi)}. \end{equation} \subsection{Model} \label{Model} For our calculations we used the same model for an optically driven strongly confined quantum dot as in Ref.~\cite{Glassl2013}, which is based on the Hamiltonian \begin{align} \label{eq:Hamiltonian} H = H_{\rm{QD-light}} + H_{\rm{QD-phonon}}, \end{align} where \begin{align} H_{\rm{QD-light}} = \hbar\omega^{0}_{X}| X\rangle\langle X| +\frac{\hbar\Omega(t)}{2} \left[ | 0\rangle\langle X| + |X\rangle \langle 0| \right], \end{align} and \begin{align} H_{\rm{QD-phonon}} \!=\! \sum_{\bf q} \hbar\omega_{\bf q}\,b^\dag_{\bf q} b_{\bf q} \!+\! \sum_{\bf q} \hbar \big( \gamma_{\bf q} b_{\bf q} \!+\! \gamma^{\ast}_{\bf q} b^\dag_{\bf q} \big) |X \rangle\langle X|. \label{dot-ph} \end{align} The ground-state $|0\rangle$ is chosen as the zero of the energy and the phonon-free energy of the transition to the single exciton state $|X\rangle$ is denoted by $\hbar\omega^{0}_{X}$. The Rabi frequency $\Omega(t)$ is proportional to the electric field envelope of a circularly polarized Gaussian laser pulse with frequency $\omega_{L}$, which is detuned from the ground-state to exciton transition by $\Delta = \omega_{L}-\omega_{X}$, where $\omega_{X}$ is the frequency of the single exciton resonance which deviates from $\omega^{0}_{X}$ by the polaron shift that results from the dot-phonon coupling in Eq.~(\ref{dot-ph}). The coupling to the laser field is treated in the common rotating wave and dipole approximations. 
The operator $b^\dag_{\bf q}$ creates a longitudinal acoustic (LA) bulk phonon with wave vector $\bf{q}$ and energy $\hbar \omega_{\bf{q}}$. We assume a linear dispersion relation $\omega_{\bf{q}} = c_{s} |\bf{q}|$, where $c_{s}$ denotes the speed of sound. The phonons are coupled via the deformation potential only to the exciton state. This coupling is expressed by the exciton-phonon coupling $\gamma_{\bf{q}}=\frac{|\bf{q}|}{\sqrt{2V\rho \hbar \omega_{\bf{q}}}} \left(D_{\rm{e}} \Psi^{\rm{e}}({\bf q}) - D_{\rm{h}} \Psi^{\rm{h}}({\bf q})\right)$, where $\rho$ denotes the mass density of the crystal, $V$ the mode volume, $D_{\rm{e/h}}$ the deformation potential constants, and $\Psi^{\rm{e/h}}(\bf{q})$ the formfactors of electron and hole, respectively. As explained in the main article, we calculate the formfactors from the ground-state wavefunctions of a spherically symmetric, parabolic confinement potential. It should be noted that, in the pure dephasing model for the dot-phonon coupling, no transitions between the bare electronic states can be induced by the continuum of LA phonons, which can change the electronic occupations only in the presence of the laser field. We assume the system to be initially in a product state of a thermal phonon distribution at the temperature of the cryostat and a pure ground-state of the electronic subsystem. We use the material parameters given in Ref.~\cite{Krummheuer2002} for GaAs, which are: $\rho = 5370 \; \rm{kg}/\rm{m}^3$, $c_{s} = 5110 \; \rm{m}/\rm{s}$, $D_{\rm{e}} = 7.0 \; \rm{eV}$, and $D_{\rm{h}} = -3.5 \; \rm{eV}$. To obtain the time evolution of the electronic density matrix elements predicted by this model, we make use of a numerically exact real-time path-integral approach, described in detail in Ref.~\cite{Vagov2011}. This gives us the opportunity to calculate the dynamics of the quantum dot with a high and controllable numerical precision and without further approximations to the given Hamiltonian.
This includes taking into account all multi-phonon processes and non-Markovian effects. \end{document}
\section{Introduction} Wavelet and frame shrinkage operators became very popular in recent years. A certain starting point was the ISTA algorithm in \cite{DDM04}, which was interpreted as a special case of the forward-backward algorithm in \cite{CW05}. For relations with other algorithms see also \cite{BSS2017,Se09}. Let $T \in \mathbb R^{n \times d}$, $n\ge d$, have full column rank. Then, the setting \begin{equation} \label{eq:problem} \argmin_{y \in \mathbb R^d} \Bigl\{ \frac12 \|x -y \|_2^2 + \lambda \|Ty\|_{1} \Bigr\}, \quad \lambda >0 \end{equation} is known as the analysis point of view, while the ``opposite'' case $T \in \mathbb R^{n \times d}$ with $n \le d$ can be seen as the synthesis approach, see \cite{EMR07}. For orthogonal $T \in {\mathbb R}^{d \times d}$, the solution of the above problem is given by the frame soft shrinkage operator $T^\dagger \, S_\lambda \, T = T^\mathrm{T} \, S_\lambda \, T$ applied to $x$. In the synthesis case, with the additional assumption $T T^\mathrm{T} = I_n$, the solution of problem \eqref{eq:problem} is given by $(I_d - T^\mathrm{T} T) x + T^\mathrm{T} S_\lambda (Tx)$, see \cite[Theorem 6.15]{Beck17}. For arbitrary $T \in {\mathbb R}^{n \times d}$, $n \ge d$, no analytic expression for the solution of \eqref{eq:problem} is known in the literature. Recently, some authors of this paper considered the question whether the frame soft shrinkage operator can itself be seen as a proximity operator, see \cite{GP2019}. They showed that the set-valued operator $(T^\dagger S_\lambda T)^{-1} - I_d$ is maximally cyclically monotone, which implies that $T^\dagger S_\lambda T$ is a proximity operator with respect to some norm in $\mathbb R^d$.
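The orthogonal case can be checked numerically. The following sketch (Python with NumPy; the random orthogonal matrix and the helper names are our own choices, not part of the cited works) verifies that $T^\mathrm{T} S_\lambda (Tx)$ attains the smallest objective value in \eqref{eq:problem} among random perturbations:

```python
import numpy as np

rng = np.random.default_rng(0)

def soft(z, lam):
    """Componentwise soft thresholding S_lambda."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def objective(y, x, T, lam):
    """0.5 * ||x - y||_2^2 + lam * ||T y||_1, the functional in (1)."""
    return 0.5 * np.sum((x - y) ** 2) + lam * np.sum(np.abs(T @ y))

d, lam = 5, 0.3
x = rng.standard_normal(d)
T, _ = np.linalg.qr(rng.standard_normal((d, d)))  # random orthogonal matrix

# For orthogonal T, substituting z = T y decouples the problem and yields
# the frame soft shrinkage y* = T^T S_lam(T x) as the unique minimizer.
y_star = T.T @ soft(T @ x, lam)
```

Since the objective is strictly convex, no perturbation of `y_star` can decrease its value, which gives a simple sanity check of the closed form.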
In this short paper, we prove that for any injective operator $T\in \mathcal{B} (\mathcal{H},\mathcal{K})$ with closed range and any proximity operator $\mathrm{Prox}\colon \mathcal{K} \to \mathcal{K}$, the new operator $T^\dagger \, \mathrm{Prox} \, T\colon \mathcal{H} \rightarrow \mathcal{H}$ is also a proximity operator on the linear space $\mathcal{H}$, but equipped with another inner product. This includes the above-mentioned finite-dimensional setting as a special case. In contrast to \cite{GP2019}, we directly approach the question using a classical result of Moreau \cite{Moreau65}. Moreover, we explicitly provide the function for the definition of the proximity operator. There are several fields where our results may be of interest. For instance, different norms in the definition of the proximity operator were successfully used in variable metric algorithms, see \cite{CPR2013}. Recently, it was shown that many activation functions appearing in neural networks are indeed proximity operators \cite{CP2018}. Here we concatenate proximity operators with linear operators and stay within the set of proximity operators, just in different Hilbert spaces. Finally, our findings may be of interest in connection with so-called Plug-and-Play algorithms \cite{CWE2016,SVW2016,TBF2018}. Our paper is organized as follows: We start with preliminaries on convex analysis in Hilbert spaces in Section \ref{sec:prelim}. In Section \ref{sec:interplay}, we prove our general results on the interplay between proximity and certain linear operators. As a special case, we emphasize that the frame soft shrinkage operator is itself a proximity operator. \section{Preliminaries} \label{sec:prelim} Let $\mathcal{H}$ be a real Hilbert space with inner product $\langle \cdot,\cdot \rangle$ and norm $\| \cdot \|$. By $\Gamma_0(\mathcal{H})$ we denote the set of proper, convex, lower semi-continuous functions on $\mathcal{H}$ mapping into $(-\infty, \infty]$.
For $f \in \Gamma_0(\mathcal{H})$ and $\lambda > 0$, consider the proximity operator $\mathrm{prox}_{\lambda f}\colon \mathcal{H} \rightarrow \mathcal{H}$ and its Moreau envelope $M_{\lambda f}\colon \mathcal{H} \rightarrow \ensuremath{\mathbb{R}}$ defined by \begin{align} \mathrm{prox}_{\lambda f} (x) &\coloneqq \argmin_{y \in \mathcal{H}} \Bigl\{ \frac12 \|x-y\|^2 + \lambda f(y) \Bigr\}, \\ M_{\lambda f} (x) &\coloneqq \min_{y \in \mathcal{H}} \Bigl\{ \frac12 \|x-y\|^2 + \lambda f(y) \Bigr\}. \end{align} Clearly, the proximity operator and its Moreau envelope depend on the underlying space $\mathcal{H}$, in particular on the chosen inner product. Recall that an operator $G\colon \mathcal{H} \to \mathcal{H}$ is called firmly nonexpansive if for all $x,y \in \mathcal{H}$ the following relation is fulfilled $$ \|Gx -Gy\|^2 \le \bigl\langle x-y,Gx-Gy \bigr\rangle . $$ Obviously, firmly nonexpansive operators are nonexpansive. For a Fr\'echet differentiable function $\Phi\colon \mathcal{H} \to \ensuremath{\mathbb{R}}$, the gradient $\nabla \Phi(x)$ at $x \in \mathcal{H}$ is defined as the vector satisfying for all $h \in \mathcal{H}$, \[\langle \nabla \Phi(x), h \rangle = D\Phi(x) h,\] where $D\Phi\colon \mathcal{H}\to \mathcal{B} (\mathcal{H},\ensuremath{\mathbb{R}})$ denotes the Fr\'echet derivative of $\Phi$, i.e., for all $x,h \in \mathcal{H}$, \begin{equation} \label{frechet} \Phi(x+h) - \Phi(x) = D\Phi(x) h + o(\|h\|). \end{equation} Note that the gradient crucially depends on the chosen inner product in $\mathcal{H}$. The following results can be found, e.g., in \cite[Props.~12.27, 12.29]{BC11}. \begin{thm} \label{lem:1} Let $f \in \Gamma_0(\mathcal{H})$. Then the following holds true: \\ i) The operator $\mathrm{prox}_{\lambda f} \colon \mathcal{H} \to \mathcal{H}$ is firmly nonexpansive.
\\ ii) The function $M_{\lambda f}$ is (Fr\'echet) differentiable and has a Lipschitz-continuous gradient given by \[\nabla M_{\lambda f}(x) = x - \mathrm{prox}_{\lambda f}(x).\] \end{thm} Clearly, ii) implies that \begin{equation}\label{proxi} \mathrm{prox}_{\lambda f} (x) = \nabla \Bigl( \frac12 \| \cdot \|^2 - M_{\lambda f} \Bigr)(x) = \nabla \Phi(x), \end{equation} where $\Phi \coloneqq \frac12 \|\cdot\|^2 - M_{\lambda f}$ is convex since $\mathrm{prox}_{\lambda f}$ is nonexpansive \cite[Prop.~17.10]{BC11}. Further, it was shown by Moreau that also the following converse statement holds true \cite[Cor.~10c]{Moreau65}. \begin{thm} \label{thm:1}  The operator $\mathrm{Prox}\colon \mathcal{H} \rightarrow \mathcal{H}$ is a proximity operator  if and only if it is nonexpansive and there exists a function $\Psi \in \Gamma_0(\mathcal{H})$  such that for any $x \in \mathcal{H}$ we have $\mathrm{Prox}(x) \in \partial \Psi(x)$, where $\partial \Psi$ denotes the subdifferential of $\Psi$. \end{thm} Thanks to \eqref{proxi}, we conclude that $\mathrm{Prox}\colon \mathcal{H} \rightarrow \mathcal{H}$ is a proximity operator if and only if it is nonexpansive and the gradient of a convex, differentiable function $\Phi \colon \mathcal{H} \to \ensuremath{\mathbb{R}}$. Note that recently, the characterization of Bregman proximity operators in a more general setting was discussed in \cite{GN2018}. In the following example, we recall the Moreau envelope and the proximity operator related to the soft thresholding operator. \begin{example} \label{ex:1} Let $\mathcal{H} = \ensuremath{\mathbb{R}}$ with the usual norm $|x|$ and $f(x) \coloneqq |x|$.
Then, $\mathrm{prox}_{\lambda f}$ is the soft thresholding operator $S_\lambda$ defined by \[S_\lambda(x)\coloneqq \left\{ \begin{array}{cl} x - \lambda& \mathrm{for} \; x > \lambda,\\ 0 & \mathrm{for} \; x \in [-\lambda,\lambda],\\ x + \lambda& \mathrm{for} \; x < -\lambda, \end{array} \right.\] and $m_{\lambda | \cdot|}$ is the Huber function \[ m_{\lambda | \cdot|} (x) = \left\{ \begin{array}{cl} \lambda \bigl(x - \frac{\lambda}{2}\bigr) & \mathrm{for} \; x > \lambda,\\[0.5ex] \frac{1}{2} x^2 & \mathrm{for} \; x \in [-\lambda,\lambda],\\[0.5ex] - \lambda \bigl(x + \frac{\lambda}{2}\bigr)& \mathrm{for} \; x < -\lambda. \end{array} \right.\] Hence, $\mathrm{prox}_{\lambda f} = \nabla \varphi$, where $\varphi(x) = \frac{x^{2}}{2} - m_{\lambda | \cdot|}(x)$, i.e., \[\varphi(x) = \left\{\begin{array}{cl} \tfrac12(x-\lambda)^2& \mathrm{for} \; x > \lambda,\\ 0 &\mathrm{for} \; x \in [-\lambda,\lambda],\\[0.5ex] \tfrac12 (x + \lambda)^2& \mathrm{for} \; x < -\lambda. \end{array} \right.\] For $\mathcal{H} = \mathbb R^d$ and $f(x) \coloneqq \|x\|_1$, we can just use a componentwise approach. Then $S_\lambda$ is defined componentwise, the Moreau envelope reads as $M_{\lambda \| \cdot \|_1} (x) = \sum_{i=1}^d m_{\lambda | \cdot|}(x_i)$ and the potential of $\mathrm{prox}_{\lambda \| \cdot \|_1}$ is $\Phi(x) = \sum_{i=1}^d \varphi(x_i)$. \end{example} \section{On the interplay between proximity and linear operators} \label{sec:interplay} Let $\mathcal{H}$ and $\mathcal{K}$ be real Hilbert spaces with inner products $\langle \cdot,\cdot \rangle_\mathcal{H}$ and $\langle \cdot,\cdot \rangle_\mathcal{K}$ and corresponding norms $\| \cdot \|_\mathcal{H}$ and $\| \cdot \|_\mathcal{K}$, respectively. By $\mathcal{B}(\mathcal{H},\mathcal{K})$ we denote the space of bounded, linear operators from $\mathcal{H}$ to $\mathcal{K}$ with domain $\mathcal{H}$.
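The closed forms of Example \ref{ex:1} can also be checked numerically. The following sketch (Python with NumPy; the grids and function names are our own choices) brute-forces the minimization defining $M_{\lambda f}$ and verifies $S_\lambda = \varphi'$ by finite differences:

```python
import numpy as np

lam = 0.7
xs = np.linspace(-3.0, 3.0, 601)

def soft(x):
    # soft thresholding S_lam = prox_{lam |.|}
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# Brute-force the Moreau envelope M_{lam f}(x) = min_y { 0.5*(x-y)^2 + lam*|y| }.
ys = np.linspace(-4.0, 4.0, 8001)
envelope = np.min(
    0.5 * (xs[:, None] - ys[None, :]) ** 2 + lam * np.abs(ys[None, :]), axis=1
)

# The minimum is attained at y = S_lam(x) ...
attained = 0.5 * (xs - soft(xs)) ** 2 + lam * np.abs(soft(xs))

# ... and phi(x) = x^2/2 - M_{lam f}(x) has derivative S_lam (Theorem 1 ii)).
phi = 0.5 * xs ** 2 - envelope
grad = np.gradient(phi, xs)   # finite-difference approximation of phi'
```

The grid resolutions above are chosen only to make the brute-force envelope and the finite differences accurate enough for a visual or assert-based comparison.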
In this section, we show that for any injective operator $T \in \mathcal{B}(\mathcal{H},\mathcal{K})$ with closed range $\mathcal{R}(T)$ and any proximity operator $\mathrm{Prox} \colon \mathcal{K} \to \mathcal{K}$, the operator $T^{\dagger} \, \mathrm{Prox} \, T\colon \mathcal{H} \rightarrow \mathcal{H}$ is itself a proximity operator on the linear space $\mathcal{H}$ equipped with a suitable (equivalent) norm $\| \cdot \|_{{\mathcal{H}_{T}}}$, i.e., there exists a function $f \in \Gamma_0(\mathcal{H})$ such that \[ T^{\dagger} \, \mathrm{Prox} \, T (x) = \argmin_{y \in \mathcal{H}} \Bigl\{ \frac12 \|x-y\|_{\mathcal{H}_{T}}^2 + f(y) \Bigr\}. \] We prove our main result in the next subsection and consider the important case of frame soft shrinkage subsequently. \subsection{Main Result} \label{sec:main} For any injective $T \in \mathcal{B}(\mathcal{H},\mathcal{K})$ with closed range, the Moore-Penrose inverse (generalized inverse, pseudo-inverse) $T^\dagger \in \mathcal{B}(\mathcal{K},\mathcal{H})$ is explicitly given by $$T^{\dagger} = (T^*T)^{-1} T^*.$$ It satisfies $$T^{\dagger} T = \text{Id}, \quad T T^{\dagger} = P_{\mathcal{R}(T)},\quad T^\dagger = T^\dagger P_{\mathcal{R}(T)},$$ where $P_{\mathcal{R}(T)}$ is the orthogonal projection onto $\mathcal{R}(T)$, see \cite[Ex.~3.27 \& Prop.~3.28]{BC11}. Every injective $T\in \mathcal{B}(\mathcal{H},\mathcal{K})$ gives rise to an inner product on $\mathcal{H}$ via $\langle x, y \rangle_{\mathcal{H}_{T}} = \langle Tx, Ty \rangle_\mathcal{K}$ and the corresponding norm $\|x\|_{\mathcal{H}_{T}}=\| Tx\|_\mathcal{K}$. In general this defines a pre-Hilbert space.
Since $T$ has additionally closed range, the norms $\| \cdot\|_\mathcal{H}$ and $\| \cdot\|_{\mathcal{H}_{T}}$ are equivalent on $\mathcal{H}$ due to \[\Vert T\Vert_{\mathcal{B}(\mathcal{H},\mathcal{K})}^{-1} \| x\|_{\mathcal{H}_{T}} \leq \| x\|_\mathcal{H} = \Vert T^{\dagger} T x \Vert_\mathcal{H} \leq \Vert T^{\dagger}\Vert_{\mathcal{B}(\mathcal{K},\mathcal{H})} \Vert x \Vert_{\mathcal{H}_{T}}\] for all $x \in \mathcal{H}$. The norm equivalence also ensures the completeness of $\mathcal{H}$ equipped with the new norm. To emphasize that we consider the linear space $\mathcal{H}$ with this norm we write ${\mathcal{H}_{T}}$. For a Fr\'echet differentiable function $\Phi \colon \mathcal{H} \to \ensuremath{\mathbb{R}}$, the gradient $\nabla_{\mathcal{H}_{T}} \Phi(x)$ at $x \in \mathcal{H}$ in the space ${\mathcal{H}_{T}}$ is given by the vector satisfying \[\langle \nabla_{\mathcal{H}_{T}} \Phi(x), h \rangle_{\mathcal{H}_{T}} = D\Phi(x) h = \langle \nabla_\mathcal{H} \Phi(x), h \rangle_\mathcal{H}\] for all $h \in \mathcal{H}$. Hence, the gradient depends on the chosen norm through $$ \nabla_{\mathcal{H}_{T}} \, \Phi(x) = (T^* T )^{-1}\nabla_\mathcal{H} \Phi(x). $$ Now, the desired result follows from the next theorem. \begin{thm}\label{thm:Existence} Let $T \in \mathcal{B}(\mathcal{H},\mathcal{K})$ be injective with closed range and $\mathrm{Prox} \colon \mathcal{K} \to \mathcal{K}$ a proximity operator on $\mathcal{K}$. Then, the operator $T^\dagger \, \mathrm{Prox} \, T \colon {\mathcal{H}_{T}} \to {\mathcal{H}_{T}}$ is a proximity operator. \end{thm} \begin{proof} In view of Theorems \ref{lem:1} and \ref{thm:1}, it suffices to show that $T^\dagger \, \mathrm{Prox} \, T$ is nonexpansive and that there exists a convex function $\Psi \colon {\mathcal{H}_{T}} \to \ensuremath{\mathbb{R}}$ with $T^\dagger \, \mathrm{Prox} \, T = \nabla_{\mathcal{H}_{T}} \Psi$. 1.~First, we show that $T^\dagger \, \mathrm{Prox} \, T$ is firmly nonexpansive, and thus nonexpansive. 
Using that $T T^\dagger = P_{\mathcal{R}(T)}$, it follows \begin{align} \Vert T^\dagger \, \mathrm{Prox} \, T x - T^\dagger \, \mathrm{Prox} \, T y \Vert_{\mathcal{H}_{T}}^2 = \Vert T T^\dagger \left( \, \mathrm{Prox} \, T x - \mathrm{Prox} \, T y \right)\Vert_\mathcal{K}^2 \leq \Vert \mathrm{Prox} \, T x - \mathrm{Prox} \, T y \Vert_\mathcal{K}^2. \label{xx} \end{align} Now, we obtain \begin{align} \bigl \langle T^\dagger \, \mathrm{Prox} \, T x - T^\dagger \, \mathrm{Prox} \, T y, x-y \bigr \rangle_{\mathcal{H}_{T}} &= \bigl\langle T T^\dagger \bigl( \mathrm{Prox} \, T x - \mathrm{Prox} \, T y \bigr), Tx - Ty \bigr\rangle_\mathcal{K}\\ &= \bigl\langle T^*T T^\dagger \bigl( \mathrm{Prox} \, T x - \mathrm{Prox} \, T y \bigr), x - y \bigr\rangle_\mathcal{H}\\ &= \bigl\langle \mathrm{Prox} \, T x - \mathrm{Prox} \, T y, Tx - Ty \bigr\rangle_\mathcal{K}, \end{align} and since $\mathrm{Prox}$ is firmly nonexpansive with respect to $\|\cdot\|_{\mathcal{K}}$ and by \eqref{xx} further \begin{align} \bigl \langle T^\dagger \, \mathrm{Prox} \, T x - T^\dagger \, \mathrm{Prox} \, T y, x-y \bigr \rangle_{\mathcal{H}_{T}} &\geq \Vert \mathrm{Prox} \, T x - \mathrm{Prox} \, T y \Vert^2_\mathcal{K}\\ & \geq \Vert T^\dagger \, \mathrm{Prox} \, T x - T^\dagger \, \mathrm{Prox} \, T y \Vert_{\mathcal{H}_{T}}^2. \end{align} Thus, $T^{\dagger} \, \mathrm{Prox} \, T$ is firmly nonexpansive. 2.~It remains to prove that there exists a convex function $\Psi\colon {\mathcal{H}_{T}} \to \ensuremath{\mathbb{R}}$ with $\nabla_{\mathcal{H}_{T}} \Psi = T^{\dagger} \, \mathrm{Prox} \, T$. Since $\mathrm{Prox}$ is a proximity operator, there exists a convex, differentiable function $\Phi \colon \mathcal{K} \to\ensuremath{\mathbb{R}}$ with $\mathrm{Prox} = \nabla_\mathcal{K} \Phi$. Then, a natural candidate is given by $\Psi \coloneqq \Phi \circ T$.
Using the definition of the gradient and the chain rule, it holds for all $x,h\in \mathcal{H}$ that \begin{align} \langle \nabla_\mathcal{H} \Psi(x) , h \rangle_\mathcal{H} = D\Psi (x)h = D\Phi(Tx)\,Th = \langle \nabla_\mathcal{K} \Phi(Tx) , Th \rangle_\mathcal{K} = \langle T^* \, \mathrm{Prox} \, Tx , h \rangle_\mathcal{H}. \end{align} Hence, we conclude $ \nabla_{\mathcal{H}_{T}} \Psi(x) = (T^* T )^{-1}\nabla_\mathcal{H} \Psi(x) = (T^* T )^{-1} T^* \, \mathrm{Prox} \, Tx = T^\dag \, \mathrm{Prox} \, Tx$. Finally, $\Psi$ is convex since it is the concatenation of a convex function with a linear operator. \end{proof} For $\mathrm{Prox} \coloneqq \mathrm{prox}_{g}$ with $g \in \Gamma_0(\mathcal{K})$, we are actually able to explicitly compute $f \in \Gamma_0(\mathcal{H})$ such that $T^\dagger \, \mathrm{Prox} \, T = \mathrm{prox}_f$ on ${\mathcal{H}_{T}}$. Clearly, this also gives an alternative proof for Theorem~\ref{thm:Existence}. \begin{thm}\label{thm:ExpForm} Let $T \in \mathcal{B}(\mathcal{H},\mathcal{K})$ be an injective operator with closed range and $\mathrm{Prox} \coloneqq \mathrm{prox}_{g}$ for some $g \in \Gamma_0(\mathcal{K})$. Then $T^{\dagger} \, \mathrm{prox}_{g} \, T \colon {\mathcal{H}_{T}} \to {\mathcal{H}_{T}}$ is the proximity operator on ${\mathcal{H}_{T}}$ of $f \in \Gamma_0(\mathcal{H})$ given by \begin{equation} \label{prox_expl} f(x) \coloneqq g \square \Bigl( \tfrac12 \| \cdot \|_{\mathcal{K}}^2 + \iota_{\mathcal{N}(T^*)} \Bigr) (Tx), \end{equation} where $\mathcal{N}(T^*)$ denotes the kernel of $T^*$. For bijective $T \in \mathcal{B}(\mathcal{H},\mathcal{K})$ this expression simplifies to $ f(x) = g(Tx). $ \end{thm} \begin{proof} Since $\mathcal{R}(T)$ is closed, we have the orthogonal decomposition $\mathcal{K} = \mathcal{R}(T) \oplus \mathcal{N}(T^*)$.
Then, we get \begin{align} T^\dagger \, \mathrm{prox}_{g} \, (T x) &= T^\dagger \argmin_{z \in \mathcal{K}} \Bigl\{ \tfrac12 \|z - Tx\|_{\mathcal{K}}^2 + g(z)\Bigr\}\\ &= T^\dagger P_{\mathcal{R}(T)} \argmin_{z = z_1+z_2, z_1 \in \mathcal{R}(T), z_2 \in \mathcal{N}(T^*)} \Bigl\{ \frac12 \|z_1 + z_2 - Tx\|_{\mathcal{K}}^2 + g(z_1 + z_2) \Bigr\}\\ &= T^\dagger \argmin_{z_1 \in \mathcal{R}(T)} \inf_{z_2 \in \mathcal{N}(T^*)} \Bigl\{ \frac12 \|z_1 - Tx\|_{\mathcal{K}}^2 + \frac12 \|z_2\|_{\mathcal{K}}^2 + g(z_1 + z_2) \Bigr\}\\ &= T^\dagger \argmin_{z_1 \in \mathcal{R}(T)} \left\{ \frac12 \|z_1 - Tx\|_{\mathcal{K}}^2 + \inf_{z_2 \in \mathcal{N}(T^*)} \Bigl\{ \frac12\|z_2\|_{\mathcal{K}}^2 + g(z_1 + z_2) \Bigr\} \right\}\\ &= T^\dagger T \argmin_{y \in \mathcal{H}} \left\{ \frac12 \|Ty - Tx\|_{\mathcal{K}}^2 + \inf_{z_2 \in \mathcal{N}(T^*)} \Bigl\{ \frac12\|z_2\|_{\mathcal{K}}^2 + g(Ty + z_2) \Bigr\} \right\}\\ &= \argmin_{y \in \mathcal{H}} \left\{ \frac12 \|y - x\|_{\mathcal{H}_{T}}^2 + \inf_{z_2 \in \mathcal{N}(T^*)} \Bigl\{ \frac12\|z_2\|_{\mathcal{K}}^2 + g(Ty + z_2) \Bigr\} \right\} \label{magic}\\ &= \argmin_{y \in \mathcal{H}} \left\{ \frac12 \|y - x\|_{\mathcal{H}_{T}}^2 + g \square \Bigl( \tfrac12 \| \cdot \|_{\mathcal{K}}^2 + \iota_{\mathcal{N}(T^*)} \Bigr) (Ty) \right\}, \end{align} where $f\square g(x) \coloneqq \inf_{y\in \mathcal{H} } f(y) + g(x-y)$ denotes the infimal convolution of $f,g \in \Gamma_0(\mathcal{H})$ and $x \mapsto \iota_S(x)$ is the indicator function of the set $S$ taking the value $0$ if $x \in S$ and $+\infty$ otherwise. Hence, we conclude that $T^{\dagger} \, \mathrm{prox}_{g} \, T$ is the proximity operator on ${\mathcal{H}_{T}}$ of $f$ in \eqref{prox_expl}. In particular, we conclude for bijective $T$ by \eqref{magic} that \[ T^{\dag} \, \mathrm{prox}_{g} \, (T x) = \argmin_{y \in \mathcal{H}} \Bigl\{ \frac12 \| x-y\|_{\mathcal{H}_{T}}^2 + g(Ty) \Bigr\}. 
\] \end{proof} Note that, in general, $f$ is a weaker regularizer than $g$ in the sense that $f(x) \leq g(Tx)$ for all $x \in \mathcal{H}$. This is necessary, since with $g(T\cdot)$ in place of $f$ we would get, using the same reasoning as in \eqref{magic}, \[ \argmin_{y \in \mathcal{H}} \Bigl\{\frac12 \| x-y\|_{\mathcal{H}_{T}}^2 + g(Ty) \Bigr\} = T^\dagger \argmin_{z \in \mathcal{K}} \Bigl\{ \frac12 \|z - Tx\|_\mathcal{K}^2 + g(z) + \iota_{{\cal R}(T)}(z)\Bigr\} \neq T^\dagger \mathrm{prox}_{g} (T x). \] \subsection{Frame Soft Shrinkage} In this section, we investigate the frame soft shrinkage as a special proximity operator. Let $\mathcal{K} = \ell_2$ be the Hilbert space of square-summable sequences $c = \{c_k\}_{k \in \mathbb N}$ with norm $\|c \|_{\ell_2} \coloneqq ( \sum_{k \in \mathbb N} |c_k|^2)^{\frac12}$ and assume further that $\mathcal{H}$ is separable. A set $\{x_k\}_{k\in\ensuremath{\mathbb{N}}}$, $x_k \in \mathcal{H}$, is called a frame of $\mathcal{H}$ if there exist constants $0 < A \le B < \infty$ such that for all $x \in \mathcal{H}$, \begin{equation} \label{frame} A \|x\|_\mathcal{H}^2 \le \sum_{k\in \ensuremath{\mathbb{N}}} |\langle x,x_k \rangle_\mathcal{H} |^2 \le B \|x\|_\mathcal{H}^2. \end{equation} Given a frame $\{x_k\}_{k\in\ensuremath{\mathbb{N}}}$ of $\mathcal{H}$, the corresponding analysis operator $T \colon \mathcal{H} \to\ell_2$ is defined as $$Tx=\{ \langle x,x_k \rangle_\mathcal{H}\}_{k\in\ensuremath{\mathbb{N}}}, \quad x\in \mathcal{H}.$$ Its adjoint $T^*\colon\ell_2 \to \mathcal{H}$ is the synthesis operator given by $$T^*\{c_k\}_{k\in\ensuremath{\mathbb{N}}} = \sum_{k\in\ensuremath{\mathbb{N}}} c_k x_k, \quad \{c_k\}_{k\in\ensuremath{\mathbb{N}}} \in\ell_2.$$ By composing $T$ and $T^*$, we get the frame operator $$T^*Tx = \sum_{k\in\ensuremath{\mathbb{N}}} \langle x , x_k \rangle_\mathcal{H} x_k, \quad x\in \mathcal{H}.
$$ The frame operator $T^*T$ is invertible on $\mathcal{H}$, see \cite{CB2016}, such that $$x = \sum_{k\in\ensuremath{\mathbb{N}}} \langle x , x_k \rangle_\mathcal{H} (T^*T)^{-1} x_k, \quad x\in \mathcal{H}. $$ The sequence $\{ (T^*T)^{-1}x_k\}_{k\in\ensuremath{\mathbb{N}}}$ is called the canonical dual frame of $\{ x_k\}_{k\in\ensuremath{\mathbb{N}}}$. Note that $T^\dagger$ is indeed the synthesis operator for the canonical dual frame of $\{ x_k\}_{k\in\ensuremath{\mathbb{N}}}$. The relation between linear, bounded, injective operators with closed range and frame analysis operators reads as follows: \begin{proposition} \label{prop:1} An operator $T \in \mathcal{B}(\mathcal{H},\ell_2)$ is injective and has closed range if and only if it is the analysis operator of some frame of $\mathcal{H}$. \end{proposition} \begin{proof} If $T$ is the analysis operator for a frame $\{x_k\}_{k\in\ensuremath{\mathbb{N}}}$, then $T$ is bounded, injective and has closed range, see \cite{CB2016}. Conversely, assume that $T \in \mathcal{B}(\mathcal{H},\ell_2)$ is injective and that $\mathcal{R}(T)$ is closed. By the closed range theorem, it holds $\mathcal{R}(T^*) = \mathcal{H}$. Let $\{\delta_k\}_{k\in\ensuremath{\mathbb{N}}}$ be the canonical basis of $\ell_2$ and set $\{x_k \}_{k\in\ensuremath{\mathbb{N}}}\coloneqq \{T^{*} \delta_k\}_{k\in\ensuremath{\mathbb{N}}}$. Since $\sum_{k\in \ensuremath{\mathbb{N}}} |\langle x,x_k \rangle_\mathcal{H} |^2 = \Vert Tx \Vert_{\ell_2}^2$, we conclude that $\{x_k \}_{k\in\ensuremath{\mathbb{N}}}$ is a frame of $\mathcal{H}$ and $T$ is the corresponding analysis operator. \end{proof} The soft shrinkage operator $S_\lambda$ on $\ell_2$ (applied componentwise) is the proximity operator corresponding to the function $g \coloneqq \lambda \| \cdot \|_1$, $\lambda>0$. Then we obtain as an immediate consequence of Theorem~\ref{thm:ExpForm} the following corollary.
\begin{corollary}\label{cor:f1} Let $T\colon \mathcal{H} \rightarrow \ell_2$ be the analysis operator of a frame of $\mathcal{H}$ and $\mathrm{Prox}\colon \ell_2 \to \ell_2$ an arbitrary proximity operator. Then $T^{\dagger} \, \mathrm{Prox} \, T$ is itself a proximity operator on $\mathcal{H}$ equipped with the norm $\| \cdot \|_{\mathcal{H}_{T}}$. In particular, it holds for $\mathrm{Prox} \coloneqq S_\lambda$, $\lambda >0$ that $$ T^{\dagger} \, S_\lambda \, T (x) = \argmin_{y \in \mathcal{H}} \left\{ \tfrac12 \|x-y\|_{\mathcal{H}_{T}}^2 + f(y)\right\}, \quad f(y) \coloneqq \lambda \|\cdot\|_1 \square \Bigl( \tfrac12 \| \cdot \|_{\ell_2}^2 + \iota_{\mathcal{N}(T^*)} \Bigr) (Ty). $$ \end{corollary} Finally, let us have a look at the finite dimensional setting with $\mathcal{H} \coloneqq \ensuremath{\mathbb{R}}^d$, $\mathcal{K} \coloneqq \ensuremath{\mathbb{R}}^n$, $n\ge d$. Then we have for any $T \in \ensuremath{\mathbb{R}}^{n,d}$ with full rank $d$ and the soft shrinkage operator $S_\lambda$ on $\ensuremath{\mathbb{R}}^n$ that \begin{equation} \label{rechnen} T^{\dagger} \, S_\lambda \, T (x) = \argmin_{y \in \ensuremath{\mathbb{R}}^d} \left\{ \frac12 \|x-y\|_T^2 + f(y) \right\}, \quad f(y) \coloneqq \lambda \|\cdot\|_1 \square \Bigl( \tfrac12 \| \cdot \|_{\ell_2}^2 + \iota_{\mathcal{N}(T^\mathrm{T})} \Bigr) (Ty). \end{equation} \begin{example} \label{ex:2} We want to compute $f$ for the matrix $T\colon \ensuremath{\mathbb{R}}^{1} \to \ensuremath{\mathbb{R}}^{2}$ given by $T = \begin{pmatrix} 1 & 2\end{pmatrix}^\mathrm{T}$ and the soft shrinkage operator $S_1$ on $\ensuremath{\mathbb{R}}^2$. Note that this example was also considered in \cite{GP2019}.
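For this concrete $T$, the identity \eqref{rechnen} can also be checked numerically before working through the derivation. The following sketch (Python with NumPy; the helper names and grids are ours) evaluates $f$ by a grid search over the null-space variable and compares $T^\dagger S_1 T$ with the proximity operator of $f$ in the $T$-norm $\|v\|_T^2 = \|Tv\|_2^2 = 5v^2$:

```python
import numpy as np

# Sanity check for Example 2: T = (1, 2)^T, soft shrinkage S_1 on R^2.
# We verify that T^dagger S_1(Tx) equals the prox of f in the T-norm,
# with f evaluated by grid search over the null-space coordinate x2,
# since N(T^T) = {(-2*t, t) : t real}.  Helper names are ours.
T = np.array([1.0, 2.0])
T_pinv = T / (T @ T)                       # Moore-Penrose inverse: (1/5)(1, 2)

def soft(z, lam=1.0):                      # componentwise soft shrinkage
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

y = np.linspace(-3.0, 3.0, 1201)           # candidate minimizers (step 0.005)
x2 = np.linspace(-1.0, 1.0, 801)           # null-space coordinate (step 0.0025)
# f(y) = min_{x2} |y + 2 x2| + |2 y - x2| + (5/2) x2^2
f = (np.abs(y[:, None] + 2.0 * x2[None, :])
     + np.abs(2.0 * y[:, None] - x2[None, :])
     + 2.5 * x2[None, :] ** 2).min(axis=1)

for x in [-1.3, -0.2, 0.0, 0.1, 0.7, 2.0]:
    lhs = T_pinv @ soft(T * x)                         # T^dagger S_1 T x
    rhs = y[np.argmin(2.5 * (y - x) ** 2 + f)]         # prox of f in the T-norm
    assert abs(lhs - rhs) < 1e-6, (x, lhs, rhs)
print("T^dagger S_1 T coincides with the prox of f in the T-norm")
```

The grid steps are chosen so that the inner minimizers (which turn out to lie at $-y/2$ or $\mp 1/5$) fall exactly on grid points; with coarser, unaligned grids the comparison only holds up to the grid resolution.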
By \eqref{rechnen} and since $x = (x_1, x_2)^\mathrm{T} \in \mathcal{N}(T^\mathrm{T})$ if and only if $x_1 = -2 x_2$ we obtain \begin{align} f(y) & = \Vert \cdot \Vert_1 \square \Bigl( \frac12 \| \cdot \|_2^2 + \iota_{\mathcal{N}(T^\mathrm{T})} (\cdot) \Bigr) (Ty)\\ &= \min_{Ty = z+x} \left\{ \|z\|_1 + \tfrac12 \| x \|_{\ell_2}^2 + \iota_{\mathcal{N}(T^\mathrm{T})}(x) \right\} = \min_{x} \left\{ \|Ty-x\|_1 + \tfrac12 \| x \|_{\ell_2}^2 + \iota_{\mathcal{N}(T^\mathrm{T})}(x) \right\}\\ & = \min_{x \in \ensuremath{\mathbb{R}}^2} \bigl\Vert (y , 2y )^\mathrm{T} - ( x_1, x_2)^\mathrm{T} \bigr\Vert_1 + \frac{1}{2} \Vert x \Vert_2^2 + \iota_{\mathcal{N}(T^\mathrm{T})}(x)\\ & = \min_{x_2 \in \ensuremath{\mathbb{R}}} \vert y +2x_2 \vert + \vert2y-x_2 \vert + \frac{5}{2} x_2^2. \end{align} Consider the strictly convex function $g_y(x_2) = \vert y +2x_2 \vert + \vert2y-x_2 \vert + \frac{5}{2} x_2^2$. For $\vert y \vert \leq \frac{2}{5}$, it holds \begin{equation} 0 \in \partial_{x_2} g_y \left(-\frac{y}{2} \right) = [-2,2] - \mathrm{sgn}(y) -\frac{5}{2} y. \end{equation} Hence, by Fermat's theorem, the unique minimizer of $g_y$ is given by $-\frac{y}{2}$. Consequently, we have for $\vert y \vert \leq \frac{2}{5}$ that \begin{equation} f(y) = \frac{5}{2} \vert y \vert + \frac{5}{8} y^2. \end{equation} For $\vert y \vert > \frac{2}{5}$, the function $g_y$ is differentiable at $-\frac15 \mathrm{sgn}(y)$ and it holds \begin{equation} \partial_{x_2} g_y\Bigl(-\frac15 \mathrm{sgn}(y)\Bigr) = 2\mathrm{sgn}(y) - \mathrm{sgn}(y) -\mathrm{sgn}(y) = 0. \end{equation} Therefore, for $\vert y \vert > \frac{2}{5}$, the minimizer of $g_y$ is $-\frac15 \mathrm{sgn}(y)$ and \begin{equation} f(y) = 3\vert y \vert - \frac{1}{10}. \end{equation} \end{example} \subsection*{Acknowledgments} We would like to thank J.-C. Pesquet for pointing us to \cite{Combettes2018}, which we were not aware of when writing this paper. It appears that Remark 3.10\,iv) in \cite{Combettes2018} can be reformulated towards our setting.
Funding by the German Research Foundation (DFG) within the project STE 571/13-1, the RTG 1932 and the RTG 2088 is gratefully acknowledged. \bibliographystyle{abbrv}
\section{Introduction} The notion of energy is of central importance in modern culture and technology. It allows us to calculate how we can substitute manpower by engines and thus facilitate daily life. We trade energy and we even ascribe prices to the different types of energy. We talk of fossil fuels, oil and gas, and renewable energies, wind and water power. In these cases we do not mean the types of energy but the types of fuels. In physics we define different types of energy, potential, kinetic and thermal energy, and explain the conservation of energy. We know that stable systems are located at a minimum of potential energy. The mathematical expression for the energy, the Hamilton function $H$, has become a central notion of many branches of physics. From $H$ we can derive the stability of systems and the time evolution of unstable systems. If we know the particles and fields, i.e.\ the degrees of freedom of a system, and the expression for the energy, we have the essential information for successful calculations. Nevertheless, we can ask what energy really is. We realise that energy is strongly related to time. Energy is conserved if the Hamilton function is invariant under translations in time. In classical mechanics, the evolution of a system is given by the temporal derivatives of the generalized coordinates $q_i$ and momenta $p_i$, which are expressed in terms of the partial derivatives of the Hamilton function $H(q_k,p_k)$ according to Hamilton's equations~\cite{Gosson:2018aa} \begin{equation} \dot q_i=\frac{\partial H(q_k,p_k)}{\partial p_i},\quad \dot p_i=-\frac{\partial H(q_k,p_k)}{\partial q_i}. \end{equation} The evolution of a quantum mechanical system is described by the time-evolution operator~\cite{Messiah:1995aa} \begin{equation}\label{ZeitEntOp} \exp\{-\mathrm iHt/\hbar\}, \end{equation} where $H$ is the Hamiltonian of the system.
In submicroscopic physics the product of energy and time, the action $S$, is measured in units of Planck's constant $h=2\pi\hbar=6.626\cdot10^{-34}~$Js, one of the fundamental constants of nature. \section{Experimental results} The relation of energy and time shows up even more directly in experiments. The precise measurements of time, which are available today with atomic clocks, demonstrate that clocks of the same type are running faster if they are moved to regions of higher potential energy or if the velocity is decreased~\cite{Hafeleaa:1972,Hafeleab:1972}. Today, measurements are enormously precise~\cite{Chou24092010}. For a difference in height $\Delta h=0.33~$m a relative difference of frequency $\nu$ \begin{equation} \frac{\Delta\nu}{\nu}=(4.1\pm1.6)\cdot10^{-17} \end{equation} was measured. This value agrees with the relative increase of the energy \begin{equation}\label{GravEneZunahme} \frac{m_0g\,\Delta h}{m_0c^2}=3.6\cdot10^{-17} \end{equation} of a mass $m_0$ in the gravitational field with acceleration $g$. Time dilations for velocities of less than 10 meters per second were reported in the above-mentioned article, agreeing with the predictions of special relativity. \section{Theoretical reasoning} These experimental results may help us better understand the notion of energy. Despite the fact that clocks at different altitudes run with different speeds, we relate our (universal) time scale to Coordinated Universal Time (UTC), based on International Atomic Time (TAI), a weighted average of the time kept at present by the proper time~\footnote{The proper time is the time measured along a timelike world line in four-dimensional spacetime. Thus, clocks by definition measure the proper time.} of over 400 atomic clocks. In order to avoid errors, using this time in classical and quantum mechanics, we have learned to account for the different speeds of clocks in the product $Et$.
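The orders of magnitude quoted above are easy to reproduce; a short numerical check (Python, using rounded standard values for $g$ and $c$) gives:

```python
# Reproduce the gravitational frequency shift quoted in the experimental
# section (a sketch; g = 9.81 m/s^2 and c = 2.99792458e8 m/s are the usual
# rounded standard values, not taken from the cited measurement itself).
g = 9.81            # local gravitational acceleration in m/s^2
c = 2.99792458e8    # speed of light in m/s
dh = 0.33           # height difference in m

dnu_over_nu = g * dh / c**2          # m0*g*dh / (m0*c^2)
print(f"relative frequency shift: {dnu_over_nu:.2e}")   # approx. 3.6e-17

# second-order (special-relativistic) time dilation for v = 10 m/s
v = 10.0
print(f"time dilation v^2/(2c^2): {v**2 / (2 * c**2):.2e}")   # approx. 5.6e-16
```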
We use a universal time scale and adjust the factor energy, and in this way we take the variations of proper time into account. Therefore, an increase in the potential energy \begin{equation}\label{EneZunahme} E=m_0c^2\;\to\;E=m_0c^2+m_0g\Delta h \end{equation} indicates that the system is in a region where the proper time $\tau$ runs faster, see Eq.~(\ref{GravEneZunahme}), \begin{equation} \mathrm d\tau\;\to\;\mathrm d\tau^\prime =\mathrm d\tau(1+\frac{g\,\Delta h}{c^2}), \end{equation} or, more precisely, for a varying gravitational acceleration $\vec a(\vec r)$ along a path $\mathcal C$ \begin{equation} \mathrm d\tau\;\to\;\mathrm d\tau^\prime=\mathrm d\tau \big(1+\frac{1}{c^2}\int_{\mathcal C}\vec a(\vec r)\mathrm d\vec r\big). \end{equation} If we insert the acceleration \begin{equation} \vec a=-\frac{GM}{r^2}\vec e_r \end{equation} in the gravitational field of the Earth of mass $M$, we get \begin{equation}\label{PotEinfl} \mathrm d\tau^\prime =\mathrm d\tau\left[1+\frac{GM}{c^2}\left(\frac{1}{r_1} -\frac{1}{r_2}\right)\right]\quad\textrm{with}\quad\Delta h=r_2-r_1,\; g=\frac{GM}{r^2}. \end{equation} In quantum mechanics, where the states are represented by the wave functions $\psi(\vec r,t)$, the time evolution can be expanded in a Taylor series which can be formally summed up \begin{equation}\label{TaylorZeitEntw} \psi(\vec r,t)=\sum_{k=0}^\infty\frac{t^k\partial_t^k}{k!}\,\psi(\vec r,t) \Big|_{t=0}=\mathrm e^{t\,\partial_t}\psi(\vec r,t)\Big|_{t=0}. \end{equation} Since $\partial_t$ is an anti-Hermitian operator, we get real eigenvalues for the Hermitian operator $\mathrm i\partial_t$. Using universal time $t$, we have to take into account the different speeds of time in the operator $\mathrm i\partial_t$. According to the time-dependent Schrödinger equation \begin{equation}\label{ZASchroGl} \mathrm i\hbar\partial_t\psi=H\psi, \end{equation} the temporal derivative of the wave function is given by the Hamiltonian $H$ acting on that function.
With Eq.~(\ref{ZASchroGl}), the time evolution operators~(\ref{ZeitEntOp}) and (\ref{TaylorZeitEntw}) become identical. $H$ takes into account that the energy is proportional to the speed of time, as measured with precise clocks. In the path integral formulation of quantum mechanics~\cite{Feynman:1948ur,Feynman:1965aa,Ramond:1981pw,Kleinert:2004ev} the time evolution of a free particle in the comoving frame is described by the transition amplitude \begin{equation}\label{PfadFaktor} \mathrm e^{-\mathrm i\tau\,m_0c^2/\hbar}. \end{equation} To get the general expression we have to perform a Lorentz transformation from the comoving frame, which we call $\mathcal S^\prime$, with the coordinates $t^\prime=\tau$ and $\vec r^\prime=(x^\prime,y^\prime,z^\prime)=0$, to the laboratory frame $\mathcal S$ moving with the velocity $-\vec v$ relative to $\mathcal S^\prime$, \begin{equation}\label{LorTrafoTau} \tau=t^\prime=\gamma(t-\frac{\vec v\vec r}{c^2}),\quad \vec r^\prime=0\quad\textrm{with}\quad \gamma=\frac{1}{\sqrt{1-\beta^2}},\quad\beta=\frac{v}{c}. \end{equation} Multiplying by $m_0c^2$ we get \begin{equation}\label{LorTrafo} m_0c^2\tau=\underbrace{m_0c^2\gamma}_{E}t-\underbrace{m_0\gamma\vec v}_{\vec p}\vec r. \end{equation} Introducing energy $E$ and momentum $\vec p$ \begin{equation}\label{DefEp} E:=\gamma m_0c^2,\quad\vec p:=\gamma m_0\vec v \end{equation} we shift the dependence on the speed of time to these velocity-dependent quantities. Thus, the transition amplitude~(\ref{PfadFaktor}) transforms from the comoving to the laboratory frame \begin{equation}\label{ZeitEntwLabor} \mathrm e^{-\mathrm i\tau\,m_0c^2/\hbar} \ist{LorTrafo}\mathrm e^{\mathrm i(\vec p\vec r-Et)/\hbar}.
\end{equation} The propagation of a free particle is described by the action $S=\vec p\vec r-Et$, where energy and momentum are related by \begin{equation}\label{EneImpBez} E^2\ist{DefEp}m_0^2c^4+\vec p^{\,2}c^2 \end{equation} and in the non-relativistic approximation ($v\ll c$) by \begin{equation}\label{EApprox} E\ca{EneImpBez}m_0c^2+\frac{\vec p^{\,2}}{2m_0}. \end{equation} If the velocity $\vec v$ of a mass $m_0$ relative to an inertial frame $\mathcal S$ depends on time, we attribute this to an interaction. From the kinematical laws of special relativity we get the relation for the force $\vec F$~\cite{Goldstein:2002aa} \begin{equation}\label{EnergieErh} m_0c^2\mathrm d\gamma=\vec F\mathrm d\vec r. \end{equation} If the force $\vec F$ is an integrable quantity, it can be expressed by the gradient of a potential $V(\vec r)$ \begin{equation}\label{DefPot} \vec F=-\vec\nabla V(\vec r). \end{equation} Integrating Eq.~(\ref{EnergieErh}), we get the well-known relativistic expression for the energy $E$ \begin{equation}\label{GesamtENichtRel} m_0c^2\;\to\;E=\gamma m_0c^2+V\ist{EApprox}m_0c^2+T+V\quad\textrm{with}\quad T\approx\frac{\vec p^{\,2}}{2m_0}. \end{equation} This equation also includes the case of the gravitational field, see Eqs.~(\ref{EneZunahme})-(\ref{PotEinfl}), if we write \begin{equation}\label{GraviV} V(r)=-G\frac{Mm_0}{r}. \end{equation} Thus, taking into account both effects, moving frames and the effects of conservative forces, the time evolution (\ref{ZeitEntwLabor}) generalises to \begin{equation}\label{WirkungBeideBeitr} S=\int L\mathrm dt,\quad L=\vec p\,\dot{\vec r}-H,\quad H=\gamma m_0c^2+V. \end{equation} In the case of fields the Lagrangian $L$ can be written as an integral over a Lagrange density $\mathcal L$ with $L=\int\mathcal L\,\mathrm d^3r$. The transition amplitude for quantum fields is then given by \begin{equation}\label{BoltzFak} \mathrm e^{\mathrm iS/\hbar},\quad S=\frac{1}{c}\int \mathrm d^4x\,\mathcal L,\quad x=(ct,\vec r).
\end{equation} The fact that in a quantum system the action $S$ is measured in natural units of $\hbar$ reflects the property of path integrals that quantum fields are not restricted to the minimum of action. The action can fluctuate on the order of $\hbar$ around these minima~\cite{Feynman:1948ur,Feynman:1965aa,Ramond:1981pw,Kleinert:2004ev}. Every particle has its Compton time $\hbar/(m_0c^2)$, which also appears in the Schrödinger equation in natural units \begin{equation}\label{scaleSchroGl} \mathrm i\frac{\hbar}{m_0c^2}\partial_t\psi(\vec r,t) =\left[1-\frac{1}{2}\left(\frac{\hbar}{m_0c}\right)^2\Delta +\frac{V}{m_0c^2}\right]\psi(\vec r,t). \end{equation} This variation of the time scale with the rest mass of the particle is in agreement with the claim that we are using a universal time scale in our calculations. If the particle were to annihilate with an antiparticle, we would say in the language of energy that its rest energy $m_0c^2$ can be transformed into work on another mass or into an increase of its velocity. The ratio of proper time to universal time for this other particle would change accordingly. The rest energy of a particle has the ability to change the scale of the proper time of other particles. Every subsystem of a composite system therefore has its own time scale given by its Compton time. Since the temporal derivative of a composite system appears in the exponent in Eq.~(\ref{TaylorZeitEntw}), the temporal derivatives of the subsystems are summed up. Momenta are canonically conjugate to the coordinates. The gradient $\vec\nabla$ is an anti-Hermitian operator and performs a shift of the coordinates, in the same way as $\partial_t$ shifts the time, see Eq.~(\ref{TaylorZeitEntw}).
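The kinematical relations used above can be verified with a few lines of arithmetic. The following sketch (Python; the chosen values of $\beta$ are arbitrary) checks that the definitions $E=\gamma m_0c^2$ and $\vec p=\gamma m_0\vec v$ satisfy $E^2=m_0^2c^4+\vec p^{\,2}c^2$ exactly, and that the non-relativistic approximation is accurate to order $\beta^4$:

```python
import math

# Check that E = gamma*m0*c^2 and p = gamma*m0*v satisfy the
# energy-momentum relation E^2 = m0^2 c^4 + p^2 c^2, and that the
# non-relativistic form E ~ m0 c^2 + p^2/(2 m0) holds for v << c.
m0, c = 1.0, 1.0                       # units with m0 = c = 1
for beta in [1e-4, 0.1, 0.5, 0.9, 0.999]:
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    E, p = gamma * m0 * c**2, gamma * m0 * beta * c
    assert abs(E**2 - (m0**2 * c**4 + p**2 * c**2)) < 1e-9 * E**2

# non-relativistic limit: the relative error is O(beta^4)
beta = 1e-3
gamma = 1.0 / math.sqrt(1.0 - beta**2)
E_exact = gamma * m0 * c**2
E_approx = m0 * c**2 + (gamma * m0 * beta * c)**2 / (2 * m0)
print(f"relative error at beta=1e-3: {abs(E_exact - E_approx) / E_exact:.1e}")
```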
Under Lorentz transformation the quantum mechanical length scale, which for the particle at rest is the Compton wavelength $\hbar/(m_0c)$, is modified according to Eqs.~(\ref{LorTrafo}) and (\ref{DefEp}) by the factor $\gamma$, indicating that in our calculations we are also using universal length scales. General relativity introduces coordinate lines in a four-dimensional pseudo-Rie\-mannian manifold; coordinates are labels to specify events in 3+1-dimensional spacetime. Gravity is not regarded as a force but as a manifestation of the curvature of spacetime. The concept of proper time is directly used without transition to the concept of gravitational forces and gravitational energy. Since gravitational effects are taken into account in the metric, it is impossible to define a generally covariant energy-momentum tensor for the gravitational field~\cite{Padmanabhan:2010zzb}. The energy-momentum tensor takes into account only the strong, electromagnetic and weak interactions as sources of curvature and determines the time and length scales. The transition to a direct description of these interactions in terms of the degrees of freedom of space and time has not been made. The unification of gravitation and particle physics is still missing. The largest part of the present community of physicists is searching for a quantisation of gravity. One could also think of another type of unification by a geometrisation of particle physics. A step in this direction concerning electromagnetism, the only long-range interaction besides gravity, was done in Refs.~\cite{Faber:1999ia,Faber:2002nw,Faber:2012zz,Faber:2014bxa} on the classical level. \section{Conclusions} This paper addresses the conceptual issue how classical and quantum mechanics can account for the difference in the speed of clocks depending on their velocity or the strength of an external gravitational field.
The precise measurements with atomic clocks have impressively verified the exact predictions of Special and General Relativity. Classical and quantum mechanics, on the other hand, are using a ``universal'' time scale, not depending on the velocities of the clocks or the strength of the gravitational field. Energy is introduced as the quantity canonically conjugate to time. The product of energy and time is measured in units of one of the fundamental constants of nature, the quantum of action, Planck's constant $\hbar$. We argue in this article that during the history of physics we learned to construct the expressions for the energy in such a way as to describe physical processes by a ``universal'' time. The different speeds of clocks are taken into account by the factor energy. The reason for that is discussed in detail in classical and quantum mechanics for freely moving masses and masses under the influence of a potential. We should consider energy as a physical quantity reflecting the variation of the speed of clocks. This point of view is not treated in the literature or taught in lectures on classical mechanics, quantum mechanics and gravitation. It could finally lead to a generalised definition of energy. \section{Acknowledgement} The author thanks Helmut Rauch for the interesting comment that neutron interference experiments~\cite{Rauch:1974aa} in the gravitational field can be described either by the Schrödinger equation or by the difference of proper time~\cite{Greenberger:2012aa,Scheich:2013aa}. I am also thankful to Dmitry Antonov for the critical reading of the manuscript and for indicating additional references. \bibliographystyle{utphys}
\section{Introduction}\label{sec: intro} The Born-Oppenheimer (BO)~\cite{BO}, or adiabatic, treatment of the coupled motion of electrons and nuclei is among the most fundamental approximations in modern condensed-matter theory and forms the basis of our understanding of dynamical processes in molecules and solids. It offers a practical way to visualize a molecule or solid as a set of nuclei moving on a single potential energy surface (PES) generated by the electrons in a given eigenstate. However, it is based on the assumption that the electrons adjust instantaneously to adiabatic changes of the nuclear positions, and a variety of interesting phenomena in physics, chemistry and biology take place in the regime where this approximation breaks down. Prominent examples are the process of vision~\cite{cerulloN2010,schultenBJ2009,ishidaJPCB2012}, photo-synthesis~\cite{tapaviczaPCCP2011,flemingN2005}, photo-voltaic processes~\cite{rozziNC2013,silvaNM2013,jailaubekovNM2013}, proton-transfer/hydrogen storage ~\cite{sobolewski, varella, hammes-schiffer, marx} as well as phonon-induced superconductivity. Non-adiabatic molecular processes are usually explained in terms of BOPESs and transitions between the BO electronic states. In this context, the solution of the time-dependent Schr\"odinger equation (TDSE) is expanded in the complete system of BO electronic states, leading to a nuclear wave-packet with contributions on several BOPESs that undergo transitions in the regions of strong non-adiabatic coupling. This approach provides a formally exact description of the complete system if all the electronic states are taken into account. However, practical applications are limited to a small number of degrees of freedom. For large systems, the only feasible way of dealing with non-adiabatic processes is the introduction of classical or semi-classical approximations for the nuclear motion, coupled, non-adiabatically, to the (quantum mechanical) electrons. 
Although widely investigated~\cite{pechukas,ehrenfest,TSH,kapral-ciccotti}, the nature of the force driving the classical nuclei in this mixed quantum-classical treatment has not yet been fully identified. Recently~\cite{steps}, this problem has been addressed from a novel perspective by referring to the exact representation of the full molecular wave-function~\cite{AMG,AMG2} as a single product of a purely nuclear wave-function and an electronic factor that parametrically depends on the nuclear coordinates. In this framework, a TDSE for the nuclear wave-function is derived, where a time-dependent potential energy surface (TDPES) and a time-dependent vector potential arise as exact concepts and provide the \textit{driving force} for the nuclear evolution. The present paper discusses situations where the vector potential can be set to zero by an appropriate choice of gauge, thus leaving the TDPES as the only potential responsible for the nuclear dynamics. In this case, the force on the nuclei, in a classical sense, can be obtained as the gradient of the TDPES. But, is this the true classical force on the nuclei? We will try to address this issue by employing the exact TDPES, which is known for the simple system studied here, for the propagation of classical trajectories in order to (i) examine the quality of the classical approximation for the nuclear motion and (ii) get insight into the properties of approximate classical forces for a possible mixed quantum-classical treatment of non-adiabatic processes. Moreover, we will discuss the connections~\cite{steps} between this novel approach, based on a \textit{single} TDPES, and the well-established description in terms of several static (coupled) BOPESs. The paper is organized as follows. In Section~\ref{sec: background}, the exact factorization of the time-dependent electron-nuclear wave-function is presented and the equations that govern the evolution of the electronic and nuclear subsystems are discussed.
The TDPES is investigated and analyzed in detail in Section~\ref{sec: pes} for systems showing different degrees of non-adiabaticity. Section~\ref{sec: dynamics} presents some results obtained by performing classical dynamics on the exact surface, and in Section~\ref{sec: ehrenfest} we discuss the Ehrenfest theorem in the exact factorization representation of the full wave-function. In Section~\ref{sec: conclusion} some concluding remarks are given. \section{Exact decomposition of the electronic and nuclear motion}\label{sec: background} In the absence of a time-dependent external field, a system of interacting electrons and nuclei is described, non-relativistically, by the Hamiltonian \begin{equation}\label{eqn: hamiltonian} \hat H = \hat T_n+\hat H_{BO}, \end{equation} where $\hat T_n$ is the nuclear kinetic energy operator and \begin{equation}\label{eqn: boe} \hat{H}_{BO}({\underline{\underline{\bf r}}},{\underline{\underline{\bf R}}}) = \hat{T}_e({\underline{\underline{\bf r}}}) + \hat{W}_{ee}({\underline{\underline{\bf r}}}) + \hat{V}_{en}({\underline{\underline{\bf r}}},{\underline{\underline{\bf R}}}) + \hat{W}_{nn}({\underline{\underline{\bf R}}}), \end{equation} is the standard BO electronic Hamiltonian. The symbols ${\underline{\underline{\bf r}}}$ and ${\underline{\underline{\bf R}}}$ are used to collectively indicate the coordinates of $N_{e}$ electrons and $N_n$ nuclei, respectively.
It has been proved in~\cite{AMG,AMG2} that the full time-dependent electron-nuclear wave function, $\Psi({\underline{\underline{\bf r}}},{\underline{\underline{\bf R}}},t)$, which is the solution of the TDSE, \begin{equation}\label{eqn: tdse} \hat H\Psi({\underline{\underline{\bf r}}},{\underline{\underline{\bf R}}},t)=i\hbar\partial_t \Psi({\underline{\underline{\bf r}}},{\underline{\underline{\bf R}}},t), \end{equation} can be exactly factorized into the correlated product, \begin{equation}\label{eqn: factorization} \Psi({\underline{\underline{\bf r}}},{\underline{\underline{\bf R}}},t)=\chi({\underline{\underline{\bf R}}},t)\Phi_{\underline{\underline{\bf R}}}({\underline{\underline{\bf r}}},t), \end{equation} of the nuclear wave-function, $\chi({\underline{\underline{\bf R}}},t)$, and the electronic wave-function, $\Phi_{\underline{\underline{\bf R}}}({\underline{\underline{\bf r}}},t)$, that parametrically depends on the nuclear configuration and satisfies the partial normalization condition (PNC), \begin{equation} \int d{\underline{\underline{\bf r}}} \left|\Phi_{\underline{\underline{\bf R}}}({\underline{\underline{\bf r}}},t)\right|^2 = 1, \quad\forall\,\,{\underline{\underline{\bf R}}},t. \end{equation} The PNC is an essential element of this representation. Without imposing the PNC, the full wave-function can be factorized in many different (unphysical) ways.
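On a grid, the factorization and the PNC are easy to illustrate. The following sketch (Python with NumPy; the two-variable wave function is an arbitrary placeholder, not a solution of any molecular TDSE, and we set $\hbar=1$) builds $\chi$ and $\Phi_{\underline{\underline{\bf R}}}$ from a given $\Psi$ and checks the PNC as well as the gauge freedom of Eq.~(\ref{eqn: gauge}):

```python
import numpy as np

# Toy illustration of the exact factorization Psi(r, R) = chi(R) * Phi_R(r)
# with the partial normalization condition (PNC); the grid and wave function
# are arbitrary placeholders (hbar = 1).
r = np.linspace(-5, 5, 201)
R = np.linspace(-4, 4, 161)
dr, dR = r[1] - r[0], R[1] - R[0]

# some normalized, correlated two-variable wave function Psi(R, r)
Psi = np.exp(-0.5 * (r[None, :] - 0.3 * R[:, None])**2) * np.exp(-0.25 * R[:, None]**2)
Psi /= np.sqrt(np.sum(np.abs(Psi)**2) * dr * dR)

# nuclear factor: |chi(R)|^2 = integral |Psi(r,R)|^2 dr (real, positive gauge)
chi = np.sqrt(np.sum(np.abs(Psi)**2, axis=1) * dr)
Phi = Psi / chi[:, None]            # electronic factor, parametric in R

# PNC: integral |Phi_R(r)|^2 dr = 1 for every R
assert np.allclose(np.sum(np.abs(Phi)**2, axis=1) * dr, 1.0)

# gauge freedom: an R-dependent phase leaves Psi and the PNC unchanged
theta = 0.3 * R                     # arbitrary gauge function theta(R)
chi_g = np.exp(-1j * theta) * chi
Phi_g = np.exp(1j * theta)[:, None] * Phi
assert np.allclose(chi_g[:, None] * Phi_g, Psi)
assert np.allclose(np.sum(np.abs(Phi_g)**2, axis=1) * dr, 1.0)
print("PNC satisfied for all R; gauge transformation leaves Psi invariant")
```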
It is the PNC that makes the factorization~(\ref{eqn: factorization}) unique up to within a $({\underline{\underline{\bf R}}},t)$-dependent gauge transformation, \begin{equation}\label{eqn: gauge} \begin{array}{rcl} \chi({\underline{\underline{\bf R}}},t)\rightarrow\tilde\chi({\underline{\underline{\bf R}}},t)&=&e^{-\frac{i}{\hbar}\theta({\underline{\underline{\bf R}}},t)}\chi({\underline{\underline{\bf R}}},t) \\ \Phi_{\underline{\underline{\bf R}}}({\underline{\underline{\bf r}}},t)\rightarrow\tilde\Phi_{\underline{\underline{\bf R}}}({\underline{\underline{\bf r}}},t)&=&e^{\frac{i}{\hbar}\theta({\underline{\underline{\bf R}}},t)}\Phi_{\underline{\underline{\bf R}}}({\underline{\underline{\bf r}}},t). \end{array} \end{equation} Another important implication of imposing the PNC is that the diagonal of the $N$-body nuclear density matrix of the complete system is equal to $|\chi({\underline{\underline{\bf R}}},t)|^2$. The stationary variations~\cite{frenkel} of the quantum mechanical action\footnote{The PNC is inserted in the calculation of the stationary variations of the quantum mechanical action by means of Lagrange multipliers.} w.r.t. 
$\Phi_{\underline{\underline{\bf R}}}({\underline{\underline{\bf r}}},t)$ and $\chi({\underline{\underline{\bf R}}},t)$ lead to the derivation of the equations of motion \begin{eqnarray} \left(\hat{H}_{BO}({\underline{\underline{\bf r}}},{\underline{\underline{\bf R}}})+\hat U_{en}^{coup}[\Phi_{\underline{\underline{\bf R}}},\chi]-\epsilon({\underline{\underline{\bf R}}},t)\right) \Phi_{{\underline{\underline{\bf R}}}}({\underline{\underline{\bf r}}},t)&=&i\hbar\partial_t \Phi_{{\underline{\underline{\bf R}}}}({\underline{\underline{\bf r}}},t)\label{eqn: exact electronic eqn} \\ \left(\sum_{\nu=1}^{N_n} \frac{\left[-i\hbar\nabla_\nu+{\bf A}_\nu({\underline{\underline{\bf R}}},t)\right]^2}{2M_\nu} + \epsilon({\underline{\underline{\bf R}}},t) \right)\chi({\underline{\underline{\bf R}}},t)&=&i\hbar\partial_t \chi({\underline{\underline{\bf R}}},t). \label{eqn: exact nuclear eqn} \end{eqnarray} Here, $\epsilon({\underline{\underline{\bf R}}},t)$ is the TDPES, defined as \begin{equation}\label{eqn: tdpes} \epsilon({\underline{\underline{\bf R}}},t)=\left\langle\Phi_{\underline{\underline{\bf R}}}(t)\right|\hat{H}_{BO}+\hat U_{en}^{coup}-i\hbar\partial_t\left| \Phi_{\underline{\underline{\bf R}}}(t)\right\rangle_{\underline{\underline{\bf r}}}, \end{equation} $\hat U_{en}^{coup}[\Phi_{\underline{\underline{\bf R}}},\chi]$ is what we call the ``electron-nuclear coupling operator'', defined as \begin{align} \hat U_{en}^{coup}[\Phi_{\underline{\underline{\bf R}}},\chi]=&\sum_{\nu=1}^{N_n}\frac{1}{M_\nu}\left[ \frac{\left[-i\hbar\nabla_\nu-{\bf A}_\nu({\underline{\underline{\bf R}}},t)\right]^2}{2}\right.\label{eqn: enco} \\ &\left.+\left(\frac{-i\hbar\nabla_\nu\chi}{\chi}+{\bf A}_\nu({\underline{\underline{\bf R}}},t)\right) \left(-i\hbar\nabla_\nu-{\bf A}_{\nu}({\underline{\underline{\bf R}}},t)\right)\right],\nonumber \end{align} and ${\bf A}_{\nu}\left({\underline{\underline{\bf R}}},t\right)$ is the time-dependent vector potential, \begin{equation}\label{eqn:
vector potential} {\bf A}_{\nu}\left({\underline{\underline{\bf R}}},t\right) = \left\langle\Phi_{\underline{\underline{\bf R}}}(t)\right|-i\hbar\nabla_\nu\left.\Phi_{\underline{\underline{\bf R}}}(t) \right\rangle_{\underline{\underline{\bf r}}}. \end{equation} The symbol $\left\langle\,\,\cdot\,\,\right\rangle_{\underline{\underline{\bf r}}}$ indicates an integration over electronic coordinates only. In Eqs.~(\ref{eqn: exact electronic eqn}) and~(\ref{eqn: exact nuclear eqn}), $\hat U_{en}^{coup}[\Phi_{\underline{\underline{\bf R}}},\chi]$, $\epsilon({\underline{\underline{\bf R}}},t)$ and ${\bf A}_{\nu}\left({\underline{\underline{\bf R}}},t\right)$ mediate the coupling between the electronic and nuclear motions in a formally exact way. The electron-nuclear coupling operator, $\hat U_{en}^{coup}[\Phi_{\underline{\underline{\bf R}}},\chi]$, in the electronic equation~(\ref{eqn: exact electronic eqn}), depends on the nuclear wave-function and the first and second derivatives of the electronic wave-function with respect to the nuclear coordinates. This operator includes the coupling to the nuclear subsystem beyond the parametric dependence in the BO Hamiltonian $\hat H_{BO}({\underline{\underline{\bf r}}},{\underline{\underline{\bf R}}})$. The nuclear equation~(\ref{eqn: exact nuclear eqn}), on the other hand, has a particularly appealing form of a Schr\"odinger equation that contains a time-dependent vector potential~(\ref{eqn: vector potential}) and a time-dependent scalar potential~(\ref{eqn: tdpes}) that uniquely~\footnote{The scalar and vector potentials are uniquely determined up to within a gauge transformation, given in Eqs.~(\ref{eqn: transformation of epsilon}) and~(\ref{eqn: transformation of A}). However, as expected, the nuclear Hamiltonian in Eq.~(\ref{eqn: exact nuclear eqn}) is form-invariant under such transformations.} govern the nuclear dynamics and yield the nuclear wave-function. 
$\chi({\underline{\underline{\bf R}}},t)$ is interpreted as the nuclear wave-function since it leads to an $N$-body nuclear density, $\Gamma({\underline{\underline{\bf R}}},t)=\vert\chi({\underline{\underline{\bf R}}},t)\vert^2$, and an $N$-body current density, ${\bf J}_\nu({\underline{\underline{\bf R}}},t)=Im(\chi^*\nabla_\nu\chi)+ \Gamma({\underline{\underline{\bf R}}},t){\bf A}_\nu$, which reproduce the true nuclear $N$-body density and current density obtained from the full wave-function $\Psi({\underline{\underline{\bf r}}},{\underline{\underline{\bf R}}},t)$~\cite{AMG2}. The uniqueness of $\epsilon({\underline{\underline{\bf R}}},t)$ and ${\bf A}_{\nu}({\underline{\underline{\bf R}}},t)$ can be straightforwardly proved by following the steps of the current-density version~\cite{Ghosh-Dhara} of the Runge-Gross theorem~\cite{RGT}. The scalar potential and the vector potential transform as \begin{eqnarray} \tilde{\epsilon}({\underline{\underline{\bf R}}},t) &=& \epsilon({\underline{\underline{\bf R}}},t)+\partial_t\theta({\underline{\underline{\bf R}}},t)\label{eqn: transformation of epsilon} \\ \tilde{\bf A}_{\nu}({\underline{\underline{\bf R}}},t) &=& {\bf A}_{\nu}({\underline{\underline{\bf R}}},t)+\nabla_\nu\theta({\underline{\underline{\bf R}}},t),\label{eqn: transformation of A} \end{eqnarray} under the gauge transformation~(\ref{eqn: gauge}). \section{Time-dependent potential energy surface}\label{sec: pes} In this work, we present a detailed study of the TDPES for strongly coupled electronic and nuclear motions. In order to obtain the TDPES, the full electron-nuclear wave-function has to be calculated. Therefore, we need to choose a system that is simple enough to allow for a numerically exact treatment and that nevertheless exhibits characteristic features associated with non-adiabatic dynamics. Here, we employ the model of Shin and Metiu~\cite{MM}, consisting of three ions and a single electron, as depicted in Fig.~\ref{fig: metiu model}. 
\begin{figure}[h!] \centering \includegraphics{./Figure1.pdf} \caption{Schematic representation of the model system described by the Hamiltonian~(\ref{eqn: metiu-hamiltonian}). $R$ and $r$ indicate the coordinates of the moving ion and electron, respectively, in one dimension. $L$ is the distance between the fixed ions.} \label{fig: metiu model} \end{figure} Two ions are fixed at a distance of $L=19.0$~$a_0$, while the third ion and the electron are free to move in one dimension along the line joining the two fixed ions. The Hamiltonian of this system reads \begin{align}\label{eqn: metiu-hamiltonian} \hat{H}(r,R)= &-\frac{1}{2}\frac{\partial^2}{\partial r^2}-\frac{1}{2M}\frac{\partial^2}{\partial R^2} + \frac{1}{\left|\frac{L}{2} -R\right|}+\frac{1}{\left|\frac{L}{2} + R\right|}-\frac{\mathrm{erf}\left(\frac{\left|R-r\right|}{R_f}\right)} {\left|R - r\right|}\nonumber\\ &-\frac{\mathrm{erf}\left(\frac{\left|r-\frac{L}{2} \right|}{R_r}\right)}{\left|r-\frac{L}{2}\right|}-\frac{\mathrm{erf}\left(\frac{\left|r+\frac{L}{2}\right|} {R_l}\right)}{\left|r+\frac{L}{2}\right|}. \end{align} Here, the symbols ${\underline{\underline{\bf r}}}$ and ${\underline{\underline{\bf R}}}$ are replaced by $r$ and $R$, the coordinates of the electron and the movable ion measured from the center of the two fixed ions, and $M=1836$ is the mass, in atomic units, of the movable ion. The parameters $R_f$, $R_l$ and $R_r$ specify the interactions between the charged particles and can be tuned to have different couplings between the electronic and nuclear motions. To obtain the TDPES, we first solve the TDSE~(\ref{eqn: tdse}) for the complete system and obtain the full wave-function, $\Psi(r,R,t)$. This is done by the numerical integration of the TDSE using the split-operator (SPO) technique~\cite{spo}, with a time-step of $2.4\times10^{-3}$~$fs$ (or $0.1$~$a.u.$).
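For illustration, a single split-operator step for this two-dimensional TDSE can be sketched in a few lines of Python. This is a minimal sketch, not the production code used for the results below: the grid sizes, the initial state and the regularization of the soft-Coulomb terms at the origin are our own simplifying choices, while $L$, $R_f$, $R_l$, $R_r$ and the time-step are the strong-coupling values quoted in the text (atomic units throughout).

```python
import numpy as np
from math import erf, pi, sqrt

# Coarse illustrative grids; psi[i, j] = psi(r_i, R_j).
Nr, NR = 128, 64
r = np.linspace(-25.0, 25.0, Nr, endpoint=False)
R = np.linspace(-9.0, 9.0, NR, endpoint=False)
dr, dR = r[1] - r[0], R[1] - R[0]
rr, RR = np.meshgrid(r, R, indexing="ij")
M = 1836.0
L, Rf, Rl, Rr_par = 19.0, 5.0, 3.1, 4.0       # strong-coupling parameters

verf = np.vectorize(erf)
def soft(x, c):
    """erf(|x|/c)/|x| with its finite x -> 0 limit, 2/(c*sqrt(pi))."""
    ax = np.abs(x)
    return np.where(ax < 1e-10, 2.0 / (c * sqrt(pi)),
                    verf(ax / c) / np.maximum(ax, 1e-10))

# Potential of the Shin-Metiu Hamiltonian.
V = (1.0 / np.abs(L / 2 - RR) + 1.0 / np.abs(L / 2 + RR)
     - soft(RR - rr, Rf) - soft(rr - L / 2, Rr_par) - soft(rr + L / 2, Rl))

# Kinetic energy in k-space for both coordinates.
kr = 2 * pi * np.fft.fftfreq(Nr, dr)
kR = 2 * pi * np.fft.fftfreq(NR, dR)
T = kr[:, None] ** 2 / 2.0 + kR[None, :] ** 2 / (2.0 * M)

dt = 0.1                                       # a.u., i.e. 2.4e-3 fs
expV = np.exp(-0.5j * dt * V)
expT = np.exp(-1j * dt * T)

def spo_step(psi):
    """One Strang-split step: exp(-iVdt/2) exp(-iTdt) exp(-iVdt/2)."""
    psi = expV * psi
    psi = np.fft.ifft2(expT * np.fft.fft2(psi))
    return expV * psi

# Placeholder initial state: Gaussian nuclear packet times an electronic
# Gaussian (the actual calculation uses the second BO state instead).
psi = np.exp(-(RR + 4.0) ** 2 * 2.85 / 2) * np.exp(-(rr - RR) ** 2 / 2)
psi = psi.astype(complex)
psi /= sqrt(np.sum(np.abs(psi) ** 2) * dr * dR)
for _ in range(10):
    psi = spo_step(psi)
```

Since each factor is a pure phase and the FFT pair is unitary, the norm of $\Psi$ is conserved to machine precision, which is a convenient sanity check on any implementation of the scheme.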
The nuclear density is calculated, at each time, as the marginal probability of the configuration ${\underline{\underline{\bf R}}}$~\footnote{We reintroduce the bold-double underlined symbols for electronic and nuclear positions whenever the statements have general validity. } from the full wave-function \begin{equation} \left|\chi({\underline{\underline{\bf R}}},t)\right|^2=\int d{\underline{\underline{\bf r}}} \left|\Psi({\underline{\underline{\bf r}}},{\underline{\underline{\bf R}}},t)\right|^2. \end{equation} The phase $S({\underline{\underline{\bf R}}},t)$ of $\chi({\underline{\underline{\bf R}}},t)$ is determined by the choice of the gauge. We use the exact equality \begin{equation}\label{eqn: vector-exact} {\bf A}_\nu({\underline{\underline{\bf R}}},t) = \left|\chi({\underline{\underline{\bf R}}},t)\right|^{-2}\mbox{Im}\int d{\underline{\underline{\bf r}}}\,\Psi^*({\underline{\underline{\bf r}}},{\underline{\underline{\bf R}}},t)\nabla_\nu \Psi({\underline{\underline{\bf r}}},{\underline{\underline{\bf R}}},t) - \nabla_\nu S({\underline{\underline{\bf R}}},t) \end{equation} which follows immediately from the definition~(\ref{eqn: vector potential}) of the vector potential by inserting the factorization~(\ref{eqn: factorization}). The gauge is chosen by setting the vector potential to zero $A(R,t)\equiv0$ in Eq.~(\ref{eqn: vector-exact}), which is possible in our specific example because we are dealing with a one-dimensional system. Obviously, the choice of the gauge does not affect any physical observable. $S(R,t)$ is thus determined from the expression \begin{equation} S(R,t)=\int^R dR' \left|\chi(R',t)\right|^{-2}\mbox{Im}\int dr\,\Psi^*(r,R',t)\nabla_{R'}\Psi(r,R',t). 
\end{equation} From the calculated exact nuclear wave-function $\chi({\underline{\underline{\bf R}}},t)=e^{\frac{i}{\hbar}S({\underline{\underline{\bf R}}},t)}|\chi({\underline{\underline{\bf R}}},t)|$, we obtain the TDPES $\epsilon({\underline{\underline{\bf R}}},t)$ from Eq.~(\ref{eqn: tdpes}) by explicitly calculating the electronic wave-function $\Phi_{\underline{\underline{\bf R}}}({\underline{\underline{\bf r}}},t)=\Psi({\underline{\underline{\bf r}}},{\underline{\underline{\bf R}}},t)/\chi({\underline{\underline{\bf R}}},t)$. Alternatively, we may invert the nuclear equation~(\ref{eqn: exact nuclear eqn}). In the gauge we have implemented to perform the calculations, the TDPES alone determines the time evolution of $\chi({\underline{\underline{\bf R}}},t)$. In order to investigate the TDPES in detail, we study its gauge-invariant (GI) and gauge-dependent (GD) constituents separately (one easily verifies that $\tilde{\epsilon}_{GI}({\underline{\underline{\bf R}}},t)=\epsilon_{GI}({\underline{\underline{\bf R}}},t)$ and $\tilde{\epsilon}_{GD}({\underline{\underline{\bf R}}},t)=\epsilon_{GD}({\underline{\underline{\bf R}}},t)+\partial_t\theta({\underline{\underline{\bf R}}},t)$ under the gauge transformation~(\ref{eqn: gauge})), \begin{equation} \epsilon({\underline{\underline{\bf R}}},t)=\epsilon_{GI}({\underline{\underline{\bf R}}},t)+\epsilon_{GD}({\underline{\underline{\bf R}}},t), \end{equation} where \begin{equation}\label{eqn: gi tdpes} \epsilon_{GI}({\underline{\underline{\bf R}}},t)=\left\langle\Phi_{\underline{\underline{\bf R}}}(t)\right|\hat{H}_{BO}\left|\Phi_{\underline{\underline{\bf R}}}(t)\right\rangle_{\underline{\underline{\bf r}}} +\sum_{\nu=1}^{N_n}\bigg(\frac{\hbar^2}{2M_\nu}\left\langle\nabla_\nu\Phi_{\underline{\underline{\bf R}}}(t)|\nabla_\nu\Phi_{\underline{\underline{\bf R}}}(t) \right\rangle_{\underline{\underline{\bf r}}}-\frac{{\bf A}^2_\nu({\underline{\underline{\bf R}}},t)}{2M_\nu}\bigg), \end{equation} with the second
term on the RHS obtained from the action of the electron-nuclear coupling operator in Eq.~(\ref{eqn: enco}) on the electronic wave-function, and \begin{equation}\label{eqn: gd tdpes} \epsilon_{GD}({\underline{\underline{\bf R}}},t)=\left\langle\Phi_{\underline{\underline{\bf R}}}(t)\right|-i\hbar\partial_t\left|\Phi_{\underline{\underline{\bf R}}}(t)\right\rangle_{\underline{\underline{\bf r}}}. \end{equation} The GI part of the TDPES, $\epsilon_{GI}$, is not affected by the gauge transformation~(\ref{eqn: gauge}). The GD part, on the other hand, depends on the choice of the gauge. They both have important features~\cite{steps} that will be discussed and analyzed in the following section. For this analysis, we will use a representation in terms of the BO electronic states, $\varphi_{{\underline{\underline{\bf R}}}}^{(l)}({\underline{\underline{\bf r}}})$, and BOPESs, $\epsilon_{BO}^{(l)}({\underline{\underline{\bf R}}})$, which are the eigenstates and eigenvalues of the BO electronic Hamiltonian~(\ref{eqn: boe}), respectively. If the full wave-function is expanded in this basis, \begin{equation}\label{eqn: expansion of Psi} \Psi({\underline{\underline{\bf r}}},{\underline{\underline{\bf R}}},t)=\sum_l F_l({\underline{\underline{\bf R}}},t)\varphi_{\underline{\underline{\bf R}}}^{(l)}({\underline{\underline{\bf r}}}), \end{equation} then the nuclear density may be written as \begin{equation}\label{eqn: chi and Fl} \left|\chi({\underline{\underline{\bf R}}},t)\right| = \sqrt{\sum_{l}\left|F_l({\underline{\underline{\bf R}}},t)\right|^2}. \end{equation} This identity is obtained by integrating the squared modulus of Eq.~(\ref{eqn: expansion of Psi}) over the electronic coordinates with normalized adiabatic states. 
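The reconstruction steps described above, i.e. the marginal nuclear density, the phase $S$ fixed by the $A\equiv0$ gauge, and the conditional electronic wave-function $\Phi=\Psi/\chi$, can be sketched as follows (a minimal illustration in atomic units, $\hbar=1$, acting on a synthetic $\Psi$; in the actual calculation $\Psi$ comes from the TDSE propagation):

```python
import numpy as np

# Synthetic full wave-function Psi[i, j] = Psi(r_i, R_j) on a grid
# (placeholder data; in practice Psi is the propagated wave-function).
Nr, NR = 64, 48
r = np.linspace(-10, 10, Nr)
R = np.linspace(-6, 6, NR)
dr, dR = r[1] - r[0], R[1] - R[0]
rr, RR = np.meshgrid(r, R, indexing="ij")
Psi = np.exp(-(RR + 1) ** 2 - (rr - 0.5 * RR) ** 2 + 1j * (0.3 * RR + 0.1 * rr))
Psi /= np.sqrt(np.sum(np.abs(Psi) ** 2) * dr * dR)

# |chi(R)|^2 = int dr |Psi(r, R)|^2  (marginal nuclear density)
chi2 = np.sum(np.abs(Psi) ** 2, axis=0) * dr

# A(R) = 0 gauge: S(R) = int^R dR' |chi|^-2 Im int dr Psi* dPsi/dR'
dPsi_dR = np.gradient(Psi, dR, axis=1)
integrand = np.sum(np.imag(np.conj(Psi) * dPsi_dR), axis=0) * dr / chi2
S = np.concatenate(([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * dR)))

chi = np.sqrt(chi2) * np.exp(1j * S)      # nuclear wave-function
Phi = Psi / chi[None, :]                  # conditional electronic wave-function

# Partial normalization condition: int dr |Phi(r, R)|^2 = 1 for every R
pnc = np.sum(np.abs(Phi) ** 2, axis=0) * dr
```

By construction the factorization is exact, $\Phi\chi=\Psi$, and the partial normalization condition holds at every nuclear configuration, which is the numerical counterpart of the statements above.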
The exact electronic wave-function may also be expanded in terms of the BO states, \begin{equation}\label{eqn: expansion of Phi} \Phi_{\underline{\underline{\bf R}}}({\underline{\underline{\bf r}}},t)=\sum_l C_l({\underline{\underline{\bf R}}},t)\varphi_{\underline{\underline{\bf R}}}^{(l)}({\underline{\underline{\bf r}}}). \end{equation} The expansion coefficients of Eqs.~(\ref{eqn: expansion of Psi}) and~(\ref{eqn: expansion of Phi}) are related, \begin{equation}\label{eqn: relation coefficients} F_l({\underline{\underline{\bf R}}},t)= C_l({\underline{\underline{\bf R}}},t)\chi({\underline{\underline{\bf R}}},t), \end{equation} by virtue of the factorization~(\ref{eqn: factorization}). The PNC then reads \begin{equation}\label{eqn: PNC on BO} \sum_l\left|C_l({\underline{\underline{\bf R}}},t)\right|^2=1,\quad\forall\,\,{\underline{\underline{\bf R}}},t. \end{equation} In the cases studied in the following sections, the initial wave-function is the product of a real-valued normalized Gaussian wave-packet, centered at $R_c=-4.0$~$a_0$ with variance $\sigma=1/\sqrt{2.85}$~$a_0$ (black line in Fig.~\ref{fig: BO-data}), and the second BO electronic state, $\varphi_{R}^{(2)}(r)$. \subsection{Steps in the TDPES in strong non-adiabatic regime} \label{sec: strong coupling} We first study a case in which the electronic and nuclear motions are strongly coupled. In order to produce that situation, we choose the parameters of the Hamiltonian~(\ref{eqn: metiu-hamiltonian}) as $R_f=5.0$~$a_0$, $R_l=3.1$~$a_0$ and $R_r=4.0$~$a_0$ such that the first BOPES, $\epsilon^{(1)}_{BO}$, is strongly coupled to the second BOPES, $\epsilon^{(2)}_{BO}$, around the avoided crossing at $R_{ac}=-1.90~a_0$ and there is a weak coupling to the rest of the surfaces. The four lowest BOPESs for this set of parameters are shown in Fig.~\ref{fig: BO-data} (left panel), along with the initial nuclear density. Energies are given in atomic (Hartree) units $\epsilon_h$. 
The same figure (right panel) presents the time-evolution of the populations of the BO states, \begin{equation}\label{eqn: population BO} \rho_{l}(t) = \int d{\underline{\underline{\bf R}}} \left|F_l({\underline{\underline{\bf R}}},t)\right|^2, \end{equation} and underlines the strong non-adiabatic character of the system with the intense population exchange taking place at the passage through the avoided crossing ($t\simeq12$~$fs$). \begin{figure}[h!] \begin{center} \includegraphics{./Figure2.pdf} \end{center} \caption{Left: lowest four BO surfaces, as functions of the nuclear coordinate. The first (red line) and second (green line) surfaces will be considered in the actual calculations that follow, while the third and fourth (dashed black lines) are shown for reference. The squared modulus (reduced by a factor of ten and rigidly shifted in order to superimpose it on the energy curves) of the initial nuclear wave-packet is also shown (black line). Right: populations of the BO states along the time evolution. The strong non-adiabatic nature of the model is underlined by the population exchange as the wave-packet crosses the coupling region.} \label{fig: BO-data} \end{figure} As recently discussed~\cite{steps}, the GI part of the TDPES~(\ref{eqn: gi tdpes}) shows, in general, two distinct features: (i) in the vicinity of the avoided crossing, as the nuclear wave-packet passes through the region of non-adiabatic coupling between different BOPESs, $\epsilon_{GI}(R,t)$ resembles the \textit{diabatic} surface that smoothly connects the two adiabatic surfaces; (ii) a bit further away from the avoided crossing, it shows \textit{dynamical steps} between regions in $R$-space where it is on top of one or the other BOPES. The GD part of the TDPES~(\ref{eqn: gd tdpes}), on the other hand, is a piecewise constant function of the nuclear coordinate.
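Numerically, the amplitudes $F_l$ are obtained by projecting the full wave-function on the BO states, and the populations of Eq.~(\ref{eqn: population BO}) follow by quadrature. A minimal sketch (with synthetic, $R$-independent placeholder states instead of the actual $\varphi_R^{(l)}$, and a $\Psi$ assembled from known amplitudes so that the projection can be checked):

```python
import numpy as np

# Grids; for simplicity the synthetic "BO" states here do not depend on R.
Nr, NR = 101, 80
r = np.linspace(-8, 8, Nr)
R = np.linspace(-5, 5, NR)
dr, dR = r[1] - r[0], R[1] - R[0]

# Two orthonormal states on the r-grid (placeholders for phi_R^(l)).
phi1 = np.exp(-r ** 2 / 2)
phi1 /= np.sqrt(np.sum(phi1 ** 2) * dr)
phi2 = r * np.exp(-r ** 2 / 2)
phi2 /= np.sqrt(np.sum(phi2 ** 2) * dr)

# Full wave-function built from known nuclear amplitudes F_l(R).
F1 = 0.6 * np.exp(-(R - 1) ** 2)
F2 = 0.8 * np.exp(-(R + 1) ** 2) * np.exp(1j * 0.5 * R)
Psi = phi1[:, None] * F1[None, :] + phi2[:, None] * F2[None, :]
norm = np.sqrt(np.sum(np.abs(Psi) ** 2) * dr * dR)
Psi, F1, F2 = Psi / norm, F1 / norm, F2 / norm

# F_l(R) = <phi^(l)|Psi(., R)>_r  and  rho_l = int dR |F_l(R)|^2
F1_rec = np.sum(np.conj(phi1)[:, None] * Psi, axis=0) * dr
F2_rec = np.sum(np.conj(phi2)[:, None] * Psi, axis=0) * dr
rho = [np.sum(np.abs(F) ** 2) * dR for F in (F1_rec, F2_rec)]

# Identity |chi(R)|^2 = sum_l |F_l(R)|^2 (no electronic cross terms survive)
chi2 = np.sum(np.abs(Psi) ** 2, axis=0) * dr
```

The recovered amplitudes coincide with the input ones, the populations sum to the total norm, and the marginal nuclear density reproduces $\sum_l|F_l|^2$, i.e. the discrete version of Eq.~(\ref{eqn: chi and Fl}).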
This is illustrated in detail in Fig.~\ref{fig: snapshots strong}, which contains the GI part of the TDPES (upper panel), the GD part of the TDPES (middle panel) and the nuclear density together with $|F_1|^2$ and $|F_2|^2$ (lower panel) for three different snapshots of time. In all the plots, the regions highlighted within the boxes are those we refer to in the following discussion. Outside such regions, the value of the nuclear density drops below the numerical accuracy and the resulting potentials are not meaningful. That is why the TDPES is trimmed there. The left panels show, at the initial time-step, (top) the GI part of the TDPES (black dots), with the two lowest BOPESs ($\epsilon_{BO}^{(1)}(R)$, dashed red line, and $\epsilon_{BO}^{(2)}(R)$, dashed green line) as reference, (center) the GD part of the exact potential (dark-green dots) and (bottom) the nuclear density (dashed black line) and its components on the BO states (see Eq.~(\ref{eqn: chi and Fl})), $|F_1(R,t)|^2$ (red line) and $|F_2(R,t)|^2$ (green line). \begin{figure}[h!] \centering \includegraphics{./Figure3.pdf} \caption{TDPES and nuclear densities at different time-steps, namely $t=0$~$fs$, $t=10.88$~$fs$ and $t=26.61$~$fs$. The different panels show: (top) GI part of the TDPES (black dots) and the two lowest BOPESs (first, dashed red line, and second, dashed green line) as reference; (center) the GD part of the TDPES (green dots); (bottom) nuclear density (dashed black line) and $|F_l(R,t)|^2$ ($l=1$ red line and $l=2$ green line).
The gray boxes define the regions in $R$-space where the energies have been calculated, since the nuclear density is (numerically) not zero.} \label{fig: snapshots strong} \end{figure} At time $t=0$~$fs$, the electronic wave-function, $\Phi_R(r,t)$, coincides with the second adiabatic state $\varphi_R^{(2)}(r)$, therefore the GI component of the TDPES is identical to $\epsilon_{BO}^{(2)}(R)$, apart from a slight deviation due to the second term in Eq.~(\ref{eqn: gi tdpes}). This is easily confirmed by the expression of $\epsilon_{GI}(R,t)$ in terms of the BO states and energies \begin{align} \epsilon_{GI}(R,t)&=\sum_{l}\left|C_l(R,t)\right|^2\epsilon_{BO}^{(l)}(R)+\frac{\hbar^2}{2M} \left[\sum_{l,k}C_l^*(R,t)C_k(R,t)d_{lk}^{(2)}(R)\right. \label{eqn: gi tdpes on BO}\\ &\left.+\sum_{l,k}\left({C_l^*}'(R,t)C_k(R,t)-C_l^*(R,t)C_k'(R,t)\right)d_{lk}^{(1)}(R)+ \sum_l\left|C_l'(R,t)\right|^2\right],\nonumber \end{align} where the prime indicates the spatial derivative of the coefficients and we have introduced the non-adiabatic couplings \begin{eqnarray} d_{lk}^{(1)}(R)=&\left\langle\varphi_{R}^{(l)}\right.\left|\nabla_{R}\varphi_{R}^{(k)}\right\rangle_r & =-{d_{kl}^{(1)}}^*(R) \\ d_{lk}^{(2)}(R)=&\left\langle\nabla_{R}\varphi_{R}^{(l)}\right.\left|\nabla_{R}\varphi_{R}^{(k)}\right\rangle_r & = {d_{kl}^{(2)}}^*(R). \end{eqnarray} The leading term in Eq.~(\ref{eqn: gi tdpes on BO}) is the average of the BOPESs weighted by $\left|C_l(R,t) \right|^2$, since the second term is $\mathcal O(M^{-1})$. The GD component of the TDPES in Eq.~(\ref{eqn: gd tdpes}), in terms of the BO states, becomes \begin{equation}\label{eqn: gd tdpes on BO} \epsilon_{GD}(R,t) = \sum_{l}\left|C_l(R,t)\right|^2\dot\gamma_{l}(R,t) \end{equation} where $\dot\gamma_{l}(R,t)$ is the time-derivative of the phase of the coefficients $C_l(R,t)= e^{\frac{i}{\hbar}\gamma_l(R,t)}|C_l(R,t)|$.
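The numerical evaluation of first-order couplings $d^{(1)}_{lk}$ can be sketched on a two-level diabatic model, for which the analytic answer is known; this toy Hamiltonian is our own illustrative stand-in (for the Shin-Metiu model the states would instead come from the grid diagonalization of $\hat H_{BO}(r;R)$, with the same sign-fixing issue):

```python
import numpy as np

# Two-level diabatic model H(R) = [[kappa R, delta], [delta, -kappa R]].
kappa, delta = 1.0, 0.2
Rg = np.linspace(-3.0, 3.0, 601)
dRg = Rg[1] - Rg[0]

states = np.empty((len(Rg), 2, 2))
prev = None
for i, Rn in enumerate(Rg):
    w, v = np.linalg.eigh(np.array([[kappa * Rn, delta],
                                    [delta, -kappa * Rn]]))
    if prev is not None:              # fix the arbitrary eigenvector sign
        for l in range(2):            # by continuity along the R grid
            if np.dot(v[:, l], prev[:, l]) < 0:
                v[:, l] = -v[:, l]
    states[i] = v
    prev = v

# d^(1)_{12}(R) = <phi^(1)| d/dR phi^(2)> by central differences.
dphi2 = np.gradient(states[:, :, 1], dRg, axis=0)
d12 = np.einsum("ij,ij->i", states[:, :, 0], dphi2)

# Analytic magnitude for this model: a Lorentzian of width delta/kappa
# peaked at the crossing, |d12| = kappa*delta / (2 (kappa^2 R^2 + delta^2)).
d12_exact = kappa * delta / (2.0 * (kappa ** 2 * Rg ** 2 + delta ** 2))
```

The sketch reproduces the expected localization of the coupling around the avoided crossing, which is the property invoked below when the $d^{(1)}_{lk}$ terms are neglected far from the crossing.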
The nuclear density, along with its components on the BO states from Eq.~(\ref{eqn: chi and Fl}), is presented in the bottom panels of Fig.~\ref{fig: snapshots strong}. At the initial time, $\left|\chi(R,t)\right|^2=\left|F_2(R,t)\right|^2$. At $t=10.88$~$fs$ in Fig.~\ref{fig: snapshots strong} (central panels), (top) the GI part of the TDPES resembles the diabatic surface~\cite{MM} that smoothly passes through the avoided crossing. This behavior allows the nuclear density moving on the upper BOPES to be partially ``transferred'' to the lower state, as the marked increase of the population of state $\varphi_R^{(1)}(r)$ (red curve in the bottom plot in Fig.~\ref{fig: snapshots strong}) confirms. In the region highlighted by the dashed box, the GD part of the exact potential is constant and therefore does not affect the nuclear dynamics. At later times ($t=26.61$~$fs$ shown in the right panels of Fig.~\ref{fig: snapshots strong}), when the nuclear wave-packet has split at the avoided crossing, both components of the TDPES present a pronounced stepwise behavior: the GI part follows one or the other BOPES in different regions of $R$-space that are connected by a step, whereas the GD part is stepwise constant, with steps appearing in the same region. The overall shape of the TDPES, at initial times, is determined by the GI part, as the effect of the GD part is no more than a constant shift. Hence, the TDPES, which drives the nuclear dynamics, behaves like a diabatic surface and ``opens'' in the direction of the wave-packet's motion in order to facilitate the population exchange between the adiabatic states. After the wave-packet splits at the avoided crossing, in different regions in $R$-space, the TDPES is parallel to one or the other BOPES and a step forms in the transition region.
Therefore, the motion of the components $F_l(R,t)$ of the nuclear wave-packet is driven by single adiabatic surfaces and not (like, e.g., in Ehrenfest dynamics) by an average electronic potential. This feature is reminiscent of the way the well-known \textsl{trajectory surface hopping} (TSH) scheme~\cite{TSH} deals with the non-adiabatic dynamics. In this approach, the components (in our case identified by the symbol $|F_l(R,t)|^2$) of the nuclear density on different BO states are represented by \textsl{bundles} of classical trajectories evolving, independently of one another, on different BO surfaces. The fraction of trajectories occupying each surface at a given time approximates the population $\rho_l$ of the corresponding BO state. The success of this method in reproducing non-adiabatic processes becomes clear in the light of the fact that the exact TDPES itself is parallel to different BOPESs in different regions along the nuclear coordinate. The usually abrupt transitions between the adiabatic surfaces, i.e., the steps in the exact treatment, are reminiscent of the stochastic jumps between BO surfaces in TSH. \subsection{Analysis of the steps} The behavior of the GI part of the TDPES is mainly determined by the first term in Eq.~(\ref{eqn: gi tdpes on BO}). The steps appear in the region around $R_0$, the cross-over of $|F_1(R,t)|^2$ and $|F_2(R,t)|^2$. In particular, at this point $|F_1(R_0,t)|=|F_2(R_0,t)|=|X(t)|$ and, irrespective of this value, the expansion coefficients in the electronic wave-function~(\ref{eqn: expansion of Phi}) have the value $|C_1(R_0,t)|^2=|C_2(R_0,t)|^2=1/2$. This relation holds as a consequence of Eq.~(\ref{eqn: relation coefficients}), which can be written as \begin{equation} \left|C_l(R_0,t)\right|^2 = \frac{\left|F_l(R_0,t)\right|^2}{\left|F_1(R_0,t)\right|^2+\left|F_2(R_0,t) \right|^2}=\frac{1}{2}\quad \mbox{with}\quad l=1,2, \end{equation} and is clearly shown in Fig.~\ref{fig: steps analysis}.
\begin{figure}[h!] \begin{center} \includegraphics{./Figure4.pdf} \end{center} \caption{Top: GI part (black line) and the GD part (blue line, rigidly shifted along the energy axis) of the exact potential at time $t=26.61$~$fs$. The first (dashed red) and second (dashed green) BOPESs are shown as reference. Bottom: coefficients $|F_l(R,t)|^2$ of the expansion of the full wave-function (Eq.~(\ref{eqn: expansion of Psi})) on the BO states ($l=1$ dashed red line, $l=2$ dashed green line) and coefficients $|C_l(R,t)|^2$ of the expansion of the electronic wave-function ($l=1$ continuous red line, $l=2$ continuous green line); the black line represents the nuclear density. $R_0$ is the position where the coefficients $|F_1(R,t)|^2$ and $|F_2(R,t)|^2$ have the same value and the dashed box highlights the region of the step.} \label{fig: steps analysis} \end{figure} Here we present, in the upper panel, the GI part (black line) and the GD part (blue line, rigidly shifted along the energy axis) of the exact potential at time $t=26.61$~$fs$. The BO surfaces (dashed red and green lines) are also plotted as reference. In the lower panel, we plot the coefficients of the expansions in Eq.~(\ref{eqn: expansion of Psi}) (dashed red and green lines) and in Eq.~(\ref{eqn: expansion of Phi}) (continuous red and green lines). The continuous black line represents the nuclear density. The expression of the GI component of the TDPES for a two-state system, from Eq.~(\ref{eqn: gi tdpes on BO}), is \begin{equation} \epsilon_{GI}(R,t) \simeq \left|C_1(R,t)\right|^2 \epsilon_{BO}^{(1)}(R)+\left|C_2(R,t)\right|^2\epsilon_{BO}^{(2)}(R), \end{equation} neglecting terms $\mathcal O(M^{-1})$.
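The formation of the step can be made concrete with a simple numerical sketch: two synthetic Gaussian amplitudes $|F_1|$, $|F_2|$ sitting on opposite sides of a cross-over, together with two placeholder BOPESs (all shapes and surfaces here are our own illustrative choices, not the Shin-Metiu data):

```python
import numpy as np

R = np.linspace(-6.0, 6.0, 1201)
F2 = np.exp(-(R + 2.0) ** 2)           # |F_2(R)|, packet left of the cross-over
F1 = 0.8 * np.exp(-(R - 2.0) ** 2)     # |F_1(R)|, packet right of the cross-over
eps1 = -0.05 + 0.001 * R               # placeholder epsilon_BO^(1)(R)
eps2 = 0.05 - 0.001 * R                # placeholder epsilon_BO^(2)(R)

C1sq = F1 ** 2 / (F1 ** 2 + F2 ** 2)   # |C_1|^2 = |F_1|^2 / |chi|^2
C2sq = 1.0 - C1sq
eps_GI = C1sq * eps1 + C2sq * eps2     # leading term of the two-state expression

# Cross-over R0 (|F_1| = |F_2|) and the slope of the coefficients there:
# alpha = (d|F_1|/dR - d|F_2|/dR) / |X|  with  |X| = |F_1(R0)|
interior = np.where((R > -2.0) & (R < 2.0))[0]
i0 = interior[np.argmin(np.abs(F1 - F2)[interior])]
R0 = R[i0]
alpha = (np.gradient(F1, R)[i0] - np.gradient(F2, R)[i0]) / F1[i0]
width = 2.0 / alpha                    # estimate of the step width Delta R
```

Away from $R_0$ one coefficient saturates and $\epsilon_{GI}$ lies on top of one or the other surface, while at $R_0$ it takes the average value; the step connecting the two plateaus has width controlled by the slope computed above.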
If $\left|C_l(R,t)\right|^2$ is Taylor-expanded around $R_0$, up to linear deviations, \begin{eqnarray} \left|C_{\mathop{}_2^1}(R,t)\right|^2&\simeq&\left.\frac{\left|F_{\mathop{}_2^1}(R,t)\right|^2}{\left|\chi(R,t) \right|^2}\right|_{R_0}+ \left.\nabla_{R}\frac{\left|F_{\mathop{}_2^1}(R,t)\right|^2}{\left|\chi(R,t)\right|^2} \right|_{R_0} (R-R_0)\nonumber \\ &=&\frac{1}{2}\pm \frac{\alpha(t)}{2}\left(R-R_0\right), \end{eqnarray} one can identify the parameter $\alpha(t)$, defined as \begin{equation}\label{eqn: definition of alpha} \alpha(t) = \frac{\left(\nabla_R\left|F_1(R,t)\right|\right)_{R_0}-\left(\nabla_R\left|F_2(R,t) \right|\right)_{R_0}}{\left|X(t)\right|}, \end{equation} where $\alpha(t)$ is the slope of the coefficients in the step region, from which the width of the region can be determined. Using the relation $0\leq\left|C_1(R,t)\right|^2\leq 1$, we get \begin{equation} 0\leq \frac{1}{2}+\frac{\alpha(t)}{2}\left(R-R_0\right)\leq 1\quad\mbox{with}\quad\frac{\Delta R}{2}= \left|R-R_0\right|\leq \frac{1}{\alpha(t)}. \end{equation} Therefore, a large $\alpha(t)$ implies a small $\Delta R$, i.e., a steep step. $\alpha(t)$ can be large either because $|X(t)|$ is small, i.e., the cross-over is located in a region of small nuclear density, or because the gradients in the numerator of Eq.~(\ref{eqn: definition of alpha}) have opposite signs at $R_0$ (this is the case depicted in Fig.~\ref{fig: steps analysis}). Outside the region $\Delta R$, one or the other coefficient $|C_l(R,t)|^2$ dominates, thus leading to \begin{equation}\label{eqn: gi tdpes outside step region} \epsilon_{GI}(R,t)=\left\lbrace \begin{array}{cc} \epsilon_{BO}^{(2)}(R), & R< R_0 \\ & \\ \epsilon_{BO}^{(1)}(R), & R> R_0. \end{array} \right.
\end{equation} The GD part of the TDPES can be analyzed similarly: $\epsilon_{GD}(R,t)$ from Eq.~(\ref{eqn: gd tdpes on BO}) may be written, in terms of the two BO states, as \begin{equation} \epsilon_{GD}(R,t) = \left|C_1(R,t)\right|^2 \dot\gamma_1(R,t)+\left|C_2(R,t)\right|^2\dot\gamma_2(R,t) \end{equation} and we recall that $\gamma_l(R,t)$ is the phase of the coefficient $C_l(R,t)$. As in Eq.~(\ref{eqn: gi tdpes outside step region}), outside the step region, this part of the potential becomes \begin{equation}\label{eqn: gd tdpes outside step region} \epsilon_{GD}(R,t)=\left\lbrace \begin{array}{cc} \dot\gamma_2(R,t), & R< R_0 \\ & \\ \dot\gamma_1(R,t), & R> R_0. \end{array} \right. \end{equation} Moreover, Fig.~\ref{fig: steps analysis} shows that in these regions $\dot\gamma_1(R,t)$ and $\dot\gamma_2(R,t)$ are constant functions of $R$. This is a consequence of the gauge we chose. The gauge condition, $A(R,t)=\langle\Phi_R(t)|-i\hbar\nabla_R\Phi_R(t)\rangle_r =0$, in terms of the two BO states involved in the dynamics, reads \begin{align} 0=\sum_{l=1,2}\left|C_l(R,t)\right|^2\nabla_R\gamma_l(R,t)-\frac{i\hbar}{2}\nabla_R\sum_{l=1,2}\left|C_l(R,t) \right|^2 \nonumber \\ -i\hbar\sum_{l,k=1,2}C_l^*(R,t)C_k(R,t)d_{lk}^{(1)}(R). \end{align} However, the second term of the RHS is identically zero, due to the PNC in Eq.~(\ref{eqn: PNC on BO}), and the third term can be neglected, due to the presence of the non-adiabatic couplings, $d_{lk}^{(1)}(R)$, that are small far from the avoided crossing. The gauge condition then states \begin{equation} \left|C_1(R,t)\right|^2\nabla_R\gamma_1(R,t) = -\left|C_2(R,t)\right|^2\nabla_R\gamma_2(R,t), \end{equation} or equivalently \begin{eqnarray} \nabla_R\gamma_2(R,t) = 0 &\quad\mbox{for}\quad R<R_0\quad&\mbox{where}\quad\left|C_1(R,t)\right|^2=0 \\ \nabla_R\gamma_1(R,t) = 0 &\quad\mbox{for}\quad R>R_0\quad&\mbox{where}\quad\left|C_2(R,t)\right|^2=0. 
\end{eqnarray} We obtain $\gamma_l(R,t)=\Gamma_l(t)$, namely the phase of the coefficient $C_l(R,t)$ is only a function of time (constant in space) in the region where the squared modulus of the corresponding coefficient is equal to unity. Similarly, $\dot\gamma_l(R,t)=\dot\Gamma_l(t)$, as shown in Fig.~\ref{fig: steps analysis}. In the step region, around $R_0$, the expression of the TDPES can be approximated as \begin{align} \epsilon(R,t) =& \frac{\epsilon_{BO}^{(1)}(R)+\epsilon_{BO}^{(2)}(R)}{2}+\frac{\dot\gamma_1(R,t)+ \dot\gamma_2(R,t)}{2}\nonumber \\ &+\alpha(t)\left[\frac{\epsilon_{BO}^{(1)}(R)-\epsilon_{BO}^{(2)}(R)}{2}+\frac{\dot\gamma_1(R,t)- \dot\gamma_2(R,t)}{2}\right](R-R_0). \label{eqn: full potential at R0} \end{align} The first two terms on the RHS are the average of the BO energies plus the average value of the time-derivative of the phases $\gamma_1(R,t)$ and $\gamma_2(R,t)$; the terms in square brackets are the energy gaps between the BO surfaces and between the time-derivative of the phases, which give the contribution proportional to the parameter $\alpha(t)$. From Fig.~\ref{fig: steps analysis}, we notice that, around $R_0$, the slope of $\epsilon_{GD}$ is opposite to the slope of $\epsilon_{GI}$ and this is a general feature in the studied system (in the absence of a time-dependent external field). Therefore, the GD part reduces the height of the steps in the GI part. We will see the effect of this contribution on (classical) nuclear dynamics in the section~\ref{sec: dynamics}. \subsection{Steps in the TDPES in weak non-adiabatic regime} In this section, we study a case of weaker non-adiabatic coupling between the two lowest BO states. In order to make the coupling weaker, we choose the parameters in the Hamiltonian~(\ref{eqn: metiu-hamiltonian}) as $L=19.0$~$a_0$, $R_f=3.8$~$a_0$, $R_l=2.0$~$a_0$ and $R_r=5.5$~$a_0$. The BO surfaces, along with the evolution of the populations of the BO states, are shown in Fig.~\ref{fig: BO-data weak}. 
\begin{figure}[h!] \begin{center} \includegraphics{./Figure5.pdf} \end{center} \caption{Same as Fig.~\ref{fig: BO-data} but for weaker non-adiabatic coupling between the two lowest BO states.} \label{fig: BO-data weak} \end{figure} The initial conditions for the dynamical evolution of this system are the same as in the previous example; however, the coupling between the two lowest electronic states is weaker, leading to a reduced population exchange, clearly shown in Fig.~\ref{fig: BO-data weak} (right panel). Nonetheless, the process described here shows similarities to the previous case, as can be seen from Fig.~\ref{fig: snapshots weak}. \begin{figure}[h!] \centering \includegraphics{./Figure6.pdf} \caption{Same as Fig.~\ref{fig: snapshots strong} but for a weaker non-adiabatic coupling between the two lowest BO states, at time-steps $9.68$~$fs$, $27.33$~$fs$ and $32.65$~$fs$.} \label{fig: snapshots weak} \end{figure} The GI part of the TDPES presents again two main features: (i) the diabatization at the avoided crossing, when the nuclear wave-packet crosses the region of relatively strong non-adiabatic coupling and (ii) the steps at the cross-over of $|F_1(R,t)|^2$ and $|F_2(R,t)|^2$, a signature of the splitting of the nuclear density. The GD part is either constant, before the splitting at the avoided crossing, or stepwise constant, with steps appearing in the same region as the steps in the GI term, but with opposite slope. At different snapshots of time, i.e., $9.68$~$fs$, $27.33$~$fs$ and $32.65$~$fs$, these properties are shown in Fig.~\ref{fig: snapshots weak}, along with the nuclear density and its components on the BO states. The notation used in the figures is the same as in Fig.~\ref{fig: snapshots strong}. A slightly different behavior from the situation of strong non-adiabatic coupling can be identified in $\epsilon_{GI}(R,t)$ before the passage through the avoided crossing.
As the nuclear wave-packet approaches the avoided crossing, the GI part of the TDPES ``opens'' towards the direction of motion, resembling the diabatic surface that connects the BO surfaces through the avoided crossing. This is clearly shown in Fig.~\ref{fig: diabatization} (left) at time $t=9.68$~$fs$ for the strongly coupled system. In the case of weaker non-adiabatic coupling, $\epsilon_{GI}(R,t)$, at the avoided crossing, lies between the BO surfaces, as shown in Fig.~\ref{fig: diabatization} (right). \begin{figure}[h!] \begin{center} \includegraphics{./Figure7.pdf} \end{center} \caption{Diabatization feature of $\epsilon_{GI}(R,t)$ (blue dots) for the two model systems (left panel, strong coupling at $t=9.68$~$fs$, and right panel, weak coupling at $t=6.29$~$fs$) presented here. The dashed lines represent the BO surfaces ($\epsilon_{BO}^{(1)}(R)$ red line and $\epsilon_{BO}^{(2)}(R)$ green line) and the continuous black line represents the nuclear density (reduced by a factor 10 and rigidly shifted along the $y$-axis).} \label{fig: diabatization} \end{figure} Therefore, the diabatization feature depends strongly on the strength of the non-adiabatic coupling and, in general, can be viewed as a transient configuration of the GI part of the TDPES before the formation of the steps. \section{Classical dynamics on PESs}\label{sec: dynamics} In Section~\ref{sec: pes}, we have addressed some of the generic features of the TDPES that governs the nuclear dynamics in the presence of non-adiabatic electronic transitions. As discussed before, some of these features, in particular the step that bridges between the two parts of the TDPES that are parallel to the BOPESs, are reminiscent of the jumping between the BOPESs in TSH methods~\cite{TSH}. These algorithms are based on the mixed quantum-classical treatment of the electronic and nuclear dynamics using stochastic jumps between BO surfaces.
Therefore, an ensemble of classical trajectories with different initial conditions is needed to achieve statistically reasonable outcomes. On the other hand, the TDPES is the exact time-dependent potential that governs the nuclear dynamics (in general together with the vector potential) and contains the back-reaction resulting from the exact coupling to the electronic subsystem. This leads us to investigate how the TDPES drives the classical dynamics of point-like nuclei. In order to understand how the generic features of the TDPES affect the classical nuclear dynamics, we have employed the surfaces presented in Section~\ref{sec: pes} to calculate the forces acting on the nuclear degree of freedom. We compare the resulting dynamics using the forces that are calculated from the gradient of the TDPES and from the gradient of its GI part. The classical propagation starts at the initial position $R_c=-4.0$~$a_0$ with zero initial momentum. Here, we use the velocity-Verlet algorithm to integrate Hamilton's equations, \begin{equation}\label{eq: hamilton-eom} \left\lbrace \begin{array}{ccl} \dot R &=& \dfrac{P}{M} \\ && \\ \dot P &=& -\nabla_R\epsilon(R)\,\,\mbox{ or }\,\,-\nabla_R\epsilon_{GI}(R), \end{array} \right. \end{equation} using the same time-step as in the quantum propagation ($\delta t = 2.4\times10^{-3}$~$fs$). In Fig.~\ref{fig: position and velocity} (upper panels) we present the evolution of the classical position compared to the average nuclear position from the quantum calculation, for strong and weak coupling. In both cases, a single trajectory, evolving on the exact surface (blue lines in Fig.~\ref{fig: position and velocity}), is able to reproduce the mean nuclear path (dashed black lines) fairly well. A slight deviation from the quantum results happens only towards the end of the simulated trajectories.
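The velocity-Verlet integration of Eqs.~(\ref{eq: hamilton-eom}) can be sketched as follows. This is a minimal sketch: the harmonic test surface at the bottom is a placeholder used only to exercise the integrator, whereas in the actual calculation the force comes from the numerical gradient of the TDPES (or of its GI part) interpolated at the instantaneous position.

```python
import numpy as np

M = 1836.0
dt = 0.1                 # a.u., the same step as the quantum propagation

def velocity_verlet(R0, P0, force, nsteps):
    """Integrate R' = P/M, P' = force(R, t) with velocity Verlet."""
    R, P, t = R0, P0, 0.0
    traj = [R]
    F = force(R, t)
    for _ in range(nsteps):
        P_half = P + 0.5 * dt * F
        R = R + dt * P_half / M
        t += dt
        F = force(R, t)
        P = P_half + 0.5 * dt * F
        traj.append(R)
    return np.array(traj), P

# Placeholder harmonic surface eps(R) = k R^2 / 2, started like the paper's
# trajectories at R_c = -4.0 a_0 with zero momentum.
k = 0.02
traj, P_end = velocity_verlet(R0=-4.0, P0=0.0,
                              force=lambda R, t: -k * R, nsteps=2000)
```

On this static test surface the integrator conserves the energy $P^2/2M+kR^2/2$ and reproduces the analytic oscillation, which is the standard check before swapping in the tabulated, time-dependent force.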
When the classical forces are calculated from the GI part of the TDPES, the corresponding classical trajectory in the strong coupling case does not show a large deviation from the exact calculation. However, in the weak coupling case, after $20$~$fs$ the classical trajectory deviates considerably from the quantum mean path. This behavior is also confirmed by the pronounced increase of the velocity of the classical particle moving on $\epsilon_{GI}$, shown in Fig.~\ref{fig: position and velocity} (lower panels). \begin{figure}[h!] \begin{center} \includegraphics{./Figure8.pdf} \end{center} \caption{Classical position (upper panels) and velocity (lower panels) and mean nuclear position and velocity as functions of time for the systems in the presence of strong non-adiabatic coupling (left) and of weak non-adiabatic coupling (right). The dashed black line represents the average nuclear values from the quantum calculation, the blue and orange lines are the positions and velocities of the classical particle when it evolves on the exact potential and on the GI part of the potential, respectively.} \label{fig: position and velocity} \end{figure} We now take a closer look at the classical dynamics and try to identify the source of the deviations, especially in the weaker coupling case. Fig.~\ref{fig: evolution} shows the classical positions calculated from the full TDPES (blue dots) and from its GI part (orange dots), together with the corresponding potentials and the exact nuclear densities at the times indicated in the plots. The figure shows that, in the case of weaker coupling, the classical particle evolving on the GI part of the potential feels an intense force at the moment of the step formation, as its position lies exactly in the region of the step (see $t=23.71$~$fs$ in Fig.~\ref{fig: evolution}).
This also happens in the case of strong coupling (see the blue line referring to the velocity in Fig.~\ref{fig: position and velocity}, left plot), although to a lesser extent, and there the velocity of the classical particle does not show a strong peak. The evolution of the classical particle on the GI part, in the case of strong coupling, shows that the step forms in the direction of larger nuclear density (see the plot at $t=22.25$~$fs$); hence, the classical particle correctly follows the step and its position approximately tracks the mean nuclear position. In the case of weaker coupling, however, the step forms in the direction of smaller nuclear density and the classical particle cannot move ``up the hill'' to follow the nuclear mean path, leading to a large deviation of the classical position from the quantum mean value. The intense force felt by the classical particle drives it to an unphysical region, where the nuclear density is very small. The presence of the GD part of the TDPES is responsible for the decrease (or even the inversion) of the ``energy gap'' in the GI part, thus producing a better agreement between classical and quantum results. \begin{figure}[h!] \begin{center} \includegraphics[width=\textwidth]{./Figure9.pdf} \end{center} \caption{Upper panels: strong coupling results. Lower panels: weak coupling results. The figure shows classical positions (dots) at different times, as indicated in the plots, with the corresponding potentials, $\epsilon_{GI}(R,t)$ (orange lines) and $\epsilon(R,t)$ (blue lines). The nuclear density (dashed black line) is plotted as reference, along with the mean position (black arrows).} \label{fig: evolution} \end{figure} Comparing the classical and quantum dynamics shown in Fig.~\ref{fig: evolution}, we observe that in the strong coupling case (upper panels), at $t=4.84$~$fs$ and at $t=11.37$~$fs$, the nuclear wave-packet has not yet crossed the avoided crossing, so the GD part of the TDPES is constant.
Therefore, the classical force calculated from the TDPES is identical to the one calculated from its GI part. At these times, the classical positions of the nuclei evolving on the GI part of the potential (orange dots in the figure) and on the full TDPES (blue dots) coincide with the mean position of the nuclear wave-packet (black arrows). In the weaker coupling case (lower panels), on the other hand, a similar behavior is seen only before the wave-packet splitting, at $t=7.26$~$fs$ and $t=12.09$~$fs$. At later times, namely $t=22.25$~$fs$ for the strong coupling case and $t=23.71$~$fs$ for the weaker coupling case, the steps develop in $\epsilon_{GI}$ and the classical particle evolving on this potential follows the direction in which the step is forming: in the case of strong coupling, this region coincides with the region of larger nuclear density, whereas this is not the case for weaker coupling. As discussed above, this feature explains why the positions of the particles on $\epsilon$ and on $\epsilon_{GI}$, for the system in the presence of strong non-adiabatic coupling, remain close to each other also at later times ($t=29.03$~$fs$ in Fig.~\ref{fig: evolution}), whereas they deviate in the weaker coupling regime, as clearly shown in the figure at time $t=31.45$~$fs$. The results presented in this section offer interesting insights into possible ways of modeling non-adiabatic processes within a mixed quantum-classical treatment. On the one hand, the gradient of the GI part of the exact potential is the force that drives the classical nuclear motion, and we have shown that this force is ``adiabatic'' in the sense that, far from the step, it is produced by a single BOPES. On the other hand, the GD part does not affect this force, but contributes to diminishing the energy separation between the two sides of the step. This energy barrier almost disappears in the full TDPES, although the difference in slopes persists.
If a gauge is chosen such that $\epsilon_{GD}(R,t)\equiv 0$, the non-zero vector potential compensates the effect of the energy step in the GI part of the TDPES by adding a kinetic energy contribution (the vector potential appears in the kinetic term of the nuclear Hamiltonian in Eq.~(\ref{eqn: exact nuclear eqn})). Such a contribution would energetically favor the transfer of classical point-particles from one side of the step to the other. Once again, the comparison with TSH is inevitable: in the latter, different adiabatic surfaces are energetically accessible to the classical nuclei because of the \textsl{stochastic} jumps and the subsequent momentum rescaling (imposed to enforce energy conservation); in the scheme based on the exact TDPES, depending on the gauge, either the GD part of the potential is responsible for bringing different BOPESs ``energetically closer'' or the vector potential provides the necessary kinetic energy contribution. So far, we have described where the steps appear, how they form and how they affect nuclear motion. From these observations, we expect that rigorous mixed quantum-classical schemes for dealing with non-adiabatic processes can be derived in a systematic way from the classical forces associated with the exact TDPES and the exact vector potential. \section{Ehrenfest theorem for the nuclear wave-function}\label{sec: ehrenfest} In section~\ref{sec: dynamics}, we studied the classical nuclear dynamics on the TDPES. However, we did not provide any argument for how that study can be associated with a classical limit of the nuclear motion that is able to, approximately, reproduce the expectation values of the nuclear position and momentum of the complete electron-nuclear system. Here, using the Ehrenfest theorem, we show how the nuclear position and momentum calculated from Eq.~(\ref{eq: hamilton-eom}) can be linked to the expectation values of the nuclear position and momentum of the complete electron-nuclear system.
The Ehrenfest theorem~\cite{ehrenfest} relates the time-derivative of the expectation value of a quantum-mechanical operator $\hat O$ to the expectation value of the commutator of that operator with the Hamiltonian, i.e. \begin{equation} \frac{d}{dt}\langle\hat O(t)\rangle = \frac{1}{i\hbar}\left\langle\left[\hat O(t),\hat H\right]\right\rangle+ \langle\partial_t\hat O(t)\rangle. \end{equation} The second term on the RHS refers to the explicit time-dependence of $\hat O$. In particular, the theorem leads to the classical-like equations of motion for the mean value of position and momentum operators. For a system of electrons and nuclei, described by the Hamiltonian in Eq.~(\ref{eqn: hamiltonian}) and the wave-function $\Psi({\underline{\underline{\bf r}}},{\underline{\underline{\bf R}}},t)$, the mean values of the $\nu$-th nuclear position $\hat{\bf R}_{\nu}$ and momentum $\hat{\bf P}_{\nu}$ operators evolve according to the classical Hamilton's equations \begin{eqnarray} \frac{d}{dt}\langle\hat{\bf R}_{\nu}\rangle_{\Psi}=\frac{1}{i\hbar} \left\langle\left[\hat{\bf R}_\nu, \hat H({\underline{\underline{\bf r}}},{\underline{\underline{\bf R}}})\right]\right\rangle_\Psi&=&\frac{\langle\hat{\bf P}_{\nu}\rangle_{\Psi}}{M_{\nu}} \label{eqn: general ehrenfest 1}\\ \frac{d}{dt}\langle\hat{\bf P}_{\nu}\rangle_{\Psi}=\frac{1}{i\hbar} \left\langle\left[\hat{\bf P}_\nu, \hat H({\underline{\underline{\bf r}}},{\underline{\underline{\bf R}}})\right]\right\rangle_\Psi&=& \langle-\nabla_{\nu}\big(\hat{V}_{en}({\underline{\underline{\bf r}}},{\underline{\underline{\bf R}}})+\hat{W}_{nn}({\underline{\underline{\bf R}}})\big) \rangle_{{\Psi}}. \label{eqn: general ehrenfest 2} \end{eqnarray} Here, the operators do not depend explicitly on time and we indicate the integration over the full wave-function (electronic and nuclear coordinates) by $\langle\,\cdot\,\rangle_{{\Psi}}$. 
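These classical-like relations can be checked numerically on a toy model. The sketch below propagates a one-dimensional Gaussian wave-packet with one split-operator step ($\hbar=1$) and compares finite-difference time-derivatives of $\langle\hat R\rangle$ and $\langle\hat P\rangle$ with the corresponding Ehrenfest right-hand sides. It is a structural analogue for a single quantum particle in an external potential, not the coupled electron-nuclear problem; the harmonic potential, mass and packet parameters are illustrative choices.

```python
import numpy as np

# grid and illustrative parameters (hbar = 1)
M = 100.0
N, L = 1024, 40.0
R = np.linspace(-L / 2, L / 2, N, endpoint=False)
dR = R[1] - R[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dR)

V = 0.5 * R**2   # hypothetical harmonic potential
dVdR = R         # its gradient

# normalized Gaussian packet centered at R = -4 with mean momentum 10
chi = np.exp(-(R + 4.0)**2 + 10.0j * R)
chi /= np.sqrt(np.sum(np.abs(chi)**2) * dR)

def mean_R(c):
    return np.real(np.sum(np.conj(c) * R * c) * dR)

def mean_P(c):
    dc = np.fft.ifft(1j * k * np.fft.fft(c))        # spectral d/dR
    return np.real(np.sum(np.conj(c) * (-1j) * dc) * dR)

# one Strang step exp(-iV dt/2) exp(-iT dt) exp(-iV dt/2)
dt = 1e-2
expV = np.exp(-0.5j * V * dt)
expT = np.exp(-0.5j * k**2 * dt / M)
R0, P0 = mean_R(chi), mean_P(chi)
chi = expV * np.fft.ifft(expT * np.fft.fft(expV * chi))
R1, P1 = mean_R(chi), mean_P(chi)

# Ehrenfest: d<R>/dt = <P>/M and d<P>/dt = <-dV/dR>
lhs_R, rhs_R = (R1 - R0) / dt, 0.5 * (P0 + P1) / M
lhs_P = (P1 - P0) / dt
rhs_P = -np.sum(np.abs(chi)**2 * dVdR) * dR
```

Within the accuracy of the finite-difference time-derivative, both relations are satisfied, illustrating the theorem term by term.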
On the other hand, the nuclear equation~(\ref{eqn: exact nuclear eqn}) is a Schr\"odinger equation that contains a time-dependent vector potential and a time-dependent scalar potential. Therefore, the Ehrenfest theorem for the nuclear subsystem reads \begin{eqnarray} \frac{d}{dt}\langle\hat{\bf R}_{\nu}\rangle_\chi&=&\frac{1}{i\hbar}\left\langle\left[\hat{\bf R}_\nu, \hat H_n({\underline{\underline{\bf R}}})\right]\right\rangle_\chi\label{eqn: ehrenfest 1}\\ \frac{d}{dt}\langle\hat{\widetilde{\bf P}}_{\nu}\rangle_\chi&=&\frac{1}{i\hbar}\left\langle\left[ \hat{\widetilde{\bf P}}_\nu,\hat H_n({\underline{\underline{\bf R}}})\right]\right\rangle_\chi+ \left\langle\partial_t{\bf A}_\nu({\underline{\underline{\bf R}}},t)\right\rangle_\chi \label{eqn: ehrenfest 2} \end{eqnarray} where~\cite{AMG2} \begin{equation} \hat{\widetilde{\bf P}}_{\nu} = -i\hbar\nabla_\nu+{\bf A}_{\nu}({\underline{\underline{\bf R}}},t) \end{equation} is the expression of the nuclear canonical momentum operator in position representation, and \begin{equation} \hat H_n({\underline{\underline{\bf R}}}) = \sum_{\nu=1}^{N_n} \frac{\left[-i\hbar\nabla_\nu+{\bf A}_\nu({\underline{\underline{\bf R}}},t)\right]^2}{2M_\nu} + \epsilon({\underline{\underline{\bf R}}},t) \label{eqn: nuclear-Hamiltonian} \end{equation} is the nuclear Hamiltonian from Eq.~(\ref{eqn: exact nuclear eqn}). Note that the average operation is performed only on the nuclear wave-function, as indicated by $\langle\,\cdot\,\rangle_{\chi}$. An explicit time-dependence appears in the expression of the momentum operator, due to the presence of the vector potential. This dependence is accounted for by the second term on the RHS of Eq.~(\ref{eqn: ehrenfest 2}). While Eq.~(\ref{eqn: ehrenfest 1}) is easily obtained from Eq.~(\ref{eqn: general ehrenfest 1}) by performing the integration over the electronic part of the full wave-function, Eq.~(\ref{eqn: ehrenfest 2}) is more involved and is proved as follows.
We rewrite the LHS of Eq.~(\ref{eqn: general ehrenfest 2}) as \begin{align} \frac{d}{dt}\langle\hat{\bf P}_{\nu}\rangle_{\Psi}=&\int d{\underline{\underline{\bf r}}} d{\underline{\underline{\bf R}}}\, \left[\Phi_{\underline{\underline{\bf R}}}^*({\underline{\underline{\bf r}}},t)\partial_t\chi^*({\underline{\underline{\bf R}}},t)+\chi^*({\underline{\underline{\bf R}}},t)\partial_t\Phi_{\underline{\underline{\bf R}}}^*({\underline{\underline{\bf r}}},t)\right] \hat{\bf P}_\nu\chi({\underline{\underline{\bf R}}},t)\Phi_{\underline{\underline{\bf R}}}({\underline{\underline{\bf r}}},t) \nonumber\\ &+\int d{\underline{\underline{\bf r}}} d{\underline{\underline{\bf R}}}\, \chi^*({\underline{\underline{\bf R}}},t)\Phi_{\underline{\underline{\bf R}}}^*({\underline{\underline{\bf r}}},t)\hat{\bf P}_\nu \left[\Phi_{\underline{\underline{\bf R}}}({\underline{\underline{\bf r}}},t)\partial_t\chi({\underline{\underline{\bf R}}},t)+\chi({\underline{\underline{\bf R}}},t)\partial_t\Phi_{\underline{\underline{\bf R}}}({\underline{\underline{\bf r}}},t)\right]. \end{align} Since $\hat{\bf P}_\nu$ is a differential operator in position representation, its action on the factorized wave-function is \begin{equation} \hat{\bf P}_\nu\chi({\underline{\underline{\bf R}}},t)\Phi_{\underline{\underline{\bf R}}}({\underline{\underline{\bf r}}},t)= \left(\hat{\bf P}_\nu\chi({\underline{\underline{\bf R}}},t)\right)\Phi_{\underline{\underline{\bf R}}}({\underline{\underline{\bf r}}},t)+ \chi({\underline{\underline{\bf R}}},t)\left(\hat{\bf P}_\nu\Phi_{\underline{\underline{\bf R}}}({\underline{\underline{\bf r}}},t)\right).
\end{equation} We then use the nuclear equation~(\ref{eqn: exact nuclear eqn}) in the form \begin{equation} \partial_t\chi({\underline{\underline{\bf R}}},t)=\frac{1}{i\hbar}\hat H_n({\underline{\underline{\bf R}}}) \chi({\underline{\underline{\bf R}}},t) \end{equation} together with its complex conjugate ($\hat H_n({\underline{\underline{\bf R}}})$ is Hermitian), the definition of the (real) vector potential \begin{equation} {\bf A}_\nu({\underline{\underline{\bf R}}},t) = \int d{\underline{\underline{\bf r}}} \,\Phi_{\underline{\underline{\bf R}}}^*({\underline{\underline{\bf r}}},t) \hat{\bf P}_\nu\Phi_{\underline{\underline{\bf R}}}({\underline{\underline{\bf r}}},t) \end{equation} and the PNC, to derive \begin{align} \frac{d}{dt}\langle\hat{\bf P}_{\nu}\rangle_{\Psi}=\frac{1}{i\hbar}\int d{\underline{\underline{\bf R}}}\,\chi^*({\underline{\underline{\bf R}}},t) &\left(\hat{\widetilde{\bf P}}_\nu\hat H_n({\underline{\underline{\bf R}}})-\hat H_n({\underline{\underline{\bf R}}})\hat{\widetilde{\bf P}}_\nu\right)\chi({\underline{\underline{\bf R}}},t) \nonumber \\ +\int d{\underline{\underline{\bf R}}}\left|\chi({\underline{\underline{\bf R}}},t)\right|^2 \int d{\underline{\underline{\bf r}}} &\left[\left(\partial_t\Phi_{\underline{\underline{\bf R}}}^*({\underline{\underline{\bf r}}},t)\right) \hat{\bf P}_\nu\Phi_{\underline{\underline{\bf R}}}({\underline{\underline{\bf r}}},t) + \Phi_{\underline{\underline{\bf R}}}^*({\underline{\underline{\bf r}}},t)\hat{\bf P}_\nu\partial_t\Phi_{\underline{\underline{\bf R}}}({\underline{\underline{\bf r}}},t)\right] \end{align} with $\hat{\widetilde{\bf P}}_\nu=\hat{\bf P}_\nu+{\bf A}_\nu({\underline{\underline{\bf R}}},t)$.
Using the relation \begin{equation} \left(\partial_t\Phi_{\underline{\underline{\bf R}}}^*({\underline{\underline{\bf r}}},t)\right)\hat{\bf P}_\nu\Phi_{\underline{\underline{\bf R}}}({\underline{\underline{\bf r}}},t) = \partial_t\left(\Phi_{\underline{\underline{\bf R}}}^*({\underline{\underline{\bf r}}},t)\hat{\bf P}_\nu\Phi_{\underline{\underline{\bf R}}}({\underline{\underline{\bf r}}},t)\right) - \Phi_{\underline{\underline{\bf R}}}^*({\underline{\underline{\bf r}}},t)\hat{\bf P}_\nu\partial_t\Phi_{\underline{\underline{\bf R}}}({\underline{\underline{\bf r}}},t), \end{equation} for the term in the square brackets, we obtain \begin{align} \frac{d}{dt}\langle\hat{\bf P}_{\nu}\rangle_{\Psi}=\int d{\underline{\underline{\bf R}}}\,\chi^*({\underline{\underline{\bf R}}},t)\left( \frac{1}{i\hbar}\left[\hat{\widetilde{\bf P}}_\nu,\hat H_n({\underline{\underline{\bf R}}})\right]+\partial_t{\bf A}_\nu({\underline{\underline{\bf R}}},t)\right) \chi({\underline{\underline{\bf R}}},t), \end{align} which recovers the RHS of Eq.~(\ref{eqn: ehrenfest 2}).
A similar procedure \cite{AMG2} yields the relation \begin{align} \langle\hat{\bf P}_\nu\rangle_\Psi &= \int d{\underline{\underline{\bf r}}} d{\underline{\underline{\bf R}}} \,\Phi_{\underline{\underline{\bf R}}}^*({\underline{\underline{\bf r}}},t)\chi^*({\underline{\underline{\bf R}}},t) \left[\left(\hat{\bf P}_\nu\chi({\underline{\underline{\bf R}}},t)\right)\Phi_{\underline{\underline{\bf R}}}({\underline{\underline{\bf r}}},t)+\chi({\underline{\underline{\bf R}}},t)\hat{\bf P}_\nu\Phi_{\underline{\underline{\bf R}}}({\underline{\underline{\bf r}}},t)\right] \nonumber \\ &=\int d{\underline{\underline{\bf R}}} \,\chi^*({\underline{\underline{\bf R}}},t)\left[\hat{\bf P}_\nu+{\bf A}_\nu({\underline{\underline{\bf R}}},t)\right]\chi({\underline{\underline{\bf R}}},t)=\langle\hat{\widetilde{\bf P}}_\nu\rangle_\chi, \end{align} which proves the identity of the LHSs of Eqs.~(\ref{eqn: general ehrenfest 2}) and~(\ref{eqn: ehrenfest 2}). We have proved the Ehrenfest theorem for the nuclear wave-function and nuclear Hamiltonian, deriving exact relations for the evolution of the mean values of nuclear position and momentum operators over the complete system. This outcome is consistent with the interpretation of $\chi({\underline{\underline{\bf R}}},t)$ as the proper nuclear wave-function that reproduces the nuclear density and current density of the complete system (see the discussion in section~\ref{sec: background}). 
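The identity $\langle\hat{\bf P}_\nu\rangle_\Psi=\langle\hat{\widetilde{\bf P}}_\nu\rangle_\chi$ can also be verified numerically on a toy factorized wave-function. In the sketch below ($\hbar=1$), the conditional electronic state carries an $R$-dependent phase, so that the vector potential is non-zero; all functional forms (the Gaussians and the phase $\theta(R)=0.3R^2$, for which $A(R)=\theta'(R)$) are illustrative choices, not taken from the model of the paper.

```python
import numpy as np

# grids for the "electronic" (r) and "nuclear" (R) coordinates
Nr, NR = 256, 512
r = np.linspace(-10, 10, Nr)
R = np.linspace(-10, 10, NR)
dr, dR = r[1] - r[0], R[1] - R[0]
rr, RR = np.meshgrid(r, R, indexing="ij")

# conditional state Phi_R(r): normalized Gaussian with phase theta(R)
theta = 0.3 * RR**2
Phi = np.pi**-0.25 * np.exp(-(rr - RR)**2 / 2) * np.exp(1j * theta)

# nuclear wave-function chi(R): <P>_chi = 2, <R>_chi = 1 analytically
chi_1d = np.exp(-(R - 1.0)**2 + 2.0j * R)
chi_1d /= np.sqrt(np.sum(np.abs(chi_1d)**2) * dR)
Psi = Phi * chi_1d[np.newaxis, :]        # factorized full wave-function

# <P>_Psi: integrate Psi* (-i d/dR) Psi over r and R
dPsi = np.gradient(Psi, dR, axis=1)
P_full = np.real(np.sum(np.conj(Psi) * (-1j) * dPsi) * dr * dR)

# A(R) = \int dr Phi* (-i d/dR) Phi   (= theta'(R) = 0.6 R here)
dPhi = np.gradient(Phi, dR, axis=1)
A = np.real(np.sum(np.conj(Phi) * (-1j) * dPhi, axis=0) * dr)

# <P + A>_chi evaluated on the nuclear wave-function alone
dchi = np.gradient(chi_1d, dR)
P_nuc = np.real(np.sum(np.conj(chi_1d) * ((-1j) * dchi + A * chi_1d)) * dR)
```

For this choice the analytic value is $\langle\hat P\rangle_\chi+\langle A\rangle_\chi=2+0.6=2.6$, and the two discretized expectation values agree within the finite-difference accuracy.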
In the one-dimensional system studied here, the gauge is chosen such that $A(R,t)=0$; therefore, the Ehrenfest equations become \begin{eqnarray} \frac{d}{dt}\langle\hat R\rangle_\chi=\frac{1}{i\hbar}\left\langle\left[\hat R,\hat H_n\right]\right\rangle_\chi &=&\frac{\langle\hat P\rangle_\chi}{M}\label{eqn: ehrenfest 1 one-d}\\ \frac{d}{dt}\langle\hat P\rangle_\chi=\frac{1}{i\hbar}\left\langle\left[\hat P,\hat H_n\right]\right\rangle_\chi &=&\langle-\nabla_R\epsilon(R,t)\rangle_\chi, \label{eqn: ehrenfest 2 one-d} \end{eqnarray} where the mean force generating the classical-like evolution is the expectation value, on the nuclear wave-function, of the gradient of the TDPES. If we replace the nuclear wave-function in Eqs.~(\ref{eqn: ehrenfest 1 one-d}) and~(\ref{eqn: ehrenfest 2 one-d}) by a delta-function centered at the classical position, we recover Eqs.~(\ref{eq: hamilton-eom}), which were used in section~\ref{sec: dynamics} to generate the classical dynamics on the exact PES. This explains why the classical nuclear dynamics on the TDPES can approximate the mean nuclear position and momentum. We have also numerically simulated classical dynamics with the following equations of motion \begin{equation} \left\lbrace \begin{array}{ccl} \dot R &=&\dfrac{P}{M}\\ && \\ \dot P&=&\langle-\nabla_R\epsilon(R,t)\rangle_\chi, \end{array} \right. \end{equation} where $\epsilon(R,t)$ is obtained from the solution of the TDSE with Hamiltonian~(\ref{eqn: metiu-hamiltonian}), for both sets of parameters producing strong and weak non-adiabatic coupling between the two lowest BO surfaces. The initial conditions for the classical evolution are exactly the initial mean position and mean velocity of the quantum particle. \begin{figure}[h!] \begin{center} \includegraphics{./Figure10.pdf} \end{center} \caption{Left: nuclear position as a function of time. Right: nuclear velocity as a function of time.
The average position and velocity calculated from the quantum-mechanical (QM) propagation are shown as dotted red (strong coupling) and dotted green (weak coupling) lines. The long-dashed (strong coupling) and short-dashed (weak coupling) black lines are the results of classical propagation driven by the average force (AV) as in Eqs.~(\ref{eqn: ehrenfest 1 one-d}) and~(\ref{eqn: ehrenfest 2 one-d}).} \label{fig: ehrenfest} \end{figure} The results are shown in Fig.~\ref{fig: ehrenfest}, where we plot the mean position (left) and velocity (right) as functions of time from quantum-mechanical calculations, compared to the values of position and velocity of a classical particle moving according to the average force $\langle-\nabla_R\epsilon(R,t)\rangle_\chi$. As expected by the proof of the Ehrenfest theorem involving the nuclear wave-function $\chi({\underline{\underline{\bf R}}},t)$ and the nuclear Hamiltonian $\hat H_n$ presented in this section, the classical trajectory perfectly follows the evolution of the quantum mean values. In section~\ref{sec: dynamics}, we studied the classical nuclear dynamics on the TDPES. However, we did not provide any argument on how that study can be associated with a classical limit of the nuclear motion that is able to, approximately, reproduce the expectation values of the nuclear position and momentum of the complete electron-nuclear system. Here, using the Ehrenfest theorem, we show how the nuclear position and momentum calculated from Eq.~(\ref{eq: hamilton-eom}) can be linked to the expectation values of the nuclear position and momentum of the complete electron-nuclear system. The Ehrenfest theorem~\cite{ehrenfest} relates the time-derivative of the expectation value of a quantum-mechanical operator $\hat O$ to the expectation value of the commutator of that operator with the Hamiltonian, i.e. 
\begin{equation} \frac{d}{dt}\langle\hat O(t)\rangle = \frac{1}{i\hbar}\left\langle\left[\hat O(t),\hat H\right]\right\rangle+ \langle\partial_t\hat O(t)\rangle. \end{equation} The second term on the RHS refers to the explicit time-dependence of $\hat O$. In particular, the theorem leads to the classical-like equations of motion for the mean value of position and momentum operators. For a system of electrons and nuclei, described by the Hamiltonian in Eq.~(\ref{eqn: hamiltonian}) and the wave-function $\Psi({\underline{\underline{\bf r}}},{\underline{\underline{\bf R}}},t)$, the mean values of the $\nu$-th nuclear position $\hat{\bf R}_{\nu}$ and momentum $\hat{\bf P}_{\nu}$ operators evolve according to the classical Hamilton's equations \begin{eqnarray} \frac{d}{dt}\langle\hat{\bf R}_{\nu}\rangle_{\Psi}=\frac{1}{i\hbar} \left\langle\left[\hat{\bf R}_\nu, \hat H({\underline{\underline{\bf r}}},{\underline{\underline{\bf R}}})\right]\right\rangle_\Psi&=&\frac{\langle\hat{\bf P}_{\nu}\rangle_{\Psi}}{M_{\nu}} \label{eqn: general ehrenfest 1}\\ \frac{d}{dt}\langle\hat{\bf P}_{\nu}\rangle_{\Psi}=\frac{1}{i\hbar} \left\langle\left[\hat{\bf P}_\nu, \hat H({\underline{\underline{\bf r}}},{\underline{\underline{\bf R}}})\right]\right\rangle_\Psi&=& \langle-\nabla_{\nu}\big(\hat{V}_{en}({\underline{\underline{\bf r}}},{\underline{\underline{\bf R}}})+\hat{W}_{nn}({\underline{\underline{\bf R}}})\big) \rangle_{{\Psi}}. \label{eqn: general ehrenfest 2} \end{eqnarray} Here, the operators do not depend explicitly on time and we indicate the integration over the full wave-function (electronic and nuclear coordinates) by $\langle\,\cdot\,\rangle_{{\Psi}}$. On the other hand, the nuclear equation~(\ref{eqn: exact nuclear eqn}) is a Schr\"odinger equation that contains a time-dependent vector potential and a time-dependent scalar potential. 
Therefore, the Ehrenfest theorem for the nuclear subsystem reads \begin{eqnarray} \frac{d}{dt}\langle\hat{\bf R}_{\nu}\rangle_\chi&=&\frac{1}{i\hbar}\left\langle\left[\hat{\bf R}_\nu, \hat H_n({\underline{\underline{\bf R}}})\right]\right\rangle_\chi\label{eqn: ehrenfest 1}\\ \frac{d}{dt}\langle\hat{\widetilde{\bf P}}_{\nu}\rangle_\chi&=&\frac{1}{i\hbar}\left\langle\left[ \hat{\widetilde{\bf P}}_\nu,\hat H_n({\underline{\underline{\bf R}}})\right]\right\rangle_\chi+ \left\langle\partial_t{\bf A}_\nu({\underline{\underline{\bf R}}},t)\right\rangle_\chi \label{eqn: ehrenfest 2} \end{eqnarray} where~\cite{AMG2} \begin{equation} \hat{\widetilde{\bf P}}_{\nu} = -i\hbar\nabla_\nu+{\bf A}_{\nu}({\underline{\underline{\bf R}}},t) \end{equation} is the expression of the nuclear canonical momentum operator in position representation, and \begin{equation} \hat H_n({\underline{\underline{\bf R}}}) = \sum_{\nu=1}^{N_n} \frac{\left[-i\hbar\nabla_\nu+{\bf A}_\nu({\underline{\underline{\bf R}}},t)\right]^2}{2M_\nu} + \epsilon({\underline{\underline{\bf R}}},t) \label{eqn: nuclear-Hamiltonian} \end{equation} is the nuclear Hamiltonian from Eq.~(\ref{eqn: exact nuclear eqn}). Note that the average operation is performed only on the nuclear wave-function as indicated by $\langle\,\cdot\,\rangle_{\chi}$. An explicit time-dependence appears in the expression of the momentum operator, due to the presence of the vector potential. This dependence is accounted for in the second term on the RHS of Eq.~(\ref{eqn: ehrenfest 2}). While Eq.~(\ref{eqn: ehrenfest 1}) is easily obtained from Eq.~(\ref{eqn: general ehrenfest 1}) by performing the integration over the electronic part of full wave-function, Eq.~(\ref{eqn: ehrenfest 2}) is more involved and will be proved as follows. 
We rewrite LHS of Eq.~(\ref{eqn: general ehrenfest 2}) as \begin{align} \frac{d}{dt}\langle\hat{\bf P}_{\nu}\rangle_{\Psi}=&\int d{\underline{\underline{\bf r}}} d{\underline{\underline{\bf R}}}\, \left[\Phi_{\underline{\underline{\bf R}}}^*({\underline{\underline{\bf r}}},t)\partial_t\chi^*({\underline{\underline{\bf R}}},t)+\chi^*({\underline{\underline{\bf R}}},t)\partial_t\Phi_{\underline{\underline{\bf R}}}^*({\underline{\underline{\bf r}}},t)\right] \hat{\bf P}_\nu\chi({\underline{\underline{\bf R}}},t)\Phi_{\underline{\underline{\bf R}}}({\underline{\underline{\bf r}}},t) \nonumber\\ &+\int d{\underline{\underline{\bf r}}} d{\underline{\underline{\bf R}}}\, \chi^*({\underline{\underline{\bf R}}},t)\Phi_{\underline{\underline{\bf R}}}^*({\underline{\underline{\bf r}}},t)\hat{\bf P}_\nu \left[\Phi_{\underline{\underline{\bf R}}}({\underline{\underline{\bf r}}},t)\partial_t\chi({\underline{\underline{\bf R}}},t)+\chi({\underline{\underline{\bf R}}},t)\partial_t\Phi_{\underline{\underline{\bf R}}}({\underline{\underline{\bf r}}},t)\right]. \end{align} $\hat{\bf P}_\nu$ being a differential operator in position representation, its action on the factorized wave-function is \begin{equation} \hat{\bf P}_\nu\chi({\underline{\underline{\bf R}}},t)\Phi_{\underline{\underline{\bf R}}}({\underline{\underline{\bf r}}},t)= \left(\hat{\bf P}_\nu\chi({\underline{\underline{\bf R}}},t)\right)\Phi_{\underline{\underline{\bf R}}}({\underline{\underline{\bf r}}},t)+ \chi({\underline{\underline{\bf R}}},t)\left(\hat{\bf P}_\nu\Phi_{\underline{\underline{\bf R}}}({\underline{\underline{\bf r}}},t)\right). 
\end{equation} Then we use the nuclear equation~(\ref{eqn: exact nuclear eqn}) for \begin{equation} \partial_t\chi({\underline{\underline{\bf R}}},t)=\frac{1}{i\hbar}\hat H_n({\underline{\underline{\bf R}}}) \chi({\underline{\underline{\bf R}}},t) \end{equation} and its complex-conjugated ($\hat H_n({\underline{\underline{\bf R}}})$ is hermitian), the definition of the (real) vector potential \begin{equation} {\bf A}_\nu({\underline{\underline{\bf R}}},t) = \int d{\underline{\underline{\bf r}}} \,\Phi_{\underline{\underline{\bf R}}}^*({\underline{\underline{\bf r}}},t) \hat{\bf P}_\nu\Phi_{\underline{\underline{\bf R}}}({\underline{\underline{\bf r}}},t) \end{equation} and the PNC, to derive \begin{align} \frac{d}{dt}\langle\hat{\bf P}_{\nu}\rangle_{\Psi}=\frac{1}{i\hbar}\int d{\underline{\underline{\bf R}}}\,\chi^*({\underline{\underline{\bf R}}},t) &\left(\hat{\widetilde{\bf P}}_\nu\hat H_n({\underline{\underline{\bf R}}})-\hat H_n({\underline{\underline{\bf R}}})\hat{\widetilde{\bf P}}_\nu\right)\chi({\underline{\underline{\bf R}}},t) \nonumber \\ +\int d{\underline{\underline{\bf R}}}\left|\chi({\underline{\underline{\bf R}}},t)\right|^2 \int d{\underline{\underline{\bf r}}} &\left[\left(\partial_t\Phi_{\underline{\underline{\bf R}}}^*({\underline{\underline{\bf r}}},t)\right) \hat{\bf P}_\nu\Phi_{\underline{\underline{\bf R}}}({\underline{\underline{\bf r}}},t) + \Phi_{\underline{\underline{\bf R}}}^*({\underline{\underline{\bf r}}},t)\hat{\bf P}_\nu\partial_t\Phi_{\underline{\underline{\bf R}}}({\underline{\underline{\bf r}}},t)\right] \end{align} with $\hat{\widetilde{\bf P}}_\nu=\hat{\bf P}_\nu+{\bf A}_\nu({\underline{\underline{\bf R}}},t)$. 
Using the relation \begin{equation} \left(\partial_t\Phi_{\underline{\underline{\bf R}}}^*({\underline{\underline{\bf r}}},t)\right)\hat{\bf P}_\nu\Phi_{\underline{\underline{\bf R}}}({\underline{\underline{\bf r}}},t) = \partial_t\left(\Phi_{\underline{\underline{\bf R}}}^*({\underline{\underline{\bf r}}},t)\hat{\bf P}_\nu\Phi_{\underline{\underline{\bf R}}}({\underline{\underline{\bf r}}},t)\right) - \Phi_{\underline{\underline{\bf R}}}^*({\underline{\underline{\bf r}}},t)\hat{\bf P}_\nu\partial_t\Phi_{\underline{\underline{\bf R}}}({\underline{\underline{\bf r}}},t), \end{equation} for the term in the square brackets, leads to \begin{align} \frac{d}{dt}\langle\hat{\bf P}_{\nu}\rangle_{\Psi}=\int d{\underline{\underline{\bf R}}}\,\chi^*({\underline{\underline{\bf R}}},t)\left( \frac{1}{i\hbar}\left[\hat{\widetilde{\bf P}}_\nu,\hat H_n({\underline{\underline{\bf R}}})\right]+\partial_t{\bf A}_\nu({\underline{\underline{\bf R}}},t)\right) \chi({\underline{\underline{\bf R}}},t), \end{align} recovering the term on the RHS of Eq.~(\ref{eqn: ehrenfest 2}). 
A similar procedure \cite{AMG2} yields the relation \begin{align} \langle\hat{\bf P}_\nu\rangle_\Psi &= \int d{\underline{\underline{\bf r}}} d{\underline{\underline{\bf R}}} \,\Phi_{\underline{\underline{\bf R}}}^*({\underline{\underline{\bf r}}},t)\chi^*({\underline{\underline{\bf R}}},t) \left[\left(\hat{\bf P}_\nu\chi({\underline{\underline{\bf R}}},t)\right)\Phi_{\underline{\underline{\bf R}}}({\underline{\underline{\bf r}}},t)+\chi({\underline{\underline{\bf R}}},t)\hat{\bf P}_\nu\Phi_{\underline{\underline{\bf R}}}({\underline{\underline{\bf r}}},t)\right] \nonumber \\ &=\int d{\underline{\underline{\bf R}}} \,\chi^*({\underline{\underline{\bf R}}},t)\left[\hat{\bf P}_\nu+{\bf A}_\nu({\underline{\underline{\bf R}}},t)\right]\chi({\underline{\underline{\bf R}}},t)=\langle\hat{\widetilde{\bf P}}_\nu\rangle_\chi, \end{align} which proves the identity of the LHSs of Eqs.~(\ref{eqn: general ehrenfest 2}) and~(\ref{eqn: ehrenfest 2}). We have proved the Ehrenfest theorem for the nuclear wave-function and nuclear Hamiltonian, deriving exact relations for the evolution of the mean values of nuclear position and momentum operators over the complete system. This outcome is consistent with the interpretation of $\chi({\underline{\underline{\bf R}}},t)$ as the proper nuclear wave-function that reproduces the nuclear density and current density of the complete system (see the discussion in section~\ref{sec: background}). 
In the one-dimensional system studied here, the gauge is chosen such that $A(R,t)=0$, therefore, the Ehrenfest equations become \begin{eqnarray} \frac{d}{dt}\langle\hat R\rangle_\chi=\frac{1}{i\hbar}\left\langle\left[\hat R,\hat H_n\right]\right\rangle_\chi &=&\frac{\langle\hat P\rangle_\chi}{M}\label{eqn: ehrenfest 1 one-d}\\ \frac{d}{dt}\langle\hat P\rangle_\chi=\frac{1}{i\hbar}\left\langle\left[\hat P,\hat H_n\right]\right\rangle_\chi &=&\langle-\nabla_R\epsilon(R,t)\rangle_\chi, \label{eqn: ehrenfest 2 one-d} \end{eqnarray} where the mean force generating the classical-like evolution is determined as the expectation value, on the nuclear wave-function, of the gradient of the TDPES. If we replace the nuclear wave-function in Eqs.~(\ref{eqn: ehrenfest 1 one-d}) and~(\ref{eqn: ehrenfest 2 one-d}) by a delta-function centered at the classical position, we get Eqs.~(\ref{eq: hamilton-eom}) that was used in section~\ref{sec: dynamics} to generate classical dynamics on the exact PES. That is why the classical nuclear dynamics on the TDPES could actually approximate the mean nuclear position and momentum. We have numerically simulated classical dynamics under the following equations of motion \begin{equation} \left\lbrace \begin{array}{ccl} \dot R &=&\dfrac{P}{M}\\ && \\ \dot P&=&\langle-\nabla_R\epsilon(R,t)\rangle_\chi, \end{array} \right. \end{equation} where $\epsilon(R,t)$ is obtained from the solution of the TDSE with Hamiltonian~(\ref{eqn: metiu-hamiltonian}), for both sets of parameters producing strong and weak non-adiabatic coupling between the two lowest BO surfaces. The initial conditions for the classical evolution are exactly the initial mean position and mean velocity of the quantum particle. \begin{figure}[h!] \begin{center} \includegraphics{./Figure10.pdf} \end{center} \caption{Left: nuclear position as a function of time. Right: nuclear velocity as a function of time. 
The average position and velocity calculated from the quantum-mechanical (QM) propagation are shown as dotted red (strong coupling) and dotted green (weak coupling) lines. The long-dashed (strong coupling) and short-dashed (weak coupling) black lines are the results of classical propagation driven by the average force (AV) as in Eqs.~(\ref{eqn: ehrenfest 1 one-d}) and~(\ref{eqn: ehrenfest 2 one-d}).} \label{fig: ehrenfest} \end{figure} The results are shown in Fig.~\ref{fig: ehrenfest}, where we plot the mean position (left) and velocity (right) as functions of time from quantum-mechanical calculations, compared to the values of position and velocity of a classical particle moving according to the average force $\langle-\nabla_R\epsilon(R,t)\rangle_\chi$. As expected from the proof of the Ehrenfest theorem involving the nuclear wave-function $\chi({\underline{\underline{\bf R}}},t)$ and the nuclear Hamiltonian $\hat H_n$ presented in this section, the classical trajectory perfectly follows the evolution of the quantum mean values. \section{Conclusion}\label{sec: conclusion} In a system of interacting electrons and nuclei, the nuclear dynamics is fully determined by the TDPES and the time-dependent vector potential defined in the framework of the exact decomposition of the electronic and nuclear motions, as presented in this paper. We investigated some situations in which the vector potential can be gauged away, thus making the TDPES responsible for the nuclear evolution. This time-dependent scalar potential presents distinct and general features that can be analyzed in terms of its GI and GD components.
The former (i) has, in the region of an avoided crossing, a pronounced \textsl{diabatic} character, smoothly connecting different BOPESs along the direction of the nuclear wave-packet's motion, and (ii) develops, further away from the avoided crossing, \textsl{dynamical steps} between regions in which the (GI part of the) exact potential coincides with one or the other BOPES. The latter is either constant, if the nuclear wave-packet does not split, or stepwise constant, with the step at the same position, and with opposite slope, as in the GI part of the TDPES. We have analyzed these features in detail and discussed the connections with a classical picture of the nuclear evolution. To this end, we calculated the classical forces from the TDPES and from its GI component and performed classical nuclear dynamics driven by those forces. The importance of the GD part of the potential is evident, as it improves the agreement of the classical results with the quantum-mechanical calculations. We conclude that, if the exact TDPES is available, a single classical trajectory is able to reproduce quantum results fairly well, as long as quantum nuclear effects, such as tunneling or splitting of the nuclear wave-packet, are negligible. We have seen, in the example presented in the paper, that the splitting of the nuclear wave-function at the avoided crossing, which cannot be captured in the classical study, is responsible for the deviation of the classical results from the expected quantum behavior. Further analysis involving the propagation of multiple independent trajectories on the exact TDPES is envisaged. Such a multi-trajectory approach should be able to reproduce non-adiabatic effects, such as those described above. The development of mixed quantum-classical schemes to treat the non-adiabatic coupled electron-nuclear dynamics is still a challenging topic in physics and chemistry.
Investigating the properties of the exact potential, that incorporates the effects of the electronic quantum dynamics on the nuclei, is a first step towards understanding the key features of approximated potentials and algorithms. We did not consider here cases where the vector potential cannot be gauged away. This will be the subject of future investigations. In the final part of the paper, we have shown that the Ehrenfest theorem applied to calculate the mean nuclear position and momentum based on the nuclear equation alone reproduces the mean values calculated from the complete electron-nuclear system. \section*{Acknowledgements} Partial support from the Deutsche Forschungsgemeinschaft (SFB 762) and from the European Commission (FP7-NMP-CRONOS) is gratefully acknowledged. \addcontentsline{toc}{section}{References}
\section{Introduction}\label{intro} \noindent Dynamical systems with invariant axes and (hyper) planes arise in various classes of models, such as in disease models (e.g. the malaria models reviewed by \citet{Teboh2013}), mass action kinetics (going all the way back to \citet{Lotka1920}) and plasma physics (see, e.g., \citet{Bian2003}). The invariance reflects an elementary property of the model. For instance, if no infected individuals are present, the disease cannot spread; a certain chemical complex can be recycled but not created in a reaction chain; and turbulent fluctuations grow through self-interaction. A consequence of this special structure is that transcritical bifurcations can occur when varying a single parameter, just like saddle-node bifurcations in generic dynamical systems. In the context of population dynamics, the transcritical bifurcation is sometimes called {\em invasion}, as it marks the onset of a ``positive'' equilibrium, i.e. an equilibrium with positive values of the unknowns, which models co-existence of species\footnote{In this paper, a positive (negative, non-negative) solution (e.g. equilibrium or periodic) is a solution for which all variables remain positive (negative, non-negative) for all time.}. In the other contexts mentioned above, this would correspond to co-existence of infected and healthy individuals, multiple chemical complexes or background shear flow and turbulent fluctuations. Following the general philosophy of bifurcation analysis, it is natural to ask what interactions the transcritical bifurcation can have with other singularities. Such interaction points can act as ``organising centres'', tying together various codimension one bifurcations and organising the qualitative dynamics for ranges of parameter values. The simplest possibility is to have a zero eigenvalue along with a vanishing normal form coefficient.
At such a codimension two point, curves of transcritical bifurcations and saddle-node bifurcations are tangent. This case can be realised in a model with a single degree of freedom and was investigated in detail by \citet{Saputra2010} and \citet{Saputra2015}. When one only considers non-negative equilibria, the unfolding looks similar to that of the generalised Hopf bifurcation, which explains why this singularity is sometimes called the generalised transcritical bifurcation. This is demonstrated in Fig. \ref{single_zero}. Here, we will refer to it as the single zero SNTC interaction. \begin{figure}[t] \begin{center} \includegraphics[width=250pt]{Single_zero_SNTC.eps} \end{center} \caption{Unfolding of the single zero SNTC bifurcation including only the non-negative equilibria. In the bifurcation diagram, the dashed line represents the transcritical bifurcation and the solid line represents the saddle-node bifurcation. In the phase portraits, the solid horizontal line represents an invariant axis. Due to the similarity to the unfolding of the generalised Hopf bifurcation, this singularity has been labeled ``generalised transcritical bifurcation''. In the complete diagram, found in \citet{Saputra2010}, the saddle-node bifurcation does not terminate. Note that this singularity can occur in a system with a single degree of freedom. We have drawn two-dimensional phase portraits for easy comparison to the double zero case in Figs. \ref{unfolding1}-\ref{unfolding3}.} \label{single_zero} \end{figure} A more complicated interaction requires at least two degrees of freedom and involves a zero eigenvalue with algebraic multiplicity two and geometric multiplicity one. The corresponding eigenvector lies in the invariant plane, while the generalised eigenvector is transversal to it.
Normal forms and unfoldings for the double zero SNTC interaction were presented by \citet{Saputra2010} and \citet{Saputra2015} and involve, in addition to the saddle-node and transcritical bifurcations, a Hopf bifurcation and a curve along which a periodic orbit disappears. The latter can be either a heteroclinic bifurcation or a homoclinic to a saddle-node. When we omit the negative equilibria, we obtain one of the three unfoldings shown in Figs. \ref{unfolding1}-\ref{unfolding3}. \begin{figure}[t] \begin{center} \includegraphics[width=300pt]{Double_zero_SNTC_1.eps} \end{center} \caption{Unfolding of the SNTC interaction with a double zero eigenvalue including only the non-negative equilibria. Shown is the saddle case with a positive periodic orbit. The bifurcations in the diagrams have been labelled SN for saddle-node, TC for transcritical, HB for Hopf and Het for heteroclinic. In the phase portraits, the solid horizontal line represents an invariant axis. } \label{unfolding1} \end{figure} The third possibility, which can occur in systems of dimension three or more, involves a single zero and a pair of purely imaginary eigenvalues. This transcritical-Hopf case was investigated in some detail in the literature, starting from \citet{Langford1979}, who presented several one-parameter unfoldings near the singularity. Later, \citet{Jiang2010} and \citet{Saputra2015} presented four distinct two-parameter unfoldings. In the vicinity of this point, one can find periodic orbits inside and outside the invariant plane, as well as quasi-periodic motion. Out of these three cases, the first and the last have been reported on in model studies. The first requires only a single boundary equilibrium and two positive (or negative) equilibria. It was found to play a role, for instance, in a model of smoking as an epidemic by \citet{Voorn2013} and one of dispersal patterns for co-existing species by \citet{Mohd2018}. 
In the latter case, the singularity is presented as a ``triple point'' which appears on the mutual boundary of regions of extinction, co-existence and bi-stability between these modes. It is also present in the model for plasma physics mentioned before and, in fact, the single-degree-of-freedom model formulated by \citet{Bian2003} is identical to the normal form proposed by \citet{Saputra2010}. To mention but a few examples of the analysis of the transcritical-Hopf bifurcation: \citet{Langford1979} found it in a model for fluid motion due to Hopf, \citet{Doedel1984} in a predator-prey-nutrient model and \citet{Gimmelli2015} in an ecoepidemic model. It was also identified in the stressed predator-prey-nutrient model under consideration here. \citet{Kooi2008} showed that, around this point, stable equilibrium and periodic solutions exist with zero or positive predator density. From the layout of the Hopf and transcritical bifurcations, we can tell that the unfolding corresponds to case III of \citet{Saputra2015}. To the best of our knowledge, the SNTC case with two zero eigenvalues has never been identified in an actual application. One may speculate that this is in part due to the fact that it is neither as easy to analyse as the case with a single zero eigenvalue nor as much a part of the canon of singularity theory as the case with complex conjugate eigenvalues, having been analysed relatively recently. We show that a change in a single model parameter, related to the growth rate of predators, introduces this point as an organising centre in the model by \citet{Kooi2008}. In its vicinity, there exists a limit cycle of arbitrarily large period that models switching between two nutrient-prey equilibria, mediated by the predator population.
In addition to presenting the first analysis of the double zero SNTC interaction in an application, a secondary goal of this paper is to explain why the widely used bifurcation analysis software packages MatCont \cite{Dhooge2004} and AUTO \cite{AUTO} erroneously classify the single-zero and double-zero SNTC cases as cusp and Bogdanov-Takens (BT) points, respectively. We suggest a test function that takes the special structure of the model into account and uniquely identifies the SNTC interactions. \begin{figure}[t] \begin{center} \includegraphics[width=300pt]{Double_zero_SNTC_2.eps} \end{center} \caption{Unfolding of the SNTC interaction with a double zero eigenvalue including only the non-negative equilibria. Shown is the elliptic case with a positive periodic orbit. The bifurcations in the diagrams have been labelled SN for saddle-node, TC for transcritical, HB for Hopf and SNH for a homoclinic to a saddle-node. In the phase portraits, the solid horizontal line represents an invariant axis. } \label{unfolding2} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[width=300pt]{Double_zero_SNTC_3.eps} \end{center} \caption{Unfolding of the SNTC bifurcation with a double zero eigenvalue including only the non-negative equilibria. Shown is the saddle or elliptic case with a negative periodic orbit. The bifurcations in the diagrams have been labelled SN for saddle-node and TC for transcritical. In the phase portraits, the solid horizontal line represents an invariant axis. } \label{unfolding3} \end{figure} \section{The stressed model}\label{model} We consider the stressed model introduced by \citet{Kooi2008}. Here, ``stressed'' refers to the presence of a toxicant in the water flowing into a basin that contains a nutrient as well as a prey and a predator population. The basin is considered to be well-mixed and to hold a constant volume of water.
The densities of the nutrient, prey, predator and toxicant then satisfy \begin{align} \frac{dN}{dt}&=(N_r-N)D-I_{NR}\frac{N}{\kappa_{NR}+N}R, \label{dNdt}\\ \frac{dR}{dt}&=\bigg(\mu_{NR}\frac{N}{\kappa_{NR}+N}-(D+m_R(c_R))\bigg)R-I_{RP}\frac{R}{\kappa_{RP}+R}P, \label{dRdt}\\ \frac{dP}{dt}&=\bigg(\mu_{RP}\frac{R}{\kappa_{RP}+R}-(D+m_{P0})\bigg)P,\label{dPdt}\\ \frac{dc_{T}}{dt}&=(c_{r}-c_{T})D, \label{dcTdt} \end{align} respectively. The nutrient density tends to relax to the inflow value, $N_r$, which is the first control parameter, at the flow rate, $D$, which is the second control parameter. It is also consumed by the prey which, in turn, is consumed by the predator. The consumption of the nutrient by the prey and of the prey by the predator is modelled by a Holling type II functional response. Both the prey and the predator are drained from the basin at the flow rate $D$, and die at a rate that is fixed for the predator but depends on the toxicant concentration for the prey. The total toxicant can be split up into the portion ingested by the prey, $c_R$, and the portion in the ambient water, $c_W$, according to \begin{equation} c_{T}=c_{W}+c_R R. \label{cTcWcR} \end{equation} On time scales relevant to this model, the proportion of toxicant carried by the prey is fixed by \begin{equation} c_R=BCF_{WR} \,c_W. \label{cRcW} \end{equation} Finally, the maintenance rate constant of the prey is given by \begin{equation} m_{R}(c_{R})=m_{R0}\bigg(1+\frac{\max\{c_{R}-c_{RM0},0\}}{c_{RM}} \bigg). \label{mR} \end{equation} Together, Eqs. (\ref{dNdt}--\ref{mR}) form a closed system.
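As an illustration of the structure of this closed system, Eqs.~(\ref{dNdt})--(\ref{mR}) can be transcribed directly into code. The Python sketch below is not the numerical setup of the paper, only a hedged transcription using the parameter values of Table~\ref{pars} (the $\mu_{RP}=0.7$, $c_r=9$ variant); it also makes explicit that the prey- and predator-free states are invariant:

```python
import numpy as np

# Illustrative parameter values (the mu_RP = 0.7, c_r = 9 variant).
par = dict(mu_NR=0.5, I_NR=1.25, k_NR=8.0, m_R0=0.025,
           c_RM0=0.1, c_RM=0.5, BCF_WR=1.0,
           mu_RP=0.7, I_RP=0.333, k_RP=9.0, m_P0=0.01, c_r=9.0)

def m_R(c_R, p):
    """Toxicant-dependent maintenance rate constant of the prey."""
    return p['m_R0'] * (1.0 + max(c_R - p['c_RM0'], 0.0) / p['c_RM'])

def rhs(state, N_r, D, p):
    """Right-hand side of the closed nutrient-prey-predator-toxicant
    system; c_T = c_W + c_R R and c_R = BCF_WR c_W determine the
    partitioning of the toxicant between prey and ambient water."""
    N, R, P, c_T = state
    c_W = c_T / (1.0 + p['BCF_WR'] * R)
    c_R = p['BCF_WR'] * c_W
    f_NR = N / (p['k_NR'] + N)          # Holling type II responses
    f_RP = R / (p['k_RP'] + R)
    dN = (N_r - N) * D - p['I_NR'] * f_NR * R
    dR = (p['mu_NR'] * f_NR - (D + m_R(c_R, p))) * R - p['I_RP'] * f_RP * P
    dP = (p['mu_RP'] * f_RP - (D + p['m_P0'])) * P
    dc_T = (p['c_r'] - c_T) * D
    return np.array([dN, dR, dP, dc_T])

# The wash-out state (N_r, 0, 0, c_r) is an equilibrium for any N_r, D.
washout = rhs(np.array([24.6, 0.0, 0.0, 9.0]), N_r=24.6, D=0.142, p=par)
```

Feeding `rhs` to any standard ODE integrator reproduces the transient dynamics; the invariance of the $R=P=0$ and $P=0$ subsystems discussed below is visible directly in the factors $R$ and $P$ multiplying the respective equations.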
However, since we are interested in equilibria, limit cycles and connecting orbits rather than transient motion, we will assume that $c_T$ has attained its equilibrium value $c_r$ and eliminate the toxicant concentrations to find \begin{align} \frac{dN}{dt}&=(N_r-N)D-I_{NR}\frac{N}{\kappa_{NR}+N}R, \label{ODE1} \\ \frac{dR}{dt}&=\bigg(\mu_{NR}\frac{N}{\kappa_{NR}+N}-(D+m_R(R))\bigg)R-I_{RP}\frac{R}{\kappa_{RP}+R}P, \label{ODE2}\\ \frac{dP}{dt}&=\bigg(\mu_{RP}\frac{R}{\kappa_{RP}+R}-(D+m_{P0})\bigg)P, \label{ODE3} \end{align} where \begin{equation} \label{mRR} m_R(R)=m_{R0}+\frac{m_{R0}}{c_{RM}}\max\left\{\frac{BCF_{WR} c_r}{1+BCF_{WR} R}-c_{RM0},0\right\}. \end{equation} Table \ref{pars} lists the parameters of the model with their interpretation, value and dimension. \begin{table}\label{pars} \tbl{Parameter set for a stressed bacterium-ciliate model. All values are taken from \citet{Kooi2008} except for $\mu_{RP}$, which is changed to locate the double-zero SNTC interaction. The units are given in terms of time, $t$, the mass of toxicant, $m$, the volume of the basin, $\nu$, and the (bio)volume of the nutrient, predator and prey, $V$.} {\begin{tabular}{|llll|}\hline $\mu_{NR}$ & Max growth rate & $t^{-1}$ & $0.5$ $h^{-1}$\\ $I_{NR}$ & Max ingestion rate & $t^{-1}$ & $1.25$ $h^{-1}$\\ $\kappa_{NR}$ & Saturation constant & $V/\nu$ & $8.0$ $mg/dm^{3}$\\ $m_{R0}$ & Maintenance rate coefficient & $t^{-1}$ & $0.025$ $h^{-1}$\\ $c_{RM0}$ & No effect concentration & $m/V$ & $0.1$ $\mu g/mg$\\ $c_{RM}$ & Tolerance concentration & $m/V$ & $0.5$ $\mu g/mg$\\ $BCF_{WR}$ & Bioconcentration factor & $\nu /V$ & $1.0$ $dm^{3}/mg$ \\ $\mu_{RP}$ & Max growth rate & $t^{-1}$ & $0.2$ or $0.7$ $h^{-1}$\\ $I_{RP}$ & Max ingestion rate & $t^{-1}$ & $0.333$ $h^{-1}$\\ $\kappa_{RP}$ & Saturation constant & $V/\nu$ & $9.0$ $mg/dm^{3}$\\ $m_{P0}$ & Maintenance rate coefficient & $t^{-1}$ & $0.01$ $h^{-1}$\\ $c_r$ & Toxicant concentration at inflow & $m/\nu$ & $1$ or $9$ $\mu g/dm^3$\\ \hline
\end{tabular}} \end{table} Two invariant subsystems exist in this model. In the invariant subsystem $R=P=0$ all solutions tend towards the ``wash-out'' equilibrium $(N_r, 0, 0)$. The invariant predator-free subsystem $P=0$ can support two equilibria in addition to the wash-out solution, as well as limit cycles and connecting orbits. The full system can support equilibria, cycles and connecting orbits with co-existing predator and prey populations. \section{The single zero SNTC interaction} The first organising centre identified by \citet{Kooi2008} is a single zero SNTC interaction in the nutrient-prey system at $c_r=1$ and $\mu_{RP}=0.2$. In their Fig. 2, it is labeled ``N'' and the unfolding is that shown in our Fig. \ref{single_zero}. The three regions around the codimension two point are characterised by wash-out, a stable positive prey population and co-existence of these stable solutions. \subsection{Test functions for the single zero SNTC interaction} When using MatCont \cite{Dhooge2004} to construct the bifurcation diagram, the codimension two point is classified as a cusp point on the saddle-node curve. Here, we verify that the test function used by MatCont has an isolated zero at the SNTC point and we suggest a test function to distinguish the two cases. Suppose we are tracing a saddle-node bifurcation in two parameters, $\lambda_1$ and $\lambda_2$, in a system with $n$ variables. Along the curve we have \begin{equation} f(x,\lambda_1,\lambda_2)=0,\ D_x f(x, \lambda_1,\lambda_2)q=0, \label{defSN} \end{equation} where $q$ is the right zero eigenvector.
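Before describing how this is done in practice with bordered systems, note that in the scalar case $n=1$ (where one may fix $q=1$) the extended system (\ref{defSN}) can be solved by a plain Newton iteration in $(x,\lambda_1)$ at fixed $\lambda_2$. The Python sketch below is illustrative only; it uses the cusp unfolding $\dot{x}=a+bx+x^3$, discussed later in this section, as a test problem:

```python
import numpy as np

def scalar_saddle_node(f, f_x, f_a, x0, a0, tol=1e-12, maxit=50):
    """Newton iteration on the scalar version of the extended system
    f(x, a) = 0, f_x(x, a) = 0, locating a saddle-node point (x, a)
    at a fixed value of the second parameter."""
    x, a = float(x0), float(a0)
    h = 1e-6                      # step for second-derivative differences
    for _ in range(maxit):
        F = np.array([f(x, a), f_x(x, a)])
        if np.max(np.abs(F)) < tol:
            break
        # Jacobian of the extended system w.r.t. (x, a); f_xx and f_xa
        # are approximated by central differences.
        f_xx = (f_x(x + h, a) - f_x(x - h, a)) / (2 * h)
        f_xa = (f_x(x, a + h) - f_x(x, a - h)) / (2 * h)
        J = np.array([[f_x(x, a), f_a(x, a)],
                      [f_xx, f_xa]])
        x, a = np.array([x, a]) - np.linalg.solve(J, F)
    return x, a

# Cusp unfolding dx/dt = a + b x + x^3 at fixed b = -3: f and f_x
# vanish together at (x, a) = (1, 2), which lies on the saddle-node
# curve a^2/4 + b^3/27 = 0.
b = -3.0
x_sn, a_sn = scalar_saddle_node(lambda x, a: a + b * x + x ** 3,
                                lambda x, a: b + 3 * x ** 2,
                                lambda x, a: 1.0,
                                x0=1.2, a0=1.5)
```

Repeating this for a range of $b$ values traces out the saddle-node curve along which the test functions discussed next are evaluated.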
In practice, the left and right zero eigenvectors are computed by solving the following nonsingular bordered systems \cite{Govaerts2000} \begin{align}\label{bordered1} \begin{bmatrix} p^t &\gamma \end{bmatrix}\begin{bmatrix} Df(x,\lambda_1,\lambda_2) & \bar{p}\\ \bar{q}^t & 0 \end{bmatrix} &= \begin{bmatrix} 0 &1 \end{bmatrix}, &\quad \begin{bmatrix} Df(x,\lambda_1,\lambda_2) & \bar{p}\\ \bar{q}^t & 0 \end{bmatrix} \begin{bmatrix} q\\\gamma \end{bmatrix}&= \begin{bmatrix} 0\\1 \end{bmatrix}. \end{align} The eigenvectors are thus normed as $\bar{p}^t p=\bar{q}^t q=1$, where $\bar{p}$ and $\bar{q}$ are the zero eigenvectors computed at the previous point on the saddle-node curve. In a similar fashion, a vector tangent to the solution curve is found. We will denote it by $v$ and assume it is normalised to unit length. Finally, the Hessian of the dynamical system is computed. Following \citet{Kuznetsov1999}, we will consider it as a bilinear form $\mathbb{R}^n\times\mathbb{R}^n\rightarrow \mathbb{R}^n$ denoted by $D^2_{x,x}f$. Generically, the dynamics in the centre manifold is conjugate to the normal form \begin{align} \dot{y}&= \alpha s+\beta y^2\\ \alpha&=\langle p, \left. D_s f(x,\lambda_1(s),\lambda_2(s))\right|_{s=0}\rangle\\ \beta&=\langle p, (D^2_{x,x}f)(q,q)\rangle \end{align} where the brackets denote the dot product and $D^2_{x,x}f$ is evaluated at $s=0$, where the saddle-node bifurcation takes place. If $\beta=0$ at some point along the curve, MatCont classifies it as a cusp. However, if $\alpha=0$ at the same point, it is actually a SNTC point. Thus, we propose to use $\alpha$ as an additional test function. A suitable choice for the unfolding curve is one perpendicular to the saddle-node curve, e.g. $(s v_2,-s v_1)$. The test function then evaluates to \begin{equation}\label{TF} \alpha=\langle p, D_{\lambda_1}f v_2-D_{\lambda_2}f v_1\rangle. \end{equation} The unfolding of the cusp bifurcation is given by \begin{equation} \dot{x}=a + b x +x^3.
\end{equation} Simply taking $p=q=1$, we find along the saddle-node curve, defined by $a^2/4+b^3/27=0$, that $\beta=2^{2/3} a^{1/3}/9$ while $\alpha$ stays bounded away from zero. In fact, $\alpha=1$ at the bifurcation point. The normal form for the single zero SNTC interaction proposed by \citet{Saputra2010} and \citet{Saputra2015} is \begin{equation} \dot{x}=a x+ b x^2 +x^3. \end{equation} The equilibrium at $x=0$ undergoes a transcritical bifurcation along $a=0$ and a saddle-node bifurcation along the curve $a=b^2/4$. Along the latter curve, we find $\alpha=-b/4+\mbox{O}(b^3)$ and $\beta=-b$, so both test functions are zero at the SNTC point. Should one want to detect the SNTC interaction along a transcritical bifurcation curve, one should monitor test function $\beta$. By default, no test functions are computed along transcritical curves in MatCont. \section{The double zero SNTC interaction} \begin{figure}[t] \begin{center} \includegraphics[width=300pt]{bifcurve.eps} \end{center} \caption{Partial bifurcation diagram of the stressed nutrient-predator-prey system with $N_r$ and $D$ as the free parameters. The diagram was computed using MatCont \cite{Dhooge2004} at $c_r=9$ and $\mu_{RP}=0.7$. The bifurcations have been labeled as in Fig. \ref{unfolding1}.} \label{2zero} \end{figure} The SNTC interaction with two zero eigenvalues occurs in the stressed model at parameter values $c_r=9$ and $\mu_{RP}=0.7$. Figure \ref{2zero} shows a partial bifurcation diagram in the nutrient density of the inflow, $N_r$, and the flow rate, $D$. It follows the pattern of the unfolding in Fig. \ref{unfolding1}. The wash-out equilibrium is stable in this entire diagram. In region 1, this is the only equilibrium. In region 2, two equilibria with nonzero prey density exist. The one with the highest prey density is stable, while the other is of saddle type. The stable manifold of the latter separates the basins of attraction of the stable wash-out and nonzero prey equilibria.
In region 3, the predator has invaded the system, rendering both equilibria on the boundary $P=0$, $R>0$ unstable. In region 4, the co-existence equilibrium has turned unstable in a Hopf bifurcation and the system periodically oscillates. Close to the heteroclinic bifurcation, the period of the oscillation grows without bound, and the observed behaviour looks like switching between the equilibria with high and low prey density on the boundary. An example is shown in Fig. \ref{ts}. In the first phase of the switching, the prey density grows while the predator density stays close to zero. In the next phase, the predator density slowly grows while the prey density remains nearly constant and high. The predators then thrive and rapidly consume the prey until the density of the latter drops below a critical threshold, after which the predator population declines. In region 5, the cycle has disappeared in a heteroclinic bifurcation. After this bifurcation, the stable manifold of the zero predator equilibrium with the higher prey density no longer bounds the basin of attraction of the wash-out solution, which seems to attract all initial conditions. This remains true in region 6, where the co-existence equilibrium is no longer positive. The main difference between regions 5 and 6, on one hand, and 1, on the other, is that in regions 5 and 6 long transients can be observed due to the existence of unstable equilibria with nonzero predator and prey densities. \begin{figure}[t] \begin{center} \includegraphics[width=270pt]{HET_ts.eps} \end{center} \caption{Time series in region 4 of Fig. \ref{2zero}, near the heteroclinic bifurcation curve. Shown are the nutrient, normalised by $N_r$, in black and the prey and predator densities, normalised by $\kappa_{RP}$, in red and blue, respectively.
The parameter values are $N_r=24.6$ and $D=0.142$.} \label{ts} \end{figure} \subsection{Test functions for the double zero case} The test function used by MatCont to locate BT points along saddle-node curves is extracted from the following nonsingular bordered systems \begin{align}\label{bordered2} \begin{bmatrix} u^t &\beta \end{bmatrix}\begin{bmatrix} Df(x,\lambda_1,\lambda_2) & \bar{p}\\ \bar{q}^t & 0 \end{bmatrix} &= \begin{bmatrix} p^t &0 \end{bmatrix}, &\quad \begin{bmatrix} Df(x,\lambda_1,\lambda_2) & \bar{p}\\ \bar{q}^t & 0 \end{bmatrix} \begin{bmatrix} v\\\beta \end{bmatrix}&= \begin{bmatrix} q\\0 \end{bmatrix}, \end{align} that are solved after systems (\ref{bordered1}). If, and only if, $\beta=0$, $u$ and $v$ are left and right generalized eigenvectors for eigenvalue zero. Multiplying the second system from the left by $(p^t,0)$, we find that $\beta=p^t q$ so that the condition $\beta=0$ can also be interpreted as the solvability condition for the existence of such generalised eigenvectors. A BT point is detected when $\beta=0$. Two models have been proposed for the double zero SNTC interaction. In the absence of a formal derivation, \citet{Saputra2010} called the following system a ``minimal model'': \begin{align} \dot{x}&= y \nonumber\\ \dot{y}&= a x + k_1 b y + b x^2 + k_2 x y+x^2 y+\epsilon x^3+k_3 x^4, \label{SNTC1} \end{align} where the constants $k_1,\, k_2,\, k_3\neq 0$ satisfy \begin{alignat}{2} 2\epsilon k_1^2-k_1 k_2&=1, \nonumber\\ 3 k_1 k_3 &=1. \label{conditions} \end{alignat} In this model, the SNTC interaction takes place at $a=b=0$ and the saddle-node bifurcation takes place along the line $a=\epsilon (2\sqrt{(1-3 k_3 b)^3} -2+9 k_3 b)/(27 k_3^2)$. A straightforward computation shows that, along this curve, $\beta=\beta_1\epsilon k_3 b+\mbox{O}(b^2)$, where $\beta_1$ is a constant that depends on the choice of eigenvectors, which explains why the SNTC interaction is classified as a BT point. 
A more formally derived normal form was presented by \citet{Saputra2015}: \begin{align} \dot{x}&=a +y+ \mu_1 x^2 \nonumber\\ \dot{y}&=b y +\mu_2 x y. \label{SNTC2} \end{align} Again, the SNTC interaction takes place at $a=b=0$, and the saddle-node bifurcation takes place along the curve $a=0$. It is easy to see that, along this curve, $\beta=\beta_1 b+\mbox{O}(b^2)$. Again, the BT test function has a zero at the SNTC point. At a zero of the test function $\beta$, MatCont automatically computes the coefficient $a_2$ of the BT normal form \begin{align} \dot{x}&=y \nonumber\\ \dot{y}&=\lambda_1+\lambda_2 x+a_2 x^2+b_2 x y. \label{BT} \end{align} As is apparent from model (\ref{SNTC1}) of the SNTC interaction, $a_2$ evaluates to zero at the codimension two point. Thus, $a_2$ could be used as an additional test function. However, this coefficient is also zero in the case of a degenerate BT bifurcation of codimension three, which does not involve a transcritical bifurcation. It is better to use test function (\ref{TF}), which relies on the absence of a constant term in the unfolding. A direct computation shows that, for SNTC models (\ref{SNTC1}) and (\ref{SNTC2}), $\alpha=\alpha_1 \epsilon b+\mbox{O}(b^2)$ and $\alpha=\alpha_1 b+\mbox{O}(b^2)$, respectively, where $\alpha_1$ is a numerical constant that depends on the choice of eigenvectors. In contrast, for the BT normal form, we find $\alpha=\|p\|$ at the BT point. Thus, test function (\ref{TF}) neatly distinguishes the SNTC interaction from the BT bifurcation. When tracing out the transcritical bifurcation curve, one can simply use $\beta$ as a test function to localize the double zero SNTC interaction. \section{Conclusion} We have analysed the single zero and double zero SNTC interactions in the stressed nutrient-predator-prey model of \citet{Kooi2008}, and demonstrated that both play the role of ``organising centre'' for the dynamics.
Since the transcritical bifurcation is of codimension one in this model, due to its special structure, the numerical construction of the bifurcation diagram is not entirely straightforward. The single zero SNTC interaction is classified by MatCont \cite{Dhooge2004} as a cusp bifurcation, while the double zero interaction is classified as a BT point. We verified that the applicable test functions have isolated zeros for the normal forms of the SNTC interactions proposed by \citet{Saputra2015} and \citet{Saputra2010}. While we have referred to MatCont in the discussion, AUTO \cite{AUTO} uses the same test functions, albeit computed in a slightly different way, approximating $\beta$ by a finite-difference formula. Our observations about the test functions are thus valid for AUTO, too. We propose a new test function that can uniquely identify the SNTC interactions along saddle-node curves. This test function only involves the parametric derivatives of the dynamical system and the null space of its Jacobian. We hope that this test function will prove a useful tool for performing numerical bifurcation analysis on models with invariant coordinate planes, often found in the modelling of population dynamics and chemical reactions. \nonumsection{Acknowledgements} \noindent LvV was supported by an individual Discovery Grant of NSERC.
\section{\uppercase{D}\lowercase{efinitions of coordinates}\label{sec_1}} In the main text, the Cartesian coordinate depicted in Fig.~\ref{fig_s1}(a) was adopted. The $x$, $y$, and $z$ axes of this coordinate are parallel to the [$\bar{1}\bar{1}\bar{2}$], [1$\bar{1}$0], and [$\bar{1}\bar{1}$1] crystallographic directions of diamond, respectively. Since a single nitrogen-vacancy (NV) center with its symmetry axis parallel to the [$\bar{1}\bar{1}$1] direction was used as a quantum sensor and the static external magnetic field $\bm{B}_0$ was applied along the same direction, this coordinate is suitable for discussing the position of the target nucleus with respect to that of the sensor, and is henceforth termed the ``sensor coordinate''. The spherical coordinate parameters ($r$, $\theta$, $\phi$) with the distance $r$ ($\geq 0$), the polar angle $\theta$ ($0 \leq \theta \leq \pi$), and the azimuthal angle $\phi$ ($0 \leq \phi < 2\pi$) are used to specify the position of the target nucleus. \begin{figure*} \begin{center} \includegraphics{fig_s1.eps} \caption{ (a) Sensor coordinate. (b) Laboratory coordinate. \label{fig_s1}} \end{center} \end{figure*} The origin is taken at an approximate ``center of mass'' of the NV center, where the NV spin is regarded as a point dipole. It lies at the crossing point of the NV symmetry axis and the plane spanned by the three carbon atoms adjacent to the vacancy, or in other words, 0.75~\AA~above the vacancy site. It is assumed that the vacancy site is located above the nitrogen site. The motivation for this definition is that the direction of $A_{\perp}$ depends on $\theta$ and changes sign at $\theta$ = $\frac{\pi}{2}$ [Eq.~(\ref{eq_e_perp}) in Sec.~\ref{sec_3}], and that the target nuclear spin we measured happens to lie in the plane including the vacancy. It is generally accepted that the ``center of mass'' of the NV center lies above the vacancy, even though its precise position has not been determined.
For the analysis of $\phi$ of the target nuclear spin, we take the direction of $A_{\perp}$ inward. Our definition gives $\theta$ of the target spin as 94.8$^{\circ}$, which is consistent with Eq.~(\ref{eq_e_perp}). Here, we introduce the ``laboratory coordinate'', the second coordinate used in this Supplemental Material. The sensor coordinate and the laboratory coordinate are readily transformed into each other, and the use of two coordinates is purely a matter of convenience. The laboratory coordinate is depicted in Fig.~\ref{fig_s1}(b). The $x^{(\mathrm{L})}$, $y^{(\mathrm{L})}$, and $z^{(\mathrm{L})}$ axes are parallel to the [110], [$\bar{1}$10], and [001] crystallographic directions, respectively. Because of the configuration of our experimental setup [Fig.~\ref{fig_s2}(a)], this coordinate is suitable for discussing the direction of the magnetic field generated by a coil bonded on the back side of the sample mount (Sec.~\ref{sec_2}). The following matrix transforms a vector $\bm{a}^{\mathrm{(L)}}$ defined in the laboratory coordinate into a vector $\bm{a}^{\mathrm{(S)}}$ in the sensor coordinate: \begin{equation} T^{ \mathrm{(L \rightarrow S)} } = R_y(-\Theta_0^{\mathrm{(L)}}) R_z (-\Phi_0^{\mathrm{(L)}}). \end{equation} $\Theta_0^{\mathrm{(L)}}$ = 54.7$^{\circ}$ and $\Phi_0^{\mathrm{(L)}}$ = 180$^{\circ}$ are the polar and azimuthal angles of $\bm{B}_0$ as seen in the laboratory coordinate, respectively. $R_y(\Theta)$ and $R_z(\Phi)$ are the rotation matrices defined as \begin{equation} R_y(\Theta) = \left( \begin{array}{ccc} \cos \Theta & 0 & \sin \Theta \\ 0 & 1 & 0 \\ -\sin \Theta & 0 & \cos \Theta \\ \end{array} \right) \quad\mathrm{and}\quad R_z(\Phi) = \left( \begin{array}{ccc} \cos \Phi & -\sin \Phi & 0 \\ \sin \Phi & \cos \Phi & 0 \\ 0 & 0 & 1 \\ \end{array} \right).
\end{equation} \section{\uppercase{E}\lowercase{xperimental setup}\label{sec_2}} \subsection{Electronics} As described in the main text, a single NV center in a type-IIa (001) diamond was measured by a home-built confocal microscope. Here, we focus on the electronics of our setup, which is schematically shown in Fig.~\ref{fig_s2}(b). Microwave pulses are generated by a vector signal generator (VSG, Stanford Research Systems SG396), amplified by a high-power broadband amplifier (Mini-Circuits ZHL-16W-43+), and delivered to the NV center through a copper wire running across the diamond surface [Fig.~\ref{fig_s2}(a)]. An arbitrary waveform generator (AWG, Tektronix AWG7122C) provides IQ signals to the VSG. RF pulses are triggered by the same AWG. Waveforms generated by a function generator (FG, NF WF1973) and amplified by a low-impedance amplifier (Accel Instruments TS200-HF) are sent to a hand-wound coil bonded on the back side of the sample mount [Fig.~\ref{fig_s2}(a)]. This coil has an inductance $L_{\mathrm{c}}$ of 3.5~$\mu$H and a resistance $R_{\mathrm{c}}$ of 0.3~$\Omega$. A low-impedance, high-power resistor ($R$ = 4.7~$\Omega$) is connected in series with the coil. With this resistor, the rise time is reduced to $L_{\mathrm{c}}/(R + R_{\mathrm{c}})$ = 0.7~$\mu$s, allowing us to generate RF signals at several hundreds of kHz. \begin{figure*} \begin{center} \includegraphics{fig_s2.eps} \caption{ (a) Front and back sides of the sample mount (PCB board) with a diamond sample, a copper wire, and a coil. (b) Schematic of electronics. AWG: arbitrary waveform generator, VSG: vector signal generator, FG: function generator. (c) Resonance frequencies of single NV centers found in an 80$\times$80$\times$10~$\mu$m$^3$ volume. The black lines indicate the average resonance frequencies of the NV centers having the same symmetry axis. (d) Determination of the direction of $\bm{B}_{\mathrm{c}}$.
The fit errors are minimized at ($\theta_{\mathrm{c}}^{\mathrm{(L)}}, \phi_{\mathrm{c}}^{\mathrm{(L)}}$) = (5.2$^{\circ}$, 81.6$^{\circ}$). \label{fig_s2}} \end{center} \end{figure*} \subsection{Magnetic fields generated by the coil} To calibrate the direction of the magnetic fields generated by the coil, we conduct vector DC magnetometry using multiple single NV centers. We optically resolve single NV centers (including the one used in the main text) in a $80\times80\times10~\mathrm{\mu m^3}$ scan volume within diamond, and apply DC voltages on the coil to generate the DC magnetic field $\bm{B}_{\mathrm{c}}$. Note that this $\bm{B}_{\mathrm{c}}$ is different from $\bm{B}_0$. The latter was supplied using a permanent magnet, and in the present measurement, the magnet was removed ($B_0$ = 0~mT). The spin resonance frequencies of 13 single NV centers under $\bm{B}_{\mathrm{c}}$ are plotted in Fig.~\ref{fig_s2}(c). Among the NV centers having the same symmetry axis, the deviation from the averaged resonance frequency [solid lines in Fig.~\ref{fig_s2}(c)] is found to be less than 0.1~kHz, certifying high homogeneity of $\bm{B}_{\mathrm{c}}$ within the observed volume. The resonance frequencies of the $i$th NV center are calculated from the spin Hamiltonian \begin{equation} H^{(i)} = D (S_z^{(i)})^2 + \gamma_{\mathrm{e}} \bm{B}_{\mathrm{c}} \cdot \bm{S}^{(i)}, \end{equation} where $D$ is the zero-field splitting, $\gamma_{\mathrm{e}}$ = 28~MHz/mT is the gyromagnetic ratio of the electron, and $\bm{S}^{(i)}$ is the $S$ = 1 spin operator with its quantization axis taken as the symmetry axis of the $i$th NV center. The direction and strength of $\bm{B}_{\mathrm{c}}$ are determined by minimizing the errors between the observed and calculated resonance frequencies [Fig.~\ref{fig_s2}(d)]. 
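The fit in Fig.~\ref{fig_s2}(d) can be reproduced by diagonalizing $H^{(i)}$ numerically. Below is a minimal Python sketch of this forward model (the function name is ours; in the actual fit, $\bm{B}_{\mathrm{c}}$ must first be rotated into the frame of each NV orientation before this function is called):

```python
import numpy as np

# Spin-1 operators in the basis {|+1>, |0>, |-1>}
Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex) / np.sqrt(2)
Sy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]])
Sz = np.array([[1, 0, 0], [0, 0, 0], [0, 0, -1]], dtype=complex)

def resonances(D, gamma_e, B):
    """|m_S=0> <-> |m_S=+/-1> transition frequencies (MHz) for a field
    B (mT, 3-vector) expressed in the frame of one NV center."""
    H = D * Sz @ Sz + gamma_e * (B[0] * Sx + B[1] * Sy + B[2] * Sz)
    E = np.sort(np.linalg.eigvalsh(H))
    return E[1] - E[0], E[2] - E[0]
```

For an axial field, this reduces to the analytic result $D \mp \gamma_{\mathrm{e}} B$; in the fit, the residuals of such frequencies are minimized over all 13 NV centers.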
As a result, we obtain $D$ = 2870.4~MHz, $B_{\mathrm{c}}$ = 1.47~mT, $\theta_{\mathrm{c}}^{\mathrm{(L)}}$ = 5.2$^{\circ}$, and $\phi_{\mathrm{c}}^{\mathrm{(L)}}$ = 81.6$^{\circ}$, where $\theta_{\mathrm{c}}^{\mathrm{(L)}}$ and $\phi_{\mathrm{c}}^{\mathrm{(L)}}$ are the polar and azimuthal angles of $\bm{B}_{\mathrm{c}}$ as seen in the laboratory coordinate. In the sensor coordinate, these angles are given by $\theta_{\mathrm{c}}$ = 55.7$^{\circ}$ and $\phi_{\mathrm{c}}$ = 186.2$^{\circ}$. \subsection{Delay time in the RF electronics} The presence of an unknown delay directly affects the accuracy of the estimation of $\phi$. For instance, at $f_1$ = 215.6~kHz used in our experiments, a delay of 13~ns amounts to an angle difference of $-$1$^{\circ}$. The sources of the delay include trigger jitters in the AWG and the FG, the electrical length of the coaxial cables, the time constant of the LC circuit, and so on. Some of them can be characterized independently, but the total delay {\it at the position of the NV center}, $t_{\mathrm{delay}}$, can only be measured using the NV center itself. For this purpose, we use a waveform \begin{equation} W(t) = \left\{ \begin{array}{ll} V_{\mathrm{pp}} \cos [2 \pi f_1 (t - \tau_0) + \phi_{\mathrm{rf}}] & (\tau_0 \leq t \leq \tau_0 + 4 \tau) \\ & \\ 0 & (\mathrm{otherwise}) \end{array} \right. \end{equation} with $V_{\mathrm{pp}}$ = 10~mV, $\tau_0$ = 11~$\mu$s, $\tau$ = 2.319~$\mu$s = (2$f_1$)$^{-1}$, and $\phi_{\mathrm{rf}}$ = 0$^{\circ}$ or 270$^{\circ}$. The coil receives the amplified waveform $t_{\mathrm{delay}}$ seconds after the trigger of the AWG. At the same time, the CP sequence with $N$ = 4 and $\tau$ = 2.319~$\mu$s, starting $t_{\mathrm{wait}}$ seconds after the trigger, detects this AC field.
When read out by the ($\pi$/2)$_{\mathrm{Y}}$ pulse, the transition probability is given by~\cite{SGS+17s} \begin{equation} P_{\mathrm{Y}} = \frac{1}{2}(1-\sin \varphi) \label{py_rf} \end{equation} with \begin{equation} \varphi = \frac{ 2 \pi \gamma_{\mathrm{e}} b_{\mathrm{rf}} }{ V_{\mathrm{pp}} } \int W(t - t_{\mathrm{delay}}) y(t-t_{\mathrm{wait}}) dt. \label{eq_varphi} \end{equation} $b_{\mathrm{rf}}$ is the AC signal amplitude at the position of the NV center, and $y(t)$ is the modulation function \begin{equation} y(t) = \left\{ \begin{array}{ll} 1 & ( 0 \leq t < \frac{\tau}{2}, \frac{3 \tau}{2} \leq t < \frac{5 \tau}{2}, \frac{7 \tau}{2} \leq t < 4 \tau ) \\ & \\ -1 & ( \frac{\tau}{2} \leq t < \frac{3 \tau}{2}, \frac{5 \tau}{2} \leq t < \frac{7 \tau}{2} ) \\ & \\ 0 & (\mathrm{otherwise}). \end{array} \right. \end{equation} Equation~(\ref{eq_varphi}) means that the accumulated phase $\varphi$ is a convolution of a signal wave with delay $t_{\mathrm{delay}}$ and a ``sensing window'' of the CP sequence. When the timing of the CP sequence matches with that of the signal wave, $P_{\mathrm{Y}}$ is modified accordingly. Therefore, by sweeping $t_{\mathrm{wait}}$, we can estimate $t_{\mathrm{delay}}$. Figure~\ref{fig_s3} shows $P_{\mathrm{Y}}$ as a function of $t_{\mathrm{wait}}$ for $\phi_{\mathrm{rf}}$ = 0$^{\circ}$ (a) and 270$^{\circ}$ (b). \begin{figure*} \begin{center} \includegraphics{fig_s3.eps} \caption{ AC sensing for (a) $\phi_{\mathrm{rf}}$ = 0$^{\circ}$ and (b) $\phi_{\mathrm{rf}}$ = 270$^{\circ}$. \label{fig_s3}} \end{center} \end{figure*} The data is fitted by Eq.~(\ref{py_rf}), and we extract $t_{\mathrm{delay}}$ = 1.088$\pm$0.003~$\mu$s. The error of $\pm$3~ns corresponds to only $\pm$0.2$^{\circ}$. \subsection{Chirped microwave pulse} In our experiments based on the pulse sequences shown in Fig.~1 of the main text, single photons emitted from the NV center are counted by a single-photon counting module (Laser Components COUNT-10C). 
These events are indicated as {\it Readout} in Fig.~1 of the main text. The recorded photon counts are converted into the ``transition probabilities'' (Sec.~\ref{sec_3}). For accurate conversion, calibration of the photon counts marking $|m_S = 0 \rangle$ and $|m_S = -1 \rangle$ is crucial. To robustly flip the NV spin from laser-initialized $|m_S = 0 \rangle$ to $|m_S = -1 \rangle$, we use a chirped microwave pulse known as WURST (wideband, uniform rate, smooth truncation)~\cite{KF95s,SSB+16s}. The amplitude modulation of WURST is given by \begin{eqnarray} \left( 1 - \left| \cos \frac{\pi t}{ t_{\mathrm{p}} } \right| \right)^{ \alpha_{\mathrm{p}} }, \end{eqnarray} where $t_{\mathrm{p}}$ is the pulse length and $\alpha_{\mathrm{p}}$ characterizes the envelope shape. We set $t_{\mathrm{p}}$ = 2~$\mu$s and $\alpha_{\mathrm{p}}$ = 2, and sweep the microwave frequency from $-$10 to 10~MHz around the resonance frequency during the pulse. The pulse shaping and frequency modulation are done by the AWG. Prior to the respective pulse sequences, we record the reference photon counts with and without a chirped pulse. A chirped pulse is also used in the protocol of Fig.~1(g) of the main text, when the NV spin is prepared in $|m_S = -1 \rangle$ before applying an RF $\pi$/2 pulse (Sec.~\ref{sec_8}). Note that microwave pulses in the CP and PulsePol sequences are {\it not} chirped. \section{\uppercase{D}\lowercase{ynamics of a single nuclear spin under the }\uppercase{CP}\lowercase{ sequence}\label{sec_3}} Here, we derive Eqs.~(1) and (2) of the main text, closely following the descriptions in~\cite{TWS+12s} and~\cite{BCA+16s}. In the sensor coordinate, the spin Hamiltonian of the NV--$^{13}$C-coupled system is given by \begin{equation} H = D S_z^2 + \gamma_{\mathrm{e}} B_0 S_z - \gamma_{\mathrm{c}} B_0 I_z + S_z (A_{\perp} \bm{e}_{\perp} \cdot \bm{I} + A_{\parallel} I_z). \label{eq_h} \end{equation} The target $^{13}$C nuclear spin is located at ($r$, $\theta$, $\phi$). 
$\bm{e}_{\perp}$ is given by \begin{equation} \bm{e}_{\perp} = \left\{ \begin{array}{ll} \cos \phi \, \bm{e}_x + \sin \phi \, \bm{e}_y & ( 0 \leq \theta < \frac{\pi}{2} ) \\ & \\ -\cos \phi \, \bm{e}_x - \sin \phi \, \bm{e}_y & ( \frac{\pi}{2} \leq \theta \leq \pi ). \end{array} \right. \label{eq_e_perp} \end{equation} The direction of $A_{\perp}$ depends on $\theta$: when the nuclear spin lies above (below) the sensor spin, $A_{\perp}$ points outward (inward), because $A_{\perp} \propto 3 \cos \theta \sin \theta/r^3$ in a dipolarly-coupled system. Figure~1(a) of the main text depicts the case for $0 \leq \theta \leq \frac{\pi}{2}$. The operator $\bm{e}_{\perp} \cdot \bm{I}$ can be simplified to $I_x$ by an appropriate unitary transformation (a rotation about the $z$ axis), so that the eigenenergies of Eq.~(\ref{eq_h}) are independent of $\phi$. However, we keep this form in order to examine how the real-space position of the $^{13}$C nuclear spin is reflected in its dynamics. In our experiments, $|m_S = 0 \rangle$ and $|m_S = -1 \rangle$ of the NV spin, separated by $D - \gamma_{\mathrm{e}} B_0$ = 1.8582~GHz, are used as $|0 \rangle$ and $|1 \rangle$ of the sensor, respectively. (Strictly speaking, the $^{14}$N isotope of the NV center has a nuclear spin $I$ = 1, and the $m_I$ = 1 state we used has a transition frequency 2~MHz higher than $D - \gamma_{\mathrm{e}} B_0$, due to the NV--$^{14}$N hyperfine interaction. In addition, the $^{14}$N nuclear spin is polarized into the $m_I$ = 1 state by optical hyperpolarization~\cite{AS18s}. Other nuclear sublevels thus play no role in the present work, and are not considered below.) We move to the rotating frame of the NV spin and rewrite Eq.~(\ref{eq_h}) as \begin{equation} H_{\mathrm{r}} = -\gamma_{\mathrm{c}} B_0 I_z + S_z (A_{\perp} \bm{e}_{\perp} \cdot \bm{I} + A_{\parallel} I_z). \label{eq_hr} \end{equation} Note that the nuclear spin stays in the original sensor coordinate.
When the NV spin is $|m_S = 0 \rangle$, $H_{\mathrm{r}}$ is reduced to \begin{equation} H_0 = -\gamma_{\mathrm{c}} B_0 I_z = -f_0 I_z. \end{equation} For $|m_S = -1\rangle$, we obtain \begin{equation} H_1 = -f_0 I_z - (A_{\perp} \bm{e}_{\perp} \cdot \bm{I} + A_{\parallel} I_z) = -f_1\bm{e}_{\mathrm{p}} \cdot \bm{I}, \end{equation} with \begin{eqnarray} f_1 &=& \sqrt{(f_0 + A_{\parallel})^2 + A_{\perp}^2} \label{eq_f1} \\ \bm{e}_{\mathrm{p}} &=& \frac{ f_0 + A_{\parallel} }{f_1} \bm{e}_z + \frac{ A_{\perp} }{f_1} \bm{e}_{\perp} = \cos \theta_{\mathrm{p}} \bm{e}_z + \sin \theta_{\mathrm{p}} \bm{e}_{\perp}. \label{eq_e_p} \end{eqnarray} In our experiments, the polar angle is calculated as $\theta_{\mathrm{p}} = \arctan[A_{\perp}/(f_0 + A_{\parallel})]$ = 5.9$^{\circ}$. In the protocol of Fig.~1(g) of the main text, the nuclear spin is driven by an RF field while the sensor is $|m_S = -1\rangle$. Therefore, $\bm{e}_{\mathrm{p}}$ is the precession axis of the target nuclear spin (hence the suffix ``p''). We detail this point in Sec.~\ref{sec_8}. We now examine the dynamics of a single nuclear spin under the CP sequence by calculating the transition probabilities, which are the probabilities that, at the end of the CP sequence, the NV spin is found to be in the state opposite to the state right before the application of the first $(\pi/2)_{\mathrm{X}}$ pulse. In our experiments, both $|m_S = 0 \rangle$ and $|m_S = -1 \rangle$ are used as the initial state of the CP sequence, but the expressions of the transition probabilities obtained below [Eqs.~(\ref{eq_px}) and (\ref{eq_py})] do not depend on the initial state. We assume that the sensor is initialized to $|m_S = 0 \rangle$ with its pure state density matrix given by $|0 \rangle \langle 0|$. 
We also introduce a density matrix of the nuclear spin as \begin{equation} \rho_{\mathrm{n}} = \frac{1}{2} \bm{1} + \bm{\nu}_{\mathrm{n}} \cdot \bm{I}, \end{equation} where $\bm{\nu}_{\mathrm{n}}$ is the Bloch vector of the nuclear spin defined in the sensor coordinate. Again, we assume that the nuclear spin is in a pure state, which is justified because the nuclear spin is polarized in our protocol and its coherence time is long. Therefore, $\bm{\nu}_{\mathrm{n}}$ is parametrized by $(\theta_{\mathrm{n}}, \phi_{\mathrm{n}})$. Note that both $\theta_{\mathrm{n}}$ and $\phi_{\mathrm{n}}$ are time-dependent in the sensor coordinate. The density matrix of an uncoupled NV--$^{13}$C system is written as \begin{equation} \rho_0 = |0 \rangle \langle 0| \otimes \rho_{\mathrm{n}}. \end{equation} The $(\pi/2)_{\mathrm{X}}$ pulse is applied first to create a superposition of $|0 \rangle$ and $|1 \rangle$, which couple with the nuclear spin differently. The unitary operator for the $(\pi/2)_{\mathrm{X}}$ pulse is given as \begin{equation} U_{\mathrm{X}} = \frac{1}{\sqrt{2}} \left( |0 \rangle \langle 0| + |1 \rangle \langle 1| - i |1 \rangle \langle 0| -i |0 \rangle \langle 1| \right). \end{equation} The evolution of the nuclear spin during the CP sequence is described as \begin{equation} U_{\mathrm{cp}} = (U_0 U_1^2 U_0)^{ \frac{N}{2} } |0 \rangle \langle 0| + (U_1 U_0^2 U_1)^{ \frac{N}{2} } |1 \rangle \langle 1| \label{eq_u_cp} \end{equation} with \begin{eqnarray} U_0 &=& e^{ -2\pi i H_0 \frac{\tau}{2} } = e^{ i \pi f_0 \tau I_z} = e^{ i \alpha I_z} \label{eq_u0} \\ U_1 &=& e^{ -2\pi i H_1 \frac{\tau}{2} } = e^{ i \pi f_1 \tau \bm{e}_{\mathrm{p}} \cdot \bm{I} } = e^{ i \beta \bm{e}_{\mathrm{p}} \cdot \bm{I} }.
\label{eq_u1} \end{eqnarray} By noting that any unitary evolution of a single spin can be described as a rotation around a certain axis, we can rewrite Eq.~(\ref{eq_u_cp}) as \begin{equation} U_{\mathrm{cp}} = e^{ -i N \phi_{\mathrm{cp}} \bm{n}_0 \cdot \bm{I} } |0 \rangle \langle 0| + e^{ -i N \phi_{\mathrm{cp}} \bm{n}_1 \cdot \bm{I} } |1 \rangle \langle 1|, \label{eq_u_cp_2} \end{equation} where $\bm{n}_0$ and $\bm{n}_1$ define the rotation axes of the respective unitary operations, and $N \phi_{\mathrm{cp}}$ is the rotation angle. From straightforward calculations, we obtain $\phi_{\mathrm{cp}}$ as \begin{equation} \cos \phi_{\mathrm{cp}} = \cos \alpha \cos \beta - \cos \theta_{\mathrm{p}} \sin \alpha \sin \beta, \label{eq_cos_phi_cp} \end{equation} with $\alpha = \pi f_0 \tau$ and $\beta = \pi f_1 \tau$ [as defined by Eqs.~(\ref{eq_u0}) and (\ref{eq_u1})]. For later convenience, we also give an explicit form of $\bm{n}_0$: \begin{equation} \bm{n}_0 = -\frac{ \sin \theta_{\mathrm{p}} \sin \beta }{ \sin \phi_{\mathrm{cp}} } \bm{e}_{\perp} -\frac{ \cos \theta_{\mathrm{p}} (\sin \alpha \cos \beta + \cos \theta_{\mathrm{p}} \cos \alpha \sin \beta ) }{ \sin \phi_{\mathrm{cp}} } \bm{e}_z. \label{eq_n0} \end{equation} The transition probability with the $(\pi/2)_{\mathrm{X}}$ readout pulse is calculated as \begin{equation} P_{\mathrm{X}} = \mathrm{Tr}[ (S_z^2) U_{\mathrm{X}} U_{\mathrm{cp}} U_{\mathrm{X}} \rho_0 U_{\mathrm{X}}^{\dagger} U_{\mathrm{cp}}^{\dagger} U_{\mathrm{X}}^{\dagger} ] = 1 - \frac{1}{2}(1-\bm{n}_0 \cdot \bm{n}_1) \sin^2 \frac{N \phi_{\mathrm{cp}}}{2}. \label{eq_px} \end{equation} This is Eq.~(1) of the main text.
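Equation~(\ref{eq_cos_phi_cp}) is easy to verify numerically: for a spin 1/2, $U_0U_1^2U_0$ is a rotation by $2\phi_{\mathrm{cp}}$, so its trace equals $2\cos\phi_{\mathrm{cp}}$. A short sketch (the parameter values are arbitrary test values, not the experimental ones):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def rot(angle, n):
    """exp(i * angle * n.I) for a spin 1/2 (I = sigma/2)."""
    nsig = n[0] * X + n[1] * Y + n[2] * Z
    return np.cos(angle / 2) * np.eye(2) + 1j * np.sin(angle / 2) * nsig

alpha, beta, theta_p, phi = 0.7, 1.9, 0.25, 1.1   # arbitrary test values
# e_p = cos(theta_p) e_z + sin(theta_p) e_perp, Eq. (eq_e_p)
e_p = np.array([np.sin(theta_p) * np.cos(phi),
                np.sin(theta_p) * np.sin(phi),
                np.cos(theta_p)])

U0 = rot(alpha, np.array([0.0, 0.0, 1.0]))        # Eq. (eq_u0)
U1 = rot(beta, e_p)                               # Eq. (eq_u1)
cos_phi_cp = np.trace(U0 @ U1 @ U1 @ U0).real / 2
rhs = (np.cos(alpha) * np.cos(beta)
       - np.cos(theta_p) * np.sin(alpha) * np.sin(beta))
```

The two quantities agree to machine precision for any choice of angles.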
When the $(\pi/2)_{\mathrm{Y}}$ pulse is used, the transition probability becomes \begin{eqnarray} P_{\mathrm{Y}} &=& \mathrm{Tr}[ (S_z^2) U_{\mathrm{Y}} U_{\mathrm{cp}} U_{\mathrm{X}} \rho_0 U_{\mathrm{X}}^{\dagger} U_{\mathrm{cp}}^{\dagger} U_{\mathrm{Y}}^{\dagger} ] \nonumber \\ &=& \frac{1}{2} + \frac{1}{4} \bm{\nu}_n \cdot \left\{ (\bm{n}_0 - \bm{n}_1) \sin (N \phi_{\mathrm{cp}}) +2 (\bm{n}_0 \times \bm{n}_1) \sin^2 \frac{ N \phi_{\mathrm{cp}} }{2} \right\}. \label{eq_py} \end{eqnarray} $U_{\mathrm{Y}}$, the unitary operator for the $(\pi/2)_{\mathrm{Y}}$ pulse, is given as \begin{equation} U_{\mathrm{Y}} = \frac{1}{\sqrt{2}} \left( |0 \rangle \langle 0| + |1 \rangle \langle 1| + |1 \rangle \langle 0| - |0 \rangle \langle 1| \right). \end{equation} Up to this point, the calculations are rigorous and general. We now consider a situation appropriate to our experiments. Both $P_{\mathrm{X}}$ and $P_{\mathrm{Y}}$ are strongly modulated when $\bm{n}_0$ and $\bm{n}_1$ are anti-parallel, {\it i.e.}, $\bm{n}_0 \cdot \bm{n}_1 = -1$. This also means $\bm{n}_1 = -\bm{n}_0$ and $\bm{n}_0 \times \bm{n}_1 = 0$. Equation~(\ref{eq_py}) then simplifies to \begin{equation} P_{\mathrm{Y}} = \frac{1}{2} + \frac{1}{2} \bm{\nu}_{\mathrm{n}} \cdot \bm{n}_0 \sin (N \phi_{\mathrm{cp}}). \label{eq_py_2} \end{equation} In the high-field regime ($f_0 \gg |A_{\parallel}|, A_{\perp}$), the condition $\bm{n}_0 \cdot \bm{n}_1 = -1$ is realized, for instance, by choosing $\tau$ such that~\cite{TWS+12s} \begin{equation} 2\tau \approx \frac{1}{ f_0 + A_{\parallel}/2 } \approx \frac{1}{f_{\mathrm{t}}}, \end{equation} which is also the setting of our protocol. In this case, we evaluate $\alpha \approx \pi f_0 /[2(f_0 + A_{\parallel}/2)] \approx \pi/2$, $\beta \approx \pi f_1 /[2(f_0 + A_{\parallel}/2)] \approx \pi/2$, and $\cos \theta_{\mathrm{p}} \approx 1$.
From Eq.~(\ref{eq_cos_phi_cp}), we obtain $\cos \phi_{\mathrm{cp}} \approx -\cos \theta_{\mathrm{p}} \approx -1$, and hence $\phi_{\mathrm{cp}} \approx \pi$. It should be noted that we can assume either $\phi_{\mathrm{cp}} > \pi$ or $\phi_{\mathrm{cp}} < \pi$~\cite{BCA+16s}. This arbitrariness affects the directions of $\bm{n}_{0,1}$ and the sign of $\sin (N \phi_{\mathrm{cp}})$, but does not affect the final results, because $U_{\mathrm{cp}}$ is identical in both cases. We set $\phi_{\mathrm{cp}} \approx \pi - \theta_{\mathrm{p}} < \pi$. In this case, for sufficiently small and even $N$, $\sin (N \phi_{\mathrm{cp}}) < 0$. From Eq.~(\ref{eq_n0}), and by noting $(\sin \theta_{\mathrm{p}} / \sin \phi_{\mathrm{cp}}) \approx 1$, we obtain $\bm{n}_0 \approx -\bm{e}_{\perp}$. Moreover, after the application of the RF $\pi/2$ pulse, the nuclear spin is also in the $xy$ plane ($\theta_{\mathrm{n}}$ = $\pi/2$). Equation~(\ref{eq_py_2}) becomes \begin{equation} P_{\mathrm{Y}} = \left\{ \begin{array}{ll} \frac{1}{2} - \frac{1}{2} \cos (\phi - \phi_{\mathrm{n}}) \sin (N \phi_{\mathrm{cp}}) & ( 0 \leq \theta < \frac{\pi}{2} ) \\ & \\ \frac{1}{2} - \frac{1}{2} \cos (\phi - \phi_{\mathrm{n}} + \pi) \sin (N \phi_{\mathrm{cp}}) & ( \frac{\pi}{2} \leq \theta \leq \pi ). \end{array} \right. \label{eq_py_3} \end{equation} This is Eq.~(2) of the main text. \section{\uppercase{E}\lowercase{stimation of hyperfine parameters}\label{sec_4}} From Eqs.~(\ref{eq_f1}) and (\ref{eq_cos_phi_cp}), we obtain \begin{eqnarray} A_{\parallel} &=& \frac{ \cos \alpha \cos \beta - \cos (\pi-2\pi f_{\mathrm{cp}} \tau) }{ \sin \alpha \sin \beta} f_1 - f_0 \label{eq_A_para} \\ A_{\perp} &=& \sqrt{ f_1^2 - (f_0 + A_{\parallel})^2 }. \label{eq_A_perp} \end{eqnarray} $f_{\mathrm{cp}}$ was determined in Fig.~2(b) of the main text. We relate $f_{\mathrm{cp}}$ and $\phi_{\mathrm{cp}}$ as $\phi_{\mathrm{cp}} = \pi - 2\pi f_{\mathrm{cp}} \tau$. $f_0$ and $f_1$ were determined in Fig.~2(c) of the main text.
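The substitution into Eqs.~(\ref{eq_A_para}) and (\ref{eq_A_perp}) can be scripted as a quick cross-check; a minimal sketch using the measured values (all frequencies in Hz):

```python
import numpy as np

f_cp, f0, f1, tau = 10.2e3, 387.5e3, 215.6e3, 1.6875e-6
alpha, beta = np.pi * f0 * tau, np.pi * f1 * tau
phi_cp = np.pi - 2 * np.pi * f_cp * tau

# Eqs. (eq_A_para) and (eq_A_perp)
A_para = (np.cos(alpha) * np.cos(beta) - np.cos(phi_cp)) \
         / (np.sin(alpha) * np.sin(beta)) * f1 - f0
A_perp = np.sqrt(f1**2 - (f0 + A_para)**2)
# polar angle of the precession axis, Eq. (eq_e_p)
theta_p = np.degrees(np.arctan(A_perp / (f0 + A_para)))
```

This returns $A_{\parallel} \approx -173.1$~kHz, $A_{\perp} \approx 22.3$~kHz, and $\theta_{\mathrm{p}} \approx 5.9^{\circ}$, consistent with the values quoted in the text.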
Substituting $f_{\mathrm{cp}}$ = 10.2~kHz, $f_0$ = 387.5~kHz, $f_1$ = 215.6~kHz, and $\tau$ = 1.6875~$\mu$s into Eqs.~(\ref{eq_A_para}) and (\ref{eq_A_perp}), we deduce $A_{\parallel}$ = $-$173.1~kHz and $A_{\perp}$ = 22.3~kHz, as given in the main text. \section{\uppercase{T}\lowercase{he number of nuclear spins contributing to the signal}\label{sec_5}} Even though $A_{\parallel}$ and $A_{\perp}$ are determined with high precision, there is still a possibility that multiple nuclear spins contribute to the experimental data. This is possible if multiple nuclear spins share the same hyperfine parameters (within experimental accuracy). This occurs, for instance, if they occupy lattice sites that are equivalent by symmetry. When multiple nuclear spins are involved but can be regarded as independent, $P_{\mathrm{X}}$ is given as \begin{equation} P_{\mathrm{X}} = \frac{1}{2} [1 + \prod_{i = 1}^{N_{\mathrm{nuc}}}(2P_{\mathrm{X},i} - 1)], \label{eq_P_X_N} \end{equation} where $N_{\mathrm{nuc}}$ is the number of nuclear spins. Figure~\ref{fig_s4} shows the data in Fig.~2(b) of the main text, together with simulations ($N_{\mathrm{nuc}}$ = 1, 2, and 3) performed using the experimental values. \begin{figure*} \begin{center} \includegraphics{fig_s4.eps} \caption{ Data in Fig.~2(b) of the main text, together with simulations for $N_{\mathrm{nuc}}$ = 1, 2, and 3. A single exponential decay with a time constant of 1.23~ms is superposed. \label{fig_s4}} \end{center} \end{figure*} A large oscillation amplitude with its minimum less than 0.5 and a single-component oscillation are hallmarks of a single nuclear spin. The experimental data clearly show these features, especially at early times before the damping occurs.
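Equation~(\ref{eq_P_X_N}) is straightforward to evaluate; a minimal sketch (the function name is ours):

```python
import numpy as np

def p_x_total(p_list):
    """Combine independent single-spin transition probabilities
    into the total P_X, Eq. (eq_P_X_N)."""
    return 0.5 * (1 + np.prod([2 * p - 1 for p in p_list]))
```

A single spin passes through unchanged, e.g. `p_x_total([0.2])` gives 0.2, whereas two spins with identical contrast give `p_x_total([0.2, 0.2])` = 0.68: the product $(2P_{\mathrm{X},i}-1)^2 \geq 0$ keeps the total at or above 0.5, which is why a minimum below 0.5 is a hallmark of a single nuclear spin.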
\section{\uppercase{P}\lowercase{ulse}\uppercase{P}\lowercase{ol method and calibration of the }\uppercase{RF}\lowercase{ pulse length}\label{sec_6}} In the main text, PulsePol, a pulsed DNP technique recently developed by Schwartz {\it et al.}~\cite{SST+17s}, was used for the polarization transfer between the NV electron spin and the $^{13}$C nuclear spins. While other DNP techniques such as NOVEL~\cite{HDSW88s} and optical pumping~\cite{SSC+16s} may also be applied to this system, we find PulsePol particularly useful because of its robustness and control flexibility. Even at the low magnetic field used in our experiments ($B_0$ = 36.2~mT), highly efficient polarization transfer was achieved. In addition, the direction of the polarization can be controlled. Here, we elaborate on these features of PulsePol through simulations. In Fig.~\ref{fig_s5}(a), we simulate the polarization transfer signal (upper) and the nuclear spin state (lower) after PolX/Y with $N_{\mathrm{pol}}$ = 5. \begin{figure*} \begin{center} \includegraphics{fig_s5.eps} \caption{ (a) Simulations of the polarization transfer (upper) and the nuclear spin state (lower) after PolX/Y with $N_{\mathrm{pol}}$ = 5. (b) Selective polarization transfer by PulsePol as a function of $N_{\mathrm{rep}}$. (c) $P_{\mathrm{sat}}$ at $k$ = 3 as a function of the RF pulse length. \label{fig_s5}} \end{center} \end{figure*} The initial state of the target nuclear spin is assumed to be completely mixed ($\rho_{\mathrm{n}} = \frac{1}{2} \bm{1}$). The NV spin is initialized to $|m_S = 0 \rangle$, and the probability of staying in $|m_S = 0 \rangle$, $P_0$, is reduced when the polarization transfer occurs. The simulated $P_0$ reproduces well the experimental observation shown in Fig.~3 ($\square$) of the main text. The simulated $\langle I_z \rangle$ shows that the polarization direction is controllable by $k$ (= $2 \tau_{\mathrm{pol}} f_{\mathrm{t}}$) and the phase-cycling of the sequence (PolX or PolY).
$N_{\mathrm{pol}}$ is a tunable parameter that determines the bandwidth. We used $N_{\mathrm{pol}}$ = 5 to selectively polarize the target nuclei at $k$ = 3 by tuning the bandwidth close to that of the $N$ =16 CP sequence. As described in~\cite{SST+17s}, the rate of polarization transfer is maximized at $k$ = 3. The number of repetitions required to transfer an angular momentum of 1 ($ \times \hbar$) is estimated as $N_{\mathrm{rep}} \geq \pi f_{\mathrm{t}}/(3 (2 + \sqrt{2}) A_{\perp} N_{\mathrm{pol}}) \sim$ 2.5, where $f_{\mathrm{t}}$ = 301.6~kHz and $A_{\perp}$ = 22.3~kHz are the experimental values. The color plot in Fig.~\ref{fig_s5}(b) shows the evolution of the polarization transfer as $N_{\mathrm{rep}}$ is increased. The spectrum for $N_{\mathrm{rep}}$ = 1 corresponds to the square ($\square$) points of Fig.~3 of the main text. The transferred polarization is evaluated by integrating the $k$ = 3 dip [the lower panel of Fig.~\ref{fig_s5}(b)]. The transfer is efficient up to $N_{\mathrm{rep}}$ = 3, consistent with the estimation above. The RF pulse length was calibrated by observing the saturated polarization $P_{\mathrm{sat}}$ as a function of the RF pulse length [Fig.~\ref{fig_s5}(c)]. In order to accurately track the motion of the nuclear spin, we want the lengths of the RF pulses to be integer multiples of oscillation periods. We set the length of the RF $\pi$ ($\pi$/2) pulse as 199.443~$\mu$s (102.041~$\mu$s), corresponding to 43 (22) oscillation periods of $f_1$ = 215.6~kHz. \section{\uppercase{A}\lowercase{nalysis of undersampled oscillations}\label{sec_7}} The data in Fig.~4(b) of the main text was intentionally undersampled in order to secure sufficiently long $t >$ 1~ms [Fig.~\ref{fig_s6}(a)]. \begin{figure*} \begin{center} \includegraphics{fig_s6.eps} \caption{ (a) Pulse protocol to observe nuclear free precessions. See also Fig.~1(f) of the main text. (b) Undersampled nuclear free precessions. 
Solid lines are fits by $A e^{-t/T_{\mathrm{2n}}^*} \cos (2\pi f^{(4)} t + \alpha^{(4)}) + B$. (c) $T_{\mathrm{2n}}^{*}$ of the measured nuclear free precessions. $\circ$: PolY, $\triangle$: PolX. The dashed line indicates $T_1$ of the sensor spin. \label{fig_s6}} \end{center} \end{figure*} We outline a procedure to analyze undersampled data. In signal processing, the sampling theorem asserts that an oscillation at frequency $f$ can be recovered if the sampling rate $(\Delta t)^{-1}$ is set to satisfy \begin{equation} \frac{1}{\Delta t} \geq 2f \,\,\,{\mathrm{or}}\,\,\, f_{\mathrm{N}} \equiv \frac{1}{2 \Delta t} \geq f, \end{equation} where $f_{\mathrm{N}}$ is called the Nyquist frequency. When this condition is not met, the measured points are undersampled, and there is a positive integer $m$ that satisfies \begin{equation} m f_{\mathrm{N}} \leq f < (m + 1) f_{\mathrm{N}}. \end{equation} For even $m$, $f$ and the frequency obtained by undersampling, $f^{(m)}$, are connected by \begin{eqnarray} f^{(m)} = f - m f_{\mathrm{N}}. \end{eqnarray} Suppose that by fitting to the undersampled data we obtain the phase $\eta^{(m)}$, while the original oscillation has the phase $\eta$. The following relation must be satisfied for any integer $k$: \begin{equation} 2\pi f^{(m)} (t_0 + k \Delta t) + \eta^{(m)} = 2\pi f ( t_0 + k \Delta t ) + \eta \quad (\bmod \,\, 2 \pi), \end{equation} where $t_0$ is the starting time of the sampling. It follows that $\eta = \eta^{(m)} - 2\pi \, m f_{\mathrm{N}} (t_0 + k \Delta t)$ $(\bmod \,\, 2 \pi)$; since $2\pi m f_{\mathrm{N}} k \Delta t = \pi m k$ is a multiple of $2\pi$ for even $m$, we arrive at \begin{equation} \eta = \eta^{(m)} - m \frac{\pi t_0}{\Delta t} \quad (\bmod \,\, 2 \pi). \label{eq_eta} \end{equation} From Fig.~4(a) of the main text, we already know the oscillation frequency to be 216~kHz. The frequency resolution is limited by the relatively short $t$ of about 100~$\mu$s, but is sufficient to determine an appropriate undersampling condition.
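The alias arithmetic above can be checked numerically; a short sketch (the true phase $\eta$ is an arbitrary test value):

```python
import numpy as np

f, dt, t0, eta = 215.8e3, 9.6e-6, 6.872e-6, 0.4
f_N = 1 / (2 * dt)                 # Nyquist frequency, ~52.08 kHz
m = int(f // f_N)                  # here m = 4 (even)
f_alias = f - m * f_N              # f^(m), ~7.47 kHz

# the sampled points of the true oscillation coincide with an aliased
# oscillation whose phase follows from inverting Eq. (eq_eta)
k = np.arange(100)
t = t0 + k * dt
eta_m = eta + m * np.pi * t0 / dt
same = np.allclose(np.cos(2 * np.pi * f * t + eta),
                   np.cos(2 * np.pi * f_alias * t + eta_m))
```

With these sampling parameters, `same` is `True`: the undersampled data are indistinguishable from a 7.47~kHz oscillation, and the original phase is recovered by subtracting $m \pi t_0/\Delta t$.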
We chose $\Delta t$ = 9.600~$\mu$s for undersampling of $m$ = 4, and $t_0$ = 6.872~$\mu$s in order to account for the RF pulse length. We can then recover the original phase using Eq.~(\ref{eq_eta}). Figure~\ref{fig_s6}(b) shows undersampled nuclear free precessions for $\phi_{\mathrm{rf}}$ = 0$^{\circ}$ and 270$^{\circ}$ (see Sec.~\ref{sec_8} for the definition of $\phi_{\mathrm{rf}}$). From fits to the data, we obtain an aliased frequency of 7.5~kHz, consistent with undersampling of $m$ = 4. The recovered original frequency is 215.8~kHz. The phases determined by Eq.~(\ref{eq_eta}) are plotted in Fig.~4(b) of the main text. Even though $t$ is extended beyond 1~ms, no significant decay is observed, as expected for nuclear spins. The decay times $T_{\mathrm{2n}}^*$ are plotted in Fig.~\ref{fig_s6}(c). They all take similar values, clustering around $T_1$ = 5.1~ms of the sensor spin. The true $T_{\mathrm{2n}}^*$ is likely longer. Nonetheless, $t$ can still be extended up to $T_1$ to achieve better precision. \section{\uppercase{D}\lowercase{etermination of $\phi$}\label{sec_8}} In our demonstration shown in Fig.~4(a) of the main text, the origin of the time axis is defined as the end time of the RF pulse [Fig.~\ref{fig_s6}(a)], after which the CP sequence to detect nuclear spin precessions can be applied (otherwise the RF field, which is much stronger than the nuclear spin signal, would be detected, as in Sec.~\ref{sec_2}). The crucial information is the azimuthal angle of $\bm{\nu}_{\mathrm{n}}(s = 0)$, $\phi_{\mathrm{n}}(0)$. If $\phi_{\mathrm{n}}(0)$ is known and the initial phase of the free precession is experimentally determined as $\phi_0$, Eq.~(\ref{eq_py_3}) allows us to determine $\phi$ as \begin{equation} \phi = \left\{ \begin{array}{ll} \phi_{\mathrm{n}}(0) + \phi_0 & ( 0 \leq \theta < \frac{\pi}{2} ) \\ & \\ \phi_{\mathrm{n}}(0) + \phi_0 + \pi & ( \frac{\pi}{2} \leq \theta \leq \pi ). \end{array} \right.
\label{eq_phi} \end{equation} Since we have determined the polar angle $\theta$ of the target nuclear spin to be 94.8$^{\circ}$ in the main text, the second relation applies to our case. In future applications aiming at single-molecule NMR spectroscopy, target molecules will be placed on a diamond surface, and the polar angles of the target nuclei will always be less than $\pi/2$. A primary task in this section is to determine $\phi_{\mathrm{n}}(0)$ accurately. Note that, in real space, as soon as the nuclear spin feels the RF field at $s$ = $-t_1$, its Bloch vector $\bm{\nu}_{\mathrm{n}}(s)$ acquires a transverse component. We have also seen in Sec.~\ref{sec_3} that the precession axis of the nuclear spin is tilted from the $z$ axis, due to the hyperfine interaction. Therefore, it is important to carefully examine the dynamics of the target nuclear spin in the time range $s \leq 0$. To set the stage, we review the experimental sequence of Fig.~\ref{fig_s6}(a) step by step. See also Fig.~\ref{fig_s7}. \begin{figure*} \begin{center} \includegraphics{fig_s7.eps} \caption{ $\bm{\nu}_{\mathrm{n}}(s < -t_2)$ = $-\bm{e}_z$ (blue arrow), $\bm{e}_{\mathrm{p}}$ (red arrow), and $\bm{e}_{\mathrm{rf}}$ (orange arrow) in the sensor coordinate. \label{fig_s7}} \end{center} \end{figure*} \begin{description} \item[$s < -t_2$] \mbox{}\\ The target nuclear spin is polarized by PulsePol. We assume that PolY at $k$ = 3 is used, so that the nuclear spin is initialized into $\bm{\nu}_{\mathrm{n}}(s < -t_2) = -\bm{e}_z$ (Sec.~\ref{sec_6}). \item[$-t_2 \leq s < -t_1$] \mbox{}\\ A 2-$\mu$s-long chirped microwave $\pi$ pulse is applied to flip the NV spin (Sec.~\ref{sec_2}). During this time, the NV spin is coherently driven from $|m_S = 0 \rangle$ to $|m_S = -1 \rangle$, and the precession axis of the nuclear spin tilts from $\bm{e}_z$ to $\bm{e}_{\mathrm{p}}$ concurrently (Sec.~\ref{sec_3}).
We approximate that the precession axis is $\bm{e}_z$ until the midpoint of the chirped pulse ($s$ = $-t_2$). It then jumps to $\bm{e}_{\mathrm{p}}$ and is fixed subsequently. In our experiments, $(t_2 - t_1)$ = 2.288~$\mu$s, which is broken down into 1~$\mu$s of the latter half of the chirped pulse, 0.2~$\mu$s of the trigger time for the following RF pulse, and $t_{\mathrm{delay}}$ = 1.088~$\mu$s (Sec.~\ref{sec_2}). \item[$-t_1 \leq s < 0$] \mbox{}\\ An RF $\pi/2$ pulse at $f_1$ = 215.6~kHz is applied to tip the nuclear spin. The pulse length is set as $t_1$ = $22/f_1$ = 102.041~$\mu$s (Sec.~\ref{sec_6}). We define $\bm{e}_{\mathrm{rf}}$ as \begin{equation} \bm{e}_{\mathrm{rf}} = \frac{ \bm{B}_{\mathrm{c}} }{ B_{\mathrm{c}} } = \left( \begin{array}{c} \sin \theta_{\mathrm{c}} \cos \phi_{\mathrm{c}} \\ \sin \theta_{\mathrm{c}} \sin \phi_{\mathrm{c}} \\ \cos \theta_{\mathrm{c}} \end{array} \right) \end{equation} with $\theta_{\mathrm{c}}$ = 55.7$^{\circ}$ and $\phi_{\mathrm{c}}$ = 186.2$^{\circ}$ (Sec.~\ref{sec_2}). When a cosine wave $\cos (2\pi f_1 t)$ is applied on the coil, the RF magnetic field $\bm{B}_{\mathrm{rf}}$ first points to $\bm{e}_{\mathrm{rf}}$, and after half the oscillation period it points to $-\bm{e}_{\mathrm{rf}}$; the coil generates a linearly-polarized RF magnetic field along $\bm{e}_{\mathrm{rf}}$. \item[$0 \leq s < t_0$] \mbox{}\\ The RF pulse is turned off, and the nuclear spin precesses freely. We set $t_0$ = 6.872~$\mu$s. \item[$t_0 \leq s$] \mbox{}\\ The free precession of the nuclear spin is detected by the CP sequence with the $(\pi/2)_{\mathrm{Y}}$ readout pulse (Sec.~\ref{sec_3}). We experimentally determine the precession frequency $f_{\mathrm{p}}$ and $\phi_0$. \end{description} With this setting, let us first consider the simplest case, in which both $\bm{e}_{\mathrm{p}} \parallel \bm{e}_z$ and $f_1$ = $f_{\mathrm{p}}$ are satisfied. The first condition is justified when $B_0$ is higher than a few hundreds of mT. 
For instance, at $B_0$ = 1~T, $\theta_{\mathrm{p}}$ should be suppressed to less than 0.1$^{\circ}$. We define the waveform of the RF field as \begin{equation} \bm{B}_{\mathrm{rf}}(s) = 2 b(s) \cos ( 2\pi f_1 s + \phi_{\mathrm{rf}} ) \, \bm{e}_{\mathrm{rf}}, \end{equation} with \begin{equation} b(s) = \left\{ \begin{array}{ll} (4t_1)^{-1} & ( -t_1 \leq s \leq 0 ) \\ & \\ 0 & (\mathrm{otherwise}). \end{array} \right. \end{equation} Because $\bm{e}_{\mathrm{p}} \parallel \bm{e}_z$, the component of $\bm{B}_{\mathrm{rf}}$ projected onto the $xy$ plane ($\bm{B}_{\mathrm{rf}, \perp}$) only acts to rotate the nuclear spin. In addition, by invoking the rotating wave approximation, it is sufficient to consider a clockwise-rotating component of $\bm{B}_{\mathrm{rf}, \perp}$, which co-rotates with the nuclear spin. The component of $\bm{B}_{\mathrm{rf}}$ parallel to the $z$ axis ($\bm{B}_{\mathrm{rf}, \parallel}$) modifies the nuclear precession frequency. However, such a frequency modulation averages out by setting the RF pulse length as an integer multiple of the oscillation period (Sec.~\ref{sec_6}), and the effect of $\bm{B}_{\mathrm{rf}, \parallel}$ becomes negligible. $\phi_{\mathrm{n}}(0)$ is evaluated as \begin{equation} \phi_{\mathrm{n}}(0) = -\phi_{\mathrm{rf}} +\phi_{\mathrm{c}} - \frac{\pi}{2} = -\phi_{\mathrm{rf}} + 96.2^{\circ} \quad (\bmod \,\, 360^{\circ}). \label{eq_phi_n_1} \end{equation} Here, $-\phi_{\mathrm{rf}} +\phi_{\mathrm{c}}$ is the azimuthal angle of the rotation axis of the nuclear spin (note that $\phi_{\mathrm{rf}}$ changes clockwise), and by the RF $\pi/2$ pulse the nuclear spin ends up in the direction orthogonal to it (the negative sign in $-\frac{\pi}{2}$ reflects the rotation direction of the nuclear spin). When $f_1 \neq f_{\mathrm{p}}$, the detuning accumulates as $-2\pi (f_{\mathrm{p}} - f_1) t_1$, relative to $-\phi_{\mathrm{rf}} +\phi_{\mathrm{c}}$. 
Again, the negative sign is due to the fact that the nuclear spin precesses clockwise, whereas $\phi$ is defined counter-clockwise. Experimentally obtained $f_{\mathrm{p}}$ is on average $\bar{f_{\mathrm{p}}}$ = 215.7908~kHz, so the effect of detuning amounts to $-0.319^{\circ} \times 22 = -7.0^{\circ}$. $\phi_{\mathrm{n}}(0)$ is evaluated as \begin{equation} \phi_{\mathrm{n}}(0) = -\phi_{\mathrm{rf}} +\phi_{\mathrm{c}} - \frac{\pi}{2} -2\pi (\bar{f_{\mathrm{p}}} - f_1) t_1 = -\phi_{\mathrm{rf}} + 89.2^{\circ} \quad (\bmod \,\, 360^{\circ}). \label{eq_phi_n_2} \end{equation} In Fig.~4(b) of the main text, we experimentally obtain $\phi_0$ as \begin{equation} \phi_0 = \phi_{\mathrm{rf}} + 334.0^{\circ} \quad (\bmod \,\, 360^{\circ}). \label{eq_phi_0} \end{equation} From Eqs.~(\ref{eq_phi}), (\ref{eq_phi_n_2}), and (\ref{eq_phi_0}), we obtain \begin{equation} \phi = (-\phi_{\mathrm{rf}} + 89.2^{\circ}) + (\phi_{\mathrm{rf}} + 334.0^{\circ}) + 180^{\circ} = 243.2^{\circ} \quad (\bmod \,\, 360^{\circ}). \end{equation} $\phi$ = 243.2$^{\circ}$ is given as the dashed line in the lower panel of Fig.~4(b) of the main text, and the accuracy range 243.2$\pm$5.3$^{\circ}$ is shown in the left panel of Fig.~4(c). \begin{figure*} \begin{center} \includegraphics{fig_s8.eps} \caption{ Dynamics of the target nuclear spin with the simulation parameters $\phi$ = 0$^{\circ}$ and $f_1$ = $f_{\mathrm{p}}$ = 215.6~kHz. (a) $\phi_{\mathrm{rf}}$ = 0$^{\circ}$, (b) 90$^{\circ}$, (c) 180$^{\circ}$, and (d) 270$^{\circ}$. The orange arrow indicates $\bm{e}_{\mathrm{rf}}$. The red circles ($\circ$) indicate the initial position. The red (blue) curves are the trajectories during $-t_2 \leq s < -t_1$ ($-t_1 \leq s \leq 0$). The blue circles ($\circ$) follow the stroboscopic trajectories at $s$ = $-t_1 + k/f_1$, with $k$ = 0, 1, $\cdots$, 22, similar to the ones observed in the rotating frame. The green arrows indicate $\phi_{\mathrm{n}}(0)$ evaluated using Eq.~(\ref{eq_phi_n_1}). 
The values of simulated $\phi_{\mathrm{n}}(0)$ are 90.9$^{\circ}$ at $\phi_{\mathrm{rf}}$ = 0$^{\circ}$, 4.8$^{\circ}$ at $\phi_{\mathrm{rf}}$ = 90$^{\circ}$, 282.7$^{\circ}$ at $\phi_{\mathrm{rf}}$ = 180$^{\circ}$, and 188.1$^{\circ}$ at $\phi_{\mathrm{rf}}$ = 270$^{\circ}$. \label{fig_s8}} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics{fig_s9.eps} \caption{ Dynamics of the target nuclear spin with the simulation parameters $\phi$ = 248.8$^{\circ}$, $f_1$ = 215.6~kHz, and $\bar{f_{\mathrm{p}}}$ = 215.7908~kHz. The green arrows indicate $\phi_{\mathrm{n}}(0)$ evaluated using Eq.~(\ref{eq_phi_n_2}). The values of simulated $\phi_{\mathrm{n}}(0)$ are 98.9$^{\circ}$ at $\phi_{\mathrm{rf}}$ = 0$^{\circ}$, 358.4$^{\circ}$ at $\phi_{\mathrm{rf}}$ = 90$^{\circ}$, 272.4$^{\circ}$ at $\phi_{\mathrm{rf}}$ = 180$^{\circ}$, and 189.9$^{\circ}$ at $\phi_{\mathrm{rf}}$ = 270$^{\circ}$. \label{fig_s9}} \end{center} \end{figure*} For more accurate estimation, we consider the case $\bm{e}_{\mathrm{p}} \nparallel \bm{e}_z$. As mentioned above, this effect is suppressed by applying large $B_0$, but the present experiments are performed at $B_0$ = 36.2~mT, rendering the polar angle of $\bm{e}_{\mathrm{p}}$ to be 5.9$^{\circ}$ (Sec.~\ref{sec_3}). The effect of $\bm{e}_{\mathrm{p}}$ is worth a careful analysis. This is best done by simulating real-space trajectories of the target nuclear spin in the time range $-t_2 \leq s \leq 0$, based on the Bloch equation: \begin{equation} \frac{ d\bm{\nu}_{\mathrm{n}} (s) }{ ds } = 2\pi \bm{\nu}_{\mathrm{n}}(s) \times [ f_{\mathrm{p}} \, \bm{e}_{\mathrm{p}} + B_{\mathrm{rf}}(s) \, \bm{e}_{\mathrm{rf}} ], \label{eq_bloch} \end{equation} where $b(s)$ appearing in $B_{\mathrm{rf}}(s)$ is now \begin{equation} b(s) = \left\{ \begin{array}{ll} (4 t_1 | \bm{e}_{\mathrm{rf}} \times \bm{e}_{\mathrm{p}} |)^{-1} & ( -t_1 \leq s \leq 0 ) \\ & \\ 0 & (\mathrm{otherwise}). \end{array} \right. 
\end{equation} To gain physical insights, we first perform simulations by setting $\phi$ = 0$^{\circ}$ and $f_1$ = $f_{\mathrm{p}}$ = 215.6~kHz. The results are shown in Fig.~\ref{fig_s8}. Simulated $\phi_{\mathrm{n}}(0)$ deviates from Eq.~(\ref{eq_phi_n_1}) on the order of $\theta_{\mathrm{p}}$. We observe that the initial precession about $\bm{e}_{\mathrm{p}}$, before the RF pulse is applied, is a major source of deviation. Since the azimuthal angle of $\bm{e}_{\mathrm{p}}$ is $\phi$, this type of deviation depends sinusoidally on $\phi$ as well as on $\phi_{\mathrm{rf}}$ and $(t_2 - t_1)$. One way to suppress this effect at low fields may be to set $(t_2 - t_1)$ as an integer multiple of the precession period, so that the nuclear spin returns to $-\bm{e}_z$ when the RF field is applied. We remark that there is a more subtle, additional effect of $\bm{e}_{\mathrm{p}}$ that is in play during the RF pulse. Once the RF field is applied, the nuclear spin rotates, to a good approximation, around $\bm{B}_{\mathrm{rf}, \perp}$. However, because the nuclear spin now feels a circularly-polarized RF field that is rotating in the plane perpendicular to $\bm{e}_{\mathrm{p}}$, the nuclear spin does not end up exactly in the $xy$ plane (even when the nuclear spin starts to rotate from $-\bm{e}_z$). When the nuclear spin is projected onto the $xy$ plane, this effect is seen to depend on $\phi$, but not on $\phi_{\mathrm{rf}}$. What is seen in Fig.~\ref{fig_s8} is a cumulative effect of the two roles that $\bm{e}_{\mathrm{p}}$ plays, making the deviation from Eq.~(\ref{eq_phi_n_1}) less systematic as $\phi_{\mathrm{rf}}$ is changed. If $(t_2 - t_1)$ is set as an integer multiple of the precession period, only the latter effect remains. The deviation then becomes systematic, and the analysis is facilitated.
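Equation~(\ref{eq_bloch}) is straightforward to integrate numerically. The sketch below is our own minimal implementation (not the simulation code used for Figs.~\ref{fig_s8} and \ref{fig_s9}): it propagates the Bloch vector through the RF pulse, $-t_1 \leq s \leq 0$, with a fixed-step fourth-order Runge--Kutta integrator, using the pulse amplitude $b = [4 t_1 | \bm{e}_{\mathrm{rf}} \times \bm{e}_{\mathrm{p}} |]^{-1}$ defined above.

```python
import numpy as np

def bloch_rhs(s, nu, fp, e_p, f1, phi_rf, b_amp, e_rf):
    """dnu/ds = 2*pi * nu x [fp*e_p + B_rf(s)*e_rf], with
    B_rf(s) = 2*b*cos(2*pi*f1*s + phi_rf) during the pulse."""
    B_rf = 2.0 * b_amp * np.cos(2.0 * np.pi * f1 * s + phi_rf)
    return 2.0 * np.pi * np.cross(nu, fp * e_p + B_rf * e_rf)

def simulate_pulse(fp, e_p, f1, phi_rf, e_rf, t1, n_steps=10000):
    """Propagate the Bloch vector over s in [-t1, 0] with RK4 steps."""
    b_amp = 1.0 / (4.0 * t1 * np.linalg.norm(np.cross(e_rf, e_p)))
    nu = np.array([0.0, 0.0, -1.0])      # initialized to -e_z by PulsePol
    h = t1 / n_steps
    s = -t1
    for _ in range(n_steps):
        k1 = bloch_rhs(s, nu, fp, e_p, f1, phi_rf, b_amp, e_rf)
        k2 = bloch_rhs(s + h / 2, nu + h / 2 * k1, fp, e_p, f1, phi_rf, b_amp, e_rf)
        k3 = bloch_rhs(s + h / 2, nu + h / 2 * k2, fp, e_p, f1, phi_rf, b_amp, e_rf)
        k4 = bloch_rhs(s + h, nu + h * k3, fp, e_p, f1, phi_rf, b_amp, e_rf)
        nu = nu + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        s += h
    return nu
```

In the idealized case $\bm{e}_{\mathrm{p}} \parallel \bm{e}_z$ with $f_1 = f_{\mathrm{p}}$ and $t_1 = 22/f_1$, the returned vector lies close to the $xy$ plane, as expected for a $\pi/2$ pulse; reproducing the trajectories of Figs.~\ref{fig_s8} and \ref{fig_s9} additionally requires the tilted $\bm{e}_{\mathrm{p}}$ and the free-precession segment during $-t_2 \leq s < -t_1$.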
We set $\phi$ and $\phi_{\mathrm{rf}}$ as parameters in the simulation, and find the value of $\phi$ that best reproduces the experimental data. The precession frequency is set as $\bar{f_{\mathrm{p}}}$, and assumed to be independent of $\phi_{\mathrm{rf}}$. The simulation result for $\phi$ = 248.8$^{\circ}$ is shown in Fig.~\ref{fig_s9}, for which the deviation from the experimental data is minimized. $\phi$ = 248.8$^{\circ}$ is given as the dotted line in the lower panel of Fig.~4(b) of the main text, and the accuracy range 248.8$\pm$2.7$^{\circ}$ is shown in the middle panel of Fig.~4(c). Strictly speaking, the measured precession frequencies $f_{\mathrm{p}}$ vary from one measurement to another. This is partly attributed to errors in the fits, but it is also conceivable that the precession frequency indeed differs in different measurements, due to, for instance, temperature drifts (which can change $B_0$ provided by a permanent magnet). Therefore, lastly, we perform simulations by setting $\phi$ as the only parameter and using the values of $f_{\mathrm{p}}$ for the respective runs. The results are shown as the circles ($\circ$) in the lower panel of Fig.~4(b) of the main text, and their average value, 247.8$^{\circ}$, is given as the solid line. The accuracy range 247.8$\pm$4.1$^{\circ}$ is shown in the right panel of Fig.~4(c). The lattice sites that fall within these accuracy ranges are shown as blue circles in Fig.~4(c); they have $\phi$ = 250.9$^{\circ}$. Therefore, we determine the position of the target nuclear spin as ($r$, $\theta$, $\phi$) = (6.84~\AA, 94.8$^{\circ}$, 250.9$^{\circ}$). \section{\uppercase{D}\lowercase{emonstration of the protocol on a different single $^{13}$}\uppercase{C}\lowercase{ nuclear spin}\label{sec_10}} Here, we show results on a single nuclear spin different from the one discussed in the main text. Figure~\ref{fig_s10}(a) shows the NMR spectrum given in Fig.~2(a) of the main text.
\begin{figure*} \begin{center} \includegraphics{fig_s10.eps} \caption{ (a) Data in Fig.~2(a) of the main text, with a simulation of a different dip at $f_{\mathrm{t}}$ = 443.4~kHz. (b) $\phi_{\mathrm{rf}}$ vs. $\phi_0$ for the second target nuclear spin. The experimental parameters are $f_1$ = 503.0~kHz, $t_0$ = 4.403~$\mu$s, $t_1$ = 214.725~$\mu$s (108 periods), and $t_2$ = 216.797~$\mu$s. $\bar{f_{\mathrm{p}}}$ = 503.0~kHz is obtained. (c) Estimated $\phi$ of the second target nuclear spin. The accuracy ranges are 357.7$\pm$8.6$^{\circ}$ for $0 \leq \theta < \frac{\pi}{2}$ and 177.7$\pm$8.6$^{\circ}$ for $\frac{\pi}{2} \leq \theta < \pi$. \label{fig_s10}} \end{center} \end{figure*} We observe that there is another dip at $f_{\mathrm{t}}$ = 443.4~kHz, albeit overlapping with the broad dip arising from the bath nuclei. We examined this signal in more detail (as done on the first nuclear spin in the main text), and determined its hyperfine parameters as $A_{\parallel}$ = 112.1~kHz and $A_{\perp}$ = 59.9~kHz. Unfortunately, we have not been able to narrow down the lattice site it belongs to, because we do not find theoretical values sufficiently close to these. Although we do not have $r$ and $\theta$ of this second nuclear spin, the protocol can still be applied to estimate $\phi$. A summary of the experiments is shown in Figs.~\ref{fig_s10}(b) and (c). All the experimental parameters, given in the caption of Fig.~\ref{fig_s10}, are optimized for this nuclear spin. Note that there remain two possible ranges of $\phi$, differing by 180$^{\circ}$, due to the lack of knowledge of $\theta$. Nonetheless, these values are different from those obtained from the first target nuclear spin, and support our claim that the values of $\phi$ obtained in our protocol are specific to the individual nuclear spins.
\section{\uppercase{O}\lowercase{bservation of bath $^{13}$}\uppercase{C}\lowercase{ nuclear spins by coherent averaging}\label{sec_11}} As commented in the main text, our protocol can be combined with the high-resolution spectroscopy method reported in~\cite{SGS+17s,AS18s,GBL+18s,BCZD17s}. We reinforce this claim by observing bath $^{13}$C nuclear spins, using the sequence shown in Fig.~\ref{fig_s11}(a). \begin{figure*} \begin{center} \includegraphics{fig_s11.eps} \caption{ (a) Pulse protocol to observe the bath nuclear spins. The RF frequency is set at $f_{\mathrm{c}} = \gamma_{\mathrm{c}} B_0$ = 387.5~kHz. (b) $P_{\mathrm{Y}}$ as a function of $M t_{\mathrm{L}}$. (c) FFT of (b). The dashed line indicates $f_{\mathrm{c}}$. \label{fig_s11}} \end{center} \end{figure*} On the one hand, this sequence is essentially the same as coherently averaged synchronized readout described in~\cite{GBL+18s}; after inducing free precessions of bath nuclei by an RF $\pi/2$ pulse, we record them by repeating the CP sequence with the regular interval of $t_{\mathrm{L}}$. On the other hand, coherent averaging of the signal from a handful of nuclear spins is possible only when PulsePol is applied prior to the RF $\pi/2$ pulse. Even though the bath nuclei contain multiple nuclear spins, they cannot be regarded as an ensemble as in the case of external proton spins in~\cite{GBL+18s}. Rather, the bath nuclei are a collection of independent single nuclear spins, the individual hyperfine parameters of which could in principle be resolved by high-resolution spectroscopy. We polarize the bath nuclei by PulsePol with 2$\tau_{\mathrm{pol}}$ = 7.744~$\mu$s (= $3f_{\mathrm{c}}^{-1}$), $N_{\mathrm{pol}}$ = 1, and $N_{\mathrm{rep}}$ = 200. The RF $\pi/2$ pulse tuned at $f_{\mathrm{c}}$ is applied, and the CP sequence with $N$ = 2 and 2$\tau$ = 2.581~$\mu$s (= $f_{\mathrm{c}}^{-1}$) was repeated $M$ = 501 times with the interval of $t_{\mathrm{L}}$ = 8.000~$\mu$s. 
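As a quick arithmetic check of the resonance conditions quoted above (our own back-of-the-envelope script, not part of the experiment control code; the remaining sub-ns discrepancy for $2\tau_{\mathrm{pol}}$ is presumably due to rounding onto the hardware timing grid):

```python
f_c = 387.5e3                 # 13C Larmor frequency gamma_c * B0 (Hz)

two_tau_pol = 3 / f_c         # PulsePol resonance condition: 2*tau_pol = 3/f_c
two_tau_cp = 1 / f_c          # CP resonance condition: 2*tau = 1/f_c
```

Both values agree with the quoted 7.744~$\mu$s and 2.581~$\mu$s at the nanosecond level.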
To mitigate possible back actions on the nuclear spins, the number of pulses in the CP sequence was minimized ($N$ = 2). For the same reason, the signals are undersampled at $m$ = 6. Figure~\ref{fig_s11}(b) shows measured oscillations for PolY and PolX, and their FFT spectra (real part, with the frequency shift due to undersampling corrected) are shown in Fig.~\ref{fig_s11}(c). Clearly, the oscillation phases are opposite, consistent with our main result on the single nuclear spin [Fig.~4(b) of the main text]. While a further analysis of Figs.~\ref{fig_s11}(b) and (c) is outside of the scope of the present work, multiple signals present in the data suggest that we are detecting multiple nuclear spins simultaneously and the phases of the respective frequency components carry the information on the azimuthal angles of the respective nuclear spins. We note that the sequence in Fig.~\ref{fig_s11}(a) does not use a microwave $\pi$ pulse prior to the RF $\pi/2$ pulse, and therefore the precession axis of the nuclear spins is fixed as $\bm{e}_z$. Finally, we checked that the signal decay time does not depend on $t_{\mathrm{L}}$, indicating that the present decay time is not limited by the back actions that the sequence exerts on the nuclear spins. An understanding of sources of the decay will be a subject of future research.
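The frequency shift due to undersampling mentioned above can be corrected by unfolding the aliased FFT frequency back into the Nyquist zone in which the true signal is known to lie. The zone-index bookkeeping below is our own generic sketch (not the analysis code); note that with $t_{\mathrm{L}}$ = 8~$\mu$s the sampling rate is 125~kHz, and $f_{\mathrm{c}}$ = 387.5~kHz then lies in zone 6, consistent with the $m$ = 6 quoted above.

```python
def unfold_alias(f_alias, f_s, zone):
    """Recover the true frequency from its alias.

    Nyquist zones have width f_s/2, with zone = floor(f_true / (f_s / 2)).
    Even zones preserve the frequency ordering; odd zones mirror it.
    """
    if zone % 2 == 0:
        return (zone // 2) * f_s + f_alias
    return ((zone + 1) // 2) * f_s - f_alias
```

For example, a 387.5~kHz signal sampled at 125~kHz appears aliased at 12.5~kHz and is recovered by unfolding from zone 6.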
\section{\@startsection {section}{1}{\z@}% {-3.5ex \@plus -1ex \@minus -.2ex {2.3ex \@plus.2ex}% {\normalfont\large\bfseries}} \renewcommand\subsection{\@startsection{subsection}{2}{\z@}% {-3.25ex\@plus -1ex \@minus -.2ex}% {1.5ex \@plus .2ex}% {\normalfont\bfseries}} \makeatother \def{\it i.e.}{{\it i.e.}} \def\IZ{\relax\ifmmode\mathchoice {\hbox{\cmss Z\kern-.4em Z}}{\hbox{\cmss Z\kern-.4em Z}} {\lower.9pt\hbox{\cmsss Z\kern-.4em Z}} {\lower1.2pt\hbox{\cmsss Z\kern-.4em Z}}\else{\cmss Z\kern-.4em Z}\fi} \def\relax{\rm I\kern-.18em R}{\relax{\rm I\kern-.18em R}} \def{\it e.g.}{{\it e.g.}} \def{\hbox{ 1\kern-.8mm l}}{{\hbox{ 1\kern-.8mm l}}} \def{\rm gh}{{\rm gh}} \def{\rm sgh}{{\rm sgh}} \def{\rm NS}{{\rm NS}} \def{\cal R}{{\rm R}} \def{\rm i}{{\rm i}} \def\comm#1#2{\left[ #1, #2\right]} \def\acomm#1#2{\left\{ #1, #2\right\}} \def\mathrm{tr}{{\rm tr\,}} \newlength{\bredde} \def\slash#1{\settowidth{\bredde}{$#1$}\ifmmode\,\raisebox{.15ex}{/} \hspace*{-\bredde} #1\else$\,\raisebox{.15ex}{/}\hspace*{-\bredde} #1$\fi} \newcommand{\ft}[2]{{\textstyle\frac{#1}{#2}}} \newcommand {\Rbar} {{\mbox{\rm$\mbox{I}\!\mbox{R}$}}} \newcommand {\Hbar} {{\mbox{\rm$\mbox{I}\!\mbox{H}$}}} \newcommand {\Cbar} {\mathord{\setlength{\unitlength}{1em} \begin{picture}(0.6,0.7)(-0.1,0) \put(-0.1,0){\rm C} \thicklines \put(0.2,0.05){\line(0,1){0.55}} \end {picture}}} \newsavebox{\zzzbar} \sbox{\zzzbar} {\setlength{\unitlength}{0.9em} \begin{picture}(0.6,0.7) \thinlines \put(0,0){\line(1,0){0.6}} \put(0,0.75){\line(1,0){0.575}} \multiput(0,0)(0.0125,0.025){30}{\rule{0.3pt}{0.3pt}} \multiput(0.2,0)(0.0125,0.025){30}{\rule{0.3pt}{0.3pt}} \put(0,0.75){\line(0,-1){0.15}} \put(0.015,0.75){\line(0,-1){0.1}} \put(0.03,0.75){\line(0,-1){0.075}} \put(0.045,0.75){\line(0,-1){0.05}} \put(0.05,0.75){\line(0,-1){0.025}} \put(0.6,0){\line(0,1){0.15}} \put(0.585,0){\line(0,1){0.1}} \put(0.57,0){\line(0,1){0.075}} \put(0.555,0){\line(0,1){0.05}} \put(0.55,0){\line(0,1){0.025}} \end{picture}} 
\newcommand{\mathord{\!{\usebox{\zzzbar}}}}{\mathord{\!{\usebox{\zzzbar}}}} \def{\rm Im ~}{{\rm Im ~}} \def{\rm Re ~}{{\rm Re ~}} \newcommand{\langle}[1]{\langle{#1}|} \newcommand{\rangle}[1]{|{#1}\rangle} \newcommand{\braket}[2]{\langle{#1}|{#2}\rangle} \newcommand{\end{eqnarray}}{\end{eqnarray}} \newcommand{\sect}[1]{Section~\ref{#1}} \newcommand{\eq}[1]{(\ref{#1})} \newcommand{\fig}[1]{Figure~\ref{#1}} \newcommand{\chap}[1]{Chapter~\ref{#1}} \newcommand{\begin{equation}}{\begin{equation}} \newcommand{\end{equation}}{\end{equation}} \def\Gamma{\Gamma} \def{\bf K}{{\cal K}} \def{\cal N}{{\cal N}} \def{\cal H}{{\cal H}} \newcommand{{1\over2}}{{1\over2}} \newcommand{{\rm Tr}}{{\rm Tr}} \renewcommand{\theequation}{\thesection.\arabic{equation}} \def\begin{equation}{\begin{equation}} \def\end{equation}{\end{equation}} \def\alpha{\alpha} \def\beta{\beta} \def\hat{d}{\delta} \def\varepsilon{\epsilon} \def\lambda}\def{\bf L}{\Lambda{\lambda} \def\gamma{\gamma} \def{(p)}{\partial} \def\rho{\rho} \def\tilde{\tilde} \def\varphi{\varphi} \def\Gamma{\Gamma} \def{\cal N}{{\cal N}} \def{\cal O}{{\cal O}} \def{\cal R}{{\cal R}} \def{\bf R^3}{{\bf R^3}} \def\({\left (} \def\){\right )} \def\[{\left [} \def\[{\right ]} \def{\cal S}{{\cal S}} \def{(\phi_{,r})}{{(\phi_{,r})}} \def\mathrm{tr}{\mathrm{tr}} \def\rightarrow{\rightarrow} \def\begin{eqnarray}{\begin{eqnarray}} \def\end{eqnarray}{\end{eqnarray}} \def\mu}\def\nu{\nu{\mu} \def\nu{\nu} \def\Delta{\Delta} \def{(p)}{\partial} \def\alpha{\alpha} \def\beta{\beta} \def\gamma{\gamma} \def {\bf B}{{\bf B}} \def {\bf K}{{\bf K}} \def {(p)}{\partial} \def \rho{{\bf r}} \def {\cal A}{{\bf A}} \def {\bf L}{{\bf L}} \def {\cal M}{{\bf M}} \newcommand{\eqn}[1] {(\ref{eq:#1})} \input amssym.def \input amssym.tex \newcommand{\red}[1]{\textcolor{red}{#1}} \newcommand{\blue}[1]{\textcolor{blue}{#1}} \newcommand{\todo}[1]{{\em \small {#1}}\marginpar{$\Longleftarrow$}} \newcommand{\labell}[1]{\label{#1}\qquad_{#1}} 
\newcommand{\bbibitem}[1]{\bibitem{#1}\marginpar{#1}} \def\Bibitem#1{\bibitem{#1}% \smash{\hbox to0pt{\raise1ex\hbox{\tiny[#1]}\hss}}} \newcommand{\llabel}[1]{\label{#1}\marginpar{#1}} \newcommand{\figref}[1]{Fig. \ref{#1}} \newcommand{\secref}[1]{Sec. \ref{#1}} \newcommand{\tableref}[1]{Table \ref{#1}} \newcommand{\appref}[1]{Appendix \ref{#1}} \def\Label#1{\label{#1}% \smash{\hbox to0pt{\raise1ex\hbox{\tiny[#1]}\hss}}} \def\let\Label=\label{\let\Label=\label} \def\let\bbibitem=\bibitem{\let\bbibitem=\bibitem} \def\let\Bibitem=\bibitem{\let\Bibitem=\bibitem} \def\[{\left [} \def\]{\right ]} \def\({\left (} \def\){\right )} \def\partial{\partial} \def\lambda}\def{\bf L}{\Lambda{\lambda}\def{\bf L}{\Lambda} \def\gamma{\gamma} \def\rho{\rho} \def{\mathcal O}{{\mathcal O}} \def\phi{\phi} \def\mu}\def\nu{\nu{\mu}\def\nu{\nu} \def\{ \cal{C} \}{\{ \cal{C} \}} \def\hat{d}{\hat{d}} \def\varepsilon{\varepsilon} \def{\cal N}{{\cal N}} \def{(p)}{\partial} \def\sqrt{2}{\sqrt{2}} \def{\cal M}{{\cal M}} \def{\cal A}{{\cal A}} \def{(p)}{{(p)}} \def\partial{\partial} \def{\rm sech}{{\rm sech}} \def\langle{{\langle}} \def\rangle{{\rangle}} \def\Label#1{\label{#1}% \smash{\hbox to0pt{\raise1ex\hbox{\tiny[#1]}\hss}}} \def\let\Label=\label{\let\Label=\label} \def\let\bbibitem=\bibitem{\let\bbibitem=\bibitem} \newcommand{\sphere}[0]{{\rm S}^3} \newcommand{\su}[0]{{\rm SU(2)}} \newcommand{\so}[0]{{\rm SO(4)}} \newcommand{\bK}[0]{{\bf K}} \newcommand{\bL}[0]{{\bf L}} \newcommand{\bR}[0]{{\bf R}} \newcommand{\tK}[0]{\tilde{K}} \newcommand{\tL}[0]{\bar{L}} \newcommand{\tR}[0]{\tilde{R}} \newcommand{\btzm}[0]{BTZ$_{\rm M}$} \newcommand{\ads}[1]{{\rm AdS}_{#1}} \newcommand{\ds}[1]{{\rm dS}_{#1}} \newcommand{\eds}[1]{{\rm EdS}_{#1}} \newcommand{\sph}[1]{{\rm S}^{#1}} \newcommand{\gn}[0]{G_N} \newcommand{\SL}[0]{{\rm SL}(2,R)} \newcommand{\cosm}[0]{R} \newcommand{\hdim}[0]{\bar{h}} \newcommand{\bw}[0]{\bar{w}} \newcommand{\bz}[0]{\bar{z}} \newcommand{\bx}[0]{\bar{x}} 
\newcommand{\begin{eqnarray}}{\begin{eqnarray}} \newcommand{\end{eqnarray}}{\end{eqnarray}} \newcommand{\begin{eqnarray}} {\begin{equation}} \newcommand{\end{eqnarray}} {\end{equation}} \newcommand{\beqa} {\begin{eqnarray}} \newcommand{\eeqa} {\end{eqnarray}} \newcommand{\mathcal{M}}{\mathcal{M}} \newcommand{\mathcal{A}}{\mathcal{A}} \newcommand{\mathcal{B}}{\mathcal{B}} \newcommand{\mathcal{C}}{\mathcal{C}} \newcommand{\mathcal{D}}{\mathcal{D}} \newcommand{\mathcal{E}}{\mathcal{E}} \newcommand{\delta \tilde{R}}{\delta \tilde{R}} \newcommand{\delta A_t}{\delta A_t} \newcommand{\delta A_\theta}{\delta A_\theta} \newcommand{\begin{eqnarray}}{\begin{eqnarray}} \newcommand{\end{eqnarray}}{\end{eqnarray}} \newcommand{\bel}[1]{\begin{equation}\label{#1}} \def\alpha{\alpha} \def\beta{\beta} \def{\mu\nu}{{\mu\nu}} \newcommand{\rep}[1]{{\bf #1}} \newcommand{\vev}[1]{\left\langle#1\right\rangle} \def\langle{\langle} \def\rangle{\rangle} \def(?FIX?){(?FIX?)} \def\vec{k}{\vec{k}} \usepackage{caption} \usepackage{subcaption} \usepackage{float} \usepackage{braket} \usepackage{bbm} \usepackage{ae} \newcommand{\ensuremath{\mathbbm{Z}}}{\ensuremath{\mathbbm{Z}}} \newcommand{\ensuremath{\mathbbm{Q}}}{\ensuremath{\mathbbm{Q}}} \newcommand{\ensuremath{\mathbbm{C}}}{\ensuremath{\mathbbm{C}}} \newcommand{\ensuremath{\mathrm{d}}}{\ensuremath{\mathrm{d}}} \newcommand{\ensuremath{\,\rd}}{\ensuremath{\,\ensuremath{\mathrm{d}}}} \newcommand{\bracket}[3]{\langle{#1}#2\rangle{#3}} \newcommand{\right)}{\right)} \newcommand{\left(}{\left(} \newcommand{\right]}{\right]} \newcommand{\left[}{\left[} \newcommand{\mathcal{L}}{\mathcal{L}} \begin{document} \begin{titlepage} \begin{flushright} \end{flushright} \vfill \begin{center} {\Large \bf Entwinement in discretely gauged theories} \vskip 10mm {\large V.~Balasubramanian$^{a,b}$, A.~Bernamonti$^{c,d}$, B.~Craps$^b$,\\ T.~De Jonckheere$^b$, F.~Galli$^{c,d}$\\ \vspace{3mm} } \vskip 7mm $^a$ David Rittenhouse Laboratory, Univ. 
of Pennsylvania, Philadelphia, PA 19104, USA \\ $^b$ Theoretische Natuurkunde, Vrije Universiteit Brussel, and \\ International Solvay Institutes, Pleinlaan 2, B-1050 Brussels, Belgium \\ $^c$ Perimeter Institute for Theoretical Physics, \\ 31 Caroline Street North, ON N2L 2Y5, Canada \\ $^d$ KU Leuven, Institute for Theoretical Physics,\\ Celestijnenlaan 200D, B-3001 Leuven, Belgium\\ \vskip 6mm {\small\noindent {\tt vijay@physics.upenn.edu, abernamonti@perimeterinstitute.ca, Ben.Craps@vub.ac.be, Tim.De.Jonckheere@vub.ac.be, fgalli@perimeterinstitute.ca}} \end{center} \vfill \begin{center} {\bf ABSTRACT} \vspace{3mm} \end{center} We develop the notion of ``entwinement'' to characterize the amount of quantum entanglement between internal, discretely gauged degrees of freedom in a quantum field theory. This concept originated in the program of reconstructing spacetime from entanglement in holographic duality. We define entwinement formally in terms of a novel replica method which uses twist operators charged in a representation of the discrete gauge group. In terms of these twist operators we define a non-local, gauge-invariant object whose expectation value computes entwinement in a standard replica limit. We apply our method to the computation of entwinement in symmetric orbifold conformal field theories in 1+1 dimensions, which have an $S_N$ gauging. Such a theory appears in the weak coupling limit of the D1-D5 string theory which is dual to AdS$_3$ at strong coupling. In this context, we show how certain kinds of entwinement measure the lengths, in units of the AdS scale, of non-minimal geodesics present in certain excited states of the system which are gravitationally described as conical defects and the $M=0$ BTZ black hole. The possible types of entwinement that can be computed define a very large new class of quantities characterizing the fine structure of quantum wavefunctions. 
\end{titlepage} \tableofcontents \section{Introduction} \label{intro} When a quantum system finds itself in a pure state $|\psi\rangle$, the entanglement between a part $A$ of the system and its complement $\bar A$ is quantified by the entanglement entropy. In most applications, $A$ and $\bar A$ describe\ the degrees of freedom in complementary spatial regions. In systems with localized degrees of freedom such as spin chain models or local quantum field theory, this corresponds to a natural separation of the total Hilbert space. However, it is also fruitful to consider other ways of separating the Hilbert space. In \cite{Balasubramanian:2011wt,Agon:2014uxa}, the degrees of freedom of a local quantum field theory were separated into high and low spatial momentum modes. It was demonstrated that in a generic interacting field theory, even in the vacuum state, the long wavelength (low energy) degrees of freedom necessarily find themselves in a nontrivial reduced density matrix because of entanglement with the short wavelength (high energy) degrees of freedom. This gives rise to finite entanglement entropy, which was computed explicitly in perturbative scalar field theories. The more traditional way of describing the low energy degrees of freedom is Wilsonian renormalization. In this language, the vacuum state of a Wilsonian low energy theory is necessarily a density matrix with finite entropy \cite{Balasubramanian:2011wt,Agon:2014uxa}. This phenomenon of UV-IR entanglement in quantum field theories could be particularly important in theories of gravity (in which ultraviolet and infrared degrees of freedom are known to couple in nontrivial ways), as well as in the ground states of strongly correlated electronic systems (see, e.g. \cite{Lundgren:2014qua}). This also raises the question whether it is fruitful to consider other non-spatial ways of dividing the degrees of freedom of quantum field theories. 
One interesting way to separate energy scales is by imagining a collection of local observers who have a finite duration $T$ over which they can make measurements. This is a natural situation to consider, as it describes the practical constraints of most measurements. Intuitively, given Heisenberg's energy-time uncertainty relation, such observers will be insensitive to energies smaller than the inverse duration of the experiment, so that those low energy degrees of freedom are effectively traced out, turning the accessible part of the state into a nontrivial density matrix. Thinking in this way, \cite{Balasubramanian:2013rqa,Balasubramanian:2013lsa,Myers:2014jia,Headrick:2014eia} proposed a new information theoretic quantity, the {\it differential entropy}, as a measure of UV-IR entanglement, at least for two-dimensional theories. These effects should become stronger for theories with an energy gap that is smaller than the inverse spatial size of the system. For example, consider relativistic theories where the local degrees of freedom are matrices, e.g. $SU(N)$ gauge theories. In such systems the energy gap can be much smaller than the inverse spatial size of the system so that even a set of observers with enough time to observe the entire spatial domain will not have access to the lowest energy excitations. Another example which is easier to visualize is a ``long string'' theory, where strings or spin chains are multiply wound around a spatial circle, allowing for excitation wavelengths that exceed the system size. In both these examples, the key to the physics lies in entanglement between ``internal" degrees of freedom (matrix components, or strands of string) that are not spatially organized. One way to study the entanglement of gauge degrees of freedom in an $SU(N)$ theory is to break the gauge group into $SU(m)\times SU(N-m)$ while allowing for interactions between the two sectors. 
This could be realized holographically by separating a stack of $N$ branes into a stack of $m$ and one of $N-m$ branes and studying entangling surfaces in the AdS$_d\times S^{10-d}$ geometry which arises in the low energy limit. Such a set-up was first considered in~\cite{Mollabashi:2014qfa} and later refined in~\cite{Karch:2014pma,Taylor:2015kda,Mozaffar:2015bda}. The authors of~\cite{Karch:2014pma,Taylor:2015kda} also considered global symmetries and in case of an $SO(11-d)$ global symmetry they proposed a quantity in the field theory which would holographically be represented by the area of caps on the internal $S^{10-d}$. One important complication that we have glossed over so far is that in systems exhibiting gauge symmetry, even the association of degrees of freedom to spatial regions is subtle. For example, some of the fundamental degrees of freedom, such as Wilson loops, are not local in space, making it more complicated to split up the Hilbert space according to spatial regions. Interesting work on how to define entanglement entropy in gauge theories has recently appeared (see e.g.\ \cite{Buividovich:2008gq,Donnelly:2011hn,Casini:2013rba,Radicevic:2014kqa,Donnelly:2014gva,Donnelly:2014fua, Ghosh:2015iwa,Soni:2015yga,VanAcoleyen:2015ccp,Radicevic:2015sza}), but a complete understanding is still lacking. These questions about entanglement in quantum field theory are also linked to equally deep questions about the nature of black hole horizons and the holographic emergence of spacetime. It was proposed in \cite{Ryu:2006bv,Ryu:2006ef,Hubeny:2007xt} that the entanglement entropy of a spatial region $A$ in the field theory is proportional to the area of the minimal surface in AdS space that ends on the boundary of $A$. 
Furthermore, \cite{Balasubramanian:2013rqa,Balasubramanian:2013lsa,Myers:2014jia,Headrick:2014eia} showed that the area of closed surfaces in the bulk of AdS can be related to a measure of UV-IR entanglement, the differential entropy discussed above, at least for two-dimensional boundary theories and higher dimensional cases with translational symmetries -- some of the limitations were discussed in \cite{Engelhardt:2015dta}. Finally, in \cite{VanRaamsdonk:2010pw, Bianchi:2012ev, Faulkner:2013ica, Maldacena:2013xja,Lashkari:2013koa,Swingle:2014uza} it was suggested that spacetime connectedness is related to entanglement of the underlying quantum degrees of freedom, and that the linearized equations of motion of gravity can be derived from the dynamics of entanglement perturbations. In general, can all of spacetime geometry be reconstructed from spatial entanglement entropy in the AdS/CFT correspondence? At least when we do not consider bulk quantum corrections to the entanglement entropy \cite{Faulkner:2013ana}, the answer is no -- in some asymptotically AdS spacetimes, the minimal surfaces anchored on the boundary that geometrically reproduce the entanglement entropy will not penetrate a region \cite{Hubeny:2012ry, Engelhardt:2013tra} which has been called the {\it entanglement shadow} \cite{Balasubramanian:2014sra,Freivogel:2014lja}. It is argued in \cite{Balasubramanian:2014sra} that in such systems entanglement can be dominated by ``internal'' degrees of freedom (e.g.\ the matrix components, or strands of string discussed above) that are not spatially organized, and that these entanglements can measure the areas of non-minimal, extremal surfaces that can penetrate part of the entanglement shadows of the gravitational dual. In the examples arising in the AdS/CFT correspondence, such internal degrees of freedom are usually gauged. 
Thus, reconstructing the emergent space in gauge/gravity duality will involve entanglement between ``internal", gauged degrees of freedom -- a notion that was named {\em entwinement} in \cite{Balasubramanian:2014sra}. While we will not address the question of which part of a general spacetime can be probed by extremal surfaces (see, for instance, \cite{Hubeny:2012ry, Engelhardt:2013tra,Freivogel:2014lja}), it is clear that entwinement will often allow the reconstruction of a larger part of spacetime than spatial entanglement entropy. Entwinement also plays a key role in the description of holographic spacetimes using methods of integral geometry based in kinematic space \cite{Czech:2014ppa, Czech:2015qta, Czech:2015kbp}. In summary, both in quantum field theory and in quantum gravity, we are driven to consider a new notion of ``entwinement'' -- non-spatial quantum entanglement between gauged degrees of freedom. In this paper we will define entwinement formally in discretely gauged theories, and discuss how it can be explicitly computed. Section~\ref{sec:entdef} develops the general formalism. For two-dimensional theories, we define entwinement in terms of a replica method using twist operators that are charged under the discrete gauge group. We use these operators to construct a new non-local, gauge invariant object whose expectation value is defined to be the entwinement in a standard replica limit. Section~\ref{sec:orbifold} applies this formalism to symmetric orbifold conformal field theories in two dimensions. By explicitly applying uniformization maps, obtained by generalizing a construction of \cite{Asplund:2011cq}, to the Riemann surfaces arising from the replica method we directly compute entwinement in generic microstates of these theories. We also comment on how to recover the usual spatial entanglement entropy as a special limit of entwinement. 
In section~\ref{sec:D1D5}, we apply our formalism to the weak coupling limit of the D1-D5 CFT, a theory with a holographic dual. Ordinary spatial entanglement entropy in generic classes of states of the D1-D5 CFT was considered before in~\cite{Asplund:2011cq,Giusto:2014aba,Galliani:2016cai}. For generic microstates, entanglement entropy was computed approximately using both a short interval expansion~\cite{Giusto:2014aba} and large $c$ methods~\cite{Galliani:2016cai}. In~\cite{Asplund:2011cq}, a specific state corresponding to a local quench was considered, and the evolution of the entanglement entropy was computed using a uniformization map -- we generalize the construction for use with other microstates. Although this uniformization map will work for computing single interval entwinement in general microstates, to compare with holography we focus in section~\ref{sec:D1D5} on two examples of microstates, which are gravitationally dual to conical defects and the $M=0$ BTZ black hole, respectively. We demonstrate that the lengths of non-minimal geodesics that penetrate entanglement shadows of the spacetime are computed by certain entwinements. In the same way as spatial entanglement entropy corresponds to minimal extremal area surfaces in the bulk, here entwinement provides a direct field theory interpretation for non-minimal extremal surfaces. The paper concludes with a discussion of possible directions forward to develop the notion of entwinement in more general situations, and comments on the relation with the appearance of non-minimal geodesics in the semiclassical Virasoro conformal blocks discussed in \cite{Asplund:2014coa}. A number of technical results are collected in appendices. \setcounter{equation}{0} \section{Defining entwinement} \label{sec:entdef} In a gauge theory, states are required to be symmetric under identifications by the gauge group.
The main complication in discussing entanglement entropy in gauge theories is that the Hilbert space does not factorize. For example, for a $U(1)$ gauge theory there is a Gauss law constraint which requires that the electric flux entering a region should determine the flux leaving it. We will be interested in situations where the gauge symmetry is discrete. Recently, various authors have developed a formalism for dealing with this lack of factorization in gauge theories. One method is to consider an extended Hilbert space in which the gauge constraints are temporarily relaxed \cite{Buividovich:2008gq,Donnelly:2011hn,Donnelly:2014gva,Donnelly:2014fua,Ghosh:2015iwa, Soni:2015yga, VanAcoleyen:2015ccp}. A way of achieving this is by introducing ``edge modes'' on the entangling surface \cite{Donnelly:2011hn}. In this approach, the Hilbert space splits into superselection sectors defined by the fluxes at the entangling surface, making the reduced density matrix block-diagonal. The entanglement entropy then becomes a sum of two contributions, a Shannon entropy associated to the distribution over superselection sectors and a distillable piece arising from entanglement within each superselection sector. In the same spirit, the approach to entwinement suggested in \cite{Balasubramanian:2014sra} was to ungauge the theory, compute, and then symmetrize. An alternative approach is to define a subalgebra of gauge invariant observables ${\cal O}_A$ associated to the region $A$. Then, given a density matrix $\rho$ for the full theory, the reduced density matrix $\rho_A$ is defined as the element of the subalgebra of region $A$ such that ${\rm Tr}(\rho_A {\cal O}_A) = {\rm Tr}(\rho {\cal O}_A)$ for all ${\cal O}_A$ \cite{ Casini:2013rba, Radicevic:2014kqa,Radicevic:2015sza}. In this formalism, the reduced density matrix splits into blocks according to superselection sectors determined by the center of the subalgebra.
An algebraic approach to entwinement was recently proposed in \cite{Lin:2016fqk}, where it was shown that for a spin system the entwinement could be recovered from a state-dependent subalgebra. A third approach, which works for states that can be constructed by a Euclidean path integral, is the replica trick. In this case, the entanglement entropy is computed by analytically continuing the R\'{e}nyi entropies. In two dimensions this approach is particularly convenient and the R\'{e}nyi entropies can be defined in terms of the correlation functions of twist operators that splice together replicated copies of the CFT. At least for the case of 2d Yang-Mills theory in de Sitter space, it was verified in \cite{Donnelly:2014gva} that the replica method gives the same result as the extended Hilbert space method described above. Below we will define entwinement along these lines. As a working example, consider a CFT with target space $M^N/S_N$. The CFT contains $N$ sets of fields, where each set can be viewed as coordinates on one copy of the manifold $M$, together with companion fermions in case of a supersymmetric theory. The $S_N$ indicates that we identify configurations that differ by permutations of these $N$ sets of fields. This is similar to the way one treats indistinguishable particles in quantum mechanics: wavefunctions need to be appropriately symmetrized under permutations. The $S_N$ identification is really a discrete gauge symmetry. We can gauge fix the local symmetry and think of the fields as changing continuously from point to point (i.e., each of the $N$ copies of $M$ has a continuous string embedded in it). The theory has so-called ``twisted sectors'' in which strings are only periodic up to permutations. A twisted sector is labeled by a conjugacy class, which is characterized by the lengths of its permutation cycles: there will be $N_m$ cycles of length $m$ such that $\sum_m N_m m = N$. 
Each cycle is referred to as a ``long string'' because it can be visualized as a string winding $m$ times. We will refer to each winding of the long string as a ``strand''. The conventional spatial entanglement entropy of an angular interval of size $\alpha$ can be thought of as the entanglement entropy of the union of intervals of size $\alpha$ in each of the $N$ strands of the system. Following the proposal of \cite{Balasubramanian:2014sra}, we want to define entwinement as the entanglement of intervals that extend over some strands and not others. For example, one can talk about the entwinement of an interval on a single strand. If one considers the entwinement of a union of identical intervals in each strand, then it reduces to the conventional spatial entanglement. Because there is a gauge symmetry, we cannot invariantly specify which strand we are talking about. But, as we will argue below, we can meaningfully talk about things like ``the entanglement of one and a half connected strands". Likewise, while one cannot invariantly ask for the entanglement of a particular strand, one can ask for the entanglement of a single strand if we do not specify which one it is. This invariance can be made manifest by simply averaging the computation of entanglement of a single strand over all the strands. It is worth emphasizing that this average is {\it not} the same thing as the entanglement of the union of such intervals. Below, we will give a mathematical definition of such quantities. \subsection{Replica trick -- generalities} A useful method for computing entanglement entropy in two-dimensional conformal field theories is the replica trick. In this method, the entanglement entropy of an interval $[0,\alpha]$ is computed from the reduced density matrix for this region $\rho_\alpha$ by taking a limit of the R\'{e}nyi entropies: \begin{equation} S(\alpha) = \lim_{n\to 1} \frac{1}{1 - n} \log {\rm Tr}(\rho_\alpha^n) \, . 
\end{equation} Consider a two-dimensional CFT in the plane in Euclidean signature. In radial quantization, circles of fixed radius become equal time slices. An operator $\sigma$ acting at the origin creates a pure state, and we can find the wavefunction at $t=0$ by performing the path integral with the operator inserted and fixed boundary conditions at the unit circle. The density matrix $\rho(\phi,\phi')$ corresponding to such a pure state is then computed by inserting operators at the origin and at infinity in the path integral, and imposing boundary conditions $\phi$ and $\phi'$ on the interior and the exterior of the unit circle, respectively. The reduced density matrix for the interval $[0,\alpha]$ is computed by tracing over the complementary part of the unit circle (i.e., setting $\phi = \phi'$ in the complementary region and then integrating over $\phi$ there). This leaves us with a path integral over the entire plane, except over the arc corresponding to the interval $[0,\alpha]$, as depicted in figure~\ref{fig:PICut}. \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{Fig/PathIntegralCut.pdf} \caption{\small Path integral in radial quantization. The path integral is left open on the cut (indicated in full blue lines) with boundary conditions $\phi$ on the lower cut and $\phi '$ on the upper cut. The dashed lines define the complementary interval along which boundary conditions are matched $\phi=\phi '$. The operator $\sigma$ prepares the state.} \label{fig:PICut} \end{figure} To compute ${\rm Tr}(\rho_\alpha^n)$ we consider $n$ copies of the plane cyclically glued together over the cut $[0,\alpha]$ producing an $n$-sheeted Riemann surface as in figure~\ref{fig:nsheeted} (left). This can alternatively be obtained as a correlator of R\'{e}nyi twist operators $\Sigma^{(n)}$ computed on a single sheet of the $n$-fold cover of the theory, ${\rm CFT}^n/{\mathbb Z}_n$, as represented in figure~\ref{fig:nsheeted} (right). 
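Before specializing to orbifolds, the replica limit itself can be made concrete with the textbook single-interval vacuum result ${\rm Tr}(\rho_\alpha^n) = (\alpha/\epsilon)^{-\frac{c}{6}(n-1/n)}$; a standalone numerical sketch (the values of $c$, $\alpha$, $\epsilon$ are arbitrary):

```python
import math

c, alpha, eps = 6.0, 0.8, 1e-4

def log_tr_rho_n(n):
    # standard single-interval vacuum result: Tr rho^n = (alpha/eps)^{-(c/6)(n - 1/n)}
    return -(c / 6.0) * (n - 1.0 / n) * math.log(alpha / eps)

# replica limit S = lim_{n->1} log Tr rho^n / (1 - n), evaluated at n close to 1
n = 1.0 + 1e-7
S_replica = log_tr_rho_n(n) / (1.0 - n)
S_exact = (c / 3.0) * math.log(alpha / eps)
print(S_replica, S_exact)  # the two agree as n -> 1
```

The limit reproduces the familiar $S = \frac{c}{3}\log(\alpha/\epsilon)$.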
The $n$ fictitious copies of the CFTs are called ``replicas''. Each of the $n$ copies of the CFT is placed in the same state. The R\'{e}nyi twist cyclically splices together the $n$ CFT copies, such that dragging a field from one CFT around the cut produces a field in the next copy of the CFT. Below we will illustrate this procedure in symmetric product CFTs with target space $M^N/S_N$ and show how entwinement in these theories can be defined in a similar manner. We will then generalize the definition to apply to more general discretely gauged theories in two dimensions. \begin{figure}[h] \centering \includegraphics[width=0.85\textwidth]{Fig/Replica.pdf} \caption{\small {\it (Left)} The $n$-sheeted Riemann surface from cyclically gluing $n$ copies of the plane. The dashed arrows denote how to sew fields across the cuts. {\it (Right)} Correlator in the plane. The $\Sigma$ insertions represent R\'enyi twist operators, while the $\sigma$ insertions define the replicated state.} \label{fig:nsheeted} \end{figure} Now take the CFT to be a symmetric orbifold with target space $M^N/S_N$ for some $M$. As discussed above, this CFT can be regarded as having $N$ elementary strands spliced together into a series of cycles (``long strings'') determined by the twisted sector. The twist operator can therefore be regarded as a product of elementary twists \begin{equation} \Sigma^{(n)} = \Sigma^{(n)}_1 \Sigma^{(n)}_2 \cdots \Sigma^{(n)}_N, \label{entangtwist} \end{equation} where each $\Sigma^{(n)}_i$ splices together the $n$ replica copies of the $i$th strand. Each elementary twist transforms in the permutation representation of $S_N$. Thus we can write for any $g \in S_N$ \begin{equation} g[\Sigma^{(n)}_i] \equiv g \Sigma^{(n)}_i g^{-1} = \Sigma^{(n)}_{g(i)}, \end{equation} where $g(i)$ is the strand produced by permuting $i$ by the action of $g$.
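This covariance property can be verified in a toy example. The sketch below represents sheets by integer labels $(r,i)\mapsto rN+i$ for replica $r$ and strand $i$ (a labeling introduced purely for illustration), encodes each elementary replica twist and the diagonal action of $g$ as maps on sheets, and checks that conjugation moves the twist to strand $g(i)$:

```python
N, n = 4, 3                         # strands, replicas
sheet = lambda r, i: r * N + i      # sheet label for (replica r, strand i)

def replica_twist(i):
    # Sigma_i^{(n)}: cyclically permutes the n replicas of strand i, fixes the rest
    t = {s: s for s in range(N * n)}
    for r in range(n):
        t[sheet(r, i)] = sheet((r + 1) % n, i)
    return t

g = {0: 2, 1: 0, 2: 1, 3: 3}        # a sample strand permutation
# g acting identically in every replica
G = {sheet(r, i): sheet(r, g[i]) for r in range(n) for i in range(N)}
Ginv = {v: k for k, v in G.items()}

i = 1
conjugated = {s: G[replica_twist(i)[Ginv[s]]] for s in range(N * n)}
print(conjugated == replica_twist(g[i]))  # True: g Sigma_i g^{-1} = Sigma_{g(i)}
```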
The twist operator appearing in the computation of entanglement entropy (\ref{entangtwist}) is a product of all the elementary twists and hence is invariant under the action of $S_N$. The twist configuration that computes the entanglement entropy is depicted in figure~\ref{fig:mwound} (left). \begin{figure}[h] \centering \includegraphics[width=0.8\textwidth]{Fig/entvsentw.pdf} \caption{\small Multiwound strings, each consisting of $m$ strands. There are $N_m$ $m$-wound strings such that the total number of strands is $N$. Here we depict $N=7$, $N_2=2$, $N_3=1$. {\it (Left)} The entanglement entropy is computed by inserting R\'enyi twist operators at the endpoints of the interval on every strand. The entangling region can be visualized in the long string picture as a union of disjoint intervals on all strands. {\it (Right)} Configuration of twists corresponding to the bilocal operator of single interval entwinement. The entangling region extends across different strands of the 3-cycle.} \label{fig:mwound} \end{figure} We can define entwinement formally in terms of the elementary twists. Take $\Sigma_i$ to be an elementary twist operator for strand $i$ and consider the bilocal combination \begin{equation} \tilde\Sigma^{(n)}_i\!(1) \, \, {\Sigma}^{(n)}_i\!(e^{i(\alpha + 2\pi \ell)} ) \end{equation} where for compactness we have only written the holomorphic coordinate and $\tilde\Sigma_i$ is the conjugate twist. In this bilocal operator, $\Sigma_i$ is rotated around the origin of the complex plane relative to $\tilde \Sigma_i$ by an angle $\alpha + 2\pi \ell$. In a specific state of the symmetric orbifold, the $i$th strand is generally spliced with other strands into a long string of $k$ strands. In such a state the twist $\Sigma$ will be inserted on a different strand, as represented on the right of figure~\ref{fig:mwound}.
We then consider the bilocal, gauge-invariant quantity \begin{equation} \frac{1}{|S_N|} \sum_{g \in S_N} \tilde\Sigma^{(n)}_{g(i)}\!(1) \, \, {\Sigma}^{(n)}_{g(i)}\!(e^{i(\alpha + 2\pi \ell)} )\,, \label{Tdef} \end{equation} where $|S_N|$ is the cardinality of $S_N$, $i$ is any reference strand of the CFT, and $g(i)$ is the strand to which $i$ is transported when all the strands are permuted by $g \in S_N$. Because we are summing over all permutations in $S_N$, the final quantity is independent of $i$. Its expectation value computes the R\'enyi analog of entwinement of single intervals. When $\ell>k$ ($k$ being the number of strands of the specific cycle, i.e.\ long string, in which the strand $g(i)$ lives) we mean the operator in the sum in (\ref{Tdef}) to represent the twisted boundary conditions of the replicated set of fields on the full long string. Intuitively, we can imagine starting from a short interval on a single strand and putting twisted boundary conditions on the fields inside the interval. Enlarging the interval until it eventually covers the full string, i.e.\ $\alpha+2\pi \ell=2\pi k$, represents putting twisted boundary conditions on the fields on the full string. Further increasing the interval to $\alpha+2\pi \ell>2\pi k$ does not change this picture and just keeps all fields on the long string twisted, nothing more. Keeping this in mind, we can define the entwinement as \begin{equation}\label{entwinement_single} E_\ell(\alpha) = \lim_{n\to 1} \frac{1}{1 - n} \log \left[ \langle \Psi | \frac{1}{|S_N|} \sum_{g \in S_N} \tilde\Sigma^{(n)}_{g(i)}\!(1) \, \, {\Sigma}^{(n)}_{g(i)}\!(e^{i(\alpha + 2\pi \ell)} ) | \Psi \rangle \right]. \end{equation} For symmetric product orbifolds this is just a formal way of saying that we are calculating the entanglement entropy of a connected set of partial strands in a long string.
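The claimed independence of the reference strand $i$ follows because, as $g$ runs over $S_N$, the image $g(i)$ visits every strand equally often, namely $(N-1)!$ times. A toy average makes this explicit (here `cycle_length` is a stand-in for whatever strand-dependent value the bilocal correlator takes):

```python
from itertools import permutations

N = 4
# toy twisted sector: strands 0,1,2 joined into a 3-cycle, strand 3 a 1-cycle;
# cycle_length stands in for a quantity that depends only on the strand g(i)
cycle_length = {0: 3, 1: 3, 2: 3, 3: 1}

def gauge_average(i):
    perms = list(permutations(range(N)))
    return sum(cycle_length[g[i]] for g in perms) / len(perms)

# the average is the same for every reference strand i
print([gauge_average(i) for i in range(N)])
```

Every entry of the printed list equals $\frac{1}{N}\sum_j f(j)$, independent of $i$.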
This definition of single interval entwinement can be generalized to multi-interval entwinements by taking a product of operators like (\ref{Tdef}) defined at different locations and strands. A particular example of multi-interval entwinement is entanglement, where we take a product of the same interval with $\alpha < 2\pi$ in each of the $N$ strands. The formalism described above is general. We can consider theories with any discrete gauge symmetry $H$, and elementary twist operators in any representation $R$ of $H$ that is useful. These can be used to define bilocal, gauge invariant twist operators of the form (\ref{Tdef}) and products of such operators. Entwinements defined as expectation values of these quantities are a very general new class of gauge-invariant objects that can be used to characterize quantum wavefunctions in two-dimensional theories. Conceptually we can also talk about the entanglement of subsets of degrees of freedom in a spatial region even in higher dimensional theories on any manifold, but we need a formalism for calculating such quantities efficiently. In the next section we will use the replica method to explicitly compute entwinements in symmetric orbifold conformal field theories in two dimensions. \setcounter{equation}{0} \section{Entwinement in symmetric orbifold CFTs} \label{sec:orbifold} In the following we analyze symmetric orbifold CFTs on a circle of length $L$. These are obtained starting with a seed CFT with target space $M$ and central charge $c$. The orbifold theory $\mathcal{C} =$ CFT$^{N}/S_N$ has target space $M^{N}/S_N$ and central charge $c_N=cN$. Because of the $S_N$ identification, states need only be periodic up to the action of a group element. In a sector twisted by $h \in S_N$, the boundary conditions are \begin{equation} \phi_{i}(L)= \phi_{h(i)}(0)\,, \qquad i=1,\dots ,N \, , \end{equation} where $\phi_i$ collectively indicates the fields in the $i$-th copy of the CFT.
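A minimal sketch of what these boundary conditions mean: in a sector twisted by a single $m$-cycle, a field transported around the circle returns to its original copy only after $m$ trips, i.e.\ the $m$ strands close into one long string of length $mL$:

```python
m = 5
h = lambda j: (j + 1) % m        # the m-cycle h = (0 1 ... m-1)

j, trips = 0, 0
while True:
    j = h(j)                     # crossing phi_i(L) = phi_{h(i)}(0) once
    trips += 1
    if j == 0:
        break
print(trips)  # equals m: the strands form a single long string
```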
All physical states should be invariant under the action of $S_N$. Since acting with a group element $g$ maps the sector twisted by $h$ to that twisted by $ghg^{-1}$, twisted sectors should really be labeled by conjugacy classes $[h]$, as mentioned in the previous section. Twisted states can be conveniently obtained through the action of {\it orbifold twist operators} on untwisted states. An orbifold twist operator $\sigma_{m}(0)$ at the origin of the complex plane causes $m$ copies of the target space $M$ to be linked together by the periodicity condition \begin{equation} \sigma_{m}(0): \qquad\phi_j( z e^{2 \pi i }) = \phi_{j'}( z ) ~;~~~j' = (j+1)\mod m\, . \end{equation} Here $j = 1,\ldots,m$, and the twist operator can be thought of as linking $m$ strands of string, each with period $2\pi$, into a single long string with period $2m\pi$. We will be interested in twisted states of the form \begin{equation} \label{eq:state} |\psi \rangle = \prod_{m=1}^{N} \left[\sigma_{m} (0) \right]^{N_m} |0 \rangle\, , \end{equation} where each $\sigma_m$ acts on a different subset of the $N$ copies of the target space $M$. Thus, there will be $N_m$ long strings of period $2m\pi$ and $\sum_m m N_m= N$. In radial quantization this prepares a state on the spatial circle, and the corresponding out state is \begin{equation} \langle \psi | = \langle0| \prod_{m=1}^{N} \left[ \tilde\sigma_{m} (\infty) \right]^{N_m}\equiv \langle0| \prod_{m=1}^{N} \lim_{z, \bz \to \infty } \left[ z^{2 h_m} \bz^{2 \bar h_m} \tilde\sigma_{m} (z, \bz) \right]^{N_m} \, , \end{equation} where $\tilde\sigma_m$ has opposite action to $\sigma_m$. The twists transform as primaries with conformal weights \begin{equation} h_m = \bar h_m=\frac{c}{24}\left(m -\frac{1}{m}\right) \ . \end{equation} We wish to define entwinement in such twisted states of the symmetric product CFT. In radial quantization we can specify a connected entangling region at a fixed time $t=0$ by an arc on the unit circle.
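The weights $h_m$ above are consistent with the long string picture, as a heuristic check: a single long string of $m$ strands is the seed CFT on a circle of circumference $mL$, so, using the standard Casimir energy $-\frac{2\pi}{L}\frac{c+\bar c}{24}$ with $c = \bar c$, its ground-state energy exceeds that of $m$ untwisted copies by

```latex
\Delta E \;=\; -\frac{2\pi}{mL}\,\frac{c}{12} \;+\; m\,\frac{2\pi}{L}\,\frac{c}{12}
\;=\; \frac{2\pi}{L}\,\frac{c}{12}\left(m-\frac{1}{m}\right)
\;=\; \frac{2\pi}{L}\left(h_m+\bar h_m\right)\,,
```

in agreement with $h_m = \bar h_m = \frac{c}{24}\big(m-\frac{1}{m}\big)$ above.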
As explained in the previous section, entwinement in twisted states of the form \eqref{eq:state} can be computed using the replica trick, by inserting \emph{elementary replica twist operators}, $\Sigma_i$. Such twists each act on a single strand out of the $N$ strands in the CFT and can be thought of as connecting the $n$-fold R\'{e}nyi replicates of that strand. The elementary replica twists, $\Sigma_i$, have conformal weights \begin{equation} H_n =\bar H_n= \frac{c}{24}\left(n-\frac{1}{n}\right) \, . \end{equation} We insert the elementary twists at the endpoints of the chosen interval of length $\alpha+2\pi\ell$ and average over the symmetric group, which moves the left boundary of the interval over all strands while keeping the length of the interval fixed. The state of the replicated theory, $|\Psi\rangle$, is obtained by inserting orbifold twist operators for each of the $n$ replica copies of the theory, \begin{equation} |\Psi\rangle = \left[ \prod_{m=1}^{N}~ [ \sigma_{m} (0) ]^{N_m} \right]^n |0 \rangle \, , \end{equation} i.e., by taking products of (\ref{eq:state}) for each of the $n$ replicated theories. In terms of $|\Psi\rangle$, we can write \begin{equation} \label{eq:entw} E_{\ell}(\alpha)=\lim\limits_{n\rightarrow 1}\frac{1}{1-n} \log \left[ \frac{1}{| S_N |}\sum_{g\in S_N} \langle \Psi | \tilde \Sigma^{(n)}_{g(i)}(1) \Sigma^{(n)}_{g(i)}(e^{2\pi i \ell}x,e^{-2\pi i \ell}\bx) | \Psi \rangle \right]\, , \end{equation} where we have taken the entwinement interval to extend between $1$ and $x $ in the complex plane; in the notation of the previous section, $x=e^{i\alpha}$. As before, since we are averaging over $S_N$, the result is independent of the arbitrary choice of the initial strand $i$, which can run from $1$ to $N$. This correlator is a four-point function in the cyclic orbifold theory $\mathcal{C}^n/\ensuremath{\mathbbm{Z}}_n$ obtained by taking $n$ replicas of the original symmetric orbifold $\mathcal{C}=$ CFT$^{N} /S_N$.
At the practical level then, the computation above simplifies drastically. Consider a term in (\ref{eq:entw}) where the strand $g(i) = k$ is embedded in a particular long string with $m$ strands, which we relabel here as $1,2,\ldots, m$ for convenience. Let us define a notation \begin{equation} \Sigma_k^{(n)}(e^{2\pi i \ell} \, x , e^{-2\pi i \ell} \bar{x}) \equiv \Sigma_{k+\ell}(x,\bar{x}), \end{equation} where the subscripts $k$ and $k+\ell$ are understood modulo $m$ because of the cyclic symmetry of the $m$-stranded long string. Then we have to compute terms like \begin{equation} \label{eq:scyclcntr} \langle \Psi | \tilde \Sigma^{(n)}_{k}(1) \Sigma^{(n)}_{k+\ell}(x,\bx) | \Psi \rangle = \langle \left[ \tilde\sigma_m(\infty) \right]^n \tilde \Sigma^{(n)}_{k}(1) \Sigma^{(n)}_{k+\ell}(x,\bx) \left[ \sigma_m(0) \right]^n \rangle \, . \end{equation} As discussed in the previous section, if the long string is shorter than the interval in question, i.e., $2\pi m<\alpha+2\pi\ell$, (\ref{eq:scyclcntr}) is understood as computing how the entire long string is entangled with the rest of the system. \subsection{Single interval entwinement} \label{sec:Einterval} To evaluate the entwinement of a single interval we must therefore compute the correlator \begin{equation} \label{eq:correlator} \langle [ \tilde \sigma_m (\infty)]^n \tilde \Sigma^{(n)}_{k}(1) \Sigma^{(n)}_{k+\ell}(x,\bar x) [\sigma_m (0)]^n \rangle \ . \end{equation} The branching structure of the correlator effectively involves $m n$ sheets, as illustrated in figure~\ref{fig:cover}, and we can label the twist fields in terms of these $m n$ sheets: \begin{eqnarray} \left[\sigma_{m}\right]^n &=& \sigma_{(1\ldots m)}\sigma_{(m+1\ldots 2m)}\ldots\sigma_{(m(n-1)+1\,\ldots\, mn)}\,, \\ \Sigma^{(n)}_{k} &=& \sigma_{( k,k+m,\ldots, k+(n-1)m) } \end{eqnarray} in the cycle notation $\sigma_{(\ldots)}$ of $S_{m n}$.
The correlator \eqref{eq:correlator} can be evaluated through a uniformization map to a covering space. To compute the map we extend a calculation of \cite{Asplund:2011cq}, which dealt with the case $m=2$. We first observe that the branched covering has genus zero, as determined by the Riemann-Hurwitz formula \begin{equation} g = \frac{1}{2}\sum\limits_{i} (r_i- 1) - s+1 = \frac 1 2 \left[ 2 n (m-1) + 2(n-1)\right] - m n +1 = 0\, . \end{equation} \begin{figure}[t] \centering \includegraphics[width=0.7\textwidth]{Fig/cover.pdf}\\ \hfill \caption{\small Representation of the branching structure of a correlator of the form \eqref{eq:correlator} in the simple case of a single strand entwinement, $\ell=0$, on a $3$-cycle factor, $m=3$.} \label{fig:cover} \end{figure} The sum is over the twist insertions in \eqref{eq:correlator} of length $r_i$, and $s = mn$ is the total number of sheets involved in the correlator. An explicit formula for such twist correlation functions in terms of the properties of the uniformization map to a genus zero surface is worked out in appendix D of \cite{Avery:2010qw} (see also \cite{Lunin:2000yv,Lunin:2001pw}). In appendix~\ref{app:twist}, we review these results and apply them to the computation of single interval entwinement. Our final result is (see eq.~\eqref{eq:resultcorrelator} in the appendix) \begin{align} \label{eq:resultcor} \langle [ \tilde \sigma_m & (\infty)]^n \tilde \Sigma^{(n)}_{k}(1) \Sigma^{(n)}_{k+\ell}(x,\bar x) [\sigma_m (0)]^n \rangle = \left[ m^2 \left| A\right|^{m-1} \left|A-1\right|^2\right]^{-\frac{c}{12}\left( n-\frac{1}{n}\right)}\,, \end{align} with $A= x^{1/m}e^{2\pi i\ell/m}$ and $c$ the central charge of a single copy. To obtain the entwinement of an interval of opening angle $\alpha$ on the spatial circle $w \sim w+ L$, we only need to relate the result \eqref{eq:resultcor} obtained on the plane to the computation on the cylinder, with elementary replica twists inserted at $w=0$ and $w=\alpha L/(2\pi)$.
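The Riemann-Hurwitz bookkeeping above can be checked for a range of branching data with a short script (the helper `genus` is our own name, encoding the same counting):

```python
def genus(m, n):
    # branching: 2n cycles of length m from [sigma_m]^n at 0 and at infinity,
    # plus two elementary replica twists of length n; s = m*n sheets in total
    branching = 2 * n * (m - 1) + 2 * (n - 1)
    return branching // 2 - m * n + 1

# the covering is a sphere for every m and every number of replicas n
print(all(genus(m, n) == 0 for m in range(1, 8) for n in range(2, 8)))  # True
```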
Using the map $z = e^{ \frac{2\pi i w}{L}}$: \begin{equation} \langle \Psi| \tilde \Sigma^{(n)}_{k}(0) \Sigma^{(n)}_{k+\ell} \Big(\frac{\alpha L}{2\pi}\Big) | \Psi \rangle_{\rm cylinder} = \left(\frac{2 \pi}{L}\right)^{4 H_n} \langle \Psi| \tilde \Sigma^{(n)}_{k}(1) \Sigma^{(n)}_{k+\ell}(x,\bx) | \Psi \rangle \end{equation} with $x= e^{i \alpha} $, $\bx =e^{- i \alpha}$. If we explicitly insert a UV cutoff $\epsilon_{UV}$ to regulate the twist operators and work out the sum over all elements in $S_N$, we obtain \begin{equation} E_{\ell}( \alpha) =\lim\limits_{n\rightarrow 1}\frac{1}{1-n}\log \left[ \frac{1}{N}\sum\limits_{m=\ell+1}^{N} m N_m \left|\frac{m L}{ \pi \epsilon_{UV}}\sin\left( \frac{\alpha+2\pi \ell }{2m}\right)\right|^{-\frac{c}{6}\left( n-\frac{1}{n}\right)}\right] + O(\epsilon_{UV}^0)\, . \label{eq:EWcyl} \end{equation} The sum in the first term extends over long strings with $\ell + 1$ or more strands, because shorter strings are completely covered by intervals of length $\alpha + 2\pi \ell$. The contribution from each of these shorter strings computes the entanglement of the string with the rest of the theory. Since the short string is by construction disconnected from all other strands in this particular twisted sector, its entanglement entropy will not have the dominant UV-divergent contributions that are present for the longer strings with $\ell + 1$ or more strands. In formulas, if one sets $\alpha+2\pi\ell\approx 2\pi m$ up to a contribution of order $\epsilon_{UV}$, the occurrences of the cutoff will cancel in (\ref{eq:EWcyl}) between $1/\epsilon_{UV}$ and the sine, yielding a cutoff-independent result, which is small compared to the cutoff-dependent terms arising from long strings longer than the interval. \subsection{Entanglement entropy of a spatial region} The entanglement entropy of an interval is a specific case of computing entwinement.
The gauge invariant twist operators are decomposed into products of twist operators on each strand as in \eqref{entangtwist}, and the entanglement entropy of a spatial interval can be expressed as \begin{equation} S(\alpha) = \lim\limits_{n\rightarrow 1} \frac{1}{1-n} \log \left[ \langle \Psi | \tilde \Sigma_1^{(n)}(1) {\Sigma}_1^{(n)}(x,\bar{x}) \tilde\Sigma_2^{(n)}(1){\Sigma}_2^{(n)}(x,\bar{x})\ldots\tilde\Sigma_N^{(n)}(1) {\Sigma}_N^{(n)}(x,\bar{x}) | \Psi \rangle\right]. \end{equation} From the perspective of entwinement, entanglement entropy coincides with the entwinement of the union over all strands of an interval that fits within a single strand. Clearly, as this quantity is already gauge invariant, the sum over $S_N$ appearing in the entwinement definition exactly cancels the normalization $|S_N|^{-1}$. In fact, the entanglement entropy for a state of the form \eqref{eq:state} is not known in general: the branching structure of the correlator generically leads to a covering space of non-trivial genus, and thus one cannot straightforwardly apply the same techniques we used for computing entwinement. However, in the limit of a short interval ($x,\bx \rightarrow 1$), using the OPE of the elementary twists $\tilde \Sigma^{(n)}_i(1){\Sigma}^{(n)}_i(x,\bar{x})\sim \mathbbm{1}/ |1-x|^{4H_n}$ the correlator factorizes. Via the conformal map to the cylinder, the result reproduces the short interval expansion of the entanglement entropy for a CFT on a circle of length $L$ with central charge $c_N =c N$, \begin{equation} S(\alpha) \approx\frac{ c_N }{3} \log \frac{\alpha L}{\pi\epsilon_{UV}} \, . \end{equation} This is $N$ times the single strand ($\ell=0$) short interval expansion ($\alpha \to 0$) of the entwinement result given in \eqref{eq:EWcyl}.
This reflects the fact that $\ell=0$ entwinement computes the entanglement for a single factor in the symmetric product orbifold theory, while entanglement entropy simultaneously involves fields in all $N$ factors. \setcounter{equation}{0} \section{D1-D5 CFT} \label{sec:D1D5} A well-known example of a symmetric orbifold CFT is the D1-D5 CFT. This is realized in type IIB string theory compactified on $S^1 \times T^4$ (or $S^1 \times K3$), with $N_1$ D1-branes wrapping the circle and $N_5$ D5-branes wrapping the entire compact product space. The near horizon geometry of the D1-D5 brane system is AdS$_3 \times S^3 \times T^4$, and one can formulate a two-dimensional CFT at the conformal boundary of the AdS$_3$. This is an ${\cal N}=(4,4)$ supersymmetric sigma model with $SU(2) \times SU(2)$ R-symmetry, corresponding to the isometry group of the $S^3$, another $SU(2) \times SU(2)$ global symmetry and central charge equal to $ 6 N_1 N_5$ (see for instance \cite{Avery:2010qw} for a review). The moduli space of the CFT contains an orbifold point where the theory consists of $N \equiv N_1N_5$ copies of a $c = 6$ free CFT of 4 real bosons and their fermionic superpartners with target space $(T^4)^N / S_N$. We will work at the orbifold point of the D1-D5 CFT and focus on the Ramond ground states. These can be constructed by multiplying together bosonic and fermionic twist operators to achieve a total twist of $N$. The theory contains eight bosonic and eight fermionic twists labeled in terms of the global symmetries. Since we are only interested in computing correlators of bosonic quantities that do not carry R-charge, we can simplify the discussion and generically consider the normalized symmetric orbifold microstates \eqref{eq:state}. 
In this section we will consider two examples of such states, which in the large $N$ limit are dual to conical defects and zero mass BTZ black holes in the bulk (see \cite{Balasubramanian:2000rt} for discussion of the map between Ramond ground states of the D1-D5 system and AdS$_3$ gravity).\footnote{The BTZ black hole is actually dual to an ensemble of states; we will comment on this point and on the notion of typical states in section \ref{sec:BTZ}.} Points in the moduli space with a geometric supergravity description are actually far from the orbifold point where we perform our computations, and agreement with semiclassical gravity is not to be expected a priori. However, protected BPS quantities can be computed exactly at the orbifold point, and it has been proposed that agreement should extend also to observables computed in terms of covering space constructions \cite{Martinec:2002xq}. \subsection{Conical defects} Simple Ramond ground states of the D1-D5 CFT are of the form \cite{Lunin:2000yv, Lunin:2001pw, Balasubramanian:0508} \begin{equation} | \psi \rangle = \left[ \sigma_{\tilde m}(0)\right]^{N/ \tilde m} | 0 \rangle\,, \label{ConicalDefectState} \end{equation} where $N/\tilde m$ is an integer. The holographically dual bulk geometries are conical defects \begin{equation} \label{eq:conicadefectmetric} ds^2 = -\left( \frac{1}{\tilde m^2} + \frac{r^2}{R_{\rm AdS}^2}\right) dt^2 + \left( \frac{1}{\tilde m^2} + \frac{r^2}{R_{\rm AdS}^2}\right)^{-1} dr^2 + r^2 d\theta^2, \end{equation} where $\theta$ is a periodic coordinate and the AdS radius is related to the length of the spatial circle introduced in the previous section via $R_{\rm AdS} = L/(2\pi)$. These geometries can be obtained from empty AdS$_3$ in global coordinates via $\mathbbm{Z}_{\tilde m}$ identifications. For a single interval of opening angle $\alpha$ on the boundary, there exist multiple geodesics connecting the endpoints of the interval. 
These have regulated lengths \cite{Balasubramanian:2014sra} \begin{equation} {\cal L}_\ell (\alpha)=2 R_{\rm AdS} \log \left[ \frac{2 \tilde m r_\infty}{R_{\rm AdS}} \sin\left( \frac{\alpha+2\pi \ell}{2 \tilde m}\right)\right] \, . \end{equation} The index $\ell=0,\ldots,\tilde m-1$ counts the number of times the geodesic winds around the conical defect at $r=0$. Here $r_\infty$ is an IR gravitational cutoff. Identifying it with the field theory cutoff $\epsilon_{UV}$ via $r_\infty = \frac{L}{2 \pi} \frac{R_{\rm AdS}}{\epsilon_{UV}}$ and substituting $c=6$ in \eqref{eq:EWcyl}, we find the relation between entwinement in the state \eqref{ConicalDefectState} and geodesic lengths \begin{equation} E_{\ell}(\alpha) =\frac{{\cal L}_\ell (\alpha)}{R_{\text{AdS}}}. \label{EntwineVersusGeodesic} \end{equation} Therefore, in this case the field theory notion of entwinement computes the lengths of geodesics in AdS, in agreement with the idea advanced in~\cite{Balasubramanian:2014sra} that non-minimal geodesics in the bulk capture the entanglement of internal degrees of freedom. Ordinary entanglement entropy, on the other hand, is related to the length of the minimal geodesic, in agreement with the Ryu-Takayanagi formula~\cite{Ryu:2006bv} \begin{equation} S (\alpha)= \frac{{\cal L}_0 (\alpha)}{4 G_N}. \end{equation} Since the D1-D5 central charge is $c N = 3 R_{\text{AdS}}/(2 G_N)$, the entanglement entropy of an interval coincides with $N$ times the $\ell = 0$ single interval entwinement $E_{0}$. \subsection{Zero mass BTZ black hole}\label{sec:BTZ} One may wonder whether the lengths of long geodesics also capture entwinement in a black hole background. We will show that this is indeed the case for the zero mass BTZ black hole \begin{equation} ds^2 = - \frac{r^2}{R_{\text{AdS}}^2} dt^2 + \frac{R_{\text{AdS}}^2}{r^2} dr^2 + r^2 d\theta^2\,, \end{equation} which arises in the $\tilde m \to \infty$ limit of the conical defect geometry \eqref{eq:conicadefectmetric}. 
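For orientation, the conical defect geodesic lengths can be checked numerically (this is our own sketch; the values of $R_{\rm AdS}$ and $r_\infty$ below are arbitrary placeholders). The $\ell=0$ geodesic is the shortest, consistent with it being the Ryu-Takayanagi surface, and since $2\tilde m \sin[(\alpha+2\pi\ell)/(2\tilde m)] \to \alpha+2\pi\ell$ as $\tilde m \to \infty$, the lengths smoothly connect to the zero mass BTZ geometry just introduced.

```python
import math

def conical_defect_geodesic(alpha, ell, m_tilde, R_ads=1.0, r_inf=1.0e6):
    """Regulated length of the geodesic winding ell times around the defect:
    L_ell = 2 R log[(2 m r_inf / R) sin((alpha + 2 pi ell)/(2 m))].
    R_ads and r_inf are illustrative placeholder values, not data."""
    arg = (alpha + 2.0 * math.pi * ell) / (2.0 * m_tilde)
    return 2.0 * R_ads * math.log(2.0 * m_tilde * r_inf / R_ads * math.sin(arg))
```

For a $\tilde m = 3$ defect and opening angle $\alpha = 1$, the $\ell = 0$ length is the minimum over the three windings, and at very large $\tilde m$ the length approaches $2 R_{\rm AdS} \log[(r_\infty/R_{\rm AdS})(\alpha + 2\pi\ell)]$.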
The $M=0$ black hole has a horizon of zero size which coincides with the singularity. The $M=0$ black hole geometry is not dual to a particular CFT microstate, but rather to an ensemble of states of the D1-D5 CFT with fixed $N$. Following \cite{Balasubramanian:0508}, instead of working with the microcanonical ensemble, it is more convenient to work in the canonical description, where the numbers $N_m$ of $m$-cycles, and hence the total winding, fluctuate, but the ensemble average of the total winding is fixed to $N$. The average number of $m$-cycles is \cite{Balasubramanian:0508} \begin{equation} \langle N_{m} \rangle = \frac{8}{\sinh\beta m}\,, \label{eq:TypicalState} \end{equation} where the fictitious inverse temperature $\beta$ is determined in terms of the average total winding $N$ as \begin{equation} N = \Bigg \langle\sum_{m=1}^{\infty} m N_m \Bigg\rangle \simeq \frac{2\pi^2}{\beta^2}\,. \end{equation} In the large $N$ limit, typical states in the ensemble have individual twist distributions that lie very close to \eqref{eq:TypicalState}, and expectation values of observables in a typical state deviate by only a small amount from those computed in the ensemble. In the following, we therefore compute entwinement in a typical state with representative distribution \eqref{eq:TypicalState}, rather than in the ensemble. The single interval entwinement in a typical microstate is again given by~(\ref{eq:EWcyl}). As we prove in appendix~\ref{SqrtN}, for fixed $\alpha$ and $\ell$ the sum over $m$ is dominated by the terms with $m \sim O(\sqrt{N})$. For $\ell \ll \sqrt{N}$, we obtain \begin{equation} E_{\ell}(\alpha)\approx \frac{c}{3} \log \left[\frac{L}{2\pi\epsilon_{UV}}(\alpha+2\pi \ell)\right]. \end{equation} Black holes admit a region outside the horizon that is not penetrated by minimal geodesics, which we can call the entanglement shadow of the black hole. Just as in the case of the conical defect, non-minimal geodesics penetrate the entanglement shadow. 
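As a numerical aside on the canonical distribution above (our own check, with an illustrative value of $N$): summing $m\langle N_m\rangle$ indeed reproduces $N \simeq 2\pi^2/\beta^2$, and $8/\sinh\beta m$ equals the sum of eight Bose-like and eight Fermi-like occupation numbers, $8/(e^{\beta m}-1) + 8/(e^{\beta m}+1)$ — our reading of the factor of 8 in terms of the bosonic and fermionic species.

```python
import math

def avg_cycles(m, beta):
    """<N_m> = 8 / sinh(beta m); algebraically equal to
    8/(e^x - 1) + 8/(e^x + 1) with x = beta m."""
    return 8.0 / math.sinh(beta * m)

def total_winding(beta):
    """Ensemble average of the total winding, sum_m m <N_m>,
    which should approach 2 pi^2 / beta^2 for small beta."""
    total, m = 0.0, 1
    while beta * m < 40.0:          # tail is exponentially suppressed beyond this
        total += m * avg_cycles(m, beta)
        m += 1
    return total
```

For $N = 10^4$ the discrete sum agrees with $2\pi^2/\beta^2$ to better than a few percent (the residual is the $O(1/\beta)$ Euler-Maclaurin correction to the continuum approximation).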
The non-minimal geodesics wind around the horizon, and the larger their winding number, the more closely they wrap it. The $M=0$ BTZ black hole has a horizon of vanishing size, but nevertheless a finite entanglement shadow. The lengths of non-minimal geodesics in the $M=0$ BTZ black hole background are given by \begin{equation} {\cal L}_\ell(\alpha)= 2R_{\rm AdS}\log\left[\frac{r_\infty}{R_{\rm AdS}}(\alpha+2\pi \ell)\right]. \end{equation} Using again that $c=6$ and $r_\infty = \frac{L}{2\pi}\frac{R_{\rm AdS}}{ \epsilon_{UV}}$, we recover that entwinement and geodesic length are related by \begin{equation} E_\ell(\alpha) = \frac{{\cal L}_\ell(\alpha)}{R_{\rm AdS}}. \end{equation} As $\ell \to \sqrt{N}$, the relation between entwinement and geodesic length breaks down. But the corresponding long geodesics, which wind very many times around the black hole, approach the horizon to within a Planck length. It is not clear that such geodesics remain meaningful in the quantum theory, where we expect classical geometry to be ill-defined at the Planck scale. Hence it is not surprising that the lengths of these geodesics do not match the corresponding entwinements. \setcounter{equation}{0} \section{Discussion and outlook} \label{conclusions} We have verified the correspondence between single-interval entwinement and the lengths of non-minimal geodesics in conical defects and $M=0$ BTZ black holes, which correspond to specific states of the D1-D5 system. We studied these configurations because there are explicit constructions of the corresponding states in the literature \cite{Balasubramanian:0508}. Furthermore, these states are BPS-protected ground states in the Ramond sector of the theory \cite{Balasubramanian:2000rt, Martinec:2002xq}, so that we can expect non-renormalization of some quantities as we deform the theory away from its orbifold point. 
Of course, this does not mean that all correlation functions extrapolate from weak to strong coupling, but we were essentially computing partition sums after a conformal map, which might help explain the agreement we found between field theory and gravity. It has also been seen that certain graviton correlators computed at the orbifold point match the gravitational results, which pertain to the strongly coupled theory \cite{Balasubramanian:0508}. Possible general reasons for such matching are discussed in \cite{Martinec:2002xq}. It should be possible to extend our computations in at least two interesting directions. First, we can consider the general R-charged Ramond ground states of the D1-D5 CFT. The $M=0$ BTZ black hole and the conical defects considered here were two specific examples of such states, but a more general class is discussed in \cite{Balasubramanian:0508}, including candidate states describing ``black ring'' geometries. Another interesting extension is to consider rotating, but extremal, AdS$_3$ black holes. These are represented in the D1-D5 CFT by adding energy to the left-moving sector alone. One approach would be to add a small left-moving temperature; another would be to perturb the theory with a holomorphic stress tensor. Both of these are settings where it would be very interesting to compute both spatial entanglement and entwinement. While in this paper we have mostly focused on 2d symmetric orbifold CFTs, our definition \eqref{entwinement_single} of entwinement can be extended to more general discretely gauged theories. Consider, for instance, the discrete rotation orbifold $\Rbar^2/\mathord{\!{\usebox{\zzzbar}}}_N$ with $N>2$. 
In the sector twisted by the generator of $\mathord{\!{\usebox{\zzzbar}}}_N$, the target space coordinates satisfy \begin{eqnarray} X(\tau,2\pi)&=&\cos\left(\frac{2\pi}{N}\right)X(\tau,0)-\sin\left(\frac{2\pi}{N}\right)Y(\tau,0),\\ Y(\tau,2\pi)&=&\sin\left(\frac{2\pi}{N}\right)X(\tau,0)+ \cos\left(\frac{2\pi}{N}\right)Y(\tau,0). \end{eqnarray} We can extend the range of the $\sigma$ coordinate in $X(\tau,\sigma)$ to $0\leq\sigma<4\pi$ by defining \begin{equation} X(\tau,\sigma)\equiv\cos\left(\frac{2\pi}{N}\right)X(\tau,\sigma-2\pi)-\sin\left(\frac{2\pi}{N}\right)Y(\tau,\sigma-2\pi), \ \ \ 2\pi\leq\sigma<4\pi. \end{equation} A twisted sector string is then determined by the ``long string'' profile $X(\tau,\sigma)$ with $0\leq\sigma<4\pi$, which satisfies the boundary condition \begin{equation} X(\tau,4\pi)=-X(\tau,0)+2\cos\left(\frac{2\pi}{N}\right)X(\tau,2\pi). \end{equation} Entwinement defined in analogy with \eqref{entwinement_single} then quantifies how one part of this long string is entangled with its complement. There does seem to be an important difference, however, with symmetric orbifold theories. For the above rotation orbifold, we could equally well have defined entwinement by considering target space coordinates $X',Y'$ related to $X,Y$ by a rotation in field space over an arbitrary angle. Entwinement defined using long string profiles $X'(\sigma)$ would generically differ from that defined using $X(\sigma)$, reflecting the basis dependence of measures of quantum entanglement in general. In contrast, for symmetric orbifolds the requirements that the target space coordinates be mapped into each other by permutations and have diagonal kinetic terms do select a preferred notion of entwinement. It is interesting to ask for which gauge theories our definition of entwinement leads to ``natural'' quantities, including quantities with a clear geometrical meaning in a holographic dual. 
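The boundary condition for the extended long-string profile is simply a double-angle identity: applying the field-space rotation twice gives $X(\tau,4\pi) = \cos(4\pi/N)X(\tau,0) - \sin(4\pi/N)Y(\tau,0)$, which equals $-X(\tau,0) + 2\cos(2\pi/N)X(\tau,2\pi)$. A few lines of Python (our own consistency check, not part of the original discussion) verify this:

```python
import math

def rotate(x, y, theta):
    """Field-space rotation of (X, Y) by angle theta."""
    return (math.cos(theta) * x - math.sin(theta) * y,
            math.sin(theta) * x + math.cos(theta) * y)

def long_string_check(x0, y0, N):
    """Return (X(4 pi), -X(0) + 2 cos(2 pi/N) X(2 pi)) for initial data (x0, y0);
    the two entries should agree for any N and any initial profile values."""
    theta = 2.0 * math.pi / N
    x2pi, y2pi = rotate(x0, y0, theta)   # twisted values at sigma = 2 pi
    x4pi, _ = rotate(x2pi, y2pi, theta)  # extended profile at sigma = 4 pi
    return x4pi, -x0 + 2.0 * math.cos(theta) * x2pi
```

The agreement holds for arbitrary initial values, reflecting that it is an identity of the rotation group rather than a property of a particular string profile.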
Other generalizations worth studying are continuous gauge theories, higher dimensional theories and matrix models. Note that entwinement, as we defined it, measures the lengths of geodesics in units of the AdS scale, rather than in units of the Planck length (which is related to the three dimensional Newton constant $G_N$). In the original formulation of holographic entanglement entropy \cite{Ryu:2006bv, Ryu:2006ef}, it seemed natural that $G_N$ should appear in the formulas, in analogy with black hole entropy, which is measured by horizon area in units of $G_N$. Of course, field theories with a holographic dual having a classical description generally have a large number of local degrees of freedom arising from e.g.\ dynamical variables that are large matrices. The entwinement that we are defining seeks to tease apart the entanglement of some of these local degrees of freedom (e.g.\ parts of the local matrices) with other elements of the Hilbert space. As such, we are extracting the elementary parts out of which spatial entanglement arises in such quantum field theories. These elementary entanglements are correspondingly smaller, and are thus related to geodesic lengths measured relative to a length scale that is much larger than the Planck length. One might ask whether single interval entwinement can ever be of the same order of magnitude as the spatial entanglement as we make the interval size larger. At least in the 2d CFT states we considered, this cannot happen, because entanglement between different long strings is negligible and, within a single long string, entanglement depends only logarithmically on the interval size. This may be different in excited or thermal states, where there could be extensive contributions to entanglement, or in theories with less local interactions such as matrix models. It is also interesting that the fundamental object (\ref{entwinement_single}) from which we construct entwinement is non-local. 
This recalls the discussion in \cite{polchinski} of the relevance of non-local observables in field theory for reconstructing local physics in AdS space in a gauge-invariant manner. One of our goals in this paper has been to define the field theoretic dual of extremal, non-minimal geodesics in AdS$_3$. These geodesics also appear in the semiclassical CFT computation of R\'{e}nyi entropies in terms of the conformal block expansion of heavy-heavy-light-light correlators \cite{Asplund:2014coa}. There it was shown that the single interval entanglement entropy in a state created by heavy operator insertions, and dual to an AdS$_3$ conical defect or BTZ, is well approximated by the semiclassical identity block and reproduces the Ryu-Takayanagi minimal geodesic result. This leading answer for the four-point function however has monodromies as the R\'{e}nyi twists are moved around the heavy operators in the CFT. These monodromies transform the minimal geodesic result into quantities related to the length of non-minimal geodesics. Therefore in this context non-minimal geodesics are also related in the dual CFT to analytic properties of semiclassical Virasoro blocks. \setcounter{equation}{0} \section*{Acknowledgments} We thank Alexandre Belin, Netta Engelhardt, Jutho Haegeman, Volkher Scholz, Norbert Schuch, Karel Van Acoleyen, Henri Verschelde, and Frank Verstraete for very helpful discussions. We are also particularly grateful to Bartek Czech, Lampros Lamprou, Sam McCandlish, and Jamie Sully for discussing details of their ongoing work on related questions with us. This work was supported in part by a grant from the Simons Foundation (\#385592, Vijay Balasubramanian) through the It From Qubit Simons Collaboration, by the Belgian Federal Science Policy Office through the Interuniversity Attraction Pole P7/37, by FWO-Vlaanderen through projects G020714N, G044016N and Odysseus grant G.001.12, by the European Research Council grant no. 
ERC-2013-CoG 616732 HoloQosmos, by COST Action MP1210 The String Theory Universe, and by Vrije Universiteit Brussel through the Strategic Research Program ``High-Energy Physics''. It was performed in part at the Aspen Center for Physics, which is supported by National Science Foundation grant PHY-1066293. Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Research \& Innovation. T.D.J.\ is Aspirant FWO-Vlaanderen. F.G.\ is a Postdoctoral Researcher of FWO-Vlaanderen and acknowledges support from a JuMo grant of KU Leuven. V.B., A.B., and F.G.\ thank YITP for support during the program ``Quantum Information in String Theory and Many-body Systems'' and conference ``Quantum Matter, Spacetime and Information''. V.B.\ and T.D.J.\ also thank the Perimeter Institute for hospitality during the It From Qubit workshop and school. B.C.\ thanks the organizers of the Nordita program ``Black Holes and Emergent Spacetime'' for hospitality while this work was in progress. \setcounter{equation}{0}
\section{Introduction} The 11-year sunspot cycle represents far more than just a quasi-decadal variation in the number of sunspots seen on the disk of the Sun. As the numbers of sunspots wax and wane so do the numbers of x-ray flares, coronal mass ejections, solar energetic particle events, and geomagnetic storms \citep[see e.g.][]{Hathaway15}. Strong space weather events impact our technology, costing us both money and inconvenience. For example, these sporadic events can disrupt cell phone service and global communication, or even shut down entire power grids. In addition, the background levels of the Sun's total irradiance, the irradiance at UV, EUV, and XUV wavelengths, and the flux of galactic cosmic rays also vary with the sunspot number. The short wavelength irradiance levels alter the Earth's upper atmosphere and ionosphere. Solar cycle related heating of the thermosphere can change the atmospheric density by an order of magnitude at spacecraft altitudes (400 km) -- leading to excess satellite drag and the loss of satellites themselves. Given these impacts, it is important to provide estimates of future levels of activity in order to anticipate and/or mitigate the consequences. Predicting levels of solar activity once a cycle is well underway can now be quite reliable. Predicting levels of solar activity in a cycle shortly after it starts, or before it starts, is far more difficult. A wide variety of prediction methods have been used, most with little success. Comparative studies of prediction techniques \citep{Hathaway_etal99, Petrovay10, Pesnell12} indicate that precursor methods based on the Sun's polar fields at about the time of cycle minimum have the most merit. Direct measurements of the polar fields have been used to successfully predict the last four cycles, Cycle 21 \citep{Schatten_etal78}, Cycle 22 \citep{SchattenSofia87}, Cycle 23 \citep{Schatten_etal96}, and Cycle 24 \citep{Svalgaard_etal05}. 
Geomagnetic activity near the time of cycle minimum has been shown to be a good proxy for the Sun's polar fields \citep{WangSheeley09} and extends the polar field measurements back 12 sunspot cycles. In Figure \ref{fig:PolarFieldPredictors} we plot the strength of the following cycle maximum (as given by the maximum in the smoothed daily sunspot area) as a function of: 1) the axial magnetic dipole strength at minimum; 2) the average polar field strength poleward of 55\deg\ at minimum; and 3) the minimum in the geomagnetic {\em aa} index (which occurs at about the time of cycle minimum). (The smoothing used here is with a Gaussian filter with a FWHM of 24 months convolved with the monthly averages of the daily values.) All three indicators of the Sun's polar fields are very well correlated with maximum of the following cycle. The individual correlation coefficients are 0.99, 0.95, and 0.92 respectively, with a combined correlation coefficient of 0.90. This association between the Sun's polar fields and the amplitude of the following cycle is found in most models for the Sun's magnetic dynamo \citep[see e.g.][]{Charbonneau10}. In those models the axial dipole field near the time of cycle minimum serves as the background seed field which is sheared by differential rotation to produce the toroidal field that emerges in active regions. Dynamo model predictions of Cycle 24 that reset the axial dipole to that observed at the minimum between Cycle 23 and Cycle 24 were quite successful in predicting the size of Cycle 24 \citep{Choudhuri_etal07, Jiang_etal07}. Such predictions are, however, sensitive to the timing of the polar field observations. \citet{MunozJaramillo_etal13B} found that the predictions become less reliable when polar field observations from well before cycle minima were used. The success rate falls below 50\% at three years prior to minimum. 
(In fact, the prediction for Cycle 23 made three years before minimum by \citet{SchattenPesnell93} was significantly off the mark.) In this paper, we show that reliable forecasts for the amplitude of Solar Cycle 25 can be made even earlier by predicting what the polar fields will be at the Cycle 24/25 minimum, some four years before that minimum occurs. The Sun's polar fields (and its axial dipole) are produced by the emergence of tilted active regions and the transport of the emerged magnetic flux by the fluid flows in the Sun's surface shear layer \citep{WangSheeley91, Sheeley05, Jiang_etal14}. Given the emerged active regions and the transport flows (differential rotation, meridional flow, and supergranule convection), surface flux transport can reproduce the evolution of the polar fields over many cycles \citep{Cameron_etal10}. Here we use our Advective Flux Transport model (the AFT code) \citep{UptonHathaway14A, UptonHathaway14B}, which combines knowledge of the transport flows with simulated active region emergence over the next four years, to predict the polar fields that should be produced on the Sun through the start of the year 2020. From these predictions, we then estimate the amplitude of Solar Cycle 25. In addition, we explore the uncertainty in these predictions due to stochastic effects that naturally occur on the Sun (i.e., convective motions, variability in active region tilt, and variation in the meridional flow). \begin{figure}[htbp] \centerline{\includegraphics[width=20pc]{PolesAtMinVsSunspotAreaAtMax.eps}} \caption{The smoothed maximum of the daily sunspot area in each sunspot cycle is plotted as a function of three different indicators of the strength of the polar fields at the previous minimum. The axial dipole moment (scaled by a factor of 5.3) is shown in green. The polar field above 55\deg\ (scaled by a factor of 3.7) is shown in red. The minimum in the geomagnetic $aa$ index is shown in black. 
All three indicators of the polar fields at cycle minima are very well correlated with the strength of the following cycle maxima.} \label{fig:PolarFieldPredictors} \end{figure} \section{Surface Flux Transport} The magnetic flux that emerges in active regions (sunspot groups) is transported across the surface of the Sun by convective motions (supergranules) and the axisymmetric flows -- differential rotation and meridional flow. The magnetic elements, with typical field strengths of a kilo-Gauss, are quickly carried to the boundaries of the convection cells where they form the Sun's magnetic network. The field in the network is concentrated in downdrafts where it becomes largely vertical. This vertical (radial) magnetic field is transported by the horizontal flows according to the flux transport equation: \begin{equation} \label{eqn:SFT} \frac{\partial B}{\partial t}= -\frac{1}{R_{\odot} \cos\lambda} \frac{\partial (u B)}{\partial \phi} - \frac{1}{R_{\odot} \cos\lambda} \frac{\partial (\cos \lambda\ v B)}{\partial \lambda}+ S \end{equation} \noindent where $B = B(\lambda,\phi,t)$ is the radial component of the magnetic field, $\lambda$ is the latitude, $\phi$ is the longitude, $u = u(\lambda,\phi,t)$ is the transport velocity in the longitudinal direction, $v = v(\lambda,\phi,t)$ is the transport velocity in the latitudinal direction, $R_{\odot}$ is the solar radius, and $S = S(\lambda,\phi,t)$ is the source term giving the emergence of flux in active regions. This equation is derived from the radial component of the magnetic induction equation. 
Most surface flux transport modelers approximate the effects of the transport done by the convective motions as a strictly diffusive process represented by a horizontal diffusion term: \begin{equation} \label{eqn:diffusion} \eta_{H} \left[\frac{1}{R_{\odot}^2 \cos{\lambda}} \frac{\partial}{\partial \lambda}\left(\cos\lambda \frac{\partial B}{\partial \lambda}\right) + \frac{1}{R_{\odot}^2 \cos^2{\lambda}}\frac{\partial^2 B}{\partial \phi^2}\right] \end{equation} \noindent where $\eta_{H}$ is a diffusivity. We model this transport explicitly using an evolving convective flow pattern derived from Doppler measurements of the spectrum of the convective flows \citep{Hathaway_etal15}. The convective flow pattern itself is transported by prescribed differential rotation and meridional flow profiles by solving for the changes in the complex spectral coefficients produced by those axisymmetric flows \citep[see][]{Hathaway_etal10}. In addition to this slow evolution of the convection pattern, we give finite lifetimes to the convection cells by adding small, random rotations to the complex phases of each spectral coefficient at each time step. The amplitudes of these phase perturbations increase with wavenumber in a manner that reproduces the decrease in the Pearson correlation coefficient of the Doppler pattern with time. Our AFT code solves the surface flux transport equation (Equation \ref{eqn:SFT}) using these convective flows along with the associated differential rotation and meridional flow velocities. While the solutions for the convective flow velocities are done in spectral space with 4th order Runge-Kutta time differencing, the solutions for the magnetic field evolution are done in physical space on a grid with 512 equi-spaced latitude points from pole-to-pole and 1024 equi-spaced longitude points at the equator. 
The number of longitude points drops to 512 at $\cos \lambda = 0.5$, to 256 at $\cos \lambda = 0.25$, and ultimately to 8 just equatorward of each pole, to keep the actual longitudinal grid spacing (in Mm) close to that used in latitude and in longitude at the equator (4.27 Mm). Equation \ref{eqn:SFT} is cast in flux conservative finite difference form with second order accurate spatial differencing and first order Euler time differencing. We use the convection spectrum of \citet{Hathaway_etal15} out to spherical harmonic degree $\ell = 512$ but with a Hanning taper on the amplitudes above $\ell = 384$, well beyond the spectral peak at $\ell = 120$ due to supergranules. The resulting convective motions have maximum velocities on the order of 1500 m s$^{-1}$. With the first order Euler time differencing, this limits the time steps to about 5 minutes. Advection equations like Equation \ref{eqn:SFT} are unstable in the sense that the transported quantity piles up where the flows converge to produce sharp features, but with Gibbs ringing around the flux concentrations. We mitigate this problem in the usual manner by adding a diffusion term (Equation \ref{eqn:diffusion}) with a diffusivity just large enough to minimize the Gibbs ringing. \section{Flux Transport Flows} The magnetic flux is transported by the near-surface flows, $u(\lambda,\phi,t)$ and $v(\lambda,\phi,t)$, that include both the non-axisymmetric convective flows described in the last section and the axisymmetric differential rotation and meridional flow. The spectrum of the non-axisymmetric flows is not expected to change much with time and the differential rotation profile is observed to change only slightly with solar activity. The meridional flow, however, is observed to vary substantially over the course of a sunspot cycle. 
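The flux-conservative differencing described above can be illustrated in a schematic 1D periodic analogue (our own sketch, not the AFT code itself; the AFT code works on the sphere with the grid just described). The key property of the conservative form is that, with periodic boundaries, the fluxes telescope and the total flux is preserved to rounding error even though advection sharpens features where the flow converges:

```python
import math

def step(B, u, eta, dx, dt):
    """One first-order Euler step of dB/dt = -d(uB)/dx + eta d2B/dx2 on a
    periodic 1D grid, in flux-conservative form (schematic analogue of the
    surface flux transport equation; grid and parameters are illustrative)."""
    n = len(B)
    out = [0.0] * n
    for i in range(n):
        ip, im = (i + 1) % n, (i - 1) % n
        adv = (u[ip] * B[ip] - u[im] * B[im]) / (2.0 * dx)   # centered d(uB)/dx
        diff = eta * (B[ip] - 2.0 * B[i] + B[im]) / dx ** 2  # explicit diffusion
        out[i] = B[i] + dt * (-adv + diff)
    return out
```

Evolving a Gaussian flux bump in a converging-diverging flow for many steps leaves the total flux unchanged to machine precision, which is the discrete counterpart of the global flux conservation built into the finite-difference form of the transport equation.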
We \citep{HathawayRightmire10, HathawayRightmire11, RightmireUpton_etal12} have measured the axisymmetric motions of the magnetic elements outside active regions by cross-correlating strips of data from magnetograms taken at 8-hour intervals from the ESA/NASA {\it Solar and Heliospheric Observatory} Michelson Doppler Imager (SOHO/MDI) \citep{Scherrer_etal95} and the NASA {\it Solar Dynamics Observatory} Helioseismic and Magnetic Imager (SDO/HMI) \citep{Scherrer_etal12}. We have shown \citep{HathawayRightmire11} that this measurement method provides accurate flow velocities with minimal systematic error. \begin{figure}[htbp] \centerline{\includegraphics[width=20pc]{MF_PlmHistoryCycle21_24f90.eps}} \caption{History of coefficients of Legendre polynomials fit to the meridional flow profiles measured from the motions of the magnetic elements. The $S_1^0$ coefficients are shown with filled circles. The $S_3^0$ coefficients are shown with open circles. Measurements by \citet{Komm_etal93b} in the 1980s are shown in red with 1$\sigma$ error bars. Measurements by \citet{HathawayRightmire10} from SOHO/MDI are shown in black with 2$\sigma$ error bars. Updated measurements by \citet{RightmireUpton_etal12} from SDO/HMI are shown in blue with 2$\sigma$ error bars. Possible future fit coefficients are shown with the lines of dots from 2016 to 2020. A continuation of the profile measured at the start of 2016 is shown in green. A projection to a cycle minimum profile in 2020 is shown in blue. } \label{fig:MeridionalFlowHistory} \end{figure} We fit the differential rotation and meridional flow profiles with Legendre polynomials and find that the profiles are well represented with the first 5 (differential rotation) or 6 (meridional flow) polynomials. In Figure \ref{fig:MeridionalFlowHistory} we show updated values for the coefficients of the two most significant components of the meridional flow profile. 
Earlier measurements by \citet{Komm_etal93b} using a similar measurement technique on ground-based magnetograms are shown with the red symbols, while our measurements from SOHO/MDI are shown in black and measurements from SDO/HMI are shown in blue. The meridional flow is fast at sunspot cycle minima (the smoothed sunspot number is shown with the dashed line in Figure \ref{fig:MeridionalFlowHistory}) and slow at cycle maxima. The cycle-related variation in the meridional flow amplitude is about 50\% from cycle maximum to cycle minimum. \section{The Simulations} We run a series of simulations in which we evolve the Sun's surface magnetic field starting from an initial magnetic field map from the end of January 2016 to final magnetic field maps for the end of December 2019 -- a date expected to be close to the Cycle 24/25 minimum. The quantities of interest for predicting Cycle 25 are the strengths of the polar fields and the axial dipole. We use historical data from the Wilcox Solar Observatory (WSO) for our comparisons with past cycles, so we adopt, to some extent, their definitions of these quantities. The polar fields, $B_N$ and $B_S$, are taken to be the radial field averaged over latitudes poleward of 55\deg, that is \begin{equation} \label{eqn:polarFields} B_N = \int_{55^\circ}^{90^\circ} \int_{0}^{2\pi} B(\lambda,\phi) d\phi\ \cos\lambda\ d\lambda/ \int_{55^\circ}^{90^\circ} \int_{0}^{2\pi} d\phi\ \cos\lambda\ d\lambda \end{equation} \noindent for the north, with a similar integral for $B_S$. Note that the polar fields reported by WSO are for the average line-of-sight field in their northernmost and southernmost pixels. 
Those numbers must be divided by a projection factor \begin{equation} \label{eqn:projectionFactor} \int_{55^\circ}^{90^\circ} \int_{-\pi/2}^{\pi/2} \cos\phi\ d\phi\ \cos^2\lambda\ d\lambda/ \int_{55^\circ}^{90^\circ} \int_{-\pi/2}^{\pi/2} d\phi\ \cos\lambda\ d\lambda = 0.2482 \end{equation} \noindent before they can be compared to the numbers we calculate from our magnetic maps of the radial field using Equation \ref{eqn:polarFields}. The axial dipole strength, $B_P$, is calculated using the spherical harmonic normalization used at WSO with \begin{equation} \label{eqn:dipoleStrength} B_p = \frac{3}{4\pi} \int_0^{2\pi} \int_{-\pi/2}^{\pi/2} B(\lambda,\phi) \sin\lambda \cos\lambda\ d\lambda\ d\phi \end{equation} \noindent Note that a different normalization ($\sqrt {3/2}$ vs. 3) was used in \citet{UptonHathaway14A}. The axial dipole strengths shown there must be multiplied by a factor of $\sqrt 6$ to give the same normalization as used here. The strengths of the polar fields and the axial dipole at the end of the simulations depend upon several factors: 1) the strength and structure of the field in the initial magnetic map, 2) the numbers, strengths, locations, and tilts of new active regions, 3) the strength and structure of the meridional flow, and 4) the diffusive effects of the convective flows. Our initial magnetic field map is constructed to be a faithful representation of the Sun's surface magnetic field as measured by MDI/HMI at the end of January 2016. We have assimilated full-disk magnetograms via Kalman filtering into our AFT code's magnetic maps. Magnetograms from SOHO/MDI were assimilated every 96 minutes starting in October 1998 (after communications with SOHO were renewed) and continued with magnetograms from SDO/HMI every 60 minutes starting in May 2010. Both the polar fields and the axial dipole derived from these maps are closely matched by those reported by the Wilcox Solar Observatory (WSO) as shown in later figures. 
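Both the polar-field average and the projection factor are straightforward to check numerically. The sketch below is our own illustration (the grid resolutions and the test field are arbitrary): it discretizes the $B_N$ integral with a midpoint rule and evaluates the line-of-sight projection integral, reproducing the quoted value of 0.2482.

```python
import math

def polar_average(B, lat0=55.0, nlat=400, nphi=90):
    """Midpoint-rule discretization of the B_N definition: area-weighted
    average of B(lat, phi) poleward of lat0 degrees (lat passed in radians)."""
    num = den = 0.0
    dlat = math.radians(90.0 - lat0) / nlat
    dphi = 2.0 * math.pi / nphi
    for i in range(nlat):
        lam = math.radians(lat0) + (i + 0.5) * dlat
        w = math.cos(lam) * dlat * dphi
        for j in range(nphi):
            num += B(lam, (j + 0.5) * dphi) * w
            den += w
    return num / den

def projection_factor(lat0=55.0, n=20000):
    """Line-of-sight projection factor for the WSO polar pixel; the phi
    integrals are done analytically (int_{-pi/2}^{pi/2} cos phi dphi = 2,
    and the interval length is pi)."""
    num = den = 0.0
    dlat = math.radians(90.0 - lat0) / n
    for i in range(n):
        lam = math.radians(lat0) + (i + 0.5) * dlat
        num += 2.0 * math.cos(lam) ** 2 * dlat
        den += math.pi * math.cos(lam) * dlat
    return num / den
```

For a dipole-like test field $B = \sin\lambda$ the polar average has the closed form $(1+\sin 55^\circ)/2 \approx 0.910$, which the midpoint rule reproduces, and the projection-factor integral indeed evaluates to $\approx 0.2482$.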
We do find, however, that a calibration coefficient of 1.26 must multiply the WSO polar field values to make them directly comparable to the MDI/HMI values. We use the numbers, strengths (area), and locations of active regions from cycle 14, 107 years earlier, as a good representation of the active regions that will appear in the next four years. Sunspot Cycle 14 (1901-1913) was similar in amplitude, shape, and hemispheric asymmetry (dominant north early in the cycle and dominant south late in the cycle) to the current Cycle 24. In \citet{UptonHathaway14A} we also used active regions from Cycle 14 to successfully predict the reversal of the axial dipole in Cycle 24 in early 2014 and the strength of the axial dipole in 2016 ($1.4 \pm 0.3$ G predicted vs. 1.2 G observed). The Royal Greenwich Observatory data for the active regions of Cycle 14 give the longitude, latitude, and total sunspot area for each active region for each day that the active region was visible. We use these quantities to give the longitude, latitude, and magnetic flux in each of the two (balanced) magnetic polarities comprising the active region. We convert the total sunspot area, $A$, of the sunspot group into total unsigned magnetic flux of the active region, $\Phi$, using the relationship given by \citet{Sheeley66} with \begin{equation} \label{eqn:area2flux} \Phi(A) = 7.0 \times 10^{19} A {\rm \ Mx} \end{equation} \noindent where the sunspot group area is given in millionths of the area of a solar hemisphere. We convert the longitude and area of the sunspot group into longitudes for each of the bipolar components using a longitudinal separation, $\Delta \phi$, between components given by \begin{equation} \label{eqn:longitudeExtent} \Delta \phi(A) = 3^\circ + 8^\circ \tanh (A/500) \end{equation} \noindent This is similar to the expression used in \citet{UptonHathaway14A} but is based on measurements of the centroid positions of the bipolar components of active regions in SOHO/MDI magnetograms. 
We convert the latitude and area of the sunspot group into latitudes for each of the bipolar components using the longitudinal separation given in Equation \ref{eqn:longitudeExtent} and the Joy's Law active region tilt given by \citet{StenfloKosovichev12} to give a latitudinal separation, $\Delta \lambda$, given by \begin{equation} \label{eqn:JoysLaw} \Delta \lambda(A, \lambda, \Phi) = \Delta \phi(A) \tan(32.\!\!^\circ1 \sin \lambda + \delta \lambda(\Phi)) \end{equation} \noindent where $\delta \lambda(\Phi)$ is a random variation in active region tilt with a full width at half-maximum given by \citet{StenfloKosovichev12} as \begin{equation} \label{eqn:JoysLawVariation} \delta \lambda(\Phi) = 25.\!\!^\circ3 + 154.\!\!^\circ7 [1.59/(1.59 + \Phi^{0.84})] \end{equation} \noindent with $\Phi$ here given in units of $10^{20}$ Mx. Note that this variability in active region tilt is a key source of variations in the Sun's polar fields and axial dipole. We use Equation \ref{eqn:JoysLawVariation} to produce a series of 8 different realizations for the tilts of the active regions we assimilate. This ensemble of realizations allows us to estimate the uncertainty in the predicted quantities. New active region magnetic flux is added daily provided the area of that active region has increased from its previous maximum. The meridional flow profiles are well represented with the Legendre polynomial fit coefficients shown in Figure \ref{fig:MeridionalFlowHistory}. We consider two possible future scenarios for the evolution of the meridional flow profile: 1) the meridional flow profile seen at the end of January 2016 continues without changing; 2) the meridional flow profile evolves to more closely match the profile found at the last cycle minimum. 
The fit coefficients for these two scenarios are represented by the symbols in Figure \ref{fig:MeridionalFlowHistory} that extend from 2016 to 2020 with the green symbols representing the continued profile and the blue symbols representing the projected (evolving) profile. \begin{figure}[htbp] \centerline{\includegraphics[width=20pc]{MF_Profiles.eps}} \caption{Two possible meridional flow profiles given by the future fit coefficients in Figure 2. The profile measured at the start of 2016 is shown in green and is referred to as the continued meridional flow. The profile projected for 2020 by the coefficient trends is shown in blue and is referred to as the projected meridional flow. } \label{fig:MeridionalFlowProfiles} \end{figure} The resulting profiles for 2020 are shown in Figure \ref{fig:MeridionalFlowProfiles} using the same color scheme -- green for continued and blue for projected. (Note that the primary difference in these profiles is a quenching or weakening of the meridional flow in the active latitudes in the 2016 continued profile.) We use each of these meridional flow profile scenarios with the 8 different realizations of the Joy's Law tilt in the assimilated active regions. The AFT code allows us to vary the properties of the convective motions (supergranules) that advect the magnetic elements and produce the magnetic network. We choose to keep the velocity spectrum and cell lifetimes fixed in all simulations but vary the details of the convective flows by using 8 different convective patterns produced by shifting the flow pattern by 8 multiples of $45^\circ$ in longitude. The convective motions are quenched in active regions in a realistic manner by producing a multiplicative mask for the flows that drops from unity across most of the surface to zero where the magnetic field intensity rises above a threshold (taken to be 50 G averaged over an area within a radius of 35 Mm for our spatial resolution). 
This threshold was determined by examining the masks to see that network and plage fields remained unmasked while the strong fields in and around sunspots are masked. This process reproduces the observed decay of active regions as noted by \citet{UgarteUrra_etal15}. While the convective motions are quenched in active regions, active region magnetic fields are still subject to differential rotation, meridional flow, and the diffusion imposed to reduce Gibbs ringing. The 8 different realizations for the convective motions are calculated for both of the meridional flow scenarios. \section{Cycle 25 Predictions} We run the AFT code for 32 different realizations of the Sun's magnetic field evolution over the next four years: first, the two meridional flow scenarios were run combined with 8 different realizations of Joy's Law tilt in the assimilated active regions, and then the two meridional flow scenarios were run combined with 8 different realizations of the evolving convective pattern. In the latter cases Joy's Law was used with no tilt variation. We save the full surface magnetic field maps at 8-hour intervals and construct magnetic butterfly diagrams (latitudinal profiles of the magnetic field averaged over longitude and time for each 27-day rotation of the Sun) from each of the 32 realizations. We use the magnetic butterfly diagrams to calculate the axial dipole strength and the polar fields averaged over latitudes above $55^\circ$ for each hemisphere as discussed in the previous section. This allows us to compare our results for these quantities with those reported from the WSO for earlier cycles. \begin{figure}[htbp] \centerline{\includegraphics[width=20pc]{AxialDipolePredictionConvectionVariationsRevised.eps}} \caption{The axial dipole strength predictions with variations in the convective flow pattern and the meridional flow profile. The axial dipole strength as reported by the WSO is shown in black. 
The axial dipole strength derived from our maps with assimilated MDI/HMI data is shown in red. The predicted axial dipole strengths obtained with the projected meridional flow profile and eight different realizations of the convective flow pattern are shown in blue. The predicted axial dipole strengths obtained with the continued meridional flow profile and eight different realizations of the convective flow pattern are shown in green. The axial dipole strengths at the time of sunspot cycle minimum are shown for the last three minima by the labeled vertical lines. } \label{fig:DipoleWithConvectionVariations} \end{figure} The axial dipole strength found for the 16 realizations with different meridional flow and convective patterns is shown in Figure \ref{fig:DipoleWithConvectionVariations}. This figure includes the WSO measurements (in black) going back to 1976 as well as the MDI/HMI measurements (in red) starting in late 1998. Note that at the time of Cycle 24 minimum the axial dipole from MDI/HMI is virtually identical to that from WSO with no evident need of any multiplicative calibration coefficient. These results clearly show an axial dipole strength in 2020 similar to that preceding Cycle 24 but substantially weaker than the strength seen with WSO preceding Cycle 22 and Cycle 23. While there is some variability in the axial dipole strength due to the variations in the convection pattern, the systematic variation with the meridional flow profile is more apparent in the offset between the green and blue lines. The slower continued meridional flow profile produces a stronger axial dipole. This behavior of surface flux transport has long been recognized \citep[e.g.][]{Sheeley05} and is attributed to increased cancellation of opposite leading polarity flux across the equator when the meridional flow at those latitudes is weaker. 
\begin{figure}[htbp] \centerline{\includegraphics[width=20pc]{AxialDipolePredictionJoysLawVariationsRevised.eps}} \caption{The axial dipole strength predictions with variations in the Joy's Law tilt of active regions and the meridional flow profile shown in the same manner as in Figure \ref{fig:DipoleWithConvectionVariations}. } \label{fig:DipoleWithJoysLawVariations} \end{figure} The axial dipole strength found for the 16 realizations with different meridional flow and Joy's Law tilt is shown in Figure \ref{fig:DipoleWithJoysLawVariations} with the same line colors as in Figure \ref{fig:DipoleWithConvectionVariations}. The significant difference with the Joy's Law variations is seen in the larger variation in the axial dipole strength in 2020 -- large enough to overpower the variations due to changes in the meridional flow. The variations in the Joy's Law tilt of active regions clearly have a more substantial impact on the axial dipole strength. Here again we see that the axial dipole strength at the start of Cycle 25 (the start of the year 2020) is similar to that at the start of Cycle 24 in late 2008 but substantially smaller than that at the start of Cycle 23 or Cycle 22. The average of all 32 realizations gives the axial dipole strength at the start of Cycle 25 as $+1.36 \pm 0.20$ G while the WSO gives -1.61 G at the start of Cycle 24, +3.21 G at the start of Cycle 23, and -4.40 G at the start of Cycle 22. The 15\% uncertainty in the predicted axial dipole strength is small enough to allow us to predict that Cycle 25 will be a small cycle like Cycle 24, certainly not as large as the moderate Cycle 23, and certainly not as small as the cycles in the Maunder Minimum. We can also address possible hemispheric asymmetry by comparing the average field intensity above some latitude for each hemisphere. The latitude chosen is rather arbitrary.
However, to continue our comparisons with the WSO measurements we will use their 55\deg as the boundary of the polar regions as discussed in Section 4. Note (as shown in Figure \ref{fig:PolarFieldPredictors}) that these polar fields are good predictors of the strength (sunspot area) in the associated hemisphere and give the correct sign of the hemispheric asymmetry in two out of three cycles. \begin{figure}[htbp] \centerline{\includegraphics[width=20pc]{PolarFieldsPredictionConvectionVariationsRevised.eps}} \caption{The polar field strength predictions with variations in the convective flow pattern and the meridional flow profile. The polar field strength derived from the WSO measurements is shown with a solid black line for the field in the north above 55\deg latitude and with a dashed line for the field in the south. The polar field strengths derived from maps with assimilated MDI/HMI data are shown in red. The predicted polar field strengths obtained with the projected meridional flow profile and eight different realizations of the convective flow pattern are shown in blue. The predicted polar field strengths obtained with the continued meridional flow profile and eight different realizations of the convective flow pattern are shown in green. The polar field strengths at the time of sunspot cycle minimum are shown for the last three minima by the labeled vertical lines. } \label{fig:PolesWithConvectionVariations} \end{figure} \begin{figure}[htbp] \centerline{\includegraphics[width=20pc]{PolarFieldsPredictionJoysLawVariationsRevised.eps}} \caption{The polar field strength predictions with variations in the Joy's Law tilt of active regions and the meridional flow profile shown in the same manner as in Figure \ref{fig:PolesWithConvectionVariations}. 
} \label{fig:PolesWithJoysLawVariations} \end{figure} The results of the polar field measurements are shown in Figure \ref{fig:PolesWithConvectionVariations} for the realizations with convective flow variations and in Figure \ref{fig:PolesWithJoysLawVariations} for the realizations with the Joy's Law tilt variations. The WSO polar fields are shown in black with a solid line for the northern hemisphere and a dashed line for the southern hemisphere. Our polar fields from MDI/HMI are similarly shown in red from October 1998 through January 2016. The realizations with the continued meridional flow profile are shown in green and the realizations with the projected meridional flow profile are shown in blue from January 2016 to the start of 2020. Note that the WSO (multiplied by the calibration coefficient of 1.26) and the MDI/HMI measurements agree very well where they overlap. These polar field predictions also indicate that Cycle 25 will be similar in strength to Cycle 24. The average of the absolute values of the northern and southern polar fields was 2.7 G at the start of Cycle 24 and is predicted to be $2.5 \pm 0.5$ G at the start of Cycle 25. If we define the hemispheric asymmetry as \begin{equation} \label{eqn:asymmetry} \frac{|N| - |S|} {0.5(|N| + |S|)} \end{equation} \noindent then the predicted asymmetry of Cycle 25 is -0.16 -- the southern hemisphere should dominate the north. Note, however, that the predicted asymmetries from the polar fields of Cycles 22, 23, and 24 were -0.20, +0.16, and -0.11 respectively while the observed asymmetries (in maximum sunspot area) were -0.09, -0.07, and -0.36. So this prediction of hemispheric asymmetry should be taken more lightly than the prediction for the strength of Cycle 25. Another feature worth noting in Figures \ref{fig:PolesWithConvectionVariations} and \ref{fig:PolesWithJoysLawVariations} is that we predict that the polar fields in the south will weaken in late 2016 and into 2017 before recovering. 
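The asymmetry index of Equation \ref{eqn:asymmetry} is straightforward to evaluate. As an illustration, polar-field magnitudes of roughly 2.3 G (north) and 2.7 G (south) -- a split we infer here to be consistent with the quoted 2.5 G mean, not a quoted result -- reproduce the predicted asymmetry of -0.16.

```python
def hemispheric_asymmetry(abs_N, abs_S):
    """Equation asymmetry: (|N| - |S|) / (0.5 * (|N| + |S|)).
    Negative values mean the southern hemisphere dominates."""
    return (abs_N - abs_S) / (0.5 * (abs_N + abs_S))

# hypothetical polar-field magnitudes consistent with the quoted
# mean of 2.5 G and asymmetry of -0.16
asym = hemispheric_asymmetry(2.3, 2.7)   # -0.4 / 2.5 = -0.16
```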
[Note that these calculations were completed in early 2016.] This weakening is seen in all of our realizations and is attributed to magnetic field patterns that are already on the Sun in our initial magnetic map. The peak in solar activity in early 2014 was followed by a precipitous drop in late 2014. This drop in activity left low latitude leading polarity flux uncanceled and allowed both polarities to be carried to the poles -- higher latitude following polarity first and lower latitude leading polarity later. This can be seen in all magnetic butterfly diagrams constructed for this time period. The slow meridional flow and random walk by the convective motions require 1-2 years to carry that magnetic flux into the polar regions. \section{Conclusions} We have used our AFT code to predict the Sun's magnetic field at the start of 2020 as a means of predicting the amplitude and hemispheric asymmetry of Cycle 25 based on the Sun's polar fields at cycle minimum. While surface flux transport has been shown to reproduce the magnetic patterns on the Sun in some detail given knowledge of the active region sources and transport flows, we do not have detailed knowledge of what those quantities will be in the future. We do, however, provide knowledgeable estimates and use them, along with their known variability, to produce a series of 32 realizations for the evolution of the Sun's magnetic field from the end of January 2016 to the start of January 2020. We find that the polar fields, as given by the axial dipole strength and the average field strength above 55\deg, indicate that Cycle 25 will be similar in size to (or slightly smaller than) the current small cycle, Cycle 24. We also find (weaker) evidence that the southern hemisphere will be more active than the north. Small cycles, like Cycle 24, start late and leave behind long cycles with deep extended minima \citep{Hathaway15}. We expect a similar deep, extended minimum for the Cycle 24/25 minimum in 2020.
An important result from these simulations of future magnetic fields is the uncertainty produced by the stochastic variations in both the active region tilt and the convective motions. While these variations directly limit the accuracy of our polar field predictions to about 15\% after 4 years of simulation, they also suggest that these stochastic variations limit the predictability of the solar cycle itself. The accumulated uncertainty over the course of just one sunspot cycle is expected to make long-range predictions (more than one cycle into the future) very unreliable. We note that, while writing this paper, a very similar paper was published by \citet{Cameron_etal16}. They also use a surface flux transport code to predict the Sun's axial dipole strength at the next sunspot cycle minimum and find nearly identical results (although they refer to Cycle 24 as a moderate-sized cycle when it is clearly smaller than average). The primary differences in our methods include: 1) we use detailed convective motions while they approximate the process with a diffusivity, 2) they use a statistical model for the emerging active regions while we use a specific example -- Cycle 14, and 3) they explore the uncertainty in the initial conditions. These differences mean that we can determine the uncertainty associated with stochastic variations in the convection pattern while they can better determine the uncertainties associated with different locations and sizes of active regions. We both show the reliability of the method, they on their postdiction of the previous three cycles, we on our published prediction of the axial dipole over the last four years. The fact that we get nearly identical predictions is gratifying. \begin{acknowledgments} DHH was supported by a grant from the NASA Heliophysics Grand Challenge Program (NNX14A083G P. Martens, Georgia State University, PI). LAU was supported by a grant from the NASA Heliophysics Guest Investigator Program (NNH16AC71I I.
Ugarte-Urra, George Mason University, PI). The HMI data used are courtesy of the NASA/SDO and the HMI science team. The SOHO/MDI project was supported by NASA grant NAG5-10483 to Stanford University. SOHO is a project of international cooperation between ESA and NASA. Stanford University operates the WSO with funding currently provided by NASA. The authors benefited from many useful discussions with Nagi Mansour, Alan Wray, Thomas Hartlep, Robert Cameron, and Jie Jiang, and from useful comments from an anonymous referee. The SOHO/MDI magnetograms used in this study are available from: http://soi.stanford.edu/magnetic/index5.html. The SDO/HMI magnetograms used in this study are available from: http://jsoc.stanford.edu/ajax/lookdata.html. The WSO data on polar fields and the axial dipole strength are available from: http://wso.stanford.edu. \end{acknowledgments}
\section{Introduction and main results} The following modification of the Sherrington-Kirkpatrick model \cite{SK} was introduced recently in \cite{MS}. Given $N\geq 1$, let us denote by \begin{equation} \sigma = (\sigma_1,\ldots,\sigma_N) \in \varSigma_N = \bigl\{-1,+1 \bigr\}^N \end{equation} a configuration of $N$ Ising spins. Consider a finite set ${\cal S}$ that will be fixed throughout the paper and, in particular, it does not change with $N$. We emphasize this because we will often omit the dependence of other objects on $N$. The elements of ${\cal S}$ will be called species and will be denoted by $s$ or $t$. Let us divide all spin indices into disjoint groups indexed by the species, \begin{equation} I = \bigl\{1\, ,\ldots,\, N\bigr\} = \bigcup_{s\in{\cal S}} I_s. \label{species} \end{equation} These sets will, obviously, vary with $N$ and we will assume that their cardinalities $N_s =|I_s|$ satisfy \begin{equation} \lim_{N\to\infty} \frac{N_s}{N} = \lambda_s \in (0,1) \,\mbox{ for all }\, s\in{\cal S}. \end{equation} For simplicity of notation, we will omit the dependence of $\lambda_s^N := N_s/N$ on $N$ and will simply write $\lambda_s$. The Hamiltonian proposed in \cite{MS} resembles the usual SK Hamiltonian, \begin{equation} H_N(\sigma) = \frac{1}{\sqrt{N}} \sum_{i,j =1}^N g_{ij}\sigma_i \sigma_j, \label{SKH} \end{equation} where the interaction parameters $(g_{ij})$ are independent Gaussian random variables, only now they are not necessarily identically distributed but, instead, satisfy \begin{equation} \mathbb{E} g_{ij}^2 = \Delta_{st}^2 \,\mbox{ if }\, i\in I_s, j\in I_t \,\mbox{ for }\, s,t\in{\cal S}. \label{Delta} \end{equation} In other words, the variance of the interaction between $i$ and $j$ depends only on the species they belong to. 
We will make the same assumptions on the matrix $\Delta^2 = (\Delta_{st}^2)_{s,t\in{\cal S}}$ as in \cite{MS}, namely, that it is symmetric and nonnegative definite, \begin{equation} \Delta_{st}^2 = \Delta_{ts}^2 \,\mbox{ for all }\, s,t\in{\cal S} \,\mbox{ and }\, \Delta^2 \geq 0. \label{assumption} \end{equation} Let us denote the overlap of the restrictions of two spin configurations to a given species $s\in{\cal S}$ by \begin{equation} R_s(\sigma^1,\sigma^2) = \frac{1}{N_s} \sum_{i\in I_s} \sigma_i^1 \sigma_i^2. \label{overlap} \end{equation} Then it is easy to see that the covariance of the Gaussian Hamiltonian (\ref{SKH}) is given by \begin{equation} \frac{1}{N} \mathbb{E} H_N(\sigma^1)H_N(\sigma^2) = \sum_{s,t\in{\cal S}} \Delta_{st}^2 \lambda_s \lambda_t R_s(\sigma^1,\sigma^2) R_t(\sigma^1,\sigma^2). \label{Cov} \end{equation} This already gives some idea about the main new difficulty one encounters in this model compared to the classical Sherrington-Kirkpatrick model. Namely, now we will need to understand the joint distributions of the overlap arrays in the thermodynamic limit simultaneously for all species $s\in{\cal S}$. Our main goal will be to compute the limit of the free energy in this model, \begin{equation} F_N = \frac{1}{N}\hspace{0.3mm} \mathbb{E} \log Z_N, \,\mbox{ where }\, Z_N = \sum_{\sigma\in\Sigma_N } \exp H_N(\sigma). \label{FE} \end{equation} Notice that we do not consider the inverse temperature parameter here, because it can be absorbed into the definition of the matrix $\Delta^2$. One can also consider external fields that depend only on the species but, since they do not affect any arguments in the paper, for simplicity of notation we will omit them. Under the assumption (\ref{assumption}), the authors in \cite{MS} proved, using the Guerra-Toninelli interpolation \cite{GuerraToninelli}, that the free energy has a limit.
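The covariance identity (\ref{Cov}) is a finite sum and can be confirmed directly on a toy example. The sketch below (our code, not from \cite{MS}) expands $\mathbb{E} H_N(\sigma^1)H_N(\sigma^2)$ over $\mathbb{E}\, g_{ij}^2 = \Delta_{st}^2$ by brute force for a small two-species system and compares it with the right-hand side.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
species = np.array([0] * 6 + [1] * 2)        # I_0, I_1 with N_0 = 6, N_1 = 2
Delta2 = np.array([[1.0, 0.5],               # symmetric, positive definite
                   [0.5, 2.0]])
s1 = rng.choice([-1, 1], size=N)             # two spin configurations
s2 = rng.choice([-1, 1], size=N)

eps = s1 * s2                                # sigma^1_i * sigma^2_i
# left side: (1/N) E H(sigma^1) H(sigma^2), using E g_ij^2 = Delta2[s, t]
lhs = sum(Delta2[species[i], species[j]] * eps[i] * eps[j]
          for i in range(N) for j in range(N)) / N ** 2

# right side of (Cov): sum over species of Delta2 * lam * lam * R_s * R_t
lam = np.array([(species == s).mean() for s in range(2)])
R = np.array([eps[species == s].mean() for s in range(2)])
rhs = sum(Delta2[s, t] * lam[s] * lam[t] * R[s] * R[t]
          for s in range(2) for t in range(2))
```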
They also proposed the following analogue of the Parisi formula \cite{Parisi79, Parisi} for the free energy, which was proved for the original SK model by Talagrand in \cite{TPF} (see also \cite{SG2}). Given integer $r\geq 1$, consider a sequence \begin{equation} 0=\zeta_{-1}< \zeta_0 <\ldots < \zeta_{r-1} < \zeta_r = 1 \label{zetas} \end{equation} and, for each $s\in {\cal S}$, a sequence \begin{equation} 0=q^s_0\leq q^s_1 \leq \ldots \leq q^s_{r-1}\leq q^s_r =1. \label{qs} \end{equation} We will also consider two types of non-decreasing combinations of these sequences as follows. For $0\leq \ell\leq r$, we define \begin{equation} Q_\ell = \sum_{s,t\in{\cal S}} \Delta_{st}^2 \lambda_s \lambda_t q^s_\ell q^t_\ell \,\ \mbox{ and }\,\ Q^s_\ell = 2\sum_{t\in{\cal S}} \Delta_{st}^2 \lambda_t q^t_\ell \,\, \mbox{ for }\,\, s\in {\cal S}. \label{Qs} \end{equation} The meaning of these definitions will become clear when we look at the covariance of the cavity fields in the Aizenman-Sims-Starr scheme in Section \ref{Sec5label}. Given these sequences, let us consider i.i.d. standard Gaussian random variables $(\eta_\ell)_{1\leq \ell\leq r}$ and, for $s\in{\cal S}$, define \begin{equation} X^s_r = \log {\mbox{\rm ch}}\hspace{0.3mm} \sum_{1\leq \ell\leq r} \eta_\ell \bigl(Q^s_{\ell} - Q^s_{\ell-1}\bigr)^{1/2}. \label{Xr} \end{equation} Recursively over $0\leq \ell\leq r-1,$ we define \begin{equation} X^s_\ell=\frac{1}{\zeta_\ell}\log \mathbb{E}_\ell\exp \zeta_\ell X^s_{\ell+1}, \label{Xelll} \end{equation} where $\mathbb{E}_\ell$ denotes the expectation with respect to $\eta_{\ell+1}$ only. Notice that $X^s_0$ are non-random. Finally, we define the analogue of the Parisi functional by \begin{equation} {\cal P}(\zeta,q) = \log 2 + \sum_{s\in{\cal S}} \lambda_s X^s_0 -\frac{1}{2} \sum_{0\leq \ell\leq r-1} \zeta_\ell \bigl(Q_{\ell+1} - Q_{\ell}\bigr). \label{Pzeta} \end{equation} The main result of the paper is the following. 
\begin{theorem}\label{ThFE} Under the assumption (\ref{assumption}), the limit of the free energy is given by \begin{equation} \lim_{N\to\infty} F_N = \inf {\cal P}(\zeta,q), \label{Parisi} \end{equation} where the infimum is taken over $r\geq 1$ and the sequences (\ref{zetas}) and (\ref{qs}). \end{theorem} In \cite{MS}, the inequality $F_N \leq \inf {\cal P}(\zeta,q)$ was proved under the assumption (\ref{assumption}) using the analogue of Guerra's replica symmetry breaking interpolation \cite{Guerra}. For convenience, we will reproduce this result in Section \ref{Sec2label} in the formalism of the Ruelle probability cascades, which will also allow us to introduce several objects that will be used in the subsequent sections. In this paper we will prove the matching lower bound using the analogue of the Aizenman-Sims-Starr scheme \cite{AS2} and, in this part, the assumption $\Delta^2\geq 0$ will not be needed. The approach was applied previously in various situations in \cite{PPF} and \cite{WKC} and is based on the ultrametricity result in \cite{PUltra}. As we mentioned above, in the multi-species model we encounter a new non-trivial obstacle. Namely, we need to describe the joint distribution of the overlap arrays simultaneously for all species and, even though it is clear that the marginal distribution of each array will be generated by the Ruelle probability cascades as in the SK model, it is not at all clear what their joint distribution should be. We will develop an approach to overcome this obstacle in Sections \ref{Sec3label} and \ref{Sec4label}. In Section \ref{Sec3label} we will prove a multi-species version of the Ghirlanda-Guerra identities, which are similar to the original Ghirlanda-Guerra identities \cite{GG}, but apply to generic overlaps that may depend on the overlaps of all species. 
Using these identities, we will show in Section \ref{Sec4label} that the overlaps of different species are synchronized in the sense that they are deterministic functions of the overlaps of the whole system. This will describe the joint distribution of all overlaps and allow us to obtain the lower bound in Section \ref{Sec5label} in a straightforward way using the Aizenman-Sims-Starr scheme. \smallskip \noindent \textbf{Acknowledgement.} The author would like to thank Wei-Kuo Chen for several helpful discussions and comments about the paper. \section{Guerra's replica symmetry breaking bound}\label{Sec2label} Given $r\geq 1$, let $(v_\alpha)_{\alpha\in \mathbb{N}^r}$ be the weights of the Ruelle probability cascades \cite{Ruelle} corresponding to the parameters (\ref{zetas}) (see e.g. Section 2.3 in \cite{SKmodel} for the definition). For $\alpha, \beta\in \mathbb{N}^r$, we denote \begin{equation} \alpha\wedge \beta = \min\bigl\{0\leq \ell\leq r \ |\ \alpha_1= \beta_1, \ldots, \alpha_{\ell} = \beta_{\ell}, \alpha_{\ell+1} \not = \beta_{\ell+1} \bigr\}, \end{equation} where $\alpha\wedge \beta =r$ if $\alpha=\beta$. Since the sequences defined in (\ref{Qs}) are non-decreasing, we can consider Gaussian processes $C^s(\alpha)$ for $s\in{\cal S}$ and $D(\alpha)$ both indexed by $\alpha\in\mathbb{N}^r$ with the covariances \begin{equation} \mathbb{E} C^s(\alpha) C^s(\beta) = Q^s_{\alpha\wedge\beta} \,\,\mbox{ and }\,\, \mathbb{E} D(\alpha) D(\beta) = Q_{\alpha\wedge\beta}. \label{CD} \end{equation} These are the usual Gaussian fields that accompany the construction of the Ruelle probability cascades (see e.g. Section 2.3 in \cite{SKmodel}). For each $s\in{\cal S}$ and each $i\in I_s$, let $C_i(\alpha)$ be a copy of the process $C^s(\alpha)$ and suppose that all these processes are independent of each other and of $D(\alpha)$. 
For $0\leq x\leq 1$, consider an interpolating Hamiltonian defined on $\Sigma_N\times \mathbb{N}^r$ by \begin{equation} H_{N,x}(\sigma,\alpha) = \sqrt{x}\, H_N(\sigma) + \sqrt{1-x}\hspace{0.3mm} \sum_{i=1}^N \sigma_i C_{i}(\alpha) +\sqrt{x} \sqrt{N} D(\alpha) \label{HNx} \end{equation} and the corresponding interpolating free energy \begin{equation} \varphi(x)=\frac{1}{N}\hspace{0.3mm} \mathbb{E}\log \sum_{\sigma,\alpha} v_{\alpha} \exp H_{N, x}(\sigma,\alpha). \label{phix} \end{equation} Then it is easy to check the following. \begin{lemma} Under the assumption (\ref{assumption}), the derivative of $\varphi(x)$ in (\ref{phix}) satisfies $\varphi'(x)\leq 0$. \end{lemma} \textbf{Proof.} Let us denote by $\langle\, \cdot\,\rangle_x$ the average with respect to the Gibbs measure $\Gamma_x(\sigma,\alpha)$ on $\Sigma_N\times \mathbb{N}^r$ defined by $$ \Gamma_x(\sigma,\alpha) \sim v_\alpha \exp H_{N, x}(\sigma,\alpha). $$ Then, obviously, for $0<x<1$, $$ \varphi'(x) = \frac{1}{N}\mathbb{E} \Bigl\langle \frac{\partial H_{N, x}(\sigma,\alpha)}{\partial x} \Bigr\rangle_x. $$ It is easy to check from the above definitions that \begin{align*} & \frac{1}{N}\, \mathbb{E} \frac{\partial H_{N, x}(\sigma^1,\alpha^1)}{\partial x} H_{N, x}(\sigma^2,\alpha^2) \\ & = \frac{1}{2}\sum_{s,t\in{\cal S}} \Delta_{st}^2 \lambda_s \lambda_t \Bigl( R_s(\sigma^1,\sigma^2) R_t(\sigma^1,\sigma^2) -2 R_s(\sigma^1,\sigma^2) q^t_{\alpha^1\wedge\alpha^2} + q^s_{\alpha^1\wedge\alpha^2}q^t_{\alpha^1\wedge\alpha^2} \Bigr). \end{align*} In particular, this is zero when $(\sigma^1,\alpha^1)=(\sigma^2,\alpha^2)$ and, in general, can be rewritten as a quadratic form $(\Delta^2(R-q),(R-q))/2$, where $$ R= \bigl( \lambda_s R_s(\sigma^1,\sigma^2) \bigr)_{s\in{\cal S}},\,\, q= \bigl( \lambda_s q^s_{\alpha^1\wedge\alpha^2} \bigr)_{s\in{\cal S}}. $$ Notice that here we used the symmetry of the matrix $\Delta^2$. Finally, usual Gaussian integration by parts then gives (see e.g. 
Lemma 1.1 in \cite{SKmodel}) $$ \varphi'(x) = - \frac{1}{2}\mathbb{E} \bigl\langle \bigl(\Delta^2(R-q),(R-q) \bigr) \bigr\rangle_x \leq 0, $$ where the last inequality follows from the assumption $\Delta^2\geq 0$ in (\ref{assumption}). \qed \medskip \noindent The lemma implies that $\varphi(1)\leq \varphi(0).$ It is easy to see that $$ \varphi(0) = \log 2 + \frac{1}{N}\hspace{0.3mm} \mathbb{E}\log \sum_{\alpha\in\mathbb{N}^r} v_{\alpha} \prod_{i\leq N} {\mbox{\rm ch}}\, C_{i}(\alpha) $$ and $$ \varphi(1) = F_N + \frac{1}{N}\hspace{0.3mm} \mathbb{E}\log \sum_{\alpha\in\mathbb{N}^r} v_{\alpha} \exp \sqrt{N} D(\alpha). $$ Now, standard properties of the Ruelle probability cascades imply that (see e.g. the proof of Lemma 3.1 in \cite{SKmodel}), \begin{equation} \frac{1}{N}\hspace{0.3mm} \mathbb{E}\log \sum_{\alpha\in\mathbb{N}^r} v_{\alpha} \prod_{i\leq N} {\mbox{\rm ch}}\, C_{i}(\alpha) = \frac{1}{N}\hspace{0.3mm}\sum_{1\leq i\leq N} \mathbb{E}\log \sum_{\alpha\in\mathbb{N}^r} v_{\alpha} {\mbox{\rm ch}}\, C_{i}(\alpha) = \sum_{s\in{\cal S}} \lambda_s X^s_0 \label{simp1} \end{equation} and \begin{equation} \frac{1}{N}\hspace{0.3mm} \mathbb{E}\log \sum_{\alpha\in\mathbb{N}^r} v_{\alpha} \exp \sqrt{N} D(\alpha) = \frac{1}{2} \sum_{0\leq \ell\leq r-1} \zeta_\ell \bigl(Q_{\ell+1} - Q_{\ell}\bigr). \label{simp2} \end{equation} Recalling (\ref{Pzeta}), the inequality $\varphi(1)\leq \varphi(0)$ can be written as $F_N\leq {\cal P}(\zeta,q)$, which yields the upper bound in (\ref{Parisi}). \section{Multi-species Ghirlanda-Guerra identities} \label{Sec3label} In order to prepare for the proof of the lower bound, we need to obtain some strong coupling properties for the overlaps in different species, which will be achieved in the next section using a multi-species version of the Ghirlanda-Guerra identities that we will now prove. Let us consider a countable dense subset ${\cal W}$ of $[0,1]^{|{\cal S}|}$. 
For a vector \begin{equation} w=(w_s)_{s\in {\cal S}} \in {\cal W}, \end{equation} let $s_i(w)=\sqrt{w_s}$ for $i\in I_s$ and $s\in{\cal S}$, and consider the following $p$-spin Hamiltonian, \begin{equation} h_{N,w,p}(\sigma) = \frac{1}{N^{p/2}} \sum_{1\leq i_1,\ldots, i_p\leq N} g_{i_1\ldots i_p}^{w,p}\, \sigma_{i_1}s_{i_1}(w)\,\cdots\, \sigma_{i_p}s_{i_p}(w), \label{hNwp} \end{equation} where $g_{i_1\ldots i_p}^{w,p}$ are i.i.d. standard Gaussian random variables independent for all combinations of indices $p\geq 1,w\in {\cal W}$ and $i_1,\ldots, i_p\in \{1,\ldots, N\}$. If we define \begin{equation} R_w\bigl(\sigma^1,\sigma^2\bigr) = \sum_{s\in{\cal S}} \lambda_s w_s R_s\bigl(\sigma^1,\sigma^2\bigr), \label{Rw} \end{equation} where $R_s(\sigma^1,\sigma^2)$ was defined in (\ref{overlap}), then it is easy to check that the covariance of (\ref{hNwp}) is \begin{equation} \mathbb{E} h_{N,w,p}(\sigma^1)h_{N,w,p}(\sigma^2) = R_w\bigl(\sigma^1,\sigma^2\bigr)^p. \label{CovhNpw} \end{equation} Since the set ${\cal W}$ is countable, we can consider some one-to-one function $j:{\cal W}\to\mathbb{N}$. Then, we let $x_{w,p}$ for $p\geq 1,w\in {\cal W}$ be i.i.d. random variables uniform on the interval $[1,2]$ and define a Hamiltonian \begin{equation} h_{N}(\sigma) = \sum_{w\in{\cal W}} \sum_{p\geq 1} 2^{-j(w)-p} x_{w,p}\, h_{N,w,p}(\sigma). \label{hNw} \end{equation} Note that, conditionally on $x=(x_{w,p})_{p\geq 1,w\in {\cal W}}$, this is a Gaussian process and its variance is bounded by $4$. The Hamiltonian $h_N(\sigma)$ will play the role of a perturbation Hamiltonian, which means that, instead of $H_N(\sigma)$ in (\ref{SKH}), from now on we will consider the perturbed Hamiltonian \begin{equation} H_N^{\mathrm{pert}}(\sigma) = H_N(\sigma) + s_N h_N(\sigma), \label{Hpert} \end{equation} where $s_N=N^{\gamma}$ for any $1/4<\gamma<1/2$.
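As a quick sanity check of the covariance formula (\ref{CovhNpw}) outside the argument: since the coefficients $g^{w,p}_{i_1\ldots i_p}$ are i.i.d. standard Gaussian, only matching index tuples contribute to the covariance, which can then be computed exactly for toy sizes. The partition, the point $w$ and the sizes $N=4$, $p=3$ below are arbitrary illustrative choices.

```python
import itertools
import math
import random

random.seed(0)
# Illustrative choices: N = 4 coordinates split into two species, p = 3.
species = {"a": [0, 1], "b": [2, 3]}
N, p = 4, 3
w = {"a": 0.3, "b": 0.8}                       # a point w in [0,1]^{|S|}
lam = {s: len(I) / N for s, I in species.items()}
s_of = {i: s for s, I in species.items() for i in I}

sigma1 = [random.choice([-1, 1]) for _ in range(N)]
sigma2 = [random.choice([-1, 1]) for _ in range(N)]

# E h_{N,w,p}(sigma1) h_{N,w,p}(sigma2): the i.i.d. Gaussian coefficients kill
# all non-matching index tuples, leaving
#   N^{-p} * sum over (i_1,...,i_p) of prod_k sigma1_{i_k} sigma2_{i_k} w_{s(i_k)}.
cov = sum(
    math.prod(sigma1[i] * sigma2[i] * w[s_of[i]] for i in idx)
    for idx in itertools.product(range(N), repeat=p)
) / N**p

# R_w(sigma1, sigma2) = sum_s lambda_s w_s R_s(sigma1, sigma2)
R = {s: sum(sigma1[i] * sigma2[i] for i in I) / len(I) for s, I in species.items()}
R_w = sum(lam[s] * w[s] * R[s] for s in species)

assert abs(cov - R_w**p) < 1e-12               # covariance equals R_w^p
```

The check is deterministic: both sides reduce to the same power of $N^{-1}\sum_i \sigma^1_i\sigma^2_i w_{s(i)}$.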
First of all, it is easy to see, using Jensen's inequality on each side, that \begin{align} \frac{1}{N}\hspace{0.3mm} \mathbb{E}\log \sum_{\sigma\in\varSigma_N} \exp H_N(\sigma) & \leq \ \frac{1}{N}\hspace{0.3mm} \mathbb{E}\log \sum_{\sigma\in\varSigma_N} \exp H_N^{\mathrm{pert}}(\sigma) \nonumber \\ & \leq \ \frac{1}{N}\hspace{0.3mm} \mathbb{E}\log \sum_{\sigma\in\varSigma_N} \exp H_N(\sigma) + \frac{2s_N^2}{N}, \label{compareF} \end{align} and, since $\lim_{N\to\infty} N^{-1} s_N^2 = 0,$ the perturbation term does not affect the limit of the free energy. As in the Sherrington-Kirkpatrick and mixed $p$-spin models, the purpose of adding the perturbation term is to obtain the Ghirlanda-Guerra identities for the Gibbs measure \begin{equation} G_N(\sigma) = \frac{\exp\, H_N^{\mathrm{pert}} (\sigma)}{Z_N} \,\mbox{ where }\, Z_N = \sum_{\sigma\in\varSigma_{N}} \exp\, H_N^{\mathrm{pert}}(\sigma), \label{GNpert} \end{equation} corresponding to the perturbed Hamiltonian (\ref{Hpert}). We will denote the average with respect to $G_N^{\otimes \infty}$ by $\langle\, \cdot\, \rangle$. Now, given $n\geq 2,$ let $$ R^n = \bigl(R_s(\sigma^\ell,\sigma^{\ell'})\bigr)_{s\in{\cal S}, \ell,\ell'\leq n} $$ and consider an arbitrary bounded measurable function $f=f(R^n)$. For $p\geq 1$ and $w\in {\cal W}$, let \begin{equation} \varDelta(f,n,w, p) = \Bigl| {\mathbb{E}} \bigl\langle f R_w\bigl(\sigma^1,\sigma^{n+1}\bigr)^p \bigr\rangle - \frac{1}{n} {\mathbb{E}} \bigl\langle f \bigr\rangle \hspace{0.3mm} {\mathbb{E}} \bigl\langle R_w\bigl(\sigma^1,\sigma^2\bigr)^p \bigr\rangle - \frac{1}{n}\sum_{\ell=2}^{n}{\mathbb{E}} \bigl\langle f R_w\bigl(\sigma^1,\sigma^\ell \bigr)^p\bigr\rangle \Bigr|, \label{GGfinite} \end{equation} where ${\mathbb{E}}$ denotes the expectation conditionally on the i.i.d. uniform sequence $x=(x_{w,p})_{p\geq 1,w\in {\cal W}}$. If we denote by $\mathbb{E}_x$ the expectation with respect to $x$ then the following holds. 
\begin{theorem}\label{ThGG} For any $n\geq 2$ and any bounded measurable function $f=f(R^n)$, \begin{equation} \lim_{N\to\infty} \mathbb{E}_x \hspace{0.3mm} \varDelta(f,n,w,p) = 0 \label{GGxlim} \end{equation} for all $p\geq 1$ and $w\in{\cal W}$. \end{theorem} \textbf{Proof.} The proof is identical to the one of Theorem 3.2 in \cite{SKmodel}. For a given $p\geq 1$ and $w\in{\cal W}$, the equation (\ref{GGxlim}) is obtained by utilizing the term $h_{N,w,p}(\sigma)$ in the perturbation (\ref{hNw}). \qed \medskip \smallskip \noindent Theorem \ref{ThGG} implies that we can choose a non-random sequence $x^N=(x^N_{w,p})_{p\geq 1,w\in {\cal W}}$ changing with $N$ such that \begin{equation} \lim_{N\to\infty} \hspace{0.3mm} \varDelta(f,n,w,p) = 0 \label{GGxlim2} \end{equation} for the Gibbs measure $G_N$ with the parameters $x$ in the perturbation Hamiltonian (\ref{hNw}) equal to $x^N$ rather than random. In fact, the choice of $x^N$ will be made below in a special way to coordinate with the Aizenman-Sims-Starr scheme. In this section, we will simply assume that we have any such sequence $x^N$. Moreover, let us now consider any subsequence $(N_k)_{k\geq 1}$ along which the array $$ \bigl(R_s(\sigma^\ell,\sigma^{\ell'}) \bigr)_{s\in{\cal S},\ell,\ell'\geq 1} $$ of the overlaps within species for infinitely many replicas $(\sigma^\ell)_{\ell\geq 1}$ converges in distribution under the measure $\mathbb{E} G_N^{\otimes\infty}$. Again, later we will be interested in a special choice of such subsequence. Let \begin{equation} (R^s_{\ell,\ell'})_{s\in{\cal S},\ell,\ell'\geq 1} \label{Rsfirst} \end{equation} be the array with the limiting distribution and, similarly to (\ref{Rw}), define \begin{equation} R^w_{\ell,\ell'} = \sum_{s\in{\cal S}} \lambda_s w_s R^s_{\ell,\ell'}.
\label{Rwlim} \end{equation} Then the equations (\ref{GGfinite}) and (\ref{GGxlim2}) imply that the limiting array satisfies \begin{equation} \mathbb{E} f(R^n) \bigl(R^w_{1,n+1} \bigr)^p = \frac{1}{n}\mathbb{E} f(R^n) \hspace{0.3mm} \mathbb{E} \bigl(R^w_{1,2} \bigr)^p + \frac{1}{n}\sum_{\ell=2}^{n}\mathbb{E} f(R^n) \bigl(R^w_{1,\ell} \bigr)^p, \label{GGwp} \end{equation} where, of course, now $R^n=(R^s_{\ell,\ell'})_{s\in{\cal S}, \ell,\ell'\leq n}$. From this we will deduce the following multi-species form of the Ghirlanda-Guerra identities for such limiting arrays. Let us consider an array \begin{equation} Q_{\ell,\ell'} = \varphi \bigl( (R_{\ell,\ell'}^s)_{s\in{\cal S}}\bigr) \label{Qphi} \end{equation} for any bounded measurable function $\varphi$ of the overlaps in different species. \begin{theorem}\label{ThGGms} For any $n\geq 2$ and any bounded measurable function $f=f(R^n)$, \begin{equation} \mathbb{E} f(R^n) Q_{1,n+1} = \frac{1}{n}\mathbb{E} f(R^n) \hspace{0.3mm} \mathbb{E} Q_{1,2} + \frac{1}{n}\sum_{\ell=2}^{n}\mathbb{E} f(R^n) Q_{1,\ell}. \label{GGms} \end{equation} \end{theorem} \textbf{Proof.} Since the equation (\ref{GGwp}) holds for all $w\in{\cal W}$, both sides are continuous in $w$ and ${\cal W}$ is dense in $[0,1]^{|{\cal S}|}$, the equation (\ref{GGwp}) holds for all $w\in [0,1]^{|{\cal S}|}$. Take any integers $p_s\geq 0$ for $s\in {\cal S}$ and let $p=\sum_{s\in{\cal S}} p_s$. If we recall the definition of $R^w_{\ell,\ell'}$ in (\ref{Rwlim}), $$ \frac{\partial^p}{\prod_{s\in{\cal S}} \partial w_s^{p_s}} \bigl(R^w_{\ell,\ell'}\bigr)^p = p! \prod_{s\in{\cal S}} (\lambda_s R^s_{\ell,\ell'})^{p_s}. $$ Computing this partial derivative on both sides of (\ref{GGwp}) implies \begin{equation} \mathbb{E} f(R^n) \prod_{s\in{\cal S}} (R^s_{1,n+1})^{p_s} = \frac{1}{n}\mathbb{E} f(R^n) \hspace{0.3mm} \mathbb{E} \prod_{s\in{\cal S}} (R^s_{1,2})^{p_s} + \frac{1}{n}\sum_{\ell=2}^{n}\mathbb{E} f(R^n) \prod_{s\in{\cal S}} (R^s_{1,\ell})^{p_s}. 
\end{equation} Approximating continuous functions by polynomials, this implies (\ref{GGms}) for continuous functions $\varphi$ in (\ref{Qphi}) and the general case follows. \qed \medskip \smallskip \noindent \textbf{Remark.} In particular, Theorem \ref{ThGGms} implies that the array $(Q_{\ell,\ell'})_{\ell,\ell'\geq 1}$ itself satisfies the usual Ghirlanda-Guerra identities, \begin{equation} \mathbb{E} f(Q^n) \psi(Q_{1,n+1}) = \frac{1}{n}\mathbb{E} f(Q^n) \hspace{0.3mm} \mathbb{E} \psi(Q_{1,2}) + \frac{1}{n}\sum_{\ell=2}^{n}\mathbb{E} f(Q^n) \psi(Q_{1,\ell}), \label{GGreg} \end{equation} for any bounded measurable function $\psi$ and $f=f(Q^n)$, where $Q^n =(Q_{\ell,\ell'})_{\ell,\ell'\leq n}$. In the case when the array $Q$ is also nonnegative definite, the main result in \cite{PUltra} will allow us to use the full force of the Ghirlanda-Guerra identities and, in particular, will imply that such arrays are ultrametric and can be generated by the Ruelle probability cascades (see Section 2.4 in \cite{SKmodel}). \section{Synchronizing the species}\label{Sec4label} Now, let us consider any limiting distribution as in (\ref{Rsfirst}) and let us notice that the overlap $$ R(\sigma^\ell,\sigma^{\ell'}) = \frac{1}{N} \sum_{i=1}^N \sigma^\ell_i \sigma^{\ell'}_i = \sum_{s\in{\cal S}} \lambda_s R_s( \sigma^\ell,\sigma^{\ell'}) $$ of two configurations over the whole system in the limit will become \begin{equation} R_{\ell,\ell'} = \sum_{s\in{\cal S}} \lambda_s R^s_{\ell,\ell'}. \label{Aoverlap} \end{equation} In this section, we will prove the main result that will allow us to characterize the limits that will arise in the Aizenman-Sims-Starr scheme. \begin{theorem}\label{ThSynch} For any array (\ref{Rsfirst}) that satisfies (\ref{GGms}), there exist non-decreasing $(1/\lambda_s)$-Lipschitz functions $L_s: [0,1]\to [0,1]$ such that $R^s_{\ell,\ell'} = L_s(R_{\ell,\ell'})$ almost surely for all $s\in{\cal S}$ and all $\ell,\ell'\geq 1$. 
\end{theorem} \noindent The reason we can consider the domain and range of $L_s$ to be $[0,1]$ is because each array $R^s$ is nonnegative definite and satisfies the Ghirlanda-Guerra identities (\ref{GGreg}) and, therefore, its entries are nonnegative by Talagrand's positivity principle (Theorem 2.16 in \cite{SKmodel}). Theorem \ref{ThSynch} implies that the joint distribution of the overlap arrays for all species will be determined trivially by the overlap array $(R_{\ell,\ell'})_{\ell,\ell'\geq 1}$. On the other hand, the Ghirlanda-Guerra identities imply that this array can be generated using the Ruelle probability cascades, which will be used in Section \ref{Sec5label}. We begin with the following observation. \begin{lemma}\label{LemSynch} If $R^s_{\ell,\ell'}> R^s_{\ell,\ell''}$ for some $s\in{\cal S}$ then $R^t_{\ell,\ell'}\geq R^t_{\ell,\ell''}$ for all $t\in{\cal S}$. \end{lemma} \textbf{Proof.} By Theorem \ref{ThGGms}, for any $s,t\in{\cal S}$, the arrays $$ \bigl(R^s_{\ell,\ell'}\bigr)_{\ell,\ell'\geq 1}, \bigl(R^t_{\ell,\ell'}\bigr)_{\ell,\ell'\geq 1} \,\,\mbox{ and }\,\, \bigl(R^s_{\ell,\ell'}+R^t_{\ell,\ell'} \bigr)_{\ell,\ell'\geq 1} $$ satisfy the Ghirlanda-Guerra identities. Since all these arrays are nonnegative definite, the main result in \cite{PUltra} (or Theorem 2.14 in \cite{SKmodel}) implies that these arrays are ultrametric, i.e. \begin{equation} R^s_{\ell',\ell''} \geq \min\bigl(R^s_{\ell,\ell'}, R^s_{\ell,\ell''} \bigr) \label{ultram} \end{equation} for any distinct $\ell,\ell',\ell'' \geq 1$ and, similarly, for the other two arrays. In other words, given three replica indices, the smallest two overlaps are equal. Suppose now that $R^s_{\ell,\ell'}> R^s_{\ell,\ell''}$ but $R^t_{\ell,\ell'}< R^t_{\ell,\ell''}$. By ultrametricity of the first two arrays, $$ R^s_{\ell,\ell'}> R^s_{\ell,\ell''} = R^s_{\ell',\ell''} \,\,\mbox{ and }\,\, R^t_{\ell',\ell''} = R^t_{\ell,\ell'}< R^t_{\ell,\ell''}.
$$ However, this implies that $$ R^s_{\ell',\ell''} + R^t_{\ell',\ell''}< \min\bigl( R^s_{\ell,\ell'}+R^t_{\ell,\ell'}, R^s_{\ell,\ell''} + R^t_{\ell,\ell''}\bigr), $$ violating ultrametricity of the third array. \qed \medskip \noindent Let us state one obvious corollary of the above lemma. \begin{corollary}\label{CorSynch} The following statements hold. \begin{enumerate} \item[(a)] If $R_{\ell,\ell'}> R_{\ell,\ell''}$ then $R^s_{\ell,\ell'}\geq R^s_{\ell,\ell''}$ for all $s\in{\cal S}$. \item[(b)] If $R^s_{\ell,\ell'}> R^s_{\ell,\ell''}$ for some $s\in{\cal S}$ then $R_{\ell,\ell'}> R_{\ell,\ell''}$. \end{enumerate} \end{corollary} \noindent This already gives some indication that the overlaps in different species will be synchronized. However, keeping in mind the ultrametric tree structure of the Ruelle probability cascades that generate them, we need to show that the entire clusters are synchronized and the corresponding cascades are completely coupled. To prove this, for $q\in [0,1]$ and $s\in{\cal S}$, we will consider the array \begin{equation} R^{s,q}_{\ell,\ell'} = {\hspace{0.3mm}{\rm I}\hspace{0.1mm}}(R_{\ell,\ell'}\geq q)(R^s_{\ell,\ell'} +1). \end{equation} First of all, we add $+1$ to the overlap $R^s_{\ell,\ell'}$ to ensure that the only way the right hand side can be equal to zero is when $R_{\ell,\ell'}< q$ and not, for example, when $R^s_{\ell,\ell'}=0$. As in (\ref{ultram}), by Theorem \ref{ThGGms}, the array $(R_{\ell,\ell'})_{\ell,\ell'\geq 1}$ is ultrametric, which implies that the array $({\hspace{0.3mm}{\rm I}\hspace{0.1mm}}(R_{\ell,\ell'}\geq q))_{\ell,\ell'\geq 1}$ is nonnegative definite, as it consists of blocks on the diagonal with all entries equal to one. Therefore, the array $$ R^{s,q} =\bigl(R^{s,q}_{\ell,\ell'} \bigr)_{\ell,\ell'\geq 1} $$ is nonnegative definite as the Hadamard product of two such arrays. 
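The positive-definiteness step above combines two standard facts: the 0/1 matrix $({\rm I}(R_{\ell,\ell'}\geq q))$ produced by an ultrametric array is block-diagonal with blocks of ones, hence nonnegative definite, and the Hadamard product of nonnegative definite matrices is nonnegative definite (the Schur product theorem). A minimal numerical illustration, in which the matrix sizes and the two-cluster pattern are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# A: an arbitrary PSD Gram matrix, standing in for the array (R^s + 1).
X = rng.standard_normal((5, 3))
A = X @ X.T

# B: the 0/1 matrix I(R >= q) coming from an ultrametric overlap array --
# blocks of ones on the diagonal (here two clusters), hence PSD.
B = np.zeros((5, 5))
B[:2, :2] = 1.0
B[2:, 2:] = 1.0
assert np.linalg.eigvalsh(B).min() >= -1e-10

# Schur product theorem: the Hadamard (entrywise) product of PSD matrices is PSD.
H = A * B
assert np.linalg.eigvalsh(H).min() >= -1e-10
```

Here the Hadamard product is simply block-diagonal with principal submatrices of $A$, which makes the conclusion transparent in this special case.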
By Theorem \ref{ThGGms}, the array $R^{s,q}$ also satisfies the Ghirlanda-Guerra identities, so all the consequences of the Ghirlanda-Guerra identities for nonnegative definite arrays described, for example, in Section 2.4 in \cite{SKmodel} hold in this case. One such consequence is the following. Let \begin{equation} \mu = {\cal L}(R_{1,2}) \,\,\mbox{ and }\,\, \mu^{s,q} = {\cal L}(R^{s,q}_{1,2}) \end{equation} be the distributions of one entry of the arrays $R$ and $R^{s,q}$, respectively. Lemma 2.7 in \cite{SKmodel} implies the following consequence of the Ghirlanda-Guerra identities, which was first observed in \cite{PT}. \begin{lemma}\label{LemDense} For any $s\in{\cal S}, \ell\geq 1$ and $q\in [0,1]$, with probability one, the set \begin{equation} A^{s}_\ell(q) = \bigl\{R^{s,q}_{\ell,\ell'} \ |\ \ell'\not=\ell \bigr\} = \bigl\{ {\hspace{0.3mm}{\rm I}\hspace{0.1mm}}(R_{\ell,\ell'}\geq q)(R^s_{\ell,\ell'} +1) \ |\ \ell'\not=\ell \bigr\} \label{Asq} \end{equation} is a dense subset of the support of $\mu^{s,q}$. \end{lemma} This will be the key to the proof of Theorem \ref{ThSynch}. Now, for any $q\in [0,1]$, let us define \begin{equation} \ell_s(q) = \inf\bigl\{x\geq 1 \ | \ x\in \mbox{supp}\,\mu^{s,q} \bigr\}-1. \label{ellsq} \end{equation} Equivalently, one could take the infimum over $x>0$, because $R^{s,q}_{\ell,\ell'}>0$ if and only if $R^{s,q}_{\ell,\ell'}\geq 1$. To understand the meaning of this definition, let us notice that, whenever the set $A^{s}_\ell(q)$ in (\ref{Asq}) is dense in the support of $\mu^{s,q}$ (which happens with probability one for a given $q$), \begin{equation} \ell_s(q) = \inf\bigl\{R^s_{\ell,\ell'} \ | \ \ell'\not =\ell, R_{\ell,\ell'} \geq q\bigr\}, \label{ellsq2} \end{equation} so $\ell_s(q)$ is just the smallest value that $R^s_{\ell,\ell'}$ can take whenever $R_{\ell, \ell'}\geq q$. This alternative definition, obviously, implies the following.
\begin{lemma} For any $s\in{\cal S},$ the function $\ell_s(q)$ in (\ref{ellsq}) is non-decreasing in $q$. \end{lemma} \noindent To obtain the functions $L_s$ in Theorem \ref{ThSynch}, we will first need to regularize $\ell_s(q)$ as follows, \begin{equation} L_s(q) = \lim_{x\uparrow q}\, \ell_s(x) \end{equation} for $q>0$ and $L_s(0) = \ell_s(0)$. Theorem \ref{ThSynch} will now be proved in two steps. First, we will show that $R^s_{\ell,\ell'} = L_s(R_{\ell,\ell'})$ almost surely. Second, we will show that $L_s$ is $(1/\lambda_s)$-Lipschitz on the support of the distribution $\mu$ of $R_{1,2}$. Then, we can redefine $L_s$ outside of the support to be a $(1/\lambda_s)$-Lipschitz extension which, obviously, does not change the first claim, $R^s_{\ell,\ell'} = L_s(R_{\ell,\ell'})$, since $R_{\ell,\ell'}$ belongs to the support of $\mu$ almost surely. \medskip \noindent \textbf{Proof of Theorem \ref{ThSynch}.} \emph{Step 1.} We will use that the claim in Lemma \ref{LemDense} holds with probability one simultaneously for all $q\in\mathbb{Q} \cap [0,1]$. Let us fix some indices $\ell\not = \ell'$. If $\mu(\{0\})=0$ then all $R_{\ell,\ell'} > 0$ almost surely. If $\mu(\{0\})>0$ and $R_{\ell,\ell'} = 0$ then we must have $R^s_{\ell,\ell'}=0$ for all $s\in{\cal S}$ and the definition (\ref{ellsq}) implies that $\ell_s(0)=0$. In this case, $$ R^s_{\ell,\ell'} = \ell_s(R_{\ell,\ell'}) =L_s(R_{\ell,\ell'}). $$ Let us now consider the case when $R_{\ell,\ell'}>0$. First of all, for any $x< R_{\ell,\ell'}$ we must have that $\ell_s(x)\leq R^{s}_{\ell,\ell'}$, because the function $\ell_s(x)$ is non-decreasing and, for any rational $q\leq R_{\ell,\ell'}$, (\ref{ellsq2}) implies that $\ell_s(q)\leq R^{s}_{\ell,\ell'}$. Next, take an arbitrary ${\varepsilon}>0$ and consider any rational $q$ such that \begin{equation} q<R_{\ell,\ell'}\leq q+{\varepsilon}. \label{qeps} \end{equation} Consider two possibilities. First, suppose that $R^s_{\ell,\ell'} = \ell_s(q)$.
Since for $q\leq x < R_{\ell,\ell'}$ we showed that $$ \ell_s(x) \leq R^s_{\ell,\ell'} = \ell_s(q) \leq \ell_s(x) $$ (so $\ell_s(x) = \ell_s(q)$ for such $x$), we get the desired claim, $$ R_{\ell,\ell'}^s = \ell_s(q) = \lim_{x\uparrow R_{\ell,\ell'}} \ell_s(x) = L_s(R_{\ell,\ell'}). $$ Second, suppose that $\ell_s(q)< R^s_{\ell,\ell'}.$ By (\ref{ellsq2}), we can find a sequence $(\ell_n)$ such that $R_{\ell,\ell_n}\geq q$ and $R^{s}_{\ell,\ell_n} \downarrow \ell_s(q).$ Since we assumed that $\ell_s(q)< R^s_{\ell,\ell'}$, for large enough $n$ we must have $R^{s}_{\ell,\ell_n}< R^{s}_{\ell,\ell'}$ and, by Corollary \ref{CorSynch}, we get $R_{\ell,\ell_n}< R_{\ell,\ell'}$ and $R^{t}_{\ell,\ell_n}\leq R^{t}_{\ell,\ell'}$ for all $t\in {\cal S}.$ Therefore, \begin{equation} 0\leq \lambda_s \bigl(R^{s}_{\ell,\ell'} - R^{s}_{\ell,\ell_n} \bigr) \leq \sum_{t\in{\cal S}} \lambda_t \bigl(R^{t}_{\ell,\ell'} - R^{t}_{\ell,\ell_n} \bigr) = R_{\ell,\ell'} - R_{\ell,\ell_n} \leq q+{\varepsilon} -q = {\varepsilon}. \label{Lip} \end{equation} Since $R^{s}_{\ell,\ell_n} \downarrow \ell_s(q)$, this implies that $\ell_s(q) \leq R^{s}_{\ell,\ell'} \leq \ell_s(q) + {\varepsilon} \lambda_s^{-1}.$ Finally, letting $q\uparrow R_{\ell,\ell'}$ and ${\varepsilon}\downarrow 0$ in such a way that (\ref{qeps}) holds, again, implies the desired claim $$ R_{\ell,\ell'}^s = \lim_{q\uparrow R_{\ell,\ell'}} \ell_s(q) = L_s(R_{\ell,\ell'}). $$ \emph{Step 2.} Let us now show that $L_s$ is $(1/\lambda_s)$-Lipschitz on the support of the distribution $\mu$ of $R_{1,2}$. Take $q_1< q_2$ in the support of $\mu$. Let $q_2' = q_2-{\varepsilon}_2$ for some small ${\varepsilon}_2>0$ such that $q_2'>0$ and let $q_1'=\max(q_1-{\varepsilon}_1, 0)$ for some small ${\varepsilon}_1>0$. Let us also make sure that $q_1'$ and $q_2'$ are rational.
By (\ref{ellsq2}), given ${\varepsilon}>0$, we can find indices $\ell_j$ for $j=1,2$ such that \begin{equation} R_{\ell,\ell_j}\geq q_j' \,\,\mbox{ and }\,\, \ell_s(q_j') \leq R^s_{\ell,\ell_j}\leq \ell_s(q_j')+{\varepsilon}. \label{temp} \end{equation} Similarly to Lemma \ref{LemDense}, Lemma 2.7 in \cite{SKmodel} implies that the set $\{R_{\ell,\ell'}\ |\ \ell'\not =\ell\}$ is a dense subset of the support of $\mu = {\cal L}(R_{1,2})$ with probability one and, since we chose $q_1$ and $q_2$ in the support of $\mu$, we can find other indices $\ell_j'$ for $j=1,2$ such that $$q_j' \leq R_{\ell,\ell_j'} \leq q_j+{\varepsilon}.$$ If the index $\ell_j$ already satisfies this condition, we simply take $\ell_j' = \ell_j$. Otherwise, because of the first inequality in (\ref{temp}), we must have $R_{\ell,\ell_j'} < R_{\ell,\ell_j}$ and, by (\ref{ellsq2}), Corollary \ref{CorSynch} and the second inequality in (\ref{temp}), $$ \ell_s(q_j') \leq R^s_{\ell,\ell_j'} \leq R^s_{\ell,\ell_j} \leq \ell_s(q_j')+{\varepsilon}. $$ In both cases, we have $$ q_j' \leq R_{\ell,\ell_j'} \leq q_j+{\varepsilon} \,\,\mbox{ and }\,\, \ell_s(q_j') \leq R^s_{\ell,\ell_j'} \leq \ell_s(q_j')+{\varepsilon}. $$ Since $q_1<q_2$, by taking ${\varepsilon}>0$ small enough, we can assume that $R_{\ell,\ell_1'}<R_{\ell,\ell_2'}$. Then, as in (\ref{Lip}), $$ \lambda_s \bigl( R^{s}_{\ell,\ell_2'} - R^{s}_{\ell,\ell_1'} \bigr) \leq R_{\ell,\ell_2'} - R_{\ell,\ell_1'}. $$ Combining all the inequalities, we showed that $$ \lambda_s \bigl( \ell_s(q_2') - \ell_s(q_1')-{\varepsilon} \bigr) \leq q_2 +{\varepsilon} -q_1'. $$ Letting ${\varepsilon}, {\varepsilon}_1, {\varepsilon}_2\downarrow 0$ implies $\lambda_s (L_s(q_2) - L_s(q_1)) \leq q_2 - q_1$, which proves that $L_s$ is $(1/\lambda_s)$-Lipschitz on the support of $\mu$. As we mentioned above, a $(1/\lambda_s)$-Lipschitz extension of $L_s$ outside of the support does not affect the fact that $R^s_{\ell,\ell'} = L_s(R_{\ell,\ell'})$ almost surely.
\qed \section{Lower bound via the Aizenman-Sims-Starr scheme}\label{Sec5label} Given the main result in the previous section, the arguments of this section will be a standard exercise. To a reader familiar with the corresponding arguments in the setting of the classical SK model (e.g. Sections 3.5 and 3.6 in \cite{SKmodel}) these arguments will be completely obvious. Otherwise, we recommend studying them first in the easier case of the SK model. It is clear that small modifications of the vector $(\lambda_s)_{s\in{\cal S}}$ result in small changes both of the free energy for large $N$ and of the Parisi formula (\ref{Parisi}) so, without loss of generality, we can assume that all $\lambda_s$ are rational and can be written as \begin{equation} \lambda_s = \frac{k_s}{k}. \end{equation} In the proof of the lower bound, we will use the obvious fact that \begin{equation} \liminf_{N\to\infty} F_N \geq \frac{1}{k}\liminf_{n\to\infty} \bigl(\mathbb{E} \log Z_{nk+k} - \mathbb{E} \log Z_{nk}\bigr). \label{FNAN} \end{equation} Let us consider the right hand side for a fixed $N=nk$ and, in addition to the partition (\ref{species}), let us consider a partition of $k$ new coordinates \begin{equation} I^+ = \{N+1,\ldots, N+k\} = \bigcup_{s\in{\cal S}} I_s^+ \label{speciescavity} \end{equation} into different species, so that $|I_s^+| = k_s$. Let us compare the partition functions $Z_{N}$ and $Z_{N+k}$.
If we denote $\rho = (\sigma,{\varepsilon})\in \varSigma_{N+k}$ for $\sigma\in\varSigma_N$ and ${\varepsilon}\in\Sigma_k$ then we can write \begin{equation} H_{N+k}(\rho) = H_N'(\sigma) + \sum_{i\in I^+}{\varepsilon}_i z_{N,i}(\sigma) + r({\varepsilon}), \label{decomp1} \end{equation} where \begin{equation} H_N'(\sigma) = \frac{1}{\sqrt{N+k}} \sum_{i,j =1}^N g_{ij}\sigma_i \sigma_j, \label{commonH} \end{equation} \begin{equation} z_{N,i}(\sigma) = \frac{1}{\sqrt{N+k}} \sum_{j=1}^N \bigl(g_{ij} + g_{ji} \bigr)\sigma_j \end{equation} and \begin{equation} r({\varepsilon})= \frac{1}{\sqrt{N+k}} \sum_{i,j\in I^+} g_{ij}{\varepsilon}_i {\varepsilon}_j. \end{equation} On the other hand, the Gaussian process $H_N(\sigma)$ on $\Sigma_N$ can be decomposed into a sum of two independent Gaussian processes \begin{equation} H_N(\sigma) \stackrel{d}{=} H_N'(\sigma) + y_N(\sigma), \label{commonH2} \end{equation} where \begin{equation} y_N(\sigma) = \frac{\sqrt{k}}{\sqrt{N(N+k)}} \sum_{i,j =1}^N g_{ij}'\sigma_i \sigma_j \end{equation} and $(g_{ij}')$ are independent copies of the Gaussian random variables $(g_{ij})$. Using that the term $r({\varepsilon})$ is of a smaller order, we can write \begin{equation} \mathbb{E}\log Z_{N+k} = \mathbb{E} \log \sum_{\sigma\in\varSigma_N} \prod_{i\in I^+} 2\hspace{0.3mm} {\mbox{\rm ch}} \bigl(z_{N,i}(\sigma) \bigr) \exp H_{N}'(\sigma) + o(1) \label{ZN1} \end{equation} and, using the equation (\ref{commonH2}), \begin{equation} \mathbb{E}\log Z_{N} = \mathbb{E} \log \sum_{\sigma\in\varSigma_N} \exp\bigl(y_N(\sigma)\bigr) \exp H_{N}'(\sigma).
\label{ZN} \end{equation} Finally, if we consider the Gibbs measure on $\varSigma_N$ corresponding to the Hamiltonian $H_N'(\sigma)$ in (\ref{commonH}), \begin{equation} G_N'(\sigma) = \frac{\exp H_N'(\sigma)}{Z_N'} \,\mbox{ where }\, Z_N' = \sum_{\sigma\in\varSigma_{N}} \exp H_N'(\sigma), \label{MeasureGNprime} \end{equation} then, combining (\ref{ZN1}) and (\ref{ZN}), we can replace the right hand side of (\ref{FNAN}) by \begin{equation} \frac{1}{k}\liminf_{n\to\infty} \Bigl( \mathbb{E} \log \sum_{\sigma\in\Sigma_{N}} \prod_{i\in I^+} 2\hspace{0.3mm} {\mbox{\rm ch}} \bigl(z_{N,i}(\sigma) \bigr) G_N'(\sigma) - \mathbb{E} \log \sum_{\sigma\in\varSigma_{N}} \exp\bigl(y_N(\sigma)\bigr) G_N'(\sigma) \Bigr). \label{AS2repr} \end{equation} This is the analogue of the Aizenman-Sims-Starr representation in \cite{AS2} (see Section 3.5 in \cite{SKmodel}). From the construction it is clear that the Gaussian processes $z_{N,i}(\sigma)$ for $i\in I^+$ and $y_N(\sigma)$ are independent of each other and of the randomness of the measure $G_N'$. For $s\in {\cal S}$ and $i\in I_s^+$, \begin{align} \mathbb{E} z_{N,i}(\sigma^1) z_{N,i}(\sigma^2) &= \frac{1}{N+k}\sum_{t\in {\cal S}}\sum_{j\in I_t} 2 \Delta_{st}^2\sigma^1_j \sigma^2_j \nonumber \\ &= \frac{N}{N+k}\sum_{t\in {\cal S}} 2 \Delta_{st}^2 \lambda_t R_t(\sigma^1, \sigma^2) \nonumber \\ &= 2\sum_{t\in {\cal S}} \Delta_{st}^2 \lambda_t R_t(\sigma^1, \sigma^2) + O(N^{-1}) \label{Covz} \end{align} and, similarly to the computation of the covariance in (\ref{Cov}), \begin{align} \mathbb{E} y_{N}(\sigma^1) y_{N}(\sigma^2) &= \frac{kN^2}{N(N+k)}\sum_{s,t\in {\cal S}} \Delta_{st}^2 \lambda_s \lambda_t R_s(\sigma^1, \sigma^2) R_t(\sigma^1, \sigma^2) \nonumber \\ &= k \sum_{s,t\in {\cal S}} \Delta_{st}^2 \lambda_s \lambda_t R_s(\sigma^1, \sigma^2) R_t(\sigma^1, \sigma^2) + O(N^{-1}). \label{Covy} \end{align} Notice how these expressions resemble the definition in (\ref{Qs}).
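Apart from the Gaussian averaging, the first two steps in (\ref{Covz}) are just a regrouping of the coordinate sum by species, using $\sum_{j\in I_t}\sigma^1_j\sigma^2_j = N\lambda_t R_t(\sigma^1,\sigma^2)$. A toy check of that regrouping (the partition, the symmetric values of $\Delta^2$ and the configurations below are illustrative choices):

```python
import random

random.seed(1)
# Illustrative partition of N = 6 coordinates into two species; k = 2 cavity spins.
species = {"a": [0, 1, 2], "b": [3, 4, 5]}
N, k = 6, 2
Delta2 = {("a", "a"): 1.0, ("a", "b"): 0.5, ("b", "a"): 0.5, ("b", "b"): 2.0}
lam = {s: len(I) / N for s, I in species.items()}

sigma1 = [random.choice([-1, 1]) for _ in range(N)]
sigma2 = [random.choice([-1, 1]) for _ in range(N)]
R = {t: sum(sigma1[j] * sigma2[j] for j in I) / len(I) for t, I in species.items()}

s = "a"  # the species of the cavity coordinate i in I_s^+

# Coordinate-by-coordinate form of the covariance of z_{N,i} ...
lhs = sum(2 * Delta2[s, t] * sigma1[j] * sigma2[j]
          for t, I in species.items() for j in I) / (N + k)
# ... equals its species-overlap form (N/(N+k)) * sum_t 2 Delta2_{st} lam_t R_t.
rhs = N / (N + k) * sum(2 * Delta2[s, t] * lam[t] * R[t] for t in species)

assert abs(lhs - rhs) < 1e-12
```

The identity holds for any configurations, since each inner sum over $I_t$ is exactly $|I_t|R_t = N\lambda_t R_t$.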
Of course, one can ignore the lower-order error terms $O(N^{-1})$ from now on. The same computation can be carried out just as easily in the case when the free energy $F_N$ in (\ref{FNAN}) corresponds to the perturbed Hamiltonian $H_N^{\mathrm{pert}}(\sigma)$ in (\ref{Hpert}) instead of the original Hamiltonian $H_N(\sigma)$. Moreover, since the perturbation term $s_N h_N(\sigma)$ in (\ref{Hpert}) is of a smaller order, one can show that the perturbation term $s_{N+k} h_{N+k}(\rho)$ in the partition function $Z_{N+k}$ can simply be replaced by the one in $Z_{N}$, $s_N h_N(\sigma)$. This is standard and is explained, for example, in Section 3.5 in \cite{SKmodel}. In this case, we obtain the representation (\ref{AS2repr}) with the Gibbs measure $G_N'$ in (\ref{MeasureGNprime}) corresponding to the perturbed Hamiltonian $$ H_N'(\sigma) + s_N h_N(\sigma). $$ Also, in this case the expectation $\mathbb{E}$ in (\ref{AS2repr}) includes the average $\mathbb{E}_x$ in the uniform random variables $x=(x_{w,p})$ in the definition of the perturbation Hamiltonian (\ref{hNw}). The proof of Theorem \ref{ThGG} applies verbatim to the measure $G_N'$, and right below Theorem \ref{ThGG} we mentioned that one can choose a non-random sequence $x^N=(x^N_{w,p})_{p\geq 1,w\in {\cal W}}$ changing with $N$ such that (\ref{GGxlim2}) holds for the Gibbs measure $G_N'$ with the parameters $x$ in the perturbation Hamiltonian (\ref{hNw}) equal to $x^N$ rather than random. By Lemma 3.3 in \cite{SKmodel}, one can choose this sequence $x^N$ in such a way that the lower limit in (\ref{AS2repr}) is not affected by fixing $x=x^N$ instead of averaging in $x$.
To finish the proof, we will use Theorem 1.3 in \cite{SKmodel}, (a trivial modification of) which implies that \begin{equation} \mathbb{E} \log \sum_{\sigma\in\Sigma_{N}} \prod_{i\in I^+} 2\hspace{0.3mm} {\mbox{\rm ch}} \bigl(z_{N,i}(\sigma) \bigr) G_N'(\sigma) - \mathbb{E} \log \sum_{\sigma\in\varSigma_{N}} \exp\bigl(y_N(\sigma)\bigr) G_N'(\sigma) \label{comp1} \end{equation} is a continuous functional of the distribution of the array \begin{equation} \bigl(R_s(\sigma^\ell,\sigma^{\ell'}) \bigr)_{s\in{\cal S},\ell,\ell'\geq 1} \label{Rsend} \end{equation} under the measure $\mathbb{E} G_N^{\prime \otimes\infty}$. Passing to a subsequence, if necessary, we can assume that this array converges in distribution to some array $(R^s_{\ell,\ell'})_{s\in{\cal S},\ell,\ell'\geq 1}$ that, by construction, satisfies Theorem \ref{ThGGms}. In particular, by Theorem \ref{ThSynch}, \begin{equation} R^s_{\ell,\ell'} = L_s(R_{\ell,\ell'}) \label{RLS} \end{equation} for some non-decreasing $(1/\lambda_s)$-Lipschitz functions $L_s$, where $R_{\ell,\ell'}$ is the overlap of the whole system in (\ref{Aoverlap}). Let us consider the sequence (\ref{zetas}) and a sequence \begin{equation} 0=q_0< q_1 < \ldots < q_{r-1}< q_r =1 \label{q} \end{equation} such that the distribution $\zeta$ on $[0,1]$ defined by \begin{equation} \zeta\bigl(\bigl\{q_\ell\bigr\}\bigr) = \zeta_{\ell} - \zeta_{\ell-1} \,\mbox{ for }\, \ell=0,\ldots, r, \label{zetafop} \end{equation} is close to the distribution ${\cal L}(R_{1,2})$ of one element of the array $(R_{\ell,\ell'})_{\ell,\ell'\geq 1}$ in some metric that metrizes weak convergence of distributions on $[0,1]$. As in Section \ref{Sec2label}, let $(v_\alpha)_{\alpha\in \mathbb{N}^r}$ be the weights of the Ruelle probability cascades corresponding to the parameters (\ref{zetas}). Let $(\alpha^\ell)_{\ell\geq 1}$ be an i.i.d. 
sample from $\mathbb{N}^r$ according to these weights and, using the sequence (\ref{q}), define \begin{equation} Q_{\ell,\ell'} = q_{\alpha^\ell\wedge \alpha^{\ell'}}. \end{equation} Since from Theorem \ref{ThGGms} it is clear that the overlap array $(R_{\ell,\ell'})_{\ell,\ell'\geq 1}$ satisfies the Ghirlanda-Guerra identities, Theorems 2.13 and 2.17 in \cite{SKmodel} imply that its distribution will be close to the distribution of the array $(Q_{\ell,\ell'})_{\ell,\ell'\geq 1}$. If for each $s\in{\cal S}$ we define the sequence in (\ref{qs}) by \begin{equation} q^s_\ell = L_s(q_\ell) \,\,\mbox{ for }\,\, 0\leq \ell\leq r, \end{equation} and let \begin{equation} Q^s_{\ell,\ell'} = L_s(Q_{\ell,\ell'}) = q^s_{\alpha^\ell\wedge \alpha^{\ell'}}, \label{QLS} \end{equation} the equation (\ref{RLS}) implies that the entire array $(Q^s_{\ell,\ell'})_{s\in{\cal S}, \ell,\ell'\geq 1}$ will be close in distribution to the array $(R^s_{\ell,\ell'})_{s\in{\cal S}, \ell,\ell'\geq 1}$. Let us now consider Gaussian processes $C^s(\alpha)$ for $s\in{\cal S}$ and $D(\alpha)$ indexed by $\alpha\in\mathbb{N}^r$ as in Section \ref{Sec2label}. For each $s\in{\cal S}$ and each $i\in I^+_s$, let $C_i(\alpha)$ be a copy of the process $C^s(\alpha)$ and suppose that all these processes are independent of each other and of $D(\alpha)$. Similarly to (\ref{comp1}), consider \begin{equation} \mathbb{E} \log \sum_{\alpha\in \mathbb{N}^r} \prod_{i\in I^+} 2\hspace{0.3mm} {\mbox{\rm ch}} \bigl(C_{i}(\alpha) \bigr) v_\alpha - \mathbb{E} \log \sum_{\alpha\in \mathbb{N}^r} \exp\bigl(\sqrt{k}D(\alpha)\bigr) v_\alpha.
\label{comp2} \end{equation} By (\ref{Qs}), (\ref{CD}) and (\ref{QLS}), the covariances of these Gaussian processes can be written as \begin{equation} \mathbb{E} C_i(\alpha^1) C_i(\alpha^2) = 2\sum_{t\in{\cal S}} \Delta_{st}^2 \lambda_t q^t_{\alpha^1\wedge\alpha^2} = 2\sum_{t\in{\cal S}} \Delta_{st}^2 \lambda_t Q^t_{1,2} \label{CD1} \end{equation} for $s\in{\cal S}$ and $i\in I_s^+$, and \begin{equation} \mathbb{E} \sqrt{k}D(\alpha^1) \sqrt{k}D(\alpha^2) = k \sum_{s,t\in{\cal S}} \Delta_{st}^2 \lambda_s \lambda_t q^s_{\alpha^1\wedge\alpha^2}q^t_{\alpha^1\wedge\alpha^2} = k \sum_{s,t\in{\cal S}} \Delta_{st}^2 \lambda_s \lambda_t Q^s_{1,2} Q^t_{1,2}. \label{CD2} \end{equation} If we compare the covariances in (\ref{Covz}) and (\ref{Covy}) with (\ref{CD1}) and (\ref{CD2}), Theorem 1.3 in \cite{SKmodel} implies that (\ref{comp2}) is the same continuous functional of the distribution of the array \begin{equation} \bigl(Q^s_{\ell,\ell'} \bigr)_{s\in{\cal S},\ell,\ell'\geq 1} \label{Qsend} \end{equation} as (\ref{comp1}) is of the array (\ref{Rsend}). Since both arrays, by construction, approximate in distribution the array $(R^s_{\ell,\ell'})_{s\in{\cal S},\ell,\ell'\geq 1}$, we proved that the quantities \begin{equation} \frac{1}{k} \Bigl( \mathbb{E} \log \sum_{\alpha\in \mathbb{N}^r} \prod_{i\in I^+} 2\hspace{0.3mm} {\mbox{\rm ch}} \bigl(C_{i}(\alpha) \bigr) v_\alpha - \mathbb{E} \log \sum_{\alpha\in \mathbb{N}^r} \exp\bigl(\sqrt{k} D(\alpha)\bigr) v_\alpha \Bigr) \label{lowerend} \end{equation} can be used to approximate the lower limit of the free energy.
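The arrays $Q_{\ell,\ell'}=q_{\alpha^\ell\wedge\alpha^{\ell'}}$ generated by the cascades are automatically ultrametric, since $\alpha\wedge\alpha'$ is the depth of the common ancestor of the leaves $\alpha,\alpha'$ in the tree. A standalone brute-force check of this tree property, with an arbitrary small depth $r=3$, a binary slice of $\mathbb{N}^r$ and arbitrary nondecreasing values $q_\ell$:

```python
import itertools

# Overlap q_{alpha ∧ beta} on the tree N^r, with r = 3 and a nondecreasing
# sequence q_0 <= ... <= q_r (the values here are arbitrary).
r = 3
q = [0.0, 0.2, 0.5, 1.0]

def wedge(a, b):
    """Depth of the common ancestor: number of shared initial coordinates."""
    d = 0
    while d < r and a[d] == b[d]:
        d += 1
    return d

leaves = list(itertools.product(range(2), repeat=r))  # a finite slice of N^r

# Tree overlaps are ultrametric: among any three pairwise overlaps the
# smallest two are equal, i.e. q_{a∧c} >= min(q_{a∧b}, q_{b∧c}).
for a, b, c in itertools.product(leaves, repeat=3):
    assert q[wedge(a, c)] >= min(q[wedge(a, b)], q[wedge(b, c)])
```

The inequality follows because two leaves agreeing with a common third leaf up to depths $m$ and $m'$ must agree with each other up to depth $\min(m,m')$.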
It remains to observe that, similarly to (\ref{simp1}) and (\ref{simp2}), using standard properties of the Ruelle probability cascades (again, we refer to the proof of Lemma 3.1 in \cite{SKmodel}), \begin{align*} \frac{1}{k} \mathbb{E} \log \sum_{\alpha\in \mathbb{N}^r} \prod_{i\in I^+} 2\hspace{0.3mm} {\mbox{\rm ch}} \bigl(C_{i}(\alpha) \bigr) v_\alpha &= \frac{1}{k} \sum_{i\in I^+} \mathbb{E} \log \sum_{\alpha\in \mathbb{N}^r} 2\hspace{0.3mm} {\mbox{\rm ch}} \bigl(C_{i}(\alpha) \bigr) v_\alpha \\ &= \sum_{s\in {\cal S}} \lambda_s \mathbb{E} \log \sum_{\alpha\in \mathbb{N}^r} 2\hspace{0.3mm} {\mbox{\rm ch}} \bigl(C^s(\alpha) \bigr) v_\alpha = \log2+ \sum_{s\in {\cal S}} \lambda_s X^s_0 \end{align*} and $$ \frac{1}{k} \mathbb{E} \log \sum_{\alpha\in \mathbb{N}^r} \exp\bigl(\sqrt{k} D(\alpha)\bigr) v_\alpha = \frac{1}{2} \sum_{0\leq \ell\leq r-1} \zeta_\ell \bigl(Q_{\ell+1} - Q_{\ell}\bigr). $$ Therefore, (\ref{lowerend}) is precisely ${\cal P}(\zeta,q)$ defined in (\ref{Pzeta}), and this finishes the proof of the lower bound.
\section{Introduction} \begin{quote} \emph{Entities should not be multiplied unnecessarily.} --- William of Ockham \end{quote} In recent years, deep learning has quickly grown to revolutionize a number of complex tasks, ranging from language modeling to computer vision ~\citep{lecun2015deeplearning}. The key to this success was found in designing a large search space in which many local minima are good enough approximations for a given task ~\citep{choromanska2014loss}. But this blessing comes at the cost of having large and complex models, which often conflicts with goals of sparsity and interpretability ~\citep{Lample2020Deep}, making neural nets especially ill-suited for a myriad of physical and computational problems that have compact and interpretable underlying mathematical structures. For instance, simple harmonic motion can be described by a small set of equations that allow for human-level understanding and can easily be generalized beyond the domain of available data. A similar scenario can be found in input-output-based programs: a few lines of code can express a wide range of mathematical behavior, even though the underlying program is simple. Finding these underlying mathematical structures for equations and programs is the task of symbolic regression and program synthesis, respectively. But even though neural networks can easily emulate physical motion or simple programs, the resulting model is not interpretable, might not preserve desired physical or computational properties (e.g. time invariance), and is unable to generalize beyond observed data. Moreover, neural networks' reliance on large complexity implies that reproducing programs or equations might require the full collection of trained weights to be encoded as an approximation, as opposed to a compact closed-form solution. 
In contrast, Evolutionary Algorithms (EAs) have been successful in symbolic regression and automatic programming \citep{poli2008field}, as they are able to find interpretable and compact models that can explain observed data. EAs have recently been employed as a competitive alternative to gradient descent for optimizing neural networks, in what is called Neuroevolution ~\citep{such2017deep, 265960, 10.1145/2330163.2330207}. Traditionally, these algorithms operate over populations of candidate solutions using a number of methods inspired by biological evolution, such as mutations and selection of the fittest \citep{poli2008field}. More recently, evolutionary strategies that model a probability distribution over parameters, updating this distribution according to their own best samples (i.e.\ selecting the fittest), were found to be advantageous in optimization on high-dimensional spaces, including in those of neural networks' hyperparameters \citep{hansen2016cma, loshchilov2016cmaes}. This approach is of particular interest for the purposes of Neuroevolution, as keeping a probability distribution over the weights requires less storage in comparison to keeping a population of networks over which selection occurs. In this paper, we consider the mixed approach of connectionist and evolutionary optimization for the purposes of symbolic regression, program synthesis and simple pattern recognition, also with an outlook to promising integration with deep learning. We use a neural network to model a probability distribution over functions, and optimize the model through a novel evolutionary strategy. For that, we also introduce a loss function that can be easily tuned for different tasks. Our method can handle non-differentiable functions, and converges to interpretable and sparse symbolic expressions. We also introduce a number of strategies to induce compactness and simplicity, à la Occam's Razor. 
\section{Model architecture} \label{sec:model} \begin{figure}[!t] \centering \includegraphics[width=\textwidth]{Figure1.pdf} \caption{(\emph{a}) A two-output network model with depth $L=2$, $\vec x = [x_0, x_1]$, constants $\mathcal{C} = [1, \pi ]$ and basis functions $\mathbf{\Phi} =\langle +(\cdot,\cdot), \sin(\cdot), (\cdot)^2, \times(\cdot,\cdot) \rangle$. Highlighted are the arguments sublayer, composed of $\mathbf{\Sigma}$ nodes, and the images sublayer, composed of the basis functions from $\mathbf{\Phi}$. Together, these two sublayers define a single layer of our model. (\emph{b}) An example of function-specifying directed acyclic graphs (DAGs) that can be sampled from the network in (\emph{a}). In this case, the function expressions reduce to $\vec y(\vec x) = [y_0(\vec x), y_1(\vec x)]$, with $y_0(\vec x) = \sin^2(x_0 + 1)$ and $y_1(\vec x) = \sin(\pi^2\sin(x_1))$. } \label{fig:mesh1} \end{figure} \subsection{Layer structure} A dataset $\mathcal{D}=\{\langle \vec{x}_p, \vec{y}_p \rangle\}_{p=1}^{|\mathcal{D}|}$ consists of pairs of observations $\vec{x}_p$ and targets $\vec{y}_p=f^*\left(\vec{x}_p\right)$, where $f^*(\vec{x}_p)=[f^*_0(\vec{x}_p),\dots, f^*_{v-1}(\vec{x}_p)]^\top$. Here, each $f^*_j(\cdot)$ is a hidden target function we aim to model (the $j$-th component of the output of size $v$). Our goal is to compose either the exact form or an approximation of $f_j^*(\cdot)$ using a predefined collection of $N$ basis functions $\mathbf{\Phi}=\{\phi_i(\cdot)\}_{i=1}^N$, which can be understood as primitives for programs or functions. Note that bases can be repeated, their arity (number of arguments) is not restricted to one, and they may operate over different domains. The architecture to solve this problem resembles a conventional fully connected network with $L$ layers, where layer $l$ connects to layer $l+1$ via a fully connected layer with weights $\mathbf{w}_l$ and no bias. 
We define $\mathbf{W}=\left \{\mathbf{w}^{(l,i)} ; 1 \leq l \leq L, 1 \leq i\leq N \right \}$ as the collection of all weights, where $\mathbf{w}^{(l,i)}$ are the weights at layer $l$ for base $\phi_i(\cdot).$ The structure of $\mathbf{W}$ is discussed in Section \ref{sec:connectivity}. An example of our model architecture is depicted in Figure~\ref{fig:mesh1}. The input $\vec x$ is concatenated with a predefined collection of constants (e.g.\ $\pi$, $1$, $e$), which are used along with the bases $\mathbf{\Phi}$ to build constant factors in our solution. To incorporate the bases $\mathbf{\Phi}$ into the network, we follow a similar approach as \citep{martius2016extrapolation, sahoo18a, Kim2019IntegrationON} in which the bases act as activation functions on the nodes of the network. More specifically, each hidden layer consists of the \textit{arguments} sublayer and the \textit{images} sublayer. The bases are stacked in the images sublayer and act as the activation functions of their respective nodes. Each basis takes in $n > 0$ arguments, which are stacked in the arguments sublayer. We call nodes in the arguments sublayer \emph{$\mathbf{\Sigma}$-nodes}, as they sum over values from the previous images sublayer. Figure~\ref{fig:mesh1} highlights this sublayer structure, while the supplemental material (SM) describes the mathematical formalism behind it. \subsection{Temperature-controlled connectivity} \label{sec:connectivity} To make the network more interpretable, we want connections to be as sparse as possible. There are numerous approaches to inducing sparsity in neural networks, including $L_1$, $L_0$ and $L_{1/2}$ regularization \citep{louizos2018learning, Xu2010L12R, gale2019state}, but these methods indiscriminately regularize all the weights equally without capturing structure within layers. 
Here we propose a sparsity method based on the probabilistic interpretation of the softmax function to control the sparsity of each layer: sparsity is achieved by shaping the resulting softmax distribution into a delta function, as explained below. This has the additional benefit of allowing us to sample over the probability distribution for training, as discussed in Sections \ref{sec:prob} and \ref{sec:es}. Let $\vb{h}^{(l)}$ and $\widetilde{\vb{h}}^{(l+1)}$ be two consecutive \textit{images} and \textit{arguments} sublayers with $N_l$ and $M_{l+1}$ neurons, respectively. In a conventional neural network, these two sublayers would be related by a matrix with trainable weights. Here, to promote probability-based sparsity, we define a \emph{$T$-softmax layer} for any temperature $T > 0$ as the following transformation \begin{multline} \mathbf{W}(T) \cdot \mathbf{h}^{(l)} = \begin{bmatrix} {\mathsf{softmax}(\mathbf{w}_1;T)}^\top \\ \vdots \\ {\mathsf{softmax}(\mathbf{w}_{M_{l+1}};T)}^\top\end{bmatrix}\begin{bmatrix}h_1^{(l)} \\ \vdots \\ h_{N_l}^{(l)}\end{bmatrix} \equiv \begin{bmatrix}\widetilde h_1^{(l+1)} \\ \vdots \\ \widetilde h_{M_{l+1}}^{(l+1)}\end{bmatrix} = \widetilde{\mathbf{h}}^{(l+1)}, \end{multline} where $\mathsf{softmax}$ is the softmax activation, controlled by temperature $T$\footnote{as usual, for any $\mathbf{z}=[z_1,\dots,z_{N_l}]$ we define $ \mathsf{softmax}(\mathbf{z};T) \colonequals \left[\frac{\exp(z_1/T)}{\sum_{i=1}^{N_l}\exp(z_i/T)},\dots,\frac{\exp(z_{N_l}/T)}{\sum_{i=1}^{N_l}\exp(z_i/T)} \right]^\top, $ the limit of which is a delta function as $T\to 0.$}. So instead of allowing each element of matrix $\mathbf{W}$ to vary independently, we constrain each row of the matrix through the $\mathsf{softmax}$ function. This ensures that the elements in each row are positive and add up to one, leading to a probabilistic interpretation. In practice, we set $T$ to a fixed, typically small, number throughout training. 
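A minimal numerical sketch of this $T$-softmax transformation, together with the per-row categorical sampling used later in Section~\ref{sec:prob}, is shown below (names are illustrative, not the original implementation):

```python
import numpy as np

# Minimal sketch of the T-softmax layer and per-row edge sampling.
# Each row of softmax(W; T) is a probability distribution over the
# previous images sublayer; low T concentrates mass on one edge.

def t_softmax(z, T):
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def t_softmax_layer(W, h, T):
    # W: (M, N) raw weights; h: (N,) images-sublayer values.
    # The arguments sublayer is a convex combination of the images sublayer.
    return t_softmax(W, T) @ h

def sample_edges(W, T, seed=None):
    # One image-to-argument edge per Sigma-node, drawn from its row's
    # categorical distribution.
    rng = np.random.default_rng(seed)
    P = t_softmax(W, T)
    return np.array([rng.choice(P.shape[1], p=row) for row in P])
```

As $T\to 0$ each row of `t_softmax(W, T)` approaches a one-hot vector, which is the delta-function limit described in the footnote.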
The last layer is usually set to have a higher temperature to allow for more compositionality. Moreover, it is possible to explore schedulers for the temperature in the spirit of simulated annealing ~\citep{Kirkpatrick671, bertsimas1993simulated}, which we discuss in the SM. \subsection{A neural network as a probability distribution over functions} \label{sec:prob} The softmax transformation specifies the probability of each argument node to be connected with a previous layer's image node. Hence, for fixed weights $\mathbf{W}$ our model represents a distribution $q(\cdot|\mathbf{W})$ over compositions yielding functions $f \equiv f(\cdot)$ in a function space $\mathcal{F}_{\mathbf{\Phi}}^L$ defined as follows: \[ \mathcal{F}_{\mathbf{\Phi}}^L = \{ \textrm{all function compositions up to nesting depth $L$ of bases } \mathbf{\Phi} \}. \] If we allow each $\mathbf{\Sigma}$-node to sample a single connection from their local softmax probabilities, the network becomes sparse and describes a function-specifying directed acyclic graph (DAG), as seen in Figure (\ref{fig:mesh1}.b). Each function $f(\cdot) \in \mathcal{F}_{\mathbf{\Phi}}^L$ that can be sampled is associated with a probability modeled by the network weights $\mathbf{W}$. We can now define the optimal weights $\mathbf{W}_*$ of our neural network as satisfying the following property \[ q(f|\mathbf{W}_*)=1 \text{ for some } f \text{ such that } f(x)=f^*(x) \text{ for all } x \text{ in the domain of } f^*. \] Note that since $q(\cdot|\cdot)$ is a probability distribution we have that $\sum_{f \in \mathcal{F}_{\mathbf{\Phi}}^L}q(f|\mathbf{W})=1$ and $q(f|\mathbf{W})\geq 0$ for all $f$ in $\mathcal{F}_{\mathbf{\Phi}}^L$. In practice, using a neural architecture allows for efficient sampling and function evaluation. At any given layer $l$, we apply the softmax transformation to weights $W^{(l)}$, as described in Section~\ref{sec:connectivity}, to obtain $\widetilde{W}^{(l)}$. 
Each row of $\widetilde{W}^{(l)}$ then defines a categorical distribution, from which we can efficiently sample an index representing a specific image-to-argument edge. Finally, these sampled edges are encoded as sparse matrices, through which a forward pass results in a full, sampled function being evaluated. The probability of this function being generated by this model can easily be recovered from the weights and the sampled indices by backtracking starting from the outputs. \subsection{Skip connections} We augment our model with skip connections similar to those in DenseNet~\citep{huang2017densely} and ResNet~\citep{he2016deep}, in which the image hidden states are concatenated with those of subsequent layers, as depicted in Figure~\ref{fig:mesh2}. Skip connections yield several desirable properties for our model: (\emph{i}) The network can find compact solutions, since all levels of composition can be considered. This promotes sparsity and interpretability of the solution. (\emph{ii}) Shallow layers can be trained before or alongside the subsequent layers due to more direct supervision, since gradients can propagate to shallow layers more easily, avoiding exploding or vanishing gradients. We also speculate that this allows subsequent layers to behave as higher-order corrections to the solutions found in early layers. (\emph{iii}) Primitives in shallow layers can be reused, analogous to feature reuse in DenseNet. For finding programs, this encourages modularity by allowing program subroutines to be composed earlier and then reused in deep layers. \begin{figure}[!t] \centering \hfill\begin{minipage}{.70\textwidth} \makebox[\textwidth]{\includegraphics[width=\textwidth]{Figure2.pdf}} \end{minipage} \hfill\begin{minipage}{.25\textwidth} \caption{ Augmenting the neural network from Figure~(\ref{fig:mesh1}.a) with skip connections; $L=2$ and $\mathbf{\Phi}$ consists of the bases $+(\cdot,\cdot)$, $\sin(\cdot)$, $(\cdot)^2$ and $\times(\cdot,\cdot)$. 
Dashed lines indicate the origin of the reused neurons. Colour indicates the origin of the skip connections. } \label{fig:mesh2} \end{minipage} \end{figure} \section{Training} \label{sec:optimization} We want to be able to include bases that are non-differentiable so that we can express a wide range of functions and programs. Additionally, while local minima are often ``good enough'' for deep learning, symbolic regression typically needs to escape local minima and explore a broader range of functions. To approach this, we propose a loss function and a training method that combine gradient-based optimization and evolutionary strategies for efficient global exploration of the function space. \subsection{Loss} In order to train $q(\cdot|\mathbf{W})$, we consider a mini-batch $\mathcal{M} = \langle X, Y \rangle$, and sample a function from the network $f(\cdot) \sim q(\cdot|\mathbf{W})$. We then compute the \textit{fitness} of $f(\cdot)$ with respect to a training pair $(\vec{x}, y)$\footnote{For simplicity we consider scalar outputs. We discuss multi-output networks in the next section.} by evaluating an interpolation amplitude using a \emph{normalized radial basis function} (RBF) kernel $k(f(\vec{x}),y)=(2\pi\sigma^2)^{-1/2}\exp(-(f(\vec{x})-y)^2/(2\sigma^2))$. This kernel is equivalent to a Gaussian with mean $y$ and variance $\sigma^2$, and measures how close $f(\vec{x})$ is to the target $y$. The total fitness is constructed by summing over the entire mini-batch as follows: $K(\mathcal{M},f) = \sum_{(\vec{x},y) \in \mathcal{M}} k(f(\vec{x}),y)$. Note that the variance of $k(f(\vec{x}),y)$ characterizes the smoothness of the fitness function for our training. In the limit $\sigma^2 \rightarrow 0$, the RBF defines a delta function for which the fitness will only be non-zero for some $(\vec{x}, y)$ if $f(\vec{x}) = y$. 
Similarly, a large variance characterizes a fitness in which potentially many solutions give good approximations, increasing the risk of convergence to local minima. In the former case, learning becomes harder as only a few $f(\cdot)$ out of exponentially many samplable functions will result in any signal, whereas in the latter case learning might not converge to the optimal solution. We hence let $\sigma^2$ be a network hyperparameter which can be tuned for the tradeoff between ease of learning and solution optimality for different tasks. We can now introduce a loss function so as to perform backpropagation on the weights of $q(\cdot|\mathbf{W})$: \begin{align*} H_q[f, \mathbf{W}, \mathcal{M}] & = - \sum_{\langle \vec{x}, y\rangle \in \mathcal{M}} \log q(f(\cdot)|\mathbf{W}) \;\; k(f(\vec{x}),y) \\ & = - \log q(f(\cdot)|\mathbf{W}) \sum_{\langle \vec{x}, y\rangle \in \mathcal{M}} \frac{1}{\sqrt{2\pi \sigma^2}} \exp \left(-\frac{(f(\vec{x}) - y)^2}{2 \sigma^2} \right) \\ \tag{\theequation}\label{eq:cross_entropy} & = - \log q(f(\cdot)|\mathbf{W}) \; K(\mathcal{M},f). \end{align*} If we interpret $k(\cdot,y)$ as a Gaussian probability density conditional on its mean being the target $y$, we can interpret~\eqref{eq:cross_entropy} as the cross-entropy of the conditional distribution for the target and the probability of the sampled function $f.$ Our intuition is that if the sampled function $f$ is close to $f^*$, then $K(\mathcal{M},f)$ will be large and the gradient update below will also be large: \begin{equation} \label{eq:gradient_update} \nabla_{\mathbf{W}} H_q[f, \mathbf{W},\mathcal{M}] = -\frac{\nabla_{\mathbf{W}}q(f|\mathbf{W})}{q(f|\mathbf{W})} \; K(\mathcal{M},f). 
\end{equation} The first term on the right hand side (RHS) of the update~\eqref{eq:gradient_update} increases the likelihood of the function $f.$ The second term on the RHS is the interpolating amplitude, and it is maximal when $f \equiv f^*$, and most importantly, it will be close to zero as $f$ deviates from $f^*$. If the sampled function is not close to the target, then the likelihood update will be suppressed by the vanishing interpolating function. Thus, we only optimize the likelihood for functions close to the target. Since a probability increase for a given sample is coupled with a probability decrease in some negative samples (through our local softmax coupling), in spirit~\eqref{eq:gradient_update} also performs a form of \emph{negative sampling}, as defined in Word2Vec~\citep{NIPS2013_5021}. In practice, we also tested reweighting the importance of the top-$\lambda$ routes. Let us sort those routes in increasing order of $K(\mathcal{M},\cdot)$. Then, if $K(\mathcal{M},f_i)$ denotes the $i$-th sorted fitness, we substitute it by $K'(\mathcal{M},f_i) = K(\mathcal{M},f_i) / i$. This results in updates that are biased towards the best routes within the already selected ones. We find that this small modification does not improve the fraction of solutions that converge to the correct result, but it does make convergence faster. Note that in~\eqref{eq:gradient_update} we backpropagate only through the probability of the function $f$ given by $q(f|\mathbf{W}),$ whose value \emph{does not} depend on the bases in $\mathbf{\Phi}.$ This suggests that the bases can be non-differentiable, without interfering with our optimization. This is particularly useful for our program discovery and pattern recognition experiments, as these domains require logic operations such as $\mathsf{MIN}$, $\mathsf{MAX}$ and $\mathsf{IF}$, which are non-differentiable. 
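As a concrete illustration of the fitness $K(\mathcal{M},f)$, the loss $H_q$, and the top-$\lambda$ truncation selection used by the evolutionary strategy of Section~\ref{sec:es}, consider the following sketch (scalar outputs only; the log-probability $\log q(f|\mathbf{W})$ is taken as given, and all names are illustrative):

```python
import math

# Sketch of the RBF fitness K(M, f), the loss H_q for one sampled function,
# and top-lambda truncation selection. log_q_f is the log-probability of
# sampling f under q(.|W), assumed to be supplied by the network.

def fitness(f, batch, sigma2):
    norm = 1.0 / math.sqrt(2 * math.pi * sigma2)
    return sum(norm * math.exp(-(f(x) - y) ** 2 / (2 * sigma2))
               for x, y in batch)

def loss(log_q_f, f, batch, sigma2):
    # H_q = -log q(f|W) * K(M, f): only functions close to the target
    # produce a sizeable gradient signal.
    return -log_q_f * fitness(f, batch, sigma2)

def select_top(fs, log_qs, batch, sigma2, lam):
    # Keep the top-lambda fraction of sampled functions by fitness.
    scored = sorted(zip(fs, log_qs),
                    key=lambda pair: fitness(pair[0], batch, sigma2),
                    reverse=True)
    return scored[:max(1, int(lam * len(fs)))]
```

Note that evaluating the fitness only requires calling the sampled function, so non-differentiable bases pose no difficulty here, consistent with the observation above.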
\subsection{Evolutionary strategy} \label{sec:es} A simple training algorithm can be executed by sampling $R$ functions in each step and performing a gradient step for each sampled function as defined in \eqref{eq:gradient_update}. However, we found that this approach can easily converge to inadequate local minima. Instead, inspired by the recent development of evolutionary-based optimization algorithms, we propose an evolutionary strategy to update our model. Denote $\mathbf{W}^{(t)}$ as the set of weights at training step $t$. We initialize all weights of $\mathbf{W}^{(0)}$ to the same value of $1$. Let us also fix two hyperparameters: $R$, the number of functions to sample from $q(\cdot|\cdot)$ at each training step; and $\lambda$, or the \textit{truncation parameter}, which defines the fraction of the $R$ paths to be chosen for optimization via~\eqref{eq:gradient_update}. Initialize $t = 0$. The algorithm then proceeds as follows: \begin{enumerate} \item Sample $R$ functions $f_1,\dots, f_R \sim q(\cdot|\mathbf{W}^{(t)}).$ \item Evaluate $K(\mathcal{M}, f_i)$ for each $i=1,\dots,R$ and choose $\lambda$ paths which yield the top-$\lambda$ highest values of $K(\mathcal{M}, \cdot).$ If the network is multi-output with $v$ outputs, then select the top-$\lambda$ function candidates for each output, yielding a total of $v \lambda$ selected functions. The total loss is then given by $\sum_{i=1}^{v \lambda}H_q[f_i,\mathbf{W},\mathcal{M}],$ which yields the gradient update for the training step: \begin{equation} \label{eq:total_update} -\sum_{i=1}^{v \lambda} \frac{\nabla_{\mathbf{W}}q(f_i|\mathbf{W})}{q(f_i|\mathbf{W})} \; K(\mathcal{M},f_i). \end{equation} \item Obtain $\mathbf{W}^{(t+1)}$ by performing the gradient step~\eqref{eq:total_update} on $\mathbf{W}^{(t)}$ for all selected paths. In practice, we find that the Adam algorithm~\citep{Kingma2015AdamAM} worked well for our experiments. 
\item Set $t = t + 1$ and repeat from Step 1 until a stop criterion is met. \end{enumerate} The benefit of using~\eqref{eq:total_update} versus~\eqref{eq:gradient_update} is that accumulating over the top-$\lambda$ best fits to the target allows for exploration of function compositions that contain desired components, but are not fully developed yet. For example, if we want to train an implicit function with OccamNet, such as the hyperbola $x_0x_1=1$, then the constant function $f=1$ would always be a best fit, but it does not capture the desired behavior, while a composition that contains $x_0$ might not be fully developed to $x_0x_1$, but the probability of choosing $x_0$ should be increased, which is possible through~\eqref{eq:total_update}. We find that initializing the weights $\mathbf{W}^{(0)}$ to a matrix filled with a single value works in practice, and can be understood as initializing the model to a probability distribution that is well spread over the function space. \subsection{Recurrence} \label{sec:recurrence} OccamNet can also be trained to find recurrence relations, which is of particular interest for programs that rely on $\mathsf{FOR}$ or $\mathsf{WHILE}$ loops (see Table~\ref{table:benchmarks} for examples of such programs). To find such programs, we assume a \emph{maximal} depth of recursion $D\in\mathbb{N}$. Let us use the following notation for recurring functions: $f^{\circ (n+1)}(x) \equiv f^{\circ n}(f(x))$, with base case $f^{\circ 1}(x) \equiv f(x)$. We augment the evolutionary algorithm as follows. We first sample $(f_1,\dots, f_R) \sim q(\cdot|\mathbf{W}^{(t)})$. For each $f_i$, we compute its recurrence up to depth $D$: $(f_i^{\circ 1}, f_i^{\circ 2}, \dots f_i^{\circ D})$, obtaining a collection of $RD$ functions. Training continues as in Section \ref{sec:es}: we compute the corresponding $K(\mathcal{M},f)$, select the best $\lambda$ fraction of them, and update the weights. 
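The recurrence evaluation $f^{\circ 1},\dots,f^{\circ D}$ amounts to repeatedly feeding a sampled function its own output, as in the following minimal sketch (illustrative only; the actual model reuses the network's forward pass in a recurrent fashion):

```python
# Minimal sketch of recurrence evaluation: a sampled function f is fed its
# own output D times, yielding the candidate iterates f^(1), ..., f^(D).

def iterates(f, x, depth):
    """Return [f(x), f(f(x)), ..., f^(depth)(x)]."""
    out, value = [], x
    for _ in range(depth):
        value = f(value)
        out.append(value)
    return out
```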
It is important to note that we have to consider all depths up to $D$ since our maximal recurrence depth might be larger than the one for the target function. Note that we do not change the network architecture to accommodate for recurrence depth $D > 1$. As described in Section \ref{sec:es}, we can efficiently use the network architecture to evaluate a sampled function $f(\vec x)$ for a given batch of $\vec x$. To incorporate recurrence, we simply take the output of this forward pass and feed it again to the network $D$ times, in a similar fashion to a recurrent neural network. The resulting outputs are evaluations $(f_i^{\circ 1}(\vec x), f_i^{\circ 2}(\vec x), \dots f_i^{\circ D}(\vec x))$ for a given batch of $\vec x.$ \section{Experiments and Discussion} \label{sec:experiments} In order to empirically validate our model, we devise a diverse collection of benchmarks in three categories. \emph{Analytic functions} consist of simple, smooth equations, while \emph{Programs} include non-differentiable operations (such as $\mathsf{IF}$ or $\mathsf{MAX}$). Finally, we also experiment with \textit{Pattern Recognition} tasks to investigate both the accuracy and interpretability of our model's solution. For our experiments, we set the stop criterion and terminate learning when the top-$\lambda$ sampled functions all return the same fitness $K(\cdot, f)$ for 30 consecutive epochs. If this happens, these samples are either the exact same function-specifying DAG or equivalent function expressions for the input domain. At this training point, the network's probability mass will increasingly concentrate on these expressions, and computing the most likely DAG allows us to retrieve the final expression. We say that the network has converged in discovering functions if this final expression matches the correct function. 
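The stop criterion described above, terminating when the top-$\lambda$ sampled functions all return the same fitness for 30 consecutive epochs, can be sketched as follows (names and the tolerance are illustrative, not taken from the original implementation):

```python
# Sketch of the stop criterion: terminate when the top-lambda sampled
# functions all return the same fitness for `patience` consecutive epochs.
# `history` is a list of per-epoch tuples of top-lambda fitness values.

def converged(history, patience=30, tol=1e-12):
    if len(history) < patience:
        return False
    recent = history[-patience:]
    ref = recent[0][0]
    return all(abs(k - ref) <= tol
               for epoch in recent for k in epoch)
```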
For pattern recognition, there is no ground truth for a correct composition, so we measure the accuracy of the classification rule on a test split, in the conventional manner. In all of the experiments, we cap the maximum number of iterations, so that if termination is not met in a set number of steps we benchmark it as not converged. We also keep a constant temperature for all the layers, except for the last one. An increased last-layer temperature allows the network to explore higher function compositionality, as shallow layers can be further trained before the last-layer probabilities become concentrated; this is particularly useful for learning functions with high degrees of nesting. More details on hyperparameters for experiments can be found in the SM. \begin{table}[t] \centering \caption{Holistic benchmarking of OccamNet. $\eta$ is the rate of successful convergence over 10 trials. $T_c$ is the time of convergence measured in number of epochs. Reported accuracy is the median over 10 trials.} \label{table:benchmarks} \begin{tabular}{lcrlcr} \toprule \multicolumn{3}{c}{Analytic Functions} & \multicolumn{3}{c}{Programs} \\ \cmidrule(r){1-3} \cmidrule(r){4-6} \multicolumn{1}{c}{Targets} & \multicolumn{1}{c}{$\eta$} & \multicolumn{1}{c}{$T_c$} & \multicolumn{1}{c}{Targets} & \multicolumn{1}{c}{$\eta$} & \multicolumn{1}{c}{$T_c$} \\ \cmidrule(r){1-1} \cmidrule(r){2-2} \cmidrule(r){3-3} \cmidrule(r){4-4} \cmidrule(r){5-5} \cmidrule(r){6-6} $2x^2+3x$ & 10/10 & 50 & $3x$ if $x>0$, else $x$ & 7/10 & 260 \\ $\sin(3x+2)$ & 8/10 & 560 & $x^2$ if $x>0$, else $-x$ & 10/10 & 100 \\ $\sum_{n=1}^3\sin(nx)$ & 7/10 & 1,900 & $x$ if $x>0$, else $\sin(x)$ & 10/10 & 2,360 \\ $(x^2+x)/(x+2)$ & 9/10 & 810 & $\mathsf{SORT}(x_0,x_1,x_2)$ & 7/10 & 810 \\ $x_0^2(x_0+1)/x_1^5$ & 3/10 & 3,050 & $\mathsf{4LFSR}(x_0,x_1,x_2,x_3)$ & 10/10 & 140 \\ $x_0^2/2+(x_1+1)^2/2$ & 6/10 & 830 
\\ \cmidrule(r){4-4} $x_0x_1-1=0$ & 3/10 & 1,000 & $y_0(\vec{x})=x_1$ if $x_0<2$, else $-x_1$ & \multirow{2}{*}{3/10} & \multirow{2}{*}{1,570} \\ \cmidrule(r){1-3} \multicolumn{3}{c}{Pattern Recognition} & $y_1(\vec{x})=x_0$ if $x_1<0$, else $x_1^2$\\ \cmidrule(r){1-3} \cmidrule(r){4-4} \multicolumn{1}{c}{Targets} & \multicolumn{1}{c}{Acc. (\%)} & \multicolumn{1}{c}{$T_c$} \\ \cmidrule(r){1-1} \cmidrule(r){2-2} \cmidrule(r){3-3} \cmidrule(r){4-4} MNIST Binary & 92.9 & 1,500 & $g(x)=x^2$ if $x<2$, else $x/2$ & \multirow{2}{*}{10/10} & \multirow{2}{*}{640} \\ MNIST Trinary& 59.6 & 4,000 & $y(x)=g^{\circ 4}(x)$ \\ \cmidrule(r){4-4} ImageNet Binary & 70.7 & 4,000 \\ \cmidrule(r){4-4} Backprop OccamNet & 98.1 & 100 & $g(x)=x+2$ if $x<2$, else $x-1$ & \multirow{2}{*}{10/10} & \multirow{2}{*}{640}\\ Finetune ResNet & 97.3 & 40 & $y(x)=g^{\circ 2}(x)$\\ \bottomrule \end{tabular} \end{table} \subsection{Analytic Functions} We collect a series of benchmarks that consist of composition of basic building blocks of analytic functions. Refer to Table~\ref{table:benchmarks} for the convergence rate and number of steps required for convergence. From our experimental benchmarks, we highlight the large ratio of convergence for the function expression $f(x) = (x^2 + x) / (x + 2)$, which we originally speculated could easily trick the network to get stuck in the local minimum $f(x) \approx x + 1$ for large enough $x$. In contrast, similarly to the difficulties faced by~\citet{Udrescu_2020}, we find that $f(x_0, x_1) = x_0^2 (x_0+1) / x_1^5$ often failed to converge, as the factor $x_0^2 (x_0+1)$ often got approximated to $x_0^3$; even when convergence did occur, it required a relatively large number of steps for the network to resolve this additional constant factor. We also tested implicit functions. OccamNet was set to learn the implicit equation of hyperbola $x_0x_1=1$ with inputs $x_0$ and $x_1$ satisfying the equation and setting a constant target $y_0 = 1$. 
Interestingly, the model does not fall into the famous trap~\citep{schmidt2009} of producing the trivial equation $1=1$. We were unable to fit more complicated implicit functions, such as the circle equation $x_0^2+x_1^2-1=0.$ In future work we will focus on testing different forms of regularization to accommodate for those cases. \subsection{Programming} We collect a number of benchmarks in learning non-differentiable, potentially recursive/iterative functions under the \textit{Programs} section in Table~\ref{table:benchmarks}. Of particular interest are the logic functions involving $\mathsf{IF}$ statements, which we found to converge quickly for simple equations. We also investigated learning recurrent logic expressions by employing the methods described in Section \ref{sec:recurrence}. From our experiments, we highlight both the network's fast convergence to the right functional form and the discovery of the correct recurrence depth of the final expression. We also investigated the usage of bases such as $\mathsf{MAX}$ and $\mathsf{MIN}$ for the purpose of sorting numbers, obtaining relatively well-behaved final solutions: the few solutions that did not converge would fail only in deciding the second component $y_2$ of the output vector. Finally, we introduced binary operators and discrete input sets for testing a simple 4-bit LFSR, for which we again highlight the fast convergence and high success rate. \subsection{Pattern recognition} To further investigate OccamNet's capabilities in different tasks, we also try simple pattern recognition on various image datasets. Of particular interest is the interpretability of the model's solutions. In the future, we hope to integrate our model with other tasks in an end-to-end fashion. We begin with MNIST~\citep{lecun-gradientbased-learning-applied-1998} data and train OccamNet to classify in a binary setting between the digits 0 and 7 (\emph{MNIST Binary}). 
For this high-dimensionality task, we implement OccamNet on GPU, yielding a considerable 8x speedup compared to CPU. One of the successful functional fits that OccamNet finds sets $y_0\left(\vec{x}\right)$ to $\tanh{\left(10\left(\max(x_{715},x_{747})+\tanh(x_{435})+2x_{710} +2x_{713}\right)\right)}$ and $y_1\left(\vec{x}\right)$ to $\tanh\left(10\tanh(10\left(x_{512} + x_{566}\right))\right); $ the model learns to incorporate pixels that are indicative of the class into the functional fit: for example, here $x_{512}$ and $x_{566}$ would be indicative of the digit 7. We further classify between classes 0, 1 and 2 (\emph{MNIST Trinary}) and obtain 59.6\% accuracy, which indicates that OccamNet is learning pattern recognition, beyond random exploration of the search space. From our experiments, we note that: (\emph{i}) shallow networks are sufficient for good accuracy; (\emph{ii}) skip connections require a very small learning rate, as otherwise the model learns to output a particular pixel; (\emph{iii}) OccamNet without skip connections requires large learning rates because a whole path needs to be trained at once, as the architecture is less modular. These observations hold true when we further benchmark the integration of OccamNet with deep feature extractors. We extract features from ImageNet images using a ResNet 50 model, pre-trained on ImageNet~\citep{he2016deep}. For simplicity, we choose two classes from the dataset, ``minivan'' and ``porcupine'' (\emph{ImageNet Binary}). Furthermore, OccamNet improves its accuracy significantly by backpropagating through our model using a standard cross-entropy signal. We either freeze the ResNet weights (\emph{Backprop OccamNet}) or finetune ResNet through OccamNet (\emph{Finetune ResNet}). In both cases the converged OccamNet represents simple rules ($y_0(\vec{x}) = x_{1838}$ and $y_1(\vec{x}) = x_{1557}$) suggesting that replacing the head in deep neural networks with OccamNet might be promising. 
\section{Related Work} There is a diverse and active body of work on the tasks of symbolic regression and program synthesis. OccamNet was partially inspired by the EQL network \citep{martius2016extrapolation, sahoo18a, Kim2019IntegrationON}, a neural network-based symbolic regression system which has successfully found simple analytic functions. Neural Arithmetic Logic Units (NALU) and related models \citep{NIPS2018_8027, Madsen2020Neural} provide neural inductive bias for arithmetic in neural networks, which in principle can fit some of the benchmarks in Table~\ref{table:benchmarks}. NALU updates the weights by backpropagating through the activations, shaping the neural network towards a gating interpretation of the linear layers. However, generalizing those models to a diverse set of function bases might be a formidable task: from our experiments, backpropagation through some activation functions (such as division or sine) seems to make training considerably harder. In a different computation paradigm, genetic programming has performed exceptionally well in the task of symbolic regression \citep{schmidt2009distilling, Udrescu_2020}, and a number of evolution-inspired, probability-based models have been explored for this goal \citep{mckay2010grammar}. In future work, we will investigate how OccamNet compares to these methods, especially in the context of search spaces with large dimensionality. For programs, one option is to use EQL-based models with logic activations (step functions, $\mathsf{MIN}$, $\mathsf{MAX}$, etc.) approximated by sigmoid activations. Another is probabilistic program induction using domain-specific languages \citep{NIPS2018_7845, NIPS2018_8006, NIPS2019_9116}. 
Neural Turing Machines~\citep{Graves2014NeuralTM, Graves2016HybridCU} and their stable versions~\citep{CollierBeel2018} are also able to discover interpretable programs, simulated by neural networks, from observations of input-output pairs, relying on an external memory. However, here we achieve similar results (for example, sorting) without external memory and in only minutes on a CPU. To our knowledge, there are no classifiers that predict MNIST or ImageNet labels using symbolic rules in the way considered in this paper. The closest baseline we found is using GP~\citep{10.5555/1623755.1623876}, which in principle can perform comparably well to our neural method, but it is unclear how to integrate it with deep learning. \section{Conclusion and Future Work} In this paper, we introduced a probability-based neural network model for discovering compact, interpretable solutions to fitting data. We introduced an evolutionary strategy for training, along with a tunable loss function and a number of architectural design choices that ensure compactness. We discussed methods to incorporate recurrence, and introduced hyperparameters that accommodate different tasks. We tested OccamNet's power in discovering function expressions, and showed how it can be used in different problem settings. Future work will include extending this framework to a range of symbolic regression and program synthesis tasks, such as integration or string parsing. Finally, we demonstrated that our method can be coupled with deep learning feature extraction methods, laying the groundwork for future investigation on integrating interpretable, probability-based methods into deep learning. \section*{Acknowledgements} We would like to thank Owen Dugan, Thiago Bergamaschi, Kristian Georgiev, Andrew Ma, Peter Lu, Evan Vogelbaum, Laura Zharmukhametova, Momchil Tomov, Rumen Hristov, Charlotte Loh, Ileana Rugina and Lay Jain for fruitful discussions. 
Research was sponsored in part by the United States Air Force Research Laboratory and was accomplished under Cooperative Agreement Number FA8750-19-2-1000. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the United States Air Force or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein. This material is based upon work supported in part by the U.S. Army Research Office through the Institute for Soldier Nanotechnologies at MIT, under Collaborative Agreement Number W911NF-18-2-0048.
\section{Introduction} A vertex algebroid is the algebraic structure induced on a subspace $V_0 \oplus V_1$ of a vertex algebra $V$. The study of vertex algebroids started with \cite{GMS} where the sheaves of chiral differential operators (CDO) were defined as the enveloping algebras of {exact} vertex algebroids. Algebras of chiral differential operators are sheaves of vertex algebras on smooth varieties resembling the associative algebras of differential operators in some respects. One striking difference from the classical prototype is that for some manifolds $X$ no CDO exists; or if there is one, there may be more than one isomorphism class of such sheaves. Speaking in technical language, sheaves of CDO form a stack, whose groupoid of global sections may be empty or have more than one connected component. In \cite{GMS} the classification of chiral differential operators was obtained; in particular, it was established that a global sheaf of CDO exists on $X$ if and only if $ch_2 (\Omega^1_X) = 0$ where $ch_2 (\Omega^1_X)$ is the second graded piece of the Chern character of $\Omega^1_X$. This result was re-established by Bressler \cite{Bre} in a rather unexpected fashion. He noticed that the notion of a vertex algebroid is related to a well-known notion in differential geometry, a Courant algebroid: the latter is a quasi-classical limit of the former. He obtained a classification of Courant algebroids extending a fixed Lie algebroid and rediscovered the aforementioned obstruction by connecting the existence of a CDO on $X$ with the existence of certain Courant extensions of the Atiyah algebra of the sheaf of 1-forms. In both these classification problems the obstruction to global existence is a class in $ H^2 (X, \Omega^{2} \to \Omega^{3,cl}). $ This is due to a rather remarkable property of these algebroids: one can ``twist'' an algebroid $\cA$ on $U\subset X$ by a closed 3-form $\alpha$. To be more precise, let us denote by $\VExt$ (resp. 
$\CExt$) the stack of vertex (resp. Courant) algebroid extensions of a given Lie algebroid $\cL \stackrel{\pi}{\to} \cT_X$ with an invariant pairing $\pair{,}$ on $\ker \pi$ (cf. section \ref{Extension-defin Subsection}). Then the twisting by 3-form action on $\VExt$ and $\CExt$ extends to an action of a certain stack associated to the complex $\Omega_X^2 \to \Omega^{3, cl}_X$ (cf. \cite{D}) and it makes each of those a torsor over the latter. By standard abstract nonsense, to every such stack $\cS$ there corresponds a class $cl(\cS)\in H^2(X, \Omega^2 \to \Omega^{3, cl})$ which vanishes precisely when $\cS$ has a global object. For example, the obstruction $ch_2(\Omega^1_X)$ above is exactly the class of the stack of exact vertex algebroids on $X$. In this article we classify transitive vertex algebroids. Since the exact vertex algebroids classified in \cite{GMS} are, in fact, a particular kind of transitive vertex algebroids (those whose associated Lie algebroid is the tangent sheaf), our classification generalizes that of \cite{GMS}. In particular, we compute the class of the stack $\VExt$. Bressler computes the corresponding class for Courant extensions of $X$ \cite{Bre} and proves that $cl (\CExt) = - \frac{1}{2} p_1(\cL, \pair{,})$ where $p_1(\cL, \pair{,})$ is the Pontryagin class associated with the pair $(\cL, \pair{,})$, a generalization of the familiar first Pontryagin class of a vector bundle, defined in {\it loc.cit.}. Our main result is Theorem \ref{intro-thm1} below. To prove it we take up the techniques of Baer arithmetic developed by Bressler for Courant algebroids and use the classification of both CDO and Courant algebroids. 
\begin{thm} \label{intro-thm1} The class of $\VExt$ in $H^2(X, \Omega^2 \to \Omega^{3, cl})$ equals $$ cl(\VExt) = ch_2 (\Omega^1_X) - \half p_1 (\cL, \pair{,}) $$ \end{thm} It is worthwhile to note that it is possible for a manifold $X$ to admit no global CDO and no global Courant extension of a given Lie algebroid $\cL$, but still admit a vertex extension of $\cL$. \smallskip We use the classification result above to study certain deformations of sheaves of {\em twisted chiral differential operators} (TCDO) defined in \cite{AChM}. A TCDO is defined through a procedure that, starting with a CDO, produces a sheaf which has features of both the original CDO and the Bernstein-Beilinson algebra of twisted differential operators (\cite{BB1}). These sheaves have proved useful in representation theory of affine Lie algebras at the critical level. In particular, one has a localization procedure for certain classes of $\hat{\fg}$-modules \cite{AChM}. More explicitly, a sheaf of TCDO on $X$ is a sheaf of vertex algebras that locally looks like $\cD^{ch} \otimes H_X$ where $\cD^{ch}$ is a sheaf of CDO on $X$ and $H_X$ is the algebra of differential polynomials on the space $H^1 (X, \Omega^{1,cl})$ classifying the {\em twisted differential operators} on $X$. When $X$ is a flag variety, $X = G/B_{-}$, the algebra $H_{G/B}$ is isomorphic to $\Cplx[\fh^*]$ where $\fh$ is the Cartan subalgebra of $\fg = \Lie G$. Moreover, there is an embedding of the affine vertex algebra $$ V_{-h^\vee} (\fg) \to \Gamma(G/B, \cD^{ch, tw}_{G/B}) $$ which makes the space of sections of the TCDO over the big cell $ \Gamma(U_e, \cD^{ch, tw}_X) \iso \cD^{ch}(U_e) \otimes H_{G/B} $ a $\fg$-module of the critical level, called the {\em Wakimoto module $W_{0, -h^{\vee}}$}. 
The Wakimoto module $W_{0, -h^\vee}$ \cite{FF1} is a member of the family $$ W_{0, k} = \cD^{ch}(U_e) \otimes H_{X, k + h^\vee} $$ where $H_{X, \kappa}$ is the Heisenberg vertex algebra associated with the space $\fh$ with a bilinear form equal to $\kappa$ times the normalized Killing form. One might ask whether $W_{0,k}$ with non-critical $k$ admits a localization similar to that of $W_{0, -h^\vee}$. We show that such a sheaf indeed exists on any flag manifold and is, in fact, a deformation of the TCDO mentioned above: there is one such sheaf for each choice of an invariant inner product $\pair{\cdot, \cdot }$ on $\fg$. If $X = \prline$, then we prove that this sheaf is a sheaf of $\widehat{\frak{sl}}_2$-modules of level $\pair{\cdot, \cdot } + \pair{\cdot, \cdot }_{crit}$ (cf. Corollary \ref{global_sections_of_algebroid-Cor}). More generally, we define a {\em deformation of TCDO} on an arbitrary manifold $X$ to be the vertex enveloping algebra of a certain transitive vertex algebroid on $X$. We apply our main classification result (cf. Theorem \ref{intro-thm1} above) to classify the deformations. \smallskip {\em Acknowledgement.} The author would like to thank Fyodor Malikov for immense help and guidance throughout the work. \bigskip \section{Preliminaries} We will recall the basic notions of vertex algebras following the exposition of \cite{AChM}. All vector spaces will be over $\pole$. \subsection{Definitions and examples} \label{Definitions} Let $V$ be a vector space. A {\em field} on $V$ is a formal series $$a(z) = \sum_{n\in \ZZ} a_{(n)} z^{-n-1} \in ({\rm End} V)[[z, z^{-1}]]$$ such that for any $v\in V$ one has $a_{(n)}v = 0$ for sufficiently large $n$. Let $\Fields (V)$ denote the space of all fields on $V$. 
A {\em vertex algebra} is a vector space $V$ with the following data: \begin{itemize} \item a linear map $Y: V \to \Fields(V)$, $V\ni a \mapsto a(z) = \sum_{n\in \ZZ} a_{(n)} z^{-n-1}$ \item a vector $\vacB\in V$, called the {\em vacuum vector} \item a linear operator $\de: V \to V$, called the {\em translation operator} \end{itemize} that satisfy the following axioms: \begin{enumerate} \item (Translation Covariance) $ (\de a)(z) = \de_z a(z)$ \item (Vacuum) $\vacB(z) = \id$; $a(z)\vacB \in V[[z]]$ and $a_{(-1)}\vacB = a$ \item (Borcherds identity) \begin{align} \label{Borcherds-identity} &\sum\limits_{j\geq 0} {m \choose j} (a\ops{n+j} b )\ops{m+k-j}\\ =& \sum\limits_{j\geq 0} (-1)^{j} {n \choose j}\{ a\ops{m+n-j} b\ops{k+j} - (-1)^{n}b\ops{n+k-j} a\ops{m+j} \}\nonumber \end{align} \end{enumerate} A vertex algebra $V$ is {\em graded} if $V = \oplus_{n\geq 0}V_n$ and for $a\in V_i$, $b\in V_j$ we have $$a_{(k)}b \in V_{i+j - k -1}$$ for all $k\in \ZZ$. (We put $V_i = 0$ for $i<0$.) All vertex algebras in this article will be graded. We say that a vector $v\in V_m$ has {\em conformal weight} $m$ and write $\Delta_v = m$. If $v\in V_m $ we denote $v_k = v_{(k + m - 1)}$; this is the so-called conformal weight notation for operators. One has $$v_k V_m \subset V_{m -k}.$$ A {\em morphism} of vertex algebras is a map $f: V \to W$ that preserves the vacuum and satisfies $f(v_{(n)}v') = f(v)_{(n)}f(v')$. 
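For later use, let us note a standard consequence of the Borcherds identity: setting $n=0$, only the $j=0$ term survives on the right-hand side, and one obtains the commutator formula
$$
[a_{(m)}, b_{(k)}] = \sum\limits_{j\geq 0} {m \choose j} (a_{(j)} b)_{(m+k-j)}.
$$
In particular, if $a_{(j)}b = 0$ for all $j\geq 0$, then all modes of $a(z)$ commute with all modes of $b(z)$.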
A {\em module} over a vertex algebra $V$ is a vector space $M$ together with a map \begin{equation} \label{def-vert-mod-1} Y^M: V \to \Fields(M),\; a \mapsto Y^M(a,z) = \sum_{n\in \ZZ} a^M_{(n)}z^{-n-1}, \end{equation} that satisfies the following axioms: \begin{enumerate} \item $\vacB^M (z) = \id_M $ \item (Borcherds identity) \begin{align}\label{def-vert-mod-2} &\sum\limits_{j\geq 0} {m \choose j} (a^{}_{(n+j)} b )^M_{(m+k-j)}\\ = &\sum\limits_{j\geq 0} (-1)^{j} {n \choose j}\{ a^M_{(m+n-j)} b^M_{(k+j)} - (-1)^{n}b^M_{(n+k-j)} a^M_{(m+j)} \}\nonumber \end{align} \end{enumerate} \smallskip A module $M$ over a graded vertex algebra $V$ is called {\em graded} if $M = \oplus_{n\geq 0} M_n$ with $v_{k}M_l \subset M_{l-k}$ (assuming $M_{n} = 0$ for negative $n$). A {\em morphism of modules} over a vertex algebra $V$ is a map $f: M \to N$ that satisfies $f(v^M_{(n)}m) = v^N_{(n)}f(m)$ for $v\in V$, $m\in M$. The morphism $f$ is {\em homogeneous} if $f(M_k)\subset N_k$ for all $k$. \smallskip \subsubsection{Commutative vertex algebras.} \label{Commutative_vertex_algebras} A vertex algebra is said to be {\em commutative} if $a_{(n)} b =0$ for $a$, $b$ in $V$ and $n\geq 0$. The structure of a commutative vertex algebra is equivalent to that of a commutative associative algebra with a derivation. If $W$ is a vector space we denote by $H_W$ the algebra of differential polynomials on $W$. As an associative algebra it is a polynomial algebra in variables $x_i$, $\de x_i$, $\de^{(2)}x_i$, $\dots$ where $\set{x_i}$ is a basis of $W^*$. A commutative vertex algebra structure on $H_W$ is uniquely determined by attaching the field $x(z) = e^{z\de}x$ to $x\in W^*$. $H_W$ is equipped with a grading such that \begin{equation} \label{weights-0-1-diff-poly} (H_W)_0=\pole,\; (H_W)_1=W^*. 
\end{equation} \subsubsection{Beta--gamma system.} \label{beta-gamma-system} Define the Heisenberg Lie algebra to be the algebra with generators $a^i_n$, $b^i_n$, $1\leq i\leq N$ and $K$ that satisfy $[a^i_m, b^j_n] = \delta_{m, -n} \delta_{i,j} K$, $[a^i_n, a^j_m] = 0$, $[b^i_n, b^j_m] = 0$. Its Fock representation $M$ is defined to be the module induced from the one-dimensional representation $\Cplx_1$ of its subalgebra spanned by $a^i_n$, $n\geq 0$, $b^i_m$, $m>0$ and $K$ with $K$ acting as the identity and all the other generators acting as zero. The {\em beta-gamma system} has $M$ as an underlying vector space, the vertex algebra structure being determined by assigning the fields $$a^i(z) = \sum a^i_n z^{-n-1}, \ \ b^i(z) = \sum b^i_n z^{-n}$$ to $a^i_{-1}1$ and $b^i_01$ resp., where $1\in \Cplx_1$. This vertex algebra is given a grading so that the degree of the operators $a^i_n$ and $b^i_n$ is $-n$. In particular, \begin{equation} \label{weight-0-1-b-g} M_0=\pole[b_0^1,...,b_0^N],\; M_1=\bigoplus_{j=1}^{N}(b^j_{-1}M_0\oplus a^j_{-1}M_0). \end{equation} \subsection{Vertex algebroids} \label{Vertexalgebroids} \subsubsection{Definition} \label{def-of-vert-alg} Let $V$ be a vertex algebra. Define a 1-truncated vertex algebra to be a sextuple $(V_0\oplus V_1, \vacB, \de, \opm, \zero, \one )$ where the operations $\opm, \zero, \one$ satisfy all the axioms of a vertex algebra that make sense upon restricting to the subspace $V_0 + V_1$. (The precise definition can be found in \cite{GMS}). The category of 1-truncated vertex algebras will be denoted $\cV ert_{\le 1}$. The definition of a vertex algebroid is a reformulation of that of a sheaf of 1-truncated vertex algebras. Let $(X, \cO_X)$ be a space with a sheaf of $\Cplx$-algebras. 
A {\em vertex $\cO_X$-algebroid} is a sheaf $\cA$ of $\Cplx$-vector spaces equipped with $\Cplx$-linear maps $\pi: \cA \to \cT_X$ and $\de: \cO_X \to \cA$ satisfying $\pi \circ \de = 0$ and with operations $ \opm : \cO_X \times \cA \To \cA $, $ \zero : \, \cA \times \cA \To \cA $, $ \one : \, \cA \times \cA \To \cO_X $ satisfying the axioms: \begin{eqnarray} \label{v-assoc} f\opm(g\opm v) - (fg)\opm v & = & \pi(v)(f)\opm \partial(g) + \pi(v)(g)\opm \partial(f) \\ \label{zero-minusone} x \zero ( f\opm y ) & = & \pi(x)(f)\opm y + f\opm ( x \zero y ) \\ \label{symm-zero} x \zero y + y \zero x & = & \partial ( x \one y ) \\ \pi(f\opm v) & = & f\pi(v) \\ \label{opm-one} ( f\opm x) \one y & = & f ( x\one y) - \pi(x)(\pi(y)(f)) \\ \label{zero-one} \pi(v)( x \one y ) & = & ( v \zero x) \one y + x \one (v \zero y) \\ \label{d-derivation} \partial(fg) & = & f\opm \partial(g) + g\opm \partial(f) \\ v \zero \partial(f) & = & \partial(\pi(v)(f)) \\ \label{va-1d} v \one \partial(f) & = & \pi(v)(f) \end{eqnarray} for $v,x,y\in\cA$, $f,g\in\cO_X$. The map $\pi$ is called the {\em anchor} of $\cA$. If $\cV = \bigoplus_{n \ge 0} \cV_n$ is a (graded) sheaf of vertex algebras with $\cV_0 = \cO_X$, then $\cA = \cV_1$ is a vertex algebroid with $\de$ equal to the translation operator and $\pi$ sending $x\in \cV_1$ to the derivation $f \mapsto x \zero f$. \subsubsection{Associated Lie algebroid} Recall that a {\em Lie algebroid} is a sheaf of $\cO_X$-modules $\cL$ equipped with a Lie algebra bracket $[, ]$ and a morphism $ \pi: \cL \to \cT_X $ of Lie algebras and $\cO_X$-modules, called the {\em anchor}, that satisfies $ [x, ay] = a[x,y] + \pi(x) (a) y $, $x,y \in \cL$, $a\in \cO_X$. \smallskip If $\cA$ is a vertex algebroid, then the operation $\zero$ descends to that on $\cL_{\cA}= \cA / \cO_X \opm \de \cO_X$ and makes it into a Lie algebroid, with the anchor induced by that of $\cA$. $\cL_{\cA}$ is called {\em the associated Lie algebroid of} $\cA$. 
\subsubsection{} A vertex (resp., Lie) algebroid is {\em transitive} if its anchor map $\pi$ is surjective. Being a derivation, see (\ref{d-derivation}), $\de: \cO_X \to \cA$ lifts to $\Omega^1_X \to \cA$. It follows from (\ref{va-1d}) that if $\cA$ is transitive, then $\Omega^1_X \iso \cO_X \opm \de \cO_X$ and $\cA$ fits into an exact sequence $$ 0 \To \Omega^1_X \To \cA \To \cL \To 0, $$ $\cL = \cL_{\cA}$ being an extension $$ 0 \To \fg(\cL) \To \cL \To \cT_X \To 0 $$ where $\fg(\cL) := \ker ( \cL \stackrel{\pi}{\To} \cT_X )$ is an $\cO_X$-Lie algebra. \smallskip Note that the pairing $\one$ on $\cA$ induces a symmetric $\cL_{\cA}$-invariant $\cO_X$-bilinear pairing on $\fg( \cL_{\cA})$ which will be denoted by $\pair{ , }$. We regard the pair $(\cL_{\cA}, \pair{, } )$ as ``classical data'' underlying the vertex algebroid $\cA$. \subsubsection{Truncation and vertex enveloping algebra functors} There is an obvious truncation functor $$t: \cV ert \to \cV ert_{\leq 1} $$ that assigns to every vertex algebra a 1-truncated vertex algebra. This functor admits a left adjoint \cite{GMS} $$ u: \cV ert_{\leq 1} \to \cV ert $$ called a {\em vertex enveloping algebra functor}. These functors have evident sheaf versions. In particular, one has the functor \begin{equation} U: \cV ert\calA lg \To Sh \cV ert \end{equation} from the category of vertex algebroids to the category of sheaves of vertex algebras. \bigskip \subsection{Courant algebroids} We give a definition of a Courant algebroid following \cite{Bre}; see also \cite{LWX}. \label{Cour-alg-section} A {\em Leibniz algebra} over $k$ is a $k$-vector space $A$ with a bracket $[, ] :A \otimes_k A \to A$ satisfying $$ [x, [y,z]] = [[x,y],z] + [y, [x,z]]. $$ The bracket is not assumed to be skew-commutative. 
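A standard example of a Leibniz bracket which is not skew-commutative, and which will reappear below as the prototypical (exact) Courant algebroid, is the Dorfman bracket on $\cT_X \oplus \Omega^1_X$:
$$
[X + \xi, Y + \eta] = [X, Y] + L_X \eta - \iota_Y d\xi, \qquad \pair{X + \xi, Y + \eta} = \iota_X \eta + \iota_Y \xi,
$$
where $L_X$ denotes the Lie derivative, the anchor is the projection onto $\cT_X$ and $\de = d$. The Cartan formula $L_X = d\iota_X + \iota_X d$ gives $[q_1, q_2] + [q_2, q_1] = d\pair{q_1, q_2}$, so the failure of skew-commutativity is measured exactly by the pairing.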
\smallskip A {\em Courant $\cO_X$-algebroid} is an $\cO_X$-module $\cQ$ equipped with \begin{enumerate} \item a structure of a Leibniz $\mathbb{C}$-algebra $ [\ ,\ ] : \cQ\otimes_\mathbb{C}\cQ \to \cQ \ , $ \item an $\cO_X$-linear map of Leibniz algebras (the anchor map) $ \pi : \cQ \to \cT_X \ , $ \item a symmetric $\cO_X$-bilinear pairing $ \pair{, } : \cQ\otimes_{\cO_X}\cQ \to \cO_X \ , $ \item a derivation $ \partial : \cO_X \to \cQ $ \end{enumerate} which satisfy \begin{align} \pi\circ\partial & = 0 \label{complex} \\ \left[q_1,fq_2\right] & = f[q_1,q_2] + \pi(q_1)(f)q_2 \\ \langle [q,q_1],q_2\rangle + \langle q_1,[q,q_2]\rangle & = \pi(q)(\langle q_1, q_2\rangle) \\ \left[q,\partial(f)\right] & = \partial(\pi(q)(f)) \\ \langle q,\partial(f)\rangle & = \pi(q)(f) \label{ip-o}\\ \left[q_1,q_2\right] + [q_2,q_1] & = \partial(\langle q_1, q_2\rangle) \label{ip-symm} \end{align} for $f\in\cO_X$ and $q,q_1,q_2\in\cQ$. A morphism of Courant $\cO_X$-algebroids is an $\cO_X$-linear map of Leibniz algebras which commutes with the respective anchor maps and derivations and preserves the respective pairings. A {\em connection} on a Courant algebroid $\cQ$ is an $\cO_X$-linear section $\babla$ of the anchor map such that $ \pair{\babla (\xi), \babla (\eta) } = 0 $. If $\cQ$ is a Courant algebroid, then $\cL_{\cQ}= \cQ / \cO_X\de \cO_X$ is a Lie algebroid; it is called {\em the associated Lie algebroid of} $\cQ$. The pairing $\pair{ , }$ on $\cQ$ induces an $\cL_{\cQ}$-invariant pairing on $\fg(\cL_{\cQ})$ which will be denoted $\pair{ , }$. \bigskip \subsection{The category of vertex extensions} \label{Extension-defin Subsection} Let $\cL$ be a transitive Lie algebroid. A {\em vertex extension} of $\cL$ is a vertex algebroid $\cA$ with an isomorphism of Lie algebroids $\phi : \, \cL_\cA \to \cL$. In what follows we will always identify $\cL_\cA$ and $\cL$ via $\phi$. 
A morphism of vertex extensions of $\cL$ is a morphism of vertex algebroids $f: \cA \to \cA'$ which induces the identity map on $\cL$. Thus $f$ fits into a diagram $$ \begin{CD} 0 @>>> \Omega^1_X @>>> \cA @>>> \cL @>>> 0 \\ @. @| @VV f V @| @. \\ 0 @>>> \Omega^1_X @>>> \cA' @>>> \cL @>>> 0 \end{CD} $$ Vertex extensions of $\cL$ on $X$ form a category $\cV \cE xt_{\cL} (X)$; clearly, it is a groupoid. One can consider the category of vertex extensions of $\cL |_U$ on $U$ for any open subset $U \subset X$. These categories with the obvious restriction functors form a stack on the Zariski topology of $X$, to be denoted $\cV \cE xt_\cL$. \smallskip Let $\cA$ be a vertex extension of $\cL$. Denote $\tfg_\cA := \ker (\pi: \cA \to \cT_X)$; it is an extension $$ \begin{CD} 0 @>>> \Omega^1_X @>>> \tfg_\cA @>>> \fg @>>> 0 \end{CD} $$ It is easy to see that the operation $\one$ satisfies $ \tfg_\cA \one \Omega^1 = 0 $, and, therefore, induces a (symmetric, $\cO_X$-bilinear) pairing $$ \pair{, }: \, \fg \times \fg \to \cO_X $$ If $f: \cA \to \cA'$ is a morphism of extensions, $f$ induces the identity map on $\fg$; it also preserves $\one$. Therefore $\cA$ and $\cA'$ must have the same pairing $\pair{,}$ on $\fg$. It follows that the groupoid $ \cV \cE xt_{\cL}(X) $ is a disjoint union $$ \cV\cE xt_{\cL} (X) = \coprod\limits_{\pair{,}} \cV \cE xt_{\cL}^{\pair{,}}(X) $$ where $ \cV \cE xt_{\cL}^{\pair{,}}(X) $ is the full subcategory of vertex extensions of $\cL$ whose induced pairing on $\fg$ is $\pair{,}$. Such extensions will be called {\em vertex extensions of $(\cL, \pair{,})$}. \smallskip Similarly, we define the notion of a {\em Courant extension} of $\cL$ on $X$ and that of a morphism of Courant extensions, the categories $\CExt(U)$, $U\subset X$, and the stack $\CExt$. \smallskip \subsection{Chiral differential operators} \label{Chiral_differential_operators} Vertex extensions of $\cT_X$ are called {\em exact vertex algebroids}. 
Their vertex enveloping algebras, sheaves of {\em chiral differential operators} (CDO), were first introduced in \cite{MSV} and classified in \cite{GMS}. Let us recall the main classification result. \smallskip Let us call a smooth affine variety $U= {\rm Spec} A $ {\em suitable for chiralization} if $Der(A)$ is a free $A$-module admitting an abelian frame $\{\tau_1,...,\tau_n\}$. In this case there is a CDO over $U$, which is uniquely determined by the condition that $(\tau_i)_{(1)}(\tau_j)=(\tau_i)_{(0)}(\tau_j)=0$. Denote this CDO by $D^{ch}_{U,\tau}$. \begin{thm} \label{class-cdo-local} Let $U={\rm Spec} A$ be suitable for chiralization with a fixed abelian frame $\{\tau_i\}\subset Der A$. (i) For each closed 3-form $\alpha\in\Omega^{3,cl}_A$ there is a CDO over $U$ that is uniquely determined by the conditions \[ (\tau_i)_{(1)}\tau_{j}=0,\;(\tau_i)_{(0)}\tau_{j}=\iota_{\tau_i}\iota_{\tau_j}\alpha. \] Denote this CDO by $\cD_{U,\tau}(\alpha)$. (ii) Each CDO over $U$ is isomorphic to $\cD_{U,\tau}(\alpha)$ for some $\alpha$. (iii) $\cD_{U,\tau}(\alpha_1)$ and $\cD_{U,\tau}(\alpha_2)$ are isomorphic if and only if there is $\beta\in\Omega^{2}_A$ such that $d\beta=\alpha_1-\alpha_2$. In this case the isomorphism is determined by the assignment $\tau_i\mapsto\tau_i+\iota_{\tau_i}\beta$. \end{thm} If $A=\pole[x_1,...,x_n]$, one can choose $\partial/\partial x_j$, $j=1,...,n$, for an abelian frame and check that the beta-gamma system $M$ of sect. \ref{beta-gamma-system} is the unique, up to isomorphism, CDO over $\pole^n$. A passage from $M$ to Theorem \ref{class-cdo-local} is accomplished by the identifications $b^j_01=x_j$, $a^j_{-1}1=\partial/\partial x_j$. \bigskip \section{Classification of transitive vertex algebroids} \label{Classif_vertex_algebroids-Section} In this section we present a classification of transitive vertex algebroids in the spirit of \cite{Bre}. 
First we recall the definition of a $\Gr(\Omega_X^{[2,3>})$-gerbe given in \cite{GMS}; one of the results of \cite{Bre} is that $\VExt$ is a $\Gr(\Omega_X^{[2,3>})$-gerbe. In section \ref{linear-algebra-section} we describe the core tool of the classification method: the ``addition'' operation on various algebroids. It enables us to construct a vertex extension starting from a Courant extension and an exact vertex algebroid. With this tool in hand we are able to compute the class of the stack $\VExt$ (Theorem \ref{obstrcution-VExt-thm}). \subsection{Gerbes and torsors} \label{gerbes section} \subsubsection{Twisting by a 3-form} \label{twisting3-subsection} Let $\cA = (\cA, \opm, \zero, \one, \de, \pi)$ be a vertex extension of $\cL$ on $U\subset X$ and let $\alpha \in \Omega^{3,cl}(U)$. Define an operation $ {}_{(0) + \alpha} : \cA \times \cA \to \cA $ by \begin{equation} \label{twisting-by-3-form} x \, {}_{(0) + \alpha} \, y = x \zero y + \iota_{\pi(x)} \iota_{\pi(y)} \alpha \end{equation} \begin{lem} Let $\alpha \in \Omega^{3, cl}(U)$. Then: (1)\, $ \cA \dplus \alpha := (\cA, \opm, {}_{(0) + \alpha}, \one, \de, \pi ) $ is a vertex extension of $\cL$ on $U$. (2)\, The assignment $\cA \mapsto \cA \dplus \alpha$ can be extended to an auto-equivalence \begin{equation} \label{functor-dplus-alpha} ? \dplus \alpha : \, \VExt \to \VExt \end{equation} \end{lem} {\em Proof.} The proof of (1) is the same as in the case of CDO (\cite{MSV, GMS}) or Courant algebroids (\cite{Bre}). To see (2), note that every morphism $f: \cA \to \cA'$ is automatically a morphism $\cA \dplus \alpha \to \cA' \dplus \alpha$; this tautological action on morphisms makes $?\dplus \alpha$ a functor; the composition $(? \dplus (-\alpha)) \circ (? \dplus \alpha)$ is the identity functor of $ \VExt$. $\qed$ \smallskip It is clear that the functors $? \dplus \alpha$, $\alpha \in \Omega^{3,cl}(U)$ define an action of the abelian group $\Omega^{3,cl}(U)$ on the category $\cV \cE xt_{\cL}(U)$. 
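Let us record the routine reason why twisting interacts well with the pairing: since contractions anticommute, $\iota_{\pi(x)} \iota_{\pi(y)} \alpha = - \iota_{\pi(y)} \iota_{\pi(x)} \alpha$, and the correction terms in (\ref{twisting-by-3-form}) cancel upon symmetrization:
$$
x \, {}_{(0) + \alpha} \, y + y \, {}_{(0) + \alpha} \, x = x \zero y + y \zero x = \de ( x \one y ),
$$
so that axiom (\ref{symm-zero}) holds in $\cA \dplus \alpha$ with $\opm$, $\one$ and $\de$ unchanged.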
Let us show that this action in fact extends to an action of a category. \smallskip For an open subset $U\subset X$ define a category $ \Gr (\Omega^{[2,3>})(U) $ as follows. The objects of $ \Gr (\Omega^{[2,3>})(U) $ are elements $\alpha \in \Omega^{3, cl}(U)$; the morphisms $$ Hom (\alpha, \alpha') = \set{ \beta \in \Omega^2 (U) : \ d\beta= \alpha' - \alpha}, $$ the composition being the addition in $\Omega^{2}(U)$. It is clear that $ \Gr (\Omega^{[2,3>})(U) $ is a groupoid. The groupoids $ \Gr (\Omega^{[2,3>})(U) $ form a prestack $ \Gr (\Omega_X^{[2,3>}) $; the addition of 3-forms gives it the structure of a {\em Picard prestack}. See \cite{D}, section 1.4 for generalities on Picard stacks. \smallskip For $f : \cA \to \cA'$ and $\beta : \alpha \to \alpha'$ define \begin{equation} \label{f-dplus-beta-defn} (f \dplus \beta)(x) = f(x) + \iota_{\pi(x)} \beta \end{equation} \begin{prop} (1)\, $f \dplus \beta$ is a morphism of vertex extensions $$ f \dplus \beta : \cA \dplus \alpha \to \cA' \dplus \alpha' $$ (2) \, The formulas (\ref{functor-dplus-alpha}) and (\ref{f-dplus-beta-defn}) define a functor \begin{equation} \nonumber \dplus : \ \cV \cE xt_{\cL}(U) \times \Gr (\Omega^{[2,3>})(U) \To \cV \cE xt_{\cL}(U) \end{equation} which gives rise to an action of $\Gr (\Omega^{[2,3>})(U) $ on $\cV \cE xt_{\cL}(U)$. \end{prop} The verification is, again, straightforward and repeats the analogous discussion in \cite{GMS}. $\qed$ \bigskip \subsubsection{$(\Omega^2 \to \Omega^{3,cl})$-gerbes} \label{gerbe-definition section} We will say a stack $\mathcal{S}$ over $X$ is a {\em $\Gr (\Omega_X^{[2,3>})$-gerbe } if there is an action $ \dplus: \, \mathcal{S} \times \Gr (\Omega_X^{[2,3>}) \to \mathcal{S} $ and a cover $\fU = \set{U_i}_{i\in \mathcal{I}}$ such that for any $i\in \mathcal{I}$ and $x\in \mathcal{S}(U_i)$ the functor $x \dplus ?: \, \Gr (\Omega_X^{[2,3>})(U_i) \to \mathcal{S}(U_i)$ is an equivalence. (In other words, $\mathcal{S}$ is a torsor over the associated stack). 
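One compatibility implicit in the proposition above is that $\dplus$ respects composition: if $f: \cA \to \cA'$, $f': \cA' \to \cA''$ and $\beta: \alpha \to \alpha'$, $\beta': \alpha' \to \alpha''$, then, since $\Omega^1_X \subset \ker \pi$ and $f'$ restricts to the identity on $\Omega^1_X$,
$$
(f' \dplus \beta') \circ (f \dplus \beta) (x) = f'(f(x)) + \iota_{\pi(x)}\beta + \iota_{\pi(x)}\beta' = \left( (f' \circ f) \dplus (\beta + \beta') \right)(x),
$$
which is precisely the composition law in $\Gr (\Omega^{[2,3>})(U)$.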
\begin{thm} \cite{Bre} \label{Thm - VExt and CExt are gerbes} The stacks $\VExt$ and $\CExt$, when locally nonempty, are $\Gr (\Omega^{[2,3>})$-gerbes. \end{thm} \smallskip \begin{rem} The categories $ \Gr (\Omega^{[2,3>})(U) $, $U\subset X$ form a Picard prestack ({\it cf.} \cite{D}, section 1.4.11) whose associated stack is the stack of {\em $(\Omega^2 \to \Omega^{3,cl})$-torsors}. What Bressler shows in \cite{Bre} is that this stack is equivalent to the stack $\cE \cC \cA_X$ of {\em exact Courant algebroids}, and that the stacks $\VExt$, $\CExt$ are, in fact, $\cE\cC\cA_X$-torsors. \end{rem} \smallskip Observe that $\VExt$ being a $\Gr (\Omega^{[2,3>})$-gerbe means that for small enough $U\subset X$ and $\cA \in \VExt(U)$ one has an equivalence $$ \cA \dplus ? : \, \Gr (\Omega^{[2,3>})(U) \to \VExt(U) $$ In particular, there is an isomorphism $$ \Hom (\alpha, \alpha') \iso \Hom (\cA \dplus \alpha, \cA \dplus \alpha' ) $$ Under this isomorphism, an element $\beta \in \Omega^{2}$ with $d\beta = \alpha' - \alpha$ is mapped to the morphism (cf. (\ref{f-dplus-beta-defn})) \begin{equation} \label{definition-of-exp} \exp (\beta):= \id \dplus \beta : \ x \mapsto x + \iota_{\pi(x)} \beta \end{equation} The same is true for Courant algebroids and we will use the notation $\exp(\beta)$ in both cases. \bigskip \subsubsection{The class of a gerbe} Let $\cS$ be a $\Gr (\Omega^{[2,3>})$-gerbe and $\fU$ a cover as in \ref{gerbe-definition section}. Let us choose an object $x_i \in \cS(U_i)$ for each $i$. For each pair $i,j$ we have objects $x_i |_{U_{ij}}$ and $x_j |_{U_{ij}}$, and therefore, an isomorphism \begin{equation} \eta_{ij} : x_i |_{U_{ij}} \to x_j |_{U_{ij}} \dplus \alpha_{ij} \end{equation} for some $\alpha_{ij} \in \Omega^{3,cl}$. The collection $(x_i, \eta_{ij}, \alpha_{ij})$ is called a {\em trivialization} of $\mathcal{S}$. 
We will denote by the same letter $\eta_{ij}$ all of its translates $$ \eta_{ij} \dplus \id_\gamma : x_i |_{U_{ij}} \dplus \gamma \to x_j |_{U_{ij}} \dplus (\alpha_{ij} + \gamma) $$ for $\gamma \in \Omega^{3,cl}(U_{i})$. For each triple $i, j, k$ consider the composition (over $U_{ijk} = U_i \cap U_j \cap U_k$) $$ \begin{CD} \eta_{jk} \eta_{ij} \eta_{ik}^{-1} : \ x_k @> \eta_{ik}^{-1} >> x_i \dplus (-\alpha_{ik}) @> \eta_{ij} >> x_j \dplus (\alpha_{ij} - \alpha_{ik}) @> \eta_{jk} >> x_k \dplus (\alpha_{ij} + \alpha_{jk} - \alpha_{ik}) \end{CD} $$ and denote by $\beta_{ijk}$ the element of $\Omega^2(U_{ijk})$ such that \begin{equation} \eta_{jk} \eta_{ij} \eta_{ik}^{-1} = \exp(\beta_{ijk}) \end{equation} One checks that \begin{equation} d_{\check{C}}\beta_{ijk} = 0, \ \ d_{DR} (\beta_{ijk}) = d_{\check{C}} (\alpha_{ij}), \ \ d_{DR} (\alpha_{ij}) = 0 \end{equation} so that the pair $ (\alpha_{ij}, \beta_{ijk}) $ is an element of $\check{Z}^2 (\fU, \Omega^{2} \to \Omega^{3, cl})$. By definition, the {\em class} of $\mathcal{S}$, $cl(\mathcal{S})$, is the class of $ (\alpha_{ij}, \beta_{ijk}) $ in $H^2(X, \Omega^{2} \to \Omega^{3, cl})$. One has the following classical result (cf., e.g., \cite{GMS} for a proof). \begin{prop} $\mathcal{S}(X)$ is nonempty if and only if $cl(\mathcal{S}) =0$.$\qed$ \end{prop} \smallskip \subsection{The stack $\CExt$} \label{facts about CExt - section} As an example, and for future use, we recall the construction of a trivialization of the stack $\CExt$ given in \cite{Bre}. \smallskip Let us choose a cover $\fU = \set{U_i}$ such that $\cT_{U_i}$ is free, choose connections ($\cO_X$-linear sections of the anchor map) $$ \nabla_i : \cT_{U_i} \to \cL_{U_i} $$ and identify $\cL_{U_i} \iso \cT_{U_i} \oplus \fg_{U_i}$ via $\nabla_i$. Define $ c_i = c(\nabla_i) \in \Omega^{2, cl}_{U_i} \otimes_{\cO} \fg_{U_i} $ to be the {\em curvature} of the connection $\nabla_i$, i.e. 
$$ c_i (\xi, \eta) = [\nabla_i (\xi), \nabla_i(\eta)] - \nabla_i ([\xi, \eta]) $$ Recall the following \begin{thm} \cite{Bre} \label{Thm - local existence of CExt} Let $U \subset X$ be open and let $\nabla: \cT_U \to \cL_U$ be any connection. Then the category $\CExt{}(U)$ is nonempty if and only if the form $ \disp \half \pair{c(\nabla) \wedge c(\nabla) } $ is exact. \end{thm} \bigskip Assume that $$ \half \pair{c(\nabla_i) \wedge c(\nabla_i) } = dH_i $$ for some $H_i \in \Omega^3(U_i)$. Then one can construct a Courant extension $\cQ_{\nabla_i, H_i}$, which is equal to $ \cL_{U_i} \oplus \Omega^1_{U_i}$ as a sheaf of $\cO_{U_i}$-modules, and satisfies \begin{gather} \label{Q_NH xi eta} [\xi, \eta] = [\xi, \eta]_{\cL} + \iota_\xi \iota_\eta H_i, \ \ \ \xi, \eta \in \cT_{U_i}, \\ \label{Q_NH fg pair omega} <\fg, \Omega^1_{U_i}> = <\fg, \, \nabla_i (\cT_{U_i})> =0, \\ \label{Q_NH xi-braket-g} [\xi, g] = [\nabla_i(\xi), g]_{\cL} - \pair{\iota_{\xi }c(\nabla_i), g}. \end{gather} \smallskip For each $i, j$ define $$ A_{ij} = \nabla_i - \nabla_j \ \in \, \Omega^1_{U_{ij}} \otimes \fg^{}_{U_{ij}} $$ \begin{thm} \cite{Bre} There exists an isomorphism in $\CExt (U_{ij})$ \begin{equation} \theta_{ij}: \, \cQ_{\nabla_i, H_i} \stackrel{\sim}{\To} \cQ_{\nabla_j, H_j} \dplus \alpha_{ij} \end{equation} given by \begin{equation} \begin{split} \xi & \mapsto \xi + A_{ij}(\xi) - \half \pair{A_{ij}(\xi), A_{ij} } \\ g &\mapsto g - \pair{g, A_{ij} } \\ \omega& \mapsto \omega \end{split} \end{equation} where \begin{equation} \alpha_{ij} = \pair{c(\nabla_i) \wedge A_{ij}} - \half \pair{ [\nabla_i, A_{ij}], A_{ij}} + \frac{1}{6} \pair{ [A_{ij}, A_{ij}], A_{ij}} + H_i - H_j \end{equation} \end{thm} \smallskip The collection $(\cQ_{\nabla_i, H_i}, \theta_{ij}, \alpha_{ij})$ is a trivialization of the gerbe $\CExt$.
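\smallskip Note that, directly from the definition, the forms $A_{ij}$ satisfy the \v{C}ech cocycle condition on triple intersections: $$ A_{ij} + A_{jk} = (\nabla_i - \nabla_j) + (\nabla_j - \nabla_k) = A_{ik}, $$ which is the identity underlying the computation of the composition $\theta_{jk} \theta_{ij} \theta_{ik}^{-1}$ below.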
\bigskip On triple intersections $U_{ijk}= U_i \cap U_j \cap U_k$ the isomorphisms $\theta_{ij}$ satisfy (\cite{Bre}) \begin{equation} \theta_{jk} \theta_{ij} \theta_{ik}^{-1} = \exp( - <A_{ij} \wedge A_{jk}>) \ \ \ \ \qed \end{equation} \smallskip Define $\beta_{ijk} = - <A_{ij} \wedge A_{jk}>$. Then $ (\alpha_{ij}, \beta_{ijk}) $ is a cocycle in $\check{Z}^2(\fU, \Omega_X^2 \to \Omega_X^{3, cl})$. The corresponding cohomology class was identified in \cite{Bre} with minus one half of {\em the first Pontryagin class $p_1 (\cL, \pair{,})$ of $(\cL, \pair{,})$}. \begin{thm} \cite{Bre} \label{class-CExt-Theorem} This class is the class of the stack $\CExt$: $$ cl(\CExt) = - \half p_1 (\cL, \pair{,}). $$ \end{thm} \bigskip \subsection{Linear algebra} \label{linear-algebra-section} In this section we describe the main tool in the proof of the classification result: we define linear algebra-like operations on various algebroids. The main technical result to be proved in this section is as follows. \begin{thm} \label{lin-alg-main-thm} Let $U$ be suitable for chiralization. Then there exists a functor $$ \boxplus: \CExt(U) \times \CDO(U) \To \VExt(U) $$ $$ (\cQ, \cD) \mapsto \cQ \boxplus \cD $$ and a functor $$ \boxminus: \VExt(U) \times \CDO(U) \To \CExt(U) $$ $$ (\cA, \cD) \mapsto \cA \boxminus \cD $$ such that for a fixed $\cD \in \CDO(U)$ the functors $$ - \boxminus \cD : \ \VExt(U) \to \CExt(U) $$ and $$ - \boxplus \cD : \ \CExt(U) \to \VExt(U) $$ are mutually inverse equivalences of $\Gr (\Omega^{[2,3>}(U))$-torsors. \end{thm} \smallskip In fact, the functor $\boxminus$ was defined in \cite{Bre}, together with several versions of $\boxplus$ defined for various algebroids. Our $\boxplus$ is just an extension of Bressler's definition. \bigskip \subsubsection{Addition} \label{section-Addition} Let $\cQ$ be a Courant extension of $\cL$ and $\cD$ a cdo.
We describe how to define a vertex extension of $\cL$ which can be thought of as a ``sum'' of these two structures; the construction parallels that of the Baer sum of two extensions. First, consider the pullback $ \bA:= \cQ \times_{\cT} \cD $ so that a section of $\bA$ is a pair $(q,x)$, $q\in \cQ$, $x \in \cD$ with $\pi(q) = \pi(x)$. Define operations $ \opm: \cO_X \times \bA \to \bA $, $ \zero : \, \bA \times \bA \to \bA $, $ \one : \, \bA \times \bA \to \cO_X $, $\pi : \bA \to \cT$, and $\de: \cO_X \to \bA$ as follows: \begin{eqnarray} \label{boxplus-oper-1} a\opm (q,x) & := & (aq, a \opm x) \\ (q,x) \zero (q', x') & := & ( \, [ q, q']^{}_\cQ , x \zero x'), \\ (q,x) \one (q', x') & := & \pair{q, q'} + x \one x', \\ \pi( (q,x ) ) &:=& \pi(q) = \pi(x), \\ \label{boxplus-oper-5} \de a & = & (\de a, 0) \end{eqnarray} Note that $\bA$ contains two copies of $\Omega^1$, one from $\cQ$ and the other from $\cD$. Let us define $\cQ \boxplus \cD$ to be the pushout of $\bA$ with respect to the addition map $+: \Omega^1 \times \Omega^1 \to \Omega^1$ so that one has the following commutative diagram $$ \begin{CD} 0 @>>> \Omega^1 \oplus \Omega^1 @>>> \bA @>>> \cL @>>> 0 \\ @. @VV+V @VVV @| @. \\ 0 @>>> \Omega^1 @>>> \cQ \boxplus \cD @>>> \cL @>>> 0 \end{CD} $$ Alternatively, $\cQ \boxplus \cD$ fits into the diagram $$ \begin{CD} 0 @>>> \tfg \oplus \Omega^1 @>>> \bA @>\pi>> \cT_X @>>> 0 \\ @. @VV+V @VVV @| @. \\ 0 @>>> \tfg @>>> \cQ \boxplus \cD @>>> \cT_X @>>> 0 \end{CD} $$ where the rows are exact and the left square is a push-out square. \begin{thm} \label{boxplus-is-valgd-thm} The operations (\ref{boxplus-oper-1} - \ref{boxplus-oper-5}) make sense on $\cQ \boxplus \cD$ and give it the structure of a vertex algebroid. \end{thm} {\em Proof.} The verification is straightforward. As an example, let us show that (\ref{opm-one}) is satisfied.
For $f\in \cO_X$ and sections $(q,v), (q',v') \in \bA$, one has: \begin{gather} \nonumber (f \opm (q, v)) \one (q', v') = (fq, f\opm v) \one (q', v') = \pair{fq, q'} + (f \opm v) \one v' \\ \nonumber = f \pair{q, q'} + f (v \one v') - \pi(v) \pi(v') (f) = f ((q, v) \one (q', v')) - \pi((q,v)) \pi((q',v')) (f) \end{gather} $\qed$ \bigskip Note that the assignment $(\cQ, \cD) \mapsto \cQ \boxplus \cD$ is naturally a functor $$ \boxplus: \CExt(U) \times \CDO(U) \To \VExt(U) $$ Indeed, let $f \in \Hom_{\calC\calE xt} (\cQ, \cQ')$, $g \in \Hom_{\CDO} (\cD, \cD')$. In particular, $f$ and $g$ are maps over $\cT$, so $(f, g)$ takes $\cQ \times_\cT \cD \subset \cQ \times \cD$ to $\cQ' \times_\cT \cD'$. Since $f$ and $g$ act as the identity on the subsheaf $\Omega^1$, $(f,g)$ gives a well-defined map between the pushouts $\cQ \boxplus \cD \to \cQ' \boxplus \cD'$ that will be denoted $ f \boxplus g $. Finally, it remains to note that the composition is ``coordinate-wise'': \begin{equation} \label{componentwise-composition-for-dplus} (f\boxplus g) (f' \boxplus g') = f f' \boxplus g g' \end{equation} which implies that $(f,g) \mapsto f\boxplus g$ is a functor. \smallskip Let us note, among the elementary properties of this functor, the following: \begin{enumerate} \item for any $\alpha \in \Omega^{3, cl}_U$, $\cQ \in \CExt(U)$, $\cD \in \CDO(U)$ one has the equalities \begin{equation} \label{boxplus-dplus-compat-eqn} (\cQ \boxplus \cD) \dplus \alpha \cong \cQ \boxplus (\cD \dplus \alpha) \cong (\cQ \dplus \alpha) \boxplus \cD \end{equation} (by definition of $\dplus \alpha$ the three parts of the equation have underlying sheaf $\cQ \boxplus \cD$; one only has to check that the operations coincide).
\smallskip \item one has the equality \begin{equation} \nonumber \exp(\beta) \boxplus \id_{\cD} = \exp(\beta ) = \id_{\cQ} \boxplus \exp(\beta) \end{equation} in $\Hom_{\cV \calE xt}( \cQ \boxplus \cD, (\cQ \boxplus \cD) \dplus d\beta)$; more generally, \begin{equation} \label{exp-boxplus-exp} \exp(\beta') \boxplus \exp(\beta'') = \exp(\beta' + \beta'') \end{equation} \end{enumerate} \subsubsection{Subtraction} \label{section-Subtraction} Let $\cA$ be a vertex extension of $\cL$ and $\cD$ a cdo. In \cite{Bre} it is described how to define a Courant algebroid $\cA \boxminus \cD$. Let us recall this construction. Let $ \bQ:= \cA \times_{\cT} \cD $ so that a section of $\bQ$ is a pair $(v,x)$, $v\in \cA$, $x \in \cD$ with $\pi(v) = \pi(x)$. Define operations $ \cdot : \cO_X \times \bQ \to \bQ, $ $ [,] : \, \bQ \times \bQ \to \bQ, $ $ \pair{ , }: \bQ \times \bQ \to \cO_X, $ $ \pi: \bQ \to \cT, $ and $\de: \cO_X \to \bQ$ as follows: \begin{eqnarray} a \cdot (v,x) & := & (a \opm v, \, a \opm x ) \\ \left[ (v, x) , (v', x') \right] & := & ( v \zero v', \, x \zero x') \\ \pair{ (v,x), (v', x') } & := & v \one v' - x \one x' \\ \pi( (v,x ) ) &:=& \pi(v) = \pi(x) \\ \de a & = & (\de a, 0) \end{eqnarray} Define $\cA \boxminus \cD$ to be the pushout of $\bQ$ with respect to the subtraction map $-: \Omega^1 \times \Omega^1 \to \Omega^1$. One can show that all operations defined above make sense on $\cA \boxminus \cD$. One has \begin{thm} (\cite{Bre}, Lemma 5.6) The sheaf $\cA \boxminus \cD$ with the operations defined above is a Courant algebroid. \end{thm} \smallskip \subsubsection{Compatibility of $\boxplus$ and $\boxminus$} \begin{thm} \label{inverse-equivalences-thm} The functors $$ - \boxminus \cD : \ \VExt(U) \to \CExt(U) $$ and $$ - \boxplus \cD : \ \CExt(U) \to \VExt(U) $$ are mutually inverse equivalences of $\Gr (\Omega^{[2,3>}(U))$-torsors.
\end{thm} {\it Proof.} The compatibility of $\boxplus \cD$ and $\boxminus \cD$ with the $\Gr (\Omega^{[2,3>}(U))$-action follows from properties (\ref{boxplus-dplus-compat-eqn} - \ref{exp-boxplus-exp}) and their obvious analogs for $\boxminus$. Let us construct the natural isomorphisms $ \eta_{\cA}: \, \cA \iso (\cA \boxminus \cD) \boxplus \cD $ where $\cA$ is a vertex extension of $\cL$ and $\cD$ is a cdo. Define $$ \eta_{\cA}(v) = ((v, x), x) $$ where $x\in \cD$ is arbitrary. To show that $\eta_{\cA}$ is well defined, note that for any $x, y \in \cD$ with $\pi(x) = \pi(y) = \pi(v)$ we have $x-y \in \Omega^1$ and $$ ((v, x), x) = ((v, (x-y) + y ), x)= ((v, y) + (y-x), x) = ((v, y), x + (y-x)) = ((v, y), y) $$ To verify $\eta_{\cA}$ is a morphism we check $$ a \opm ((v,x),x) = (a\cdot(v,x), a \opm x) = ((a\opm v, a\opm x), a\opm x) = \eta_{\cA}(a \opm v) $$ $$ ((v,x),x) \zero ((v',x'), x') = ( [(v,x) , (v', x') ], x \zero x' ) = ( (v \zero v' , x \zero x' ) , x \zero x' ) = \eta_{\cA}(v \zero v') $$ $$ ((v,x),x) \one ((v',x'), x') =\pair{(v,x), (v', x')} + x \one x' = v \one v' - x \one x' + x \one x' = v \one v' $$ To see that $\eta_{\cA}$ is an isomorphism, one checks that the map $ \Psi: (\cA\boxminus \cD) \boxplus \cD \To \cA, $ $ ((v,x),y) \mapsto v + (y-x), $ is a well-defined inverse to $\eta_{\cA}$. (Note that every section $((v,x),y)$ of $(\cA \boxminus \cD) \boxplus \cD$ can be written as $ ((v,x),y) = ((v+(y-x),x+(y-x)),y) = (( v + (y-x), y), y) $ with $v + (y-x)$ independent of the choice of representative $((v,x), y)$). The construction of the natural isomorphisms $\eta'_\cQ: \, \cQ \to (\cQ \boxplus \cD) \boxminus \cD$ is analogous and left to the reader. $\qed$ \smallskip The constructions of sections \ref{section-Addition}, \ref{section-Subtraction} and Theorem \ref{inverse-equivalences-thm} furnish the proof of Theorem \ref{lin-alg-main-thm}.
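\smallskip For the reader's convenience, here is one possible way to carry out the construction left to the reader. For $q \in \cQ$ and an arbitrary $x \in \cD$ with $\pi(x) = \pi(q)$, set $$ \eta'_{\cQ}(q) = ((q, x), x). $$ This is well defined: for $x, y \in \cD$ with $\pi(x) = \pi(y) = \pi(q)$ one has, in complete analogy with the chain of equalities above (now using the addition pushout for the inner pair and the subtraction pushout for the outer one), $$ ((q, x), x) = ((q, (x-y) + y), x) = ((q, y) + (x-y), x) = ((q, y), x - (x-y)) = ((q, y), y), $$ and one checks that the map $((q,x),y) \mapsto q + (x-y)$ is a well-defined inverse to $\eta'_{\cQ}$.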
\bigskip \subsection{Classification} \subsubsection{Local existence} Let $U$ be suitable for chiralization and suppose $\nabla: \cT_U \to \cL_U$ is a connection. \begin{thm} \label{vert-cour-local-existence} The following are equivalent: \begin{enumerate} \item The category $\VExt{} (U)$ is nonempty. \item The category $\CExt{} (U)$ is nonempty. \item The Pontryagin form $ \disp \half \pair{c(\nabla) \wedge c(\nabla) } $ is exact. \end{enumerate} \end{thm} {\it Proof.} Since $U$ is suitable for chiralization, there exists a CDO $\cD$ on $U$. Then (1) and (2) are equivalent due to the addition / subtraction operations: given a vertex extension $\cA$ there exists a Courant extension $\cQ = \cA \boxminus \cD$ and, vice versa, given $\cQ$ one can produce a vertex extension $\cA = \cQ \boxplus \cD$. Finally, the equivalence of (2) and (3) is the content of Theorem \ref{Thm - local existence of CExt}. $\qed$ \bigskip \subsubsection{The obstruction} \label{obstruction-VExt-section} \begin{thm} \label{obstrcution-VExt-thm} Suppose $\VExt$ is locally nonempty. Then its class is equal to $$ cl(\VExt) = -\half p_1(\cL, \pair{,}) + ch_2 (\Omega^1_X), $$ where $p_1(\cL, \pair{,}) $ is the first Pontryagin class of the Lie algebroid $\cL$ with pairing $\pair{,}$. \end{thm} \bigskip {\it Proof.} What we will be proving is the following: $$ cl(\VExt) = cl(\CExt) + cl( \CDO_X) $$ This is indeed sufficient, in view of Theorem \ref{class-CExt-Theorem} and the fact that $cl( \CDO_X) = ch_2 (\Omega^1_X)$ \cite{GMS, Bre}. Let $\fU$ be a cover of $X$ by open subsets $U$ suitable for chiralization. Since $\VExt$ is locally nonempty, so is $\CExt$. Suppose we are given a trivialization of the gerbe $\CDO_X$ and that of $\CExt$.
In other words, we are given a CDO $\cD_i$ and a Courant extension $\cQ_i$ on each $U_i$, as well as isomorphisms $$ \eta_{ij} :\ \cD_i |_{U_{ij}} \To \cD_j |_{U_{ij}} \dplus \alpha^{ch}_{ij} $$ and $$ \theta_{ij}: \cQ_i |_{U_{ij}} \To \cQ_j |_{U_{ij}} \dplus \alpha^{\cQ}_{ij} $$ where $ \alpha^{\cQ}_{ij}, \alpha^{ch}_{ij} \in \Omega^{3, cl}(U_{ij}) $, such that on triple intersections $U_{ijk} = U_i \cap U_j \cap U_k$ one has $$ \alpha^{ch}_{ij} + \alpha^{ch}_{jk} = \alpha^{ch}_{ik}, \ \ \ \ \alpha^{\cQ}_{ij} + \alpha^{\cQ}_{jk} = \alpha^{\cQ}_{ik}, $$ and \begin{equation} \label{isos-on-triple} \eta_{jk} \eta_{ij}\eta_{ik}^{-1} = \exp(\beta^{ch}_{ijk}), \ \ \ \ \ \theta_{jk} \theta_{ij} \theta_{ik}^{-1} = \exp(\beta^{\cQ}_{ijk}), \end{equation} for some $ \beta^{ch}_{ijk} , \beta^{\cQ}_{ijk} \in \Omega^2 (U_{ijk}) $. Then $(\alpha^{ch}_{ij}, \beta^{ch}_{ijk} )$ and $(\alpha^{\cQ}_{ij}, \beta^{\cQ}_{ijk} )$ are cocycles representing the classes of the gerbes $\CDO_X$ and $\CExt$ respectively. \smallskip Now let us construct a trivialization of the gerbe $\VExt$. Define $$ \cA_i = \cQ_i \boxplus \cD_i \in \VExt(U_i). $$ One has the following isomorphisms: $$ \begin{CD} \cQ_i |_{U_{ij}} \boxplus \cD_i |_{U_{ij}} @> \theta_{ij} \boxplus \eta_{ij} >> (\cQ_j \dplus \alpha^{\cQ}_{ij}) |_{U_{ij}} \boxplus (\cD_j \dplus \alpha^{ch}_{ij}) | _{U_{ij}} @= \cQ_j |_{U_{ij}} \boxplus \cD_j |_{U_{ij}} \dplus ( \alpha^{\cQ}_{ij} + \alpha^{ch}_{ij}) \end{CD} $$ the latter being the identity on the level of vector spaces, by definition of $?\dplus \alpha$ (cf. sect. \ref{twisting3-subsection}). Thus $$ \theta_{ij} \boxplus \eta_{ij} : \cA_i \stackrel{\sim}{\To} \cA_j \dplus ( \alpha^{\cQ}_{ij} + \alpha^{ch}_{ij}). $$ The collection $(\cA_i, ( \alpha^{\cQ}_{ij} + \alpha^{ch}_{ij}), \theta_{ij} \boxplus \eta_{ij} )$ is a trivialization of the gerbe $\VExt$. Let us compute its class. On triple intersections $U_{ijk} = U_i \cap U_j \cap U_k$ we have (cf.
(\ref{isos-on-triple}), (\ref{componentwise-composition-for-dplus}), (\ref{exp-boxplus-exp}) ) \begin{equation} \begin{split} ( \theta_{jk} \boxplus \eta_{jk} ) ( \theta_{ij} \boxplus \eta_{ij} ) ( \theta_{ik} \boxplus \eta_{ik} )^{-1} & = \theta_{jk}\theta_{ij} \theta_{ik}^{-1} \boxplus \eta_{jk} \eta_{ij} \eta_{ik}^{-1} \\ & = \exp(\beta^{\cQ}_{ijk}) \boxplus \exp(\beta^{ch}_{ijk}) \\ &= \exp(\beta^{\cQ}_{ijk} + \beta^{ch}_{ijk}) \end{split} \end{equation} (here, again, we slightly abuse the notation by writing $\theta_{ij}$ for any of its translates under the action of $\Gr(\Omega^{[2,3>})$). It follows that $ ( \alpha^{\cQ}_{ij} + \alpha^{ch}_{ij} , \beta^{\cQ}_{ijk} + \beta^{ch}_{ijk} )$ is a cocycle representing the class of the gerbe $\VExt$. $\qed$ \bigskip \section{Deformation of twisted CDO} \subsection{Twisted chiral differential operators} In this section we recall the definition of the sheaf $\twcdo$ of twisted chiral differential operators (TCDO) corresponding to a given CDO $\cD^{ch}$ on a smooth projective variety $X$. \subsubsection{The universal Lie algebroid $\cT^{tw}$} \label{universal-Lie-algd} The Lie algebroid underlying TCDO is a ``family of all TDOs''. More precisely, the universal enveloping algebra $\cD_X^{tw}$ of $\cT^{tw}$ possesses the following property: for every $\la\in H^1(X,\Om{1}{2}{X})$ there exists an ideal ${\frak{m}}_\la \subset \cD_X^{tw}$ such that the quotient $ \cD_X^{tw} / {\frak{m}}_\la $ is isomorphic to the TDO $\cD_X^{\la}$ corresponding to the class $\la$. Let us sketch the construction. \smallskip Since $X$ is projective, $H^1(X,\Om{1}{2}{X})$ is finite-dimensional, and there exists an affine cover $\frak{U}$ so that $\check{H}^1(\frak{U},\Om{1}{2}{X}) = H^1(X,\Om{1}{2}{X} )$. Let $\Lambda = \check{H}^1(\frak{U},\Om{1}{2}{X})$. We fix a lifting $ \check{H}^1(\frak{U},\Om{1}{2}{X}) \To \check{Z}^1(\frak{U},\Om{1}{2}{X}) $ and identify the former with the subspace of the latter defined by this lifting.
Thus, each $\la \in \Lambda$ is a pair of cochains $\la = ( (\la^{(1)}_{ij} ), ( \la^{(2)}_{i}))$ with $ \la^{(1)}_{ij} \in \Omega^1(U_i \cap U_j), $ $ \la^{(2)}_{i} \in \Omega^{2, cl}(U_i), $ satisfying $ d_{DR} \la^{(1)}_{ij} = d_{\check{C}} \la^{(2)}_{i} $ and $ d_{\check{C}} \la^{(1)}_{ij} = 0$. For $\la = (\la^{(1)}_{ij}, \la^{(2)}_{i}) \in \Lambda$ denote by $\cD^{\la}$ the corresponding sheaf of twisted differential operators. One can consider $\cD^{\la}$ as an enveloping algebra of the (Picard) Lie algebroid $\cT^{\la} = \cD^{\la}_1$ \cite{BB2}. As an $\cO_X$-module, $\cT^{\la}$ is an extension $$ 0 \To \cO_X \One \To \cT^{\la} \To \cT_X \To 0 $$ given by $(\la^{(1)}_{ij})$. The Lie algebra structure on $\cT^{\la}_{U_i}$ is given by $ [\xi, \eta]_{\cT^\la} = [\xi, \eta] + \ixi \ieta \la^{(2)}_{i} \One $ and $[\One, \cT^{\la}_{U_i}] = 0$. \smallskip Let $\set{ \la^*_r }$ and $\set{ \la_r }$ be dual bases of $\Lambda^*$ and $\Lambda$ respectively. Denote by $k$ the dimension of $\Lambda$. Define $\cT^{tw}$ to be an abelian extension $$ 0 \to \cO_X \otimes \Lambda^* \to \cT^{tw}_X \to \cT_X \to 0 $$ such that $[\Lambda^*, \cT^{tw} ] =0$ and there exist connections $\nabla_i: \cT_{U_i} \to \cT^{tw}_{U_i}$ satisfying \begin{gather} \label{univTDO-A} \nabla_j(\xi) - \nabla_i(\xi) = \sum_{r} \iota_{\xi} \la^{(1)}_r(U_{ij}) \la^*_r \\ \label{univTDO-curvature} [\nabla_i(\xi), \nabla_i(\eta)]- \nabla_i ([\xi,\eta]) = \sum_r \iota_{\xi} \iota_{\eta} \la^{(2)}_r (U_{i}) \la^*_r \end{gather} It is clear that the pair $(\cT^{tw}, \cO_X \otimes \Lambda^* \hookrightarrow \cT^{tw})$ is independent of the choices made. We call the universal enveloping algebra $\cD_X^{tw} = U_{\cO_X}(\cT^{tw})$ \textit{the universal sheaf of twisted differential operators}. \bigskip \subsubsection{A universal twisted CDO} \label{A universal twisted CDO} Let $ch_2(X)=0$ and fix a CDO $\cD^{ch}_X$.
To each such sheaf one can attach a {\em universal twisted CDO}, $\cD^{ch, tw}_X$, a sheaf of vertex algebras whose ``underlying'' Lie algebroid is $\cT^{tw}_X$. Let us place ourselves in the situation of the previous section, where we had a fixed affine cover ${\frak U}=\{U_i\}$ of a projective algebraic manifold $X$, dual bases $\{\lambda_k\}$ of $H^1(X,\Omega^{[1,2>}_X)$ and $\{\lambda_k^*\}$ of $H^1(X,\Omega^{[1,2>}_X)^*$, and a lifting $H^1(X,\Omega^{[1,2>}_X)\rightarrow Z^1({\frak U},\Omega^{[1,2>}_X)$. We can assume that $U_i$ are suitable for chiralization. Let us fix, for each $i$, an abelian basis $\tau^{(i)}_1,\tau^{(i)}_2,...$ of $\Gamma(U_i,\cT_X)$. Then the CDO $\cD^{ch}$ is given by a collection of 3-forms $\alpha^{(i)}\in\Gamma(U_i,\Omega^{3,cl}_X)$ (cf. sect.\ \ref{Chiral_differential_operators}, Theorem~\ref{class-cdo-local}) and transition maps $ g_{ij} : \cD^{ch}_{U_j}|_{U_i \cap U_j } \to \cD^{ch}_{U_i}|_{U_i \cap U_j }. $ Let us as well fix splittings $\cT_{U_i } \hookrightarrow \cD^{ch}_{U_i}$ and view $g_{ij}$ as maps $ g_{ij}: (\cT_{U_j } \oplus \Omega^1_{U_j} )|_{U_i \cap U_j } \to (\cT_{U_i } \oplus \Omega^1_{U_i})|_{U_i \cap U_j } $. The {\em universal sheaf of twisted chiral differential operators} $\cD^{ch,tw}_X$ corresponding to $\cD^{ch}_X$ is a vertex envelope of the $\cO_X$-vertex algebroid $\cA^{tw}$ determined by the following: \begin{itemize} \item $\cA^{tw}$ is a vertex extension of $(\cT^{tw}_X, 0)$; \item there are embeddings $\cT_{U_i} \hookrightarrow \cA^{tw}_{U_i}$ such that $$ \tau^{(i)}_l \zero \tau^{(i)}_m = \iota_{ \tau^{(i)}_l } \iota_{ \tau^{(i)}_m } \alpha^{ (i) } + \sum_k \iota_{ \tau^{(i)}_l } \iota_{ \tau^{(i)}_m } \lambda^{(2)}_k (U_i) \lambda^*_k $$ \item the transition function from $U_j$ to $U_i$ is given by \begin{equation} \label{twcdo_trans_fun} g_{ij}^{tw} (\xi) = g_{ij}(\xi) - \sum_k \iota_{\xi} \lambda_k^{(1)} (U_i \cap U_j) \lambda^*_k \end{equation} \end{itemize} See \cite{AChM} for a detailed construction.
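\smallskip Let us note that, at the level of the $\Lambda^*$-valued correction terms, the transition functions (\ref{twcdo_trans_fun}) compose correctly on triple intersections (leaving aside the cocycle property of the maps $g_{ij}$ themselves, for which we refer to \cite{AChM}): since $d_{\check{C}} \lambda^{(1)}_r = 0$ (sect.\ \ref{universal-Lie-algd}), one has $$ \sum_r \iota_{\xi} \lambda^{(1)}_r (U_i \cap U_j) \lambda^*_r + \sum_r \iota_{\xi} \lambda^{(1)}_r (U_j \cap U_k) \lambda^*_r = \sum_r \iota_{\xi} \lambda^{(1)}_r (U_i \cap U_k) \lambda^*_r, $$ in agreement with (\ref{univTDO-A}).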
\subsubsection{Locally trivial twisted CDO} \label{Locally trivial twisted CDO} Observe that there is an embedding \begin{equation} \label{emb_coho} H^{1}(X,\Omega^{1,cl}_X)\hookrightarrow H^{1}(X,\Omega^{[1,2>}_X) \end{equation} The space $H^{1}(X,\Omega^{1,cl}_X)$ classifies {\em locally trivial} twisted differential operators, namely those that are locally isomorphic to $\cD_X$. Thus, for each $\lambda\in H^{1}(X,\Omega^{1,cl}_X)$, there is a TDO $\stackrel{\circ}{\cD}^{\lambda}_X$, unique up to isomorphism, such that for each sufficiently small open $U\subset X$, $\stackrel{\circ}{\cD}^{\lambda}_X|_U$ is isomorphic to $\cD_U$. Let us see what this means at the level of the universal TDO. In terms of \v{C}ech cocycles, the image of embedding (\ref{emb_coho}) is described by those $(\lambda^{(1)},\lambda^{(2)})$, see section \ref{universal-Lie-algd}, where $\lambda^{(2)}=0$, and this forces $\lambda^{(1)}$ to be closed. Picking a collection of such cocycles that represent a basis of $H^{1}(X,\Omega^{1,cl}_X)$ we can repeat the constructions of sections \ref{universal-Lie-algd} and \ref{A universal twisted CDO} to obtain sheaves $\stackrel{\circ}{\cT}^{tw}_X$ and $\stackrel{\circ}{\cD}^{ch,tw}_X$. The latter is glued from pieces isomorphic (as vertex algebras) to $\cD^{ch}_{U_i} \otimes H_X$ with transition functions as in (\ref{twcdo_trans_fun}); here $H_X$ is the vertex algebra of differential polynomials on $H^{1}(X, \Omega^{1,cl}_X)$. We will call the sheaf $\stackrel{\circ}{\cD}^{ch,tw}_X$ the {\em universal locally trivial sheaf of twisted chiral differential operators.} \subsection{TCDO on flag manifolds} \label{Example:_flag_manifolds.} Let us see what our constructions give us if $X=\prline$. We have $\prline=\pole_{0}\cup\pole_{\infty}$, a cover ${\frak U}=\{\pole_{0},\pole_{\infty}\}$, where $\pole_0$ is $\pole$ with coordinate $x$, $\pole_{\infty}$ is $\pole$ with coordinate $y$, with the transition function $x\mapsto 1/y$ over $\pole^*=\pole_{0}\cap\pole_{\infty}$.
Defined over $\pole_{0}$ and $\pole_{\infty}$ are the standard CDOs, $\cD^{ch}_{\pole_0}$ and $\cD^{ch}_{\pole_{\infty}}$. The spaces of global sections of these sheaves are polynomials in $\partial^{n}(x)$, $\partial^{n}(\partial_x)$ (or $\partial^{n}(y)$, $\partial^{n}(\partial_y)$ in the latter case), where $\partial$ is the translation operator, so that, cf. sect.\ \ref{Chiral_differential_operators}, $$ (\partial_x)_{(0)}x=(\partial_y)_{(0)}y=1. $$ There is a unique, up to isomorphism, CDO on $\prline$, $\cD^{ch}_{\prline}$; it is defined by gluing $\cD^{ch}_{\pole_0}$ and $\cD^{ch}_{\pole_{\infty}}$ over $\pole^*$ as follows \cite{MSV}: \begin{equation} \label{CDO_P1_gluing} x\mapsto 1/y,\; \partial_x\mapsto -(\partial_{y})_{(-1)}(y^2)-2\partial(y). \end{equation} The canonical Lie algebra morphism \begin{equation} \label{sl2-p1} sl_2\rightarrow \Gamma(\prline,\cT_{\prline}), \end{equation} where \begin{equation} \label{formulas-sl2-p1} e\mapsto\partial_x, \quad h\mapsto -2x\partial_x, \quad f\mapsto -x^2\partial_x, \end{equation} $e,h,f$ being the standard generators of $sl_2$, can be lifted to a vertex algebra morphism \begin{equation} \label{verte-sl2-p1} V_{-2}(sl_2)\rightarrow \Gamma(\prline,\cD^{ch}_{\prline}), \end{equation} where \begin{equation} \begin{split} \label{verte-formulas-sl2-p1} e_{(-1)}\vacB &\mapsto \, \partial_x, \\ h_{(-1)}\vacB & \mapsto \, -2(\partial_x)_{(-1)}x, \\ f_{(-1)}\vacB & \mapsto \, -(\partial_x)_{(-1)}x^2-2\partial(x). \end{split} \end{equation} The twisted version of all of this is as follows (\cite{AChM}). Since $\dim \prline=1$, $$ H^1(\prline,\Omega^{1}_{\prline}\rightarrow\Omega^{2,cl}_{\prline})= H^1(\prline,\Omega^{1,cl}_{\prline}), $$ so all twisted CDO on $\prline$ are locally trivial. Furthermore, $H^1(\prline,\Omega^{1,cl}_{\prline})=\pole$ and is spanned by the cocycle $\pole_0\cap\pole_{\infty} \mapsto dx/x$. We have $H_{\prline}=\pole[\lambda^*,\partial(\lambda^*),\dots]$.
Let $\cD^{ch,tw}_{\pole_0}=\cD^{ch}_{\pole_0}\otimes H_{\prline}$, $\cD^{ch,tw}_{\pole_{\infty}}=\cD^{ch}_{\pole_{\infty}}\otimes H_{\prline}$ and define $ \cD^{ch,tw}_{\prline}$ by gluing $\cD^{ch,tw}_{\pole_0}$ onto $ \cD^{ch,tw}_{\pole_{\infty}}$ via \begin{equation} \label{gluing_twisted_p1} \lambda^*\mapsto\lambda^*,\;x\mapsto 1/y,\;\partial_x\mapsto -(\partial_y)_{(-1)}y^2-2\partial(y) +y_{(-1)}\lambda^*. \end{equation} Morphism (\ref{verte-sl2-p1}) ``deforms'' to \begin{equation} \label{tw-verte-sl2-p1} V_{-2}(sl_2)\rightarrow \Gamma(\prline, \cD^{ch,tw}_{\prline}), \end{equation} \begin{equation} \begin{split} \label{tw-verte-formulas-sl2-p1} e_{(-1)}\vacB &\mapsto \partial_x, \\ h_{(-1)}\vacB &\mapsto -2(\partial_x)_{(-1)}x+\lambda^*, \\ f_{(-1)}\vacB &\mapsto -(\partial_x)_{(-1)}x^2-2\partial(x)+x_{(-1)}\lambda^*. \end{split} \end{equation} Furthermore, consider $T=(e_{(-1)}f_{(-1)}+f_{(-1)}e_{(-1)}+\frac{1}{2}h_{(-1)}h_{(-1)})\vacB\in V_{-2}(sl_2)$. It is known that $T\in\fz(V_{-2}(sl_2))$, the center of $V_{-2}(sl_2)$, and, in fact, the center $\fz(V_{-2}(sl_2))$ equals the commutative vertex algebra of differential polynomials in $T$. The formulas above show \begin{equation} \label{image-sugawara} T\mapsto\frac{1}{2}\lambda^*_{(-1)}\lambda^*-\partial(\lambda^*)\in H_{\prline}. \end{equation} All of the above is easily verified by direct computations, cf. \cite{MSV}. The higher rank analogue is less explicit but valid nevertheless. \bigskip Let $G$ be a simple complex Lie group, $B\subset G$ a Borel subgroup, $X=G/B$, the flag manifold, $\fg=\text{Lie\;}G$ the corresponding Lie algebra, $\fh$ a Cartan subalgebra. One has a sequence of maps \begin{equation} \fh^* \rightarrow H^1(X,\Omega^{1,cl}_X)\rightarrow H^1(X,\Omega^{1}_X\rightarrow\Omega^{2,cl}_X).
\end{equation} The leftmost map attaches to an integral weight $\lambda\in P\subset\fh^*$ the Chern class of the $G$-equivariant line bundle $\cL_\lambda=G\times_{B}\pole_{\lambda}$; the map $P\rightarrow H^1(X,\Omega^{1,cl}_X)$ thus defined is then extended to $\fh^*$ by linearity. The rightmost one is induced by the standard spectral sequence converging to hypercohomology. It is easy to verify that both these maps are isomorphisms. Therefore, \begin{equation} \label{on-flag-trivial} \fh^* \stackrel{\sim}{\rightarrow}H^1(X,\Omega^{1,cl}_X) \stackrel{\sim}{\rightarrow}H^1(X,\Omega^{1}_X\rightarrow\Omega^{2,cl}_X), \end{equation} and each twisted CDO on $X$ is locally trivial. Constructed in \cite{MSV} -- or rather in \cite{FF1}, see also \cite{F1} and \cite{GMSII} for an alternative approach -- is a vertex algebra morphism \begin{equation} \label{verte-g-f} V_{-h^{\vee}}(\fg)\rightarrow\Gamma(X,\cD^{ch}_X). \end{equation} Furthermore, it is an important result of Feigin and Frenkel \cite{FF2}, see also an excellent presentation in \cite{F1}, that $V_{-h^{\vee}}(\fg)$ possesses a non-trivial center, $\fz(V_{-h^{\vee}}(\fg))$, which, as a vertex algebra, is isomorphic to the algebra of differential polynomials in $\text{rk}\fg$ variables. \begin{lem} \cite{AChM} \label{tw-verte-g-f} Morphism (\ref{verte-g-f}) ``deforms'' to $$ \rho: \;V_{-h^{\vee}}(\fg)\rightarrow\Gamma(X,\cD^{ch,tw}_X). $$ Moreover, $ \rho(\fz(V_{-h^{\vee}}(\fg)))\subset H_{X}. $ \end{lem} \smallskip \subsection{A deformation} \subsubsection{Motivation: Wakimoto modules} Let $X =G/ B_{-}$ be a flag variety and $U = NB_{-} \subset X$ the {\em big cell} of $X$. By virtue of Lemma \ref{tw-verte-g-f}, the sections $\Gamma(U, \twcdo)$ become a $V_{-h^\vee}(\fg)$-module, hence a $\fg$-module at the critical level. Following \cite{FF1, F3}, we call $\Gamma(U, \twcdo)$ a {\em Wakimoto module of highest weight} $(0, -h^{\vee})$, to be denoted $W_{0, -h^\vee}$.
By construction, $ W_{0, -h^\vee} = \cD^{ch}(U) \otimes H_X $. In fact, Feigin and Frenkel proved \cite{FF1} that there exists a whole family of $\fg$-modules $$ W_{0, k -h^\vee} = \cD^{ch}(U) \otimes H_k $$ where $H_k$ is the Heisenberg vertex algebra associated to the space $\fh$ with bilinear pairing $k\pair{, }_0$, i.e., $k$ times the canonically normalized Killing form. The $\fg$-module structure is defined by a vertex algebra morphism $$ V_{ k -h^\vee}(\fg) \to \cD^{ch}(U) \otimes H_k $$ and thus, $W_{0, k -h^\vee}$ is a $\fg$-module of level $k-h^\vee$. \smallskip When the level is critical, $W_{0, -h^\vee} = \Gamma(U, \cD^{ch, tw}_{X})$. One might ask whether sheaves with an analogous property exist for Wakimoto modules at a non-critical level. To be more precise, we are interested in a sheaf $\cV$ of vertex algebras such that: \begin{itemize} \item its sections on the big cell $U$ and its $W$-translates are isomorphic to the tensor product of vertex algebras $\cD^{ch}({\mathbb{A}}^{\dim \fg /\fb})\otimes H_{k}$, for nonzero $k$; \item the Lie algebroid associated with $\cV$ is the Lie algebroid $\cT^{tw}$ underlying the universal TDO. \end{itemize} In other words, $\cV$ is a vertex extension of the pair $(\cT_{G/B}^{tw}, k\pair{,}_{0} )$. We show that such sheaves do indeed exist on $G/B$; moreover, the construction is rather general and can be carried out for any variety. We call the obtained sheaves the {\em deformations of TCDO} or {\em deformed TCDO}; deformations because they depend on $\pair{,}$ as a parameter, with $\pair{, } = 0$ corresponding to a TCDO. \subsubsection{Definition} The discussion above suggests the following definition. Let $X$ be a smooth projective variety and $\cT^{tw}$ the Lie algebroid underlying the universal TDO (cf. section \ref{universal-Lie-algd}). Recall that $\cT^{tw}_X$ fits into an exact sequence $$ 0 \to \cO_X \otimes \Lambda^* \to \cT^{tw}_X \to \cT_X \to 0 $$ where $ \Lambda = H^1 (X, \Omega^{1} \to \Omega^{2,cl}).
$ Let us fix a symmetric bilinear pairing $\pair{, }: \Lambda^* \times \Lambda^* \to \Cplx$ and extend it $\cO_X$-linearly to $\cO_X \otimes \Lambda^*$. \begin{defn} We will say that a sheaf $\cV$ is a {\em $\pair{,}$-deformation of TCDO} if $\cV$ is a vertex extension of the pair $(\cT^{tw}_X, \pair{, })$. \end{defn} Without specifying $\pair{,}$, a {\em deformation of TCDO} is just a vertex extension of the Lie algebroid $\cT^{tw}_X$. Being vertex extensions, $\pair{,}$-deformations form a stack, to be denoted $$ \cT \CDO_X^{\pair{,}} := \cV\cE xt^{\pair{,}}_{\cT^{tw}} $$ \smallskip \subsection{Classification of deformations} We apply the results of section \ref{obstruction-VExt-section}. Theorem \ref{obstrcution-VExt-thm} implies that, when $\cT \CDO_X^{\pair{,}}$ is locally nonempty, its class is equal to \begin{equation} \nonumber cl(\cT \CDO_X^{\pair{,}}) = cl(\cC\cE xt_{\cT^{tw}}^{\pair{,}} ) + ch_2 (\Omega^1_X) \end{equation} We are going to use the description of $cl(\cC\cE xt^{\pair{,}}_{\cT^{tw}})$ given in section \ref{facts about CExt - section}. Let us work in the setup of sections \ref{universal-Lie-algd}, \ref{A universal twisted CDO}. Thus, we pick a basis $\set{\la_r}$ of $H^1(X, \Omega^{1}_X \to \Omega^{2, cl}_X)$, a dual basis $\set{\la^*_r}$ in $H^1(X, \Omega^{1}_X \to \Omega^{2, cl}_X)^*$, and a lifting $H^1(X, \Omega^{1}_X \to \Omega^{2, cl}_X) \to \check{Z}^1(\fU, \Omega^{1}_X \to \Omega^{2, cl}_X)$, so that each $\la_r$ is a pair of cochains $ (\la_r^{(1)}, \la^{(2)}_r ) \in \prod \Omega^1(U_{ij}) \times \prod \Omega^{2,cl}({U_{i}})$.
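\smallskip In this setup the relevant Pontryagin form is easily computed: by (\ref{univTDO-curvature}), the curvature of a connection $\nabla_i$ as in section \ref{universal-Lie-algd} is $\pm \la^*_r \la^{(2)}_r (U_i)$ (summation over repeated indices; the sign depends on the conventions), and therefore $$ \half \pair{ c(\nabla_i) \wedge c(\nabla_i) } = \half \pair{\la^*_r, \la^*_s} \, \la^{(2)}_r(U_i) \wedge \la^{(2)}_s (U_i), $$ the sign ambiguity being immaterial for this expression, quadratic in $c(\nabla_i)$.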
By construction, the Lie algebroid $\cT_X^{tw}$ admits connections $\nabla_i : \cT_{U_i} \to \cT^{tw}_{U_i}$ such that \begin{equation} \label{Aij-for-Ttw} A_{ij} := \nabla_i - \nabla_j = - \la^*_k \la^{(1)}_k (U_{ij}) \end{equation} (summation over repeated indices is assumed) and \begin{equation} \label{cNabla-for-Ttw} c(\nabla_i) = - \la^*_k \la^{(2)}_k (U_i) \end{equation} \begin{thm} \label{class-DTCDO-thm} Let $\pair{,} \neq 0$ be a symmetric bilinear form on $\cO_X \otimes \Lambda^*$. Then: (1) \, $\pair{,}$-deformations exist locally on $X$ if and only if the 4-form \begin{equation} \label{def-Pontr-form} \pair{\la^*_r, \la^*_s} \la^{(2)}_r(U_i) \wedge \la^{(2)}_s (U_i) \end{equation} is exact; (2) \, Assume (1) and pick, for every $i$, a 3-form $H_i$ such that $2dH_i = \pair{\la^*_r, \la^*_s} \la^{(2)}_r(U_i) \wedge \la^{(2)}_s (U_i)$. Denote $$ \alpha_{ij} = \half \pair{\la^*_r, \la^*_s} \left( \la^{(2)}_r (U_i) + \la^{(2)}_r (U_j) \right) \wedge \la^{(1)}_s (U_{ij}) + H_i - H_j $$ and $$ \beta_{ijk} = \pair{\la^*_r, \la^*_s} \la^{(1)}_r(U_{ij}) \wedge \la^{(1)}_s (U_{jk}) $$ Then a global $\pair{,}$-deformation exists if and only if the class of the cocycle $ (\alpha_{ij}, \beta_{ijk}) $ in $H^2(X, \Om{2}{3}{X})$ is equal to $-ch_2(\Omega^1_X)$ (minus the second graded piece of the Chern character of $\Omega^1_X$). \end{thm} {\em Proof.} (1) Follows from Theorem \ref{vert-cour-local-existence}, since the 4-form (\ref{def-Pontr-form}) is twice the Pontryagin form $ \half \pair{ c(\nabla_i) \wedge c(\nabla_i)} $ for the Lie algebroid $\cT^{tw}_{X}$, and the two forms are exact simultaneously. (2) Using the connections $\nabla_i$ (and formulas (\ref{Aij-for-Ttw}), (\ref{cNabla-for-Ttw})) in the construction of section \ref{facts about CExt - section} one verifies that the cocycle $(\alpha_{ij}, \beta_{ijk})$ represents the class of $\cC\cE xt^{\pair{,}}_{\cT^{tw}}$. The statement follows immediately from Theorem \ref{obstrcution-VExt-thm} and the fact that $ cl(\CDO_X) = ch_2(\Omega^1_X) $ \cite{Bre}.
$\qed$ \bigskip \begin{rem} In the presence of CDO, the classification problem for deformed TCDO becomes one for Courant extensions of $(\cT^{tw}_X, \pair{,})$, as any CDO $\cD^{ch}$ defines an equivalence of stacks over $X$ $$ ? \boxplus \cD^{ch} : \, \cC\cE xt^{\pair{,}}_{\cT^{tw}} \to \cT \CDO^{\pair{,}}. $$ \end{rem} \smallskip \subsection{Deformations of locally trivial TCDO} \label{Deformations_of_locally_trivial_TCDO} Recall from section \ref{Locally trivial twisted CDO} that locally trivial TCDO are constructed in the same way as TCDO by consistently replacing $H^1(X, \Omega^1 \to \Omega^{2, cl})$ with $H^1 (X, \Omega^{1, cl})$. In particular we construct a Lie algebroid $\stackrel{\circ}{\cT}^{tw}$. We define the corresponding versions of deformations as follows. A {\em locally trivial deformed TCDO} is a vertex extension of $\stackrel{\circ}{\cT}^{tw}$. A {\em locally trivial $\pair{,}$-deformation of TCDO} is a vertex extension of $ ( \stackrel{\circ}{\cT}^{tw}, \pair{,}). $ The locally trivial $\pair{,}$-deformations form a stack $\cT \CDO^{\pair{,}, lt}$. \smallskip Theorem \ref{class-DTCDO-thm} has the following analogue in the locally trivial case: \begin{thm} \label{class-DTCDO-LT-thm} Let $\pair{} \neq 0$ be a symmetric $\cT$-invariant bilinear form on $\cO_X \otimes \Lambda^*$. Then: (1) \, $\pair{}$-deformations exist locally on $X$. (2) \, every $\pair{,}$-deformation $\cA^{tw, lt}_{\pair{,}}$ is locally isomorphic to $\cD_U^{ch} \otimes H_{\pair{,}}$ where $\cD_U^{ch}$ is a CDO and $H_{\pair{,}}$ is a Heisenberg vertex algebra associated to the space $H^1(X, \Omega^{1,cl})^*$ with the bilinear form $\pair{,}$. (3) \, Denote $$ \beta_{ijk} = \pair{\la^*_r, \la^*_s} \la^1_r(U_{ij}) \wedge \la^1_s (U_{jk}) $$ and let $[ (0, (\beta_{ijk})) ]$ stand for the class of $ (0, (\beta_{ijk}))$ in $H^2 (\Omega^2 \to \Omega^{3,cl})$. 
Then the class of $ \cT \CDO^{\pair{,}, lt} $ in $H^2 (\Omega^2 \to \Omega^{3,cl})$ is given by $$ cl(\cT \CDO^{\pair{,}, lt}) = ch_2(\Omega^1_X) + [(0, (\beta_{ijk})) ]. $$ \end{thm} {\em Proof.} (1) By construction, the Lie algebroid $\stackrel{\circ}{\cT}^{tw}$ admits {\em flat} connections $\nabla_i : \cT_{U_i} \to \stackrel{\circ}{\cT}_X^{tw} |_{U_i}$, which implies $ \pair{c(\nabla_i) \wedge c(\nabla_i)} =0. $ The local existence now follows from Theorem \ref{vert-cour-local-existence}. (2) Suppose $\nabla$ is a flat connection on an open set $U \subset X$, and let $\cQ =\cQ_{\nabla, H}$ be a Courant extension of $\stackrel{\circ}{\cT}^{tw}$ over $U$ (cf. \ref{facts about CExt - section}). Then $\cQ \iso \cT_U \oplus (\cO_U \otimes H^1(X, \Omega^{1,cl})^*) \oplus \Omega^1_U$ and, since $c(\nabla) = 0$, one immediately observes from (\ref{Q_NH fg pair omega}) and (\ref{Q_NH xi-braket-g}) that the constant subsheaf $H^1(X, \Omega^{1,cl})^*$ ``decouples". It is clear from the construction that it stays decoupled in $\cQ \boxplus \cD$ for any CDO $\cD$ on $U$. It has the structure of a Courant (equivalently, vertex) algebroid over ${\rm Spec} (\Cplx)$ whose vertex envelope is the algebra $H_{\pair{,}}$. (3) The proof is identical to that of Theorem \ref{class-DTCDO-thm}, Part (2). $\qed$ \bigskip \subsection{Deformed TCDO on $\prline$} This is a continuation of Example \ref{Example:_flag_manifolds.}. Recall that we are using standard coordinate charts $U_0$ and $U_1$ so that $\prline = U_0 \cup U_1$ with $0\in U_0$, $\infty \in U_1$ and coordinate functions $x: U_0 \to \Cplx$ and $y: U_1 \to \Cplx$ with $x = \frac{1}{y}$. Denote by $$ \la = \frac{dy}{y} = - \frac{dx}{x} $$ a cocycle representative of a generator of the 1-dimensional $H^1(\prline, \Omega^{1,cl})$.
By definition, \begin{equation} \begin{split} \cT^{tw}_{U_i} = \cT_{U_i} \oplus \cO_{U_i} \la^* , \ \ i =0,1, \\ \end{split} \end{equation} with Lie bracket defined by $[\xi, \eta]_\cL = [\xi, \eta]$, \ $[\xi, a \la^*] = \xi(a) \la^*$. Let $\nabla_i: \cT_{U_i} \to \cT^{tw}_{U_i}$, $i =0,1$, be the canonical inclusions. The formula (\ref{Aij-for-Ttw}) in this case reads as \begin{equation} \label{nabla_1 - nabla_0} \nabla_1 - \nabla_0 = \frac{dy}{y} \la^*, \end{equation} which dictates the following gluing map $ g_{01}: \cT^{tw}_{1}|_{\Cplx^*} \to \cT^{tw}_{0}|_{\Cplx^*} $ \begin{align} \xi & \mapsto \xi + \ixi \la \cdot \la^* \\ \nonumber \la^* & \mapsto \la^* \end{align} In the chosen coordinates, it is $ \de_y = -x^2 \de_x + x \la^*. $ \bigskip \subsubsection{The deformed TCDO} We wish to construct a vertex extension of $(\cT^{tw}_{\prline}, \pair{,})$, where $\pair{,}$ is a symmetric $\cT^{tw}$-invariant $\cO$-bilinear pairing on $\fg(\cT^{tw}) = \cO_X \otimes H^1(X, \Omega^1 \to \Omega^2)^* = \cO \cdot \la^*$. In this case it is determined by a number $k\in \Cplx$ assigned to $\pair{\la^* | \la^*}$. Let us fix $k$ and assume $k \neq 0$ ($k=0$ corresponds to the usual TCDO). Since $\dim \prline =1$, $\Omega^{i}=0$ for $i>1$; in particular, $H^i (\prline, \Omega^2 \to \Omega^{3,cl}) = 0$ for all $i$. Therefore there exists a unique vertex extension for any pair $(\cL, \pair{,})$. Let $\cA^{tw}_{\pair{,}}$ denote the vertex extension of $(\cT^{tw}_{\prline}, \pair{,})$. Denote by $H_{\prline}^{\pair{,}}$ the Heisenberg vertex algebra generated by a field $\la^*$ satisfying $ \la^* \one \la^* = \pair{\la^*, \la^*}, \ \ \la^* {}_{(n)} \la^* =0 , n\ne 1. $ Theorem \ref{class-DTCDO-LT-thm} describes $\cA^{tw}_{\pair{,}}$ locally: one has isomorphisms of vertex algebras $ (\cA^{tw}_{\pair{,}})_{U_i} \iso \cD^{ch}_{U_i} \otimes H_{\prline}^{\pair{,}}, \, i=0,1.
$ Some global information is provided by the following \begin{thm} (1)\ There are isomorphisms $\phi_i : \cA^{tw}_{\pair{,}} |_{U_i} \to \cD^{ch}_{U_i} \otimes H_{\prline}^{\pair{,}}$, $i=0,1$, such that \begin{gather} \label{P1-transition-vectorfields-inTheorem} \phi_0 \phi_1^{-1} (\de_y ) = - x^2 \de_x - 2 dx + x \la^* + \frac{1}{2} \pair{\la^*, \la^*} dx \\ \label{P1-transition-lambda-inTheorem} \phi_0 \phi_1^{-1} (\la^*) = \la^* + \pair{\la^*, \la^*} x^{-1} dx \end{gather} (2) \, The anchor map of $\cA^{tw}_{\pair{,}}$ induces a vector space isomorphism $$ H^0(\prline, \cA^{tw}_{\pair{,}}) \iso H^0 ( \prline, \cT_{\prline}). $$ \end{thm} \bigskip {\em Proof.} (1) The construction of section \ref{section-Addition} and the results of section \ref{facts about CExt - section} imply that the most general gluing formula is as follows: \begin{gather} \xi \mapsto g_{ij}(\xi ) + A(\xi) - \frac{1}{2} \pair{ A(\xi), A} + \iota_{\xi} \beta \\ g \mapsto g - \pair{g, A} \end{gather} where $g_{ij}$ is a transition function for a CDO, $\beta \in \Omega^{2}_{U_i \cap U_j}$, and $A = \nabla_j - \nabla_i$ for some connections $\nabla_i : \cT_{U_i} \to \cL_{U_i}$. Applying this to our case and using (\ref{CDO_P1_gluing}) and (\ref{nabla_1 - nabla_0}), we see that \begin{gather} \de_y \mapsto - x^2 \de_x - 2 dx + x \la^* + \frac{1}{2} \pair{\la^*, \la^*} dx \end{gather} and the map $ \tilde{\fg} |_{U_{1}} \to \tilde{\fg} |_{U_{0}} $ is given by \begin{equation} \label{P1-kernel-transition} \la^* \mapsto \la^* - \pair{\la^*, \frac{dy}{y}\la^*} = \la^* + k \frac{dx}{x} \end{equation} \bigskip (2) The gluing formula (\ref{P1-kernel-transition}) implies that the map $H^0 (\prline, \fg) \to H^1 (\prline, \Omega^1_{\prline})$ in the long exact sequence associated to $0 \To \Omega^1_{\prline} \To \tilde{\fg} \To \fg \To 0$ is an isomorphism. Since $H^j(\prline, \Omega^1) =H^k(\prline, \fg)=0$ for $j \ne 1$, $k \ne 0$, one can conclude that $ H^i (\prline, \tilde{\fg}) = 0 $ for all $i$.
In turn, the long cohomology sequence associated to the sequence $$ 0 \To \tilde{\fg} \To \cA^{tw}_{\pair{,}} \To \cT_{\prline} \To 0 $$ shows that $ H^i(\prline, \cA^{tw}_{\pair{,}}) \iso H^i ( \prline, \cT) $. $\qed$ \subsection{Embedding of affine $\frak{sl}_2$.} For $\kappa \in \Cplx$ let $\cA_\kappa(\frak{sl}_2)$ denote the vertex algebroid over $\Cplx$ equal to $\frak{sl}_2$ as a space, with bracket $g \zero g' = [g, g']$ and pairing $g \one g' = \kappa\pair{g | g'}$ where $\pair{ \cdot | \cdot }$ is the canonically normalized invariant form (for $\frak{sl}_2$, it is $\frac{1}{4} \pair{ \cdot | \cdot }_{Killing}$ ). Let \begin{equation} \begin{split} \label{sl-2-embedding} e & = \de_x \\ h & = - 2 \de_x \opm x + \la^* \\ f & = - \de_x \opm x^2 - 2 dx + x \la^* + \half \pair{\la^* | \la^*} dx \end{split} \end{equation} \begin{lem} The elements $e$, $f$, $h$ given by the formulas (\ref{sl-2-embedding}) (1) satisfy the relations of $\widehat{\frak{sl}}_2 (\kappa)$ where $\kappa = \frac{\pair{\la^* | \la^*}}{2} -2$; (2) belong to $H^0(\prline, \cA^{tw}_{\pair{,}} )$ \end{lem} {\em Proof.} Restricted to the big cell, the statement of Part (1) goes back to Wakimoto \cite{W}; see also \cite{F1}. 
The rest follows from the following equalities over $U_0 \cap U_1$: \begin{equation} \begin{split} \de_x &= - \de_y \opm y^2 -2 dy + y \la^* + \half \pair{\la^* | \la^*} dy \\ - 2 \de_x \opm x + \la^* & = 2 \de_y \opm y - \la^* \\ - \de_x \opm x^2 -2 dx + x \la^* + \half \pair{\la^* | \la^*} dx & = \de_y \end{split} \end{equation} $\qed$ \smallskip \begin{cor} \label{global_sections_of_algebroid-Cor} The formulas (\ref{sl-2-embedding}) define an isomorphism of vertex algebroids over $\Cplx$ \begin{equation} \cA_\kappa (\frak{sl}_2) \iso H^0(\prline, \cA^{tw}_{\pair{, }}) \end{equation} that extends to the vertex algebra embedding \begin{equation} \label{sl-2-vertexalgebra-embedding} V_\kappa (\frak{sl}_2) \To H^0(\prline, U(\cD^{ch, tw}_{\pair{,}})) \end{equation} \end{cor} {\em Proof.} The map defined by (\ref{sl-2-embedding}) is clearly injective and the first statement follows by dimension count. The restriction of the second map to the big cell was shown in \cite{F1} to be injective. $\qed$ \bigskip \subsection{The case of a general flag variety} Recall that we have an identification \begin{equation} \label{Equation isom-hdual-H1} \bar{\alpha}: \fh^* \iso H^1 (X, \Omega^{1,cl}) \iso {H}^2 (X, \Cplx) \end{equation} In other words, the tdo on $G/B$ are classified by $\fh^*$. The Lie algebroid $\cT^{tw}_{G/B}$ is an extension $$ \begin{CD} 0 @>>> \cO_{G/B} \otimes_\Cplx \fh @>>> \cT^{tw}_{G/B} @>>> \cT_{G/B} @>>> 0 \end{CD} $$ A deformation of TCDO is therefore a vertex extension of $(\cT^{tw}_{G/B}, \pair{,})$, where $\pair{,}$ is a symmetric bilinear pairing $ \pair{,} : \, \fh \times \fh \to \Cplx. $ We have the following \begin{thm} Let $X = G/B$. Then the class of $\cT \CDO^{\pair{}}$ is equal to $0$ if and only if $\pair{,}$ is proportional to the restriction of the Killing form on $\fh$. \end{thm} {\em Proof.} First, we find a convenient cocycle representation of the obstruction.
Let $\{ \chi_r \}$ be the set of fundamental weights, $\cL_r$ the corresponding line bundles over $X$, $\cD_{\chi_r}$ the algebras of tdo acting on $\cL_r$ and $T_{\chi_r}$ the corresponding Lie algebroids. Define the cocycles $\mu_r = ( \mu_r^{ij} ) \in \check{Z}^1 (X, \Omega_X^{1, cl})$ corresponding to $T_{\chi_r}$. Then the map (\ref{Equation isom-hdual-H1}) is the one taking $\chi_r$ to the class of $(\mu_r^{ij})$ in $ {H}^1 (X, \Omega_X^{1, cl})$. Take $\la^*_r$ to be the basis of $\fh$ dual to the basis $\{ \chi_r \}$. \smallskip Using Theorem \ref{class-DTCDO-LT-thm} and the existence of CDO on $X$ (\cite{GMSII}), we conclude that the class of $\cT \CDO^{\pair{}}$ is represented by a cocycle $ \lars \, \mu_r^{ij} \wedge \mu^{jk}_s$. Its image under the natural embedding $H^2(X, \Omega^2 \to \Omega^{3, cl}) \to H^4(X, \Cplx)$ (cf. \cite{GMSII}) equals that of the element $$ S = \lars \, \chi_r \cdot \chi_s \in S^2 \fh^*, $$ which naturally corresponds to the form $\pair{,}: \fh \times \fh \to \Cplx$. By \cite{BGG}, $S$ becomes zero in $H^4(X, \Cplx)$ if and only if $S$ is $W$-invariant. Therefore, the form $\pair{,}$ has to be a multiple of the Killing form. $\qed$ \subsubsection{Embedding of the affine vertex algebra $\cV_{k} (\fg)$} \ Let $X$ be a $G$-variety. Let $\ucA_k(\fg)_{X}$ denote the constant sheaf with sections equal to $\fg$, equipped with the structure of a $\Cplx_X$-vertex algebroid as follows: \begin{equation} \begin{split} x \zero y & = [x, y] \\ x \one y & = k \pair{x | y} \\ \pi = 0, & \ \ \de=0 \end{split} \end{equation} Let $\cA$ be a (locally trivial) $\pair{,}$-deformation of TCDO. \smallskip Let us assume that there is a Lie algebra morphism \begin{equation} \alpha: \fg \to \cT^{tw}_X \end{equation} lifting the morphism $\bar{\alpha} :\fg \to \cT_{X}$ induced by the action of $G$. (This is the case for $X = G/B$).
\smallskip Consider the sheaf of homomorphisms of vertex algebroids \begin{equation} \label{sheaf of embeddings} \calH om_{\alpha} ( \ucA_k(\fg)_{X}, \cA) \end{equation} that lift the morphism $\alpha$. We are mainly interested in the global sections of this sheaf, as they correspond to embeddings of the vertex algebra $\cV_{k} (\fg)$ into the envelope of $\cA$. \begin{prop} Suppose the image of $\fg$ in $\cT_X$ generates $\cT_X$ as an $\cO_X$-module. Then the sheaf (\ref{sheaf of embeddings}), if locally nonempty, is an $\Omega^{2,cl}$-torsor. \end{prop} {\em Proof.} Let us work locally on a subset $U\subset X$ small enough to admit an identification $\cA |_U \iso \cT_X^{tw}|_U \oplus \Omega^1_U$. Let $w, w' \in \calH om_{\alpha} ( \ucA_k(\fg)_{X}, \cA) (U)$. Then $w'(g) = w(g) + \omega(g)$ for some $\omega : \fg \to \Omega^1$, since the $\cT^{tw}$-component is fixed. Analysis similar to that in \cite{MSV, GMS} shows that $\omega$ must be given by $$ \omega (g) = \iota_{\alpha(g)} \beta $$ where $\beta \in \Omega^{2, cl}_X$. {\sloppypar Conversely, adding $ \iota_{\alpha( - )} \beta$ to any $w \in \calH om_{\alpha} ( \ucA_k(\fg)_{X}, \cA) (U)$ gives an element of $\calH om_{\alpha} ( \ucA_k(\fg)_{X}, \cA) (U)$. The statement follows. $\qed$ } \smallskip \begin{rem} When $\dim X = 1$ the torsor (\ref{sheaf of embeddings}) is trivial, therefore the existence of local embeddings implies the existence of a global one. For a general flag variety we do not know whether the torsor (\ref{sheaf of embeddings}) is trivial, but we believe it is. \end{rem}
\section{Introduction} \label{sec:Chap2S1} Non-linear filtering is one of the most important Bayesian inference methods, with several key applications in: navigation \cite{Gus2002}, guidance \cite{GSE1995}, tracking \cite{C1984}, fault detection \cite{RD2004} and fault diagnosis \cite{N2004}. Within the Bayesian framework, a filtering problem aims at constructing a posterior filter density \cite{D2001}. In the last few decades, several tractable algorithms based on analytical and statistical approximations of Bayesian filtering (e.g., the extended Kalman filter (EKF) and the unscented Kalman filter (UKF)) have been developed to allow tracking in non-linear state-space models (SSMs) \cite{A2002}. Although filters such as the EKF and UKF are efficient in tracking, their performance is often limited or affected by various numerical and statistical approximations. Despite the great practical interest in evaluating non-linear filters, their evaluation still remains one of the most complex problems in estimation theory \cite{S2001}. The Cram\'er-Rao lower bound (CRLB), defined as the inverse of the Fisher information matrix (FIM), provides a theoretical lower bound on the mean-square error (MSE) obtained with any maximum-likelihood (ML) based unbiased state or parameter estimator. An analogous extension of the CRLB to the class of Bayesian estimators was derived in \cite{V1968}, and is commonly referred to as the posterior Cram\'er-Rao lower bound (PCRLB). The PCRLB is defined as the inverse of the posterior Fisher information matrix (PFIM) and provides a lower bound on the MSE obtained with any non-linear filter \cite{T1998}. A full statistical characterization of any non-Gaussian posterior density requires all higher-order moments \cite{R2004}. As a result, the PCRLB does not fully characterize the accuracy of non-linear filters. Nonetheless, it is an important tool, as it only depends on: the system dynamics; the prior density of the states; and the system noise characteristics \cite{NB2001}.
The PCRLB has been widely used as a benchmark for: (i) assessing the quality of different non-linear filters; (ii) comparing performances of non-linear filters against that of an optimal filter; and (iii) determining whether the filter performance requirements are practical or not. Some of the key practical applications of the PCRLB include: comparison of several non-linear filters for ballistic target tracking \cite{ARB2002}; terrain navigation \cite{B1999}; and design of systems with pre-specified performance bounds \cite{NH2000}. The PCRLB is also widely used in several other areas related to: multi-sensor resource deployment (e.g., radar resource allocation \cite{GS2011}, sonobuoy deployment in submarine tracking \cite{HB2004}); sensor positioning \cite{FSK2006}; and optimal observer trajectory for bearings-only tracking \cite{PD1998,HM1993}. The original PCRLB formulation in \cite{V1968} is based on batch data, which often renders its computation impractical for multi-dimensional non-linear SSMs. Alternatively, a recursive version of the PCRLB was proposed by \cite{B1975} for scalar non-linear SSMs with additive Gaussian noise. Its extension to the multi-dimensional case was developed much later in \cite{G1980,D1995}, where the authors compared the information matrix of a non-linear SSM with that of a suitable linear system with Gaussian noise. In the seminal paper \cite{T1998}, the authors proposed an elegant approach to recursively compute the PCRLB for discrete-time, non-linear SSMs. Compared to \cite{G1980,D1995}, the PCRLB formulation in \cite{T1998} is more general, as it is applicable to multi-dimensional non-linear SSMs with non-Gaussian state and sensor noise. An overview of the historical developments of the PCRLB, along with other critical discussions, can be found in \cite{K1989}.
The PCRLB in \cite{T1998} provides a recursive procedure to compute the lower bound for tracking in general non-linear SSMs, operating with the probability of detection ${\pr_d=1}$ and the probability of false alarm ${\pr_{f}=0}$. Since then, several modified versions of the PCRLB have also appeared, which allow tracking in situations such as: measurement origin uncertainty (${\pr_d=1}$ and ${\pr_{f}\geq 0}$) \cite{HMG2002}; missed detection (${\pr_d\leq 1}$ and ${\pr_{f}=0}$) \cite{FRT2002}; and cluttered environments (${\pr_d\leq 1}$ and ${\pr_{f}\geq 0}$) \cite{HFR2006}. However, unlike the bound formulation given in \cite{T1998}, the modified versions of the lower bound are mostly for a special class of non-linear SSMs with additive Gaussian state and sensor noise. Notwithstanding a recursive procedure to compute the PCRLB in \cite{T1998}, obtaining a closed-form solution to it is non-trivial. This is due to the involved complex, multi-dimensional expectations with respect to the states and measurements, which do not lend themselves to an easy analytical solution, except in linear systems \cite{NB2001}, where the Kalman filter (KF) provides an exact solution to the PCRLB. Several attempts have been made in the past to address the aforementioned issues. First, several authors considered approximating the PCRLB for systems with: (i) linear state dynamics with additive Gaussian noise and non-linear measurement model \cite{NB2001,H2008}; (ii) linear and non-linear SSMs with additive Gaussian state and sensor noise \cite{S2001,M2010}; and (iii) linear SSMs with unknown measurement uncertainty \cite{X2005}. The special sub-class of non-linear SSMs with additive Gaussian noise allows reduction of the complex, multi-dimensional expectations to lower-dimensional ones, which are relatively easier to approximate.
\section{Motivation and contributions} \label{sec:Chap2S2} To obtain a reasonable approximation to the PCRLB for general non-linear SSMs, several authors have considered using simulation based techniques, such as the Monte Carlo (MC) method. Although an MC method makes the lower bound computation off-line, it remains a popular approach, since for many real-time applications in tracking and navigation, the design, selection and performance evaluation of different filtering algorithms are mostly done a priori or off-line. Furthermore, the availability of large amounts of historical test-data makes the MC method a viable option. MC based bound approximations have appeared for several systems with: target generated measurements \cite{ARB2002,H2008}; measurement origin uncertainty \cite{HMG2002}; cluttered environments \cite{HFR2006,MHLW2009}; and Markovian models \cite{LS2010,Bessel2003}. Although MC methods can be effectively used to approximate the involved expectations with respect to the states and measurements, they require an ensemble of the true states and measurements. While the sensor readings may be available from the historical test-data, the true states may not be available, except in simulations or in carefully designed experiments \cite{Lei2011}. To avoid having to use the true states, \cite{Lei2011} proposed an EKF and UKF based method to compute the PCRLB formulation in \cite{T1998}. To approximate the bound, \cite{Lei2011} first assumes the densities associated with the expectations to be Gaussian, and then uses an EKF and UKF to approximate the Gaussian densities using an estimate of the mean and covariance.
Even though the method proposed in \cite{Lei2011} is fast, since it only works with the first two statistical moments, there are several performance and applicability related issues with this numerical approach: (i) it relies on the linearisation of the underlying non-linear dynamics around the state estimates, which not only results in additional numerical errors, but also introduces bias in the PCRLB approximation; (ii) the method is applicable only for non-linear SSMs with additive Gaussian state and sensor noise; (iii) convergence of the numerical solution to the theoretical lower bound is not guaranteed; (iv) it provides limited control for improving the quality of the resulting numerical solution; and (v) it involves long and tedious calculations of the first two moments of the assumed Gaussian densities. Recently, \cite{ZNV2011} derived a conditional lower bound for general non-linear SSMs, and used a sequential Monte Carlo (SMC) based method to approximate it in the absence of the true states. Unlike the unconditional PCRLB in \cite{T1998}, the conditional PCRLB can be computed in real-time; however, as shown in \cite{ZNV2011}, the bound is less optimistic (or higher) than the unconditional PCRLB. This limits its use to applications where real-time bound computation is far more important than obtaining a tighter limit on the tracking performance. However, in applications such as filter design and selection, where the primary focus is on devising an efficient filtering strategy, the PCRLB in \cite{T1998} provides an optimistic measure of the filter performance. To the authors' best knowledge, there is no known numerical method to approximate the unconditional PCRLB in \cite{T1998} when the true states are unavailable. The following are the main contributions of this paper: (i) an SMC based method is developed to numerically approximate the unconditional PCRLB in \cite{T1998} for general stochastic non-linear SSMs operating with ${\pr_d=1}$ and ${\pr_{f}=0}$.
The expectations defined originally with respect to the true states and measurements are reformulated to accommodate use of the available sensor readings. This is done by first conditioning the distribution of the true states on the sensor readings, and then using an SMC method to approximate it. (ii) Based on the above developments, a numerical method to compute the lower bound for a class of discrete-time, non-linear SSMs with additive Gaussian state and sensor noise is derived. This is required, since several practical problems, especially in tracking, navigation and sensor management, are often modelled as non-linear SSMs with additive Gaussian noise. (iii) Convergence results for the SMC based PCRLB approximation are also provided. (iv) The quality of the SMC based PCRLB approximation is illustrated on two examples, which include a univariate, non-stationary growth model and a practical problem of ballistic target tracking at the re-entry phase. The proposed simulation based method is an off-line method, which can be used to deliver an efficient numerical approximation to the lower bound in \cite{T1998}, based on the sensor readings alone. Compared to the EKF and UKF based PCRLB approximation method derived in \cite{Lei2011}, the proposed SMC based method: (i) is far more general, as it can approximate the PCRLB for a larger class of discrete-time, non-linear SSMs with possibly non-Gaussian state and sensor noise; (ii) avoids numerical errors arising due to the use of dynamics linearisation methods; and (iii) provides far greater control over the quality of the resulting approximation. Moreover, several theoretical results exist for SMC methods, which can be used to suggest convergence of the SMC based PCRLB approximation to the actual lower bound. All these features of the proposed method are either validated theoretically or illustrated on simulation examples.
\section{Problem formulation} \label{sec:Chap2S3} In this paper, we consider a model for a class of general stochastic non-linear systems. \begin{model} \label{Chap2Model1} Consider the following discrete-time, stochastic non-linear SSM \begin{subequations} \label{eq:Chap2E1} \begin{align} {X}_{t+1}=&{f}_t({X}_{t},{u}_{t},{\theta}, {V}_{t}), \label{eq:Chap2E1a}\\ {Y}_t=&{g}_t({X}_{t},{u}_{t},{\theta}, {W}_{t}), \label{eq:Chap2E1b} \end{align} \end{subequations} where: ${{X}_t \in{\mathcal{X}}\subseteq \mathbb{R}^{n}}$ and ${{Y}_t\in{\mathcal{Y}}\subseteq\mathbb{R}^{m}}$ are the state variables and sensor measurements, respectively; ${{u}_{t}\in\mathcal{U}\subseteq\mathbb{R}^{p}}$ are the input variables and ${{\theta}\in{\Theta}\subseteq\mathbb{R}^{r}}$ are the model parameters. The state and sensor noise are represented as ${{V}_{t}\in \mathbb{R}^{n}}$ and ${{W}_{t}\in \mathbb{R}^{m}}$, respectively. ${f_t(\cdot)}$ is an $n$-dimensional state mapping function and ${g_t(\cdot)}$ is an $m$-dimensional measurement mapping function, each possibly non-linear in its arguments. \end{model} Model \ref{Chap2Model1} represents one of the most general classes of discrete-time, stochastic non-linear SSMs. For notational simplicity, explicit dependence on ${u_t\in\mathcal{U}}$ and ${\theta\in\Theta}$ is not shown in the rest of this article; however, all the derivations that appear in this paper hold with $u_t$ and $\theta$ included. Assumptions on Model \ref{Chap2Model1} are discussed next. \begin{assumption} \label{Chap2A1} The state and sensor dynamics, defined as ${{f_t}:\mathcal{X}\times \mathbb{R}^{n} \rightarrow \mathbb{R}^{n}}$ and ${{g_t}:\mathcal{X}\times \mathbb{R}^{m}\rightarrow \mathbb{R}^{m}}$, respectively, are at least twice differentiable with respect to ${X_t\in\mathcal{X}}$. Also, the parameters ${\theta\in\Theta}$ and inputs ${u_t\in\mathcal{U}}$ are assumed to be known {a priori}.
\end{assumption} \begin{assumption} \label{Chap2A2E} Sensor measurements are target-originated, operating with probability of false alarm ${\pr_f=0}$ and probability of detection ${\pr_d=1}$. The target states ${X_t\in\mathcal{X}}$ form a hidden Markov process, observed only through the measurement process ${Y_t\in\mathcal{Y}}$. \end{assumption} \begin{assumption} \label{Chap2A2} ${V}_{t}$, ${W}_{t}$ and $X_0$ are mutually independent sequences of independent random variables described by the probability density functions (pdfs) ${p}(v_t)$, $p(w_t)$ and $p(x_0)$, respectively. These pdfs are known in their classes (e.g., Gaussian; uniform) and are parametrized by a known and finite number of moments (e.g., mean; variance). \end{assumption} \begin{assumption} \label{Chap2A3} For a random realization $(x_{t+1},x_t,v_t)\in\mathcal{X}\times\mathcal{X}\times\mathbb{R}^n$ and $(y_t,x_t,w_t)\in\mathcal{Y}\times\mathcal{X}\times\mathbb{R}^m$ satisfying Model \ref{Chap2Model1}, $\nabla_{v_t}f^T_t(x_t,v_t)$ and $\nabla_{w_t}g^T_t(x_t,w_t)$ have rank $n$ and $m$, respectively, such that, by the implicit function theorem, ${p(x_{t+1}|x_t)=p(V_t=\tilde{f}_t(x_t,x_{t+1}))}$ and $p(y_{t}|x_t)=p(W_t=\tilde{g}_t(x_t,y_{t}))$ do not involve Dirac delta functions. \end{assumption} \subsection{Posterior Cram\'{e}r-Rao lower bound} \label{sec:Chap2S3.1} The conventional CRLB provides a lower bound on the MSE of any ML based estimator. An analogous extension of the CRLB to the class of Bayesian estimators was derived by \cite{V1968}, and is referred to as the PCRLB inequality. Extension of the PCRLB to non-linear tracking was provided by \cite{T1998}, and is given next.
\begin{lemma} \label{lemma:Chap2L1} Let $\{Y_{1:t}\}_{t\in\mathbb{N}}$ be a sequence from Model \ref{Chap2Model1}, then the MSE of any tracking filter at $t\in\mathbb{N}$ is bounded from below by the following matrix inequality \begin{align} P_{t|t}\triangleq\mathbb{E}_{p(X_{0:t},Y_{1:t})}[(X_t-\widehat{X}_{t|t})(X_t-\widehat{X}_{t|t})^T]\succcurlyeq J_t^{-1},\label{eq:Chap2E2} \end{align} where: $P_{t|t}$ is an ${n\times n}$ MSE matrix; ${\widehat{X}_{t|t}\triangleq\widehat{X}_t(Y_{1:t}):=\mathbb{R}^{tm}\rightarrow \mathbb{R}^n}$ is a point estimate of ${X_t\in\mathcal{X}}$ at time ${t\in\mathbb{N}}$, given the measurement sequence ${\{{Y_{1:t}=y_{1:t}\}\triangleq\{y_1,\dots,y_t\}}}$; $J_t$ is an $n\times n$ PFIM matrix; $J^{-1}_t$ is an $n\times n$ PCRLB matrix; ${p(x_{0:t},y_{1:t})}$ is a joint probability density of the states and measurements up until time ${t\in\mathbb{N}}$; the superscript $(\cdot)^T$ is the transpose operation; and $\mathbb{E}_{p(\cdot)}[\cdot]$ is the expectation operator with respect to the pdf $p(\cdot)$. \end{lemma} \begin{IEEEproof} See \cite{V1968} for a detailed proof. \end{IEEEproof} Inequality (\ref{eq:Chap2E2}) implies that ${P_{t|t}-J_t^{-1}\succcurlyeq 0}$ is a positive semi-definite matrix for all ${\widehat{X}_{t|t}\in\mathbb{R}^n}$ and ${t\in\mathbb{N}}$. The bound in (\ref{eq:Chap2E2}) can also be written in terms of a scalar MSE (SMSE) as \begin{align} \label{eq:Chap2E3} P^S_{t|t}\triangleq\mathbb{E}_{p(X_{0:t},Y_{1:t})}[\|X_t-\widehat{X}_{t|t}\|^2]\geq \tr[J_t^{-1}], \end{align} where $\tr[\cdot]$ is the trace operator, and $\|\cdot\|$ is a 2-norm.
\begin{lemma} \label{lemma:Chap2L2} For a system represented by Model \ref{Chap2Model1} and operating under Assumptions \ref{Chap2A1} through \ref{Chap2A3}, the PFIM in Lemma \ref{lemma:Chap2L1} can be recursively computed as \cite{T1998,S2001} \begin{align} \label{eq:Chap2E4} J_{t+1}=D_t^{22}-[D_t^{12}]^T(J_t+D_t^{11})^{-1}D_t^{12}, \end{align} where: \begin{subequations} \label{eq:Chap2E5} \begin{align} D_t^{11}=&\mathbb{E}_{p(X_{0:t+1},Y_{1:t+1})}[-\Delta_{X_t}^{X_t}\log~p(X_{t+1}|X_t)];\label{eq:Chap2E5b}\\ D_t^{12}=&\mathbb{E}_{p(X_{0:t+1},Y_{1:t+1})}[-\Delta_{X_t}^{X_{t+1}}\log~p(X_{t+1}|X_t)];\label{eq:Chap2E5c}\\ D_t^{22}=&\mathbb{E}_{p(X_{0:t+1},Y_{1:t+1})}[-\Delta_{X_{t+1}}^{X_{t+1}}\log~p(X_{t+1}|X_t)\nonumber\\ &-\Delta_{X_{t+1}}^{X_{t+1}}\log~p(Y_{t+1}|X_{t+1})];\label{eq:Chap2E5e} \end{align} \end{subequations} and: $\Delta$ is a Laplacian operator such that ${\Delta^{Y}_{X}\triangleq\nabla_{X}\nabla_{Y}^{T}}$ with ${\nabla_{X}\triangleq\left[\frac{\partial{}}{\partial{X}}\right]}$ being a gradient operator, evaluated at the true states. Also, $J_0=\mathbb{E}_{p(X_{0})}[-\Delta_{X_0}^{X_0}\log~p(X_{0})]$. \end{lemma} \begin{IEEEproof} See \cite{T1998} for a complete proof. \end{IEEEproof} For Model \ref{Chap2Model1}, obtaining a closed-form solution to the PFIM or PCRLB is non-trivial. This is due to the complex integrals involved in (\ref{eq:Chap2E5}), which do not lend themselves to an easy analytical solution. The main problem addressed in this paper is discussed next. \begin{problem} \label{problem:Chap2P1} Compute a numerical solution to the PCRLB given in Lemma \ref{lemma:Chap2L1} for systems represented by Model \ref{Chap2Model1} and operating under Assumptions \ref{Chap2A1} through \ref{Chap2A3}. \end{problem} Use of simulation based methods in addressing Problem \ref{problem:Chap2P1} is discussed next. 
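For the linear-Gaussian special case of Model \ref{Chap2Model1} (${X_{t+1}=FX_t+V_t}$, ${Y_t=HX_t+W_t}$ with Gaussian noise of covariances $Q$ and $R$), the expectations in (\ref{eq:Chap2E5}) reduce to the standard closed forms ${D_t^{11}=F^TQ^{-1}F}$, ${D_t^{12}=-F^TQ^{-1}}$ and ${D_t^{22}=Q^{-1}+H^TR^{-1}H}$ \cite{T1998}, so that the recursion (\ref{eq:Chap2E4}) can be iterated directly. The sketch below (the constant-velocity matrices are illustrative only, not taken from the text) iterates the recursion and numerically checks that, in this case, $J_t^{-1}$ coincides with the posterior covariance of the KF, i.e., the KF attains the bound:

```python
import numpy as np

def pfim_step(J, F, Qi, H, Ri):
    """One step of the PFIM recursion of Lemma 2,
    J_{t+1} = D22 - D12^T (J_t + D11)^{-1} D12, using the closed-form
    D-terms of a linear-Gaussian state-space model."""
    D11 = F.T @ Qi @ F
    D12 = -F.T @ Qi
    D22 = Qi + H.T @ Ri @ H
    return D22 - D12.T @ np.linalg.solve(J + D11, D12)

def kf_cov_step(P, F, Q, H, R):
    """Posterior error-covariance update of the Kalman filter."""
    Pp = F @ P @ F.T + Q                              # prediction covariance
    return np.linalg.inv(np.linalg.inv(Pp) + H.T @ np.linalg.inv(R) @ H)

# illustrative 2-state constant-velocity model with position measurements
F = np.array([[1.0, 1.0],
              [0.0, 1.0]])
Q = 0.1 * np.eye(2)
H = np.array([[1.0, 0.0]])
R = np.array([[0.5]])

J = np.eye(2)                                         # J_0 = P_{0|0}^{-1}
P = np.eye(2)
for _ in range(20):
    J = pfim_step(J, F, np.linalg.inv(Q), H, np.linalg.inv(R))
    P = kf_cov_step(P, F, Q, H, R)

# for linear-Gaussian systems the KF attains the PCRLB: P_{t|t} = J_t^{-1}
assert np.allclose(np.linalg.inv(J), P)
```

By the matrix inversion lemma, the step above is algebraically identical to the information-filter form ${J_{t+1}=(Q+FJ_t^{-1}F^T)^{-1}+H^TR^{-1}H}$, which is why the final assertion holds; for the general Model \ref{Chap2Model1} no such closed form is available, motivating the numerical approach that follows.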
\section{Approximating PCRLB} \label{sec:Chap2S4} The MC method is a popular approach for approximating the PCRLB; however, as discussed in Section \ref{sec:Chap2S2}, it requires an ensemble of true states and sensor measurements. While sensor readings may be available from the historical test-data, the true states may not be available in practice. To allow the use of sensor readings in approximating the PCRLB, this paper reformulates the integrals in (\ref{eq:Chap2E5}) as given below. \begin{proposition} \label{proposition:Chap2P1} The complex, multi-dimensional expectations in (\ref{eq:Chap2E5}), defined with respect to the density $p(x_{0:t+1},y_{1:t+1})$, can be reformulated and written as follows: \begin{subequations} \label{eq:Chap2E6} \begin{align} I_t^{11}=&\mathbb{E}_{p(X_{0:t+1}|Y_{1:t+1})}[-\Delta_{X_t}^{X_t}\log~p(X_{t+1}|X_t)]; \label{eq:Chap2E6a}\\ I_t^{12}=&\mathbb{E}_{p(X_{0:t+1}|Y_{1:t+1})}[-\Delta_{X_t}^{X_{t+1}}\log~p(X_{t+1}|X_t)]; \label{eq:Chap2E6b}\\ I_t^{22,a}=&\mathbb{E}_{p(X_{0:t+1}|Y_{1:t+1})}[-\Delta_{X_{t+1}}^{X_{t+1}}\log~p(X_{t+1}|X_t)]; \label{eq:Chap2E6c}\\ I_t^{22,b}=&\mathbb{E}_{p(X_{0:t+1}|Y_{1:t+1})}[-\Delta_{X_{t+1}}^{X_{t+1}}\log~p(Y_{t+1}|X_{t+1})], \label{eq:Chap2E6d} \end{align} \text{where:} \begin{align} D_t^{11}&=\mathbb{E}_{p(Y_{1:t+1})}[I_t^{11}]; \label{eq:Chap2E6e}\\ D_t^{12}&=\mathbb{E}_{p(Y_{1:t+1})}[I_t^{12}]; \label{eq:Chap2E6f}\\ D_t^{22}&=\mathbb{E}_{p(Y_{1:t+1})}[I_t^{22,a}+I_t^{22,b}]. \label{eq:Chap2E6g} \end{align} \end{subequations} \end{proposition} \begin{IEEEproof} The proof is based on decomposition of the pdf $p(x_{0:t+1},y_{1:t+1})$ in (\ref{eq:Chap2E5}), using the probability condition ${p(x_{0:t+1},y_{1:t+1})=p(y_{1:t+1})p(x_{0:t+1}|y_{1:t+1})}$. \end{IEEEproof} \begin{rmk} \label{remark:Chap2R1} In Proposition \ref{proposition:Chap2P1}, the integrals are with respect to $p(y_{1:t+1})$ and $p(x_{0:t+1}|y_{1:t+1})$.
The advantage of representing (\ref{eq:Chap2E5}) as (\ref{eq:Chap2E6}) is evident: using historical test-data, expectations with respect to $p(y_{1:t+1})$ can be approximated using MC, while those defined with respect to $p(x_{0:t+1}|y_{1:t+1})$ can be approximated using an SMC method. \end{rmk} \subsection{SMC based PCRLB approximation} \label{sec:Chap2S4.1} It is not our aim here to review SMC methods in detail, but to simply highlight their role in approximating the multi-dimensional integrals in Proposition \ref{proposition:Chap2P1}. For a detailed exposition on SMC methods, see \cite{D2001,R2004}. The essential idea behind SMC methods is to generate a large set of random particles (samples) from the target pdf, with respect to which the integrals are defined. The target pdf of interest in Proposition \ref{proposition:Chap2P1} is ${p}(x_{0:t+1}|y_{1:t+1})$. Using SMC methods, the target distribution, defined as $p(dx_{0:t+1}|y_{1:t+1})\triangleq p(x_{0:t+1}|y_{1:t+1})dx_{0:t+1}$, can be approximated as given below. \begin{align} \tilde{p}(dx_{0:t+1}|y_{1:t+1})=\sum_{i=1}^N W_{0:t+1|t+1}^i\delta_{X^i_{0:t+1|t+1}}(dx_{0:t+1}),\label{eq:Chap2EQ2} \end{align} where: $\tilde{p}(dx_{0:t+1}|y_{1:t+1})$ is an $N$-particle SMC approximation of the target distribution $p(dx_{0:t+1}|y_{1:t+1})$ and $\{X^i_{0:t+1|t+1};~W_{0:t+1|t+1}^i\}_{i=1}^{N}$ are the $N$ pairs of particle realizations and their associated weights distributed according to $p(x_{0:t+1}|y_{1:t+1})$, such that $\sum_{i=1}^NW_{0:t+1|t+1}^i=1$. Using (\ref{eq:Chap2EQ2}), an SMC approximation of (\ref{eq:Chap2E6a}), for example, can be computed as \begin{align} \tilde{I}_t^{11}=\sum_{i=1}^N W^i_{0:t+1|t+1}[-\Delta_{X_t}^{X_t}\log~p(X^i_{t+1|t+1}|X^i_{t|t+1})],\label{eq:Chap2E7aa} \end{align} where $\tilde{I}_t^{11}$ is an SMC estimate of $I_t^{11}$ and the Laplacian is evaluated at $\{X^i_{t:t+1|t+1}\}_{i=1}^N$. The convergence of (\ref{eq:Chap2E7aa}) to (\ref{eq:Chap2E6a}) depends on (\ref{eq:Chap2EQ2}).
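The weighted-particle form of estimates such as (\ref{eq:Chap2E7aa}) can be illustrated with a toy importance-sampling example; the target, proposal and test function below are all stand-ins chosen so that the true expectation is known in closed form.

```python
import numpy as np

# Minimal illustration of a weighted-particle estimate of an expectation.
# Toy stand-ins: target X ~ N(0,1), proposal N(0, 2^2), phi = cos, for which
# E[cos X] = exp(-1/2) is known exactly.
rng = np.random.default_rng(1)

def weighted_estimate(N):
    x = rng.normal(0.0, 2.0, size=N)       # particles from the proposal
    # log target - log proposal (additive constants cancel on normalization)
    logw = -0.5 * x**2 + 0.5 * (x / 2.0) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()                           # normalized weights W^i, sum to 1
    return np.sum(w * np.cos(x))           # weighted particle estimate

truth = np.exp(-0.5)
err_small = abs(weighted_estimate(10**3) - truth)
err_big = abs(weighted_estimate(10**6) - truth)
print(err_small, err_big)                  # errors shrink roughly like N^{-1/2}
```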
Many sharp results on convergence of SMC methods are available (see \cite{CD2002} for a survey paper and \cite{DM2004} for a book-length review). A selection of these results, highlighting the difficulties in approximating $p(dx_{0:t}|y_{1:t})$ with an SMC method, is presented below. \begin{theorem} \label{theorem:Chap2T1} For any bounded test function ${\phi_t: \mathcal{X}^{t+1} \rightarrow \mathbb{R}}$ and any $p>0$, there exists ${C_{t,p}<\infty}$ such that for any ${N\geq1}$ and $t\geq1$, the following inequality holds \begin{align} \label{eq:Chap2E21} \mathbb{E}\bigg[\bigg|\int_{\mathcal{X}^{t+1}} \phi_t(x_{0:t})\epsilon_t(dx_{0:t}|y_{1:t})\bigg|^p\bigg]^{\frac{1}{p}}\leq\frac{C_{t,p}\bar{\phi}_t}{N^{1/2}}, \end{align} where ${\epsilon_t(dx_{0:t}|y_{1:t})= \tilde{p}(dx_{0:t}|y_{1:t})-{p}(dx_{0:t}|y_{1:t})}$ is the $N$-particle approximation error, $\bar{\phi}_t=\sup_{x_{0:t}\in\mathcal{X}^{t+1}}|\phi_t(x_{0:t})|$, and the expectation is with respect to the particle realizations. \end{theorem} \begin{IEEEproof} See Theorem 2 in \cite{PDD2003} for a detailed proof. \end{IEEEproof} \begin{rmk} \label{remark:Chap2R2} The result in Theorem \ref{theorem:Chap2T1} is weak, since ${C_{t,p}\in\mathbb{R}}$, being a function of ${t\in\mathbb{N}}$, can grow exponentially or polynomially with time \cite{Kantas2009}. To guarantee a fixed precision of the approximation in (\ref{eq:Chap2E7aa}), $N$ has to increase with $t$. The result in Theorem \ref{theorem:Chap2T1} is not surprising, since (\ref{eq:Chap2EQ2}) requires sampling from the pdf $p(x_{0:t}|y_{1:t})$, whose dimension increases as $n(t+1)$. In the literature, the behavior described in Theorem \ref{theorem:Chap2T1} is referred to as the sample-path degeneracy problem. This is a fundamental limitation of SMC methods, wherein, for any fixed ${N\in\mathbb{N}}$, the quality of the approximation of $p(dx_{0:t}|y_{1:t})$ deteriorates with time.
\end{rmk} The motivation to use SMC methods to approximate the complex, multi-dimensional integrals in Proposition \ref{proposition:Chap2P1} is based on the fact that encouraging results can be obtained under the exponential forgetting assumption on Model \ref{Chap2Model1}. Since ${{\theta} \in{\Theta}}$ is assumed to be known (see Assumption \ref{Chap2A1}), the forgetting property in Model \ref{Chap2Model1} holds. With the forgetting property, it is possible to establish results of the form given in the next theorem. \begin{theorem} \label{theorem:Chap2T2} For an integer $L>0$, any bounded test function $\phi_L: \mathcal{X}^{L} \rightarrow \mathbb{R}$ and any $p>0$, there exists $D_{L,p}<\infty$ such that for any $N\geq1$ and $t\geq1$, the following inequality holds \begin{align} \label{eq:Chap2E22} \mathbb{E}\bigg[\bigg|\int_{\mathcal{X}^{L}} \phi_L(x_{t-L+1:t})\epsilon_L(dx_{t-L+1:t}|y_{1:t})\bigg|^p\bigg]^{\frac{1}{p}}\leq\frac{D_{L,p}\bar{\phi}_L}{N^{1/2}}, \end{align} where $\epsilon_L(dx_{t-L+1:t}|y_{1:t})= \int_{\mathcal{X}^{t-L+1}}\epsilon_t(dx_{0:t}|y_{1:t})$. \end{theorem} \begin{IEEEproof} See Theorem 2 in \cite{PDD2003} for a detailed proof. \end{IEEEproof} \begin{rmk} \label{remark:Chap2R3} Since ${D_{L,p}\in\mathbb{R}}$ is independent of ${t\in\mathbb{N}}$, Theorem \ref{theorem:Chap2T2} suggests that an SMC based approximation of the most recent marginal posterior pdf $p(x_{t-L+1:t}|y_{1:t})$, over a fixed horizon $L>0$, does not result in error accumulation. \end{rmk} For our purposes, to make the SMC based PCRLB approximation effective, the dimension of the integrals in Proposition \ref{proposition:Chap2P1} needs to be reduced. An SMC based approximation of the PCRLB over a reduced dimensional state-space is discussed next.
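As an aside, the contrast between Theorem \ref{theorem:Chap2T1} and Theorem \ref{theorem:Chap2T2} is easy to reproduce numerically: in a bootstrap filter, repeated resampling collapses the particle genealogy at early time points, while the current-time marginal stays well populated. A toy sketch on a made-up scalar linear-Gaussian model:

```python
import numpy as np

# Toy illustration of sample-path degeneracy: bootstrap filter on a scalar
# model x_{t+1} = 0.9 x_t + v_t, y_t = x_t + w_t (made up for illustration).
rng = np.random.default_rng(2)
N, T = 500, 50
x_true, ys = 0.0, []
for _ in range(T):                       # simulate one data record
    x_true = 0.9 * x_true + rng.normal()
    ys.append(x_true + rng.normal())

x = rng.normal(size=N)                   # particles
anc = np.arange(N)                       # time-0 ancestor index of each particle
for y in ys:
    x = 0.9 * x + rng.normal(size=N)     # propagate
    logw = -0.5 * (y - x) ** 2           # Gaussian likelihood weight
    w = np.exp(logw - logw.max()); w /= w.sum()
    idx = rng.choice(N, size=N, p=w)     # multinomial resampling
    x, anc = x[idx], anc[idx]

print(len(np.unique(anc)))               # few distinct time-0 ancestors survive
print(len(np.unique(x)))                 # current-time marginal stays diverse
```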
\begin{lemma} \label{lemma:Chap2L3} For a system represented by Model \ref{Chap2Model1}, using the Markov property of the target states in Assumption \ref{Chap2A2E}, Proposition \ref{proposition:Chap2P1} can be written as follows: \begin{subequations} \begin{align} I_t^{11}=&\mathbb{E}_{p(X_{t:t+1}|Y_{1:t+1})}[-\Delta_{X_t}^{X_t}\log~p(X_{t+1}|X_t)];\label{eq:Chap2EQE1a}\\ I_t^{12}=&\mathbb{E}_{p(X_{t:t+1}|Y_{1:t+1})}[-\Delta_{X_t}^{X_{t+1}}\log~p(X_{t+1}|X_t)];\label{eq:Chap2EQE1b}\\ I_t^{22,a}=&\mathbb{E}_{p(X_{t:t+1}|Y_{1:t+1})}[-\Delta_{X_{t+1}}^{X_{t+1}}\log~p(X_{t+1}|X_t)];\label{eq:Chap2EQE1c}\\ I_t^{22,b}=&\mathbb{E}_{p(X_{t+1}|Y_{1:t+1})}[-\Delta_{X_{t+1}}^{X_{t+1}}\log~p(Y_{t+1}|X_{t+1})].\label{eq:Chap2EQE1d} \end{align} \end{subequations} \end{lemma} \begin{IEEEproof} The proof is based on a straightforward use of the definition of expectation and the Markov property of Model \ref{Chap2Model1}. For example, the integrals in (\ref{eq:Chap2E6a}) can be written as \begin{subequations} \begin{align} I_t^{11}=&\int_{\mathcal{X}^{t+2}}[-\Delta_{x_t}^{x_t}\log~p(x_{t+1}|x_t)]p(dx_{0:t+1}|y_{1:t+1}),\\ =&\int_{\mathcal{X}^2}[-\Delta_{x_t}^{x_t}\log~p(x_{t+1}|x_t)]p(dx_{t:t+1}|y_{1:t+1}),\\ =&\mathbb{E}_{p(X_{t:t+1}|Y_{1:t+1})}[-\Delta_{X_t}^{X_t}\log~p(X_{t+1}|X_t)],\label{eq:Chap2E9} \end{align} \end{subequations} where ${p(dx_{0:t+1}|y_{1:t+1})\triangleq p(x_{0:t+1}|y_{1:t+1})dx_{0:t+1}}$, and in (\ref{eq:Chap2E9}), since the integrand is independent of ${x_{0:t-1}\in\mathcal{X}^{t}}$, these variables are marginalized out of the integral. Equations (\ref{eq:Chap2EQE1b}) through (\ref{eq:Chap2EQE1d}) can be derived based on similar arguments, which completes the proof. \end{IEEEproof} \begin{rmk} \label{remark:Chap2R4} The dimension of the expectations in (\ref{eq:Chap2E6a}) through (\ref{eq:Chap2E6c}) reduces from $n(t+2)$ to $2n$; whereas, in (\ref{eq:Chap2E6d}), it reduces from $n(t+2)$ to $n$ for all ${t\in\mathbb{N}}$.
Moreover, since the expectations in Lemma \ref{lemma:Chap2L3} are with respect to $p(x_{t:t+1}|y_{1:t+1})$ and $p(x_{t+1}|y_{1:t+1})$, an SMC method can be effectively used with a finite number of particles (see Theorem \ref{theorem:Chap2T2}). \end{rmk} \subsection{General non-linear SSMs} \label{sec:Chap2S4.2} To approximate the multi-dimensional integrals in Lemma \ref{lemma:Chap2L3} for Model \ref{Chap2Model1}, a set of randomly generated samples from the target distribution $p(dx_{t:t+1}|y_{1:t+1})$ is required. First note that the target pdf $p(x_{t:t+1}|y_{1:t+1})$ can alternatively be written as given below. \begin{lemma} \label{lemma:Chap2L4} The target pdf $p(x_{t:t+1}|y_{1:t+1})$, with respect to which the integrals in Lemma \ref{lemma:Chap2L3} are defined, can be decomposed and written as \begin{align} \label{eq:Chap2E10} p(x_{t:t+1}|y_{1:t+1})=\frac{p(x_{t+1}|x_t)p(x_t|y_{1:t})p(x_{t+1}|y_{1:t+1})}{\int_{\mathcal{X}}p(x_{t+1}|x_{t})p(dx_{t}|y_{1:t})}. \end{align} \end{lemma} \begin{IEEEproof} First note that the target pdf $p(x_{t:t+1}|y_{1:t+1})$ can be written as \begin{align} \label{eq:Chap2EB1} p(x_{t:t+1}|y_{1:t+1})=p(x_t|x_{t+1},y_{1:t}, y_{t+1})p(x_{t+1}|y_{1:t+1}).
\end{align} From the Markov property of (\ref{eq:Chap2E1}) and Bayes' theorem, (\ref{eq:Chap2EB1}) can be written as \begin{subequations} \begin{align} &p(x_{t:t+1}|y_{1:t+1})\nonumber\\ =&\frac{p(y_{t+1}|x_t,x_{t+1},y_{1:t})p(x_t|x_{t+1},y_{1:t})p(x_{t+1}|y_{1:t+1})}{p(y_{t+1}|x_{t+1},y_{1:t})},\label{eq:Chap2EB2a}\\ =&\frac{p(y_{t+1}|x_{t+1},y_{1:t})p(x_t|x_{t+1},y_{1:t})p(x_{t+1}|y_{1:t+1})}{p(y_{t+1}|x_{t+1},y_{1:t})},\label{eq:Chap2EB2b}\\ =&p(x_t|x_{t+1},y_{1:t})p(x_{t+1}|y_{1:t+1}).\label{eq:Chap2EB2c} \end{align} \end{subequations} Applying Bayes' theorem again in (\ref{eq:Chap2EB2c}) yields \begin{subequations} \begin{align} &p(x_{t:t+1}|y_{1:t+1})\nonumber\\ =&\frac{p(x_{t+1}|x_t,y_{1:t})p(x_t|y_{1:t})p(x_{t+1}|y_{1:t+1})}{p(x_{t+1}|y_{1:t})},\label{eq:Chap2EB3a}\\ =&\frac{p(x_{t+1}|x_t)p(x_t|y_{1:t})p(x_{t+1}|y_{1:t+1})}{\int_{\mathcal{X}}p(x_{t+1}|x_{t})p(dx_{t}|y_{1:t})},\label{eq:Chap2EB3b} \end{align} \end{subequations} where in (\ref{eq:Chap2EB3b}), the law of total probability is used, which completes the proof. \end{IEEEproof} \begin{rmk} \label{remark:Chap2R5} The procedure for generating random particles from densities, such as the uniform or Gaussian, is well described in the literature; however, due to the multivariate, non-Gaussian nature of the target pdf, generating random particles from $p(x_{t:t+1}|y_{1:t+1})$ is a non-trivial problem. An alternative idea is to employ an importance sampling function (ISF), from which random particles are easier to generate \cite{D2001}. \end{rmk} In this paper, the product of the two pdfs in (\ref{eq:Chap2E10}) is selected as the ISF, such that \begin{align} \label{eq:Chap2EQ3} q(x_{t:t+1}|y_{1:t+1})\triangleq p(x_t|y_{1:t})p(x_{t+1}|y_{1:t+1}), \end{align} where $q(x_{t:t+1}|y_{1:t+1})$ is a non-negative ISF on $\mathcal{X}^2$, such that $\supp q(x_{t:t+1}|y_{1:t+1})\supseteq\supp p(x_{t:t+1}|y_{1:t+1})$.
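Drawing from the product-form ISF (\ref{eq:Chap2EQ3}) is straightforward once particle approximations of the two posteriors are available: a joint draw is an independent pick from each particle cloud. A minimal sketch, with the two particle sets faked by Gaussian clouds purely for illustration:

```python
import numpy as np

# Sketch of sampling from q(x_t, x_{t+1} | y_{1:t+1}) = p(x_t|y_{1:t}) p(x_{t+1}|y_{1:t+1}).
# The two clouds below are arbitrary stand-ins for the filter particle sets.
rng = np.random.default_rng(3)
N = 1000
Xt = rng.normal(0.0, 1.0, size=N)        # stand-in for {X^i_{t|t}}
Xt1 = rng.normal(0.5, 1.0, size=N)       # stand-in for {X^j_{t+1|t+1}}

i = rng.integers(N, size=N)              # independent uniform indices
j = rng.integers(N, size=N)
pairs = np.column_stack([Xt[i], Xt1[j]]) # draws from the product-form ISF

# Under q the two coordinates are independent: correlation should be ~0.
print(np.corrcoef(pairs.T)[0, 1])
```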
A choice of ISF similar to (\ref{eq:Chap2EQ3}) was also employed in \cite{T2001,S2011} to develop a particle smoothing algorithm for discrete-time, non-linear SSMs. Thus, to be able to generate random samples from (\ref{eq:Chap2EQ3}), samples from the two posteriors $p(x_t|y_{1:t})$ and $p(x_{t+1}|y_{1:t+1})$ need to be generated first. Again, using the principles of importance sampling, particles from the posterior pdf can be generated using any advanced SMC method (e.g., the ASIR \cite{APF1999}, the resample-move algorithm \cite{WRC2002}, or the block sampling strategy \cite{AMS2006}) or, for example, using the method in \cite{S2011,G2008}. The method described in \cite{S2011,G2008} is outlined in Algorithm \ref{algorithm:Chap2A0}.
\makeatletter
\newcounter{ALC@tempcntr}
\newcommand{\LCOMMENT}[1]{%
  \setcounter{ALC@tempcntr}{\arabic{ALC@rem}}%
  \setcounter{ALC@rem}{1}%
  \item #1%
  \setcounter{ALC@rem}{\arabic{ALC@tempcntr}}%
}
\makeatother
\begin{algorithm}[!t] \caption{SMC based posterior density approximation} \label{algorithm:Chap2A0} \begin{algorithmic}[1] \LCOMMENT ~ \textbf{Input:} Given Model \ref{Chap2Model1}, satisfying Assumptions \ref{Chap2A1} through \ref{Chap2A3}, assume a prior pdf on $X_0$, such that ${X_0\sim p(x_0)}$. Also, select algorithm parameter $N$. \LCOMMENT ~ \textbf{Output:} Recursive SMC approximation of the posterior $p(dx_t|y_{1:t})$ for all $t\in\mathbb{N}$. \STATE Generate $N$ independent and identically distributed particles ${\{{X}^i_{0|-1}\}_{i=1}^N\sim p({x}_0)}$ and set the associated weights to $\{W^i_{0|-1}=N^{-1}\}_{i=1}^N$. Set $t\leftarrow 1$. \STATE Sample ${\{X^i_{t|t-1}\}_{i=1}^N\sim p(x_t|y_{1:t-1})}$. Set ${\{W^i_{t|t-1}=N^{-1}\}_{i=1}^N}$. \WHILE{$t\in\mathbb{N}$} \STATE Use ${\{Y_t=y_t\}}$ and compute the importance weights $\{W^i_{t|t}\}_{i=1}^N$ using \begin{align} \label{eq:Chap2E27} W^i_{t|t}= \frac{W^i_{t|t-1}p(y_t|X^i_{t|t-1})}{\sum_{j=1}^NW^j_{t|t-1}p(y_t|X^j_{t|t-1})}.
\end{align} \STATE Resample the particle set $\{{X}^j_{t|t}\}_{j=1}^N$ with replacement from $\{X^i_{t|t-1}\}_{i=1}^N$, such that \begin{align} \label{eq:Chap2E28} \pr({X}^j_{t|t}=X^i_{t|t-1})=W^i_{t|t}, \end{align} where $\pr(\cdot)$ is a probability measure. Set $\{W^i_{t|t}=N^{-1} \}_{i=1}^N$. \STATE Sample ${\{X^i_{t+1|t}\}_{i=1}^N\sim p(x_{t+1}|y_{1:t})}$ using (\ref{eq:Chap2E33}). Set ${\{W^i_{t+1|t}=N^{-1}\}_{i=1}^N}$. \STATE Set $t\leftarrow t+1$. \ENDWHILE \end{algorithmic} \end{algorithm} It is important to note that in importance sampling, degeneracy is a common problem, wherein, after a few time instances, the distribution of the weights in (\ref{eq:Chap2E27}) becomes skewed. The resampling step in (\ref{eq:Chap2E28}) is crucial in limiting the effects of degeneracy. Finally, using Algorithm \ref{algorithm:Chap2A0}, the particle representations of $p(dx_t|y_{1:t})$ and $p(dx_{t+1}|y_{1:{t+1}})$ are given by \begin{subequations} \label{eq:Chap2E12} \begin{align} \tilde{p}(dx_t|y_{1:t})&=\frac{1}{N}\sum_{i=1}^N\delta_{{X}^i_{t|t}}(dx_t)\label{eq:Chap2E12a},\\ \tilde{p}(dx_{t+1}|y_{1:t+1})&=\frac{1}{N}\sum_{j=1}^N\delta_{{X}^j_{t+1|t+1}}(dx_{t+1}).\label{eq:Chap2E12b} \end{align} \end{subequations} Here ${\{{X}^i_{t|t}\}_{i=1}^N\sim \tilde{p}(x_t|y_{1:t})}$ and ${\{{X}^i_{t+1|t+1}\}_{i=1}^N\sim \tilde{p}(x_{t+1}|y_{1:t+1})}$ are the resampled i.i.d.~samples from ${\tilde{p}(x_t|y_{1:t})}$ and ${\tilde{p}(x_{t+1}|y_{1:t+1})}$, respectively. \begin{rmk} \label{remark:Chap2R6} Uniform convergence in time of (\ref{eq:Chap2E12}) has been established in \cite{DM2004,NC2004}. Although these results rely on strong mixing assumptions on Model \ref{Chap2Model1}, uniform convergence has been observed in numerical studies for a wide class of non-linear time-series models, where the mixing assumptions are not satisfied.
\end{rmk} Substituting (\ref{eq:Chap2E12}) into (\ref{eq:Chap2EQ3}) yields an SMC approximation of the ISF, i.e., \begin{align} \tilde{q}(dx_{t:t+1}|y_{1:t+1})=&\frac{1}{N^2}\sum_{j=1}^N\sum_{i=1}^N\delta_{X^i_{t|t}, X^j_{t+1|t+1}}(dx_{t:t+1}),\label{eq:Chap2E13} \end{align} where $\tilde{q}(dx_{t:t+1}|y_{1:t+1})$ is an $N^2$-particle SMC approximation of the ISF distribution ${q}(dx_{t:t+1}|y_{1:t+1})$ and $\{X^i_{t|t};~X^j_{t+1|t+1}\}_{i=1,j=1}^{N,N}\sim \tilde{q}(x_{t:t+1}|y_{1:t+1})$ are particles from the ISF. \begin{lemma} \label{lemma:Chap2L5} An SMC approximation of the target distribution ${p}(dx_{t:t+1}|y_{1:t+1})$ can be computed using the SMC approximation of ${q}(dx_{t:t+1}|y_{1:t+1})$ given in (\ref{eq:Chap2E13}), such that \begin{align} \label{eq:Chap2EQ5} \tilde{p}&(dx_{t:t+1}|y_{1:t+1})=\sum_{i=1}^NW^{i}_{t|t, t+1|t+1}\delta_{{X}^{i}_{t|t},{X}^{i}_{t+1|t+1}}(dx_{t:t+1}), \end{align} where: \begin{subequations} \label{eq:Chap2E32} \begin{align} W^{i}_{t|t, t+1|t+1}&\triangleq\frac{\zeta^{i}_{t|t, t+1|t+1}}{\sum_{j=1}^N\zeta^{j}_{t|t, t+1|t+1}};\label{eq:Chap2E32a}\\ \zeta^{i}_{t|t, t+1|t+1}&\triangleq\frac{p({X}^i_{t+1|t+1}|{X}^i_{t|t})}{N\sum_{m=1}^Np({X}^i_{t+1|t+1}|{X}^m_{t|t})};\label{eq:Chap2E32b} \end{align} \end{subequations} and $\tilde{p}(dx_{t:t+1}|y_{1:t+1})$ is an SMC approximation of the target distribution ${p}(dx_{t:t+1}|y_{1:t+1})$.
\end{lemma} \begin{IEEEproof} Substituting (\ref{eq:Chap2E13}) into (\ref{eq:Chap2E10}), followed by several algebraic manipulations, yields an SMC approximation of ${p}(dx_{t:t+1}|y_{1:t+1})$, denoted by $\tilde{p}(dx_{t:t+1}|y_{1:t+1})$, such that \begin{subequations} \label{eq:Chap2EC1} \begin{align} &\tilde{p}(dx_{t:t+1}|y_{1:t+1})\nonumber\\ =&\frac{p(x_{t+1}|x_t)\tilde{q}(dx_{t:t+1}|y_{1:t+1})}{\int_{\mathcal{X}}p(x_{t+1}|x_{t})\tilde{p}(dx_{t}|y_{1:t})},\label{eq:Chap2EC1a}\\ =&\frac{N p(x_{t+1}|x_t)\sum_{j=1}^N\sum_{i=1}^N\delta_{{X}^{i}_{t|t},{X}^{j}_{t+1|t+1}}(dx_{t:t+1})}{N^{2}\int_{\mathcal{X}}p(x_{t+1}|x_{t})\sum_{m=1}^N\delta_{{X}^m_{t|t}}(dx_t)},\label{eq:Chap2EC1b} \end{align} \begin{align} =&\sum_{j=1}^N\sum_{i=1}^N\frac{p({X}^j_{t+1|t+1}|{X}^i_{t|t})\delta_{{X}^{i}_{t|t},{X}^{j}_{t+1|t+1}}(dx_{t:t+1})}{N\sum_{m=1}^Np({X}^j_{t+1|t+1}|{X}^m_{t|t})},\label{eq:Chap2EC1c}\\ =&\sum_{j=1}^N\sum_{i=1}^NW^{i,j}_{t|t, t+1|t+1}\delta_{{X}^{i}_{t|t},{X}^{j}_{t+1|t+1}}(dx_{t:t+1}),\label{eq:Chap2EC1d} \end{align} \end{subequations} where \begin{align} \label{eq:Chap2EC2} W^{i,j}_{t|t, t+1|t+1}&\triangleq\frac{p({X}^j_{t+1|t+1}|{X}^i_{t|t})}{N\sum_{m=1}^Np({X}^j_{t+1|t+1}|{X}^m_{t|t})}. \end{align} Equation (\ref{eq:Chap2EC1d}) is an SMC approximation of $p(dx_{t:t+1}|y_{1:t+1})$. The computational complexity of the weights in (\ref{eq:Chap2EC2}) is of the order $\mathcal{O}(N^2)$. As suggested in \cite{G2008}, without significant loss in the quality of the approximation, the complexity can be reduced to the order $\mathcal{O}(N)$ by replacing (\ref{eq:Chap2EC1d}) with (\ref{eq:Chap2EQ5}), which completes the proof. \end{IEEEproof} The distribution of weights in (\ref{eq:Chap2EQ5}) becomes skewed after a few time instances.
To avoid this, the particles in (\ref{eq:Chap2EQ5}) are resampled using systematic resampling, such that \begin{align} \label{eq:Chap2EQ7} \pr({X}^{j}_{t:t+1|t+1}=\{{X}^{i}_{t|t};~{X}^{i}_{t+1|t+1}\})=W^{i}_{t|t,t+1|t+1}, \end{align} where $\{{X}^{i}_{t:t+1|t+1}\}_{i=1}^N\sim \tilde{p}(x_{t:t+1}|y_{1:t+1})$ are resampled i.i.d. particles. With resampling, the SMC approximation of the target distribution in (\ref{eq:Chap2EQ5}) can be represented as \begin{align} \label{eq:Chap2EQ8} \tilde{p}(dx_{t:t+1}|y_{1:t+1})=\frac{1}{N}\sum_{i=1}^N\delta_{X^i_{t:t+1|t+1}}(dx_{t:t+1}). \end{align} Expectations with respect to the marginalized pdf $p(x_{t}|y_{1:t+1})$, which arise when the integrand depends on $X_t$ only, can also be approximated using SMC methods, as given in the next lemma. \begin{lemma} \label{lemma:Chap2L6} Let $\{{X}^{i}_{t:t+1|t+1}\}_{i=1}^N$ in (\ref{eq:Chap2EQ8}) be i.i.d.~resampled particles distributed according to $\tilde{p}({x}_{t:t+1}| {y}_{1:t+1})$; then an SMC approximation of $p(dx_{t}|{y}_{1:t+1})$ is given by \begin{align} \label{eq:Chap2EL5} \tilde p(dx_{t}|y_{1:t+1})=\frac{1}{N}\sum_{i=1}^{N} \delta_{X^i_{t|t+1}}(dx_t), \end{align} where $\tilde{p}(dx_{t}|y_{1:t+1})$ is an SMC approximation of $p(dx_{t}|y_{1:t+1})$ and $\delta_{X^i_{t|t+1}}(\cdot)$ is a marginalized Dirac delta function in $dx_{t}$, centred around the random particle $X^i_{t|t+1}$. \end{lemma} \begin{IEEEproof} See \cite{ABBF2012} for the proof. \end{IEEEproof} Lemma \ref{lemma:Chap2L6} gives a procedure for computing an SMC approximation of $p(dx_{t}|y_{1:t+1})$, using the particles from the SMC approximation of ${p}(d{x}_{t:t+1}| {y}_{1:t+1})$.
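A compact sketch of the reweighting (\ref{eq:Chap2E32}) followed by resampling and marginalization, for a made-up scalar model $x_{t+1}=0.9x_t+v_t$ with ${v_t\sim\mathcal{N}(0,1)}$ and stand-in particle clouds (multinomial resampling is used below for simplicity, whereas the text uses systematic resampling):

```python
import numpy as np

# Diagonal-pair reweighting: zeta^i uses the transition density at paired
# particles, normalized over all filter particles (stand-in clouds below).
rng = np.random.default_rng(4)
N = 500
Xt = rng.normal(size=N)                   # stand-in for {X^i_{t|t}}
Xt1 = rng.normal(0.0, 1.3, size=N)        # stand-in for {X^i_{t+1|t+1}}

def trans_pdf(x_next, x):                 # p(x_{t+1}|x_t) = N(0.9 x_t, 1)
    return np.exp(-0.5 * (x_next - 0.9 * x) ** 2) / np.sqrt(2 * np.pi)

num = trans_pdf(Xt1, Xt)                  # p(X^i_{t+1|t+1} | X^i_{t|t})
den = trans_pdf(Xt1[:, None], Xt[None, :]).sum(axis=1)   # sum over m
zeta = num / (N * den)
W = zeta / zeta.sum()                     # normalized weights W^i

idx = rng.choice(N, size=N, p=W)          # resample the particle pairs
Xt_smooth = Xt[idx]                       # keep x_t coordinate: draws
                                          # approximating p(x_t | y_{1:t+1})
print(W.sum())
```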
The expectation with respect to $p(y_{1:t+1})$ in Proposition \ref{proposition:Chap2P1} can be approximated using the MC method, such that \begin{align} \label{eq:Chap2EQ9} \tilde{p}(dy_{1:t+1})=\frac{1}{M}\sum_{j=1}^M\delta_{Y^j_{1:t+1}}(dy_{1:t+1}), \end{align} where $\tilde{p}(dy_{1:t+1})$ is an MC approximation of ${p}(dy_{1:t+1})$, and $M$ is the total number of i.i.d. measurement sequences obtained from the historical test-data. Note that the approximation in (\ref{eq:Chap2EQ9}) is possible only under Assumption \ref{Chap2A1}; however, in general, estimating the marginalized likelihood function ${p}(y_{1:t+1})$ is non-trivial \cite{Kantas2009}. Finally, an SMC approximation of the PCRLB for systems represented by Model \ref{Chap2Model1} and operating under Assumptions \ref{Chap2A1} through \ref{Chap2A3} is summarized in the next lemma. \begin{lemma} \label{lemma:Chap2L7} Let a general stochastic non-linear system be represented by Model \ref{Chap2Model1}, such that it satisfies Assumptions \ref{Chap2A1} through \ref{Chap2A3}.
Let ${\{Y_{1:t}=y^j_{1:t}\}_{j=1}^M}$ be ${M\in\mathbb{N}}$ i.i.d.~measurement sequences generated from Model \ref{Chap2Model1}; then the matrices (\ref{eq:Chap2E5b}) through (\ref{eq:Chap2E5e}) in Lemma \ref{lemma:Chap2L2} can be recursively approximated as follows: \begin{subequations} \begin{align} \tilde{D}_t^{11}=&-\frac{1}{MN}\sum_{j=1}^M\sum_{i=1}^N[\Delta_{X_t}^{X_t}\log~p(X^{i,j}_{t+1|t+1}|X^{i,j}_{t|t+1})];\label{eq:Chap2EE38a}\\ \tilde{D}_t^{12}=&-\frac{1}{MN}\sum_{j=1}^M\sum_{i=1}^N[\Delta_{X_t}^{X_{t+1}}\log~p(X^{i,j}_{t+1|t+1}|X^{i,j}_{t|t+1})];\label{eq:Chap2EE38b}\\ \tilde{D}_t^{22}=&-\frac{1}{MN}\sum_{j=1}^M\sum_{i=1}^N[\Delta_{X_{t+1}}^{X_{t+1}}\log~p(X^{i,j}_{t+1|t+1}|X^{i,j}_{t|t+1})\nonumber\\ &+\Delta_{X_{t+1}}^{X_{t+1}}\log~p(Y^j_{t+1}|X^{i,j}_{t+1|t+1})]; \label{eq:Chap2EE38c} \end{align} \end{subequations} and ${\{X^{i,j}_{t:t+1|t+1}\}_{i=1}^{N}\sim p(x_{t:t+1}|y^j_{1:t+1})}$ is a set of $N$ resampled particles from (\ref{eq:Chap2EQ8}), distributed according to $p(x_{t:t+1}|y^j_{1:t+1})$ for all ${\{Y_{1:t+1}=y^j_{1:t+1}\}_{j=1}^M}$. \end{lemma} \begin{IEEEproof} For a measurement sequence ${\{Y_{1:t}=y^j_{1:t}\}}$, an SMC approximation of the target distribution in (\ref{eq:Chap2EQ8}) can be written as \begin{align} \label{eq:Chap2EQ10} \tilde{p}(dx_{t:t+1}|y^j_{1:t+1})=\frac{1}{N}\sum_{i=1}^N\delta_{X^{i,j}_{t:t+1|t+1}}(dx_{t:t+1}), \end{align} where ${X^{i,j}_{t:t+1|t+1}\sim p(x_{t:t+1}|y^j_{1:t+1})}$ are resampled particles.
Substituting (\ref{eq:Chap2EQ10}) into Lemma \ref{lemma:Chap2L3}, an SMC approximation of (\ref{eq:Chap2EQE1a}) through (\ref{eq:Chap2EQE1d}) can be obtained as follows: \begin{subequations} \label{eq:Chap2EQE2} \begin{align} \tilde{I}_t^{11}=&\frac{1}{N}\sum_{i=1}^N-\Delta_{X_t}^{X_t}\log~p(X^{i,j}_{t+1|t+1}|X^{i,j}_{t|t+1});\label{eq:Chap2EQE2a}\\ \tilde{I}_t^{12}=&\frac{1}{N}\sum_{i=1}^N-\Delta_{X_t}^{X_{t+1}}\log~p(X^{i,j}_{t+1|t+1}|X^{i,j}_{t|t+1});\label{eq:Chap2EQE2b}\\ \tilde{I}_t^{22,a}=&\frac{1}{N}\sum_{i=1}^N-\Delta_{X_{t+1}}^{X_{t+1}}\log~p(X^{i,j}_{t+1|t+1}|X^{i,j}_{t|t+1});\label{eq:Chap2EQE2c}\\ \tilde{I}_t^{22,b}=&\frac{1}{N}\sum_{i=1}^N-\Delta_{X_{t+1}}^{X_{t+1}}\log~p(Y^j_{t+1}|X^{i,j}_{t+1|t+1}),\label{eq:Chap2EQE2d} \end{align} \end{subequations} where $\tilde{I}_t$ is an SMC approximation of ${I}_t$. Substituting (\ref{eq:Chap2EQE2}) and (\ref{eq:Chap2EQ9}) into (\ref{eq:Chap2E6e}) through (\ref{eq:Chap2E6g}) yields (\ref{eq:Chap2EE38a}) through (\ref{eq:Chap2EE38c}), which completes the proof. \end{IEEEproof} Lemma \ref{lemma:Chap2L7} gives an SMC based numerical method to approximate the complex, multi-dimensional integrals in Lemma \ref{lemma:Chap2L2}. Note that since Lemma \ref{lemma:Chap2L7} is valid for general non-linear SSMs, the derivatives of the logarithms of the pdfs in (\ref{eq:Chap2EE38a}) through (\ref{eq:Chap2EE38c}) are left in their original form, but can be computed for a given system. Building on the developments in this section, an SMC approximation of the PCRLB for a class of non-linear SSMs with additive Gaussian noise is presented next.
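Structurally, Lemma \ref{lemma:Chap2L7} is a double average over $M$ measurement records and $N$ particle pairs. The sketch below mirrors that structure, with the model-specific negative-Laplacian terms passed in as hypothetical callables; the toy usage values correspond to a scalar linear-Gaussian model and are made up for illustration.

```python
# Structural sketch of the double average in Lemma 7 (not the authors' code).
def d_matrices(particle_pairs, measurements, neg_lap_11, neg_lap_12, neg_lap_22):
    """particle_pairs[j][i] = (X^{i,j}_{t|t+1}, X^{i,j}_{t+1|t+1})."""
    M, N = len(measurements), len(particle_pairs[0])
    D11 = D12 = D22 = 0.0
    for j in range(M):                       # average over measurement records
        for i in range(N):                   # average over particle pairs
            xt, xt1 = particle_pairs[j][i]
            D11 += neg_lap_11(xt, xt1)
            D12 += neg_lap_12(xt, xt1)
            D22 += neg_lap_22(xt, xt1, measurements[j])
    return D11 / (M * N), D12 / (M * N), D22 / (M * N)

# Toy usage: scalar model x_{t+1} = 0.9 x_t + v, y = x + w with unit noises,
# for which the negative Laplacians are the constants 0.81, -0.9 and 2.0.
pp = [[(0.0, 0.0)] * 4 for _ in range(3)]    # dummy particles, M = 3, N = 4
D11, D12, D22 = d_matrices(pp, [0.0, 0.0, 0.0],
                           lambda a, b: 0.81,      # F Q^{-1} F
                           lambda a, b: -0.9,      # -F Q^{-1}
                           lambda a, b, y: 2.0)    # Q^{-1} + H R^{-1} H
print(D11, D12, D22)
```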
\subsection{Non-linear SSMs with additive Gaussian noise} \label{sec:Chap2S4.3} Many practical applications in tracking (e.g., ballistic target tracking \cite{ARB2002}, bearings-only tracking \cite{C1998}, range-only tracking \cite{Song1999}, multi-sensor resource deployment \cite{HB2004} and other navigation problems \cite{KGK2003}) can be described by non-linear SSMs with additive Gaussian noise. Since the class of practical problems with additive Gaussian noise is extensive, especially in tracking, navigation and sensor management, an SMC based numerical method for approximating the PCRLB for such a class of non-linear systems is presented. \begin{model} \label{Chap2Model2} Consider the class of non-linear SSMs with additive Gaussian noise \begin{subequations} \label{eq:Chap2E23} \begin{align} {X}_{t+1} =&{f}_t({X}_{t})+ {V}_{t}, \label{eq:Chap2E23a}\\ {Y}_t =&{g}_t({X}_{t})+ {W}_{t},\label{eq:Chap2E23b} \end{align} \end{subequations} where ${V_t\in\mathbb{R}^n}$ and ${W_t\in\mathbb{R}^m}$ are mutually independent sequences from the Gaussian distribution, such that ${V_t\sim\mathcal{N}(v_t|0,Q_t)}$ and ${W_t\sim\mathcal{N}(w_t|0,R_t)}$. \end{model} Note that Model \ref{Chap2Model2} can also be represented as \begin{subequations} \begin{align} \log[p&(X_{t+1}|X_t)]=c_1-\frac{1}{2}[X_{t+1}-f_t(X_t)]^TQ_t^{-1}\nonumber\\ &\times[X_{t+1}-f_t(X_t)],\label{eq:Chap2E25a}\\ \log[p&(Y_{t+1}|X_{t+1})]=c_2-\frac{1}{2}[Y_{t+1}-g_{t+1}(X_{t+1})]^TR_{t+1}^{-1}\nonumber\\ &\times[Y_{t+1}-g_{t+1}(X_{t+1})],\label{eq:Chap2E25b} \end{align} \end{subequations} where ${c_1\in\mathbb{R}}$ and ${c_2\in\mathbb{R}}$ are normalizing constants that do not depend on the states or measurements.
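The derivatives of (\ref{eq:Chap2E25a}) used below can be spot-checked numerically: differentiating the quadratic form with respect to $X_t$ gives $[\nabla_{X_t}f^T_t(X_t)]Q_t^{-1}[X_{t+1}-f_t(X_t)]$, which a central finite difference confirms. The drift, covariance and evaluation points in the sketch are made up for illustration.

```python
import numpy as np

# Numerical spot-check of the gradient of the Gaussian log-transition density
# log p(x_{t+1}|x_t) = c1 - 0.5 (x_{t+1}-f(x_t))^T Qinv (x_{t+1}-f(x_t)),
# for a made-up drift f(x) = [x0 + 0.1*x1, 0.9*x1].
Q = np.diag([0.2, 0.5])
Qinv = np.linalg.inv(Q)
f = lambda x: np.array([x[0] + 0.1 * x[1], 0.9 * x[1]])
Jf = np.array([[1.0, 0.1], [0.0, 0.9]])           # Jacobian df/dx (rows = components)

def logp(x_next, x):
    r = x_next - f(x)
    return -0.5 * r @ Qinv @ r                    # constant c1 dropped

x, x_next = np.array([0.3, -0.2]), np.array([0.5, 0.1])
analytic = Jf.T @ Qinv @ (x_next - f(x))          # [grad f^T] Q^{-1} (x_{t+1} - f)

eps = 1e-6                                        # central finite differences
numeric = np.array([(logp(x_next, x + eps * e) - logp(x_next, x - eps * e)) / (2 * eps)
                    for e in np.eye(2)])
print(np.allclose(analytic, numeric, atol=1e-5))  # True
```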
\begin{result} \label{result:Chap2R1} The first- and second-order partial derivatives of (\ref{eq:Chap2E25a}) are given by \begin{subequations} \label{eq:Chap2E26} \begin{align} \nabla_{X_t}\log[p(X_{t+1}|X_t)]=&[\nabla_{X_t}f^T_t(X_t)]Q_t^{-1}[X_{t+1}-f_t(X_t)],\label{eq:Chap2E26a}\\ \Delta_{X_t}^{X_t}\log[p(X_{t+1}|X_t)]=&-[\nabla_{X_t}f^T_t(X_t)]Q_t^{-1}[\nabla_{X_t}f_t(X_t)]\nonumber\\ &+[\Delta_{X_t}^{X_t}f^T_t(X_t)]\Lambda_{X_{t}}^{-1}\Psi_{X_{t}},\label{eq:Chap2E26b} \end{align} and the mixed second-order derivative, taken first with respect to ${X_{t+1}\in\mathcal{X}}$ and then with respect to ${X_t\in\mathcal{X}}$, is given by \begin{align} \Delta_{X_t}^{X_{t+1}}\log[p(X_{t+1}|X_t)]=&[\nabla_{X_t}f^T_t(X_t)]Q_t^{-1},\label{eq:Chap2E26c} \end{align} \end{subequations} where: ${\Lambda_{X_{t}}^{-1}=Q_t^{-1}I_{n^2\times n^2}}$; ${\Psi_{X_{t}}=[X_{t+1}-f_t(X_t)]I_{n^2\times n}}$; $I_{n^2\times n^2}$, and $I_{n^2\times n}$ are ${n^2\times n^2}$ and ${n^2\times n}$ identity matrices, respectively. Also: $[\nabla_{X_t}f^T_t(X_t)]$ and $[\Delta_{X_t}^{X_t}f^T_t(X_t)]$ are \begin{subequations} \begin{align} [\nabla_{X_t}f^T_t(X_t)]\triangleq&[\nabla_{X_t}f^{(1)}_t(X_t),\cdots, \nabla_{X_t}f^{(n)}_t(X_t)]_{n\times n},\label{eq:Chap2E28a}\\ [\Delta_{X_t}^{X_t}f^T_t(X_t)]\triangleq&[\Delta_{X_t}^{X_t}f^{(1)}_t(X_t),\cdots, \Delta_{X_t}^{X_t}f^{(n)}_t(X_t)]_{n\times n^2},\label{eq:Chap2E28b} \end{align} \end{subequations} where $f_t(X_t)\triangleq[f^{(1)}_t(X_t),\cdots,f^{(n)}_t(X_t)]^T$ is an $n\times1$ vector-valued function in (\ref{eq:Chap2E23a}).
\end{result} \begin{result} \label{result:Chap2R2} The second-order partial derivatives of (\ref{eq:Chap2E25a}) and (\ref{eq:Chap2E25b}) with respect to ${X_{t+1}\in\mathcal{X}}$ are given by \begin{subequations} \label{eq:Chap2EF3} \begin{align} &\Delta_{X_{t+1}}^{X_{t+1}}\log[p(X_{t+1}|X_t)]=-Q_t^{-1};\label{eq:Chap2EF3a}\\ &\Delta_{X_{t+1}}^{X_{t+1}}\log[p(Y_{t+1}|X_{t+1})]=[\Delta_{X_{t+1}}^{X_{t+1}}g^T_{t+1}(X_{t+1})]\Lambda_{Y_{t+1}}^{-1}\Psi_{Y_{t+1}}\nonumber\\ &-[\nabla_{X_{t+1}}g^T_{t+1}(X_{t+1})]R_{t+1}^{-1}[\nabla_{X_{t+1}}g_{t+1}(X_{t+1})],\label{eq:Chap2EF3b} \end{align} \end{subequations} where: ${\Lambda_{Y_{t+1}}^{-1}=R_{t+1}^{-1}I_{nm\times nm}}$; ${\Psi_{Y_{t+1}}=[Y_{t+1}-g_{t+1}(X_{t+1})]I_{nm\times n}}$; ${I_{nm\times nm}}$, and ${I_{nm\times n}}$ are ${nm\times nm}$ and ${nm\times n}$ identity matrices, respectively. Also: ${[\nabla_{X_{t+1}}g_{t+1}(X_{t+1})]}$ and ${[\Delta_{X_{t+1}}^{X_{t+1}}g_{t+1}(X_{t+1})]}$ are \begin{subequations} \begin{align} &[\nabla_{X_{t+1}}g^T_{t+1}(X_{t+1})]\nonumber\\ &=[\nabla_{X_{t+1}}g^{(1)}_{t+1}(X_{t+1}),\dots,\nabla_{X_{t+1}}g^{(m)}_{t+1}(X_{t+1})]_{n\times m}\label{eq:Chap2EF5a};\\ &[\Delta_{X_{t+1}}^{X_{t+1}}g^T_{t+1}(X_{t+1})]\nonumber\\ &=[\Delta_{X_{t+1}}^{X_{t+1}}g^{(1)}_{t+1}(X_{t+1}),\dots, \Delta_{X_{t+1}}^{X_{t+1}}g^{(m)}_{t+1}(X_{t+1})]_{n\times nm}\label{eq:Chap2EF5b}; \end{align} \end{subequations} where ${g_{t+1}(X_{t+1})\triangleq[g^{(1)}_{t+1}(X_{t+1}),\cdots,g^{(m)}_{t+1}(X_{t+1})]^T}$ is an ${m\times1}$ vector-valued function in (\ref{eq:Chap2E23b}).
\end{result} \begin{lemma} \label{lemma:Chap2L8} For a system given by Model \ref{Chap2Model2}, under Assumptions \ref{Chap2A1} through \ref{Chap2A3}, the expectations (\ref{eq:Chap2EQE1a}) through (\ref{eq:Chap2EQE1d}) in Lemma \ref{lemma:Chap2L3} can be written as follows: \begin{subequations} \begin{align} I_t^{11}=&\mathbb{E}_{p(X_{t}|Y_{1:t+1})}[\nabla_{X_t}f^T_t(X_t)]Q_t^{-1}[\nabla_{X_t}f_t(X_t)];\label{eq:Chap2E30a}\\ I_t^{12}=&\mathbb{E}_{p(X_t|Y_{1:t+1})}[-\nabla_{X_t}f^T_t(X_t)]Q_t^{-1};\label{eq:Chap2E30b}\\ I_t^{22,a}=&Q_t^{-1};\label{eq:Chap2E30c}\\ I_t^{22,b}=&\mathbb{E}_{\frac{p(Y_{1:t})}{p(Y_{1:t+1})}}\mathbb{E}_{p(X_{t+1}|Y_{1:t})}[\nabla_{X_{t+1}}g^T_{t+1}(X_{t+1})]R_{t+1}^{-1}\nonumber\\ &\times[\nabla_{X_{t+1}}g_{t+1}(X_{t+1})].\label{eq:Chap2E30d} \end{align} \end{subequations} \end{lemma} \begin{IEEEproof} (\ref{eq:Chap2E30a}): Substituting (\ref{eq:Chap2E26b}) into (\ref{eq:Chap2EQE1a}) yields \begin{subequations} \label{eq:Chap2E29} \begin{align} &I_t^{11}=\mathbb{E}_{p(X_{t:t+1}|Y_{1:t+1})}[[\nabla_{X_t}f^T_t(X_t)]Q_t^{-1}[\nabla_{X_t}f_t(X_t)]\nonumber\\ &-[\Delta_{X_t}^{X_t}f^T_t(X_t)]\Lambda_{X_{t}}^{-1}\Psi_{X_{t}}],\label{eq:Chap2E29a}\\ &=\mathbb{E}_{p(X_{t}|Y_{1:t+1})}\mathbb{E}_{p(X_{t+1}|X_t, Y_{1:t+1})}[[\nabla_{X_t}f^T_t(X_t)]Q_t^{-1}\nonumber\\ &\times[\nabla_{X_t}f_t(X_t)]-[\Delta_{X_t}^{X_t}f^T_t(X_t)]\Lambda_{X_{t}}^{-1}\Psi_{X_{t}}],\label{eq:Chap2E29b} \end{align} \end{subequations} where (\ref{eq:Chap2E29b}) is obtained by substituting the probability relation $p(x_{t:t+1}|y_{1:t+1})=p(x_{t+1}|x_t, y_{1:t+1})p(x_{t}|y_{1:t+1})$ into (\ref{eq:Chap2E29a}).
Finally, noting the following two conditions \begin{align} &\mathbb{E}_{p(X_{t+1}|X_t,Y_{1:t+1})}[\nabla_{X_t}f^T_t(X_t)]Q_t^{-1}[\nabla_{X_t}f_t(X_t)]\nonumber\\ &=[\nabla_{X_t}f^T_t(X_t)]Q_t^{-1}[\nabla_{X_t}f_t(X_t)],\label{eq:Chap2E29c}\\ &\mathbb{E}_{p(X_{t+1}|X_t,Y_{1:t+1})}[\Delta_{X_t}^{X_t}f^T_t(X_t)]\Lambda_{X_{t}}^{-1}\Psi_{X_{t}}\nonumber\\ &=[\Delta_{X_t}^{X_t}f^T_t(X_t)]\Lambda_{X_{t}}^{-1}\mathbb{E}_{p(X_{t+1}|X_t,Y_{1:t+1})}[\Psi_{X_{t}}]=0,\label{eq:Chap2E29d} \end{align} and substituting (\ref{eq:Chap2E29c}) and (\ref{eq:Chap2E29d}) into (\ref{eq:Chap2E29b}) yields (\ref{eq:Chap2E30a}).\\ \noindent (\ref{eq:Chap2E30b}): Substituting (\ref{eq:Chap2E26c}) into (\ref{eq:Chap2EQE1b}) yields \begin{align} \label{eq:Chap2EAF2} I_t^{12}=\mathbb{E}&_{p(X_{t:t+1}|Y_{1:t+1})}[-[\nabla_{X_t}f^T_t(X_t)]Q_t^{-1}]. \end{align} Substituting the probability relation $p(x_{t:t+1}|y_{1:t+1})=p(x_{t+1}|x_t, y_{1:t+1})p(x_{t}|y_{1:t+1})$ into (\ref{eq:Chap2EAF2}), followed by taking the terms independent of $x_{t+1}$ out of the integral, yields (\ref{eq:Chap2E30b}).\\ \noindent (\ref{eq:Chap2E30c}): Substituting (\ref{eq:Chap2EF3a}) into (\ref{eq:Chap2EQE1c}) yields (\ref{eq:Chap2E30c}).\\ \noindent (\ref{eq:Chap2E30d}): Using Bayes' rule, the expectation in (\ref{eq:Chap2EQE1d}) can be rewritten as \begin{align} I_t^{22,b}=\mathbb{E}_{\frac{p(X_{t+1}, Y_{1:t+1})}{p(Y_{1:t+1})}}[-\Delta_{X_{t+1}}^{X_{t+1}}\log~p(Y_{t+1}|X_{t+1})].\label{eq:Chap2EF9a} \end{align} Now using the probability condition ${p(x_{t+1},y_{1:t+1})=p(y_{t+1}|x_{t+1})p(x_{t+1}|y_{1:t})p(y_{1:t})}$, the expectation in (\ref{eq:Chap2EF9a}) can further be decomposed and written as \begin{align} I_t^{22,b}=&\mathbb{E}_{\frac{p(Y_{1:t})}{p(Y_{1:t+1})}}\mathbb{E}_{p(X_{t+1}|Y_{1:t})}\mathbb{E}_{p(Y_{t+1}|X_{t+1})}\nonumber\\ &\times[-\Delta_{X_{t+1}}^{X_{t+1}}\log~p(Y_{t+1}|X_{t+1})].\label{eq:Chap2EF9b} \end{align} Substituting (\ref{eq:Chap2EF3b}) into (\ref{eq:Chap2EF9b}) yields \begin{align} \label{eq:Chap2EF10}
&I_t^{22,b}=\mathbb{E}_{\frac{p(Y_{1:t})}{p(Y_{1:t+1})}}\mathbb{E}_{p(X_{t+1}|Y_{1:t})}\mathbb{E}_{p(Y_{t+1}|X_{t+1})}\nonumber\\ &\left[[-\Delta_{X_{t+1}}^{X_{t+1}}g^T_{t+1}(X_{t+1})]\Lambda_{Y_{t+1}}^{-1}\Psi_{Y_{t+1}}\right.\nonumber\\ &\left.+[\nabla_{X_{t+1}}g^T_{t+1}(X_{t+1})]R_{t+1}^{-1}[\nabla_{X_{t+1}}g_{t+1}(X_{t+1})]\right]. \end{align} Noting the following two conditions \begin{align} &\mathbb{E}_{p(Y_{t+1}|X_{t+1})}[\nabla_{X_{t+1}}g^T_{t+1}(X_{t+1})]R_{t+1}^{-1}[\nabla_{X_{t+1}}g_{t+1}(X_{t+1})]\nonumber\\ &=[\nabla_{X_{t+1}}g^T_{t+1}(X_{t+1})]R_{t+1}^{-1}[\nabla_{X_{t+1}}g_{t+1}(X_{t+1})],\label{eq:Chap2EF11}\\ &\mathbb{E}_{p(Y_{t+1}|X_{t+1})}[\Delta_{X_{t+1}}^{X_{t+1}}g^T_{t+1}(X_{t+1})]\Lambda_{Y_{t+1}}^{-1}\Psi_{Y_{t+1}}\nonumber\\ &=[\Delta_{X_{t+1}}^{X_{t+1}}g^T_{t+1}(X_{t+1})]\Lambda_{Y_{t+1}}^{-1}\mathbb{E}_{p(Y_{t+1}|X_{t+1})}[\Psi_{Y_{t+1}}]=0,\label{eq:Chap2EF12} \end{align} and substituting (\ref{eq:Chap2EF11}) and (\ref{eq:Chap2EF12}) into (\ref{eq:Chap2EF10}) yields (\ref{eq:Chap2E30d}), which completes the proof. \end{IEEEproof} Using the results of Lemma \ref{lemma:Chap2L8}, an SMC approximation of the PCRLB for Model \ref{Chap2Model2} can be subsequently computed, as discussed in the next lemma. \begin{lemma} \label{lemma:Chap2L9} Let a stochastic non-linear system with additive Gaussian state and sensor noise be represented by Model \ref{Chap2Model2}, such that it satisfies Assumptions \ref{Chap2A1} through \ref{Chap2A3}.
Let ${\{Y_{1:t}=y^j_{1:t}\}_{j=1}^M}$ be ${M\in\mathbb{N}}$ i.i.d.~measurement sequences generated from Model \ref{Chap2Model2}; then (\ref{eq:Chap2E5b}) through (\ref{eq:Chap2E5e}) in Lemma \ref{lemma:Chap2L2} can be recursively approximated as follows: \begin{subequations} \begin{align} \tilde{D}_t^{11}=&\frac{1}{MN}\sum_{j=1}^M\sum_{i=1}^N[\nabla_{X_t}f^T_t(X^{i,j}_{t|t+1})]Q_t^{-1}[\nabla_{X_t}f_t(X^{i,j}_{t|t+1})];\label{eq:Chap2EQ38a}\\ \tilde{D}_t^{12}=&\frac{1}{MN}\sum_{j=1}^M\sum_{i=1}^N-[\nabla_{X_t}f^T_t(X^{i,j}_{t|t+1})]Q_t^{-1};\label{eq:Chap2EQ38b}\\ \tilde{D}_t^{22}=&Q_t^{-1}+\frac{1}{MN}\sum_{j=1}^M\sum_{i=1}^N[\nabla_{X_{t+1}}g^T_{t+1}(X^{i,j}_{t+1|t})]R_{t+1}^{-1}\nonumber\\ &\times[\nabla_{X_{t+1}}g_{t+1}(X^{i,j}_{t+1|t})]; \label{eq:Chap2EQ38c} \end{align} \end{subequations} and ${\{X^{i,j}_{t|t+1}\}_{i=1}^{N}\sim p(x_{t}|y^j_{1:t+1})}$ and ${\{X^{i,j}_{t+1|t}\}_{i=1}^{N}\sim p(x_{t+1}|y^j_{1:t})}$ are sets of $N$ resampled and predicted particles obtained from Lemma \ref{lemma:Chap2L6} and {Algorithm \ref{algorithm:Chap2A0}}, respectively, for all ${\{Y_{1:t+1}=y^j_{1:t+1}\}_{j=1}^M}$. \end{lemma} \begin{IEEEproof} For ${\{Y_{1:t}=y^j_{1:t}\}}$, the SMC approximation in (\ref{eq:Chap2EL5}) can be written as \begin{align} \label{eq:Chap2EQ39} \tilde{p}(dx_{t}|y^j_{1:t+1})=\frac{1}{N}\sum_{i=1}^N\delta_{X^{i,j}_{t|t+1}}(dx_{t}), \end{align} where ${X^{i,j}_{t|t+1}\sim p(x_{t}|y^j_{1:t+1})}$. Substituting (\ref{eq:Chap2EQ39}) into (\ref{eq:Chap2E30a}) and (\ref{eq:Chap2E30b}) yields \begin{subequations} \label{eq:Chap2EQ40} \begin{align} \tilde{I}_t^{11}=&\frac{1}{N}\sum_{i=1}^N[\nabla_{X_t}f^T_t(X^{i,j}_{t|t+1})]Q_t^{-1}[\nabla_{X_t}f_t(X^{i,j}_{t|t+1})],\label{eq:Chap2EQ40a}\\ \tilde{I}_t^{12}=&-\frac{1}{N}\sum_{i=1}^N[\nabla_{X_t}f^T_t(X^{i,j}_{t|t+1})]Q_t^{-1},\label{eq:Chap2EQ40b} \end{align} \end{subequations} where $\tilde{I}_t$ is an SMC approximation of ${I}_t$.
Substituting (\ref{eq:Chap2EQ40}) and (\ref{eq:Chap2EQ9}) into (\ref{eq:Chap2E6e}) and (\ref{eq:Chap2E6f}) yields (\ref{eq:Chap2EQ38a}) and (\ref{eq:Chap2EQ38b}), respectively. Computing an SMC approximation of $D_t^{22}$ in (\ref{eq:Chap2E6g}) for Model \ref{Chap2Model2} requires a slightly different approach. Substituting (\ref{eq:Chap2E30c}) and (\ref{eq:Chap2E30d}) into (\ref{eq:Chap2E6g}) yields \begin{subequations} \label{eq:Chap2EQ41} \begin{align} D_t^{22}=&\mathbb{E}_{p(Y_{1:t+1})}[Q_t^{-1}+\mathbb{E}_{\frac{p(Y_{1:t})}{p(Y_{1:t+1})}}\mathbb{E}_{p(X_{t+1}|Y_{1:t})}\nonumber\\ &\times[\nabla_{X_{t+1}}g^T_{t+1}(X_{t+1})]R_{t+1}^{-1}[\nabla_{X_{t+1}}g_{t+1}(X_{t+1})]],\label{eq:Chap2EQ41a}\\ =&Q_t^{-1}+\mathbb{E}_{p(Y_{1:t})}\mathbb{E}_{p(X_{t+1}|Y_{1:t})}[\nabla_{X_{t+1}}g^T_{t+1}(X_{t+1})]R_{t+1}^{-1}\nonumber\\ &\times[\nabla_{X_{t+1}}g_{t+1}(X_{t+1})],\label{eq:Chap2EQ41b} \end{align} \end{subequations} where (\ref{eq:Chap2EQ41b}) follows since $Q_t^{-1}$ is independent of the measurement sequence and $\mathbb{E}_{p(Y_{1:t+1})}\mathbb{E}_{\frac{p(Y_{1:t})}{p(Y_{1:t+1})}}[\cdot]=\mathbb{E}_{p(Y_{1:t})}[\cdot]$. For ${\{Y_{1:t}=y^j_{1:t}\}}$, random samples $\{X^{i,j}_{t+1|t}\}_{i=1}^N\sim p(x_{t+1}|y^j_{1:t})$ from {Algorithm \ref{algorithm:Chap2A0}} deliver an SMC approximation of $p(dx_{t+1}|y^j_{1:t})$ given as \begin{align} \label{eq:Chap2EQ42} \tilde p(dx_{t+1}|y^j_{1:t})=\frac{1}{N}\sum_{i=1}^N\delta_{X^{i,j}_{t+1|t}}(dx_{t+1}). \end{align} Substituting (\ref{eq:Chap2EQ42}) and (\ref{eq:Chap2EQ9}) into (\ref{eq:Chap2EQ41b}) yields (\ref{eq:Chap2EQ38c}), which completes the proof.
\end{IEEEproof} \begin{result} \label{result:Chap2R1E} An SMC approximation of the PFIM for Model \ref{Chap2Model2} is obtained by substituting (\ref{eq:Chap2EQ38a}) through (\ref{eq:Chap2EQ38c}) in Lemma \ref{lemma:Chap2L9} into (\ref{eq:Chap2E4}) in Lemma \ref{lemma:Chap2L2}, such that \begin{align} \label{eq:Chap2E22} \tilde{J}_{t+1}=\tilde{D}_t^{22}-[\tilde{D}_t^{12}]^T(\tilde{J}_t+\tilde{D}_t^{11})^{-1}\tilde{D}_t^{12}, \end{align} where $\tilde{J}_{t+1}$ is an SMC approximation of ${J}_{t+1}$. Applying the matrix inversion lemma \cite{RA1985} to (\ref{eq:Chap2E22}) gives an SMC approximation of the PCRLB, such that \begin{align} \label{eq:Chap2EQ11} &\tilde{J}_{t+1}^{-1}=[\tilde{D}_t^{22}]^{-1}-[\tilde{D}_t^{22}]^{-1}[\tilde{D}_t^{12}]^T\nonumber\\ &\times\left[\tilde{D}_t^{12}[\tilde{D}_t^{22}]^{-1}[\tilde{D}_t^{12}]^T-(\tilde{J}_t+\tilde{D}_t^{11})\right]^{-1}\tilde{D}_t^{12}[\tilde{D}_t^{22}]^{-1}, \end{align} where $\tilde{J}_{t+1}^{-1}$ is an SMC approximation of ${J}_{t+1}^{-1}$ in (\ref{eq:Chap2E2}). \end{result} \section{Final Algorithm} \label{sec:Chap2S5} Algorithms \ref{algorithm:Chap2A1} and \ref{algorithm:Chap2A2} give the procedure for computing an SMC approximation of the PCRLB for Models \ref{Chap2Model1} and \ref{Chap2Model2}, respectively. \begin{algorithm}[h] \caption{SMC based PCRLB for Model \ref{Chap2Model1}} \label{algorithm:Chap2A1} \begin{algorithmic}[1] \LCOMMENT ~ \textbf{Input:} Given Model \ref{Chap2Model1}, satisfying Assumptions \ref{Chap2A1} through \ref{Chap2A3}, assume a prior pdf on $X_0$, such that ${X_0\sim p(x_0)}$. Also, select the algorithm parameters $T$, $N$, and $M$. \LCOMMENT ~ \textbf{Output:} SMC approximation of the PCRLB for Model \ref{Chap2Model1}. \STATE Generate and store $M$ i.i.d.~sequences ${{\{Y^j_{1:T}\}_{j=1}^M}\sim p(y_{1:T})}$ of length $T$ by simulating Model \ref{Chap2Model1} $M$ times, starting at $M$ i.i.d.~initial states ${\{X^j_{0|-1}\}_{j=1}^M\sim p(x_0)}$.
\FOR{$j=1~\text{to}~M$} \FOR{$t=1~\text{to}~T$} \STATE Store resampled particles ${\{{X}^{i,j}_{t|t}\}_{i=1}^{N}\sim p(x_t|y^j_{1:t})}$ using {Algorithm \ref{algorithm:Chap2A0}}. \STATE Store resampled particles ${\{{X}^{i,j}_{t-1:t|t}\}_{i=1}^{N}\sim p(x_{t-1:t}|y^j_{1:t})}$ using Lemma \ref{lemma:Chap2L5}. \ENDFOR \ENDFOR \STATE Compute PFIM $J_0$ at ${t=0}$ based on the initial target state pdf ${X_0\sim p(x_0)}$. If $X_0\sim \mathcal{N}(x_0|C_{x_{0}},P_{0|0})$, then from Lemma \ref{lemma:Chap2L2}, ${J_0=P^{-1}_{0|0}}$. \FOR{$t=0~\text{to}~T-1$} \STATE Compute the SMC estimates (\ref{eq:Chap2EE38a}) through (\ref{eq:Chap2EE38c}) in Lemma \ref{lemma:Chap2L7}. \STATE Compute PCRLB $\tilde{J}^{-1}_{t+1}$ by substituting (\ref{eq:Chap2EE38a}) through (\ref{eq:Chap2EE38c}) into (\ref{eq:Chap2EQ11}). \ENDFOR \end{algorithmic} \end{algorithm} \begin{algorithm}[h] \caption{SMC based PCRLB for Model \ref{Chap2Model2}} \label{algorithm:Chap2A2} \begin{algorithmic}[1] \LCOMMENT ~ \textbf{Input:} Given Model \ref{Chap2Model2}, satisfying Assumptions \ref{Chap2A1} through \ref{Chap2A3}, assume a prior pdf on $X_0$, such that ${X_0\sim p(x_0)}$. Also, select the algorithm parameters $T$, $N$, and $M$. \LCOMMENT ~ \textbf{Output:} SMC approximation of the PCRLB for Model \ref{Chap2Model2}. \STATE Generate and store $M$ i.i.d.~sequences ${{\{Y^j_{1:T}\}_{j=1}^M}\sim p(y_{1:T})}$ of length $T$ by simulating Model \ref{Chap2Model2} $M$ times, starting at $M$ i.i.d.~initial states ${\{X^j_{0|-1}\}_{j=1}^M\sim p(x_0)}$. \FOR{$j=1~\text{to}~M$} \FOR{$t=1~\text{to}~T$} \STATE Store predicted particles ${\{{X}^{i,j}_{t|t-1}\}_{i=1}^{N}\sim p(x_t|y^j_{1:t-1})}$ using {Algorithm \ref{algorithm:Chap2A0}}. \STATE Store resampled particles ${\{{X}^{i,j}_{t|t}\}_{i=1}^{N}\sim p(x_t|y^j_{1:t})}$ using {Algorithm \ref{algorithm:Chap2A0}}. \STATE Store resampled particles ${\{{X}^{i,j}_{t-1|t}\}_{i=1}^{N}\sim p(x_{t-1}|y^j_{1:t})}$ using Lemma \ref{lemma:Chap2L6}.
\ENDFOR \ENDFOR \STATE Compute PFIM $J_0$ at ${t=0}$ based on the initial target state pdf ${X_0\sim p(x_0)}$. If $X_0\sim \mathcal{N}(x_0|C_{x_{0}},P_{0|0})$, then from Lemma \ref{lemma:Chap2L2}, ${J_0=P^{-1}_{0|0}}$. \FOR{$t=0~\text{to}~T-1$} \STATE Compute the SMC estimates (\ref{eq:Chap2EQ38a}) through (\ref{eq:Chap2EQ38c}) in Lemma \ref{lemma:Chap2L9}. \STATE Compute PCRLB $\tilde{J}^{-1}_{t+1}$ by substituting (\ref{eq:Chap2EQ38a}) through (\ref{eq:Chap2EQ38c}) into (\ref{eq:Chap2EQ11}). \ENDFOR \end{algorithmic} \end{algorithm} \begin{rmk} \label{remark:Chap2R8} In practice, the ensemble of $M$ measurement sequences ${\{Y_{1:T}=y^j_{1:T}\}_{j=1}^M}$ required by Algorithms \ref{algorithm:Chap2A1} and \ref{algorithm:Chap2A2} is obtained from historical process data; however, in simulations, it can be generated by simulating Models \ref{Chap2Model1} and \ref{Chap2Model2} $M$ times, starting at i.i.d.~initial states drawn from ${X_0\sim p(x_0)}$. Note that this procedure also requires simulation of the true states; however, the true states are not used in Algorithms \ref{algorithm:Chap2A1} and \ref{algorithm:Chap2A2}. \end{rmk} For illustrative purposes, to assess the numerical reliability of Algorithms \ref{algorithm:Chap2A1} and \ref{algorithm:Chap2A2}, a quality measure is defined as follows \begin{align} \label{eq:Chap2Extra1} \Lambda_J=\frac{1}{T}\sum_{t=1}^T[{J}_t^{-1}-\tilde{J}_t^{-1}]\circ[{J}_t^{-1}-\tilde{J}_t^{-1}], \end{align} where $\Lambda_J$ is the average sum of squared errors in approximating the PCRLB and $\circ$ is the Hadamard product. $\Lambda_J$ is an $n\times n$ matrix, whose diagonal element $\Lambda_J(j,j)$ is the average sum of squared errors accumulated in approximating the PCRLB for state $j$, where $1\leq j\leq n$.
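For readers implementing the above, the recursion (\ref{eq:Chap2E22}) and the quality measure (\ref{eq:Chap2Extra1}) admit a direct numerical sketch. The code below is a minimal illustration; the function names and the use of NumPy are our own choices, not part of the algorithms themselves.

```python
import numpy as np

def pfim_step(J_t, D11, D12, D22):
    """One step of the approximate PFIM recursion:
    J_{t+1} = D22 - D12^T (J_t + D11)^{-1} D12."""
    return D22 - D12.T @ np.linalg.solve(J_t + D11, D12)

def quality_measure(J_inv_true, J_inv_approx):
    """Average elementwise (Hadamard) squared error between the
    theoretical and approximate PCRLB matrices over t = 1, ..., T."""
    T = len(J_inv_true)
    return sum((Jt - Ja) * (Jt - Ja)
               for Jt, Ja in zip(J_inv_true, J_inv_approx)) / T
```

Inverting the matrix returned by \texttt{pfim\_step} reproduces the matrix-inversion-lemma form of the bound in (\ref{eq:Chap2EQ11}), so either expression may be used in an implementation.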
\section{Convergence} \label{sec:Chap2S6} Computing the PCRLB in Lemma \ref{lemma:Chap2L1} involves solving complex, multi-dimensional integrals; however, as stated earlier, for Models \ref{Chap2Model1} and \ref{Chap2Model2} the PCRLB cannot be solved in closed form. Algorithms \ref{algorithm:Chap2A1} and \ref{algorithm:Chap2A2} give an $N$-particle, $M$-simulation SMC approximation of the PCRLB for Models \ref{Chap2Model1} and \ref{Chap2Model2}, respectively. It is therefore natural to question the convergence properties of the proposed numerical method. In this regard, results such as Theorem \ref{theorem:Chap2T2} and Remark \ref{remark:Chap2R6} are important, as they ensure that the proposed numerical solution does not result in an accumulation of errors. It is emphasized that, although Theorem \ref{theorem:Chap2T2} and Remark \ref{remark:Chap2R6} do not necessarily imply convergence of the SMC based PCRLB and MSE to their theoretical values, they nevertheless provide a strong theoretical basis for the numerous approximations used in Algorithms \ref{algorithm:Chap2A1} and \ref{algorithm:Chap2A2}. From an application perspective, it is instructive to highlight that the numerical quality of the SMC based PCRLB approximation in Algorithms \ref{algorithm:Chap2A1} and \ref{algorithm:Chap2A2} can be improved by simply increasing the number of particles ($N$) and the number of MC simulations ($M$). The choices of $N$ and $M$ are user defined and can be selected based on the required numerical accuracy and the available computing speed. It is important to emphasize that, due to the multiple approximations involved in deriving a tractable solution, for practical purposes, with finite $N$ and $M$, the condition ${P}_{t|t}-\tilde{J}^{-1}_t\succcurlyeq 0$ is not guaranteed to hold for all $t\in\mathbb{N}$. The quality of the SMC based PCRLB solution is validated next via simulation.
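To make explicit how $N$ and $M$ enter the computation, the double averages of Lemma \ref{lemma:Chap2L9} can be sketched as below; the array layout and the Jacobian callables are illustrative assumptions on our part, not something prescribed by the lemma.

```python
import numpy as np

def d_terms(particles_smooth, particles_pred, jac_f, jac_g, Q_inv, R_inv):
    """SMC estimates of D_t^{11}, D_t^{12}, D_t^{22} as double averages
    over M measurement sequences and N particles per sequence.
    particles_smooth: (M, N, n) samples from p(x_t | y_{1:t+1});
    particles_pred:   (M, N, n) samples from p(x_{t+1} | y_{1:t});
    jac_f, jac_g: callables returning the (n, n) and (m, n) Jacobians."""
    M, N, n = particles_smooth.shape
    D11 = np.zeros((n, n)); D12 = np.zeros((n, n)); D22 = np.zeros((n, n))
    for j in range(M):
        for i in range(N):
            F = jac_f(particles_smooth[j, i])   # Jacobian of f_t
            D11 += F.T @ Q_inv @ F
            D12 += -F.T @ Q_inv
            G = jac_g(particles_pred[j, i])     # Jacobian of g_{t+1}
            D22 += G.T @ R_inv @ G
    return D11 / (M * N), D12 / (M * N), Q_inv + D22 / (M * N)
```

For linear $f_t$ and $g_{t+1}$ the Jacobians are constant and the averages reduce to the exact expressions regardless of $N$ and $M$, which is a convenient sanity check on an implementation.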
\section{Simulation examples} \label{sec:Chap2S7} In this section, two simulation examples are presented to demonstrate the utility and performance of the proposed SMC based PCRLB solution. The first example is a ballistic target tracking problem at re-entry phase. The aim of this study is threefold: first, to demonstrate the performance and utility of the proposed method on a practical problem; second, to demonstrate the quality of the bound approximation for a range of target state and sensor noise variances; and third, to study the sensitivity of the involved SMC approximations to the number of particles used. The performance of the SMC based PCRLB solution is then illustrated on a second example involving a uni-variate, non-stationary growth model, which is a standard non-linear and bimodal benchmark model. This example is profiled to demonstrate the accuracy of the SMC based PCRLB solution for highly non-linear SSMs with non-Gaussian noise. \subsection{Example 1: Ballistic target tracking at re-entry} \label{sec:Chap2S7.1} In Section \ref{sec:Chap2S4.3}, an SMC based method for approximating the PCRLB was presented for non-linear SSMs with additive Gaussian state and sensor noise (see Algorithm \ref{algorithm:Chap2A2}). In this section, the quality of Algorithm \ref{algorithm:Chap2A2} is validated on a practical problem of ballistic target tracking at re-entry phase. This particular problem has attracted considerable attention from researchers for both theoretical and practical reasons. See \cite{RP2001} and the references cited therein for a detailed survey on ballistic target tracking. \subsubsection{Model setup} Consider a target launched along a ballistic flight whose kinematics are described in a 2D Cartesian coordinate system. This particular description of the kinematics assumes that the only forces acting on the target at any given time are the forces due to gravity and drag.
All other forces, such as centrifugal acceleration, Coriolis acceleration, wind, lift force, and spinning motion, are assumed to have a small effect on the target trajectory. With the position and the velocity of the target at time ${t\in\mathbb{N}}$ described in a 2D Cartesian coordinate system as $(\text{X}_t, \text{H}_t)$ and $(\dot{\text{X}}_t, \dot{\text{H}}_t)$, respectively, its motion in the re-entry phase can be described by the following discrete-time non-linear SSM \cite{ARB2002} \begin{align} \label{eq:Chap2E33} {X}_{t+1}=A{X}_t+GF_t({X}_t)+G\left[ \begin{array}{c} 0 \\ -g \end{array} \right]+V_t, \end{align} where the state vector is ${X}_t\triangleq[\text{X}_t\quad \dot{\text{X}}_t\quad \text{H}_t\quad \dot{\text{H}}_t]^T$. Also, the matrices $A$ and $G$ are given by \begin{align} A\triangleq\left[ \begin{array}{cccc} 1 & \Delta T & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & \Delta T \\ 0 & 0 & 0 & 1 \\ \end{array} \right], G\triangleq \left[ \begin{array}{cc} \displaystyle {\frac{\Delta T^2}{2}} & 0 \\ \displaystyle \Delta T & 0 \\ 0 &\displaystyle \frac{\Delta T^2}{2} \\ 0 &\displaystyle \Delta T \\ \end{array} \right], \end{align} where $\Delta T$ is the time interval between two consecutive radar measurements. In (\ref{eq:Chap2E33}), $F_t(X_t)$ models the drag force, which acts in a direction opposite to the target velocity.
In terms of the states, $F_t(X_t)$ can be modelled as \begin{align} \label{eq:Chap2E34} \displaystyle{ F_t({X}_t)=-\frac{g\rho(\text{H}_t)}{2\beta}\sqrt{\dot{\text{X}}_t^2+\dot{\text{H}}_t^2}} \left[ \begin{array}{c} \dot{\text{X}}_t \\ \dot{\text{H}}_t \end{array} \right], \end{align} where: $g$ is the acceleration due to gravity; $\beta$ is the ballistic coefficient, whose value depends on the shape, mass, and cross-sectional area of the target \cite{R2004}; and $\rho(\text{H}_t)$ is the density of the air, defined as an exponentially decaying function of $\text{H}_t$, such that \begin{align} \rho(\text{H}_t)=\alpha_1e^{(-\alpha_2\text{H}_t)} \end{align} where: $\alpha_1=1.227~\text{kg}\cdot \text{m}^{-3}$, $\alpha_2=1.093\times 10^{-4}~\text{m}^{-1}$ for $\text{H}_t<9144~\text{m}$; and $\alpha_1=1.754~\text{kg}\cdot\text{m}^{-3}$, $\alpha_2=1.49\times 10^{-4}~\text{m}^{-1}$ for $\text{H}_t\geq9144~\text{m}$. Note that the drag force $F_t(X_t)$ is the only non-linear term in the state equation. In (\ref{eq:Chap2E33}), the state noise ${V_t\in\mathbb{R}^4}$ is an i.i.d.~sequence of multi-variate Gaussian random vectors, represented as ${V_t\sim \mathcal{N}(v_t|0,Q_t)}$, with zero mean and covariance matrix $Q_t$ given as \begin{align} Q_t=\gamma I_{2\times 2}\otimes \Theta, \quad \Theta= \left[ \begin{array}{cc} \displaystyle \frac{\Delta T^3}{3}&\displaystyle \frac{\Delta T^2}{2} \\ \displaystyle \frac{\Delta T^2}{2}&\displaystyle\Delta T \end{array} \right], \end{align} where: $\gamma\in \mathbb{R}_+$; $I_{2\times 2}$ is the $2\times2$ identity matrix; and $\otimes$ is the Kronecker product. The intensity of the state noise, determined by $\gamma$, accounts for all the forces neglected in (\ref{eq:Chap2E33}), including any deviations arising from system--model mismatch. The target measurements are collected by a conventional radar (e.g., a dish radar) assumed to be stationed at the origin.
The sensor readings are measured in the natural sensor coordinate system and comprise the range ($R_t$) and elevation ($E_t$) of the target. The radar readings $Y_t=[R_t\quad E_t]^T$ are related to the states $X_t$ through the non-linear observation model given below. \begin{align} \label{eq:Chap2E35} {Y}_{t}=\left[ \begin{array}{c} \displaystyle{{\sqrt{\text{X}_t^2+\text{H}_t^2}}} \\ \displaystyle{\arctan\left(\frac{\text{H}_t}{\text{X}_t}\right)} \end{array} \right]+W_t. \end{align} In (\ref{eq:Chap2E35}), $W_t\in\mathbb{R}^2$ is an i.i.d.~sequence of multi-variate Gaussian random vectors, represented as $W_t\sim \mathcal{N}(w_t|0,R_t)$, with zero mean and non-singular covariance matrix $R_t$ given as \begin{align} R_t=\left[ \begin{array}{cc} \sigma_r^2& 0 \\ 0&\sigma_e^2 \end{array} \right], \end{align} where ${\sigma_r\in\mathbb{R}_+}$ and ${\sigma_e\in\mathbb{R}_+}$ are the standard deviations associated with the range and elevation measurements, respectively. In (\ref{eq:Chap2E35}), it is assumed that the true target elevation angle lies between $0$ and $\pi/2$ radians; otherwise, it suffices to add $\pi$ radians to the $\arctan$ term in (\ref{eq:Chap2E35}). \begin{rmk} \label{remark:Chap2R9} To avoid the use of a non-linear sensor model, some authors \cite{ARB2002,Lei2011} considered transforming the radar measurements in (\ref{eq:Chap2E35}) into the Cartesian coordinate system, wherein the sensor dynamics manifest themselves in a linear model. Even though this strategy eliminates the need to handle non-linearity in the sensor measurements, tracking in Cartesian coordinates couples the sensor noise across the two coordinate systems and makes the noise non-Gaussian and state dependent \cite{RP2001c}. Since the proposed method can deal with strong state and sensor non-linearities, the radar readings are processed in the natural sensor coordinates alone. \end{rmk} \subsubsection{Simulation setup} For simulation, the model parameters are selected as given in Table \ref{tab:Chap2Tab1}.
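For concreteness, the deterministic part of (\ref{eq:Chap2E33}) and the radar model (\ref{eq:Chap2E35}) can be sketched as follows. This is an illustrative implementation only: $g$, $\beta$, and $\Delta T$ are taken from Table \ref{tab:Chap2Tab1}, the piecewise density coefficients follow the text, and \texttt{arctan2} is used in place of the quadrant convention discussed after (\ref{eq:Chap2E35}).

```python
import numpy as np

G_ACC, BETA, DT = 9.8, 40000.0, 2.0   # g, beta, radar sampling time (Table 1)

def air_density(h):
    """Piecewise-exponential air density rho(H_t)."""
    if h < 9144.0:
        return 1.227 * np.exp(-1.093e-4 * h)
    return 1.754 * np.exp(-1.49e-4 * h)

def f_t(x):
    """Deterministic part of the state equation (33); x = [X, Xdot, H, Hdot]."""
    A = np.array([[1.0, DT, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0, DT], [0.0, 0.0, 0.0, 1.0]])
    Gm = np.array([[DT**2 / 2, 0.0], [DT, 0.0], [0.0, DT**2 / 2], [0.0, DT]])
    speed = np.hypot(x[1], x[3])
    # drag (34): acts opposite to the velocity vector
    drag = -G_ACC * air_density(x[2]) / (2 * BETA) * speed * np.array([x[1], x[3]])
    return A @ x + Gm @ drag + Gm @ np.array([0.0, -G_ACC])

def g_t(x):
    """Radar range and elevation (35)."""
    return np.array([np.hypot(x[0], x[2]), np.arctan2(x[2], x[0])])
```

Adding samples of $V_t\sim\mathcal{N}(0,Q_t)$ and $W_t\sim\mathcal{N}(0,R_t)$ to the outputs of \texttt{f\_t} and \texttt{g\_t} gives the Monte Carlo trajectories used by the algorithm.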
The aim of this study is to evaluate the quality of the SMC based PCRLB solution for a range of target state and sensor noise variances. This allows full investigation of the quality of the SMC based approximation for a range of noise characteristics. The cases considered here are given in Table \ref{tab:Chap2Tab2}. From Assumption \ref{Chap2A1}, $\beta$ is assumed to be fixed and known \emph{a priori}. \begin{table}[h] \caption{Parameter values used in Example 1.} \centering \begin{tabular}{lll} \hline Process variables &Symbol&Values\\ \hline accel. due to gravity&$g$&$9.8~\text{m/s}^2$\\ ballistic coefficient &$\beta$&$40000~\text{kg}\cdot\text{m}^{-1}\cdot\text{s}^{-2}$\\ radar sampling time&$\Delta T$&$2~\text{s}$\\ total tracking time&$T$&$120~\text{s}$\\ state noise&$V_t$&$V_t\sim \mathcal{N}(v_t|0,Q_t)$\\ sensor noise&$W_t$&$W_t\sim \mathcal{N}(w_t|0,R_t)$\\ noise parameters&$\gamma,~\sigma_r,~\sigma_e$&see Table \ref{tab:Chap2Tab2}\\ initial state&$X^\star_0$&$\left[ \begin{array}{l} 232~\text{km} \\ 2.290\cos(190^\circ)~\text{km/s}\\ 88~\text{km}\\ 2.290\sin(190^\circ)~\text{km/s}\\ \end{array} \right]$\\ probability of detection&$\pr_d$&1\\ probability of false alarm&$\pr_f$&0\\ \hline \end{tabular} \label{tab:Chap2Tab1} \end{table} \begin{table}[h] \caption{Cases considered for Example 1.} \centering \begin{tabular}{cccc} \hline Case &$\gamma$&$\sigma_r$&$\sigma_e$\\ \hline 1&$1.0$&$100~\text{m}$&$0.017~\text{rad}$\\ 2&$5.0$&$100~\text{m}$&$0.017~\text{rad}$\\ 3&$1.0$&$500~\text{m}$&$0.085~\text{rad}$\\ 4&$5.0$&$500~\text{m}$&$0.085~\text{rad}$\\ \hline \end{tabular} \label{tab:Chap2Tab2} \end{table} \begin{figure*}[t] \centering \includegraphics[scale=0.6]{SampleTrajectory.eps} \caption{{Sample trajectory showing position and velocity of the target at re-entry phase.}} \label{fig:Chap2F0} \end{figure*} \begin{table}[!h] \caption{Variable values used in Example 1.} \centering \begin{tabular}{lll} \hline Process variables &Symbol&Values\\ \hline state noise&$V_t$&$V_t\sim
\mathcal{N}(v_t|0,Q_t)$\\ sensor noise&$W_t$&$W_t\sim \mathcal{N}(w_t|0,R_t)$\\ noise parameters&$\gamma,~\sigma_r,~\sigma_e$&see Table \ref{tab:Chap2Tab2}\\ initial state&$X_0$&$X_0\sim \mathcal{N}(x_0|C_{x_0},P_{0|0})$\\ &$C_{x_0}$&$\left[ \begin{array}{l} 232~\text{km} \\ 2.290\cos(190^\circ)~\text{km/s}\\ 88~\text{km}\\ 2.290\sin(190^\circ)~\text{km/s}\\ \end{array} \right]$\\ &$[P_{0|0}]^{1/2}$&$\left[ \begin{array}{llll} 1~\text{km} & 0 & 0 & 0 \\ 0 &20~\text{m/s} & 0 & 0 \\ 0 & 0 &1~\text{km}& 0 \\ 0 & 0 & 0 & 20~\text{m/s} \\ \end{array} \right]$\\ Number of particles&$N$&1000\\ MC simulations&$M$&200\\ \hline \end{tabular} \label{tab:Chap2Tab3} \end{table} Figure \ref{fig:Chap2F0} shows a sample trajectory of the target in the $\text{X}$--$\text{H}$ plane along with its velocity map as a function of time, generated using Case 1 (see Table \ref{tab:Chap2Tab2}). \subsubsection{Results} The kinematics of the ballistic target consist of non-linear state and sensor models with additive Gaussian noise, for which the PCRLB can be approximated using Algorithm \ref{algorithm:Chap2A2}.
First, the state and sensor models in (\ref{eq:Chap2E33}) and (\ref{eq:Chap2E35}), respectively, are defined as \begin{subequations} \begin{align} f_t(X_t)&=A{X}_t+GF_t({X}_t)+G\left[ \begin{array}{c} 0 \\ -g \end{array} \right]\label{eq:Chap2E36a},\\ g_{t+1}(X_{t+1})&=\left[ \begin{array}{c} \displaystyle{{\sqrt{\text{X}_{t+1}^2+\text{H}_{t+1}^2}}} \\ \displaystyle{\arctan\left(\frac{\text{H}_{t+1}}{\text{X}_{t+1}}\right)} \end{array} \right].\label{eq:Chap2E36b} \end{align} \end{subequations} To compute the required gradients $\nabla_{X_t}f_t(X_t)$ and $\nabla_{X_{t+1}}g_{t+1}(X_{t+1})$, differentiating (\ref{eq:Chap2E33}) with respect to $X_t$, and (\ref{eq:Chap2E35}) with respect to $X_{t+1}$, yields \begin{subequations} \begin{align} \nabla_{X_t}f_t(X_t)&=A+GM_t({X}_t),\label{eq:Chap2E37a}\\ \nabla_{X_{t+1}}g_{t+1}(X_{t+1})&=N_{t+1}({X}_{t+1})\label{eq:Chap2E37b}, \end{align} \end{subequations} where: $M_t({X}_t)$ and $N_{t+1}({X}_{t+1})$ in (\ref{eq:Chap2E37a}) and (\ref{eq:Chap2E37b}), respectively, are $2\times4$ matrices, whose entries are: \begin{subequations} \begin{align} M_t(X_t)[1,1]&=0,\\ M_t(X_t)[2,1]&=0,\\ M_t(X_t)[1,2]&=-\frac{g}{2\beta}\rho(\text{H}_t)\left[\frac{2\dot{\text{X}}_t^2+\dot{\text{H}}_t^2}{\sqrt{\dot{\text{X}}_t^2+\dot{\text{H}}_t^2}}\right],\\ M_t(X_t)[2,2]&=-\frac{g}{2\beta}\rho(\text{H}_t)\left[\frac{\dot{\text{X}}_t\dot{\text{H}}_t}{\sqrt{\dot{\text{X}}_t^2+\dot{\text{H}}_t^2}}\right],\\ M_t(X_t)[1,3]&=\frac{g\alpha_2}{2\beta}\rho(\text{H}_t)\left[{\sqrt{\dot{\text{X}}_t^2+\dot{\text{H}}_t^2}}\right]\dot{\text{X}}_t,\\ M_t(X_t)[2,3]&=\frac{g\alpha_2}{2\beta}\rho(\text{H}_t)\left[{\sqrt{\dot{\text{X}}_t^2+\dot{\text{H}}_t^2}}\right]\dot{\text{H}}_t, \end{align} \begin{align} M_t(X_t)[1,4]&=M_t(X_t)[2,2],\\ M_t(X_t)[2,4]&=-\frac{g}{2\beta}\rho(\text{H}_t)\left[\frac{\dot{\text{X}}_t^2+2\dot{\text{H}}_t^2}{\sqrt{\dot{\text{X}}_t^2+\dot{\text{H}}_t^2}}\right]; \end{align} \end{subequations} and: \begin{subequations} \begin{align}
N_{t+1}(X_{t+1})[1,1]&=\frac{\text{X}_{t+1}}{\sqrt{{\text{X}}_{t+1}^2+{\text{H}}_{t+1}^2}},\\ N_{t+1}(X_{t+1})[2,1]&=-\frac{\text{H}_{t+1}}{{{\text{X}}_{t+1}^2+{\text{H}}_{t+1}^2}},\\ N_{t+1}(X_{t+1})[1,2]&=0,\\ N_{t+1}(X_{t+1})[2,2]&=0,\\ N_{t+1}(X_{t+1})[1,3]&=\frac{\text{H}_{t+1}}{\sqrt{{\text{X}}_{t+1}^2+{\text{H}}_{t+1}^2}},\\ N_{t+1}(X_{t+1})[2,3]&=\frac{\text{X}_{t+1}}{{{\text{X}}_{t+1}^2+{\text{H}}_{t+1}^2}},\\ N_{t+1}(X_{t+1})[1,4]&=0,\\ N_{t+1}(X_{t+1})[2,4]&=0. \end{align} \end{subequations} To evaluate the numerical quality of Algorithm \ref{algorithm:Chap2A2}, we compare the SMC based PCRLB solution against the theoretical values. The theoretical bound is computed using an ensemble of the true state trajectories, simulated using (\ref{eq:Chap2E33}) (see \cite{ARB2002,R2004} for further details). Here, we compare the square root of the diagonal elements of the theoretical PCRLB matrix $J^{-1}_t$ and its approximation $\tilde{J}^{-1}_t$ for all $t\in[0,T]$. The results are summarized next for the cases given in Table \ref{tab:Chap2Tab2}. For a fair comparison of all the cases, the parameters required by Algorithm \ref{algorithm:Chap2A2} are specified as given in Table \ref{tab:Chap2Tab3}. \noindent \emph{Case 1:} Figure \ref{fig:Chap2F1} compares the square root of the SMC based approximate bound against the theoretical PCRLB. Clearly, the approximate bound for both the position and velocity of the target in both the $\text{X}$ and $\text{H}$ coordinates accurately follows the theoretical bound at all tracking time instants. Note that the high values of the PCRLB in Figure \ref{fig:Chap2F1} highlight tracking difficulties as the target approaches the ground.\\ \emph{Case 2:} In this case, the state noise intensity is increased fivefold and the sensor noise is kept at a small value (see Table \ref{tab:Chap2Tab2}). Notwithstanding the increased noise variance, the PCRLB approximation is almost exact at all tracking time instants.
The results for Case 2 are shown in Figure \ref{fig:Chap2F1b}. Table \ref{tab:Chap2Tab4} reports the $\Lambda_J$ values for Case 2, computed using (\ref{eq:Chap2Extra1}). Based on Table \ref{tab:Chap2Tab4}, the results from Cases 1 and 2 are comparable in the order of their $\Lambda_J$ values. To allow further comparison with Case 1, the square roots of the approximate PCRLBs for Cases 1 and 2 are compared in Figure \ref{fig:Chap2F6}. In terms of magnitude, the PCRLB for Case 2 is higher than that for Case 1, suggesting tracking difficulties with larger noise intensity.\\ \emph{Case 3:} Again, for Case 3, performance similar to Figure \ref{fig:Chap2F1} is obtained, as shown in Figure \ref{fig:Chap2F1c}. The same is evident from Table \ref{tab:Chap2Tab4}, where the average sums of squared errors in approximating the PCRLB for Cases 1 and 3 are of the same order.\\ \emph{Case 4:} Results for Case 4 are given in Figure \ref{fig:Chap2F1d}. The higher values of the PCRLB for Case 4 in Figure \ref{fig:Chap2F6} reaffirm the estimation issues associated with larger noise variances. Similar conclusions can be drawn from Table \ref{tab:Chap2Tab4}, where the $\Lambda_J$ values for Case 4 are the highest among all the cases. Nevertheless, the errors are bounded and within a few orders of magnitude of the $\Lambda_J$ values reported for Case 1.
\begin{table} \caption{Average sum of squared errors in approximating the PCRLB for the states in Example 1, under the cases in Table \protect\ref{tab:Chap2Tab2}.} \centering \begin{tabular}{lcccc} \hline $\Lambda_J$ values& Case 1 & Case 2 & Case 3 & Case 4 \\ \hline $\Lambda_J(1,1)~(\times 10^{-6})$ & 9.30 & 50.7 & 5.87 & 130 \\ $\Lambda_J(2,2)~(\times 10^{-11})$ & 4.50 & 2.06 & 7.08 & 46.2 \\ $\Lambda_J(3,3)~(\times 10^{-5})$ & 3.56 & 23.1 & 2.96 & 100 \\ $\Lambda_J(4,4)~(\times 10^{-13})$ & 8.63 & 24.8 & 19.6 & 122 \\ \hline \end{tabular} \label{tab:Chap2Tab4} \end{table} All the above case studies suggest that the proposed approach is accurate in approximating the theoretical PCRLB even under large state and sensor noise variances. \begin{figure*}[!htbp] \label{fig:subfigures} \begin{center} \subfigure[Square root of the theoretical (solid line with marker) and approximate PCRLB (solid line) for all the target states under Case 1]{% \label{fig:Chap2F1} \includegraphics[scale=0.40]{q_1_R_100_17.eps} }% \subfigure[Square root of the theoretical (solid line with marker) and approximate PCRLB (solid line) for all the target states under Case 2]{% \label{fig:Chap2F1b} \includegraphics[scale=0.40]{q_5_R_100_17.eps} }\\ \subfigure[Square root of the theoretical (solid line with marker) and approximate PCRLB (solid line) for all the target states under Case 3]{% \label{fig:Chap2F1c} \includegraphics[scale=0.40]{q_1_R_500_85.eps} }% \subfigure[Square root of the theoretical (solid line with marker) and approximate PCRLB (solid line) for all the target states under Case 4]{% \label{fig:Chap2F1d} \includegraphics[scale=0.40]{q_5_R_500_85.eps} }\\ \subfigure[Square root of the approximate PCRLBs for the target states under the cases listed in Table \ref{tab:Chap2Tab2}.]{% \label{fig:Chap2F6} \includegraphics[scale=0.40]{Figure2.eps} }% \subfigure[Square root of the theoretical and approximate PCRLBs for different values of $N$ in Example 1, Case 4.
Note that all the sub-figures have been appropriately scaled to allow clear illustration of the effect of $N$ on the quality of approximation.]{% \label{fig:Chap2F7} \includegraphics[scale=0.40]{Figure3.eps} }% \end{center} \caption{% Results for Simulation Example 1. }% \end{figure*} \begin{rmk} \label{remark:Chap2R10} Note that in \cite{Lei2011}, a similar ballistic target tracking problem at re-entry phase was considered to illustrate the use of an EKF and UKF based method in approximating the theoretical PCRLB. Unlike the non-linear sensor model considered here (see (\ref{eq:Chap2E35})), \cite{Lei2011} used the change of coordinates method to obtain a linear sensor model representation. It is important to highlight that even with a linear sensor model, the EKF and UKF based method yields a biased estimate of the PCRLB for the target states (see Figures 4 through 7 in \cite{Lei2011}). In contrast, under the more challenging situation considered here, the SMC based method yields an unbiased estimate of the PCRLB (see Figures \ref{fig:Chap2F1} through \ref{fig:Chap2F1d}, and Table \ref{tab:Chap2Tab4}). This highlights the advantages of the SMC based method (both in terms of accuracy and applicability) over the EKF and UKF based PCRLB in the presence of strong system or sensor non-linearities. \end{rmk} Next, we study the sensitivity of the involved SMC approximations to the number of particles used. In Figure \ref{fig:Chap2F7}, approximate PCRLB bounds are compared against the theoretical PCRLB for different values of $N$. The results are obtained by varying $N$ in Algorithm \ref{algorithm:Chap2A2}. From Figure \ref{fig:Chap2F7}, it is clear that by simply increasing $N$, which is a tuning parameter in Algorithm \ref{algorithm:Chap2A2}, the quality of the SMC approximations can be significantly improved. For all the simulation cases, the number of Monte Carlo simulations was selected as ${M=200}$ (see Table \ref{tab:Chap2Tab3}).
Computation of a single Monte Carlo simulation took 0.69 seconds on a 3.33 GHz Intel Core i5 processor running on Windows 7. Note that the reported absolute execution time is solely for instructive purposes and is not intended to reflect on the true computational complexity of the proposed algorithm. Collectively, from Figures \ref{fig:Chap2F1} through \ref{fig:Chap2F7}, it is evident that the SMC based method is accurate in approximating the theoretical PCRLB for a range of target state and sensor noise variances. \subsection{Example 2: A non-linear and non-Gaussian system} \label{sec:Chap2S7.2} The aim of this study is to demonstrate the effectiveness of the proposed SMC based method in approximating the PCRLB in the presence of non-Gaussian noise. \subsubsection{Model setup} A more challenging situation is considered in this section, involving the following discrete-time, uni-variate non-stationary growth model \begin{subequations} \label{eq:Chap2E38} \begin{align} X_{t+1}=&\frac{X_t}{2}+\frac{25X_t}{1+X_t^2}+8\cos{(1.2t)}+V_t,\label{eq:Chap2E38a}\\ Y_t=&\frac{X_t^2}{20}+W_t \label{eq:Chap2E38b}, \end{align} \end{subequations} where ${V_t\in\mathbb{R}}$ is an i.i.d.~sequence following a Gaussian distribution, such that ${V_t\sim \mathcal{N}(v_t|0,Q_t)}$. The noise variance is defined as $Q_t=5\times 10^{-3}~\forall t\in[1,T]$, where $T$ is $30$ seconds. Also, the initial state is modelled as ${X_0\sim\mathcal{N}(x_0|0,0.01)}$. This example was chosen because it is widely acknowledged as a benchmark problem in non-linear state estimation in several previous studies \cite{D2001,HB2004}. \subsubsection{Simulation setup} To compute the SMC based approximate PCRLB solution, two different sensor noise models are considered in (\ref{eq:Chap2E38b}).
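For readers who wish to reproduce this benchmark, a minimal simulation sketch is given below (Python/NumPy; the function name is ours, and for the Rayleigh case we take $R_t$ as the squared scale parameter for simplicity, which need not coincide with the paper's exact parameterization):

```python
import numpy as np

def simulate_growth_model(T=30, Q=5e-3, R=1e-3, rayleigh=False, seed=0):
    """Simulate the uni-variate non-stationary growth model (38)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, np.sqrt(0.01))          # X_0 ~ N(x_0 | 0, 0.01)
    xs, ys = [], []
    for t in range(1, T + 1):
        # state transition (38a): X_{t+1} = X_t/2 + 25 X_t/(1+X_t^2) + 8 cos(1.2 t) + V_t
        x = x / 2 + 25 * x / (1 + x**2) + 8 * np.cos(1.2 * t) + rng.normal(0.0, np.sqrt(Q))
        # measurement (38b): Y_t = X_t^2/20 + W_t, Gaussian or Rayleigh sensor noise
        w = rng.rayleigh(np.sqrt(R)) if rayleigh else rng.normal(0.0, np.sqrt(R))
        xs.append(x)
        ys.append(x**2 / 20 + w)
    return np.array(xs), np.array(ys)
```

Trajectories generated this way can then be fed to any SMC filter to reproduce the two noise cases of Example 2.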
For Case 1, $W_t\in\mathbb{R}$ is an i.i.d.~sequence following a Gaussian distribution, such that $W_t\sim \mathcal{N}(w_t|0, R_t)$, while for Case 2, $W_t\in\mathbb{R}$ is again an i.i.d.~sequence, but follows a Rayleigh distribution, such that $W_t\sim \mathcal{R}(w_t|R_t)$. For both cases, the sensor noise variance ${R_t=1\times 10^{-3}~\forall t\in[1,T]}$ is considered. Here Case 2 represents a much more challenging situation, where estimation is considered under a non-Gaussian sensor noise. For a fair comparison, ${M=200}$ and ${N=100}$ are selected. \subsubsection{Results} \noindent \emph{Case 1:} Comparison of the approximate and the theoretical PCRLB for the Gaussian sensor noise case is given in Figure \ref{fig:Chap2F8}. The results suggest that for the chosen $N$, the approximate PCRLB almost exactly follows the theoretical PCRLB at all filtering time instants. The same is reflected in the error value computed using (\ref{eq:Chap2Extra1}), which is $\Lambda_J=4.19\times 10^{-9}$. \noindent \emph{Case 2:} Figure \ref{fig:Chap2F8} compares the approximate PCRLB solution against the theoretical PCRLB for the Rayleigh sensor noise case. Although the approximation almost exactly follows the theoretical solution, compared to Case 1, the approximation is relatively coarser at certain time instants. This highlights the issues associated with estimation under non-Gaussian noise with limited $N$. Finally, the $\Lambda_J$ value for Case 2 is ${4.62\times 10^{-8}}$, which is within an order of magnitude of the value reported for Case 1. The simulation study clearly illustrates the efficacy of the proposed method in approximating the PCRLB for non-linear SSMs with non-Gaussian noise. \begin{figure}[t!]
\centering \includegraphics[scale=0.42]{Figure4.eps} \caption{{Comparing the approximate PCRLB against the theoretical PCRLB in Example 2 under Gaussian (left) and Rayleigh (right) sensor noise distributions.}} \label{fig:Chap2F8} \end{figure} \section{Discussions} \label{sec:Chap2S8} The simulation results in Section \ref{sec:Chap2S7} demonstrate the utility and performance of the SMC based PCRLB approximation method developed in this paper. It is important to highlight that despite the many convergence results discussed in Section \ref{sec:Chap2S6}, the choice of an SMC method plays a crucial role in determining the quality of the PCRLB approximation. Here, the use of the sequential-importance-resampling (SIR) filter of \cite{G2008,S2011} is motivated by the fact that it is relatively less sensitive to large state noise and is computationally less expensive. Furthermore, the importance weights are easily evaluated and the importance functions can be easily sampled \cite{R2004}; however, other algorithms such as Auxiliary-SIR (ASIR) \cite{APF1999} or Regularized PF (RPF) \cite{RPF2001} algorithm can also be used in place of SIR, as long as they are consistent with the approach developed herein. An appropriate choice of the resampling method in Algorithm \ref{algorithm:Chap2A0} is also crucial as it can substantially improve the quality of the approximations. The choice of systematic resampling is supported by an easy implementation procedure and a low computational complexity of $\mathcal{O}(N)$ \cite{D2001}. Other resampling schemes such as stratified sampling \cite{K1996} and residual sampling \cite{LC1998} can also be used as an alternative to systematic resampling in the proposed framework. In summary, with the aforementioned options, coupled with the user-defined choice of the parameters $N$ and $M$, an SMC based PCRLB approximation approach provides an efficient control over the numerical quality of the solution.
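As a concrete illustration of the systematic resampling step discussed above, a minimal sketch of the standard algorithm is given below (our own implementation, not the exact code behind Algorithm \ref{algorithm:Chap2A0}):

```python
import numpy as np

def systematic_resample(weights, rng=None):
    """O(N) systematic resampling: a single uniform draw u, then N evenly
    spaced points (u + i)/N are matched against the cumulative weights."""
    rng = rng or np.random.default_rng()
    w = np.asarray(weights, dtype=float)
    N = len(w)
    positions = (rng.uniform() + np.arange(N)) / N   # u_i = (u + i)/N
    cumsum = np.cumsum(w / w.sum())
    cumsum[-1] = 1.0                                 # guard against round-off
    return np.searchsorted(cumsum, positions)        # resampled particle indices
```

Each returned index selects a surviving particle; the expected number of copies of particle $i$ is proportional to its normalized weight, at a cost linear in $N$.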
\section{Conclusions} \label{sec:Chap2S9} In this paper a numerical method to recursively approximate the PCRLB in \cite{T1998} for general discrete-time, non-linear SSMs operating with ${\pr_d=1}$ and ${\pr_f=0}$ is presented. The presented method is effective in approximating the PCRLB when the true states are hidden or unavailable. This has practical relevance in situations wherein the test data consist of only sensor readings. The proposed approach makes use of the sensor readings to estimate the hidden true states, using an SMC method. The method is general and can be used to compute the lower bound for non-linear dynamical systems, with non-Gaussian state and sensor noise. The quality and utility of the SMC based PCRLB approximation was validated on two simulation examples, including a practical problem of ballistic target tracking at re-entry phase. The numerical quality of the SMC based PCRLB approximation was investigated for a range of target state and sensor noise variances, and with different numbers of particles. The proposed method exhibited acceptable and consistent performance in all the simulations. Increasing the number of particles was, in particular, found to be effective in reducing the errors in the PCRLB estimates. Finally, some of the strategies for improving the quality of the SMC based approximations were also discussed. The current paper assumes the model parameters to be known a priori; however, for certain applications, this assumption might be a little restrictive. Future work will focus on extending the results of this work to handle such situations. Furthermore, use of the SMC method in approximating the modified versions of the PCRLB, which allow tracking in situations such as target-generated measurements, measurement origin uncertainty, cluttered environments, and Markovian models, will also be considered.
\section*{Acknowledgement} This work was supported by the Natural Sciences and Engineering Research Council (NSERC), Canada. \bibliographystyle{IEEETran}
\section{Introduction and summary} The experimental program at the Large Hadron Collider (LHC) at CERN may soon lead to the discovery of new physics beyond the Standard Model (SM) of elementary particle interactions. Some hints about the insufficiency of the Standard Model have already been provided, e.g., by the observation of neutrino masses and, more recently, by the measurements concerning heavy flavors \cite{Lunghi:2009sm,Soni:2008bc}. More definite conclusions are expected after the Higgs sector of the SM becomes subject to intensive probing at the LHC. Decades of intensive research on beyond Standard Model (BSM) model building and phenomenology have provided several alternative theoretical concepts for experimental testing. Possible model building paradigms include extra dimensions, supersymmetry, unified models and technicolor. One of the simplest extensions of the Standard Model, arguably, is the hypothetical existence of a fourth generation of elementary matter fermions \cite{Frampton:1999xi,Holdom:2006mr,Bobrowski:2009ng}. Since we do not know about the origins of flavor, there seems to be no need to stick to just three generations. However, the {\em{ad hoc}} insertion of yet another replica of quark and lepton doublets and singlets may seem, admittedly, a very unimaginative direction to proceed. The phenomenological appeal of just three generations is due to the close link between the constraints on the number of neutrino species from the formation of light elements in the early universe, i.e. Big Bang nucleosynthesis, and the high-energy experiments measuring the $Z$-width. Taken together, these seem to show indisputably that there are only three conventional neutrinos with mass below $M_Z/2\simeq 45$ GeV. However, the need for BSM physics provides phenomenological tension towards the other direction, and this has led to a large variety of further model building resting on the introduction of new degrees of freedom.
For example, in grand-unified theories (GUTs) beyond the minimal SU(5) case there inevitably appear new fermions, and models with supersymmetry or new gauge interactions introduce, by definition, an extended particle spectrum. In this paper, as a general framework, we consider (walking) technicolor \cite{TC,Hill:2002ap,Holdom:1981rm,Yamawaki:1985zg,Appelquist:an}, and as a particular realization we take the minimal walking technicolor (MWTC) model \cite{Sannino:2004qp}. In this model the electroweak symmetry breaking is due to strong dynamics of two Dirac fermions (techniquarks) transforming in the adjoint representation of the SU(2) technicolor gauge group. This particular matter content is minimal in the sense that with a relatively small number of new matter fields, this model has been proposed to lie close to an infrared fixed point \cite{Sannino:2004qp} which renders the coupling almost conformal over a large hierarchy of scales. Phenomenologically such a feature is welcome since on one hand walking is required to tame the effects of flavor changing neutral currents and on the other hand the minimal matter content is necessary to keep the contributions to the precision $S$-parameter on a level compatible with observations \cite{Dietrich:2005jn}. To strengthen the phenomenological viability of this model, it has recently been studied from first principles on the lattice by measuring the properties of the physical spectrum \cite{Catterall:2007yx, Catterall:2008qk, Hietanen:2008mr} and the evolution of the coupling constant \cite{Hietanen:2009az}. These studies point to the conclusion that this theory is indeed near conformal as originally proposed in \cite{Sannino:2004qp}, and hence provides a good model building basis for a walking Technicolor-type theory. The matter field spectrum of the MWTC model, from the electroweak interaction viewpoint, features three (techni)quark doublets.
Due to a global anomaly \cite{Witten:1982fp}, such particle content results in an ill-defined theory, but this anomaly can be simply cured by introducing just one further doublet taken to be a singlet under QCD and technicolor interactions; this doublet therefore resembles the leptons of the three ordinary SM generations. Hence, by the internal consistency of the underlying gauge theory, we are led to consider a model with a somewhat nonstandard fourth generation. Several phenomenological constraints on the properties of this fourth generation exist. Since the interactions of the techniquarks become strong at the scale of the order of $v\simeq$250 GeV, they are confined inside technihadrons which the past and present colliders simply were not able to produce. Among the leptons, on one hand, we know that the analogue of the ordinary electron has to be very massive, a few times $M_Z$ at least. For the fourth generation neutrino the issue is more subtle: For example, consider the fourth neutrino as a Dirac particle of mass $\sim M_Z/2$. Then, if it is unstable it is ruled out by the LEP II bounds and if it is stable, then it contributes to the dark matter abundance and is ruled out by the CDMS experiment \cite{Akerib:2004fq,Kainulainen:2006wq}. For a stable purely left-handed Majorana neutrino the constraints are weaker due to smaller cross sections. However, the dark matter searches have ruled out a purely left-handed sequential fourth generation Majorana neutrino up to masses of the order of a few TeV \cite{Angle:2008we}. These bounds are alleviated if more general mixing patterns in the neutrino sector are allowed for \cite{Kouvaris:2007iq,KTVII}. In particular, if the lightest state is dominated by the weak singlet component, then its couplings to the Standard Model are further weakened, allowing it to escape detection so far even for relatively small masses below ${\mathcal{O}}(M_Z/2)$.
Such mixing patterns will be important in relation to the dark matter abundance \cite{KTVII}; here we concentrate on the collider phenomenology aspects of this model and do not require absolute stability of the fourth generation neutrino. Various aspects of the phenomenology implied by this model have already been investigated in the literature. For the technihadronic sector, see e.g. \cite{Foadi:2007ue}. The leptonic sector is particularly interesting since its contributions to the precision observables can be perturbatively evaluated, and existing data can be used to constrain the masses of these leptons. This analysis has been carried out for the cases in which the fourth generation neutrino has only a Dirac mass \cite{Dietrich:2005jn,He:2001tp}, is a purely left-handed Majorana state \cite{Holdom:1996bn,Kainulainen:2006wq} or has a particular mixing pattern between left- and right-handed neutrinos \cite{Bertolini:1990ek,Gates:1991uu,Kniehl:1992ez}. In this paper we extend these studies to allow for the most general mass and mixing patterns of the fourth generation neutrino. Our results are general and provide, to our knowledge, a so far unexplored completion of the existing literature. Furthermore, we apply this analysis to the MWTC model in order to identify the phenomenologically most interesting mass ranges of the fourth generation leptons. Given these, we investigate several possible signals which should be of interest at the LHC. In particular we emphasize important differences between the minimal technicolor model and models where a sequential full fourth SM-like generation is considered, see e.g. \cite{Kribs:2007nz}: For example, there is no fourth QCD-quark generation and therefore the Higgs production through gluon fusion is not enhanced in the case of technicolor.
However, the Higgs can decay into the fourth generation neutrino, which has to be massive to have escaped observation so far. This creates a new channel which will, for neutrino masses $\sim {\mathcal{O}}(M_Z/2)$, diminish either the other channels expected to be relevant for a light Higgs on the basis of the Standard Model, or the onset of the channels with $ZZ$ and $WW$ final states if the new neutrino has a mass of the order of $M_Z$. Together with these results, the fourth generation leptons with masses in the range accessible at the LHC provide clear direct signals already with the first 10 fb$^{-1}$ of integrated luminosity, as our analysis shows. The paper is organized as follows: In sec. \ref{model} we first present the details of the minimal walking technicolor, in particular its leptonic sector. In sec. \ref{results} we first present a general analysis of the oblique corrections for a lepton generation with a massive neutrino and apply it to the MWTC model. Then we discuss collider signatures for the production of new leptons and how they may affect the Higgs production and decay rates. We conclude and outline some future directions in sec. \ref{checkout}. \section{The model} \label{model} As discussed in the introduction, the present Technicolor model building rests on two paradigms: On one hand, walking of the coupling is required in order to suppress the flavor changing neutral current interactions which will arise if the Technicolor model is embedded into some extended Technicolor framework. On the other hand, the walking must be achieved with a reasonably small number, say two or three, of techniquark flavors in order not to generate too large contributions to the precision observable $S$. It has been proposed \cite{Sannino:2004qp} that $SU(2)$ gauge theory with two fermion flavors in the adjoint representation of the gauge group is a minimal candidate for such a theory \footnote{For an ultraminimal alternative, see \cite{Ryttov:2008xe}}.
Let us now build up the concrete model Lagrangian, starting with the Technicolored sector \begin{eqnarray} {\mathcal{L}}_{\rm{TC}}=-\frac{1}{4}{\mathcal{F}}^a_{\mu\nu}{\mathcal{F}}^{a\mu\nu} +i\overline{Q}_L\gamma^\mu D_\mu{Q}_L+i\overline{U}_R\gamma^\mu D_\mu{U}_R +i\overline{D}_R\gamma^\mu D_\mu{D}_R, \label{TC_lagrangian} \end{eqnarray} where ${\mathcal{F}}^a_{\mu\nu}=\partial_\mu{\mathcal{A}}^a_\nu-\partial_\nu{\mathcal{A}}^a_\mu +ig_{\rm{TC}}\epsilon^{abc}{\mathcal{A}}^b_\mu{\mathcal{A}}^c_\nu$ is the usual field strength, $a=1,2,3$, and the $SU_L(2)$ weak doublet is $Q^T_L=(U_L,D_L)$, while $U_R$ and $D_R$ are singlets under the weak isospin. The number of weak doublets in this theory is odd, rendering the gauge theory $SU_L(2)$ of weak interactions anomalous. To cure this anomaly at least one weak doublet is needed. Since the walking behavior of the Technicolor theory should not be spoiled, we add a doublet uncharged under the technicolor $SU(2)$ gauge group. To add just one doublet, we cannot assign QCD color either, and hence this doublet resembles a new generation of ordinary leptons. The anomaly free hypercharge assignments are \begin{eqnarray} Y(Q_L) &=& y/2, ~~Y(U_R,D_R)=((y+1)/2,(y-1)/2) \nonumber \\ Y(L_L) &=& -3y/2, ~~Y(E_R, N_R)=((-3y+1)/2,(-3y-1)/2),\nonumber \end{eqnarray} where $y$ is any real number. A particular choice $y=1/3$ corresponds to a Standard Model-like fourth family. Other choices are possible, but here we confine ourselves to $y=1/3$, and we will be mostly interested in the phenomenology arising from the leptonic sector. The kinetic terms of the new leptons are \begin{eqnarray} {\mathcal{L}}_\ell=i\overline{L}_L\gamma^\mu D_\mu L_L+i\overline{E}_R\gamma^\mu D_\mu E_R +i\overline{N}_R\gamma^\mu D_\mu N_R. \label{lepton_lagrangian} \end{eqnarray} However, instead of the Lagrangian (\ref{TC_lagrangian}), at the electroweak scale the Technicolor part is better described by the chiral effective theory coupled to the electroweak gauge fields and matter fields.
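The anomaly freedom of the hypercharge assignments above can be checked mechanically. The following sketch (our own bookkeeping in exact rational arithmetic; the techniquark doublet, being in the three-dimensional adjoint of technicolor $SU(2)$, enters with multiplicity three) verifies that the $SU(2)_L^2\times U(1)_Y$ and $U(1)_Y^3$ anomaly sums vanish for any $y$:

```python
from fractions import Fraction as F

def anomaly_sums(y):
    """Sum the U(1)_Y anomaly coefficients for the techniquarks (adjoint of
    SU(2)_TC, hence multiplicity d_TC = 3) plus the new lepton doublet."""
    y, d_TC = F(y), 3
    doublet_Y = d_TC * [y / 2] + [-3 * y / 2]                # Q_L (x3), L_L
    singlet_Y = d_TC * [(y + 1) / 2, (y - 1) / 2] \
        + [(-3 * y + 1) / 2, (-3 * y - 1) / 2]               # U_R, D_R, E_R, N_R
    su2sq_u1 = sum(doublet_Y)                                # SU(2)_L^2 x U(1)_Y
    # U(1)_Y^3: two left-handed components per doublet, minus the RH singlets
    u1_cubed = 2 * sum(Y**3 for Y in doublet_Y) - sum(Y**3 for Y in singlet_Y)
    return su2sq_u1, u1_cubed
```

For $y=1/3$ the new doublet carries $Y(L_L)=-1/2$, exactly as a Standard Model lepton doublet.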
Such a chiral effective theory can also be systematically improved, and additional degrees of freedom like the vectors and axial vectors can be included \cite{Foadi:2007ue}. Technicolor theories are constructed to describe only the mass patterns of the electroweak gauge bosons, and one needs to address the question of the masses of the elementary matter fields separately. One traditional direction which has been pursued in the literature is so-called extended technicolor (ETC), which couples technifermions and ordinary SM fermions with each other by extended gauge interactions assumed broken at some high scale $M_{\textrm{ETC}}\gg \Lambda_{\textrm{TC}}$ and described at energies below $M_{\textrm{ETC}}$ by effective four-fermion interactions. Condensation of technifermions then leads to mass terms for the SM fermions. Since we do not know the ultraviolet complete gauge theory possibly underlying fermion mass generation as described above, we choose here a more modest bottom-up approach in order to parametrize our ignorance of the origin of mass for the matter fields in terms of effective Yukawa interactions between the fermions and the Higgs. In the MWTC model, with the hypercharge assignments we are using here, hypercharge conservation allows coupling only between the SM matter fields and the technimeson with quantum numbers of the SM-like Higgs scalar. Hence, to estimate the effects of the scalar sector on the new leptons up to and including dimension five operators, we consider the following effective interactions \cite{Foadi:2007ue}: \begin{eqnarray} & &{\mathcal{L}}_{{\rm{Mass}}} = (y_E \bar{L}_L H E_R+ {\rm{h.c.}})+C_D\bar{L}_L\tilde{H}N_{R}\nonumber \\ &+& \frac{C_{L}}{\Lambda}(\bar{L}^c\tilde{H})(\tilde{H}^TL)+\frac{C_{R}}{\Lambda}(H^\dagger H)\bar{N}^c_{R}N_{R}+{\rm{h.c.}} \label{scalar_fermion} \end{eqnarray} where $\tilde{H}=i\tau^2H^\ast$ and $y_E$ denotes the Yukawa coupling of the charged lepton.
The first term in (\ref{scalar_fermion}) leads to the usual (Dirac) mass for the charged fourth generation lepton, and the remaining terms allow for the most general mass structure of the fourth neutrino. After symmetry breaking the resulting neutrino mass terms are \begin{eqnarray} -\frac{1}{2}\bar{n}_L^{~c}Mn_L+{\rm{h.c.}},~~~M=\left(\begin{array}{cc} M_L & m_D \\ m_D & M_R\end{array}\right), \end{eqnarray} where $n_L=(N_{L}, N_{R}^{~c})^T$, $m_D=C_Dv/\sqrt{2}$ and $M_{L,R}=C_{{L,R}}v^2/2\Lambda$. The scale $\Lambda$ is of the order of 1 TeV. The special cases of a pure Dirac and a pure left-handed Majorana neutrino are obtained, respectively, by discarding the dimension five operators and by removing the right-handed field $N_{R}$. In the general case there are two Majorana eigenstates, $\chi_1$ and $\chi_2$, associated with the eigenvalues \begin{eqnarray} \lambda_{1,2}=\frac{1}{2}\left[(M_L+M_R)\pm\sqrt{(M_L-M_R)^2+4m_D^2}\right] \end{eqnarray} of the mass matrix. Since $\lambda_k$ may be positive or negative, we define $\lambda_k=M^\prime_k\equiv M_k\rho_k$, where $\rho_k=\pm 1$ so that $M_k>0$ is ensured. Note that there are basically two equivalent ways to treat the $\rho$-factors \cite{Barroso:1983rd}. Here we will choose to include these factors into the definition of the transformation into the mass eigenbasis. The advantage of this approach is that the $\rho$-dependence will show up explicitly at the Lagrangian level. Another alternative is to include the $\rho$-factors into the definition of the Majorana field operators, and then one must keep track of the appearance of these factors when evaluating the contractions corresponding to individual Feynman diagrams. In order to maintain full generality, we keep track of these phases explicitly and present the results for the charged and neutral weak currents as well as for the couplings to the composite Higgs in detail below.
The following discussion has been adapted from \cite{KTVII} where a similar derivation was carried out for the first time. The mass eigenstates are obtained with the diagonalizing matrix, \begin{equation} {\mathcal{U}}=\left(\begin{array}{cc} \sqrt{\rho_1}\cos\theta & \sqrt{\rho_2}\sin\theta \\ -\sqrt{\rho_1}\sin\theta & \sqrt{\rho_2}\cos\theta \end{array}\right), \end{equation} and the eigenstates are \begin{eqnarray} \chi={\mathcal{U}}^\dagger n_L+{\mathcal{U}}^Tn_L^c. \end{eqnarray} Note that with this prescription $\chi_k^c=\chi_k$, since $\chi_k$ is a Majorana state with mass $M_k$ by construction. The mixing angle $\theta$ is given by $\tan(2\theta)=2m_D/(M_R-M_L)$. In the mass eigenbasis the gauge interactions are \begin{eqnarray} W^+_\mu\bar{N}_L\gamma^\mu E_L &=& \frac{\cos\theta}{\sqrt{\rho_1}}\bar{\chi}_{1L}W^+_\mu\gamma^\mu E_L +\frac{\sin\theta}{\sqrt{\rho_2}}\bar{\chi}_{2L}W^+_\mu\gamma^\mu E_L\nonumber \\ Z_\mu\bar{N}_L\gamma^\mu N_L &=& \cos^2\theta Z_\mu\bar{\chi}_{1L}\gamma^\mu \chi_{1L}+\sin^2\theta Z_\mu\bar{\chi}_{2L}\gamma^\mu \chi_{2L}\nonumber \\ && +\frac{1}{2}\sin(2\theta)Z_\mu(\frac{1}{\sqrt{\rho_2}^\ast\sqrt{\rho_1}}\bar{\chi}_{1L}\gamma^\mu \chi_{2L}+\frac{1}{\sqrt{\rho_2}\sqrt{\rho_1}^\ast}\bar{\chi}_{2L}\gamma^\mu \chi_{1L}) \label{gauge_interactions} \end{eqnarray} The last terms in the neutral current can be combined into \begin{eqnarray} \frac{1}{\sqrt{\rho_2}\sqrt{\rho_1}^\ast}(\bar{\chi}_2\gamma^\mu P_L\chi_1+(\sqrt{\rho_2}\sqrt{\rho_1}^\ast)^2\bar{\chi}_1\gamma^\mu P_L\chi_2)=\frac{1}{\sqrt{\rho_1}^\ast\sqrt{\rho_2}}\bar{\chi}_2\gamma^\mu(\alpha-\beta\gamma_5)\chi_1, \end{eqnarray} where $\alpha=\frac{1}{2}(1-(\sqrt{\rho_1}^\ast\sqrt{\rho_2})^2)$ and $\beta=\frac{1}{2}(1+(\sqrt{\rho_1}^\ast\sqrt{\rho_2})^2)$.
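The diagonalization above is easy to cross-check numerically. The sketch below (NumPy; the function name and the convention of returning $M_k$, $\rho_k$ and $\theta$ separately are our own) reproduces the eigenvalues $\lambda_{1,2}$ and the mixing angle against a direct eigendecomposition:

```python
import numpy as np

def diagonalize_nu_mass(M_L, M_R, m_D):
    """Eigenvalues lambda_k = rho_k M_k (with M_k > 0) of the symmetric
    Majorana mass matrix M = [[M_L, m_D], [m_D, M_R]], and the angle theta."""
    disc = np.sqrt((M_L - M_R)**2 + 4 * m_D**2)
    lam1 = 0.5 * ((M_L + M_R) + disc)
    lam2 = 0.5 * ((M_L + M_R) - disc)
    theta = 0.5 * np.arctan2(2 * m_D, M_R - M_L)  # tan(2 theta) = 2 m_D/(M_R - M_L)
    masses = (abs(lam1), abs(lam2))
    rhos = (1.0 if lam1 >= 0 else -1.0, 1.0 if lam2 >= 0 else -1.0)
    return masses, rhos, theta
```

Since $\lambda_1\lambda_2=\det M=M_LM_R-m_D^2$, the two $\rho$-factors come out with opposite signs exactly when $m_D^2>M_LM_R$, matching the sign discussion in the text.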
The effective interactions between the Higgs and the neutrinos following from (\ref{scalar_fermion}) are \begin{eqnarray} \frac{h}{2v}\bar{n}_L^c\left(\begin{array}{cc} 2M_L & m_D \\ m_D & 2M_R\end{array}\right)n_L +{\rm{h.c.}}, \label{nu_higgs_L} \end{eqnarray} where interaction terms of ${\mathcal{O}}(h^2)$ have been neglected, since in the following we will only need the vertices relevant for the decay of the Higgs into the new neutrinos. Translating into the mass eigenbasis we obtain \begin{eqnarray} {\mathcal{L}}_{\rm{Higgs}}=C_{22}h\bar{\chi}_2\chi_2+C_{11}h\bar{\chi}_1\chi_1+C_{12}h\bar{\chi}_1(\beta+\alpha\gamma_5)\chi_2+\dots \label{higgs_interactions} \end{eqnarray} where we have defined \begin{eqnarray} C_{11} &=& \frac{M_1}{v}(1-\frac{1}{4}\sin^2(2\theta)(1-(\sqrt{\rho_1}^\ast\sqrt{\rho_2})^2\frac{M_2}{M_1})), \nonumber \\ C_{22} &=& \frac{M_2}{v}(1-\frac{1}{4}\sin^2(2\theta)(1-(\sqrt{\rho_1}\sqrt{\rho_2}^\ast)^2\frac{M_1}{M_2})), \nonumber \\ C_{12} &=& -\frac{M_2}{4v}\sqrt{\rho_1}\sqrt{\rho_2}^\ast\sin(4\theta)(1-(\sqrt{\rho_1}^\ast\sqrt{\rho_2})^2\frac{M_1}{M_2}), \end{eqnarray} and the factors $\alpha$ and $\beta$ are the same factors as defined a few lines earlier for the neutral current. Since the parameters $M_L$, $M_R$ and $m_D$ are simply coupling constants in our formulation, there is no need to restrict to positive values. If we assume that these parameters are real numbers free to take any value, then in terms of the mass eigenvalues and -states as defined above the parameter space contains three domains corresponding to $\rho_1=\rho_2=\pm 1$ and $\rho_1=-\rho_2=1$, and in each case $M_1$ and $M_2$ assume all positive real values and $0\le \sin\theta\le 0.5$ \cite{KTVII}. In Figure \ref{rho_fig} a slice of the parameter space $(M_L,M_R,m_D)$ is shown for some fixed finite value of $m_D$.
The hyperbolas correspond to surfaces $m_D^2=M_LM_R$, and together with the plane $M_R=-M_L$ they divide the parameter space into three distinct parts in which the values of the $\rho$-parameters are as indicated in the figure. The parameter space is symmetric with respect to the plane $M_R=-M_L$ with the replacement $M_1\leftrightarrow M_2$. It is therefore sufficient to restrict to the upper half corresponding to $M_R\ge -M_L$, and this is also reflected by the fact that the interactions practically only depend on the product $\rho_1\rho_2$ and not separately on $\rho_1$ and $\rho_2$. \begin{center} \begin{figure} \centerline{\includegraphics[width=10cm]{rho_fig.eps}} \caption{Schematic figure of the parameter space corresponding to a fixed value of $m_D$. The hyperbolas and the straight diagonal line correspond to sections of surfaces $m_D^2=M_LM_R$ and $M_R=-M_L$, respectively. The $m_D$-axis is perpendicular to the $(M_L,M_R)$-plane.} \label{rho_fig} \end{figure} \end{center} Typical special cases correspond to setting either $M_L$ or $M_R$ equal to zero, and these both correspond to $\rho_1=-\rho_2=1$. Both $M_L$ and $M_R$ need to be nonzero in order to obtain the same sign for the $\rho$-parameters. Our results for the currents in (\ref{gauge_interactions}) as well as the Higgs interactions in (\ref{nu_higgs_L}) can be applied for any values of $M_L$, $M_R$ and $m_D$ with an appropriate choice of $\rho$-parameters. For the mixing phenomena we stress the following general feature: using the above formulae it can be shown that in the case $M_L=0$ the lighter state is always dominated by the weak doublet component $\nu_L$ and hence couples stronger to the electroweak currents than the heavy state. On the other hand, for $M_R=0$ the lighter state consists dominantly of the weak singlet component and in this case the lighter component has weaker coupling to the electroweak currents.
A general feature therefore is that a large hierarchy $M_L\gg M_R$ will make the lighter neutrino state more difficult to observe since its couplings to the weak currents are diminished. In this part of the parameter space the heavier neutrino state might be phenomenologically more accessible provided that its mass is still within reach at the LHC. Vice versa, in the parameter space domain corresponding to $M_L\ll M_R$ the lighter neutrino state should be better accessible than the heavy one. Let us then turn to the phenomenological implications. \section{Results} \label{results} In this section we will present the phenomenological results of the model. We first study the oblique corrections $S$ and $T$. This study could be enlarged to include the full set of electroweak precision parameters \cite{Barbieri:2004qk}, but here we concentrate only on these two, mainly because the $S$-parameter is known to provide the most stringent constraints for Technicolor models in general. We have also checked that the precision parameter $U$ is small over the parameter range which we consider. Since the data \cite{:2005ema,Amsler:2008zzb} on $S$ and $T$ shows a tendency along the direction $S\sim T$, we apply the generic strategy \cite{Peskin:2001rw} of compensating for the positive contribution to $S$ from the techniquarks by a positive contribution to $T$ from the mass splitting within the fourth generation leptons. The leptons typically also provide a small negative contribution to the $S$-parameter, and this further helps to reconcile the model with the data. The main phenomenological goal of the $S,T$-analysis here is therefore to obtain constraints for the mass splittings within the fourth generation leptons. Once we have this information we then proceed to consider plausible collider signatures for the production rates of these leptons and also their possible implications for the Higgs physics.
\subsection{Oblique corrections} In the literature the oblique corrections have been analyzed for certain special cases of the mass spectrum of the fourth generation neutrinos. In particular, these include the fourth generation neutrino with a Dirac mass term \cite{Dietrich:2005jn} ($M_L=M_R=0$), a pure left-handed Majorana state \cite{Holdom:1996bn,Kainulainen:2006wq} ($M_R=m_D=0$) and the case corresponding to the usual type I seesaw mass matrix \cite{Bertolini:1990ek,Gates:1991uu,Kniehl:1992ez} ($M_L=0$). Here we treat the general case as described in the previous section and present explicitly the formulas required for the evaluation of the oblique corrections. We have checked both analytically and numerically that the results of the above mentioned special cases are properly obtained in the corresponding limits of our formulas. We stress that our results can be applied for any values of $M_L,M_R$ and $m_D$. Recall the charged and neutral currents involving neutrinos from (\ref{gauge_interactions}), which, together with the usual forms of the currents for the charged lepton, allow us to evaluate the following contributions to the self-energies: \begin{eqnarray} \Pi_{3Y}(q^2) &=& -\frac{1}{2}\cos^4\theta\left[\Pi_{LL}(M_1^2,M_1^2,q^2)-\Pi_{LR}(M_1^2,M_1^2,q^2)\right]\nonumber \\ && -\frac{1}{2}\sin^4\theta\left[\Pi_{LL}(M_2^2,M_2^2,q^2)-\Pi_{LR}(M_2^2,M_2^2,q^2)\right]\nonumber \\ && -\frac{1}{4}\sin^2(2\theta)\left[\Pi_{LL}(M_1^2,M_2^2,q^2)+(-1)^\beta\Pi_{LR}(M_1^2,M_2^2,q^2)\right]\nonumber \\ & & +\Pi_{LR}(M_E^2,M_E^2,q^2)+\frac{1}{2}\Pi_{LL}(M_E^2,M_E^2,q^2), \end{eqnarray} which will be needed for the $S$-parameter, and \begin{eqnarray} \Pi_{11}(0)-\Pi_{33}(0) &=& \frac{1}{2}\cos^2\theta\Pi_{LL}(M_1^2,M_E^2,0)+\frac{1}{2}\sin^2\theta\Pi_{LL}(M_2^2,M_E^2,0)-\frac{1}{4}\Pi_{LL}(M_E^2,M_E^2,0) \nonumber \\ &&-\frac{1}{4}\cos^4\theta\left[\Pi_{LL}(M_1^2,M_1^2,0)-\Pi_{LR}(M_1^2,M_1^2,0)\right] \nonumber \\
&&-\frac{1}{4}\sin^4\theta\left[\Pi_{LL}(M_2^2,M_2^2,0)-\Pi_{LR}(M_2^2,M_2^2,0)\right] \nonumber \\ &&-\frac{1}{8}\sin^2(2\theta)\left[\Pi_{LL}(M_1^2,M_2^2,0)+(-1)^\beta\Pi_{LR}(M_1^2,M_2^2,0)\right] \end{eqnarray} which is needed for the $T$-parameter. The subscripts refer to electroweak gauge boson quantum numbers in the unbroken basis, and the relevant Feynman rules for Majorana particles are discussed e.g. in \cite{Denner:1992me}. The vacuum polarizations of the left- and right-handed currents appearing in the above equations are given by \begin{eqnarray} \Pi_{LL}(m_1^2,m_2^2,q^2) &=& -\frac{4}{(4\pi)^2}\int_0^1 dx\ln\left[\frac{\mu^2}{M^2-x(1-x)q^2}\right](x(1-x)q^2-\frac{1}{2}M^2)\\ \Pi_{LR}(m_1^2,m_2^2,q^2) &=& -\frac{4}{(4\pi)^2}\int_0^1 dx\ln\left[\frac{\mu^2}{M^2-x(1-x)q^2}\right]\frac{1}{2}m_1m_2, \end{eqnarray} where $M^2=xm_1^2+(1-x)m_2^2$. The cutoff $\mu$ has physical significance since we are considering an effective field theory for the generation of Majorana masses of the fourth generation neutrino. With these preliminary definitions, the $S$ parameter is given by \begin{eqnarray} S=-8\pi\frac{d\Pi_{3Y}(q^2)}{dq^2}\left|_{q^2=0}\right.\approx -\frac{8\pi}{M_Z^2}(\Pi_{3Y}(M_Z^2)-\Pi_{3Y}(0)), \end{eqnarray} and the definition of $T$ is \begin{eqnarray} T=\frac{4\pi}{s^2c^2M_Z^2}\left(\Pi_{11}(0)-\Pi_{33}(0)\right), \end{eqnarray} where $s^2=\sin^2\theta_W$ and $c^2=\cos^2\theta_W$, with $\theta_W$ the usual weak mixing angle. As already mentioned in the previous section, it suffices to concentrate on two regions in the parameter space spanned by the neutrino masses: we assume $M_1>M_2$, which corresponds to $M_R>-M_L$. This implies that $\rho_1$ is always positive, while the sign of $\rho_2$ is determined by the ratio of $m_D^2$ and $M_RM_L$, so that negative $\rho_2$ corresponds to $m_D^2>M_RM_L$. The sign of $\rho_2$ is reflected in the interaction terms by $\alpha=0$ and $\beta=1$ for $\rho_2=+1$ and vice versa for $\rho_2=-1$. 
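For orientation, the loop functions and the finite-difference evaluation of $S$ above are straightforward to implement numerically. The sketch below (our own, not code from the analysis; plain midpoint quadrature over the Feynman parameter, with all masses in GeV) reproduces the defining integrals:

```python
import numpy as np

def _feynman_log(m1sq, m2sq, q2, mu, n=4000):
    """Midpoint nodes in the Feynman parameter x and the common logarithm
    ln[mu^2 / (M^2 - x(1-x) q^2)], with M^2 = x m1^2 + (1-x) m2^2."""
    x = (np.arange(n) + 0.5) / n
    M2 = x * m1sq + (1.0 - x) * m2sq
    return x, M2, np.log(mu**2 / (M2 - x * (1.0 - x) * q2))

def pi_LL(m1sq, m2sq, q2, mu):
    """Vacuum polarization of two left-handed currents."""
    x, M2, log = _feynman_log(m1sq, m2sq, q2, mu)
    return -4.0 / (4.0 * np.pi) ** 2 * np.mean(log * (x * (1.0 - x) * q2 - 0.5 * M2))

def pi_LR(m1sq, m2sq, q2, mu):
    """Mixed left-right vacuum polarization, proportional to m1*m2."""
    _, _, log = _feynman_log(m1sq, m2sq, q2, mu)
    return -4.0 / (4.0 * np.pi) ** 2 * 0.5 * np.sqrt(m1sq * m2sq) * np.mean(log)

def S_finite_difference(pi_3Y, MZ=91.19):
    """S ~ -(8 pi / M_Z^2) [Pi_3Y(M_Z^2) - Pi_3Y(0)], as in the text;
    pi_3Y is any callable assembling the self-energy combination above."""
    return -8.0 * np.pi / MZ**2 * (pi_3Y(MZ**2) - pi_3Y(0.0))
```

A quick consistency check: at $q^2=0$ and degenerate masses the two loop functions differ only by a sign, and both are symmetric under exchange of the two mass arguments.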
Note that the Dirac limit is contained only in the latter domain. In both of these domains all positive values of $M_1$ and $M_2$ as well as all values $0\le \sin\theta\le 1$ are allowed, but the difference follows from the property that $\rho_2$ is positive (negative) for $m_D^2<M_LM_R$ ($m_D^2>M_LM_R$). Separately for each of these domains, we explored the parameter space spanned by $M_E,M_1,M_2$ and $\sin\theta$. It is rather obvious that with four parameters it is not difficult to find ranges of values where the experimental constraints from $S$ and $T$ are satisfied. We probed the parameter space in terms of the mass differences of the charged and neutral leptons in order to identify the possible spectra consistent with the current experimental bounds. For the technicolor sector we include the naive perturbative estimate $(S,T)=(1/(2\pi),0)\approx (0.16,0)$. Replacing the derivative with a finite difference in the definition of the $S$-parameter is a standard approximation known to be valid for new physics with mass scales above $M_Z$. We will consider situations where one neutrino state is lighter than $M_Z$, and one might worry that this is a source of large uncertainty due to the approximation. However, this is not the case, since generally the contribution to $S$ from the leptons in the case of one light neutrino is much smaller than that from the techniquarks. We have checked that even in the worst case, the error due to replacing the derivative with a finite difference is at the few percent level for the total value of $S$. We found that the results in the domains $\rho_2=-1$ and $\rho_2=+1$ are practically identical for the mass ranges of interest. From now on in this section we therefore consider explicitly only the case $\rho_2=-1$. The $S$-parameter is independent of the cutoff, as can be directly verified using the above definitions. However, for the $T$-parameter the scale dependence is more subtle. 
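The sign rule for $\rho_2$ quoted above is easy to verify numerically if one assumes, as in the seesaw references cited earlier (the mass matrix itself is defined in the previous section, so this form is an assumption here), the standard symmetric Majorana form with $M_L$ and $M_R$ on the diagonal and $m_D$ off-diagonal. The helper below is a hypothetical sketch of this check, not code from the analysis:

```python
import numpy as np

def majorana_spectrum(ML, MR, mD):
    """Diagonalize the assumed symmetric 2x2 mass matrix ((ML, mD), (mD, MR)).
    Physical masses are |lambda_i| and rho_i = sign(lambda_i); entries are
    ordered so the first one is the heavier state chi_1."""
    lam = np.linalg.eigvalsh(np.array([[ML, mD], [mD, MR]]))
    lam = sorted(lam, key=abs, reverse=True)
    return [abs(l) for l in lam], [int(np.sign(l)) for l in lam]

# m_D^2 > M_L M_R: determinant M_L M_R - m_D^2 < 0, so the lighter
# eigenvalue flips sign while rho_1 stays positive.
masses, signs = majorana_spectrum(-91.0, 455.0, 300.0)   # -M_L = M_R/5 = M_Z
```

With these inputs one finds $\rho_1=+1$, $\rho_2=-1$, consistent with the statement that negative $\rho_2$ corresponds to $m_D^2>M_RM_L$; choosing instead $m_D^2<M_RM_L$ gives two positive eigenvalues.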
The divergent contribution can be extracted analytically and it has the simple form $\sim M_L\ln(\mu)$, so in the special cases where $M_L=0$ also $T$ is scale independent. The existence of this divergence signifies the fact that within the model we consider there does not exist a renormalizable Yukawa interaction which would provide the mass for the left-handed Majorana state. We fix the scale by the mass of the heavier neutrino eigenstate, $\mu\propto M_1$, and estimate the uncertainties resulting from the choice of the constant of proportionality. For a purely left-handed Majorana state this prescription coincides with the one employed in \cite{Holdom:1996bn,Kainulainen:2006wq}. Varying the scale from $\mu=1.5M_1$ to $\mu=2M_1$ results in at most roughly ${\mathcal{O}}(10\%)$ uncertainty in our results concerning the $T$-parameter. As already noted, the $S$-parameter does not depend on the scale at all in this case, but it turns out to be far less restrictive than $T$. With this uncertainty in mind, we fix $\mu=1.5 M_1$ in what follows. In Fig. \ref{ST_scat} we show a typical scatter plot of the resulting $S$ and $T$ values as the masses $M_1$, $M_2$ and $M_E$ are allowed to vary from 0.5$M_Z$ to 10$M_Z$ with the ordering $M_2<M_1<M_E$ and mixing angle $\sin\theta=0.3$. \begin{figure} \centerline{\includegraphics[width=0.7\textwidth]{ST_scat.eps}} \caption{The resulting $S$ and $T$ values as the masses $M_1$, $M_2$ and $M_E$ in the lepton sector vary from 0.5$M_Z$ to 10$M_Z$. The value of the mixing angle is set to $\sin\theta=0.3$.} \label{ST_scat} \end{figure} We then investigate the constraints on the parameter space of the model imposed by $S$ and $T$ in more detail, taking as guiding limits $|S|\le 0.3$ and $0<T<1$. We fix the mass of the lighter neutrino state ($M_2$) to be equal to $M_Z/2$ or $M_Z$, and the results for the corresponding constant $S$ contours are shown in Fig. \ref{Scontours_rhoneg}. 
We have checked that the results depend only very weakly on the mixing angle, and hence the curves shown in the figure explicitly for $\sin\theta=0.1$ can be taken as representative for any value of $\sin\theta$. If we consider larger values of $M_2$, the results lead to similar curves, as would be expected, since the contributions to both $S$ and $T$ should depend on the relative differences of the masses rather than their absolute values. \begin{figure}[htb] \includegraphics[width=0.5\textwidth]{Scontours1.eps} \includegraphics[width=0.5\textwidth]{Scontours2.eps} \caption{Constant $S$-contours in the $((M_1-M_2),(M_E-M_2))$-plane for the choice of masses $M_2=0.5 M_Z$ (left) and $M_2=M_Z$ (right). The curves correspond to $\sin\theta=0.1$, but the dependence on the angle is very weak.} \label{Scontours_rhoneg} \end{figure} Consider then the information on $S$ together with the values of $T$ shown in the left and right panels of Fig. \ref{Tcontours_rhoneg}. Within each panel, the two sets of curves correspond to two different values of the mixing angle $\sin\theta$. The relation between the masses $M_1$ and $M_E$ is roughly $M_E\sim 2M_1-M_2$, with the constant of proportionality changing from $2$ to $1.6$ as the value of the mixing angle increases from $\sin\theta=0.1$ to $\sin\theta=0.5$. The results for larger values of $M_2$ fall almost on these same curves; in particular, for $M_2<M_1,M_E$ and considering $M_2$ up to 10$M_Z$, the above mentioned relation $M_E\sim 2M_1-M_2$ remains valid. \begin{figure}[htb] \includegraphics[width=0.5\textwidth]{Tzerocontours1.eps} \includegraphics[width=0.5\textwidth]{Tzerocontours2.eps} \caption{Constant $T$-contours in the $((M_1-M_2),(M_E-M_2))$-plane for the choice of masses $M_2=0.5 M_Z$ (left) and $M_2=M_Z$ (right). 
Within each panel, the two sets of curves correspond to $\sin\theta=0.1$ (left set) and $\sin\theta=0.5$ (right set).} \label{Tcontours_rhoneg} \end{figure} We observe that the $S$-parameter is only modestly restrictive over the mass ranges considered here in comparison to $T$, which provides more stringent constraints. Also note how relatively large mass differences can be accommodated within this model: from the above figures one infers that the ratio between the charged lepton and the neutrino masses can easily be a factor of ten, while the values of $S$ and $T$ are, respectively, 0.1 and 0.5, which are consistent with current data. Overall, from this section we conclude that the MWTC model with the most general mass spectrum for the fourth generation leptons is compatible with the current precision data on the electroweak observables. Furthermore, the approximate estimate $M_E\sim 2M_N$ between the masses of the charged lepton and the heavier neutrino remains a good guiding rule of thumb for the spectrum, similarly to the already established special cases of Dirac \cite{Dietrich:2005jn} and purely left-handed Majorana \cite{Kainulainen:2006wq} neutrinos. Our analysis implies that the precision observables do not impose a strong preference towards a particular neutrino mass pattern. Namely, for any values of $M_L$ and $M_R$, by adjusting $m_D$ and $M_E$ accordingly, one can find portions of parameter space where $S$ and $T$ satisfy the experimental constraints and all mass eigenstates, $E$, $\chi_1$ and $\chi_2$, are heavy enough to have escaped direct detection so far. As an example, in Fig. \ref{mass_constraints} we show the values of $M_E$ and $m_D$ allowed by restricting $|S|<0.3$ and $0<T<0.5$, with different sets of points corresponding to different hierarchies between $M_L$ and $M_R$. The lowermost band corresponds to $-M_L=M_R=M_Z$, the middle band corresponds to $-M_L=M_R/5=M_Z$ and the upper band corresponds to $-M_L=M_R/10=M_Z$. 
From the figure it would seem at first that as the hierarchy between $M_L$ and $M_R$ increases, the allowed values of $M_E$ and $m_D$ also increase, but this actually follows from two effects. First, the values of $M_E$ and $m_D$ reflect the overall scale of $M_L$ and $M_R$ and not their ratio. For example, if we set $M_L=0.1 M_Z$ and $M_R=5 M_Z$, the resulting $M_E(m_D)$ curve would lie on top of the middle curve in Fig. \ref{mass_constraints}, although the corresponding ratio $M_L/M_R$ differs by an order of magnitude between the two cases. Second, the results are also affected by the requirement that $M_E>M_1>M_2$ for the cases which we consider in this paper. \begin{figure}[htb] \centerline{\includegraphics[width=10cm]{mD_ME_scatter.eps}} \caption{Values of $M_E$ and $m_D$ allowed by the constraints $|S|\le 0.2$ and $0<T<0.5$. On different bands of points $M_L$ and $M_R$ take different fixed values. From bottom to top: $-M_L=M_R=M_Z$ (dots), $-M_L=M_R/5=M_Z$ (stars) and $-M_L=M_R/10=M_Z$ (squares).} \label{mass_constraints} \end{figure} In general, as noted already in the beginning of this section, the possible mass splittings within the fourth generation leptons are important in achieving agreement with the precision data. For suitable values of the masses the leptons will generate a negative contribution to $S$, which helps to partly compensate for the positive contribution due to the techniquarks. Generally the number of techniquark doublets is large, and a single doublet of leptons typically does not provide a large enough contribution to cancel a sizable portion of the techniquark contribution. In the MWTC model considered here it is also important in this respect that there are only three technidoublets contributing to the $S$-parameter, which is therefore small to begin with. The split masses of the leptons will also lead to a nonzero contribution to the $T$-parameter. 
Together with the positive overall contribution to $S$, it is important to also have an overall positive contribution to $T$, since the data shows a preference for the $T\sim S$ direction. However, we note the well-known fact that for Majorana neutrinos the $T$-parameter can be negative over a significant portion of the parameter space, in contrast to the Dirac case where the $T$-parameter is manifestly positive definite. While this feature is not of direct interest for the model studied here, it might be useful in models with extended particle content which would also yield further positive contributions to the $T$-parameter. In the following sections we will consider the phenomenological implications of the fourth generation neutrinos in the MWTC model. As a starting point for their searches at the LHC we shall take the masses of the charged lepton and the heavier neutrino eigenstate to be in the range of the electroweak scale and consider the lightest neutrino state to have a mass of $M_Z/2$ or $M_Z$. These cases provide a natural starting point for the searches at the LHC, although consistency with the precision data also allows for relatively heavy leptons, even up to the TeV range. \subsection{Production of new leptons} Given the analysis of the previous section, the favored range for the masses of the fourth generation leptons is of the order of the electroweak scale. Therefore these leptons serve as an important probe of this model at the LHC, as discussed in e.g. \cite{Holdom:2006mr}. A simple process that comes to mind is pair production of charged fourth generation leptons. However, the production of the fourth generation neutrinos, $\bar{\chi}\chi$, may be more interesting than production of an $\bar{E}E$ pair. This is so since on the one hand the neutrino is expected to be lighter than the charged lepton, and on the other hand its decay modes may provide more interesting observables in case the fourth generation neutrino is not absolutely stable. 
For example, consider production of a pair of neutrinos and the subsequent decay $\chi\rightarrow \ell W$, where $\ell=\mu,\tau$ or $e$. Then the possible final states are \begin{itemize} \item $2\ell+4{\textrm{jets}}$ \item $3\ell+2{\textrm{jets}}+\ensuremath \raisebox{0.025cm}{\slash}\hspace{-0.28cm} E$ \item $4\ell+\ensuremath \raisebox{0.025cm}{\slash}\hspace{-0.28cm} E$. \end{itemize} Apparently the third one is similar to what one would expect from the decay of a pair of neutralinos, but the other two should provide a way to distinguish neutrinos from neutralinos. The first one appears interesting since two same-sign leptons can appear in the final state due to the fact that the initial neutrinos are Majorana particles. In addition to $\chi\chi$ production, $2\ell$ and $3\ell$ signals may also be generated in the charged current $\ell^{\pm}\chi$ production channel. Bearing this in mind, in the following we focus on the $\ell^{\pm}\ell^{\pm}$ and $\ell^{\pm}\ell^{\pm}\ell^{\mp}$ final states arising from $Z^* \to \chi\chi$ and $W^{*} \to \ell^{\pm}\chi$ production channels. We assume that the SM neutrinos are Majorana particles, which will only affect the partial width of the heavy neutrinos in the $\chi\to Z\nu$ decay. The decay channel to the Higgs is assumed to be kinematically forbidden for the neutrino masses we consider in this section; see also Fig. \ref{Higgsfrac}. The Feynman rules are given in Table 31 in the appendix of \cite{delAguila:2008cj}. We take the off-diagonal lepton mixing element $V_{l\chi}$ to be real and consider the $\rho_1=\rho_2=1$ case, for simplicity. In accordance with the notation of the previous sections, we will generically denote our two heavy Majorana neutrinos by $\chi_1$ and $\chi_2$ (with $\chi_1$ being the heavier state). 
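The three final states listed above simply enumerate the two $W$ decay modes; a toy bookkeeping check (our own illustration, with hypothetical names) makes the counting explicit:

```python
from itertools import product

# chi -> l W gives one charged lepton per chi; each W then decays either
# hadronically (2 jets) or leptonically (1 charged lepton + missing energy).
W_MODES = {"hadronic": (0, 2, False), "leptonic": (1, 0, True)}

def final_state(w1, w2):
    l1, j1, met1 = W_MODES[w1]
    l2, j2, met2 = W_MODES[w2]
    # two charged leptons come directly from the chi -> l W decays
    return (2 + l1 + l2, j1 + j2, met1 or met2)  # (leptons, jets, MET?)

states = {final_state(a, b) for a, b in product(W_MODES, repeat=2)}
# -> {(2, 4, False), (3, 2, True), (4, 0, True)}: exactly the three
#    final states itemized in the text.
```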
Following \cite{delAguila:2008cj}, we assume that the $\chi_1$-neutrino couples to muons only and saturates the latest experimental bound on the off-diagonal lepton mixing element, $|V_{\mu \chi}|^2<0.0032$ \footnote{Note that this experimentally constrained factor contains a contribution from the flavor mixing between generations and also a contribution $\sim\cos\theta$ from the mixing between the left- and right-handed fourth generation neutrino states.}. In our specific examples, the $\chi_2$-neutrino will decouple either by being a dominantly right-handed state or by being lighter than the W and Z and, thus, decaying via off-shell gauge bosons. In the left panel of Fig. \ref{prod} we plot the $\sigma (pp \to \chi_1\chi_1)$ production cross-section as a function of the $\chi_1$-neutrino mass for $\sin\theta=0$. For a typical type I seesaw scenario this cross-section is very small, because it requires mixing SM neutrinos with heavy neutrinos twice, which suppresses the cross-section by a factor of $|V_{\ell \chi}|^4$. The corresponding $\sigma (pp \to \ell^{\pm}\chi_1)$ production cross-section can be found, for example, in \cite{delAguila:2008cj}, and we confirmed it in our numerical simulation. For every scenario considered later in this section we show the contributions from the $\ell^{\pm}\chi_1$ and $\chi_1\chi_1$ production channels separately. \begin{figure}[htb] \includegraphics[width=0.5\textwidth]{prod.eps} \includegraphics[width=0.5\textwidth]{totprod.eps} \caption{ (left) 2 $\to$ 2 cross section for heavy neutrino production $q\bar{q} \to \chi_1\chi_1$ for $\sin\theta=0$ via the off-shell Z boson channel; (right) total heavy neutrino production $q\bar{q} \to \chi_1\chi_1$ for $\sin\theta=0$ including the vector boson fusion mechanism. Higgs masses are 100 GeV (solid green), 150 GeV (blue-dotted), 200 GeV (red-dotted). 
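To make the $|V_{\ell\chi}|^4$ suppression quantitative, a back-of-the-envelope estimate (ours, not a simulation) using the bound quoted above shows why type I seesaw pair production is so small compared with the gauge-strength coupling available in the present model:

```python
# Experimental bound on the off-diagonal lepton mixing, |V_mu chi|^2 < 0.0032.
V2_bound = 0.0032

# In a type I seesaw the Z-chi-chi coupling needs two mixing insertions,
# so sigma(pp -> chi chi) picks up |V|^4 relative to an unsuppressed coupling,
# while single production pp -> l chi is suppressed only by |V|^2.
pair_suppression = V2_bound ** 2      # ~ 1.02e-05, five orders of magnitude
single_suppression = V2_bound
```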
Solid black is the same as in the left figure.} \label{prod} \end{figure} Our first scenario deals with same-sign leptons in the final state, and Table \ref{numerics} illustrates this scenario numerically. The first two specific realizations of this scenario feature the $\chi_2$-neutrino being a right-handed state ($\sin\theta=0$), with the $\chi_1$-neutrino mass equal to 90 GeV and 100 GeV. These masses were chosen to go in parallel with the $S$ and $T$ analysis of the previous section ($M_V\equiv M_2=M_Z/2$) and for easy comparison with \cite{delAguila:2008cj} (the 100 GeV mass case). The third realization, with $M_{\chi_1}=135$ GeV and mixing angle $\sin\theta=0.5$, was selected to probe heavier neutrino masses. Again, in this last realization, the $\chi_2$-neutrino is assumed to decay via off-shell gauge bosons and is, as such, neglected in the analysis. For event pre-selection we require the presence of two like-sign charged leptons with transverse momentum larger than 30 GeV, and an additional lepton of opposite charge. The choice of the $p_T$ cut for like-sign leptons is motivated by the need to reduce backgrounds where soft leptons are produced in $b$ decays, for example $t \bar t nj$ ($nj$ standing for $n$ jets) in the dilepton channel. For the final event selection we also require: \begin{itemize} \item[(i)] at least two jets in the final state with $p_T > 20$ GeV, and no $b$-tagged jets; \item[(ii)] missing energy smaller than 30 GeV; \item[(iii)] the transverse angle between the two leptons must be larger than $\pi/2$. \end{itemize} The signal and SM background cross sections for these two stages of event selection are also given in Table~\ref{numerics}. The number of selected events for an integrated luminosity of 10 fb$^{-1}$ and the corresponding statistical significance are also presented there. We observe that the 100 GeV and 135 GeV neutrinos may be discovered early under these conditions. 
\begin{table} \caption{Signal cross-sections $\sigma$(in fb) with the corresponding leading SM background for three scenarios described in the text. Pre-selection and selection criteria are also described in the text.} \label{numerics} \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline $\mu^{\pm}\mu^{\pm}$ X ($M_{\chi_1}$=100 GeV)&Pre-selected $\sigma$(fb)&Selected $\sigma$(fb)&Events/10 fb$^{-1}$&S/$\sqrt{B}$\\ \hline $\mu^{\pm}\chi_1:\mu^{\pm}\mu^{\pm}$ + 2 jets &1.11&0.6&6&7.25\\ \hline $\chi_1\chi_1:\mu^{\pm}\mu^{\pm}$ + 4 jets &2.5&1.32&13.2&\\ \hline SM background &10.25&0.7&7&\\ \hline \hline $\mu^{\pm}\mu^{\pm}$ X ($M_{\chi_1}$=90 GeV)&Pre-selected $\sigma$(fb)&Selected $\sigma$(fb)&Events/10 fb$^{-1}$&S/$\sqrt{B}$\\ \hline $\mu^{\pm}\chi_1:\mu^{\pm}\mu^{\pm}$ + 2 jets &0.21&0.113&1.1&1.89\\ \hline $\chi_1\chi_1:\mu^{\pm}\mu^{\pm}$ + 4 jets &0.73&0.39&3.9&\\ \hline SM background &10.25&0.7&7&\\ \hline \hline $\mu^{\pm}\mu^{\pm}$ X ($M_{\chi_1}$=135 GeV)&Pre-selected $\sigma$(fb)&Selected $\sigma$(fb)&Events/10 fb$^{-1}$&S/$\sqrt{B}$\\ \hline $\mu^{\pm}\chi_1:\mu^{\pm}\mu^{\pm}$ + 2 jets &2.1&1.1&11&6.95\\ \hline $\chi_1\chi_1:\mu^{\pm}\mu^{\pm}$ + 4 jets &1.4&0.74&7.4&\\ \hline SM background &10.25&0.7&7&\\ \hline \hline $\ell^{\pm}\ell^{\pm}\ell^{\mp}$ X ($M_{\chi_1}$=100 GeV)&Pre-selected $\sigma$(fb)&Selected $\sigma$(fb)&Events/10 fb$^{-1}$&S/$\sqrt{B}$\\ \hline $\mu^{\pm}\chi_1:\ell^{\pm}\ell^{\pm}\ell^{\mp}$ + $\ensuremath \raisebox{0.025cm}{\slash}\hspace{-0.28cm} E$ &1.95&1.52&15.2&12.15\\ \hline $\chi_1\chi_1:\ell^{\pm}\ell^{\pm}\ell^{\mp} $+ 2 jets+ $\ensuremath \raisebox{0.025cm}{\slash}\hspace{-0.28cm} E$ &4.10&3.20&32&\\ \hline SM background &76.7&1.51&15.1&\\ \hline \end{tabular} \end{center} \end{table} In the second scenario we consider trilepton final state with $\ell=e,\mu$, $M_{\chi_1}$=100 GeV and all other conditions as in the first scenario. 
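The statistical significances quoted in Table \ref{numerics} are simple $S/\sqrt{B}$ counts built from the selected cross sections; the bookkeeping (numbers copied from the table, helper names ours) is:

```python
from math import sqrt

LUMI = 10.0  # integrated luminosity in fb^-1

def significance(signal_xsecs_fb, background_fb, lumi=LUMI):
    """Combine the selected signal channels and return S/sqrt(B)."""
    s = sum(signal_xsecs_fb) * lumi   # total signal events
    b = background_fb * lumi          # background events
    return s / sqrt(b)

# M_chi1 = 100 GeV, same-sign dimuons: 0.6 fb (l chi) + 1.32 fb (chi chi)
# against a 0.7 fb background gives S/sqrt(B) ~ 7.25, as in the table.
sig_100 = significance([0.6, 1.32], 0.7)
```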
Trilepton signals can be produced in the two charged current decay channels of the heavy neutrino, with subsequent leptonic decay of the $W$ boson, {\em e.g.} \begin{align} & \ell^+ \chi_1 \to \ell^+ \ell^- W^+ \to \ell^+ \ell^- \ell^+ \bar \nu \,, \nonumber \\ & \ell^+ \chi_1 \to \ell^+ \ell^+ W^- \to \ell^+ \ell^+ \ell^- \nu \,. \end{align} They can also be produced in the $\chi_1\chi_1$ production channel, {\em e.g.} \begin{align} & \chi_1\chi_1 \to \ell^+ \ell^- W^+ W^- \to \ell^+ \ell^- \ell^- \bar \nu + 2 \text{ jets} \,, \nonumber \\ & \chi_1\chi_1 \to \ell^+ \ell^+ W^- W^- \to \ell^+ \ell^+ \ell^- \bar \nu + 2 \text{ jets} \,, \end{align} (with small additional contributions from $\tau$ leptonic decays for both production channels). This trilepton final state is very clean once $WZnj$ production is almost eliminated with a simple cut on the invariant mass of opposite-charge leptons. For event pre-selection we again require two same-sign charged leptons with $p_T> 30 $ GeV. For event selection, we require that neither of the two opposite-sign lepton pairs has an invariant mass closer to $M_Z$ than 10 GeV, and we ask that \begin{itemize} \item[(i)] no $b$ jets can be present in the final state; \item[(ii)] the like-sign leptons must be back-to-back, with their angle in the transverse plane larger than $\pi/2$. \end{itemize} Our results are shown in Table \ref{numerics}, and again we notice that the trilepton channel may cross-check a potential discovery scenario in the dilepton channel. Up to this point we have only considered pp$\to \chi_1\chi_1(\chi_2\chi_2)$ neutrino production via an off-shell Z boson. In the right panel of Fig. \ref{prod} we include the vector boson fusion channel to the Higgs or Z boson under the same conditions as in the left panel of the same figure. We observe the enhancement in the low neutrino mass region due to the on-shell Higgs decaying to a pair of $\chi_1$-neutrinos. 
The higher neutrino mass region is also enhanced due to vector boson fusion to the Z boson, which consequently decays to a pair of $\chi_1$-neutrinos. If the Higgs mass happens to be in the specified regions, this additional production channel would modify the corresponding numbers in Table \ref{numerics} by an appropriate multiplicative factor. A similar enhancement might also occur in the charged current production channel pp $\to \ell^{\pm}\chi_1$. However, for the masses considered in Table \ref{numerics}, significant enhancements would occur only for parameter values where the Higgs is heavy enough to produce two neutrinos and, at the same time, the neutrinos are heavy enough to decay to on-shell W and Z bosons. For example, the 150 GeV Higgs case considered in the right panel of Fig. \ref{prod} would not satisfy this condition, and only the 200 GeV Higgs mass case would give an enhancement. Thus, for example, the 90 GeV $\chi_1$-neutrino in the first scenario would, approximately, receive an additional multiplicative factor of 2 in all the corresponding numbers in Table \ref{numerics}. \subsection{Higgs decay} Since the effective coupling between the composite scalar sector and the standard model matter fields in the MWTC theory is simple, let us also outline possible effects that the new leptons would have on the decays of the composite Higgs boson. Since the coupling between the composite Higgs and the matter fields is only an effective one, these results should be taken as qualitative, illustrating possible effects which can be expected to arise. There are several different possibilities depending on the masses of the new leptons and the composite Higgs. Since the new charged lepton is constrained to have mass $M_E\ge 2M_Z$, its effect looks similar to that of the top quark. The most interesting implications are due to the new neutrino, which can be relatively light but couples only weakly to the electroweak currents and hence evades the LEP bounds. 
From equations (\ref{gauge_interactions}) and (\ref{higgs_interactions}) it follows that, depending on the neutrino and Higgs masses, the decay rates can be dramatically affected by the existence of such a fourth generation neutrino. The dominant effect on the Higgs decay is insensitive to the magnitude of the weak interactions, i.e. the neutrino mixing angle, since the Higgs field couples with a strength proportional to the mass of these particles. However, as can be seen from (\ref{higgs_interactions}), there is also a contribution depending on the mixing angle. Also the sign of $\rho_1\rho_2$ affects the Higgs couplings, but qualitatively the effects are similar both for $\rho_1\rho_2=1$ and for $\rho_1\rho_2=-1$. In Fig. \ref{Higgsfrac} we show the branching ratios of the Higgs boson for different final states as a function of the Higgs boson mass. The left panel shows the familiar figure corresponding to the final states present in the standard model. In the right panel, on the other hand, we have taken into account the new leptons in the MWTC model. If the lighter neutrino state in the fourth generation is around $M_Z/2$, it will create an important channel, significantly reducing the contribution of the other final states relevant for the light Higgs searches at the LHC. \begin{figure}[htb] \includegraphics[width=0.5\textwidth]{Decay_SM.eps} \includegraphics[width=0.5\textwidth]{Decay_TC_rho.eps} \caption{The branching ratios for the decay of the Higgs boson in the Standard Model (left panel) and in the MWTC model (right panel).} \label{Higgsfrac} \end{figure} To compute the contribution in the MWTC model, we set $M_E=300$ GeV, $M_1=130$ GeV, $M_2=50$ GeV, $\sin\theta=0.45$ and $\rho_2=+1$, which gives the acceptable values $(S,T)=(0.17,0.6)$. 
It is clear that the realization of this particular scenario requires a significant amount of fine tuning regarding the masses of the fourth generation leptons as well as the mass of the Higgs boson, and we only point this out as an interesting special case. Furthermore, there are likely to be important corrections to the Higgs decay due to the underlying strong dynamics. However, assuming that the composite SM-like Higgs is the lightest technihadron, and for neutrino masses below $M_Z$, our effective vertices should provide a qualitatively correct picture. Careful studies of the various effects relevant for a light Higgs are important, since there is some theoretical and phenomenological bias towards this mass range, also due to recent experimental results from the Tevatron \cite{Phenomena:2009pt}. Also the Higgs production is modified from that of the standard model. Here it should be stressed that this modification is different for the case at hand and for the case of a sequential fourth generation, where a QCD-like heavy quark generation also appears. In the latter case the new QCD-matter doublets enhance the gluon-gluon fusion due to the new degrees of freedom available to run in the loop coupling the gluons with the Higgs \cite{Kribs:2007nz}. In the former case, relevant here, no such QCD contribution exists. Rather, in this case the modifications to the Higgs production rates arise from resonant weak boson scattering. These different production channels of a spin-0 resonance at the LHC will provide an important tool to discriminate between these two different possibilities if a fourth generation of standard model -like matter appears in nature. In \cite{Antipin:2009ch} similar reasoning has been applied to other theories which lead to the appearance of spin-2 resonances. 
\section{Conclusions and outlook} \label{checkout} In this paper we have analyzed the contributions to the precision parameters from fourth generation leptons and in particular considered the phenomenological implications of a novel technicolor scenario where a fourth family of leptons arises without new QCD quarks, which would appear in typical sequential extensions of the Standard Model. Allowing for the most general mass structure for these leptons, we have evaluated the $S$ and $T$ parameters and shown how these constrain the masses of the new fourth generation leptons. We have discussed some collider signatures which can be used to probe the existence of these new leptons at the LHC. We have shown that the decay rates of the Higgs particle can be affected through the decays to these new particles in the case of a light Higgs particle. We chose to study the minimal walking technicolor model, since it is naturally required to contain a full fourth generation in order to saturate the Witten anomaly. The effects from the leptonic sector are novel, since the quark sector does not carry QCD color but is instead technicolored. The technicolor sector will provide its own characteristic signals, which are probably best studied through $WW$ scattering, while the new leptons will be manifest through other initial states as well. Here we have concentrated on the typical mass hierarchy of the leptons, i.e. the charged lepton always heavier than the lightest neutrino. However, one can also entertain the thought that the charged state would be the lightest. Then its decay could only proceed via mixing with the lighter generations, making it relatively long lived and perhaps even amenable to direct observation. Also, we have considered exclusively the standard model like hypercharge assignments for the techniquarks and fourth generation leptons. However, the saturation of gauge anomalies in the MWTC theory allows for other possibilities which should be studied in detail. 
For example, the ``neutrino'' could have charge $-e$ while the other lepton would be doubly charged. There are also other theories which one can consider. For example, another promising candidate for strong dynamics is an SU(3) gauge theory with two flavors of sextet techniquarks. Also this particle content should lead to walking behavior of the technicolor coupling constant, as was proposed in \cite{Sannino:2004qp}. For lattice simulations concerning the running of the coupling in this model see \cite{Shamir:2008pb}, and for a study of the associated collider phenomenology see \cite{Belyaev:2008yj}. In this model one does not need to include additional leptons to saturate the Witten anomaly, and also the naive $S$-parameter is larger than in the minimal model; namely $S\sim 1/\pi$, which is still marginally within the experimental bounds, while $T=0$ if degenerate techniquarks are assumed. However, if one insists on standard model like hypercharge assignments for the techniquarks, then, to saturate the gauge anomalies, the introduction of two lepton doublets would be necessary. From the weak interaction viewpoint one could regard the model as one with two new generations. Now, given this extended particle content, $S$ and $T$ can be made more compatible with the existing data, since appropriate masses for the two new lepton generations generate a negative contribution to $S$, making it smaller than $1/\pi$, and a positive contribution to $T$, similarly to what happens in the minimal model which we have considered here. The collider signatures of the (lightest) additional leptons are likely to be similar to the ones considered here. Hence, the existence of a fourth family of matter analogous to the three currently featured in the standard model remains a simple and theoretically motivated possibility for new physics appearing at the LHC. If indeed such matter content appears, the story is likely to be richer than just a sequential fourth generation. 
\acknowledgments We thank K.~Kainulainen and J.~Virkaj\"arvi for discussions. M.H. was supported by a grant from M.~Ehrnrooth foundation.
\section{Introduction} \label{Introduction} Blockchain based cryptocurrencies are usually unmanaged, distributed, consensus-based systems in which no single entity has control \cite{Baliga2017}. They use open consensus based approaches that allow anyone to participate in maintaining the blockchain, even retaining their anonymity. Such systems remove the need for a third party in financial transactions and eliminate the double spending problem (where the same digital cash is spent multiple times) \cite{Swan2015}. This lack of a need for a trusted third party is supposed to result in reduced transaction fees over non-cryptocurrency based systems (e.g., credit cards), enabling efficient micropayments \cite{Narayanan2016}. Recently, however, limitations with some cryptocurrencies on transaction throughput have caused transaction fees to be high. Lastly, such systems generally provide a level of anonymity where individuals are not linked to accounts and where it is trivial for an individual to produce and use new accounts. Examples of such systems include Bitcoin \cite{Nakamoto2008}, Ethereum \cite{wood2014ethereum}, Bitcoin Cash \cite{BitcoinCash}, Litecoin \cite{Litecoin}, Cardano \cite{Cardano}, NEM \cite{NEM}, Dash \cite{Duffield2014}\footnote{Any mention of commercial products is for information only; it does not imply recommendation or endorsement. The blockchain based cryptocurrencies listed are the ones with the largest market capitalization in descending order as of 2017-12-29 according to \cite{Coinmarketcap}.}. In this work, we consider how to bring many of the advantages of such open consensus based cryptocurrencies to the area of managed cryptocurrencies\footnote{Note that managed cryptocurrencies also use consensus methods but they are not open to public participation.}. We refer to a currency as `managed' if there exists an owner that can exert control over the currency.
Managed currencies include electronic representations of fiat currencies as well as virtual world and in-game currencies. In the cryptocurrency realm, they are often referred to as `permissioned blockchains' (examples include Multichain \cite{Greenspan2015} and Ripple). With managed currencies, the identity of individuals is often, but not necessarily, linked to the accounts (e.g., as when someone opens a bank checking account). Furthermore, the managing entity usually reserves the right to control the money supply (i.e., they can print money). And law enforcement related functions may include freezing or confiscating assets. Managed cryptocurrencies can be implemented with private blockchains using tools such as Multichain. However, in such implementations the owners have complete control to perform arbitrary activity without transparency. This is because the owners authorize (and thus control) the servers maintaining the blockchain. In our research we explore a hybrid approach where we merge strengths of open consensus based cryptocurrencies with features often found in managed currencies. In doing so we design not a particular cryptocurrency, but instead a flexible architecture that allows for different implementations. From the open consensus approach we leverage the ability of the mining community to enforce the rules of the currency and to enforce transparency, where all transactions are publicly viewable. In this way the managing entity of the cryptocurrency cannot perform arbitrary actions, but only those explicitly allowed in the cryptocurrency design and all such management actions are publicly recorded in the blockchain. From the managed currencies, we leverage concepts such as the ability of the currency administrator to create funds, tie user identity to accounts, freeze/confiscate funds (e.g., due to illegal activity), and set the block awards for miners. 
This last feature indirectly enables the currency administrator to control the electricity consumption of the consensus mechanism (since fewer miners will participate if the rewards are lower). Energy consumption has often been cited as a major problem with consensus `proof-of-work' systems; in 2014 Bitcoin mining consumed as much electricity as Ireland \cite{2014bitcoin}. Since our approach is an architecture, the creator of any particular managed cryptocurrency instance can choose which features to include or exclude. Our architecture is flexible such that it can be used to implement open consensus environments like Bitcoin as well as closed controlled environments achievable with systems like Multichain. However, our approach is not intended for that purpose. Our area of interest is where the architecture is used to create hybrid approaches that combine the strengths (and weaknesses) of both. Note that we are not advocating any particular approach in this work and our goal is not to propose the creation of any specific cryptocurrency. Rather, we explore here the technological foundations that can enable the merging of the managed cryptocurrency idea with an open consensus based architecture and explore the resultant strengths and weaknesses. To enable management of the currency, we propose using a genesis transaction. All blockchains have a genesis block which is the first block, but this genesis transaction is the first transaction from which all subsequent transactions are authorized. The genesis transaction authorizes a special root account that has the currency manager role and that will be controlled by the currency administrator (the entity issuing the cryptocurrency). Our tagging of accounts with roles is key to our architecture. Accounts with the currency manager role can configure the currency to have different properties through defining policy (e.g., adjusting the roles implemented and mining rewards).
Also, these accounts can issue transactions to create other accounts with different roles, in a hierarchical fashion with accounts closer to the root being more authoritative. The possible roles include currency manager, central banker, law enforcement, user, and account manager. The central bankers can create and delete funds. Law enforcement can freeze accounts and confiscate funds (e.g., for fraudulently gained funds being sent to terrorist organizations \cite{Lee2017})\footnote{Note that in most consensus based cryptocurrencies, restoration of funds is impossible without forking the currency.}. Users can perform monetary transactions without the need for a trusted third party. And account managers can create user accounts (and may be required to link them to physical identities). We demonstrate how to implement our approach through modest modifications to the implicit Bitcoin specification. We chose Bitcoin because it was the first blockchain based cryptocurrency and is the most used. However, our approach can be applied to most any blockchain based cryptocurrency (including smart contract approaches such as Ethereum). We modify Bitcoin as little as possible to facilitate implementation of our specification; all of our features were implemented through small changes to the Bitcoin transaction format. Currency managers can issue policy in such a way that the changes are reversible or permanent. Permanent changes restrict the currency manager's future actions (since they cannot be undone). Such changes are important as they can provide users confidence in the system through knowledge that the currency administrator will abide by a set of self-established rules. Added to this, the architecture requires that all management actions be transparent to the users. Key to this approach are our solutions for maintaining a balance of power.
The consensus based methods must ensure that the currency administrator (who owns the root currency manager node) abides by the stated rules of the cryptocurrency and enforces transparency of all management actions. However, the participants in the consensus methods should not be able to take control away from the currency administrator nor exclude any management transactions from entering the blockchain. In summary, open consensus based unmanaged cryptocurrencies provide significant new benefits over previous electronic cash efforts. They eliminate the need for trusted third parties by eliminating the double spending problem, remove the need for a dedicated and centralized infrastructure, and allow for the possibility of very low transaction fees, thus enabling inexpensive micro-transactions\footnote{Bitcoin has high transaction fees due to limits on transaction throughput, but this is a technical problem not necessarily present in other cryptocurrencies.}. However, this model is unsuitable for managed cryptocurrencies because it is completely controlled by whoever joins the cryptocurrency network to maintain the blockchain (an open and anonymous group). Previous efforts to support managed cryptocurrencies have used permission-based blockchains where the administrators can control all access to the blockchain, the ability of users to issue transactions, and the ability of miners to maintain the blockchain. This is a powerful and efficient paradigm for many use cases. However, the user base must have complete trust in the currency administrator. In our work, we are attempting to eliminate the need for users to trust the currency administrator but also to enable the administrator to manage the cryptocurrency. At the same time, we are attempting to incorporate the many benefits achieved by unmanaged cryptocurrencies while mitigating the weaknesses (especially in the area of power consumption in maintaining the blockchain).
The main deliverable of this paper is a novel architecture for maintaining a managed cryptocurrency through distributed consensus based approaches (eliminating the need for users to trust the currency administrator), as well as an evaluation of the resultant benefits and weaknesses. It also provides technical bit-level details on how to modify the Bitcoin specification in order to implement the approach. In future work, we will provide such an implementation and perform empirical studies. We expect the necessary code changes to be relatively straightforward given our modest changes to the specification, but this cannot be claimed until a prototype implementation has been developed. \section{Related Work} \label{Related Work} To our knowledge, this is the only work combining the idea of a managed cryptocurrency with the open consensus model used by unmanaged currencies. The work most similar to ours is Multichain. It provides a platform for creating and deploying `private' blockchains within or between organizations. It is designed to provide the following features \cite{Greenspan2015}: \begin{enumerate} \item `to ensure that the blockchain's activity is only visible to chosen participants' \item `to introduce controls over which transactions are permitted' \item `to enable mining to take place securely without proof of work and its associated costs' \end{enumerate} Instances of Multichain have an administrator or group of administrators that define the ongoing policy of the system. They have complete control in defining who can view the blockchain, who can put transactions on the blockchain, and who can maintain the blockchain (those mining new blocks). This last feature enables them to maintain the blockchain at very little cost since the computationally expensive proof-of-work consensus methods of Bitcoin can be dispensed with.
This is replaced with a flexible round robin approach where the miners mostly take turns publishing the new blocks and generally do not receive any reward for doing so (since the work is trivial). While a powerful approach for organization-run blockchains, Multichain cannot be used to satisfy our stated objectives since the administrators have complete control. There is no mechanism to implement a balance of power where the administrators can manage the currency in an ongoing fashion but where the maintainers of the blockchain can ensure that the administrators follow the stated rules of the cryptocurrency. Country specific managed cryptocurrencies exist or are in the process of being deployed, not all of them being blockchain based, and the degree to which they are `managed' varies greatly. Dubai has launched its own cryptocurrency called emCash \cite{Buck2017}. Singapore has announced experimentation with one \cite{Cheng2017} and Estonia has announced their `estcoin' \cite{Korjus2017}. The company Monetas \cite{Monetas} offers a product to enable countries to issue their own digital currencies; it is being actively used by several countries. Senegal is piloting a digital currency called eCFA using the Monetas platform that, if successful, will be used by Cote d'Ivoire, Benin, Burkina Faso, Mali, Niger, Togo and Lusophone Guinea Bissau \cite{Chutel2016}. Tunisia has done the same using the Monetas platform \cite{Smart2016}. The Russian Central Bank has publicly pushed for a national cryptocurrency \cite{Helms2017}. Venezuela has announced that it will launch an oil-backed cryptocurrency \cite{AlexandraUlmer2017}. And lastly, the Bank for International Settlements released a report noting that countries may need to replace cash with national cryptocurrencies \cite{Cheng2017}. In the area of unmanaged cryptocurrencies, there exist hundreds of them. Bitcoin was the first to use blockchains and was introduced in 2008 \cite{Nakamoto2008}.
There exist many forks and variants of Bitcoin, mostly optimizing certain features but often introducing novel and revolutionary architectural changes. We review here the blockchain based cryptocurrencies with the largest market capitalization, as of 2017-12-29. Ethereum was the first production product to enable executable programs (called smart contracts) to be put on a cryptocurrency blockchain \cite{wood2014ethereum}. Ripple \cite{Ripple} provides a solution for banks to send payments globally. Bitcoin Cash \cite{BitcoinCash} is a fork of Bitcoin with a much larger block size limit. This enables many more transactions per block thereby increasing throughput and driving down transaction fees. Litecoin \cite{Litecoin} is almost identical to Bitcoin but with several differences: smaller block publication time, larger maximum number of coins, and a change in hashing algorithm. Cardano \cite{Cardano} is based on \cite{Kiayias2017} describing a `provably secure proof-of-stake blockchain protocol'. NEM \cite{NEM} incorporates a reputation system, proof-of-importance, and multisignature accounts. Dash \cite{Duffield2014} is `privacy-centric' with a two-tiered administration network and an ability for users to instantly send coin. \section{Managed Cryptocurrency Architecture} \label{Architecture} All blockchains contain a `genesis block'. This is the first block on the blockchain and it has no pointer to a previous block (being the first one). All users of the blockchain must agree on this first block for a consistent view of the blockchain to exist. We propose the addition of a `genesis transaction'\footnote{This is related to the ``asset genesis'' metadata transaction idea \cite{Greenspan2015} but is more powerful as it controls all transactions on the blockchain.}. This is the first transaction in the blockchain and it defines an account that has the currency manager role (and is owned by the currency administrator).
In our system, only accounts with roles can issue transactions and only accounts with the currency manager role can create other accounts with roles (with one important exception, discussed later). Thus, the genesis transaction is the transaction that enables all other transactions. The initial account is the root of a hierarchical tree of nodes, where each node represents an account labeled with a set of roles\footnote{We use the terms node and account interchangeably depending upon the desired perspective (node in a tree versus account owned by a user).}. The root node not only has the currency manager (M) role\footnote{The M role is distinct from the currency administrator. Many accounts may have the M role but there exists a single entity which is the currency administrator.}, but it has all other available roles: central banker (C), law enforcement (L), user (U), and account manager (A). We label the roles of an account by concatenating all applicable labels. Thus, the root node has the role set `MCLUA'. When a node with the M role creates a new account (more precisely, it labels some unlabeled account created by some user), it bestows on that account a, not necessarily proper, subset of its roles. Thus, the cardinality of a node's role set is non-increasing as one traverses higher in the hierarchy tree. One exception to this monotonicity rule is that nodes with the M label may also modify the role sets of nodes higher in the tree (provided they are on the path from the target node to the root), restricted again to the set of roles possessed. Nodes with the A role may also create and delete accounts, but such created accounts may only have the U role. The currency administrator then can delegate user account management to third party organizations by giving them the A role. The different roles provide different accounts with different capabilities: \begin{itemize} \item The U role enables an account to receive and spend coins.
An account for which the U role has been removed has its funds frozen. \item The A role enables a node to create accounts with the U role (and only the U role). It may also remove the U label for its descendants. \item The C role enables the creation of new coins (apart from the block mining rewards). \item The L role enables an account to forcibly move funds between accounts, to remove the U label, and to restore a previously removed U label. However, these actions can only be performed against nodes with the same or greater distance from the root. \end{itemize} The currency administrator, who will own the root M-labeled node, may require that A nodes verify users' identities prior to providing an account. In this case, the architecture enables a system where the `know your customer' (KYC) laws might be satisfied. Individual transacting parties would not know each other's identities but some account authorizing entity would have a record for each account with the U role. Fulfilling KYC laws is a general problem for cryptocurrencies \cite{Staples2017}. Figure \ref{fig:hierarchy} shows an example account hierarchy where we label nodes with their roles (e.g., an MUA node has the M, U, and A roles). The initial node created by the genesis transaction is at the bottom. Each node is labeled with its set of roles. Each UA node represents an organization authorized to manage user accounts. The MUA nodes authorize the UA nodes and can undo any undesired action taken by the UA nodes, since they are on the path from all UA nodes to the root. This action could be taken if there is negligence on the part of a UA node in creating U nodes or if a UA node's credentials are stolen. Note that there are two MUA nodes, one on top of the other. The topmost node will be used to create and delete UA nodes, the bottom one will be used to fix the system in the event that the topmost node's credentials are stolen.
This is also the reason why there are two MCLUA nodes, one on top of the other. The root node ideally is never used again after creating the MCLUA node above it. This helps prevent the root node's credentials from being stolen. In general, actions should be performed by nodes higher up in the tree that have the least privilege possible since the use of a node puts it in a more vulnerable position. The credentials of nodes not used can be secured simply by converting them to physical form and locking them in a safe (which we recommend doing with the initial node's credentials). This hierarchical node and role structure then enables the currency administrator to create a defense in depth security model. Accounts lower in the hierarchy have greater power and their credentials should be locked securely and rarely used. \begin{figure} \centerline{\includegraphics[scale=.2]{images/Example-MC-Hierarchy.jpg}} \caption{Example Managed Cryptocurrency Hierarchy.} \label{fig:hierarchy} \end{figure} A last capability not yet discussed is that accounts with M roles can issue policy that alters the cryptocurrency specification. In the event of policy conflicts between different M nodes, the nodes closer to the root are more authoritative. For M nodes the same distance from the root, those labeled with the M role in earlier blocks are more authoritative. In the event of a tie, the node labeled with the M role first within the same block wins. The policy deployed by the M nodes define the cryptocurrency. It is this policy that makes our approach an architecture. The policy can be set such that the cryptocurrency acts in an entirely unmanaged mode like the many popular open consensus cryptocurrencies in use today. The policy can also be set to allow the currency administrator full control as with the administrators in Multichain. More interesting to our research though is when the policy combines both open consensus and managed currency features. 
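The precedence rules just described (root distance first, then block order in which the M role was granted, then position within the block) amount to an ordering on policies. The sketch below illustrates this as a simple sort key; the dictionary field names are hypothetical illustrations and not part of the specification:

```python
# Sketch of the policy-conflict precedence described above. The field
# names ("depth", "block", "tx_index") are hypothetical; the text only
# fixes the ordering: closer to root wins, then earlier M-role grant
# (by block), then earlier position within the same block.

def policy_sort_key(policy):
    """Lower keys are more authoritative.

    depth:    distance of the issuing M node from the root account
    block:    height of the block in which the node was labeled with M
    tx_index: position within that block of the labeling transaction
    """
    return (policy["depth"], policy["block"], policy["tx_index"])

def effective_policy(policies):
    """Return the single most authoritative policy among conflicting ones."""
    return min(policies, key=policy_sort_key)

conflicting = [
    {"name": "reward=25", "depth": 2, "block": 90,  "tx_index": 4},
    {"name": "reward=50", "depth": 1, "block": 120, "tx_index": 0},
    {"name": "reward=10", "depth": 1, "block": 120, "tx_index": 7},
]
# The depth-1 nodes win over the depth-2 node; between the two depth-1
# policies from the same block, the earlier-labeled node wins.
print(effective_policy(conflicting)["name"])  # reward=50
```

Note that the tuple comparison makes the tie-breaking rules fall out naturally: later keys are consulted only when all earlier keys are equal.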
The policy allows each of the roles to be enabled or disabled and grants/limits the power of each role. Policy also can affect the mining community. A policy transaction can set a particular block reward or define a minimum transaction fee. Controlling these will affect the size of the mining community. For a proof-of-work based consensus mechanism such as Bitcoin, this will then indirectly control the amount of electricity used to manage the cryptocurrency (trading off power consumed against robustness of the mining pool against attack). This approach can enable an energy efficient proof-of-work consensus system where the currency administrator balances overall mining power desired vs. energy consumed. The exact capabilities available with policy are covered in section \ref{policy}. \section{Bitcoin Specification Overview} \label{Bitcoin Specification} There does not exist an official Bitcoin specification. The original Bitcoin paper \cite{Nakamoto2008} contained the primary architectural details but the specification is defined by the applications that maintain it on the network. That said, there exists a Bitcoin reference client `bitcoind' and related protocol documentation \cite{bitcoinwiki}. From this was created a useful developer's reference \cite{Okupski2014}. An in-depth research analysis of Bitcoin is available in \cite{Bonneau2015}. \begin{figure*} \centerline{\includegraphics[scale=.4]{images/Bitcoin-Transaction-Structure.jpg}} \caption{Bitcoin Transaction Format for Sending Bitcoin (BTC), copied from \cite{Okupski2014}.} \label{fig:bitcoinFormat} \end{figure*} In this section we briefly review the features of the Bitcoin specification that will be of use for our modified specification. Figure \ref{fig:bitcoinFormat} shows the layout of a Bitcoin transaction (copied from \cite{Okupski2014}, see this for details). The vin[] sections describe the inputs to a Bitcoin transaction (the particular coins to be spent).
The hash and n values specify particular coins from the output of some other Bitcoin transaction. The scriptSig is a script to provide cryptographic evidence that the owner of the coins approves of the coins being spent. It is a response script that meets the conditions of the challenge script in the transaction containing the coins that are to be spent (see the vout[] scriptPubkey field below). These conditions are usually met by proving ownership of the private key associated with the coins. The vout[] sections describe the outputs to a Bitcoin transaction (groupings of coins along with who owns each group). Ownership is specified within each scriptPubkey which is a script defining how the coins can be spent (usually specifying a public key). To satisfy the scriptPubkey challenge script and spend the coins at some future time, the owner will need to generate a scriptSig response script in some vin[] field for some transaction in which they prove ownership of the private key associated with the specified public key. This is the Pay-to-Pubkey (P2PK) Bitcoin transaction type for moving coins between accounts (see section 4.3.1 of \cite{Okupski2014} for a detailed explanation). \begin{figure*} \centerline{\includegraphics[scale=.4]{images/Bitcoin-vin-ref.jpg}} \caption{Bitcoin vin[] Reference to a Previous Transaction (copied from \cite{Okupski2014}).} \label{fig:bitcoinVinRef} \end{figure*} Figure \ref{fig:bitcoinVinRef} shows how a vin[] field in a new transaction can reference a specific vout[] field in a previous transaction (copied from \cite{Okupski2014}, see this for details). The vin[] hash value specifies the transaction and the n value specifies the specific vout[] field. The scriptSig in the vin[] of the new transaction then satisfies the scriptPubkey from the vout[] field specified from a previous transaction so that the coins can be spent (i.e., proving that the owner of the coins wants them spent).
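The vin[]-to-vout[] linkage reviewed above can be summarized in a small data-model sketch. This is illustrative only: scripts are treated as opaque strings, signature verification is elided, and the example key names (`alice', `bob') are invented for the demonstration:

```python
# Minimal sketch of the Bitcoin transaction linkage described above:
# a vin[] entry names a prior transaction (hash) and an output index (n),
# and carries a scriptSig answering that output's scriptPubkey challenge.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TxOut:
    nValue: int          # amount of coin held by this output
    scriptPubkey: str    # challenge script naming who may spend these coins

@dataclass
class TxIn:
    hash: str            # id of the transaction holding the coins to spend
    n: int               # index of the vout[] entry within that transaction
    scriptSig: str       # response script satisfying the referenced scriptPubkey

@dataclass
class Transaction:
    vin: List[TxIn] = field(default_factory=list)
    vout: List[TxOut] = field(default_factory=list)

# A vin[] entry points at one specific vout[] entry of a prior transaction:
prev = Transaction(vout=[TxOut(nValue=5000, scriptPubkey="P2PK <alice_pub>")])
spend = Transaction(
    vin=[TxIn(hash="id-of-prev", n=0, scriptSig="<alice_sig> <alice_pub>")],
    vout=[TxOut(nValue=5000, scriptPubkey="P2PK <bob_pub>")],
)
referenced = prev.vout[spend.vin[0].n]
print(referenced.nValue)  # 5000
```

In the real protocol the `hash' is the transaction id and the scripts are executed by the script interpreter; here the lookup of `prev.vout[n]` is all that is modeled.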
\section{Technical Design Using Bitcoin Specification Modifications} \label{Technical Design} \begin{figure*} \centerline{\includegraphics[scale=.3]{images/MCCoinTransfer.jpg}} \caption{64 bit nValue Field Format for the Coin Transfer Mode} \label{fig:CoinTransferFormat} \end{figure*} This section provides the technical specification for our managed cryptocurrency architecture described in section \ref{Architecture}. Our approach is to implement our architecture using only modest changes to the Bitcoin specification, changing the regular Bitcoin transaction format. Section \ref{Bitcoin Specification} provided the necessary background on the Bitcoin specification. Interested readers should also consult the de facto Bitcoin specifications \cite{bitcoinwiki} and \cite{Okupski2014} to better understand these changes in the context of the larger blockchain system. To implement our architecture's functionality, we repurpose the regular Bitcoin transaction. The format remains the same as the Bitcoin transaction shown previously in figure \ref{fig:bitcoinFormat} with a few exceptions. Our primary change is to leverage and revamp the vout[] nValue field in order to implement account roles and cryptocurrency policy. Another major change is to require in a transaction the inclusion of vin[] fields that provide the necessary roles for a transaction to be valid. Our first modification was to change the transaction format version, nVersion, to 1944\footnote{This is the year big band leader Glenn Miller died while flying to France to encourage allied troops.}. Transaction format version 1 is used by the regular Bitcoin transactions and is disallowed by our architecture. The vin[] field operates similarly as before. In Bitcoin, a vin[] field specifies a set of coins from a particular transaction already posted on the blockchain. 
The vin[] field then provides the evidence that the owner of those coins wants to spend them by providing a vin[] scriptSig field that satisfies the vout[] scriptPubkey field of the coins to be spent. In our design, the vin[] field works the same way for coin transfers. However, the vin[] field can also be used to bring roles into a transaction to authorize activities that require roles (which is most any activity in our architecture, depending upon the specific policy enacted). Functionally, it is like we are `spending' a role to use it to authorize some action given the usual use of a vin[] field (but roles can be `spent' an infinite number of times and are not transferred like coin). A vin[] field can specify a former transaction where an account was given a role. The vin[] scriptSig field then provides evidence that the owner of that account wants to use their role in this transaction (the scriptSig field must satisfy the scriptPubkey field of the transaction where the account was given the role). Thus, each vin[] field can bring a particular role from a particular account into a transaction in order to meet the role requirements for that transaction. The vout[] field was also reinterpreted. The nValue field now specifies the mode in which its encompassing vout[] field will operate. There are three modes: coin transfer mode, role change mode, and policy change mode. Coin transfer mode moves coin between accounts similarly to a normal Bitcoin transaction. However, we restrict the transaction types that can be used in order to ensure that coins are linked to accounts. Role change mode enables accounts with the M, A, and L roles to modify the role labels of other accounts. Policy change mode enables accounts with the M role to enact and/or modify cryptocurrency policy (to essentially define the ongoing rules for the cryptocurrency). If the first bit of an nValue field is a 0, the encompassing vout[] field is in coin transfer mode. 
If the first two bits of an nValue field are `10', the encompassing vout[] field is in role change mode. An nValue field beginning with `11' specifies policy change mode. Also within the vout[] field, we restrict the scriptPubkey field to only use the Pay-to-Pubkey (P2PK) transaction type. P2PK associates coins with a specific public key (an account in our architecture). If set up to do so, this enables cryptocurrencies implemented from our architecture to link accounts to account owners. This linkage can take place when an account with the A role grants the U role to another account (thereby authorizing it for coin transfers). In this case, the authorizing entity checks the user's identity using out-of-band traditional methods (e.g., passports, driver's licenses, and identity cards). \subsection{Coin Transfer Mode} If an nValue field has its first bit set to 0, the encompassing vout[] field is in coin transfer mode and is used to move coin between accounts. Since the first bit was used to specify this, the remaining 63 bits specify the amount of coin to be transferred (in Bitcoin all 64 bits are used). Figure \ref{fig:CoinTransferFormat} shows the changes to the nValue field for the transfer of coin (those nValue fields beginning with 0). Note that for all figures showing the revised nValue format (including this one), solid lines originate from bits that define the action to be taken while dotted lines originate from parameter values. Anytime a transaction has one or more vout[] fields in coin transfer mode, the original accounts owning the coins and the destination accounts for the coins must all have the U role. This is accomplished by including in the transaction vin[] fields that bring in the U roles for the accounts either sending or receiving coin. Lastly, coinbase transactions (the first transaction of each block where the miner sends itself the reward coins) are handled the same as with Bitcoin.
However, the vout[] nValue field will start with a 0 bit, putting it in coin transfer mode. Also, the miner must include a vin[] field after the normal coinbase transaction vin[] field in which the miner provides the U role for the account to which the coins are destined. \subsection{Role Change Mode} \begin{figure*} \centerline{\includegraphics[scale=.3]{images/MCRoleChange.jpg}} \caption{64 bit nValue Field Format for the Role Change Mode} \label{fig:RoleChangeFormat} \end{figure*} If an nValue field has its first two bits set to `10', then the encompassing vout[] field is used to change the roles for a set of accounts. The third bit indicates whether the vout[] field is removing or adding roles: 0 indicates that roles are being removed and 1 indicates that they are being added. The subsequent bits are flags referring to the different roles. Bits 4, 5, 6, 7, and 8 map to roles M, C, L, U, and A respectively. The remaining 56 bits are undefined. This may be wasteful of space but role change transactions will be relatively rare and we are trying to change the Bitcoin specification as little as possible. Figure \ref{fig:RoleChangeFormat} shows these changes to the nValue field. The vout[] scriptPubkeyLen and scriptPubkey fields specify the public key for the account that has these roles. The roles granted by the transaction can then be used in a future transaction by providing in that transaction a vin[] scriptSig field that satisfies the vout[] field of the transaction granting the roles. Essentially, an owner of an account uses their private key in some future transaction to prove ownership of a public key documented in a past transaction where the roles were granted. Note that cryptocurrency participants, specifically the miners, will have to make sure that the roles being accessed by a transaction haven't been previously removed from the relevant accounts (roles can be removed by accounts with the M, L, or A roles).
This check is similar to miners in Bitcoin checking to make sure that particular coins haven't already been spent. Every transaction requires one or more roles in order to be valid. Each role has different rules that must be satisfied for the applicable transaction to be valid: \subsubsection{M Role Processing} \label{subsub M role} Any addition or removal of roles requires the M role to be provided in one or more of the vin[] datastructures (with two exceptions, see the A and L roles). Each role change vout[] datastructure must be `covered' by a vin[] scriptSig field where the address specified is located between the root and the affected node in the node hierarchy. Also, the `covering' address (referenced by the vin[] scriptSig field) must have the role that is to be added or removed in the `covered' vout[] datastructure. \subsubsection{C Role Processing} The inclusion of a vin[] datastructure that has a scriptSig field that satisfies an account having the C role means that the transaction may create coins. There is then no need for other vin[] datastructures. The vout[] datastructures provide coins to the designated addresses. \subsubsection{L Role Processing} The inclusion of a vin[] datastructure that has a scriptSig field that satisfies an account having the L role means that the other vin[] fields do NOT need the scriptSigLen or scriptSig fields (for bringing coin into the transaction). Coins may thus be transferred without the permission of the owners when the L role is included in the transaction. The inclusion of the L role also enables vout[] datastructures that remove the U role from other accounts, and the U role may be added back to accounts for which it was previously revoked.
However, these abilities only apply to nodes in the hierarchy that are at a greater distance from the root than the vin[] specified node with the L role (this enables the currency administrator to limit this power by creating L role accounts at differing distances from the root). \subsubsection{U Role Processing} Any movement of funds requires the U role for the original owner of the coins (specified in the vin[] fields). The recipients of any coins (specified in the vout[] field) must also have the U role. \subsubsection{A Role Processing} The inclusion of a vin[] datastructure that has a scriptSig field that satisfies an account having the A role means that the vout[] fields may add role U to accounts. Doing so adds them as descendants in the hierarchical account tree. Accounts with the A role may likewise remove the U role from any descendant. If an A node removes one of its descendants' U roles, another A node may add the U role to that node. In this case, the affected node becomes a descendant of the A node adding the U role. Note that if a node with the L role removes the U role from a node, it is put on a special list of frozen nodes and only another node with the L role may remove the affected node from the list. \subsection{Policy Change Mode} \label{policy} \begin{figure*} \centerline{\includegraphics[scale=.3]{images/MCPolicyChange.jpg}} \caption{64 bit nValue Field Format for the Policy Change Mode} \label{fig:PolicyChangeFormat} \end{figure*} If an nValue field has its first two bits set to `11', then the encompassing vout[] field is in policy change mode, used to create or modify cryptocurrency policy. Note that a vout[] field in policy change mode is only allowed in a transaction if at least one of the vin[] fields provides the M role (since only currency managers can modify policy). The third bit of the nValue field defines the permanence of the policy (0 is not permanent and 1 is permanent).
If an account issues permanent policy, it may not change it in the future. However, M accounts with greater priority, as described in section \ref{Architecture}, can still trump the issued policy. If the initial root node issues permanent policy, it cannot be changed for the life of the cryptocurrency. This enables the issuance of a static instance of our cryptocurrency architecture. Some features may be made permanent while others are left open for change. It may not be immediately clear why an issuer of a currency would make anything permanent, since doing so reduces their flexibility. However, making certain features permanent provides guarantees to the users. The currency administrator is then constrained to operate within the published rules of the cryptocurrency even though they still manage it. This idea of permanence is important in order to limit the currency administrator from having absolute rule (which is the case in many of the private blockchain managed cryptocurrencies, such as with Multichain \cite{Greenspan2015}). After the first three bits of an nValue field are set (to 110 for not permanent or 111 for permanent), the remaining 61 bits specify the policy setting to be made. There is just one policy change made per nValue field, and just one nValue field per vout[] datastructure. However, a single transaction may have many vout[] datastructures. The next 27 bits specify an integer representing the policy change type while the last 32 bits are used to hold the policy change parameter. The structure of the nValue field in the policy change mode is shown in figure \ref{fig:PolicyChangeFormat}.
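The three nValue modes just described can be collected into a small decoder. This is an illustrative sketch, not part of the specification: bit numbering follows the text (1-indexed from the most significant bit), the function names are our own, and the handling of the two policy-mode bits left over by the 27+32 bit split is an assumption.

```python
# Illustrative decoder for the revised 64-bit nValue field described above.
# Bit 1 is the most significant bit, matching the text's numbering.

ROLE_BITS = {4: "M", 5: "C", 6: "L", 7: "U", 8: "A"}

def _bit(value, n):
    """Return bit n of a 64-bit value, counting from 1 at the MSB."""
    return (value >> (64 - n)) & 1

def decode_nvalue(value):
    if not 0 <= value < 1 << 64:
        raise ValueError("nValue must fit in 64 bits")
    if _bit(value, 1) == 0:
        # Coin transfer mode: the remaining 63 bits are the amount.
        return {"mode": "coin_transfer", "amount": value & ((1 << 63) - 1)}
    if _bit(value, 2) == 0:
        # Role change mode ('10'): bit 3 selects add/remove, bits 4-8 are
        # flags for roles M, C, L, U, A; the other 56 bits are undefined.
        roles = [name for pos, name in ROLE_BITS.items() if _bit(value, pos)]
        return {"mode": "role_change",
                "action": "add" if _bit(value, 3) else "remove",
                "roles": roles}
    # Policy change mode ('11'): bit 3 is permanence, the next 27 bits are
    # the policy change type, and the last 32 bits are the parameter.  The
    # two bits in between are treated as unused here (our assumption).
    return {"mode": "policy_change",
            "permanent": bool(_bit(value, 3)),
            "policy_type": (value >> 34) & ((1 << 27) - 1),
            "parameter": value & 0xFFFFFFFF}
```

For example, an nValue beginning `111` with type 13 and parameter 5 decodes to a permanent policy change setting the management-transaction periodicity.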
\begin{table*}[htbp] \caption{Cryptocurrency Policy Settings} \label{tab:policy} \begin{center} \begin{tabular}{ccl} Policy Change Type & Description & Parameter \\ \hline 0 & Enable or disable the M role globally & 0 or 1 \\ 1 & Enable or disable the C role globally & 0 or 1 \\ 2 & Enable or disable the L role globally & 0 or 1 \\ 3 & Enable or disable the U role globally & 0 or 1 \\ 4 & Enable or disable the A role globally & 0 or 1 \\ 5 & Enable or disable the L roles from moving coins & 0 or 1 \\ 6 & C role coin creation limit per block (0 means no limit) & Integer \\ 7 & Set block reward mode (0 means manual, 1 means self-adjusting) & 0 or 1 \\ 8 & For manual mode, set block reward & Integer \\ 9 & For manual mode, set minimum block reward & Integer \\ 10 & For self-adjusting, set geometric decay rate & Float between 0 and 1 \\ 11 & For self-adjusting, set maximum decay rate & Float between 0 and 1 \\ 12 & Set transaction fee minimum (0 means no minimum) & Integer \\ 13 & Periodicity of management transaction inclusion in blocks & Integer \\ 14 & Minimum number of management transactions per period & Integer \\ 15 & No operation (used to prove the currency administrator is active) & 0 \\ \end{tabular} \end{center} \end{table*} For the policy change mode, there are currently 16 policy change types with associated parameters, shown in table \ref{tab:policy}. For the binary parameters, 0 means disable and 1 means enable. Binary parameters default to 1 (these policies are enabled by default when the cryptocurrency is initiated). Policy change types 0 to 4 enable or disable the various roles available in the architecture (discussed in section \ref{Architecture}). Type 5 enables or disables the L role from moving coins (disabling would limit the L role to freezing accounts). Type 6 sets a limit for how much coin the set of C roles may create within any particular block.
Type 7 sets the block reward mode (0 is the automated approach used by the base cryptocurrency system, Bitcoin in our case, while 1 enables a mode where a currency manager explicitly sets rewards). Types 8 and 9 are for the manual mode and enable setting the block reward and a minimum block reward. The purpose of type 9 is to allow a currency manager to permanently set a minimum while still having the flexibility to adjust the current reward with type 8. Types 10 and 11 are for the self-adjusting mode and enable setting the decay rate for block rewards as well as a maximum decay rate. Again, the latter is intended to be used in a mode where it is set permanently. Type 12 sets a transaction fee minimum. Types 13-15 are important for setting security policy (discussed in detail in section \ref{Security}). Type 13 sets how often management transactions must appear in a consecutive sequence of blocks (0 disables this feature). For example, a setting of 5 indicates that a certain number of management transactions must appear within every subsequent grouping of 5 blocks. Type 14 specifies the minimum number of management transactions that must appear in that grouping of blocks. A management transaction is one that requires the M role to be present in one of the vin[] fields (see section \ref{subsub M role}). If the currency administrator doesn't have enough management transactions that they wish to put on the blockchain to meet the minimum, then they may issue one or more no operation (no-op) policy change mode transactions of type 15 using one of their M nodes. These do nothing but meet the requirement. A last nuance of this mechanism is that at least one of the management transactions must be a policy change mode transaction. This is to ensure that the currency administrator can always change policy (as the miners might just include non-policy management transactions to meet the minimum requirement).
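The interplay of policy types 13, 14, and 15 can be sketched as the check a miner would run over a chain. The block and transaction representation below is a hypothetical simplification (dicts with boolean flags instead of real vin[]/vout[] parsing), and we assume the groupings are consecutive non-overlapping windows, a detail the description leaves open.

```python
# A sketch of the miner-side check implied by policy types 13-15: every
# consecutive group of `period` blocks must contain at least `minimum`
# management transactions, at least one of which is a policy change mode
# transaction (so the administrator can always get policy updates through).
# Blocks are modelled as lists of transaction dicts with boolean flags --
# a simplification of real vin[]/vout[] parsing.

def satisfies_management_policy(blocks, period, minimum):
    if period == 0:
        # A periodicity of 0 disables the requirement (policy type 13).
        return True
    # Treat groupings as consecutive non-overlapping windows (assumption);
    # a trailing partial window is not yet complete and is not checked.
    for start in range(0, len(blocks) - period + 1, period):
        group = blocks[start:start + period]
        mgmt = [tx for block in group for tx in block
                if tx.get("management")]
        if len(mgmt) < minimum:
            return False
        if minimum > 0 and not any(tx.get("policy_change") for tx in mgmt):
            # The minimum is met only with non-policy management
            # transactions, which the text forbids.
            return False
    return True
```

A chain whose only management transactions lack a policy change would fail this check even when the raw count meets the minimum.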
\section{Security Models} \label{Security} A key aspect of our architecture is to ensure that a balance of power is maintained. Users of the system, including currency managers, should be able to issue any valid transaction onto the blockchain (pursuant to the current policy settings). Miners should be able to enforce policy restrictions and provide transparency for all transactions added to the blockchain. There are two security models that can be used to enforce this balance of power. Each model slightly favors one party, currency managers or miners, although both achieve a reasonable balance (dependent upon the use case). \subsection{Independent Mining Model} In the independent mining model, the currency administrator permanently disables the requirement to include management transactions periodically (thus the blockchain is not dependent on receiving management transactions). This can be done by having the initial node permanently set the policy change type 13 to 0. In this mode the currency administrator cannot take over maintenance of the blockchain (since mining is unrestricted as with Bitcoin). However, if at least 51 \% of the miners collude to `revolt' against the currency managers, they can prevent future management transactions from entering the blockchain (as well as issuing the well known set of 51 \% attacks present with most blockchains \cite{yli2016current}). The way this attack works is that the miners controlling 51 \% of the computational power simply work on a chain with only their own blocks, excluding the blocks produced by others. 
Over time, their chain will be longer since they own the majority of the computational power and the other miners will follow their chain (fruitlessly trying to append blocks in a competition they will never win). \subsection{Dependent Mining Model} Even though the 51 \% attack possibility exists in Bitcoin and most other cryptocurrencies, the risk may be too great for some issuers of cryptocurrency; in such a case, the currency administrator can use our dependent mining model. In this case the blockchain is dependent on receiving management transactions. With this approach, the currency administrator using an M node sets policy change types 13 and 14. This forces the miners to include a certain number of management transactions per a certain number of blocks. We advise setting this liberally (type 13 large and type 14 small) since the expectation is that 51 \% of the miners will not revolt. If a revolt occurs and miners only include the minimum necessary, then these policy values can be changed to force the miners to allow for more management transactions. If the miners completely revolt and violate policy, the `compliant' miners will reject their blocks. This would fork the blockchain into a compliant chain and a non-compliant chain. This is the same thing that would happen with any cryptocurrency if a group of miners begins producing blocks that do not satisfy the specification requirements. An important aspect of this second model is that it gives more power to the currency administrator than the first model. This can be seen as a positive feature or a weakness depending upon the use case and perspective. With the second model, the currency manager's accounts can refuse to submit management transactions, which will eventually cause block creation to halt (issuing management transactions would immediately restart production).
This may not be considered a significant threat as the currency administrator initiated the blockchain and inherently will want it to continue operating (this argument is somewhat analogous to the one explaining why Bitcoin in practice is resistant to a 51 \% attack even though theoretically it is vulnerable \cite{yli2016current}: the miners have a huge stake in the system and won't want it to fail). This could even be considered a feature, as owners of a blockchain could eventually deprecate it and move the data to a new blockchain with enhanced technical capabilities. Note that using such an option would be extremely visible and necessarily rare, as it would require all of the users' cryptocurrency software to be updated and reconfigured. \subsection{Node Software Security} We should note that in all cryptocurrency systems, the authors of the software used by the participating nodes (especially the mining nodes) have significant power. Our architecture is no exception. However, here too there is a balance of power. The currency administrator will likely be a maintainer of the software used by nodes to maintain the blockchain. Hypothetically, they could use this to violate established permanent policy and/or take control of the blockchain from the miners through the creation and publication of `malicious' software. However, this can only occur if the majority of miners adopt the malicious software. Even if this did happen (e.g., through miners blindly adopting an update), the miners could simply roll back to a previous non-malicious version to restore the proper function of the architecture. If miners author the node software, publish `malicious' software, and the majority of miners adopt it, then the miners could revolt against the currency administrator. However, this is identical to a 51 \% attack as described above. The result would be a forking of the blockchain, creating compliant and non-compliant chains.
The compliant chain would continue to implement our architecture with a reduced set of compliant miners. \section{Conclusion} \label{Conclusion} We provide a novel cryptocurrency architecture: a hybrid approach in which a managed cryptocurrency is maintained through distributed open consensus based methods. Key to this architecture is the idea of a genesis transaction upon which all other transactions are based and which enables the establishment of a hierarchy of accounts with differing roles. It is these roles that enable us to introduce features from fiat currencies into a cryptocurrency: law enforcement, central banking, and account management. Another novel feature is that the architecture allows the cryptocurrency policy to be maintained dynamically by the currency administrator, but certain policy settings can be made permanent in order to facilitate confidence in the stability of the system. This is especially important for the relationship between the currency administrator and an independent community of miners. The currency administrator can control block rewards, which indirectly enables them to adjust the power consumption of blockchain maintenance. However, the currency administrator can enact permanent policy to guarantee the miners a certain level of reward. This is important not only to the miners but also because it prevents the currency administrator from lowering the block reward to nothing and then taking over the mining (and thus completely controlling the blockchain as with many permissioned blockchain systems). Our policy system thus enables a cryptocurrency to be set up that has a balance of power where the currency administrator can perform management functions but where a group of independent miners enforce policy and provide transparency through recording all administrative activity on the blockchain.
However, the possibility still exists that the currency administrator or miners could violate policy and attempt to take control of the system. To mitigate this, we provide two security models that can enforce the balance of power (each with a small bias in one direction or the other). Lastly, we showed that our architecture can be implemented through modest changes to the Bitcoin specification. We note though that our approach is not tied to Bitcoin and can be implemented on differing cryptocurrency platforms. \bibliographystyle{IEEEtran}
\section{Introduction} A large number of extensions of the SM predict the existence of color triplet particles carrying simultaneously leptonic and baryonic number, the so-called leptoquarks. Leptoquarks are present in models that treat quarks and leptons on the same footing, such as composite models \cite{comp}, grand unified theories \cite{gut}, technicolor models \cite{tech}, and superstring-inspired models \cite{rizzo}. Since leptoquarks are an undeniable signal for physics beyond the SM, there have been several direct searches for them in accelerators. At the CERN Large Electron-Positron Collider (LEP), the experiments established a lower bound $M_{LQ} \gtrsim 45$--$73$ GeV for scalar leptoquarks \cite{lep}. On the other hand, the search for scalar leptoquarks decaying into an electron-jet pair in $p\bar{p}$ colliders constrained their masses to be $M_{LQ} \gtrsim 113$ GeV \cite{ppbar}. Furthermore, the experiments at the DESY $ep$ collider HERA \cite{hera} place limits on their masses and couplings, leading to $M_{LQ} \gtrsim 92-184$ GeV depending on the leptoquark type and couplings. There have also been many studies of the possibility of observing leptoquarks in the future $pp$ \cite{fut:pp}, $ep$ \cite{buch,fut:ep}, $e^+e^-$ \cite{fut:ee}, $e\gamma$ \cite{fut:eg}, and $\gamma\gamma$ \cite{fut:gg} colliders. In this work we study the constraints on scalar leptoquarks that can be obtained from their contributions to the radiative corrections to the $Z$ physics. We evaluated the one-loop contribution due to leptoquarks to all LEP observables and made a global fit in order to extract the 95\% confidence level limits on the leptoquarks masses and couplings \cite{nois}. The most stringent limits are for leptoquarks that couple to the top quark. Therefore, our results turn out to be complementary to the low energy bounds \cite{leurer,davi} since these constrain more strongly first and second generation leptoquarks. 
The masses and couplings of leptoquarks are constrained by low-energy experiments, since the leptoquarks induce two-lepton--two-quark effective interactions for energies much smaller than their masses \cite{leurer,davi}. The processes that lead to strong limits are: $\bullet$ Leptoquarks can give rise to flavor changing neutral current (FCNC) processes if they couple to more than one family of quarks or leptons \cite{shanker,fcnc}. In order to avoid strong bounds from FCNC, we assumed that the leptoquarks couple to a single generation of quarks and a single one of leptons. However, due to mixing effects on the quark sector, there is still some amount of FCNC \cite{leurer} and, therefore, leptoquarks that couple to the first two generations of quarks must comply with some low-energy bounds \cite{leurer}. $\bullet$ The analyses of the decays of pseudoscalar mesons, like the pions, put stringent bounds on leptoquarks unless their coupling is chiral -- that is, either left-handed or right-handed \cite{shanker}. $\bullet$ Leptoquarks that couple to the first family of quarks and leptons are strongly constrained by atomic parity violation \cite{apv}. In this case, there is no choice of couplings that avoids the strong limits. It is interesting to keep in mind that the low-energy data constrain the masses of the first generation leptoquarks to be larger than $0.5$--$1$ TeV when the coupling constants are equal to the electromagnetic coupling $e$ \cite{leurer}. The bounds on scalar leptoquarks coming from low-energy and $Z$ physics exclude large regions of the parameter space where the new collider experiments could search for these particles, however, not all of it \cite{fut:pp,fut:ep,fut:ee,fut:eg,fut:gg}. Notwithstanding, we should keep in mind that nothing substitutes for direct observation.
\section{Effective Interactions and Analytical Expressions} \label{l:eff} A natural hypothesis for theories beyond the SM is that they exhibit the gauge symmetry $SU(2)_L \times U(1)_Y$ above the symmetry breaking scale $v$. Therefore, we imposed this symmetry on the leptoquark interactions. In order to avoid strong bounds coming from the proton lifetime experiments, we required baryon ($B$) and lepton ($L$) number conservation. The most general effective Lagrangian for leptoquarks satisfying the above requirements and electric charge and color conservation is \cite{buch} \begin{eqnarray} {\cal L}_{{eff}} & & = {\cal L}_{F=2} ~+~ {\cal L}_{F=0} \; , \nonumber \\ {\cal L}_{F=2} & & = \left ( g_{{1L}}~ \bar{q}^c_L~ i \tau_2~ \ell_L + g_{{1R}}~ \bar{u}^c_R~ e_R \right )~ S_1 + \tilde{g}_{{1R}}~ \bar{d}^c_R ~ e_R ~ \tilde{S}_1 + g_{3L}~ \bar{q}^c_L~ i \tau_2~\vec{\tau}~ \ell_L \cdot \vec{S}_3 ~ , \label{lag:fer} \label{eff} \\ {\cal L}_{F=0} & & = h_{{2L}}~ R_2^T~ \bar{u}_R~ i \tau_2 ~ \ell_L + h_{{2R}}~ \bar{q}_L ~ e_R ~ R_2 + \tilde{h}_{{2L}}~ \tilde{R}^T_2~ \bar{d}_R~ i \tau_2~ \ell_L \; , \nonumber \end{eqnarray} where $F=3B+L$, $q$ ($\ell$) stands for the left-handed quark (lepton) doublet, and $u_R$, $d_R$, and $e_R$ are the singlet components of the fermions. We denote the charge conjugated fermion fields by $\psi^c=C\bar\psi^T$ and we omitted in (\ref{lag:fer}) the flavor indices of the couplings to fermions and leptoquarks. The leptoquarks $S_1$ and $\tilde{S}_1$ are singlets under $SU(2)_L$ while $R_2$ and $\tilde{R}_2$ are doublets, and $S_3$ is a triplet. Furthermore, we assumed in this work that the leptoquarks belonging to a given $SU(2)_L$ multiplet are degenerate in mass, with their mass denoted by $M$. Local invariance under $SU(2)_L \times U(1)_Y$ implies that leptoquarks also couple to the electroweak gauge bosons. 
To obtain the couplings to $W^\pm$, $Z$, and $\gamma$, we substituted $\partial_\mu$ by the electroweak covariant derivative ($D_\mu$) in the leptoquark kinetic Lagrangian: \begin{equation} D_\mu \Phi = \left [ \partial_\mu - i \frac{e}{\sqrt{2} s_W} \left ( W_\mu^+ T^+ + W_\mu^- T^- \right ) - i e Q_Z Z_\mu + i e Q^\gamma A_\mu \right ] \Phi \; , \end{equation} where $\Phi$ stands for the leptoquark fields, $Q^\gamma$ is the electric charge matrix of the leptoquarks, $s_W$ is the sine of the weak mixing angle, and the $T$'s are the generators of $SU(2)_L$ for the representation of the leptoquarks. The weak neutral charge is $Q_Z = (T_3 - s_W^2 Q^\gamma)/s_W c_W$. We employed the on-shell renormalization scheme, adopting the conventions of Ref.\ [20]. We used as inputs the fermion masses, $G_F$, $\alpha_{{em}}$, and the $Z$ mass, with the electroweak mixing angle being a derived quantity defined through $\sin^2 \theta_W = s_W^2 \equiv 1 - M^2_W / M^2_Z$. We evaluated the loop integrals using dimensional regularization and we adopted the Feynman gauge to perform the calculations. Close to the $Z$ resonance, the physics can be summarized by the effective neutral current \begin{equation} J_\mu = \left ( \sqrt{2} G_\mu M_Z^2 \rho_f \right )^{1/2} \left [ \left ( I_3^f - 2 Q^f s_W^2 \kappa_f \right ) \gamma_\mu - I_3^f \gamma_\mu \gamma_5 \right ] \; , \label{form:nc} \end{equation} where $Q^f$ ($I_3^f$) is the fermion electric charge (third component of weak isospin). The form factors $\rho_f$ and $\kappa_f$ have universal contributions, {\em i.e.} independent of the fermion species, as well as non-universal parts: \begin{eqnarray} \rho_f & = & 1 + \Delta \rho_{{univ}} + \Delta \rho_{{non}} \; , \\ \kappa_f & = & 1 + \Delta \kappa_{{univ}} + \Delta \kappa_{{non}} \; . \end{eqnarray} Leptoquarks can affect the physics at the $Z$ pole through their contributions to both universal and non-universal corrections.
The universal contributions can be expressed in terms of the unrenormalized vector boson self-energy ($\Sigma$) as \begin{eqnarray} \Delta \rho^{LQ}_{{univ}}(s) &=& -\frac{\Sigma^Z_{LQ}(s)-\Sigma^Z_{LQ}(M_Z^2)}{s-M_Z^2} +\frac{\Sigma^Z_{LQ}(M_Z^2)}{M_Z^2} -\frac{\Sigma^W_{LQ}(0)}{M_W^2} - 2 \frac{s_W}{c_W} \frac{\Sigma^{\gamma Z}_{LQ}(0)} {M_Z^2} - \chi_e - \chi_\mu \; ,\\ \Delta \kappa^{LQ}_{{univ}} &=& - \frac{c_W}{s_W}~ \frac{\Sigma^{\gamma Z}_{LQ}(M_Z^2)}{M_Z^2} - \frac{c_W}{s_W}~ \frac{\Sigma^{\gamma Z}_{LQ}(0)}{M_Z^2} +\frac{c_W^2}{s_W^2} \left[ \frac{\Sigma_{LQ}^Z(M_Z^2)}{M_Z^2}- \frac{\Sigma_{LQ}^W(M_W^2)}{M_W^2}\right] \; , \end{eqnarray} where the factors $\chi_\ell$ are defined below. The leptoquark contributions to the self-energies can be easily evaluated, yielding \begin{equation} {\Sigma}^{V}_{{LQ}}(k^2) = - \frac{\alpha_{{em}}}{4\pi} N_c \sum_{j} {\cal F}^V_j~ {\cal H} \left ( k^2, M^2\right ) \; , \label{sig:g} \end{equation} where $N_c = 3$ is the number of colors and the sum is over all members of the leptoquark multiplet. The coefficient ${\cal F}^V_j$ is given by $(Q^\gamma_{j})^2$, $\left ( Q_Z^{j} \right) ^2$, $ -Q^\gamma_{j} Q_Z^{j}$, and $ \left ( T_3^{j} \right )^2/s_W^2$ for $V = \gamma$, $Z$, $\gamma Z$, and $ W$ respectively. The function ${\cal H}$ is defined according to: \begin{equation} {\cal H}(k^2, M^2) = - \frac{k^2}{3} \Delta_M - \frac{2}{9}k^2 - \frac{4 M^2 - k^2}{3} \int^1_0 dx~ \ln \left [ \frac{{ x^2 k^2 - x k^2 + M^2 - i \epsilon}} {M^2} \right ] \; , \end{equation} with \begin{equation} \Delta_M = \frac{2}{4-d} - \gamma_E + \ln(4\pi) - \ln \left ( \frac{M^2}{\mu^2} \right ) \; , \label{delta} \end{equation} and $d$ being the number of dimensions. The factors $\chi_\ell$ ($\ell = e$, $\mu$) stem from corrections to the effective coupling between the $W$ and fermions at low energy. 
Leptoquarks modify this coupling, inducing a contribution that we parametrize as \begin{equation} i \frac{e}{\sqrt{2} s_W}~ \chi_\ell~ \gamma_\mu P_L \; , \end{equation} where $P_L$ ($P_R$) is the left-handed (right-handed) projector and $\ell$ stands for the lepton flavor. Since this correction modifies the muon decay, it contributes to $\Delta r$, and consequently, to $\Delta \rho_{{univ}}$. Leptoquarks with right-handed couplings, as well as the $F=0$ ones, do not contribute to $\chi_\ell$. The analytical expression for $\chi_\ell$ due to left-handed leptoquarks in the $F=2$ sector can be found in Ref.\ [14]. Corrections to the vertex $Z f \bar{f}$ give rise to non-universal contributions to $\rho_f$ and $\kappa_f$. We parametrize the effect of leptoquarks on these couplings by \begin{equation} i \frac{e}{2 s_W c_W} \left [ \gamma_\mu F_{VLQ}^{Zf} - \gamma_\mu \gamma_5 F_{ALQ}^{Zf} + I_3^f \gamma_\mu (1 - \gamma_5) \frac{c_W}{s_W} ~ \frac{\Sigma^{\gamma Z}_{LQ}(0)}{M_Z^2} \right ] \; , \end{equation} where for leptons ($\ell$) and leptoquarks with $F=2$ \begin{equation} \begin{array}{ll} F^\ell_{VLQ}= & \pm F^\ell_{ALQ}= \frac{g_{LQ,X}^2}{32 \pi^2} N_c {\displaystyle \sum_{j, q} } {M^{j}_{\ell q}}^\dagger M^{j}_{q\ell} \\ & \left\{ \frac{g^q_X}{2} - s_W c_W Q_Z^{j}- \left (g_X^q + 2 s_W c_W Q_Z^{j} \right )~ \frac{M^2 - m_q^2}{M_Z^2} \left [ - \frac{1}{2} \ln \left ( \frac{M^2}{m_q^2} \right ) + \bar{B_0} ( 0, m_q^2,M^2 ) \right ] \right. \\ & + 2 s_W c_W Q_Z^{j} \frac{M^2 - m_q^2 - \frac{1}{2} M_Z^2}{M_Z^2} \left [ - \ln \left ( \frac{M^2}{m_q^2} \right ) + \bar{B_0} ( M_Z^2, M^2, M^2) \right ] \\ & + g_X^q \frac{M^2-m_q^2 - \frac{1}{2} M_Z^2}{M_Z^2} \bar{B_0} (M_Z^2, m_q^2, m_q^2 ) + g^{\ell}_X \bar{B_1} (0, m_q^2, M^2) \\ & + \left [ g_{-X}^q m_q^2 + g_X^q \frac{(M^2-m_q^2)^2}{M_Z^2} \right ] C_0 (0, M_Z^2, 0, M^2, m_q^2, m_q^2 ) \\ & \left.
- 2 s_W c_W Q_Z^{j} \frac{(M^2-m_q^2)^2 + m_q^2 M_Z^2}{M_Z^2} C_0 (0, M_Z^2, 0, m_q^2, M^2, M^2) \right\} \; , \end{array} \label{z:ll} \end{equation} where the $+$ $(-)$ corresponds to left- (right-) handed leptoquarks and $g_{L/R}^f = v^f \mp a^f$ with the neutral current couplings being $a_f = I_3^f$ and $v_f = I_3^f - 2 Q^f s_W^2$. $M^{j}_{q \ell}$ summarizes the couplings between leptoquarks and fermions. The functions $B_1$, $C_0$, $C_{00}$, and $C_{12}$ are the Passarino-Veltman functions \cite{passa}. We used the convention $X=L,R$ and $-L=R$ ($-R=L$). We also defined \begin{eqnarray} B_0 (k^2, M^2, {M^\prime}^2) & \equiv & \frac{1}{2}\Delta_M+ \frac{1}{2} \Delta_{M'} + \bar{B_0} (k^2, M^2, {M^\prime}^2 ) \; , \\ B_1 (k^2, M^2, {M^\prime}^2) & \equiv & - \frac{1}{2} \Delta_M + \bar{B_1} (k^2, M^2, {M^\prime}^2) \; , \end{eqnarray} with $\Delta_M$ given by Eq.\ (\ref{delta}). From this last expression we can obtain the effect of $F=2$ leptoquarks on the vertex $Z q \bar{q}$ simply by the change $\ell \Leftrightarrow q$. Moreover, we can also employ expression (\ref{z:ll}) for $F=0$ leptoquarks provided we substitute $g_{LQ,X} \Rightarrow h_{LQ,X}$ and $g^q_{\pm X} \Rightarrow - g^q_{\mp X}$. With all this we have \begin{eqnarray} \Delta \rho^{LQ}_{{non}} & = & \frac{F_{ALQ}^{Zf}}{a_f}(M_Z^2) \; , \\ \Delta \kappa^{LQ}_{{non}} & = & - \frac{1}{2 s_W^2 Q^f} \left [ F_{VLQ}^{Zf}(M_Z^2) - \frac{v_f}{a_f}~ F_{ALQ}^{Zf}(M_Z^2) \right ] \; . \end{eqnarray} One very interesting property of the general leptoquark interactions that we are analyzing is that all the physical observables are rendered finite by using the same counter-terms as appear in the SM calculations \cite{hollik}. For instance, starting from the unrenormalized self-energies (\ref{sig:g}) and the mass and wave-function counter-terms we obtain finite expressions for the two-point functions of vector bosons.
Moreover, the contributions to the vertex functions $Z f \bar{f}$ and $W f \bar{f^\prime}$ are finite. In order to check the consistency of our calculations, we analyzed the effect of leptoquarks on the $\gamma f \bar{f}$ vertex at zero momentum. It turns out that the leptoquark contribution to this vertex function not only is finite but also vanishes at $k^2=0$ for all fermion species. Therefore, our expressions for the different leptoquark contributions satisfy the appropriate QED Ward identities, and leave the fermion electric charges unchanged. Moreover, we also verified explicitly that the leptoquarks decouple in the limit of large $M$. \section{Results and Discussion} \label{res} In our analyses, we assumed that the leptoquarks couple to leptons and quarks of the same family. In order to gain some insight on which corrections are the most relevant, let us begin our analyses by studying just the oblique corrections \cite{obli}, which we parametrized in terms of the variables $\epsilon_1$, $\epsilon_2$, and $\epsilon_3$. These variables depend only upon the interaction of leptoquarks with the gauge bosons and it is easy to see that leptoquarks contribute only to $\epsilon_2$. Imposing that this contribution must be within the limits allowed by the LEP data, we find that the constraints coming from oblique corrections are less restrictive than the available experimental limits \cite{lep,ppbar,hera}. We then performed a global fit to all LEP data including both universal and non-universal contributions. In Table \ref{LEPdata} we show the combined results of the four LEP experiments \cite{sm} that were used in our analysis. In order to perform the global fit we constructed the $\chi^2$ function associated with these data and minimized it using the package MINUIT. We expressed the theoretical predictions for these observables in terms of $\kappa^f$, $\rho^f$, and $\Delta r$, with the SM contributions being obtained from the program ZFITTER \cite{zfit}.
In our fit we used five parameters, three from the SM: $m_{{top}}$, $M_H$, and $\alpha_s(M_Z^2)$, and two new ones: $M$ and the leptoquark coupling denoted by $g_{LQ}$. Furthermore, we have also studied the dependence upon the SM inputs $M_Z$, $\alpha_{{em}}$, and $G_F$. \begin{table} \caption{LEP data} \label{LEPdata} \begin{displaymath} \begin{array}{|l|l|} \hline \hline \mbox{Quantity} & \mbox{Experimental value} \\ \hline M_Z \mbox{[GeV]} & 91.1888 \pm 0.0044 \\ \Gamma_Z \mbox{[GeV]} & 2.4974 \pm 0.0038 \\ \sigma_{\rm had}^0 \mbox{[nb]} & 41.49 \pm 0.12\\ R_e = \frac{\Gamma({\rm had})}{\Gamma(e^+ e^-)} & 20.850 \pm 0.067 \\ R_\mu = \frac{\Gamma({\rm had})}{\Gamma(\mu^+ \mu^-)} & 20.824 \pm 0.059 \\ R_\tau = \frac{\Gamma({\rm had})} {\Gamma(\tau^+ \tau^-)} & 20.749 \pm 0.070 \\ A_{FB}^{0e} & 0.0156 \pm 0.0034 \\ A_{FB}^{0\mu} & 0.041 \pm 0.0021 \\ A_{FB}^{0\tau} & 0.0228 \pm 0.0026 \\ A_{\tau}^0 & 0.143 \pm 0.010 \\ A_e^0 & 0.135 \pm 0.011 \\ R_b = \frac{\Gamma(b \bar{b})}{ \Gamma({\rm had})} & 0.2202 \pm 0.0020\\ R_c = \frac{\Gamma(c\bar{c}) }{\Gamma({\rm had})} & 0.1583 \pm 0.0098\\ A_{FB}^{0b} & 0.0967 \pm 0.0038 \\ A_{FB}^{0c} & 0.0760 \pm 0.0091 \\ \hline \hline \end{array} \end{displaymath} \end{table} The first part of our analysis consisted of the study of the constraints on the leptoquark masses and couplings. In order to determine the allowed region in the $M_{LQ}$--$ g_{LQ}$ plane, shown in Fig.\ \ref{contours} for the different models, we obtained the minimum $\chi^2_{{min}}$ of the $\chi^2$ function with respect to the parameters above for each leptoquark model, and we then required that $\chi^2 \leq \chi^2_{{min}} +\Delta \chi^2(2,90\% \hbox{CL})$, with $\Delta\chi^2(2,90\% \hbox{CL})=4.61$. In this procedure, the parameters $m_{{top}}$, $M_H$, and $\alpha_s$, as well as the SM inputs $M_Z$, $\alpha_{{em}}$, and $G_F$, were varied so as to minimize $\chi^2$.
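To make the fitting machinery concrete, here is a minimal sketch of building and minimizing a $\chi^2$ function of the kind described above. It uses scipy in place of MINUIT, a few numbers from Table \ref{LEPdata} as stand-ins for the data vector, and a purely hypothetical linear response in place of the ZFITTER + leptoquark prediction; the structure, not the physics, is the point.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-ins for a few (observable, error) pairs from the LEP data table.
obs = np.array([2.4974, 41.49, 20.850])
err = np.array([0.0038, 0.12, 0.067])

def prediction(params):
    """Hypothetical linear response of the observables to two fit parameters
    (a stand-in for the real ZFITTER + leptoquark prediction)."""
    a, b = params
    return obs + a * np.array([1.0, -2.0, 0.5]) + b * np.array([0.2, 1.0, -1.0])

def chi2(params):
    return np.sum(((obs - prediction(params)) / err) ** 2)

# the toy "data" were generated at a = b = 0, so chi2 is minimized there
res = minimize(chi2, x0=[0.1, -0.1])
```

In the actual analysis the prediction depends on five parameters (plus the SM inputs) and the minimization is done with MINUIT, but the $\chi^2$ construction is the same.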
We must comment here that the dependence on $\alpha_{{em}}$ and $G_F$ is negligible when they are allowed to vary in their $90\%$ CL range. On the other hand, the variation of $M_Z$ in the interval $91.18\leq M_Z\leq 91.196$ leads to a change in the allowed values of the leptoquark parameters of at most 1\%. The contour plots exhibited in Fig.\ \ref{contours} were obtained for third generation leptoquarks. From this figure we can see that the bounds are much more stringent for the leptoquarks that couple to the top quark, {\em i.e.} for $S_{1L(R)}$, $S_3$, and $R_{2L(R)}$, since their contributions are enhanced by powers of the top quark mass. Moreover, for a given leptoquark type the limits are slightly better for left-handed leptoquarks than for right-handed ones, and the curve is symmetric around $g_{LQ}=0$ since the leptoquark contributions are quadratic functions of $g_{LQ}$. The contributions from $\tilde R_2$ and $\tilde S_1$ are not enhanced by powers of the top quark mass since these leptoquarks do not couple directly to up-type quarks. Therefore, their limits are much weaker, depending on $m_{{top}}$ only through the SM contribution, and the bounds for these leptoquarks are worse than the present discovery limits unless they are strongly coupled ($g_{LQ}^2 = 4 \pi$). Moreover, the limits on first and second generation leptoquarks are also uninteresting for the same reason. Nevertheless, if we allow leptoquarks to mix the third generation of quarks with leptons of another generation, the bounds obtained are basically the same as the ones discussed above\footnote{In the case of first generation leptons, we must also add a tree level $t$-channel leptoquark exchange to some observables.}, since the main contribution to the constraints comes from the $Z$ widths. We next present our results as 95\% CL lower limits on the leptoquark mass and study the dependence of these limits upon all other parameters.
For this, we minimized the $\chi^2$ function for fixed values of $\alpha_s$, $M_H$, and $m_{{top}}$ and then required $\chi^2 (\alpha_s, M_H, m_{{top}})\le \chi^2_{{min}} (\alpha_s, M_H, m_{{top}})+ \Delta\chi^2 (1,90\% \hbox{CL})$, with $ \Delta\chi^2(1,90\% \hbox{CL})=2.71$. Our results are shown in Table \ref{res:top}, where we give the 95\% CL limits obtained for a third generation leptoquark for several values of the coupling constant $g_{LQ}$ ($=\sqrt{4\pi}$, $1$, and $e/s_W$). The values given correspond to $m_{top}=175$ GeV and variation of $M_H=60-1000$ GeV and $\alpha_s(M_Z^2)=0.126\pm 0.005$, which is the range associated with the best values obtained from a fit in the framework of the SM \cite{sm}. For a fixed value of $m_{{top}}$ and leptoquark coupling constant, the dependence on $\alpha_s(M_Z^2)$ and $M_H$ is such that the limits are more stringent as $\alpha_s(M_Z^2)$ increases and $M_H$ decreases. The SM parameters $M_Z$, $\alpha_{{em}}$, and $G_F$ have also been varied in their allowed ranges. However, this did not affect the results in a noticeable way.
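The $\Delta\chi^2$ thresholds quoted above are simply quantiles of the $\chi^2$ distribution with the number of parameters left free in the scan; they can be reproduced, for instance, with scipy:

```python
from scipy.stats import chi2

# 90% CL threshold with 2 free parameters (used for the mass--coupling contours)
dchi2_2 = chi2.ppf(0.90, df=2)   # ~ 4.61
# 90% CL threshold with 1 free parameter (used for the one-parameter mass limits)
dchi2_1 = chi2.ppf(0.90, df=1)   # ~ 2.71
```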
\begin{table} \caption{ Lower limits (95\% CL) for the mass of third generation leptoquarks in GeV for different values of the couplings, assuming $m_{{top}} = 175$ GeV, $\alpha_s(M_Z^2) = 0.126\pm 0.005$, and $M_H = 60-1000$ GeV.} \label{res:top} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \hline $g_{LQ}$ & $S_1^R$ & $S_1^L$ & $S_3$ & $R_2^R$ & $R_2^L$ & $\tilde{S}_1^R$ & $\tilde{R}_2^L$ \\ \hline $\protect\sqrt{4\pi}$ & 5800--3200 & 6000--3500 & 8000--3700 & 6000--3300 & 6800--3400 & 300--100 & 550--120\\ 1 & 1200--550 & 1200--600 & 1700--700 & 1250--600 & 1400--600 & --- & ---\\ ${\displaystyle \frac{e}{s_W}}$ & 550--200 & 600--225 & 900--325 & 600--250 & 700--250 & --- & --- \\ \hline \hline \end{tabular} \end{table} We would like to stress that the large apparent uncertainty associated with the values of $\alpha_s$ and $M_H$ can be considered somewhat fictitious, as the value of $\chi^2_{{min}}$ grows very fast as we move away from the central values $\alpha_s=0.126$ and $M_H=300$ GeV, which means that the quality of the fit for the extreme values of these parameters is rather bad. For instance, $\alpha_s=0.117$ results in too high a $\chi^2$, even in the context of the SM ($\chi^2_{{min}}>26/12$). \section{Acknowledgements} I would like to thank Alan Sommerer for his hospitality. This work was supported by Conselho Nacional de Desenvolvimento Cient\'{\i}fico e Tecnol\'ogico (CNPq) and by Funda\c{c}\~ao de Amparo \`a Pesquisa do Estado de S\~ao Paulo (FAPESP).
\section{\label{sec:level1}Introduction} Randomness plays a prominent role in quantum computing, quantum information processing and physics in general. In particular, random unitaries chosen from the Haar measure \cite{4} on the unitary group U(N) find applications in randomized benchmarking \cite{6}, noise estimation \cite{5}, quantum metrology \cite{OAC+16}, as well as modeling thermalization \cite{7} and even black hole physics \cite{8}. Unfortunately, genuine Haar distributed unitaries are hard to create, as the scaling required is exponential in the number of qubits \cite{9}. On the other hand, efficient substitutes of Haar distributed unitaries were shown to exist \cite{1}, \cite{10}, \cite{11}, \cite{12}, \cite{13}, \cite{3}, \cite{15}. These substitutes are known as unitary t-designs \cite{10}: ensembles over subsets of U(N) which mimic exactly \cite{10}, \cite{11}, \cite{15} or approximately \cite{1}, \cite{12}, \cite{13}, \cite{3}, \cite{15} choosing from the Haar measure up to order t in the statistical moments. Following \cite{15}, our approach is to harness the quantum randomness arising from applying a measurement based (MB) scheme to produce approximate designs. In MB computation \cite{14}, deterministic unitary computation is achieved by making sequential, adaptive measurements on an entangled multipartite state, known as a graph state \cite{20}. Without these adaptive feedforward corrections, the inherent randomness of the measurements effectively samples from ensembles of unitaries. In \cite{15} it was shown that starting with a fixed graph state and applying fixed angle measurements (with no need for feedforward corrections) effectively samples from an approximate t-design. Furthermore, this process is efficient in the number of qubits, preparation and measurements, following from the efficiency of the construction of Brandao et al. \cite{1}.
Indeed the construction of the graph state essentially mimics the random circuit construction of Brandao et al. \cite{1}. However, in doing so, the graph itself is rather complicated, and moreover is not a simple regular lattice. A natural question is then: can simple, regular lattices (such as those useful for universal measurement based computation \cite{14}, \cite{19}) be applied to generate t-designs? As well as being more convenient from a practical point of view (in terms of generating the graph state), this connects the question of optimal generation of ensembles to standard measurement based quantum computation. Furthermore, it requires a new proof that the result is an approximate t-design (though the techniques also follow along the lines of \cite{1}, it does not follow directly from their results). In this work we show that it is possible. In particular, we show that running a fixed measurement MB scheme on a regular graph with a poly-log (in $n$, $t$ and $\dfrac{1}{\varepsilon}$) number of qubits, with no feed-forward, results in an ensemble of random unitaries which forms an $\varepsilon$-approximate t-design. The graph we use is very similar to the brickwork graph known to be a universal resource for MB quantum computation \cite{19}. The proofs presented here rely principally on the G-local random circuit construction (GLRC) of Brandao et al. \cite{1}, the detectability lemma (DL) of Aharonov et al. \cite{2}, as well as a theorem \cite{33}, \cite{3}, \cite{1} on the equivalence between tensor product expanders (TPEs) and approximate t-designs.\\ This paper is divided as follows: section II defines some preliminary notions, section III provides a brief statement of the results, section IV contains a detailed proof of the results, and finally section V briefly discusses some potential applications.
\section{\label{sec:level2}Preliminaries} \subsection{\label{subsec:level1} MBQC} The model of computation used throughout this work is the measurement based quantum computation (MBQC) model. This model was first proposed by Raussendorf and Briegel \cite{14} as an alternative to the gate model of quantum computing \cite{16}. Computation is carried out by first preparing a large entangled state (a graph state), followed by single qubit measurements. Crucially, in order to deterministically perform a desired unitary, the measurements must be done adaptively: to counter the inherent randomness arising from the measurements, the measurement angles are corrected by a feedforward process using previous measurement results. However, instead of doing these corrections we will use this randomness as a resource to sample from an ensemble of unitaries with the desired structure, namely that they form a t-design. A \emph{graph state} \cite{20} is a pure entangled quantum state of $n$ qubits in one-to-one correspondence with a graph $G=\{E,V\}$ of $n$ vertices and edges. Each vertex $i \in V$ of the graph is associated with a qubit, and each edge $\{i,j\} \in E$ represents an entangling operation applied during the preparation \begin{eqnarray} |G\rangle = \prod_{i,j \in E} CZ_{i,j} |+...+\rangle_V \nonumber. \end{eqnarray} For computations on quantum inputs, a set of vertices $I \subset V$ are assigned as the input vertices, with initial input state $|\psi_{in}\rangle_I$, and the associated \emph{open graph state} is defined as \begin{eqnarray} \label{eq Open Graph State} |G(\psi)\rangle = \prod_{i,j \in E} CZ_{i,j} |\psi_{in}\rangle_I|+...+\rangle_{V\setminus I}. \end{eqnarray} We also identify the set of output qubits by the vertices $O\subset V$. Computation is then carried out by sequentially measuring all the non-output qubits, in our case in a basis in the $X-Y$ plane.
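The definition is easy to illustrate numerically. The sketch below builds $|G\rangle$ for a small hypothetical example (a triangle graph, not one of the graphs used later) and checks the standard stabilizer property of graph states, $K_i|G\rangle = |G\rangle$ with $K_i = X_i \prod_{j \sim i} Z_j$:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])
plus = np.ones(2) / np.sqrt(2)

def kron_all(ops):
    """Tensor product of a list of single-qubit operators/states."""
    return reduce(np.kron, ops)

def cz(n, i, j):
    """Diagonal CZ gate between qubits i and j of an n-qubit register."""
    d = np.ones(2 ** n)
    for b in range(2 ** n):
        if (b >> (n - 1 - i)) & 1 and (b >> (n - 1 - j)) & 1:
            d[b] = -1.0
    return np.diag(d)

# example graph: a triangle on 3 vertices
n, edges = 3, [(0, 1), (1, 2), (0, 2)]
state = kron_all([plus] * n)           # |+++>
for (i, j) in edges:
    state = cz(n, i, j) @ state        # |G> = prod CZ_{i,j} |+...+>

# vertex 0 has neighbours 1 and 2, so K_0 = X (x) Z (x) Z stabilizes |G>
K0 = kron_all([X, Z, Z])
```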
Each measurement is represented by an angle $\alpha$, corresponding to measuring in the basis \newline $\{|\pm\alpha\rangle := (|0\rangle \pm e^{i \alpha}|1\rangle)\}$. In standard MBQC the correction strategy is given by the gflow \cite{21}, a partial order over the graph together with a function, which give the time order of the measurements and the dependencies, respectively. In this work, following \cite{15}, we do not adapt the measurement angles, so that for each measurement result a potentially different unitary is performed. Following the convention of \cite{21} (see also \cite{15}) we represent these resources as graphs where the input vertices have squares on them and the measured qubits have an angle inside representing the measurement angle; hence the quantum outputs $O$ are empty circles (note that in \cite{21} measured qubits are simply coloured black). Figure \ref{fig1} illustrates this for a simple example. Following equation (\ref{eq Open Graph State}), for input state $|\psi_{in}\rangle = a|0\rangle+b|1\rangle$ Fig. \ref{fig1} corresponds to an initial open graph state \begin{eqnarray} |G_\psi\rangle & =& a |0\rangle |+\rangle + b |1\rangle |-\rangle \nonumber \\ & = &\frac{1}{\sqrt{2}}\left(|+\alpha\rangle HZ(\alpha) |\psi_{in}\rangle +|-\alpha\rangle H Z Z(\alpha) |\psi_{in}\rangle \right)\nonumber, \end{eqnarray} where $H$ is the Hadamard gate, $Z$ is the Pauli Z gate, and $Z(\alpha):=e^{-i \alpha Z/2}$ is a rotation by angle $\alpha$ around the Z axis. Denoting by $m$ the binary measurement outcome, associating $m=0$ to outcome $+\alpha$ and $m=1$ to $-\alpha$, it is clear that measuring the first qubit is equivalent to applying the random unitary \begin{eqnarray} U(m) = H Z^m Z(\alpha), \end{eqnarray} with equal probability for $m=0$ and $m=1$. As in \cite{15}, the same idea applied to larger graphs, with more inputs and outputs, is the source of the random unitary ensembles we will study in this work.
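This single-gadget computation is easy to check numerically. The sketch below entangles $|\psi_{in}\rangle$ with $|+\rangle$ via CZ and projects the first qubit; phase conventions for the measured basis differ between references, and with the projector taken here as $\langle 0| + (-1)^m e^{i\alpha}\langle 1|$ (up to normalization) the output reproduces $U(m)|\psi_{in}\rangle = H Z^m Z(\alpha)|\psi_{in}\rangle$ up to a global phase:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Z = np.diag([1.0, -1.0])

def Zrot(a):
    """Z(a) = exp(-i a Z / 2)."""
    return np.diag([np.exp(-1j * a / 2), np.exp(1j * a / 2)])

def gadget(psi, alpha, m):
    """CZ-entangle |psi> with |+>, then project qubit 1 on outcome m."""
    plus = np.ones(2) / np.sqrt(2)
    state = np.diag([1, 1, 1, -1]) @ np.kron(psi, plus)      # CZ |psi>|+>
    bra = np.array([1, (-1) ** m * np.exp(1j * alpha)]) / np.sqrt(2)
    out = bra[0] * state[:2] + bra[1] * state[2:]            # <m_alpha| (x) I
    return out / np.linalg.norm(out)

psi = np.array([0.6, 0.8j])
alpha = 0.7
# overlap |<expected|out>| = 1 means equality up to a global phase
overlaps = [abs(np.vdot(H @ np.linalg.matrix_power(Z, m) @ Zrot(alpha) @ psi,
                        gadget(psi, alpha, m))) for m in (0, 1)]
```

One can also verify that, before normalization, the two branches each occur with probability $1/2$, as stated above.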
\begin{figure}[h] \begin{center} \graphicspath{} \includegraphics[trim={0 10cm 0 0cm}, scale=0.18]{fig1corr.pdf} \caption{MB scheme on a 2 qubit cluster state: the input (squared) qubit, when measured at an angle $\alpha$ in the XY plane, results in propagation of the input state to the output along with application of a random unitary $U = H Z^{m} Z(\alpha)$.} \label{fig1} \end{center} \end{figure} \subsection{\label{subsec:level2}t-designs and Tensor Product Expanders} Unitary t-designs \cite{10}, \cite{12}, \cite{13} are subsets of the unitary group U(N), sampled with some probability distribution, which mimic either exactly or approximately choosing from the Haar measure on U(N) up to order t in the statistical moments. Our concern in this work is with the latter, known as $\varepsilon$-approximate t-designs. More formally, let $H = (C^{2})^{\otimes n}$ be the Hilbert space of n qubits, and define the density matrix $\rho$=$\ket{\phi}\bra{\phi}$ with $\ket{\phi}$ a unit vector in H (an n-qubit state). Then let $\{p_{i}, U_{i}\}$ be a collection of unitaries in U(N)= U($2^{n}$) (the n-qubit unitary group), where we sample each $U_{i}$ with probability $p_{i}$. Let $\mu_{H}$ denote the Haar measure \cite{4} on U($2^{n}$). Consider now t copies of our n-qubit system; an ensemble $\{p_i, U_i\}$ is an \emph{exact t-design} if it satisfies \begin{equation} \begin{aligned} \label{eq1} \sum_{i}p_{i} U_{i} ^{\otimes t} \rho^{t} U_{i} ^{\dagger \otimes t}=\int_{ U(2^{n})}U ^{\otimes t}\rho^{t}U ^{\dagger \otimes t}\mu_{H}(dU), \end{aligned} \end{equation} for all $\rho^{t} \in \mathcal{B}(H^{\otimes t})$, where $\int_{ U(2^{n})}U ^{\otimes t}\rho^{t}U ^{\dagger \otimes t}\mu_{H}(dU)$ represents averaging over the Haar measure.
$\varepsilon$-approximate t-designs are defined similarly as ensembles which satisfy \begin{equation} \begin{aligned} \label{eq2} (1-\varepsilon)\int_{ U(2^{n})}U ^{\otimes t}\rho^{t}U ^{\dagger \otimes t}\mu_{H}(dU) \leq \sum_{i}p_{i} U_{i} ^{\otimes t} \rho^{t} U_{i} ^{\dagger \otimes t} \\ \leq (1+\varepsilon) \int_{ U(2^{n})}U ^{\otimes t}\rho^{t}U ^{\dagger \otimes t}\mu_{H}(dU). \end{aligned} \end{equation} Approximate t-designs can also be defined using various norms \cite{1}, \cite{13}, \cite{3}. A useful concept equivalent to a t-design is a tensor product expander (TPE) \cite{22}. We say that a couple $\{p_{i}, U_{i}\}$ is an ($\eta$,t)-TPE if the following equation holds \cite{3}, \cite{22}: \begin{equation} \begin{aligned} \label{eq3} g(t,\mu):=\mid\mid \sum_{i }p_i U_i^{\otimes t,t} - \int_{U(2^{n}) } U^{\otimes t,t}\mu_{H}(dU) \mid\mid_\infty\leq\eta, \end{aligned} \end{equation} where $U^{\otimes t,t} =U^{\otimes t} \otimes U^{\star \otimes t}$, $\star$ denotes the complex conjugate, and $\mu$ represents the probability measure on U(d) which results in choosing $U_i$ with probability $p_i$. In our main proof we will rely on the following theorem \cite{33}, \cite{3}, \cite{1} on the equivalence between TPEs and $\varepsilon$-approximate t-designs: \newtheorem{theorem}{Theorem} \begin{theorem} \cite{33}, \cite{3}, \cite{1}\\ Let $\{p_{i}, U_{i}\}$ be an ($\eta$,t)-TPE with $\eta < 1$. Denote by $\mathcal{U}_{i}$ the set of all possible unitaries $U_{i}$. Then iterating this TPE k times (i.e.\ obtaining the product $U=\prod_{j=1,...,k}U_{\pi (j)}$ with the $U_{\pi (j)}$'s independently chosen from the ensemble $\{p_{i}, U_{i}\}$, $\pi(j) \in \{1,\dots,|\mathcal{U}_{i}|\}$), with k $\ge$ $\dfrac{1}{log(\dfrac{1}{\eta})}log(\dfrac{d^{t}}{\varepsilon})$, results in an ensemble \{$p_U$, $U$=$\prod_{j=1,...,k}$$U_{\pi (j)}$\} which is an $\varepsilon$-approximate t-design. Here $d$ is the dimension of the unitary group.
\end{theorem} For convenience we define the t’th \emph{moment super operator} of \{$p_{i}$ , $U_{i}$\} (or simply moment super operator) as follows: \begin{equation} \begin{aligned} \label{momop} M_t [\mu]:=\sum_{i} p_i U_i^{\otimes t,t}. \end{aligned} \end{equation} Its appearance in the TPE condition (\ref{eq3}) means that $M_t [\mu]$ plays a major role in our proofs. Indeed, for any ensemble such that sampling it many times does eventually lead to the Haar measure, $g(t,\mu)$ is equal to the second highest eigenvalue of $M_t[\mu]$ \cite{12,1}. We will then follow the techniques of \cite{1}, \cite{12}, \cite{13} in connecting the calculation of this quantity to gaps of Hamiltonians, which will allow us to prove the connection to t-designs via Theorem 1. \subsection{\label{subsec:level3}Many body Physics and t-designs} It has been known for some time \cite{1}, \cite{12}, \cite{13} that the problem of estimating the scaling rate (number of iterations needed to reach a desired accuracy $\varepsilon$) of an $\varepsilon$-approximate t-design can be reduced to a problem of finding the spectral gap (the difference of energy between the ground and first excited state) of some many-body Hamiltonian. Here we give an overview of these techniques, in particular as used in \cite{1}. An extensive body of research (\cite{23}, \cite{24}, \cite{25}, \cite{26} and many others) has been devoted to the case of 1D spin chains, with local Hamiltonians (we assume a finite interaction range) with translational symmetry (see Fig.\ref{fig2}). We will focus exclusively on this case, and more precisely on a type of 1D Hamiltonian (the one we use in our proof) consisting of local terms acting on nearest neighbor spins $i$ and $i+1$ with translational symmetry, which are frustration free (the entire Hamiltonian can be minimized by minimizing each of its local terms individually) as well as satisfying the Nachtergaele criterion (\cite{23}, condition C.3).
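As a small sanity check of these definitions, recall the standard fact that the uniform ensemble over the single-qubit Paulis is an exact 1-design: its moment super operator coincides with the Haar average of $U \otimes U^{\star}$ over U(2) (which, for $t=1$, is the projector onto the maximally entangled vector), so the TPE quantity $g(1,\mu)$ vanishes. A minimal numerical check:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])

# moment super operator M_1[mu] for the uniform Pauli ensemble
M = sum(np.kron(P, P.conj()) for P in (I2, X, Y, Z)) / 4

# Haar average of U (x) U* over U(2): projector onto (|00> + |11>)/sqrt(2)
omega = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
haar = np.outer(omega, omega.conj())

g = np.linalg.norm(M - haar, 2)   # operator norm, i.e. g(1, mu)
```

Here `g` is exactly zero up to floating-point error, as expected for an exact design (the Pauli ensemble is of course not a 2-design, so this check only probes the first moment).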
This family of Hamiltonians was used by Brandao et al. \cite{1} to study their local random circuit (LRC) construction, and later the so-called G-local random circuits (GLRC). We will briefly define these families of circuits and review these proofs. \begin{figure}[h] \begin{center} \graphicspath{} \includegraphics[trim={0 12cm 0 16cm}, scale=0.08]{Figurefornewarticlecorr.pdf} \caption{Example of a 4 spin 1D system. The local Hamiltonians $h_{i,i+1}$ have a range of 2 (i.e. act on nearest neighbor spins). For example $h_{1,2}$ acts on spins 1 and 2. Translational invariance means that any $h_{i,i+1}$ has the same form on all 2 qubit systems ($i$, $i+1$). In this case the total Hamiltonian of the system of 4 spins can be written as a sum of local Hamiltonians, i.e. H=$h_{1,2}$+$h_{2,3}$+$h_{3,4}$. } \label{fig2} \end{center} \end{figure} The local random circuits (LRC) in \cite{1} generate random circuits on n qubits as follows. For each run of the LRC, a unitary $U \in$ U(4) is chosen from the Haar measure on U(4), then an index $i$ is chosen uniformly at random from the set $\{1,\dots,n-1\}$, and finally $U$ is applied to qubits $i$ and $i+1$. The LRC defines a couple \{$\mu_{LRC}$,$\mathcal{U}$\}, where $\mu_{LRC}$ is the probability measure induced by one LRC run, and $\mathcal{U}$ is the set of all the possible unitaries which can be generated by one LRC run. We arrive at the following moment super operator associated to one run of the LRC \begin{equation} \begin{aligned} \label{eq4} M_{t}[\mu_{LRC}]=\dfrac{1}{n-1}\sum_{i=1}^{n-1}\int_{U(4)}U_{i,i+1}^{\otimes t,t}\mu_{H}(dU)=\dfrac{1}{n-1}\sum_{i=1}^{n-1}P_{i,i+1}, \end{aligned} \end{equation} \\ where $U_{i,i+1}$=$ 1^{\otimes i-1}\otimes U \otimes1^{\otimes n-i-1}$, $U \in$ U(4), $P_{i,i+1}:=\int_{U(4)}U_{i,i+1}^{\otimes t,t}\mu_{H}(dU)$, and $\mu_{H}$ is the Haar measure on U(4). Since each of the $P_{i,i+1}$'s is Hermitian, $M_t [\mu_{LRC}]$ is itself Hermitian.
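Sampling one LRC step is straightforward. Below is a minimal sketch, with Haar-random U(4) elements drawn by the standard QR trick (complex Gaussian matrix, QR decomposition, phase fix on the diagonal of $R$):

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_u4():
    """Haar-random element of U(4) via QR of a complex Gaussian matrix."""
    A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    Q, R = np.linalg.qr(A)
    return Q * (np.diag(R) / np.abs(np.diag(R)))  # fix the phase ambiguity

def lrc_step(n):
    """One LRC run: a Haar-random U(4) on a random nearest-neighbour pair."""
    i = rng.integers(0, n - 1)                    # pair (i, i+1), 0-indexed
    U = haar_u4()
    return np.kron(np.kron(np.eye(2 ** i), U), np.eye(2 ** (n - 2 - i)))

W = lrc_step(4)                                   # one step on n = 4 qubits
```

Iterating `lrc_step` and multiplying the results samples from the iterated ensemble whose moment super operator powers appear in Theorem 1.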
Now consider the Hamiltonian \begin{equation} H=\sum_{i}h_{i,i+1}, \end{equation} where $h_{i,i+1}$=$1- P_{i,i+1}$. Then \begin{equation} M_{t}[\mu_{LRC}] = I-\dfrac{H}{n-1}. \end{equation} The ground space of H has an eigenvalue of 0, and the gap between its ground and first excited spaces gives the second highest eigenvalue of $M_{t}[\mu_{LRC}]$. This $H$ is a 1D spin chain Hamiltonian with nearest neighbor local terms which are translationally invariant. It is also frustration free by construction, because the Hamiltonian can be minimized simply by minimizing all of its local terms (taking their ground state of energy 0) individually. Brandao et al. also proved that this Hamiltonian satisfies the Nachtergaele criterion, and then bounded the corresponding constant using path coupling techniques \cite{1}. In this way they show that the spectral gap $\Delta H$ of $H$ admits the following (polynomial in t) bound for $n \ge \lfloor 2.5\log_{2}(4t) \rfloor$: \begin{equation} \begin{aligned} \label{eq5} \Delta H \ge (1700 \cdot \lfloor \log_{2} (4t) \rfloor^{2}\cdot t^{5}\cdot t^{\dfrac{3.1}{\log(2)}})^{-1}, \end{aligned} \end{equation} where $\lfloor x \rfloor$ denotes the floor function acting on the variable $x$. We now move to G-local random circuits, which are the finite-set counterparts of the LRC. One run of the GLRC follows exactly as the LRC case, but instead of choosing a unitary $U$ from the Haar measure on U(4), we choose with uniform probability from a finite set G of SU(4) which is universal and contains inverses.
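For $t = 1$ the projectors $P_{i,i+1}$ have a simple closed form (the $t=1$ Haar twirl over U(4) replaces the pair by the maximally mixed state $I/4$, a standard fact), so the identity $M_t[\mu_{LRC}] = I - H/(n-1)$, frustration freeness, and a strictly positive gap can all be checked explicitly for, say, $n = 3$ qubits. A sketch:

```python
import numpy as np

n, d = 3, 8          # n = 3 qubits, t = 1: super operators are 64 x 64

def pair_twirl_projector(first_pair):
    """P_{i,i+1} for t = 1: the Haar twirl on a nearest-neighbour pair
    replaces that pair by the maximally mixed state I/4."""
    P = np.zeros((d * d, d * d), dtype=complex)
    for k in range(d):
        for l in range(d):
            E = np.zeros((d, d), dtype=complex)
            E[k, l] = 1.0
            T = E.reshape(2, 2, 2, 2, 2, 2)      # (row qubits, column qubits)
            if first_pair:                        # pair (1,2): trace it out
                out = np.kron(np.eye(4) / 4, np.einsum('abcabf->cf', T))
            else:                                 # pair (2,3): trace it out
                out = np.kron(np.einsum('abcdbc->ad', T), np.eye(4) / 4)
            P[:, k * d + l] = out.reshape(-1)
    return P

P12, P23 = pair_twirl_projector(True), pair_twirl_projector(False)
M = (P12 + P23) / 2                               # moment super operator
Hm = (np.eye(d * d) - P12) + (np.eye(d * d) - P23)

ok_projector = np.allclose(P12 @ P12, P12)        # local terms are projectors
ok_identity = np.allclose(M, np.eye(d * d) - Hm / (n - 1))
vec_id = np.eye(d).reshape(-1) / np.sqrt(d)       # vectorized identity
ok_frustration_free = np.allclose(Hm @ vec_id, 0) # ground energy 0
gap = (n - 1) * (1 - np.sort(np.linalg.eigvalsh(M))[-2])  # spectral gap of H
```

For larger $t$ the projectors have no such elementary form, which is precisely why the general gap bounds of \cite{1} are needed.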
One can show from the beautiful result of Bourgain and Gamburd \cite{26} that the Hamiltonian $H_{GLRC}$=$\sum_{i} h^{'}_{i,i+1}$=$\sum_{i}(1-P^{'}_{i,i+1})$ with $P^{'}_{i,i+1}$ =$\dfrac{1}{|G|} \sum_{U \in G}(1^{\otimes i-1}\otimes U \otimes1^{\otimes n-i-1})^{\otimes t,t}$ admits the following bound for its spectral gap: \begin{equation} \begin{aligned} \label{eq6} \Delta H_{GLRC} \ge \alpha\cdot\Delta H, \end{aligned} \end{equation} with $\alpha$ a constant and $\Delta H$ the spectral gap of the LRC Hamiltonian. Note that because the set G contains unitaries and their inverses and samples them uniformly, $\mu_{G}(U)=\mu_{G}(U^{\dagger})$ for all $U, U^{\dagger} \in$ G. This means that $P^{'}_{i,i+1}$ (hence $H_{GLRC}$) is a Hermitian operator, and the above definition of a GLRC Hamiltonian makes sense. We can rewrite equation (\ref{eq6}) as follows: \begin{equation} \begin{aligned} \label{eq7} \Delta H_{GLRC} \ge (C\cdot \lfloor \log_{2} (4t)\rfloor^{2}\cdot t^{5}\cdot t^{\dfrac{3.1}{\log(2)}})^{-1}= P_{GLRC}, \end{aligned} \end{equation} $C$ being a constant depending on the gate set G. These spectral gaps directly give the second highest eigenvalue of the corresponding moment super operators, equal to the $g(t, \mu)$ in equation (\ref{eq3}), confirming the TPE conditions, which through Theorem 1 then allow statements about their efficiency as t-designs \cite{1}. \section{\label{sec:level4} Main Results} In order to state the main results, we define some simple graph states for two input qubits. We will henceforth refer to a graph state with an MB scheme applied to it as a gadget. These will act as the building blocks for our construction. Consider the two 5-column, 2-row brickwork states with a fixed-angle MB scheme in Fig.\ \ref{fig3} and Fig.\ \ref{fig4}, which we call $S_{I_{1}}$ and $S_{I_{2}}$.
\begin{figure}[h] \begin{center} \graphicspath{} \includegraphics[trim={0 12cm 0 0cm}, scale=0.4]{fig3acorr.pdf} \caption{The 2-row, 5-column brickwork state gadget giving rise to $S_{I_{1}}$ } \label{fig3} \end{center} \end{figure} \begin{figure}[h] \begin{center} \graphicspath{} \includegraphics[trim={0 12cm 0 0cm}, scale=0.4]{fig3bcorr.pdf} \caption{The 2-row, 5-column brickwork state gadget giving rise to $S_{I_{2}}$ } \label{fig4} \end{center} \end{figure} $S_{I_{1}}$ and $S_{I_{2}}$ give rise respectively to two MB ensembles \{$\dfrac{1}{2^{8}}$,$U_{1_{i}}$\} and \{$\dfrac{1}{2^{8}}$,$U_{2_{i}}$\}; the number of unitaries generated by the MB scheme on the 2-row, 5-column brickwork state is $2^{8}$. It can easily be checked that each unitary of the ensembles $S_{I_{1}}$ and $S_{I_{2}}$ has (up to a global phase) an inverse in the ensemble. That is, denoting by $\mathcal{U}_{S_{I_1}}$ the set of unitaries generated by $S_{I_1}$ (and similarly for $S_{I_2}$), for all $U_{1_{i}} \in \mathcal{U}_{S_{I_1}}$ there exists $U_{1_{j}} \in \mathcal{U}_{S_{I_1}}$ such that $U_{1_{i}}=U^{\dagger}_{1_{j}}$. Similarly for $S_{I_2}$. Consider now a 13-column brickwork gadget: $B = S_{I_{1}} \circ S_{I_{2}} \circ S_{I_{1}}$, where we mean by W $\circ$ V a concatenation which identifies the output of graph W as an input of graph V. We are now in a position to state our main results:\\ \\ \begin{theorem} The gadget B gives rise to an ensemble of unitaries which \\ (i) is universal on SU(4), \\ (ii) contains elements and their inverses, and\\ (iii) is sampled with a uniform probability. \\ \end{theorem} This theorem means that the set of unitaries generated by B (call it $\mathcal{U}_B$) satisfies the conditions necessary to form a GLRC. Though this is not exactly how our construction works, it will be important in the proof of Theorem 3. Now consider the gadget on n qubits given in Fig.\ref{fig5}, which we call $G_{n}$.
The horizontal line with a circle in the middle means a direct link between output and input, performed only on the first and last rows. The square with the letter B is our 13-column brickwork gadget B, and the empty 3-sided square means that there is no vertical entanglement. \begin{figure}[h] \begin{center} \graphicspath{} \includegraphics[trim={0 5cm 0 0cm},scale=0.4]{fig4corr.pdf} \caption{The graph gadget $G_{n}$ pictured here for even n (the odd n case follows straightforwardly) } \label{fig5} \end{center} \end{figure} The first and last rows of $G_{n}$ are made up of 13 qubits, and all rows in between are made up of 25 qubits. This gives rise in total to a graph composed of 26$+$25(n$-$2) = 25n$-$24 qubits. We now state our second main result. \\ \\ \begin{theorem} The k(n,t,$\varepsilon$)-fold concatenation of $G_{n}$, \newline $E_{n} = G_{n} \circ G_{n} \circ \dots$, results in an ensemble of unitaries which forms an $\varepsilon$-approximate t-design on n qubits ($n \ge \lfloor 2.5\log_{2}(4t) \rfloor$), with: \\ k(n,t,$\varepsilon$) $\ge$ $\dfrac{3}{\log_{2}(1+\dfrac{P_{GLRC}}{2})}(nt+\log_{2}(\dfrac{1}{\varepsilon}))$ \end{theorem} \section{\label{sec:level5}Proofs} \subsection{\label{subsec:level6}Proof of Theorem 2} Before going on to universality, let us briefly explain why the set of unitaries generated by B, $\mathcal{U}_B$, contains inverses ($(ii)$ in Theorem 2). Any element $U \in \mathcal{U}_B$ may be written as $U=U_{1}U_{2}U_{1}^{'}$, where $U_{1}, U_{1}^{'} \in \mathcal{U}_{S_{I_1}}$ and $U_{2} \in \mathcal{U}_{S_{I_2}}$. Since $\mathcal{U}_{S_{I_1}}$ and $\mathcal{U}_{S_{I_2}}$ contain unitaries and their inverses, we can always find $U^{\dagger}$=$U_{1}^{ ' \dagger}U_{2}^{\dagger}U_{1}^{\dagger} \in \mathcal{U}_B$. Furthermore, each unitary $U_{\{m\}} \in \mathcal{U}_B$ associated to a specific binary string $\{m\}$ is sampled with a uniform probability of $\dfrac{1}{|\mathcal{U}_B|}$, proving $(iii)$ in Theorem 2.
The remainder of this subsection is devoted to proving universality ($(i)$ in Theorem 2), and we will use the approach outlined in \cite{27} and \cite{34} for doing so. Following \cite{27} and \cite{34}, one can show that the group generated by the set of unitaries \{$A$,$A^{\dagger}$,$C$,$C^{\dagger}$,$E$,$E^{\dagger}$,$F$,$F^{\dagger}$\} is dense (universal) on U(4) if the following conditions are satisfied:\\ \\ $C_{1}$ : $H_{1}$:=$\dfrac{\log(A)}{i}$, $H_{2}$:=$\dfrac{\log(C)}{i}$, $H_{3}$:=$\dfrac{\log(E)}{i}$ and $H_{4}$:=$\dfrac{\log(F)}{i}$ and their commutators form a set of 16 linearly independent Hamiltonians which span the Lie algebra \cite{28} of U(4).\\ \\ $C_{2}$ : $H_{1}$, $H_{2}$, $H_{3}$ and $H_{4}$ have eigenvalues that are irrationally related to $\pi$.\\ We first consider $C_{1}$. We found four distinct unitaries $A$=$U_{\{m\}}$, $C$=$U_{\{m^{'}\}}$, $E$=$U_{\{m^{''}\}}$, and $F$=$U_{\{m^{'''}\}}$, where $U_{\{m\}}$, $U_{\{m^{'}\}}$, $U_{\{m^{''}\}}$ and $U_{\{m^{'''}\}}$ $\in$ $\mathcal{U}_B$ are associated to the binary strings \{$m$\}=\{0,1,1,0,1,1,1,0,0,1,1,0,0,0,0,0,0,1,1,0,1,1,1,0\}, \{$m^{'}$\}=\{0,0,0,1,1,1,1,0,1,0,1,0,1,1,1,1,0,1,1,1,0,1,0,0\}, \{$m^{''}$\}=\{0,0,0,1,1,1,1,0,1,0,1,0,1,1,1,1,0,1,1,1,0,1,0,0\}, and \{$m^{'''}$\}=\{0,1,1,1,0,1,1,0,0,0,0,1,0,0,0,1,0,0,1,0,0,0,1,0\}. We adopt the convention that the first 12 binary numbers appearing in a given binary string represent the measurement results on the qubits of the first row of B from left to right (input towards output), and the last 12 binaries represent the measurements performed on the qubits of the second row of B from left to right.
We then construct 16 Hamiltonians $H_{1},\dots,H_{16}$ as follows:\\ \\ $H_{1}=\dfrac{\log(A)}{i}$\\ $H_{2}=\dfrac{\log(C)}{i}$\\ $H_{3}=\dfrac{\log(E)}{i}$\\ $H_{4}=\dfrac{\log(F)}{i}$\\ $H_{5}=i\,[H_{1},H_{2}]$\\ $H_{6}=i\,[H_{1},H_{3}]$\\ $H_{7}=i\,[H_{1},H_{4}]$\\ $H_{8}=i\,[H_{2},H_{3}]$\\ $H_{9}=i\,[H_{2},H_{4}]$\\ $H_{10}=i\,[H_{2},H_{5}]$\\ $H_{11}=i\,[H_{2},H_{6}]$\\ $H_{12}=i\,[H_{3},H_{4}]$\\ $H_{13}=i\,[H_{3},H_{5}]$\\ $H_{14}=i\,[H_{3},H_{6}]$\\ $H_{15}=i\,[H_{4},H_{5}]$\\ $H_{16}=i\,[H_{4},H_{6}]$\\ After that, we expand each of the 16 Hamiltonians in the basis $P=\{P_{ij}\}$, $i,j=0,\dots,3$, where $P$ is a basis of the Lie algebra of U(4). In other words, we write each $H_{k}=a_{k}^{ij}P_{ij}$ (Einstein summation convention adopted over $i$ and $j$), where the $a_{k}^{ij}$'s are real numbers. Since $P$ is a basis of the Lie algebra of U(4) (over the field of real numbers), proving linear independence of the 16 Hamiltonians $\{H_{k}\}$, $k=1,\dots,16$, in the basis $P$ means that the set $\{H_{k}\}$ is itself a basis of the Lie algebra of U(4). The linear independence of the 16 generators $\{H_k\}$ is equivalent to the non-vanishing of the determinant of a 16 by 16 matrix M, where each of the 16 columns of M is made up of the 16 coefficients $\{a_{k}^{ij}\}$ for a given $k$. We found that the 16 Hamiltonians of our above constructed scheme give rise to a matrix M with non-vanishing determinant \footnote{This was done numerically, but well within numerical precision.}, thus this scheme forms a basis of the Lie algebra of U(4) and $C_{1}$ is verified for a subset of $\mathcal{U}_B$ (and hence for $\mathcal{U}_B$ itself). Proving $C_{2}$ requires the use of a result in algebraic number theory called Lehmer’s theorem \cite{30}. Its context is described in the following Lemma: \newtheorem{lem}{Lemma} \begin{lem} \cite{30} If $n>2$ and $k$ and $n$ are coprime integers, then $2\cos(\dfrac{2k\pi}{n})$ is an algebraic integer.
\end{lem} An algebraic number is a complex number which is a solution of a polynomial equation with integer coefficients. The minimal polynomial of an algebraic number $z$ is the polynomial of lowest degree with integer coefficients for which $z$ is a root. An algebraic integer is an algebraic number whose minimal polynomial is monic (that is, the coefficient of the highest-degree term is 1) \cite{31}. Lehmer's theorem states that for angles $\alpha=\dfrac{2k\pi}{n}$ which are rationally related to $\pi$, $2\cos(\alpha)$ must be an algebraic integer. So, if we can find instances of angles $\alpha$ for which $2\cos(\alpha)$ is not an algebraic integer, then $\alpha$ has to be irrationally related to $\pi$ as a consequence of Lehmer's theorem. Each eigenvalue $\lambda$ of $A$, $C$, $E$ or $F$ is a complex number of unit norm (because these are unitary matrices), so $\lambda=e^{i\theta}$. We calculated the expression $2\cos(\theta)$ and constructed its minimal polynomial. We found that for each of the eigenvalues $\lambda$ of $A$, $C$, $E$ and $F$, $2\cos(\theta)$ does not have a monic minimal polynomial (i.e., it is not an algebraic integer), and thus all the $\theta$'s are irrationally related to $\pi$ by Lehmer's theorem. Further, because $A$, $C$, $E$ and $F$ are diagonal in the same basis as their Hamiltonians $H_{1}$, $H_{2}$, $H_{3}$ and $H_{4}$ \cite{27}, the $\theta$'s we calculated are the eigenvalues of these Hamiltonians. Hence, the eigenvalues of the Hamiltonians are irrationally related to $\pi$, which proves $C_{2}$. Proving $C_{1}$ and $C_{2}$ shows that the subset $\{A,A^{\dagger},C,C^{\dagger},E,E^{\dagger},F,F^{\dagger}\}$ of $\mathcal{U}_B$ is universal on U(4), and thus so is $\mathcal{U}_B$. But $(i)$ further requires universality on SU(4). Fortunately, the moment superoperator of a set sampled from U(4) can always be regarded as one sampled from SU(4).
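The minimal-polynomial test can be illustrated with SymPy. The two numbers below are illustrative examples, not the actual eigenvalue cosines of $A$, $C$, $E$, $F$: $2\cos(2\pi/5)=(\sqrt{5}-1)/2$ is an algebraic integer, consistent with $2\pi/5$ being rationally related to $\pi$, while a value $2\cos\theta=1/2$ has the non-monic minimal polynomial $2x-1$, so such a $\theta$ must be irrationally related to $\pi$:

```python
from sympy import Poly, Rational, Symbol, minimal_polynomial, sqrt

x = Symbol('x')

def is_algebraic_integer(val):
    # An algebraic number is an algebraic integer iff its integer-coefficient
    # minimal polynomial is monic (leading coefficient 1).
    return Poly(minimal_polynomial(val, x), x).LC() == 1

# 2*cos(2*pi/5) = (sqrt(5)-1)/2: minimal polynomial x^2 + x - 1 is monic.
print(is_algebraic_integer((sqrt(5) - 1) / 2))   # True

# 2*cos(theta) = 1/2: minimal polynomial 2x - 1 is not monic, so by
# Lehmer's theorem theta = arccos(1/4) is irrationally related to pi.
print(is_algebraic_integer(Rational(1, 2)))      # False
```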
This can be seen by noting that every $U\in$ U(4) satisfies $\det(U)\neq0$; hence $U^{\otimes t,t}=|\det(U)|^{\frac{t}{2}}\,U^{'\otimes t,t}=U^{'\otimes t,t}$, where $U^{'}\in$ SU(4). \subsection{\label{subsec:level7}Proof of Theorem 3} Our approach to proving Theorem 3 can be summarized in two steps. In the first step, we prove that the ensemble generated by the gadget $G_{n}$ is an ($\eta$,t)-TPE with $\eta$=poly(t)<1. We do so by using Aharonov et al.'s detectability lemma \cite{2}. The second step uses Theorem 1 to establish the bound on $k(n,t,\varepsilon)$. Consider a GLRC n-qubit Hamiltonian \begin{eqnarray} H_{GLRC} &=& \sum_{i}h^{'}_{i,i+1} \nonumber \\ &=&\sum_{i}(1-P^{'}_{i,i+1}) \end{eqnarray} with G = $\mathcal{U}_B$, and \\ $P^{'}_{i,i+1}$=$\dfrac{1}{|\mathcal{U}_B|} \sum_{U\in \mathcal{U}_B}(1^{\otimes i-1}\otimes U\otimes1^{\otimes n-i-1})^{\otimes t,t}$. Define $P_{odd}$=$P^{'}_{1,2}P^{'}_{3,4}\cdots$ and $P_{even}$=$P^{'}_{2,3}P^{'}_{4,5}\cdots$. $P_{odd}$ and $P_{even}$ can be regarded as projectors onto the ``odd'' and ``even'' ground spaces of $H_{GLRC}$. Let $P_{0}$ be the projector onto the entire ground space of $H_{GLRC}$. Further, because $H_{GLRC}$ is constructed from universal sets on U(4), its ground space projector is nothing but the t'th Haar moment superoperator \cite{12}; in other words, $P_{0}$=$\int_{U(2^{n})}U^{\otimes t,t}\mu_{H}(dU)$, with $U\in$ U($2^{n}$) and $\mu_{H}$ the Haar measure on U($2^{n}$).
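The phase cancellation invoked at the start of this subsection, namely that $U^{\otimes t,t}=U^{\otimes t}\otimes\bar{U}^{\otimes t}$ is blind to the global phase of $U$ because the phases of the $t$ plain and $t$ conjugated copies cancel, can be checked numerically (a random unitary below stands in for an element of U(4)):

```python
import numpy as np

rng = np.random.default_rng(1)

def moment_matrix(U, t):
    # Matrix of the superoperator U^{(x)t,t} = U^{(x)t} (x) conj(U)^{(x)t}.
    M = np.eye(1, dtype=complex)
    for _ in range(t):
        M = np.kron(M, U)
    for _ in range(t):
        M = np.kron(M, U.conj())
    return M

z = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(z)          # a random element of U(4)
V = np.exp(0.7j) * U            # the same unitary up to a global phase

t = 2
assert np.allclose(moment_matrix(U, t), moment_matrix(V, t))
```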
The statement of the detectability lemma is the following: \begin{lem} \cite{2} \\ $\mid\mid P_{even}P_{odd} - P_{0} \mid\mid_{\infty}$ $\leq$ $(1+\dfrac{\Delta H_{GLRC}}{2} )^{-\dfrac{1}{3}}$ \end{lem} To relate this to the ensemble generated by the gadget $G_{n}$, we prove the following Lemma: \begin{lem} $M_{t}[\mu_{G_{n}}]=P_{even}P_{odd}$ \end{lem} \textbf{Proof of Lemma 3}:\\ \\ We first note that, because all unitaries are drawn independently, the moment superoperator can be thought of as being composed of two layers: an odd layer (the left part of the gadget of Fig.\ref{fig5}) and an even layer (the right part of the gadget of Fig.\ref{fig5}); this is similar to reasoning found in \cite{3}. Then:\\ $M_{t}[\mu_{G_{n}}]=\left(\dfrac{1}{|\mathcal{U}_B|}\right)^{\delta_{even}}\sum_{U_{23},U_{45},\dots\in \mathcal{U}_B} (U_{23}\otimes U_{45} \otimes \cdots)^{\otimes t,t}\cdot\left(\dfrac{1}{|\mathcal{U}_B|}\right)^{\delta_{odd}}\sum_{U_{12},U_{34},\dots\in \mathcal{U}_B} (U_{12} \otimes U_{34} \otimes \cdots)^{\otimes t,t}$,\\ where $\delta_{odd}=\dfrac{n}{2}$ if $n \bmod 2=0$ and $\dfrac{n-1}{2}$ if $n \bmod 2=1$,\\ and $\delta_{even}=\dfrac{n}{2}-1$ if $n \bmod 2=0$ and $\dfrac{n-1}{2}$ if $n \bmod 2=1$.\\ Since the $U_{i,i+1}$'s are independently drawn from $\mathcal{U}_B$, one can rewrite this as:\\ $M_{t}[\mu_{G_{n}}]=\left(\dfrac{1}{|\mathcal{U}_B|} \sum_{U_{23} \in \mathcal{U}_B}(1 \otimes U_{23} \otimes 1^{\otimes n-3})^{\otimes t,t} \cdot \dfrac{1}{|\mathcal{U}_B|}\sum_{U_{45} \in \mathcal{U}_B}(1^{\otimes 3}\otimes U_{45}\otimes 1^{\otimes n-5})^{\otimes t,t}\cdots\right) \cdot \left(\dfrac{1}{|\mathcal{U}_B|} \sum_{U_{12} \in \mathcal{U}_B}(U_{12} \otimes 1^{\otimes n-2})^{\otimes t,t}\cdot\dfrac{1}{|\mathcal{U}_B|}\sum_{U_{34} \in \mathcal{U}_B}(1^{\otimes 2} \otimes U_{34} \otimes 1^{\otimes n-4})^{\otimes t,t}\cdots\right)$\\ $=(P^{'}_{2,3}P^{'}_{4,5}\cdots)\cdot(P^{'}_{1,2}P^{'}_{3,4}\cdots)$\\ $=P_{even}P_{odd}$.
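The factorization step in the proof above, i.e. that averaging over independently drawn gates acting on disjoint qubits splits into a product of the individually averaged superoperators, can be verified on a toy example with $t=1$ (single-qubit gates and small random sets are illustrative stand-ins for the two-qubit gadget unitaries):

```python
import numpy as np

rng = np.random.default_rng(2)

def rand_u(d):
    # Random unitary via QR of a complex Gaussian matrix.
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, _ = np.linalg.qr(z)
    return q

def mom(W):
    # t = 1 moment superoperator of a 2-qubit unitary W: W (x) conj(W).
    return np.kron(W, W.conj())

I2 = np.eye(2)
S1 = [rand_u(2) for _ in range(3)]   # finite gate set acting on qubit 1
S2 = [rand_u(2) for _ in range(3)]   # finite gate set acting on qubit 2

# Average over independent pairs (U1, U2) applied in parallel ...
lhs = sum(mom(np.kron(U1, U2)) for U1 in S1 for U2 in S2) / (len(S1) * len(S2))

# ... equals the product of the two single-gate averages (the P' operators).
P1 = sum(mom(np.kron(U1, I2)) for U1 in S1) / len(S1)
P2 = sum(mom(np.kron(I2, U2)) for U2 in S2) / len(S2)

assert np.allclose(lhs, P1 @ P2)
```

The identity holds because the two averaged superoperators act on disjoint tensor factors, so the double sum over independent draws factorizes exactly.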
$\Box$ \\ \\ Then, as a direct consequence of the detectability lemma, we obtain: \begin{equation} \begin{aligned} \label{dl} g(t,\mu_{G_{n} }) = \mid\mid M_{t} [\mu_{G_{n} } ] - P_{0} \mid\mid_{\infty}\leq(1+\dfrac{\Delta H_{GLRC}}{2} )^{-\dfrac{1}{3}} \end{aligned} \end{equation} All that remains is to bound the RHS of Equation (\ref{dl}). Using Equation (\ref{eq7}), one directly obtains: \begin{equation} \begin{aligned} \label{eq8} (1+\dfrac{\Delta H_{GLRC}}{2})^{-\dfrac{1}{3}}\leq(1+\dfrac{P_{GLRC}}{2})^{-\dfrac{1}{3}} \end{aligned} \end{equation} Equation (\ref{dl}) together with Equation (\ref{eq8}) directly leads to the following Corollary: \newtheorem{corollary}{Corollary} \begin{corollary} The ensemble $\{\dfrac{1}{|\mathcal{U_{G}}|} , \mathcal{U_{G}}\}$ generated by the gadget $G_{n}$ is an ($\eta$,t)-TPE with:\\ $\eta$=$(1+\dfrac{P_{GLRC}}{2})^{-\dfrac{1}{3}}$= poly(t) < 1. \end{corollary} Plugging Corollary 1 into Theorem 1 with \{ $p_{i}$, $U_{i}$ \}=$\{\dfrac{1}{|\mathcal{U_{G}}|} , \mathcal{U_{G}}\}$, $d$=$2^{n}$ and $\eta$=$(1+\dfrac{P_{GLRC}}{2})^{-\dfrac{1}{3}}$, and then multiplying and dividing the bound on $k$ in Theorem 1 by $\log(2)$, yields Theorem 3. \section{\label{sec:level8}Conclusions and discussion} We have found a simple n-qubit graph gadget which implements an $\varepsilon$-approximate t-design under repeated concatenation with fixed measurements and no feedforward. The number of concatenations $k(n,t,\varepsilon)$=$\Omega(nt+ \log(\dfrac{1}{\varepsilon}))$ required is linear in both the qubit number and the order t of the design. Moreover, because the number of qubits in the graph gadget scales linearly with n, we only require $\Omega(n^{2}t + n\log(\dfrac{1}{\varepsilon}))$ qubits in total to implement the gadget $E_{n}$=$G_{n} \circ G_{n} \circ\cdots$. Furthermore, the choice of the 2-qubit gadget B is not at all unique.
In fact, $G_{n}$ could be made even more practical if simpler 2-qubit gadgets (with fewer qubits, fewer required entangling operations, ...) possessing the properties of B can be found. Our construction is very similar to the brickwork state, which is a universal resource for MBQC \cite{19} - it is basically the brickwork state but with regular holes. In MBQC these holes would simply teleport the inputs through, so that the proofs of universality of \cite{19} easily extend to our graph - that is, the concatenation of the graphs used in $G_n$ is also a universal resource for MBQC. In addition to being pleasing from a practical point of view, this opens the door to applications of techniques for the delegation of ensemble generation, as done for computation \cite{19,barz2013experimental}, and indeed the possibility of hiding whether one is sampling unitaries or performing some deterministic computation.\\ \\ \\ \section{\label{sec:ack}Acknowledgements} We thank Y. Nakata for fruitful discussions and P. Turner for comments. R. Mezher acknowledges funding from the Lebanese PhD grant $CNRS-L$/$UL$. DM acknowledges support from ANR grant COMB.
\section{Introduction} Shape oscillations of levitated metal droplets can be used to measure the surface tension and viscosity of liquid metals (\citealt{Rhim-etal99,Egry-etal05}). Theoretically, the former determines the frequency, while the latter accounts for the damping rate of the oscillations. In reality, experimental measurements may be affected by several side-effects. Firstly, levitated drops may be significantly aspherical and the oscillation amplitudes not necessarily small, whereas the classical theories describing the oscillation frequencies (\citealt{Rayl45}) and damping rates (\citealt{Lamb93,Cha81,Reid60}) assume small-amplitude oscillations about an ideally spherical equilibrium shape. Corrections due to the drop asphericity have been calculated by \citet{CumBlack91} and \citet{SurBay91}. \citet{BraEgr95} find a correction of the same order to the damping rate resulting also from the AC magnetic field. The effect of a moderate amplitude on the oscillations of inviscid drops has been analysed by \citet{TsaBro83}, who find that the oscillation frequency decreases with the square of the amplitude. Using a boundary-integral method, \citet{LunMan91} show that a small viscosity has a relatively large effect on the resonant-mode coupling phenomena in the nonlinear oscillations of large axially symmetric drops in zero gravity. Numerical simulation of large-amplitude axisymmetric oscillations of a viscous liquid drop by \citet{Bas92}, who uses the Galerkin/finite-element technique, shows that a finite viscosity results in a much stronger mode coupling than predicted by the small-viscosity approximation. Secondly, the measurements may be strongly disturbed by the AC-driven flow in the drop. The mode coupling by the internal circulation in an axisymmetrically oscillating drop has been studied numerically by \citet{MasAsh98} using the Galerkin/finite-element technique.
To reduce the strength of the AC field necessary for the levitation and, thus, to minimise the flow, experiments may be conducted under microgravity conditions during parabolic flights or on board a space station (\citealt{Egry-etal99}). A cheaper alternative might be to apply a sufficiently strong DC magnetic field that can not only stabilise the AC-driven flow but also suppress the convective heat and momentum transport responsible for the mode coupling under terrestrial conditions, as originally shown by \citet{SPG03}. Such an approach has been implemented first by \citet{Yas-etal04} on electromagnetically levitated drops of copper and nickel which were subjected to a DC field with induction up to $10\, T.$ The only motion of the $Cu$ drops observed to persist in magnetic fields above $1\, T$ was a solid-body rotation about an axis parallel to the magnetic field. No shape oscillations, usually induced by the AC-driven flow fluctuations, were observed. Note that this implies only the suppression of the AC-driven flow but not of the shape oscillations themselves, which require an external excitation to be observable. \citet{Yas-etal05} study the effect of the suppression of the melt flow on the structure of various alloys obtained by the electromagnetic levitation melting technique in a strong superimposed DC magnetic field. The use of high magnetic fields in various material processing applications is reviewed by \citet{Yas07}. Note that a similar suppression of the AC-driven flow can also be achieved by fast spinning of the drop (\citealt{SPG07}), which may be driven by an electromagnetic spin-up instability (\citealt{PG00,PG06}). The effects of both the drop rotation and the AC-driven flow on the frequency spectrum of shape oscillations have been modelled numerically by \citet{BojPer09}. \citet{Wat09} demonstrates numerically that a large enough oscillation amplitude can compensate for the effect of rotation on the frequency shift.
A novel method of measuring the thermal conductivity of liquid silicon using electromagnetic levitation in a strong superimposed DC magnetic field has been introduced by \citet{Kob-etal07}. Subsequent numerical modelling by \citet{Tsu-etal09} shows that applying a DC magnetic field of $4\, T$ can suppress convection in a molten silicon droplet enough to measure its real thermal conductivity. Later on, this method has been extended to the measurement of the heat capacity of molten austenitic stainless steel (\citealt{Fuk-etal09}) and also that of supercooled liquid silicon (\citealt{Kob-etal10}). In order to determine the surface tension and viscosity or the electrical conductivity, one needs to relate the observed surface oscillations to the relevant thermophysical properties of the liquid. General small-amplitude shape oscillations of a conducting drop in a uniform DC magnetic field have been analysed first by \citet{Gail66}. Although a magnetic field of arbitrary strength is considered, the solution is restricted to inviscid drops. Moreover, only the frequency spectrum and magnetic damping rates are found, but not the associated shape eigenmodes, which may be useful for the experimental identification of the oscillation modes. Energy dissipation by axisymmetric oscillations of a conducting drop in a weak DC magnetic field is considered by \citet{Zamb66}, who finds magnetic damping rates in agreement with the more general results of \citet{Gail66}. Axisymmetric oscillations of an electromagnetically levitated drop of molten $Al$ in a superimposed DC magnetic field are modelled numerically by \citet{BojPer03}. A moderate DC magnetic field is shown to stabilise the AC-driven flow and, thus, to eliminate the associated shape oscillations. A three-dimensional numerical simulation of an oscillating liquid metal drop in a uniform static magnetic field has been carried out by \citet{Tag07}.
The numerical results show that a vertical magnetic field effectively damps the flow, while a horizontal field tends to render the flow two-dimensional. In the present paper, we analyse free oscillations of a viscous electrically conducting drop in a homogeneous DC magnetic field. In contrast to \citet{Gail66}, we assume the viscosity to be small but non-zero and the magnetic field to be strong. This allows us to obtain an asymptotic solution to the eigenvalue problem for general small-amplitude 3D shape oscillations, including the eigenmodes left out by \citet{Gail66}, which are necessary for the subsequent determination of the viscous damping. Firstly, we show that the eigenmodes of the shape oscillations are not affected by a strong magnetic field. Namely, they remain the spherical harmonics as in the non-magnetic case. The magnetic field, however, changes the internal flow associated with the surface oscillations and, thus, the frequency spectrum. As the drop oscillates in a strong magnetic field, the liquid moves as solid columns aligned with the field. Two types of such oscillations are possible: longitudinal and transversal to the magnetic field. The oscillations are weakly damped by a strong magnetic field, except for the axisymmetric transversal modes and the inherently 2D modes. The former are magnetically overdamped because the incompressibility constraint does not permit an axially uniform radial flow. The latter, which are the transversal modes defined by the spherical harmonics with equal degree and order, $l=m$, are not affected at all because these modes are naturally invariant along the field. Because the magnetic damping of all other modes decreases inversely with the square of the field strength, the viscous damping may become important in a sufficiently strong magnetic field. The paper is organised as follows. The problem is formulated in $\S$\ref{sec:prob}.
Section \ref{sec:invisc} presents an inviscid asymptotic solution which yields the shape eigenmodes and frequency spectrum of longitudinal and transversal oscillations. Magnetic damping is found in $\S$\ref{sub:mag-damp} as a next-order asymptotic correction to the frequency. Viscous damping rates are calculated in $\S$\ref{sub:visc-damp} first by the eigenvalue perturbation technique for the longitudinal modes and then by an energy dissipation approach for both of the oscillation modes. The paper is concluded by a summary and discussion of the results in $\S$\ref{sec:conc}. \section{\label{sec:prob}Problem formulation} \begin{figure} \begin{centering} \includegraphics[width=0.33\textwidth]{fig1} \par\end{centering} \caption{\label{fig:Sketch}Sketch of the problem formulation.} \end{figure} Consider a spherical non-magnetic drop of an incompressible liquid with radius $R_{0},$ density $\rho,$ surface tension $\gamma,$ electrical conductivity $\sigma,$ and a small dynamic viscosity $\eta$ performing small-amplitude shape oscillations in a strong uniform DC magnetic field $\vec{B}$ as illustrated in figure \ref{fig:Sketch}. The velocity of the liquid flow $\vec{v}$ and the pressure distribution $p$ are governed by the Navier-Stokes equation with an electromagnetic body force\begin{equation} \rho\partial_{t}\vec{v}=-\vec{\nabla}p+\eta\vec{\nabla}^{2}\vec{v}+\vec{j}\times\vec{B},\label{eq:NS}\end{equation} where the induced current follows from Ohm's law for a moving medium \begin{equation} \vec{j}=\sigma(\vec{E}+\vec{v}\times\vec{B}).\label{eq:Ohm}\end{equation} Owing to the smallness of the oscillation amplitude, the nonlinear term in (\ref{eq:NS}) as well as the induced magnetic field are both negligible. In addition, the characteristic oscillation period $\tau_{0}$ is supposed to be much longer than the magnetic diffusion time $\mu_{0}\sigma R_{0}^{2},$ where $\mu_{0}$ is the permeability of vacuum.
This leads to the quasi-stationary approximation according to which $\vec{\nabla}\times\vec{E}=0$ and $\vec{E}=-\vec{\nabla}\varphi,$ where $\varphi$ is the electric potential. The incompressibility constraint $\vec{\nabla}\cdot\vec{v}=0$ and the charge conservation condition $\vec{\nabla}\cdot\vec{j}=0$ applied to (\ref{eq:NS}) and (\ref{eq:Ohm}) result, respectively, in\begin{eqnarray} \vec{\nabla}^{2}p & = & \sigma(\vec{B}\cdot\vec{\nabla})(\vec{B}\cdot\vec{v}),\label{eq:p}\\ \vec{\nabla}^{2}\varphi & = & \vec{B}\cdot\vec{\nabla}\times\vec{v}.\label{eq:phi}\end{eqnarray} For a uniform $\vec{B}$ under consideration here, applying the operators $\vec{\nabla}\times\vec{\nabla}\times,$ $(\vec{B}\cdot\vec{\nabla})\vec{B}\cdot$ and $(\vec{B}\cdot\vec{\nabla})\vec{B}\cdot\vec{\nabla}\times$ to (\ref{eq:NS}) and taking into account $\vec{\nabla}\times\vec{\nabla}\times(\vec{j}\times\vec{B})=\sigma(\vec{B}\cdot\vec{\nabla})^{2}\vec{v}$ together with (\ref{eq:p}) and (\ref{eq:phi}), we obtain\begin{equation} \left[\rho\vec{\nabla}^{2}\partial_{t}+\sigma(\vec{B}\cdot\vec{\nabla})^{2}-\eta\vec{\nabla}^{4}\right]\{p,\varphi,\vec{v}\}=0.\label{eq:pfv}\end{equation} Although the equation above applies to $p,$ $\varphi$ and $\vec{v}$ separately, these variables are not independent of each other. Firstly, owing to the incompressibility constraint, only two velocity components are mutually independent. Secondly, velocity is related to the pressure and electric potential by (\ref{eq:NS}), which can be used to represent $\vec{v}$ in terms of $p$ and $\varphi$ as done in the following. Boundary conditions are applied at the drop surface $S$ defined by its spherical radius $R=R_{0}+R_{1}(\theta,\phi,t),$ where $R_{1}$ is a small perturbation, which depends on the poloidal and azimuthal angles, $\theta$ and $\phi,$ and the time $t$. 
The radial velocity at the surface is related to the radius perturbation by the kinematic constraint \begin{equation} \left.v_{R}\right|_{S}=\partial_{t}R_{1}.\label{eq:kinc}\end{equation} The normal component of the current at the drop surface, which is assumed to be surrounded by vacuum or insulating gas, vanishes, i.e., $\left.j_{n}\right|_{S}=0.$ In addition, there is no tangential stress at the free surface: \begin{equation} \left.\vec{n}\cdot\partial_{\tau}\vec{v}+\vec{\tau}\cdot\partial_{n}\vec{v}\right|_{S}=0,\label{eq:tblnc}\end{equation} while the normal stress component is balanced by the capillary pressure \begin{equation} p_{0}+p-2\eta\partial_{n}v_{n}=\gamma\vec{\nabla}\cdot\vec{n},\label{eq:nblnc}\end{equation} where $p_{0}=2\gamma/R_{0}$ is the constant part of pressure, $\vec{\tau}$ is a unit tangent vector and $\vec{n}=\vec{\nabla}(R-R_{1})/|\vec{\nabla}(R-R_{1})|$ is the outward surface normal. For small-amplitude oscillations defined by $R_{1}\ll R_{0},$ we have $\vec{n}\approx\vec{e}_{R}-\vec{\nabla}R_{1}.$ Henceforth, we proceed to dimensionless variables by choosing the radius $R_{0}$ and the characteristic capillary pressure $P_{0}=\gamma/R_{0}$ as the length and pressure scales. 
The characteristic period of capillary oscillations is determined by the balance of inertia and pressure, which yields the time scale $\tau_{0}=\sqrt{R_{0}^{3}\rho/\gamma}.$ The velocity and potential scales are chosen as $v_{0}=R_{0}/\tau_{0}$ and $\varphi_{0}=v_{0}BR_{0},$ respectively, where $B=\left|\vec{B}\right|.$ In the dimensionless variables, (\ref{eq:pfv}) takes the form\begin{equation} \left[\vec{\nabla}^{2}\partial_{t}+\mathit{Cm}(\vec{\epsilon}\cdot\vec{\nabla})^{2}-\mathit{Ca}\vec{\nabla}^{4}\right]\{p,\varphi,\vec{v}\}=0,\label{eq:pfv-nd}\end{equation} where $\vec{\epsilon}=\vec{B}/B$ is a unit vector in the direction of the magnetic field and $\mathit{Ca}=\eta/\sqrt{R_{0}\rho\gamma}$ and $\mathit{Cm}=\sigma B^{2}R_{0}^{2}/\sqrt{R_{0}\rho\gamma}$ are the conventional and magnetic capillary numbers, respectively. They are the ratios of the capillary oscillation time $\tau_{0}$ defined above and the viscous and magnetic damping times, which are $\tau_{v}=\rho R_{0}^{2}/\eta$ and $\tau_{m}=\rho/(\sigma B^{2}),$ respectively. In the dimensionless form, the normal stress balance condition (\ref{eq:nblnc}) reads as \begin{equation} \left.(\vec{\nabla}^{2}+2)R_{1}+p-2\mathit{Ca}\partial_{R}v_{R}\right|_{R=1}=0.\label{eq:nblnc-nd}\end{equation} In the following, we assume viscosity to be small but the magnetic field strong so that $\mathit{Ca}\ll1$ and $\mathit{Cm}\gg1,$ which means that the second and third terms in (\ref{eq:pfv-nd}) are much greater and much smaller, respectively, than the first one. Thus, we first focus on the effect of the magnetic field and ignore that of viscosity, which is considered later in $\S$\ref{sub:visc-damp}. 
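To illustrate the regime $\mathit{Ca}\ll1\ll\mathit{Cm}$ assumed here, as well as the quasi-stationarity condition $\tau_{0}\gg\mu_{0}\sigma R_{0}^{2}$, the sketch below evaluates these quantities for rough, order-of-magnitude property values of a liquid copper drop in a $5\,T$ field (the numbers are illustrative assumptions, not data from this paper):

```python
import math

# Assumed order-of-magnitude properties of liquid copper (illustrative only)
rho   = 8.0e3    # density, kg/m^3
gamma = 1.3      # surface tension, N/m
eta   = 4.0e-3   # dynamic viscosity, Pa*s
sigma = 5.0e6    # electrical conductivity, S/m
R0    = 5.0e-3   # drop radius, m
B     = 5.0      # DC field induction, T
mu0   = 4.0e-7 * math.pi  # vacuum permeability, H/m

tau0 = math.sqrt(R0**3 * rho / gamma)                    # capillary time scale
Ca   = eta / math.sqrt(R0 * rho * gamma)                 # viscous capillary number
Cm   = sigma * B**2 * R0**2 / math.sqrt(R0 * rho * gamma)  # magnetic capillary number

print(f"Ca = {Ca:.1e} (<< 1),  Cm = {Cm:.0f} (>> 1)")
print(f"tau0 = {tau0:.3f} s  >>  magnetic diffusion time {mu0 * sigma * R0**2:.1e} s")
```

For these values $\mathit{Ca}\sim5\times10^{-4}$ and $\mathit{Cm}\sim4\times10^{2}$, so both asymptotic assumptions are comfortably satisfied.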
\section{\label{sec:invisc}Inviscid asymptotic solution} Here we ignore viscosity that allows us to formulate the problem in terms of $p,$ $\varphi$ and $R_{1}.$ Projecting the dimensionless counterpart of (\ref{eq:NS}), which takes the form \begin{equation} \mathit{Cm}\vec{v}+\partial_{t}\vec{v}=-\vec{\nabla}p+\mathit{Ca}\vec{\nabla}^{2}\vec{v}+\mathit{Cm}\left[\vec{\epsilon}\times\vec{\nabla}\varphi+\vec{\epsilon}(\vec{\epsilon}\cdot\vec{v})\right],\label{eq:NS-nd}\end{equation} onto $\vec{e}_{R}$ and $\vec{\epsilon},$ and putting $\mathit{Ca}=0,$ we obtain \begin{eqnarray} \mathit{Cm} v_{R}+\partial_{t}v_{R} & = & -\vec{e}_{R}\cdot\vec{\nabla}p+\mathit{Cm}\left[\vec{e}_{R}\times\vec{\epsilon}\cdot\vec{\nabla}\varphi+\vec{e}_{R}\cdot\vec{\epsilon}v_{\shortparallel}\right],\label{eq:NS-R}\\ \partial_{t}v_{\shortparallel} & = & -\vec{\epsilon}\cdot\vec{\nabla}p,\label{eq:NS-||}\end{eqnarray} where $v_{\shortparallel}=\vec{\epsilon}\cdot\vec{v}$ is the velocity component along the magnetic field. 
Differentiating (\ref{eq:NS-R}) with respect to $t$ and substituting $\partial_{t}v_{\shortparallel}$ from (\ref{eq:NS-||}), we represent (\ref{eq:kinc}) in terms of $p$ and $\varphi$\begin{equation} \mathit{Cm}\partial_{t}^{2}R_{1}+\partial_{t}^{3}R_{1}=\left[\mathit{Cm}\left(\vec{e}_{R}\times\vec{\epsilon}\cdot\vec{\nabla}\partial_{t}\varphi-(\vec{e}_{R}\cdot\vec{\epsilon})\vec{\epsilon}\cdot\vec{\nabla}p\right)-\vec{e}_{R}\cdot\vec{\nabla}\partial_{t}p\right]_{R=1}.\label{eq:R1-cnd}\end{equation} Velocity has to be eliminated also from the electric boundary condition given by the radial component of Ohm's law\begin{equation} \left.j_{R}\right|_{R=1}=-\vec{e}_{R}\cdot\left[\vec{\nabla}\varphi+\vec{\epsilon}\times\vec{v}\right]_{R=1}=0.\label{eq:jR}\end{equation} Firstly, applying $(\mathit{Cm}+\partial_{t})$ to (\ref{eq:jR}) and then using (\ref{eq:NS-nd}), we obtain \begin{equation} \left[\mathit{Cm}(\vec{e}_{R}\cdot\vec{\epsilon})\vec{\epsilon}\cdot\vec{\nabla}\varphi-\vec{e}_{R}\times\vec{\epsilon}\cdot\vec{\nabla}p+\vec{e}_{R}\cdot\vec{\nabla}\partial_{t}\varphi\right]_{R=1}=0.\label{eq:phi-cnd}\end{equation} In the inviscid approximation, (\ref{eq:nblnc-nd}) reduces to \begin{equation} \left.p\right|_{R=1}=-(\vec{\nabla}^{2}+2)R_{1}.\label{eq:p-cnd}\end{equation} In the following, besides the spherical coordinates $(R,\theta,\phi),$ we will be using also the cylindrical ones $(r,\phi,z)$ with the axis aligned along the magnetic field so that $\vec{\epsilon}=\vec{e}_{z}.$ Solution is sought in the normal mode form $\{p,\varphi,R_{1}\}=\{\hat{p},\hat{\varphi},\hat{R}\}(\vec{r})e^{\beta t+im\phi},$ where $\hat{p},$ $\hat{\varphi}$ and $\hat{R}$ are axisymmetric amplitude distributions, $m$ is the azimuthal wave number, and $\beta$ is a generally complex temporal variation rate which has to be determined depending on $m,$ $\mathit{Cm}$ and $\mathit{Ca}.$ Then boundary conditions (\ref{eq:R1-cnd}), (\ref{eq:phi-cnd}) and (\ref{eq:p-cnd}) for the oscillation 
amplitudes at $R=1$ take the form \begin{eqnarray} \beta^{2}\hat{R}+im\beta\hat{\varphi}+z\partial_{z}\hat{p} & = & -\mathit{Cm}^{-1}(\beta^{3}\hat{R}+\beta\partial_{R}\hat{p}),\label{eq:Rh-cnd}\\ z\partial_{z}\hat{\varphi} & = & -\mathit{Cm}^{-1}(im\hat{p}+\beta\partial_{R}\hat{\varphi}),\label{eq:phih-cnd}\\ \hat{p} & = & -(L_{z}+2)\hat{R},\label{eq:ph-cnd}\end{eqnarray} where $L_{z}\equiv\frac{d}{dz}\left((1-z^{2})\frac{d\,}{dz}\right)-\frac{m^{2}}{1-z^{2}}$ is the angular part of the Laplace operator in the spherical coordinates for the azimuthal mode $m$ written in terms of $z=\cos\theta.$ Further, it is important to note that\begin{equation} L_{z}P_{l}^{m}(z)=-l(l+1)P_{l}^{m}(z),\label{eq:Legendre}\end{equation} where $P_{l}^{m}(z),$ the associated Legendre function of degree $l$ and order $m,$ is an eigenfunction of $L_{z}$ with eigenvalue $-l(l+1)$ (\citealt{AbSt72}). Equation (\ref{eq:pfv-nd}) for $\hat{p}$ and $\hat{\phi}$ can be written as \begin{equation} \left[\partial_{z}^{2}+\mathit{Cm}^{-1}(L_{r}+\partial_{z}^{2})(\beta-\mathit{Ca}(L_{r}+\partial_{z}^{2}))\right]\{\hat{p},\hat{\varphi}\}=0,\label{eq:phip}\end{equation} where $L_{r}\equiv\partial_{r}^{2}+r^{-1}\partial_{r}-m^{2}/r^{2}$ is the radial part of the Laplace operator in the cylindrical coordinates for the azimuthal mode $m.$ Here we put $\mathit{Ca}=0,$ suppose $\mathit{Cm}\gg1,$ and search for an asymptotic solution in the terms of a small parameter $\mathit{Cm}^{-1}$ as\[ \{\hat{p},\hat{\varphi,}\hat{R},\beta\}\sim\{\hat{p}_{0},\hat{\varphi}_{0},\hat{R}_{0},\beta_{0}\}+\mathit{Cm}^{-1}\{\hat{p}_{1},\hat{\varphi}_{1},\hat{R}_{1},\beta_{1}\}+\cdots.\] Note that although (\ref{eq:phip}) admits solutions with $\beta\sim\mathit{Cm}$ found by \citet{Gail66}, such quickly relaxing modes cannot be related with the surface deformations. 
From the physical point of view, the drop is driven to its equilibrium shape by the surface tension, and the magnetic field can only oppose, but not accelerate, the associated liquid flow. From the mathematical point of view, $\beta\sim\mathit{Cm}\gg1$ applied to (\ref{eq:Rh-cnd}) results in $\hat{R}_{0}=0,$ which means no surface deformation at the leading order, in agreement with the previous physical arguments. Consequently, these fast modes represent internal flow perturbations which are not relevant for the shape deformations under consideration here. \subsection{\label{sub:osc-freq}Oscillation frequencies} At the leading order, (\ref{eq:phip}) reduces to $\partial_{z}^{2}\left\{ \hat{p}_{0},\hat{\varphi}_{0}\right\} =0,$ whose general solution is \begin{equation} \{\hat{p}_{0},\hat{\varphi}_{0}\}(r,z)=\{\hat{p}_{0}^{+},\hat{\varphi}_{0}^{+}\}(r)+z\{\hat{p}_{0}^{-},\hat{\varphi}_{0}^{-}\}(r),\label{eq:p0f0}\end{equation} where the first pair of particular solutions are functions of $r$ only, while the second pair is linear in $z$ but general in $r.$ Owing to the $z$-reflection symmetry of the problem, these two types of particular solutions do not mix and, thus, they are subsequently considered separately. We refer to these solutions in accordance with their $z$-parity as even and odd ones, using the indices $e$ and $o.$ As shown below, the odd and even solutions describe longitudinal and transversal oscillation modes, respectively.
\subsubsection{\label{sub:L-freq}Longitudinal modes} For the odd solutions $\{\hat{p}_{0}^{o},\hat{\varphi}_{0}^{o}\}(r,z)=z\{\hat{p}_{0}^{-},\hat{\varphi}_{0}^{-}\}(r),$ boundary condition (\ref{eq:phih-cnd}), which at the leading order reads as $z\partial_{z}\hat{\varphi}_{0}=0,$ results in $\hat{\varphi}_{0}^{-}(r)=0.$ The two remaining boundary conditions (\ref{eq:Rh-cnd}) and (\ref{eq:ph-cnd}) take the form\begin{eqnarray} {\beta_{0}^{o}}^{2}\hat{R}_{0}^{o} & = & -z\hat{p}_{0}^{-},\label{eq:R0-o}\\ (L_{z}+2)\hat{R}_{0}^{o} & = & -z\hat{p}_{0}^{-}.\label{eq:p0-o}\end{eqnarray} Eliminating the pressure term between the equations above, we obtain an eigenvalue problem in $\beta_{0}^{2}$ for $\hat{R}_{0}^{o}$ \begin{equation} (L_{z}+2-{\beta_{0}^{o}}^{2})\hat{R}_{0}^{o}=0,\label{egv:R0-o}\end{equation} which is easily solved by using (\ref{eq:Legendre}) as\begin{eqnarray} \hat{R}_{0}^{o}(z) & = & R_{0}^{o}P_{l}^{m}(z),\label{sol:R0-o}\\ \beta_{0}^{o} & = & \pm i\sqrt{(l-1)(l+2)},\label{sol:bt0-o}\end{eqnarray} where $R_{0}^{o}$ is a small amplitude of oscillations and $l-m$ is an odd positive number. Note that imaginary $\beta_{0}^{o}$ describes constant-amplitude harmonic oscillations with the circular frequency $|\beta_{0}^{o}|$ which differs from the corresponding non-magnetic result only by the factor of $\sqrt{l}$ (\citealt{Lamb93}), and coincides with the result stated by \citet{Gail66}. Thus, strong magnetic field changes only the eigenfrequencies but not the eigenmodes of shape oscillations which, as without the magnetic field, are represented by separate spherical functions (associated Legendre functions with integer indices) (\citealt{AbSt72}). 
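The frequency shift noted above is easily tabulated: in a strong field the longitudinal eigenfrequency $|\beta_{0}^{o}|=\sqrt{(l-1)(l+2)}$ (in units of $\tau_{0}^{-1}$) is the non-magnetic frequency $\sqrt{l(l-1)(l+2)}$ divided by $\sqrt{l}$. A short numerical check:

```python
import math

def omega_longitudinal(l):
    # |beta_0^o| from (sol:bt0-o): sqrt((l-1)(l+2))
    return math.sqrt((l - 1) * (l + 2))

def omega_free(l):
    # classical non-magnetic frequency sqrt(l(l-1)(l+2)) (Rayleigh/Lamb)
    return math.sqrt(l * (l - 1) * (l + 2))

for l in range(2, 7):
    ratio = omega_free(l) / omega_longitudinal(l)
    assert abs(ratio - math.sqrt(l)) < 1e-12
    print(l, round(omega_longitudinal(l), 3), round(omega_free(l), 3))
```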
Similarly to the non-magnetic case, the frequency spectrum for odd modes is degenerate because it depends only on the degree $l$ but not on the order $m.$ Thus, for each $l,$ there are $[l/2]$ odd modes with different $m.$ Taking into account that $z=\sqrt{1-r^{2}}$ at the surface, the radial pressure distribution is obtained from (\ref{eq:R0-o}) as \begin{equation} \hat{p}_{0}^{-}(r)=-\beta_{0}^{2}\hat{R}_{0}^{o}(\sqrt{1-r^{2}})/\sqrt{1-r^{2}}.\label{sol:p0-m}\end{equation} According to (\ref{eq:NS-||}), this pressure distribution is associated with the axial velocity component \begin{equation} \hat{w}_{0}^{o}(r)=-\beta_{0}^{-1}\hat{p}_{0}^{-}(r),\label{eq:w0-o}\end{equation} while two other velocity components transversal to the magnetic field are absent in the leading-order approximation. Thus, the liquid effectively oscillates in solid columns along the magnetic field as illustrated in figure \ref{fig:odd} for the first four longitudinal oscillation modes defined by the indices $(l,m)=(2,1)$, $(3,0)$, $(3,2),$and $(4,1).$ Since such a flow does not cross the flux lines, the oscillations are not damped by the magnetic field in the leading-order approximation. \begin{figure} \begin{centering} \includegraphics[width=0.25\textwidth]{fig2a}\put(-100,0){(a)}\includegraphics[width=0.25\textwidth]{fig2b}\put(-100,0){(b)}\includegraphics[width=0.25\textwidth]{fig2c}\put(-100,0){(c)}\includegraphics[width=0.25\textwidth]{fig2d}\put(-100,0){(d)}\caption{\label{fig:odd}Shapes and the associated liquid oscillations in the $(x,z)$-plane parallel to the magnetic field for the first four longitudinal oscillation modes with indices ($l,m)=(2,1)$ (a), $(3,0)$ (b), $(3,2)$ (c), and (4,1) (d).} \par\end{centering} \end{figure} \subsubsection{\label{sub:T-freq}Transversal modes} For the even solutions $\{\hat{p}_{0}^{e},\hat{\varphi}_{0}^{e}\}(r,z)=\{\hat{p}_{0}^{+},\hat{\varphi}_{0}^{+}\}(r),$ the leading-order boundary condition (\ref{eq:phih-cnd}) is satisfied automatically. 
The two remaining conditions (\ref{eq:Rh-cnd}) and (\ref{eq:ph-cnd}) then take the form\begin{eqnarray} \beta_{0}^{e}\hat{R}_{0}^{e}+im\hat{\varphi}_{0}^{+} & = & 0,\label{eq:R0-e}\\ (L_{z}+2)\hat{R}_{0}^{e} & = & -\hat{p}_{0}^{+}.\label{eq:p0-e}\end{eqnarray} In contrast to the longitudinal modes considered above, we now have two equations (\ref{eq:R0-e}) and (\ref{eq:p0-e}) for three unknowns. To solve this problem, we need to consider the first-order solution to (\ref{eq:phip}), which now takes the form $\partial_{z}^{2}\{\hat{p}_{1}^{e},\hat{\varphi}_{1}^{e}\}=-\beta_{0}L_{r}\{\hat{p}_{0}^{+},\hat{\varphi}_{0}^{+}\}$ and yields \begin{equation} \{\hat{p}_{1}^{e},\hat{\varphi}_{1}^{e}\}(r,z)=\{\hat{p}_{1}^{+},\hat{\varphi}_{1}^{+}\}(r)-\frac{1}{2}\beta_{0}^{e}z^{2}L_{r}\{\hat{p}_{0}^{+},\hat{\varphi}_{0}^{+}\}.\label{eq:p-phi-1e}\end{equation} Then boundary condition (\ref{eq:phih-cnd}) results in $im\hat{p}_{0}^{+}-\beta_{0}^{e}(z^{2}L_{r}-r\partial_{r})\hat{\varphi}_{0}^{+}=0.$ Combining this with (\ref{eq:R0-e}) and (\ref{eq:p0-e}) and taking into account \begin{equation} \left.z^{2}L_{r}-r\partial_{r}\right|_{R=1}\equiv L_{z}+m^{2},\label{eq:Lr-Lz}\end{equation} we obtain \begin{equation} \left[L_{z}+2+(\beta_{0}^{e}/m)^{2}(L_{z}+m^{2})\right]\hat{R}_{0}^{e}=0.\label{egv:R0-e}\end{equation} Using (\ref{eq:Legendre}), we readily obtain \begin{eqnarray} \hat{R}_{0}^{e}(z) & = & R_{0}^{e}P_{l}^{m}(z),\label{sol:R0-e}\\ \beta_{0}^{e} & = & \pm im\sqrt{\frac{(l-1)(l+2)}{l(l+1)-m^{2}}},\label{sol:bt0-e}\end{eqnarray} where $R_{0}^{e}$ is a small oscillation amplitude and $l-m$ is an even non-negative number. The result above again agrees with the asymptotic solution given by \citet{Gail66}. Similarly to the odd solutions found in the previous section, the even eigenmodes are represented by separate spherical functions, and the oscillations are not damped at the leading order. 
In contrast to the odd modes as well as to the non-magnetic case, the frequency spectrum (\ref{sol:bt0-e}) is no longer degenerate and the frequencies vary with the azimuthal wave number $m$. In particular, there are two important results implied by (\ref{sol:bt0-e}). Firstly, the oscillation frequency for the axisymmetric modes specified by $m=0$ is zero. This means that these modes are over-damped and do not oscillate at all. Secondly, the oscillation frequency for the modes with $m=l$ is exactly the same as without the magnetic field, i.e., $\sqrt{l(l-1)(l+2)}.$ This is because the liquid flow associated with these oscillation modes is inherently invariant along the field and, thus, not affected by the latter (\citealt{Gail66}). The electric potential and pressure distributions follow from (\ref{eq:R0-e}) and (\ref{eq:p0-e}) as \begin{eqnarray} \hat{\varphi}_{0}^{e}(r) & = & im^{-1}\beta_{0}^{e}\hat{R}_{0}^{e}(\sqrt{1-r^{2}}),\label{sol:phi0-p}\\ \hat{p}_{0}^{e}(r) & = & (l-1)(l+2)\hat{R}_{0}^{e}(\sqrt{1-r^{2}}).\label{sol:p0-p}\end{eqnarray} The associated velocity distribution is obtained from (\ref{eq:NS-nd}). Firstly, equation (\ref{eq:NS-||}) implies that the liquid oscillations are purely transversal to the magnetic field. In the leading-order terms, we obtain from (\ref{eq:NS-nd}) \begin{equation} \vec{v}_{0}^{e}(r,\phi)=\vec{e}_{z}\times\vec{\nabla}\varphi_{0}^{e}(r,\phi),\label{sol:v0-e}\end{equation} which shows that the velocity is not only transversal but also invariant along the magnetic field. Thus, the liquid again oscillates as solid columns, but in this case transversely to the field, which has no effect on such a flow. This is because the e.m.f.\ induced by the flow, which is invariant along the magnetic field, is irrotational, i.e., $\vec{\nabla}\times(\vec{v}\times\vec{B})=(\vec{B}\cdot\vec{\nabla})\vec{v}\equiv0,$ and, thus, unable to drive a current circulation in a closed liquid volume. 
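The two special cases just noted can be verified numerically from (\ref{sol:bt0-e}); the short check below is ours and not part of the original text:

```python
import math

def freq_transversal(l, m):
    # |beta_0^e| for transversal modes in a strong field, from (sol:bt0-e)
    return m * math.sqrt((l - 1) * (l + 2) / (l * (l + 1) - m ** 2))

# axisymmetric (m = 0) transversal modes are static at leading order
assert freq_transversal(2, 0) == 0.0 and freq_transversal(3, 0) == 0.0

# m = l modes retain the non-magnetic frequency sqrt(l(l-1)(l+2))
for l in range(2, 10):
    nonmag = math.sqrt(l * (l - 1) * (l + 2))
    assert abs(freq_transversal(l, l) - nonmag) < 1e-12
```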
Note that for the axisymmetric modes $(m=0),$ the potential (\ref{sol:phi0-p}) and the associated velocity (\ref{sol:v0-e}) take an indeterminate form. Namely, for $m=0,$ boundary condition (\ref{eq:R0-e}), which in this case straightforwardly implies a zero frequency, is satisfied by an arbitrary potential distribution independent of the radius perturbation. As seen from (\ref{sol:phi0-p}), a non-zero axisymmetric potential is associated with a purely azimuthal velocity. Consequently, this mode is irrelevant and can subsequently be neglected because it represents an internal flow perturbation which is merely compatible but not coupled with axisymmetric shape deformations, similarly to the fast modes discussed at the end of $\S$\ref{sec:invisc}. Moreover, this is consistent with (\ref{sol:bt0-e}), according to which axisymmetric transversal modes are static in the leading-order approximation, which implies a zero velocity and, consequently, a zero associated potential. Expression (\ref{sol:v0-e}) implies that the velocity streamlines coincide with the isolines of $\varphi_{0},$ which thus represents a stream function for the flow oscillations. Figure \ref{fig:even} shows the shapes and streamlines of the associated liquid flow in the horizontal mid-plane for the first four transversal oscillation modes. Note that the first and the third modes, with the indices $(l,m)=(2,2)$ and $(3,3),$ are both naturally invariant in the direction of the magnetic field and, thus, effectively non-magnetic. The second mode, with $(l,m)=(3,1),$ corresponds to the drop oscillating in such a way that the horizontal cross-sections remain circular in the small-amplitude limit under consideration, while the whole shape deforms because of the vertical offset of the cross-sections. 
\begin{figure} \begin{centering} \includegraphics[width=0.25\textwidth]{fig3a}\put(-110,0){(a)}\includegraphics[width=0.25\textwidth]{fig3b}\put(-110,0){(b)}\includegraphics[width=0.25\textwidth]{fig3c}\put(-110,0){(c)}\includegraphics[width=0.25\textwidth]{fig3d}\put(-110,0){(d)}\caption{\label{fig:even}Shapes and the associated liquid flows in the horizontal mid-plane $(z=0)$ perpendicular to the magnetic field for the first four transversal oscillation modes defined by indices $(l,m)=(2,2)$ (a), $(3,1)$ (b), (3,3) (c) and $(4,2)$ (d).} \par\end{centering} \end{figure} \subsection{\label{sub:mag-damp}Magnetic damping} \subsubsection{Longitudinal modes} In order to determine the magnetic damping rates for longitudinal modes, we have to consider the first-order solution governed by\[ \partial_{z}^{2}\{\hat{p}_{1}^{o},\hat{\varphi}_{1}^{o}\}=-\beta_{0}^{o}zL_{r}\{\hat{p}_{0}^{-},0\},\] which yields\[ \{\hat{p}_{1}^{o},\hat{\varphi}_{1}^{o}\}(r,z)=z\{\hat{p}_{1}^{-},\hat{\varphi}_{1}^{-}\}(r)-\frac{1}{6}\beta_{0}^{o}z^{3}L_{r}\{\hat{p}_{0}^{-},0\}.\] Then (\ref{eq:phih-cnd}) and (\ref{eq:R0-o}) applied consecutively result in $z\hat{\varphi}_{1}^{-}=-imz\hat{p}_{0}^{-},$ which combined with (\ref{eq:Rh-cnd}), (\ref{eq:ph-cnd}) and (\ref{eq:R0-o}) yields \[ (L_{z}+2-{\beta_{0}^{o}}^{2})\hat{R}_{1}^{o}=\frac{1}{3}{\beta_{0}^{o}}^{3}z\left[z^{2}L_{r}-3r\partial_{r}\right]z^{-1}\hat{R}_{0}^{o}+\beta_{0}^{o}(2\beta_{1}^{o}-m^{2}{\beta_{0}^{o}}^{2})\hat{R}_{0}^{o}.\] After some algebra, we obtain $\left.z\left[z^{2}L_{r}-3r\partial_{r}\right]z^{-1}\right|_{R=1}\equiv L_{z}+2+m^{2},$ and, consequently, \begin{equation} (L_{z}+2-{\beta_{0}^{o}}^{2})\hat{R}_{1}^{o}=\frac{1}{3}\beta_{0}^{o}\left[{\beta_{0}^{o}}^{2}(L_{z}-2m^{2}+2)+6\beta_{1}^{o}\right]\hat{R}_{0}^{o}.\label{egv:R1-o}\end{equation} The l.h.s. operator above is the same as that in (\ref{egv:R0-o}) which has $\hat{R}_{0}^{o}$ as its eigensolution with a zero eigenvalue. 
Owing to (\ref{eq:Legendre}) and (\ref{sol:R0-o}), $\hat{R}_{0}^{o}$ is an eigensolution of the r.h.s operator of (\ref{egv:R1-o}), too. Thus, for (\ref{egv:R1-o}) to be solvable, its r.h.s has to be free of the terms proportional to $\hat{R}_{0}^{o},$ which yields \begin{equation} \beta_{1}^{o}=-\frac{1}{6}(l-1)(l+2)((l-1)(l+2)+2m^{2}).\label{eq:bt1-o}\end{equation} Note that, in contrast to the frequency for the longitudinal oscillation modes (\ref{sol:bt0-o}), the magnetic damping rate above is not degenerate and varies with $m.$ \subsubsection{Transversal modes} Similarly to the oscillation frequency considered above, boundary conditions (\ref{eq:ph-cnd}) and (\ref{eq:Rh-cnd}) applied to the first-order solution (\ref{eq:p-phi-1e}) result in \begin{eqnarray} \beta_{0}\hat{R}_{1}^{e}+im\hat{\varphi}_{1}^{+} & = & \left[{\beta_{0}^{e}}^{2}\left(m^{-2}(z^{2}L_{r}-r\partial_{r})^{2}-\frac{1}{2}z^{2}L_{r}-1\right)-\beta_{1}^{e}\right]\hat{R}_{0}^{e},\label{eq:R1-e}\\ (L_{z}+2)\hat{R}_{1}^{e} & = & -\hat{p}_{1}^{e}.\label{eq:p1-e}\end{eqnarray} To solve this first-order problem, we again need a second-order solution governed by\[ \partial_{z}^{2}\{\hat{p}_{2}^{e},\hat{\varphi}_{2}^{e}\}=-\beta_{0}^{e}(L_{r}+\partial_{z}^{2})\{\hat{p}_{1}^{e},\hat{\varphi}_{1}^{e}\}-\beta_{1}^{e}L_{r}\{\hat{p}_{0}^{+},\hat{\varphi}_{0}^{+}\},\] which, by taking into account (\ref{eq:p-phi-1e}), yields \begin{equation} \hat{\varphi}_{2}^{e}(r,z)=\hat{\varphi}_{2}^{+}(r)-\frac{1}{2}\beta_{0}^{e}z^{2}L_{r}\hat{\varphi}_{1}^{+}+{\beta_{0}^{e}}^{2}\left[\frac{z^{2}}{2}L_{r}+\frac{z^{4}}{4!}L_{r}^{2}\right]\hat{\varphi}_{0}^{+}-\frac{1}{2}\beta_{1}^{e}z^{2}L_{r}\hat{\varphi}_{0}^{+}.\label{eq:p-phi-2e}\end{equation} Then (\ref{eq:ph-cnd}) results in\begin{equation} 
im\hat{p}_{1}^{e}-\beta_{0}^{e}(z^{2}L_{r}-r\partial_{r})\hat{\varphi}_{1}^{+}=im^{-1}\left[\beta_{0}^{e}(z^{2}L_{r}-r\partial_{r})-\frac{1}{6}{\beta_{0}^{e}}^{2}z^{2}(z^{2}L_{r}-3r\partial_{r})L_{r}\right]\hat{R}_{0}^{e}.\label{eq:phi2-e}\end{equation} Substituting $\hat{\varphi}_{1}^{+}$ and $\hat{p}_{1}^{e}$ from (\ref{eq:R1-e}) and (\ref{eq:phi2-e}) into (\ref{eq:p1-e}) and using \[ \left.z^{2}(z^{2}L_{r}-3r\partial_{r})L_{r}-3(z^{2}L_{r}-r\partial_{r})z^{2}L_{r}\right|_{R=1}\equiv2m^{2}-2(L_{z}+m^{2})^{2},\] after some algebra we obtain an equation for $\hat{R}_{1}^{e},$ which is the same as (\ref{egv:R0-e}) for $\hat{R}_{0}^{e},$ except for the r.h.s. that now reads as \[ \frac{\beta_{0}^{e}}{3m^{2}}\left[(\beta_{0}^{e}/m)^{2}\left((L_{z}+m^{2})^{2}-m^{2}\right)(3L_{z}+2m^{2})-6\beta_{1}^{e}(L_{z}+m^{2})\right]\hat{R}_{0}^{e}.\] By the same arguments as for (\ref{egv:R1-o}), the solvability condition applied to the expression above results in\begin{equation} \beta_{1}^{e}=-\frac{(l-1)(l+2)(l^{2}-m^{2})((l+1)^{2}-m^{2})(3l(l+1)-2m^{2})}{6(l(l+1)-m^{2})^{2}},\label{eq:bt1-e}\end{equation} which again coincides with the corresponding result of \citet{Gail66}. \subsection{\label{sub:visc-damp}Weak viscous damping} There are three effects due to viscosity in this problem. Firstly, viscosity appears in the normal stress balance condition (\ref{eq:nblnc-nd}) as an $O(\mathit{Ca})$ correction to the inviscid solution obtained above. Secondly, viscosity also appears as a small parameter $\mathit{Ca}$ in (\ref{eq:phip}), which again implies a correction of the same order when the leading-order inviscid solution is substituted into this term. Thirdly, viscosity enters the problem implicitly through the free-slip boundary condition (\ref{eq:tblnc}), which was ignored by the inviscid solution but needs to be satisfied when viscosity is taken into account. 
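The magnetic damping rates (\ref{eq:bt1-o}) and (\ref{eq:bt1-e}) derived above can be evaluated exactly with rational arithmetic; the short check below (ours, not part of the original text) reproduces the values quoted later in the concluding example:

```python
from fractions import Fraction as F

def beta1_long(l, m):
    # longitudinal magnetic damping rate, eq. (eq:bt1-o)
    return -F(1, 6) * (l - 1) * (l + 2) * ((l - 1) * (l + 2) + 2 * m ** 2)

def beta1_trans(l, m):
    # transversal magnetic damping rate, eq. (eq:bt1-e)
    num = ((l - 1) * (l + 2) * (l ** 2 - m ** 2) * ((l + 1) ** 2 - m ** 2)
           * (3 * l * (l + 1) - 2 * m ** 2))
    return -F(num, 6 * (l * (l + 1) - m ** 2) ** 2)

assert beta1_long(2, 1) == -4              # fundamental longitudinal mode (2,1)
assert beta1_trans(3, 1) == F(-6800, 121)  # first oscillatory transversal mode (3,1)
# 2D (m = l) transversal modes are not damped magnetically at all
assert all(beta1_trans(l, l) == 0 for l in range(2, 10))
```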
To satisfy the free-slip condition (\ref{eq:tblnc}), the leading-order solution needs to be corrected by the viscous term in (\ref{eq:phip}), where $\mathit{Ca}$ appears as a small parameter at the highest-order derivative. For this small viscous term to become comparable with the dominating magnetic term at the surface, the expected correction has to vary over the characteristic length scale $\delta\sim\sqrt{\mathit{Ca}/\mathit{Cm}}=\mathit{Ha}^{-1},$ which is defined by the Hartmann number $\mathit{Ha}=B_{0}R_{0}\sqrt{\sigma/(\rho\nu)}$. Moreover, for the viscous correction of the tangential velocity $\tilde{v}_{\tau}$ in the Hartmann layer to compensate for an $O(1)$ tangential stress due to the leading-order inviscid solution, $\tilde{v}_{\tau}\sim\mathit{Ha}^{-1}$ is required. Then the incompressibility constraint implies an associated normal velocity component a factor of $\delta$ smaller than $\tilde{v}_{\tau},$ i.e., $\tilde{v}_{n}\sim\mathit{Ha}^{-2}.$ This normal velocity correction is therefore negligible. But this is not the case for the tangential velocity correction $\tilde{v}_{\tau},$ which according to (\ref{eq:p}) is expected to produce a pressure correction $\tilde{p}\sim\mathit{Cm}/\mathit{Ha}^{2}\sim\mathit{Ca}.$ The latter is comparable with the normal viscous stress produced by the leading-order inviscid flow. 
Taking into account the estimates above and $\tilde{\varphi}\sim\delta\tilde{v}_{\phi}\sim\mathit{Ha}^{-2},$ which follows from (\ref{eq:phi}), we search for a viscous correction as \begin{eqnarray*} \{\hat{p},\hat{\varphi},\hat{\vec{v}}\} & \sim & \{\hat{p}_{0},\hat{\varphi}_{0},\hat{\vec{v}}_{0}\}+\mathit{Ca}\{\hat{p}_{01},\hat{\varphi}_{01},\hat{\vec{v}}_{01}\}+\{\mathit{Ca}\tilde{p},\mathit{Ha}^{-2}\tilde{\varphi},\mathit{Ha}^{-1}\tilde{\vec{v}}\}+\cdots,\\ \{\hat{R},\beta\} & \sim & \{\hat{R}_{0},\beta_{0}\}+\mathit{Ca}\{\hat{R}_{01},\beta_{01}\}+\cdots,\end{eqnarray*} where the terms with the tilde account for a Hartmann layer solution localised at the surface. \subsection{Eigenvalue perturbation for longitudinal modes} We start with the core region, where the additive boundary layer corrections are supposed to vanish. The first-order viscous corrections for the pressure and potential $\{\hat{p}_{01}^{o},\hat{\varphi}_{01}^{o}\}(r,z)=z\{\hat{p}_{01}^{-},\hat{\varphi}_{01}^{-}\}(r)$ are obtained similarly to the leading-order inviscid solution (\ref{eq:p0f0}). 
Now, instead of the kinematic and electric boundary conditions (\ref{eq:R1-cnd}) and (\ref{eq:phi-cnd}) derived in the inviscid approximation, we have to use the original ones (\ref{eq:kinc}) and (\ref{eq:jR}) containing the velocity, which again follows from the Navier-Stokes equation (\ref{eq:NS-nd}) including the viscous term $\sim\mathit{Ca}.$ For the longitudinal modes, described by the odd solutions, (\ref{eq:NS-nd}) yields \begin{eqnarray} \beta_{0}^{o}\hat{w}_{01}^{o}+\beta_{01}^{o}\hat{w}_{0}^{o} & = & -\hat{p}_{01}^{-}+L_{r}\hat{w}_{0}^{o},\label{eq:w01-o}\\ \hat{\vec{u}}_{01}^{o} & = & \vec{e}_{z}\times\vec{D}\hat{\varphi}_{01}^{o},\label{eq:u01-o}\end{eqnarray} where $\hat{w}$ and $\hat{\vec{u}}$ are the velocity components parallel and perpendicular, respectively, to the field direction $\vec{e}_{z},$ and $\vec{D}\equiv e^{-im\phi}\vec{\nabla}e^{im\phi}$ is a spectral counterpart of the nabla operator for the azimuthal mode $m.$ Since, as shown above, both the potential and velocity perturbations in the Hartmann layer are higher-order small quantities and, thus, negligible with respect to the core perturbation, the electric boundary condition (\ref{eq:jR}) can be applied at $R=1$ directly to the first-order core solution as $\partial_{R}\hat{\varphi}_{01}^{o}=-\vec{e}_{R}\cdot\vec{e}_{z}\times\hat{\vec{u}}_{01}^{o}.$ Taking into account (\ref{eq:u01-o}), this yields $\hat{\varphi}_{01}^{-}\equiv0$ and, hence, $\hat{\vec{u}}_{01}^{o}\equiv0.$ Consequently, the first-order velocity perturbation in the core for the odd modes is again purely longitudinal. 
Then, the kinematic constraint (\ref{eq:kinc}) for the leading- and first-order terms takes, respectively, the form\begin{eqnarray*} z\hat{w}_{0}^{o} & = & \beta_{0}^{o}\hat{R}_{0}^{o},\\ z\hat{w}_{01}^{o} & = & \beta_{0}^{o}\hat{R}_{01}^{o}+\beta_{01}^{o}\hat{R}_{0}^{o}.\end{eqnarray*} These expressions combined with (\ref{eq:w01-o}) result in\begin{equation} \beta_{0}^{o}(\beta_{0}^{o}\hat{R}_{01}^{o}+2\beta_{01}^{o}\hat{R}_{0}^{o}-zL_{r}z^{-1}\hat{R}_{0}^{o})=-\hat{p}_{01}^{o},\label{eq:p01-o}\end{equation} which defines the first-order core pressure perturbation at $R=1.$ In addition, we also need the Hartmann-layer pressure correction, which according to the estimates above is of the same order of magnitude as the core one. To resolve the Hartmann layer, we introduce a stretched coordinate $\tilde{R}=(1-R)/\delta$ (\citealt{Hinch91}), where $\delta=\mathit{Ha}^{-1}$ is the characteristic Hartmann layer thickness. In the Hartmann layer variables, (\ref{eq:phip}) takes the form\begin{equation} (\mathit{Cm}^{-1}\beta_{0}+z^{2}-\partial_{\tilde{R}}^{2})\partial_{\tilde{R}}^{2}\{\tilde{p},\tilde{\varphi},\tilde{\vec{v}}\}=0.\label{eq:pfv-bnd}\end{equation} For $\mathit{Cm}\gg1$, the inertial term $\sim\mathit{Cm}^{-1}$ is negligible in (\ref{eq:pfv-bnd}) with respect to the magnetic one $\sim z^{2},$ except for $|z|\lesssim\mathit{Cm}^{-1/2}.$ First, ignoring this term, which, as shown below, gives a next-order small correction, the solution of (\ref{eq:pfv-bnd}) vanishing outside the Hartmann layer can be written as \begin{equation} \{\tilde{p},\tilde{\varphi},\tilde{\vec{v}}\}=\{\tilde{p}^{s},\tilde{\varphi}^{s},\tilde{\vec{v}}^{s}\}(z)e^{-|z|\tilde{R}},\label{eq:pfv-s}\end{equation} where the index $s$ denotes the surface distribution of the corresponding quantity. 
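It is straightforward to confirm that the ansatz (\ref{eq:pfv-s}) annihilates the boundary-layer operator once the inertial term is dropped, since $\partial_{\tilde{R}}^{2}e^{-|z|\tilde{R}}=z^{2}e^{-|z|\tilde{R}}$; a finite-difference check of this identity (ours, purely illustrative):

```python
import math

def d2(f, x, h=1e-4):
    # central finite-difference approximation of the second derivative
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h ** 2

for z in (0.3, 0.7, 1.0):
    f = lambda R: math.exp(-abs(z) * R)  # boundary-layer profile exp(-|z| R~)
    for R in (0.5, 1.0, 2.0):
        # d^2 f/dR~^2 = z^2 f, hence (z^2 - d^2/dR~^2) d^2/dR~^2 f = 0
        assert abs(d2(f, R) - z ** 2 * f(R)) < 1e-5
```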
Then the free-slip boundary condition (\ref{eq:tblnc}) results in\begin{eqnarray} \tilde{v}_{\phi}^{s} & = & -|z|^{-1}(imr^{-1}\hat{v}_{0,R}+\partial_{R}(\hat{v}_{0,\phi}/R)),\label{eq:vs-phi}\\ \tilde{v}_{\theta}^{s} & = & -|z|^{-1}(\partial_{\theta}\hat{v}_{0,R}+\partial_{R}(\hat{v}_{0,\theta}/R)).\label{eq:vs-theta}\end{eqnarray} For the longitudinal modes, defined by the odd solutions, the leading-order inviscid velocity is purely axial \begin{equation} \hat{\vec{v}}_{0}^{o}=\vec{e}_{z}\hat{w}_{0}(r)=-\vec{e}_{z}{\beta_{0}^{o}}^{-1}\hat{p}_{0}^{-}(r).\label{eq:v0-o}\end{equation} Substituting this into (\ref{eq:vs-theta}) and taking into account that the radial pressure distribution at the surface is related to the radius perturbation by (\ref{eq:R0-o}), we obtain\begin{equation} \tilde{v}_{\theta}^{s}=\beta_{0}\frac{r(z^{2}-r^{2})}{z|z|}\frac{d}{dz}\frac{\hat{R}_{0}^{o}}{z}.\label{eq:vs-theta-o}\end{equation} Pressure is related to the velocity by (\ref{eq:p}), which in the dimensionless form reads as $\vec{\nabla}^{2}p=\mathit{Cm}\partial_{z}v_{z}.$ In the Hartmann layer variables, this equation takes the form \begin{equation} \partial_{\tilde{R}}^{2}\tilde{p}=rz\partial_{\tilde{R}}\tilde{v}_{\theta}.\label{eq:p-bnd}\end{equation} Substituting the general solutions for pressure and velocity given by (\ref{eq:pfv-s}) into (\ref{eq:p-bnd}) and using (\ref{eq:vs-theta-o}), we find\begin{equation} \tilde{p}^{s}=-rz|z|^{-1}\tilde{v}_{\theta}^{s}.\label{eq:ps-o}\end{equation} Substituting the normal component of viscous stress \[ -2\partial_{R}\hat{v}_{0,R}^{o}=2\beta_{0}^{o}r^{2}\frac{d}{dz}\frac{\hat{R}_{0}^{o}}{z}\] together with the core and boundary layer pressure contributions defined by (\ref{eq:p01-o}) and (\ref{eq:ps-o}) into the normal stress balance condition (\ref{eq:nblnc-nd}), we finally obtain\begin{equation} 
(L_{z}+2-{\beta_{0}^{o}}^{2})\hat{R}_{01}^{o}=\beta_{0}^{o}\left[2\beta_{01}^{o}\hat{R}_{0}^{o}-z^{-2}(L_{z}+m^{2}+2)\hat{R}_{0}^{o}-2(1-z^{-2})\frac{d}{dz}\frac{\hat{R}_{0}^{o}}{z}\right].\label{eq:R01-o}\end{equation} The sought viscous damping rate is obtained in the usual way by applying the solvability condition to (\ref{eq:R01-o}), which after some algebra results in\begin{eqnarray} \beta_{01}^{o} & = & -(2l+1)\frac{(l-m)!}{(l+m)!}\int_{0}^{1}\left[\frac{1}{2}(l(l+1)-m^{2}-2)\frac{P_{l}^{m}(z)}{z}\right.\label{sol:bet01-o}\\ & & \left.-(z-z^{-1})\frac{d}{dz}\frac{P_{l}^{m}(z)}{z}\right]\frac{P_{l}^{m}(z)}{z}\, dz=-(2l+1)\left[\frac{1}{2}(l(l+1)-m^{2})-1-I_{l}^{m}\right],\nonumber \end{eqnarray} where \begin{eqnarray} I_{l}^{m} & = & \frac{(l-m)!}{(l+m)!}\int_{0}^{1}\frac{P_{l}^{m}(z)}{z}(z-z^{-1})\frac{d}{dz}\frac{P_{l}^{m}(z)}{z}\, dz\nonumber \\ & = & \frac{((l-1)^{2}-m^{2})I_{l-2}^{m}+(2l-1)(l(l-1)-m^{2})}{l^{2}-m^{2}}\label{eq:I-lm}\end{eqnarray} can be calculated from the above recurrence relation starting with $l=m+1$ and taking into account that $I_{l}^{m}=0$ for $l<m.$ For the modes with $m=l-1$, we have $\beta_{01}^{o}=-\frac{1}{2}(2l+1)(l-1),$ which is half of the corresponding viscous damping rate without the magnetic field (\citealt{Lamb93}). Although the viscous damping rate increases for smaller $m,$ as seen from the numerical values of $-\beta_{01}^{o}$ for the longitudinal oscillation modes with $l\leq7$ calculated with Mathematica (\citealt{Wolf96}) and shown in table \ref{tab:bet01-o}, it remains below its non-magnetic counterpart up to the $l=5$ modes. Note that the r.h.s of (\ref{eq:R01-o}) has a simple pole $(z^{-1})$ singularity at $z=0,$ which is due to the neglected inertial term in (\ref{eq:pfv-bnd}). 
As discussed above, this term becomes relevant for $|z|\lesssim\mathit{Cm}^{-1/2},$ where it cuts off the singularity at $z^{-1}\sim\mathit{Cm}^{1/2}.$ This cut-off integrated in (\ref{sol:bet01-o}) over $|z|\lesssim\mathit{Cm}^{-1/2},$ where $P_{l}^{m}(z)\sim z$ for the odd modes, results in a damping rate correction $O(\mathit{Cm}^{-1/2}),$ which is a higher-order small quantity. \begin{center} \begin{table} \begin{centering} \begin{tabular}{cccccccc} & $m=0$ & 1 & 2 & 3 & 4 & 5 & 6\tabularnewline \hline $l=2$ & & $\frac{5}{2}$ & & & & & \tabularnewline 3 & $\frac{35}{3}$ & & $7$ & & & & \tabularnewline 4 & & $\frac{51}{2}$ & & $\frac{27}{2}$ & & & \tabularnewline 5 & $\frac{154}{3}$ & & $44$ & & $22$ & & \tabularnewline 6 & & $\frac{169}{2}$ & & $\frac{403}{6}$ & & $\frac{65}{2}$ & \tabularnewline 7 & $135$ & & $125$ & & $95$ & & $45$\tabularnewline \end{tabular} \par\end{centering} \caption{\label{tab:bet01-o}The viscous damping rates $-\beta_{01}^{o}$ for the longitudinal oscillation modes with $l\leq7.$} \end{table} \par\end{center} \subsection{Viscous energy dissipation} The viscous damping rate can also be found in an alternative, much simpler way by considering the energy balance following from the dot product of (\ref{eq:NS-nd}) and $\vec{v},$ which integrated over the drop volume yields \begin{equation} \frac{1}{2}\partial_{t}\int_{V}\vec{v}^{2}\, dV+\int_{S}(\vec{\nabla}\cdot\vec{n})\vec{v}\cdot d\vec{s}=-\int_{V}(2\mathit{Ca}\breve{\varepsilon}^{2}+\mathit{Cm}\vec{j}^{2})\, dV,\label{eq:nrg}\end{equation} where the first and second terms on the l.h.s. stand for the time variation of the kinetic and surface energies, while the terms on the r.h.s. with the rate-of-strain tensor $(\breve{\varepsilon})_{i,j}=\frac{1}{2}(v_{i,j}+v_{j,i})$ and the dimensionless current density $\vec{j}=-\vec{\nabla}\varphi+\vec{v}\times\vec{e}_{z}$ account for the viscous and ohmic dissipations, respectively. 
As estimated above, viscosity gives rise to the tangential current density $\sim\mathit{Ha}^{-1}$ in the Hartmann layer of thickness $\sim\mathit{Ha}^{-1},$ which according to (\ref{eq:nrg}) produces an ohmic dissipation $\sim\mathit{Cm}/\mathit{Ha}^{3}\sim\mathit{Ca}/\mathit{Ha}$ that for $\mathit{Ha}\gg1$ is negligible with respect to the viscous dissipation $\sim\mathit{Ca}.$ Note that although the contribution of the Hartmann layer to the normal stress balance is important, its contribution to the energy dissipation is still negligible. This fact results in a substantial simplification of the solution procedure for the viscous damping rate. Thus, neglecting the ohmic dissipation, averaging the rest of (\ref{eq:nrg}) over the period of oscillation, and taking into account that the mean kinetic and surface energies of small-amplitude harmonic oscillations are equal, we obtain a simple expression for the viscous damping rate in terms of the leading-order inviscid solution (\citealt{Landau87}) \begin{equation} \beta_{01}=-\int_{V}|\breve{\hat{\varepsilon}}_{0}|^{2}\, dV/\int_{V}|\vec{\hat{v}}_{0}|^{2}\, dV.\label{eq:bet01}\end{equation} For the longitudinal modes, this equation takes the form \begin{equation} \beta_{01}^{o}=-\frac{\int_{0}^{1}[(rz^{-1}\partial_{z}\hat{w}_{0}^{o})^{2}+(m\hat{w}_{0}^{o}/r)^{2}]z^{2}\, dz}{2\int_{0}^{1}\left.\hat{w}_{0}^{o}\right.^{2}z^{2}\, dz}.\label{eq:bt01-o}\end{equation} Substituting $\hat{w}_{0}^{o}(z)=\beta_{0}^{o}R_{0}^{o}P_{l}^{m}(z)/z$ from (\ref{eq:w0-o}) into (\ref{eq:bt01-o}), after some algebra the latter can be shown to be equivalent to (\ref{sol:bet01-o}). This approach is particularly useful for the transversal modes, for which the conventional eigenvalue perturbation solution becomes excessively complicated and is, thus, omitted here. 
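As a numerical spot check of (\ref{eq:bet01}) (ours, with the strain norm taken as $|\breve{\varepsilon}|^{2}=\sum_{i,j}|\breve{\varepsilon}_{ij}|^{2}$), consider the fundamental longitudinal mode $(l,m)=(2,1)$, for which $\hat{w}_{0}^{o}\propto P_{2}^{1}(z)/z\propto\sqrt{1-z^{2}}=r$; the only nonzero strain components of the columnar flow $w(r)\vec{e}_{z}$ are $\breve{\varepsilon}_{rz}=w'/2$ and $\breve{\varepsilon}_{\phi z}=imw/(2r)$, and the reduced volume element is $\propto z^{2}dz$:

```python
# Midpoint-rule evaluation of (eq:bet01) for (l,m) = (2,1), where w = r up to a constant.
m = 1
N = 100_000
h = 1.0 / N
num = den = 0.0
for k in range(N):
    z = (k + 0.5) * h
    r = (1.0 - z * z) ** 0.5
    w, dw = r, 1.0                                        # w(r) = r, w'(r) = 1
    eps2 = 2 * (dw / 2) ** 2 + 2 * (m * w / (2 * r)) ** 2  # sum of |eps_ij|^2
    num += eps2 * z * z * h                               # dissipation integral
    den += w * w * z * z * h                              # kinetic-energy integral
beta01 = -num / den
# reproduces -beta_01^o = 5/2, the (2,1) entry of table tab:bet01-o
assert abs(beta01 + 2.5) < 1e-3
```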
In this case, using (\ref{sol:v0-e}) we can represent (\ref{eq:bet01}) in terms of the scalar potential \begin{equation} \beta_{01}^{e}=-\frac{\int_{0}^{1}\left[\left(r\partial_{r}(r^{-1}\partial_{r}\hat{\varphi}_{0}^{e})+m^{2}\hat{\varphi}_{0}^{e}/r^{2}\right)^{2}+\left(2m\partial_{r}(\hat{\varphi}_{0}^{e}/r)\right)^{2}\right]z^{2}\, dz}{2\int_{0}^{1}\left[(\partial_{r}\hat{\varphi}_{0}^{e})^{2}+(m\hat{\varphi}_{0}^{e}/r)^{2}\right]z^{2}\, dz}.\label{eq:bt01-e}\end{equation} Substituting $\hat{\varphi}_{0}^{e}(z)=im^{-1}\beta_{0}^{e}R_{0}^{e}P_{l}^{m}(z)$ from (\ref{sol:phi0-p}) into (\ref{eq:bt01-e}), after lengthy algebra we obtain\begin{equation} \beta_{01}^{e}=-(2l+1)\frac{l(l+1)(l-2)-m^{2}(l-3)+(l^{2}-m^{2})I_{l-1}^{m}}{2(l(l+1)-m^{2})},\label{sol:bt01-e}\end{equation} where $I_{l-1}^{m}$ is defined by (\ref{eq:I-lm}). Note that for the 2D modes, defined by $m=l,$ which are not affected by the magnetic field, we recover the well-known non-magnetic result $\beta_{01}^{e}=-(2l+1)(l-1)$ (\citealt{Lamb93}). For other indices, (\ref{sol:bt01-e}) can be verified by a direct integration of (\ref{eq:bt01-e}) using Mathematica (\citealt{Wolf96}). As seen from the numerical values shown in table \ref{tab:bet01-e}, the next even mode, with $m=l-2,$ has a viscous damping rate lower than its non-magnetic counterpart by the factor $(l-2)/(l-4/5)$. Only for the modes with $m\leq l-4$ does the viscous damping rate in the magnetic field become higher than that without the field. 
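Both closed-form damping rates can be checked against the tables with exact rational arithmetic; the short script below (ours, not part of the original text) implements the recurrence (\ref{eq:I-lm}) together with the rates (\ref{sol:bet01-o}) and (\ref{sol:bt01-e}):

```python
from fractions import Fraction as F

def I(l, m):
    # I_l^m from the recurrence (eq:I-lm), with I_l^m = 0 for l < m
    if l < m:
        return F(0)
    return ((((l - 1) ** 2 - m ** 2) * I(l - 2, m)
             + (2 * l - 1) * (l * (l - 1) - m ** 2)) / F(l ** 2 - m ** 2))

def b01_long(l, m):
    # longitudinal viscous damping rate (sol:bet01-o)
    return -(2 * l + 1) * (F(l * (l + 1) - m ** 2, 2) - 1 - I(l, m))

def b01_trans(l, m):
    # transversal viscous damping rate (sol:bt01-e)
    num = l * (l + 1) * (l - 2) - m ** 2 * (l - 3) + (l ** 2 - m ** 2) * I(l - 1, m)
    return -(2 * l + 1) * num / F(2 * (l * (l + 1) - m ** 2))

# spot checks against tables tab:bet01-o and tab:bet01-e
assert b01_long(2, 1) == F(-5, 2) and b01_long(3, 0) == F(-35, 3)
assert b01_long(6, 3) == F(-403, 6) and b01_long(7, 6) == -45
assert b01_trans(3, 1) == F(-70, 11) and b01_trans(6, 2) == F(-1339, 19)
assert b01_trans(7, 5) == F(-2250, 31)
# m = l transversal modes recover the non-magnetic rate -(2l+1)(l-1)
assert all(b01_trans(l, l) == -(2 * l + 1) * (l - 1) for l in range(2, 8))
```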
\begin{center} \begin{table} \begin{centering} \begin{tabular}{cccccccc} & $m=1$ & 2 & 3 & 4 & 5 & 6 & 7\tabularnewline \hline $l=2$ & & $5$ & & & & & \tabularnewline 3 & $\frac{70}{11}$ & & $14$ & & & & \tabularnewline 4 & & $\frac{135}{8}$ & & $27$ & & & \tabularnewline 5 & $\frac{1232}{29}$ & & $\frac{220}{7}$ & & $44$ & & \tabularnewline 6 & & $\frac{1339}{19}$ & & $50$ & & $65$ & \tabularnewline 7 & $\frac{1350}{11}$ & & $\frac{4930}{47}$ & & $\frac{2250}{31}$ & & 90\tabularnewline \end{tabular} \par\end{centering} \caption{\label{tab:bet01-e}The viscous damping rates $-\beta_{01}^{e}$ for the first 6 transversal oscillation modes.} \end{table} \par\end{center} The approach above is not directly applicable to the axisymmetric transversal modes which, as discussed at the end of $\S$\ref{sub:T-freq}, are stationary in the leading-order inviscid approximation. For these overdamped modes, a flow with the velocity $\sim1/\mathit{Cm}$ relative to the leading-order radius perturbation appears only in the first-order approximation, which according to (\ref{eq:nrg}) produces the same order ohmic dissipation. In this case, dissipation takes place on the account of the surface energy reduction, while that of the kinetic energy is negligible because it is by $\sim1/\mathit{Cm}^{2}$ smaller than the former. The contribution of the viscous dissipation in (\ref{eq:nrg}) is $\sim\mathit{Ca}/\mathit{Cm}^{2},$ which for a low viscosity and a high magnetic field is much smaller than the ohmic dissipation $\sim1/\mathit{Cm},$ and, thus negligible with respect to the latter. \section{\label{sec:conc}Conclusion} In the present study, we have considered small-amplitude oscillations of a conducting liquid drop in a uniform DC magnetic field. Viscosity was assumed to be small but the magnetic field strong. Combining the regular and matched asymptotic expansion techniques we obtained a relatively simple solution to the associated eigenvalue problem. 
Firstly, we showed that the eigenmodes of shape oscillations are not affected by a strong magnetic field. Namely, they remain the spherical harmonics, as in the non-magnetic case. A strong magnetic field, however, constrains the liquid flow associated with the oscillations and, thus, reduces the oscillation frequency by increasing the apparent inertia of the liquid. In such a field, the liquid oscillates in a two-dimensional (2D) way, as solid columns aligned with the field. Two types of oscillations are possible: longitudinal and transversal to the field. Such oscillations are weakly damped by a strong magnetic field -- the stronger the field, the weaker the damping, except for the axisymmetric transversal and 2D modes. The former are magnetically overdamped because the incompressibility constraint does not permit an axially uniform radial flow. The latter, which are transversal modes defined by the spherical harmonics with equal degree and order, $l=m$, are not affected by the magnetic field because these modes are naturally invariant along the field. In a uniform magnetic field, no electric current is induced and, thus, no electromagnetic force acts on such a 2D transversal flow because the associated e.m.f.\ is irrotational. Because the magnetic damping for all other modes decreases inversely with the square of the field strength, the viscous damping may become important in a sufficiently strong magnetic field. Consequently, the relaxation of the axisymmetric transversal modes, whose viscous damping is negligible relative to the magnetic one, can be used to determine the electrical conductivity, while the damping of the $l=m$ modes can be used to determine the viscosity. The damping of all other modes is affected by both the viscous and ohmic dissipations. Although the latter reduces inversely with the square of the field strength while the former stays constant, an extremely strong magnetic field may be required for the viscous dissipation to become dominant. 
As an example, let us consider a drop of Nickel of $1\, cm$ in diameter $(R_{0}=5\times10^{-3}\, m)$ which, at the melting point $(1455^{\circ}C),$ has the surface tension $\gamma=1.8\, N/m,$ the density $\rho=7.9\times10^{3}\, kg/m^{3},$ the dynamic viscosity $\eta=4.9\times10^{-3}\, Ns/m^{2}$ and the electrical conductivity $\sigma=1.2\times10^{6}\, S/m$ (\citealt{Smithells}). The capillary time scale and the frequency of the non-magnetic fundamental mode $(l=2)$ for such a drop are $\tau_{0}=\sqrt{R_{0}^{3}\rho/\gamma}\approx23\, ms$ and $f=\sqrt{l(l-1)(l+2)}/(2\pi\tau_{0})\approx19\, Hz,$ respectively. The viscous damping time without the magnetic field (\citealt{Lamb93}) is $\tau_{v}/((2l+1)(l-1))\approx8\, s$, where $\tau_{v}=\rho R_{0}^{2}/\eta\approx40\, s$ is the viscous time scale. Note that the weak-viscosity approximation is applicable in this case because $\mathit{Ca}=\tau_{0}/\tau_{v}=5.8\times10^{-4}$ is small. In the magnetic field of $B=5\, T,$ for which $\mathit{Cm}=\sigma B^{2}R_{0}^{2}/\sqrt{\rho\gamma R_{0}}\approx87\gg1,$ the oscillation frequency of the longitudinal fundamental mode $(l,m)=(2,1)$ drops according to equation (\ref{sol:bt0-o}) to $f_{2,1}^{o}=\sqrt{(l-1)(l+2)}/(2\pi\tau_{0})\approx14\, Hz.$ The corresponding viscous damping time increases by the factor of two to $-\tau_{\nu}/\beta_{01}^{o}\approx16\, s,$ where $-\beta_{01}^{o}=5/2$ according to table \ref{tab:bet01-o}. The magnetic damping time of this mode, for which (\ref{eq:bt1-o}) yields $\beta_{1}^{o}=-4,$ is $-\tau_{0}\mathit{Cm}/\beta_{1}^{o}\approx0.5\, s.$ According to this formula, for the magnetic damping time to exceed the viscous one, a magnetic field of $B\gtrsim20\, T$ is necessary. The relaxation time for the axisymmetric fundamental mode $(l,m)=(2,0),$ which is magnetically over-damped, is $-\tau_{0}\mathit{Cm}/\beta_{1}^{e}\approx28\, ms,$ where $\beta_{1}^{e}=-72$ follows from (\ref{eq:bt1-e}). 
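The arithmetic of this example is easy to reproduce; the sketch below (ours) recomputes the dimensionless groups and time scales from the quoted material data:

```python
import math

# Material data for the 1 cm Nickel drop quoted in the text (SI units)
R0, rho, gamma = 5e-3, 7.9e3, 1.8          # radius, density, surface tension
eta, sigma, B = 4.9e-3, 1.2e6, 5.0         # viscosity, conductivity, field

tau0 = math.sqrt(R0 ** 3 * rho / gamma)    # capillary time scale, ~23 ms
tauv = rho * R0 ** 2 / eta                 # viscous time scale, ~40 s
Ca = tau0 / tauv                           # capillary number, ~5.8e-4
Cm = sigma * B ** 2 * R0 ** 2 / math.sqrt(rho * gamma * R0)  # magnetic parameter

l = 2
f_nonmag = math.sqrt(l * (l - 1) * (l + 2)) / (2 * math.pi * tau0)  # ~19 Hz
f_21 = math.sqrt((l - 1) * (l + 2)) / (2 * math.pi * tau0)          # ~14 Hz
t_mag = tau0 * Cm / 4.0                    # magnetic damping time, -beta_1^o = 4

assert abs(tau0 - 23e-3) < 1e-3 and abs(tauv - 40.0) < 1.0
assert abs(Ca - 5.8e-4) < 2e-5 and 80 < Cm < 95
assert abs(f_nonmag - 19) < 1 and abs(f_21 - 14) < 1
assert abs(t_mag - 0.5) < 0.05
```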
The magnetic field affects neither the frequency nor the damping rate of the $(l,m)=(2,2)$ transversal oscillation mode, which is naturally invariant along the field. For the same reason, there is no magnetic damping of this mode either. The first oscillatory transversal mode is $(l,m)=(3,1),$ whose frequency drops according to (\ref{sol:bt0-e}) from $f_{l}=\sqrt{l(l-1)(l+2)}/(2\pi\tau_{0})\approx38\, Hz$ without the magnetic field to $f_{3,1}^{e}=\sqrt{\frac{(l-1)(l+2)}{l(l+1)-m^{2}}}/(2\pi\tau_{0})\approx6\, Hz$ in a strong magnetic field. The magnetic damping time for this mode in a $5\, T$ magnetic field is $-\tau_{0}\mathit{Cm}/\beta_{1}^{e}\approx36\, ms,$ where $\beta_{1}^{e}=-6800/121$ follows from (\ref{eq:bt1-e}). The viscous damping time for this mode is $-\tau_{\nu}/\beta_{01}^{e}\approx6\, s,$ where $\beta_{01}^{e}=-70/11$ follows from table \ref{tab:bet01-e}. The viscous damping is small relative to the magnetic one for this mode, and a magnetic field of about $65\, T$ would be necessary for the magnetic damping time to become as long as the viscous one. In conclusion, this theoretical model provides a basis for the development of a new method for measuring the surface tension, viscosity and electrical conductivity of liquid metals using the oscillating drop technique in a strong superimposed DC magnetic field. \begin{acknowledgements} The author would like to thank Agris Gailitis and Ra\'ul Avalos-Z\'u\~niga for constructive comments and stimulating discussions. \end{acknowledgements}
\section{Introduction} In this note we show an estimate of the $L^p$-norm of the Bergman projection on the Hartogs triangle, the pseudoconvex domain in $\mathbb{C}^2$ defined as \begin{align*} \mathcal{H}=\left\{(z_1,z_2) \in \mathbb{C}^{2} \colon |z_1| < |z_2| <1 \right\}, \end{align*} for $4/3<p<4$. The Hartogs triangle has remarkable geometric and function-theoretic properties, and is a classical source of counterexamples in complex analysis. The boundary $b\mathcal{H}$ of the domain $\mathcal{H}$ has a serious singularity at the point 0, where $b\mathcal{H}$ cannot even be represented as a graph of a continuous function. The closure $\overline{\mathcal{H}}$ does not have a Stein neighborhood basis. Instead, it has a nontrivial Nebenh\"{u}lle. The $\overline{\partial}$-problem on $\mathcal{H}$ is not globally regular (see \cite{CC91}). \newline Let $\mathbb{D}$ denote the unit disc in $\mathbb{C}$, and $\nu$ the normalized Lebesgue volume measure on $\mathbb{D}$, while $\sigma$ is the normalized surface measure on its boundary $\mathbb{T}$. Let $dV$ denote the Lebesgue volume measure on $\mathcal{H}$. The space $L^p_h(\mathcal{H})$ consists of all holomorphic functions $f$ on $\mathcal{H}$, for which \begin{align*} \|f\|_p:=\left\{ \int_{\mathcal{H}} |f(z)|^p d\mu(z) \right\}^{1/p} <\infty, \end{align*} where $d\mu= \frac{ dV}{\pi^2}$. The orthogonal projection operator $\mathbf{P} \colon L^2(\mathcal{H}) \rightarrow L^2_h(\mathcal{H})$ is the Bergman projection associated with the domain $\mathcal{H}$. It follows from the Riesz representation theorem that the Bergman projection is an integral operator with the kernel $K_{\mathcal{H}} (z, w)$ on $\mathcal{H} \times \mathcal{H}$, i.e. $\mathbf{P} f(z) = \int_{\mathcal{H}} K_{\mathcal{H}} (z, w)f(w)\,d\mu(w)$ for all $f \in L^2_h(\mathcal{H})$ (see \cite{Kra}, section 1 for more on this topic). 
It is well known that (see \cite{Ed}) \begin{align*} K_{\mathcal{H}} ((z_1,z_2), (w_1,w_2))=\frac{z_2 \overline{w}_2}{(1-z_2 \overline{w}_2)^2 (z_2 \overline{w}_2 - z_1 \overline{w}_1)^2}. \end{align*} The Bergman projection is a central object in the study of analytic function spaces. It naturally relates to fundamental questions such as duality and harmonic conjugates, and it is also a building block for Toeplitz operators. Understanding its behaviour and estimating its size is therefore of vital importance in many contexts. There are several articles on Bergman projections; we refer the reader to \cite{Chen,Dos,KP,LS,LS2,Liu,MN2,PS,Zey} and the references therein for details of this interesting topic. In \cite{Ch}, Chakrabarti and Zeytuncu proved that the Bergman projection $\mathbf{P}$ is a bounded operator from $L^p(\mathcal{H})$ to $L^p_h(\mathcal{H})$ if and only if $4/3<p<4$. For the interested reader, we recommend \cite{Edholm} for a more general result. A natural and interesting question is to determine the exact value of the $L^p$-operator norm $\|\mathbf{P}\|_p$ of this operator. This turns out to be a difficult task to accomplish, except for the trivial case when $p =2$. \section{Preliminaries} \subsection{The hypergeometric function} If the real part of the complex number $z$ is positive ($\Re (z) > 0$), then the integral \begin{align*} \Gamma(z)=\int_{0}^{\infty} x^{z-1} e^{-x} \, dx \end{align*} converges absolutely, and is known as the Euler integral of the second kind. The recurrence relation \begin{align}\label{gamma} z\Gamma(z)=\Gamma(z+1) \end{align} can be used to uniquely extend the integral formulation for $\Gamma(z)$ to a meromorphic function defined for all complex numbers $z$, except integers less than or equal to zero.
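As a quick illustration (not part of the argument), both the Euler integral and the recurrence (\ref{gamma}) are easy to verify numerically:

```python
import math

# Crude midpoint-rule evaluation of the Euler integral of the second kind;
# illustrative check against math.gamma and the recurrence z*Gamma(z) = Gamma(z+1).
def gamma_integral(z, n=100_000, xmax=50.0):
    h = xmax / n
    return h * sum(((k + 0.5) * h) ** (z - 1) * math.exp(-(k + 0.5) * h)
                   for k in range(n))

for z in (1.2, 2.0, 3.5):
    assert abs(gamma_integral(z) - math.gamma(z)) < 1e-3
    assert abs(z * math.gamma(z) - math.gamma(z + 1)) < 1e-12 * math.gamma(z + 1)
```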
Other important functional equations for the gamma function are Euler's reflection formula \begin{align}\label{reflection} \Gamma(1-z)\Gamma(z)=\frac{\pi}{\sin (\pi z)}, \quad z \notin \mathbb{Z}, \end{align} and the duplication formula \begin{align}\label{dupl} \Gamma(z) \Gamma\left(z + \frac{1}{2} \right) = 2^{1-2z} \sqrt{\pi} \Gamma(2z) \end{align} discovered by Legendre (see \cite{HTF}, Chapter I for more on this topic). Let $(a)_m = \frac{\Gamma(a+m)}{\Gamma(a)}$, that is, $(a)_0 = 1$, $(a)_m = a(a + 1)\cdots (a + m - 1)$ for $m = 1, 2, \ldots$. The notation $(a)_m$ is called the Pochhammer symbol. The classical Euler-Gauss hypergeometric function is defined by \begin{align*} F(a,b;c;z) =\sum_{n=0}^{\infty} \frac{(a)_n (b)_n}{(c)_n } \frac{z^n}{n!}. \end{align*} The series $F(a,b;c;z)$ converges when $|z| < 1$ and diverges when $|z| > 1$. For the reader's convenience, we list the properties of the function $F(a,b;c;z)$ that will be important for this paper: \begin{align} &F(a,b;c;1)=\frac{\Gamma(c) \Gamma(c-a-b)}{\Gamma(c-a) \Gamma(c-b)}, \quad \Re(c-a-b)>0.& \label{hyp1} \\ &F(a,b;c;x)=(1-x)^{c-a-b} F(c-a,c-b;c;x). \label{hyp2}\\ &F(a,b;c;x)=\frac{\Gamma(c)}{\Gamma(b) \Gamma(c-b)} \int_{0}^{1} t^{b-1} (1-t)^{c-b-1} (1-tx)^{-a} \, dt,& \label{hyp3}\\ & \hskip26mm \Re(c)>\Re(b)>0; \, |\arg(1-x)|< \pi; \, x\neq 1.& \nonumber \\ &\frac{d^k}{dx^k} F(a,b;c;x)=\frac{(a)_k (b)_k}{(c)_k} F(a+k,b+k;c+k;x), \quad k \in \mathbb{N}.& \label{hyp4} \end{align} We refer to \cite{HTF}, Chapter II for more properties of this function. \subsection{Essential lemmas} \begin{lem}\label{torus} For $a \in \mathbb{R}$, we have (see \cite{Liu}) \begin{align*} \int_{\mathbb{T}} \frac{d\sigma(\zeta)}{|1- \langle z,\zeta \rangle |^{2a}}=F(a,a;1;|z|^2). \end{align*} \end{lem} It follows from the above lemma and the formula $\sum\limits_{n=0}^{\infty}x^n=\frac{1}{1-x}$ that \begin{align} \int_{\mathbb{T}} \frac{d\sigma(\zeta)}{|1- \langle z,\zeta \rangle |^2}=\frac{1}{1-|z|^2}.
\end{align} \begin{lem}\label{fre} Let $c >0$ and $t >-1$. We have (see \cite{Liu}) \begin{align*} \sup_{z \in \mathbb{D}} \left\{ (1-|z|^2)^c \int_{\mathbb{D}} \frac{(1-|w|^2)^t d\nu(w)}{|1- \langle z,w \rangle|^{2+t+c}} \right\}=\frac{\Gamma(t+1) \Gamma(c)}{\Gamma^2(\frac{2+t+c}{2})}. \end{align*} \end{lem} \begin{lem}\label{mlem} Let $h((z_1,z_2))=(|z_2|^2 - |z_1|^2)(1-|z_2|^2)$. For $1>t>1/2$, we have \begin{align*} \sup_{(z_1,z_2) \in \mathcal{H}} \left\{ h((z_1,z_2))^t \int_{\mathcal{H}} \frac{|z_2 \overline{w}_2| h((w_1,w_2))^{-t} d\mu(w_1,w_2) }{|1-z_2 \overline{w}_2|^2 |z_2 \overline{w}_2 - z_1 \overline{w}_1|^2 } \right\} = \Gamma^2(1-t)\Gamma^2(t). \end{align*} \end{lem} \begin{proof} Fix $1>t>1/2$ and denote $$C(t):= \sup_{(z_1,z_2) \in \mathcal{H}} \left\{ h((z_1,z_2))^t \int_{\mathcal{H}} \frac{|z_2 \overline{w}_2| h((w_1,w_2))^{-t} d\mu(w_1,w_2) }{|1-z_2 \overline{w}_2|^2 |z_2 \overline{w}_2 - z_1 \overline{w}_1|^2 } \right\}.$$ Then $C(t)$ equals \begin{align*} \sup_{(z_1,z_2) \in \mathcal{H}} \left\{ (1-|z_2|^2)^t \int_{\mathbb{D}^{*}} \frac{|z_2 \overline{w}_2| (1-|w_2|^2)^{-t} }{\pi^2 |1-z_2 \overline{w}_2|^2} \left[ \int_W A\, dV(w_1) \right]\, dV(w_2) \right\}, \end{align*} where $\mathbb{D}^{*}:=\{w_2 \colon 0 < |w_2| < 1\}$, $W:=\{w_1 \colon |w_1| < |w_2|\}$ and $$A:=\frac{(|z_2|^2 - |z_1|^2)^t}{|z_2|^2 |w_2|^{2+2t}} \left( 1 - \left| \frac{w_1}{w_2} \right|^2 \right)^{-t} \left| 1- \frac{z_1 \overline{w}_1}{z_2 \overline{w}_2} \right|^{-2}.$$ Now we focus on the integral in brackets. Making the substitution $u=\frac{w_1}{w_2}$, we have \begin{align*} \int_W A\, dV(w_1) = \int_{\mathbb{D}} \frac{(|z_2|^2 - |z_1|^2)^t}{|z_2|^2 |w_2|^{2t}} \left( 1 - \left| u \right|^2 \right)^{-t} \left| 1- \frac{z_1}{z_2} \overline{u} \right|^{-2} \, dV(u).
\end{align*} For fixed $z_2 \in \mathbb{D}^{*}$, by Lemma \ref{fre}, we have \begin{align*} \sup_{|z_1|<|z_2|} \left\{ \int_W A\, dV(w_1) \right\} = \frac{\pi \Gamma(1-t)\Gamma(t)}{|w_2|^{2t} |z_2|^{2-2t}}. \end{align*} Therefore \begin{align*} C(t) = \sup_{z_2 \in \mathbb{D}^{*}} \left\{ \Gamma(1-t)\Gamma(t) \int_{\mathbb{D}^{*}} \frac{(1-|z_2|^2)^t |z_2|^{2t-1} |w_2|^{1-2t} }{\pi (1-|w_2|^2)^{t} |1-z_2 \overline{w}_2|^2 } \, dV(w_2) \right\}. \end{align*} For $z_2 \in \mathbb{D}^{*}$, denote $$ I(z_2)= \int_{\mathbb{D}^{*}} \frac{(1-|z_2|^2)^t |z_2|^{2t-1} |w_2|^{1-2t} }{\pi (1-|w_2|^2)^{t} |1-z_2 \overline{w}_2|^2 } \, dV(w_2). $$ Introducing polar coordinates $w_2=r e^{i\zeta}$, we have \begin{align*} I(z_2)=2(1-|z_2|^2)^t |z_2|^{2t-1} \int_{0}^{1} \left[\int_{\mathbb{T}} \frac{d\sigma(\zeta)}{|1-\langle rz_2,\zeta \rangle|^2} \right] \frac{r^{2-2t} \, dr} {(1-r^2)^{t}}. \end{align*} Next, by Lemma \ref{torus}, \begin{align*} I(z_2)=2(1-|z_2|^2)^t |z_2|^{2t-1} \int_{0}^{1} \frac{r^{2-2t} (1-r^2)^{-t}}{1-r^2|z_2|^2} \,dr. \end{align*} Hence, by (\ref{hyp3}) we obtain \begin{align*} I(z_2)=(1-|z_2|^2)^t |z_2|^{2t-1} \frac{2^{2t-1} \sqrt{\pi} \Gamma(2-2t)}{\Gamma\left(\frac{5}{2}-2t \right)} F\left(1,\frac{3}{2}-t; \frac{5}{2} -2t; |z_2|^2 \right). \end{align*} Finally, by (\ref{hyp2}) \begin{align*} I(z_2)= \frac{2^{2t-1} \sqrt{\pi} \Gamma(2-2t)}{\Gamma\left(\frac{5}{2}-2t \right)} |z_2|^{2t-1} F\left(\frac{3}{2}-2t,1-t; \frac{5}{2} -2t; |z_2|^2 \right). \end{align*} If $3/2-2t>0 \Leftrightarrow 3/4>t$, then $|z_2|^{2t-1} F\left(\frac{3}{2}-2t,1-t; \frac{5}{2} -2t; |z_2|^2 \right)$ is an increasing function of $|z_2| \in [0,1)$, since its Taylor coefficients are all positive. Therefore, by (\ref{hyp1}) \begin{align*} C(t) = \Gamma(1-t)\Gamma(t) \frac{2^{2t-1} \sqrt{\pi} \Gamma(2-2t)}{\Gamma\left(\frac{5}{2}-2t \right)} \frac{\Gamma\left(\frac{5}{2}-2t \right) \Gamma(t)}{\Gamma\left(\frac{3}{2}-t \right)}.
\end{align*} Using the duplication formula (\ref{dupl}), we get \begin{align*} C(t) = \Gamma^2(1-t)\Gamma^2(t). \end{align*} In the case when $3/2-2t \leq 0$ we consider the function $$f(\lambda):=\lambda^{t-1/2} F\left(\frac{3}{2}-2t,1-t; \frac{5}{2} -2t; \lambda \right), \quad \lambda \in [0,1].$$ Applying the differentiation formula (\ref{hyp4}) of the hypergeometric function, we have \begin{align*} f'(\lambda)=\lambda^{t-3/2} g(\lambda), \end{align*} where \begin{align*} g(\lambda)=&\left(t-\frac{1}{2}\right) F\left(\frac{3}{2}-2t,1-t; \frac{5}{2} -2t; \lambda \right) \\ &+ \lambda F\left(\frac{5}{2}-2t,2-t; \frac{7}{2} -2t; \lambda \right). \end{align*} Since $g(0)=t-1/2>0$ and $g'(\lambda) >0 $ for $\lambda \in (0,1)$, we obtain that the function $f$ is an increasing function on the interval $[0,1]$. Therefore the conclusion about the constant $C(t)$ is the same as in the case $3/2-2t>0$, which completes the proof. \end{proof} \begin{lem}\label{cal}\cite{Liu} Let $a, b, c \in \mathbb{R}$ and $t >-1$. The identity \begin{align*} \int_{\mathbb{D}}& \frac{(1-|\xi|^2)^t d\nu(\xi)}{(1- \langle z,\xi \rangle)^a (1- \langle w,\xi \rangle)^b (1- \langle \xi,w \rangle)^c} \\ &=\frac{\Gamma(1+t)}{\Gamma(2+t)} \sum_{j=0}^{\infty} \frac{(a)_j (c)_j}{(2+t)_j j!} F\left(b,c+j;2+t+j;|w|^2\right) \langle z,w\rangle^j \end{align*} holds for any $z,w \in \mathbb{D}$. \end{lem} \begin{lem}\label{rozk}\cite{Liu} Let $1<p<\infty$ and \begin{align*} &\Psi_{\xi}(z):=\Gamma(2/p)\Gamma(2/q) \sum_{k=0}^{\infty} \epsilon_k \langle z, \xi \rangle^k, \\ &\Upsilon_{\xi}(z):= \sum_{k=0}^{\infty} a_k(\xi) \langle z, \xi \rangle^k, \end{align*} where \begin{align*} & \epsilon_k:= \frac{(2/p)_k}{k!} \left( \frac{\Gamma(k+2) \Gamma(k+1)}{ \Gamma(k+1+2/q) \Gamma(k+2/p) } -1 \right), \\ & a_k(\xi):= \frac{(1)_k}{k!} \left( F(2/p-1,k+1;k+2;|\xi|^2) - \frac{\Gamma(2/q) \Gamma(k+2) }{\Gamma(k+1+2/q) } \right).
\end{align*} Then we have \begin{align*} &\sup_{\xi \in \mathbb{D}}\left\{\int_{\mathbb{D}} |\Psi_{\xi}(z)|^p \, d\nu(z) \right\} < \infty, \\ &\sup_{\xi \in \mathbb{D}} \left\{ \int_{\mathbb{D}} |\Upsilon_{\xi}(z)|^p \, d\nu(z) \right\} < \infty. \end{align*} \end{lem} \section{Main result} Our main result reads as follows. \begin{thm}[Main Theorem] For $4/3<p<4$, we have \begin{align}\label{main} \Gamma^2\left(\frac{2}{p}\right)\Gamma^2\left(\frac{2}{q}\right) \leq \|\mathbf{P}\|_p \leq \Gamma^2\left(1-\frac{2}{p}\right) \Gamma^2\left(\frac{2}{p}\right), \end{align} where $q:=\frac{p}{p-1}$ is the conjugate exponent of $p$. \end{thm} \begin{proof} First we prove the upper estimate in (\ref{main}). To do so, we use the well-known Schur test (see, for instance, \cite{Zhu} Theorem 3.6). \begin{lem} Suppose that $(X, \rho)$ is a $\sigma$-finite measure space, $K(x, y)$ is a nonnegative measurable function on $X \times X$, and $T$ is the associated integral operator $$Tf(x)=\int_X K(x,y) f(y) \, d\rho(y).$$ Let $1 < p < \infty$ and $1/p +1/q=1$. If there exist a positive constant $C$ and a positive measurable function $h$ on $X$ such that $$\int_X K(x,y) h(y)^q \, d\rho(y) \leq C h(x)^q$$ for almost every $x$ in $X$ and $$\int_X K(x,y) h(x)^p \, d\rho(x) \leq C h(y)^p$$ for almost every $y$ in $X$, then $T$ is bounded on $L^p(X, d\rho)$ with $\|T\| \leq C$. \end{lem} We only need to consider the case $4>p \geq 2$; the case $4/3 <p <2$ then follows by duality.
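Before applying the Schur test, we note that the gamma-function algebra behind the constant in Lemma \ref{mlem} is easy to cross-check numerically (an illustrative sketch, not part of the proof): Legendre's duplication formula (\ref{dupl}) at $z=1-t$ gives $2^{2t-1}\sqrt{\pi}\,\Gamma(2-2t)=\Gamma(1-t)\Gamma(3/2-t)$, which is exactly the simplification producing $C(t)=\Gamma^2(1-t)\Gamma^2(t)$.

```python
import math

g = math.gamma

# Check the duplication-formula step behind the lemma for several t in (1/2, 1):
#   2**(2t-1) * sqrt(pi) * Gamma(2-2t) == Gamma(1-t) * Gamma(3/2-t),
# hence C(t) = Gamma(1-t)**2 * Gamma(t)**2.
for t in (0.55, 0.6, 0.75, 0.9):
    lhs = 2 ** (2*t - 1) * math.sqrt(math.pi) * g(2 - 2*t)
    rhs = g(1 - t) * g(1.5 - t)
    assert abs(lhs - rhs) < 1e-12 * rhs
    C = (g(1 - t) * g(t)) ** 2   # the Schur constant, with t = 2/p
    assert C > 0
```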
If we put \begin{align*} &K((z_1,z_2),(w_1,w_2))= \frac{z_2 \overline{w}_2}{(1-z_2 \overline{w}_2)^2 (z_2 \overline{w}_2 - z_1 \overline{w}_1)^2},\\ &h((z_1,z_2))=\left[(|z_2|^2 - |z_1|^2)(1-|z_2|^2)\right]^{-\frac{2}{pq}}, \\ &C(p)=\sup_{(z_1,z_2) \in \mathcal{H}} \left\{ h((z_1,z_2))^{2/p} \int_{\mathcal{H}} \frac{|z_2 \overline{w}_2| h((w_1,w_2))^{-2/p} d\mu(w_1,w_2) }{|1-z_2 \overline{w}_2|^2 |z_2 \overline{w}_2 - z_1 \overline{w}_1|^2 } \right\}, \end{align*} where $q$ is the conjugate exponent of $p$, it is clear that \begin{align*} \int_{ \mathcal{H}} K(z,w) h(w)^q \, d\mu(w) \leq C(p) h(z)^q\\ \int_{ \mathcal{H}} K(z,w) h(z)^p \, d\mu(z) \leq C(q) h(w)^p \end{align*} for almost every $z \in \mathcal{H}$ and $w \in \mathcal{H}$, respectively. From Lemma \ref{mlem} $$C(p)=\Gamma^2\left(1-\frac{2}{p}\right) \Gamma^2\left(\frac{2}{p}\right)=C(q).$$ Hence, an application of Schur's test gives the desired upper estimate. To prove the lower estimate, we define, for $(z_1,z_2), (\xi_1,\xi_2) \in \mathcal{H}$ \begin{align*} f_{(\xi_1,\xi_2)} ((z_1,z_2)) :=\frac{(1-\xi_2 \overline{z}_2)^{1-\frac{2}{p}}}{z_2(1-z_2 \overline{\xi}_2)} \frac{ \left(1-\frac{\xi_1}{\xi_2} \frac{\overline{z}_1}{\overline{z}_2} \right)^{1-\frac{2}{p}} }{ 1-\frac{z_1}{z_2} \frac{\overline{\xi}_1}{\overline{\xi}_2}}. \end{align*} We show that \begin{align}\label{f} \left\|f_{(\xi_1,\xi_2)}\right\|^p_p \approx \log \frac{1}{1-|\xi_2|^2} \log \frac{1}{1-\left| \xi_1/ \xi_2 \right|^2}, \quad \text{as} \quad |\xi_2|, \left| \xi_1/ \xi_2 \right| \rightarrow 1^{-}. 
\end{align} By definition, we have \begin{align*} \left\|f_{(\xi_1,\xi_2)}\right\|^p_p &= \int_{\mathcal{H}} \frac{|1-\xi_2 \overline{z}_2|^{p-2}}{|z_2|^p \left|1-z_2 \overline{\xi}_2\right|^p} \frac{ \left|1-\frac{\xi_1}{\xi_2} \frac{\overline{z}_1}{\overline{z}_2} \right|^{p-2} }{ \left|1-\frac{z_1}{z_2} \frac{\overline{\xi}_1}{\overline{\xi}_2} \right|^p} \, d\mu(z_1,z_2) \\ &= \int_{\mathcal{H}} \frac{1}{|z_2|^p \left|1-z_2 \overline{\xi}_2\right|^2} \frac{ 1 }{ \left|1-\frac{z_1}{z_2} \frac{\overline{\xi}_1}{\overline{\xi}_2} \right|^2} \, d\mu(z_1,z_2) \\ &= \int_{\mathbb{D}^{*}} \frac{|z_2|^{-p}}{\left|1-z_2 \overline{\xi}_2\right|^2} \left[ \int_{|z_1|<|z_2|} \frac{ 1 }{ \left|1-\frac{z_1}{z_2} \frac{\overline{\xi}_1}{\overline{\xi}_2} \right|^2} \, d\nu(z_1) \right] \, d\nu(z_2). \end{align*} Making the substitution $u=\frac{z_1}{z_2}$, we have \begin{align*} \left\|f_{(\xi_1,\xi_2)}\right\|^p_p = \int_{\mathbb{D}^{*}} \frac{|z_2|^{2-p}}{\left|1-z_2 \overline{\xi}_2\right|^2} \left[ \int_{\mathbb{D}} \frac{ 1 }{ \left|1- \overline{\xi}_1/ \overline{\xi}_2 u \right|^2} \, d\nu(u) \right] \, d\nu(z_2). \end{align*} Now (\ref{f}) follows from the well-known Forelli-Rudin estimate (see \cite{R}, Proposition 1.4.10). \newline Similar calculations show that \begin{align*} \mathbf{P}f_{(\xi_1,\xi_2)} ((z_1,z_2))= \int_{ \mathbb{D}^{*}} \frac{ (1-\xi_2 \overline{z}_2)^{1-\frac{2}{p}} \, d\nu(w_2)}{z_2(1-z_2 \overline{w}_2)^2 (1-w_2 \overline{\xi}_2)} \int_{\mathbb{D}} \frac{ \left(1-\frac{\xi_1}{\xi_2} \overline{u} \right)^{1-\frac{2}{p}} \, d\nu(u)}{ \left( 1-\frac{z_1}{z_2} \overline{u}\right)^2 \left(1-u \frac{\overline{\xi}_1}{\overline{\xi}_2} \right)}.
\end{align*} Using Lemma \ref{cal} twice, we get \begin{align*} \mathbf{P}f_{(\xi_1,\xi_2)} ((z_1,z_2))=&\frac{1}{z_2} \sum_{k=0}^{\infty} \frac{(1)_k}{k!} F\left(2/p-1,1+k;2+k;|\xi_2|^2\right) \langle z_2,\xi_2 \rangle^k \\ &\cdot \sum_{l=0}^{\infty} \frac{(1)_l}{l!} F\left(2/p-1,1+l;2+l;|\xi_1/ \xi_2|^2\right) \left\langle \frac{z_1}{z_2}, \frac{\xi_1}{\xi_2} \right\rangle^l. \end{align*} It is easy to check that \begin{align*} \mathbf{P}f_{(\xi_1,\xi_2)} ((z_1,z_2))=&\frac{1}{z_2}\left( \Phi_{\xi_2}(z_2) + \Psi_{\xi_2}(z_2) + \Upsilon_{\xi_2}(z_2) \right) \\ & \cdot \left(\Phi_{\xi_1 / \xi_2}(z_1/z_2) + \Psi_{\xi_1 / \xi_2}(z_1/ z_2) + \Upsilon_{\xi_1 / \xi_2}(z_1/z_2) \right), \end{align*} where $\Phi_{\xi}(z)=\Gamma(2/p)\Gamma(2/q)\left(1- z \overline{\xi} \right)^{-2/p}$. Hence \begin{align*} \|\mathbf{P}\|_p &\geq \limsup_{|\xi_2|,|\xi_1 / \xi_2|\rightarrow 1^{-}} \frac{\| \mathbf{P}f_{(\xi_1,\xi_2)}\|_p}{\|f_{(\xi_1,\xi_2)}\|_p} \\& \geq \limsup_{|\xi_2|,|\xi_1 / \xi_2|\rightarrow 1^{-}} \left(\frac{ \left\|\frac{1}{z_2} \Phi_{\xi_2}(z_2) \Phi_{\xi_1 / \xi_2}(z_1/z_2)\right\|_p } {\|f_{(\xi_1,\xi_2)}\|_p} - \frac{R}{\|f_{(\xi_1,\xi_2)}\|_p} \right), \end{align*} where \begin{align*} R=&\left\|\frac{1}{z_2} \Phi_{\xi_2}(z_2) \Psi_{\xi_1 / \xi_2}(z_1/ z_2)\right\|_p + \left\|\frac{1}{z_2} \Phi_{\xi_2}(z_2) \Upsilon_{\xi_1 / \xi_2}(z_1/ z_2)\right\|_p\\ &+ \left\|\frac{1}{z_2} \Psi_{\xi_2}(z_2) \Phi_{\xi_1 / \xi_2}(z_1/ z_2)\right\|_p + \left\|\frac{1}{z_2} \Psi_{\xi_2}(z_2) \Upsilon_{\xi_1 / \xi_2}(z_1/ z_2)\right\|_p \\&+ \left\|\frac{1}{z_2} \Upsilon_{\xi_2}(z_2) \Phi_{\xi_1 / \xi_2}(z_1/ z_2)\right\|_p + \left\|\frac{1}{z_2} \Upsilon_{\xi_2}(z_2) \Psi_{\xi_1 / \xi_2}(z_1/ z_2)\right\|_p \\ &+ \left\|\frac{1}{z_2} \Psi_{\xi_2}(z_2) \Psi_{\xi_1 / \xi_2}(z_1/ z_2)\right\|_p + \left\|\frac{1}{z_2} \Upsilon_{\xi_2}(z_2) \Upsilon_{\xi_1 / \xi_2}(z_1/ z_2)\right\|_p.
\end{align*} It is obvious that \begin{align*} \left\|\frac{1}{z_2} \Phi_{\xi_2}(z_2) \Phi_{\xi_1 / \xi_2}(z_1/z_2)\right\|_p=\Gamma^2(2/p)\Gamma^2(2/q) \|f_{(\xi_1,\xi_2)}\|_p. \end{align*} The following lemma completes the proof of the main result. \begin{lem} Define $R$ as above. Then for $4/3<p<4$ \begin{align} \limsup_{|\xi_2|,|\xi_1 / \xi_2|\rightarrow 1^{-}} \frac{R}{\|f_{(\xi_1,\xi_2)}\|_p}=0. \end{align} \end{lem} To avoid disrupting the flow of the paper with several pages of computations, we postpone its proof to the next section. \end{proof} \section{Proof of Lemma 3.2} It is enough to show that each term of $R$ divided by $\|f_{(\xi_1,\xi_2)}\|_p$ goes to zero as $|\xi_2|,|\xi_1 / \xi_2|\rightarrow 1^{-}$. We start with \begin{align*} \left\|\frac{1}{z_2} \Phi_{\xi_2}(z_2) \Psi_{\xi_1 / \xi_2}(z_1/ z_2)\right\|^p_p=\int_{\mathcal{H}} \left|\frac{1}{z_2} \Phi_{\xi_2}(z_2) \Psi_{\xi_1 / \xi_2}(z_1/ z_2)\right|^p \, d\mu(z_1,z_2). \end{align*} Making the substitution $u=z_1/z_2$, we have \begin{align*} \left\|\frac{1}{z_2} \Phi_{\xi_2}(z_2) \Psi_{\xi_1 / \xi_2}(z_1/ z_2)\right\|^p_p= \int_{ \mathbb{D}^{*}} \frac{|\Phi_{\xi_2}(z_2)|^p}{|z_2|^{p-2}} \left\{ \int_{\mathbb{D}} |\Psi_{\xi_1 / \xi_2}(u)|^p \,d\nu(u) \right\} \, d\nu(z_2). \end{align*} Lemma \ref{rozk} yields that there exists a constant $C>0$ such that \begin{align*} \left\|\frac{1}{z_2} \Phi_{\xi_2}(z_2) \Psi_{\xi_1 / \xi_2}(z_1/ z_2)\right\|^p_p \leq C \int_{ \mathbb{D}^{*}} \frac{|\Phi_{\xi_2}(z_2)|^p}{|z_2|^{p-2}} \, d\nu(z_2). \end{align*} Since $\int_{ \mathbb{D}^{*}} |z_2|^{2-p} \, d\nu(z_2)$ is finite (when $p<4$) and $\Phi_{\xi_2}(0)=\Gamma(2/p)\Gamma(2/q)$, we can write \begin{align*} \left\|\frac{1}{z_2} \Phi_{\xi_2}(z_2) \Psi_{\xi_1 / \xi_2}(z_1/ z_2)\right\|^p_p \leq C_1 \int_{ \mathbb{D}^{*}} |\Phi_{\xi_2}(z_2)|^p \, d\nu(z_2), \end{align*} for some constant $C_1$.
Therefore by the Forelli-Rudin estimate \begin{align*} \left\|\frac{1}{z_2} \Phi_{\xi_2}(z_2) \Psi_{\xi_1 / \xi_2}(z_1/ z_2)\right\|^p_p \approx \log \frac{1}{1-|\xi_2|^2}, \quad \text{as} \quad |\xi_2| \rightarrow 1^{-}. \end{align*} Hence, by (\ref{f}) \begin{align*} \limsup_{|\xi_2|,|\xi_1 / \xi_2|\rightarrow 1^{-}} \frac{\left\|\frac{1}{z_2} \Phi_{\xi_2}(z_2) \Psi_{\xi_1 / \xi_2}(z_1/ z_2)\right\|_p}{\|f_{(\xi_1,\xi_2)}\|_p}=0. \end{align*} The limit \begin{align*} \limsup_{|\xi_2|,|\xi_1 / \xi_2|\rightarrow 1^{-}} \frac{\left\|\frac{1}{z_2} \Phi_{\xi_2}(z_2) \Upsilon_{\xi_1 / \xi_2}(z_1/ z_2)\right\|_p}{\|f_{(\xi_1,\xi_2)}\|_p}=0. \end{align*} can be obtained by the similar method and we omit the details. \newline Let us now consider \begin{align*} \left\|\frac{1}{z_2} \Psi_{\xi_2}(z_2) \Phi_{\xi_1 / \xi_2}(z_1/ z_2)\right\|^p_p=\int_{\mathcal{H}} \left| \frac{1}{z_2} \Psi_{\xi_2}(z_2) \Phi_{\xi_1 / \xi_2}(z_1/ z_2) \right|^p \, d\mu(z_1,z_2). \end{align*} As before we make a substitution $u=z_1/z_2$ \begin{align*} \left\|\frac{1}{z_2} \Psi_{\xi_2}(z_2) \Phi_{\xi_1 / \xi_2}(z_1/ z_2)\right\|^p_p=\int_{ \mathbb{D}^{*}} \frac{|\Psi_{\xi_2}(z_2)|^p}{|z_2|^{p-2}} \left\{ \int_{\mathbb{D}} |\Phi_{\xi_1 / \xi_2}(u)|^p \, d\nu(u) \right\} \, d\nu(z_2). \end{align*} From the Forelli-Rudin estimate there exists a constant $C>0$ such that \begin{align*} \left\|\frac{1}{z_2} \Psi_{\xi_2}(z_2) \Phi_{\xi_1 / \xi_2}(z_1/ z_2)\right\|^p_p \leq C \log \frac{1}{1-\left| \xi_1/ \xi_2 \right|^2} \int_{ \mathbb{D}^{*}} \frac{|\Psi_{\xi_2}(z_2)|^p}{|z_2|^{p-2}} \, d\nu(z_2). \end{align*} Since $\int_{ \mathbb{D}^{*}} |z_2|^{2-p} \, d\nu(z_2)$ is finite (when $p<4$) and $$\Psi_{\xi_2}(0)=\Gamma(2/p)\Gamma(2/q)\left(\frac{1}{\Gamma(1+2/q)\Gamma(2/p)}-1\right),$$ then \begin{align*} \left\|\frac{1}{z_2} \Psi_{\xi_2}(z_2) \Phi_{\xi_1 / \xi_2}(z_1/ z_2)\right\|^p_p \leq C_1 \log \frac{1}{1-\left| \xi_1/ \xi_2 \right|^2}. 
\end{align*} Therefore, by (\ref{f}) \begin{align*} \limsup_{|\xi_2|,|\xi_1 / \xi_2|\rightarrow 1^{-}} \frac{\left\|\frac{1}{z_2} \Psi_{\xi_2}(z_2) \Phi_{\xi_1 / \xi_2}(z_1/ z_2)\right\|_p}{\|f_{(\xi_1,\xi_2)}\|_p}=0. \end{align*} A similar calculation reveals that \begin{align*} \left\|\frac{1}{z_2} \Upsilon_{\xi_2}(z_2) \Phi_{\xi_1 / \xi_2}(z_1/ z_2)\right\|^p_p \leq C \log \frac{1}{1-\left| \xi_1/ \xi_2 \right|^2} \int_{ \mathbb{D}^{*}} \frac{|\Upsilon_{\xi_2}(z_2)|^p}{|z_2|^{p-2}} \, d\nu(z_2), \end{align*} for some constant $C$. Since \begin{align*} &\Upsilon_{0}(0)= 1 - \frac{\Gamma(2/q) \Gamma(2) }{\Gamma(1+2/q)}, \\ &\limsup_{|\xi_2| \rightarrow 1^{-}} \Upsilon_{\xi_2}(0)=\frac{\Gamma(2-2/p) \Gamma(2) }{\Gamma(3-2/p)} - \frac{\Gamma(2/q) \Gamma(2) }{\Gamma(1+2/q)}, \end{align*} we conclude that \begin{align*} \limsup_{|\xi_2|,|\xi_1 / \xi_2|\rightarrow 1^{-}} \frac{\left\|\frac{1}{z_2} \Upsilon_{\xi_2}(z_2) \Phi_{\xi_1 / \xi_2}(z_1/ z_2)\right\|_p}{\|f_{(\xi_1,\xi_2)}\|_p}=0. \end{align*} Next we investigate \begin{align*} \left\|\frac{1}{z_2} \Upsilon_{\xi_2}(z_2) \Upsilon_{\xi_1 / \xi_2}(z_1/ z_2)\right\|^p_p \quad \text{and} \quad \left\|\frac{1}{z_2} \Upsilon_{\xi_2}(z_2) \Psi_{\xi_1 / \xi_2}(z_1/ z_2)\right\|^p_p. \end{align*} After a change of variables \begin{align*} \left\|\frac{\Upsilon_{\xi_2}(z_2) \Upsilon_{\xi_1 / \xi_2}(z_1/ z_2)}{z_2} \right\|^p_p=\int_{ \mathbb{D}^{*}} \frac{|\Upsilon_{\xi_2}(z_2)|^p}{|z_2|^{p-2}} \left\{ \int_{\mathbb{D}} |\Upsilon_{\xi_1 / \xi_2}(u)|^p \, d\nu(u) \right\} \, d\nu(z_2),\\ \left\|\frac{\Upsilon_{\xi_2}(z_2) \Psi_{\xi_1 / \xi_2}(z_1/ z_2)}{z_2} \right\|^p_p=\int_{ \mathbb{D}^{*}} \frac{|\Upsilon_{\xi_2}(z_2)|^p}{|z_2|^{p-2}} \left\{ \int_{\mathbb{D}} |\Psi_{\xi_1 / \xi_2}(u)|^p \, d\nu(u) \right\} \, d\nu(z_2). 
\end{align*} Hence, using Lemma \ref{rozk}, \begin{align*} \left\|\frac{1}{z_2} \Upsilon_{\xi_2}(z_2) \Upsilon_{\xi_1 / \xi_2}(z_1/ z_2)\right\|^p_p \leq C \int_{ \mathbb{D}^{*}} \frac{|\Upsilon_{\xi_2}(z_2)|^p}{|z_2|^{p-2}} \, d\nu(z_2),\\ \left\|\frac{1}{z_2} \Upsilon_{\xi_2}(z_2) \Psi_{\xi_1 / \xi_2}(z_1/ z_2)\right\|^p_p \leq D \int_{ \mathbb{D}^{*}} \frac{|\Upsilon_{\xi_2}(z_2)|^p}{|z_2|^{p-2}} \, d\nu(z_2) \end{align*} for some constants $C,D$. Since the function $\Upsilon_{\xi_2}(0)$ (as a function of $\xi_2$) is bounded on $\mathbb{D}$ and $\int_{ \mathbb{D}^{*}} |z_2|^{2-p} \, d\nu(z_2)$ is finite, it is easy to see that \begin{align*} \left\|\frac{1}{z_2} \Upsilon_{\xi_2}(z_2) \Upsilon_{\xi_1 / \xi_2}(z_1/ z_2)\right\|^p_p \leq C_1 \int_{ \mathbb{D}^{*}} |\Upsilon_{\xi_2}(z_2)|^p \, d\nu(z_2),\\ \left\|\frac{1}{z_2} \Upsilon_{\xi_2}(z_2) \Psi_{\xi_1 / \xi_2}(z_1/ z_2)\right\|^p_p \leq D_1 \int_{ \mathbb{D}^{*}} |\Upsilon_{\xi_2}(z_2)|^p \, d\nu(z_2). \end{align*} Applying Lemma \ref{rozk} again, we have \begin{align*} \sup_{(\xi_1,\xi_2) \in \mathcal{H}} \left\|\frac{1}{z_2} \Upsilon_{\xi_2}(z_2) \Upsilon_{\xi_1 / \xi_2}(z_1/ z_2)\right\|_p < \infty,\\ \sup_{(\xi_1,\xi_2) \in \mathcal{H}} \left\|\frac{1}{z_2} \Upsilon_{\xi_2}(z_2) \Psi_{\xi_1 / \xi_2}(z_1/ z_2)\right\|_p < \infty. \end{align*} Thus \begin{align*} \limsup_{|\xi_2|,|\xi_1 / \xi_2|\rightarrow 1^{-}} \frac{\left\|\frac{1}{z_2} \Upsilon_{\xi_2}(z_2) \Upsilon_{\xi_1 / \xi_2}(z_1/ z_2)\right\|_p}{\|f_{(\xi_1,\xi_2)}\|_p}=0, \\ \limsup_{|\xi_2|,|\xi_1 / \xi_2|\rightarrow 1^{-}} \frac{\left\|\frac{1}{z_2} \Upsilon_{\xi_2}(z_2) \Psi_{\xi_1 / \xi_2}(z_1/ z_2)\right\|_p}{\|f_{(\xi_1,\xi_2)}\|_p}=0. \end{align*} This is what we wanted to establish.
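All of the estimates in this section ultimately rest on the logarithmic blow-up in the Forelli-Rudin estimate. As a small numerical illustration (not part of the proof), with the normalized measure $\nu$ one has the exact identity $\int_{\mathbb{D}} |1-\overline{w}\zeta|^{-2}\, d\nu(\zeta) = -\log(1-|w|^2)/|w|^2$, which indeed grows like $\log\frac{1}{1-|w|^2}$ as $|w| \rightarrow 1^{-}$:

```python
import math

# Series form of int_D |1 - conj(w) zeta|^{-2} dnu(zeta): expanding the
# kernel and integrating term by term gives
#   sum_{n>=0} |w|^{2n}/(n+1) = -log(1 - |w|^2)/|w|^2.
def fr_series(a, nterms=20_000):    # a = |w|^2 in (0, 1)
    return sum(a**n / (n + 1) for n in range(nterms))

for a in (0.5, 0.9, 0.99):
    assert abs(fr_series(a) + math.log(1 - a) / a) < 1e-9
```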
Similar calculations show that \begin{align*} \limsup_{|\xi_2|,|\xi_1 / \xi_2|\rightarrow 1^{-}} \frac{\left\|\frac{1}{z_2} \Psi_{\xi_2}(z_2) \Psi_{\xi_1 / \xi_2}(z_1/ z_2)\right\|_p}{\|f_{(\xi_1,\xi_2)}\|_p}=0, \\ \limsup_{|\xi_2|,|\xi_1 / \xi_2|\rightarrow 1^{-}} \frac{\left\|\frac{1}{z_2} \Psi_{\xi_2}(z_2) \Upsilon_{\xi_1 / \xi_2}(z_1/ z_2)\right\|_p}{\|f_{(\xi_1,\xi_2)}\|_p}=0. \end{align*} That completes the proof. \bibliographystyle{amsplain}
\renewcommand\section{\@startsection{section}{1}{\z@}%
{-3.5ex \@plus -1ex \@minus -.2ex}%
{2.3ex \@plus.2ex}%
{\normalfont\large\bfseries}}
\renewcommand\subsection{\@startsection{subsection}{2}{\z@}%
{-3.25ex\@plus -1ex \@minus -.2ex}%
{1.5ex \@plus .2ex}%
{\normalfont\bfseries}}
\renewcommand{\H}{\mathcal{H}}
\linespread{1.3}
\begin{document} \begin{titlepage} \unitlength = 1mm \ \\ \vskip 1cm \begin{center} { \LARGE {\textsc{Low's Subleading Soft Theorem as a Symmetry of QED }}} \vspace{0.8cm} Vyacheslav Lysov, Sabrina Pasterski and Andrew Strominger \vspace{1cm} {\it Center for the Fundamental Laws of Nature, Harvard University,\\ Cambridge, MA 02138, USA} \begin{abstract} It was shown by F. Low in the 1950s that the subleading terms of soft photon ${\cal S}$-matrix elements obey a universal linear relation. In this paper we give a new interpretation to this old relation, for the case of massless QED, as an infinitesimal symmetry of the ${\cal S}$-matrix. The symmetry is shown to be locally generated by a vector field on the conformal sphere at null infinity. Explicit expressions are constructed for the associated charges as integrals over null infinity and shown to generate the symmetry. These charges are local generalizations of electric and magnetic dipole charges. \end{abstract} \vspace{1.0cm} \end{center} \end{titlepage} \pagestyle{empty} \pagestyle{plain} \pagenumbering{arabic} \tableofcontents \section{Introduction} Soft theorems can be reinterpreted as symmetries of the ${\cal S}$-matrix for which the soft particles are Goldstone modes \cite{as,asbms,hms}. A priori, there is no guarantee that the resulting symmetry takes any simple or local form.
However, for the case of the soft graviton theorem \cite{steve}, the symmetry turns out to be a diagonal subgroup of the product group of BMS \cite{bms} diffeomorphisms acting on past and future null infinity \cite{hms}. There is also a subleading soft graviton theorem~\cite{fc}, which is equivalent to a Virasoro symmetry at null infinity~\cite{klps, Adamo:2014yya,Geyer:2014lca}. For the leading soft photon theorem, the resulting symmetry was very recently shown to be the infinite-dimensional subgroup of $U(1)$ gauge transformations which approach the same angle-dependent constant at either end of any light ray crossing Minkowski space \cite{hmps}. In this paper we consider the subleading soft photon theorem, specializing to massless QED.\footnote{This specialization is made, as in \cite{hmps}, to avoid dealing with singularities in the conformal compactification of past and future timelike infinity in the massive case.} It has been known since the work of Low~\cite{low54, low}, Burnett-Kroll~\cite{bk} and Gell-Mann-Goldberger \cite{ggm} that the subleading, as well as the leading, term of soft photon absorption or emission is universal; see equation (3.2) below. In the massless case loop corrections are in general expected~\cite{DelDuca:1990gz,he,bdn}, but we will not consider their effects here. We re-express the subleading soft relation as a symmetry acting on in- and out-states. However, unlike all the cases mentioned above, the resulting symmetry is $not$ a subgroup of the original gauge symmetry.\footnote{We expect this also to be the case for the subsubleading soft graviton theorem.} It acts locally on the conformal sphere at ${\mathcal I}$ where it is parameterized by a vector field $Y$. However it is bilocal in advanced or retarded time. As already noted in~\cite{he}, the bilocal form is reminiscent of the Yangian appearing in $\mathcal{N}=4$ gauge theories, but we have not found a precise relation. 
There may also be a connection to the `extra' conserved quantities of Newman and Penrose~\cite{np}. If $Y$ is one of the global $SL(2,C)$ rotations, the symmetry implies global magnetic dipole charge conservation. Having a generic $Y$ is a local generalization of this, in the same sense that supertranslations (superrotations) are local generalizations of global translations (rotations) in the gravity case. We wish to stress that, despite the precise formulae presented, the nature and significance of the symmetry remains largely mysterious to us. It is not a subgroup of the gauge group and, unlike the cases considered in~\cite{as, hmps}, does not come under the usual rubric of asymptotic symmetries. Moreover, the infinitesimal symmetry generators do not commute and their commutators give yet more symmetries. We do not know whether or not a finite version of the symmetry transformation exists. The presence of so many symmetries would ordinarily imply integrability, but it is highly implausible that all abelian theories with massless charges are integrable. Another possibility is that there is no simple extension to massive QED, and loop corrections in the massless case somehow eliminate the symmetries. Despite all these uncertainties, our formulae seem of interest and are presented here in the hope that further investigations can put them into proper context! This paper is organized as follows. In section~\ref{sec:maxwell} we give our conventions, the mode expansion for the $U(1)$ gauge field and define both the leading and subleading soft operators. In section~\ref{sec:soft} we review the subleading term in Low's soft photon theorem, and then rewrite it as a symmetry of the ${\cal S}$-matrix. We construct the associated charges and show that their actions on the fields reproduce the infinitesimal symmetries.
The charges are first presented as integrals over all of past or future null infinity, and then, in section~\ref{sec:Qmax}, are shown to reduce to boundary expressions after using the gauge constraints. This is surprising as they are not gauge symmetries! Finally, we discuss the connection to dipole charges. \section{Preliminaries} \label{sec:maxwell} In this section we collect essential formulae and introduce our conventions. For more details see \cite{hmps}. \subsection{Classical equations} Flat Minkowski coordinates $(x^0,x^1,x^2,x^3)$ are given by \begin{eqnarray}\label{flt} x^0&=& u+r~~=~v-r, \cr x^1+ix^2&=&{2rz \over 1+z{\bar z}},\cr x^3&=& {r(1-z{\bar z}) \over 1+z{\bar z}}, \end{eqnarray} where $u$ ($v$) is retarded (advanced) time. In retarded (advanced) coordinates, the metric is \begin{equation} \label{eq:coord} ds^2 =-du^2 -2du dr +2r^2 \gamma_{z{\bar z}} dzd{\bar z} =-dv^2 +2dv dr +2r^2 \gamma_{z{\bar z}} dzd{\bar z} , \end{equation} where $\gamma_{z{\bar z}}$ is the round metric on the unit $S^2$. In terms of $\mathcal{F}_{\mu\nu}=\partial_\mu\mathcal{A}_\nu-\partial_\nu\mathcal{A}_\mu$ and matter current $j_\nu^M$, the Maxwell equations in retarded coordinates are \be\begin{array}{c} \label{eq:maxwell} -\gamma_{z{\bar z}} r^2 \partial_u \mathcal{F}_{ru} + \partial_z \mathcal{F}_{{\bar z} u} + \partial_{{\bar z}} \mathcal{F}_{zu} +\partial_r (\gamma_{z{\bar z}}r^2\mathcal{F}_{ru} ) = e^2 \gamma_{z{\bar z}}r^2 j_u^M ,\\ \partial_z \mathcal{F}_{{\bar z} r} + \partial_{{\bar z}}\mathcal{F}_{zr} +\partial_r( \gamma_{z{\bar z}} r^2\mathcal{F}_{ru}) =e^2 \gamma_{z{\bar z}} r^2 j_r^M,\\ r^2 \partial_r(\mathcal{F}_{rz} -\mathcal{F}_{uz}) -r^2 \partial_u \mathcal{F}_{rz} +\partial_z (\gamma^{z{\bar z}} \mathcal{F}_{{\bar z} z}) = e^2 r^2 j_z^M. \end{array}\ee A similar expression applies to advanced coordinates.
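As a quick consistency check (illustrative, not part of the paper), one can verify by finite differences that the chart (\ref{flt}) indeed pulls the flat metric back to (\ref{eq:coord}); here we assume the standard normalization $\gamma_{z{\bar z}}=2/(1+z{\bar z})^2$ and work in the real chart $(u,r,x,y)$ with $z=x+iy$, in which $2r^2\gamma_{z{\bar z}}\,dzd{\bar z} = \frac{4r^2}{(1+z{\bar z})^2}(dx^2+dy^2)$:

```python
ETA = [-1.0, 1.0, 1.0, 1.0]   # flat metric, signature (-,+,+,+)

def cart(u, r, x, y):
    # Cartesian coordinates of the retarded chart, with z = x + i y
    zz = x*x + y*y
    return [u + r, 2*r*x/(1 + zz), 2*r*y/(1 + zz), r*(1 - zz)/(1 + zz)]

def pullback(u, r, x, y, h=1e-6):
    # g_ab = eta_{mu nu} (d x^mu / d q^a)(d x^nu / d q^b) for q = (u, r, x, y),
    # with the Jacobian computed by central finite differences
    q = (u, r, x, y)
    J = []
    for a in range(4):
        qp, qm = list(q), list(q)
        qp[a] += h
        qm[a] -= h
        xp, xm = cart(*qp), cart(*qm)
        J.append([(xp[m] - xm[m]) / (2*h) for m in range(4)])
    return [[sum(ETA[m]*J[a][m]*J[b][m] for m in range(4)) for b in range(4)]
            for a in range(4)]

u, r, x, y = 0.3, 2.0, 0.7, -0.4
g = pullback(u, r, x, y)
zz = x*x + y*y
assert abs(g[0][0] + 1.0) < 1e-6                  # g_uu = -1
assert abs(g[0][1] + 1.0) < 1e-6                  # g_ur = -1
assert abs(g[1][1]) < 1e-6                        # g_rr = 0
assert abs(g[2][2] - 4*r*r/(1 + zz)**2) < 1e-4    # g_xx = 2 r^2 gamma_{z zbar}
assert abs(g[3][3] - g[2][2]) < 1e-4 and abs(g[2][3]) < 1e-4
```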
\subsection{Mode expansions} The mode expansion for the outgoing free Maxwell field is \begin{equation} \mathcal{A}^{out}_{\mu}(x) =e \sum\limits_{\alpha=\pm} \int \frac{ d^3q}{(2\pi)^3} \frac{1}{2 \omega_q} \left( {\epsilon}^{\alpha*}_{\mu} ({ \vec q})a^{out}_\alpha ({\vec q}) e^{i q \cdot x} + {\epsilon}^\alpha_{\mu}({\vec q}) a^{out}_\alpha ({\vec q})^\dagger e^{- i q \cdot x} \right), \end{equation} where $q^0 = \omega_q = | {\vec q}|$, $\alpha=\pm$ are the two helicities and \begin{equation}\label{rrd} [a^{out}_\alpha ({\vec q}), a^{out}_\beta ({\vec{q'}})^\dagger ]= 2\omega_q\delta_{\alpha\beta} (2\pi)^3 \delta^3 \left( {\vec q} - {\vec q}' \right). \end{equation} Outgoing photons with momentum $q$ and helicity $\alpha$ correspond to final-state insertions of $a^{out}_\alpha ({\vec q})$. They arrive at a point $w$ on the conformal sphere at ${\mathcal I}^+$. It is convenient to parametrize the photon four-momentum by $(\omega_q,w,{\bar w})$ \begin{equation}\label{gravmom} q^\mu = \frac{\omega_q}{1 + w {\bar w}} \left( 1 + w {\bar w} , w + {\bar w} , i \left( \bar{w} - w\right), 1 - w {\bar w} \right), \end{equation} with polarization tensors \be\begin{array}{c} \label{gg} { {\epsilon}}^{+\mu}( {\vec q} ) = \frac{1}{\sqrt{2}} \left( {\bar w}, 1, - i, - {\bar w} \right), \\ {{\epsilon}}^{-\mu}({\vec q} ) = \frac{1}{\sqrt{2}} \left( w , 1, i, - w \right). 
\end{array}\ee These obey ${\epsilon}^{\pm\mu}q_\mu=0$ and \begin{equation} {\epsilon}_{{\bar z}}^+ \left(\vec{q} \right) = \partial_{{\bar z}} x^\mu{\epsilon}^+_\mu \left( \vec{q} \right)= \frac{ \sqrt{2} r \left( 1 + z\bar{w} \right)}{ \left( 1 + z {\bar z} \right)^2 } ,~~~~{\epsilon}_{{\bar z}}^- \left({\vec q} \right) = \partial_{{\bar z}} x^\mu{\epsilon}^-_\mu \left( \vec{q} \right) = \frac{ \sqrt{2} r { z} \left( { w} - { z} \right) }{ \left( 1 + z {\bar z} \right)^2 }.\end{equation} We define the boundary field on ${\mathcal I}^+$ by \begin{equation} A_{{\bar z}}(u,z,{\bar z}) = \lim_{r\to\infty} \mathcal{A}^{out}_{{\bar z}}(u,r,z,{\bar z})=\lim_{r\to\infty} \partial_{\bar z} x^\mu \mathcal{A}^{out}_{\mu}(u,r,z,{\bar z}). \end{equation} This is related to the plane wave modes by \begin{equation} A_{{\bar z}} = e \lim\limits_{r\to\infty} \partial_{\bar z} x^\mu \sum\limits_{\alpha=\pm} \int \frac{ d^3q}{(2\pi)^3} \frac{1}{2 \omega_q} \left( {\epsilon}^{\alpha*}_{\mu} ({ \vec q})a^{out}_\alpha ({\vec q}) e^{- i \omega_q u - i \omega_q r \left( 1 - \cos\theta \right) } + h.c. \right) \end{equation} where $\theta$ is the angle between $\vec{x}$ and the photon momentum $\vec{q}$. At large $r$ the leading saddle point approximation near $\theta=0$ gives \begin{equation} \label{eq:mode} A_{{\bar z}} = - \frac{i e\hat{{\epsilon}}_{{\bar z}}^+}{8\pi^2} \int\limits^\infty_0 d\omega_q ( a^{out}_- (\omega_q \hat{x}) e^{- i \omega_q u} - a^{out}_+ (\omega_q \hat{x})^\dagger e^{ i \omega_q u} ). \end{equation} Here, $\hat{x}$ is parameterized by $(z,\bar{z})$ \begin{equation} \hat{x}\equiv \frac{\vec{x}}{r} = \frac{1}{1+z{\bar z}} (z+{\bar z}, i ({\bar z}-z), 1-z{\bar z}) \end{equation} and \begin{equation} \hat{{\epsilon}}^{+ }_{{\bar z}} = \frac{\partial_{\bar z} x^\mu }{r} {\epsilon}_{\mu}^{+} = \frac{\sqrt{2}}{1+z{\bar z}}.
\end{equation} One may also check that in the gauge \eqref{gg}, $A_u=\lim\limits_{r\to \infty} \partial_u x^\mu \mathcal{A}^{out}_\mu$ vanishes on ${\mathcal I}^+$ and hence $F_{u{\bar z}}(u,z,{\bar z})= \partial_u A_{{\bar z}}(u,z,{\bar z})$. Using~(\ref{eq:mode}), a similar mode expansion for $A_{z}$, and the commutation relations~(\ref{rrd}) the ${\mathcal I}^+$ commutator is \begin{equation}\label{AAcomm} \big[F_{u{\bar z}}(u,z,{\bar z}), F_{u'w}(u',w,{\bar w})\big] = \frac{ie^2}{2} \delta^2 (z-w)\partial_u\delta (u-u'). \end{equation} Similarly, defining the field $A^-_{\bar z} $ on $\mathcal{I}^-$ by \begin{equation} \label{eq:mode2} A^-_{{\bar z}} = - \frac{i e\hat{{\epsilon}}_{{\bar z}}^+}{8\pi^2} \int\limits^\infty_0 d\omega_q ( a^{in}_- (\omega_q \hat{x}) e^{- i \omega_q v} - a^{in}_+ (\omega_q \hat{x})^\dagger e^{ i \omega_q v} ), \end{equation} gives \begin{equation}\label{Gcomm} \big[G_{v{\bar z}}(v,z,{\bar z}), G_{v'w}(v',w,{\bar w})\big] = \frac{ie^2}{2} \delta^2 (z-w)\partial_v\delta (v-v'), \end{equation} where $G_{vz} =\partial_v A^-_z$. \subsection{Soft photon operators} We would now like to construct the operators corresponding to soft photon insertions on $\mathcal{I}^+$ and $\mathcal{I}^-$. To examine the soft limit of the above mode expansions, we define \begin{equation} \begin{array}{ll} F^{\omega}_{u{\bar z}}&\equiv \int du e^{i\omega u} \partial_u A_{{\bar z}} \\ &= - \frac{e}{4\pi}\hat{{\epsilon}}_{{\bar z}}^+ \int\limits^\infty_0 \omega_q d\omega_q [ a^{out}_- (\omega_q \hat{x}) \delta(\omega-\omega_q) + a^{out}_+ (\omega_q \hat{x})^\dagger \delta(\omega+\omega_q) ]. 
\end{array} \end{equation} For $\omega>0$ only the first delta-function contributes, while for $\omega<0$ only the second: \begin{equation} \begin{array}{lll} F_{u{\bar z}}^\omega&=& - \frac{ e }{4\pi }\hat{ {\epsilon}}_{{\bar z}}^{+}\omega a^{out}_- (\omega {\hat x }) , \\ F_{u{\bar z}}^{-\omega}&=& - \frac{ e }{4\pi } \hat{ {\epsilon}}_{{\bar z}}^{+}\omega a^{out}_+ (\omega {\hat x })^\dagger, \end{array} \end{equation} with $\omega>0$ in both cases. Similarly on $\mathcal{I}^-$ \begin{equation} \begin{array}{lll} G_{v{\bar z}}^\omega&=& - \frac{ e }{4\pi } \hat{ {\epsilon}}_{{\bar z}}^{+} \omega a^{in}_- (\omega {\hat x }), \\ G_{v{\bar z}}^{-\omega}&=& - \frac{ e }{4\pi } \hat{ {\epsilon}}_{{\bar z}}^{+}\omega a^{in}_+ (\omega {\hat x })^\dagger. \end{array} \end{equation} The zero mode of $F_{u{\bar z}}$ is defined as \begin{equation} \label{eq:zF} \begin{array}{ll} F^0_{u{\bar z}}&\equiv \frac12 \lim\limits_{\omega\to 0}(F^\omega_{u{\bar z}}+F^{-\omega}_{u{\bar z}} ) \\ &=-\frac{e}{8\pi} \hat{{\epsilon}}_{{\bar z}}^{+}\lim\limits_{\omega\to 0}[\omega a^{out}_-(\omega\hat{x})+ \omega a^{out}_+(\omega\hat{x})^\dagger],\\ \end{array} \end{equation} while on $\mathcal{I}^-$ \begin{equation} \begin{array}{ll} G^0_{v{\bar z}}&\equiv \frac12 \lim\limits_{\omega\to 0}(G^\omega_{v{\bar z}}+G^{-\omega}_{v{\bar z}} )\\ &=-\frac{e }{8\pi}\hat{{\epsilon}}_{{\bar z}}^{+} \lim\limits_{\omega\to 0}[\omega a^{in}_-(\omega\hat{x})+ \omega a^{in}_+(\omega\hat{x})^\dagger]. \end{array} \end{equation} As in~\cite{klps}, it is useful to define operators which create subleading soft photons, insertions of which automatically have the soft pole projected out. 
These are given on ${\mathcal I}^+$ by \begin{equation} \begin{array}{ll} F^{(1)}_{u{\bar z}} &\equiv \int du~u \partial_u A_{{\bar z}} \\ &= - \lim\limits_{\omega \to 0}\frac{i}{2}(\partial_\omega F^\omega_{u{\bar z}}+\partial_{-\omega} F^{-\omega}_{u{\bar z}})\\ &=\frac{ie}{8\pi} \hat{{\epsilon}}_{{\bar z}}^{+} \lim\limits_{\omega\rightarrow0}(1+\omega\partial_\omega)[ a^{out}_-(\omega\hat{x})-a^{out}_+(\omega\hat{x})^\dagger], \end{array} \end{equation} while at $\mathcal{I}^-$ \begin{equation} \begin{array}{ll} G^{(1)}_{v{\bar z}} &\equiv \int dv~v \partial_v A^-_{{\bar z}} \\ &= - \lim\limits_{\omega \to 0}\frac{i}{2}(\partial_\omega G^\omega_{v{\bar z}}+\partial_{-\omega} G^{-\omega}_{v{\bar z}})\\ &=\frac{ie}{8\pi} \hat{{\epsilon}}_{{\bar z}}^{+} \lim\limits_{\omega\rightarrow0}(1+\omega\partial_\omega)[ a^{in}_-(\omega\hat{x})-a^{in}_+(\omega\hat{x})^\dagger]. \end{array} \end{equation} \section{Soft theorem $\to$ symmetry} \label{sec:soft} In this section we rewrite the subleading soft theorem as an asymptotic symmetry acting on in- and out-states. Let us denote a state with $n$ massless hard particles of energies $E_k$, charges $eQ_k$ and momenta \begin{equation} \label{mom} p_k^\mu = \frac{E_k}{1 + z_k {\bar z_k}} \left( 1 + z_k {\bar z_k} , z_k + {\bar z_k} , i \left( \bar{z}_k - z_k\right), 1 - z_k {\bar z_k} \right), \end{equation} by $|z_1,...\rangle$, and hard ${\cal S}$-matrix elements by $\langle z_{n+1},...| {\cal S} |z_1,...\rangle$. The Low-Burnett-Kroll-Goldberger-Gell-Mann soft theorem~\cite{low54,low,bk,ggm, ca, ad,zvi} then states that if we add to the out-state a positive helicity photon with energy $\omega \to 0$, the first two terms in the soft expansion are \begin{equation} \langle z_{n+1},...| a^{out}_-(\vec q) {\cal S} |z_1,...\rangle = (J^{(0)-} + J^{(1)-} ) \langle z_{n+1},...| {\cal S} |z_1,...\rangle + {\cal O}(\omega).
\end{equation} Here \begin{equation} \label{zz} J^{(0)-} = e\sum\limits_k Q_k\frac{p_k\cdot {\epsilon}^-}{p_k\cdot q}\sim {\cal O}(\omega^{-1}),\;\;\;J^{(1)-}=-ie\sum\limits_k Q_k \frac{ q_\mu{\epsilon}_\nu^-J_k^{\mu\nu}} { p_k\cdot q}\sim {\cal O}(\omega^{0}), \end{equation} with $J_{k\mu\nu} $ the total angular momentum operator of the $k^{th}$ particle. In \cite{hmps} it was shown that the leading $J^{(0)}$ term implies a symmetry under large gauge transformations which approach an arbitrary angle dependent gauge transformation at null infinity. Here we wish to understand the subleading $J^{(1)}$ term. For this purpose it is convenient to eliminate the $J^{(0)-}$ contribution using the projection operator $(1+\omega \partial_\omega)$ \begin{equation} \lim_{\omega\to 0} (1+\omega\partial_\omega)\langle z_{n+1},...| a^{out}_-(\vec q) {\cal S} |z_1,...\rangle = J^{(1)-} \langle z_{n+1},...| {\cal S} |z_1,...\rangle . \end{equation} From~(\ref{eq:zF}) one then has \begin{equation}\label{ss} \begin{array}{ll} e\hat{{\epsilon}}^+_{{\bar z}}J^{(1)-} \langle z_{n+1},...| {\cal S} |z_1,...\rangle&= e\hat{{\epsilon}}^+_{{\bar z}}\lim\limits_{\omega\to 0} (1+\omega\partial_\omega)\langle z_{n+1},...| a^{out}_-(\vec q) {\cal S} |z_1,...\rangle \\ &=-8\pi i \langle z_{n+1},...| F^{(1)}_{u{\bar z}} {\cal S} |z_1,...\rangle . \end{array} \end{equation} For the special case of a scalar field with $J_{k\mu\nu}=-i\left(p_{k\mu}{\partial \over \partial p_k^\nu}- p_{k\nu}{\partial \over \partial p_k^\mu}\right) $, rewriting ($p_k^\mu, q^\mu$) in terms of $(E_k, z_k, \bar{z}_k)$ in \eqref{zz} gives for the right hand side of \eqref{ss} \begin{equation} J^{(1)-}= -e\sum_k \frac{ Q_k}{\sqrt{2} ( {\bar z}_k - \bar{z} ) } \left[(1+z{\bar z}_k)\partial_{E_k}+ E_k^{-1} (z-z_k)(1+z_k\bar{z}_k) \partial_{z_k} \right]. \end{equation} This is nonlocal on the conformal sphere.
However acting with two covariant derivatives gives the local expression \begin{equation} D_z^2 (\hat{{\epsilon}}_{\bar z}^+J^{(1)-}) = 2\pi e\sum_k Q_k \left( D_z \delta^2(z-z_k)\partial_{E_k} +E_k^{-1} \delta^2 (z-z_k) \partial_{z_k}\right). \end{equation} Acting with $D_z^2$ on both sides of the soft theorem and integrating the result against an arbitrary vector field $Y^z$ gives\footnote{\label{bound}Various conditions at the boundaries of ${\mathcal I}$ may lead one to impose constraints such as $D_{\bar z} D_z^2Y^z=0$.} \be\begin{array}{c} \int d^2z \;D_z^2 Y^ze\hat{{\epsilon}}^+_{{\bar z}}\lim\limits_{\omega\to 0} (1+\omega\partial_\omega)\langle z_{n+1},...| a^{out}_-(\vec q) {\cal S} |z_1,...\rangle~~~~~~~~~~~~~~~~~~~~~~~~\\ ~~~~~~~~~~~~~~~~~~~~~~= -2\pi e^2 \sum\limits_k Q_k\left( D_zY^z (z_k)\partial_{E_k}-E_k^{-1} Y^{z}(z_k)\partial_{z_k}\right) \langle z_{n+1},...| {\cal S} |z_1,...\rangle. \end{array}\ee For spinning fields we need to replace $Y^z\partial_{z}$ by the Lie derivative ${\cal L}_Y$. For a hermitian action we should include $\hat {\epsilon}_z^-$ and $Y^{\bar z}$ but we suppress this for notational brevity. Similarly for the insertion of an incoming soft photon \be\begin{array}{c} -\int d^2z \;D_z^2 Y^{z}e\hat{{\epsilon}}^+_{{\bar z}}\lim\limits_{\omega\to 0} (1+\omega\partial_\omega)\langle z_{n+1},...| {\cal S} a^{in}_+(\vec q)^\dagger |z_1,...\rangle~~~~~~~~~~~~~~~~~~~~~~~~\\ ~~~~~~~~~~~~~~~~~~~~~~= -2\pi e^2 \sum\limits_k Q_k\left( D_zY^{z} (z_k)\partial_{E_k}-E_k^{-1} Y^{z}(z_k)\partial_{z_k}\right) \langle z_{n+1},...| {\cal S} |z_1,...\rangle. \end{array}\ee Let us define soft photon operators \begin{equation} {\cal Q}^+_S = -\frac{2}{e^2} \int d^2z du\; u\partial_uA_{{\bar z}} D^2_z Y^z, \end{equation} \begin{equation} {\cal Q}_S^- = \frac{2}{e^2} \int d^2z dv\; v\partial_vA^-_{{\bar z}} D^2_z Y^{z}. 
\end{equation} Hard particle symmetry operators ${\cal Q}^\pm_H$ are defined by their action \begin{equation} \label{eq:qh} \begin{array}{ll} \langle E,z| {\cal Q}^+_H&= -i Q \left( D_zY^z \partial_{E}-E^{-1} Y^{z}\partial_{z}\right)\langle E,z|,\\ \end{array} \end{equation} \begin{equation} \label{eq:qha} \begin{array}{ll} {\cal Q}^-_H| E,z\rangle &= i Q \left( D_zY^{z} \partial_{E}-E^{-1} Y^{z}\partial_{z}\right)| E,z\rangle.\\ \end{array} \end{equation} Finally we write \begin{equation} {\cal Q}^\pm={\cal Q}^\pm_S+{\cal Q}^\pm_H. \end{equation} Then the subleading soft theorem for massless QED takes the form \begin{equation} \langle z_{n+1},...| {\cal Q}^+\mathcal{S} - \mathcal{S} {\cal Q}^- |z_1,...\rangle =0. \end{equation} This expresses the subleading term in Low's theorem as an infinitesimal symmetry of the massless QED ${\cal S}$-matrix. \section{Charges} \label{sec:Qmax} In this section we express the operators ${\cal Q}^\pm$, for the case of scalar charged fields, as integrals over local fields on ${\mathcal I}^\pm$. The fact that this is possible is perhaps surprising as the factor of $E^{-1}$ in \eqref{eq:qh} suggests time nonlocality. A massless scalar field has an expansion near ${\mathcal I}^+$ \begin{equation} \Phi(u,r,z,{\bar z}) = \frac{\phi(u,z,{\bar z})}{r} + \sum_{n=0}^\infty\frac{\phi^{n}(u,z,{\bar z})}{r^{n+2}}. \end{equation} The commutation relation for the boundary field at ${\mathcal I}^+$ is \begin{equation} \label{cc} [\phi (u,z,{\bar z}), \bar{\phi}(u',w,{\bar w})] = -\frac{i\gamma^{z{\bar z}}}{4} \Theta (u-u') \delta^2(z-w), \end{equation} where $\Theta(x)$ is the sign function. The boundary charge current is \begin{equation} J_\mu^M = i Q \lim_{ r \to \infty} r^2 (\bar{\Phi} \partial_\mu \Phi - \Phi \partial_\mu \bar{\Phi}) = iQ(\bar{\phi} \partial_\mu \phi - \phi \partial_\mu \bar{\phi}). 
\end{equation} Expressing $ {\cal Q}^+_H$ in terms of current operators gives \begin{equation} {\cal Q}^+_H = \int_{{\mathcal I}^+}d^2zdu( u D_zY_{\bar z} J^M_u +Y_{\bar z} J^M_z). \end{equation} Using \eqref{cc} as well as \begin{equation} \frac{i}{\pi}\int \frac{e^{-iEu}}{E+i\epsilon^+} dE = 1+\Theta (u), \end{equation} one finds the desired action on the Fourier transform $ \phi_E = \int du\; e^{iEu} \phi $ of $\phi$ \begin{equation} \label{cc2} [\mathcal{Q}^+_H, {\phi}_E(z,{\bar z})] = iQ \left( D_zY^{z} \partial_{E}-E^{-1} Y^{z}\partial_{z}\right) {\phi}_E(z,{\bar z}). \end{equation} Similarly on $\mathcal{I}^-$ \begin{equation} {\cal Q}^-_H = -\int_{\mathcal{I}^-}d^2zdv(v D_zY_{\bar z} J^M_v +Y_{\bar z} J^M_z) \end{equation} generates the hard action \eqref{eq:qha} on incoming massless scalars. It is likely possible to generalize the construction to spinning fields but we have not worked out the details. Using the constraint equations~(\ref{eq:maxwell}), one can eliminate the matter charge currents and express the combined hard and soft charges as a boundary term. On ${\mathcal I}^+$ \begin{equation} {\cal Q}^+=\lim\limits_{r\rightarrow\infty}\frac{1}{e^2}\int_{{\mathcal I}^+}dud^2z\partial_u\big(uD_zY^z(r^2\mathcal{F}_{ur}\gamma_{z{\bar z}}+\mathcal{F}_{z{\bar z}})+2r^2Y_{\bar z}\mathcal{F}_{zr}\big). \end{equation} For the field configurations that revert to vacuum at $\mathcal{I}^+_+$ this reduces to the $S^2$ integral \begin{equation} {\cal Q}^+=-\lim\limits_{r\rightarrow\infty}\frac{1}{e^2}\int_{{\mathcal I}^+_-}d^2z\big(uD_zY^z(r^2\mathcal{F}_{ur}\gamma_{z{\bar z}}+\mathcal{F}_{z{\bar z}})+2r^2Y_{\bar z}\mathcal{F}_{zr}\big).\end{equation} Similarly on ${\mathcal I}^-$ \begin{equation} {\cal Q}^-=\lim\limits_{r\rightarrow\infty}\frac{1}{e^2}\int_{{\mathcal I}^-_+}d^2z\big(vD_zY^z(r^2\mathcal{F}^-_{vr}\gamma_{z{\bar z}}-\mathcal{F}^-_{z{\bar z}})+2r^2Y_{\bar z}\mathcal{F}^-_{zr}\big). 
\end{equation} It is interesting to compare these to the expressions for the electric and magnetic charges $Q$ and $\tilde Q$ and the dipole moments $\vec{\wp}$ and $\vec{\mu}$: \begin{equation} e^2Q+2\pi i\tilde Q = \lim\limits_{r\rightarrow\infty}\int d^2z\; ( r^2\mathcal{F}_{ru}\gamma_{z{\bar z}}+\mathcal{F}_{z{\bar z}})\\ \end{equation} \begin{equation} -e^2\vec{\wp} + 2\pi i\vec{\mu} = \lim\limits_{r\rightarrow\infty}3\int d^2z\; r^2 \mathcal{F}_{zr}\partial_{{\bar z}} \hat{x}. \end{equation} We see that if we take $Y$ to be a global $SL(2,C)$ rotation and use the boundary condition $\mathcal{F}_{z{\bar z}}=0$ from~\cite{hmps},\footnote{For such rotations, $Y$ is real and hence entails nonzero $Y^{\bar z}$ which we have been suppressing. We note that the particular restriction on $Y$ mentioned in footnote~\ref{bound} would eliminate these rotations.} ${\cal Q}^\pm$ are nothing but the total magnetic dipole charge. This is `conserved' in the sense that, given that the system begins and ends in the vacuum, the total incoming dipole charge must equal the total outgoing dipole charge. More generally, ${\cal Q}^\pm$ are local generalizations of dipole charge in the same sense that supertranslations (superrotations) are local generalizations of global translations (rotations). Hence the conservation law that implies Low's subleading soft theorem may be heuristically thought of as the equality of total incoming and total outgoing dipole charge flux at every fixed angle. \section*{Acknowledgements} We are grateful to F. Cachazo, T. He, P. Mitra, and M. Schwartz for useful conversations. This work was supported in part by DOE grant DE-FG02-91ER40654 and the Fundamental Laws Initiative at Harvard.
\section{INTRODUCTION} \label{intro} One intriguing puzzle in physics is the regular pattern of the three fermion and quark families. The existence of families gives rise to many of the free parameters of the Standard Model, in particular the fermion masses and the elements of the Cabibbo-Kobayashi-Maskawa (CKM) matrix \cite{KM} that describe the mixing between the quark generations. The determination of all of these parameters is required to fully define the Standard Model and may also reveal an underlying structure that will point to new physics. In the Standard Model of three generations the CKM matrix is defined by three real parameters and one complex phase. It relates the eigenstates of the strong and weak interactions and can be written \begin{equation} V =\pmatrix{V_{ud}&V_{us}&V_{ub}\cr V_{cd}&V_{cs}&V_{cb}\cr V_{td}&V_{ts}&V_{tb}\cr} \end{equation} The matrix $V$ can be expressed approximately as \begin{equation} V \simeq \pmatrix{1-\lambda^2/2&\lambda&A\lambda^3(\rho-i\eta)\cr -\lambda&1-\lambda^2/2&A\lambda^2\cr A\lambda^3(1-\rho-i\eta)&-A\lambda^2&1\cr}+ O(\lambda^4) \end{equation} This empirical parameterization, suggested by Wolfenstein\cite{Wolfie}, is correct up to terms of order $\lambda^4$ with $\lambda = \sin{\theta_{Cabibbo}}\approx 0.22$. In the case of two generations, the matrix $V$ is a simple rotation matrix where $\theta_{Cabibbo}$ is the angle of rotation. For three generations, $V$ may contain complex elements and allows for CP violation if the parameter $\eta$ is non-zero. Although readily accommodated in the Standard Model, CP violation remains one of the least well understood phenomena in physics. So far it has only been observed in the decays of kaons. While the results from the kaon sector are consistent with the Standard Model, the complications introduced by strong interaction effects make it nearly impossible to ascertain whether the complex CKM phase is the sole source for the observed asymmetries.
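The claim that the Wolfenstein form above is unitary up to corrections of order $\lambda^4$ is easy to verify numerically; a minimal sketch in Python (the values of $A$, $\rho$ and $\eta$ below are illustrative round numbers, not measured inputs):

```python
import numpy as np

lam, A, rho, eta = 0.22, 0.8, 0.2, 0.35   # illustrative parameter values

# Wolfenstein parameterization of the CKM matrix, truncated at lambda^3
V = np.array([
    [1 - lam**2/2,                lam,           A*lam**3*(rho - 1j*eta)],
    [-lam,                        1 - lam**2/2,  A*lam**2],
    [A*lam**3*(1 - rho - 1j*eta), -A*lam**2,     1],
])

# V V^dagger should equal the identity up to terms of order lambda^4
dev = np.abs(V @ V.conj().T - np.eye(3)).max()
```

With $\lambda=0.22$ the largest deviation from the identity is of order $\lambda^4\approx 2\times 10^{-3}$, as advertised.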
The only other observational constraint on CP violation comes from cosmology. As was first noted by Sakharov, there is an important connection between the observed baryon asymmetry in the universe and CP violation in fundamental processes \cite{Sakharov}. He postulated that CP violation in fundamental processes in the early universe, C and baryon number violation, and the absence of thermal equilibrium gave rise to the observed baryon asymmetry. However, recent work suggests that the Standard Model and the complex phase in the CKM matrix cannot provide sufficient CP violation to account for the magnitude of the baryon asymmetry, so that other sources of CP violation must be present\cite{Dine}. An experimental effort to determine the magnitude of the CKM matrix elements and to measure the CP violating phase is therefore of fundamental importance. $B$ meson decay provides an ideal opportunity to pursue such a program. Since the dominant $B$ meson decay mechanisms involve generation changing transitions which are suppressed by the small CKM matrix element $V_{cb}$, rare processes such as $b \to s$, $b \to u$, and $b \to d$ transitions are expected to be observable. Several of these $B$ decay mechanisms are shown in Figure \ref{Fdiag}. Measurements of $B$ meson decay rates are used to determine the couplings between quarks of the third generation and lighter quarks, the CKM elements $|V_{cb}|$, $|V_{ub}|$, $|V_{ts}|$, and $|V_{td}|$. In addition, the Standard Model predicts large CP asymmetries for the $B$ system. Experiments with $B$ mesons may lead to the first precise determination of the complex CKM phase. In the framework of the Standard Model the CKM matrix must be unitary, {\it i.e.} $VV^{\dagger}\; = \; 1$. This gives rise to the following relationships between the matrix elements.
$$ V_{ud}^{*} V_{us} + V_{cd}^* V_{cs} + V_{td}^{*} V_{ts} = 0$$ $$ V_{us}^{*} V_{ub} + V_{cs}^* V_{cb} + V_{ts}^{*} V_{tb} = 0$$ $$ V_{ub}^{*} V_{ud} + V_{cb}^* V_{cd} + V_{tb}^{*} V_{td} = 0$$ Chau, Keung \cite{ChauK} and Bjorken have noted that the last equation can be visualized as a triangle in the complex plane with vertices at $(0,0)$, $(1,0)$ and $(\rho, \eta)$. Measurements of the magnitudes of the CKM elements determine the lengths of the sides of the triangle, while measurements of CP asymmetries determine the interior angles of the triangle. In recent years, there have been major advances in our understanding of $B$ meson decay. However, data samples at least one order of magnitude larger than those available at present are required to observe CP violating asymmetries in the $B$ meson system and to provide fundamental consistency checks of the Standard Model. This is the justification for the construction of high luminosity $e^+ e^-$ storage rings in the US at SLAC (PEP~II/BABAR)\cite{BABAR}, at Cornell (CESR PHASE~III, CLEO~III)\cite{PHASEIII}, and in Japan (TRISTAN-II/BELLE)\cite{BELLE}, as well as a dedicated fixed target experiment at the HERA ring at DESY\cite{DESY}, and proposals for hadron collider experiments at Fermilab\cite{CDFB} and at CERN\cite{LHC}. In this paper we will review the current status of experimental $B$ physics and then briefly discuss CP violation. Most of our present knowledge on $B$ mesons comes from experiments performed on the $\Upsilon (4S)$ resonance at a center of mass energy of 10.58 GeV and in recent times from the large data sample of about 3 fb$^{-1}$ that has been collected by the CLEO II collaboration at the Cornell Electron Storage Ring (CESR). Older results from the ARGUS experiment, which operated at the DORIS storage ring, and from the CLEO~1.5 experiment, which preceded the CLEO~II detector, are also included.
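The correspondence between the triangle's geometry and measurable quantities can be made concrete: with vertices $(0,0)$, $(1,0)$ and $(\rho,\eta)$, the interior angles follow directly from the apex position. A minimal sketch (the apex $(\rho,\eta)$ is illustrative):

```python
import cmath
import math

rho, eta = 0.2, 0.35            # illustrative apex; eta != 0 means CP violation
z = complex(rho, eta)           # apex of the unitarity triangle

gamma = cmath.phase(z)                       # interior angle at (0, 0)
beta  = cmath.phase(complex(1 - rho, eta))   # interior angle at (1, 0)
alpha = abs(cmath.phase(-z/(1 - z)))         # interior angle at (rho, eta)

# Sanity check: the interior angles of any triangle sum to pi
total = alpha + beta + gamma
```

All three angles are non-zero exactly when $\eta\neq 0$, i.e. when the triangle does not collapse onto the real axis.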
We note that the LEP experiments and the CDF experiment at the Tevatron Collider have recently provided precise measurements of $B$ meson lifetimes. The LEP experiments have also directly observed the time dependence of $B_{d}-\bar{B_d}$ mixing and set limits on $B_s-\bar{B_s}$ mixing. They have also observed exclusive hadronic decays of both $B$ and $B_s$ mesons. \begin{figure}[htb] \begin{center} \unitlength 1.0in \begin{picture}(5.,5.)(0,0) \put(1.01,0.1){\psfig{width=3.5in,height=5.0in,% file=bdecays.eps}} \end{picture} \bigskip \vskip 3 mm \caption{$B$ meson decay mechanisms: (a) external spectator diagram and (b) color suppressed spectator diagram (c) $b\to u$ spectator diagram (d) $b\to s\gamma$ electromagnetic penguin (e) W-exchange diagram (f) W-annihilation diagram and (g),(h) box diagrams for $B-\bar{B}$ mixing.} \label{Fdiag} \end{center} \end{figure} \subsection{Hadronic and Semileptonic $B$ Decays} $B$ meson decays occur primarily through the CKM favored $b\to c$ transition. In such decays the dominant weak decay diagram is the spectator diagram, shown in Fig.~\ref{Fdiag}(a), where the virtual $W^-$ materializes into either a lepton and anti-neutrino or a $\bar u d$ or $\bar c s$ quark pair. In hadronic decays, the quark pair becomes one of the final state hadrons while the $c$ quark pairs with the spectator anti-quark to form the other hadron; in semileptonic decays, the $c$ quark and the spectator anti-quark hadronize independently of the leptonic current. The extraction of Standard Model parameters from experimental results is complicated by the fact that only $B$ mesons can be studied and not free $b$ quarks. The light quarks and the gluons surrounding the $b$ quark in the $B$ meson lead to significant corrections that have to be taken into account. Since leptons do not interact strongly, semileptonic $B$ meson decays are less affected by these QCD corrections and the theoretical calculations are believed to be more reliable.
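To set the scale of such determinations: at the parton level the semileptonic width is $\Gamma_{\rm sl}\simeq G_F^2 m_b^5 |V_{cb}|^2 f(m_c^2/m_b^2)/(192\pi^3)$, so a rough value of $|V_{cb}|$ follows from the semileptonic branching fraction and the $B$ lifetime. A minimal sketch (all numerical inputs are illustrative round numbers, not from the text, and perturbative QCD corrections are ignored):

```python
import math

# Illustrative inputs (not taken from the text above)
G_F   = 1.166e-5        # Fermi constant, GeV^-2
HBAR  = 6.582e-25       # GeV * s
m_b, m_c = 4.9, 1.5     # quark masses, GeV
BR_SL = 0.105           # semileptonic branching fraction B -> X l nu
TAU_B = 1.6e-12         # B lifetime, s

# Phase-space factor f(x) = 1 - 8x + 8x^3 - x^4 - 12 x^2 ln x, x = (m_c/m_b)^2
x = (m_c/m_b)**2
f = 1 - 8*x + 8*x**3 - x**4 - 12*x**2*math.log(x)

gamma_sl = BR_SL*HBAR/TAU_B                     # semileptonic width in GeV
Vcb = math.sqrt(gamma_sl*192*math.pi**3/(G_F**2*m_b**5*f))
```

The result lands near $|V_{cb}|\sim 0.04$, the right order of magnitude for the generation-changing suppression discussed above.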
We discuss several measurements of both inclusive and exclusive semileptonic decay rates that have been used to determine the strength of the $b \to c$ coupling, {\it i.e.} $|V_{cb}|$. In hadronic decays, the spectator diagram is modified by hard gluon exchanges between the initial and final quark lines. This leads to the ``color suppressed'' diagram shown in Fig.~\ref{Fdiag}(b), which has a different set of quark pairings. Observation of $B \to \psi X_s$ decays, where $X_s$ is a strange meson, gives experimental evidence for the presence of this diagram. Further information on the size of the color suppressed contribution can be obtained from $\bar{B^0} \to D^0$ (or $D^{*0} ) X^0$ transitions, where $X^0$ is a neutral meson. In $B^-$ decays, both types of diagrams are present and can interfere. By comparing the rates for $B^-$ and $\bar{B^0}$ decays, the size and the relative sign of the color suppressed amplitude can be determined. It was suggested that, in analogy to semileptonic decays, two-body hadronic decays of $B$ mesons can be expressed as the product of two independent hadronic currents, one describing the formation of a charm meson and the other the hadronization of the $\bar{u} d$ (or $\bar c s$) system from the virtual $W^-$\cite{early}. Qualitatively, for a $B$ decay with a large energy release the $\bar{u} d$ pair, which is produced as a color singlet, travels fast enough to leave the interaction region without influencing the second hadron formed from the $c$ quark and the spectator anti-quark. The assumption that the amplitude can be expressed as the product of two hadronic currents is called ``factorization'' in this paper. It is expected that the simple approximation of the strong interaction effects by the factorization hypothesis will be more reliable in $B$ meson decays than in the equivalent $D$ meson decays due to the larger characteristic energy transfers and the consequent suppression of final state interactions.
We will discuss several tests of the factorization hypothesis based on the comparison of semileptonic and hadronic $B$ meson decays\cite{Bjorken}. \subsection{Rare $B$ Decays} All $B$ meson decays that do not occur through the usual $b\to c$ transition are known as rare $B$ decays. The simplest diagram for a rare $B$ decay is obtained by replacing the $b\to c$ transition by a CKM suppressed $b\to u$ transition. These decays probe the small CKM matrix element $V_{ub}$, the magnitude of which sets bounds on the combination $\rho^2 + \eta^2$ in the Wolfenstein parameterization of the CKM matrix. So far, the magnitude of $V_{ub}$ has only been determined from measurements of inclusive semileptonic $B$ decays\cite{btoulnu}. We will discuss the status of the search for rare hadronic $B$ decays, and in particular the possibility of measuring the decay $B^0\to\pi^+\pi^-$, which is important for the study of CP violation in $B$ decays. Since $b \to u$ transitions are suppressed by the small value of $|V_{ub}|$, it is expected that additional diagrams will make observable contributions to some hadronic decay modes. The most significant of these diagrams is the one-loop flavor-changing neutral current diagram known as the ``penguin'' diagram (Fig. \ref{Fdiag} d). The CKM favored part of this diagram, corresponding to a $b\to s$ transition, is expected to dominate the amplitude of rare decays to final states with one or three $s$-quarks. There is also a CKM suppressed $b\to d$ amplitude which may not be negligible in decays to final states with no $c$ or $s$ quarks. It should be noted that the loop diagram is much more significant in $B$ decays than in $D$ decays because the $b\to s$ loop is sensitive to the large $t$ quark couplings $V_{tb}$ and $V_{ts}$, whereas contributions to the equivalent $c\to u$ loop are suppressed either by the small couplings $V_{cb}$ and $V_{ub}$, or by the small $s$ and $d$ quark masses.
The observation of the decay $B\to K^*(892) \gamma$, reported in 1993 by the CLEO~II experiment, is the first direct evidence for the penguin diagram \cite{PRLbsg}. This decay is described by the electromagnetic transition $b\to s\gamma$, which is a $b\to s$ penguin loop accompanied by the radiation of a photon from either the loop, or the initial or final state quarks. This important new result will be discussed in some detail. We will also comment on the recent discussion about the sensitivity of the $b\to s\gamma$ process to non-standard model contributions within the loop\cite{Hewett}. In many extensions of the Standard Model an additional contribution to $b\to s\gamma$ is expected to come from a charged Higgs. We will discuss the extent to which recent data from the CLEO~II experiment allow bounds to be set on such non-standard model contributions. To date, no evidence has been found that either annihilation or W-exchange processes are present in $B$ meson decay. The annihilation diagram (shown in Fig. \ref{Fdiag} (f)) would result in purely leptonic decays such as $B^+ \to \tau^+ \bar{\nu}$. These modes have been searched for and provide constraints on the $B$ meson decay constant, $f_B$. In 1955, Gell-Mann and Pais predicted oscillations between neutral mesons and their antiparticles, which were later observed in the neutral kaon system \cite{gellmann}. In the Standard Model such particle-antiparticle oscillations are described by the box diagrams shown in Fig. \ref{Fdiag} (g),(h). Historically, it was a great surprise in 1987 when ARGUS observed $B^0\bar{B}^0$ oscillations at a rate nearly two orders of magnitude larger than the theoretical expectation at that time \cite{argusdis}. This result was the first indication that the top quark was more massive than the 30-50 GeV range anticipated by the theorists and indicated by the UA1 experiment at the time.
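The size of the mixing effect ARGUS observed is conveniently summarized by the time-integrated mixing probability, which under the standard assumptions (negligible width difference, no CP violation in mixing) is $\chi = x^2/[2(1+x^2)]$ with $x=\Delta m/\Gamma$. A minimal sketch (the value $x_d\approx 0.7$ is illustrative of the large $B^0$ mixing reported):

```python
def chi(x):
    """Time-integrated mixing probability chi = x^2 / (2 (1 + x^2)),
    with mixing parameter x = Delta m / Gamma (no CP violation in mixing)."""
    return x**2/(2.0*(1.0 + x**2))

# Limits: x -> 0 gives no mixing; x -> infinity gives maximal chi = 1/2.
chi_d = chi(0.7)   # roughly the B0 value implied by the ARGUS observation
```

The resulting $\chi_d\approx 0.16$ illustrates why a slow-oscillating system like the $B^0$ could nonetheless show an easily visible mixed fraction, consistent with the large mixing ARGUS observed.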
The ARGUS mixing result also demonstrates how the study of $B$ mesons has provided insight into the physics of higher mass scales. To summarize, the detailed study of $B$ mesons is driven by the need to determine the elements of the CKM matrix which are fundamental parameters of the Standard Model. A complete set of measurements will overconstrain the Standard Model and will check its internal consistency. In light of recent developments in cosmology, it is conceivable that this will lead us to new physics outside of the framework of the Standard Model. Efforts are now underway to measure additional rare $B$ decays and to observe CP violation in the $B$ sector. As noted above, every major high energy physics laboratory has embarked on this program. \section{THE EXPERIMENTAL STUDY OF B DECAY} Experimental $b$ physics began in 1977 when the CFS collaboration at Fermilab observed a narrow resonance at an energy of about 9.5 GeV in the reaction p~+~nucleus $\to$ $\mu^+ \mu^-$~+~~X\cite{discovery}. This resonance was named $\Upsilon (9460)$ and was subsequently identified as the $1\,^3S_1$ state of the $b\bar{b}$ or bottomonium system. A second resonance at a mass near 10.0 GeV was later isolated in the Fermilab data and identified as a radial excitation of the $b \bar{b}$ state. For almost two decades thereafter, this was the last significant contribution to $B$ physics from a hadron machine, as $e^+e^-$ storage rings took over. Within a year of its discovery, the $\Upsilon$ resonance was confirmed by experiments at DORIS \cite{doris1,doris2,doris3} and at CESR \cite{cesr1,cesr2}. Most of our current knowledge of $B$ mesons is based on analyses of data collected at these two machines. In recent years, advances in detector technology, in particular the introduction of high resolution silicon vertex detectors, have allowed experiments at high energy colliders (i.e. LEP, SLC and the TEVATRON) to efficiently tag $b$ quarks.
This has led to precise lifetime measurements and to the discovery of new $b$-flavored hadrons. \subsection{$\Upsilon (4S)$ Experiments} \label{y4sexp} The total $e^+e^-$ annihilation cross section as a function of center of mass energy in the region of the $\Upsilon$ resonances is shown in Fig. \ref{upsilon}. The width of the $\Upsilon (4S)$ state is $23.8 \pm 2.2$ MeV \cite{PDG} which is significantly larger than the width of the three lighter resonances. OZI (Okubo-Zweig-Iizuka) suppression of hadronic decays, which is responsible for the narrow width of the $\Upsilon(1S)$, $\Upsilon(2S)$ and $\Upsilon(3S)$ states, is no longer operative. This is the first indication that the $\Upsilon (4S)$ resonance lies above the threshold for $B \bar{B}$ production. Further evidence for $B$ meson production came from the observation of a dramatic increase in the lepton yield at the $\Upsilon (4S)$ resonance. The momentum spectrum of the leptons was found to be consistent with production of a heavy quark. \begin{figure}[htb] \begin{center} \unitlength 1.0in \begin{picture}(4.,2.5)(0,0) \put(-0.31,-0.8){\psfig{width=4.5in,height=3.0in,%
bbllx=0pt,bblly=0pt,bburx=567pt,bbury=567pt,file=upsilon.ps}} \end{picture} \vskip 3 mm \caption{$e^+e^-$ cross-section measured by CLEO and CUSB showing the masses of the $\Upsilon$ resonances (broken horizontal scale).} \label{upsilon} \end{center} \end{figure} The first fully reconstructed $B$ mesons were reported in 1983 by the CLEO~I collaboration \cite{FirstB}. Since then the CLEO~1.5 experiment \cite{anotherB,SecondB} has collected a sample with an integrated luminosity of $212 ~\rm{pb}^{-1}$ \cite{only4s}, the ARGUS experiment \cite{ThirdB,FourthB,FifthB,ARGUSDDs} has collected $246 ~\rm{pb}^{-1}$, and the CLEO~II experiment has collected about $3 ~\rm{fb}^{-1}$, of which between $0.9$ and $2.0 ~\rm{fb}^{-1}$ have been used to obtain the results described in this review\cite{SixthB,PRLkpi,cleodds,PRLbsg}.
All of these experiments at $e^+ e^-$ colliders record data on the $\Upsilon (4S)$ resonance, which is only 20 MeV above $B\bar{B}$ threshold. The observed events originate from the decay of either the $B$ or the $\bar{B}$ meson as there is not sufficient energy to produce additional particles. The $B$ mesons are also produced nearly at rest. The average momentum is about $330$~MeV so the average decay length is approximately 30~$\mu$m. The $\Upsilon (4S)$ resonance decays only to $B^0\bar{B}^0$ or to $B^+B^-$ pairs, while heavier states such as $B_s$ or $B_c$ mesons and $b$-flavored baryons are not accessible. For quantitative studies of $B$ decays the initial composition of the data sample must be known. The ratio of the production of charged and neutral decays of the $\Upsilon (4S)$ is therefore an important parameter for these experiments. The ratio is denoted $$ \frac{f_+}{f_0}\; = \; \frac{\Upsilon(4S) \to B^+B^-}{\Upsilon(4S) \to B^0\bar{B}^0}. $$ CLEO has measured this ratio and found\cite{dstlnu} $$ \frac{f_+}{f_0}\; = \; 1.04 \pm 0.13 \pm 0.12 \pm 0.10. $$ The last error is due to the uncertainties in the ratio of $B^0$ and $B^+$ lifetimes. This is consistent with equal production of $B^+ B^-$ and $B^0 \bar{B^0}$ pairs and unless explicitly stated otherwise we will assume that $f_+/f_0\, = \, 1$. Older results which assumed other values of $f_+$ and $f_0$ have been rescaled. $B$ meson pairs from $\Upsilon(4S)$ decays are produced in a state with quantum numbers $J^{PC}\, = \, 1^{--}$. As a result the direction of flight of the $B$ meson will follow a $\sin^2 \theta_B$ distribution, whereas most of the background has a flat distribution in this variable. An important consequence of production with these quantum numbers is that a $B^0\bar{B}^0$ meson pair will evolve coherently until one of the mesons decays.
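The $\approx 30~\mu$m decay length quoted above follows from simple kinematics. A quick numerical check (the lifetime used here is an assumed round number, not a result of this review):

```python
m_B   = 5.279      # GeV, B meson mass
p_B   = 0.330      # GeV, average B momentum at the Upsilon(4S)
tau_B = 1.54e-12   # s, assumed average B lifetime
c     = 3.0e8      # m/s

# decay length = beta*gamma * c * tau, with beta*gamma = p/m
beta_gamma   = p_B / m_B
decay_length = beta_gamma * c * tau_B
print(f"{decay_length * 1e6:.0f} micron")   # ~29 micron
```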
\subsubsection {Continuum Background Suppression} The $\Upsilon (4 S)$ resonance sits on a continuum background consisting of $e^+ e^- \to q\bar{q}$, where $q$ can be any of $u,d,s,c$. The ratio of the resonance to continuum cross section is approximately 1:3. The continuum background is studied by taking a significant amount of data at an energy just below the $\Upsilon (4 S)$ resonance, e.g. CLEO~II records a third of its data at an energy 55~MeV below the resonance. Using this data sample, and Monte Carlo simulations of $q\bar{q}$ jets, cuts have been devised to suppress the continuum background. \begin{figure}[htb] \begin{center} \unitlength 1.0in \begin{picture}(3.,2.8)(0,0) \put(-0.5,0.3){\psfig{width=3.5in,height=2.5in,%
file=sphericity.eps}} \end{picture} \bigskip \caption{Absolute value of the cosine of the angle between the direction of the $B$ candidate and the sphericity axis of the remaining event for (a) continuum data and (b) $B\bar{B}$ Monte Carlo.} \label{sphericity} \end{center} \end{figure} In $\Upsilon (4S)$ production of $B\bar{B}$ pairs, the $B$ mesons are produced almost at rest, and their decay axes are uncorrelated. These events are rather spherical in shape, and can be distinguished from jetlike continuum events using a variety of event shape variables. For the study of inclusive production in $B$ decays a particularly useful variable is $R_2\, = \, H_2/H_0$, where $H_{2}$ and $H_{0}$ are the second and zeroth Fox-Wolfram moments\cite{fw}. This variable is 0 for a perfectly spherical event, and 1 for an event completely collimated around the jet axis. For the study of exclusive $B$ decay modes it is often more useful to compare the axis of the reconstructed $B$ candidate with the axis of the rest of the event. Examples of variables used are the direction of the sphericity axis (see Fig.
\ref{sphericity}) or the thrust axis of the rest of the event with respect to the $B$ candidate, $\theta_S$ or $\theta_T$, and the sum of the momenta transverse to the axis of the $B$ candidate, known as $s_{\perp}$. In some cases, we will discuss the use of these cuts, and their effectiveness for particular analyses, but refer the reader to other references for a more detailed discussion of the shape variables\cite{Artuso},\cite{Roe}. \subsubsection{Selection of $B$ Candidates} \label{B-recon} As an example of the techniques of $B$ reconstruction we will briefly describe the procedure used by the CLEO~II experiment to reconstruct the decay modes $B\to D^{(*)}(n\pi)^-$. The CLEO~II detector is described in detail elsewhere \cite{TRA}. It has a momentum resolution for charged tracks given by $(\delta p/p)^2 = (0.0015p)^2 + (0.005)^2$, and an energy resolution for isolated photons from the CsI barrel calorimeter of $\delta E/E [\%] = 0.35/E^{0.75} + 1.9 - 0.1E$, where $p$ and $E$ are in GeV. Charged tracks are identified as pions or kaons if they have ionization loss information $(dE/dx)$, and/or time-of-flight information (ToF), consistent with the correct particle hypothesis. Photon candidates are selected from showers in the calorimeter barrel with a minimum energy of 30~MeV, which are not matched to charged tracks, and which have a lateral energy distribution consistent with that expected for a photon. Neutral pions are selected from pairs of photons with an invariant mass within $2.5\sigma$ of the known $\pi^0$ mass. Candidate $D^0$ mesons are identified in the decay modes $D^0\to K^-\pi^+$, $D^0\to K^-\pi^+\pi^0$, and $D^0\to K^-\pi^+\pi^+\pi^-$, while $D^+$ mesons are identified in the decay mode $D^+\to K^-\pi^+\pi^+$. Charged $D^*$ candidates are found using the decay $D^{*+}\to D^0\pi^+$, while neutral $D^*$ candidates are found using the decay $D^{*0}\to D^0\pi^0$. 
Other $D$ and $D^*$ decay modes are not used because of poorer signal to background ratios, or because of lower yields\cite{xdfeg}. The reconstructed $D$ masses and $D^*-D^0$ mass differences are required to be within $2.5\sigma$ of the known values. The $D$ meson candidates are combined with one or more additional pions to form a $B$ candidate. Cuts on the topology of the rest of the event are made in order to distinguish $B \bar{B}$ events from continuum background, as discussed in the previous section. The following requirements are imposed: $R_2<0.5$, and $|\cos(\theta_S)|<0.9\,(0.8,\,0.7)$ depending on whether one (two, three) pions are added to the $D$ meson. As is shown in Fig. \ref{sphericity}, the cosine of the sphericity angle, $\theta_S$, is uniformly distributed for signal, but peaks near $\pm 1$ for continuum background. Requiring that $|\cos(\theta_S)|$ be less than 0.7 typically removes 80\% of the continuum background, while retaining 70\% of the $B$ decays. \begin{figure}[htb] \begin{center} \unitlength 1.0in \vskip 12 mm \begin{picture}(3.,3.)(0,0) \put(-.35,-.3){\psfig{width=2.8in,height=2.8in,%
file=bexcl_dnorm.ps}} \end{picture} \bigskip \bigskip \bigskip \bigskip \vskip 15 mm \caption[]{The beam constrained mass distributions from CLEO~II {\protect{\cite{SixthB}}} for: (a) $ B^- \to D^0 \pi^-$ decays. (b) $ B^- \to D^0\rho^-$ decays for $|\cos\Theta_{\rho}|>0.4$. (c) $ \bar{B^0} \to D^+\pi^-$ decays. (d) $ \bar{B}^0 \to D^+ \rho^-$ decays for $|\cos\Theta_{\rho}|>0.4$.} \label{dpi} \end{center} \end{figure} The measured sum of charged and neutral energies, $E_{meas}$, of correctly reconstructed $B$ mesons produced at the $\Upsilon (4S)$, must equal the beam energy, $E_{beam}$, to within the experimental resolution. Depending on the $B$ decay mode, $\sigma_{\Delta E}$, the resolution on the energy difference $\Delta E\; = \; E_{beam} - E_{meas}$, varies between 14 and 46~MeV.
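The Fox-Wolfram ratio $R_2=H_2/H_0$ used in the event-shape cuts above can be sketched in a few lines; this minimal implementation (massless-particle approximation, invented momenta) reproduces the limiting values quoted earlier:

```python
import numpy as np

def r2(momenta):
    """Fox-Wolfram ratio H2/H0 for a list of 3-momenta
    (massless-particle approximation, so H0 cancels in the ratio)."""
    p = np.asarray(momenta, dtype=float)
    mag = np.linalg.norm(p, axis=1)
    # cosine of the opening angle between every pair of particles
    cos = np.clip((p @ p.T) / np.outer(mag, mag), -1.0, 1.0)
    w = np.outer(mag, mag)
    p2 = 0.5 * (3.0 * cos**2 - 1.0)   # Legendre polynomial P_2
    return np.sum(w * p2) / np.sum(w)

# two back-to-back particles (perfectly jetlike): R_2 = 1
print(r2([[0.0, 0.0, 1.0], [0.0, 0.0, -1.0]]))
# many isotropically distributed particles (spherical): R_2 near 0
rng = np.random.default_rng(1)
v = rng.normal(size=(400, 3))
print(r2(v / np.linalg.norm(v, axis=1, keepdims=True)))
```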
Note that this resolution is usually sufficient to distinguish the correct $B$ decay mode from a mode that differs by one pion. For final states with a fast $\rho^-$ the energy resolution depends on the momenta of the final state pions from the $\rho$ meson. This dependence is conveniently parameterized as a function of the angle between the $\pi^-$ direction in the $\rho^-$ rest frame and the $\rho^-$ direction in the lab frame, which we denote as the $\rho$ helicity angle, $\Theta_{\rho}$. When $\cos\Theta_{\rho} = +1$, the error in the energy measurement is dominated by the momentum resolution on the fast $\pi^-$, whereas at $\cos\Theta_{\rho}= -1$ the largest contribution to the error in the energy measurement comes from the calorimeter energy resolution on the fast $\pi^0$. To determine the signal yield and display the data, the beam constrained mass is formed \begin{equation} M_B^2=E_{beam}^2 - \left(\sum_i{\vec{p_i}}\right)^2, \label{EBmass} \end{equation} where $\vec{p_i}$ is the momentum of the $i$-th daughter of the $B$ candidate. The resolution in this variable is determined by the beam energy spread, and is about 2.7~MeV for CLEO~II, and about 4.0~MeV for ARGUS \cite{mbrange}. These resolutions are a factor of ten better than the resolution in invariant mass obtained without the beam energy constraint. The beam constrained mass spectra from the CLEO~II analysis of $B \to D (n\pi)^-$ decays are shown in Fig. \ref{dpi} \cite{SixthB}. For a specific $B$ decay chain, such as $B^- \to D^0 \pi^-$, $D^0 \to K^- \pi^+ \pi^0$, there may be multiple candidate combinations. In the CLEO~II analysis, if there are multiple candidates only the entry with the smallest absolute value of $\Delta E$ is selected for events with $M_{B} > 5.2$~GeV.
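Equation (\ref{EBmass}) is straightforward to evaluate; in the sketch below the daughter momenta are invented for illustration, chosen so that the net momentum is a few hundred MeV:

```python
import math

def beam_constrained_mass(e_beam, daughter_momenta):
    """M_B = sqrt(E_beam^2 - |sum_i p_i|^2); energies and momenta in GeV."""
    px = sum(p[0] for p in daughter_momenta)
    py = sum(p[1] for p in daughter_momenta)
    pz = sum(p[2] for p in daughter_momenta)
    return math.sqrt(e_beam**2 - (px**2 + py**2 + pz**2))

# toy two-daughter candidate, E_beam = 5.290 GeV
print(beam_constrained_mass(5.290, [[0.90, 0.20, 1.50],
                                    [-0.80, -0.20, -1.27]]))  # ~5.28 GeV
```

Because the $B$ momentum is small compared to the beam energy, $M_B$ is dominated by the well-known beam energy rather than by the measured momenta, which is why its resolution is so much better than that of the ordinary invariant mass.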
An alternative method is to select the candidate with the highest total probability as calculated from the sum of all $\chi^2$ contributions from particle identification, kinematical fits and the beam energy constraint\cite{FifthB}. \subsubsection{Background Studies} \label{bkg-studies} In order to extract the number of signal events it is crucial to understand the shape of the background in the $M_B$ distribution. There are two contributions to this background, continuum and other $B \bar{B}$ decays. The fraction of continuum background varies between $58\%$ and $91\%$ depending on the $B$ decay mode\cite{cfrac}. The shape of the continuum background is well understood since it depends primarily on the transverse momentum distributions of the final state particles relative to the jet axis. This has been studied using the off-resonance data sample, and using Monte Carlo techniques. The shape of the $B \bar{B}$ background is more difficult to understand since it is mode dependent. It also has a tendency to peak in the signal region, since the combinatorial background comes mostly from combinations in which the true final state is altered by one low energy particle. A particularly troublesome background occurs when the decay $D^{*0}\to D^0\gamma$ is replaced by the decay $D^{*0}\to D^0\pi^0$. To determine the correct background shape for each $B$ decay mode, CLEO~II has studied the $M_B$ distributions for $\Delta E$ sidebands, and for combinations in which the charged particles have the wrong charges for the expected spectator decay diagram, e.g. $D^+\pi^+$ and $\bar{D^0}\pi^+$. It is found that all of the background distributions can be fitted with a linear background below $M_B = 5.282$~GeV, and a smooth kinematical cutoff at the endpoint, which is chosen to be parabolic. For each $B$ decay mode CLEO~II uses this background function and a Gaussian signal with a fixed width of $2.64$ MeV to determine the yield of signal events.
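A toy version of such a fit model is sketched below. The particular matching of the linear piece to the parabolic cutoff is one possible choice, and the endpoint and parameter values are illustrative; only the fixed 2.64 MeV Gaussian width is taken from the text above:

```python
import math

M_END, M_KNEE, SIGMA = 5.2895, 5.282, 0.00264   # GeV; endpoint assumed

def background(m, a, b):
    """Linear background below M_KNEE with a parabolic cutoff
    that vanishes at the kinematic endpoint M_END (one possible choice)."""
    if m >= M_END:
        return 0.0
    if m < M_KNEE:
        return a + b * m
    # parabola matched to the line at M_KNEE and zero at M_END
    return (a + b * M_KNEE) * ((M_END - m) / (M_END - M_KNEE)) ** 2

def fit_model(m, n_sig, a, b, m_peak=5.280):
    """Fixed-width Gaussian signal on top of the cutoff background."""
    gauss = n_sig / (SIGMA * math.sqrt(2.0 * math.pi)) * \
        math.exp(-0.5 * ((m - m_peak) / SIGMA) ** 2)
    return background(m, a, b) + gauss
```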
In the ARGUS and CLEO~1.5 experiments slightly different background parameterizations were used \cite{ARGUSback}. \subsection {High Energy Collider Experiments} While the $\Upsilon(4S)$ machines are well suited to study many aspects of $B$ physics, some questions can only be investigated by experiments at higher center of mass energies. These include lifetime measurements and the search for heavier $b$-flavored mesons and baryons. The four LEP experiments and SLD operate on the $Z^0$ resonance. At this energy, the cross section for $b\bar{b}$ production is about 6.6 nb and the signal-to-noise ratio for hadronic events is 1:4, comparable to the $\Upsilon(4S)$ resonance. The kinematic constraints available on the $\Upsilon(4S)$ cannot be used on the $Z^0$ but due to the large boost the $b$ quarks travel $\approx$ 2.5 mm before they decay and the decay products of the two $b$-hadrons are clearly separated in the detector. The large boost makes precise lifetime measurements possible as well as observations of time dependent phenomena such as $B_{d}-\bar{B_{d}}$ mixing. Compared to $e^+e^-$ annihilation, the $b\bar{b}$ production cross section at hadron colliders is enormous, about $50~\mu$b at 1.8 TeV. However, a signal-to-background ratio of about 1:50 makes it difficult to extract $b$ quark signals and to fully reconstruct $B$ mesons. In the past, evidence for the production of $b$ quarks in high energy experiments has been deduced from the presence of high $p_{\perp}$ leptons. At hadron colliders, selecting final states containing leptons also provides a powerful tool to suppress QCD backgrounds. Recently, significant progress in the isolation of events containing $b$ quarks has been made possible by the installation of silicon vertex detectors near the interaction point at several collider experiments.
\begin{figure}[htb] \unitlength 1.0in \vskip 10 mm \begin{center} \begin{picture}(2.5,2.2)(0.0,0.0) \put(-.01,0.0){\psfig{width=1.8in,height=1.8in,file=cdf_bd.ps}} \end{picture} \vskip 17 mm \caption[]{(a) The $\psi K^+$ mass distribution from the CDF experiment (b) The $\psi K^{*0}$ mass distribution from the CDF experiment. The solid line indicates the fitted region \protect{\cite{cdf_cc}}.} \label{cdfbd} \end{center} \end{figure} These $b$ quarks hadronize as $B_d$, $B_u$, $B_s$, and $B_c$ mesons or as baryons containing $b$ quarks. With the improvement in background suppression provided by these solid state detectors, signals for exclusive hadronic $B_d$, $B_u$ and $B_s$ meson decays have been isolated in the invariant mass spectra for low multiplicity final states. Examples include the decay modes $B^{+(0)} \to \psi K^{+(0*)}$ shown in Fig. \ref{cdfbd} and $B_s \to \psi \phi$ shown in Fig. \ref{cdfbs}. However, the resolution in invariant mass for high energy experiments, which is of order 10-20 MeV, is poorer than the resolution in beam constrained mass in threshold experiments. At CDF, the mass resolution, about 10 MeV, is sufficient to separate modes such as $B^-\to \psi \pi^-$ from $B^-\to \psi K^-$. However, for experiments at LEP the mass resolution is frequently not sufficient to clearly separate B meson decay modes with an additional photon or modes where one kaon is replaced with a pion. Evidence for the production of $b$-flavored baryons has also been reported recently but the relative production fractions are not well known \cite{Sharma}. Although collider experiments cannot determine absolute branching fractions without making further assumptions or using information from experiments at the $\Upsilon (4S)$, they can measure ratios of branching fractions such as the ratio ${\cal B}(B\to \psi K^*)/{\cal B}(B\to \psi K)$. Some high energy experiments have also obtained inclusive signals for $D^0, D^{*+}, \psi$ mesons in $B$ decay. 
\subsection {Determination of Average $B$ Meson Branching Fractions} \label{thatsit} To extract $B$ meson branching fractions, the detection efficiencies are determined from a Monte Carlo simulation and the yields are corrected for the charmed meson branching fractions. In order to determine new average branching ratios for $B$ meson decays the results from individual experiments must be normalized with respect to a common set of charm meson and baryon absolute branching fractions. The branching fractions for the $D^0$ and $D^+$ modes used to calculate the $B$ branching fractions are given in Tables~\ref{Tbd0br},~\ref{Tbdpbr}. We have chosen the average of the values for the $D^0 \to K^- \pi^+$ branching fraction recently reported by CLEO~II and ALEPH to normalize the results \cite{DKpi}. The branching fractions of other $D^0$ decay modes relative to $D^0 \to K^- \pi^+$ are taken from the PDG compilation\cite{PDG}. The $D^+$ branching fractions are also taken from the PDG compilation \cite{PDG}.
\begin{table}[htb] \caption{$D^0$ branching fractions [\%] used in previous publications and this review.} \label{Tbd0br} \begin{tabular}{lllll} Mode & ARGUS, CLEO 1.5 \cite{SecondB},\cite{FourthB} & ARGUS ($DD_s$) \cite{ARGUSDDs} & CLEO II \cite{SixthB} & This review\\ \hline $K^-\pi^+$ & $4.2 \pm 0.6 $ & $ 3.7 \pm 0.3 $ & $ 3.9 \pm 0.2 $ & $3.9 \pm 0.2 $\\ $K^-\pi^+ \pi^- \pi^+$ & $ 9.1 \pm 1.1 $ & $ 7.5 \pm 0.5 $ & $ 8.0 \pm 0.5 $ & $ 7.9\pm 0.6$\\ $K^-\pi^+ \pi^0$ & $ 13.3 \pm 1.8 $ & $ 11.3 \pm 1.1 $ & $ 12.1 \pm 1.1 $& $13.4\pm 1.15$\\ $K^0\pi^+ \pi^-$ & $ 6.4 \pm 1.1 $ & $ 5.4 \pm 0.5 $ & & $5.1\pm 0.6$ \end{tabular} \end{table} \begin{table}[htb] \caption{$D^+$ branching fractions [\%] used in previous publications and this review.} \label{Tbdpbr} \begin{tabular}{lllll} Mode & ARGUS, CLEO 1.5 \cite{SecondB},\cite{FourthB} & ARGUS ($DD_s$) \cite{ARGUSDDs} & CLEO II \cite{SixthB} & This review\\ \hline $K^-\pi^+ \pi^+$ & $9.1 \pm 1.4 $ & $ 7.7 \pm 1.0 $ & $ 9.1 \pm 1.4 $ & $9.1 \pm 0.6$\\ $K^0\pi^+ $& $ 3.2 \pm 0.5 $ & $ 2.6 \pm 0.4 $ & & $2.7\pm 0.3$\\ $K^0\pi^+ \pi^+ \pi^-$ & & $ 6.9 \pm 1.1 $ & & $7.0\pm 1.0$ \\ \end{tabular} \end{table} Branching ratios for all $D_s$ decay modes are normalized relative to ${\cal{B}}(D_s^+\rightarrow \phi \pi^+)$. There are no model-independent measurements of the absolute branching fraction for $D_s^+\rightarrow \phi \pi^+$. The currently favored method uses measurements of $\Gamma(D_s^+\rightarrow \phi \ell^+\nu)$/$\Gamma(D_s^+ \rightarrow \phi \pi^+)$. The rate $\Gamma(D_s^+\rightarrow \phi \ell^+\nu)$ is determined from measurements of $\tau_{D_s^+}/\tau_{D^+}$, $\Gamma(D^+\rightarrow K^* \ell\nu)$, and using $\Gamma(D^+\rightarrow K^* \ell\nu)$/ $\Gamma(D_s^+\rightarrow \phi \ell^+\nu)$ obtained from theory. We use the value of ${\cal{B}}(D_s^+\rightarrow \phi \pi^+)$ derived in reference \cite{CLNS9314}.
Other methods include using ${\cal B}(D_s^+\to X\ell^+\nu)$ obtained by combining measurements of ${\cal B}(D_s^+\to \phi\ell^+\nu)/ {\cal B}(D_s^+\to \phi\pi^+)$, ${\cal B}(D_s^+\to \eta\ell^+\nu)/ {\cal B}(D_s^+\to \phi\pi^+)$, ${\cal B}(D_s^+\to \eta^{'}\ell^+\nu)/ {\cal B}(D_s^+\to \phi\pi^+)$ and comparing to ${\cal B}(D_s\to X\ell \nu)$ obtained from ${\cal B}(D^0\to X\ell\nu)$ and the ratio of $\tau_{D^0}/\tau_{D_s}$. This method also gives values in the same range. We believe, however, that the PDG group has underestimated the error on this value. \begin{table}[htb] \caption{$D_s$ branching fraction [\%] used in previous publications and this review.} \label{Tbdsbr} \begin{tabular}{lllll} Mode & CLEO 1.5 \cite{SecondB} & ARGUS ($DD_s$) \cite{ARGUSDDs} & CLEO II \cite{cleodds} & This review \\ \hline $\phi \pi^+ $& $2.7 \pm 0.7 $ & $ 3.0 \pm 1.1 $& $ 3.5 \pm 0.4 $ & $ 3.7 \pm 0.9 $ \end{tabular} \end{table} Since the publication of the original ARGUS and CLEO~1.5 papers on hadronic decays, the branching fractions for the $D^{*} \to D \pi (\gamma)$ modes have been significantly improved by more precise measurements from CLEO~II \cite{CLEODSTAR}. For modes which contain $D^{*}$ mesons we have recalculated the branching ratios using the CLEO~II measurements. 
\begin{table}[htb] \caption{$D^{*}$ branching fractions [\%] used in previous publications and this review.} \label{Tbdstarbr} \begin{tabular}{llll} Mode & ARGUS, CLEO 1.5 \cite{SecondB},\cite{FourthB} & CLEO II \cite{SixthB} & This review\\ \hline $D^{*0}\rightarrow D^0\pi^0 $& $55.0 \pm 6 $ & $ 63.6 \pm 4.0 $ & $63.6 \pm 4.0$\\ $D^{*0}\rightarrow D^0\gamma $& $45.0 \pm 6 $ & $ 36.4 \pm 4.0 $ & $36.4 \pm 4.0$\\ $D^{*+}\rightarrow D^0\pi^+ $& $57.0 \pm 6 $ & $ 68.1 \pm 1.6 $ & $68.1 \pm 1.6$ \end{tabular} \end{table} \begin{table}[htb] \caption{Charmonium branching fractions [\%] used in previous publications and this review.} \label{Tbccbr} \begin{tabular}{llll} Mode & ARGUS, CLEO 1.5 \cite{SecondB},\cite{FourthB} & CLEO II \cite{SixthB} & This review \cite{BRpsi}\\ \hline $\psi \rightarrow e^+e^- $& $6.9 \pm 0.9$ & $ 5.91\pm 0.25 $ & $ 5.91 \pm 0.25 $\\ $\psi \rightarrow \mu^+\mu^- $& $6.9 \pm 0.9$ & $ 5.91\pm 0.25 $ & $ 5.91 \pm 0.25 $\\ $\psi' \rightarrow e^+e^-$ and $\mu^+\mu^- $& $1.7 \pm 0.3$ & $ 1.7 \pm 0.3 $ & $ 1.7 \pm 0.3 $\\ $\psi' \rightarrow \psi \pi^+\pi^-$& $32.4 \pm 2.6$ & $ 32.4 \pm 2.6 $ & $ 32.4 \pm 2.6 $\\ $\chi_{c1} \rightarrow \psi \gamma $& $ 27.3 \pm 1.6$ & $ 27.3 \pm 1.6 $ & $ 27.3 \pm 1.6 $ \\ $\chi_{c2} \rightarrow \psi \gamma $& & $ 13.5 \pm 1.1 $ & $ 13.5 \pm 1.1 $ \end{tabular} \end{table} We also give the old and new values assumed for the branching fractions of the decays $\psi\to e^+e^-$ and $\psi\to \mu^+\mu^-$. We have chosen to use the precise measurement of these decays performed by the MARK III collaboration \cite{BRpsi}. The modes $\psi^{'}\to \ell ^+ \ell ^-$ and $\psi^{'}\to \psi \pi^+ \pi^-$ are used to form $B$ meson candidates in modes involving $\psi'$ mesons. Decays of $B$ mesons into final states containing $\chi_{c}$ mesons are reconstructed using the channel $\chi_{c1,~(c2)}\to \psi \gamma$.
Product branching ratios for all modes containing $\psi$ mesons have been rescaled to account for the improved $\psi$ branching fractions. In the cases where only one $D^0$ decay mode was used to reconstruct the $B$ meson the published branching ratio is simply rescaled. The procedure for re-calculating the branching ratios becomes more involved when more than one $D$ decay channel is used. CLEO 1.5 and ARGUS used the following procedure to obtain their results: \[\displaystyle {\cal{B}}(B)\; = \; \frac{N_{observed}}{\epsilon \times N_B \times ({\cal{B}}(D^*)) \times \sum{{\cal{B}}_i(D^0)}}\] where $N_B$ is the number of $B$ mesons. The efficiency $\epsilon$ is defined as \[\epsilon \; = \; \frac{\sum{{\cal{B}}_i(D^0)\epsilon _i}}{\sum{{\cal{B}}_i(D^0)}}.\] The index $i$ refers to the $D$ meson decay channel. Therefore the rescaled branching ratio is given by \[{\cal{B}}\; = \; \frac{N_{observed}}{N_B \times ({\cal{B}}(D^*))\times \sum{{\cal{B}}_i(D^0)\epsilon _i}}.\] The CLEO collaboration published enough information, including the yields and the efficiencies for the individual $D^0$ decay channels, so that rescaling their $B$ branching ratios is straightforward. The CLEO~II branching ratios for $B \to D^{(*)}$ decays are calculated as a weighted average of the results determined in each $D$ sub-mode. Since yields and efficiencies were provided in the original publication, the results could easily be rescaled to accommodate the improved $D^0$ and $D^{+}$ branching ratios. Although the $D^0$ reconstruction efficiencies depend slightly on the $B$ meson decay channel under study, the only information available from the ARGUS collaboration is the set of average $D^0$ reconstruction efficiencies $<\epsilon >_i$.
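As an illustration of this bookkeeping, the sketch below recomputes a $B$ branching fraction from per-mode quantities. The yield, the number of $B$ mesons, and the efficiencies are invented for illustration; only the $D^0$ and $D^{*+}$ branching fractions correspond to values adopted in this review:

```python
# per-D0-mode inputs (efficiencies and yield invented for illustration)
eps    = [0.40, 0.15, 0.20, 0.18]       # efficiencies eps_i, assumed
br_new = [0.039, 0.079, 0.134, 0.051]   # D0 branching fractions adopted here
n_obs  = 50.0                           # observed signal yield (invented)
n_B    = 1.0e6                          # number of B mesons (invented)
br_dst = 0.681                          # B(D*+ -> D0 pi+)

# B = N_observed / (N_B * B(D*) * sum_i eps_i * B_i(D0))
denom = n_B * br_dst * sum(e * b for e, b in zip(eps, br_new))
br_B  = n_obs / denom
print(f"B = {br_B:.2e}")
```

Replacing `br_new` by an older set of $D^0$ branching fractions and taking the ratio of the two results gives exactly the ARGUS-style scale factor used when only average efficiencies are available.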
Therefore we had to make the assumption that the correct way to renormalize the ARGUS results is to multiply their branching ratios by the scale factor $F$ where \[ F\;=\;\frac{\sum{<\epsilon >_i \times {\cal{B}}_i(D^0)_{old}}} {\sum{<\epsilon > _i \times {\cal{B}}_i(D^0)_{new}}}. \] The validity of this assumption has been checked using CLEO~1.5 data. A similar procedure had to be employed for the CLEO~II results on $B \to DD_s$. Statistical errors are recalculated in the same way as the branching ratios. For the results from individual experiments on $B$ decays to final states with $D$ mesons two systematic errors are quoted. The second systematic error contains the contribution due to the uncertainties in the $D^0\to K^-\pi^+$, $D^+\to K^-\pi^+\pi^+$ or $D_s \to \phi \pi$ branching fractions. This will allow easier rescaling when these branching ratios are measured more precisely. The first systematic error includes the experimental uncertainties and, when relevant, the uncertainties in the ratios of charm branching ratios, e.g. $\Gamma(D^0 \rightarrow K^- \pi^+ \pi^+ \pi^-)/\Gamma(D^0 \rightarrow K^- \pi^+)$, and the error in the $D^*$ branching fractions. For modes involving $D_s$ mesons, the first systematic error also includes the uncertainties due to the $D^0$ and $D^+$ branching ratios. For all other modes only one systematic error is given. For the world averages, the statistical and the first systematic error are combined in quadrature while the errors due to the $D^0$, $D^+$ and $D_s$ branching ratio scales are still listed separately. With the improvement in the precision of the $D^0$ and $D^*$ branching fractions these are no longer the dominant source of systematic error in the study of hadronic $B$ meson decay. The error on the $D_s^+$ branching ratio scale remains large. \section{B MESON MASSES AND LIFETIMES} \label{mass-diff} Particles are characterized by their masses, lifetimes and internal quantum numbers.
Only four $b$-flavored mesons have been established to date. The bound states with a $b$ quark and a $\bar{d}$ or $\bar{u}$ anti-quark are referred to as the $B_{d}$ ($\bar{B^0}$) and the $B_u$ ($B^-$) mesons, respectively. The vector partner of these pseudoscalar states is called the $B^*$ meson. The $B_s$ meson, a bound state containing a $b$ quark and $\bar{s}$ anti-quark, has been discovered in the past two years. The spectrum of known and predicted $B$ states is shown in Fig. \ref{bspectrum}. These predictions are taken from a recent potential model calculation by Eichten, Hill and Quigg (EHQ)\cite{ehq}. \begin{figure}[htb] \unitlength 1.0in \begin{center} \begin{picture}(3.0,3.0)(0.0,0.0) \put(-0.001,0.01){\psfig{width=2.8in,height=2.8in,file=bspectrum.eps}} \end{picture} \bigskip \bigskip \caption{Measured and predicted masses of the low-lying $B$ meson states. The predictions are taken from the EHQ model.} \label{bspectrum} \end{center} \end{figure} The spectrum reflects the remarkable feature of heavy quark spin symmetry. In the limit of large heavy quark masses, the spin of the heavy quark decouples from the dynamics and the pseudoscalar and vector states will be degenerate. In the $B$ system, an indication that this symmetry is respected is the experimental ratio \begin{equation} \frac{m_{B^*}\, - \, m_B}{m_B}\; \sim\; 0.9\, \%. \end{equation} Since the ground state mesons containing heavy quarks decay weakly, their lifetimes are typical of the weak interaction scale, in the range of 0.1-2 ps. Ten years ago, before the MAC \cite{macblife} and MARK II \cite{mkiiblife} collaborations presented the first measurement of the $b$ lifetime, the only phenomenological guide to the strength of the coupling between the quark generations was the Cabibbo angle. If the coupling between the third and second generations had the same strength as the coupling between the second and first, the $b$ lifetime would be about 0.1 ps.
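A free-quark, muon-decay-analogy estimate makes this scaling with the coupling explicit. In the sketch below the quark mass, phase-space factor, and semileptonic branching fraction are assumed round numbers, so only the order of magnitude is meaningful:

```python
import math

hbar = 6.582e-25   # GeV s
G_F  = 1.166e-5    # GeV^-2, Fermi constant
m_b  = 4.8         # GeV, assumed b quark mass
B_sl = 0.105       # semileptonic branching fraction
f_ps = 0.5         # assumed phase-space suppression factor

def tau_b(v_qb):
    """Free-quark estimate of the b lifetime (seconds) for coupling v_qb."""
    gamma_sl = G_F**2 * m_b**5 * v_qb**2 * f_ps / (192.0 * math.pi**3)
    return B_sl * hbar / gamma_sl

print(f"{tau_b(0.22) * 1e12:.2f} ps")  # Cabibbo-strength coupling: ~0.1 ps scale
print(f"{tau_b(0.04) * 1e12:.2f} ps")  # small coupling: ps-scale lifetime
```

The quadratic dependence on the coupling is why a lifetime above 1 ps immediately implied a very small $|V_{cb}|$.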
The measurements of lifetimes from the PEP experiments that indicated a value longer than 1 ps were not anticipated. Since to first order the $b$ lifetime can be expressed in analogy to muon decay $$ \tau \; = \; \frac{1}{\Gamma_{tot}} \; = \; \frac{{\cal{B}}_{sl}}{\Gamma_{sl}} \; = \; \frac{\displaystyle {\cal{B}}_{sl}}{\displaystyle\frac{G_F^2}{192\pi^3}m_b^5 |V_{cb}|^2\times {\rm phase\ space}} $$ it was concluded that the CKM matrix element $|V_{cb}|$ was very small. In the naive spectator model, all mesons and baryons containing $b$ quarks have the same lifetime. Differences in the hadronic decay channels and interference between contributing amplitudes, {\it i.e.} if the same final state can be reached through an external and internal spectator decay such as $B^- \to D^0\pi^-$, will modify this simple picture and give rise to a hierarchy of lifetimes. For the $b$ system we expect \cite{bigimarch} \begin{equation} \tau(B^-)\; \geq \; \tau(\bar{B}^0)\; \approx \; \tau(B_s)\; >\; \tau(\Lambda_b^0). \end{equation} A similar lifetime hierarchy has been observed in charm decay. However, since the lifetime differences are expected to scale as $1/m_Q^2$, where $m_{Q}$ is the mass of the heavy quark, the variation in the $b$ system should be less than $10\%$. Measurements of lifetimes for the various $b$-flavored hadrons thus provide a way to determine the importance of non-spectator mechanisms. \subsection{$\bar{B^0}$ and $B^-$ Masses} \label{bmss} We now discuss measurements of the $\bar{B^0}$ and $B^-$ masses and the mass difference between them. For these analyses only fully reconstructed $B$ decays in modes with good signal to background are used. As an example, CLEO~II \cite{SixthB} uses the modes $B^-\to\psi K^-$, $\bar{B^0}\to\psi K^{*0}$, $B^-\to D^0\pi^-$, $B^-\to D^0\rho^-$, $B^-\to D^{*0}\pi^-$, $B^-\to D^{*0}\rho^-$, $\bar{B^0}\to D^+\pi^-$, $\bar{B^0}\to D^+\rho^-$, $\bar{B^0}\to D^{*+}\pi^-$, and $\bar{B^0}\to D^{*+}\rho^-$.
\begin{figure}[htb] \unitlength 1.0in \begin{center} \begin{picture}(3.0,3.0)(0.0,0.0) \put(-.55,-1.13){\psfig{bbllx=0pt,bblly=0pt,bburx=567pt,bbury=567pt,width=3.8in,height=3.8in,file=bexcl_mball.ps}} \end{picture} \bigskip \bigskip \caption{Beam constrained mass distributions from CLEO~II for (a) $B^-$ events and (b) $\bar{B^0}$ events.} \label{FBM} \end{center} \end{figure} With tight cuts, 362 $B^-$ and 340 $B^0$ candidates have been reconstructed. The beam constrained mass distributions for the sum of these modes are shown in Fig.~\ref{FBM}. \begin{table}[htb] \caption{Measurements of the $\bar{B^0}$ and $B^-$ Masses [MeV].} \label{Tmabs} \begin{tabular}{lccc} Experiment& $M_{\bar{B^0}}$ & $M_{B^-}$ & $M_{\bar{B^0}}-M_{B^-}$ \\ \hline ARGUS & $5279.6\pm 0.7 \pm 2.0 $ & $ 5280.5\pm 1.0 \pm 2.0$ & $-0.9\pm 1.2 \pm 0.5$ \\ CLEO 87 & $5278.0 \pm 0.4 \pm 2.0$ & $5278.3 \pm 0.4 \pm 2.0$ & $-0.4\pm 0.6 \pm 0.5$ \\ CLEO 93 & $5279.2 \pm 0.2 \pm 2.0$ & $5278.8 \pm 0.2 \pm 2.0$ & $0.41\pm 0.25 \pm 0.19$ \\ \hline Average & $5278.9 \pm 0.2 \pm 2.0 $ & $5278.7 \pm 0.2 \pm 2.0$ & $ 0.2\pm 0.3$ \end{tabular} \end{table} The absolute values of the $B^-$ and $\bar{B^0}$ masses are limited in accuracy by the knowledge of the beam energy. A correction of $(-1.1\pm 0.5)$~MeV is made for initial state radiation as described in Ref.~\cite{Cdsr}. The systematic error from the uncertainty in the absolute value of the CESR/DORIS energy scale is determined by calibrating to the known $\Upsilon (1S)$ mass. The mass difference is determined more accurately than the masses themselves, because the beam energy uncertainty cancels, as do many systematic errors associated with the measurement errors on the charged tracks and neutral pions. Several models predict the isospin mass difference\cite{MDone}; they give values between 1.2 and 2.3 MeV, larger than the experimental results given in Table \ref{Tmabs}.
However, Goity and Hou as well as Lebed \cite{quatsch} have found models that can lead to small values of the mass difference. That the $\bar{B^0} - B^-$ mass difference is much smaller than the corresponding mass differences in the $K$ and $D$ systems is surprising. We conclude this section on the $B$ meson masses with a result that will be discussed in greater detail in Section \ref{mixing}. In the neutral $B$ meson system, the eigenstates of the strong and electromagnetic interaction (production) do not coincide with the weak eigenstates (decay). According to the CPT theorem, the flavor eigenstates $B^0\; = \; (d\bar{b})$ and $\bar{B}^0 \; = \; (\bar{d}b)$ must have equal mass and lifetime but this is not required for the weak eigenstates $B_1$ and $B_2$. The situation is similar to the neutral kaon system. The $B_1-B_2$ mass difference is too small to be measured directly but it can be determined from the rate of $B^0\bar{B}^0$ oscillations (see Section \ref{bdbdbmix}) $$ |\Delta m_{(B_1-B_2)}| \; = \; { \displaystyle\frac{x_d}{\tau_{B^0}}} \; = \; (3.07 \pm 0.17) \times 10^{-4}\; {\rm eV} $$ where $x_d$ is the $B^0\bar{B}^0$ mixing parameter. This is about 100 times larger than the corresponding mass difference in the $K^0\bar{K}^0$ system. \subsection{Measurement of the $B^*$ Mass} The vector partner of the pseudoscalar $B$ meson is called $B^*$. It has been observed by the CUSB and CLEO collaborations through the electromagnetic transition $B^* \to \gamma B$ \cite{cleobs},\cite{cusbbs} which gives a quasi-monochromatic photon. CLEO has determined the $B^*-B$ mass difference to be $(46.4 \pm 0.3 \pm 0.8)$ MeV which, combined with the average $B$ mass, gives \begin{center} \begin{tabular}{ll} $m_{B^*} \; = \; 5325.2 \pm 0.5 \pm 2.8$ & MeV \end{tabular} \end{center} $B^*$ production has also been observed by the LEP experiments \cite{l3bs},\cite{alephbs},\cite{delphibs}.
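The conversion from the mixing parameter to the $B_1-B_2$ mass difference quoted above is simple arithmetic, $|\Delta m| = \hbar x_d/\tau_{B^0}$; a quick check with illustrative values for $x_d$ and $\tau_{B^0}$ (both assumed here for the sketch, not taken from a fit):

```python
# Convert the B0 mixing parameter x_d = dm * tau into a mass difference in eV.
# x_d and tau below are illustrative round numbers from the period.
hbar_eV = 6.582e-16   # [eV s]
x_d = 0.76            # B0-B0bar mixing parameter (assumed)
tau = 1.6e-12         # B0 lifetime [s] (assumed)

delta_m_eV = hbar_eV * x_d / tau
```

With these inputs the result is close to the $3 \times 10^{-4}$ eV scale quoted in the text.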
\begin{figure}[htb] \unitlength 1.0in \begin{center} \begin{picture}(3.0,3.0)(0.0,0.0) \put(0.0,-0.0){\psfig{bbllx=0pt,bblly=0pt,bburx=567pt,bbury=567pt,width=3.8in,height=2.8in,file=delphi_bstarstar_fig1.ps}} \end{picture} \bigskip \bigskip \caption{DELPHI results on $B^{**}$ production. (a) Distribution of the Q-value for $B^{(*)}\pi$ pairs (data points) along with the Monte Carlo expectation without $B^{**}$ production (shaded area). Q is defined as $Q \, = \, m_{B^{(*)}\pi} - m_{B^{(*)}} - m_{\pi}$. (b) Background subtracted $B^{(*)}\pi$ pair Q-value distribution with a Gaussian fit overlaid.} \label{bstarstar} \end{center} \end{figure} \subsection{Observation of $B^{**}$ Production} Evidence for production of orbitally excited B meson states has recently been reported by the OPAL and DELPHI collaborations \cite{opalbstarstar},\cite{delphibstarstar}. As noted by Gronau, Nippe and Rosner \cite{bstarstar_theory}, $B^{**}$ mesons, if produced in sufficient quantity, allow self-tagging flavor identification at production time. They could provide a powerful tool to study flavor oscillations and CP violation in the neutral $B$ system. $B^{**}$ mesons containing a $u$ or $d$ quark are expected to decay to $B \pi$ or $B^*\pi$ while the dominant decay for $B_s^{**}$ states is $B_s^{**} \to B^{(*)}K$. Inclusive correlations between charged pions and b quark jets, as well as the displacement of the decay products of the b quark jet from the interaction point, are used by the LEP experiments. This method does not distinguish between $B$ and $B^*$ mesons. Evidence for production of orbitally excited $B_s^{**}$ mesons is found using inclusive correlations between the fragmentation kaon and the products of the $b$ quark jet. The results of the DELPHI analysis are shown in Figure \ref{bstarstar}.
They observe $2157 \pm 120 \pm 323$ candidates and extract a fragmentation ratio for $b$ quark jets to a $B_{u,d}^{**}$ meson of \cite{delphibstarstar} $$ \sigma_{B_{u,d}^{**}}/\sigma_{b{\rm -jet}} \; = \; 0.27 \pm 0.02 \pm 0.06 $$ The width of the measured signal is consistent with the predictions for orbital excitations. Assuming a $B^*\pi$ to $B\pi$ ratio of 2:1, the $B_{u,d}^{**}$ mass averaged over the four expected states is determined to be $$ m_{B_{u,d}^{**}} \; = \; 5732 \pm 5 \pm 20 \; \rm MeV $$ OPAL has studied $B\pi$ and $BK$ correlations and found the probability that the pion charge correctly tags the b quark flavor to be $0.706 \pm 0.013$ \cite{opalbstarstar}. \subsection{Measurement of the $B_s$ Mass} \label{Bs-mass} Evidence for exclusive $B_s$ decays has been reported by the CDF\cite{cdfbs}, OPAL\cite{opalbs}, ALEPH\cite{alephbs}, and DELPHI collaborations\cite{delphibs}. CDF observes a signal of $32.9\pm 6.8 $ events in the $B_s\to \psi \phi$ mode (see Fig.~\ref{cdfbs}) and determines the $B_s$ mass to be $5367.7 \pm 2.4 \pm 4.8$~MeV. \begin{figure}[htb] \unitlength 1.0in \vskip 10 mm \begin{center} \begin{picture}(2.5,2.2)(0.0,0.0) \put(-0.2,0.0){\psfig{width=3.0in,height=3.0in,file=cdf_bs_new.ps}} \end{picture} \vskip 10 mm \caption[]{The $\psi K^+ K^-$ mass distribution from the CDF experiment for events with $K^+ K^-$ mass within 10 MeV$/c^2$ of the nominal $\phi$ mass.} \label{cdfbs} \end{center} \end{figure} OPAL finds one $B_s$ candidate in the $\psi\phi$ mode which is used to extract a mass of $5359 \pm 19\pm 7$ MeV for the $B_s$ meson. ALEPH finds one unambiguous $B_s$ event in the $B_s \to \psi' \phi$ mode and obtains a mass of $5368.6\pm 5.6 \pm 1.5$~MeV. Finally, DELPHI has three candidates of which one is in the $B_s\to \psi \phi$ mode and obtains a mass of $5374\pm 16 \pm 2$ MeV \cite{delphibs}.
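The four $B_s$ mass measurements just listed can be combined with a standard inverse-variance weighting, adding statistical and systematic errors in quadrature; this is a sketch of the averaging arithmetic, not the experiments' own combination procedure:

```python
# (value, statistical error, systematic error) in MeV
measurements = [
    (5367.7, 2.4, 4.8),   # CDF,    B_s -> psi phi
    (5359.0, 19.0, 7.0),  # OPAL,   B_s -> psi phi
    (5368.6, 5.6, 1.5),   # ALEPH,  B_s -> psi' phi
    (5374.0, 16.0, 2.0),  # DELPHI
]

# weight each measurement by 1/sigma^2 with stat and syst added in quadrature
weights = [1.0 / (stat**2 + syst**2) for _, stat, syst in measurements]
avg = sum(w * m for w, (m, _, _) in zip(weights, measurements)) / sum(weights)
err = (1.0 / sum(weights)) ** 0.5
```

With these inputs the combination comes out near $5368 \pm 4$ MeV, dominated by the CDF and ALEPH measurements.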
By reconstructing exclusive $B^-$ and $\bar{B}^0$ decays (see Fig.~\ref{cdfbd}), the high energy experiments calibrate their $B_s$ measurements relative to the known $B^-$ and $\bar{B}^0$ masses. The four $B_s$ mass measurements are consistent with each other. The average value \begin{center} \begin{tabular}{ll} $m_{B_s}\; = \; 5368.1 \pm 3.8$ & MeV \end{tabular} \end{center} is consistent with quark model predictions\cite{ehq,Kwros}. \subsection{$b$-Baryon Masses} Evidence for the production of $\Lambda_b$ and $\Xi_b$ baryons has been found in $\Lambda - \ell^+$ and $\Xi - \ell^+$ correlations, respectively \cite{forty}. OPAL has reported preliminary evidence for $\Lambda_b^0\to\Lambda_c^+\pi^-$ \cite{opal_lambdab}. However, the value originally quoted for the $\Lambda_b$ mass, $m_{\Lambda_b^0} \, = \, 5620 \pm 30$ MeV, is not confirmed by later OPAL data, which show no evidence for this decay mode. The current upper limit at 90\% C.L. for the product branching ratio is \cite{hassan} $$ {\cal{B}}(b\to \Lambda_b)\times {\cal{B}}(\Lambda_b \to \Lambda_c^+ \pi^-) \; < \; 1.9 \times 10^{-3} $$ Earlier claims by the UA1 and ISR experiments for the decay $\Lambda_b \to \Lambda \psi$ in $p \bar{p}$ collisions have been ruled out by the CDF and LEP experiments\cite{stochi}, who find \begin{center} \begin{tabular}{lll} ${\cal{B}}(b\to \Lambda_b)\times {\cal{B}}(\Lambda_b \to \Lambda \psi)$ & $<\, 1.8 \times 10^{-4}$, & CDF + LEP\\ &$=(1.8 \pm 1.0)\times 10^{-3}$, & UA1 \end{tabular} \end{center} \subsection{Techniques of $b$ Lifetime Measurements} The pioneering measurements by the experiments at PEP established a lifetime greater than $1$ ps for a mixture of $b$ flavored hadrons. In threshold experiments, where the $b$ hadrons are produced nearly at rest, the corresponding decay length is far too short to be measured directly. At higher $b$ momenta, however, relativistic time dilation allows the hadron containing the $b$ quark to travel a measurable distance before decaying.
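The distances involved are set by $L = \gamma\beta c\tau$ with $\gamma\beta = p/m$; a quick numerical check of the scale, using round illustrative values for the momentum and lifetime:

```python
# Decay length of a b hadron: L = gamma*beta*c*tau, with gamma*beta = p/m.
# Inputs are round illustrative numbers, not measured averages.
c = 3.0e8        # speed of light [m/s]
p = 30.0         # b hadron momentum [GeV], typical of LEP
m = 5.28         # B meson mass [GeV]
tau = 1.5e-12    # b lifetime [s]

gamma_beta = p / m
L_mm = gamma_beta * c * tau * 1e3   # decay length in mm
```

For a 30 GeV $B$ meson this gives a flight path of about 2.5 mm, the scale relevant at LEP.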
The lifetime of a particle is related to its decay length by \begin{equation} \tau_b \; = \; \frac{L_b}{\gamma \beta c} \label{tau_eq} \end{equation} At LEP energies, for example, the average $b$ momentum is about 30 GeV which yields an average decay length of $2.5$ mm for $<\tau_b> \, = \, 1.5$ ps. Similarly, at CDF the mean vertex displacement in the transverse direction is about $0.9$ mm. A variety of methods has been developed to measure the decay length and to determine the $b$ lifetime. They all follow the same principal steps. A purified sample is selected and the decay length is either measured directly or determined indirectly by using the impact parameter. The resulting decay length is then corrected for the Lorentz boost. An additional correction for background contamination is applied as well. \subsubsection{Selection of an enriched $b$ sample} Ideally, one would like to have a sample of fully reconstructed decays to determine the lifetime of a specific $b$ hadron. The $b$ vertex could then be reconstructed allowing a measurement of the decay length. The momentum of the $b$ hadron gives the $\gamma\beta$ factor in equation (\ref{tau_eq}) without any further assumptions. The resulting proper time distribution would be an exponential convoluted with a Gaussian resolution function representing the experimental measurement errors. Although currently limited by statistics this procedure will ultimately yield the most precise measurements of individual $b$ hadron lifetimes. The best statistical precision in the determination of lifetimes of hadrons containing $b$ quarks is currently obtained from measurements using partial reconstruction of semileptonic decays. These decays represent about 21\% of the total $b$ decay rate and have the experimental advantage that both electrons and muons can be efficiently identified with low background. The purity of the sample can be enhanced by kinematical cuts which take advantage of the large mass of the $b$ quark e.g. 
selecting leptons with large transverse momentum with respect to the $b$ direction. Event samples with purities above 90\% have been obtained at LEP. However, in such semileptonic decays the neutrino is not detected, so the $b$ hadron is not completely reconstructed. One then has to rely on Monte Carlo simulations to estimate the $b$ momentum and to extract the proper time distribution from the decay length measurements. At the TEVATRON the inclusive $b$ samples with the best signal to background ratios are obtained by selecting events with two energetic leptons from $\psi$ decay and a detached vertex. \begin{figure}[htb] \unitlength 1.0in \begin{center} \begin{picture}(2.0,2.0)(0.0,0.0) \put(-0.6,0.01){\psfig{width=3.8in,height=1.8in,file=impact.eps}} \end{picture} \bigskip \bigskip \caption{Lifetime measurements using the impact parameter method (a) and reconstruction of the $b$ decay vertex (b).} \label{impact} \end{center} \end{figure} For inclusive lifetime measurements, the presence of a high $p_{\perp}$ lepton or a $\psi$ meson is usually sufficient to demonstrate the presence of a b quark, while for exclusive measurements of individual $b$ hadron lifetimes an additional particle in the decay has to be reconstructed in order to establish a signature characteristic of the decaying $b$ hadron (Fig.~\ref{impact}(b)). The $\Lambda_b$ lifetime, for example, is measured using a sample of events containing $\Lambda_c^+\ell^-$ or $\Lambda \ell^-$ combinations. \subsubsection{Impact parameter method} In early experiments the vertexing precision was not sufficient to measure the decay length, $l\, = \, \gamma \beta c \tau$, directly. The impact parameter method shown schematically in Fig. \ref{impact}(a) was developed as an alternative. Due to the finite lifetime of the $b$ hadron, a lepton from the semileptonic decay of the heavy quark will miss the primary vertex where the $b$ hadron was produced.
The miss distance or impact parameter, $\delta$, is given by \begin{equation} \delta \; = \; \gamma \beta c \tau_b \sin{\alpha} \sin{\theta} \end{equation} where $\alpha$ is the angle between the lepton and $b$ directions and $\theta$ is the polar angle. The $b$ direction is usually approximated by the axis of the hadronic jet. A negative sign is assigned to the impact parameter if the lepton track crosses the jet axis behind the beam spot, indicating a mismeasured lepton or a background event. The main advantage of the impact parameter method is that it is rather insensitive to the unknown boost of the parent; as $\gamma\beta$ increases with the $b$ momentum, $\sin{\alpha}$ decreases approximately as $1/\gamma\beta$ for $\beta \approx 1$ \cite{Roe}. Improvements in lifetime measurements have come about from larger data samples, smaller beam spots, the use of neutral energy in the jet finding algorithms, as well as three dimensional vertex reconstruction. In the best LEP measurements the average impact parameter uncertainty has been reduced to about 80 $\mu m$ \cite{Sharma}. The disadvantage of this method is that a single track is a relatively poor estimator of the $b$ decay length and that not all the available information in the event is used. Today, impact parameter measurements are used only for inclusive lifetime measurements and for cases in which the $b$ decay length cannot be easily reconstructed \cite{Sharma}. \subsubsection{Decay length measurements} New interaction region designs with smaller radius beampipes and beam spots\footnote{Typical beam spot sizes. LEP: $\sigma_x\approx 150\, \mu m\, , \, \sigma_y \approx 10\, \mu m$, CDF: $\sigma_x\approx 40\, \mu m\, , \, \sigma_y \approx 40\, \mu m$,\\ SLD: $\sigma_x\approx 2\, \mu m\, , \, \sigma_y \approx 1\, \mu m$ \cite{Sharma}} combined with high resolution silicon vertex detectors allow for decay length measurements with a precision better than 300 $\mu m$.
This is a factor of 10 smaller than the average $b$ flight distance at LEP. As indicated in Fig \ref{impact}(b) the $b$ vertex is reconstructed using the lepton track and the direction of the reconstructed charm meson. The momentum of the $b$ hadron is estimated using the observed decay products, missing momentum and a correction factor determined from a Monte Carlo simulation. The proper time distribution is then given by an exponential convoluted with a Gaussian resolution function and the momentum correction factor. A maximum likelihood fit is used to extract the lifetime \cite{aleph_bs_plb}. \subsubsection{Averaging lifetime measurements} \label{aver_life} In order to obtain the most precise value for inclusive and exclusive $b$ lifetimes the results of lifetime measurements from different experiments have been combined. Using the conventional approach of weighting the measurements according to their error does not take into account the underlying exponential (lifetime) distribution. If a measurement fluctuates low then its weight in the average will increase, leading to a bias towards low values. This is particularly relevant for low statistics measurements such as the $B_s$ lifetime. According to a study by Forty\cite{forty}, this bias can be avoided if the weight is calculated using the relative error $\sigma_i/\tau_i$. \footnote{This procedure assumes good vertex resolution, {\it {i.e.}} $\sigma<\tau/10$.} We find a 1-3\% difference in the average lifetimes computed, with the second method giving the larger value. A slight bias of the latter method towards higher lifetime values could be avoided by taking into account asymmetric errors. This effect has been found empirically to be rather small and we omit this additional complication in the calculation of our lifetime averages. 
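The bias discussed above can be seen in a toy Monte Carlo: each pseudo-experiment measures the mean of a small exponential sample, so its quoted error scales with its own fitted lifetime. Weighting by the absolute error then pulls the combination low, while weighting by the relative error $\sigma_i/\tau_i$ does not. All numbers here (true lifetime, sample sizes) are illustrative assumptions:

```python
import random

random.seed(1)
tau_true = 1.5   # ps, assumed true lifetime
N = 25           # events per low-statistics pseudo-experiment
n_exp = 2000     # number of pseudo-experiments to combine

num_abs = den_abs = num_rel = den_rel = 0.0
for _ in range(n_exp):
    sample = [random.expovariate(1.0 / tau_true) for _ in range(N)]
    tau_hat = sum(sample) / N            # fitted lifetime of this experiment
    sigma = tau_hat / N**0.5             # its error scales with the fitted value
    w_abs = 1.0 / sigma**2               # conventional inverse-variance weight
    w_rel = 1.0 / (sigma / tau_hat)**2   # weight built from the relative error
    num_abs += w_abs * tau_hat; den_abs += w_abs
    num_rel += w_rel * tau_hat; den_rel += w_rel

avg_abs = num_abs / den_abs   # biased towards low lifetimes
avg_rel = num_rel / den_rel   # unbiased
```

With these deliberately low statistics the absolute-error weighting comes out several percent below the true value, while the relative-error weighting recovers it; for the larger samples of real measurements the shift is at the 1-3\% level quoted above.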
\subsection{Inclusive $b$ Lifetime} Inclusive measurements of the $b$ lifetime were important historically to establish the long $b$ lifetime and provided the first evidence that the coupling between the second and third quark generations is quite small. They are still needed for some electroweak studies such as the determination of the forward-backward asymmetry in $Z \to b\bar{b}$, where the different hadrons containing $b$ quarks are not distinguished. For $B$ physics, {\it {i.e.}} the study of $B$ meson decays, the exclusive measurements of individual $b$ hadron lifetimes are preferable. For example, to extract the value of the CKM matrix element $|V_{cb}|$ from measurements of semileptonic $B$ decays, the average of the $B^+$ and $\bar{B}^0$ lifetimes should be used rather than the inclusive $b$ lifetime, which contains additional contributions from $B_s$ mesons and $b$ baryons. Inclusive $b$ lifetime measurements are performed using one of the following three methods: \begin{itemize} \item{Impact parameter method $\; b \to c\ell \nu$ e.g. $$<\tau_b> \, = \, 1.487 \pm 0.023 \pm 0.0384 \rm{~ps~(ALEPH} \cite{aleph_tau_inc})$$ } \item{Decay length reconstruction in $b \to \psi X$ e.g. $$<\tau_b> \, = \, 1.46 \pm 0.06 \pm 0.06 \rm{~ps~(CDF } \cite{cdf_tau_inc})$$ } \item{Vertex topology in hadronic $b$ decays e.g. $$<\tau_b> \, = \, 1.599 \pm 0.014 \pm 0.035 \rm{~ps~(DELPHI} \cite{delphi_tau_top})$$ } \end{itemize} The third method uses displaced multi-prong vertices to reconstruct the decay point of the $b$ hadron. Extensive Monte Carlo simulations are used to estimate the $b$ momentum and to determine the proper time spectrum. The current world average for the inclusive $b$ lifetime, which includes many measurements in addition to those given above, is \cite{forty} \begin{center} \begin{tabular}{ll} $<\tau_b> \; = \; 1.524 \pm 0.027$ &ps. \end{tabular} \end{center} The world average for this quantity in 1992 was $(1.29\pm 0.05)$ ps.
The substantial change in the value has been attributed to several improvements: the use of neutral energy when calculating the b jet direction, and better knowledge of the resolution function as a result of the use of silicon vertex detectors\cite{Sharma},\cite{forty}. \begin{figure}[htb] \unitlength 1.0in \begin{center} \begin{picture}(3.0,3.0)(0.0,0.0) \put(-0.4,0.01){\psfig{width=3.8in,height=3.8in,file=cdf_dstarlnu.ps}} \end{picture} \vskip -10 mm \caption{$D^{*+}$-lepton correlations from $B$ decays observed by the CDF collaboration. A clear signal is present in right sign combinations while no signal is present when same sign charged leptons are combined with $D^{*+}$ candidates.} \label{cdf_dstarlnu} \end{center} \end{figure} \subsection{Exclusive Lifetime Measurements} Precise measurements of exclusive lifetimes for b-flavored hadrons have been carried out by CDF and by some of the LEP experiments \cite{cdfglas},\cite{alephglas1},\cite{alephglas2},\cite{opalglas}, \cite{delphiglas}. The most recent results and the techniques used are given in Table \ref{Tblife}. \begin{table}[htb] \caption{Measurements of exclusive lifetimes for b flavored hadrons [ps].} \medskip \label{Tblife} \begin{tabular}{llllll} Particle & Technique &CDF & ALEPH & OPAL & DELPHI \\ \hline \medskip $ \bar{B}^0$& $D^{*}-l$ & $1.62\pm0.16\pm0.15$& $1.71^{+0.12}_{-0.11}\pm 0.06$& $1.62\pm 0.10\pm0.10$& $1.17^{+0.29}_{-0.23}\pm 0.15\pm 0.05$ \\ $ \bar{B}^0$& excl &$1.57\pm0.18\pm0.08$& $1.17^{+0.24}_{-0.19}\pm 0.06$ & & \\ $ \bar{B}^0$& topol. & & & & $1.68\pm 0.15^{+0.13}_{-0.17}$ \\ \hline\medskip $ B^-$& $D-l$ & $1.63\pm 0.20\pm 0.16$ & $1.71\pm0.15\pm0.08$ & $1.53\pm0.14\pm0.11$ & $1.30^{+0.33}_{-0.29}\pm 0.15\pm 0.05$ \\ $ B^-$& excl & $1.61\pm 0.16\pm 0.05$ & $1.30^{+0.25}_{-0.20}\pm 0.06$ & & \\ $ B^-$& topol.
& & & & $1.72\pm0.08\pm0.06$ \\ \hline \medskip $ B_s^0$&$D_s -l$&$1.42^{+0.27}_{-0.23}\pm 0.11$& $1.90^{+0.46}_{-0.36}\pm0.05$& $1.33^{+0.26}_{-0.21}\pm 0.06$& $1.34^{+0.37}_{-0.29}\pm 0.14$ \\ $ B_s^0$& $D_s-h$ & & $1.75^{+0.30+0.18}_{-0.28-0.23}$& & $1.56\pm 0.35\pm 0.23$ \\ $ B_s^0$&$\psi\phi$&$1.74^{+1.08}_{-0.69}\pm 0.07$ && & \\ \hline\medskip $ \Lambda_b$ & $\Lambda - l$& & $1.05^{+0.12}_{-0.11}\pm 0.09$& $1.26^{+0.16}_{-0.15}\pm 0.07$ & $1.13^{+0.30+0.05}_{-0.23-0.08}$ \\ $ \Lambda_b$ & $\Lambda_c -l$& & $1.06^{+0.40}_{-0.27}\pm 0.07$& & $1.33^{+0.71+0.08}_{-0.42-0.09}$ \\ $ \Lambda_b$ & $p-l$ & & & & $1.28^{+0.35+0.11}_{-0.29-0.12}$ \\ \hline\medskip $ \Xi_b^0$ & $\Xi-l$ & & & & $1.5^{+0.7}_{-0.4}\pm0.3$ \\ \end{tabular} \end{table} \subsubsection{$B^-$ and $\bar{B}^0$ lifetimes} The best statistical precision in the determination of exclusive lifetimes is obtained from measurements using lepton-particle correlations. For example, a sample of $\bar{B}^0$ candidates can be obtained from events with lepton-$D^{*+}$ correlations of the correct sign which originate from the decay $\bar{B^0}\to D^{*+} \ell^-\nu$, $D^{*+}\to D^0 \pi^+$ and $D^0 \to K^- \pi^+$ (see Fig. \ref{impact} (b) for the method and Fig. \ref{cdf_dstarlnu} for the CDF results). The pion from the strong decay and the lepton form a detached vertex. This information, combined with the direction of the reconstructed $D^0$ meson, determines the location of the $B$ decay vertex from which the decay length can be measured. For a LEP experiment the $D^{(*)}\ell$ sample typically contains ${\cal{O}}(100)$ events. Obtaining the lifetime from the decay length requires knowledge of the $\gamma\beta$ factor, which is estimated from the momenta of the observed decay products. Since the neutrino is not observed, a correction is made to the boost factor. The uncertainty in the size of this correction is included in the systematic error and is typically of order $3\%$.
Another systematic problem is the contamination from decays $B^-\to D^{**}~l^- \nu$, followed by $D^{**}\to D^{*+}\pi^-$, where the $\pi^-$ from the strong decay of the $D^{**}$ (p-wave) meson is not detected. These backgrounds will lead to a $B^-$ meson contamination in the $\bar{B}^0$ lifetime sample (and vice versa). Since the branching fractions for such decays are poorly measured, this is another important systematic limitation and gives a contribution of order 5\% to the systematic error. Significant contributions to the systematic error also result from the uncertainty in the level of background and its lifetime spectrum. A detailed discussion of exclusive lifetime measurements can be found in a recent review by Sharma and Weber \cite{Sharma}. The systematic problems associated with the boost correction and the contamination from poorly measured backgrounds can be avoided by using fully reconstructed decays such as $\bar{B^0}\to D^+ \pi^-$ or $B^- \to \psi K^-$. However, since exclusive $B$ branching ratios are small, this method has much poorer statistical precision. In hadron collider experiments, this approach has been successfully used to determine the $\bar{B^0}$, $B^-$, and $B_s$ lifetimes from exclusive modes with $\psi$ mesons, e.g. $\bar{B^0}\to \psi K^{*0}$, $B^-\to \psi K^-$\cite{cdflife} and $B_s\to \psi \phi$ \cite{cdf_bs_life}. A topological vertexing method has been used by the DELPHI experiment. Candidate $\bar{B^0}$ and $B^+$ mesons are distinguished on the basis of the net charge of the tracks at the decay vertex. This method has small statistical errors; however, care must be taken to ensure that systematic uncertainties from mistracking and incorrect assignments of decay vertices are controlled. The neutral $B$ lifetime that is extracted is an average over the lifetimes of all neutral $b$ flavored hadrons, including $B_d^0$, $B_s^0$, and $\Lambda_b^0$.
With good knowledge of the production fractions, the exclusive $B^0$ lifetime can be extracted. A topological vertexing technique has also been used by the Fermilab fixed target experiment E653 \cite{E653}, which has observed 11 charged $B$ candidates and 17 neutral $B$ candidates in their emulsion data. They find $\tau_{B^+}=3.25^{+1.50+0.27}_{-0.90-0.10}$ ps and $\tau_{B^0} =0.91^{+0.27+0.10}_{-0.20-0.04}$ ps. These results have been omitted from the determination of world average lifetimes. Using the procedure for averaging measurements described in Section \ref{aver_life}, we combine the individual $B^-$ and $\bar{B}^0$ lifetime measurements and obtain \begin{center} \begin{tabular}{ll} $\tau_{B^-} \; = \; 1.646 \, \pm \, 0.063 $ &ps \end{tabular} \end{center} and \begin{center} \begin{tabular}{ll} $ \tau_{\bar{B}^0} \; = \; 1.621 \, \pm \, 0.067 $ & ps \end{tabular} \end{center} When averaging the results obtained by studying $D-\ell$ correlations a common systematic error of 3\% has been assumed. \subsubsection{$B_s$ lifetime measurements} The $B_s$ lifetime was measured by CDF \cite{cdf_bs_life} and the LEP experiments using partial reconstruction of the semileptonic decay $\bar{B}_s^0 \to D_s^- \ell^+ \nu$. Candidate $D_s^-$ mesons were reconstructed in the $\phi \pi^-$ or $K^{*0}K^-$ final states. \begin{figure}[htb] \unitlength 1.0in \begin{center} \begin{picture}(3.0,3.0)(0.0,0.0) \put(-0.5,0.01){\psfig{width=4.8in,height=8.8in,file=aleph_bs_life.ps}} \end{picture} \bigskip \bigskip \caption{$B_s$ lifetime measurement by ALEPH. a) $K^-K^+\pi^+$ invariant mass distribution for right-sign $D_s^+\ell^-$ combinations. b) $K^-K^+\pi^+$ invariant mass distribution for wrong-sign $D_s^+\ell^+$ combinations. c) Proper time distribution of the right-sign $D_s^+\ell^-$ sample. d) Proper time distribution of the combinatorial background.} \label{aleph_bs_life} \end{center} \end{figure} Fig. 
\ref{aleph_bs_life}(a) and (b) show the $K^-K^+\pi^+$ invariant mass spectra obtained by ALEPH \cite{aleph_bs_life} for right-sign and wrong-sign $D_s\ell$ combinations, respectively. The right-sign spectrum contains 47 $D_s$ candidates which were used for the lifetime measurement. The $B_s$ decay length was measured and converted to the $B_s$ proper time using a $B_s$ momentum estimator based on the reconstructed lepton and the $D_s$ momentum as well as an estimated neutrino energy obtained by using a missing mass technique. $D_s$ backgrounds from $D_s^*$ decays were treated using a Monte Carlo simulation. The $B_s$ lifetime was extracted from the proper time distribution using a maximum likelihood fit. The result of such a procedure is shown in Fig. \ref{aleph_bs_life}(c). A recent CDF result \cite{cdf_bs_life} also uses $D_s$-lepton correlations as well as exclusive reconstruction of $B_s \to \psi \phi$ decays to extract the $B_s$ lifetime. The uncertainty in the $B_s$ lifetime is still dominated by the statistical error. Assuming a common systematic error of 2\% \cite{Sharma} for the uncertainty in the vertex resolution and the neutrino energy estimate, we obtain \begin{center} \begin{tabular}{ll} $\tau_{B_s} \; = \; 1.55 \, \pm \, 0.13 $ & ps \end{tabular} \end{center} \subsubsection{$b$ baryon lifetime measurements} Studies of $\Lambda_c^+ \ell^-$ and $\Lambda \ell^-$ correlations at LEP provided the first evidence for the production of the $\Lambda_b$ baryon. Using the decay chain \begin{eqnarray*} \Lambda_b & \to & \Lambda_c^+ \ell^- \bar{\nu} \\ & & \; ^|\hspace{-2.0mm}\rightarrow \Lambda X\\ & & \; \; \; \; \; \; \; ^|\hspace{-2.0mm}\rightarrow p\pi^- \end{eqnarray*} OPAL \cite{opal_lambdab_life} found the invariant $p\pi^-$ mass distribution shown in Fig. \ref{opal_lambdab_life}(a).
\begin{figure}[htb] \unitlength 1.0in \begin{center} \begin{picture}(3.0,2.8)(0.0,0.0) \put(-1.01,-0.35){\psfig{width=5.0in,height=7.8in,file=opal_lambdab_life.ps}} \end{picture} \caption{$\Lambda_b$ lifetime measurement by OPAL. a) $p\pi^-$ invariant mass distribution for right-sign and wrong-sign $\Lambda\ell$ combinations. b) Decay length distribution of the right-sign $\Lambda\ell^-$ sample. The inset shows the corresponding distribution for the wrong-sign $\Lambda\ell^+$ candidates.} \label{opal_lambdab_life} \end{center} \end{figure} Although the composition of the $b$ baryon sample is not known, it is expected that the $\Lambda_b$ baryon is the most copiously produced. The production of $\Xi_b$ and $\Omega_b$ baryons is suppressed due to the additional strange quarks required for their formation. Both impact parameter and decay length measurements are used to determine $\tau_{\Lambda_b}$. Since the $\Lambda_c^+$ lifetime is short, the $\Lambda_b$ decay length can be estimated using the $\Lambda \ell^-$ vertex. The resulting time distribution from the OPAL analysis is shown in Fig. \ref{opal_lambdab_life}(b). A better estimate of the $\Lambda_b$ decay point is obtained by fully reconstructing the $\Lambda_c^+$ baryon and finding the $\Lambda_c^+ \ell^-$ vertex. However, the sample sizes become very small. Using this method, DELPHI finds $\tau_{\Lambda_b} \, = \, 1.33^{+0.71}_{-0.42} \pm 0.13$ ps. Combining the results listed in Table \ref{Tblife}, we determine the world average $\Lambda_b$ lifetime to be \begin{center} \begin{tabular}{ll} $\tau_{\Lambda_b} \; = \; 1.17 \, \pm \, 0.09 $ & ps. \end{tabular} \end{center} DELPHI \cite{Roudeau} has searched for $\Xi^- \ell^-$ correlations and found 10 $\Xi_b$ candidates. These are expected to come from $\Xi_b^- \to \Xi_c^0 \ell^- \bar{\nu}X$ and $\Xi_b^0 \to \Xi_c^+ \ell^- \bar{\nu}X$, followed by $\Xi_c \to \Xi^- X'$.
A simple average of the proper time of the 10 candidates gives the $\Xi_b$ lifetime estimate of $1.5^{+0.7}_{-0.4}\pm 0.3$ ps. \subsubsection{Lifetime Ratios} \label{liferat} The ratio of the $B^-$ and $\bar{B^0}$ lifetimes has been measured by a number of experiments. These measurements are performed either by using correlations between $D$ mesons and leptons or by using exclusive final states such as $B^-\to \psi K^-$ and $\bar{B^0}\to \psi K^{*0}$. The CLEO~II experiment has measured ${\cal B}(B^0\to X~l^-\nu)$ and ${\cal B}(B^-\to X~l^-\nu)$ using the yield of leptons found opposite fully and partially reconstructed B decays \cite{cleoiitptz}. {}From isospin invariance, the ratio of the two branching fractions is the ratio of the lifetimes. \begin{table}[htb] \caption{Measurements of lifetime ratios for b flavored hadrons.} \medskip \label{Tbratio} \begin{tabular}{clllll} Method &CDF & ALEPH & OPAL & DELPHI& CLEO~II \\ \hline \medskip $D-l$& $1.01\pm0.19\pm0.17$ & $1.00^{+0.14}_{-0.13}\pm 0.08$ & $0.94\pm0.12 \pm 0.07$ & $1.11^{+0.55}_{-0.39}\pm 0.11$& \\ $excl$& $1.02\pm 0.16\pm 0.05$& $1.11^{+0.31}_{-0.25} \pm 0.03$& & & \\ topol. & & & & $1.02^{+0.13+0.13}_{-0.10-0.10}$& \\ $\rm{B~tags}$& & & & & $0.93\pm 0.18\pm 0.12$\\ \end{tabular} \end{table} Averaging the results listed in Table \ref{Tbratio}, we obtain $$ \frac{\tau_{B^-}}{\tau_{\bar{B}^0}} \; = \; 0.995 \, \pm \, 0.068 $$ Note that this value is not exactly equal to the ratio of the world averages for the $B^-$ and $\bar{B}^0$ lifetimes since the average value of $\tau_{B^-}/\tau_{\bar{B}^0}$ is calculated directly from the ratios reported by the experiments. \subsection{Lifetime Summary} A summary of the measurements of all the $b$ hadron lifetimes can be found in Fig.~\ref{blifetime}. The pattern of measured lifetimes follows the theoretical expectations outlined in the introduction to this chapter. However, the $\Lambda_b$ baryon lifetime is unexpectedly short.
Scaling from the observed $\Lambda_c - D^0$ lifetime difference, $\tau_{\Lambda_b}$ should not deviate from the average $b$ lifetime by more than 10\%. A more precise determination of this lifetime would be of great interest. \bigskip \begin{figure}[htb] \unitlength 1.0in \begin{center} \begin{picture}(2.5,3.2)(0.0,0.0) \put(-0.2,0.01){\psfig{bbllx=0pt,bblly=0pt,bburx=567pt,bbury=567pt,width=2.8in,height=2.4in,file=blifetime.ps}} \end{picture} \caption{Summary of exclusive $b$ lifetime measurements.} \label{blifetime} \end{center} \end{figure} Assuming a $Z \to B^-,\,\bar{B}^0,\,B_s,\,\Lambda_b$ production ratio of $0.39\,:\,0.39\,:\,0.12\,:\,0.10$, we can average the exclusive lifetime measurements and find $<\tau_{excl.}>\; = \; 1.58\pm0.08$ ps, consistent with the inclusive $b$ lifetime, $\tau_b \; = \; 1.524\pm 0.027$ ps. Further improvements in the determination of $b$ lifetimes can be expected as the data samples available at the TEVATRON increase and from reduced systematic errors in the $D^{(*)}\ell$ correlation measurements at LEP. \section{SEMILEPTONIC B MESON DECAYS} Semileptonic transitions are the simplest $B$ decays: the heavy $b$ quark decays to either a $c$ or a $u$ quark and the virtual $W$ boson becomes a lepton pair. These decays are described by the external spectator diagram shown in Fig.~\ref{Fdiag}(a), (c). Measurements of semileptonic B decays are used to determine the weak couplings $|V_{cb}|$ and $|V_{ub}|$. In addition, detailed measurements of these decays test models of the dynamics of heavy quark decay. The leptonic current can be calculated exactly while corrections due to the strong interaction are restricted to the $b\to c$ and $b \to u$ vertices, respectively. Experimentally, semileptonic decays have the advantage of large branching ratios and the characteristic signature of the energetic charged lepton. The neutrino, however, escapes undetected, so a full reconstruction of the decaying $B$ meson is impossible.
Various techniques which take advantage of production at threshold or the hermiticity of the detector have been developed by the ARGUS, CLEO and LEP experiments to overcome this difficulty. Semileptonic decays are also useful for the study of other phenomena in $B$ physics. The charge of the lepton in a semileptonic decay is directly correlated with the flavor of the $B$ meson. A negative lepton comes only from the decay of a $b$ quark while the decay of a $\bar{b}$ anti-quark yields positive leptons. Tagging the $b$ flavor has been essential to the discovery of $B^0\bar{B}^0$ oscillations and will be equally important in searches for CP violation in the $B$ system. We begin with inclusive measurements and then discuss the results on exclusive $b\to c$ transitions. The dynamics of semileptonic decays is considered in the following section in which results on polarization and form factor measurements are given. Both inclusive and exclusive measurements of semileptonic decays of $B$ mesons have been used to determine $|V_{cb}|$. We summarize these results in Section \ref{brsl_vcb}. In the final section, inclusive and exclusive $b \to u$ transitions and the extraction of the CKM element $|V_{ub}|$ are discussed. \subsection{Inclusive Semileptonic b$\to$ c Transitions} \label{inclbc} There are three types of measurements of inclusive semileptonic B decays. These are measurements of the inclusive single lepton momentum spectrum, measurements of dilepton events using charge and angular correlations, and measurements of the separate $B^-$ and $\bar{B}^0$ branching ratios by using events which contain a lepton and a reconstructed B meson. Measurements of the semileptonic $B$ branching ratio, ${\cal{B}}_{sl}$, have been performed on the $\Upsilon(4S)$ resonance and at higher energies by the PEP, PETRA and LEP experiments. In all cases, the branching fraction of semileptonic decays is determined from the inclusive lepton yield. 
The primary difficulty in these analyses is distinguishing between leptons from $B$ decay and leptons from other sources. Once the fraction of direct or primary $b \to c\ell \nu$ leptons is obtained, the semileptonic branching ratio is extracted from the integral over the momentum distribution. \subsubsection{Measurements of ${\cal{B}}_{sl}$ on the $\Upsilon(4S)$} The momentum spectrum of electrons and muons from $B$ decays as measured by the CLEO II collaboration \cite{Bart1l} is shown in Fig.~\ref{accmm}. It cuts off at the kinematical limit around 2.4 GeV. Leptons from continuum $e^+e^-$ annihilation and other background sources such as $\psi \to \ell^+\ell^-$ decays have been subtracted. Corrections for final state radiation have been applied to the electron spectrum following the prescription by Atwood and Marciano \cite{radcorr}. The electron spectrum is measured down to momenta of 0.5 GeV while the muon detection system is fully efficient at about 1.4 GeV. Using similar techniques, measurements of the inclusive semileptonic branching fraction have also been published by the ARGUS\cite{ARGUS1l}, CRYSTAL BALL\cite{crystal1l}, CUSB\cite{CUSB1l}, and CLEO 1.5\cite{Hend1l} experiments. The spectrum in Fig.~\ref{accmm} contains two components: primary leptons from direct semileptonic decays ({\it {i.e.}} $b \to c\ell \nu$) and leptons from cascade decays, $b\to cX,\; c \to s\ell \nu$. The experimental challenge is to separate the two so that the inclusive semileptonic branching fraction can be determined from the direct component. Several methods have been devised to accomplish this. These include measurements of the inclusive single lepton momentum spectrum together with a model of semileptonic decays and measurements of dilepton events using charge and angular correlations. Separate $B^-$ and $\bar{B}^0$ branching ratios have been determined using events which contain a lepton and a reconstructed B meson. 
In the single lepton analyses, the primary difficulty is separating the contributions from direct semileptonic decay (i.e. $b\to c~l~\nu$) and cascade semileptonic decays, $b \to c~X, ~c\to s~l~\nu$. If only leptons above $1.4$ GeV are considered, then there is a negligible contribution from cascade decays, which have a soft spectrum, but a large extrapolation to lower momenta is then required to obtain the branching ratio. If the full lepton momentum range is used, then the spectrum after background subtraction must be fitted to the sum of the two components. The shape of the cascade component is obtained by convoluting the measured $B\to D X$ momentum distribution with the experimental $D\to l~Y$ spectrum\cite{Delco}. The shape of the primary spectrum is taken from a model of semileptonic B decay. Since the two components are not orthogonal, the separation of background from cascade decays introduces a significant model dependence in the determination of the semileptonic branching ratio. \begin{figure}[htb] \begin{center} \unitlength 1.0in \begin{picture}(3.,2.5)(0,0) \put(-1.4,-0.0) {\psfig{bbllx=0pt,bblly=0pt,width=6.0in,height=6.0in,file=bexcl_lepyield.ps}} \end{picture} \vskip 25 mm \caption{CLEO II measurement of the inclusive lepton yield from $B$ decays. a) Fit to the lepton momentum spectrum for muons and electrons using the ACCMM model. b) Fit to the lepton momentum spectrum for muons and electrons using the modified ISGW model with the $D^{**}$ fraction allowed to float.} \label{isgws} \label{accmm} \bigskip \end{center} \end{figure} Two classes of models are used to parameterize the shape of the primary $b\to c \ell \nu$ contribution. Parton models, such as the ACCMM model\cite{ACCM}, assume a Gaussian smearing in momentum space for the parton in the meson, and leave the Fermi momentum of the b quark and the charm quark mass as free parameters to be determined by the data. The model of ISGW \cite{ISGW} is an example of the second class. 
These models are called exclusive models since here the $b\to c \ell \nu$ transition is described as the sum of $B\to D \ell \nu$, $B\to D^* \ell \nu$ and $B\to D^{**} \ell \nu$ channels. The form factors and rate for each channel are calculated using a simple quark model\cite{ISGW}. A fit to the lepton spectrum for muons and electrons from the CLEO~II experiment using the ACCMM model is shown in Fig.~\ref{accmm}(a). The ACCMM model gives a good fit to the data while the ISGW model gives a fit with a somewhat poor $\chi^2$/dof. The CLEO~II experiment chooses to remedy this defect of the exclusive model (as did CLEO~1.5) by fixing the ratio of the vector to pseudoscalar contributions and allowing the normalization of the $B\to D^{**} \ell \nu$ component to float in the fit. The result is a fit with an improved $\chi^2$/dof and with the fraction $B\to D^{**} \ell \nu$/$b\to c \ell\nu$ determined to be $21.2\pm 1.6\pm 8.0\%$. This result will be compared in Section~\ref{exclbdstl} to other methods of measuring the $B\to D^{**}\ell\nu$ branching fraction. A new measurement technique using events with two leptons was introduced by the ARGUS experiment\cite{Argus2l} which significantly reduces the model dependence associated with the subtraction of the cascade component. A high momentum lepton is selected ($p_l>1.4$ GeV) which tags a primary decay. This primary lepton is then combined with an additional lepton candidate which has a momentum above $0.5$ GeV. In the absence of mixing, if the second lepton has a sign opposite to the tagging lepton it is a primary lepton, while if the second lepton has the same sign as the tag it is a secondary lepton. Since the threshold for muon detection is about $1.4$ GeV, dielectrons are used in this technique. A small background from dileptons which originate from a single B can be removed by using the angular correlation between the leptons.
By momentum conservation, dileptons from the same B at the $\Upsilon(4S)$ will be approximately back to back while dileptons from different B's will be uncorrelated (Fig.~\ref{primsec}(a)). A more refined angular correlation as a function of momentum is used in the CLEO~II analysis\cite{Cleo2l}. Including the effect of mixing gives the following relations between the unlike and like sign spectra and the primary and secondary branching fractions ($\displaystyle{{d{\cal B}(b)}\over {dp}}$, $\displaystyle{{d{\cal B}(c)}\over {dp}}$), $$ {d N(\ell^{\pm} e^{\mp}) \over dp} = N_\ell \epsilon_1 (p) \epsilon_2 (p) [ {d {\cal B}(b) \over dp} (1- \chi) + {d {\cal B}(c) \over dp} \chi ]$$ $$ {d N(\ell^{\pm} e^{\pm}) \over dp} = N_\ell \epsilon_1 (p) [ {d {\cal B}(b) \over dp} \chi + {d {\cal B}(c) \over dp} (1-\chi) ]$$ Here $\epsilon_1(p)$ is the efficiency of lepton identification, $\epsilon_2 (p)$ is the efficiency of the angular correlation cut, and $\chi$ accounts for $B-\bar{B}$ mixing. Note that, in the absence of mixing, a second lepton with charge opposite to that of the first must be a primary lepton, while a second lepton with the same charge is a cascade lepton. By applying this method it is therefore possible to determine the yield of primary and cascade leptons for each momentum bin. \begin{figure}[htb] \begin{center} \unitlength 1.0in \begin{picture}(3.,2.5)(0,0) \put(-3.5,-1.3) {\psfig{width=10.0in,height=12.5in,file=primsec_fig_new.ps}} \end{picture} \vskip 8 mm \caption{Model independent analysis of the dielectron momentum spectrum from the CLEO~II experiment: (a) Distribution of the angle between the two leptons in a Monte Carlo simulation. (b) The electron momentum spectrum in data.
The contributions from primary (filled circles) and secondary electrons (open circles) are shown separately.} \label{primsec} \end{center} \end{figure} The results of the CLEO~II analysis \cite{Cleo2l}, based on a data sample of 2.07 fb$^{-1}$ taken on the $\Upsilon(4S)$ and 0.99 fb$^{-1}$ taken just below the resonance, are shown in Fig.~\ref{primsec}(b). The measured electron momentum spectrum extends down to 600 MeV and there is only a small extrapolation to zero momentum. The unmeasured part of the spectrum amounts to only $5.8\pm 0.5$\% of the total semileptonic rate and hence the model dependence is small. A correction has to be applied for the small contamination of cascade leptons with momenta above 1.4 GeV ($\approx 2.8$\%). Once the leptons from $B$ decays have been isolated using any of the methods discussed above, the semileptonic branching ratio is determined by integrating over the $b \to c\ell \nu$ momentum spectrum. The results for the single lepton measurements and for the dilepton analyses are given in Tables~\ref{Tbbsl} and \ref{Tbbsll}. CLEO~II finds $$ {\cal{B}}_{sl} \; = \; (10.36 \, \pm \, 0.17 \, \pm \, 0.40)\, \% $$ where the systematic error includes the uncertainties in the electron identification efficiency, tracking efficiency, and the $B^0\bar{B}^0$ mixing rate. The small model dependence introduced when extrapolating from $0.5$ GeV to zero momentum is determined by comparing the results obtained using the ACCMM and ISGW models and is included in the quoted systematic error. 
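Bin by bin, the like- and unlike-sign relations quoted above form a $2\times2$ linear system that can be inverted for the primary and cascade spectra. A minimal sketch following the printed relations (the function and variable names are illustrative, not from the CLEO analysis):

```python
def unfold_dilepton_bin(n_unlike, n_like, n_tags, eps1, eps2, chi):
    """Invert the unlike/like-sign dilepton relations for one momentum
    bin, returning (dB(b)/dp, dB(c)/dp).

    n_unlike, n_like : opposite- and same-sign dilepton yields
    n_tags           : number of tagging leptons N_l
    eps1, eps2       : lepton-ID and angular-cut efficiencies
    chi              : time-integrated B0-B0bar mixing probability
    """
    # Efficiency-corrected yields per tag (eps2 enters only the
    # unlike-sign relation, as in the text).
    u = n_unlike / (n_tags * eps1 * eps2)
    s = n_like / (n_tags * eps1)
    # [u]   [1-chi   chi ] [b]
    # [s] = [ chi   1-chi] [c]   -> invert the 2x2 mixing matrix
    det = 1.0 - 2.0 * chi          # singular only at chi = 0.5
    b = ((1.0 - chi) * u - chi * s) / det
    c = ((1.0 - chi) * s - chi * u) / det
    return b, c
```

A round trip with invented yields (generate spectra from assumed $b$, $c$ values, then unfold) recovers the inputs exactly, which is a quick check of the inversion.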
\begin{table}[htb] \caption{Inclusive semileptonic branching ratios in [\%] determined from an analysis of the yield of single leptons ($1 \ell$).} \label{Tbbsl} \begin{tabular}{llll} Experiment & ACCMM & ISGW & ISGW$^{**}$ \\ \hline ARGUS (1 $\ell$)& $10.2\pm 0.5 \pm 0.2$ & $ 9.8\pm 0.5$ & \\ CRYSTAL BALL (1 $\ell$)& $12.0\pm 0.5\pm 0.7$ & $ 11.9\pm 0.4\pm 0.7$ & \\ CUSB~II (1 $\ell$)& $10.0\pm 0.4 \pm 0.3$ & $ 10.0\pm 0.4\pm 0.3$ & \\ CLEO 1.5 (1 $\ell$)& $10.5\pm 0.2 \pm 0.4$ & $ 9.9\pm 0.1\pm 0.4$ & $11.2\pm 0.3\pm 0.4$ \\ CLEO~II (1 $\ell$)& $10.65\pm 0.05 \pm 0.33$ & $ 10.41\pm 0.07\pm 0.33$ & $10.87\pm 0.10 \pm 0.33$\\ \hline Average (1 $\ell$)& $10.51\pm0.21$& $10.21\pm 0.20$& $10.98\pm 0.28$ \\ \end{tabular} \end{table} \begin{table}[htb] \caption{Inclusive semileptonic branching ratios in [\%] determined using dilepton events ($2 \ell$); this method has less statistical power but much reduced model dependence.} \label{Tbbsll} \begin{tabular}{ll} Experiment & ${\cal{B}}_{sl}$\\ \hline ARGUS (2 $\ell$)& $9.1\pm 0.5\pm 0.4$ \\ CLEO~II (2 $\ell$)& $10.36\pm 0.17\pm 0.40$ \\ \hline Average (2 $\ell$)& $9.96\pm 0.36 $\\ \end{tabular} \end{table} The results obtained from the dilepton method are consistent with the results obtained using the single lepton technique and show that there is no large systematic problem associated with the subtraction of the cascade component. Unlike the single lepton measurement, the measurement using the dilepton technique does not require the assumption that the $\Upsilon (4S)$ resonance always decays to pairs of B mesons. The agreement between the CLEO~II results for the dilepton analysis and the single lepton result can also be used to constrain possible non-$B\bar{B}$ decays of the $\Upsilon(4S)$. The 95\% confidence level upper limit on the fraction of these unconventional decays is 0.05~\cite{Cleo2l}. The dilepton method also gives a measurement of the cascade electron spectrum.
This can be compared to the ACCMM and ISGW models and the earlier DELCO \cite{Delco} measurement of the $D$ semileptonic momentum spectrum. CLEO~II finds ${\cal{B}}(b\to c\to se\nu)\, = \, (7.7 \pm 0.3 \pm 1.2)$\% using the ACCMM model and $(8.3 \pm 0.3 \pm 1.2)$\% for the ISGW model. Within errors, these results are consistent with the expectations, and with the CLEO~II single lepton measurement. \subsubsection{The semileptonic branching fractions of the $B^-$ and $\bar{B}^0$ mesons} The semileptonic branching fractions reported so far are truly inclusive in the sense that no attempt is made to distinguish different $B$ meson flavors. Separate semileptonic branching fractions for charged and neutral $B$ mesons, ${\cal B}(\bar{B}^0\to X \ell^- \nu)$ and ${\cal B}(B^-\to X \ell^- \nu)$, have been determined by measuring the lepton yield in events with fully or partly reconstructed $B$ mesons. Measurements of ${\cal B}(\bar{B}^0\to X \ell^- \nu)$ were reported by the CLEO~1.5 and ARGUS experiments\cite{Hend1l},\cite{argusdlmix}. Simultaneous measurement of the separate branching fractions has been accomplished by CLEO~II using its large sample of reconstructed B mesons\cite{cleoiitptz}. In the CLEO~II analysis \cite{lambrecht},\cite{saulnier}, neutral B mesons are reconstructed using the modes $\bar{B}^0\to D^{(*)+} \pi^- $, $\bar{B}^0\to D^{(*)+} \rho^-$, $\bar{B}^0\to D^{(*)+} a_1^-$, $\bar{B}^0\to \psi K^{(*)0}$, and partially reconstructed $\bar{B}^0\to D^{*+} \ell^-\nu$ and $\bar{B}^0\to D^{*+}\pi^-$, yielding a total of $8456\pm 152$ $\bar{B}^0$ tags. The modes $B^- \to D^{(*)0} \pi^-$, $B^- \to D^{(*)0} \rho^-$, $B^- \to D^{(*)0} a_1^-$, and $B^-\to \psi K^{(*)-}$ are used to reconstruct $834\pm 42$ charged B mesons.
\begin{table}[htb] \caption{Measurements of the $\bar{B}^0$ and $B^-$ Semileptonic Branching Fractions [\%].} \label{Tmbzbpsemi} \begin{tabular}{lll} Experiment& ${\cal B}(\bar{B}^0\to X \ell^- \nu)$ & ${\cal B}(B^-\to X \ell^- \nu)$ \\ \hline CLEO~1.5 \cite{Hend1l}& $9.9\pm 3.0\pm 0.9$ & \\ ARGUS \cite{argusdlmix}& $9.3\pm 1.1\pm 1.15$ & \\ CLEO~II \cite{cleoiitptz}& $10.9\pm 0.7\pm 1.1$ & $10.1\pm 1.8 \pm 1.4$\\ \hline Average & $10.2 \pm 1.0$ & $10.1\pm 1.8\pm 1.4$ \end{tabular} \end{table} The yield of leptons above background for momenta above 1.4 GeV in the $\bar{B}^0$ and $B^-$ samples is extrapolated to zero momentum using the ISGW$^{**}$ model\cite{ISGW}. A correction is then applied for $B-\bar{B}$ mixing. This leads to the measurements of branching fractions given in Table~\ref{Tmbzbpsemi}. These measurements confirm, albeit with lower statistical precision, the other experimental indications that ${\cal B}(B\to X l\nu)$ lies below the theoretical lower bound of 12.5\%. \subsubsection{Measurements of ${\cal{B}}_{sl}$ on the $Z^0$ Resonance} The LEP experiments have determined the semileptonic $b$ branching fraction using dilepton events. Single lepton events could be used if Standard Model Z couplings were assumed. A $b$-enriched sample is prepared by selecting events containing a lepton with large transverse momentum, $p_{\perp}$. The semileptonic branching fraction ${\cal{B}}_{sl}$ is then extracted by simultaneously fitting the lepton momentum and $p_{\perp}$ distributions. The shape of the $B \to X\ell \nu$ spectrum is taken from the $\Upsilon(4S)$ measurements, which causes the LEP results to suffer from a similar model dependence. The LEP results are summarized in Table~\ref{Tbsemilep}\cite{ALEPH1l}, \cite{OPAL1l},\cite{DELPHI1l},\cite{L31l}. The second systematic error quoted in the individual measurements is from model dependence. The average value of ${\cal{B}}_{sl}\, =\, 11.3\pm0.3\pm0.4$\% is consistent with the results from CLEO and ARGUS.
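The averages quoted in this section can be reproduced, to good approximation, by an inverse-variance weighted mean with statistical and systematic errors combined in quadrature; this sketch ignores correlations between experiments, which a proper combination would have to treat:

```python
import math

def weighted_average(measurements):
    """Inverse-variance weighted mean of (value, stat, syst) triples,
    with stat and syst combined in quadrature per measurement.
    Correlations between the inputs are ignored."""
    weights = [1.0 / (stat**2 + syst**2) for _, stat, syst in measurements]
    total = sum(weights)
    mean = sum(w * v for w, (v, _, _) in zip(weights, measurements)) / total
    return mean, 1.0 / math.sqrt(total)

# Dilepton B_sl results (ARGUS, CLEO II), in %:
avg, err = weighted_average([(9.1, 0.5, 0.4), (10.36, 0.17, 0.40)])
# reproduces the tabulated dilepton average of (9.96 +- 0.36)%
```

Applied to the two dilepton measurements, this gives back the quoted $(9.96 \pm 0.36)\%$, which suggests the tabulated averages were formed in essentially this way.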
\begin{table}[htb] \caption{Branching fractions ($\%$) for inclusive semileptonic $b$ decay from LEP.} \label{Tbsemilep} \begin{tabular}{ll} Experiment & Branching Fraction \\ \hline ALEPH & $11.40\pm 0.33\pm 0.37 \pm 0.20$ \\ OPAL & $10.5\pm 0.6\pm 0.4\pm 0.4$ \\ DELPHI & $11.41\pm 0.45 \pm 0.50 \pm 0.31$ \\ L3 & $11.73\pm 0.48 \pm 0.28\pm 0.31$ \\ \hline LEP Average& $11.3\pm 0.3 \pm 0.4$ \\ \end{tabular} \end{table} Assuming the semileptonic decay width is the same for all $b$ flavored hadrons, the semileptonic branching ratio should be slightly different at LEP since other $b$-particles are produced: $$ {\cal{B}}_{sl}(\Upsilon(4S))\; = \; \frac{\Gamma_{sl}}{\Gamma_{tot}} \; = \; \Gamma_{sl}\times\frac{(\tau_{B^+}+\tau_{B^0})}{2} $$ while $$ {\cal{B}}_{sl}(Z^0)\; = \; \Gamma_{sl}\times \tau_b $$ Using the world averages for lifetimes determined earlier, this gives \begin{eqnarray*} {\cal{B}}_{sl}(Z^0) & = & \frac{2\tau_b}{(\tau_{B^+}+\tau_{B^0})}\times {\cal{B}}_{sl}(\Upsilon(4S))\\ & = & 9.77\pm 0.37\% \end{eqnarray*} Note that the contribution of other hadrons {\it reduces} the expected average semileptonic branching fraction at the $Z^0$. This prediction is below the experimental average from LEP but the errors are still too large to draw any significant conclusions. \subsubsection{Measurement of $b\to X\tau \nu$} \label{taunew} The branching fraction for $B\to X \tau \nu$ has been measured by several LEP experiments by using the $\tau \to {\rm hadron}~ \bar{\nu}$ decay mode and the large missing energy which is characteristic of this decay mode\cite{ALEPHxtnu},\cite{ALEPHxtnu1},\cite{L3xtnu}. After applying standard selection procedures for $Z \to b\bar{b}$ decays, events containing electrons or muons are rejected in order to remove conventional semileptonic $B$ decays.
Comparing the remaining data with a detailed Monte Carlo simulation, which does not include $B \to X\tau \nu$ decays, yields an excess from which the branching fraction is determined to be $$ {\cal{B}}(B\to X\tau\nu)\; = \; 2.75\pm 0.30 \pm 0.37\, \% \; \;, {\rm ~ALEPH} $$ $$ {\cal{B}}(B\to X\tau\nu)\; = \; 2.4\pm 0.7 \pm 0.8\, \% \; \;, {\rm ~L3} $$ These measurements are consistent with the Standard Model expectation of $2.3\pm 0.25$\% \cite{falktaunu},\cite{seagal}. The measurement of $B\to X~\tau~\nu$ imposes the constraint $\tan \beta \, < \, 0.4\times m_H/$GeV at the 90\% confidence level on the (model II) charged Higgs bosons which occur in various extensions of the Standard Model including the Minimal Supersymmetric Standard Model (MSSM) \cite{ALEPHxtnu1}. The $B\to X\tau\nu$ mode is difficult to isolate at threshold experiments because of the overlap with decay products from the second $B$ and the low energy of the final state lepton/hadron from the $\tau$ decay. \subsection{Exclusive Semileptonic Transitions.} \label{exclbdstl} The determination of Cabibbo-Kobayashi-Maskawa matrix elements is one of the central experimental problems in heavy quark physics. The inclusive semileptonic branching ratio discussed in the previous section can be used to determine the element $|V_{cb}|$ and the measurements are now quite precise, with experimental uncertainty below the 5\% level. However, the conversion of the resulting semileptonic width to $|V_{cb}|$ has a fairly large theoretical uncertainty. Estimates of this uncertainty range from $5\%$ to $15\%$. By contrast, it is possible that measurements of exclusive semileptonic modes can be used to extract $|V_{cb}|$ with smaller theoretical uncertainty. In the following sections we will summarize the results obtained for the decays $B\to D\ell \nu$, $B\to D^* \ell \nu$ and $B\to D^{**} \ell \nu$.
\subsubsection{Measurements of ${\cal{B}}(B\to D^*\ell \nu)$} The mode $B\to D^* \ell \nu$ is preferred experimentally to the mode $B\to D \ell \nu$ since the addition of the $D^*$ constraint allows the isolation of a large and clean experimental signal. ARGUS, CLEO~1.5, CLEO~II, and ALEPH have reported signals in $\bar{B}^0 \to D^{*+} \ell^- \nu$ with $D^*\to D^0 \pi^+$ and $D^0 \to K^- \pi^+$. CLEO~II can also observe the decay chain $B^- \to D^{*0} \ell^- \nu$ with $D^* \to D^0 \pi^0$. In threshold experiments, the pion from the $D^*$ decay has a momentum below $225$ MeV in the laboratory and is often referred to as the slow pion (denoted $\pi_s$). For the lower portion of the momentum range accessible to the slow pion, the large curvature of the track in the high magnetic field and the multiple orbits of its trajectory complicate track reconstruction. It is possible to reconstruct slow neutral pions; however, the combinatorial background is larger. For high energy experiments, the charged slow pion is boosted so that there is full acceptance for the entire momentum range. In the decay of the $\Upsilon (4S)$ resonance, B mesons are produced in pairs with momenta of about $330$ MeV. The signals at ARGUS and CLEO are isolated using the kinematic constraints from production at threshold. The effective mass of the neutrino in the decay $B\to D^* \ell \nu$ is given by $$m_{\nu}^2 = (E_B - E_{D^* l})^2 - |p_{B}|^2 - |p_{D^* l}|^2 + 2 |p_{B}| |p_{D^* l}| \cos\Theta$$ where $(E_{B}, p_{B})$ is the B meson 4-momentum, $(E_{D^* l}, p_{D^* l})$ is the sum of the $D^*$ and lepton 4-momenta, and $\Theta$ is the angle between the 3-momenta $p_{D^* l}$ and $p_{B}$. The first three terms in the expression for $m_{\nu}^2$ are the missing mass squared, denoted $MM^2$. The factor multiplying $\cos\Theta$ will be denoted $C$.
Since the direction of the B momentum cannot be measured, a common approximation is to set $|p_B|=0$ in the above expression and then substitute the precisely known beam energy $E_{beam}$ for $E_B$. Then, the missing mass squared becomes $$ MM^2 = (E_{beam} - E_{D^* l})^2 - |p_{D^* l}|^2 $$ and the signal will peak at $MM^2 = 0$ with a width determined by the B momentum. A variety of methods have been used to measure the signal yield. The ARGUS experiment, which first isolated a signal in $\bar{B}^0\to D^{*+} l^- \nu$, measured the excess in the background subtracted $MM^2$ distribution. Their fit allows for contributions from $\bar{B}^0\to D^{*+}\ell^-\nu$ and $\bar{B}\to D^{**}\ell^-\nu$, $D^{**}\to D^{*+}(\pi)$. The CLEO~1.5 experiment also used this method. The CLEO~II experiment has chosen a different technique to determine the $\bar{B}\to D^* \ell \nu$ branching fraction which uses slightly more of the available information. For lepton momenta above 1.4 GeV, correctly reconstructed $B\to D^* \ell \nu$ decays must lie in a triangular region in the plane of $MM^2$ and $C$. A cut on the $D^*-D$ mass difference is imposed, and the $D^0$ invariant mass spectrum (shown in Figure~\ref{dstlnu1}) is fitted to extract the number of B candidates. The largest background is due to combinations of incorrectly reconstructed $D^*$s and real leptons. This background is subtracted using the sidebands of the $D^*-D$ mass difference. There is also a small background from uncorrelated combinations of correctly reconstructed $D^*$s and leptons, which can be estimated from data. A small correction for background from non-resonant processes (continuum) and misidentified leptons is also included.
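The missing-mass variable above is straightforward to compute. A sketch with illustrative kinematics (the numerical values are invented for the example; only the $|p_B| = 0$, $E_B = E_{beam}$ approximation is taken from the text):

```python
def missing_mass_sq(e_beam, e_dstar_lep, p_dstar_lep):
    """MM^2 = (E_beam - E_{D*l})^2 - |p_{D*l}|^2, i.e. the neutrino
    mass squared in the |p_B| = 0, E_B = E_beam approximation (GeV^2)."""
    return (e_beam - e_dstar_lep) ** 2 - p_dstar_lep ** 2

# Illustrative signal event at the Y(4S) (E_beam ~ 5.29 GeV): for a B
# exactly at rest, a massless neutrino recoiling against the D*-lepton
# system has |p_{D*l}| = E_beam - E_{D*l}, so MM^2 vanishes.
mm2_signal = missing_mass_sq(5.29, 4.00, 1.29)   # ~ 0: peaks at zero
# The ~330 MeV B momentum smears this peak rather than shifting it.
```

For a correctly reconstructed signal event the value clusters near zero, while events with a lost particle migrate to positive $MM^2$, which is the separation the text exploits.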
\begin{figure}[htb] \begin{center} \unitlength 1.0in \begin{picture}(3.,3.0)(0,0) \put(-0.31,0.0) {\psfig{bbllx=0pt,bblly=0pt,width=3.3in,height=2.9in,file=bexcl_dstlnu1.ps}} \end{picture} \vskip 4 mm \caption{CLEO II $D^0$ mass distributions for (a) $\bar{B^0}\to D^{*+} \ell^- \nu$ candidates and (b) $B^-\to D^{*0} \ell^- \nu$ candidates.} \label{dstlnu1} \end{center} \end{figure} The resulting signal yield is due to $B\to D^* (X) \ell \nu$ events. After removing all backgrounds, CLEO~II finds $376\pm 27 \pm 16$ $\bar{B^0}\to D^{*+} \ell^- \nu$ events and $302\pm 32 \pm 13$ $B^- \to D^{*0} \ell^- \nu$ events as shown in Figure~\ref{dstlnu1}. This sample is also used to evaluate $|V_{cb}|$ using the HQET inspired method (see section~\ref{vcbhqet}). Larger event samples and significantly better statistical precision can be obtained using a partial reconstruction technique as demonstrated by the ARGUS analysis of $B\to D^{*+} \ell \nu$. In this case, only the low momentum pion from the $D^{*+}$ decay and the lepton are detected. The momentum of the undetected $D^0$ meson can be deduced from the direction of the slow pion and kinematic constraints. The momentum of the $D^*$ meson is approximately $\alpha p_{\pi}+\beta$, where $\alpha$ and $\beta$ are constants which can be determined from Monte Carlo (also see discussion in section~\ref{bdbdbmix}). The signal yield is determined using a modified form of $MM^2$ with the estimated $D^*$ direction replacing the measured $D^*$ direction in the expression above. The systematic error from background subtraction, which is estimated using the wrong sign sample, must be evaluated with care. \begin{table}[htb] \caption{Measurements of branching fractions($\%$) for exclusive semileptonic B decay with a $D$ or $D^*$ in the final state. The symbol $\dagger$ indicates the branching ratio for this mode was measured using a partial reconstruction technique. 
Due to the complexity of the analysis procedure, those measurements marked with a $^{*}$ cannot be renormalized to take into account the new values of the $D$ and $D^*$ branching fractions.} \label{Tbsemiexcl} \begin{tabular}{lllll} Mode & CLEO 1.5 & ARGUS & CLEO~II & ALEPH\\ \hline $ \bar{B}^0\to D^{*+} \ell^- \nu $& $4.1\pm 0.5\pm 0.7$& $4.7\pm 0.6\pm 0.6$& $4.49\pm 0.32\pm 0.39$& $5.36\pm 0.50\pm 0.76$\\ $ \bar{B}^0\to D^{*+} \ell^- \nu $& & $4.5\pm 0.3\pm 0.4$ $^{\dagger}$& & \\ ${B}^- \to D^{*0}l^-\nu$& $4.1\pm 0.8^{+0.8}_{-0.9}$ $^{*}$& $6.8\pm 1.6\pm 1.5$& $5.13\pm 0.54\pm 0.64$& \\ $ \bar{B}^0\to D^{+} \ell^- \nu $& $1.8\pm 0.6\pm 0.3$ $^{*}$& $2.1\pm 0.7\pm 0.6$& & \\ $ B^-\to D^{0} \ell^- \nu $& $1.6\pm 0.6\pm 0.3$ $^{*}$ & $1.4\pm 0.6\pm 0.5$ $^{*}$ & & \\ \end{tabular} \end{table} By comparing the branching ratios for $\bar{B}^0\to D^{*+} \ell^- \nu$ and $B^- \to D^{*0} \ell^- \nu$ and using measurements of the ratio of lifetimes from collider experiments, CLEO~II obtains the ratio of the production of $B^+ B^-$ and $B^0 \bar{B}^0$ meson pairs at the $\Upsilon (4 S)$ resonance, $f_{+}/f_{0}=1.04\pm 0.13 (stat) \pm 0.12 (sys) \pm 0.10$ (lifetime~ratio). This confirms to an accuracy of about 15\% the initial assumption that charged and neutral B meson pairs are produced in equal numbers. The small value of the $B^+ - B^0$ mass difference, $0.2 \pm 0.3$ MeV, discussed in Section~\ref{bmss} also supports this conclusion. At LEP, the kinematic constraints from production at threshold are not available. However, the B direction can be measured from the vector between the production point and the detached vertex from the B decay. The decay products of the two B hadrons are cleanly separated into jets. In addition, the neutrino energy can be crudely determined from a missing energy measurement. These features have allowed ALEPH to measure the $\bar{B}^0\to D^* l \nu$ branching fraction\cite{scott}.
The dominant systematic error in the threshold experiments is due to the uncertainty in the slow pion detection efficiencies, while the LEP experiments are limited by the sizeable uncertainty in the number of B mesons produced in $Z^0$ decay which is used for the normalization of the branching fraction ($\sim 8\%$). The uncertainty from the $D^{**}$ background is much smaller ($\sim 3\%$) \cite{scott}. Since the publication of the CLEO 1.5 and ARGUS results, the $D$ and $D^*$ branching ratios have changed significantly. Wherever possible, the published values for semileptonic branching fractions have been rescaled to accommodate the new charm branching fractions. For some results, which are marked with an asterisk in Table~\ref{Tbsemiexcl}, insufficient information to perform the rescaling was provided in the original papers. A separate systematic error for the contribution of the charm branching fractions is quoted in the world averages. \begin{figure}[htb] \begin{center} \unitlength 1.0in \begin{picture}(3.,3.0)(0,0) \put(-0.51,0.0) {\psfig{bbllx=0pt,bblly=0pt,width=4.5in,height=4.5in,file=aleph_dsstar.ps}} \end{picture} \vskip 10 mm \caption{ Distributions from the ALEPH experiment for $\Delta m^*\; = \; m_{D^{*+}\pi^-} - m_{D^{*+}}$ for the right sign (a) and wrong sign (b) $D^*\pi$ combinations.
The right sign spectrum is fitted to the sum of a Breit-Wigner signal and a background shape.} \label{aleph_dsstar} \end{center} \end{figure} \subsubsection{Measurements of ${\cal{B}}(B\to D \ell \nu)$} Measurements of the modes $B^+\to D^+ \ell^- \nu$ and $\bar{B}^0\to D^0 \ell^- \nu$ are experimentally difficult because of the significant combinatorial backgrounds in the $D^+\to K^- \pi^+ \pi^+$ and $D^0\to K^- \pi^+$ signals and the large backgrounds from the decay chain $B\to D^* \ell \nu$, $D^*\to D (\pi,\gamma)$ as well as $B\to D^{**} \ell \nu$, followed by $D^{**}\to D^* (\pi)$ or $D^*\to D (\pi,\gamma)$ where the $\pi$ (or $\gamma$) from the $D^*$ decay is not reconstructed. The separation of the latter two backgrounds, which yield the same final state particles, is accomplished using their slightly different behaviours as a function of lepton momentum and D momentum. In addition, the presence of an additional pion shifts the center of the missing mass distribution. In some analyses, additional constraints are provided by requiring that the $D^{*(+,0)} \ell \nu$ fraction be consistent with the branching fraction from the dedicated exclusive measurement. \subsubsection{Measurements of ${\cal{B}}(B\to D^{**}\ell \nu)$} Semileptonic $B$ decay to orbitally excited $D$ states has been searched for at LEP and by the $\Upsilon (4S)$ experiments. For experiments at threshold the background from $B\to D^{**} \ell \nu \to D^* (\pi) \ell \nu$ (where the $(\pi)$ from the $D^{**}$ decay is not detected) is small and manageable. The background from events with $D^* \ell \nu$ and additional pion(s) in the final state peaks at higher values of MM$^2$ and has a characteristically soft lepton momentum spectrum. Thus, ARGUS and CLEO~1.5 found that this background could be separated statistically. 
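The positive shift of the missing mass noted above is easy to verify: if the pion from the $D^{**}$ decay escapes detection, the missing system is $\nu\pi$ rather than the neutrino alone, and, using nothing beyond four-momentum conservation, $$ MM^2 \simeq m_{\nu\pi}^2 \; = \; m_{\pi}^2 + 2\,(E_{\nu} E_{\pi} - \vec{p}_{\nu}\cdot\vec{p}_{\pi}) \; \ge \; m_{\pi}^2 \; > \; 0 $$ since $E_{\nu} E_{\pi} \ge |\vec{p}_{\nu}||\vec{p}_{\pi}| \ge \vec{p}_{\nu}\cdot\vec{p}_{\pi}$ for a massless neutrino. The $D^*(\pi)\ell\nu$ background therefore peaks at positive $MM^2$, while the signal peaks at zero.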
ARGUS finds a sizeable signal for $B\to D^{**}\ell\nu$, $D^{**}\to D^*(\pi)$ which includes significant resonant $D_1(2420)\ell\nu$ and other resonant modes as well as non-resonant channels. In the recent CLEO~II analysis of $\bar{B}\to D^*\ell\nu$, the background from final states with additional pions is determined by examining $D^*$-lepton combinations with $p$(lepton) in the range $0.8-1.4$ GeV in the portion of the $C$-$MM^2$ plane which is preferentially populated by $B\to D^{**} X \ell \nu$ decays. CLEO~II finds a modest excess in this region which corresponds to a model dependent upper limit of ${\cal{B}}(B\to D^{**} X \ell \nu)< 2.8\% $ at the 95\% confidence level. \begin{figure}[htb] \begin{center} \unitlength 1.0in \begin{picture}(3.,3.5)(0,0) \put(0.0,-0.5) {\psfig{bbllx=0pt,bblly=0pt,width=4.0in,height=5.0in,file=opal_dstarstar_fig7.ps}} \end{picture} \caption{$D^{(*)}\pi \, - \, D^{(*)}$ mass difference distributions from OPAL for a) $D^{*+}\pi^-$ combinations, b) $D^{+}\pi^-$ combinations, and c) $D^{0}\pi^-$ combinations.} \label{dstarstar} \end{center} \end{figure} The use of solid state vertex detectors has allowed the experiments at LEP to isolate signals for $\bar{B}^0\to D^{*+} \pi^- X l \nu$ where the additional charged pion is vertexed with the slow pion from the $D^*$ decay and the lepton. The ALEPH, OPAL, and DELPHI experiments have reported signals\cite{carpinelli}; the OPAL results are shown in Figure \ref{dstarstar}. The quantity plotted is the mass difference between the $D^{(*)}\pi$ system and the charmed meson ($D^*$ or $D$), which has a better experimental resolution than could be obtained from the $D^{(*)}\pi$ invariant mass distribution.
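The resolution advantage of the mass-difference variable can be illustrated numerically: an error on the reconstructed $D^{(*)}$ four-momentum largely cancels between $m_{D^{(*)}\pi}$ and $m_{D^{(*)}}$. A crude sketch (the four-vectors and the 10 MeV mismeasurement are invented for the example):

```python
import math

def inv_mass(*four_vectors):
    """Invariant mass of a set of (E, px, py, pz) four-vectors (GeV)."""
    e, px, py, pz = (sum(v[i] for v in four_vectors) for i in range(4))
    return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

def delta_m(p_dstar, p_pi):
    """Mass difference m(D* pi) - m(D*), recomputed from the same
    measured D* four-vector so that its errors largely cancel."""
    return inv_mass(p_dstar, p_pi) - inv_mass(p_dstar)

# A D*+ (m ~ 2.010 GeV) taken at rest, combined with a 300 MeV/c pion
# (m_pi ~ 0.1396 GeV):
dstar = (2.010, 0.0, 0.0, 0.0)
pi = (math.sqrt(0.1396**2 + 0.3**2), 0.0, 0.0, 0.3)
dm = delta_m(dstar, pi)

# Shift the D* energy by +10 MeV to mimic a mismeasurement: the
# D*pi mass moves by the full ~10 MeV, the mass difference barely moves.
dstar_biased = (2.020, 0.0, 0.0, 0.0)
shift_m = inv_mass(dstar_biased, pi) - inv_mass(dstar, pi)
shift_dm = delta_m(dstar_biased, pi) - delta_m(dstar, pi)
```

In this toy the $D^{(*)}\pi$ mass shifts by essentially the full bias while $\Delta m$ shifts by less than a tenth of an MeV, which is the cancellation the experiments rely on.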
Both ALEPH and OPAL have reported measurements of the product ${\cal B}(b\to B)\times {\cal B}(\bar{B}\to D_1(2420)X \ell^- \nu) \times {\cal B}(D_1(2420)\to D^{*+}\pi^-)$ and quote ${\cal B}(\bar{B}\to D_1(2420)X \ell^- \nu)$ assuming ${\cal B}(b\to B)=0.37$ and ${\cal B}(D_1(2420)\to D^{*+} \pi^-)\sim 0.67$. In addition, OPAL has reported a significant signal for ${\cal B}(\bar{B}\to D_2^*(2470) X \ell^- \nu)$ using $D_2^* (2470)\to D^0 \pi^+$. These signals may include one or more undetected pions, denoted $X$. ALEPH has also performed an inclusive topological measurement which is sensitive to $\bar{B}\to D^{*+}\pi^-\ell^- \nu$ in which the $D^{*+}\pi^-$ system is non-resonant. The results on semileptonic $B\to D^{**}$ transitions are given in Table~\ref{Tbsemiexcit}. The LEP measurements and the CLEO~II upper limit are marginally consistent. The corresponding world averages can be found in Table~\ref{Tbsemiwa}. These measurements of the production of orbitally excited $D$ mesons in semileptonic decay show that the Shifman-Voloshin (SV) limit, in which the $B\to D\ell\nu$ and $B\to D^*\ell\nu$ channels saturate the semileptonic width, is not achieved\cite{svlimit}. Quark models also predict small rates for the production of p-wave $D$ mesons. For example, the ISGW2 model predicts that about 7\% of the total semileptonic rate will occur in channels with excited charmed mesons and ${\cal B}(\bar{B}\to D_1(2420) \ell\nu)\sim 0.25\%$, well below the observed rate\cite{ISGWprime}. Predictions from the model of Colangelo {\it et al.}~also give rates in the $0.1\%$ range for the largest channels\cite{colangelo}. \begin{table}[htb] \caption{Measurements of branching fractions($\%$) for exclusive semileptonic B decay with p-wave charmed mesons in the final state.
The signal may include one or more undetected pions, denoted $X$.} \label{Tbsemiexcit} \begin{tabular}{lllll} Mode & ARGUS & CLEO~II & ALEPH & OPAL \\ \hline $ B^-\to D_1^0(2420) \ell^- X \nu $& & & $0.84\pm 0.24\pm 0.14$ &$0.81\pm 0.20\pm 0.19$ \\ $ B^-\to D_2^{*0}(2460) \ell^- X \nu $& & & & $0.35\pm 0.14\pm 0.17$ \\ $ \bar{B^0}\to D_1^+(2430) \ell^- X \nu $& & & &$0.78\pm 0.28\pm 0.18$ \\ $ \bar{B^0}\to D_2^{*+}(2470) \ell^- X \nu $& & & &$0.90\pm 0.27\pm 0.21$ \\ \hline $ \bar{B}\to D^{**}(2420) \ell^- X \nu $& & & $0.84\pm 0.24\pm 0.14$ & $0.80\pm 0.24$ \\ $ \bar{B}\to D^{**}(2460) \ell^- X \nu $& & & & $0.44\pm 0.14$ \\ $ \bar{B}\to D^{*+} \pi^- \ell^- X \nu $& & & $1.08\pm 0.3\pm 0.22$ & \\ $ \bar{B}\to D^{**} \ell^- \nu $& $2.9\pm 0.5\pm 0.5$& $<2.8\%$(95\% C.L.) & & \\ \end{tabular} \end{table} \begin{table}[htb] \caption{World average branching fractions($\%$) for exclusive semileptonic B decay. The modes marked with the symbol $\dagger$ are not included in the sum of exclusive modes as discussed in the text.} \label{Tbsemiwa} \begin{tabular}{ll} Mode & World Average \\ \hline $ \bar{B}^0\to D^{*+} \ell^- \nu $& $4.56\pm 0.27\pm 0.25$ \\ ${B}^- \to D^{*0}\ell^-\nu$& $5.31\pm 0.70\pm 0.41$ \\ $ \bar{B}^0\to D^{+} \ell^- \nu $& $2.1\pm 0.9\pm 0.14$ \\ $ B^-\to D^{0} \ell^- \nu $& $1.5\pm 0.5$ $^{\dagger}$ \\ $ \bar{B}\to D^{**} \ell^- \nu $& $2.9\pm 0.5\pm 0.5$ $^{\dagger}$\\ $ \bar{B}\to D^{**}(2420) \ell^- X \nu $& $0.82\pm 0.18\pm 0.06$ \\ $ \bar{B}\to D^{**}(2460) \ell^- X \nu $& $0.44\pm 0.14\pm 0.03$ \\ \hline Sum of exclusive semileptonic ${\cal{B}}$ & $8.05 \pm 1.7$\\ Inclusive semileptonic ${\cal{B}}$ & $10.98\pm 0.28$ \\ \end{tabular} \end{table} \subsubsection{Summary of exclusive semileptonic $b\to c$ measurements} Additional exclusive semileptonic channels have been searched for by the ARGUS \cite{ARGUSplnu} and L3 collaborations \cite{L3xtnu}.
These include modes which require either $s \bar{s}$ popping at the lower part of the spectator graph\cite{ARGUSssbar} or baryon production\cite{ARGUSplnu}. \begin{table}[htb] \caption{Measurements of Other Inclusive Semileptonic Branching Ratios.} \label{Tbslother} \begin{tabular}{lll} Experiment & Mode & B(\%) \\ \hline ARGUS & $B\to D_s X \ell^+ \nu$ & $<0.9 (90\% ~{\rm C.L.})$ \\ ARGUS & $B\to \bar{p} X \ell^+ \nu$ & $<0.16 (90\%~{\rm C.L.})$ \\ L3 & $B\to X \nu$ & $2.27\pm 0.8\pm 1.5$ \\ \end{tabular} \end{table} No signal was observed and the upper limits are summarized in Table~\ref{Tbslother}. In Table~\ref{Tbsemiwa} we sum the exclusive semileptonic modes and find a value consistent with the inclusive semileptonic branching ratio. The mode $B^- \to D^0 \ell^-\nu$ is omitted since the measurements of this mode cannot be adjusted to account for the changes in $D$ and $D^*$ branching fractions. The measurement of the mode $B\to D^{**} \ell\nu$ is also omitted from the calculation of the sum of the exclusive modes and instead the measurements of $B\to D^{**}(2420)\ell\nu$ and $B\to D^{**}(2460)\ell\nu$ are used. If the other possible approach is taken and $B\to D^{**}\ell\nu$ is used but $B\to D^{**}(2420)\ell\nu$ and $B\to D^{**}(2460)\ell\nu$ are omitted, then the sum of the exclusive modes becomes $9.4\pm 1.2$. The conclusion is the same using either approach. To determine ${\cal B}(B\to D^{**} \ell \nu)$, ARGUS and CLEO~II have assumed that the model of ISGW correctly predicts the relative fractions of $B\to D_1(2420)\ell\nu$, $B\to D_2^*(2460)\ell\nu$ and the other excited charm mesons.
This procedure can be applied to the world average for ${\cal B}(B\to D^{**}(2420) \ell \nu)$, which is the most precisely known of these rates for semileptonic decay to an excited charmed meson; this gives ${\cal B}(B\to D^{**} \ell \nu) = 1.64\pm 0.22\pm 0.11\%$, which is consistent with the value ${\cal B}(B\to D^{**} \ell \nu) = 2.3\pm 0.9\%$ from the ISGW$^{**}$ fit to the inclusive semileptonic spectrum. Model dependent extrapolations from the rate of observed $D^{**} \ell \nu$, $D^{**}\to D^* \pi$ decays appear to saturate the missing portion of the semileptonic rate. The remaining decays may correspond to $B\to D^{**} \ell \nu$ where $D^{**}$ denotes a p-wave charmed meson with a large width (e.g. the very broad but as yet unobserved $1^{3}P_1$(2490) and $1^{3}P_0$(2440) states). It is also possible that the missing decays are $B\to D \pi \ell^- \nu$ where the $D\pi$ system is non-resonant or originates from the decay of a broad excited charm meson. These possibilities are difficult to check experimentally. \subsection{The Dynamics of Semileptonic $B$ Decay} \label{bdynamics} Since leptons are not sensitive to the strong interaction, the amplitude for a semileptonic $B$ decay can be factorized into two parts, a leptonic and a hadronic current. The leptonic factor can be calculated exactly, while the hadronic part is parameterized by form factors. A simple example is the transition $B\to D \ell \nu$. The differential decay rate in this case is given by $$ {{d \Gamma} \over {d q^2}} = {G^2 \over {24\pi^3}} |V_{cb}|^2 P_D^3 f_{+}^2 (q^2)$$ where $q^2$ is the invariant mass squared of the virtual $W$ $(\ell \nu)$ and $f_{+}(q^2)$ is the single vector form factor. The form factor which describes semileptonic decay is analogous to the form factor which arises in electron-nucleon scattering. In this case, $|f_{+}(q^2)|^2$ gives the probability that the final state quarks will form a $D$ meson.
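The interplay between the phase-space factor $P_D^3$ and the form factor can be illustrated with a short numerical sketch. The single-pole form for $f_+(q^2)$, the pole mass, and the value of $|V_{cb}|$ below are assumptions chosen only for illustration, not inputs used elsewhere in this review.

```python
import math

# Illustrative sketch of dGamma/dq2 = G^2/(24 pi^3) |Vcb|^2 P_D^3 f_+^2(q^2)
# for B -> D l nu. The single-pole form factor and the numbers below
# (pole mass, |Vcb|) are hypothetical, for illustration only.
G_F = 1.166e-5            # Fermi constant, GeV^-2
M_B, M_D = 5.279, 1.869   # B and D meson masses, GeV
V_CB = 0.040              # assumed value

def p_D(q2):
    """D momentum in the B rest frame as a function of q^2."""
    e_D = (M_B**2 + M_D**2 - q2) / (2.0 * M_B)
    return math.sqrt(max(e_D**2 - M_D**2, 0.0))

def f_plus(q2, m_pole=6.34):
    """Hypothetical single-pole form factor, normalized so f_+(0) = 1."""
    return 1.0 / (1.0 - q2 / m_pole**2)

def dgamma_dq2(q2):
    return G_F**2 / (24.0 * math.pi**3) * V_CB**2 * p_D(q2)**3 * f_plus(q2)**2

q2_max = (M_B - M_D)**2   # zero-recoil point, ~11.6 GeV^2
```

The sketch makes the qualitative behaviour described in the text explicit: the form factor is largest at $q^2_{max}$, where the heavy quarks have the smallest relative velocity, while the differential rate itself vanishes there because $P_D \to 0$.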
The form factor is largest when the initial and final state heavy quarks have the smallest relative velocities, that is at maximum $q^2$. As $q^2$ decreases and the momentum transfer increases, the modulus of the form factor decreases. Since the $\bar{B^0}\to D^{*+} \ell^- \nu$ mode has a large branching ratio and good signal to background ratio, it is experimentally preferred to $\bar{B}\to D\ell \nu$ for form factor studies. Moreover, the theoretical predictions for this mode are thought to be reliable. The corresponding expression for the differential rate in $B\to D^* \ell \nu$ is given in section~\ref{dstlnuffexp}. In this case, there are three form factors which correspond to the three possible partial waves of the $B\to D^{*} \bar{W}^{'}$ system (here $\bar{W}^{'}$ is the virtual W which becomes the lepton-antineutrino pair). \subsubsection{Polarization in $B\to D^* \ell \nu$ decays} In the past, insufficient data was available to perform a measurement of the individual form factors in $B\to D^* \ell \nu$ decay. Various integrated quantities which give information on the form factors were determined. For example, the polarization $\alpha$ in $B\to D^* \ell\nu$ can be determined by fitting the $D^*$ helicity angle distribution which should be distributed as $1 + \alpha \cos^2\theta_{D^*}$. It is also possible to measure the forward-backward asymmetry of the lepton in the W rest frame which gives information on the ratio of the positive and negative helicity amplitudes. 
\begin{table}[htb] \caption{Measurements of integrated observables in $B\to D^* \ell \nu$ Decays.} \label{Tbinteg} \begin{tabular}{lll} Experiment & $\alpha$ & $A_{FB}$ \\ \hline CLEO~1.5 & $0.65\pm 0.66\pm 0.25$ & \\ ARGUS & $1.1\pm 0.4\pm 0.2$ & $0.20\pm 0.08\pm 0.06$\\ CLEO~II & $1.48\pm 0.32\pm 0.14$ & $0.209\pm 0.034\pm 0.015$\\ \hline World average & $1.24\pm 0.25$ & $0.208\pm 0.035$ \\ \end{tabular} \end{table} These results can be compared, for example, to the prediction of Scora of $\alpha=1.32$ in the ISGW$^{'}$ model\cite{ISGWprime} and to the HQET based predictions of Neubert $\alpha=1.37$ and $A_{FB}=0.22$. The agreement of $A_{FB}$ in sign and magnitude with quark model predictions has been used to deduce limits on a hypothetical $V+A$ coupling of the b quark\cite{Sanghera}. Other tests of models are provided by the vector to pseudoscalar ratio which is, for example, predicted to be $2.6$ in the ISGW$^{'}$ model and $2.79$ in the HQET based model of Neubert. Experimentally, this is found to be $2.17\pm 0.93$ where the large error reflects the poor precision of the $\bar{B^0}\to D^+ \ell^-\nu$ branching fraction. The measurements of integrated observables are thus in good agreement with models. Form factor measurements are a more sophisticated approach and provide better discrimination between models. In addition, all the available information is used and hence the statistical precision is improved. 
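As a cross-check, the vector-to-pseudoscalar ratio quoted above follows directly from the world-average branching fractions of Table~\ref{Tbsemiwa}. The naive quadrature error propagation below is a sketch, not the procedure behind the quoted number, but it reproduces it closely.

```python
import math

# Vector-to-pseudoscalar ratio from the world averages of Table (Tbsemiwa):
# B(B0bar -> D*+ l nu) = 4.56 +- 0.27 +- 0.25 and
# B(B0bar -> D+  l nu) = 2.1  +- 0.9  +- 0.14.
b_dst, e_dst = 4.56, math.hypot(0.27, 0.25)
b_d, e_d = 2.1, math.hypot(0.9, 0.14)

ratio = b_dst / b_d
# naive quadrature propagation of the relative errors
e_ratio = ratio * math.hypot(e_dst / b_dst, e_d / b_d)
# ratio ~ 2.17, with an error near 1, dominated by the D+ l nu uncertainty
```

The error is indeed dominated by the poorly known $\bar{B^0}\to D^+ \ell^-\nu$ branching fraction, consistent with the quoted $2.17\pm 0.93$.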
\begin{figure}[htb] \begin{center} \unitlength 1.0in \begin{picture}(3.,2.5)(0,0) \put(-0.01,0.0) {\psfig{bbllx=0pt,bblly=0pt,width=3.0in,height=2.5in,file=angles_dstlnu.ps}} \end{picture} \vskip 10 mm \caption{The kinematic variables used in the CLEO~II $B\to D^* \ell \nu$ form factor analysis.} \label{dstlnuang} \end{center} \end{figure} \subsubsection{Measurement of the $B\to D^* \ell \nu$ Form Factors} \label{dstlnuffexp} The differential decay rate for $B\to D^* \ell \nu$ can be expressed in terms of three $q^2$-dependent helicity amplitudes $H_{\pm}(q^2)$ and $H_0(q^2),$ where the subscripts refer to the helicity of either the virtual $W$ ($\ell\nu$) or the $D^*$\cite{jdr},\cite{ks89,hagiwara,gilman,ks90}. The rate is given by \begin{eqnarray} {d\Gamma\over dq^2\, d\cos{\theta_{\ell\nu}}\, d\cos{\theta_V}\, d\chi}&=& {3G_F^2|V_{cb}|^2\,P_{D^*}\,q^2\over 8(4\pi)^4 {m_B^2}}\times\nonumber\\ &&\{[(1-\cos{\theta_{\ell\nu}})^2|H_{+}(q^2)|^2+ (1+\cos{\theta_{\ell\nu}})^2|H_{-}(q^2)|^2]\sin^{2}\theta_V \nonumber\\ &&+4\sin^2\theta_{\ell\nu}\cos^2\theta_V|H_{0}(q^2)|^2\nonumber\\ &&-2\sin^2\theta_{\ell\nu}\sin^2\theta_V \cos(2\chi)H_{+}(q^2)H_{-}(q^2)\nonumber\\ &&-4\sin\theta_{\ell\nu}(1-\cos\theta_{\ell\nu}) \sin\theta_V\cos\theta_V\cos\chi\, H_{+}(q^2)H_{0}(q^2)\nonumber\\ &&+4\sin\theta_{\ell\nu}(1+\cos\theta_{\ell\nu}) \sin\theta_V\cos\theta_V\cos\chi\, H_{-}(q^2)H_{0}(q^2)\},\nonumber\\ \label{eqn:decayrate} \end{eqnarray} where $m_B$ is the mass of $B$ meson, $P_{D^*}$ is the momentum of the $D^*$ and is a function of $q^2$, and the angles $\theta_{\ell \nu}$, $\theta_V$, and $\chi$ are defined in Fig.~\ref{dstlnuang}. 
The helicity amplitudes $H_{\pm}$ and $H_{0}$ can be expressed in terms of two axial-vector form factors, $A_1 (q^2)$ and $A_2 (q^2)$, and a vector form factor $V(q^2):$ \begin{eqnarray} H_{\pm}(q^2)&=&(m_B+m_{D^*})A_1(q^2)\mp{2m_B\,P_{D^*}\over(m_B+m_{D^*})} V(q^2)\nonumber\\ H_{0}(q^2)&=&{1\over 2m_{D^*}\sqrt{q^2}}\Big[(m_B^2-m_{D^*}^2-q^2) (m_B+m_{D^*})A_1(q^2) -{4m_B^2\,P_{D^*}^2\over(m_B+m_{D^*})}A_2(q^2)\Big],\nonumber\\ \label{eqn:helamps} \end{eqnarray} \noindent where $m_{D^*}$ is the mass of the $D^*$ meson. There are a number of important and simple qualitative features that are present in equation~\ref{eqn:decayrate}. These are most easily seen after integrating over the variable $\chi$, so that the last three terms vanish. The helicity zero (longitudinal) component then has a $\cos^2\theta_{D^*}$ and a $\sin^2\theta_{\ell\nu}$ dependence. The negative and positive helicity components have a $\sin^2\theta_{D^*}$ dependence as well as a $(1+\cos\theta_{\ell\nu})^2$ and $(1-\cos\theta_{\ell\nu})^2$ behaviour, respectively. As a result of the $V-A$ coupling of the virtual $W$, the difference between the positive and negative helicity amplitudes, $H_+ - H_-$, is large and negative. This feature can be clearly observed in the scatter plot of $\chi$ versus $\cos\theta_{D^*}$\cite{jdr}. Other intuitive and useful features (see previous section) can be deduced from consideration of integrated quantities. For instance, $A_{FB}$ is controlled primarily by the form factor ratio $R_1 = V(q^2=q^2_{max})/A_1(q^2=q^2_{max})$ and the $\cos\theta_{\ell\nu}$ distribution. The size of the second independent form factor ratio, $R_2 = A_2(q^2=q^2_{max})/A_1(q^2=q^2_{max})$, is determined by the $\cos\theta_{D^*}$ distribution and to a fair extent by $\alpha$, the degree of polarization.
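The structure of equation~\ref{eqn:helamps} can be made concrete with a short numerical sketch. The constant form factor values used below are assumptions for illustration only (roughly the HQET-corrected ratios discussed later); the real form factors depend on $q^2$.

```python
import math

# Helicity amplitudes of B -> D* l nu, following Eq. (eqn:helamps).
# a1, a2, v are constant, illustrative stand-ins for A1, A2, V.
M_B, M_DST = 5.279, 2.010   # B and D* masses, GeV

def p_Dst(q2):
    """D* momentum in the B rest frame."""
    e = (M_B**2 + M_DST**2 - q2) / (2.0 * M_B)
    return math.sqrt(max(e**2 - M_DST**2, 0.0))

def helicity_amps(q2, a1=1.0, a2=0.8, v=1.3):
    """H_+, H_-, H_0 for constant (illustrative) form factors."""
    p = p_Dst(q2)
    h_p = (M_B + M_DST) * a1 - 2.0 * M_B * p / (M_B + M_DST) * v
    h_m = (M_B + M_DST) * a1 + 2.0 * M_B * p / (M_B + M_DST) * v
    h_0 = ((M_B**2 - M_DST**2 - q2) * (M_B + M_DST) * a1
           - 4.0 * M_B**2 * p**2 / (M_B + M_DST) * a2) \
          / (2.0 * M_DST * math.sqrt(q2))
    return h_p, h_m, h_0

q2_max = (M_B - M_DST)**2
```

At zero recoil ($P_{D^*}=0$, i.e. $q^2=q^2_{max}$) the three amplitudes coincide and are fixed by $A_1$ alone, while away from zero recoil $|H_-|>|H_+|$, which is the origin of the large negative $H_+ - H_-$ difference noted above.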
\begin{figure}[htb] \begin{center} \unitlength 1.0in \begin{picture}(4.,4.0)(0,0) \put(-0.5,-0.1) {\psfig{bbllx=0pt,bblly=0pt,width=4.0in,height=4.0in,file=bexcl_ff_fit.ps}} \end{picture} \vskip 10 mm \caption{The experimental distributions of kinematic variables in data compared to the fit in the CLEO~II $B\to D^* \ell \nu$ form factor analysis for: (a) $\cos\theta_{D^*}$ (b) $\cos\theta_{\ell\nu}$ (c) $q^2$ (d) $\chi$} \label{dstlnu3} \end{center} \end{figure} Using the measured values of $ q^2 /q_{max}^2$, $\cos\Theta_{D^*}$, $\cos\Theta_{\ell \nu}$ and $\chi$, a 4-dimensional unbinned maximum likelihood fit was performed using a Monte Carlo integration technique in a manner similar to reference~\cite{E691}. This technique allows a multi-dimensional likelihood fit to be performed to variables modified by experimental acceptance and resolution, and is necessary due to the substantial smearing of the kinematic variables from the motion of the B meson. The basis of the method is to determine the probability density function by using the population of appropriately weighted MC events in the four dimensional kinematic space. This is accomplished by generating one high statistics sample of MC events with a known value of the form factor ratios $R_1, R_2$ and corresponding known values of the four kinematic variables $q^2/q^2_{max}$, $\cos\Theta_{D^*}$, $\cos\Theta_W$, and $\chi$ for each event. The generated events are then processed through the full detector simulation and analysis chain. Using the generated kinematic variables, the accepted MC events are weighted by the ratio of the decay distribution for the trial values of $R_1, R_2$ to that of the generated distribution. The accepted MC events are now, therefore, distributed according to the probability density corresponding to the trial values of $R_1, R_2$. By such weighting, a likelihood may be evaluated for each data event for different values of the form factor ratios, and a fit can be performed. 
The probability for each event is determined by sampling this distribution using a search volume around each data point. The volume size is chosen so that the systematic effect from finite search volumes is small and the required number of MC events is not prohibitively high\cite{rydff},\cite{tingff}. The results of such a measurement from CLEO~II are given in Table~\ref{Tmffsemi}. The measurement of the form factor ratios from CLEO~II was obtained using a larger dataset and an improved analysis technique and supersedes the result of Ref.~\cite{Sanghera}. In the limit of pure HQET, the form factor ratios are both unity. Including $O(\bar{\Lambda}/m_c)$ corrections gives 1.3 and 0.8 for $R_1$ and $R_2$ respectively. The experimental precision on the form factor ratios is not sufficient to distinguish between these two possibilities and the models, which are listed in Table~\ref{Tbffpred}. However, the experimental results do indicate that deviations from the limit of heavy quark symmetry are not large. For the purposes of comparison between data and models, it should be noted that several of the quark models quoted in Table~\ref{Tbffpred} use a different $q^2$ dependence for their form factors than is assumed by HQET and by the CLEO~II measurement. This should not be a large effect given the small range in $q^2$ available in the reaction $B\to D^* \ell \nu$. For instance, the value of $V/A_1$ in HQET varies from $1.35$ to $1.27$ over the full kinematic range.
\begin{table}[htb] \caption{Measurements of the form factor ratios in $B\to D^* \ell \nu$ decays at $q^2=q^2_{max}$.} \label{Tmffsemi} \begin{tabular}{lccc} Experiment& $R_1$ & $R_2$ & $\rho^2$ \\ \hline CLEO~II & $1.30\pm 0.36\pm 0.16$ & $0.64\pm 0.26 \pm 0.12$ & $1.01\pm 0.15\pm 0.09$ \\ \end{tabular} \end{table} \begin{table}[htb] \caption{Predictions for the form factor ratios from theoretical models at $q^2=q^2_{max}$.} \label{Tbffpred} \begin{tabular}{lll} Model & $R_1$ & $R_2$ \\ \hline HQET (Neubert)\cite{neubert1} & $1.35$ & $0.79$ \\ Ball\cite{Ballff} & $1.31$ & $0.95$ \\ ISGW\cite{ISGW} & $1.01$ & $0.91$ \\ BSW\cite{WSB} & $0.91$ & $0.85$ \\ KS\cite{KS} & $1.09$ & $1.09$ \\ \end{tabular} \end{table} \subsection{Determination of $|V_{cb}|$} \label{brsl_vcb} \subsubsection{$|V_{cb}|$ from inclusive measurements} The theoretical uncertainty in the determination of $|V_{cb}|$ is currently a matter of active discussion and no clear consensus has emerged. The values of $|V_{cb}|$ determined from the world average for the inclusive semileptonic branching fraction are given in Table~\ref{Tbvcbincl} for different theoretical models. The models predict the decay width in the form $\Gamma \; = \; \gamma_c\cdot |V_{cb}|^2$. The value of $|V_{cb}|$ is then obtained from experiment using $$ |V_{cb}|^2 \; = \; {\cal{B}}(B \to X\ell \nu)/(\tau_B \cdot \gamma_c) $$ One way of reducing the theoretical uncertainty associated with the $m_b^5$ dependence of the semileptonic width was introduced by Altarelli~{\it et al.} \cite{ACCM}. In their quark model, the spectator quark is assigned a Fermi momentum $p_F$ and has a Gaussian momentum distribution. Each value of $p_F$ gives a slightly different value of $m_b$, however, the average value of $p_F$ and $m_b$ as well as the effective spectator quark mass $m_{sp}$ can be determined by fitting the shape of the lepton momentum spectrum. 
The relationship $$m_b^2 = m_B^2 + m_{sp}^2 - 2 m_B\sqrt{p_F^2 + m_{sp}^2}$$ where $m_B$ is the B meson mass, allows the experimental data to be used to constrain $m_b$. Shifman, Uraltsev, and Vainshtein propose that the dependence of the width in the SV (Shifman-Voloshin)\cite{svlimit} limit is proportional to $m_b-m_c$ rather than to $m_b^5$. This will substantially reduce the uncertainty in the extraction of $|V_{cb}|$ from inclusive decays. Using $m_b=(4.8\pm 0.1)$~GeV from a QCD sum rule analysis of the $\Upsilon$ system, Shifman {\it et al.} ~find on this basis that the theoretical uncertainty in the determination of $|V_{cb}|$ is less than 5$\%$ and nearly model independent. In contrast, Neubert asserts that the model dependence in $|V_{cb}|$ is of the order of 10\% due to the unknown higher order corrections in the expansion for the semileptonic width in $\alpha_s (m_Q)$, where $m_Q$ is the mass of the heavy quark\cite{neubert2},\cite{neubert3}. The experimental fact that $b\to c \ell\nu$ transitions are far from the SV limit may also affect the reliability of the claim by Shifman {\it et al.}. Luke and Savage have also investigated the model dependence of $|V_{cb}|$ in a HQET framework and conclude that the determination from the inclusive measurements gives values of $|V_{cb}|$ in the range $0.037-0.052$ which corresponds to an uncertainty of order $14\%$. Using a similar approach with a constraint on $m_c$ obtained from the experimental determination of ${\cal B}(D\to X \ell \nu)$, Ball and Nierste find a significantly larger value $\gamma_c=54.2\pm 5$ which gives a somewhat lower value of $|V_{cb}|$\cite{BallNierste}. 
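The inclusive extraction can be checked numerically. The average $B$ lifetime assumed below ($\tau_B = 1.68$~ps) is an illustrative input; the values in Table~\ref{Tbvcbincl} were obtained with the authors' own inputs and differ at the per-mille level.

```python
import math

# |Vcb|^2 = B(B -> X l nu) / (tau_B * gamma_c), with gamma_c in ps^-1
# as quoted in Table (Tbvcbincl). tau_B below is an assumed value.
BR_SL = 0.1098     # inclusive semileptonic branching fraction
TAU_B_PS = 1.68    # assumed average B lifetime, ps

def vcb_inclusive(gamma_c):
    return math.sqrt(BR_SL / (TAU_B_PS * gamma_c))

vcb_shifman = vcb_inclusive(41.3)   # ~0.040
vcb_ball = vcb_inclusive(54.2)      # ~0.035, the lower Ball-Nierste value
```

The larger $\gamma_c$ of Ball and Nierste directly translates into the somewhat lower $|V_{cb}|$ noted in the text, since $|V_{cb}|\propto \gamma_c^{-1/2}$.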
\begin{table}[htb] \caption{Determinations of $|V_{cb}|$ using inclusive semileptonic decays.} \label{Tbvcbincl} \begin{tabular}{lll} Theorist &$\gamma_c$ &$|V_{cb}|$ \\ \hline ACCMM\cite{ACCM} &$40\pm8$& $0.0401\pm 0.001(\rm{exp})\pm .004(\rm{theor})$ \\ Shifman {\it et al.}\cite{shifman1} &$41.3\pm 4$& $0.03965\pm 0.001(\rm{exp})\pm .002(\rm{theor})$\\ ISGW$^{**}$\cite{ISGW} & $42\pm 8$& $0.0400\pm 0.001(\rm{exp}) \pm .004(\rm{theor})$\\ Ball and Nierste\cite{BallNierste} & $54.2\pm 5$ & $0.0344\pm 0.001(\rm{exp})\pm .002(\rm{theor})$\\ \end{tabular} \end{table} \subsubsection{$|V_{cb}|$ from exclusive measurements} Using measurements of the $B^+$ and $B^0$ lifetimes, and the assumption of isospin invariance, the $\bar{B^0}\to D^{*+} \ell^- \nu$ and $B^-\to D^{*0} \ell^- \nu$ branching fraction measurements can be combined to obtain the width $\Gamma (B\to D^* \ell \nu) = (30.2 \pm 2.6 \pm 1.0)$ ns$^{-1}$, which is independent of the ratio of production fractions $f_{+}/f_0$. To allow the results to be rescaled easily, the contribution due to the uncertainty in the average $B$ meson lifetime is separated in the error. This determination of the width can then be translated into a value for $|V_{cb}|$ by using models. The values obtained with the models of Isgur, Scora, Grinstein and Wise (ISGW)\cite{ISGW}, Bauer, Stech and Wirbel (BSW)\cite{WSB}, K\"orner and Schuler (KS)\cite{KS}, and Neubert are listed in Table~\ref{Tvcbexcl1}. \begin{table}[htb] \caption{Values of $|V_{cb}|$ using the world average for $\Gamma (B\to D^* \ell \nu)$ and theoretical models. The first error is the sum in quadrature of the experimental statistical and systematic errors.
The second error is from the $B$ lifetime.} \label{Tvcbexcl1} \begin{tabular}{ll} Model & $|V_{cb}|$ \\ \hline ISGW & $ 0.0349\pm 0.0015\pm 0.0006$\\ ISGW$^{'}$ & $ 0.0347\pm 0.0015\pm 0.0006$\\ BSW & $ 0.0371\pm 0.0016\pm 0.0006$\\ KS & $ 0.0342\pm 0.0015\pm 0.0006$ \\ Neubert & $0.0323\pm 0.0014\pm 0.0006$ \end{tabular} \end{table} In principle, the detection efficiencies should be determined separately for each model. This is a small effect and the CLEO~II analysis finds the systematic variation from this source to be less than 3\%. In Table~\ref{Tvcbexcl2}, we also give the values of $|V_{cb}|$ determined from exclusive models using the ARGUS measurement of ${\cal B}(\bar{B}^0\to D^+ \ell^- \nu)$ which gives $\Gamma(\bar{B}^0\to D^+ \ell^- \nu)= 12.3\pm5.7\pm3.5$~ns$^{-1}$. The first error is the sum in quadrature of the statistical and experimental systematic errors. The second error is the uncertainty due to the average $B$ meson lifetime. We note that the other measurements of $B\to D\ell \nu$ branching fractions should not be used as they cannot be modified to account for the changes in the $D$ and $D^*$ branching fractions. Improved measurements of the branching fractions for $B^-\to D^0 \ell^- \nu$ and $ \bar{B}^0\to D^+ \ell^- \nu$ will be useful in testing models and determining $|V_{cb}|$. \begin{table}[htb] \caption{Values of $|V_{cb}|$ using the world average for $\Gamma (B\to D^+ \ell \nu)$ and theoretical models. The first error is the sum in quadrature of the experimental statistical and systematic errors. The second error is from the $B$ lifetime.} \label{Tvcbexcl2} \begin{tabular}{ll} Model & $|V_{cb}|$ \\ \hline ISGW$^{'}$ & $ 0.032\pm 0.008\pm 0.0005$\\ BSW & $ 0.038\pm 0.009\pm 0.0006$\\ KS & $ 0.039\pm 0.009\pm 0.0006$ \\ \end{tabular} \end{table} The results in Tables~\ref{Tvcbexcl1},~\ref{Tvcbexcl2} show that the model dependence in the determination of $|V_{cb}|$ from the total rate is below the $10\%$ level. 
From the measurement of the branching fraction for $\bar{B}\to D^* \ell\nu$, using the ISGW$^{'}$ model to obtain the central value, one obtains $$ |V_{cb}|= 0.0347\pm 0.0016({\rm exp})\pm 0.0024({\rm model})$$ This method of obtaining $|V_{cb}|$ from the total rate has the distinct advantage that the models used make other detailed predictions for form factors and various other observables which can be experimentally verified (see section~\ref{bdynamics}). In addition, all of the data can be used, unlike the HQET inspired method (discussed below) which is valid only for a certain kinematic regime (near zero recoil). \subsubsection{Determination of $|V_{cb}|$ using HQET} \label{vcbhqet} It has recently been appreciated that there is a symmetry of QCD that is useful in understanding systems containing one heavy quark. This symmetry arises when the quark becomes sufficiently heavy to make its mass irrelevant to the nonperturbative dynamics of the light quarks. This allows the heavy quark degrees of freedom to be treated in isolation from the light quark degrees of freedom. This is analogous to the canonical treatment of hydrogenic atoms, in which the spin and other properties of the nucleus can be neglected. The behaviour and electronic structure of the atom are determined by the light electronic degrees of freedom. Heavy quark effective theory (HQET) was developed by Isgur and Wise \cite{ISGW}, who define a single universal form factor, $\xi(v\cdot v^{'})$, known as the Isgur-Wise function. In this function $v$ and $v^{'}$ are the four-velocities of the initial and final state heavy quarks. In the heavy quark limit all the form factors for hadronic matrix elements such as $B\to D^*$ and $B\to D$ can be related to this single function. The value of this function can then be determined from a measurement of the $B\to D^* \ell \nu$ rate as a function of $q^2$ \cite{ISGW}.
The theory also provides a framework for systematic calculations of corrections to the heavy quark limit. In HQET, the decay rate for $B\to D^* \ell \nu$ as a function of $y$ (which is $\gamma_{D^*} = E_{D^*}/m_{D^*}$ in the B rest frame) can be expressed in terms of a single unknown form factor $\xi(y)$. According to the celebrated result called Luke's theorem \cite{luke}, at the point of zero recoil for the $D^*$ meson (i.e. $y=1$), this universal form factor is absolutely normalized up to corrections of order $1/m_Q^2$ (where $m_Q$ is the c quark or b quark mass). The decay rate is given by \begin{equation} d \Gamma/d y = {\cal G}(y) \eta_A^2 |V_{cb}|^2 \xi^2(y) \end{equation} where ${\cal G}(y)$ is a known function, $\eta_A=0.986\pm 0.006$ accounts for QCD corrections and $\xi(y)$ is the universal form factor. After subtracting background and correcting for efficiency, the experimental distribution of $d \Gamma/d y$ is divided by the factor ${\cal G}(y)$ to give a distribution whose intercept is $|V_{cb}|^2 \xi^2(1)$. In the limit of heavy quark symmetry, the intercept is the physical quantity of interest, $|V_{cb}|^2$. In principle, the value obtained in this manner has no model dependence. The $d \Gamma/d y$ distribution is extracted after subtracting backgrounds from fake $D^*$ and random $D^*$ lepton combinations. This distribution is then corrected for efficiency. After dividing through by ${{\cal{G}} (y)}$, CLEO~II obtains the distribution shown in Figure \ref{dstlnu2}, which combines events from the modes $\bar{B}^0\to D^{*+} \ell^- \nu$ and $B^- \to D^{*0} \ell^- \nu$. Experimentally, since there are few events near the point of zero recoil, all the available data is used over the entire $y$ range and then an extrapolation to $y=1$ is made. Most of the functional forms proposed for $\xi(y)$ are roughly linear near $y=1$. Thus the experimental distribution is fitted to the functional form $|V_{cb}|^2 (1- \hat{\rho}^2 (y-1) )$.
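The extrapolation step can be sketched with pseudo-data: points generated from a linear $\xi(y)$ are refitted by least squares, and the intercept at $y=1$ returns $|V_{cb}|\,\xi(1)\eta_A$. The inputs are the world averages of Table~\ref{Tbxiexp}; this noise-free sketch illustrates the procedure, not the experimental analysis, which must also model smearing and backgrounds.

```python
# Sketch of the zero-recoil extrapolation: noise-free pseudo-data for
# |Vcb| xi(y) eta_A with a linear xi(y) are refitted by least squares
# and the intercept at y = 1 is recovered. Inputs are the world
# averages of Table (Tbxiexp); no experimental smearing is modelled.
INTERCEPT_TRUE, RHO2_TRUE = 0.0359, 0.88

ys = [1.0 + 0.05 * i for i in range(10)]  # y in [1.0, 1.45]
vals = [INTERCEPT_TRUE * (1.0 - RHO2_TRUE * (y - 1.0)) for y in ys]

# closed-form linear least squares in x = y - 1
xs = [y - 1.0 for y in ys]
n, sx, sv = len(xs), sum(xs), sum(vals)
sxx = sum(x * x for x in xs)
sxv = sum(x * v for x, v in zip(xs, vals))
slope = (n * sxv - sx * sv) / (n * sxx - sx * sx)
intercept = (sv - slope * sx) / n     # |Vcb| xi(1) eta_A at y = 1
rho2_fit = -slope / intercept         # recovered slope parameter
```

In the real analysis the lever arm comes from the data away from $y=1$, which is why the assumed shape of $\xi(y)$ enters the systematic error.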
After properly accounting for the smearing in y due to the motion of the B meson\cite{thesis1}, the product $\xi(1) \eta_{A} |V_{cb}|$ is determined (Table~\ref{Tbxiexp}). The dominant experimental systematic error is the uncertainty in the slow pion detection efficiencies. \begin{table}[htb] \caption{ Experimental measurements of the product $|V_{cb}|\xi(1)\eta_A$ and $\hat{\rho}^2$ in $\bar{B}\to D^* \ell \nu$. These have been corrected for the change in $D$ and $D^*$ branching fractions and the average $B$ meson lifetime.} \label{Tbxiexp} \begin{tabular}{lll} Experiment & $\xi(1)\eta_A~|V_{cb}|$ & $\hat{\rho}^2$ \\ \hline ARGUS & $0.0388\pm 0.0055$ & $1.17\pm 0.23$ \\ CLEO~II & $0.0347\pm 0.0027$ & $0.84\pm 0.15$\\ ALEPH & $0.0382\pm 0.0056$ & $0.46\pm 0.34$\\ \hline \medskip World Average & $0.0359\pm 0.0022$ & $0.88\pm 0.12$ \\ \end{tabular} \end{table} \begin{table}[htb] \caption{ Theoretical Calculations of the Intercept of the Isgur-Wise Function.} \label{Tbxi} \begin{tabular}{ll} Theorist & $\xi(1)\eta_A$ \\ \hline Shifman {\it et al.} & $0.89\pm 0.03$ \\ Neubert I & $0.97\pm 0.04$ \\ Neubert II & $0.93\pm 0.03$ \\ Mannel & $0.96\pm 0.03$ \\ \end{tabular} \end{table} There are two significant uncertainties in the final determination of $|V_{cb}|$ from measurements of the spectrum at zero recoil. These arise from the model dependence in the calculation of the $1/m_c^2$ corrections to $\xi(1)$ and the lack of knowledge of the functional form of the function $\xi(y)$ which is used for the extrapolation. There are now at least four calculations of $\xi(1)$ to order $1/m_c^2$ from Neubert \cite{neubert1},\cite{neubert2}, Mannel\cite{mannel} and from Shifman, Uraltsev and Vainshtein\cite{shifman}. For example, using $\xi(1)\eta_A = 0.93\pm 0.03$ from Neubert\cite{neubert2}, we obtain $$|V_{cb}|= 0.0386\pm 0.0024({\rm exp})\pm 0.0012({\rm theory})$$ where the first error is experimental and the second is the quoted uncertainty in $\xi(1)$. 
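Dividing the world-average product by a theoretical value of $\xi(1)\eta_A$ gives $|V_{cb}|$ directly; the simple error propagation below reproduces the numbers quoted above for Neubert's normalization.

```python
# |Vcb| from the world-average product xi(1) eta_A |Vcb| = 0.0359 +- 0.0022
# (Table Tbxiexp) and Neubert's normalization xi(1) eta_A = 0.93 +- 0.03.
product, d_product = 0.0359, 0.0022
norm, d_norm = 0.93, 0.03

vcb = product / norm            # ~0.0386
d_exp = d_product / norm        # experimental error, ~0.0024
d_theory = vcb * d_norm / norm  # theory error from xi(1), ~0.0012
```

The theory error scales with the quoted uncertainty on $\xi(1)\eta_A$, which is why improved calculations of the $1/m_c^2$ corrections translate directly into a more precise $|V_{cb}|$.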
Other recent estimates of this product obtained using QCD sum rules are given in Table~\ref{Tbxi} and references \cite{neubert1},\cite{neubert2},\cite{mannel},\cite{shifman}. The model dependence from the theoretical uncertainty in the normalization is about 4\% but may be reduced in the near future. The uncertainty from the shape of $\xi(y)$ can be investigated using several of the functional forms proposed in the literature. CLEO~II finds that the systematic error from this source is less than 5\% in $|V_{cb}|$\cite{thesis2}. Note that the quantity $\hat{\rho}^2$ and the quantity $\rho^2$ determined in the CLEO~II form factor analysis are slightly different. The former is calculated assuming HQS (heavy quark symmetry). An approximate relation between the two values is $\hat{\rho}^2 \approx \rho^2 - 0.2$\cite{neubert2}. The two values agree well. \begin{figure}[htb] \begin{center} \unitlength 1.0in \begin{picture}(3.,3.0)(0,0) \put(-0.0,0.0) {\psfig{bbllx=0pt,bblly=0pt,width=3.0in,height=3.0in,file=bexcl_dstlnu2.ps}} \end{picture} \vskip 5 mm \caption{ Distribution of $\displaystyle [{{d \Gamma} \over {d y}} {1\over {{\cal G}(y)}}]^{1 \over 2}$ for $\bar{B}\to D^{*} \ell^- \nu$ candidates with a fit to a linear parameterization of $\xi(y)$, the Isgur-Wise function.} \label{dstlnu2} \end{center} \end{figure} The universal form factor $\xi(y)$ is a quantity which cannot be derived in perturbation theory. To obtain this function, one must depend on models or on QCD lattice calculations. For instance, from a fit to the $d \Gamma/d y$ spectrum with a linear functional form, CLEO~II obtains \begin{equation} \hat{\rho}^2 = 0.84 \pm 0.12 \pm 0.08 \end{equation} A fit using a Taylor expansion which includes a quadratic term for $\xi(y)$ gives slightly different results for $\hat{\rho}^2$. The magnitude of the quadratic term is very poorly determined. The value of $\hat{\rho}^2$ is consistent with most quark models, QCD sum rules, and with lattice calculations.
\subsection{$b\to u$ Transitions and $|V_{ub}|$} A non-zero value of $|V_{ub}|$ is necessary but not sufficient if the Standard Model is to provide a consistent description of the CP violation observed in the kaon sector. A precise measurement of $|V_{ub}|$ is required to constrain the allowed range of CP asymmetries in the B sector. The experimental signature for inclusive $b\to u$ transitions is an excess of leptons beyond the kinematic limit for the transition $b \to c \ell \nu$. The branching ratio for the inclusive process is large, $O(10^{-3})$. However, there are also substantial backgrounds from continuum, misidentified leptons and mismeasured $b \to c$ transitions. The first evidence for charmless semileptonic B decay and for non-zero $|V_{ub}|$ was reported by the CLEO~1.5 experiment in 1989 from the inclusive lepton momentum spectrum. Corroborating evidence was presented shortly afterwards by the ARGUS experiment, who introduced hermiticity cuts to detect the presence of a neutrino and thus significantly reduce background levels. To extract a value of $|V_{ub}/V_{cb}|$, the physical quantity of interest, the yield of leptons in the signal window, which is limited to a small portion of the Dalitz plot, must be corrected for detection efficiency and then extrapolated to the full allowed kinematic range. The resulting width must then be converted to the ratio of CKM matrix elements. In other words, \begin{equation} {|{ {V_{ub}} \over {V_{cb}}} |}^2 = {{\Delta B_{ub}} \over {B_{cb}~ d (p)} } \label{vubeqn} \end{equation} where $\displaystyle{\Delta B_{ub}} = {{N_{ub}}\over {\epsilon}}$ and $\displaystyle{d(p)= f_u(p) {{\gamma_u} \over {\gamma_c}}}$. In principle, each factor in equation~(\ref{vubeqn}) is model dependent.
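Equation~(\ref{vubeqn}) can be made concrete with a short numerical sketch (Python). The partial branching fraction and $d(p)$ are the ACCMM-model numbers quoted later in the text; the inclusive semileptonic branching fraction ${\cal B}(b\to c\ell\nu)\approx 0.107$ is an assumed illustrative value, not taken from the analyses discussed here.

```python
# Illustration of |V_ub/V_cb|^2 = DeltaB_ub / (B_cb * d(p)).
delta_B_ub = 7.0e-5   # partial BF for b -> u l nu in the 2.4-2.6 GeV window
d_p = 0.123           # ACCMM value of f_u(p) * gamma_u / gamma_c
B_cb = 0.107          # assumed B(b -> c l nu), for illustration only

ratio = (delta_B_ub / (B_cb * d_p)) ** 0.5
print(f"|V_ub/V_cb| = {ratio:.3f}")
```

With these inputs the ratio comes out near the ACCMM central value $0.073$ quoted below.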
In practice, the only factors with large model dependence are $\gamma_u$, which relates the width and $|V_{ub}|$ via $\displaystyle\Gamma(b\to u \ell \nu)= \gamma_u |V_{ub}|^2$, and $d(p)$, the fraction of $b\to u \ell\nu$ decays which lie in the momentum window. These factors have been determined using a variety of models which belong to two generic classes: inclusive parton models such as the model of Altarelli {\it et al.}\cite{ACCM} and models with exclusive final states, e.g. \cite{ISGW}, \cite{KS}, \cite{WSB}, \cite{ISGWprime}, \cite{BBD}. Model dependence in the value of $|V_{ub}|$ is a severe systematic limitation. In the past, an uncertainty as large as a factor of two in the value of $|V_{ub}|$ has been assigned to this model dependence. In principle, the decay rate for an exclusive mode can be translated into a less model dependent value of $|V_{ub}|$ using exclusive models. While no single exclusive decay mode accounts for more than 3.5\% to 14\% of the inclusive rate (the range reflecting the spread among models), backgrounds in individual exclusive modes are fairly small. The most promising modes are $B\to \pi \ell \nu$ and the modes with a vector meson, $B\to \rho^0 \ell \nu$, $B \to \rho^+ \ell \nu$, and $B \to \omega \ell \nu$. In 1991 ARGUS reported evidence for two fully reconstructed candidates in the $\bar{B}^0\to \pi^+\ell\nu$ and $B^-\to \omega^0\ell^-\nu$ modes \cite{argus_exclusive}. \subsubsection{Inclusive Semileptonic $b\to u$ Transitions.} In the analyses of inclusive semileptonic $b\to u$ transitions by CLEO~1.5, ARGUS and CLEO~II, tight track quality cuts are imposed and special care is taken to reduce background from mismeasured $b\to c \ell \nu$ decays which can be smeared beyond the kinematic limit. The largest remaining background is then due to continuum processes, i.e., non-resonant $q \bar{q}$ production, and is suppressed with event shape cuts. In the CLEO~II analysis, which has the highest statistical precision, two complementary analyses are carried out.
One analysis employs strict cuts which make ample use of the hermiticity of the detector and imposes the requirement that an energetic neutrino be present which is opposite in direction to the lepton. This analysis achieves the best signal-to-background ratio. There is also a second analysis with no hermiticity cuts and less stringent requirements on event shape variables. \begin{figure}[htb] \begin{center} \unitlength 1.0in \begin{picture}(3.,3.0)(0,0) \put(-0.5,0.0) {\psfig{bbllx=0pt,bblly=0pt,width=3.0in,height=3.0in,file=b2ufig.ps}} \end{picture} \vskip 10 mm \caption{Lepton momentum spectra from CLEO~II for (a) the analysis with tight cuts and (b) the analysis with loose cuts. The filled points with error bars represent the $\Upsilon(4S)$ data. The open circles are the data taken below resonance, while the dashed line is the fit to the off-resonance data. The solid histogram is a Monte Carlo simulation of $b\to c\ell\nu$ processes.} \label{btouinc} \end{center} \end{figure} In the analysis with loose cuts, only a modest requirement on the event shape is imposed, $R_2<0.3$. This analysis attempts to use selection criteria that are similar to the cuts used in the CLEO~1.5 analysis. The efficiency of the hermiticity cuts used in the strict analysis will depend somewhat on the $Q^2$ spectrum of $B\to u\ell \nu$. On the other hand, the analysis with strict cuts has the advantage that the allowed phase space is restricted to the region where resonances ($\pi$, $\rho$, $\eta$, $\omega$) dominate and where exclusive models are most reliable. The lepton momentum spectrum with the continuum data and a histogram from a $b\to c$ Monte Carlo superimposed is shown in Fig.~\ref{btouinc}. The background is subtracted using a fit to the continuum data, which were recorded at an energy slightly below the $\Upsilon(4S)$ resonance.
In the loose analysis, $128.5 \pm 26.3 \pm 15.2$ excess leptons are observed in the momentum interval between 2.4 and 2.6 GeV, while in the strict analysis, an excess of $ 43.0\pm 10.1 \pm 6.6$ leptons is found in the same interval. As can be seen from Figure~\ref{btouinc}, the yield beyond the kinematic limit for $b\to u\ell\nu $ production is consistent with zero. For the loose analysis, the partial branching fraction is $\Delta B_{u} (2.4,2.6)$ $=(0.90 \pm 0.18\pm 0.12)\times 10^{-4}$ while for the strict analysis, the corresponding branching fraction is $\Delta B_{u} (2.4,2.6)$ $=(0.70 \pm 0.16 \pm 0.12)\times 10^{-4}$ where the ACCMM model has been used to evaluate the detection efficiency. The agreement found in CLEO~II between the branching fractions from the analysis with strict cuts and the analysis with loose cuts indicates that the choice of $q^2$ dependent selection criteria in the analysis with tight cuts does not introduce significant model dependence in the final result. The largest source of model dependence is due to the extrapolation from the narrow signal window to the full kinematic range (the factor $f_u(p)$). The central value for the branching fraction from the CLEO~II experiment is significantly below the previous CLEO~1.5 and ARGUS results for this momentum interval. The CLEO~II and CLEO~1.5 measurements are consistent at the 2.5 standard deviation level. The large values initially reported by CLEO~1.5 and ARGUS are now believed to be upward fluctuations. In addition, the yields in the lower momentum bins from both the early experiments, which must be determined after subtractions of large $b\to c$ backgrounds, are marginally consistent with the yield in the high momentum ($2.4-2.6$ GeV) bin. In the CLEO~II analysis, the branching fractions determined from the $2.3-2.4$ GeV bin and the $2.4-2.6$ GeV bin are in good agreement. 
For these reasons, we have chosen to determine the value of $|V_{ub}/V_{cb}|$ using the branching fraction measured by CLEO~II in the high momentum bin. Note that including the CLEO~1.5 and ARGUS results gives only a small shift in the world average since the errors for these measurements are large. These results can be used to deduce values for $V_{ub}$, which are given in Table~\ref{Tvub}. Taking the central value from the ACCMM model gives $$ |{{V_{ub}}\over {V_{cb}}}| = 0.073\pm 0.011({\rm exp}) \pm 0.010({\rm model})$$ and a range at the 1 standard deviation level $0.095 > |{V_{ub}\over V_{cb}}| > 0.055$. The table includes the estimate from D. Scora for the revised ISGW model\cite{ISGWprime} (denoted ISGW2) but does not include the older ISGW model. Incomplete exclusive models such as those of K\"orner and Schuler and Wirbel-Stech-Bauer that do not calculate all the exclusive final states which are relevant for the endpoint region are also omitted. \begin{table}[htb] \caption{ Experimental measurements of the partial branching fraction for $b\to u \ell\nu$ transitions in the lepton endpoint region.
(*) The ARGUS value is deduced indirectly from the value of $|V_{ub}/V_{cb}|^2$ given in their publication.} \label{Tbuexp} \begin{tabular}{llll} Experiment & ${\cal B}(2.2-2.6)$ GeV& ${\cal B}(2.3-2.6)$ GeV & ${\cal B}(2.4-2.6)$ GeV \\ \hline CLEO~1.5 & $(33\pm 8\pm 8)\times 10^{-5}$& & $(18\pm 4\pm 3)\times 10^{-5}$\\ ARGUS & &$(32.8\pm 7)\times 10^{-5}$ $^{*}$ & $ $ \\ CLEO~II & & $(8.2\pm 1.5\pm 0.9)\times 10^{-5}$ & $(7.0\pm 1.6\pm 1.2)\times 10^{-5}$\\ \hline World Average & & & $(7.0\pm 2.0)\times 10^{-5}$ \\ \end{tabular} \end{table} \begin{table}[htb] \caption{Determination of $|V_{ub}/V_{cb}|$ from various theoretical models for the momentum interval $2.4-2.6$ GeV.} \label{Tvub} \begin{tabular}{lllll} Model &$\gamma_u$ ($10^{12}~{\rm s}^{-1}$) &$f(p)$ & $d(p)$& $|V_{ub}/V_{cb}|$ \\ \hline ACCMM &$80.4$ &$0.055$ & $0.123$ & $0.073\pm 0.011$\\ BBD & $68.0$&$0.074$ & $0.113$ & $0.071\pm 0.011$\\ Hybrid &$84.3$ &$0.066$ & $0.135$ & $0.065\pm 0.010$\\ ISGW2 &$58.8$ &$0.049$ & $0.0703$ & $0.083\pm 0.012$\\ \end{tabular} \end{table} \subsubsection{Exclusive Semileptonic $b\to u$ Transitions} CLEO~II has searched for the exclusive decays $B^+ \to \rho^0 \ell^+ \nu$, $\bar{B^0} \to \rho^+ \ell^- \nu$, and $B^+ \to \omega \ell^+ \nu$. They require that the observed momenta be consistent with a missing neutrino, taking advantage of production at threshold. The $\rho$ ($\omega$) invariant mass spectrum is then examined after applying additional cuts. Event shape cuts are used to suppress the dominant background, $e^+ e^- \to q \bar{q}$ events, which are jetlike in contrast to $B \bar{B}$ events which are more spherical. The analysis is divided into two lepton momentum ranges: $2.3>E_{lep}>2.0$ GeV, and $2.6>E_{lep}>2.3$ GeV. All models are in approximate agreement that about 60\% (52\% in WSB to 72\% in ISGW) of the rate is contained in the union of the two ranges.
Both the ARGUS and CLEO~II analyses of exclusive $b\to u$ transitions make use of the hermiticity of the detector. For example, CLEO~II requires that the missing mass squared of the remainder of the event be greater than $-0.2$ GeV$^2$ and less than $5.0$ GeV$^2$. In addition, the missing momentum vector should balance the visible momentum of the $Y= \rho \ell$ system. The branching ratios for $B \to \rho $ and $B \to \omega$ decays are related by the quark model and isospin invariance such that ${\cal B} (\bar{B}^0\to \rho^+ \ell \nu) = 2 {\cal B} (B^-\to\rho^0 \ell \nu) = 2 {\cal B}( B^-\to \omega \ell^- \nu)$. No signal was observed and CLEO placed an upper limit on the production of light vector mesons ($\rho^{+(0)},\, \omega$) $${\cal B}(B^- \to V^0 \ell \nu) < (1.6 - 2.7) \times 10^{-4} \rm{~at~the~90\% ~C.L.}$$ where the upper end of the range corresponds to calculating the efficiency using the ISGW model. The results for different models and for an older ARGUS analysis \cite{argus_exclusive} are listed in Table \ref{Tbuexcl}. The CLEO~II results are used to obtain the constraints: $$|V_{ub}/V_{cb}| < 0.08-0.13 \rm{~at~the~90\% ~C.L.}$$ where again the range given corresponds to the three models considered. These limits are consistent with the values of $|V_{ub}/V_{cb}|$ determined from the inclusive lepton spectrum. ARGUS has also reported the observation of two events in exclusive semileptonic $b\to u$ modes, $\bar{B}^0\to \pi^+ \ell^-\nu$ and $B^-\to \omega\ell^-\nu$ opposite fully reconstructed modes\cite{argusvub1}. \begin{table}[htb] \caption{Measurements of Exclusive Charmless Semileptonic B Branching Ratios.
Here V denotes a vector meson as discussed in the text.} \label{Tbuexcl} \begin{tabular}{lllll} Experiment & Mode & ISGW & WSB & KS \\ \hline ARGUS & $\bar{B}\to \pi^+ \ell \nu$ & $<0.9 \times 10^{-3}$ \\ ARGUS & $\bar{B}\to \rho^0 \ell \nu$ & $<1.1 \times 10^{-3}$ \\ CLEO~II & $\bar{B}\to \rho^0 \ell \nu$ & $<2.1\times 10^{-4}$ & & \\ CLEO~II & $\bar{B}\to \omega \ell \nu$ & $<2.1\times 10^{-4}$ & & \\ CLEO~II & $\bar{B}\to \rho^+ \ell \nu$ & $<4.1\times 10^{-4}$ & & \\ CLEO~II & $\bar{B}\to V \ell \nu$ & $<1.6\times 10^{-4}$ &$<2.7\times 10^{-4}$ & $<2.3\times 10^{-4}$ \end{tabular} \end{table} \begin{figure}[htb] \begin{center} \unitlength 1.0in \begin{picture}(3.,3.2)(0,0) \put(-0.5,0.0) {\psfig{bbllx=0pt,bblly=0pt,width=3.0in,height=2.5in,file=pilnu.ps}} \end{picture} \vskip 20 mm \caption{ Beam constrained mass distributions from CLEO~II for (a) $B\to \pi\ell\nu$ and (b) $B\to \rho\ell\nu$ candidates. The dotted line shows the background contribution which includes $b\to c$ and other $b\to u$ decays.} \label{pilnu} \end{center} \end{figure} \subsubsection{Observation of $B \to \pi \ell \nu$ Transitions} The first signal for an exclusive semileptonic charmless decay mode has been reported recently by CLEO~II in the mode $B\to \pi\ell\nu$ \cite{lkgmoriond}. Events in which the neutrino momentum is well constrained from the missing energy are used. This allows a beam constrained mass and energy difference to be constructed in analogy to exclusive hadronic B decays. The effective beam constrained mass distribution for $\bar{B}^0\to \pi^+\ell^-\nu$ and $B^-\to \pi^0\ell^-\nu$ candidates after a cut on the energy difference is shown in Fig.~\ref{pilnu}. A likelihood fit to the beam constrained mass, the energy difference, and $\sin^2\theta_{\ell \pi}$ distributions\cite{jdrpilnu} shows that the excess has a significance of $3.8$ standard deviations.
The resulting branching fraction is $${\cal B}(B\to \pi^+ \ell\nu) = (1.19\pm 0.41\pm 0.22\pm 0.19)\times 10^{-4} ~({\rm ISGW})$$ $${\cal B}(B\to \pi^+ \ell\nu) = (1.70\pm 0.50\pm 0.31\pm 0.27)\times 10^{-4} ~({\rm BSW})$$ for the ISGW and BSW models, respectively\cite{lkgmoriond}. No significant excess is observed in the $B\to \rho\ell\nu$ mode. Measurements of branching fractions for exclusive charmless modes will be an important step towards establishing a reliable value of $|V_{ub}|$. \subsubsection{Prospects for the Determination of $|V_{ub}|$.} The model dependence in $|V_{ub}/V_{cb}|$, which is larger than the experimental error, will be reduced eventually by the detailed study of inclusive and then exclusive decay modes. At present, with the inclusive sample, rough checks of the $Q^2$ dependence in inclusive $b\to u$ decay are possible. In the future these may allow some discrimination between models (see Figure~\ref{vubq2theor}) \cite{nelson},\cite{artusovub}. For instance, the inclusive Altarelli-style model peaks at a lower $q^2$ than does the ISGW model (the sum of exclusive final states). The early version of the ISGW model disagrees with the observed $q^2$ spectrum at the 1.5 $\sigma$ level. With larger data samples this approach will either rule out or severely constrain models of $b\to u$ decay. \begin{figure}[htb] \begin{center} \unitlength 1.0in \begin{picture}(3.,3.0)(0,0) \put(-0.5,0.0) {\psfig{bbllx=0pt,bblly=0pt,width=3.0in,height=3.0in,file=vub_model.ps}} \end{picture} \vskip 10 mm \caption{$q^2$ distribution for two theoretical models of $b\to u\ell\nu$ transitions.
The solid histogram is the model of Altarelli~{\it et al.} while the dashed histogram is the original model of Isgur, Scora, Grinstein and Wise (ISGW).} \label{vubq2theor} \end{center} \end{figure} If the $b\to c$ backgrounds can be well constrained and the continuum background reduced to a sufficient degree by the use of detector hermiticity, then the stringent lepton momentum cut may be relaxed and a larger fraction of $b\to u \ell\nu$ events accepted. This approach could significantly reduce the model dependence in $|V_{ub}|$. \begin{figure}[htb] \begin{center} \unitlength 1.0in \begin{picture}(3.,3.0)(0,0) \put(-0.5,0.0) {\psfig{bbllx=0pt,bblly=0pt,width=3.0in,height=3.0in,file=vub_q2dist.ps}} \end{picture} \vskip 10 mm \caption{Pseudo $q^2$ distribution in CLEO~II data compared with two theoretical models of $b\to u\ell\nu$ transitions. The original model of ISGW is shown as the solid histogram, while the ACCMM model is the dashed histogram.} \label{vubq2exp} \end{center} \end{figure} Once exclusive $b\to u$ modes are observed, various integrated quantities such as the vector meson polarization in $B\to V\ell\nu$ and the average lepton energy can be compared to models. Unlike the $b\to c$ case, the differences between models are significant. It is also possible to compare data on other heavy quark to light quark transitions to models. For example, one can compare the form factors in $D^0 \to K^- \ell \nu$ and $D \to K^* \ell \nu$ to various models\cite{jdr}. The form factors for $B \to D^* \ell \nu$, where large event samples are available, have also been checked (see section~\ref{bdynamics}) and are consistent at the present level of experimental precision with HQET and quark models. As noted by Isgur and Wise, the use of HQS (heavy quark symmetry) gives relations between various heavy to light form factors\cite{wiserholnu}. 
For instance, the $B\to \rho$ form factor can be related to the corresponding $D\to \rho$ form factor at equal $\rho$ energies provided one assumes that the light degrees of freedom decouple \begin{equation} <\rho(k, \epsilon)| \bar{u} \gamma_\mu (1-\gamma_5) b | \bar{B}(v)> = {({m_B \over m_D})}^{1/2} [{\alpha_s(m_b) \over \alpha_s(m_c)}]^{-6/25} {}~<\rho(k, \epsilon)| \bar{u} \gamma_\mu (1-\gamma_5) c | \bar{D}(v)> \label{isgweqn} \end{equation} when the momentum transfer to the $\rho$ meson is much less than a heavy quark mass, i.e. $v \cdot k \ll m_c, m_b$. If in addition, $SU(3)$ symmetry holds, then the $B\to \rho$ form factor can be related to the $D\to K^*$ form factor. Analogous relations can be derived between the $B\to \pi$ and $D\to K$ form factors\cite{burdpilnu}. Thus, precise measurements of the form factors in semileptonic $D$ decays in conjunction with measurements of exclusive charmless semileptonic $B$ decays may be useful in the future determination of $|V_{ub}|$\cite{dibvub}. However, there are several potential difficulties with this approach. The type of relation given in equation~(\ref{isgweqn}) may have significant $1/m_Q$ corrections. At very small momentum transfer, other theoretical corrections (``pole terms'') may be large. Experimentally, the kinematic range available in $D\to \rho$ decays is much smaller than in the corresponding $B$ decay, so models will still be needed to extrapolate to the case of $B$ decays. Akhoury, Sterman, and Yao have suggested that measurements of the exclusive decays $\bar{B}\to \pi \ell\nu$ and $\bar{B}\to \rho\ell\nu$ at low $Q^2$ may be suitable for the determination of $|V_{ub}|$\cite{stermanvub}. They suggest that the inclusive width of these decays in the range $Q^2=(0-9.2)$ GeV$^2$ can be calculated reliably using QCD factorization theorems. Then experimental measurements of these decays can be used to extract $|V_{ub}|$.
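The size of the heavy-quark scaling factor in equation~(\ref{isgweqn}) is easy to estimate. In the Python sketch below the meson masses are standard values, while the two $\alpha_s$ values are representative assumptions for the couplings at the scales $m_b$ and $m_c$.

```python
# Scaling factor (m_B/m_D)^(1/2) * [alpha_s(m_b)/alpha_s(m_c)]^(-6/25)
# relating the B->rho and D->rho matrix elements at equal rho energies.
m_B, m_D = 5.279, 1.865               # GeV
alpha_s_mb, alpha_s_mc = 0.22, 0.35   # assumed couplings at the two scales

factor = (m_B / m_D) ** 0.5 * (alpha_s_mb / alpha_s_mc) ** (-6.0 / 25.0)
print(f"scaling factor ~ {factor:.2f}")
```

The mass ratio dominates; the QCD factor is a mild enhancement of order 10\%.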
Another possible approach to a model independent determination of $|V_{ub}/V_{cb}|$ has been suggested by Neubert\cite{neubertvub}. The proposed method makes use of the relation between the differential spectra for $b\to u \ell\nu$ and $b\to s\gamma$ transitions to eliminate the uncertainty from the hadronization of $b\to u$ in the endpoint region. A simplified and intuitive description may be useful. The width for inclusive $b\to u$ transitions is proportional to $|V_{ub}|^2 m_b^5$, where $m_b$ is the b quark mass. On the other hand, in the rest frame of the b quark, the average energy of the photon emitted in the inclusive $b\to s\gamma$ decay is $m_b/2$. Thus, determination of the average photon energy in the electromagnetic penguin provides a way to eliminate the uncertainty from the $b$ quark mass. It is remarkable that a slight modification of this idea remains useful after the effect of the cut on lepton energy in $b\to u$, QCD corrections, and the Fermi motion of the $b$ quark have been properly included. Several authors have proposed similar techniques for the determination of $|V_{ub}|$\cite{falkvub},\cite{russkievub},\cite{korvub}. It should be noted that if the $b\to u$ endpoint region is dominated by a single resonant mode (e.g. $B\to \rho\ell\nu$), this technique may not be valid\cite{falkvub}. \section{$B-\bar{B}$ MIXING} \label{mixing} In production processes involving the strong or the electromagnetic interaction, neutral $B$ and $\bar{B}$ mesons can be produced. These flavor eigenstates are not eigenstates of the weak interaction which is responsible for the decay of neutral mesons containing b quarks. The weak interaction eigenstates are linear combinations of the flavor (strong interaction) eigenstates, $$ |B_1> = {1\over \sqrt{2}}(|B^0>+|\bar{B}^0>)$$ $$ |B_2> = {1\over \sqrt{2}}(|B^0>-|\bar{B}^0>)$$ This feature and the small difference between the masses and/or lifetimes of the weak interaction eigenstates give rise to the phenomenon of $B-\bar{B}$ mixing.
The formalism which describes $B$ meson mixing closely follows that used to describe $K^0-\bar{K}^0$ mixing, although the time scale characteristic of $B-\bar{B}$ oscillations is much shorter. If a pure $|B^0>$ state is produced at time $t=0$, then at later times it will evolve into a new state which contains an admixture of $|\bar{B^0}>$. The weak interaction eigenstates are denoted $B_1$ and $B_2$, and have masses $M_1, ~M_2$ and widths $\Gamma_1, ~\Gamma_2$ respectively. The difference between the masses is denoted $\Delta M$, while the difference between the widths is $\Delta\Gamma$. The average width will be referred to as $\Gamma$. The probability that the state, initially produced as $|B^0>$, will mix into $|\bar{B^0}>$ at time $t$ is given by $$P(B^0\to \bar{B^0}) = {1\over 2}\exp{(-\Gamma t)} ~[1-\cos(\Delta M ~t)].$$ For convenience, the ratios $x= \Delta M/\Gamma$ and $y=\Delta \Gamma/2\Gamma$ are frequently introduced. The contribution of $y$ to mixing is usually neglected. Its size is determined by the fraction of final states which are common to both $B$ and $\bar{B}$ mesons. The difference between the width for those final states with one sign of CP and those with the opposite sign determines the magnitude of $\Delta\Gamma$. In contrast to the case of neutral kaons, the branching ratio for such modes (e.g. ${\cal B}(\bar{B}^0\to \psi K^*)$) is small, so $y\ll x$ for neutral B mesons. The magnitude of $x$ determines the frequency of the mixing oscillations. The quark level process responsible for $B-\bar{B}$ mixing is shown in Figures 1(g) and 1(h).
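The time-integrated consequences of this oscillation formula follow directly. With the normalization $P(B^0\to \bar{B}^0)={1\over 2}e^{-\Gamma t}[1-\cos(\Delta M\, t)]$, the integrated mixed fraction is $\chi = x^2/[2(1+x^2)]$. The Python check below verifies this by brute-force integration; the value $x\approx 0.7$ is representative of the $B_d$ system.

```python
# Numerical check that integrating the mixed and unmixed probabilities
# gives chi = x^2 / (2 (1 + x^2)), with time in units of the lifetime.
import math

def chi_numeric(x, n=200_000, t_max=30.0):
    dt = t_max / n
    mixed = unmixed = 0.0
    for i in range(n):
        t = (i + 0.5) * dt               # midpoint rule
        w = 0.5 * math.exp(-t) * dt
        mixed += w * (1 - math.cos(x * t))
        unmixed += w * (1 + math.cos(x * t))
    return mixed / (mixed + unmixed)

x_d = 0.7
print(chi_numeric(x_d), x_d ** 2 / (2 * (1 + x_d ** 2)))
```

The two numbers agree to the accuracy of the integration.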
The contribution to $\Delta M$ for $B_d-\bar{B_d}$ mixing is found to be \begin{equation} \Delta M_d = {{G_F^2} \over {6 \pi^2}} m_B m_t^2 {}~F({{m_t^2} \over {m_W^2}}) ~\eta_{QCD} B_{B_d} f_{B_d}^2 |V_{tb}^{*} V_{td}|^2 \end{equation} where $G_F$ is the weak coupling constant, $m_t$ is the top quark mass, $F$ is a slowly decreasing function which depends on $m_{t}$ and $m_{W}$, $\eta_{QCD}$ is a factor which accounts for QCD corrections, $B_{B_d}$ is a constant which parameterizes the deviation from the vacuum insertion approximation, and $f_{B_d}$ is the decay constant of the $B_d$ meson. An analogous expression for $B_s$-$\bar{B_s}$ mixing can also be obtained. Since $|V_{ts}| \gg |V_{td}|$, the rate and frequency of mixing for the $B_s$ meson will be significantly larger than for the $B_d$ meson. Since the mass of the top quark has been determined and the QCD correction has been calculated to NLO ($\eta_{QCD}=0.55$)\cite{Burasmix}, the largest uncertainties in $\Delta m_d$ arise from the product $B_{B_d}^{1/2} f_{B_d}$. This last factor must be determined from non-perturbative methods such as lattice calculations or QCD sum rules. Evidence for $B_d-\bar{B_d}$ mixing was first reported in 1987 by the ARGUS experiment from the study of like sign lepton correlations\cite{mixdiscover}. The CLEO~1.5 experiment later confirmed the result\cite{cleo15mix}. The observed level of mixing was significantly larger than theoretically expected and provided the first suggestion that the mass of the top quark was large, much greater than the $\sim 30$ GeV indicated by the UA1 experiment at that time.
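The size of $\Delta M_d$ predicted by this formula can be checked at the order-of-magnitude level. In the Python sketch below we write $m_t^2\,F(m_t^2/m_W^2)=m_W^2\,S_0(x_t)$ with $S_0$ the Inami-Lim box function; the values of $m_t$, $f_{B_d}\sqrt{B_{B_d}}$ and $|V_{tb}^*V_{td}|$ are illustrative assumptions, not fitted inputs.

```python
# Order-of-magnitude evaluation of the box-diagram expression for dM_d.
import math

def S0(x):
    """Inami-Lim box function."""
    return (x * (4 - 11 * x + x * x) / (4 * (1 - x) ** 2)
            - 3 * x ** 3 * math.log(x) / (2 * (1 - x) ** 3))

G_F = 1.166e-5                       # GeV^-2
m_B, m_W, m_t = 5.28, 80.4, 170.0    # GeV; m_t is an assumed value
eta_qcd = 0.55                       # NLO QCD correction quoted in the text
fB_sqrtB = 0.175                     # GeV, assumed f_Bd * sqrt(B_Bd)
Vtd = 0.009                          # assumed |V_tb^* V_td|

x_t = (m_t / m_W) ** 2
dM_GeV = (G_F ** 2 / (6 * math.pi ** 2)) * m_B * m_W ** 2 * S0(x_t) \
         * eta_qcd * fB_sqrtB ** 2 * Vtd ** 2
dM_ps = dM_GeV / 6.582e-25 * 1e-12   # divide by hbar = 6.582e-25 GeV s
print(f"dM_d ~ {dM_ps:.2f} ps^-1")
```

With these inputs the result comes out at a few tenths of ps$^{-1}$, in the right neighborhood of the measured value; the quadratic dependence on $f_{B_d}\sqrt{B_{B_d}}$ and $|V_{td}|$ is what makes the extraction of $|V_{td}|$ so sensitive to the hadronic input.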
\begin{figure}[htb] \begin{center} \unitlength 1.0in \vskip 20mm \begin{picture}(3.,3.0)(0,0) \put(-0.5,0.0) {\psfig{bbllx=0pt,bblly=0pt,width=4.0in,height=4.0in,file=argusb0b0bar.eps}} \end{picture} \vskip 5 mm \caption{A fully reconstructed event with mixing observed by the ARGUS experiment.} \label{argmix} \end{center} \end{figure} The substantial mixing rate also implies that CP violation could be large in $B$ decay to CP eigenstates, provided the amplitude for the $B$ decay to a CP eigenstate interferes with the amplitude for a $\bar{B}$ meson to mix and then decay to the same final state. A relative phase between the two amplitudes can be introduced by the CKM couplings. That is, mixing provides a mechanism that gives rise to interfering amplitudes and hence CP violation. To experimentally measure mixing requires identification of the initial state flavor of the neutral $B$ meson at production as well as the final state flavor after decay of the $\bar{B}$ meson. This ``tagging'' of the initial flavor can be accomplished using leptons or other partial reconstruction techniques. In addition, the production fraction for the neutral B meson in question must be known. Several experimental parameters are used to measure the strength of mixing. For example, the ratio of the time integrated number of $B^0$ and $\bar{B^0}$ mesons is denoted $r= {N(\bar{B}^0)\over N(B^0)}$ if the initial state is $|B^0>$. In general, mixing is measured by studying pairs of B mesons since one of the B hadrons is needed to tag the flavor at production. The ratio of the number of mixed events to the number of unmixed events is given by $$R= {{N(B B + \bar{B} \bar{B})} \over { N(B \bar{B} + \bar{B} B)}}$$ At threshold, this becomes $$R= {{N(B^0 B^0 + \bar{B}^0 \bar{B}^0)} \over { N(B^0 \bar{B}^0 + \bar{B^0} B^0)}}$$ On the $\Upsilon(4S)$ resonance and at the threshold for $B \bar{B}^*$ production, $B \bar{B}$ pairs are produced coherently, i.e.,
in a state of definite orbital angular momentum. Quantum statistics for spin zero particles implies that the wave function of the $B \bar{B}$ must be antisymmetric (symmetric) for production with odd (even) orbital angular momentum. This is the case for production at the $\Upsilon(4S)$ where $$ R= {x^2 \over {2+ x^2}} \; ({\rm l~odd})$$ At $B \bar{B}^*$ threshold, the relative orbital angular momentum is zero, and $$ R= {{3 x^2+ x^4} \over {2+ x^2+x^4}} \; ({\rm l~even})$$ Production of neutral B mesons at the $Z^0$ and at hadron colliders is an incoherent sum of these two cases. It is also useful to define $\chi$, which is the probability that a produced neutral B meson mixes and then decays as a neutral $\bar{B}$ meson. The fraction of mixed events is then $2 \chi ~(1-\chi)$. Experiments at the $\Upsilon(4S)$ resonance extract the mixing parameter from the observed ratio of like-sign dileptons to opposite-sign dileptons. This requires knowledge of the relative production of $B^+ B^-$ and $B^0 \bar{B}^0$ meson pairs. For this case, $\chi_d = (1+ \Lambda) r_{wrong}$, where $\Lambda= f_+/f_0 (\tau^+/\tau^0)^2$, $r_{wrong}$ is the ratio of the number of like-sign dileptons to opposite sign dileptons and $f_+, f_0$ are the fractions of $B^+ B^-$ and $B^0 \bar{B}^0$ pairs produced, respectively. The current uncertainty in $\Lambda$ leads to a systematic uncertainty of about 20\% in measurements of mixing from threshold experiments which use like sign dileptons to determine the $b$ quark flavor. \subsection{$B_d-\bar{B_d}$ Mixing} \label{bdbdbmix} The determination of the $B_d$ mixing parameter with the best statistical precision is obtained from measurements of the rate for like-sign dileptons in experiments at threshold. The yield of like-sign dileptons is found after subtracting background contributions from non-resonant production (continuum), cascades, $\psi^{(')}$s and misidentified leptons.
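For the coherent ($l$ odd) case these relations can be inverted: $R=\chi/(1-\chi)$ and $x=\sqrt{2\chi/(1-2\chi)}$. The Python fragment below turns the $\Upsilon(4S)$ dilepton average $\chi_d\approx 0.156$ (tabulated below) into $x_d$ and $\Delta m_d$, with an assumed $B^0$ lifetime of $1.6$ ps.

```python
# Convert the time-integrated chi_d measured at the Upsilon(4S) into
# x_d = dM/Gamma and dM_d, using chi = x^2 / (2 (1 + x^2)).
import math

chi_d = 0.156                 # Upsilon(4S) dilepton average
x_d = math.sqrt(2 * chi_d / (1 - 2 * chi_d))
tau_B0_ps = 1.6               # assumed B0 lifetime in ps
dM_d = x_d / tau_B0_ps        # ps^-1
print(f"x_d = {x_d:.2f}, dM_d = {dM_d:.2f} ps^-1")
```

The result, $x_d\approx 0.67$ and $\Delta m_d\approx 0.4$ ps$^{-1}$, is consistent with the $\Upsilon(4S)$ determinations discussed below.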
There are results from the ARGUS, CLEO~1.5 and CLEO~II experiments (see Table~\ref{mixthresh}). To reduce the systematic error from the poorly measured fraction $\Lambda$, another technique to increase the $B^0-\bar{B^0}$ content of the sample has been developed. Events which contain a wrong sign lepton and a partially reconstructed $B \to D^{*+} \ell \nu$ decay are used. In this case, it is not necessary to detect the decay products of the $D^0$ meson. The method takes advantage of the small energy release in the decay $D^{*+}\to D^0 \pi^+$ and the kinematic constraints of production near threshold\cite{cleoiimix},\cite{argusdlmix}. \begin{table}[htb] \caption{Measurements of the mixing parameter $\chi_d$ from threshold experiments. The first error is statistical, while the second is the experimental systematic error and the third is due to the uncertainty in $\Lambda$.} \medskip \label{mixthresh} \begin{tabular}{lll} Experiment & Technique & $\chi_d$\\ \hline\medskip ARGUS & $\ell^+, \ell^+$& $0.173\pm 0.038\pm 0.044^{+0.031}_{-0.023}$ \\ CLEO~1.5 & $\ell^+, \ell^+$ &$0.142\pm0.035\pm0.034^{+0.025}_{-0.019}$ \\ CLEO~II & $\ell^+, \ell^+$ &$0.157\pm0.016\pm0.018^{+0.028}_{-0.021}$ \\ ARGUS & partial $D^{*-} \ell^+ \nu,\ell^+$ & $0.162\pm 0.043\pm 0.039$ \\ CLEO~II & partial $D^{*-} \ell^+ \nu,\ell^+$& $0.149\pm0.023\pm0.019\pm 0.010$ \\ \hline \medskip Average $\Upsilon(4S)$ & $\ell^+, \ell^+$ & $0.156\pm 0.020 ^{+0.027}_{-0.020} (\Lambda)$ \\ Average $\Upsilon(4S)$ & partial $D^{*-} \ell^+ \nu, \ell^+$& $0.151\pm 0.0265\pm 0.010(\Lambda)$ \\ \end{tabular} \end{table} In the CLEO~II analysis, the energy of the $D^*$ is approximated by $(E_{\pi}/E_{\pi}^*) \times M_{D^*}$ where $E_\pi$ is the energy of the slow $\pi$ in the laboratory frame and $E_{\pi}^*$ is the corresponding energy in the center of mass frame\cite{cleoiimix}, \cite{mssmix}. The direction of the slow pion is a good estimator of the $D^*$ direction.
A quantity analogous to missing mass is formed: $$ MM^2 \approx (E_{beam} - E_\ell - E_{D^*})^2 -|p_\ell + p_{D^*}|^2$$ where $E_{D^*}$ and $p_{D^*}$ are the estimates of the $D^*$ energy and momentum determined from the slow pion energy and direction. For signal events, the variable $MM^2$ peaks near zero with a width of $0.9$ GeV$^2$. A similar partial reconstruction technique has been applied by ARGUS\cite{argusdlmix}. The measurement of $\chi_d$ using the partial reconstruction technique substantially reduces the systematic uncertainty from $\Lambda$ but has a slightly larger statistical error. Note that the results from the dilepton and partial reconstruction analyses should not be combined since there is substantial overlap between the datasets used for the two analyses. \begin{table}[htb] \caption{Measurements of the mixing parameter $\Delta M_d$ in units of ps$^{-1}$. The notation (time) indicates that an explicitly time dependent measurement was performed to determine the mixing parameter.} \medskip \label{Tmix1} \begin{tabular}{lll} Experiment & Technique & $\Delta m_d$ \\ \hline \medskip ALEPH & $\ell^+, \ell^+$(time)& $0.44\pm 0.05^{+0.09}_{-0.06}$ \\ OPAL & $\ell^+, \ell^+$(time)& $0.462^{+0.040+0.052}_{-0.053-0.035}$ \\ DELPHI & $\ell^+, \ell^+$(time)& $0.53^{+0.11+0.11}_{-0.10-0.10}$ \\ ALEPH & $D^* \ell^+, Q_j$(time) & $0.497\pm 0.070\pm 0.036$ \\ OPAL & $D^* \ell^+, Q_j$(time) & $0.508\pm 0.075\pm 0.025$ \\ DELPHI & $D^* \ell^+, Q_j$ (time)& $0.456\pm 0.068\pm 0.043$ \\ OPAL & $D^{*+}, \ell^+$(time) & $0.57\pm 0.11\pm 0.02$ \\ DELPHI & $\ell^+, K^+, Q_j$(time)& $0.586\pm 0.049\pm 0.062$ \\ DELPHI & $D^{*+}, Q_j$(time)& $0.50\pm 0.12\pm 0.06$ \\ \hline $\Upsilon(4S)$& partial $D^{*-} \ell^+, \ell^+$ & $0.405\pm 0.041$ \\ LEP & (time) & $0.501\pm 0.034$ \\ \hline \smallskip World Average & & $0.462\pm 0.026$ \\ \end{tabular} \end{table} Time integrated mixing measurements from LEP and hadron colliders determine the quantity $\chi = f_d \chi_d + f_s \chi_s$
where $f_d$ and $f_s$ are the fractions of $B_d$ and $B_s$ mesons produced. This method gives weak constraints on $\chi_s$ and $\chi_d$ which are not competitive with those deduced from the time dependent measurements and from the experiments at the $\Upsilon (4S)$ resonance. Hence these results will not be discussed further. Additional details and a summary of these results are available in Ref.~\cite{Venus}. The ALEPH, DELPHI and OPAL experiments have performed explicit measurements of $P(B^0\to \bar{B^0})$ as a function of time to obtain the parameter $x_d$\cite{alephmix},\cite{delphimix},\cite{opalmix}. The initial state $b$ quark flavor is tagged either using leptons or jet charge, while the flavor of the final state $b$ quark is tagged using either $\bar{B}_{d}\to D^{*+} \ell^- X$, $\bar{B}_{d}\to D^{*+} X$, or $\bar{B}_{d}\to \ell^- X$. If the final state is not fully reconstructed, as is the case for the analyses using dileptons, then the decay time must be determined using a topological vertexing technique where the lepton from the $B$ decay and the other tracks in the same jet hemisphere are combined. The boost is determined using the observed energy, missing momentum and a correction factor determined from a Monte Carlo simulation. The observed fraction of like sign leptons $N_{++}/(N_{++} + N_{+-})$ is clearly not time independent (see Figure~\ref{alephmix}) and the beginning of one oscillation cycle is visible. A full oscillation lasts about 15 ps. Due to the effects of acceptance and resolution, additional oscillations are not seen. The largest contributions to the systematic errors in the measurements of $\Delta m_d$ using time dependent $D^*-$lepton correlations arise from the uncertainties in the decay time resolution and the knowledge of the charged B meson fraction. 
For the measurements with dileptons and those using the lepton-jet charge tagging technique, the uncertainty in the $B_s$ fraction at the $Z^0$ also gives a significant contribution to the systematic error. The results from the LEP experiments with silicon vertex detectors are given in Table~\ref{Tmix1} as well as the average of the CLEO~II and ARGUS measurements using the partial reconstruction technique. The $\Upsilon(4S)$ value was computed using the world average for the $B^0$ lifetime. The results from the various time dependent techniques used by the LEP experiments were averaged separately and then combined to form the LEP average. For the LEP dilepton measurements, a common systematic error of $0.09$ ps$^{-1}$ was assumed, while for the $D^{*+} \ell^-,\, Q_j$ results, a common systematic error of $0.02$ ps$^{-1}$ was assumed. The systematic errors in the different techniques were then assumed to be uncorrelated. This treatment of the systematic errors gives slightly more weight to the results obtained with the jet charge tagging technique than a simple weighted average would.
\begin{figure}[htb]
\begin{center}
\unitlength 1.0in
\vskip 25mm
\begin{picture}(3.,3.0)(0,0)
\put(-1.6,-0.8) {\psfig{bbllx=0pt,bblly=0pt,width=6.0in,height=6.0in,file=aleph_time.ps}}
\end{picture}
\caption{The fraction of wrong sign leptons as a function of time from the ALEPH experiment.}
\label{alephmix}
\end{center}
\end{figure}
\subsection{$B_s-\bar{B_s}$ Mixing}
The measurement of the mixing parameter $x_s=\Delta M/\Gamma$ for the $B_s$ meson is one of the goals of high energy collider experiments and experiments planned for future facilities \cite{BELLE,BABAR,DESY}. A measurement of $x_s$ combined with a determination of $x_d$, the corresponding quantity for the $B_d$ meson, allows the determination of the ratio of the CKM matrix elements $|V_{td}|^2/|V_{ts}|^2$ with significantly reduced theoretical uncertainties.
The ratio of the mixing parameters can be written as
\begin{equation}
{{x_s}\over {x_d}} = {{(m_{B_s} \eta_{QCD} B_{B_s} f_{B_s}^2)}\over {(m_{B_d} \eta_{QCD} B_{B_d} f_{B_d}^2)}} \,\left|{{V_{ts}^2} \over {V_{td}^2}}\right| \, {{\tau_s}\over{\tau_d}}
\end{equation}
where the factor which multiplies the ratio of CKM matrix elements is believed to be unity up to $SU(3)$ breaking effects. Ali and London\cite{Alickm} have estimated
$$ {{\Delta m_s}\over{\Delta m_d}} \; = \; (1.19 \pm 0.10)\, {{|V_{ts}|^2}\over{|V_{td}|^2}} $$
The time integrated probability that a neutral $B$ meson mixes is $\chi= {1\over 2} {x^2 \over{1+x^2}}$. As $x$ becomes large, as is expected for the $B_s$ meson, $\chi$ asymptotically approaches 0.5. Thus time integrated measurements of $B_s$ mixing are insensitive to $x_s$ when mixing is maximal, and one must make time dependent measurements in order to extract this parameter. These are experimentally challenging due to the rapid oscillation rate of the $B_s$ meson. Using dilepton events in which the tagging lepton is vertexed with other tracks in the same hemisphere and the neutrino energy is deduced from the energy flow in the event, ALEPH has searched for a high frequency component in their fit to the proper time distribution. They find $\Delta M_s > 3.9$ ps$^{-1}$ or $x_s>5.5$\cite{alephmix}. From an event sample with a lepton and a tag using a special jet charge where each track is weighted by its rapidity\cite{jetcharge}, ALEPH has obtained an even tighter constraint on the rate of $B_s$ mixing. The jet charge tagging technique allows a high efficiency, of order 45\%, to be achieved with a mistagging probability of about 21\%. The upper limit on $B_s$ mixing is determined by performing a series of Monte Carlo experiments. Allowing for systematic errors, including a 30\% uncertainty in the $B_s$ fraction, they obtain $\Delta M_s > 6$ ps$^{-1}$ or $x_s>8.5$ at the 95\% confidence level\cite{alephmix},\cite{fortymix}.
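The saturation of $\chi$ can be illustrated numerically. The short sketch below is not from the text; the $B^0$ lifetime of 1.6 ps used to form $x_d = \Delta m_d\,\tau$ is an assumed nominal value.

```python
# Sketch: time-integrated mixing probability chi = x^2 / (2 (1 + x^2)).
# Assumed inputs: Delta m_d = 0.462 ps^-1 (world average quoted above)
# and an assumed B0 lifetime of 1.6 ps; x = Delta m * tau.

def chi(x):
    """Time-integrated mixing probability for oscillation parameter x."""
    return 0.5 * x**2 / (1.0 + x**2)

x_d = 0.462 * 1.6   # x = Delta m * tau
print(chi(x_d))     # ~0.18, in the range of the chi_d measurements
print(chi(8.5))     # ~0.49: already near 0.5 at the ALEPH x_s limit
```

At the limit $x_s > 8.5$, $\chi$ is within about one percent of its asymptotic value 0.5, which is why time-integrated measurements cannot resolve $x_s$.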
Similarly, OPAL uses the dilepton technique and allows for a high frequency component in their fit. They obtain $\Delta M_s > 2.2$ ps$^{-1}$ which gives $x_s > 3.0$ at the 95\% confidence level\cite{opalmix}. Other tagging techniques, for example, using the fragmentation kaon to enhance the $B_s$ content or partial reconstruction of the $D_s$ meson, are being investigated by the high energy collider experiments. For the $B_s$ meson, the quantity $\Delta\Gamma$ may be large enough to be observable. Parton model calculations\cite{Hagelin} and calculations with exclusive final states\cite{Aleksan} suggest that the width difference may be $10-20\%$. This lifetime difference could be determined experimentally by using decays to final states with different CP. For example, a measurement of a difference in the lifetimes between $\bar{B}_s^0\to \psi\phi$ and $\bar{B}_s^0\to D_s^- \ell^+ \nu$ would yield $\Delta\Gamma/\Gamma^2$. It has also been suggested that such measurements could be used to constrain $|V_{ts}/V_{td}|^2$ if parton model calculations are reliable\cite{Browpak}.
\section{INCLUSIVE B DECAY}
\subsection{Motivation}
Due to the large mass of the $b$ quark, $B$ meson decays give rise to a large number of secondary decay products. For instance, CLEO finds that the charged and photon multiplicities at the $\Upsilon (4 S)$ are: $n_{\rm charged}=10.99 \pm 0.06 \pm 0.29$, $n_{ \gamma}=10.00\pm 0.53 \pm 0.50$, respectively \cite{multi}. Similarly, ARGUS \cite{multiARG} finds $n_{\rm charged}=10.74 \pm 0.02$. The high multiplicity of final state particles leads to a large number of possible exclusive final states. Even with a detector that has a large acceptance for both charged tracks and showers, it is difficult to reconstruct many exclusive final states because of the combinatorial backgrounds. Furthermore, the detection efficiency drops for high multiplicity final states.
Thus, to get a complete picture of $B$ meson decay, it is important to study inclusive decay rates. A number of theoretical calculations of inclusive $B$ decay rates have been made using the parton model. It is believed that measurements of such inclusive rates can be more reliably compared to the theoretical calculations than can measurements of exclusive decays. While this is sufficient motivation for studying the inclusive rates, there is also a need for accurate measurements in order to model the decays of $B$ mesons both for high energy collider experiments and for experiments at the $\Upsilon (4S)$. As a specific example, the inclusive rate for $B\to\psi$ has been used to determine the $B$ meson production cross-section at the Tevatron \cite{pppsi}. The branching ratios for inclusive $B$ decays to particular final state particles are determined by measuring the inclusive yields of these particles in data taken on the $\Upsilon (4S)$ resonance, and subtracting the non-resonant background using data taken at energies below the $\Upsilon (4S)$ resonance. The off-resonance data are scaled to correct for the energy dependence of the continuum cross-section. Results on inclusive production at the $\Upsilon (4S)$ are usually presented as a function of the variable $x$, which is the fraction of the maximum possible momentum carried by the particle, $p_{max}=\sqrt{E_{beam}^2 - M^2}$. The endpoint for production in $B$ decays is at $x=0.5$.
\subsection{Inclusive $B$ Decay to Mesons}
CLEO~1.5 \cite{CLEOK} has measured the branching fractions of inclusive $B$ decays to light mesons, while ARGUS has determined the average multiplicities of light mesons in $B$ decay.
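As an aside, the endpoint at $x=0.5$ for production in $B$ decays follows from simple kinematics; the beam energy and particle masses in the sketch below are assumed nominal values, not numbers quoted in the text.

```python
# Sketch: scaled momentum x = p / p_max with p_max = sqrt(E_beam^2 - M^2).
# A light particle from a B decay (B nearly at rest at the Upsilon(4S))
# carries at most roughly m_B / 2, so its x cannot much exceed 0.5.
import math

E_BEAM = 5.290   # GeV, assumed beam energy at the Upsilon(4S)
M_B = 5.279      # GeV, assumed B meson mass
M_PI = 0.1396    # GeV, assumed charged pion mass

p_max = math.sqrt(E_BEAM**2 - M_PI**2)
x_endpoint = (M_B / 2.0) / p_max
print(round(x_endpoint, 3))   # close to the x = 0.5 endpoint
```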
\begin{figure}[htb] \begin{center} \unitlength 1.0in \begin{picture}(3.,2.)(0,0) \put(-.4,-.2){\psfig{width=3.0in,height=2.0in,% file=bexcl_etamom.ps}} \end{picture} \bigskip \vskip 10 mm \caption{The momentum spectra for $B\to \eta X$ as measured in CLEO~II data.} \label{etamom} \end{center} \end{figure} \begin{figure}[htb] \begin{center} \unitlength 1.0in \begin{picture}(3.,2.7)(0,0) \put(-.7,-.8){\psfig{width=4.5in,height=4.0in,% file=jdl.ps}} \end{picture} \bigskip \vskip 10 mm \caption{$B\to D^0 X$, $D^+ X$, and $D^{*+}X$ momentum spectra in CLEO 1.5 data. The dashed curve is the prediction of the phenomenological model of Wirbel and Wu while the solid histogram is the prediction of their free quark model} \label{Fdmomdata} \end{center} \end{figure} If more than one meson of the particle type under study is produced in a $B\bar{B}$ decay, then the branching fraction and the multiplicity will differ. Unless otherwise noted, the results reported in Table \ref{Tbmulti} are averaged over $B$ and $\bar{B}$ decay. \begin{table}[htb] \caption{Multiplicities or branching fractions of light mesons per $B$ meson decay.} \label{Tbmulti} \begin{tabular}{lll} Mode & CLEO 1.5 \cite{CLEOK} & ARGUS \cite{ARGUSK} \\ & (Branching Ratio) & (Multiplicity) \\ \hline $ B/\bar{B}\to \pi^{\pm} $ & & $ 3.59\pm 0.03\pm0.07$ \\ (not from $K_s,\Lambda$) & & \\ $ B/\bar{B}\to \pi^{\pm} $ & & $ 4.11\pm 0.03\pm0.08$ \\ (incl. $K_s,\Lambda$) & & \\ $ B/\bar{B}\to K^{\pm} $ & $ 0.85\pm 0.07\pm 0.09$ & $0.78\pm 0.02\pm 0.03$ \\ $ \bar{B}\to K^{-} $ & $ 0.66\pm 0.05\pm 0.07$ & \\ $ \bar{B}\to K^{+} $ & $ 0.19\pm 0.05\pm 0.02$ & \\ $ B/\bar{B}\to K^0/\bar{K}^0 $ & $ 0.63 \pm 0.06\pm0.06$ & $0.64\pm 0.01 \pm 0.04$ \\ $ B/\bar{B}\to K^{*0} $ & & $0.146\pm 0.016\pm 0.020$ \\ $ B/\bar{B}\to K^{*+} $ & & $0.182\pm 0.054\pm 0.024$ \\ $ B/\bar{B}\to \rho^0 $ & & $0.209\pm 0.042 \pm 0.033$ \\ $ B/\bar{B}\to \omega $ & & $< 0.41$ (90\% C.L.) \\ $ B/\bar{B}\to f_0(975) $ & & $<0.025$ (90\% C.L.) 
\\ $ B/\bar{B}\to \eta $ & $0.176\pm 0.011 \pm 0.0124$ (CLEO II) & \\ $ B/\bar{B}\to \eta ' $ & & $<0.15$ (90\% C.L.) \\ $ B/\bar{B}\to \phi $ & $ 0.023 \pm 0.006 \pm 0.005 $ & $0.039\pm 0.003\pm 0.004 $ \\ \end{tabular} \end{table} In the decay $b \to c \to s$ the charge of the kaon can be used to determine the flavor of the $b$ quark. A first attempt to measure the tagging efficiency and misidentification probability for this method has been performed by ARGUS \cite{ARGUSK}. With the large sample of reconstructed $B^0$ and $B^+$ decays from CLEO~II it should be possible to measure these quantities directly. The experiments also measure the momentum spectra for the particles listed in Table \ref{Tbmulti}. An example of such data is the momentum spectrum for $B\to \eta$ production shown in Figure~\ref{etamom}. These results provide important information needed to improve Monte Carlo generators and determine tagging efficiencies for future $B$ experiments\cite{dunietztag}. \begin{figure}[htb] \begin{center} \vskip 10mm \unitlength 1.0in \begin{picture}(3.,2.8)(0,0) \put(-0.7,0.3){\psfig{width=4.5in,height=3.0in,% bbllx=0pt,bblly=0pt,bburx=567pt,bbury=567pt,file=bdds_fig9_dswithfits.ps}} \end{picture} \caption{$B\to D_s X$ momentum spectrum in CLEO~II data. The solid histogram is the sum of the two components. The two dotted histograms indicate the two body components from $\bar{B}\to D^{(*)} D_s^{(*)-}$ and $\bar{B}\to D^{(**)} D_s^{(*)-}$. The dash-dotted histogram shows the contribution of the three body process.} \label{Fdsmomdata} \end{center} \end{figure} The inclusive production of $D^0, D^+, D_{s}^+$ and $D^{*+}$ mesons in $B$ decay has been measured by ARGUS \cite{ARGUSD} and CLEO~1.5 \cite{CLEOD}. Preliminary measurements of several of these inclusive branching fractions from CLEO~II have also become available\cite{cleodds},\cite{dpfd0}. 
To improve signal to background, only the $D^0 \to K^- \pi^+$, $D^+ \to K^- \pi^+ \pi^+$ and $D_{s}^{+} \to \phi \pi^+$ decay modes are used. The results, rescaled for the charm branching ratios, are given in Table~\ref{khinc}. Other detailed properties of inclusive $B$ decay have been determined in addition to branching fractions. The momentum spectra for the inclusive decay of $B$ mesons to $D^0$, $D^+$, and $D^{*+}$ as measured by CLEO~1.5 are shown in Fig. \ref{Fdmomdata}. The $D^{*+}$ spectrum is not measured for $x <0.1$ due to poor reconstruction efficiency for slow tracks. The polarization as a function of $x$ for $B \to D^{*+}$ has also been measured and was found to be consistent with the predictions of Wirbel and Wu\cite{D*pol} and of Pietschmann and Rupertsberger\cite{PR}. Analyses of the shape of the $D_s$ momentum spectrum (Fig. \ref{Fdsmomdata}) indicate that there is a substantial two body component. In model dependent fits the ARGUS and CLEO~1.5 experiments find two body fractions of $(58 \pm 7 \pm 9)$\% \cite{ARGUSD} and $(56 \pm 10)$\% \cite{CLEOD}, respectively. CLEO~II finds a somewhat smaller two body fraction, $(45.7\pm 1.9\pm 3.7\pm 0.6)$\%, where the last error accounts for the uncertainty due to model dependence in the predictions for the rates for the two body modes\cite{cleodds}. There is no uncertainty in this result from the $D_s \to \phi \pi$ branching fraction. Averaging the results from the three experiments we find a two body component of $(49.4 \pm 4.4)\%$, which leads to ${\cal{B}}(B\to D_sX\; {\rm (two~body)})\; = \; (4.8\pm 1.3)\%$. It is important to determine which mechanisms are responsible for the production of the remainder, the lower momentum $D_s$ mesons. Two possibilities are external $W^-$ emission with $W^-\to \bar{c} s$ or $W^-\to \bar{u} d$ with $s \bar{s}$ quark popping.
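The average of the two-body fractions can be approximated by an inverse-variance weighted mean with each experiment's errors combined in quadrature; this is only a sketch, since the published $(49.4 \pm 4.4)\%$ treats the individual errors and correlations in more detail.

```python
# Sketch: inverse-variance weighted average of the three two-body fractions.
# Errors combined in quadrature, correlations ignored, so this only roughly
# reproduces the quoted (49.4 +- 4.4)%.
import math

def weighted_average(values):
    """values: list of (central_value, total_error) pairs."""
    weights = [1.0 / err**2 for _, err in values]
    mean = sum(w * v for (v, _), w in zip(values, weights)) / sum(weights)
    return mean, 1.0 / math.sqrt(sum(weights))

measurements = [
    (58.0, (7.0**2 + 9.0**2) ** 0.5),          # ARGUS
    (56.0, 10.0),                              # CLEO 1.5
    (45.7, (1.9**2 + 3.7**2 + 0.6**2) ** 0.5), # CLEO II
]
mean, err = weighted_average(measurements)
print(round(mean, 1), round(err, 1))   # roughly 48 +- 4
```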
Results on inclusive $B$ decay to final states with $\psi$ and $\psi '$ mesons have been reported by CLEO~1.5\cite{SecondB}, ARGUS\cite{FifthB}, and CLEO~II\cite{CLEOpsiinc}. Indirect measurements of charmonium production have been reported by CDF \cite{cdf_cc} and the LEP experiments \cite{lep_cc}. Because of the large uncertainties in the composition of their data samples, these results have not been included in our determination of the world averages listed in Table \ref{khinc}. In the most recent high statistics analysis from CLEO~II, the effect of final state radiation has been taken into account \cite{dmc}. This effect leads to a significant tail on the low side of the $\psi \to e^+ e^-$ mass peak and a smaller effect in the $\mu^+ \mu^-$ spectrum. Even with a large mass window that extends from $2.50$ to $3.05$ GeV$/c^2$, this effect can modify the calculated detection efficiency by more than $10\%$. Small corrections are also made for non-resonant $\psi$ production in the CLEO~II analysis \cite{fastpsi}. The resulting invariant dielectron and dimuon mass distributions are shown in Fig. \ref{Fpsi}. The theoretical predictions for charmonia production in $B$ decay\cite{bodwin,kuehn_psi,palmstech} will be discussed further in Section~\ref{fac-color}. \begin{figure}[htb] \begin{center} \unitlength 1.0in \begin{picture}(3.,3.)(0,0) \put(-2.0,-.8){\psfig{height=10.0in,file=cleo_charmonium_mass.ps}} \end{picture} \vskip 2mm \caption{$B\to {\rm Charmonium} ~X$ invariant mass spectra from CLEO II: (a) $\psi \to e^+ e^-$ channel and (b) $\psi \to \mu^+ \mu^-$ channel. (c) $\psi \gamma - \psi$ mass difference showing the $\chi_{c1}$ and $\chi_{c2}$ signals.} \label{Fpsi} \end{center} \end{figure} \begin{figure}[htb] \begin{center} \unitlength 1.0in \begin{picture}(3.,3.)(0,0) \put(-1.9,-.8){\psfig{height=10.0in,file=cleo_charmonium_mom.ps}} \end{picture} \vskip 2 mm \caption{$B\to {\rm Charmonium}~X$ momentum spectra in CLEO~II data. 
(a) Inclusive $B \to \psi X$ production with contributions from individual decay channels overlaid. (b) Direct $B \to \psi X$ production. (c) $B \to \psi' X$.}
\label{Fpsimomdata}
\end{center}
\end{figure}
The momentum spectrum for $B \to \psi, \psi'$ transitions has been measured (Fig.~\ref{Fpsimomdata}). The two body component due to $B\to \psi K$ and $B\to \psi K^*$ saturates the spectrum in the momentum range between 1.4 and 2.0 GeV. By subtracting the contributions from $\psi$'s originating in $\psi'$ and $\chi_c$ decays, CLEO and ARGUS measure the momentum distribution of the direct component shown in Fig.~\ref{Fpsimomdata}(b). The average branching ratio for direct $\psi$ production (i.e., $\psi$ not from $\psi'$ or $\chi_c$ decays) is found to be $(0.82 \pm 0.08)\%$. The two body component constitutes about 1/3 of direct $\psi$ production.
\begin{table}[htb]
\caption{$\psi$ polarization $\Gamma_L/\Gamma $ in inclusive $B$ meson decays.}
\label{Tbpsipol}
\begin{tabular}{lll}
$\psi$ momentum & CLEO II \cite{psipol} & ARGUS \cite{argpol} \\
\hline
$p_{\psi}< 0.8$ GeV/c & $0.55 \pm 0.35 $ & \\
0.8 GeV/c $<p_{\psi}< 1.4$ GeV/c & $0.49 \pm 0.32 $ & \\
1.4 GeV/c $<p_{\psi}< 2.0$ GeV/c & $0.78 \pm 0.17 $ & $1.17\pm 0.17$ \\
all $p_{\psi}< 2.0$ GeV/c & $0.59 \pm 0.15 $ &
\end{tabular}
\end{table}
The polarization $\Gamma_L/\Gamma$ as a function of momentum for $B\to \psi$ transitions has also been determined (see Table~\ref{Tbpsipol}). According to ARGUS, the $\psi$ mesons in the highest momentum bin are completely longitudinally polarized. Since the highest momentum bin is dominated by two body $B$ decay, the polarization measured in this bin can be used to estimate the polarization of $B \to \psi K^*$ after correcting for the contribution of $B \to \psi K$. Therefore the ARGUS result indicates that the $B \to \psi K^*$ mode is dominated by a single orbital angular momentum state and hence by a single CP eigenstate.
Integrating over the range of kinematically allowed momenta, CLEO measures the average polarization of $\psi$ mesons in $B$ decay to be $\Gamma_L/\Gamma \; =\; 0.59 \pm 0.15$. This result is consistent with the longitudinal polarization of 54\% predicted by Palmer and Stech \cite{palmstech}. Using factorization and HQET, M. Wise finds significantly more transverse polarization, $\Gamma_L/\Gamma\; \approx 0.25$, in the inclusive process\cite{wise_pol}. Results on inclusive $B \to \chi_c X, \chi_c \to \gamma \psi$ decays have been reported by ARGUS \cite{arguschi} and CLEO~II \cite{CLEOpsiinc,fastpsi}. ARGUS assumes there is no $\chi_{c2}$ production. CLEO~II has significantly better $\chi_c$ mass resolution than ARGUS and allows for both possibilities. The branching ratio for $\chi_{c0} \to \gamma \psi$ is $(6.6\pm 1.8) \times 10^{-3}$ so the contribution of the $\chi_{c0}$ meson to the $\psi\gamma$ final state can be neglected. CLEO finds evidence at the 2.5 standard deviation level for a $B\to \chi_{c2}$ contribution which would indicate non-factorizable contributions or higher order processes $O(\alpha_s^2)$ in $b\to c \bar{c} s$\cite{bodwin}. The decay of $B$ mesons to the lightest charmonium state, the $\eta_c$, has not yet been observed. A recent CLEO~II search placed an upper limit of 0.9\% on the process $B \to \eta_c X$ at the 90\% confidence level \cite{CLEOpsiinc}. Using the results in Table~\ref{khinc}, it is possible to separate the component of $B\to \psi$ production due to the decays of higher charmonium states from the direct component. Similarly, the direct $B\to \chi_{c1}$ component can be determined by removing the contribution from $B\to \psi'$, $\psi'\to \chi_{c1} \gamma$. It is assumed that all $\psi'$ mesons are directly produced.
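The feed-down subtraction can be sketched numerically. The secondary charmonium branching fractions below are assumed PDG-era values, not numbers quoted in the text, so the result only approximately reproduces the direct rate in Table~\ref{khinc}.

```python
# Sketch: direct B -> psi rate as the inclusive rate minus feed-down from
# chi_c1, chi_c2 and psi'. Inclusive B branching fractions are the averages
# from Table khinc; the charmonium decay branching fractions are assumed.
b_psi_incl = 0.0115    # B -> psi X
b_chic1 = 0.0042       # B -> chi_c1 X
b_chic2 = 0.0025       # B -> chi_c2 X
b_psiprime = 0.0035    # B -> psi' X (assumed all direct)

br_chic1_psi = 0.273   # chi_c1 -> gamma psi (assumed)
br_chic2_psi = 0.135   # chi_c2 -> gamma psi (assumed)
br_psip_psi = 0.57     # psi' -> psi X (assumed)

b_psi_direct = (b_psi_incl
                - b_chic1 * br_chic1_psi
                - b_chic2 * br_chic2_psi
                - b_psiprime * br_psip_psi)
print(round(100.0 * b_psi_direct, 2))   # percent; compare (0.82 +- 0.08)%
```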
\begin{table}[htb] \let\tabbodyfont\scriptsize \caption{Branching fractions [\%] of inclusive $B$ decays } \label{khinc} \begin{tabular}{l|lll|l} \multicolumn{1}{l}{Particle} & \multicolumn{1}{l}{ARGUS} & \multicolumn{1}{l}{CLEO 1.5} & \multicolumn{1}{l}{CLEO II} & \multicolumn{1}{l}{Average} \\ \hline $\bar{B} \rightarrow \bar{D}^0 X$ & $ 49.7 \pm 3.8 \pm 6.4 \pm 2.6 $ & $ 59.7 \pm 3.2 \pm 3.6 \pm 3.1 $ & $ 63.8 \pm 1.1 \pm 2.0 \pm 1.7 $ & $ 62.1 \pm 2.0 \pm 3.2 $ \\ $\bar{B} \rightarrow D^- X$ & $ 23.0 \pm 3.0 \pm 4.4 \pm 1.5 $ & $ 24.9 \pm 3.3 \pm 2.0 \pm 1.6 $ & & $ 24.2 \pm 3.1 \pm 1.6 $ \\ $\bar{B} \rightarrow D^{*-} X$ & $ 26.7 \pm 2.3 \pm 4.5 \pm 1.4 $ & $ 22.7 \pm 1.3 \pm 2.3 \pm 1.2 $ & & $ 23.5 \pm 2.3 \pm 1.2 $ \\ $\bar{B} \rightarrow D_s^- X$ & $ 7.9 \pm 1.1 \pm 0.8 \pm 1.9 $ & $ 8.3 \pm 1.2 \pm 2.0 $ & $ 11.5 \pm 0.4 \pm 0.8 \pm 2.9 $ & $ 9.8 \pm 0.6 \pm 2.4 $ \\ $\bar{B} \rightarrow \phi X$ & & $ 2.3 \pm 0.6 \pm 0.5 $ & & $ 2.3 \pm 0.8 $ \\ $\bar{B} \rightarrow \psi X$ & $ 1.25 \pm 0.19 \pm 0.26 $ & $ 1.31 \pm 0.12 \pm 0.27 $ & $ 1.13 \pm 0.04 \pm 0.06 $ & $ 1.15 \pm 0.07 $ \\ $\bar{B} \rightarrow \psi X$ (direct) & $ 0.95 \pm 0.27 $ & & $ 0.81 \pm 0.08 $ & $ 0.82 \pm 0.08 $ \\ $\bar{B} \rightarrow \psi$'$ X$ & $ 0.50 \pm 0.19 \pm 0.12 $ & $ 0.36 \pm 0.09 \pm 0.13 $ & $ 0.34 \pm 0.04 \pm 0.03 $ & $ 0.35 \pm 0.05 $ \\ $\bar{B} \rightarrow \chi_{c1} X$ & $ 1.23 \pm 0.41 \pm 0.29 $ & & $ 0.40 \pm 0.06 \pm 0.04 $ & $ 0.42 \pm 0.07 $ \\ $\bar{B} \rightarrow \chi_{c1} X$ (direct) & & & $ 0.37 \pm 0.07 $ & $ 0.37 \pm 0.07 $ \\ $\bar{B} \rightarrow \chi_{c2} X$ & & & $ 0.25 \pm 0.10 \pm 0.03 $ & $ 0.25 \pm 0.10 $ \\ $\bar{B} \rightarrow \eta_{c} X$ & & & $ <0.90 $ (90\% C.L.) & $ <0.90 $ (90\% C.L.) 
\\ $\bar{B} \rightarrow p X$ & $ 8.2 \pm 0.5 \pm 1.2 $ & $ 8.0 \pm 0.5 \pm 0.3 $ & & $ 8.0 \pm 0.5 $ \\ $\bar{B} \rightarrow \bar{\Lambda} X$ & $ 4.2 \pm 0.5 \pm 0.6 $ & $ 3.8 \pm 0.4 \pm 0.6 $ & & $ 4.0 \pm 0.5 $ \\ $\bar{B} \rightarrow \Xi ^+ X$ & $ <0.51 $ (90\% C.L.)& $ 0.27 \pm 0.05 \pm 0.04 $ & & $ 0.27 \pm 0.06 $ \\ $\bar{B} \rightarrow \Lambda _c^- X$ & $ 7.0 \pm 2.8 \pm 1.4 \pm 2.1 $ & $ 6.3 \pm 1.2 \pm 0.9 \pm 1.9 $ & $ 4.2 \pm 0.5 \pm 0.6 \pm 1.3 $ & $ 4.7 \pm 0.7 \pm 1.4 $ \\ $\bar{B} \rightarrow \Sigma_c^0 X$ & & & $ 0.53 \pm 0.19 \pm 0.16 \pm 0.16 $ & $ 0.53 \pm 0.25 \pm 0.16 $ \\ $\bar{B} \rightarrow \Sigma_c^0 \bar{N}$ & & & $ <0.17 $ (90\% C.L.) & $ <0.17 $ (90\% C.L.) \\ $\bar{B} \rightarrow \Sigma_c^{++} X$ & & & $ 0.50 \pm 0.18 \pm 0.15 \pm 0.15 $ & $ 0.50 \pm 0.23 \pm 0.15 $ \\ $\bar{B} \rightarrow \Sigma_c^{++} \bar{\Delta}^{--}$ & & & $ <0.12 $ (90\% C.L.) & $ <0.12 $ (90\% C.L.) \\ $\bar{B} \rightarrow \Xi_c^+ X$ & & & $ 1.5 \pm 0.7 $ & $ 1.5 \pm 0.7 $ \\ $\bar{B} \rightarrow \Xi_c^0 X$ & & & $ 2.4 \pm 1.3 $ & $ 2.4 \pm 1.3 $ \\ \end{tabular} \let\tabbodyfont\small \end{table} Using the procedures outlined in Section II the results reported by the different experiments have been rescaled to accommodate the new charm branching ratios. The world averages for inclusive $B \to$~meson decays are given in Table~\ref{khinc}. \subsection{Inclusive $B$ Decay to Baryons } ARGUS\cite{argusbary} and CLEO~1.5\cite{crawbary} have observed inclusive production of $\bar{p}$, $\Lambda$, $\Xi$ and the charmed $\Lambda_c$ baryon. Recently CLEO~II has reported the observation of $B \to \Sigma_c X$\cite{sigmamz}, $B\to \Xi_c^{0} X$ and $B\to \Xi_c^{+} X$\cite{cleocascade}. The measured branching ratios for these decays and the world averages can be found in Table \ref{khinc}. The determination of branching ratios for inclusive $B$ decays to the charmed baryons $\Lambda_c$ and $\Sigma_c$ requires knowledge of ${\cal{B}}(\Lambda_c^+ \to pK^-\pi^+)$. 
However, the uncertainty in this quantity is still large as it can only be determined by indirect and somewhat model dependent methods. The results given in this review use ${\cal{B}}(\Lambda_c^+ \to pK^-\pi^+)\:=\: (4.3 \pm 1.0 \pm 0.8)$\% \cite{crawbary}. For modes involving $\Lambda_c$ baryons the uncertainty due to the $\Lambda_c$ branching ratio scale is listed as a separate error. The momentum spectrum of $B\to \Lambda_c$ transitions has been measured by CLEO~1.5 \cite{crawbary} and ARGUS \cite{argusbary}. The result of a recent CLEO~II \cite{sigmamz} measurement is shown in Fig.~\ref{lambdacmom}(a). The momentum spectrum is rather soft, indicating $\Xi_c$ production or the presence of a significant multibody component. Similarly, CLEO~II has found that $B\to \Sigma_c^0 X$ and $B\to \Sigma_c^{++} X$ decays have no two body contribution. In addition to the inclusive branching ratios given above, the experimental data have been used in attempts to disentangle which of the baryon production mechanisms shown in Fig.~\ref{btobaryon} dominates. CLEO~1.5 \cite{crawbary} and ARGUS \cite{argusbary} have investigated baryon correlations in $B$ decay in order to elucidate the underlying decay process.
\begin{table}[htb]
\caption{Branching fractions [\%] of inclusive $B$ decays to baryon pairs.}
\label{Tbbaryonp}
\begin{tabular}{lll}
Mode & CLEO 1.5 & ARGUS \\
\hline
$B\to p \bar{p}~X$ & $2.4\pm 0.1\pm 0.4 $ & $ 2.5\pm 0.2\pm 0.2$ \\
$B\to \Lambda \bar{\Lambda}~X$ & $<0.5$ (90\% C.L.) & $<0.88$ (90\% C.L.) \\
$B\to \Lambda \bar{p} ~X$ & $2.9\pm 0.5\pm 0.5 $ & $2.3\pm 0.4\pm 0.3$ \\
$B\to D^{*+} p \bar{p} ~X$ & $< 0.35 $ (90\% C.L.) & $ $ \\
$B\to D N \bar{N} ~X$ & $<5.2$ (90\% C.L.) & $ $
\end{tabular}
\end{table}
We follow the notation of Reference~\cite{crawbary}. Let $N$ denote baryons with $S=C=0$ (e.g. $p$, $n$, $\Delta$, $N^*$). Let $Y$ refer to baryons with $S=-1, C=0$ (e.g. $\Lambda$, $\Sigma^0$, $\Sigma^+$). Let $Y_c$ refer to baryons with $S=0, C=1$ [e.g.
$\Lambda_{c}^{+}$, $\Sigma_{c}^{(+,0,++)}$]. Then the following final states can be used to distinguish possible mechanisms for baryon production in $B$ decay (Fig. \ref{btobaryon}).
\begin{figure}[htb]
\begin{center}
\unitlength 1.0in
\begin{picture}(3.,2.5)(0,0)
\put(-1.1,-0.1) {\psfig{bbllx=0pt,bblly=0pt,bburx=567pt,bbury=567pt,%
width=6.0in,height=4.0in,file=btobaryon.ps}}
\end{picture}
\vskip 10 mm
\caption{ Decay diagrams for $B$ meson decays to baryons: (a) External spectator diagram (b) W Exchange diagram (c) External spectator diagram which produces $D N \bar{N} X$ and $D Y \bar{Y} X$ final states (d) Internal spectator diagram which produces $DN\bar{N}X$ and $DY\bar{Y}X$ final states.}
\label{btobaryon}
\end{center}
\end{figure}
\begin{figure}[htb]
\begin{center}
\unitlength 1.0in
\begin{picture}(2.0,1.5)(0.0,0.0)
\put(-0.6,0.81){\psfig{width=3.8in,height=1.1in,file=intbaryon.ps}}
\end{picture}
\caption{ Baryon production in $B$ meson decay via internal $W$ emission. (a) $b \to c\bar{u}d$ with $q\bar{q}$ popping, (b) $b \to c\bar{c}s$ with $q\bar{q}$ popping.}
\label{intbaryon}
\end{center}
\end{figure}
\begin{figure}[htb]
\begin{center}
\unitlength 1.0in
\begin{picture}(4.,2.7)(0,0)
\put(-1.1,-0.2) {\psfig{bbllx=0pt,bblly=0pt,bburx=567pt,bbury=567pt,%
width=6.5in,height=5.5in,file=lambdac_momentum.ps}}
\end{picture}
\vskip 2 mm
\caption{ Momentum spectrum of $\Lambda_c$ baryons from $B$ decay (CLEO~II). (a) The overlaid histograms show the spectra from two components of the internal W-emission process $b\to c \bar{c} s$, $\bar{B}\to \Xi_c\bar{\Lambda_c}$, and $\bar{B}\to \Xi_c^{'}\bar{\Lambda_c}$. (b) The overlaid histograms are the results of a Monte Carlo study assuming multibody $\Lambda_c + (n\pi)$ final states with different numbers of additional pions. (c) The $\Lambda_c$ momentum spectrum for events with a $\Lambda_c^+$ in coincidence with a high momentum lepton ($\ell^-$) tag.
(d) The $\Lambda_c$ momentum spectrum for events with a $\Lambda_c^+$ in coincidence with a high momentum lepton ($\ell^+$) tag. } \label{lambdacmom} \end{center} \end{figure} \begin{figure}[htb] \begin{center} \unitlength 1.0in \begin{picture}(4.,4.0)(0,0) \put(-1.1,-0.7) {\psfig{bbllx=0pt,bblly=0pt,bburx=567pt,bbury=567pt,% width=5.5in,height=4.5in,file=lambdac_lep_cor.ps}} \end{picture} \vskip 2 mm \caption{$\Lambda_c -$ lepton correlation in $B$ decay (CLEO~II). (a) The $p K^-\pi^+$ invariant mass spectrum for $\Lambda_c^+-\ell^+$ combinations. (b) The $p K^-\pi^+$ invariant mass spectrum for $\Lambda_c^+-\ell^-$ combinations.} \label{lambdalep} \end{center} \end{figure} \begin{figure}[htb] \begin{center} \unitlength 1.0in \begin{picture}(3.,2.5)(0,0) \put(-1.1,-0.1) {\psfig{bbllx=0pt,bblly=0pt,bburx=567pt,bbury=567pt,% width=5.0in,height=5.0in,file=bksi_ksi_mass.ps}} \end{picture} \vskip 4 mm \caption{Evidence for $\Xi_c$ production in $B$ decays (CLEO~II). (a) Continuum subtracted $\Xi_c^0\to \Xi^-\pi^+$ invariant mass distribution. (b) Continuum subtracted $\Xi_c^+\to \Xi^-\pi^+\pi^+$ invariant mass distribution.} \label{bksimass} \end{center} \end{figure} \begin{enumerate} \item {$\bar{B} \to Y_c \bar{N} X$, $\bar{B} \to \Xi_c \bar{Y} X$}\\ These final states are produced by the usual $b \to c W^-$ coupling in a spectator or exchange diagram in conjunction with the popping of two quark pairs from the vacuum (as shown in Figs.~\ref{btobaryon}(a),(b)). It should be noted that the two mechanisms can be distinguished by examination of the $Y_c$ momentum spectrum, since the exchange diagram will produce two body final states (e.g. $\Lambda_c \bar{p}$ or $\Sigma_c^{++} \bar{\Delta}^{--}$). \item {$\bar{B}\to D N \bar{N} X$, $\bar{B} \to D Y \bar{Y} X$}\\ The non-charmed baryon-antibaryon pair is produced from W fragmentation after hadronization with two quark-antiquark pairs popped from the vacuum (as shown in Figs.~\ref{btobaryon}(c),(d)). 
The $D$ meson is formed from the charm spectator quark system. If this mechanism is significant, inclusive production of charmless baryon-antibaryon pairs should be observed in $B$ decay.
\item{$\bar{B} \to Y_c \bar{Y} X$, $\bar{B} \to \Xi_c \bar{Y_c} X$}\\ These states are produced by the internal spectator graph with $W^- \to \bar{c} s$ in conjunction with the popping of two quark antiquark pairs. Since ${\cal B} (W^- \to \bar{c} s)/ {\cal B} (W^- \to {\rm all})$ is about $0.15$, this mechanism may be suppressed.
\item {$\bar{B}\to D_{s}^{-} Y_c \bar{N} X$, $\bar{B}\to D_{s}^{-} \Xi_c \bar{Y} X$}\\ This is the same as mechanism (1) with $W^- \to \bar{c} s$.
\end{enumerate}
The low rates for $B\to \Lambda \bar{\Lambda} X$, $\Lambda \bar{p} X$ and $D^* p \bar{p} X$ (see Table \ref{Tbbaryonp}) suggest that mechanism (2) is small. The absence of a two body component in the momentum spectra of $B\to \Lambda_c X$, $\Sigma_c X$ indicates that the W-exchange mechanism is small. Thus it was thought reasonable to assume that $\bar{B}\to Y_c \bar{N} X$ with an external spectator $b\to c W^-$ coupling (Fig.~\ref{btobaryon}(a)) is the principal mechanism in $B$ to baryon transitions. If $B$ decays to baryons are dominated by $\bar{B} \to \Lambda_c \bar{p} X$ and $\bar{B} \to \Lambda_c \bar{n} X$, then measurements of the branching ratios for $B \to \bar{p} X$, $B \to p \bar{p} X$ can be used to extract the absolute $\Lambda_c \to p K^- \pi^+$ branching ratio. The CLEO~1.5 measurements give ${\cal B}(\Lambda_c \to p K^- \pi^+) = (4.3 \pm 1.0 \pm 0.8)\%$, which can be used to normalize all other measured $\Lambda_c$ branching ratios. In a similar fashion, ARGUS finds $(4.1\pm 2.4)$\% for this branching ratio. An alternate explanation for the absence of a two body component in $B$ decays to baryons was recently proposed by Dunietz, Falk and Wise\cite{dunietzbary}. These authors suggested that the primary mechanism in such decays is the internal W-emission process $b\to c \bar{c} s$.
This might lead to two body final states such as $\bar{B}\to \bar{\Lambda}_c \Xi_c$, which would account for the softness of the $\Lambda_c$ momentum spectrum. CLEO has searched for the mechanism suggested by Dunietz {\it et al.}~in a variety of ways. By examining $\Lambda_c$-lepton correlations, it is possible to constrain the size of the $b\to c \bar{c} s$ component in $B\to {\rm baryon}$ decays. The $b\to c \bar{c} s$ component gives rise to opposite sign $\Lambda_c^+ \ell^-$ correlations whereas the internal W-emission process $b\to c \bar{u} d$ gives same sign $\Lambda_c^+ \ell^+$ correlations (Fig.~\ref{lambdalep}(a)). From the ratio of same sign to opposite sign $\Lambda_c$-lepton yields, CLEO finds $b\to c \bar{c} s/ b\to c \bar{u} d = (20\pm 13 \pm 4) \%$ for internal W-emission processes. This shows that $b\to c \bar{c} s$, although present, is not the dominant mechanism operating in $B$ decays to baryons. CLEO~II has measured the $\Lambda_c^+$ momentum spectrum separately for $\Lambda_c^+\ell^-$ and $\Lambda_c^+\ell^+$ correlations (Figs.~\ref{lambdalep}(b), (c)). The $\Lambda_c^+$ momentum spectrum is somewhat softer in events containing an additional $\ell^-$ tag. This is consistent with the expectation that $b\to c\bar{c}s$ transitions produce $\Lambda_c^+$ baryons accompanied by a very massive $\bar{\Xi}_c$ baryon. On the other hand, in $b\to c\bar{u}d$ transitions, the $\Lambda_c^+$ is produced in association with a lighter nucleon or nucleon resonance, which should result in a hard $\Lambda_c^+$ momentum spectrum. Since the $b\to c \bar{c} s$ mechanism is present, $\Xi_c^+$ and $\Xi_c^0$ baryons should be produced in $B\to$ baryon transitions. However, $\Xi_c^0$ baryons can also be produced from $b\to c\bar{u}d$ transitions with $s\bar{s}$ popping. Naively, one estimates $s\bar{s}$ popping to be approximately 15\% of all $q\bar{q}$ popping.
Thus this mechanism should contribute $\displaystyle \frac{B\to\Xi_c\bar{\Lambda}X}{B\to \Lambda_c \bar{N} X}\approx 0.15$ to the observed $B\to \Xi_c$ rate. A simple phase space argument gives $\displaystyle\frac{W^- \to \bar{c}s}{W^- \to \bar{u}d}\approx 0.30$. Combining these two contributions, one expects a $B\to \Xi_c$ branching ratio of $0.45 \times {\cal{B}}(B\to \Lambda_c X)$. Experimentally, the sum of the rates for $B\to \Xi_c^+$ and $B\to \Xi_c^0$ decays relative to $B\to \Lambda_c X$ is consistent with this expectation: $$ \frac{{\cal{B}}(B\to \Xi_c X)}{{\cal{B}}(B\to \Lambda_c X)} \; = \; 0.8 \pm 0.4 $$ However, the $\Xi_c$ absolute branching ratio scale is poorly known and the experimental errors need to be reduced before any final conclusion can be deduced from this ratio. To verify whether the dominant mechanism for baryon production in $B$ decays is the external spectator mechanism with $b\to c \bar{u} d$, as was previously assumed by the CLEO and ARGUS analyses, CLEO~II has searched for evidence of $B\to \Lambda_c \bar{N} \ell \nu$. This should give rise to several distinctive experimental signatures: $\Lambda$-lepton correlations, $\Lambda_c$-lepton correlations, and semi-exclusive $B\to \Lambda_c^+ \bar{p} \ell^- \nu$ production having a missing mass consistent with a $B$ decay. No significant signals were observed and limits (at the 90\% C.L.) of ${\cal B}(B\to \Lambda_c \bar{N} X\ell\nu)/{\cal B}(B\to \Lambda_c X) <5.7\%$, ${\cal B}(B\to \Lambda_c \bar{N}\ell\nu)/{\cal B}(B\to \Lambda_c X) <6\%$, and ${\cal B}(B\to \Lambda_c \bar{p}\ell\nu)/{\cal B}(B\to \Lambda_c X) <10\%$, respectively, were obtained\cite{glasbary}. These limits indicate that the conventional and previously accepted picture of baryon production in $B$ decay is incorrect. A possible explanation of all the existing data requires the simultaneous presence of several production mechanisms. The internal spectator process $b\to c \bar{u} d$ followed by $u \bar{u}$ or $d \bar{d}$ quark popping is dominant.
This leads to production of a high mass excited anti-nucleon in conjunction with a charmed baryon and accounts for the soft momentum spectrum of charmed baryons produced in $B$ decay as well as the absence of $B\to \Lambda_c \bar{N} X \ell \nu$. The internal spectator process $b\to c \bar{c} s$ with quark popping as well as the internal spectator process $b\to c \bar{u} d$ with $s \bar{s}$ quark popping are also operative at the 10--20\% level. The latter two mechanisms account for the production of $\Xi_c$ baryons in $B$ decay. \subsection{Charm Production in $B$ Decay} \label{charmpro} The measurements of inclusive decay rates can be used to test the parton level expectation that most $B$ decays proceed via a $b\to c$ transition. If we neglect the small contributions from $b\to u$ and penguin transitions, we expect about 1.15 charm quarks to be produced per $B$ decay. The additional $15\%$ is due to the fact that the virtual $W$ forms an $s \bar{c}$ quark pair with a probability of approximately $0.15$. To verify this expectation we use the experimental results listed in Table~\ref{khinc} and determine the charm yield to be \begin{eqnarray*} {\rm Charm~yield} & = & {\cal B}(B \to D^0 X) + {\cal B}(B \to D^+ X) + {\cal B}(B \to D_s X) \\ & +& {\cal B}(B \to \Lambda_c X) + {\cal B}(B \to \Xi^+_c X) + {\cal B}(B \to \Xi^0_c X) \\ & + &2\times{\cal B}(B\to \psi X) + 2\times{\cal B}(B\to \psi' X) + 2\times{\cal B}(B\to \chi_{c1} X) \\ &+& 2\times{\cal B}(B\to \chi_{c2} X) + 2\times{\cal B}(B\to \eta_c X~({\rm incl.~other ~c\bar{c}}))\\ & = & 1.10 \pm 0.06 \\ \end{eqnarray*} The factor of 2 which multiplies the charmonium branching fractions accounts for the two charm quarks produced in $b \to c\bar{c}s$ transitions. Wherever possible the branching fractions for direct production are used. The contribution of $B\to \eta_c X$ and other charmonia is generously taken to be at the CLEO 90\% confidence level upper limit for the process $B\to \eta_c X$.
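The charm yield above is a sum of inclusive branching fractions in which the charmonium modes are counted twice, with uncorrelated errors combined in quadrature. A minimal bookkeeping sketch of that combination follows; the input values below are illustrative placeholders, not the measurements of Table~\ref{khinc}:

```python
import math

def charm_yield(singles, doubles):
    """Combine inclusive branching fractions given as (value, error) pairs.

    Modes in `doubles` (charmonium states from b -> c cbar s) count two
    charm quarks each, so both the value and its error enter with a
    factor of 2; all errors are added in quadrature (uncorrelated).
    """
    total = sum(v for v, _ in singles) + 2.0 * sum(v for v, _ in doubles)
    err = math.sqrt(sum(e**2 for _, e in singles)
                    + sum((2.0 * e)**2 for _, e in doubles))
    return total, err

# Hypothetical illustrative inputs (placeholders, not the Table values):
singles = [(0.60, 0.05), (0.25, 0.03)]   # e.g. B -> D0 X, B -> D+ X
doubles = [(0.01, 0.002)]                # e.g. B -> psi X
yield_, err = charm_yield(singles, doubles)
```

The same helper applies directly to the two-charm sum ${\cal B}(B\to X_{c\bar c})$ discussed below, where no mode is double counted.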
Another interesting quantity is the fraction of $B$ decays in which two charm quarks are produced. In a parton level calculation, Palmer and Stech \cite{palmstech} find that ${\cal{B}}(B \to X_{c \bar{c}}) = (19 \pm 1)\%$ where the theoretical error is the uncertainty due to the choice of quark masses. This can be compared to the sum of the experimental measurements \begin{eqnarray*} {\cal{B}}(B \to X_{c \bar{c}}) & = & {\cal{B}}(B \to D_s X) + {\cal{B}}( B \to \psi X) + {\cal{B}}(B\to \psi' X) \\ & +& {\cal {B}}(B\to \chi_{c1} X) + {\cal {B}}(B\to \chi_{c2} X) + {\cal {B}}(B\to \Xi_c X) \\ & +& {\cal {B}}(B\to \eta_c X~({\rm incl.~other ~c\bar{c}})) \\ &= & (15.6 \pm 2.7)\% \\ \end{eqnarray*} where the direct $B \to \psi$ and $B \to \chi_{c1}$ branching fractions have been used. The contribution from $B\to \Xi_c^0 X$ is reduced by 1/3 to take into account the fraction that is not produced by the $b\to c \bar{c} s$ subprocess but by $b\to c\bar{u}d \, + \, s \bar{s}$ quark popping. With the addition of these recent experimental results the understanding of baryon production in $B$ decay is improving. In contrast to meson production in $B$ decay, $B \to {\rm baryon}$ transitions proceed predominantly through the internal W-emission process $b\to c\bar{u}d$ followed by light quark pair popping. In a parton level calculation with diquark correlations taken into account, Palmer and Stech \cite{palmstech} have calculated the total rate for inclusive $B$ decay to charmed baryons. They find ${\cal{B}}(B \to$ charmed baryons) $\approx 6\%$. In order to compare this prediction with experimental data, we will assume most $B$ to charmed baryon decays proceed through a $\Lambda_c$ baryon but correct for the small fraction of $\Xi_c$ baryons produced by $b \to c\bar{u}d$ transitions combined with $s\bar{s}$-popping.
This gives \begin{center} \begin{tabular}{rcl} ${\cal B}(B \to {\rm charmed~baryons})$ & = & ${\cal{B}}(B\to \Lambda_c X) + 1/3 \times {\cal{B}}(B\to \Xi_c^0 X)$\\ & = & $(5.5 \pm 1.6)\%$ \end{tabular} \end{center} \begin{table}[htb] \caption{CLEO II results on exclusive branching ratios for $B \to$ baryon transitions \protect\cite{exclbaryon}.} \label{exclbaryons} \begin{tabular}{lll} $B$ mode & Events observed & ${\cal{B}}$ [\%]\\ \hline $\bar{B}^0 \to \Lambda_c^+\bar{p}$ & $<2.3$ & $< 4.4 \times 10^{-2}$\\ $\bar{B}^0 \to \Lambda_c^+\bar{p}\pi^0$ & $<2.3$ & $< 0.076$ \\ $\bar{B}^0 \to \Lambda_c^+\bar{p}\pi^+\pi^- $& $15.0\pm4.7$ & $0.187 \pm 0.059 \pm 0.056 \pm 0.045$\\ $\bar{B}^0 \to \Lambda_c^+\bar{p}\pi^+\pi^-\pi^0$ & $<11.6$ & $< 0.76$\\ $\bar{B}^0 \to \Lambda_c^+\bar{p}\pi^+\pi^-\pi^+\pi^-$ & $<6.4$ & $< 0.34$\\ \hline $B^- \to \Lambda_c^+\bar{p}\pi^-$ & $<6.4$ & $< 0.084 $\\ $B^- \to \Lambda_c^+\bar{p}\pi^-\pi^0$ & $<8.7$ & $< 0.36$ \\ $B^- \to \Lambda_c^+\bar{p}\pi^-\pi^+\pi^-$ & $<14.7$ & $< 0.55$ \\ $B^- \to \Lambda_c^+\bar{p}\pi^-\pi^+\pi^-\pi^0$ & $<15.6$ & $< 2.17 $ \end{tabular} \end{table} The experimental result for the charm yield per $B$ decay is consistent with the naive expectation that $1.15$ charm quarks are produced per $b$ decay. However, it does not support a number of proposals which suggest that at least $1.3$ charm quarks should be produced per $b$ decay. In these recent theoretical efforts, large charm quark yields are a consequence of modifying the heavy quark masses in order to explain the discrepancy between theoretical calculations and experimental measurements of the inclusive semileptonic rate, ${\cal B}(B \to X\ell \nu)$ (see Section~\ref{baffle})\cite{falk_baffle}. The data are not yet sufficiently precise to convincingly rule out the possibility of a larger charm yield. In addition, there are several possible systematic flaws in the computation of the yield of charm quarks.
The charm meson absolute branching fractions can contribute a systematic uncertainty, although the errors from this source have been significantly reduced by the recent precise determinations of ${\cal B}(D^0\to K^-\pi^+)$\cite{DKpi} and ${\cal B}(D^+\to K^-\pi^+\pi^+)$. However, the absolute branching fraction scales for the $D_s$ meson and $\Lambda_c$ baryons are still quite uncertain. Since the inclusive branching ratios to these particles are small, a substantial change to the branching ratio scale would be required to significantly modify the charm yield. There could also be a large contribution to the inclusive rate that has not been measured. It has been suggested by Palmer and Stech\cite{palmstech} that $b \to c \bar{c} s$ followed by $c \bar{c} \to {\rm gluons}$, which in turn hadronize into a final state with no charm, has a large branching ratio. The charm content for this mechanism would not be properly taken into account. Another related suggestion is that the rate for the hadronic penguin diagram $b\to sg$ is larger than expected\cite{kaganbsg}. \section{EXCLUSIVE B DECAY TO BARYONS} The first exclusive $B\to$ baryon decay has been observed by CLEO~II\cite{exclbaryon}. A small signal was reconstructed in the mode $\bar{B}^0\to \Lambda_c^+ \bar{p} \pi^+ \pi^-$ corresponding to a branching ratio of $(0.187\pm 0.059\pm 0.056\pm 0.045)\%$. In addition, CLEO~II has set limits on other exclusive modes which are given in Table~\ref{exclbaryons}.
\begin{figure}[p] \begin{center} \unitlength 1.0in \begin{picture}(3.,3.)(0,0) \put(-.35,0.0){\psfig{width=2.5in,height=2.5in,% file=bexcl_dstr1.ps}} \end{picture} \bigskip \bigskip \vskip 15 mm \caption[]{Beam constrained mass distributions from CLEO~II for: (a) $B^- \to D^{*0} \pi^-$ decays, (b) $B^- \to D^{*0} \rho^-$ decays, (c) $\bar{B}^0 \to D^{*+} \pi^-$ decays, and (d) $\bar{B}^0 \to D^{*+} \rho^-$ decays.} \label{dspi} \end{center} \vskip 2 mm \begin{center} \unitlength 1.0in \begin{picture}(2.2,2.2)(0.0,0.0) \put(-1.,-0.9){\psfig{width=4.in,height=4.in,file=bexcl_fig11.ps}} \end{picture} \vskip 10 mm \caption[]{Resonant substructure for $B\to D^* \rho^-$ from CLEO~II for: (a) the $\pi^0\pi^-$ invariant mass spectrum for the $ \bar{B}^0 \to D^{*+} \pi^0\pi^-$ decay mode in data. (b) the $\pi^0\pi^-$ invariant mass spectrum for the $ \bar{B}^0 \to D^{*+} \pi^0\pi^-$ decay mode in data.} \label{subs} \end{center} \end{figure} \begin{figure}[p] \unitlength 1.0in \vskip 10 mm \begin{picture}(3.,2.0)(0,0) \put(1.4,-1.1){\psfig{width=3.0in,height=2.6in,% file=bexcl_dstr2.ps}} \end{picture} \bigskip \caption[]{Beam constrained mass distributions from CLEO~II for: (a) $B^- \to D^{*0} a_{1}^{-}$ and (b) $\bar{B}^0 \to D^{*+} a_{1}^{-}$.} \label{FBmaone} \unitlength 1.0in \vskip 18 mm \begin{center} \begin{picture}(2.2,2.4)(0.0,0.0) \put(-0.7,-0.8){\psfig{bbllx=0pt,bblly=0pt,bburx=567pt,bbury=567pt% ,width=3.7in,height=3.3in,file=bexcl_ma1dst.ps}} \end{picture} \vskip 15 mm \caption[]{Resonant substructure of $\bar{B^0}\to D^{*+} a_1$ from CLEO~II: (a) The $\pi ^- \pi ^- \pi ^+$ invariant mass spectrum from a Monte Carlo simulation of $\bar {B}^0 \to D^{*+} a_1^-$ (b) The $\pi ^- \pi ^- \pi ^+$ invariant mass spectrum from Monte Carlo simulation for $\bar {B}^0 \to D^{*+} (\pi ^- \rho ^0)_{NR}$ (c) The $\pi ^- \pi ^- \pi ^+$ mass spectrum from data after $B$ mass sideband subtraction. 
The fit to the sum of (a) and (b) is superimposed.} \label{Fmaonea} \end{center} \end{figure} \begin{figure}[htb] \unitlength 1.0in \vskip 10 mm \begin{center} \begin{picture}(3.0,3.4)(0.0,0.0) \put(-.35,.3){\psfig{width=2.5in,height=2.5in,file=bexcl_d24202460.ps}} \end{picture} \vskip 15 mm \caption[]{ Beam constrained mass distributions from CLEO~II for: (a) $B^- \to D^{**0}(2420) \pi^-$ where $D^{**0}(2420) \to D^{*+} \pi^-$, (b) $B^- \to D^{**0}(2460) \pi^-$ where $D^{**0}(2460) \to D^{*+} \pi^-$, (c) $B^- \to D^{**0}(2420) \pi^-\pi^0$ where $D^{**0}(2420) \to D^{*+}\pi^-$, (d) $B^- \to D^{**0}(2460) \pi^-\pi^0$ where $D^{**0}(2460) \to D^{*+} \pi^-$} \label{dsspi} \end{center} \end{figure} \begin{figure}[htb] \unitlength 1.0in \vskip 10 mm \begin{center} \begin{picture}(3.0,3.5)(0.0,0.0) \put(-0.35,0.3){\psfig{width=2.5in,height=2.5in,file=bexcl_fig22.ps}} \end{picture} \vskip 15 mm \caption[]{Angular distributions (efficiency corrected) from CLEO~II for (a) the helicity angle from $D^{*+} \to D^0 \pi ^+$ in $\bar {B^0} \to D^{*+} \rho ^-$ and (b)the helicity angle from $\rho ^- \to \pi ^- \pi ^0$ in $\bar {B^0} \to D^{*+} \rho ^-$ (c) the helicity angle from $D^{*+} \to D^0 \pi ^+$ in $\bar {B^0} \to D^{*+} \pi ^-$} \label{helrho} \end{center} \end{figure} \begin{figure}[htb] \unitlength 1.0in \vskip 10 mm \begin{center} \begin{picture}(3.0,3.5)(0.0,0.0) \put(-0.3,.3){\psfig{width=4.in,height=4.2in,file=bdds_bzero.ps}} \end{picture} \caption[]{ Beam constrained mass distributions for $\bar{B}^0 \to D^{(*)+} D_s^{(*)-}$ from CLEO~II.} \label{bdds_zero} \end{center} \end{figure} \begin{figure}[htb] \unitlength 1.0in \vskip 10 mm \begin{center} \begin{picture}(3.0,3.5)(0.0,0.0) \put(-.3,.3){\psfig{width=4.in,height=4.2in,file=bdds_bplus.ps}} \end{picture} \caption[]{ Beam constrained mass distributions for $B^- \to D^{(*)0} D_s^{(*)-}$ from CLEO~II.} \label{bdds_plus} \end{center} \end{figure} \section{EXCLUSIVE B DECAY TO D MESONS} \label{BDpiDrho} 
\subsection{Measurements of $D (n \pi)^-$ Final States} The decay modes $ \bar{B^0}\to D^+ \pi^-$, $ \bar{B^0} \to D^+ \rho^-$, $ B^-\to D^0 \pi^-$, and $ B^- \to D^0 \rho^-$ are reconstructed following the procedures outlined in Section \ref{B-recon}. The beam constrained mass distributions from CLEO II are shown in Fig.~\ref{dpi}, while the experimental branching ratios are given in Tables ~\ref{kh1} and \ref{kh2} . To select $ \bar{B} \to D \rho^-$ candidates additional requirements are imposed on the $\pi^-\pi^0$ invariant mass and the $\rho$ helicity angle. The CLEO~II analysis requires $ |m(\pi^- \pi^0) - 770|< 150$~MeV and $|\cos\Theta_{\rho}|>0.4$. For the $B \to D \rho^-$ modes there are events which are consistent with both $B \to D \rho^-$ and with $ B \to D^{*} \pi^-$, followed by $ D^{*} \to D \pi^0$. These events are removed from the $B \to D \rho^-$ sample using a cut on the $D^{*} - D$ mass difference. By fitting the $\pi^- \pi^0$ mass spectrum and the helicity angle distribution, CLEO~II finds that at least 97.5\% of the $B \to D \pi^-\pi^0$ rate is described by the decay $B \to D \rho^-$\cite{mcdd}. ARGUS\cite{ThirdB} also finds that the $\pi^- \pi^0$ mass spectrum is consistent with the dominance of $\rho$ production. \subsection{Measurements of $D^*(n\pi)^-$ Final States} We now consider final states containing a $D^*$ meson and one, two or three pions. These include the $B \to D^* \pi^-$ , $B \to D^* \rho^-$, and $B \to D^* a_1^-$ decay channels. The results for the decays $\bar{B^0} \to D^{*+} \pi^-$, $\bar{B^0} \to D^{*+} \rho^-$ and $\bar{B^0} \to D^{*+} \pi^-\pi^-\pi^+$ are listed in Table~\ref{kh2}, and the results for $B^- \to D^{*0} \pi^-$, $B^- \to D^{*0} \rho^-$ and $B^- \to D^{*0} \pi^-\pi^-\pi^+$ are given in Table ~\ref{kh1}. The CLEO II $B^-$ and $\bar{B}^0$ signals in the $D^* \pi$ and $D^* \rho$ decay channels are shown in Fig. \ref{dspi}. They find that $B \to D^* \pi^-\pi^0$ is saturated by the decay $B \to D^* \rho^-$ (Fig. 
\ref{subs}) and set a tight upper limit of $<9$\% at 90\% C.L. on a possible non-resonant contribution \cite{mcdrho}. This disagrees with an ARGUS analysis that finds about 50\% of $\bar{B}^0 \to D^{*+} \pi^- \pi^0$ decays do not contain a $\rho^-$ meson \cite{FifthB}. The CLEO~II data suggest that the signal in $B\to D^{*}\pi^-\pi^-\pi^+$ arises dominantly from $B\to D^{*} a_1^-$. Taking into account the $a_1 \to \pi^-\pi^-\pi^+$ branching fractions, it follows that ${\cal{B}}(B\to D^{*} a_1^-) = 2 \times {\cal{B}}(B\to D^{*}\pi^-\pi^-\pi^+)$. In Fig.~\ref{FBmaone} we show the $M_B$ distributions when the $\pi^-\pi^-\pi^+$ invariant mass is required to be in the interval $1.0 < m(\pi^-\pi^-\pi^+) < 1.6$~GeV. Fig.~\ref{Fmaonea} shows a fit to the $\pi^-\pi^-\pi^+$ mass distributions with contributions from $B \to D^{*+} a_1^-$ and a $B \to D^{*+}\pi^- \rho^0$ non-resonant background. The $a_1$ meson has been parameterized as a Breit-Wigner resonance shape with $m_{a_1} = 1182$~MeV and $\Gamma_{a_1} = 466$~MeV. This fit gives an upper limit of 13\% on the non-resonant component in this decay. This conclusion differs from CLEO~1.5, which attributed $(35\pm 15\pm 8)$\% of their $\bar{B^0} \to D^{*+} \pi^-\pi^-\pi^+$ signal to non-resonant $\bar{B^0} \to D^{*+} \pi^-\rho^0$ decays \cite{anotherB}. ARGUS also finds a significant non-$a_1$ component in this decay but does not quote a quantitative result \cite{FifthB}. The Cabibbo suppressed decay modes such as $B\to D K$ should also be observed and studied in the future. These modes, in particular, $B^+\to D^0 K^+$ and $B^+ \to \bar{D}^0 K^+$ with $D^0\to |f_{CP}\rangle$ (where $|f_{CP}\rangle$ denotes a CP eigenstate) will be used at $B$ factories to constrain one of the three angles of the unitarity triangle. \subsection{Polarization in $B \to D^{*+}\rho^-$ Decays} \label{pol-D*-rho} The sample of fully reconstructed $ \bar{B^0} \to D^{*+}\rho^-$ decays from CLEO~II has been used to measure the $D^{*+}$ and $\rho^-$ polarizations.
By comparing the measured polarizations in $\bar{B^0} \to D^{*+}\rho^-$ with the expectation from the corresponding semileptonic B decay a test of the factorization hypothesis can be performed (see Sec.~\ref{fac-ang-cor}). The polarization is obtained from the distributions of the helicity angles $\Theta_{\rho}$ and $\Theta_{D^*}$. The $D^{*+}$ helicity angle, $\Theta_{D^*}$, is the angle between the $D^0$ direction in the $D^{*+}$ rest frame and the $D^{*+}$ direction in the rest frame of the $B$ meson. After integration over $\chi$, the angle between the normals to the $D^{*+}$ and the $\rho^-$ decay planes, the helicity angle distribution can be expressed as \begin{equation} {d^2\Gamma\over{d\cos\Theta_{D^*}d\cos\Theta_{\rho}}} \propto {1\over{4}}\sin^2\Theta_{D^*}\sin^2\Theta_{\rho}(|H_{+1}|^2+|H_{-1}|^2) +\cos^2\Theta_{D^*}\cos^2\Theta_{\rho}|H_{0}|^2 \label{polar3d} \end{equation} where $H_{i}$ are the amplitudes for the various possible $D^*$ helicity states. The fraction of longitudinal polarization is defined by \begin{equation} {{\Gamma_L}\over{\Gamma}} ~ = ~ {{|H_0|^2}\over{|H_{+1}|^2 + |H_{-1}|^2 + |H_{0}|^2}} \label{ratiohel} \end{equation} If $\Gamma_L$ is large both the $D^{*+}$ and the $\rho^{-}$ helicity angles will follow a $\cos^{2}\Theta$ distribution, whereas a large transverse polarization, $\Gamma_T$, gives a $\sin^2\Theta$ distribution for both helicity angles. To measure the polarization the helicity angle distributions in the $B$ signal region are corrected by subtracting the distributions from a properly scaled mass sideband. The resulting helicity angle distributions, corrected for efficiency, are fitted to the functional form: \begin{equation} {d\Gamma\over{d\cos\Theta}} = N ~ \left[ \cos^2 \Theta ~ + ~{{1}\over{2}} {{\Gamma_{T}}\over{\Gamma}} (1 -3 \cos^2 \Theta) \right]. \label{fithel} \end{equation} This form is derived from the angular distribution given above. It is well behaved for large longitudinal polarization. 
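The one-angle fit function of Eq.~(\ref{fithel}) follows from integrating Eq.~(\ref{polar3d}) over the other helicity angle. A short numerical sketch of that consistency check (the polarization fractions below are illustrative, and the overall normalization is arbitrary):

```python
import math

def joint(ct_d, ct_r, g_l, g_t):
    """Joint helicity-angle distribution of Eq. (polar3d):
    ct_d, ct_r are cos(Theta_D*) and cos(Theta_rho);
    g_l = |H_0|^2 and g_t = |H_+1|^2 + |H_-1|^2."""
    s2_d, s2_r = 1.0 - ct_d**2, 1.0 - ct_r**2
    return 0.25 * s2_d * s2_r * g_t + ct_d**2 * ct_r**2 * g_l

def one_angle(ct, gt_frac):
    """Single-angle form of Eq. (fithel), up to the normalization N."""
    return ct**2 + 0.5 * gt_frac * (1.0 - 3.0 * ct**2)

g_l, g_t = 0.85, 0.15   # illustrative fractions, with g_l + g_t = 1
n = 4000                # midpoint-rule steps over cos(Theta_rho)
ratios = []
for ct_d in (-0.9, -0.3, 0.2, 0.7):
    proj = sum(joint(ct_d, -1.0 + (2*i + 1)/n, g_l, g_t)
               for i in range(n)) * (2.0 / n)
    ratios.append(proj / one_angle(ct_d, g_t))
# all ratios agree: the projection is proportional to Eq. (fithel)
```

The common ratio (here $2/3$) is absorbed into the normalization $N$ of the fit.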
From the fit to the $D^{*+}$ helicity angle distribution, they find $\Gamma_{L}/\Gamma =(85 \pm 8) \% $, while a fit to the $\rho$ helicity angle distribution gives $\Gamma_{L}/\Gamma = (97\pm 8)\%$. The results of the fit are shown in Fig.~\ref{helrho}(a) and (b). As a consistency check they have verified that the $D^{*+}$ mesons in $\bar{B}^0 \to D^{*+}\pi^-$ are completely longitudinally polarized, as expected from angular momentum conservation (Fig. \ref{helrho}(c)). The statistical errors can be reduced by taking advantage of the correlation between the two helicity angles. An unbinned two dimensional likelihood fit to the joint $(\cos\Theta_{D^*}, \cos\Theta_{\rho})$ distribution gives \begin{equation} (\Gamma_{L}/\Gamma)_{\bar{B^0} \to D^{*+} \rho^-}\; =\; 93 \pm 5 \pm 5 \% \end{equation} \subsection{Measurements of $D^{**}$ Final States} \label{B->D**} In addition to the production of $D$ and $D^*$ mesons, the charm quark and spectator antiquark can hadronize as a $D^{**}$ meson. The $D^{**0}(2460)$ has been observed experimentally and identified as the J$^P=2^+$ state, while the $D^{**0}(2420)$ has been identified as the $1^+$ state. These states have full widths of approximately 20 MeV. Two other states, a $0^+$ and another $1^+$ are predicted but have not yet been observed, presumably because of their large intrinsic widths. There is evidence for $D^{**}$ production in semileptonic $B$ decays\cite{Dssin}, and $D^{**}$ mesons have also been seen in hadronic decays. However, early experiments did not have sufficient data to separate the two narrow $D^{**}$ states and hence reported branching ratios only for the combination of the two (see results listed under $B \to D_J^{(*)0}$ in Tables~\ref{kh1} -- \ref{kh4}). In order to search for $D^{**}$ mesons from $B$ decays the final states $B^- \to D^{*+} \pi^- \pi^-$ and $B^- \to D^{*+} \pi^- \pi^- \pi^0$ are studied. 
These decay modes are not expected to occur via a spectator diagram in which the $c$ quark and the spectator antiquark directly form a $D^*$ meson; instead, the $D^{*+}$ should originate from the decay of a $D^{**}$ meson. The $D^{*+}$ is combined with a $\pi^-$ to form a $D^{**}$ candidate. If the $D^{**}$ candidate is within one full width of the nominal mass of either a $D^{**0}(2420)$ or a $D^{**0}(2460)$, it is combined with a $\pi^-$ or $\rho^-$ to form a $B^-$ candidate. CLEO~II has also looked for $D^{**}$ production in the channels $B^-\to D^+ \pi^- \pi^-$ and $\bar{B^0}\to D^0 \pi^- \pi^+$. Since $D^{**0}(2420)\to D \pi$ is forbidden, only the $D^{**0}(2460)$ is searched for in the $D \pi \pi$ final state. Fig.~\ref{dsspi} shows candidate $B$ mass distributions obtained by CLEO~II for the four combinations of $D^{**0}(2460)$ or $D^{**0}(2420)$, and $\pi^-$ or $\rho^-$. In the $D^{**0}(2420) \pi^-$ mode, there is a significant signal of 8.5 events on a background of 1.5 events. In this channel CLEO~II quotes the branching ratio given in Table~\ref{kh1}, while for the other three channels, they give upper limits. ARGUS has also found evidence for $B \to D^{**}(2420) \pi^-$ using a partial reconstruction technique in which they observe a fast and a slow pion from the $D^{**}$ decay but do not reconstruct the $D^0$ meson\cite{Krieger}. Other final states with higher pion multiplicities should be systematically studied in the future. For example, due to the large combinatorial background, there is little information available on $B\to D^{(*)}\pi\pi\pi\pi$.
\begin{table}[htb] \caption{$D_s$ decay channels used to reconstruct $B\to DD_s$ decays.} \label{TDinf} \hfill{ \begin{tabular}{lll} ARGUS \cite{ARGUSDDs} & CLEO 1.5 \cite{DDcleo}& CLEO~II\\ \hline $D_s^+ \to \phi \pi^+ $& $D_s^+ \to \phi \pi^+ $ & $D_s^+ \to \phi \pi^+ $\\ $D_s^+ \to \phi \pi^+ \pi^0 $& & $D_s^+ \to \phi \pi^+ \pi^0 $ \\ $D_s^+ \to \phi \pi^+ \pi^+ \pi^- $& & \\ $D_s^+ \to K_s K^+ $& $D_s^+ \to K_s K^+ $ & $D_s^+ \to K_s K^+ $\\ $D_s^+ \to K_s K^{*+}$ & & \\ $D_s^+\to\bar{K}^{*0} K^+$&$D_s^+\to\bar{K}^{*0} K^+$ & $D_s^+\to\bar{K}^{*0}K^+$\\ $D_s^+ \to \bar{K}^{*0} K^{*+} $& $D_s^+ \to \bar{K}^{*0} K^{*+}$ &\\ & & $D_s^+ \to \eta \pi^+$ \\ & & $D_s^+ \to \eta \rho^+$ \\ $D_s^+ \to \eta' \pi^+$ & & \end{tabular}} \hfill \end{table} \subsection{Exclusive Decays to $D$ and $D_s$ Mesons} \label{doubledees} Another important class of modes is decays to two charmed mesons. As shown in Fig.~\ref{Fdiag}(a), the production of an isolated pair of charmed mesons ($D_s^{(*)}$ and $D^{(*)}$) proceeds through a Cabibbo favored spectator diagram in which the $s\overline{c}$ pair from the virtual $W^-$ hadronizes into a $D_s^-$ or a $D_s^{*-}$ meson and the remaining spectator quark and the $c$ quark form a $D^{(*)}$ meson. These modes have been observed by the CLEO~1.5\cite{DDcleo}, ARGUS\cite{ARGUSDDs} and CLEO~II\cite{cleodds} experiments. The decay channels listed in Table~\ref{TDinf} are used to form $D_s$ meson candidates. $B$ mesons are then reconstructed in eight decay modes: $D_s^-D^+$, $D_s^-D^0$, $D_s^{*-}D^+$, $D_s^{*-}D^0$, $D_s^-D^{*+}$, $D_s^-D^{*0}$, $D_s^{*-}D^{*+}$, and $D_s^{*-}D^{*0}$ (see Figs.~\ref{bdds_zero}, \ref{bdds_plus}). The sum of the exclusive modes, averaged over $B^-$ and $\bar{B}^0$ decays, is $(4.93 \pm 0.72)\%$. This can be compared to the branching fraction of the two body component found in the fit to the inclusive $D_s$ momentum spectrum of $(4.5\pm 1.2)\%$.
The error is dominated by the uncertainty in ${\cal{B}}(D_s \to \phi \pi)$. The remaining contribution to the inclusive production of $D_s$ mesons must be due to the decay modes $B\to D_s^{**} D^{(*)}$, $B\to D_s^{(*)} D^{(*)} (n\pi)$ or $D_s^{(*)} D \pi$. Partial reconstruction techniques are also being investigated to improve the size of the signals in $B\to D^{(*)} D_s^{(*)+}$. Larger samples not only reduce the statistical error in the branching ratio measurements but will also allow the polarization in $B\to D^* D_s^{*+}$ decays to be determined. Comparison of the yield in partially reconstructed and fully reconstructed $B\to D^* D_s^{(*)+}$ events will also give a model independent measurement of ${\cal B}(D_s\to \phi\pi^+)$, which sets the scale for the $D_s$ branching fractions. Branching fractions and background levels for CP eigenstates such as $\bar{B}^0\to D^{(*)+} D^{(*)-}$ will also be studied. \begin{figure}[htb] \unitlength 1.0in \begin{center} \begin{picture}(3.0,2.5)(0.0,0.0) \put(-.35,.8){\psfig{width=3.5in,height=3.0in,file=cleo_b2psi_excl.ps}} \end{picture} \vskip 15 mm \caption[]{ Beam-constrained mass from CLEO~II for: (a) $B^-\to\psi K^-$, (b) $\bar{B^0}\to\psi\bar{K^0}$, (c) $B^- \to\psi K^{*-}$, (d) $\bar{B^0}\to\psi K^{*0}$, (e) $B^-\to\psi' K^-$, (f) $\bar{B^0}\to\psi'\bar{K^0}$, (g) $B^-\to\psi' K^{*-}$, and (h) $\bar{B}^0\to\psi' K^{*0}$.}\label{bpsipk} \label{bpsik} \end{center} \end{figure} \begin{figure}[htb] \unitlength 1.0in \begin{center} \begin{picture}(3.0,3.0)(0.0,0.0) \put(-.1,.3){\psfig{width=2.5in,height=3.0in,file=psipol_final_fig.ps}} \end{picture} \vskip 15 mm \caption[]{ Distributions of the efficiency corrected $\psi$ and $K^*$ helicity angles in $B \to \psi K^*$ decays from CLEO~II.
The overlaid smooth curves are projections of the unbinned maximum likelihood fit described in the text.} \label{expol} \end{center} \end{figure} \begin{table}[htb] \caption{Upper limits (90\% C.L) on color suppressed $B$ decays.}\label{Tbrcol} \begin{tabular}{lcc} Decay Mode & Events & U. L. (\%) \\ \hline $\bar{B^0} \to D^{0} \pi^0$ & $<20.7 $ & $<0.048$ \\ $\bar{B^0} \to D^{0} \rho^0$ & $<19.0$ & $<0.055$ \\ $\bar{B^0} \to D^{0} \eta$ & $<9.5$ & $<0.068$ \\ $\bar{B^0} \to D^{0} \eta^{'}$ & $<3.5 $ & $<0.086$ \\ $\bar{B^0} \to D^{0} \omega $ & $<12.7 $ & $<0.063$ \\ $\bar{B^0} \to D^{*0} \pi^0$ & $<11.0 $ & $<0.097$ \\ $\bar{B^0} \to D^{*0} \rho^0$ & $<8.1$ & $<0.117$ \\ $\bar{B^0} \to D^{*0} \eta$ & $<2.3 $ & $<0.069$ \\ $\bar{B^0} \to D^{*0} \eta^{'}$ & $<2.3 $ & $<0.27$ \\ $\bar{B^0} \to D^{*0} \omega$ & $<9.0 $ & $<0.21$ \end{tabular} \end{table} \begin{table}[htb] \caption{Upper limits on ratios of branching fractions for color suppressed to normalization modes.}\label{Tratcol} \begin{tabular}{cc} Ratio of Branching Ratios & CLEO~II (90\% C.L.) 
\\ \hline ${\cal B}(\bar{B^0} \to D^0 \pi^0)/{\cal B}(B^- \to D^0 \pi^-)$ & $< 0.09$ \\ ${\cal B}(\bar{B^0} \to D^0 \rho^0)/{\cal B}(B^- \to D^0 \rho^-)$ & $< 0.05 $ \\ ${\cal B}(\bar{B^0} \to D^0 \eta)/{\cal B}(B^- \to D^0 \pi^-)$ & $< 0.12 $ \\ ${\cal B}(\bar{B^0} \to D^0 \eta^{'})/{\cal B}(B^- \to D^0 \pi^-)$ & $< 0.16 $ \\ ${\cal B}(\bar{B^0} \to D^0 \omega)/{\cal B}(B^- \to D^0 \rho^-)$ & $< 0.05 $ \\ ${\cal B}(\bar{B^0} \to D^{*0}\pi^0)/{\cal B}(B^- \to D^{*0} \pi^-)$ & $< 0.20 $ \\ ${\cal B}(\bar{B^0} \to D^{*0} \rho^0)/{\cal B}(B^- \to D^{*0} \rho^-)$ & $< 0.07 $ \\ ${\cal B}(\bar{B^0} \to D^{*0} \eta)/{\cal B}(B^- \to D^{*0} \pi^-)$ & $< 0.14 $ \\ ${\cal B}(\bar{B^0} \to D^{*0} \eta^{'})/{\cal B}(B^-\to D^{*0}\pi^-)$ & $< 0.54 $ \\ ${\cal B}(\bar{B^0} \to D^{*0} \omega)/{\cal B}(B^- \to D^{*0} \rho^-)$ & $< 0.09 $ \end{tabular} \end{table} \section{COLOR SUPPRESSED B DECAY} \label{B->psi-K(*)} \subsection{Exclusive $B$ Decays to Charmonium} \label{intro-B->psi-K(*)} In $B$ decays to charmonium the $c$ quark from the $b$ combines with a $\bar{c}$ quark from the virtual $W^-$ to form a charmonium state. This process is described by the color suppressed diagram shown in Fig.~\ref{Fdiag}(b). By comparing $B$ meson decays to different final states with charmonium mesons the dynamics of this decay mechanism can be investigated. The decay modes $\bar{B^0} \to \psi K^0$ and $\bar{B^0} \to \psi' K^0$ are of special interest since the final states are CP eigenstates. These decays are of great importance for the investigation of one of the three CP violating angles accessible to study in $B$ decays. It is also possible to use the decay $\bar{B^0} \to \psi K^{*0}$, $K^{*0} \to K^0 \pi^0$ which has a somewhat higher branching ratio, but this final state consists of a mixture of CP eigenstates. It has even CP if the orbital angular momentum L is 0 or 2 and odd CP for L=1. If both CP states are present the CP asymmetry will be diluted. 
A measurement of CP violation in this channel is only possible if one of the CP states dominates, or if a detailed moments analysis of the various decay components is performed \cite{Idunit}. Recent measurements of the polarization in the decay $\bar{B^0} \to \psi \bar{K}^{*0}$ allow us to determine the fractions of the two CP states. $B$ meson candidates are formed by combining a charmonium and a strange meson candidate. CLEO~1.5 and ARGUS have observed signals for some of these modes. Using the procedures outlined in Sec.~\ref{B-recon}, the beam constrained mass distributions shown in Fig.~\ref{bpsik} are obtained by CLEO~II. CLEO~II has also reported a signal in the Cabibbo suppressed decay $B^- \to \psi \pi^-$. The branching ratios are listed in Tables~\ref{kh1} and \ref{kh2}. Recently, CDF has reported signals for $B\to \psi K^{*0}$ and $B\to \psi K^-$ (see Fig.~\ref{cdfbd}) and measurements of polarization in $B\to \psi K^*$ decays\cite{cdfpolar}. Averaging over $B^-$ and $\bar{B}^0$ decays we determine the sum of the exclusive two-body decays to be ${\cal{B}}(B\to \psi \:K(K^*,\:\pi)) \; = \; (0.258 \pm 0.030)\%$ and ${\cal{B}}(B\to \psi'\:K(K^*,\:\pi)) \; = \; (0.22 \pm 0.09)\%$. The first result represents about $1/4$ of the inclusive rate for direct $B\to \psi$ production. The experimental investigation of the remaining fraction is important, since any additional quasi-two body channel open to $B\to \psi$ transitions could be useful for future studies of CP violation. Lower momentum $\psi$ mesons could originate from multibody final states or from two body decays involving heavier $K^{(*)}$ resonances. Evidence for the decay mode $B\to \chi_{c1} K$ has been reported by CLEO~II \cite{fastpsi,SixthB} and ARGUS \cite{FifthB}. The average branching fraction is ${\cal B}(B^-\to \chi_{c1} K^-) = (0.104\pm 0.040)\%$. The CLEO~II collaboration has also placed upper limits on $\chi_{c1}K^0$ and $\chi_{c1}K^*$ production in $B$ decay.
\subsection{Polarization in $B\to\psi K^*$ } The polarization in $B\to\psi K^*$ is studied using the methods described for the $\bar{B^0}\to D^{*+}\rho^-$ polarization measurement in Section \ref{pol-D*-rho}. After integration over the azimuthal angle between the $\psi$ and the $K^*$ decay planes, the angular distribution in $B \to \psi K^*$ decays can be written as \begin{equation} {d^2\Gamma\over{d\cos\Theta_{\psi}d\cos\Theta_{K^*}}} \propto {1\over{4}}\sin^2\Theta_{K^*} (1+\cos^2\Theta_{\psi})(|H_{+1}|^2+|H_{-1}|^2) +\cos^2\Theta_{K^*}\sin^2\Theta_{\psi}|H_{0}|^2 , \label{psipolar} \end{equation} where the $K^*$ helicity angle $\Theta_{K^*}$ is the angle between the kaon direction in the $K^*$ rest frame and the $K^*$ direction in the $B$ rest frame and $\Theta_{\psi}$ is the corresponding $\psi$ helicity angle, and $H_{\pm1,0}$ are the helicity amplitudes. The fraction of longitudinal polarization in $B \to \psi K^*$ is determined by an unbinned fit to the $\psi$ and $K^*$ helicity angle distributions. The results obtained by the CLEO~II, ARGUS and CDF collaborations are listed in Table \ref{Tpsipolex}. \begin{table}[htb] \caption{Longitudinal polarization of $\psi$ mesons from $B \to \psi K^*$ decays.} \label{Tpsipolex} \begin{tabular}{cc} Experiment & ${\left({\Gamma_L\over{\Gamma}}\right)}$\\ \hline CLEO II & $ 0.80\pm 0.08 \pm 0.05$ \\ ARGUS \cite{argpol} & $0.97 \pm 0.16\pm 0.15$ \\ CDF \cite{cdfpolar} & $ 0.66 \pm 0.10^{+0.08}_{-0.10}$\\ \hline Average & $ 0.78 \pm 0.07$\\ \end{tabular} \end{table} The efficiency corrected distributions in each of the helicity angles $\cos\Theta_{\psi}$ and $\cos\Theta_{K^*}$ are shown in Fig.~\ref{expol} (CLEO II). 
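The average quoted in Table~\ref{Tpsipolex} can be reproduced with a standard inverse-variance combination of the three measurements. The sketch below is illustrative only; it symmetrizes the asymmetric CDF systematic error, which is an assumption of the sketch.

```python
import math

# (value, stat, syst) for Gamma_L/Gamma in B -> psi K*; the asymmetric
# CDF systematic error is symmetrized here (an assumption of this sketch).
measurements = [
    (0.80, 0.08, 0.05),   # CLEO II
    (0.97, 0.16, 0.15),   # ARGUS
    (0.66, 0.10, 0.09),   # CDF, +0.08/-0.10 syst symmetrized
]

def weighted_average(data):
    # combine stat and syst in quadrature, then weight by 1/sigma^2
    weights = [1.0 / (s1**2 + s2**2) for _, s1, s2 in data]
    mean = sum(w * v for w, (v, _, _) in zip(weights, data)) / sum(weights)
    err = 1.0 / math.sqrt(sum(weights))
    return mean, err

mean, err = weighted_average(measurements)
print(f"Gamma_L/Gamma = {mean:.2f} +- {err:.2f}")  # ~0.78 +- 0.07
```

Statistical and systematic errors are added in quadrature for each experiment before weighting; correlations between experiments are neglected.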
Assuming that the systematic errors from the various experiments are uncorrelated, these three results can be averaged to obtain \begin{equation} {\Gamma_L\over \Gamma} = 0.78 \pm 0.07 \label{psikstavg} \end{equation} Although the decay mode $B \to \psi K^*$ may not be completely polarized, it is still dominated by a single CP eigenstate. This mode will therefore be useful for measurements of CP violation. \subsection{Exclusive Decays to a $D^{0 (*)}$ and a Neutral Meson.} \label{color-supress} We now discuss searches for $B$ decays which can occur via an internal W-emission graph but which do not yield charmonium mesons in the final state. Naively, one expects that these decays will be suppressed relative to decays which occur via the external W-emission graph. For the internal graph, in the absence of gluons, the colors of the quarks from the virtual $W$ must match the colors of the $c$ quark and the accompanying spectator antiquark. In this simple picture, one expects that the suppression factor should be $1/18$ in rate for decays involving $\pi^0$, $\rho^0$ and $\omega$ mesons\cite{Dpi}. In heavy quark decays the effects of gluons cannot be neglected, and QCD based calculations \cite{Neubie} predict suppression factors of order $1/50$. If color suppressed $B$ decay modes are not greatly suppressed then these modes could also be useful for CP violation studies\cite{Dunietz}. CLEO~II has searched for color suppressed decay modes of $B$ mesons which contain a single $D^0$ or $D^{*0}$ meson in the final state\cite{wex}. The relevant color suppressed modes are listed in Table~\ref{Tbrcol}. The decay channels used are $\eta \to \gamma \gamma$, $\omega \to \pi^+ \pi^- \pi^0$ and $\eta^{'} \to \eta \pi^+ \pi^-$, followed by $\eta \to \gamma \gamma$\cite{BReta}. For decays of a pseudoscalar meson into a final state containing a pseudoscalar and a vector meson (V), a helicity angle cut of $|\cos \Theta_{V}| \; > \; 0.4$ is used\cite{omeg}. No signals were observed. 
Upper limits \cite{PDGul} on the branching ratios for color suppressed modes are given in Table~\ref{Tbrcol}. Upper limits on the ratios of color suppressed modes to normalization modes are given in Table~\ref{Tratcol}. These limits show that there is color suppression of these $B$ decay modes. \begin{table}[htb] \caption{$B^-$ Branching fractions [\%]} \label{kh1} \begin{tabular}{llll} Mode & ARGUS & CLEO 1.5 & CLEO II \\ \hline $B^- \rightarrow D^0 \pi ^-$ & $ 0.22 \pm 0.09 \pm 0.06 \pm 0.01 $ & $ 0.56 \pm 0.08 \pm 0.05 \pm 0.02 $ & $ 0.53 \pm 0.04 \pm 0.05 \pm 0.02 $\\ $B^- \rightarrow D^0 \rho ^-$ & $ 1.41 \pm 0.43 \pm 0.39 \pm 0.06 $ & & $ 1.31 \pm 0.12 \pm 0.14 \pm 0.04 $\\ $B^- \rightarrow D^{0} \pi ^+ \pi ^- \pi ^-$ & & $ 1.24 \pm 0.31 \pm 0.14 \pm 0.05 $ & \\ $B^- \rightarrow D^{*0} \pi ^-$ & $ 0.38 \pm 0.13 \pm 0.10 \pm 0.02 $ & $ 1.00 \pm 0.25 \pm 0.18 \pm 0.04 $ & $ 0.49 \pm 0.07 \pm 0.06 $\\ $B^- \rightarrow D^{*0} \rho ^-$ & $ 0.94 \pm 0.56 \pm 0.35 \pm 0.04 $ & & $ 1.59 \pm 0.20 \pm 0.26 \pm 0.05 $\\ $B^- \rightarrow D_J^{(*)0} \pi ^-$ & $ 0.13 \pm 0.06 \pm 0.03 \pm 0.01 $ & $ 0.13 \pm 0.07 \pm 0.01 \pm 0.01 $ & \\ $B^- \rightarrow D^{*+} \pi ^- \pi ^- \pi ^0$ & $ 1.64 \pm 0.64 \pm 0.37 \pm 0.07 $ & & \\ $B^- \rightarrow D_J^{(*)0} \rho ^-$ & $ 0.32 \pm 0.19 \pm 0.07 \pm 0.01 $ & & \\ $B^- \rightarrow D^{*0} \pi ^- \pi ^- \pi ^+$ & & & $ 0.92 \pm 0.20 \pm 0.17 \pm 0.01 $\\ $B^- \rightarrow D^{*0} a_1 ^-$ & & & $ 1.83 \pm 0.39 \pm 0.33 \pm 0.02 $\\ $B^- \rightarrow D^+ \pi^- \pi ^- $ & & & $ <0.14 $ (90\% C.L.)\\ $B^- \rightarrow D^{*+} \pi ^- \pi ^-$ & $ 0.24 \pm 0.13 \pm 0.05 \pm 0.01 $ & $ <0.37$ (90\% C.L.)& $ 0.18 \pm 0.07 \pm 0.03 \pm 0.01 $\\ $B^- \rightarrow D^{**0}(2420) \pi^- $ & $ 0.30 \pm 0.08 \pm 0.06 \pm 0.01 $ & & $ 0.11 \pm 0.05 \pm 0.02 \pm 0.01 $\\ $B^- \rightarrow D^{**0}(2420) \rho^- $ & & & $ <0.13 $ (90\% C.L.)\\ $B^- \rightarrow D^{**0}(2460) \pi^- $ & & & $ <0.13 $ (90\% C.L.)\\ $B^- \rightarrow D^{**0}(2460) \rho^- $ & & & $ 
<0.45 $ (90\% C.L.)\\ $B^- \rightarrow D^0 D_s^-$ & $ 1.69 \pm 0.85 \pm 0.27 \pm 0.41 $ & $ 1.66 \pm 0.70 \pm 0.13 \pm 0.40 $ & $ 1.11 \pm 0.20 \pm 0.23 \pm 0.28 $\\ $B^- \rightarrow D^0 D_s^{*-}$ & $ 1.13 \pm 0.85 \pm 0.20 \pm 0.27 $ & & $ 0.79 \pm 0.25 \pm 0.15 \pm 0.19 $\\ $B^- \rightarrow D^{*0} D_s^-$ & $ 0.79 \pm 0.55 \pm 0.11 \pm 0.19 $ & & $ 1.27 \pm 0.39 \pm 0.32 \pm 0.31 $\\ $B^- \rightarrow D^{*0} D_s^{*-}$ & $ 1.89 \pm 0.98 \pm 0.28 \pm 0.46 $ & & $ 2.82 \pm 0.80 \pm 0.59 \pm 0.68 $\\ $B^- \rightarrow \psi K^-$ & $ 0.08 \pm 0.04 \pm 0.01 $ & $ 0.09 \pm 0.02 \pm 0.02 $ & $ 0.110 \pm 0.015 \pm 0.009 $\\ $B^- \rightarrow \psi ' K^-$ & $ 0.20 \pm 0.09 \pm 0.04 $ & $ <0.05$ (90\% C.L.)& $ 0.061 \pm 0.023 \pm 0.009 $\\ $B^- \rightarrow \psi K^{*-}$ & $ 0.19 \pm 0.13 \pm 0.03 $ & $ 0.15 \pm 0.11 \pm 0.03 $ & $ 0.178 \pm 0.051 \pm 0.023 $\\ $B^- \rightarrow \psi ' K^{*-}$ & $ <0.53 $ (90\% C.L.)& $ <0.38$ (90\% C.L.)& $ <0.30 $ (90\% C.L.)\\ $B^- \rightarrow \psi K^- \pi ^+ \pi ^-$ & $ <0.19 $ (90\% C.L.)& $ 0.14 \pm 0.07 \pm 0.03 $ & \\ $B^- \rightarrow \psi ' K^- \pi ^+ \pi ^-$ & $ 0.21 \pm 0.12 \pm 0.04 $ & & \\ $B^- \rightarrow \chi_{c1} K^-$ & $ 0.22 \pm 0.15 \pm 0.07 $ & & $ 0.097 \pm 0.040 \pm 0.009 $\\ $B^- \rightarrow \chi_{c1} K^{*-}$ & & & $ <0.21 $ (90\% C.L.)\\ $B^- \rightarrow \psi \pi ^-$ & & & $ 0.0047 \pm 0.0024 \pm 0.0004 $\\ \end{tabular} \end{table} \begin{table}[htb] \caption{$\bar{B}^0$ Branching fractions in [\%]} \label{kh2} \begin{tabular}{llll} Mode & ARGUS & CLEO 1.5 & CLEO II \\ \hline $\bar{B}^0 \rightarrow D^+ \pi ^-$ & $ 0.48 \pm 0.11 \pm 0.08 \pm 0.03 $ & $ 0.27 \pm 0.06 \pm 0.03 \pm 0.02 $ & $ 0.29 \pm 0.04 \pm 0.03 \pm 0.02 $\\ $\bar{B}^0 \rightarrow D^+ \rho ^-$ & $ 0.90 \pm 0.50 \pm 0.27 \pm 0.06 $ & & $ 0.81 \pm 0.11 \pm 0.12 \pm 0.05 $\\ $\bar{B}^0 \rightarrow D^+ \pi ^- \pi ^- \pi ^+$ & & $ 0.81 \pm 0.21 \pm 0.09 \pm 0.05 $ & \\ $\bar{B}^0 \rightarrow D^{*+} \pi ^-$ & $ 0.25 \pm 0.08 \pm 0.03 \pm 0.01 $ & $ 0.45 \pm 0.11 
\pm 0.05 \pm 0.02 $ & $ 0.25 \pm 0.03 \pm 0.04 \pm 0.01 $\\ $\bar{B}^0 \rightarrow D^{*+} \rho ^-$ & $ 0.64 \pm 0.27 \pm 0.25 \pm 0.03 $ & $ 2.13 \pm 0.90 \pm 1.24 \pm 0.09 $ & $ 0.70 \pm 0.09 \pm 0.13 \pm 0.02 $\\ $\bar{B}^0 \rightarrow D^{*+} \pi ^- \pi ^- \pi ^+$ & $ 1.09 \pm 0.27 \pm 0.32 \pm 0.04 $ & $ 1.77 \pm 0.31 \pm 0.30 \pm 0.07 $ & $ 0.61 \pm 0.10 \pm 0.11 \pm 0.02 $\\ $\bar{B}^0 \rightarrow D^{*+} a_1^-$ & & & $ 1.22 \pm 0.19 \pm 0.22 \pm 0.04 $\\ $\bar{B}^0 \rightarrow D^{0} \pi ^+ \pi^- $ & & & $ <0.16 $ (90\% C.L.)\\ $\bar{B}^0 \rightarrow D^{**+}(2460) \pi^- $ & & & $ <0.21 $ (90\% C.L.)\\ $\bar{B}^0 \rightarrow D^{**+}(2460) \rho^- $ & & & $ <0.47 $ (90\% C.L.)\\ $\bar{B}^0 \rightarrow D^+ D_s^-$ & $ 1.05 \pm 0.80 \pm 0.35 \pm 0.26 $ & $ 0.54 \pm 0.31 \pm 0.03 \pm 0.13 $ & $ 0.82 \pm 0.23 \pm 0.19 \pm 0.20 $\\ $\bar{B}^0 \rightarrow D^+ D_s^{*-}$ & $ 1.67 \pm 1.05 \pm 0.52 \pm 0.41 $ & & $ 0.95 \pm 0.33 \pm 0.21 \pm 0.23 $\\ $\bar{B}^0 \rightarrow D^{*+} D_s^-$ & $ 0.83 \pm 0.59 \pm 0.11 \pm 0.20 $ & $ 1.17 \pm 0.66 \pm 0.09 \pm 0.28 $ & $ 0.85 \pm 0.21 \pm 0.15 \pm 0.21 $\\ $\bar{B}^0 \rightarrow D^{*+} D_s^{*-}$ & $ 1.54 \pm 0.83 \pm 0.24 \pm 0.37 $ & & $ 1.85 \pm 0.46 \pm 0.33 \pm 0.45 $\\ $\bar{B}^0 \rightarrow \psi K^0$ & $ 0.09 \pm 0.07 \pm 0.02 $ & $ 0.07 \pm 0.04 \pm 0.02 $ & $ 0.075 \pm 0.024 \pm 0.008 $\\ $\bar{B}^0 \rightarrow \psi ' K^0$ & $ <0.30 $ (90\% C.L.)& $ <0.16$ (90\% C.L.)& $ <0.08 $ (90\% C.L.)\\ $\bar{B}^0 \rightarrow \psi \bar{K}^{*0}$ & $ 0.13 \pm 0.06 \pm 0.02 $ & $ 0.13 \pm 0.06 \pm 0.03 $ & $ 0.169 \pm 0.031 \pm 0.018 $\\ $\bar{B}^0 \rightarrow \psi ' \bar{K}^{*0}$ & $ <0.25 $ (90\% C.L.)& $ 0.15 \pm 0.09 \pm 0.03 $ & $ <0.19 $ (90\% C.L.)\\ $\bar{B}^0 \rightarrow \psi K^{-} \pi ^+$ & & $ 0.12 \pm 0.05 \pm 0.03 $ & \\ $\bar{B}^0 \rightarrow \psi ' K^- \pi ^+$ & $ <0.11 $ (90\% C.L.)& & \\ $\bar{B}^0 \rightarrow \chi_{c1} K^0$ & & & $ <0.27 $ (90\% C.L.)\\ $\bar{B}^0 \rightarrow \chi_{c1} \bar{K}^{*0}$ & & & $ <0.21 $ 
(90\% C.L.)\\ $\bar{B}^0 \rightarrow \psi \pi ^0$ & & & $ <0.0069 $ (90\% C.L.)\\ \end{tabular} \end{table} \begin{table}[htb] \caption{World average $B^-$ branching fractions [\%]} \label{kh3} \begin{tabular}{ll} Mode & Branching Fraction \\ \hline $B^- \rightarrow D^0 \pi ^-$ & $0.48 \pm 0.05 \pm 0.02 $ \\ $B^- \rightarrow D^0 \rho ^-$ & $1.32 \pm 0.17 \pm 0.05 $ \\ $B^- \rightarrow D^{0} \pi ^+ \pi ^- \pi ^-$ & $1.24 \pm 0.34 \pm 0.05 $ \\ $B^- \rightarrow D^{*0} \pi ^-$ & $0.50 \pm 0.08 \pm 0.02 $ \\ $B^- \rightarrow D^{*0} \rho ^-$ & $1.47 \pm 0.29 \pm 0.06 $ \\ $B^- \rightarrow D_J^{(*)0} \pi ^-$ & $0.13 \pm 0.05 \pm 0.01 $ \\ $B^- \rightarrow D^{*+} \pi ^- \pi ^- \pi ^0$ & $1.64 \pm 0.73 \pm 0.07 $ \\ $B^- \rightarrow D_J^{(*)0} \rho ^-$ & $0.32 \pm 0.20 \pm 0.01 $ \\ $B^- \rightarrow D^{*0} \pi ^- \pi ^- \pi ^+$ & $0.92 \pm 0.26 \pm 0.04 $ \\ $B^- \rightarrow D^{*0} a_1 ^-$ & $1.83 \pm 0.51 \pm 0.07 $ \\ $B^- \rightarrow D^+ \pi^- \pi ^- $ & $<0.14 $ (90\% C.L.)\\ $B^- \rightarrow D^{*+} \pi ^- \pi ^-$ & $0.19 \pm 0.06 \pm 0.01 $ \\ $B^- \rightarrow D^{**0}(2420) \pi^- $ & $0.15 \pm 0.05 \pm 0.01 $ \\ $B^- \rightarrow D^{**0}(2420) \rho^- $ & $<0.13 $ (90\% C.L.)\\ $B^- \rightarrow D^{**0}(2460) \pi^- $ & $<0.13 $ (90\% C.L.)\\ $B^- \rightarrow D^{**0}(2460) \rho^- $ & $<0.45 $ (90\% C.L.)\\ $B^- \rightarrow D^0 D_s^-$ & $1.24 \pm 0.27 \pm 0.30 $ \\ $B^- \rightarrow D^0 D_s^{*-}$ & $0.83 \pm 0.28 \pm 0.20 $ \\ $B^- \rightarrow D^{*0} D_s^-$ & $1.06 \pm 0.38 \pm 0.26 $ \\ $B^- \rightarrow D^{*0} D_s^{*-}$ & $2.37 \pm 0.71 \pm 0.58 $ \\ $B^- \rightarrow \psi K^-$ &$ 0.102 \pm 0.014 $ \\ $B^- \rightarrow \psi ' K^-$ &$ 0.070 \pm 0.024 $ \\ $B^- \rightarrow \psi K^{*-}$ &$ 0.174 \pm 0.047 $ \\ $B^- \rightarrow \psi ' K^{*-}$ & $<0.30 $ (90\% C.L.)\\ $B^- \rightarrow \psi K^- \pi ^+ \pi ^-$ &$ 0.140 \pm 0.078 $ \\ $B^- \rightarrow \psi ' K^- \pi ^+ \pi ^-$ &$ 0.207 \pm 0.127 $ \\ $B^- \rightarrow \chi_{c1} K^-$ &$ 0.104 \pm 0.040 $ \\ $B^- \rightarrow \chi_{c1} 
K^{*-}$ & $<0.21 $ (90\% C.L.)\\ $B^- \rightarrow \psi \pi ^-$ &$ 0.0047 \pm 0.0024 $ \\ \end{tabular} \end{table} \begin{table}[htb] \caption{World average $\bar{B}^0$ branching fractions [\%]} \label{kh4} \begin{tabular}{ll} Mode & Branching Fraction \\ \hline $\bar{B}^0 \rightarrow D^+ \pi ^-$ & $0.30 \pm 0.04 \pm 0.02 $ \\ $\bar{B}^0 \rightarrow D^+ \rho ^-$ & $0.82 \pm 0.16 \pm 0.05 $ \\ $\bar{B}^0 \rightarrow D^+ \pi ^- \pi ^- \pi ^+$ & $0.81 \pm 0.23 \pm 0.05 $ \\ $\bar{B}^0 \rightarrow D^{*+} \pi ^-$ & $0.27 \pm 0.04 \pm 0.01 $ \\ $\bar{B}^0 \rightarrow D^{*+} \rho ^-$ & $0.70 \pm 0.15 \pm 0.03 $ \\ $\bar{B}^0 \rightarrow D^{*+} \pi ^- \pi ^- \pi ^+$ & $0.77 \pm 0.13 \pm 0.03 $ \\ $\bar{B}^0 \rightarrow D^{*+} a_1^-$ & $1.22 \pm 0.29 \pm 0.05 $ \\ $\bar{B}^0 \rightarrow D^{0} \pi ^+ \pi^- $ & $<0.16 $ (90\% C.L.)\\ $\bar{B}^0 \rightarrow D^{**+}(2460) \pi^- $ & $<0.21 $ (90\% C.L.)\\ $\bar{B}^0 \rightarrow D^{**+}(2460) \rho^- $ & $<0.47 $ (90\% C.L.)\\ $\bar{B}^0 \rightarrow D^+ D_s^-$ & $0.71 \pm 0.21 \pm 0.17 $ \\ $\bar{B}^0 \rightarrow D^+ D_s^{*-}$ & $1.02 \pm 0.37 \pm 0.25 $ \\ $\bar{B}^0 \rightarrow D^{*+} D_s^-$ & $0.88 \pm 0.22 \pm 0.21 $ \\ $\bar{B}^0 \rightarrow D^{*+} D_s^{*-}$ & $1.75 \pm 0.47 \pm 0.43 $ \\ $\bar{B}^0 \rightarrow \psi K^0$ &$ 0.075 \pm 0.021 $ \\ $\bar{B}^0 \rightarrow \psi ' K^0$ & $<0.08 $ (90\% C.L.)\\ $\bar{B}^0 \rightarrow \psi \bar{K}^{*0}$ &$ 0.153 \pm 0.028 $ \\ $\bar{B}^0 \rightarrow \psi ' \bar{K}^{*0}$ &$ 0.151 \pm 0.091 $ \\ $\bar{B}^0 \rightarrow \psi K^{-} \pi ^+$ &$ 0.117 \pm 0.058 $ \\ $\bar{B}^0 \rightarrow \psi ' K^- \pi ^+$ & $<0.11 $ (90\% C.L.)\\ $\bar{B}^0 \rightarrow \chi_{c1} K^0$ & $<0.27 $ (90\% C.L.)\\ $\bar{B}^0 \rightarrow \chi_{c1} \bar{K}^{*0}$ & $<0.21 $ (90\% C.L.)\\ $\bar{B}^0 \rightarrow \psi \pi ^0$ & $<0.007 $ (90\% C.L.)\\ \end{tabular} \end{table} \clearpage \section{THEORETICAL INTERPRETATION OF HADRONIC B DECAY} \subsection{Introduction} The simple spectator diagram for two-body hadronic 
$B$ meson decays that occur through the CKM favored $b\to c$ transition is described by the Hamiltonian\cite{vud}: \begin{equation} H ={G_F\over \sqrt 2}V_{cb} \left\{\left[(\bar d u)+(\bar s c)\right] (\bar c b)\right\} \label{Eraw} \end{equation} where $(\bar q_i q_j)=\bar q_i\gamma_{\mu}(1-\gamma_5)q_j$, $G_F$ is the Fermi coupling constant, and $V_{cb}$ is the CKM matrix element. The spectator diagram is modified by hard gluon exchange between the initial and final quark lines. The effect of these exchanges can be taken into account by use of the renormalization group, with the result that the original Hamiltonian of equation~(\ref{Eraw}) is replaced by one which now contains two pieces, the original term multiplied by a coefficient $c_1(\mu)$, and an additional term multiplied by $c_2(\mu)$: \begin{equation} H_{eff}={G_F\over \sqrt 2}V_{cb} \left\{c_1(\mu)\left[(\bar d u)+(\bar s c)\right] (\bar c b)+ c_2(\mu)\left[(\bar c u)(\bar d b)+(\bar c c)(\bar s b)\right] \right\} \label{Eheff} \end{equation} The $c_i$ are Wilson coefficients that can be calculated from QCD. However, the calculation is inherently uncertain because it is unclear at what mass scale, $\mu$, these coefficients should be evaluated. The usual scale is taken to be $\mu \sim m_b$. Defining \begin{equation} c_{\pm}(\mu)=c_1(\mu)\pm c_2(\mu) \label{Ecees} \end{equation} the leading-log approximation gives\cite{Neubie} \begin{equation} c_{\pm}(\mu)=\left({\alpha_s(M_{W}^{2})\over\alpha_s(\mu^{2})}\right) ^{\displaystyle {-6\gamma_{\pm}\over (33-2n_f)}} \label{Ecpmcal} \end{equation} where $\gamma_-=-2\gamma_+=2$, and $n_f$ is the number of active flavors, which is taken to be five in this case. The additional term in the Hamiltonian in Eq.\ (\ref{Eheff}) corresponds to the ``color suppressed'' diagram. The quark pairings in this diagram are different from those in the spectator diagram, and lead to the decay modes discussed in section \ref{B->psi-K(*)}.
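As a numerical illustration of Eqs.~(\ref{Ecees}) and (\ref{Ecpmcal}), the sketch below evaluates $c_\pm$, and hence $c_1$ and $c_2$, with a one-loop running coupling. The inputs $\Lambda_{\rm QCD}=0.2$~GeV and $\mu=m_b=4.8$~GeV are assumptions of the sketch, not values taken from the text.

```python
import math

LAMBDA_QCD = 0.2      # GeV, assumed one-loop scale parameter
M_W, MU = 80.4, 4.8   # GeV; mu ~ m_b is the conventional choice
N_F = 5               # number of active flavors

def alpha_s(q2):
    # one-loop running strong coupling
    return 12.0 * math.pi / ((33 - 2 * N_F) * math.log(q2 / LAMBDA_QCD**2))

ratio = alpha_s(M_W**2) / alpha_s(MU**2)
gamma_plus, gamma_minus = -1.0, 2.0   # gamma_- = -2 gamma_+ = 2
c_plus = ratio ** (-6 * gamma_plus / (33 - 2 * N_F))
c_minus = ratio ** (-6 * gamma_minus / (33 - 2 * N_F))
c1 = 0.5 * (c_plus + c_minus)
c2 = 0.5 * (c_plus - c_minus)
print(f"c+ = {c_plus:.2f}, c- = {c_minus:.2f}, c1 = {c1:.2f}, c2 = {c2:.2f}")
```

With these inputs one finds $c_1 \simeq 1.12$ and $c_2 \simeq -0.27$; varying $\mu$ between $m_b/2$ and $2m_b$ gives a feel for the scale uncertainty mentioned in the text.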
From the observation of the $B \to \psi X_s$ decays, where $X_s$ is a strange meson, the magnitude of the color-suppressed term can be deduced. In $B^-$ decays, both spectator and color-suppressed diagrams are present and can interfere. By comparing the rates for $B^-$ and $\bar{B^0}$ decays, both the size and the relative sign of the color-suppressed term can be determined (see Sec.~\ref{a1-a2}). For comparisons between theoretical models and data we will use the following values for couplings and lifetimes: \begin{eqnarray} |V_{cb}|& = & 0.0386 \pm 0.0027 \\ \nonumber \frac{|V_{ub}|}{|V_{cb}|} & = &0.073 \pm 0.011 \pm 0.010 \\ \nonumber \tau_B^{0} & = & (1.621 \pm 0.067) {\; \rm ps}\\ \nonumber \tau_B^{+} & = & (1.646 \pm 0.063) {\; \rm ps}\\ \nonumber <\tau_B> & = & (1.634 \pm 0.046) {\; \rm ps} \label{theo_parms} \end{eqnarray} $<\tau_B>$ is the average lifetime for a sample consisting of equal numbers of $B^-$ and $\bar{B}^0$ mesons. \subsection{Factorization} Factorization is the assumption that two-body hadronic decays of $B$ mesons can be expressed as the product of two independent hadronic currents, one describing the formation of a meson from the converted $b$ quark and the light spectator quark, and the other describing the production of a meson by the hadronization of the virtual $W^-$. This description is expected to be valid for the external spectator decays where the large energy carried by the $W^-$ causes the products of the $W^-$ to be well separated from the spectator quark system \cite{Bjorken,DG}. It has also been used to calculate color-suppressed and penguin diagrams, although it is not known whether factorization is a correct assumption for these diagrams. There are a number of tests of the factorization hypothesis that can be made by comparing rates and polarizations for semileptonic and hadronic $B$ decays. These will be discussed in section \ref{test-factor}.
If factorization holds, then measurements of hadronic $B$ decays can be compared to the theoretical models, and used to extract fundamental parameters of the Standard Model. For instance the CKM matrix element $|V_{ub}|$ could be obtained from $\bar{B^0} \to \pi^+\pi^-$ or $ \bar{B^0} \to D_s^- \pi^+$, and the decay constant $f_{D_s}$ could be determined from $\bar{B^0} \to D_s^- D^{*+}$. \subsection{Phenomenological Models of Hadronic $B$ Decay} Several groups have developed models of hadronic $B$ decays based on the factorization approach. To compute rates for all hadronic $B$ decays the magnitude and sign of the color amplitude must also be known. It is difficult to calculate this amplitude from first principles in QCD. Instead a phenomenological approach was adopted by Bauer, Stech and Wirbel \cite{Stech}, in which two undetermined coefficients were assigned to the effective charged current, $a_1$, and the effective neutral current, $a_2$, parts of the $B$ decay Hamiltonian. In reference \cite{Stech} these coefficients were determined from a fit to a subset of the experimental data on charm decays. The values of $a_1$ and $a_2$ can be related to the QCD coefficients $c_1$ and $c_2$ by \begin{equation} a_{1} = c_{1} + \xi c_{2},~~~ a_{2} = c_{2} + \xi c_{1} \label{a1defs} \end{equation} where $\xi= 1 /N_{\rm color}$. The values \begin{equation} a_1 = 1.26,~~~a_2 = -0.51 \label{a1charm} \end{equation} that give the best fit to the experimental data on charm decay correspond to $1/N_{\rm color} \sim 0$ \cite{Neubie}. However, there is no convincing theoretical justification for this choice of $N_{\rm{color}}$. In section \ref{a1-a2} we will discuss the experimental determination of the values of $a_1$ and $a_2$ from a fit to the $B$ meson decay data. \subsection{Heavy Quark Effective Theory} The evaluation of amplitudes for hadronic decays requires not only the assumption of factorization, but also the input of hadronic form factors and meson decay constants. 
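A minimal sketch of Eq.~(\ref{a1defs}): taking illustrative short-distance values $c_1=1.12$ and $c_2=-0.27$ (assumed here, not fitted), it shows how the magnitude and especially the sign of $a_2$ depend on the choice of $\xi=1/N_{\rm color}$.

```python
def a1_a2(c1, c2, xi):
    # Eq. (a1defs): effective charged- and neutral-current coefficients
    return c1 + xi * c2, c2 + xi * c1

c1, c2 = 1.12, -0.27   # illustrative Wilson coefficients (assumed)

for xi in (0.0, 1.0 / 3.0):
    a1, a2 = a1_a2(c1, c2, xi)
    print(f"xi = {xi:.2f}: a1 = {a1:.2f}, a2 = {a2:+.2f}")
```

Only the choice $\xi \sim 0$ yields a negative $a_2$, in line with the sign of the charm-decay fit value $a_2 = -0.51$ quoted above; for $\xi = 1/3$ the two terms largely cancel and $a_2$ turns positive.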
As a result of the development of HQET it is now believed that many of the hadronic form factors for $b \to c $ transitions can be calculated quite well in an essentially model-independent way. This has been done by several groups \cite{Neubie},\cite{Bari}. The comparison of these theoretical predictions with the experimental results can be used to test the range of validity of HQET, and the extent to which $1/M_Q$ corrections to the heavy quark symmetry are needed. \section{TESTS OF THE FACTORIZATION HYPOTHESIS } \label{test-factor} \subsection{Branching Ratio Tests} The large samples of reconstructed hadronic $B$ decays have made possible the precise measurements of branching ratios discussed in section \ref{BDpiDrho}. As an example of the use of these results to test the factorization hypothesis we will consider the specific case of $\bar{B^0}\to D^{*+}\pi^-$. The amplitude for this reaction is \begin{equation} A ={G_F\over \sqrt 2}V_{cb}V_{ud}^* \langle \pi^- | (\bar{d} u) | 0 \rangle \langle D^{*+} | (\bar c b) | \bar{B^0} \rangle.\label{EHeffDP} \end{equation} The CKM factor $|V_{ud}|$ arises from the $W^-\to\bar u d$ vertex. The first hadron current that creates the $\pi^-$ from the vacuum is related to the pion decay constant, $f_{\pi}$, by: \begin{equation} \langle \pi^-(p) | (\bar d u) | 0 \rangle = -if_{\pi}p_{\mu}.\label{Efpi} \end{equation} The other hadron current can be found from the semileptonic decay $\bar{B^0}\to D^{*+}\ell^- \bar{\nu_{\ell}}$. Here the amplitude is the product of a lepton current and the hadron current that we seek to insert in Eq.~(\ref{EHeffDP}). Factorization can be tested experimentally by verifying whether the relation \begin{equation} {\Gamma\left(\bar{B^0}\to D^{*+}\pi^-\right)\over\displaystyle{d\Gamma\over \displaystyle dq^2} \left(\bar{B^0}\to D^{*+} \ell ^- \bar{\nu_{l}} \right)\biggr|_{q^2=m^2_{\pi}}} = 6\pi^2{ c_1^2} f_{\pi}^2|V_{ud}|^2 ,\label{Efact} \end{equation} is satisfied. 
Here $q^2$ is the four-momentum transfer from the $B$ meson to the $D^*$ meson. Since $q^2$ is also the mass of the lepton-neutrino system, by setting $q^2 = m_{\pi}^2=0.019~{\rm GeV}^2$ we are requiring that the lepton-neutrino system has the same kinematic properties as does the pion in the hadronic decay. $V_{ud}$ and $f_{\pi}$ have well-measured values of 0.975 and 131.7~MeV, respectively. For the coefficient $c_1$ we will use the value $1.12\pm 0.1$ deduced from perturbative QCD \cite{qcd}. The error in $c_1$ reflects the uncertainty in the mass scale at which the coefficient $c_1$ should be evaluated. In the original test of equation~(\ref{Efact}), Bortoletto and Stone \cite{Bort} found that the equation was satisfied for $c_1$=1. In the following discussion we will denote the left hand side of Eq.~(\ref{Efact}) by $R_{Exp}$ and the right hand side by $R_{Theo}$. This type of factorization test can be extended to larger $q^2$ values by using other $\bar{B^0}\to D^{*+} X^-$ decays, e.g. $ X^- =\rho^-$ or $a_1^-$. For the $\rho^-$ case Eq.~(\ref{Efact}) becomes: \begin{equation} R = { {\Gamma(\bar{B}^0 \to D^{*+} \rho^-)}\over{ {{d\Gamma}\over{dq^2}} {(B\to D^{*} ~l ~\nu)|}_{q^2=m_{\rho}^2}} } = {6 \pi^2 c_1^2 f_{\rho}^2 |V_{ud}|^2} \label{Efctr} \end{equation} where the semileptonic decay is evaluated at $ q^2 = m_{\rho}^2=0.60$ GeV$^2$. The decay constant on the right hand side of this equation can be determined from $ e^+ e^- \to \rho^0$, which gives $ f_\rho=215 \pm 4$ MeV. A second method uses the relation $\Gamma(\tau^- \to \nu \rho^-)= 0.804 {G_F^2 \over{16 \pi}} |V_{ud}|^2 M_{\tau}^3 f_\rho^2$, where the $\rho$ width has been taken into account \cite{Pham}. This gives $f_{\rho} = 212.0 \pm 5.3$ MeV \cite{narrow}. For the factorization test with $\bar{B^0} \to D^{*+} a_1^-$ we use $f_{a_1} = 205 \pm 16$ MeV \cite{ir} determined from $\tau$ decay.
To derive numerical predictions for the left hand side of equation (\ref{Efact}), we must interpolate the observed differential $q^2$ distribution \cite{width} for $B \to D^* \ell ~\nu$ to $q^2=m_\pi^2$, $m_\rho^2$, and $m_{a_1}^2$, respectively. Until this distribution is measured more precisely we have to use theoretical models to perform this interpolation. The $d\Gamma/dq^2$ distribution obtained in a recent CLEO~II analysis \cite{dstlnu} is shown in Fig.~\ref{dlnu}. The solid and dashed lines represent fits to different models. The differences between the extrapolations using models for $B \to D^* \ell ~ \nu$ are small, on the order of 10-20\%. The measurement of this differential distribution from CLEO~II can be combined with the earlier results from the ARGUS and CLEO 1.5 experiments\cite{Bort,bhp}. The values of $d\Gamma/dq^2(B\to D^*\ell \nu)$ used for the factorization test are given in Table~\ref{TFactst}. The statistical and systematic errors have been combined in quadrature; the uncertainty due to the $D^0$ branching ratios cancels in the ratio. \begin{figure}[htb] \unitlength 1.0in \vskip 10 mm \begin{center} \begin{picture}(3.0,3.5)(0.0,0.0) \put(-1.,-0.8){\psfig{width=4.0in,height=1.5in,file=dstlnu_q2_new.ps}} \end{picture} \vskip 2 mm \caption{The $d\Gamma/dq^2$ distribution for $\bar{B} \rightarrow D^*~\ell~\bar{\nu}$ decays from CLEO~II data (from Ref.~\protect\cite{dstlnu}). The solid lines represent fits to the CLEO~II data based on HQET. The upper solid line corresponds to a linear extrapolation near the endpoint, and the lower solid line corresponds to a quadratic extrapolation. 
The dashed line shows $d\Gamma/dq^2$ for the ISGW model, the dotted line is the BSW model.} \label{dlnu} \end{center} \end{figure} \begin{table}[htb] \caption{Ingredients for Factorization Tests.}\label{TFactst} \begin{tabular}{cc} $ \vert c_1 \vert $ & $1.12 \pm 0.1$ \\ $ f_{\pi}$ & $131.74 \pm 0.15 $ MeV \\ $ f_{\rho}$ & $215\pm 4$ MeV \\ $ f_{a_1}$ & $205\pm 16$ MeV \\ $ V_{ud}$ \protect{\cite{PDG}} & $0.9744 \pm 0.0010 $ \\ $ {{d {\cal B}}\over{dq^2}}(B \to D^* \ell ~\nu)\vert_{q^2=m_{\pi}^2} $ & $(0.237 \pm 0.026)\:\times 10^{-2} $ GeV$^{-2}$ \\ $ {{d {\cal B}}\over{dq^2}}(B \to D^* \ell ~\nu)\vert_{q^2=m_{\rho}^2} $ & $(0.250 \pm 0.030)\:\times 10^{-2} $ GeV$^{-2}$ \\ $ {{d {\cal B}}\over{dq^2}}(B \to D^* \ell ~\nu)\vert_{q^2=m_{a_1}^2} $ & $(0.335 \pm 0.033)\:\times 10^{-2} $ GeV$^{-2} $ \\ $ {{d {\cal B}}\over{dq^2}}(B \to D^* \ell ~\nu)\vert_{q^2=m_{D_s}^2} $ & $(0.483 \pm 0.033)\:\times 10^{-2} $ GeV$^{-2} $\\ $ {{d {\cal B}}\over{dq^2}}(B \to D^* \ell ~\nu)\vert_{q^2=m_{D_s^*}^2} $ & $(0.507 \pm 0.035)\:\times 10^{-2} $ GeV$^{-2} $ \end{tabular} \end{table} Using the information listed in Table~\ref{TFactst} we obtain from Eqs.~(\ref{Efact}) and (\ref{Efctr}) the results given in Table~\ref{Tfactc}. \begin{table}[htb] \caption{Comparison of $R_{Exp}$ and $R_{Theo}$}\label{Tfactc} \begin{tabular}{lcc} & $R_{Exp}$ (GeV$^2$) & $R_{Theo}$ (GeV$^2$) \\ \hline $\bar{B}^0 \to D^{*+}\pi^- $ & $1.14\pm 0.21$ & $1.22 \pm 0.15$ \\ $\bar{B}^0 \to D^{*+}\rho^- $ & $2.80\pm 0.69$ & $3.26 \pm 0.42$ \\ $ \bar{B}^0 \to D^{*+} a_1^- $ & $3.6\pm 0.9$ & $3.0 \pm 0.50$ \end{tabular} \end{table} Some of the systematic uncertainties in $R_{Exp}$ cancel if we form ratios of branching fractions, as does the QCD coefficient $c_1$ in $R_{Theo}$. Thus in the case of $D^{*+}\rho^-$/$D^{*+}\pi^-$, the expectation from factorization is given by $R_{Theo}(\rho)$/$R_{Theo}(\pi)$ times the ratio of the semileptonic branching ratios evaluated at the appropriate $q^2$ values.
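The entries of Table~\ref{Tfactc} follow from Table~\ref{TFactst} together with the world-average branching fractions of Table~\ref{kh4}; the sketch below reproduces the central values (error propagation is omitted for brevity).

```python
import math

C1, V_UD = 1.12, 0.9744   # from Table TFactst

# channel: (decay constant f_X in GeV, world-average B(B0bar -> D*+ X-),
#           dB/dq2(B -> D* l nu) at q^2 = m_X^2 in GeV^-2)
channels = {
    "D*+ pi-":  (0.13174, 0.0027, 0.00237),
    "D*+ rho-": (0.215,   0.0070, 0.00250),
    "D*+ a1-":  (0.205,   0.0122, 0.00335),
}

results = {}
for name, (f_x, br, dbdq2) in channels.items():
    r_exp = br / dbdq2                                    # left-hand side of Eq. (Efact)
    r_theo = 6 * math.pi**2 * C1**2 * f_x**2 * V_UD**2    # right-hand side
    results[name] = (r_exp, r_theo)
    print(f"{name}: R_Exp = {r_exp:.2f} GeV^2, R_Theo = {r_theo:.2f} GeV^2")
```

The lifetime factor cancels because numerator and denominator are both expressed as branching fractions of the same $\bar{B}^0$.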
In Table~\ref{Tfacthh} we show the comparison between the measured ratios and two theoretical predictions by Reader and Isgur \cite{ir}, and the revised BSW model \cite{Neubie}. \begin{table}[htb] \caption{Ratios of $B$ decay widths.}\label{Tfacthh} \begin{tabular}{lcccc} & Exp. & Factorization & RI Model & BSW Model \\ \hline ${\cal B}(\bar{B}^0 \to D^{*+}\rho^-) / {\cal B}(\bar{B}^0 \to D^{*+}\pi^-)$ & $2.59 \pm 0.67$ & $2.81 \pm 0.46$ & 2.2 -- 2.3 & 2.8 \\ ${\cal B}(\bar{B}^0 \to D^{*+} a_1^-) / {\cal B}(\bar{B}^0 \to D^{*+}\pi^-)$ & $4.5 \pm 1.2$ & $3.4 \pm 0.6$ & 2.0 -- 2.1 &3.4 \end{tabular} \end{table} At the present level of precision, there is good agreement between the experimental results and the expectation from factorization for the $ q^2$ range $ 0 < q^2 < m_{a_1}^2$. Note that it is possible that factorization will be a poorer approximation for decays with smaller energy release or larger $q^2$. Factorization tests can be extended to higher $q^2$ using $B\to D^{*} D_s^{(*)}$ decays, as will be discussed in section \ref{facapply}. \subsection{Factorization and Angular Correlations} \label{fac-ang-cor} More subtle tests of the factorization hypothesis can be performed by examining the polarization in $B$ meson decays into two vector mesons, as suggested by K\"orner and Goldstein\cite{Kg}. Again, the underlying principle is to compare the hadronic decays to the appropriate semileptonic decays evaluated at a fixed value of $q^2$. For instance, the ratio of longitudinal to transverse polarization ($\Gamma_{L}/\Gamma_{T}$) in $\bar{B^0} \to D^{*+} \rho^{-}$ should be equal to the corresponding ratio for $B\to D^{*}\ell \nu$ evaluated at $q^2={m_\rho}^2=0.6~{\rm GeV}^2$. \begin{equation} {{\Gamma_{L}}\over{\Gamma_{T}}} ({\bar{B^0} \to D^{*+} \rho^{-}}) = {{\Gamma_{L}}\over{\Gamma_{T}}} {(B\to D^*\ell\nu)|}_{q^2=m_{\rho}^2} \end{equation} The advantage of this method is that it is not affected by QCD corrections \cite{lepage}.
For $B \to D^*\ell\nu$ decay, longitudinal polarization dominates at low $q^2$, whereas near $ q^2= q^2_{\rm max}$ transverse polarization dominates. There is a simple physical argument for the behaviour of the form factors near these two kinematic limits. Near $ q^2=q^2_{\rm max}$, the $D^*$ is almost at rest and its small velocity is uncorrelated with the $D^*$ spin, so all three $D^*$ helicities are equally likely and we expect $\Gamma_T / \Gamma_L$ = 2. At $q^2=0$, the $D^*$ has the maximum possible momentum, while the lepton and neutrino are collinear and travel in the direction opposite to the $D^*$. The lepton and neutrino helicities are aligned to give $S_z= 0$, so near $q^2=0$ longitudinal polarization is dominant. For $\bar{B^0} \to D^{*+} \rho^-$, we expect $88\%$ longitudinal polarization from the argument described above \cite{Rosfac}. Similar results have been obtained by Neubert\cite{neub}, Rieckert\cite{ricky}, and Kramer {\it et al.} \cite{Kramfac}. \begin{figure}[htb] \unitlength 1.0in \vskip 15 mm \begin{center} \begin{picture}(2.5,2.5)(0.0,0.0) \put(-.35,-0.2){\psfig{width=3.2in,height=3.2in,file=bexcl_fig33.ps}} \end{picture} \vskip 15 mm \caption[]{ The differential branching ratio for $\bar{B^0} \to D^{*+} \ell \bar{\nu}_{\ell}$. The curves show the theoretical prediction for producing transversely (dashed) and longitudinally (dash-dotted) polarized $D^*$ mesons, as well as the total decay rate (solid) (from Ref.~\protect\cite{neub}).}\label{neuba} \end{center} \end{figure} Fig.~\ref{neuba} shows the prediction of Neubert for transverse and longitudinal polarization in $B \to D^*\ell\nu$ decays. Using this figure we find $\Gamma_L /\Gamma$ to be 85\% at $q^2={m_\rho}^2=0.6$. The agreement between these predictions and the experimental result (Sec.~\ref{pol-D*-rho}) \begin{equation} \Gamma_L /\Gamma \; = \; 90 \pm 7 \pm 5 \% \end{equation} supports the factorization hypothesis in hadronic $B$ meson decay for $q^2$ values up to $m_{\rho}^2$. 
Factorization breaks down in the charm sector due to the presence of final state interactions, ``FSI''. The strength of these long distance effects in the $B$ system can be determined by performing an isospin analysis of related decay channels such as $B^- \to D^0\pi^-$, $\bar{B}^0 \to D^0\pi^0$, and $\bar{B}^0 \to D^+\pi^-$ as was done in the past for the $D\to K \pi$ and $D \to K^* \pi$ systems. At the present level of experimental precision, there is no evidence for non-zero isospin phase shifts in $B$ decay. From a maximum likelihood fit to the observed branching fractions, Yamamoto finds that $\cos\delta^* > 0.82$ at the 90\% confidence level, where $\delta^*$ is the phase shift for the $B\to D \pi$ system, and comparable constraints, $\cos\delta^* > 0.57 (0.92)$, for the $B\to D^* \pi$ ($B\to D\rho$) isospin multiplets\cite{hitoshi}. In $B$ decays to two vector mesons, such as $B \to D^*\rho$, the presence of final state interactions could also be probed by studying the angle, $\chi$, between the $D^*$ and $\rho$ decay planes. FSI would cause a phase shift between the helicity amplitudes and break the symmetry of the $\chi$ distribution. The presence of FSI would lead to an angular distribution proportional to $\sin\chi$ or $\sin 2\chi$\cite{hitoshichi}. Until the $D_s$ decay constant, $f_{D_s}$, is measured more precisely, e.g. in $D_s \to \mu\nu$, tests of the factorization hypothesis based on branching fractions cannot be applied to $B\to D^* D_s$ decays. However, CLEO~II has accumulated about 20 events in the $B^- \to D^{*0}D_s^{*-}$ and $\bar{B}^0 \to D^{*+}D_s^{*-}$ modes. As the data sample increases, it will become possible to measure the polarization in these decay modes and investigate whether factorization is still a valid assumption at $q^2=m^2_{D_s}$.
\subsection{Tests of Spin Symmetry in HQET} \label{spin-sym} In HQET the effect of the heavy quark magnetic moment does not enter to lowest order \cite{Mannel}, and the assumption of factorization leads to the following predictions based on the spin symmetry of HQET: \begin{equation} \Gamma (\bar{B^0} \to D^+ \pi^-) = \Gamma (\bar{B^0} \to D^{*+}\pi^-) \end{equation} and \begin{equation} \Gamma (\bar{B^0} \to D^+ \rho^-) = \Gamma (\bar{B^0} \to D^{*+}\rho^-). \end{equation} After correcting for phase space and deviations from heavy quark symmetry it is predicted that ${\cal B}(\bar{B^0} \to D^+ \pi^-) = 1.03~ {\cal B}(\bar{B^0} \to D^{*+}\pi^-)$ and ${\cal B}(\bar{B^0} \to D^+ \rho^-) = 0.89~{\cal B}(\bar{B^0} \to D^{*+} \rho^-)$. A separate calculation by Blok and Shifman using a QCD sum rule approach predicts that ${\cal B}(\bar{B^0} \to D^+ \pi^-) = 1.2~{\cal B}(\bar{B^0} \to D^{*+} \pi^-)$. This differs from the HQET prediction due to the presence of non-factorizable contributions \cite{BS}. From the experimental data we find \begin{equation} {{{\cal B}(\bar{B^0} \to D^+ \pi^-)}\over{{\cal B}(\bar{B^0} \to D^{*+} \pi^-)}} \; = \; 1.11 \pm 0.22 \pm 0.08 \end{equation} and \begin{equation} {{{\cal B}(\bar{B^0} \to D^+ \rho^-)}\over{ {\cal B}(\bar{B^0} \to D^{*+} \rho^-)}}\; = \; 1.06 \pm 0.27 \pm 0.08 \end{equation} The second error is due to the uncertainty in the $D$ branching fractions. The two ratios of branching fractions are consistent with the expectations from HQET spin symmetry as well as with the prediction from Blok and Shifman that includes non-factorizable contributions. Mannel {\it et al.} \cite{Mannel} observe that by using a combination of HQET, factorization, and data on $B\to D^*~\ell~\nu$, they can obtain model dependent predictions for ${\cal B} (\bar{B^0}\to D^+ \rho^-)/ {\cal B}(\bar{B^0} \to D^+ \pi^-)$. Using three parameterizations of the universal Isgur-Wise form factor \cite{param}, they predict this ratio to be 3.05, 2.52, or 2.61.
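As a rough consistency check, one can form the pull of each measured ratio against its HQET prediction, adding the statistical and systematic errors in quadrature (a simplification made here for illustration, since the $D$ branching-fraction systematic is common to both ratios):

```python
import math

# Measured ratio, statistical error, systematic error, and HQET prediction,
# all taken from the text above.
checks = [
    ("B(D+ pi-)/B(D*+ pi-)", 1.11, 0.22, 0.08, 1.03),
    ("B(D+ rho-)/B(D*+ rho-)", 1.06, 0.27, 0.08, 0.89),
]

for name, value, stat, syst, prediction in checks:
    error = math.hypot(stat, syst)        # quadrature sum of the two errors
    pull = (value - prediction) / error   # deviation in standard deviations
    print(f"{name}: pull = {pull:+.2f} sigma")
```

This yields pulls of about $+0.3\sigma$ and $+0.6\sigma$, which is the quantitative content of the statement that the measured ratios are consistent with HQET spin symmetry.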
From the measurements of the branching ratios we obtain \begin{equation} {{{\cal B}(\bar{B^0}\to D^+ \rho^-)}\over{ {\cal B}(\bar{B^0} \to D^{+} \pi^-)}}\; = \; 2.7 \pm 0.6 \end{equation} The systematic errors from the $D$ branching fractions cancel in this ratio. Again we find good agreement with the prediction from HQET combined with factorization. Similar comparisons can be performed for $B\to D^{(*)} D_s^{(*)}$ decay modes. Using isospin invariance to combine the $\bar{B}^0\to D^+ D_s^{(*-)}$ and $B^-\to D^0 ~D_s^{(*-)}$ decay modes, we obtain \begin{equation} {{{\cal B}(\bar{B}\to D D_s^{-})}\over{ {\cal B}(\bar{B} \to D^* D_s^{-})}}\; = \; 0.94 \pm 0.35 \end{equation} The predicted range for this ratio is $1.35-1.56$ \cite{cleodds}. Similarly, \begin{equation} {{{\cal B}(\bar{B} \to D D_s^{*-})}\over{ {\cal B}(\bar{B} \to D^{*} D_s^{*-})}}\; = \; 0.46 \pm 0.19 \end{equation} In this case, the additional helicity states available lead to the expectation that this ratio will lie in the range $0.33-0.39$ \cite{cleodds}. \subsection{Applications of Factorization} \label{facapply} If factorization holds, hadronic $B$ decays can be used to extract information about semileptonic decays. For example, we can determine the poorly measured rate $B\to D^{**}(2420)~\ell~\nu$ from the branching ratio of $B\to D^{**}(2420)\pi$, by assuming that the rate for $B\to D^{**}(2420)\pi$ is related to $d\Gamma/dq^2 (B \to D^{**}(2420) \ell \nu)$ evaluated at $q^2 = m_{\pi}^2$. Using the model of Colangelo {\it et al.} \cite{Bari} to determine the shape of the form factors we obtain the ratio $$\frac{\Gamma(B \to D^{**}(2420) ~\ell ~\nu)} {\Gamma(B \to D^{**}(2420)\pi)}= 3.2$$ Combining this with the experimental result, ${\cal {B}}(B^- \to D^{**0}(2420)\pi^- )\, = \, 0.15 \pm 0.05\, \%$ (Table~\ref{kh3}), we predict ${\cal B} (D^{**}(2420) \ell \nu ) = 0.48 \pm 0.16 \%$.
This is consistent with the average of recent direct measurements by OPAL and ALEPH (Table~\ref{Tbsemiexcit}), ${\cal B} (D^{**}(2420) \ell \nu ) = 0.82 \pm 0.18 \pm 0.06\%$. A second application of factorization is the determination of $f_{D_s}$ using the decays $B \to D^*D_s$. The rate for $\bar{B^0}\to D^{*+}D_s$ is related to the differential rate for $\bar{B^0}\to D^{*+}\ell^-\nu$ at $q^2 = m_{D_s}^2$ if factorization continues to be valid at larger values of $q^2$: \begin{equation} {\Gamma\left(\bar{B^0}\to D^{*+} D_{s}^{-}\right)\over\displaystyle{d\Gamma\over \displaystyle dq^2} \left(\bar{B^0}\to D^{*+}\ell^-\nu\right)\biggr|_{q^2=m^2_{D_s}}} = 6\pi^2 \delta ~{ c_1^2} f_{D_s}^2|V_{cs}|^2 . \label{Efacts} \end{equation} The factor $\delta = 0.37$ accounts for the different form factors which enter in $B \to D^* D_s$ and $B\to D^*\ell\nu$ \cite{Neubie}. Using the value listed in Table \ref{TFactst} for $d\Gamma/dq^2(B\to D^*\ell \nu)$ at $q^2\: = \: m_{D_s}^2$ and the average branching ratio ${\cal B}(B\to D^{*} D_{s}^-)=0.93 \pm 0.25 \%$, we obtain $$ f_{D_s} = (271 \pm 77) \sqrt{3.7\%/{\cal B}(D_s \to \phi \pi^+)} ~\rm{MeV} $$ and with ${\cal B}(B\to D^{*} D_{s}^{*-})=1.95 \pm 0.52 \%$, we find (with $\delta = 1$) $$ f_{D_s^*} = (248 \pm 69) \sqrt{3.7\%/{\cal B}(D_s \to \phi \pi^+)} ~\rm{MeV} $$ This result can be compared to the value $$ f_{D_s} = (344 \pm 37 \pm 52) \sqrt{{\cal B}(D_s \to \phi \pi^+)/3.7\%} ~\rm{MeV}$$ that was obtained from a direct measurement of $D_s\to \mu \nu$ decays in continuum charm events \cite{CLNS9314}. Both values of $f_{D_s}$ are consistent with theoretical predictions, which are in the range $f_{D_s}=200-290$~MeV \cite{Lattices}, \cite{Potentials}, \cite{QCDsum}. If both the $D_s^+ \to \phi \pi^+$ branching fraction and $f_{D_s}$ are measured more precisely, then measurements of the branching ratios of $B\to D^* D_s$ decays can be used to test factorization in $B$ decay at $q^2 = m_{D_s}^{2}$.
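For orientation, Eq.~(\ref{Efacts}) can be inverted for the decay constant (a simple rearrangement, not an additional input):

```latex
f_{D_s} = \left[\frac{\Gamma\left(\bar{B^0}\to D^{*+} D_s^-\right)}
  {6\pi^2\,\delta\, c_1^2\, |V_{cs}|^2\,
   \left.\dfrac{d\Gamma}{dq^2}\left(\bar{B^0}\to D^{*+}\ell^-\nu\right)
   \right|_{q^2=m^2_{D_s}}}\right]^{1/2}
```

Since the measured $\Gamma(\bar{B^0}\to D^{*+}D_s^-)$ scales inversely with the assumed ${\cal B}(D_s\to\phi\pi^+)$, the extracted $f_{D_s}$ carries the $\sqrt{3.7\%/{\cal B}(D_s\to\phi\pi^+)}$ normalization factor quoted above.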
In the near future, it will also be possible to test factorization in this $q^2$ range by measuring $\Gamma_{L}/\Gamma$ in $B \to D^* D_{s}^*$ decays. \subsection{Factorization in Color Suppressed Decay} \label{fac-color} It is not obvious whether the factorization hypothesis will be satisfied in decays which proceed via internal W-emission, e.g.\ $B\to \psi K^{(*)}$. Two observables have been compared to phenomenological models based on the factorization hypothesis: the ratio of vector to pseudoscalar modes and the polarization in $B\to \psi K^*$ decays. The ratio of vector to pseudoscalar meson production \begin{equation} {{\cal B}(B \to \psi K^*)\over{{\cal B} (B \to \psi K)}} = 1.68 \pm 0.33 \end{equation} can be calculated using factorization and the ratio of the $B\to K^*$ and $B\to K$ form factors. The revised BSW model of Neubert {\it et al.} \cite{Neubie} predicts a value of 1.61 for this ratio, which is close to the experimental value. Another test is the corresponding ratio for $\psi'$ decays: \begin{equation} {{\cal B}(B \to \psi' K^*)\over{{\cal B} (B \to \psi' K)}} = 2.1 \pm 1.5 \end{equation} This can be compared to the revised BSW model, which predicts 1.85 for this ratio. Gourdin {\it et al.} \cite{gkpeta} argue that the ratio ${{\cal B}(B \to \eta_c K^*)/{{\cal B} (B \to \eta_c K)}}$ would provide a good test of the factorization hypothesis in internal spectator decays. However, a significantly larger data sample than is available at present will be required before this ratio can be measured with sufficient precision. Other ratios of decay rates in modes with charmonium mesons may also be used to test for the violation of factorization \cite{gkpother}. The experimental results on $\psi K^*$ polarization can be compared to the theoretical predictions of Kramer and Palmer \cite{Krampalm}, which depend on the assumption of factorization and on the unmeasured $B\to K^*$ form factor.
Using the BSW model to estimate the form factors, they find $\Gamma_{L}/\Gamma= 0.57$. Using HQET to extrapolate from the E691 measurements of the $D\to K^*$ form factor, they obtain $\Gamma_{L}/\Gamma=0.73$. The group of Gourdin, Kamal and Pham, as well as the collaboration of Aleksan, Le Yaouanc, Oliver, P\`ene, and Raynal, have noted that there is no set of experimental or theoretical form factors that can simultaneously reproduce the measured values of $\Gamma_{L}/\Gamma$ and ${\cal B}(B\to \psi K^*)/ {\cal B}(B\to \psi K)$ \cite{gkp},\cite{ayopr}. They conclude that there is either a fundamental problem in heavy to light form factors or a breakdown of factorization for this class of decay modes. Kamal and Santra have suggested that all the measured observables in exclusive $B\to \psi$ decays can be accommodated with a single non-factorizable amplitude \cite{kamalpsi}. CLEO also finds evidence at the 2.5 standard deviation level for $B\to \chi_{c2}$ transitions at a branching ratio of $ 0.25\pm 0.10\pm 0.03\%$. If confirmed, this would indicate the presence of either non-factorizable color octet contributions, which are neglected in the usual treatment of hadronic $B$ decays, or higher order processes $O(\alpha_s^2)$ in $b\to c \bar{c} s$ decays \cite{bodwin}. \section{DETERMINATION OF THE COLOR SUPPRESSED AMPLITUDE} \label{eff-color-supp} \subsection{Color Suppression in $B$ Decay} In the decays of charmed mesons the effect of color suppression is obscured by final state interactions (FSI) and by soft gluon effects which enhance $W$ exchange diagrams. Table~\ref{Tcolsuprat} gives ratios of several charmed meson decay modes with approximately equal phase space factors, where the mode in the numerator is color suppressed while the mode in the denominator is an external spectator decay. With the exception of the decay $D^0\to \bar{K}^0\rho^0$, it is clear that the color suppressed decays do not have significantly smaller branching ratios.
\begin{table}[htb] \caption{Measured ratios of color suppressed to external spectator branching fractions.}\label{Tcolsuprat} \begin{tabular}{cc} Ratio of modes & Ratio of branching fractions \cite{PDG} \\ \hline ${\cal B}(D^0 \to \bar{K^0}\rho^0) / {\cal B}(D^0 \to K^- \rho^+)$ & $0.08 \pm 0.04$ \\ ${\cal B}(D^0 \to \bar{K^0} \pi^0) / {\cal B}(D^0 \to K^- \pi^+)$ & $0.57 \pm 0.13$ \\ ${\cal B}(D^0\to \bar{K^{*0}} \pi^0) / {\cal B}(D^0 \to K^{*-} \pi^+)$ & $0.47\pm 0.23$ \\ ${\cal B}(D^0 \to \pi^0 \pi^0) / {\cal B}(D^0 \to \pi^- \pi^+)$ & $0.77 \pm 0.25$ \\ ${\cal B}(D_s^{+} \to \bar{K^{*0}} K^+) / {\cal B}(D_s \to \phi \pi^+)$ & $0.95\pm 0.10$ \\ ${\cal B}(D_s^{+} \to \bar{K^0} K^+) / {\cal B}(D_s \to \phi \pi^+)$ & $1.01 \pm 0.16$ \\ \end{tabular} \end{table} When the BSW model is used to fit the data on charm decays, it gives values of $a_1=1.26$ and $a_2 = -0.51$. The BSW model assumes that the values of the coefficients can be extrapolated from $\mu^2 = m_{c}^2$ to $\mu^2 = m_{b}^2$ taking into account the evolution of the strong coupling constant $\alpha_s$. This extrapolation gives the predictions $a_1=1.1$ and $a_2=-0.24$ for $B$ decays. The smaller magnitude of $a_2$ means that, in contrast to the charm sector, one expects to find a more consistent pattern of color suppression in $B$ meson decays. Another approach uses the factorization hypothesis, HQET and model dependent form factors (RI model) \cite{ir}. In this approach, $a_1$ and $a_2$ are determined from QCD (with $1/N_{\rm color} = 1/3$), and color suppressed $B$ decays are expected to occur at about $1/1000$ the rate of unsuppressed decays. In Section~\ref{color-supress} we obtained upper limits for color suppressed $B$ decays with a $D^0$ or $D^{*0}$ meson in the final state. In Table~\ref{Tbrcolcomp} these results are compared to the predictions of the BSW and the RI models.
\begin{table}[htb] \caption{Branching fractions of color suppressed $B$ decays and comparisons with models.}\label{Tbrcolcomp} \begin{tabular}{lcccc} Decay Mode & U. L. (\%) & BSW (\%) & ${\cal B}$ (BSW) & RI~model (\%) \\ \hline $\bar{B^0} \to D^{0} \pi^0$ &$<0.048$ & $0.012$ & $0.20 a_2^{2} (f_{D}/220 \rm{MeV})^2$ & $0.0013 - 0.0018$ \\ $\bar{B^0} \to D^{0} \rho^0$ &$<0.055$ & $0.008$ & $0.14 a_2^{2} (f_{D}/220 \rm{MeV})^2$ & $0.00044$ \\ $\bar{B^0} \to D^{0} \eta$ &$<0.068$ & $0.006$ & $0.11 a_2^{2} (f_{D}/220 \rm{MeV})^2 $ & \\ $\bar{B^0} \to D^{0} \eta^{'}$ &$<0.086$ & $0.002$ & $ 0.03 a_2^{2}(f_{D}/220 \rm{MeV})^2$ & \\ $\bar{B^0} \to D^{0} \omega $ &$<0.063$ & $0.008$ & $0.14 a_2^{2}(f_{D}/220 \rm{MeV})^2$ & \\ $\bar{B^0} \to D^{*0} \pi^0$ &$<0.097$ & $0.012$ & $ 0.21 a_2^{2}(f_{D^*}/220 \rm{MeV})^2$ & $0.0013-0.0018$ \\ $\bar{B^0} \to D^{*0} \rho^0$ &$<0.117$ & $0.013$ & $ 0.22 a_2^{2}(f_{D^*}/220 \rm{MeV})^2$ & $0.0013 -0.0018$ \\ $\bar{B^0} \to D^{*0} \eta$ &$<0.069$ & $0.007$ & $0.12 a_2^{2}(f_{D^*}/220 \rm{MeV})^2$ & \\ $\bar{B^0} \to D^{*0} \eta^{'}$ &$<0.27$ & $ 0.002$ & $0.03 a_2^{2}(f_{D^*}/220 \rm{MeV})^2$ & \\ $\bar{B^0} \to D^{*0} \omega$ &$<0.21$ & $0.013$ & $ 0.22 a_2^{2}(f_{D^*}/220 \rm{MeV})^2$ & \end{tabular} \end{table} In contrast to charm decay, color suppression seems to be operative in hadronic decays of $B$ mesons. The limits on the color suppressed modes with $D^{(*)0}$ and neutral mesons are still above the level expected by the two models, but we can already exclude a prediction by Terasaki \cite{tera} that ${\cal{B}}(\bar{B^0} \to D^0 \pi^0) \approx 1.8~{\cal{B}}(\bar{B^0} \to D^+\pi^-)$. To date, the only color suppressed $B$ meson decay modes that have been observed are final states which contain charmonium mesons, e.g.\ $B\to \psi K$ and $B\to \psi K^*$ \cite{psicomment}.
\subsection{Determination of $|a_1|$, $|a_2|$ and the Relative Sign of ($a_2/a_1$)} \label{a1-a2} In the BSW model \cite{Neubie}, the branching fractions of the $\bar{B}^0$ normalization modes are proportional to $a_1^2$ while the branching fractions of the $B\to\psi$ decay modes depend only on $a_2^2$. A fit to the branching ratios for the modes $\bar{B^0}\to D^+\pi^-$, $D^+\rho^-$, $D^{*+}\pi^-$ and $D^{*+}\rho^-$ using the model of Neubert {\it et al.}\ yields \begin{equation} |a_1| = 1.03 \pm 0.04 \pm 0.06 \label{normal_a1} \end{equation} and a fit to the modes with $\psi$ mesons in the final state gives \begin{equation} |a_2| = 0.23 \pm 0.01 \pm 0.01 \label{psi_a2} \end{equation} The first error on $|a_1|$ and $|a_2|$ includes the uncertainties from the charm or charmonium branching ratios, the experimental systematics associated with detection efficiencies and background subtractions, as well as the statistical errors from the branching ratios. The second error quoted is the uncertainty due to the $B$ meson production fractions and lifetimes. We have assumed that the ratio of $B^+ B^-$ and $B^0 \bar{B^0}$ production at the $\Upsilon(4S)$ is one \cite{dstlnu}, and assigned an uncertainty of 10\% to it. The magnitude of the amplitude for external spectator processes, $|a_1|$, can also be determined from $B\to D^{(*)}D_s^{(*)}$ decays. Since these transitions are not subject to interference with the internal spectator amplitude, we can combine $B^-$ and $\bar{B}^0$ decays to reduce the statistical error. Using the average branching fractions given in Tables~\ref{kh3} and~\ref{kh4} we obtain \begin{equation} |a_1|_{DD_s} = 0.93 \pm 0.06 \pm 0.04 \label{dds_a1} \end{equation} It is interesting to note that this value of $|a_1|$ agrees with the result of the fit to the $B\to D^{(*)} \pi$ and $B\to D^{(*)}\rho$ modes (see Eq.~(\ref{normal_a1})). In general, $|a_1|$ could be different for exclusive $b\to c \bar{u} d$ and $b\to c \bar{c} s$ processes.
\begin{table}[htb] \caption{Predicted branching fractions in terms of BSW parameters $a_1$, $a_2$. The coefficients have been rescaled to accommodate the new $B$ lifetime and $|V_{cb}|$ values given in equation~(\protect\ref{theo_parms}) and $f_D\, = \, f_{D^*}\, = 220$~MeV.}\label{Tbswcol} \begin{tabular}{lcc} Mode & Neubert {\it et al.} \cite{Neubie} & Deandrea {\it et al.} \cite{DBGN} \\ \hline $\bar{B}^0 \to D^+ \pi^- $ &$ 0.264 a_1^2 $ & $0.278 a_1^2$ \\ $\bar{B}^0 \to D^+ \rho^-$ &$ 0.621 a_1^2 $ & $0.717 a_1^2$ \\ $\bar{B}^0 \to D^{*+} \pi^-$ &$ 0.254 a_1^2 $ & $0.278 a_1^2$ \\ $\bar{B}^0 \to D^{*+} \rho^-$ &$ 0.702 a_1^2 $ & $0.949 a_1^2$ \\ $\bar{B}^0 \to D^+ D_s^- $ &$ 1.213 a_1^2 $ & $ 1.094 a_1^2$ \\ $\bar{B}^0 \to D^+ D_s^{*-}$ &$ 0.859 a_1^2 $ & $ 0.745 a_1^2$ \\ $\bar{B}^0 \to D^{*+} D_s^-$ &$ 0.824 a_1^2 $ & $ 0.768 a_1^2$ \\ $\bar{B}^0 \to D^{*+} D_s^{*-}$ &$ 2.203 a_1^2 $ & $ 2.862 a_1^2$ \\ $ B^- \to D^0 \pi^- $ & $ 0.265 [a_1 +1.230 a_2 ( f_D/220)]^2 $ & $ 0.278 [a_1 +1.12655 a_2 ( f_D/220)]^2 $ \\ $ B^- \to D^0 \rho^-$ & $ 0.622 [a_1 + 0.662 a_2 ~( f_D/220)]^2 $ & $ 0.717 [a_1 +0.458 a_2 ( f_D/220)]^2 $ \\ $ B^- \to D^{*0} \pi^-$ & $ 0.255 [a_1 +1.292 a_2 ~( f_{D^*}/220)]^2$ & $ 0.278 [a_1 + 1.524 a_2 ( f_{D^*}/220)]^2 $ \\ $ B^- \to D^{*0} \rho^-$ & $ 0.703 [a_1^2 + 1.487 a_1 a_2 ~( f_{D^*}/220) $ & $ 0.949 [a_1^2 + 1.31 a_1 a_2 ~( f_{D^*}/220) $ \\ & $+0.635 a_2^2 (f_{D^*}/220)^2]$ & $+ 0.53 a_2^2 ( f_{D^*}/220)^2]$ \\ $B^- \to D^0 D_s^- $ &$ 1.215 a_1^2 $ & $ 1.094 a_1^2$ \\ $B^- \to D^0 D_s^{*-}$ &$ 0.862 a_1^2 $ & $ 0.745 a_1^2$ \\ $B^- \to D^{*0} D_s^-$ &$ 0.828 a_1^2 $ & $ 0.768 a_1^2$ \\ $B^- \to D^{*0} D_s^{*-}$ &$ 2.206 a_1^2 $ & $ 2.862 a_1^2$ \\ $ \bar{B^0} \to \psi \bar{K}^0$ &$ 1.817 a_2^2 $ & $ 1.652 a_2^2$ \\ $ \bar{B^0} \to \psi \bar{K}^{*0} $&$ 2.927 a_2^2 $ & $ 2.420 a_2^2$ \\ $ \bar{B^0} \to \psi' \bar{K}^0$ &$ 1.065 a_2^2 $ & $ 0.559 a_2^2$ \\ $ \bar{B^0} \to \psi' \bar{K}^{*0} $&$ 1.965 a_2^2 $ & $ 1.117
a_2^2$ \\ $ B^- \to \psi K^- $ &$ 1.819 a_2^2 $ & $ 1.652 a_2^2$ \\ $ B^- \to \psi K^{*-}$ &$ 2.932 a_2^2 $ & $ 2.420 a_2^2$ \\ $ B^- \to \psi' K^- $ &$ 1.068 a_2^2 $ & $ 0.559 a_2^2$ \\ $ B^- \to \psi' K^{*-}$ &$ 1.971 a_2^2 $ & $ 1.117 a_2^2$ \\ \end{tabular} \end{table} \begin{table}[htb] \caption{Ratios of normalization modes to determine the sign of $a_2/a_1$. The magnitude of $a_2/a_1$ is the value in the BSW model which agrees with our result for $B\to \psi$ modes.}\label{Tbswexpc} \begin{tabular}{ccccc} Ratio &$a_2/a_1 =-0.23 $ & $a_2/a_1 =0.23 $ & Experiment & RI~model \\ \hline $R_1 $& 0.51 & 1.64 & $1.60 \pm 0.30$ &$1.20-1.28$ \\ $R_2 $& 0.72 & 1.33 & $1.61 \pm 0.39$ &$1.09-1.12$ \\ $R_3 $& 0.49 & 1.68 & $1.85 \pm 0.40$ &$1.19-1.27$ \\ $R_4 $& 0.68 & 1.37 & $2.10 \pm 0.61$ &$1.10-1.36$ \end{tabular} \end{table} By comparing branching ratios of $B^-$ and $\bar{B^0}$ decay modes, it is possible to determine the sign of $a_2$ relative to $a_1$. The BSW model~\cite{Neubie} predicts the following ratios: \begin{equation} R_1 = {{\cal B}(B^- \to D^0 \pi^-) \over {\cal B}(\bar{B^0}\to D^+ \pi^-)} = (1 + 1.23 a_2/a_1)^2 \label{colrate1} \end{equation} \begin{equation} R_2 = {{\cal B}(B^- \to D^0 \rho^-) \over {\cal B}(\bar{B^0} \to D^+ \rho^-)} = (1 + 0.66 a_2 /a_1)^2 \label{colrate2} \end{equation} \begin{equation} R_3 = {{\cal B}(B^- \to D^{*0} \pi^-) \over {\cal B}(\bar{B^0} \to D^{*+} \pi^-)} =(1 + 1.29 a_2/a_1)^2 \label{colrate3} \end{equation} \begin{equation} R_4 = {{\cal B}(B^- \to D^{*0} \rho^-) \over{\cal B}(\bar{B^0} \to D^{*+} \rho^-)} \approx (1 + 0.75 a_2/a_1)^2 \label{colrate4} \end{equation} Table~\ref{Tbswexpc} shows a comparison between the experimental results and the two allowed solutions in the BSW model. In the experimental ratios the systematic errors due to detection efficiencies partly cancel. In the ratios $R_3$ and $R_4$ the $D$ meson branching ratio uncertainties do not contribute to the systematic error.
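The two BSW columns of Table~\ref{Tbswexpc} follow directly from the expressions above; a short numerical cross-check (not part of the original analysis), using the coefficients from Eqs.~(\ref{colrate1})--(\ref{colrate4}) and the two candidate signs of $a_2/a_1$ with the magnitude fixed to 0.23:

```python
# Predicted ratios R_i = (1 + k_i * r)^2 for the two BSW solutions r = a2/a1.
# Coefficients k_i are taken from Eqs. R1-R4 above (R4 uses the approximate 0.75).
coeffs = {"R1": 1.23, "R2": 0.66, "R3": 1.29, "R4": 0.75}

def predicted_ratio(k, r):
    """BSW prediction for the B-/B0bar rate ratio given a2/a1 = r."""
    return (1.0 + k * r) ** 2

for name, k in coeffs.items():
    minus = predicted_ratio(k, -0.23)
    plus = predicted_ratio(k, +0.23)
    print(f"{name}: a2/a1 = -0.23 -> {minus:.2f},  a2/a1 = +0.23 -> {plus:.2f}")
```

This reproduces the table entries up to rounding, and makes clear why the positive-sign column sits close to the measured ratios while the negative-sign column does not.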
A least squares fit to the ratios $R_1$--$R_3$ gives \begin{equation} a_2/a_1 = 0.25 \pm 0.07 \pm 0.06 \label{a2a1_ratio} \end{equation} where we have ignored uncertainties in the theoretical predictions. $R_4$ is not included in the fit since the model prediction in this case is not thought to be reliable \cite{volkie}. The second error is due to the uncertainty in the $B$ meson production fractions and lifetimes, which enter into the determination of $a_2/a_1$ in the combination $(f_+ \tau_{+}/ f_{0} \tau_{0})$. As this ratio increases, the value of $a_2/a_1$ decreases. The allowed range of $(f_+ \tau_{+}/ f_{0} \tau_{0})$ excludes a negative value of $a_2/a_1$. Other uncertainties in the magnitude \cite{fdvari} of $f_D$, $f_{D^*}$ and in the hadronic form factors can change the magnitude of $a_2/a_1$ but not its sign. The numerical factors which multiply $a_2/a_1$ include the ratios of $B \to \pi$ ($B\to\rho$) to $B\to D$ ($B\to D^*$) form factors, as well as the ratios of the meson decay constants. We assume values of 220~MeV for $f_D$ and $f_{D^*}$ \cite{rosfd}. To investigate the model dependence of the result we have recalculated $|a_1|$, $|a_2|$, and $a_2/a_1$ in the model of Deandrea {\it et al.}\ and find $|a_1| = 0.97 \pm 0.04 \pm 0.06$, $|a_2| = 0.24 \pm 0.01 \pm 0.01$, and $a_2/a_1 = 0.25 \pm 0.07 \pm 0.05$, consistent with the results discussed above. A different set of $B \to \pi$ form factors can be calculated using QCD sum rules. Using the form factors determined by Belyaev, Khodjamirian and R\"uckl \cite{brueckl} and by Ball \cite{Ballff}, $a_2/a_1$ changes by 0.04. Kamal and Pham have also considered the effect of uncertainties in form factors, the effects of final state interactions, and annihilation terms. They conclude that these may change the magnitude of $a_2/a_1$ but not its sign \cite{KPham}. Systematic uncertainties in the ratio of $D$ branching fractions could also modify its magnitude.
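The fit itself is easy to reproduce schematically: minimize a $\chi^2$ built from $R_1$--$R_3$, with measured values from Table~\ref{Tbswexpc}. A plain grid scan stands in here for the full least squares machinery, and theoretical and correlated systematic uncertainties are ignored, so this is an illustration rather than a reproduction of the quoted errors:

```python
# Schematic chi-square fit of r = a2/a1 to the measured ratios R1-R3
# (R4 excluded, as in the text).  Measured values and errors from the table;
# coefficients k_i from the BSW expressions for R1-R3.
data = [  # (k_i, measured R_i, error on R_i)
    (1.23, 1.60, 0.30),
    (0.66, 1.61, 0.39),
    (1.29, 1.85, 0.40),
]

def chi2(r):
    """Sum of squared residuals between BSW predictions and measurements."""
    return sum(((1.0 + k * r) ** 2 - R) ** 2 / err ** 2 for k, R, err in data)

# Scan r in [-1, 1] in steps of 0.001 and pick the minimum.
grid = [i / 1000.0 for i in range(-1000, 1001)]
best = min(grid, key=chi2)
print(f"best-fit a2/a1 ~ {best:.2f}")
```

The scan lands near $a_2/a_1 \simeq 0.25$, and the negative-sign region gives a much larger $\chi^2$, consistent with the fitted result quoted above.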
\begin{table}[htb] \caption{Predicted (BSW) and measured ratios of widths of $D^+$ and $D^0$ modes in charm decay.}\label{Tbcharm} \begin{tabular}{cccc} Mode &$a_2/a_1 =-0.40 $ & $a_2/a_1 =0.40 $ & Ratio of widths (exp) \cite{PDG} \\ \hline $D^+ \to \bar{K}^0\pi^+ /D^0\to K^- \pi^+$ & 0.26 & 2.2 & $0.28 \pm 0.05 $ \\ $D^+\to \bar{K}^0\rho^+ /D^0\to K^-\rho^+$ & 0.58 & 1.5 & $ 0.36 \pm 0.10 $ \\ $D^+\to \bar{K}^{*0}\pi^+ /D^0\to K^{*-}\pi^+$ & 0.05 & 3.2 & $0.17\pm 0.07 $ \\ $D^+\to \bar{K}^{*0}\rho^+ /D^0\to K^{*-}\rho^+ $ & 0.34 & 2.0 & $0.25 \pm 0.12 $ \end{tabular} \end{table} The magnitude of $a_2$ determined from this fit to the ratio of $B^-$ and $B^0$ modes is consistent with the value of $a_2$ determined from the fit to the $B\to\psi$ decay modes. The sign of $a_2$ disagrees with the theoretical extrapolation from the fit to charmed meson decays using the BSW model \cite{oldfit}. It also disagrees with the expectation from the $1/N_{c}$ rule \cite{BS},\cite{halperin}. Table~\ref{Tbcharm} compares the corresponding charm decay ratios to the theoretical expectations for positive and negative values of $a_2/a_1$. The result may be consistent with the expectation of perturbative QCD \cite{Burasa1a2}. \subsection{The Sign of $a_2/a_1$ and the Anomalous Semileptonic Branching Ratio} \label{baffle} A relative plus sign between the coefficients $a_1$ and $a_2$, indicating constructive interference in $B^-$ decays, came as somewhat of a surprise, since destructive interference is observed in charm decay. Although constructive interference has been observed in all the $B^-$ modes studied so far, these comprise only a small fraction of the total rate. It is therefore important to broaden the experimental base and to measure $a_1$ and $a_2$ for a large variety of decay modes. One approach would be to compare inclusive $B^- \to D^0_{\rm direct}$ with $\bar{B}^0 \to D^+_{\rm direct}$ production.
It is intriguing that $a_1$ determined from $B\to D^{(*)}\pi , \, D^{(*)} \rho$ modes agrees well with the value of $a_1$ extracted from $B \to DD_s$ decays. The observation of color suppressed decays such as $\bar{B}^0 \to D^0 \pi^0$ would certainly help to clarify this picture, since they give another measure of $|a_2|$ complementary to $B\to$ charmonium decays. Keum \cite{keumi} has suggested that the relative sign of $a_1$ and $a_2$ could be determined from a measurement of the polarization in $B^- \to D^{*0} \rho^-$ decays. For $a_2/a_1 > 0$ the amount of longitudinal polarization should be less than 88\%, and vice versa. The semileptonic branching ratio is measured to be $(10.35\pm 0.17 \pm 0.35)$\% in the model independent dilepton analysis \cite{Cleo2l}. Comparable but more precise values are also obtained from the analysis of the single lepton spectrum. These measurements are significantly below the theoretical lower bound ${\cal B}_{SL}>12.5\%$ from QCD calculations within the parton model \cite{bbsl}. The origin of the theoretical limit can be understood simply. In the absence of QCD corrections, the virtual $W$ emitted by the $b$ quark can decay into a lepton-antineutrino pair, a $\bar{u}d$ quark pair, or a $\bar{c}s$ quark pair. For the decay into a quark pair, there are three possible color states which are equally probable. In addition, corrections must be made for the reduction in phase space in the $W\to \tau \nu$ and $W\to \bar{c} s$ decays. Then the semileptonic fraction, ${\cal B}_{SL}$, is given by \begin{equation} {\cal B}_{SL} = {{f_c} \over {5 f_c + 3 f_{\bar{c} s} + f_{c \tau}} } \end{equation} Using the phase space factors $f_c=0.45$ and $f_{\bar{c} s} \approx f_{c \tau} =0.12$ gives ${\cal B}_{SL} = 16.5\%$. Including QCD corrections modifies the hadronic contributions to the width and gives ${\cal B}_{SL} = 14.3\%$.
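The parton-model counting above is a one-line computation (phase-space factors as quoted in the text; the QCD corrections that bring the estimate down to 14.3\% are not included):

```python
# Parton-model estimate of the B semileptonic branching fraction.
# f_c: phase-space factor shared by W -> e nu, W -> mu nu and each colour
# state of W -> u dbar (5 channels in total); f_cs and f_ctau: the
# phase-space-suppressed W -> cbar s (x3 colours) and W -> tau nu channels.
f_c, f_cs, f_ctau = 0.45, 0.12, 0.12

b_sl = f_c / (5 * f_c + 3 * f_cs + f_ctau)
print(f"B_SL without QCD corrections ~ {100 * b_sl:.1f}%")  # ~16.5%
```

The denominator simply totals the available $W$ channels weighted by phase space, so the 16.5\% estimate follows directly from the counting argument in the text.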
The theoretical lower limit of $12.5\%$ is obtained by varying the quark masses and QCD scale to their lower limits. Several explanations of this discrepancy have been proposed and await experimental confirmation: \begin{itemize} \item An increased $b\to c\bar{c}s$ component of the $B$ meson hadronic width \cite{bbsl}, \cite{palmstech},\cite{dunietz}. However, recent experimental data rule out the mechanism suggested by Ref.~\cite{dunietz} as a major contributor to $B \to$ baryon decays. \item Higher order contributions might reduce the theoretical expectation, or the assumption of duality may not hold for $b$ quark decay \cite{falk}. The former has been advocated by Bagan, Ball, Braun, and Gosdzinsky, who find results consistent with the experimental result \cite{bagan1},\cite{bagan2} but also predict $n_{c}=1.28\pm 0.08$ for the number of charm quarks produced per $b$ decay, due to higher order enhancements of the $b\to c \bar{c} s$ channel \cite{bagan2}. \item Constructive interference in $B^-$ decays would reduce the theoretical expectation for the semileptonic branching ratio. A small contribution from $W$ exchange to $\bar{B}^0$ decays would keep the lifetime ratio close to unity and satisfy the experimental constraints on this quantity \cite{hsw}. \end{itemize} Increasing the $b\to c\bar{c}s$ component would increase the average number of $c$ quarks produced per $b$ quark decay and lead to another interesting problem: the predicted number of charm quarks per $b$ decay would increase to 1.3, while the current experimental world average for this number is $1.10\pm 0.06$ (see Section~\ref{charmpro}). There could also be a large contribution to the hadronic width that has not been measured. It has been suggested by Palmer and Stech \cite{palmstech} that $b \to c \bar{c} s$ followed by $c \bar{c} \to \rm{gluons}$, which in turn hadronize into a final state with no charm, has a large branching ratio.
Another related suggestion is that the rate for the hadronic penguin diagram $b\to sg$ is much larger than expected \cite{kaganbsg}. These possibilities will lead to significant production of high multiplicity charmless final states and are difficult to distinguish experimentally. A systematic study of inclusive hadronic $B$ decays to mesons and baryons will be required to resolve this problem. \section{RARE HADRONIC DECAYS} There are hadronic $B$ meson decays that cannot be produced by the usual $b\to c$ transition. The results of the experimental search for these rare decay modes provide important information on the mechanisms of $B$ meson decay, and significant progress is being made with the collection of large samples of $B$ mesons by the CLEO II experiment. As an indication of this, we will discuss the first observation of radiative penguin decay as well as new experimental results on the decays $\bar{B^0}\to\pi^+\pi^-$ and $\bar{B^0}\to K^-\pi^+$, where a statistically significant signal has been observed in the sum of the two modes. Decays of the kind $B\to D_s X_u$, where the $X_u$ system hadronizes as pions, can occur via a $b\to u$ spectator diagram where the $W$ forms a $c\bar{s}$ pair. Since other contributing diagrams are expected to be negligible, these decays may provide a clean environment in which to measure $V_{ub}$ in hadronic decays. Decays of the kind $\bar{B^0}\to D_s^+ X_s^-$, where $X_s$ is a strange meson, are also interesting, since they are associated with a $W$ exchange diagram.
\begin{figure}[htb] \begin{center} \vskip 15 mm \unitlength 1.0in \begin{picture}(2.,1.2)(0,0) \put(-1.9,-3.4){\psfig{width=6.5in,height=5.8in,file=rarefeyn.ps}} \end{picture} \caption{Rare $B$ meson decay diagrams: (a) $b \to u$ spectator and (b) gluonic penguin.} \label{rarefeyn} \end{center} \end{figure} Charmless hadronic decays such as $\bar{B^0}\to\pi^+ \pi^-$, $B^-\to\pi^-\pi^0$, $\bar{B^0}\to\pi^{\pm}\rho^{\mp}$ and $B^-\to\pi^0 \rho^-$ are expected to be produced by the $b\to u$ spectator diagram (Fig.~\ref{rarefeyn}(a)), although there is a possible small contribution from a $b\to d$ penguin diagram (Fig.~\ref{rarefeyn}(b)). The decay $\bar{B^0}\to \pi^+\pi^-$ has been discussed as a possible place to observe CP violation in the $B$ meson system \cite{CPpipi}. The final state is a CP eigenstate, and CP violation can arise from interference between the amplitude for the direct decay via the $b\to u$ spectator diagram and the amplitude for the decay following $B^0\bar{B^0}$ mixing. In this decay the CP violating angle is different from the one accessible in $\bar{B^0}\to \psi K_s$, so the measurement is complementary. There is a possible complication if the $b\to d$ penguin contribution to the amplitude is significant. This could be resolved if measurements are made on other rare hadronic decay modes to determine the role of the penguin amplitude in any observed CP violating effect \cite{CPpipi}. Decays to charmless hadronic final states containing an $s$ quark are expected to have a significant contribution from a $b\to s$ penguin diagram, although they can also occur through a CKM suppressed $b\to u$ spectator diagram.
The inclusive rates for the hadronic penguin diagrams $b\to sg$ and $b\to sq\bar{q}$ are estimated to be of order 0.1\% in the parton model, but predictions for the hadronization into exclusive final states are uncertain, because the simple assumptions about factorization of the amplitude used for the spectator diagram may not be valid for loop diagrams. Gronau, Rosner and London have observed that precise measurements of the branching fractions for hadronic charmless decay modes will provide sufficient information to determine the CKM complex phase \cite{gronros}. SU(3) symmetry gives the relationship \begin{equation} \sqrt{2} A (B^+\to \pi^0 K^+) + A(B^+\to \pi^+ K^0) = \tilde{r}_u \sqrt{2} A(B^+\to \pi^+ \pi^0) \label{roseqn} \end{equation} where $\tilde{r}_u=(f_K/f_\pi) |V_{us}/V_{ud}|$ accounts for $SU(3)$ breaking. The weak phase $\gamma$ enters only in the charmless decay modes proportional to $|V_{ub}|$, but not in those which are proportional to $|V_{ts}|$. By taking appropriate linear combinations of the rates for the above decay modes and their charge conjugates, it is then possible to solve for $\gamma$. At least of order 100 reconstructed decays in each of the modes would be required to complete the determination. However, it has recently been noted that a possible contribution of electroweak penguins to the amplitudes for these decays may invalidate equation~(\ref{roseqn}) \cite{deshiso}. Other systematic uncertainties due to contributions from loops with $c$ and $u$ quarks may also be problematic. \subsection{Decays to $D_s$ Mesons} These decays have recently been searched for by ARGUS \cite{ARGUSDspi} and CLEO~II \cite{CLEODspi}. The upper limits are given in Table~\ref{TABDspi} along with theoretical predictions by Choudury~{\it et al.} \cite{CISS} and Deandrea~{\it et al.} \cite{DBGN}. \begin{table} [hbt] \caption{Theoretical predictions and experimental upper limits (90\% C.L.) for $B$ decays to $D_s$.
All numbers quoted are branching fractions $\times 10^{5}$} \label{TABDspi} \begin{center} \begin{tabular}{lcccc} $B$ Decay & Choudury & Deandrea & ARGUS & CLEO II \\ \hline $D_s^+\pi^-$ & 1.9 & 8.1 & $<$170.0 & $<$27.0 \\ $D_s^{*+}\pi^-$ & 2.7 & 6.1 & $<$120.0 & $<$44.0 \\ $D_s^+\rho^-$ & 1.0 & 1.2 & $<$220.0 & $<$66.0 \\ $D_s^{*+}\rho^-$ & 5.4 & 4.5 & $<$250.0 & $<$74.0 \\ $D_s^+\pi^0$ & 1.8 & 3.9 & $<$90.0 & $<$20.0 \\ $D_s^{*+}\pi^0$ & 1.3 & 3.0 & $<$90.0 & $<$32.0 \\ $D_s^+\eta$ & & 1.1 & & $<$46.0 \\ $D_s^{*+}\eta$ & & 0.8 & & $<$75.0 \\ $D_s^+\rho^0$ & 0.5 & 0.6 & $<$340.0 & $<$37.0 \\ $D_s^{*+}\rho^0$ & 2.8 & 2.4 & $<$200.0 & $<$48.0 \\ $D_s^+\omega$ & & 0.6 & $<$340.0 & $<$48.0 \\ $D_s^{*+}\omega$ & & 2.4 & $<$190.0 & $<$68.0 \\ $D_s^+ K^-$ & & & $<$170.0 & $<$23.0 \\ $D_s^{*+} K^-$ & & & $<$120.0 & $<$17.0 \\ $D_s^+ K^{*-}$ & & & $<$460.0 & $<$97.0 \\ $D_s^{*+} K^{*-}$ & & & $<$580.0 & $<$110.0 \\ \end{tabular} \end{center} \end{table} The experimental limits are still at least a factor of three above the theoretical predictions. If these limits are compared to the predictions of Deandrea~{\it et al.}, then the best constraint on $|V_{ub}/V_{cb}|$ comes from the CLEO~II limit on $\bar{B^0}\to D_s^+\pi^-$, but this model-dependent limit is still above the range $0.06<|V_{ub}/V_{cb}|<0.10$ allowed by the recent semileptonic data \cite{btoulnu}. By combining several $D_s X_u$ modes, the sensitivity to $V_{ub}$ can be slightly improved. For example, using the BSW model CLEO obtains an upper limit of $|V_{ub}/V_{cb}|< 0.15$ (90\% C.L.) \cite{CLEODspi}. \subsection{Charmless Hadronic $B$ Decay} Predictions of branching ratios for charmless hadronic decays were made by Bauer, Stech and Wirbel \cite{Stech} using the $b\to u$ spectator diagram and the assumption of factorization. The possible contributions from penguin diagrams were neglected.
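Since the $D_s X_u$ rates scale as $|V_{ub}|^2$, an experimental upper limit can be converted into a CKM bound by rescaling a model prediction ${\cal B}_{\rm model}$ computed for a reference value of the CKM ratio; schematically,
\begin{equation}
\left|\frac{V_{ub}}{V_{cb}}\right| < \left|\frac{V_{ub}}{V_{cb}}\right|_{\rm ref} \sqrt{\frac{{\cal B}^{\rm UL}_{\rm exp}}{{\cal B}_{\rm model}}},
\end{equation}
so that limits such as the $|V_{ub}/V_{cb}|<0.15$ value quoted above inherit the full model dependence of ${\cal B}_{\rm model}$.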
These predictions have recently been updated by Deandrea {\it et al.} \cite{DBGN} using new estimates of the hadronic form factors. We compare their results to the experimental upper limits in Table \ref{TABbsw}. Recently, the LEP experiments with silicon vertex detectors have also contributed to the search for these charmless decay modes. These experiments are also able to set limits on rare decays of the $B_s$ meson (see Table~\ref{bsrare}), which are not produced at threshold experiments. \begin{table} [hbt] \caption{Theoretical predictions and experimental upper limits (90\% C.L.) for charmless hadronic $B$ decays. All numbers quoted are branching fractions $\times 10^{5}$.} \label{TABbsw} \begin{center} \begin{tabular}{lccccccc} $B$ Decay &Deandrea & ARGUS & CLEO 1.5 & CLEO II & DELPHI & ALEPH & OPAL \\ \hline $\pi^+\pi^-$ & 1.8 & $<$13.0 & $<$7.7 &$<2.2$ & $<5.5$ & $<7.5$ & $<4.7$ \\ $\pi^{\pm}\rho^{\mp}$ & 5.2 & $<$52.0 & &$<9.5$ & & &\\ $\rho^+\rho^-$& 1.3 & & & & & & \\ $\pi^{\pm}a_1^{\mp}$ & & $<$90.0 & $<$49.0 & & & & \\ $\pi^0\pi^0$ & 0.06 & & & $<1.0$ & & & \\ $\pi^0\rho^0$ & 0.14 & $<$40.0 & & $<2.9$ & & & \\ $\rho^0\rho^0$& 0.05 & $<$28.0 & $<$29.0 & & & & \\ $\pi^-\pi^0$ & 1.4 & $<$24.0 & & $<2.3$ & & & \\ $\pi^-\rho^0$ & 0.7 & $<$15.0 & $<$17.0& $<4.1$ &$<26$& & \\ $\pi^0\rho^-$ & 2.7 & $<$55.0 & & & & & \\ $\rho^-\rho^0$& 0.7 & $<$100.0 & & & & & \\ $\pi^-\pi^+\pi^-$& & & & & $<22$& & \\ $\pi^+\pi^+\pi^-\pi^-$& & & & & $<28$& & \end{tabular} \end{center} \end{table} In addition to the results given in Table~\ref{TABbsw}, L3 has set limits on the two all-neutral final states $B^0\to \eta\pi^0$ and $B\to \eta\eta$ of $84\times 10^{-5}$ and $210\times 10^{-5}$, respectively. There are two recent sets of theoretical predictions by Deshpande {\it et al.} \cite{Desh} and Chau {\it et al.} \cite{Chau} that take into account both penguin and spectator contributions and make predictions for a large number of charmless hadronic $B$ decays.
A selection of these predictions is shown in Table~\ref{TABbsg}. Large contributions from the penguin amplitude are expected in decays such as $B\to K^{(*)}\phi$ and $B\to K^{(*)}\pi$. However, the decays $B\to K\rho$ are predicted to have very small penguin amplitudes due to cancellations in the contributions to the amplitude \cite{Desh}. \begin{table} [hbt] \caption{Theoretical predictions and experimental upper limits (90\% C.L.) for $b\to s$ decays. All numbers quoted are branching fractions $\times 10^{5}$} \label{TABbsg} \begin{center} \begin{tabular}{lcccccccc} $B$ Decay & Deshpande & Chau & ARGUS & CLEO 1.5 & CLEO II & DELPHI & ALEPH & OPAL \\ \hline $K^-\pi^+$ & 1.1 & 1.7 & $<$18.0 & $<$7.7& $<1.9$ & $<9$ &$<7.5$ & $<8.1$\\ $K^-\rho^+$ & 0 & 0.2 & & & $<4.3$ & & & \\ $K^- {a_1}^+$ & & & & & & $<39$& &\\ $K^- \pi^+ \pi^-$ & & & & & & $<40$& &\\ $K^- \pi^- \pi^+ \pi^+$ & & & & & & $<21$& &\\ $K^0\pi^0$ & 0.5 & 0.6 & & & $<6.3$ & & &\\ $K^0\rho^0$ & 0.01& 0.04 & $<$16.0 & $<$50.0 & & & &\\ $K^{*-}\pi^+$& 0.6 & 1.9 & $<$62.0 & $<$38.0 & $<23.8$& & &\\ $K^{*0}\pi^0$& 0.3 & 0.5 & & & $<3.5$ & & &\\ $K^-\pi^0$ & 0.6 & 0.8 & & & $<3.2$ & & &\\ $K^-\rho^0$ & 0.01& 0.06 & $<$18.0 & $<$8.0 & $<2.6$ & $<19$& &\\ $K^0\pi^-$ & 1.1 & 1.2 & $<$9.6 & $<$10.0 & $<6.8$ & & &\\ $K^0\rho^-$ & 0 & 0.03 & & & & & &\\ $K^{*0}\pi^-$& 0.6 & 0.9 & $<$17.0 & $<$15.0 & $<6.0$ & & &\\ $K^{*-}\pi^0$& 0.3 & 0.9 & & & & & &\\ $K^0\phi$ & 1.1 & 0.9 & $<$36.0 & $<$42.0 & $<10.7$& & &\\ $K^{*0}\phi$ & 3.1 & 0.9 & $<$32.0 & $<$38.0 & $<3.9$ & & &\\ $K^-\phi$ & 1.1 & 1.4 & $<$18.0 & $<$9.0 & $<1.4$ & $<44$& &\\ $K^{*-}\phi$ & 3.1 & 0.8 & $<$130.0& & $<9.0$ & & & \\ $\phi \phi $& & & & & $<4.8$ & & &\\ $K^- K^+ K^-$& & & & & & $<31$& &\\ \end{tabular} \end{center} \end{table} New upper limits have been presented for $\bar{B}^0\to\pi^+ \pi^-$ \cite{PRLkpi} and $\bar{B^0}\to\pi^{\pm}\rho^{\mp}$ \cite{CLEOglasrare}.
The CLEO~II search for $\bar{B}^0\to\pi^+ \pi^-$ is discussed in detail in the next section. CLEO~II also has a new limit on $\bar{B^0}\to K^-\pi^+$ \cite{PRLkpi}, and preliminary results on $\bar{B^0}\to K^-\rho^+$ \cite{CLEOglasrare} as well as the $B\to K^{(*)}\phi$ modes \cite{CLEOglasrare}. The CLEO~II limits on $\bar{B^0}\to K^-\pi^+$ and $B^-\to K^-\phi$, which are expected to have a large penguin amplitude, are close to the theoretical predictions. \begin{table}[htb] \begin{center} \caption{Upper limits on branching fractions for rare $B_s$ decay modes in units of $10^{-5}$.} \label{bsrare} \vspace{0.5cm} \begin {tabular}{l c c c} $\bar{B}_s$ Mode & ALEPH & OPAL & DELPHI\\ \hline $\bar{B}_s\to \pi^+\pi^-$ & $<25$ & & \\ $\bar{B}_s\to K^+\pi^-$ & $<25$ & $<26$ & $<9$ \\ $\bar{B}_s\to K^+K^-$ & $<11$ & $<14$ & $<12$ \\ $\bar{B}_s\to p \bar{p}$ & $<11$ & & \\ \end{tabular} \end{center} \end{table} The experimental sensitivities to branching ratios have now reached the $10^{-5}$ range. Since the theoretical predictions for several $B$ decay modes are in this range, it is possible that some signals will be observed soon. By measuring a sufficient number of charmless $B$ decay modes (e.g. $\bar{B}^0 \to \pi^- \pi^+$, $B^- \to \pi^- \pi^0$, $\bar{B}^0 \to \pi^0 \pi^0$) it may be possible to isolate the spectator and penguin contributions. \subsection{New Experimental Results on $\bar{B^0}\to \pi^+\pi^-$ and $\bar{B^0}\to K^- \pi^+$} \label{newpipi} The decay modes $\bar{B^0}\to\pi^+\pi^-$, $\bar{B^0}\to K^-\pi^+$, and $\bar{B^0}\to K^+ K^-$ \cite{bkk}, have been searched for by CLEO~II using a data sample of 2.0~fb$^{-1}$ taken on the $\Upsilon$(4S) \cite{PRLkpi}. A sample of 0.9~fb$^{-1}$ taken just below the resonance is used to study the continuum background. Since $B$ mesons are produced nearly at rest at the $\Upsilon$(4S), the final state has two nearly back-to-back tracks with momenta about 2.6~GeV/c.
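The quoted track momentum follows from two-body kinematics: with the $B$ meson essentially at rest, each pion carries half the $B$ mass in energy, so
\begin{equation}
p_\pi \simeq \sqrt{(m_B/2)^2-m_\pi^2} \approx \sqrt{(2.64)^2-(0.14)^2}~{\rm GeV}/c \approx 2.6~{\rm GeV}/c.
\end{equation}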
Candidates for $B$ meson decays are distinguished from continuum background using the difference, $\Delta E$, between the total energy of the two tracks and the beam energy, and the beam-constrained mass, $M_B$. The r.m.s. resolutions on $\Delta E$ and $M_B$ are 25~MeV and 2.5~MeV respectively. Separation between $\pi^-\pi^+$, $K^-\pi^+$ and $K^-K^+$ events is provided by the $\Delta E$ variable, and by $dE/dx$ information from the 51-layer main drift chamber. The $\Delta E$ shift between $K\pi$ and $\pi\pi$ events is 42~MeV if the track energies $E_1$ and $E_2$ are both computed with the pion mass hypothesis. This is 1.7$\sigma_{\Delta E}$. The $dE/dx$ separation between kaons and pions at 2.6~GeV/c is found to be $(1.8\pm 0.1)\sigma$ from a study of a sample of $D^{*+}$-tagged $D^0\to K^-\pi^+$ decays. Thus, in the CLEO II experiment the total separation between $K\pi$ and $\pi\pi$ events is $2.5\sigma$. The background arises entirely from the continuum, where the two-jet structure of the events can produce high momentum, back-to-back tracks. These events can be discriminated against by calculating the angle, $\theta_T$, between the thrust axis of the candidate tracks, and the thrust axis of the rest of the event. The distribution of $\cos\theta_T$ is peaked at $\pm$1 for continuum events, and is nearly flat for $B\bar{B}$ events. A cut is made at $|\cos\theta_T|<0.7$. Additional discrimination is provided by a Fisher discriminant \cite{CLNSKpi},\cite{Fisher}, ${\cal F} = \sum_{i=1}^{n}\alpha_i y_i$. The inputs $y_i$ are the direction of the candidate thrust axis, the $B$ meson flight direction, and nine variables measuring the energy flow of the rest of the event. The coefficients $\alpha_i$ are chosen to maximize the separation between $B\bar{B}$ signal events and continuum background events. The optimal cut on the Fisher discriminant is 84\% efficient for signal and 40\% efficient for background. Two approaches are used to evaluate the amount of signal in the data sample.
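The $2.5\sigma$ total $K\pi$--$\pi\pi$ separation quoted above is simply the quadrature sum of the two independent separations,
\begin{equation}
\sqrt{(1.7)^2+(1.8)^2}\,\sigma \approx 2.5\,\sigma.
\end{equation}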
In the first approach a cut is made on ${\cal {F}}$ and events are classified as $\pi\pi$, $K\pi$ or $KK$ according to the most probable hypothesis from the $dE/dx$ information. The signal and background numbers are given in Table~\ref{rareKpi}. The efficiency for the correct identification of a signal event in this analysis is 19\%. The background is estimated using sidebands in the continuum and on-resonance data and scaling factors from Monte Carlo studies. There is no $B\bar{B}$ background in the signal region. \begin{table}[htb] \begin{center} \caption{Updated results for the branching fractions of $B^0\rightarrow K^+\pi^-$, $B^0\rightarrow \pi^+\pi^-$, and $B^0\rightarrow K^+K^-$. Upper limits are at the 90\% confidence level.} \label{rareKpi} \vspace{0.5cm} \begin {tabular}{l c c c} Mode & Event Yield & ${\cal{B}}~(10^{-5})$ &Theoretical Predictions $(10^{-5})$ \\ \hline $\pi^+\pi^-$ & $8.5^{+4.9}_{-4.0}$ & $<2.2$ & 1.0-2.6 \\ $K^+\pi^-$ & $7.1^{+4.2}_{-3.4}$ & $<1.9$ & 1.0-2.0 \\ $K^+ K^-$ & $0.0^{+1.6}_{-0.0}$ & $<0.7$ & $-$ \\ & & & \\ $\pi^+\pi^-$ + $K^+\pi^-$ & $15.7^{+5.3}_{-4.5}$ & $1.8^{+0.6}_{-0.5}\pm0.2$ & \\ \end{tabular} \end{center} \end{table} \begin{figure}[hbt] \vspace{-1.0cm} \centerline{\psfig{figure=contour.ps,height=5in,bbllx=0bp,bblly=0bp,bburx=600bp,bbury=700bp,clip=}} \vspace{-3.5cm} \vskip 4mm \caption{Likelihood contours in the CLEO~II analysis for the fit to $N_{\pi\pi}$ and $N_{K\pi}$. The best fit is indicated by the cross, the 1, 2, 3, and 4$\sigma$ contours by solid lines, and the $1.28\sigma$ contour by the dotted line.} \label{contour} \end{figure} To increase the efficiency of the search and to exploit the information contained in the distributions of the $\Delta E$, $M_B$, $\cal{F}$ and $dE/dx$ variables a second analysis is performed. The cuts described above are removed, and an unbinned maximum-likelihood fit is made.
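The mechanics of such an extended unbinned fit can be illustrated with a toy model in one variable; the Gaussian and flat shapes, the yields, and the use of SciPy below are illustrative assumptions only, not the CLEO~II implementation (which fits four variables with Monte Carlo derived probability density functions).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)

# Toy Delta-E spectrum (GeV): pi+pi- peaks at zero, K-pi+ is shifted by
# -42 MeV when both tracks are assigned the pion mass, and both have the
# 25 MeV resolution quoted in the text; the continuum background is taken
# flat on [-0.2, 0.2].  All yields here are invented for illustration.
true_pipi, true_kpi, true_bkg = 20, 15, 200
data = np.concatenate([
    rng.normal(0.000, 0.025, true_pipi),
    rng.normal(-0.042, 0.025, true_kpi),
    rng.uniform(-0.20, 0.20, true_bkg),
])

def density(x, n_pipi, n_kpi, n_bkg):
    """Sum of yield-weighted, unit-normalized component PDFs."""
    return (n_pipi * norm.pdf(x, 0.000, 0.025)
            + n_kpi * norm.pdf(x, -0.042, 0.025)
            + n_bkg / 0.40)                    # uniform on [-0.2, 0.2]

def nll(params):
    """Extended unbinned negative log-likelihood in the three yields."""
    if min(params) < 0:
        return np.inf
    return params.sum() - np.log(density(data, *params)).sum()

fit = minimize(nll, x0=np.array([10.0, 10.0, 150.0]),
               method="Nelder-Mead")
n_pipi_hat, n_kpi_hat, n_bkg_hat = fit.x
```

The extended term $(n_{\pi\pi}+n_{K\pi}+n_{\rm bkg})$ in the likelihood lets the fit return absolute yields rather than fractions, so at the maximum the fitted yields sum to the observed number of events.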
In this fit the signal and background distributions are defined by probability density functions derived from Monte Carlo studies. The fit determines the relative contributions of $\pi^-\pi^+$, $K^-\pi^+$ and $K^-K^+$ to the signal and background. The best fit values for the signal yields $N_{\pi\pi}$, $N_{K\pi}$ and $N_{KK}$, are given in Table~\ref{rareKpi}. Fig.~\ref{contour} shows the $n\sigma$ contours in the plane $N_{\pi\pi}\; vs.\; N_{K\pi}$, and Fig.~\ref{Kpiproj} shows the projections of the likelihood fit onto the $M_B$ and $\Delta E$ axes compared to the events observed. The efficiency for a signal event to be included in the likelihood analysis is 38\%. \begin{figure}[hbt] \vspace{-1.5cm} \centerline{\psfig{figure=mde.ps,height=5in,bbllx=0bp,bblly=0bp,bburx=600bp,bbury=700bp,clip=}} \vspace{-2.6cm} \vskip 4mm \caption{CLEO~II results on $B\to \pi^+ \pi^-$ and $K^+ \pi^-$. Comparison of on-resonance data (histogram) with projections of the likelihood fit (solid curve). (a) Projection onto $M_B$ after cuts on $\Delta E$ and $\cal{F}$ (b) Projection onto $\Delta E$ after cuts on $M_B$ and $\cal{F}$. The shaded portions of the histogram are $\pi\pi$ events, the unshaded are $K\pi$ events. The dotted and dot-dashed lines in (b) indicate the fit projections for $K\pi$ and $\pi\pi$ separately.} \label{Kpiproj} \end{figure} The best fit value shown in Fig. \ref{contour} is more than 4$\sigma$ away from the point $N_{\pi\pi} = N_{K\pi} = 0$. After including the effect of systematic errors on the sum of $N_{\pi\pi}$ and $N_{K\pi}$ \cite{CLNSKpi}, it has been concluded that the significance of the sum is sufficient to claim the observation of a signal for charmless hadronic $B$ decays. It should be emphasized that the present data do not have sufficient statistical precision to allow any conclusion to be reached about the relative importance of the two decays. 
While the CLEO~II experiment does not measure signals for the individual decays $\bar{B^0}\to\pi^+\pi^-$ and $\bar{B^0}\to K^-\pi^+$, it does set stringent upper limits (Table \ref{rareKpi}). Studies have been made of the amount of additional data that might be required to measure signals in the individual modes, and it is estimated that a sample of about 4 fb$^{-1}$ may be sufficient, assuming that the best fit continues to give the same yields for $N_{\pi\pi}$ and $N_{K\pi}$. Note that the separation between $\pi\pi$ and $K\pi$ provided by $\Delta E$ and $dE/dx$ is adequate for this analysis, as can be seen from the nearly circular form of the contours in Fig.~\ref{contour}. \subsection{Inclusive/Semi-Inclusive $b\to s g$ transitions} It is a major experimental challenge to measure the rate for the inclusive process $b \to s$ gluon, where the virtual gluon hadronizes into a $q \bar{q}$ pair. At least two methods have been proposed to determine the rate for such inclusive transitions. Since the coupling of gluons to quark-antiquark pairs is flavor independent, it is expected that, except for modifications due to phase space, $b\to s \bar{s} s$ will be comparable to $b\to s \bar{u} u$ and $b\to s \bar{d} d$. Experimentally, one searches for inclusive $B\to \phi$ transitions with the $\phi$ momentum in the range beyond the kinematic limit for $b\to c$ transitions, or $B\to \phi X_s$ where the $X_s$ system contains a kaon and additional pions. For example, Deshpande and He find ${\cal B}(B\to X_s\phi) = (0.6 -2)\times 10^{-4}$ \cite{Deshbsg},\cite{ciuchini}. Alternatively, one may attempt to reconstruct exclusive final states consisting of a $K^-$ and additional pions. Several authors have proposed that $b\to s g$ transitions are enhanced in order to explain the anomalously low value of the $B$ semileptonic branching fraction. In addition, it is conceivable that new physics could modify $b\to s g$ without significantly modifying the rates for $b\to s \gamma$ or $b\to s l^+ l^-$.
\section{ELECTROMAGNETIC PENGUIN DECAYS} \subsection{Observation of $B\to K^* (892)\gamma$} The first observation of the electromagnetic decay $B\to K^*\gamma$ has been reported by CLEO~II \cite{PRLbsg}. A data sample of 1.38 fb$^{-1}$ taken on the $\Upsilon$(4S) resonance was searched for both $\bar{B}^0\to \bar{K}^{*0}\gamma$ and $B^-\to K^{*-}\gamma$, where the $\bar{K}^{*0}$ was detected in its $K^-\pi^+$ decay mode, and the $K^{*-}$ in both the $K^-\pi^0$ and $K_s\pi^-$ decay modes. If a $K^*$ candidate is within 75~MeV of the known $K^*$ mass, then it is combined with an isolated photon with an energy between 2.1 and 2.9~GeV. The photon candidate must not be matched to a charged track, and must have a shower shape consistent with an isolated photon. If the photon candidate forms a $\pi^0$($\eta$) meson when combined with any other photon with energy greater than 30(200)~MeV it is rejected. Candidates for $B$ meson decays are identified using the variables $\Delta E = E_{K^*} + E_{\gamma} - E_{beam}$ and $M_B$. The r.m.s. resolutions on $\Delta E$ and $M_B$ are 40~MeV and 2.8~MeV respectively. \begin{table} [hbt] \caption{Summary of results for $B\to K^*\gamma$} \label{TABksg} \begin{center} \begin{tabular}{lccc} &$\bar{B}^0\to K^{*0}\gamma$& \multicolumn{2}{c}{$B^-\to K^{*-}\gamma$}\\ &$\bar{K}^{*0}\to K^-\pi^+$&$K^{*-}\to K_s\pi^-$&$K^{*-}\to K^-\pi^0$\\ \hline Signal Events & 8 & 2 & 3 \\ Continuum Background & 1.1$\pm$0.2 & 0.05$\pm$0.03 & 0.8$\pm$0.3 \\ $B\bar{B}$ Background& 0.30$\pm$0.15 & 0.01$\pm$0.01 & 0.10$\pm$0.05 \\ Detection Efficiency & (11.9$\pm$1.8)\% & (2.0$\pm$0.3)\% & (3.1$\pm$0.5)\%\\ Branching Ratio & (4.0$\pm$1.7$\pm$0.8)$\times 10^{-5}$& \multicolumn{2}{c}{(5.7$\pm$3.1$\pm$1.1)$\times 10^{-5}$}\\ \end{tabular} \end{center} \end{table} There are two main sources of background from the continuum, $q\bar{q}$ jets and initial state radiation (ISR).
These backgrounds are suppressed by applying cuts on the shape variables $R_2<0.5$, $|\cos\theta_T|<0.7$, and $0.25<s_{\perp}<0.60$. The upper restriction on $s_{\perp}$ is useful for rejecting ISR background. By transforming the event into the frame where the photon is at rest, and defining new shape variables $R_2^{'}$ and $\cos\theta_T^{'}$ in this frame, the ISR background can be further suppressed. There is a small amount of background to $B\to K^*\gamma$ from other $B\bar{B}$ events. The size of this background was determined from a high statistics Monte Carlo study. This study includes a feeddown from other $b\to s\gamma$ decays, which was estimated using the theoretical models for $b\to s\gamma$ discussed in the next section. The remaining background is mainly due to continuum $e^+e^-$ annihilation. This contribution has been determined using $\Delta E, \; M_B$ sidebands in both the $\Upsilon(4S)$ and continuum data and scaling factors determined from Monte Carlo studies. Supporting evidence that the events in the signal region are due to the decay $B\to K^*\gamma$ comes from a likelihood analysis similar to the one described in section \ref{newpipi}. In this analysis the distributions of the events in the variables $M_B$, $\Delta E$, $M_{K^*}$, $\cos\Theta_{K^*}$ (the $K^*$ helicity angle), $\cos\theta_B$, $R_2$, $R_2'$, $s_{\perp}$, and $\cos\theta_T$ are compared to the distributions expected from Monte Carlo samples of signal and continuum background events \cite{CLNSksg}. This analysis gives results completely consistent with the signal and background yields given in Table~\ref{TABksg}. 
\begin{figure}[hbt] \vspace{-1.5cm} \centerline{\psfig{figure=ksg.ps,height=4.5in,bbllx=0bp,bblly=0bp,bburx=600bp,bbury=700bp,clip=}} \vspace{-2.5cm} \vskip 4mm \caption{The beam-constrained mass distribution from CLEO~II for $B\to K^*\gamma$ candidates: $K^-\pi^+\gamma$ solid, $K^-\pi^0\gamma$ shaded, $K_s\pi^-\gamma$ unshaded.} \label{FIGksg} \end{figure} The eight $\bar{K}^{*0}\gamma$ and five $K^{*-}\gamma$ events in the signal region, $|\Delta E|<90$~MeV and $5.274<M_B<5.286$~GeV, are a clear signal for the decay $B\to K^*\gamma$ (Fig.~\ref{FIGksg}). The yields in the observed modes are consistent. Assuming equal branching fractions for $\bar{B}^0\to \bar{K}^{*0}\gamma$ and $B^-\to K^{*-}\gamma$, the average branching ratio is $(4.5\pm 1.5\pm 0.9)\times 10^{-5}$. This is in agreement with theoretical predictions from the electromagnetic penguin diagram \cite{Ali}. \subsection{Search for exclusive $b\to d \gamma$ transitions} CLEO has also searched for exclusive $b\to d \gamma$ decay modes including $\bar{B}^0\to \rho^0 \gamma$, $B^- \to \rho^- \gamma$ and $B\to \omega \gamma$ \cite{cleobrhog}. In these modes, the largest background arises from continuum production, with significant contributions from the $b\to s \gamma$ process $B\to K^* \gamma$, $K^*\to K^- \pi^+$ with the charged kaon misidentified as a pion. Some discrimination between $B\to K^{*0} \gamma$ and $B\to \rho^0 \gamma$ is provided by $\Delta E$, the vector decay angle, and by the constraint that the $\pi\pi$ mass lie in the $\rho$ mass region. This information is used by a neural network to reduce the background from $B\to K^* \gamma$ by a factor of 20 while retaining 50\% of the $B\to \rho \gamma$ signal. No signals are observed and upper limits at the 90\% C.L. of ${\cal B}(B^-\to \rho^- \gamma) < 2.0\times 10^{-5}$, ${\cal B}(\bar{B}^0\to \rho^0 \gamma) < 2.4\times 10^{-5}$, and ${\cal B}(\bar{B}^0\to \omega \gamma) < 1.1\times 10^{-5}$ are obtained.
\subsection{Experimental Constraints on the $ b \to s \gamma$ Inclusive Rate} At present, due to the uncertainties in the hadronization, only the inclusive $b \to s \gamma$ rate can be reliably compared with theoretical calculations. This rate can be measured from the endpoint of the inclusive photon spectrum in $B$ decay. The signal for $b\to s\gamma$ is expected to peak in the region $2.2<E_{\gamma}<2.7$~GeV, with only about 15\% of the rate expected to lie outside this range \cite{Ali}. \begin{figure}[hbt] \vspace{+0.5cm} \centerline{\psfig{figure=brec_bsg.ps,height=4.5in,bbllx=0bp,bblly=0bp,bburx=600bp,bbury=700bp,clip=}} \vspace{-4.5cm} \vskip 4mm \caption{(a) The on-resonance data is shown as the solid histogram, the scaled off resonance data is the dashed histogram, and the sum of off-resonance data and background from $b\to c$ and $b\to u$ decays are the squares with error bars. (b) The photon energy distribution for $b\to s \gamma$ from CLEO~II for the B reconstruction analysis after subtraction of all backgrounds.} \label{FIGbsg1} \end{figure} \begin{figure}[hbt] \vspace{+0.5cm} \centerline{\psfig{figure=net_bsg.ps,height=4.5in,bbllx=0bp,bblly=0bp,bburx=600bp,bbury=700bp,clip=}} \vspace{-4.5cm} \vskip 4mm \caption{ (a) The on-resonance data is shown as the solid histogram, the scaled off resonance data is the dashed histogram, and the sum of off-resonance data and background from $b\to c$ and $b\to u$ decays are the squares with error bars. (b) The photon energy distribution for $b\to s \gamma$ from CLEO~II for the event shape analysis after subtraction of all backgrounds. The points with error bars are the background subtracted data while the solid curve is the Monte Carlo prediction for the shape of the $b\to s \gamma$ signal.} \label{FIGbsg2} \end{figure} Two experimental methods are employed to suppress the large background from non-resonant $e^+ e^-\to q \bar{q}$ and initial state radiation i.e. $e^+ e^- \to q \bar{q} \gamma$. 
One method uses partial reconstruction of the decay products of the kaonic resonance which recoils against the energetic photon. This is referred to as the B reconstruction analysis. A kaon candidate (either charged or neutral), up to four charged tracks, and two or fewer $\pi^0$s are combined with the photon candidate and are required to be consistent with the $B$ mass. The resulting photon spectrum after event shape cuts is examined. This technique does not require that the final state kaonic resonance be correctly reconstructed and is primarily designed to suppress the continuum background. A complementary method using a neural network is also used to distinguish $b\to s\gamma$ signal and background. This is referred to as the event shape analysis. The network uses the event shape variables $R_2$, $S_{\perp}$, $R_{2}^{'}$, $\cos\theta_{T}^{'}$ as well as the energy deposited in cones within 20$^{\circ}$ and 30$^{\circ}$ of the photon direction and in similar cones in the direction opposite to the photon. The output of the neural network is a value between $-1$ and $1$ which measures the degree to which an event resembles signal. The network is trained using a large sample of continuum Monte Carlo events. There is good agreement between the network output from Monte Carlo simulations and various data samples (e.g. continuum data, $B\to X\mu \nu$ candidates). The experimental photon energy spectra from the two methods are shown in Figs.~\ref{FIGbsg1},~\ref{FIGbsg2}. In the CLEO~II data there is an excess of events in the region $2.2<E_{\gamma}<2.7$~GeV in the B reconstruction analysis, and an excess of $263\pm 104$ events from $B$ decays in this region for the event shape analysis \cite{jaethesis}. The detection efficiencies are 9\% and 32\% respectively. However, the signal to noise ratio in the B reconstruction analysis is a factor of 4 higher, so the sensitivities are comparable.
These correspond to branching ratios of $(1.88\pm 0.74) \times 10^{-4}$ for the B reconstruction analysis and $(2.75\pm 0.67) \times 10^{-4}$ for the event shape analysis. The errors quoted are statistical only. The two results are consistent at the 1.1 standard deviation level. The model dependence introduced when extrapolating from the partial branching fraction in the signal window to the branching fraction for the entire range of photon energies is evaluated using a parton model calculation. The largest uncertainty arises from the error in $m_{b}$, the $b$ quark mass. This parameter is allowed to vary in the range $m_{b}= 4.87 \pm 0.10$~GeV. The resulting (10\%) change in the branching ratio is incorporated in the systematic error. The results of the two analyses can be combined, allowing for statistical correlations and separating the correlated and independent components of the systematic error, to give $${\cal B}(b\to s \gamma) = (2.32 \pm 0.57 \pm 0.35) \times 10^{-4}$$ where the first error is statistical and the second is systematic. For the purposes of constraining extensions of the Standard Model, it is useful to derive upper and lower limits at the 95\% confidence level from this measurement. This gives: $$ 1.0 \times 10^{-4} < {\cal B} (b\to s \gamma) < 4.2 \times 10^{-4} ~({\rm at~the~95\% ~c.l.})$$ An alternative approach to measuring the inclusive rate is to use the observed exclusive rate for $B\to K^*\gamma$. However, the fraction of the inclusive rate that hadronizes to a particular exclusive final state is not very well understood. Ali {\it et al.} \cite{Ali} predict the mass distribution of the $X_s$ system using an estimate of the Fermi momentum of the spectator quark ($p_{F}=300$ MeV). By integrating this spectrum up to 1~GeV and assuming this region is dominated by the $K^*$ resonance, the fraction of $K^*(892)\gamma$ is estimated to be (13$\pm$3)\%.
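As a cross-check, the measured exclusive rate can be scaled by this fraction to estimate the inclusive rate:
\begin{equation}
{\cal B}(b\to s\gamma) \simeq \frac{{\cal B}(B\to K^*\gamma)}{f_{K^*}} \approx \frac{4.5\times 10^{-5}}{0.13} \approx 3.5\times 10^{-4},
\end{equation}
which is consistent with the direct inclusive measurement given above, within the large uncertainty on the hadronization fraction $f_{K^*}$.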
Other authors have made predictions between 5\% and 40\% for the fraction of $K^*(892)\gamma$ \cite{ksg}. A reasonable estimate that covers most of the theoretical predictions is (13$\pm$6)\%. Examination of the observed mass of the $X_s$ system (the particles which accompany the high energy photon in $b\to s \gamma$) indicates that there are states other than $B\to K^* \gamma$ which contribute. Note that the apparent $X_s$ mass spectrum shown in Fig.~\ref{FIGbsg3} is not corrected for efficiency, which decreases rapidly as a function of mass, nor for misreconstruction of high multiplicity channels. \begin{figure}[hbt] \vspace{+0.5cm} \centerline{\psfig{figure=mxs_bsg.ps,height=4.5in,bbllx=0bp,bblly=0bp,bburx=600bp,bbury=700bp,clip=}} \vspace{-4.5cm} \vskip 4mm \caption{Apparent $X_s$ invariant mass spectrum for $b\to s \gamma$ candidates after background subtraction.} \label{FIGbsg3} \end{figure} Searches have also been made for $b\to s\gamma$ processes at LEP. The L3 experiment has set an upper limit of $1.2\times 10^{-3}$ (90\% C.L.) on the inclusive $b\to s\gamma$ rate \cite{L3shit}. The exclusive decays $\bar{B}^0\to \bar{K}^{*0}\gamma$ and $B_s\to \phi\gamma$ have been searched for by the DELPHI experiment using the particle identification capabilities of the RICH detector. Upper limits of $3.6\times 10^{-4}$ and $19.0\times 10^{-4}$ are obtained for these two decays \cite{Battaglia}. ALEPH has searched for $B_s\to \phi\gamma$ and $\Lambda_b\to \Lambda\gamma$ and obtains limits of $29\times 10^{-5}$ and $56\times 10^{-5}$ respectively. \subsection{Theoretical Implications of $b \to s \gamma$} There has been recent interest in $b\to s\gamma$ as a probe of physics beyond the standard model \cite{Joanne}, \cite{Hewett,SUSY}. There are possible additional contributions to the loop from a charged Higgs boson and from supersymmetric particles.
Hewett \cite{Hewett} has considered two Higgs doublet models and shown that contributions comparable to the standard model are expected for a charged Higgs mass of order 100~GeV. In supersymmetric models there are also contributions from loops containing charginos, neutralinos and squarks that tend to cancel the charged Higgs and standard model contributions (in unbroken supersymmetry all contributions to the loop diagram cancel exactly) \cite{cancel}. Several recent papers \cite{SUSY} investigate the parameter space allowed by $b\to s\gamma$ for particular models of the breaking of the supersymmetry. For most of the parameter space the charged Higgs contribution is the dominant one, and the present CLEO~II upper limit on $b\to s\gamma$ constrains the charged Higgs mass to be greater than 244~GeV. This is more restrictive than constraints from direct searches at existing high energy colliders. The limit on the charged Higgs mass can be avoided in some supersymmetric models if the stop mass is small, since this leads to a large negative contribution from the chargino-stop loop. In this case the rate for $b\to s\gamma$ could even become smaller than the standard model prediction. Other constraints on new physics have been derived from the bounds on $b \to s \gamma$. If there are anomalous $WW\gamma$ couplings, these can significantly modify the rate for $b\to s\gamma$. The CLEO measurements exclude certain regions of the parameter space of anomalous dipole and quadrupole couplings of the $W$ boson that cannot be explored by direct studies of $W\gamma$ production at hadron colliders \cite{Chiawwg}. It has also been suggested that these results constrain most supersymmetric dark matter candidates to such an extent that they will not produce significant counting rates in dedicated dark matter WIMP searches planned in the near future \cite{wimps}.
\subsection{$b\to s \ell ^+\ell ^-$ Decays} The $b\to s\gamma$ diagram can be modified by replacing the real photon by a virtual photon, or by a virtual $Z^0$ or other neutral boson, that produces a lepton pair (see Fig.~\ref{kstgll}). This penguin diagram leads to both $B\to K \ell ^+ \ell ^-$ and $B\to K^* \ell ^+\ell ^-$ decays, since the $B\to K$ transition is no longer forbidden by angular momentum conservation as it was for $b\to s\gamma$. Although the penguin amplitude for $b\to s \ell ^+\ell ^-$ is smaller than that for $b\to s\gamma$, the final states can be identified easily, and are particularly favorable for study at hadron colliders. As in the radiative penguin decay discussed previously, the process $b\to s \ell ^+ \ell ^-$ is sensitive to high mass physics including charged Higgs bosons and non-standard neutral particles. These modes also do not have significant QCD corrections, which may be a useful feature when constraining new physics. Ali, Mannel, and Giudice have noted that the constraints imposed by the combination of measurements of $b\to s\gamma$ and $b\to s\ell^+\ell^-$ can severely constrain new physics including SUSY models \cite{Alickm}. \begin{figure}[htb] \begin{center} \unitlength 1.0in \begin{picture}(3.,0.8)(0,0) \put(-1.1,-3.2){\psfig{bbllx=0pt,bblly=0pt,bburx=567pt,bbury=567pt,width=4.0in,height=3.5in,file=kstgll.ps}} \end{picture} \vskip 15 mm \caption{Diagrams for the decays $B\to K^{(*)} \ell ^+ \ell ^-$.} \label{kstgll} \end{center} \end{figure} The penguin amplitude has been calculated by a number of authors \cite{PengKll}, with results for the inclusive $b\to s e^+ e^-$ rate of $ (1-2)\times 10^{-5}$ and for the $b\to s \mu^+ \mu^-$ rate of $ (4-8) \times 10^{-6}$. The exclusive channels $K^* \ell ^+\ell ^-$ and $K \ell ^+\ell ^-$ are expected to comprise $5-30\%$ of the inclusive rate.
However, the theoretical description of $b\to s\ell ^+\ell ^-$ is more complicated than $b\to s\gamma$, since the final states $K^{(*)}\ell ^+\ell ^-$ can also be produced via ``long distance'' contributions from the hadronic decay $B\to K^{(*)}\psi$ followed by $\psi\to \ell ^+\ell ^-$ where $\psi$ stands for a real or virtual charmonium state \cite{LongD}. Ali, Mannel and Morozumi \cite{AliKll} have performed an analysis of $b\to s \ell^+\ell^-$ including both the penguin and the long distance contributions. Their predictions for the inclusive $b\to s \ell^+ \ell^-$ rate are in the range $(2-6)\times 10^{-6}$ excluding the regions close to the $\psi$ and $\psi'$ mass where the long distance contributions dominate. There is interference between the penguin and long distance amplitudes over a wide range of dilepton masses. Ali {\it et al.}\ point out that the sign of the interference is controversial, and that information about the interference can be obtained both from the dilepton mass distribution, and from the forward-backward asymmetry of the lepton pair. The $B\to K^* e^+ e^-$ mode has significant contributions from both virtual $Z^0$s and virtual photons. At low $m^2$, the virtual photon contribution is dominant and has a pole. This must be properly taken into account when computing experimental efficiency, since there is usually a cut on $m_{e^+ e^-}^2$ to eliminate conversions\cite{CakirKll}. Experimental searches have been made by CLEO~1.5, CLEO~II and ARGUS at the $\Upsilon (4S)$, and by UA1 and CDF in $p\bar{p}$ collisions. The CLEO and ARGUS analyses \cite{CLEOKll,CLEOIIKll,ARGUSKll} make a simple veto on dilepton masses consistent with a real $\psi$ or $\psi'$, and see almost no background in their beam-constrained mass plots. CDF has searched for $B^+\to K^{+} \mu^+\mu^-$ and $B^0\to K^{*0} \mu^+ \mu^-$ using the mode $B\to \psi K^+$ for normalization\cite{CDFKll}. 
The CDF analysis requires that the dilepton mass lie in the range $3.3-3.6$ GeV or in the range $3.8-4.5$ GeV. This avoids contamination from modes with $\psi$ and $\psi'$ mesons and reduces the combinatorial background. The UA1 analysis\cite{UA1Kll} selects the range $3.9 < M(\ell ^+\ell ^-) < 4.4$~GeV, which is believed to have small long distance contributions and no radiative tail from the $\psi$. UA1 performs both an exclusive search for $\bar{B}^0\to \bar{K}^{*0}\mu^+\mu^-$ and an inclusive search for $B\to X_s\mu^+\mu^-$. The upper limits derived from the hadron collider searches therefore use only a small fraction of the allowed dilepton mass range; for example, the CDF search covers about 25\%. The limits on the partial branching fraction are extrapolated to the full dilepton mass range using a theoretical model. The upper limits from all the experimental measurements are summarized in Table~\ref{Kll}. These upper limits are all well above the theoretical predictions. The CLEO~II limit on $\bar{B^0}\to K^{*0} e^+ e^-$ is within a factor of 3 of the branching ratio predicted by the Standard Model. These limits suggest that $b\to s \ell ^+\ell ^-$ decays will eventually be observed either at hadron colliders or by $\Upsilon (4S)$ experiments. \begin{table} [hbt] \caption{Experimental upper limits (90\% C.L.) for $b\to s\ell ^+\ell ^-$ decays.
All numbers quoted are branching fractions $\times 10^{-5}$} \label{Kll} \begin{center} \begin{tabular}{lcccccc} $B$ Decay & ARGUS & CLEO~I & CLEO~1.5& CLEO~II & UA1 & CDF\\ \hline $K^0e^+e^-$ & $<$15.0 & $<$56.0 & & & & \\ $K^- e^+e^-$ & $<$9.0 & $<$24.0 & $<$5.7 & $<$1.2 & & \\ $K^0\mu^+\mu^-$ & $<$26.0 & $<$39.0 & & & &\\ $K^- \mu^+\mu^-$ & $<$22.0 & $<$36.0 & $<$17.0& $<$0.9 & & $<$3.5 \\ $\bar{K}^{*0}e^+e^-$ & $<$29.0 & & $<$6.9& $<$1.6 & & \\ $K^{*-}e^+e^-$ & $<$63.0 & & & & & \\ $\bar{K}^{*0}\mu^+\mu^-$&$<$34.0 & & $<$16.0 & $<$3.1&$<$2.3&$<$5.1 \\ $K^{*-}\mu^+\mu^-$&$<$110.0& & & & \\ $X_s\mu^+\mu^-$ & & & & &$<$5.0 \\ \end{tabular} \end{center} \end{table} \section{PURELY LEPTONIC B DECAY} \subsection{$B$ Decays to Two Leptons} The Standard Model allows $B^0$ and $B_s$ mesons to decay to $e^+e^-$, $\mu^+\mu^-$, or $\tau^+\tau^-$ via box diagrams or loop diagrams involving both $W$ and $Z$ propagators (see Fig.~\ref{dilepfig}) \cite{RareAli}. The largest branching fraction is predicted to be $4\times 10^{-7}$ for $B_s\to\tau^+\tau^-$, and the smallest $2\times 10^{-15}$ for $B^0\to e^+e^-$. The decays to the lighter leptons are suppressed by a helicity factor which is proportional to $m_{\ell} ^2$, and the $B^0$ decays are suppressed relative to the $B_s$ decays by the factor $|V_{td}/V_{ts}|^2$. Decays to the final states $e^{\pm}\mu^{\mp}$, $e^{\pm}\tau^{\mp}$ and $\mu^{\pm}\tau^{\mp}$ are all forbidden in the Standard Model by lepton family number conservation. A search for $B^0$ decays to two leptons has been made by CLEO~II \cite{CLEOll}, and there are also searches for $B^0\to\mu^+\mu^-$ by the UA1 and CDF collaborations at hadron colliders \cite{UA1Kll,CDFll}. The 90\% C.L. upper limits on the allowed processes are $5.9\times 10^{-6}$ for $B^0\to e^+e^-$ (CLEO~II), and $3.2\times 10^{-6}$ (CDF), $5.9\times 10^{-6}$ (CLEO~II) and $8.3\times 10^{-6}$ (UA1) for $B^0\to\mu^+\mu^-$.
The hadron collider experiments will set similar limits on $B_s\to\mu^+\mu^-$, and presumably have not done so because the $B_s$ mass was unknown until recently (see Section~\ref{Bs-mass}). \begin{figure}[htb] \begin{center} \unitlength 1.0in \begin{picture}(3.,0.5)(0,0) \put(-1.1,-3.0){\psfig{bbllx=0pt,bblly=0pt,bburx=567pt,bbury=567pt,width=5.0in,height=4.0in,file=dilepfig.ps}} \end{picture} \bigskip \vskip 5 mm \caption{Diagrams for the dilepton decays of $B$ mesons.} \label{dilepfig} \end{center} \end{figure} CLEO~II also sets limits on the lepton-flavor changing decays of $5.9\times 10^{-6}$ for $B^0\to e^{\pm}\mu^{\mp}$, $7.9\times 10^{-4}$ for $B^0\to e^{\pm}\tau^{\mp}$ and $1.2\times 10^{-3}$ for $B^0\to\mu^{\pm}\tau^{\mp}$. Upper limits on the lepton flavor violating decays $B^- \to K^- e^{\pm} \mu^{\mp}$ and $\bar{B^0}\to \bar{K}^{*0} e^{\pm} \mu^{\mp}$ of $1.2 \times 10^{-5}$ and $2.7\times 10^{-5}$ have also been set. Several recent papers consider the relative sensitivity of various lepton-flavor changing decays to non-Standard Model couplings \cite{SherYuan,Campbell}. Sher and Yuan argue that larger Yukawa couplings are expected for third generation quarks, and that these larger couplings not only enhance the sensitivity of the decays, but also make them less dependent on the detailed parameterization of the new couplings\cite{SherYuan}. They make a comparison of $B$ and $K$ decays which suggests that $B_s\to\tau\mu$ has the best sensitivity, although it is unclear how to search for this channel experimentally. The more accessible channel $B_s\to\mu e$ could also have better sensitivity than the equivalent decay $K_{L}\to\mu e$, even though the upper limit on the latter is now in the $10^{-11}$ range. $B^0$ decays are less sensitive than $B_s$ decays but are still of interest because they can be searched for in experiments at the $\Upsilon (4S)$.
\subsection{The Decays $B\to\tau\nu$, $B\to\mu\nu$ and $B\to e\nu$.} The decay $B^+\to\tau^+\nu$ proceeds through the annihilation of the constituent quarks in analogy to the $\pi^+\to\mu^+\nu$ decay. The branching fraction is given by: $$ {\cal{B}}(B^+\to\tau^+\nu) = \frac{G_F^2m_Bm_\tau^2}{8\pi} \left(1-\frac{m_\tau^2}{m_B^2}\right)^2 f_B^2|V_{ub}|^2\tau_B $$ All the parameters in this equation are well known except the decay constant $f_B$ and the CKM matrix element $V_{ub}$. Given a more accurate knowledge of $V_{ub}$ from other measurements and the experimental observation of the decay $B^+\to\tau^+\nu$, it would be possible to determine a value for $f_B$. The measurement of this decay constant is of fundamental importance for $B$ physics since it enters into many other $B$ decay measurements, including most notably $B\bar{B}$ mixing \cite{mixing}. The present theoretical estimates of $f_B$ from lattice QCD and QCD sum rules are in the range $f_B = (180\pm 50)$~MeV \cite{fB}. Using this value of $f_B$ and our standard values of $V_{ub}$ and $\tau_B$, we obtain a prediction of ${\cal{B}}(B^+\to\tau^+\nu) = 4.0\times 10^{-5}$. The decays $B^+\to\mu^+\nu$ and $B^+\to e^+\nu$ have smaller branching ratios of $1.4\times 10^{-7}$ and $3.3\times 10^{-12}$ respectively. The decays to the muon and electron are suppressed relative to the tau decay by a helicity factor proportional to the square of the lepton mass. The radiative decays $B^+\to \mu \nu \gamma$ and $B^+ \to e \nu\gamma$ are less suppressed and occur at rates comparable to their purely leptonic counterparts\cite{burdgw}. CLEO~II has searched for $B^+\to\tau^+\nu$ followed by $\tau \to \ell \nu \bar{\nu}$. In this case, the observed showers and tracks, apart from the lepton, must originate from the other $B$ meson. No additional leptons are allowed. Constraints on the missing energy and momentum are used to isolate the signal. No significant excess is found. This leads to a 90\% C.L.
upper limit of ${\cal{B}}(B^+\to\tau^+\nu) < 2.2\times 10^{-3}$ \cite{cleobtaunu}. Using the same technique as in their analysis of the mode $B\to \tau\nu X$ (see Section~\ref{taunew}), but requiring additional missing energy, ALEPH finds ${\cal{B}}(B^+\to\tau^+\nu) < 1.8 \times 10^{-3}$ \cite{ALEPHxtnu1}. CLEO~II has also searched for $B^+\to\mu^+\nu$ as well as $B^+\to e^+\nu$. The $B$ meson decays almost at rest into a $\mu^+$ (or $e^+$) and a neutrino which are back-to-back and have energies of about 2.65~GeV. The muon is well identified and has little background. The neutrino is ``detected'' by calculating the missing momentum $p_{miss}$ of the whole event. If all the decay products of the other $B^-$ have been measured by the CLEO~II detector, $p_{miss}$ will be a good estimator of the neutrino momentum. Then the analysis proceeds as if this were a fully reconstructed $B$ decay, with the calculation of the energy difference, $\Delta E$, and the beam-constrained mass, $M_B$. The analysis is almost background free, and gives a 90\% C.L. upper limit of ${\cal{B}}(B^+\to\mu^+\nu) < 2.1\times 10^{-5}$. The sensitivity in the electron mode is comparable, ${\cal B}(B^+\to e^+\nu) < 1.5 \times 10^{-5}$\cite{cleobtaunu}. The limits on $B^+\to\tau^+\nu$ and $B^+\to\mu^+\nu$ are both two orders of magnitude above the theoretical predictions, corresponding to the rather uninteresting limit on $f_B$ of about 2.6 GeV for $|V_{ub}/V_{cb}|=0.073$. \section{CONSTRAINTS ON THE CKM MATRIX} \subsection{Introduction} One of the primary goals of the $B$ physics program is the determination of the elements of $V$, the CKM (Cabibbo--Kobayashi--Maskawa) matrix. The experimental results were discussed in previous sections. We now summarize their implications for the CKM matrix. The usual form of the CKM matrix is given below.
\begin{equation} V =\pmatrix{V_{ud}&V_{us}&V_{ub}\cr V_{cd}&V_{cs}&V_{cb}\cr V_{td}&V_{ts}&V_{tb}\cr} \label{ckmmat} \end{equation} The matrix $V$ can be expressed approximately as \begin{equation} V \simeq \pmatrix{1-\lambda^2/2&\lambda&A\lambda^3(\rho-i\eta)\cr -\lambda&1-\lambda^2/2&A\lambda^2\cr A\lambda^3(1-\rho-i\eta)&-A\lambda^2&1\cr}+ O(\lambda^4) \label{wolfpar} \end{equation} This empirical parameterization, suggested by Wolfenstein, is correct to terms of order $\lambda^4$ with $\lambda = \sin{\theta_{Cabibbo}}\approx 0.22$\cite{Wolfie}. As noted by Buras, Lautenbacher, and Ostermaier, as the precision of measurements improves the above approximation will have to be extended to be correct up to order $\lambda^6$\cite{BLO}. This can be accomplished by adding the correction \begin{equation} \Delta V \simeq \pmatrix{-\lambda^4/8& 0 & 0\cr A^2\lambda^5({1\over 2}-\rho-i\eta)& -({A^2\over 2}+1/8)\lambda^4&0\cr {A\over 2}\lambda^5(\rho+i\eta)& A\lambda^4({1\over 2}-\rho-i\eta)& -{A^2\over 2}\lambda^4\cr}+ O(\lambda^6) \label{wolfcorr} \end{equation} \begin{figure}[htb] \begin{center} \unitlength 1.0in \begin{picture}(3.,1.5)(0,0) \put(-0.01,0.0) {\psfig{width=3.3in,height=1.5in,file=bjorken.eps}} \end{picture} \caption{Representation of the Bjorken or Unitarity triangle in the complex plane.} \label{triangle} \end{center} \end{figure} \subsection{The CKM element $|V_{cb}|$} The value of $V_{cb}$, the fundamental weak interaction coupling constant, may be determined from the semileptonic width: $$ \Gamma(b\to c l \nu) = \gamma_c^2 |V_{cb}|^2 $$ where $\gamma_c$ is a constant determined from theory, e.g.\ quark model calculations.
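As a numerical sanity check of the Wolfenstein form~(\ref{wolfpar}), the short sketch below (with illustrative values of $A$, $\rho$ and $\eta$, chosen for the check rather than taken from any fit) verifies that the matrix is unitary up to terms of order $\lambda^4$:

```python
# Sanity check of the Wolfenstein parameterization to O(lambda^4).
# The values of A, rho, eta below are illustrative assumptions.
lam, A, rho, eta = 0.2205, 0.80, 0.30, 0.30

V = [
    [1 - lam**2 / 2,                    lam,            A * lam**3 * (rho - 1j * eta)],
    [-lam,                              1 - lam**2 / 2, A * lam**2],
    [A * lam**3 * (1 - rho - 1j * eta), -A * lam**2,    1.0],
]

def vvdag(i, j):
    """(V V^dagger)_{ij}."""
    return sum(V[i][k] * V[j][k].conjugate() for k in range(3))

# Largest entrywise deviation of V V^dagger from the identity.
dev = max(abs(vvdag(i, j) - (1.0 if i == j else 0.0))
          for i in range(3) for j in range(3))

print(dev)
assert dev < lam**3   # consistent with corrections of O(lambda^4)
```

The largest deviation is of order $A^2\lambda^4 \sim 10^{-3}$, which is exactly the size of the correction terms in~(\ref{wolfcorr}).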
The semileptonic width is obtained from measurements of the semileptonic branching fraction and the appropriate average of the charged and neutral $B$ meson lifetimes, ($< \tau_{B} >$) $$ \Gamma(b\to c l \nu) ={{{\cal B}(b\to c l \nu)}\over {<\tau_{B}>}}.$$ The width has an $m_b^5$ dependence on the $b$ quark mass, although Shifman {\it et al.}~argue that in a certain limit the dependence is much weaker and is proportional to $m_b-m_c$\cite{shifman2,shifman}. Their calculation and the world average for the semileptonic branching fraction give $$ |V_{cb}|_{{\rm Inclusive}} =0.03965\pm 0.001({\rm exp})\pm 0.002({\rm theo})$$ where the first error is experimental and the second is the quoted theoretical uncertainty. The theoretical uncertainty in the determination of $|V_{cb}|$ from inclusive semileptonic decays is currently a matter of active discussion and no clear consensus has emerged\cite{neubert2}. Two other methods are used to determine the value of $|V_{cb}|$. These are measurements of the total widths of exclusive final states from branching fractions, and measurement of the $B\to D^*\ell\nu$ rate at zero recoil. The former method of obtaining $|V_{cb}|$ from the total rate has the distinct advantage that the models used make other detailed predictions for form factors and various other observables which can be experimentally verified. In addition, all of the data can be used, unlike the HQET-inspired method, which is valid only near zero recoil. Using the world average of the branching fraction for $\bar{B} \to D^* \ell \nu$ and the ISGW' model to obtain the central values, we find $$ |V_{cb}|_{{\rm Exclusive}} =0.0347\pm 0.0016({\rm exp})\pm 0.0024({\rm theo})$$ For the HQET method, which requires experimental measurements of the differential spectrum of $\bar{B}\to D^* \ell\nu$ decays, there are two significant uncertainties in the final determination of $|V_{cb}|$ from measurements of the spectrum at zero recoil.
These arise from the model dependence in the calculation of the $1/m_c^2$ corrections to $\xi(1)$ and the lack of knowledge of the functional form of the function $\xi(y)$, which is used for the extrapolation. Using the value of $\xi(1)$ recently calculated by Neubert\cite{neubert3} and the world average for the experimental intercept gives \begin{equation} |V_{cb}|_{{\rm HQET}}= 0.0386\pm 0.0024({\rm exp})\pm 0.0012({\rm theory}) \end{equation} where the first error is experimental and the second is the theoretical uncertainty in $\xi(1)$. The model dependence from the theoretical uncertainty in the normalization is about 4\% but may be reduced in the near future. A precise determination of $|V_{cb}|$ constrains the product $A\lambda^2$, following the notation of Wolfenstein. Since $\lambda$ is well determined from measurements of kaon decays, the parameter $A$ is determined from $|V_{cb}|$. Using $\lambda=0.2205\pm 0.0018$, the world average computed by the Particle Data Group, and the value of $|V_{cb}|$ obtained using HQET, gives \begin{equation} A= 0.794 \pm 0.049 \pm 0.025 \label{Apar} \end{equation} A very precise value of $|V_{cb}|$ is desirable in order to check the unitarity of the CKM matrix as well as for a number of phenomenological applications. For example, one of the largest uncertainties in the determination of the location of the vertex in the $\rho,\eta$ plane using $\epsilon$, the CP violation parameter in kaon decay, is the parameter $A$. In other words, in order to interpret CP violation in the kaon sector and predict the magnitude of CP asymmetries for $B$ mesons, a precise measurement of $|V_{cb}|$ is required. In addition, as emphasized by Buras\cite{Burasckm}, rates for certain rare kaon decays such as $K_{L}\to \pi^0 \ell^+ \ell^-$ and $K_{L}\to \pi^0 \nu \bar{\nu}$ depend on $|V_{cb}|^4$.
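The quoted value of $A$ follows directly from $A = |V_{cb}|/\lambda^2$; the short check below reproduces it and its errors using only the numbers given above (the small uncertainty on $\lambda$ is neglected, as in the quoted result):

```python
# Check A = |Vcb| / lambda^2 with the HQET values quoted in the text.
Vcb, dVcb_exp, dVcb_th = 0.0386, 0.0024, 0.0012
lam = 0.2205

A = Vcb / lam**2
dA_exp = dVcb_exp / lam**2   # propagated experimental error
dA_th = dVcb_th / lam**2     # propagated theoretical error

print(A, dA_exp, dA_th)
assert abs(A - 0.794) < 0.001
assert abs(dA_exp - 0.049) < 0.001
assert abs(dA_th - 0.025) < 0.001
```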
It will be worthwhile to test the Standard Model and verify that the value of $|V_{cb}|$ from such loop induced decays in the kaon sector agrees with the value of $|V_{cb}|$ from tree level semileptonic $B$ decays. \subsection{The CKM element $|V_{ub}|$} In the Wolfenstein parameterization, the value of $|V_{ub}/V_{cb}|$ is approximately $\lambda\,|\rho - i \eta|$. Thus the measured value of this ratio constrains the vertex of the Bjorken triangle (see Fig.~\ref{triangle}) to lie on a circle of radius $\sqrt{\rho^2 + \eta^2} = |V_{ub}/V_{cb}|/\lambda$ in the $\rho-\eta$ plane. The value of this ratio is determined from measurements of inclusive $b\to u\ell\nu$ decays. Using the central value from the ACCMM model gives \begin{equation} |{{V_{ub}}\over {V_{cb}}}|= 0.073\pm 0.011({\rm exp})\pm 0.01({\rm model}) \end{equation} where the first error is the sum in quadrature of the experimental statistical and systematic errors and the second error is due to model dependence. Quantifying model dependence is difficult. An alternate way is to give the allowed range, $$ 0.055< |V_{ub}/V_{cb}| < 0.095,$$ which corresponds to a one standard deviation variation on each of the models considered. The measurement of $|V_{ub}/V_{cb}|$ gives the constraint $\sqrt{\rho^2 + \eta^2} = 0.331\pm 0.067$. At present, a large uncertainty in the radius of this circular region in the $\rho, \eta$ plane is due to the model dependence in the extraction of $|V_{ub}|$. This may be improved with additional theoretical work as well as the observation of exclusive charmless semileptonic decays.
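The constraint quoted above is simply $|V_{ub}/V_{cb}|/\lambda$, with the two errors on the ratio combined in quadrature; a minimal numerical check using the values in the text:

```python
import math

# sqrt(rho^2 + eta^2) = |Vub/Vcb| / lambda, with the values quoted above.
ratio, d_exp, d_model = 0.073, 0.011, 0.010
lam = 0.2205

radius = ratio / lam
d_radius = math.hypot(d_exp, d_model) / lam   # errors combined in quadrature

print(radius, d_radius)
assert abs(radius - 0.331) < 0.001
assert abs(d_radius - 0.067) < 0.001
```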
\subsection{The CKM element $|V_{td}|$} In the Standard Model, $\Delta M_d$ for $B_d-\bar{B_d}$ mixing is $$ \Delta M_d = {{G_F^2} \over {6 \pi^2}} m_B m_t^2 ~F({{m_t^2} \over {m_W^2}}) ~\eta_{QCD} B_{B_d} f_{B_d}^2 |V_{tb}^{*} V_{td}|^2$$ where $G_F$ is the weak coupling constant, $m_t$ is the top quark mass, $F$ is a slowly decreasing function which depends on $m_{t}$ and $m_{W}$, $\eta_{QCD}$ is a factor which accounts for QCD corrections, $B_{B_d}$ is a constant which is used to account for the vacuum insertion approximation, and $f_{B_d}$ is the decay constant of the $B_d$ meson. Since the mass of the top quark has been determined ($m_t=179\pm 10$~GeV), and the QCD correction has recently been calculated to NLO by Buras, Jamin, and Weisz\cite{Burasmix} ($\eta_{QCD}=0.55$), the largest remaining uncertainty in $\Delta M_d$ arises from the product $B_{B_d}^{1/2} f_{B_d}$. This last factor must be determined from non-perturbative methods such as lattice QCD, QCD sum rules, or potential models. One estimate is $B_{B_d}^{1/2} f_{B_d}=180\pm 50$ MeV. This covers the range found in the lattice calculations by the UKQCD, ELC, and Bernard {\it et al.}~groups\cite{Lattices}. However, this estimate should be regarded with considerable caution and the assigned error may be an underestimate. Using these parameters and the world average $\Delta M_{B_d} =0.468\pm 0.026$~ps$^{-1}$ gives $$|V_{td}|=(0.92\pm 0.03\pm 0.09\pm 0.24)\times 10^{-2}$$ where the first error is statistical, the second is due to the top quark mass, and the third is the uncertainty in the product $B_{B_d}^{1/2} f_{B_d}$. At present, the experimental limits on $B\to \tau \nu$ from ALEPH and CLEO~II give the constraint $f_{B_d} < 2.6 $ GeV. A factor of ten improvement in sensitivity is required to reach the range of interest for $f_B$. A complementary approach is to measure the decay constants of charmed mesons (i.e.
$f_{D_s}$, $f_{D^+}$) which can be used to verify the lattice calculations and then be scaled to the $B$ mass. Since $$V_{td}= A \lambda^3 (1 - \rho -i \eta)$$ in the usual parameterization, the modulus is $$|V_{td}|^2 = A^2 \lambda^6 \{(1 - \rho)^2 +\eta^2 \}.$$ Thus, a precise determination of $|V_{td}|$ constrains the vertex of the Bjorken triangle to lie on a circle centered at $\rho=1$, $\eta=0$ with radius $|V_{td}|/(A \lambda^3)$, and further reduces the allowed range of CP asymmetries in the Standard Model. At present, the experimental measurements give the constraint $$ \sqrt{(1-\rho)^2+\eta^2} = 1 \pm 0.3.$$ \subsection{The CKM element $|V_{ts}|$} The CKM parameter $|V_{ts}|$ can be extracted from a measurement of the branching fraction for the electromagnetic penguin. There are many calculations of the inclusive rate for $b\to s\gamma$ \cite{Ali,bsg}. The rate has a logarithmic dependence on the top quark mass, $m_t$, and is proportional to the product of CKM matrix elements $|V_{ts}V_{tb}|^2$. Large leading order QCD corrections increase the rate by a factor of about 3.5. Using the measurement $m_t=179\pm 10$~GeV, and allowing the range of mass scales, $\mu$, at which the QCD corrections are evaluated, to vary between $m_b/2$ and $2m_b$, Buras {\it et al.}~calculate the inclusive rate to be $(2.8\pm 0.8)\times 10^{-4}$\cite{Burasbsg}. This prediction is completely consistent with the experimental results discussed in the previous section. The theoretical uncertainty from the scale dependence ($\mu$) should be significantly reduced when a calculation including next to leading order QCD corrections is completed. Ali and London have used the new experimental bounds to determine the range of possible values for the ratio of CKM matrix elements $|V_{ts}/V_{cb}|$\cite{Alickm}: $$ 0.62 < |V_{ts}/V_{cb}| < 1.1$$ which is expected from unitarity to be close to 1.
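The central value of $|V_{td}|$ quoted earlier can be reproduced from the $\Delta M_d$ formula. The sketch below assumes the identification $m_t^2\,F(m_t^2/m_W^2) = m_W^2\,S_0(x_t)$, with $S_0$ the standard Inami--Lim box function (an identification not spelled out in the text), and uses the central values of the parameters quoted above:

```python
import math

hbar = 6.582e-25          # GeV s
GF = 1.16637e-5           # Fermi constant, GeV^-2
mW, mt, mB = 80.33, 179.0, 5.279   # masses in GeV
eta_qcd = 0.55            # NLO QCD correction factor quoted in the text
Bf2 = 0.180**2            # B_Bd * f_Bd^2 in GeV^2, from sqrt(B) f = 180 MeV
dMd = 0.468e12 * hbar     # 0.468 ps^-1 converted to GeV

def S0(x):
    """Standard Inami-Lim box function for top exchange."""
    return (x * (0.25 + 9.0 / (4 * (1 - x)) - 3.0 / (2 * (1 - x) ** 2))
            - 1.5 * (x / (1 - x)) ** 3 * math.log(x))

xt = (mt / mW) ** 2
prefactor = GF**2 / (6 * math.pi**2) * mB * mW**2 * S0(xt) * eta_qcd * Bf2
Vtd = math.sqrt(dMd / prefactor)

print(Vtd)
assert abs(Vtd - 0.0092) < 0.0003   # central value quoted in the text
```

This reproduces $|V_{td}|\simeq 0.92\times 10^{-2}$ to well within the quoted uncertainties.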
Using heavy quark symmetry, the $D\to K^*$ and $B\to K^*$ form factors can be related at certain kinematic points. Models are then used to extrapolate and obtain form factors for the entire kinematic range in $B\to K^*$ decays. Using experimentally measured form factors for $D\to K^* \ell \nu$, the measured branching ratio for $B\to K^* \gamma$ can then be used to determine a value of $|V_{ts}|$. Griffin, Masip and McGuigan have carried out this program and find $|V_{ts}|=0.026\pm 0.006({\rm exp}) \pm 0.011 ({\rm theo})$ where the first error is from experimental data and the second arises from theoretical uncertainties\cite{gmm}. \subsection{The ratio $|V_{ts}/V_{td}|$} In the future, measurements of $B_s-\bar{B_s}$ mixing may allow the determination of $|V_{ts}/V_{td}|$ in a manner which is fairly independent of hadronic uncertainties. This would constrain the quantity $|1-\rho-i\eta|$ and circumvent the problems associated with hadronic uncertainties in $B_d-\bar{B_d}$ mixing. The existing limit from ALEPH on $B_s-\bar{B_s}$ mixing implies $${{\Delta m_s} \over {\Delta m_d}} = (1.2\pm 0.1) |{V_{ts}\over V_{td}}|^2 > 7.9 \Longrightarrow |{V_{ts}\over V_{td}}| > 3.0.$$ This gives the constraint $$\sqrt{(1-\rho)^2+\eta^2}<1.5$$ with minimal uncertainties from non-perturbative physics. This is slightly better than the bound from the unitarity of the CKM matrix. Using the constraint $\sqrt{(1-\rho)^2+\eta^2} = 1 \pm 0.3$ obtained from $B_d$ mixing and the relation $x_d/x_s = (1/f) \lambda^2 |1-\rho-i\eta|^2$ gives $x_s/x_d = 13.2\pm 7.9$, where $f$ takes into account $SU(3)$ breaking and is assumed to be $1.25\pm 0.1$\cite{rosfd,breaking}. This implies $x_s = 19\pm 11$. The Standard Model parameters preferred in a recent fit by Ali and London\cite{Alickm}, which uses all available experimental constraints, also indicate that $x_s$ is large, $$x_s =19.4\pm 6.9$$ for $f_{B_s}\sqrt{B_{B_s}}= 230$ MeV.
Such rapid time dependent oscillations of the $B_s$ meson will be extremely difficult to measure in future experiments at high energy colliders or asymmetric $B$ factories. The ratio $|V_{td}/V_{ts}|$ may be determined from a comparison of the decay rates for $B\to\rho\gamma$ (or $B\to \omega \gamma$) and $B\to K^*\gamma$. In this ratio many of the theoretical uncertainties are expected to cancel. $${{{\cal B}(B^-\to \rho^- \gamma)}\over {{\cal B}(B^- \to K^{*-} \gamma)}} ={{{\cal B}(B^0\to \rho^0 \gamma)+ {\cal B}(B^0\to \omega \gamma)} \over {{\cal B}(B^0 \to K^{*0} \gamma)}} = \xi |{V_{td} \over V_{ts}}|^2 $$ where the factor $\xi$ accounts for SU(3) breaking effects. This gives upper limits on $|V_{td}/V_{ts}|$ between 0.64 and 0.75 for different models of $SU(3)$ breaking in the form factors. The possible contribution of long distance effects in this ratio is still in dispute\cite{soni,cheng,pakvasa,longdesh}. For example, Cheng\cite{cheng} finds that the decay $B\to \rho \gamma$ is dominated by the short distance penguin, with a possible 10-20\% contribution to the amplitude from long distance effects. If these long distance effects can be shown to be manageable, then this ratio will also provide useful constraints in the future. \subsection{CP Violation} The three internal angles of the Bjorken triangle can be expressed in terms of CKM elements \[ \alpha\equiv arg \left( \frac{V_{ud}V^*_{ub}}{V_{td}V^*_{tb}} \right),~~\beta\equiv arg \left( \frac{V_{cd}V^*_{cb}}{V_{td}V^*_{tb}} \right),~~ \gamma\equiv arg \left( \frac{V_{cd}V^*_{cb}}{V_{ud}V^*_{ub}} \right). \] These angles can be measured indirectly once each of the CKM elements is precisely determined. The allowed values for the upper vertex of the unitarity triangle are shown in Figure~\ref{ckmfig} \cite{Alickm}.
It is also possible to determine the angles directly from observations of time dependent CP asymmetries in $\bar{B}^0$ decay. The ultimate goal is to measure both the angles and the CKM couplings to high precision and overconstrain the Standard Model. If an inconsistency is found, this would provide an indication of New Physics. \begin{figure}[htb] \begin{center} \unitlength 1.0in \begin{picture}(3.,3.0)(0,0) \put(-0.31,0.0) {\psfig{bbllx=0pt,bblly=0pt,width=3.3in,height=3.3in,file=region_ckm.ps}} \end{picture} \caption{The allowed region in the $\rho, \eta$ plane as determined from measurements of $B$ decays and $\epsilon$ from kaon decay. The dashed circles correspond to the constraints from the limits for $B_s$ mixing with different assumptions on $SU(3)$ breaking \protect\cite{Alickm}.} \label{ckmfig} \end{center} \end{figure} Large CP violating asymmetries in the $B_d$ system are generated by $B-\bar{B}$ mixing. The simplest case to consider is a process where the final state is a CP eigenstate, $|f_{CP}>$. The amplitude for the direct decay and the amplitude for the process where the $B$ mixes to a $\bar{B}$ which then decays to the same final state cannot be distinguished. If the two amplitudes have some relative phase, then a measurable interference effect will be generated. As noted by A. Sanda, this is analogous to the double slit interference experiment of classical physics. The CP violating asymmetry is due to the analogue of the path difference, which in this case is introduced by $B-\bar{B}$ mixing. The time dependent rate for an initially pure state to decay to a CP eigenstate $|f_{CP}>$ is given by $$ \Gamma(B^0(t) \to |f>) \propto |A|^2 e^{-\Gamma t} (1\pm Im (\lambda) \sin{(\Delta M_d t)})$$ where the plus sign obtains for $B^0$ and the minus sign for $\bar{B^0}$.
This gives a time dependent CP asymmetry $$A(t)= Im (\lambda) \sin{(\Delta M_d t)}$$ For the case $|f_{CP}>=\psi K_s$, Im$\lambda = -\sin{(2 \beta)}$, and the expected asymmetry is of order 0.6. Similarly, modes such as $\bar{B}^0\to \pi^+ \pi^-$ may give asymmetries proportional to $\sin{(2 \alpha)}$, which will also probably be of order 0.2--1.0. These asymmetries are considerably larger than the characteristic scale of asymmetries in the kaon sector, which are typically of order $10^{-3}$. However, the branching fraction of the $B_d$ meson to CP eigenstates is small and it is difficult to produce large numbers of $B$ mesons. An additional complication arises when considering production at the $\Upsilon(4S)$ resonance, where the $B\bar{B}$ meson pairs are produced in a coherent state. The restrictions of quantum statistics lead to a CP asymmetry which depends on the difference of the decay times of the $B^0$ and $\bar{B}^0$ mesons. This also has the unfortunate side effect that time integrated CP asymmetries for $B_d$ mesons vanish\cite{UMass}. Several solutions to this difficulty have been proposed. One alternative is to operate the experiment at a center of mass energy just above the threshold where $\bar{B} B^*$ pairs are produced. In this case, time integrated asymmetries no longer vanish, but the cross section is lower by at least a factor of five. Alternatively, one can operate at the $\Upsilon(4S)$ resonance with asymmetric energy beams. The center of mass frame will be boosted and the $B$ decay lengths will be dilated to measurable distances. The latter solution has been chosen by the SLAC and KEK laboratories. Another possibility is to take advantage of the large cross sections for hadronic production of $B$ mesons at either hadron collider or fixed target experiments. In this case, $B$ mesons are produced incoherently, but it is quite challenging to trigger and operate the experiment in a very high rate environment.
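For incoherently produced $B$ mesons the time-integrated asymmetry does not vanish, but it is diluted by mixing: integrating the time-dependent rates above over all decay times gives a factor $x_d/(1+x_d^2)$, with $x_d = \Delta M_d/\Gamma$. The sketch below evaluates this standard result with illustrative values $x_d \simeq 0.72$ and $\sin 2\beta \simeq 0.6$ (both assumptions for the illustration, not measurements from the text):

```python
# Time-integrated CP asymmetry for incoherently produced B mesons:
# integrating Im(lambda) * sin(dM t) against exp(-Gamma t) yields the
# dilution factor x / (1 + x^2), with x = dM / Gamma.
def integrated_asymmetry(im_lambda, x):
    return im_lambda * x / (1.0 + x * x)

x_d = 0.72        # illustrative B_d mixing parameter (assumption)
sin2beta = 0.6    # illustrative asymmetry magnitude, as quoted above

A_int = integrated_asymmetry(sin2beta, x_d)
print(A_int)
assert 0.27 < A_int < 0.30   # roughly half the undiluted asymmetry
```

The dilution factor peaks at $x=1$, so for the $B_d$ system roughly half of the maximal asymmetry survives time integration.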
The high luminosity $B$ factory projects are discussed in detail elsewhere\cite{BABAR,BELLE,DESY,CDFB,LHC}. \section{CONCLUSIONS} Significant progress in the physics of $B$ mesons has been made in the last several years. Improved measurements of branching fractions for semileptonic decays, in conjunction with more precise measurements of exclusive $B$ lifetimes from LEP and CDF, have improved the knowledge of the CKM matrix elements $|V_{cb}|$ and $|V_{ub}|$. A number of experimental and theoretical approaches to a model independent determination of $|V_{ub}|$ have been proposed. These will be among the foci of experimental efforts in the future. Improved determinations of the $B_{d}-\bar{B_d}$ mixing parameters from CLEO~II and the LEP experiments, as well as the determination of the top quark mass at CDF and D0, have reduced the allowed range for the CKM element $|V_{td}|$. The LEP experiments have provided the first evidence for time dependent oscillations of neutral $B$ mesons. These experiments have also provided useful constraints on the $B_s-\bar{B_s}$ mixing parameter, which can be used to constrain $|V_{ts}/V_{td}|$. Measurement of the $B_s-\bar{B}_s$ oscillation frequency is a major experimental challenge for the high energy collider experiments. Results from CLEO~II have significantly modified our understanding of hadronic $B$ decay. The data and measurements of branching fractions are now of sufficient quality to perform non-trivial tests of the factorization hypothesis, including comparisons of rates for $\bar{B}^0\to D^{*+} X^-$ (where $X^-=\pi^- ,\rho^-$, or $a_1^-$) with rates for $D^{*+} \ell ^- \bar{\nu}$ at $q^2=M_X^2$, as well as comparisons of the polarizations in $\bar{B}^0\to D^{*+}\rho^-$ with $\bar{B}^0 \to D^{*+} \ell^-\bar{\nu}_\ell$. In all cases, the factorization hypothesis is consistent with the data at the present level of experimental precision and for $q^2 < m_{a_1}^2$.
Improved measurements of branching ratios of two-body decays with a final state $\psi$ meson have been reported from ARGUS and CLEO~II. The decay $B \to \psi K^*$ is strongly polarized with $\Gamma_L / \Gamma = (78 \pm 7)\%$. Therefore, this mode will be useful for measuring CP violation. There is no evidence for color suppressed decays to a charmed meson and light neutral hadron in the final state. The most stringent limit, ${\cal B}(\bar{B^0}\to D^0\pi^0) / {\cal B}(\bar{B^0}\to D^+\pi^-) < 0.07$ from CLEO~II, is still above the level where these color suppressed $B$ decays are expected in most models. The observation of $B \to \psi$ modes shows that color suppressed decays are present. Using results on exclusive $B \to \psi$ decays from CLEO~1.5, CLEO~II and ARGUS, we find a value of the BSW parameter $|a_2|\; = \; 0.23 \pm 0.01 \pm 0.01$. We also report a new value for the BSW parameter $|a_1|\; = \; 1.03 \pm 0.04 \pm 0.06$. By comparing rates for $B^-$ and $\bar{B}^0$ modes, it has been shown that the sign of $a_2/a_1$ is positive, in contrast to what is found in charm decays. There has been dramatic progress in the study of rare decays. CLEO~II has reported evidence for charmless hadronic $B$ decay in the sum of $B\to K^+ \pi^-$ and $B\to \pi^+ \pi^-$, and has observed the first direct evidence for the radiative penguin decay $B \to K^{*} \gamma$ with a branching fraction of $(4.5\pm 1.5 \pm 0.9) \times 10^{-5}$, consistent with Standard Model expectations for a heavy top quark. CLEO~II has also succeeded in observing the inclusive process $b\to s \gamma$ and finds ${\cal{B}}(b\to s\gamma) = (2.32\pm 0.57 \pm 0.35) \times 10^{-4}$. These results restrict the allowed range for $|V_{ts}|$ and constrain physics beyond the Standard Model. Large samples of reconstructed hadronic decays will be obtained in the next few years by the CLEO~II collaboration as a result of further improvements in the luminosity of CESR, and in the performance of the CLEO~II detector.
There will also be significant increases in the size of the data samples available to the CDF experiment. These will permit accurate tests of the factorization hypothesis over the full $q^2$ range. The large tagged sample at CLEO can be used to study inclusive properties of $B^+$ and $B^0$ decays and to constrain $f_{B}$ via $B^+\to \tau^+ \nu$. Measurements of additional decays to final states with charmonium mesons will be performed and other color suppressed decays will be observed. Larger data samples should allow further results to be obtained on rare $B$ decays, including the observation of $B^0\to \pi^+ \pi^-$, $B^0\to K^+ \pi^-$ and a measurement of the inclusive process $b\to s ~{\rm gluon}$. The measurement of several rare hadronic decays would provide information on the relative importance of the penguin and spectator amplitudes. Additional electromagnetic penguin decays such as $B\to \rho (\omega) \gamma $, $B\to K^{**} \gamma$, and $B\to K^* \ell ^+ \ell ^-$ may be observed. These provide further constraints on the Standard Model parameters $|V_{ts}|$ and $|V_{td}|$, as well as on extensions of the Standard Model. The ultimate goal of the study of $B$ mesons is to measure the large CP asymmetries predicted by the Standard Model in decay modes such as $\bar{B}^0\to \psi K^0$, $\bar{B}^0\to \pi^+ \pi^-$ and $B^-\to D^0 K^-$. In order to thoroughly test the consistency of the Standard Model's description of CP violation in these decays, the mechanisms of $B$ decay must be well understood. This review shows that rapid progress is being made in this program. \acknowledgements We acknowledge the essential contributions of Dr. S. Playfer to an earlier version of this review. We have also benefitted from the reviews of S. Stone and the review of J.D. Richman and P. Burchat on semileptonic charm and beauty decays. We thank H. Albrecht, V. Barger, C. Bebek, D. Cassel, P. Colangelo, J.E. Duboscq, F. DeJongh, I. Dunietz, E. Golowich, J. Hewett, A. Jawahery, Y.Y. Keum, K.
Kinoshita, S. Menary, M. Neubert, C. Nixdorf, X.Y. Pham, A. Ryd, J. D. Richman, V. Rieckert, J. Rodriguez, S. Pakvasa, W. Palmer, K. Schubert, B. Stech, R. Wanke, and M. Zoeller for useful discussions and help in preparation of this review. We thank our colleagues from the CLEO, ARGUS, CDF, ALEPH, OPAL, DELPHI and L3 experiments for their contributions to the work discussed in this review. We thank the Department of Energy, the University of Hawaii and Ohio State University for their unwavering support.
\section{\textbf{Introduction}} \subsection{Motivation and background} The convergence and efficiency of a reduced order model for approximating the solutions of a large-scale PDE system depend heavily on the choice of the elements that constitute the ``reduced basis'' \cite{Priori}. Therefore, the selection of the input parameters at which the snapshots are generated must be appropriate: it must be sufficiently rich to adequately cover the key areas of the parametric space. For sampling the parametric space, discretization techniques like uniform sampling and random sampling are frequently utilized. Both sampling methods, however, have their limitations. For instance, a multidimensional parametric system would necessitate an exhaustive grid search in uniform sampling, while random sampling may miss some crucial parts of the parametric space. On the other hand, another sampling technique, known as Latin Hypercube Sampling (LHS) \cite{helton2003latin,helton2005comparison}, provides a compromise between uniformity and sample size, which makes it more efficient than uniform sampling and often gives better accuracy than random sampling. Statistically based sampling methods like Monte Carlo methods are also among the popularly used techniques \cite{gentle2006random}. Even with efficient sampling techniques, the complexity can grow exponentially with the dimensionality of the parametric space, which is the case for many engineering applications. Therefore, Reduced Basis Modeling (RBM) based on greedy sampling has emerged as a promising tool for reducing the computational cost of the Full Order Model (FOM) by generating high-fidelity snapshots at only a select few optimal parametric points. The basic idea in RBM is to adaptively choose sample points in the offline phase by finding the location at which the estimated error of the reduced model is maximum, thanks to rigorous error estimators.
The greedy RBM has been a subject of research for a long time: it was first applied to find reduced models for the parameterized steady incompressible Navier--Stokes equations \cite{ito1998reduced}, then further developed for a variety of parameterized parabolic PDEs \cite{grepl2005posteriori,grepl2005reduced,grepl2007efficient}, and also applied to several optimal control and inverse problems \cite{dede2010reduced,nguyen2010reduced}. During the past two decades, RBM has contributed significantly to the development of rigorous error bounds for Stokes flow problems, with a special focus on the inf-sup stability conditions; see the articles \cite{rozza2013reduced,martini2015reduced,quarteroni2007numerical}. RBM methods are, however, not completely free of bottlenecks: they require sharp and rigorous error estimators that are problem specific, to ensure the reliability of the method. Additionally, the error is estimated over a discrete training set, which must be a good surrogate of the continuous parametric space. This can lead to inefficient greedy sampling, particularly for high dimensional parametric PDEs. To mitigate this issue, the authors of \cite{Cohen} performed the greedy algorithm on random training sets of small size in every iterative cycle instead of estimating the error over the entire training set, and successfully demonstrated the approach on a diffusion equation problem with a 16-dimensional parametric space. Wilcox et al., in their research work \cite{bui2008model}, solved a sequence of optimization problems on a parametric space which is not a discrete set but rather continuous, to find optimal points adaptively in a greedy manner using either an error estimator or a residual error indicator. The authors demonstrated the proposed methodology on a thermal problem for the design of a fin based on two input parameters, the Biot number and the conduction coefficients, for 11 and 21 parametric dimensions, respectively.
The papers \cite{BUITHANH2007880,1582499,Hoang2015AnEG} also provide references to several other works on goal-oriented sampling methods. Numerous other studies have examined the use of subspace angles to assess the model's sensitivity to parameter changes \cite{Bazaz2015,varona2017automatic,amsallem2010interpolation}, or used sensitivity analysis to sample adaptively from the parametric space \cite{bond2007piecewise}. \subsection{Overview of the idea: Iterative multi-fidelity model order reduction} In this work, we suggest a sampling strategy that uses a multi-fidelity modeling approach as an alternative to the conventional greedy sampling technique driven by error estimators. Multi-fidelity modeling employs models of varied accuracy to estimate the same output quantity at a considerably faster convergence rate than using a single model of higher approximation quality. For instance, in optimization problems, an optimizer is constructed to supply the input design variables at each iteration, and the model then assesses the value of the related objective function, its corresponding gradients, and the value of the constraints. Typically, high-fidelity models are retained in the loop to establish accuracy and convergence guarantees on the low-fidelity models, which are used to determine the best design parameters while taking advantage of computational speedups. Low-fidelity models are cheaper approximations, such as a coarsely discretized model, a ROM, or even a simplified-physics model, that can approximate the same output as the high-fidelity model but with lower accuracy. Multi-fidelity methods have been successfully applied in optimization problems based on co-kriging models \cite{cokriging}, uncertainty analysis \cite{uncertainty}, and Monte Carlo simulations \cite{WANG2021100035,montecarlo}, to name a few.
Such methods have also been extended to the machine learning and Physics-Informed Neural Network (PINN) domain; see \cite{PENWARDEN2022110844,GUO2022114378}. The authors of \cite{PENWARDEN2022110844} connected fidelities of different accuracy with neural networks by manipulating the width and depth of the network architecture. A comprehensive review of past works and recent advances in the area of multi-fidelity modeling can be found in the survey \cite{peherstorfer2018survey}. Recently, in the work of Kast et al. \cite{KAST2020112947}, a multi-fidelity setup was exploited in the context of reduced-order modeling by solving a nonlinear structural dynamics problem based on three input parametric dimensions. First, an appropriate sampling set is identified in the parametric space by leveraging a collection of low-fidelity models, followed by multi-fidelity Gaussian process regression (GPR) for the approximation of the reduced coefficients in the online stage, therefore allowing an efficient decoupling of the offline-online strategy. Another work based on a multi-fidelity adaptation strategy can be found in \cite{PEHERSTORFER201521}, where the authors combined data-driven models with projection-based ROM and adapted the ROM to any change in the input parameter by low-rank updates to the reduced operators, without rebuilding the FE operators from scratch. That work addresses the complexities of cases in which the underlying properties of the PDE system are not static but undergo dynamic changes due to changes in the latent variables. In this work, we combine the multi-fidelity method with physics-based reduced order modeling to derive low-cost ROMs efficiently without the need for problem-specific error estimators. Our approach is based on a ``learn and adapt'' framework.
In the first step, a low-fidelity model learns to sample points from a large input parametric space; in the second step, the low-fidelity model improves by adapting to the current ROM approximation, and the procedure continues until the low-fidelity model is a good representation of the FOM. By keeping high-fidelity solutions in the loop, accuracy and convergence are achieved without the prior requirement of error estimators or upper bounds. The details are explained in section \ref{sec:IMFM}. The goal of this work is to explore the parametric space efficiently and generate appropriate snapshots for a high dimensional parametric system irrespective of the problem definition and of the underlying discretization technique used for solving the problem, such as the finite element method (FEM) or the finite volume method (FVM), and also for cases in which a posteriori error estimators are unavailable or difficult to obtain. In fact, we evaluate the max-norm error between the high-fidelity solution and the reduced order solution at the computed parametric points until it reaches an acceptable accuracy; hence our sampling process is ad hoc, based on heuristics. The greedy selection of points can also be tuned as per the user's requirements for the efficient performance of the algorithm, which will be reflected in the numerical examples. We first introduce the algorithm on a 2D heat conduction problem with 2 input parameters and make a qualitative comparison with the existing greedy RBM. Further, we apply the proposed methodology to an advection-diffusion-reaction problem in a parametric space of 9 dimensions. This work focuses on two main aspects: the first is the construction of an initial low-fidelity model, explained in section \ref{sec:initLF}, and the second is the sampling strategy using the DEIM technique, explained in section \ref{sec:DEIM}.
The idea of using DEIM in iterative multi-fidelity modeling is not related to the approximation of a nonlinear term, but only to the notion of greedy selection of ``optimal'' parametric points. The article is organized as follows: Section 2 presents a general framework of an elliptic partial differential problem, followed by the classical reduced basis construction. Section 3 describes the construction of the reduced basis using the proposed iterative multi-fidelity approach. Section 4 demonstrates the methodology through different numerical examples, followed by results and discussion. \section{\textbf{General problem setting}} This section presents a general problem setting for the purpose of establishing the notation for future reference. The proposed sampling methodology is unrelated to any particular PDE definition, but in order to compare it to the examples presented in the numerical analysis sections \ref{numericaltest1} and \ref{numericaltest2}, we adhere to a general elliptic parameterized PDE. Let $\Omega$ be some bounded domain and $\mathcal{V}$ be an associated functional space characterizing the solution as a function of space. Denoting by $\bm{\mu}$ the input parameter, which belongs to the parametric space $\mathcal{D}$, the problem is to find the solution $u(\bm{\mu})$ in some finite-dimensional discrete space $\mathcal{V}^\mathcal{N} \subset \mathcal{V} $ where $\mathcal{N}=\operatorname{dim}(\mathcal{V}^\mathcal{N})$ such that \begin{equation} a\left(u,v ; \bm{\mu}\right)=f\left(v; \bm{\mu}\right) \quad \forall v \in \mathcal{V}^{\mathcal{N}} \label{PDE} \end{equation} In this work, finite elements are used to obtain the discrete solution to \eqref{PDE}; however, the proposed method is also applicable to other numerical discretization solvers.
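As a concrete instance of the abstract problem \eqref{PDE}, the following minimal sketch (the discretization and all names are our own illustrative choices, not taken from the paper) assembles and solves a 1D parameterized diffusion problem with linear finite elements:

```python
import numpy as np

def assemble_1d_diffusion(n_el, mu, f=1.0):
    """Assemble A(mu) u = b for -mu u'' = f on (0, 1) with homogeneous
    Dirichlet BCs and linear elements; a toy stand-in for the abstract
    weak form a(u, v; mu) = f(v; mu)."""
    h = 1.0 / n_el
    n = n_el - 1                          # number of interior nodes
    # Tridiagonal stiffness matrix scaled by the diffusion parameter mu
    A = (mu / h) * (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
    b = f * h * np.ones(n)                # load vector (constant source)
    return A, b

A, b = assemble_1d_diffusion(100, mu=2.0)
u_h = np.linalg.solve(A, b)               # one high-fidelity snapshot u_h(mu)
```

For this toy problem the exact solution is $u(x)=\tfrac{f}{2\mu}x(1-x)$; a real parameterized PDE produces snapshots that differ in shape, not just in scale, across the training set.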
After spatial discretization, the FE solution of the field variable $u$ can be approximated as: \begin{equation} u \approx u_h(\bm{x};\bm{\mu})= \sum_{i=1}^\mathcal{N} \mathrm{N}_i(\bm{x})(\mathrm{u}_h(\bm{\mu}))_i \label{eq:FE} \end{equation} where $\mathrm{N}_i$ are the shape functions of choice and $(\mathrm{u}_h(\bm{\mu}))_i$ represent the scalar values of the field $u_h$ at the discretization points $\bm{x} \in \mathbb{R}^{\mathcal{N}}$. In practice, these values are stored as a collection of high-fidelity solutions that we call ``snapshots'', computed at distinct parameter values $\bm{\mu}\in \Xi_{train} \subset \mathcal{D}$, of cardinality $|\Xi_{train}|=N$. This set of snapshots $\{\bm{\mathrm{u}}_h^k\}_{k=1}^{N}$ is generated in the offline stage by solving the PDE equation \eqref{PDE} using a high-fidelity solver for varying choices of input parameters. These may range from material parameters to geometrical and shape parameters, to boundary conditions of the unknown field variable, and so on. \subsection{Reduced basis construction} The basic idea in the reduced order modeling approach is that the discrete solution space associated with the underlying PDE lies in a low dimensional subspace and is in general represented by a low dimensional smooth manifold \cite{rozza2007reduced}. The manifold comprises all solutions of the parametric problem over the parametric space, i.e., \begin{equation}\mathcal{M}_h=\operatorname{span}\left\{\bm{u}_h(\bm{\mu}) \in \mathcal{V}^\mathcal{N}: \bm{\mu} \in \mathcal{D}\right\} \subset \mathcal{V} \end{equation} We aim to exploit the low dimensional structure of this manifold, $\mathcal{V}^R \subset \mathcal{V}^\mathcal{N}$ where $R=\operatorname{dim}(\mathcal{V}^R)<<\mathcal{N}$, by appropriately choosing a set of orthogonal basis functions $\{\bm{\phi}_1,\bm{\phi}_2,\ldots,\bm{\phi}_R\}$ that spans the subspace of the manifold $\mathcal{M}_h$ and can represent the manifold with small error.
The associated reduced subspace is then given by $$\mathcal{V}^R=span\{\bm{\phi}_1,\bm{\phi}_2,\ldots,\bm{\phi}_R\} \subset \mathcal{V}^{\mathcal{N}}$$ The reduced solution $u^{R}\in \mathcal{V}^R $ can then be approximated as a linear combination of these basis functions, given by: \begin{equation} u^{R}(\bm{x} ; \bm{\mu})=\sum_{i=1}^{R} \phi_i(\bm{x}) b^{i}( \bm{\mu}) \label{eq:RB} \end{equation} whose coefficients are computed by projection onto the reduced basis (RB) space. Therefore, the reduced problem reads: for any $\bm{\mu}\in \mathcal{D}$, find $u^{R}(\bm{\mu}) \in \mathcal{V}^R$ such that: \begin{equation} a\left(u^{R}, v_h; \bm{\mu}\right)=f(v_h ; \bm{\mu}) \quad \forall v_h \in \mathcal{V}^R \label{eq:ROM} \end{equation} There are several strategies in the literature for constructing reduced basis functions, including the proper orthogonal decomposition (POD) \cite{sirovich1987turbulence,lumley1967structure} and the classical greedy reduced basis method. The objective of the greedy algorithm in the context of RBM is to adaptively enrich the reduced subspace with orthogonal basis functions \cite{Yvongreedy}. By doing so, the evaluation of high-fidelity snapshots for all the training parameters (as done in the classical POD) is avoided in the offline step, thereby enormously reducing the offline cost while improving the efficiency of MOR.
It is based on the idea of iteratively selecting the parameter representing a local optimum with respect to an opportune error indicator, \begin{equation} \bm{\mu}_{n+1}=\operatorname{arg} \underset{\bm{\mu} \in \Xi_{train}}{ \operatorname{max}} \Delta^R(\bm{\mu}) \label{greedyeq} \end{equation} which means that in the $(n+1)^{th}$ step, the sample point that the error metric $\Delta^R(\bm{\mu})$ indicates to be worst approximated by the solution of the current reduced model $\mathcal{V}_{n+1}^R$, among all the parameters $\bm{\mu} \in \Xi_{train} \subset \mathcal{D}$, is selected as the next sample point. At the sampled point, the high-fidelity snapshot is generated using the finite element approximation, followed by enrichment of the reduced basis subspace through Gram-Schmidt orthogonalization of the generated snapshots. This is repeated until the error estimator reaches a prescribed tolerance. To evaluate $\Delta^R(\bm{\mu})$ we need two essential ingredients: the dual norm of the residual and a sharp lower bound for the coercivity constant, which can be obtained by theta methods for simple PDEs \cite{Hesthaven2015} or by the successive constraint method (SCM) for general PDEs \cite{chen2009improved,chen2016certified,huynh2007successive}. It is essential for a good a posteriori error estimator to be sharp and rigorous, to ensure the reliability of the RBM, and also computationally inexpensive, for efficient greedy sampling. However, for complex PDEs the construction of sharp and rigorous error bounds may not be easily achievable, which can lead to unreliable reduced basis approximations; moreover, for PDE systems with a high dimensional parametric space, evaluating an error estimate over the entire parametric set may become computationally expensive.
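The greedy loop \eqref{greedyeq} can be sketched in a few lines; for illustration we replace the estimator $\Delta^R(\bm{\mu})$ by the true projection error of precomputed snapshots (in an actual RBM a cheap residual-based bound is used instead, precisely to avoid computing all snapshots up front):

```python
import numpy as np

def weak_greedy(S_train, n_max=10, tol=1e-10):
    """Greedy basis selection over a training snapshot set. The error
    indicator here is the true projection error, a surrogate for the
    a posteriori estimator Delta^R(mu) of the classical greedy RBM."""
    err = np.linalg.norm(S_train, axis=0)
    picked = [int(np.argmax(err))]                 # start from largest snapshot
    Phi = S_train[:, [picked[0]]] / err[picked[0]]
    for _ in range(1, n_max):
        resid = S_train - Phi @ (Phi.T @ S_train)  # projection residual
        err = np.linalg.norm(resid, axis=0)
        k = int(np.argmax(err))                    # mu_{n+1} = argmax Delta^R
        if err[k] < tol:
            break
        Phi = np.column_stack([Phi, resid[:, k] / err[k]])  # Gram-Schmidt step
        picked.append(k)
    return Phi, picked

# Toy training snapshots with an exact rank of 3:
rng = np.random.default_rng(1)
S = rng.standard_normal((100, 3)) @ rng.standard_normal((3, 50))
Phi, picked = weak_greedy(S)
```

On this rank-3 snapshot set the loop terminates after three enrichment steps, since the projection residual then vanishes for every training parameter.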
As a result, a novel technique is suggested in this work, where we use POD for the basis construction but employ a different strategy, utilizing multi-fidelity modeling, to effectively choose the snapshots, as explained in the next section. \section{Iterative multi-fidelity modeling (IMF) for building PODG-ROM}\label{sec:IMFM} Traditionally, if $\{\bm{\mathrm{u}}_{\text{\tiny HF}}^k(\bm{\mu}) \}_{k=1}^N \in \mathbb{R}^{\mathcal{N}}$ represents the high-fidelity snapshots of the PDE problem \eqref{PDE} at distinct parameter values $\bm{\mu}\in \Xi_{train} \subset \mathcal{D}$, of cardinality $|\Xi_{train}|=N$, that can be suitably well approximated in a low-dimensional manifold, then the solution can be represented in separated form as \begin{equation} u_{\text{\tiny HF}}(\bm{x};\bm{\mu}) \approx \sum_{i=1}^{r}{\phi}_{\text{\tiny HF}}^i(\bm{x})\psi_{\text{\tiny HF}}^i (\bm{\mu}) \label{eq:HF} \end{equation} Here, $\{\bm{\phi}_{\text{\tiny HF}}^i\}_{i=1}^r \in \mathbb{R}^\mathcal{N}$ represents the high-fidelity basis functions which span the low dimensional subspace, i.e. $\mathcal{V}^R= \operatorname{span}\{\bm{\phi}_{\text{\tiny HF}}^1, \dots, \bm{\phi}_{\text{\tiny HF}}^r\} \subset \mathcal{V}^{\mathcal{N}}$, $R= dim(\mathcal{V}^R)$, and $\psi_{\text{\tiny HF}}(\bm{\mu}):\mathcal{D}\longrightarrow \mathbb{R}$ are parametric functions that span the parametric space. It is true that high-fidelity models can capture the intricacies of complex PDE systems, but they are equally expensive to evaluate, and the offline cost to recover the basis functions is quite high. If the parametric functions $\psi_{\text{\tiny HF}}^i(\bm{\mu})$ were known beforehand, we could easily extract a set of points $\bm{\mu}^{\bm{P}} \subset \Xi_{train}$, where $\bm{P} \in (1,N)$, using any efficient sampling technique. The optimal basis functions might then be recovered by generating high-fidelity snapshots at the computed set of points.
However, $\psi_{\text{\tiny HF}}^i (\bm{\mu})$ are not known a priori, but we can reasonably assume that if a low-fidelity model is instead used for the approximation, then, by an expression similar to eq.~\eqref{eq:HF}, we have \begin{equation} u_{\text{\tiny LF}}(\bm{x};\bm{\mu}) \approx \sum_{i=1}^{r}{\phi}_{\text{\tiny LF}}^i(\bm{x})\psi_{\text{\tiny LF}}^i (\bm{\mu}) \label{eq:LF} \end{equation} where $\psi_{\text{\tiny LF}}^i (\bm{\mu})$ have features similar to the $\psi_{\text{\tiny HF}}^i(\bm{\mu})$. Therefore, we can use them to select the points $\bm{\mu}^{\bm{P}}$ at which the high-fidelity snapshots $u_{\text{\tiny HF}}(\bm{\mu}^{\bm{P}})$ are computed. The process can be made iterative, as the newly computed $\bm{\mu}^{\bm{P}}$ can effectively result in the reconstruction of the high-fidelity basis functions. This subsequently leads to the enrichment of the reduced basis subspace, which in turn improves the low-fidelity model approximation. Therefore, step 1 of the proposed method is to obtain a coarse, inexpensive approximation to the FOM using a low-fidelity (lo-fi) model, $f_{\text{\tiny LF}}^{init}:\Omega \times \mathcal{D} \rightarrow \mathbb{R} $, that maps all the parameters belonging to a given training set $\bm{\mu} \in \Xi_{train} \subset \mathcal{D}$ to the same output with lower accuracy. The flow is shown in figure \ref{fig:IMF} and details on the construction of the initial low-fidelity model are given in section \ref{sec:initLF}. \begin{figure}[h!]
\centering \resizebox{0.8\columnwidth}{!}{ \tikzstyle{every node}=[font=\fontsize{42}{42}\selectfont] \tikzstyle{block} = [rectangle split, draw, rectangle split parts=2,text width=60em, rounded corners, minimum width=40em, minimum height=20em, align=center,font=\fontsize{42}{42}\selectfont] \tikzstyle{blueblock} = [rectangle, draw, fill=blue!20, minimum width=40em, text width=60em, text centered, rounded corners, minimum height=20em, align=center,font = \fontsize{42}{42}\selectfont] \tikzstyle{whtblock} = [rectangle,rounded corners, draw, fill=white!20, text width=24em, minimum height=10em, align=left, font = \fontsize{40}{40}\selectfont] \tikzstyle{decision} = [diamond, draw] \begin{tikzpicture}[fill=gray, scale = 5, transform shape] \input{sequence} \end{tikzpicture} } \vspace*{0.5cm} \caption{Flow of the iterative multi-fidelity modeling approach.} \label{fig:IMF} \end{figure} In step 2, we compute the parametric functions $\{\bm{\psi}_{\text{\tiny LF}}^i\}_{i=1}^r \in \mathbb{R}^N$ from the low-fidelity model approximation, primarily using POD; then, in step 3, we sample ``optimal'' parametric points $\bm{\mu}^{\bm{P}} \subset \Xi_{train}$, with $\bm{P} \in (1,N)$, in a greedy procedure based on an interpolation strategy over these parametric functions. The details are discussed in section \ref{sec:DEIM}. Next, in step 4, using a high-fidelity (hi-fi) model we generate snapshots at the selected sample points $\bm{\mu}^{\bm{P}}$ to recover the high-fidelity basis functions $\bm{\phi}_{\text{\tiny HF}}^i \in \mathcal{V}^R$ and thereby enrich the reduced basis subspace, $\mathcal{V}^{R} =\operatorname{span}(\{\bm{\phi}_{\text{\tiny HF}}^i\}_{i=1}^{r''}) \in \mathbb{R}^\mathcal{N}$ where $r''\leq \#(\bm{\mu}^{\bm{P}})$; the construction is explained in section \ref{sec:basisfunc}.
A high-fidelity model is mathematically defined as $f_{\text{\tiny HF}}:\Omega \times \mathcal{D} \rightarrow \mathbb{R} $, which maps the selected points $\bm{\mu}^{\bm{P}} \subset \Xi_{train} $ to an estimate of the output with the accuracy needed for the task. In step 5, we then solve a POD-G ROM for all $\bm{\mu} \in \Xi_{train}$. The functions $\psi_{\text{\tiny LF}}^i$ obtained during the first approximation are unlikely to generate exactly the same space as the $\psi_{\text{\tiny HF}}^i$, i.e., $\operatorname{span}(\psi_{\text{\tiny LF}}^i) \neq \operatorname{span}(\psi_{\text{\tiny HF}}^i) $. Therefore, the recovery of the high-fidelity basis functions $\bm{\phi}_{\text{\tiny HF}}^i$ may not be accurate and reliable enough to represent the large-scale PDE system; hence the procedure has to undergo a certain number of iterations. Finally, in step 6, we evaluate the error between the high-fidelity model and the reduced basis model approximation at the computed discrete points using the error metrics discussed in section \ref{errordef}. If it is below a prescribed tolerance level, we terminate the algorithm; otherwise, we adapt the low-fidelity model with the current ROM approximation (see section \ref{sec:LFupdate}) and repeat the procedure until the constructed ROM represents the FOM adequately. To measure the overall performance of the algorithm, we also check the validation error by computing the error on another set of parameters belonging to a given validation set $\Xi_{val} \subset \mathcal{D}$. The sample points obtained provide a locally optimal choice at each stage of the iterative cycle; however, as the iteration continues and new points are added in each cycle, the algorithm converges towards the global solution with a certain accuracy at greatly reduced complexity. The proposed method in this work is tested on an elliptic PDE problem. It can be extended to hyperbolic or parabolic PDEs as well.
However, the reduced basis subspace then needs to be built appropriately to take the time integration into account. \subsection{Construction of the initial low-fidelity model}\label{sec:initLF} We propose two types of sketch models for the construction of an initial low-fidelity model, depending on which of the following two conditions holds: \begin{enumerate} \item No database of high-fidelity solutions is present a priori, \item A database of high-fidelity solutions is available \end{enumerate} \paragraph{Coarse sketch model:} When no database of solutions is present a priori, the initial low-fidelity model is built using a coarsened (derefined) version of the high-fidelity model, which is nothing but a coarse finite element model. We refer to it as a ``coarse sketch model'' in the following. Snapshots generated using this coarse sketch form a wide matrix, denoted by $\bm{S}_{\text{ \tiny LF}}=[\bm{\mathrm{u}}_{\text{\tiny LF}}^1,\bm{\mathrm{u}}_{\text{\tiny LF}}^2,\ldots, \bm{\mathrm{u}}_{\text{\tiny LF}}^N] \in \mathbb{R}^{m\times N}$, where $\bm{\mathrm{u}}_{\text{\tiny LF}}=\bm{\mathrm{u}}_h(\bm{\mu})$ at the discretization points $\bm{x}_{\text{\tiny LF}} \in \mathbb{R}^m$, such that $ m << \mathcal{N}$. \paragraph{POD-G ROM:} Now, let us assume the case when we already have some solutions of the large-scale PDE system available to us, be it experimental or numerical data. This is materialized in our work by solving a high-fidelity model for a few randomly chosen training parameters, $\mathbf{X}=[\bm{\mathrm{u}}_{\text{\tiny HF}}^1(\bm{\mu}^k),\bm{\mathrm{u}}_{\text{\tiny HF}}^2(\bm{\mu}^k),\ldots, \bm{\mathrm{u}}_{\text{\tiny HF}}^K(\bm{\mu}^k)] \in \mathbb{R}^{\mathcal N \times K}$ with $ \mathcal{N} >>K$ and $k\in (1,N)$, and we term it here a \textbf{random sketch model}. Then the initial low-fidelity model is a ROM approximation, constructed by Galerkin projection of the PDE system onto the reduced basis functions computed from this sketch model.
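The random-sketch construction can be sketched as follows; the affine operator structure $A(\bm{\mu}) = A_0 + \mu A_1$ and all names below are illustrative assumptions of ours, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(3)
Nh = 300
# Hypothetical affine high-fidelity operators: A(mu) = A0 + mu * A1 (SPD)
M0 = rng.standard_normal((Nh, Nh))
A0 = M0 @ M0.T + Nh * np.eye(Nh)
A1 = np.eye(Nh)
b = rng.standard_normal(Nh)

def solve_hf(mu):
    """High-fidelity solve of the full N_h x N_h system."""
    return np.linalg.solve(A0 + mu * A1, b)

# Random sketch: a few high-fidelity snapshots at random parameters
mus_sketch = rng.uniform(0.0, 10.0, size=5)
X = np.column_stack([solve_hf(mu) for mu in mus_sketch])
Phi, _, _ = np.linalg.svd(X, full_matrices=False)   # initial reduced basis

def solve_rom(mu):
    """Initial low-fidelity model: Galerkin projection of the affine
    system onto the sketch basis Phi (a K x K solve instead of N_h x N_h)."""
    Ar = Phi.T @ (A0 + mu * A1) @ Phi
    br = Phi.T @ b
    return Phi @ np.linalg.solve(Ar, br)             # lift back to R^Nh
```

At the sketched parameters the Galerkin ROM reproduces the high-fidelity solution, since those snapshots lie in the span of $\Phi$; away from them it is only a coarse approximation, to be improved by the iteration.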
\paragraph{Remark: } Although the selection of parameters for a random sketch model is entirely user-dependent, it is advised to start by building the reduced bases with just a few snapshots. For the primary purpose of lowering the offline cost of MOR, the low-fidelity model approximation from such a random sketch must remain a less expensive approximation to the FOM at the initial stage of the method. Then, as the iteration advances, the quality of the low-fidelity model improves and converges to the FOM accurately. \subsection{Parametric point selection}\label{sec:DEIM} As previously mentioned, sampling points are extracted from the parametric modes of the low-fidelity model approximation. This is based on the heuristic assumption that the low-fidelity model, although a poor approximation to the high-fidelity model, may nonetheless accurately reflect the essential features of the high-fidelity model's parametric dependence. A brief description of how the parametric points $\bm{\mu}^{\bm{P}} \subset \Xi_{train} $ are sampled from a given training set using DEIM is given in algorithm \ref{algo:deim}. DEIM finds the sample points in a greedy way from an input basis, which is given here by the parametric functions computed by performing an SVD on the low-fidelity model approximation, \begin{equation} \bm{S_{\text{\tiny LF}}}=\bm{\Phi}_{\text{\tiny LF}} \bm{\Sigma}_{\text{\tiny LF}} \bm{\Psi}_{\text{\tiny LF}}^{\text{\tiny T}} \end{equation} where $\bm{\Phi}_{\text{\tiny LF }}=[\bm{\phi}_{\text{\tiny LF }}^1,\bm{\phi}_{\text{\tiny LF }}^2,\ldots,\bm{\phi}_{\text{\tiny LF }}^r] \in \mathbb{R}^{m \times r}$, with $m<<\mathcal{N}$ and $r \leq N$, are the coarsely approximated POD modes, while $ \bm{\Psi}_{\text{\tiny LF}}=[\bm{\psi}_{\text{\tiny LF}}^1,\bm{\psi}_{\text{\tiny LF}}^2,\ldots,\bm{\psi}_{\text{\tiny LF}}^r] \in \mathbb{R}^{N \times r}$ denotes the parametric modes that span the parametric subspace.
The diagonal matrix $\bm{\Sigma}_{\text{\tiny LF}} \in \mathbb{R}^{r \times r}$ contains the corresponding non-negative singular values, $\sigma_1 \geq \ldots \geq \sigma_r \geq 0$, accounting for the information content of the low-fidelity model solution data. The sampling process starts by selecting the index of the entry with the largest magnitude in the first input basis vector. The remaining points are selected by finding the location at which the residual of the current interpolatory approximation is maximum (see algorithm \ref{algo:deim}). The computed points are unique due to the linear independence of the input basis, which guarantees that the indices are hierarchical and non-repetitive in nature. \begin{algorithm} \caption{DEIM sampling adopted from \cite{Chaturantabut2010}} \begin{algorithmic}[1] \State INPUT: Parametric functions $\bm{\Psi}_{\text{\tiny LF}} \in \mathbb{R}^{N \times r}$ \State OUTPUT: Sampled parametric points $\bm{\mu}^{\bm{P}}=\left[\bm{\mu}^{(1)}, \ldots, \bm{\mu}^{(r)}\right]^{\text{\tiny T}} \subset \Xi_{train}$ \State $ \quad \bm{\mu}^{(1)}=\arg\max \left\{\left|\bm{\psi}_{\text{\tiny LF}(\cdot,1)}\right|\right\}$ \State $ \quad \bm{\mu}^{\bm{P}}=[\bm{\mu}^{(1)}]$ \For{ $l=2:r$ } \State $\quad \quad \text{Solve }c= [\bm{\Psi}_{\text{\tiny LF}(\bm{\mu}^{\bm{P}},1:l-1 )}]^{-1} \bm{\psi}_{\text{\tiny LF}(\bm{\mu}^{\bm{P}},l)}$ \State $\quad \quad \bm{\mathrm{r}}=\bm{\psi}_{\text{\tiny LF}(\cdot,l)}-\bm{\Psi}_{\text{\tiny LF}(\cdot,1:l-1)}\, c$ \State $\quad \quad \bm{\mu}^{(l)}=\arg\max \{|\bm{\mathrm{r}}|\}$ \State $\qquad \bm{\mu}^{\bm{P}}=[\bm{\mu}^{\bm{P}},\bm{\mu}^{(l)}]^{\text{\tiny T}}$ \EndFor \end{algorithmic} \label{algo:deim} \end{algorithm} This sampling procedure is resumed after every iteration of the proposed algorithm and is not restarted from the beginning. As a result, instead of oversampling the same points from the training set, we are able to sample distinct points from it.
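Algorithm \ref{algo:deim} amounts to only a few lines of code; the sketch below (our own implementation, following \cite{Chaturantabut2010}) returns the row indices of the sampled parameters within $\Xi_{train}$:

```python
import numpy as np

def deim_indices(Psi):
    """DEIM greedy index selection applied to the parametric modes
    Psi (N x r): returns r hierarchical, non-repeating row indices,
    i.e. the positions of mu^P in the training set."""
    N, r = Psi.shape
    idx = [int(np.argmax(np.abs(Psi[:, 0])))]
    for l in range(1, r):
        # Interpolation coefficients on the already selected rows
        c = np.linalg.solve(Psi[idx, :l], Psi[idx, l])
        res = Psi[:, l] - Psi[:, :l] @ c      # residual of the interpolation
        idx.append(int(np.argmax(np.abs(res))))
    return np.array(idx)

# Parametric modes from the SVD of a toy low-fidelity snapshot matrix:
rng = np.random.default_rng(0)
S_lf = rng.standard_normal((30, 100))          # m x N, coarse snapshots
_, _, Vt = np.linalg.svd(S_lf, full_matrices=False)
Psi_lf = Vt[:6, :].T                           # N x r parametric modes
mu_P = deim_indices(Psi_lf)
```

Because the interpolation residual vanishes at every already-selected row, the argmax necessarily lands on a new index, which is the mechanism behind the non-repetition noted above.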
The parametric functions obtained in the $i^{th}$ iteration of the multi-fidelity algorithm are orthogonalized against those obtained in the $(i-1)^{th}$ iteration through Gram-Schmidt orthonormalization, in order to prevent repetition and the selection of points close to previously computed ones. This step is crucial for finding the most distinctive points in each iteration cycle, enabling a more thorough exploration of the parametric space. \paragraph{Remark: } While picking parametric points, it is always preferable to select from the first $r$ truncated parametric functions, since they carry the highest energy, i.e., information content, of the system, arranged in descending order. It is noteworthy that the proposed methodology does not necessarily behave like a classical greedy sampling procedure, owing to the way points are selected from the parametric functions. Hence, depending on the available computing resources, the level of ``greediness'' can be fine-tuned. In other words, since DEIM generates as many points as the rank of the parametric functions, the user can decide to select all the sample points at once, or only one parametric point per iteration. This is one of the advantages of this approach: the selection of points per iteration is completely user-dependent, which can be exploited for parallel computations. A caveat is that such a step can also lead to the selection of more sample points than required, which deviates from the main objective of selecting a few optimal points and hence needs to be handled with care.
\subsection{Recovery of the reduced basis functions}\label{sec:basisfunc} The high-fidelity basis functions in the first iteration of the proposed method are recovered by performing an SVD of the selected snapshots, $\bm{S}_{\text{\tiny HF}}=\{\bm{\mathrm{u}}_{\text{\tiny HF}}(\bm{\mu}^{\bm{P}})\}$ for all $\bm{\mu}^{\bm{P}} \subset \Xi_{train}$. \begin{equation} \operatorname{svd}(\bm{S_{\text{\tiny HF}}})= \bm{\Phi}_{\text{\tiny HF}} \bm{\Sigma}_{\text{\tiny HF}} \bm{\Psi}_{\text{\tiny HF}}^{\text{\tiny T}} \end{equation} where $\bm{\Phi}_{\text{\tiny HF}}=[\bm{\phi}_{\text{\tiny HF}}^1,\bm{\phi}_{\text{\tiny HF}}^2,\ldots,\bm{\phi}_{\text{\tiny HF}}^{r''}] \in \mathbb{R}^{\mathcal{N}\times r''}$, with $ r''\leq \#(\bm{\mu}^{\bm{P}})$, contains the high-fidelity reduced basis vectors that span the low-dimensional subspace $\mathcal{V}^{R}$, and $ \bm{\Psi}_{\text{\tiny HF}}=[\bm{\psi}_{\text{\tiny HF}}^1,\bm{\psi}_{\text{\tiny HF}}^2,\ldots,\bm{\psi}_{\text{\tiny HF}}^{r''}] \in \mathbb{R}^{\#(\bm{\mu}^{\bm{P}}) \times r''}$ denotes the parametric modes. Similarly, the diagonal matrix $\bm{\Sigma}_{\text{\tiny HF}} \in \mathbb{R}^{r'' \times r''}$ contains the corresponding non-negative singular values $\sigma_1 \geq \ldots \geq \sigma_{r''} \geq 0$, accounting for the information content of the high-fidelity model solution data. At the $(i+1)^{th}$ iteration of the algorithm, the reduced subspace $\mathcal{V}^{R}$ is updated through the Gram-Schmidt procedure (see algorithm \ref{algo:GS}).
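A compact \texttt{numpy} sketch of this subspace update follows; algorithm \ref{algo:GS} lists the procedure as used in the paper, while the function name, shapes, and the extra re-orthogonalization pass below are our additions (the second projection is a standard numerical-robustness safeguard, not part of the algorithm):

```python
import numpy as np

def update_basis(V, S_new, eps_g=1e-10):
    """Append to the reduced basis V (Nh x k, orthonormal columns) the
    components of the new high-fidelity snapshots S_new (Nh x s) that
    are not already represented in span(V)."""
    for l in range(S_new.shape[1]):
        s = S_new[:, l]
        phi = s - V @ (V.T @ s)        # remove the part already in span(V)
        phi = phi - V @ (V.T @ phi)    # re-orthogonalize for stability
        # keep only directions carrying genuinely new information
        if np.linalg.norm(phi) / np.linalg.norm(s) > eps_g:
            V = np.column_stack([V, phi / np.linalg.norm(phi)])
    return V
```

A snapshot already contained in the current subspace produces a (numerically) zero residual and is discarded by the tolerance test, so the basis never accumulates linearly dependent columns.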
\begin{algorithm} \caption{ Gram-Schmidt orthonormalization at the $(i+1)^{th}$ iteration of the proposed method} \label{algo:GS} \begin{algorithmic}[1] \For{ $l=1:\text{dim}(\bm{S}_{\text{\tiny HF}})$} \State $\bm{\phi}_{\text{\tiny HF}}^l= \bm{S}_{\text{\tiny HF}}^l-\mathcal{V}_i^{R} \langle \mathcal{V}_i^{R},\bm{S}_{\text{\tiny HF}}^l \rangle $ \If{$\frac{||\bm{\phi}_{\text{\tiny HF}}^l||}{||\bm{S}_{\text{\tiny HF}}^l||} > \epsilon_g$} \State \qquad $\mathcal{V}_{i+1}^{R}= \mathcal{V}_i^{R} \bigoplus \frac{\bm{\phi}_{\text{\tiny HF}}^l}{||\bm{\phi}_{\text{\tiny HF}}^l||}$ \EndIf \State Update $\mathcal{V}_i^{R}=\mathcal{V}_{i+1}^{R}$ \EndFor \end{algorithmic} \end{algorithm} \subsection{Updating the low-fidelity model}\label{sec:LFupdate} In this part, we demonstrate how the current POD-G ROM approximation can be used to update the low-fidelity model at each iteration of the algorithm until convergence, i.e., we approximate the solution $u_{\text{\tiny LF}}:\Omega \times \mathcal{D} \rightarrow \mathbb{R}$ with a function $u^R\in \mathcal{V}^R$ defined by \begin{equation} u^R(\bm{x};\bm{\mu} )=\sum_{i=1}^{r''}\bm{\phi}_{\text{\tiny HF}}^i (b(\bm{\mu}))_i= \bm{\Phi}_{\text{\tiny HF}}\bm{b}(\bm{\mu}) \label{eq:HFRB} \end{equation} where the POD expansion coefficients $\bm{b}(\bm{\mu})=(b_1,b_2,\ldots,b_{r''})^{\text{\tiny T}}$ can be calculated by Galerkin projection of the PDE system onto the basis functions $\bm{\phi}_{\text{\tiny HF}}^i$. The initial low-fidelity snapshot data are now updated with the current reduced solutions, such that $\bm{S}_{\text{\tiny LF}}=[u^R_1,u^R_2,\ldots, u^R_N] \in \mathbb{R}^{\mathcal{N}\times N}$. \textbf{Remark}: To improve the efficiency of the method, one can also approximate the low-fidelity data with the coefficients of the POD expansion instead of the reduced solution itself and replace $\bm{S}_{\text{\tiny LF}}$ with $\bm{B}$, where $\bm{B}=[\bm{b}^1,\bm{b}^2,\ldots, \bm{b}^N] \in \mathbb{R}^{{r''}\times N}$.
Here $\bm{b}^i=(b_1,b_2,\ldots,b_{r''})^{\text{\tiny T}}$ represents the POD coefficients from eq. \ref{eq:HFRB}. This process can reduce the cost of exploring the parametric space with the low-fidelity approximation from $\mathcal{O}(\mathcal{N})$ to $\mathcal{O}(r'')$. Performing an SVD of $\bm{B}$ gives \begin{equation} \bm{B}= \bm{\varphi} \bm{\varsigma}\hat{\bm{\psi}}^{\text{\tiny T}} \label{eq:svdLF} \end{equation} Rewriting eq. \ref{eq:HFRB}, we have \begin{equation} \tilde{u}^{R}(\bm{x};\bm{\mu} )\approx u^{R}(\bm{x};\bm{\mu} )=\underset{\bm{\tilde{\Phi}}_{\text{\tiny HF}}}{\underbrace{\bm{\Phi}_{\text{\tiny HF}} \bm{\varphi}}}\bm{\varsigma}\hat{\bm{\psi}}^{\text{\tiny T}} \label{eq:LFRB} \end{equation} The original high-fidelity basis functions can now be replaced by the approximate left singular vectors through the expression in eq. \ref{eq:LFRB}. The scaling by the singular values $\bm{\varsigma}$ is meant to preserve the accuracy of the basis functions that could otherwise be lost when reduced coefficients are used in place of reduced-order solutions for the parametric exploration. The next section is dedicated to the error metrics used to test the reliability of the approach.
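The coefficient-space update of eqs. \ref{eq:svdLF}--\ref{eq:LFRB} amounts to a single small SVD followed by a cheap rotation of the existing basis; a hedged \texttt{numpy} sketch (shapes and names are ours):

```python
import numpy as np

def update_from_coefficients(Phi_HF, B):
    """Phi_HF : (Nh, r'') current high-fidelity basis (orthonormal columns)
       B      : (r'', N)  POD coefficients of the reduced solutions
       Returns the rotated basis Phi_HF @ varphi and the parametric factor
       varsigma * psi_hat^T, usable as the new low-fidelity data."""
    varphi, varsigma, psi_hat_T = np.linalg.svd(B, full_matrices=False)
    Phi_new = Phi_HF @ varphi              # eq. (LFRB) rotation: O(Nh r''^2)
    S_LF = np.diag(varsigma) @ psi_hat_T   # scaled parametric factor: O(r'' N)
    return Phi_new, S_LF
```

By construction $\tilde{\bm{\Phi}}_{\text{\tiny HF}}\,\bm{\varsigma}\hat{\bm{\psi}}^{\text{\tiny T}} = \bm{\Phi}_{\text{\tiny HF}}\bm{B}$, so no information in the reduced solutions is lost by working in coefficient space; only the $\mathcal{O}(\mathcal{N})$ storage and exploration cost is avoided.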
\subsection{Error metric}\label{errordef} If $u_{\text{\tiny HF}}(\bm{x};\bm{\mu}^{\bm{P}})$ and $u^{R}(\bm{x};\bm{\mu}^{\bm{P}})$ represent the FOM and ROM solutions, respectively, at the computed parametric points $\bm{\mu}^{\bm{P}}$ from the training set $\Xi_{train} \subset \mathcal{D}$, then the max norm of the relative error is estimated at the sample points such that \begin{equation} \epsilon_{train}=\max_{\bm{\mu}^{\bm{P}}} \sqrt{ \frac{\sum_{ i = 1}^{\mathcal{N}}||{u_\text{\tiny HF}(x_i;\bm{\mu}^{\bm{P}})-u^{R}(x_i;\bm{\mu}^{\bm{P}})||}_2^2}{\sum_{ i = 1}^{\mathcal{N}} ||u_{\text{\tiny HF}}(x_i;\bm{\mu}^{\bm{P}})||_2^2}} \end{equation} In order to better assess the quality of the reduced model, the algorithm is validated on another set of parameters, $ \Xi_{val} \subset \mathcal{D} $, for the problem defined. By validating on a different set of points, if the error between the FOM and ROM solutions decreases as the iteration cycles increase, it can be inferred that the ROM approximates the large-scale system well for any $\bm{\mu} \in \mathcal{D}$. If the max norm error for both the training and validation sets is below a certain tolerance, the sketch model constructed can be considered reliable.
\begin{equation} \epsilon_{val}= \max_{\bm{\mu}} \sqrt{ \frac{\sum_{ i = 1}^{\mathcal{N}}||{u_{\text{\tiny HF}}(x_i;\bm{\mu} )-u^{R}(x_i;\bm{\mu} )||}_2^2}{\sum_{ i = 1}^{\mathcal{N}} ||u_{\text{\tiny HF}}(x_i;\bm{\mu} )||_2^2}} \qquad \forall \bm{\mu} \in \Xi_{val}\end{equation} The error between all the snapshots and the ROM solutions obtained by the iterative multi-fidelity approach is also evaluated and can be viewed as a benchmark for the ROM error, provided sufficient snapshots are generated: \begin{equation} \epsilon_{\text{\tiny ROM}}= \sqrt{\sum_{ i = 1}^{N} \frac{||{u_{\text{\tiny HF}}(\bm{x};\bm{\mu}^{(i)})-u^{R}(\bm{x};\bm{\mu}^{(i)})||}_2^2}{ ||u_{\text{\tiny HF}}(\bm{x};\bm{\mu}^{(i)})||_2^2}} \qquad \bm{\mu} \in \Xi_{train}\end{equation} We also analyze the POD basis projection error, given by the error between the snapshots and their projection onto the recovered basis functions: \begin{equation} \epsilon_{\text{\tiny POD}}= \sqrt{\sum_{ i = 1}^{N} \frac{||{u_{\text{\tiny HF}}(\bm{x};\bm{\mu}^{(i)})-\Pi u_{\text{\tiny HF}}(\bm{x};\bm{\mu}^{(i)})||}_2^2}{ ||u_{\text{\tiny HF}}(\bm{x};\bm{\mu}^{(i)})||_2^2}} \qquad \bm{\mu} \in \Xi_{train}\end{equation} \section{2D heat conduction problem} \label{numericaltest1} In order to assess the proposed methodology, we begin by analyzing a simple steady-state heat conduction problem in a 2D domain, $\Omega=(0,1) \times(0,1)$, as shown in figure \ref{conduction}. This problem is reproduced from \cite{Hesthaven2015}, where it is solved using the classical greedy RBM. The boundary of the domain is split into three parts, the base, the top, and the sides, and $\Omega_{0}$ is a square block placed in the center of the domain. Let $\kappa$ be the thermal conductivity with $\kappa|_{\Omega_{0}}=\mu_{[1]}$ and $\kappa|_{\Omega_{1}}=1$, where $\Omega_{1}=\Omega \backslash \Omega_{0}.$ \begin{figure}[hbt!]
\centering \includegraphics[width=0.5\textwidth]{conduction.pdf} \caption{Geometry of the heat conduction problem} \label{conduction} \end{figure} Two input parameters are considered for this problem, $\bm{\mu}=[\mu_{[1]},\mu_{[2]}]$, where $\mu_{[1]}$ is the conductivity in the region $\Omega_0$ and the second parameter $\mu_{[2]}$ is the constant heat flux over the bottom boundary. The strong formulation of this parameterized problem is governed by Poisson's equation. For some parameter value $\bm{\mu} \in \mathcal{D}$, find $u(\bm{\mu})$ such that \begin{equation} \begin{aligned} \nabla \cdot \kappa_{\mu} \nabla u(\bm{\mu}) &=0 & & \text { in } \Omega, \\ u(\bm{\mu}) &=0 & & \text { on } \Gamma_{\text {top }} \\ \kappa_{\mu} \nabla u(\bm{\mu}) \cdot n &=0 & & \text { on } \Gamma_{\text {side}} \\ \kappa_{\mu} \nabla u(\bm{\mu}) \cdot n &=\mu_{[2]} & & \text { on } \Gamma_{\text {base}} \end{aligned} \end{equation} Here, $u(\bm{\mu})$ is the scalar temperature field and $\kappa_{\mu}$ is given by $\kappa_{\mu}=\varphi_{1}+\mu_{[1]} \varphi_{0}$, where $\varphi$ is the characteristic function with the subscript denoting the corresponding domain. Defining $\mathcal{V}^\mathcal{N}=\left\{v \in H^1(\Omega) : v|_{\Gamma_{\text {top }}}=0\right\}$, the weak parametrized formulation then reads: for some parameter $\bm{\mu} \in \mathcal{D}$, find $u(\bm{\mu}) \in \mathcal{V}^\mathcal{N}$ such that \begin{equation} \begin{gathered} a(u(\bm{\mu}), v ; \bm{\mu})=f(v ; \bm{\mu}) \quad \forall v \in \mathcal{V}^\mathcal{N}, \\ a(w, v ; \bm{\mu})=\int_{\Omega} \kappa_{\bm{\mu}} \nabla w \cdot \nabla v \text { and } f(v ; \bm{\mu})=\mu_{[2]} \int_{\Gamma_{\text {base }}} v, \end{gathered} \label{eq:weakformconduction} \end{equation} for all $v, w \in \mathcal{V}^\mathcal{N}$. The selected range for the parametric study is $\bm{\mu}=[\mu_{[1]}, \mu_{[2]}]\in \mathcal{D}=[0.1,10] \times[-1,1]$.
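Because $\kappa_{\bm{\mu}}=\varphi_{1}+\mu_{[1]}\varphi_{0}$ and $f(v;\bm{\mu})=\mu_{[2]}\int_{\Gamma_{base}}v$, the discrete system splits affinely as $A(\bm{\mu})=A_1+\mu_{[1]}A_0$ with load $\mu_{[2]}\,f_{base}$, so each Galerkin ROM query reduces to a small dense solve. A hedged \texttt{numpy} sketch (the matrix names are placeholders; any FEM assembly can supply $A_0$, $A_1$, $f_{base}$, and in practice the projected operators would be precomputed offline rather than inside the query):

```python
import numpy as np

def solve_rom_conduction(Phi, A0, A1, f_base, mu):
    """Galerkin ROM query for the conduction problem.
    Phi (Nh, r): reduced basis; A0, A1 (Nh, Nh): stiffness contributions
    of Omega_0 and Omega_1; f_base (Nh,): unit-flux load on Gamma_base."""
    mu1, mu2 = mu
    # offline in practice: project the affine terms once
    A0_r, A1_r = Phi.T @ A0 @ Phi, Phi.T @ A1 @ Phi
    f_r = Phi.T @ f_base
    # online: O(r^2) assembly plus an r x r solve per parameter query
    b = np.linalg.solve(A1_r + mu1 * A0_r, mu2 * f_r)
    return Phi @ b   # lift back to the full space, as in eq. (HFRB)
```

Galerkin projection makes the reduced residual $\bm{\Phi}^{\text{\tiny T}}(A(\bm{\mu})u^R-\mu_{[2]}f_{base})$ vanish, which is a convenient correctness check for any such implementation.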
A total of 2050 sample points are generated, of which the training set $\Xi_{train}$ comprises 2000 points and the validation set $\Xi_{val}$ consists of 50 points. For $\mu_{[1]}$, the points are generated using a uniform discretization, whereas for the second input parameter $\mu_{[2]}$, the points are generated using logarithmic spacing. The graphical representation of the temperature field for two different sets of parameters is shown in figure \ref{conductionprofile}. \begin{figure}[hbt!] \centering \begin{subfigure}{0.4\textwidth} \includegraphics[trim=6cm 11cm 6cm 11cm, clip=true,width=\textwidth]{conductivityHT1.pdf} \caption{ $\mu_{[1]}=0.1, \mu_{[2]}=-1$} \end{subfigure} \begin{subfigure}{0.4\textwidth} \includegraphics[trim=6cm 11cm 6cm 11cm, clip=true,width=\textwidth]{conductivityHT1500.pdf} \caption{$\mu_{[1]}=0.4, \mu_{[2]}=0.5 $} \end{subfigure} \caption{Two different representative solutions for the parameterized conductivity problem.} \label{conductionprofile} \end{figure} \subsection{Results and discussion} We now run the proposed algorithm with two different initializations of the low-fidelity model. The first analysis is based on the assumption that some random dataset of FOM solutions is already available. The initial low-fidelity model is then a ROM constructed from this random sketch model. In this example, we initialize the random sketch with 2 linearly independent snapshots from the training set. Since this is a low-rank linear problem, we chose to select one sample point per iteration. With a target tolerance set to $\epsilon= 10^{-6}$, 6 iteration cycles are required to achieve the desired accuracy, as shown in figure \ref{err1a}. We also show the convergence plot for parameters belonging to the validation set, which decays smoothly until the target accuracy is achieved.
This implies that the ROM constructed with the proposed iterative multi-fidelity approach represents the large-scale PDE system well for any parameter belonging to the parametric space $\mathcal{D}$. \begin{figure}[hbt!] \begin{subfigure}{0.4\textwidth} \includegraphics[trim=6cm 11cm 6cm 11cm, clip=true,width=\textwidth]{ConductionERupdated.pdf} \subcaption{Initial random sketch} \label{err1a} \end{subfigure} \begin{subfigure}{0.4\textwidth} \includegraphics[trim=6cm 11cm 6cm 11cm, clip=true,width=\textwidth]{ConductionMaxerCFupdated.pdf} \subcaption{Initial coarse sketch} \label{err1b} \end{subfigure} \caption{ Error $\epsilon_{train}$ and $\epsilon_{val}$ between FOM and ROM solutions using the random sketch model (\textbf{a}) and the coarse sketch model (\textbf{b}).} \label{err1} \end{figure} The second analysis addresses the case when no database of solutions is available a priori; the initial low-fidelity model is then constructed from a coarse sketch model, as shown in figure \ref{condmesha}. Figure \ref{condmeshb} represents the fine grid model used for generating the high-fidelity solution. We observe that the same number of iterations is required as in the previous case to achieve an accuracy of $\mathcal{O}(10^{-6})$ for parameters belonging to both the training and validation sets, as shown in figure \ref{err1b}. This specific example has a parametric dimension of only 2 and the problem is not mesh-dependent, so both sketch models lead to sampling the same number of points. However, we will observe that the selection of points is not as consistent in a complex problem such as the 9-dimensional advection-diffusion problem discussed in section \ref{numericaltest2}. Also, more or fewer sampling points may be required, depending on the initialization of the low-fidelity model, to retain the same target accuracy. \begin{figure}[hbt!]
\begin{subfigure}{0.4\textwidth} \includegraphics[trim=6cm 11cm 6cm 11cm, clip=true,width=\textwidth]{ConductionCM.pdf} \subcaption{Coarse grid ($\#$ Nodes =62 )} \label{condmesha} \end{subfigure} \begin{subfigure}{0.4\textwidth} \includegraphics[trim=6cm 11cm 6cm 11cm, clip=true,width=\textwidth]{ConductionFM.pdf} \subcaption{Fine grid ($\#$ Nodes =895)} \label{condmeshb} \end{subfigure} \caption{ (\textbf{a}) Coarse discretized mesh for the initial low-fidelity model, and (\textbf{b}) fine discretized mesh for the high-fidelity model approximations. } \end{figure} The parametric points sampled using both sketch models are represented in figure \ref{parametricptsa}. We can notice that, irrespective of the sketch model chosen to construct the low-fidelity model, the proposed method extracted almost exactly the same points in both cases. We also plot a density map in figure \ref{parametricptsb} showing the probability of a point being picked at a certain location, obtained here by fitting a Gaussian distribution over an ensemble of numerical experiments. In other words, each experiment is initialized with different random snapshots without any repetition. The figure reflects a very interesting behavior: six of the seven sampled locations are fairly the same in each trial, while the third point chosen shows more variance than the other six. This is attributed to the selection mechanism: the first two points are drawn randomly from the training set during the initialization of the random sketch model, after which the algorithm optimizes the location of the third point, yielding fairly consistent results. Since this approach is based on heuristics, obtaining exactly the same sampling points in different cases is not of the utmost priority; the emphasis is rather on achieving unique points that can statistically represent the entire parametric subspace well. \begin{figure}[hbt!]
\begin{subfigure}{0.4\textwidth} \includegraphics[trim=6cm 11cm 6cm 11cm, clip=true,width=\textwidth]{samplepts_conductionn.pdf} \subcaption{} \label{parametricptsa} \end{subfigure} \begin{subfigure}{0.4\textwidth} \includegraphics[trim=6cm 11cm 6cm 11cm, clip=true,width=\textwidth]{GaussianDistIMF.pdf} \subcaption{} \label{parametricptsb} \end{subfigure} \caption{ Sampled parametric points using the two sketch models (\textbf{a}), and Gaussian distribution of the sampling points over 10 trials (\textbf{b}).} \end{figure} \begin{figure}[hbt!] \begin{subfigure}{0.4\textwidth} \includegraphics[trim=6cm 11cm 6cm 11cm, clip=true,width=\textwidth]{ConductionROMPOD2.pdf} \subcaption{} \label{err2a} \end{subfigure} \begin{subfigure}{0.4\textwidth} \includegraphics[trim=6cm 11cm 6cm 11cm, clip=true,width=\textwidth]{IMF_POD_greedy.pdf} \subcaption{} \label{err2b} \end{subfigure} \caption{ Error $\epsilon_{\text{\tiny POD}}$ of the POD projection, a posteriori error bound of the greedy RBM, and error $\epsilon_{\text{\tiny ROM}}$ of the reduced order model solution obtained by the proposed method (\textbf{a}), and sampled parametric points using the proposed method, classical POD, and greedy RBM (\textbf{b}).} \end{figure} We also plot the $\ell_2$-norm POD projection error ($\epsilon_{\text{\tiny POD}}$) and ROM error ($\epsilon_{\text{\tiny ROM}}$) against the rank of the low-fidelity model using both sketch models in figure \ref{err2a}, as per the definitions in section \ref{errordef}. We can observe that both the POD and ROM errors decay exponentially as the rank of the system increases. Also, the POD error curve lies below the ROM error, as expected, following the decay of the singular values. We now make a comparison by solving the same problem using the greedy RBM algorithm, where the lower bound is computed using the multi-min-theta approach explained in detail in \cite{Hesthaven2015}.
The convergence plot of the relative $\ell_2$-norm ROM error ($\epsilon_{\text{\tiny ROM}}$) for all the parameters in the training set, together with the error bound, is shown in figure \ref{err2a}. It is observed that the ROM error has a smooth exponential decay and is lower than the max norm error, as expected. For a target accuracy set to $\mathcal{O}(10^{-6})$, the rank of the system achieved is 7, implying that 7 parametric points were sampled to recover the basis functions, similar to the findings of our proposed method. Thus, it is evident that the qualitative performance of the proposed method is comparable with the greedy RBM. We also show an illustration of the sampled points obtained by greedy RBM in figure \ref{err2b}. We notice that the parametric points picked by the proposed method are not entirely comparable to those of greedy RBM; however, the sampled points in both cases follow a logarithmic trend, which may be related to the way we discretized our training set, where we chose a logarithmic spacing for $\bm{\mu}_{[2]}$ and a uniform discretization for $\bm{\mu}_{[1]}$. If we apply DEIM to the parametric functions obtained by POD on the high-fidelity snapshots for all $\bm{\mu} \in \Xi_{train}$ and plot the sampled points in the same figure \ref{err2b}, we observe that the selection of points is comparable with the proposed method and lies on the same logarithmic trend. \section{9D advection-diffusion problem } \label{numericaltest2} In this section, we study an advection-diffusion problem with a source term and a 9-dimensional parameter space. As can be seen in figure \ref{CVR}, the domain is divided into 9 subdomains, where each region has a different diffusivity coefficient which serves as an input parameter. \begin{figure}[hbt!]
\includegraphics[trim=0cm 0cm 0cm 0cm, clip=true,width=0.8\textwidth]{convdiffvel_1.pdf} \caption{ (left) Geometrical set-up of the advection-diffusion problem in a 9-block system and (right) advective flow field.} \label{CVR} \end{figure} \subsection{Problem setting} The strong form of the parameterized advection-diffusion-reaction equation is governed by an elliptic PDE. For some parameter value $\bm{\mu} \in \mathcal{D}$, find $u(\bm{\mu})$ such that: \begin{equation} \begin{aligned} \bm{b}\cdot \nabla u(\bm{\mu})-\mathbb{K}(\mu_{i}) \Delta u(\bm{\mu})+\sigma & =0 \qquad \text{in } \Omega_{i}, \quad i=1, \ldots, 9 \\ u(\bm{\mu}) & = 0 \qquad \text{ at } \Gamma_{inlet}\\ \nabla u(\bm{\mu}) \cdot \hat{n} & = 0 \qquad \text{ at } \Gamma_{walls}\\ \sigma & = 1 \qquad \text{ in } \Omega_5\\ \label{eq:cdreq} \end{aligned} \end{equation} where $u(\bm{\mu})$ is the unknown field variable on the spatial domain $\Omega=[0,1]\times [0,1]$. The diffusion coefficient is given by $\mathbb{K}(\bm{\mu})=\bm{\mu}$ for the input parameter $\bm{\mu} \in \mathcal{D} = [0.01,10]^9$, and $\sigma$ is the constant reaction term. $\bm{b}$ is a given advective flow field, obtained by solving a potential flow problem in the same bounded domain such that $\nabla \cdot \bm{b}=0$. We consider a low permeability zone in $\Omega_5$ and relatively higher permeability over the rest of the domain; with a constant velocity at the inlet, we obtain the streamlines shown in figure \ref{CVR}.
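The 9-dimensional parameter domain $\mathcal{D}=[0.01,10]^9$ is later discretized with 2500 Latin hypercube samples (2000 for training, 500 for validation). A minimal \texttt{numpy} sketch of such stratified sampling follows (our own implementation for illustration, not necessarily the LHS routine used in the experiments):

```python
import numpy as np

def latin_hypercube(n, d, lo, hi, seed=0):
    """Minimal Latin hypercube sampler: each of the n strata per
    dimension receives exactly one point, with strata permuted
    independently per coordinate."""
    rng = np.random.default_rng(seed)
    strata = np.tile(np.arange(n), (d, 1))           # (d, n) stratum ids
    perm = rng.permuted(strata, axis=1).T            # (n, d) shuffled ids
    u = (perm + rng.random((n, d))) / n              # stratified in [0, 1)
    return lo + (hi - lo) * u                        # scale to [lo, hi)

mu = latin_hypercube(2500, 9, 0.01, 10.0)            # D = [0.01, 10]^9
Xi_train, Xi_val = mu[:2000], mu[2000:]              # 2000 / 500 split
```

Compared with a tensor grid, which would need $k^9$ points for $k$ levels per coordinate, LHS keeps the sample count decoupled from the parametric dimension while still covering every one-dimensional stratum.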
The weak parameterized formulation using SUPG then reads: for some parameter $\bm{\mu} \in \mathcal{D}$, find $u(\bm{x};\bm{\mu}) \in \mathcal{V}^\mathcal{N}$, where $\mathcal{V}^\mathcal{N}=\left\{v \in H^1(\Omega) : v|_{\Gamma_{\text {inlet}}}=0\right\}$, such that \begin{equation} \langle v+\frac{\beta h}{2} \nabla v, \bm{b} \cdot \nabla u\rangle+\langle\nabla v, \mathbb{K}(\bm{\mu}) \nabla u\rangle -\langle \frac{\beta h}{2} \nabla v, \mathbb{K}(\bm{\mu}) \Delta u\rangle +\langle v+\frac{\beta h}{2} \nabla v, \sigma \rangle =0 \qquad \forall v \in \mathcal{V}^\mathcal{N} \label{eq:cdrweak} \end{equation} The parameter $\beta$ is a dimensionless constant that depends on the so-called Peclet number, given by $Pe=\frac{||\bm{b}||h}{2\mathbb{K}(\bm{\mu})}$, where $h$ is a typical element size in the direction of the velocity and $||\bm{b}||$ is the characteristic flow velocity. Note that the third term in equation \ref{eq:cdrweak} vanishes when linear elements are used for the domain discretization \cite{Hulsen2009}. The reduced weak form of eq. \ref{eq:cdrweak} is obtained by projection onto the properly selected low-dimensional subspace spanned by the reduced basis functions $\{\bm{\phi}^k\}_{k=1}^{r}$, such that \begin{equation} \langle \phi^k+\frac{\beta h}{2} \nabla \phi^k, \bm{b} \cdot \nabla u_{\text{\tiny POD}} \rangle+\langle\nabla \phi^k, \mathbb{K}(\bm{\mu}) \nabla u_{\text{\tiny POD}}\rangle +\langle \phi^k+\frac{\beta h}{2} \nabla \phi^k, \sigma \rangle =0 \qquad \forall k=1,\ldots,r \label{eq:cdrROM} \end{equation} where $$u_{\text{\tiny POD}}(\bm{x};\bm{\mu})=\sum_{j=1}^r \phi^{j}(\bm{x})z_{j}(\bm{\mu}) $$ and $ \bm{z}=\{z_1, \ldots, z_r\}^{\text{\tiny T}} $ represents the coefficients of the POD expansion. Note that only the diffusion term in equation \ref{eq:cdrROM} is affine with respect to the input parameter $\mathbb{K}(\bm{\mu})$ and can be efficiently reduced during the offline stage, \begin{equation} \begin{gathered} \langle \nabla \phi^k,\mathbb{K}(\bm{\mu}) \nabla u_{\text{\tiny POD}}\rangle = \mathbb{K}(\bm{\mu})\sum_{j=1}^{r} \langle \nabla \phi^k, \nabla \phi^j\rangle z_j \, , \label{eq:exred} \end{gathered} \end{equation} in which the $(r \times r)$ operator $\langle \nabla \phi^k, \nabla \phi^j\rangle$ can be computed once and for all in the offline stage. During the online stage, if a new parameter $\mathbb{K}(\bm{\mu})$ is prescribed, the evaluation of the diffusion operator of eq. \ref{eq:cdrROM} can be done in reduced complexity (i.e.
it does not depend on the original dimension $\mathcal{N}$), since it only requires $\mathcal{O}(r^2)$ operations. This step is crucial for retaining the computational efficiency of the ROM; however, the same idea cannot be straightforwardly applied to the convective and source terms of equation \ref{eq:cdrROM}, as the dependence of the projection operator on the input parameter, i.e., the diffusion coefficient, is non-affine. The non-affineness is not addressed in this study; instead, the main focus is the application of the suggested approach to a high-dimensional PDE system. \subsection{Results and discussion} The input parameter $\bm{\mu}$ is discretized using the LHS technique with 2500 sample points, from which the training set $\Xi_{train} \subset \mathcal{D}$ consists of 2000 points and the remaining 500 samples are used for the validation set $\Xi_{val} \subset \mathcal{D}$ to certify the quality of the reduced basis approximation. Figure \ref{transport1} represents the solution field for different combinations of the diffusion coefficients of the 9 blocks. \begin{figure}[hbt!]
\begin{subfigure}{0.3\textwidth} \includegraphics[trim=12cm 7cm 12cm 5cm, clip=true,width=\textwidth]{k0.pdf} \end{subfigure} \begin{subfigure}{0.3\textwidth} \includegraphics[trim=12cm 7cm 12cm 5cm, clip=true,width=\textwidth]{k67.pdf} \end{subfigure} \begin{subfigure}{0.3\textwidth} \includegraphics[trim=12cm 7cm 12cm 5cm, clip=true,width=\textwidth]{k167.pdf} \end{subfigure} \begin{subfigure}{0.3\textwidth} \includegraphics[trim=12cm 7cm 12cm 4cm, clip=true,width=\textwidth]{HT0.pdf} \end{subfigure} \begin{subfigure}{0.3\textwidth} \includegraphics[trim=12cm 7cm 12cm 4cm, clip=true,width=\textwidth]{HT67.pdf} \end{subfigure} \begin{subfigure}{0.3\textwidth} \includegraphics[trim=12cm 7cm 12cm 4cm, clip=true,width=\textwidth]{HT167.pdf} \end{subfigure} \caption{Three different representative solutions of the advection-diffusion problem (bottom), obtained by varying the nine diffusion coefficients $\mathbb{K}(\bm{\mu})$ over the subdomains (top).} \label{transport1} \end{figure} Similar to the previous numerical example, we run the algorithm with two different initializations of the low-fidelity model. First, the discussion is presented for the low-fidelity model approximated using a random sketch model. Three studies are conducted to evaluate the computational performance: the first two examine the impact of the random sketch on the qualitative performance of the suggested method, while the third examines the quantitative performance of the algorithm when adjusting the greedy sampling parameter. The algorithm is run over 10 trials for the first two cases; the convergence plots are shown in figure \ref{CDR1}. In the first case study, the initial rank of the random sketch for constructing the ROM is chosen to be 100, and during every iteration cycle, 10 points are added incrementally to update the low-fidelity model.
With a target tolerance set to $\epsilon= 10^{-6}$, we can observe that 13 iteration cycles are required to achieve the desired accuracy in each of the 10 trials; refer to figure \ref{CDR1a}. A total of $100+(10\times 12) = 220$ parametric points are sampled out of the 2000 points of the training set. It is to be noted that the rank of the low-fidelity model is also increased by 10 at each iteration, which implies that all the sampled points are unique, as expected, and consequently the recovered basis functions are linearly independent by construction. In the second case study, the random sketch is initialized with 10 linearly independent snapshots from the training set instead of 100, and in each iteration 10 points are sequentially added to recover the basis functions. It is observed from figure \ref{CDR1b} that in 22 iterations the target accuracy is achieved with a total sampling of 220 points out of the 2000 points of the training parametric set, similar to the first case. Thus, it is evident that irrespective of the size of the random sketch chosen for the initial construction of the low-fidelity model, the algorithm performed well in both scenarios, and the final enrichment of the low-fidelity model converged accurately towards the FOM within the prescribed tolerance. The computational time required to achieve the target accuracy is of the same order, $\mathcal{O}(10^3)$ seconds, in both cases. \begin{figure}[h!]
\centering \begin{subfigure}{0.4\textwidth} \includegraphics[trim=6cm 11cm 6cm 10cm, clip=true,width=\textwidth]{HT100_10new.pdf} \subcaption{\tiny initial rank 100, incremented by 10 points every iteration } \label{CDR1a} \end{subfigure} \begin{subfigure}{0.4\textwidth} \includegraphics[trim=6cm 11cm 6cm 10cm, clip=true,width=\textwidth]{HT10_10new.pdf} \subcaption{\tiny initial rank 10, incremented by 10 points every iteration} \label{CDR1b} \end{subfigure} \hfill \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[trim=6cm 11cm 6cm 10cm, clip=true,width=\textwidth]{HT100_2.pdf} \subcaption{\tiny initial rank 100, incremented by 2 points every iteration} \label{CDR1c} \end{subfigure} \caption{(\textbf{a,b}) Mean of errors $\epsilon_{train}$ and $\epsilon_{val}$ between FOM and ROM solutions obtained using different random sketch models over 10 trials, with their standard deviation. (\textbf{c}) Errors $\epsilon_{train}$ and $\epsilon_{val}$ between FOM and ROM solutions for one trial. } \label{CDR1} \end{figure} In the third case, the study is conducted by sampling 2 parametric points per iteration instead of 10. Figure \ref{CDR1c} shows that 54 iteration cycles are required to achieve the same target accuracy, with a total sampling of $100+54\times 2=208$ points, unlike the previous two cases, where 220 points were selected from the parametric space. The CPU time required in this case is $\mathcal{O}(10^4)$ seconds, one order of magnitude higher than in the previous cases. This implies that adding a few points per iteration can minimize the risk of sampling excess points while maintaining the same order of accuracy, but at the cost of higher CPU time. Due to the discrete nature of the error evaluation, the relative training error is observed to be noisy, whereas the validation error decays smoothly, as it is evaluated over the entire validation set rather than at selected discrete points.
For the second analysis, the initial low-fidelity model is built using a very coarse sketch model, as shown in figure \ref{HTmesha}. For recovering the high-fidelity solution, the finely discretized model of figure \ref{HTmeshb} is used. The points are added sequentially, incrementing by 10 every iteration. The target accuracy is achieved in 23 iterations (for a total count of 230 sampled points), as shown in figure \ref{CSMHTa}, which is 10 points more than with the random sketch model. As already discussed, such PDE problems face numerical stability issues at high Peclet numbers (advection-dominated cases), which can be resolved by applying artificial diffusion in the upwind direction. It is also important to note that the amount of artificial diffusion added to the system depends on the mesh size: the coarser the mesh, the more diffusion is required, which may affect the solution significantly. Hence, to accurately capture the physical properties of the PDE system, it is very important that the initial grid take into account all the physical aspects of the problem, so that there is no loss of information. This is a plausible explanation for the higher number of points needed when a coarse sketch model, rather than a ROM, is used for the initial construction of the low-fidelity model. Nevertheless, our main objective of guaranteeing convergence is achieved irrespective of the initial size of the coarse mesh used, as confirmed further by the decay of the validation error curve seen in figure \ref{CSMHTa}. The CPU time taken is of $\mathcal{O}(10^3)$ seconds, the same as in the first and second cases. \begin{figure}[hbt!]
\begin{subfigure}{0.4\textwidth} \includegraphics[trim=6cm 10cm 6cm 10cm, clip=true,width=\textwidth]{coarsemeshHT.pdf} \subcaption{Coarse grid ($\#$ Nodes=575)} \label{HTmesha} \end{subfigure} \begin{subfigure}{0.4\textwidth} \includegraphics[trim=6cm 10cm 6cm 10cm, clip=true,width=\textwidth]{finemeshHT.pdf} \subcaption{Fine grid ($\#$ Nodes=3492)} \label{HTmeshb} \end{subfigure} \caption{(\textbf{a}) Coarse discretized mesh for initial low-fidelity model, and (\textbf{b}) fine discretized mesh for high-fidelity model approximations.} \end{figure} Figure \ref{CSMHTb} shows the $\ell_2$-norm POD projection error and the ROM error for all parameters belonging to the training set, using both sketch models. Both the POD and ROM errors decay exponentially as the rank of the system increases, with the POD error serving as a lower bound for the ROM error. This demonstrates the reliability of the proposed method with respect to the quality of the constructed ROM: irrespective of the initial design of the low-fidelity model, the ROM error displays decay properties similar to those of the POD error. \begin{figure}[hbt!] \begin{subfigure}{0.41\textwidth} \includegraphics[trim=6cm 10cm 6cm 10cm, clip=true,width=\textwidth]{HTCF10_10.pdf} \subcaption{} \label{CSMHTa} \end{subfigure} \begin{subfigure}{0.39\textwidth} \includegraphics[trim=6cm 11cm 6cm 11cm, clip=true,width=\textwidth]{HTCFROMPODn.pdf} \subcaption{} \label{CSMHTb} \end{subfigure} \caption{ Error $\epsilon_{train}$ and $\epsilon_{val}$ between FOM and ROM solution using coarse sketch model (\textbf{a}). Error $\epsilon_{\text{\tiny POD}}$ of the POD projection, and error $\epsilon_{\text{\tiny ROM}}$ of the solution of the reduced order model obtained by the proposed method \textbf{ (b)}.} \end{figure} \section{Conclusions} In this work, we presented the feasibility of a multi-fidelity approach to reduced basis approximation for a multi-dimensional parametric PDE system in an iterative procedure.
The parametric sampling is ad hoc and extracted from low-fidelity model approximations based on heuristic assumptions. Using low-accuracy low-fidelity models in place of high-fidelity ones enhances the computational performance of the offline stage significantly. Moreover, this approach proved adequate when an a posteriori error estimator, an essential ingredient for efficient greedy sampling, is unavailable. The greedy selection is user-dependent, i.e., one can select a single point or multiple points in each iteration. However, care must be taken, since more parametric points than required might be sampled to achieve the same target accuracy, at a higher CPU cost. A compromise can thus be made in the trade-off between computational efficiency and accuracy. Overall, the construction of the reduced basis subspace required fewer high-fidelity snapshots in the proposed method. This methodology is successfully demonstrated on a 2D steady-state advection-diffusion problem with 9 input parametric dimensions. A qualitative comparison between the proposed method and the greedy RBM is also presented for a simple steady-state heat conduction problem, in which both approaches had comparable computational performance. We also presented two different ways of initializing the low-fidelity model; irrespective of the initial quality of the low-fidelity approximation, the method is shown to be reliable and stable, converging towards the FOM approximation within the prescribed tolerance. In our current work, we have only performed linear reduction, but we can further embed hyper-reduction in our framework ``on the fly'', i.e., during the multi-fidelity iterations, for treating non-affine problems. This step can alleviate the cost of the low-fidelity model approximations significantly.
For current research along these lines, one can refer to the articles \cite{alla2022adaptive,wen2022globally} on adaptive hyper-reduction techniques, which allow enrichment of the reduced integration domain during the online stage as the simulation progresses. Additionally, during the multi-fidelity iterations, we may evaluate the low-fidelity model only on a portion of randomly chosen parametric points instead of the complete training set. This procedure could drastically improve the computational performance of the methodology, especially for nonlinear PDEs. There are, however, certain implications: information will be missing in the parametric subspace, and, as suggested in the methodology, sampling points with the DEIM strategy requires orthogonalizing the current parametric functions with respect to the previous ones. As a possible solution to this problem, Gappy-POD may be used to reconstruct the missing data in the updated parametric functions and can be incorporated into this technique in future research work. \newpage \bibliographystyle{unsrt}
\section{Introduction} Coherent backscattering (CBS) of light is a phenomenon emerging in a dilute disordered medium, whereby multiply scattered counter-propagating waves interfere constructively in a narrow angular range around the backward direction, leading to an enhanced intensity of the scattered signal \cite{sheng}. CBS of light has attracted ongoing interest due to its close connection to various interference-induced effects in the presence of disorder, such as weak localization \cite{albada85}, Anderson localization \cite{storzer06}, or random lasers \cite{wiersma08}. Since 1999, CBS of light has been observed in cold atoms \cite{labeyrie99}. Quantum scatterers are endowed with an internal structure that impacts the matter-light interaction already at the level of a single atom and, consequently, influences multiple scattering. On the one hand, in the elastic scattering regime of low laser intensity, the degeneracy of atomic dipole transitions can strongly reduce the CBS signal, and needs to be taken into account in the theory in order to match the experiment \cite{jonckheere00,labeyrie03}. On the other hand, for degenerate (e.g., in rubidium atoms) and non-degenerate (e.g., in strontium atoms \cite{bidel02}) transitions alike, increasing the intensity of the incident laser field leads to atomic saturation accompanied by nonlinear (multi-photon) inelastic scattering processes \cite{scully} reducing the interference contrast in CBS \cite{chaneliere04}. While a thorough theory of CBS of light from degenerate atoms in the regime of elastic scattering already exists \cite{mueller01,kupriyanov03}, a theory of CBS from saturated atoms is still in a rudimentary state, several years after the ``saturation'' experiments with strontium \cite{chaneliere04} and rubidium \cite{balik05} atoms.
The main obstacle to developing such a theory is that CBS becomes a multi-wave interference effect in the nonlinear scattering regime \cite{wellens08,wellens09b}, with more than two interfering amplitudes. A nonperturbative (with respect to the strength of the nonlinearity) theory of nonlinear CBS that fully accounts for its multi-wave interference character has so far been developed only for nonlinear classical scatterers \cite{wellens08,wellens09b}, not including quantum effects resulting from multi-photon scattering processes in the regime of atomic saturation. Among the present theoretical approaches to CBS from saturated atoms, one \cite{wellens04} is based on diagrammatic scattering theory and is, hence, suited to treating multiple scattering processes of arbitrary order. It is, however, restricted to the weakly inelastic regime and has so far been worked out for two incident laser photons \cite{wellens06}. Other approaches are based on quantum optical master \cite{shatokhin05,shatokhin06} or Langevin \cite{gremaud06} equations, and treat in detail the response of individual atomic scatterers to an incident field of arbitrary intensity, though they are restricted to a few atoms. Recently, we have proposed a new approach to multiple scattering of light from distant atoms in the saturation regime \cite{wellens09}. This method, to which we shall refer as the `pump-probe' approach, unifies the potential of diagrammatic scattering theory with that of quantum optical methods, and, as we believe, will enable us to develop a theory of multiple scattering of intense laser light from dilute atomic gases, where the distances between atoms are much larger than the laser wavelength.
Within the pump-probe approach, which has so far been elaborated in detail for two scalar atoms \cite{wellens09,geiger09}, the configuration-averaged double scattering signal is extracted from single-atom observables (such as the spectral correlation function of the atomic Bloch vector), where each atom is subject to a bichromatic driving field consisting of the laser field (pump) and the weak field scattered from the other atom (probe), whose frequency may differ from the laser frequency. The main assumption of the pump-probe approach is that both these fields can be modelled as classical fields. Intuitively, this is expected to be possible in the case of a large distance between the atoms, where only a single photon is scattered from one atom to the other, and thus correlations between different photons emitted by the same atom play no role. As the numerical data show \cite{wellens09,geiger09}, the double scattering spectra calculated within this new method are in excellent agreement with the accurate results following from the two-atom master equation expanded up to the lowest nonvanishing order in the inverse distance between the atoms. This shows that the ansatz of a classical probe field is indeed justified for large distances. Furthermore, for the elastic component of the backscattered field, we have analytically proven the strict equivalence of these two apparently very different approaches \cite{geiger09}. However, a full analytical proof that also includes the inelastic spectra has been missing. In the present contribution, we fill this gap. Starting from the master equation for two scalar atoms, we derive the elastic and inelastic double scattering CBS signals and express them in terms of purely single-atom quantities. Next, we establish the equivalence with the result of the pump-probe approach, based on single-atom equations under bichromatic driving \cite{wellens09,geiger09}. The structure of the paper is as follows.
In the next section, we introduce the master equation approach and define all quantities of physical interest. In Secs.~\ref{sec:elastic} and \ref{sec:inelastic} we deduce, respectively, the elastic and inelastic components of the light backscattered from two atoms. In Sec.~\ref{sec:bichromatic} we interpret the obtained results by considering a physical setting of single atoms subjected to bichromatic driving. Our work is concluded in Sec.~\ref{sec:conclusion}. \section{Master equation approach} \subsection{Formalism} \label{sec:formalism} Let us consider a toy model of CBS consisting of two scalar two-level atoms embedded in a common electromagnetic bath and driven by a quasi-resonant laser wave with wave vector $\vec{k}_L$. In the following, we neglect the atomic center-of-mass motion and focus on the dynamics of the internal degrees of freedom. The standard method to find the atomic dynamics is a master equation approach \cite{lehmberg70,agarwal74}, whereby the bath degrees of freedom are traced out, leading to equations of motion for the quantum mechanical expectation values of atomic variables. It should be noted that a more general model, taking into account the vectorial nature of the scattered field as well as of the atomic transitions, has already been studied in some detail \cite{shatokhin05,shatokhin06,shatokhin07}. The reason why we address here the simpler scalar case is that we want to analytically derive expressions for the power spectrum of light coherently backscattered from two atoms, and compare them with the results of \cite{wellens09} obtained for {\it scalar} atoms. We stress, however, that the tools developed in this work can be generalized to arbitrary atomic transitions.
In the Heisenberg picture and in the frame rotating at the laser frequency, one obtains the following master equation for the expectation value of an arbitrary atomic operator $Q$ \cite{lehmberg70}: \begin{align} \langle\dot{Q}\rangle&=\sum_{j=1}^2\langle-i\delta[\sigma^+_j\sigma^-_j,Q] -\frac{i}{2}[\Omega_j\sigma_j^++\Omega^*_j \sigma^-_j,Q]\label{meq} \\ &-\gamma(\sigma^+_j\sigma^-_jQ+Q\sigma^+_j\sigma^-_j-2\sigma^+_jQ\sigma^-_j)\rangle \nonumber\\ &+\sum_{j\neq k=1}^2\left(T(x)\langle[\sigma_j^+Q,\sigma^-_k] +~T^*(x)[\sigma^+_j, Q\sigma^-_k]\rangle\right).\nonumber \end{align} Here, $\sigma^-_j=|1\rangle_j\langle 2|_j$ and $\sigma_j^+=|2\rangle_j\langle 1|_j$, with $|1\rangle_j$ and $|2\rangle_j$ being, respectively, the ground and excited states of atom $j$, denote the atomic lowering and raising operators. Furthermore, $\Omega_{j}=\Omega e^{i\vec{k}_L\cdot\vec{r}_j}$ is the Rabi frequency, dependent on the atomic position $\vec{r}_j$, $\delta=\omega_L-\omega_0$ is the laser-atom detuning, and $\gamma$ is half of the Einstein $A$ coefficient. The lower line of Eq.~(\ref{meq}) describes the retarded dipole-dipole interaction, which depends on the dimensionless parameter $x=\omega_0|\vec{r}_1-\vec{r}_2|/c$. Coherent backscattering can be observed in the dilute regime $x\gg 1$, where the atoms are located in the far field of each other and exchange a single photon. Within our toy model, this corresponds to a perturbative treatment of the dipole-dipole interaction up to second order in the interatomic coupling $T(x)$. In the limit $x\gg 1$, the explicit form of the latter reads \begin{equation} T(x)=i\gamma\frac{3}{2}\frac{e^{-ix}}{x}.
\label{dipdip} \end{equation} By inserting into Eq.~(\ref{meq}) operators from the complete two-atom basis set of operators, $Q\in \{\vec{q}_1\otimes\vec{q}_2\}$, with \begin{eqnarray} \vec{q}_j&=&(\hat{I}_j,\sigma^-_j,\sigma^+_j,\sigma^z_j)^T,\nonumber\\ \hat{I}_j&=&\sigma^+_j\sigma^-_j+\sigma^-_j\sigma^+_j, \quad \sigma^z_j=\sigma^+_j\sigma^-_j-\sigma^-_j\sigma^+_j,\end{eqnarray} one obtains a closed linear system of 16 coupled equations for the atomic averages. Since $\langle \hat{I}_1\otimes \hat{I}_2\rangle=1$ for every atomic state (due to normalization), this can be reduced to a 15-dimensional system for the two-atom Bloch vector \begin{equation} \langle\vec{Q}\rangle=(\langle\vec{\sigma}_2\rangle,\langle\vec{\sigma}_1\rangle,\langle\vec{\sigma}_1\otimes\vec{\sigma}_2\rangle)^T \label{eq:vecQ},\end{equation} with \begin{equation} \vec{\sigma}_j=(\sigma^-_j,\sigma^+_j,\sigma^z_j)^T.\end{equation} This system of equations has the following matrix representation: \begin{equation}\langle\dot{\vec{Q}}\rangle=(A+V)\langle \vec{Q}\rangle+\vec{L}_{+,0},\label{matr_eq1}\end{equation} where the matrices $A$ and $V$ describe the evolution of independent and dipole-dipole interacting atoms, respectively: $A$ (as well as the free vector $\vec{L}_{+,0}$) is generated by the two upper lines, and $V$ by the lower line of Eq.~(\ref{meq}). The explicit forms of the matrices $A$ and $V$, as well as of the vector $\vec{L}_{+,0}$, will be given in Sec.~\ref{smatrix}. \subsection{Intensity of coherently backscattered light} \label{sintensity} The solution of Eq.~(\ref{matr_eq1}) not only yields the atomic evolution, but also the field radiated by the atoms.
For example, the average stationary scattered light intensity in the direction of the unit vector $\vec{n}$, \begin{equation} \langle I(\vec{n})\rangle=\lim_{t\to\infty}\langle E^{(-)}(t)E^{(+)}(t)\rangle_{\vec{n}},\end{equation} after omitting unimportant prefactors and neglecting retardation, is given by \begin{equation} \langle I(\vec{n})\rangle=\sum_j\langle\sigma^+_j\sigma^-_j\rangle_{\rm SS}+\sum_{j,k\neq j}\langle\sigma_j^+\sigma^-_k\rangle_{\rm SS} e^{ik\vec{n}\cdot(\vec{r}_j-\vec{r}_k)} \label{eq:in2}, \end{equation} where the subscript means that the expectation value is evaluated in the steady state. Bearing in mind that, in the following, all quantum mechanical expectation values are to be evaluated in the steady state, we will henceforth lighten notation by dropping this subscript. We note that the light intensity, Eq.~(\ref{eq:in2}), radiated by two distant atoms has already been considered previously, for example in \cite{rist08} (where the two atoms are coupled by an additional optical element). However, in the following, our focus will not be on the total intensity, as given by Eq.~(\ref{eq:in2}): with a view to the multiple-scattering scenario in a dilute cloud of atoms, we are rather interested in the double scattering coherent backscattering intensity, which is extracted from Eq.~(\ref{eq:in2}) by an expansion to second order in the atom-atom interaction and a subsequent average over the atomic positions. Zeroth-, first-, and second-order terms of the perturbative expansion of Eq.~(\ref{matr_eq1}) in the coupling $V$ will carry upper indices `(0)', `(1)', and `(2)', respectively. Accordingly, all quantities studied here are quadratic in the dipole-dipole coupling and will be supplied with the superscript `(2)'.
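This perturbative bookkeeping can be illustrated by a minimal numerical sketch (with generic placeholder matrices of reduced dimension, not the actual 15-dimensional Bloch system; all values below are arbitrary stand-ins): the steady state of $(A+V)\langle\vec{Q}\rangle+\vec{L}_{+,0}=0$ is recovered order by order from $\langle\vec{Q}\rangle^{(0)}=-A^{-1}\vec{L}_{+,0}$ and $\langle\vec{Q}\rangle^{(n)}=-A^{-1}V\langle\vec{Q}\rangle^{(n-1)}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6                                           # placeholder dimension (the real system is 15-dimensional)
A = rng.normal(size=(n, n)) - 5.0 * np.eye(n)   # stand-in for the free-evolution matrix A (invertible)
V = 1e-3 * rng.normal(size=(n, n))              # stand-in for the weak dipole-dipole coupling V
L = rng.normal(size=n)                          # stand-in for the free vector L_{+,0}

# Exact steady state of (A + V) Q + L = 0 ...
Q_exact = np.linalg.solve(A + V, -L)

# ... versus the perturbative orders Q^(0) = -A^{-1} L,  Q^(n) = -A^{-1} V Q^(n-1).
Q0 = np.linalg.solve(A, -L)
Q1 = np.linalg.solve(A, -V @ Q0)
Q2 = np.linalg.solve(A, -V @ Q1)

err1 = np.linalg.norm(Q_exact - (Q0 + Q1))       # residual of the first-order truncation, O(V^2)
err2 = np.linalg.norm(Q_exact - (Q0 + Q1 + Q2))  # residual of the second-order truncation, O(V^3)
print(err1, err2)
```

Including the second-order term reduces the truncation residual by a further factor of order $\|A^{-1}V\|$, mirroring the expansion in $T(x)$ used in the text.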
For example, the double scattering intensity is obtained from \begin{equation} \langle I(\vec{n})\rangle^{(2)}=\sum_j\langle\sigma^+_j\sigma^-_j\rangle^{(2)}+\sum_{j,k\neq j}\langle\sigma_j^+\sigma^-_k\rangle^{(2)} e^{ik\vec{n}\cdot(\vec{r}_j-\vec{r}_k)}.\label{eq:d_Int} \end{equation} Eq.~(\ref{eq:d_Int}) does not yet correspond to the CBS intensity. The latter is obtained after the configuration averaging procedure, denoted as $\langle \ldots\rangle_{\rm conf.}$, \begin{align}\langle\langle I(\vec{n})\rangle^{(2)}\rangle_{\rm conf.}&=\sum_j\langle\la\sigma^+_j\sigma^-_j\rangle^{(2)}\rangle_{\rm conf.}\label{eq:CBS_Int}\\ &+\sum_{j,k\neq j}\langle\la\sigma_j^+\sigma^-_k\rangle^{(2)} e^{ik\vec{n}\cdot(\vec{r}_j-\vec{r}_k)}\rangle_{\rm conf.},\nonumber \end{align} and includes two terms: the background, or ladder, intensity, given by the first term on the right-hand side of Eq.~(\ref{eq:CBS_Int}), and the interference, or crossed, intensity, given by the second part of Eq.~(\ref{eq:CBS_Int}). The background and interference CBS intensities in Eq.~(\ref{eq:CBS_Int}) are formed, respectively, by co- and counter-propagating double scattering amplitudes, cf. Fig.~\ref{fig:ladd-cros} below. The background intensity is radiated uniformly into all directions $\vec n$, whereas the interference part contributes only in the backscattering direction, with a small angular width $\Delta\theta\propto 1/x\ll 1$. Thus, the full angular dependence of the coherent backscattering intensity is determined by the background intensity and the height of the interference peak in the exact backscattering direction, i.e., $\vec{n}$ parallel to $-{\vec k}_L$, which we will assume from now on. The configuration averaging procedure includes two stages. In the first stage, averaging over interatomic distances is performed. As a result, the oscillating terms $\propto T^2(x), (T^*(x))^2$ will vanish, while terms $\propto |T(x)|^2$, which vary smoothly with $x$, will be preserved.
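The suppression of the oscillating terms under the distance average can be checked numerically in a short sketch (the averaging window and grid resolution below are arbitrary illustrative choices): a uniform average of $T(x)=\tfrac{3i\gamma}{2}e^{-ix}/x$ over a window wide compared to the oscillation period strongly suppresses $\langle T^2(x)\rangle$ relative to the smooth $\langle |T(x)|^2\rangle$.

```python
import numpy as np

gamma = 1.0
x = np.linspace(1000.0, 1100.0, 200001)  # window of dimensionless distances in the dilute regime, x >> 1
T = 1.5j * gamma * np.exp(-1j * x) / x   # coupling T(x) of Eq. (dipdip)

avg_T2 = np.mean(T**2)                   # oscillates as e^{-2ix}: nearly averages out
avg_absT2 = np.mean(np.abs(T)**2)        # varies smoothly with x: survives the average

print(abs(avg_T2) / avg_absT2)           # ratio << 1
```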
The second stage of configuration averaging includes averaging over random orientations of the radius vector connecting the atoms (to suppress oscillations of $\Omega_j$, $\Omega_j^*$, and most of their combinations). Note, however, that the phase of the oscillating terms $\propto \Omega_j\Omega_k^*$ cancels against that of the phase factor $e^{ik\vec{n}\cdot(\vec{r}_j-\vec{r}_k)}$ in the backward direction (second term of Eq.~(\ref{eq:d_Int})), since $k\approx k_L$. This is precisely the interference CBS contribution surviving the disorder averaging. It will be shown below that the averaging over both distance and orientation can be performed analytically within the master equation approach. Once the average CBS intensity is known, one can assess the main measure of phase coherence, the enhancement factor \cite{jonckheere00}, defined as the interference intensity divided by the background intensity, where, as mentioned above, the interference intensity is evaluated in the exact backward direction, where it assumes its maximum value. In the presence of inelastically scattered photons, a more refined measure of phase coherence is provided by the spectrum of the backscattered light \cite{shatokhin07}. Indeed, the background and interference components of the CBS spectrum indicate the interference character and magnitude as a function of the frequency of the backscattered photon; the enhancement factor follows from them after integrating over the whole frequency distribution. \subsection{Spectrum of coherently backscattered light} \label{subsec:spectrum} The master equation (\ref{meq}) is obviously Markovian; consequently, due to the quantum regression theorem \cite{scully}, the multi-time atomic correlation functions obey the same equation as Eq.~(\ref{matr_eq1}), but with modified initial conditions and free vector.
Among these multi-time correlation functions, we will consider the stationary first-order temporal coherence function of the scattering atomic dipoles: \begin{equation} \Gamma_1(\tau)=\sum_{j,k}\langle\la\sigma^+_j\sigma^-_k(\tau)\rangle^{(2)}e^{i\vec{k}\cdot\vec{r}_{jk}}\rangle_{\rm conf.}\label{eq:corr}, \end{equation} where $\vec{k}\equiv k_L\vec{n}$, and $\vec{r}_{jk}\equiv \vec{r}_j-\vec{r}_k$. \subsubsection{Elastic spectrum} The factorized correlation function of atomic dipoles \begin{align} \Gamma_{1;{\rm el}}(\tau)&=\sum_{j,k}\Bigl\langle\left(\langle\sigma^+_j\rangle^{(0)}\langle\sigma^-_k(\tau)\rangle^{(2)}+\langle\sigma^+_j\rangle^{(1)}\langle\sigma^-_k(\tau)\rangle^{(1)}\right.\Bigr.\nonumber\\ &\Bigl.\left.+\langle\sigma^+_j\rangle^{(2)}\langle\sigma^-_k(\tau)\rangle^{(0)}\right)e^{i\vec{k}\cdot\vec{r}_{jk}}\Bigr\rangle_{\rm conf.}, \end{align} gives the elastic component of the spectrum ($\propto \delta(\omega-\omega_L)$) through the Laplace transform \cite{scully} \begin{equation} S_{\rm el}(\omega)=\frac{\,{\rm Re}\,}{\pi}\int_0^\infty d\tau e^{i\omega\tau}\Gamma_{1;{\rm el}}(\tau),\end{equation} since the temporal evolution of the average dipole lowering operators in the steady state reduces to $e^{-i\omega_L\tau}$. Then \begin{equation} S_{\rm el}(\omega)=I^{(2)}_{\rm el}\delta(\omega-\omega_L),\label{eq:Sel}\end{equation} with \begin{align} I^{(2)}_{\rm el}&=\sum_{j,k=1}^2\Bigl\langle\left(\langle\sigma^+_j\rangle^{(0)}\langle\sigma^-_k\rangle^{(2)}+\langle\sigma^+_j\rangle^{(1)}\langle\sigma^-_k\rangle^{(1)}\right.\Bigr.\nonumber\\ &\Bigl.\left.+\langle\sigma^+_j\rangle^{(2)}\langle\sigma^-_k\rangle^{(0)}\right)e^{i\vec{k}\cdot\vec{r}_{jk}}\Bigr\rangle_{\rm conf.}.\label{eq:Sp_el}\end{align} The expression inside $\langle\ldots\rangle_{\rm conf.}$ explicitly depends on the coordinates of atoms $j$ and $k$. However, after the configuration averaging, the result must be entirely symmetric with respect to the interchange of the indices $j\leftrightarrow k$.
Hence, instead of Eq.~(\ref{eq:Sp_el}), we can use shortened expressions. Let us write these expressions down separately for the background ($L_{\rm el}$) and interference ($C_{\rm el}$) contributions, assuming exactly the backward direction of observation ($\vec{k}=-\vec{k}_L$): \begin{align} L_{\rm el}&=2\langle\la\sigma_2^+\rangle^{(0)}\langle\sigma_2^-\rangle^{(2)}+\langle\sigma_2^+ \rangle^{(1)}\langle\sigma_2^-\rangle^{(1)}\nonumber\\ &+\langle\sigma_2^+\rangle^{(2)}\langle\sigma_2^-\rangle^{(0)}\rangle_{\rm conf.},\label{eq:el-ladd-1}\\ C_{\rm el}&=2\langle(\langle\sigma_1^-\rangle^{(0)}\langle\sigma_2^+\rangle^{(2)} +\langle\sigma_1^-\rangle^{(1)}\langle\sigma_2^+\rangle^{(1)}\nonumber\\ &+\langle\sigma_1^-\rangle^{(2)}\langle\sigma_2^+\rangle^{(0)})e^{i\vec{k}_L\cdot\vec{r}_{12}}\rangle_{\rm conf.}.\label{eq:el-cros-1} \end{align} Note that in Eq.~(\ref{eq:el-ladd-1}), we have fixed $j=k=2$, while in Eq.~(\ref{eq:el-cros-1}), $j=2$ and $k=1$. \subsubsection{Inelastic spectrum} \label{sec:inelastic-1} In order to evaluate the inelastic spectrum of the backscattered light, let us introduce the time-dependent vector \begin{equation} \Delta\vec{s}_j(\tau)\equiv \vec{s}_j(\tau)-\langle\sigma^+_j\rangle\langle\vec{Q}(\tau)\rangle,\end{equation} where $\vec{s}_j(\tau) \equiv \langle\sigma^+_j\vec{Q}(\tau)\rangle$. According to the quantum regression theorem \cite{scully}, the vector $\Delta\vec{s}_j$ obeys the equation \begin{equation} \Delta \dot{\vec{s}}_j =( A+V)\Delta\vec{s}_j\label{eq:vecSj}\end{equation} with the initial condition \begin{equation} \Delta\vec{s}_j(0)=\langle\sigma^+_j\vec{Q}\rangle-\langle\sigma_j^+\rangle\langle \vec{Q}\rangle. 
\label{eq:init_Dsj}\end{equation} The inelastic spectrum follows from the fluctuating part of the correlation function, Eq.~(\ref{eq:corr}): \begin{align} \Delta \Gamma_{1;{\rm inel}}(\tau)&= \Gamma_{1}(\tau)-\Gamma_{1;{\rm el}}(\tau)\nonumber\\ & = \langle [\Delta\vec{s}^{\;(2)}_1(\tau)]_4+[\Delta\vec{s}^{\;(2)}_2(\tau)]_1\label{corr_in}\\ &+[\Delta\vec{s}^{\;(2)}_1(\tau)]_1e^{-i\vec{k}_L\cdot\vec{r}_{12}}+[\Delta\vec{s}^{\;(2)}_2(\tau)]_4e^{i\vec{k}_L\cdot\vec{r}_{12}}\rangle_{\rm conf.}\nonumber\end{align} via Fourier transformation \begin{equation} S_{\rm inel}(\nu)=\frac{\,{\rm Re}\,}{\pi}\int^\infty_0 d\tau e^{i\nu\tau} \Delta \Gamma_{1;{\rm inel}}(\tau), \label{eq:Sinel}\end{equation} where $\nu=\omega-\omega_L$ denotes the detuning with respect to the laser frequency, and the fact that Eq.~(\ref{matr_eq1}) is written in the rotating frame is taken into account. The indices $[\dots]_1$ and $[\dots]_4$ in Eq.~(\ref{corr_in}) refer to the components $\sigma_2^-$ and $\sigma_1^-$ of the 15-dimensional vector $\vec{Q}$, see Eq.~(\ref{eq:vecQ}).
Employing the same symmetry argument as we did when writing down the expressions for the elastic contributions, let us write down the corresponding expressions for the inelastic background and interference spectra: \begin{align} L_{\rm inel}(\nu)&=\frac{2\,{\rm Re}\,}{\pi}\int^\infty_0 d\tau e^{i\nu\tau}\langle[\Delta\vec{s}^{\;(2)}_2(\tau)]_1\rangle_{\rm conf.}\label{eq:Linel}\\ C_{\rm inel}(\nu)&=\frac{2\,{\rm Re}\,}{\pi}\int^\infty_0 d\tau e^{i\nu\tau}\langle[\Delta\vec{s}^{\;(2)}_2(\tau)]_4e^{i\vec{k}_L\cdot\vec{r}_{12}}\rangle_{\rm conf.}\label{eq:Cinel}\end{align} Eqs.~(\ref{eq:Linel}) and (\ref{eq:Cinel}) can be cast into a more convenient form by using Laplace transforms (which exist even for the purely imaginary Laplace variable $z=-i\nu$ because, as a consequence of the fluctuation-dissipation theorem \cite{landau-V}, the fluctuating parts of the atomic correlation functions decay exponentially with time). One obtains \begin{align} L_{\rm inel}(\nu)&=\frac{2\,{\rm Re}\,}{\pi}\langle[\Delta\vec{\tilde{s}}_2^{\;(2)}(-i\nu)]_1\rangle_{\rm conf.},\label{L-inel_lap}\\ C_{\rm inel}(\nu)&=\frac{2\,{\rm Re}\,}{\pi}\langle[\Delta\vec{\tilde{s}}_2^{\;(2)}(-i\nu)]_4e^{i\vec{k}_L\cdot\vec{r}_{12}}\rangle_{\rm conf.},\label{C-inel-lap} \end{align} where $\tilde{x}(z)=\int_0^\infty dt \exp(-zt)x(t)$ indicates the Laplace image of $x(t)$. It follows from Eqs.~(\ref{eq:el-ladd-1}), (\ref{eq:el-cros-1}), (\ref{L-inel_lap}) and (\ref{C-inel-lap}) that finding the elastic and inelastic spectra of CBS requires the solution of Eqs.~(\ref{matr_eq1}) and (\ref{eq:vecSj}), expanded up to second order in the dipole-dipole interaction matrix $V$, with subsequent configuration averaging of the result. Both of these tasks can be accomplished by exploiting the tensor structure of the matrices $A$ and $V$, to be discussed in the next section.
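To illustrate how such Laplace images of fluctuation vectors produce inelastic spectra, the following numerical sketch applies the same construction to a single driven atom, i.e., to the $3\times 3$ Bloch matrix of Eq.~(\ref{eq:BlochM}) instead of the full two-atom system (parameter values are arbitrary illustrative choices). One can check from the definitions that the resulting spectrum integrates to the total inelastic intensity $[\Delta\vec{s}(0)]_1$:

```python
import numpy as np

gamma, delta, Omega = 1.0, 0.0, 6.0   # illustrative values, in units of gamma

# Single-atom Bloch matrix, Eq. (BlochM), and free vector L.
M = np.array([[-gamma + 1j * delta, 0, -1j * Omega / 2],
              [0, -gamma - 1j * delta, 1j * np.conj(Omega) / 2],
              [-1j * np.conj(Omega), 1j * Omega, -2 * gamma]])
L = np.array([0, 0, -2 * gamma])

sm, sp, sz = np.linalg.solve(M, -L)   # steady state (<s->, <s+>, <sz>)

# Initial fluctuation vector Delta s(0) = <s+ q> - <s+><q> for q = (s-, s+, sz),
# using the operator products s+s- = (1+sz)/2, s+s+ = 0, s+sz = -s+.
ds0 = np.array([(1 + sz) / 2 - sp * sm, -sp**2, -sp - sp * sz])

def S_inel(nu):
    # (1/pi) Re of the s- component of the Laplace image (zI - M)^{-1} ds0 at z = -i nu
    return np.real(np.linalg.solve(-1j * nu * np.eye(3) - M, ds0)[0]) / np.pi

nu = np.linspace(-300, 300, 12001)
S = np.array([S_inel(v) for v in nu])

# Sum rule: the inelastic spectrum integrates to [Delta s(0)]_1 = (1+<sz>)/2 - |<s->|^2.
total = np.sum(S) * (nu[1] - nu[0])
print(total, np.real(ds0[0]))
```

The two-atom quantities of Eqs.~(\ref{L-inel_lap}) and (\ref{C-inel-lap}) additionally require the second-order expansion in $V$ and the configuration average, but the resolvent step is the same.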
\subsection{Structure of the evolution matrices} \label{smatrix} Let us now study the structure of the matrices $A$ and $V$ introduced in Eq.~(\ref{matr_eq1}), referring to the evolution of non-interacting and interacting atoms, respectively. \subsubsection{Free evolution: matrix A} \label{smatrixa} With the ordering of the 15-dimensional vector defined by Eq.~(\ref{eq:vecQ}), the matrix $A$ acquires the following block structure: \begin{equation} A=\left(\begin{array}{cc} M_+& 0\\ L_\times&M_{\times}\end{array}\right).\label{eq:defA}\end{equation} The block matrix $M_+$ has dimensions $6\times 6$; it describes the individual evolution of atoms 2 and 1 independently of each other: \begin{equation} M_+=M_2\oplus M_1, \end{equation} with the matrix $M_j$ being the standard Bloch matrix for a single atom, generated by the upper two lines of Eq.~(\ref{meq}): \begin{equation} M_j=\left(\begin{array}{ccc}-\gamma+i\delta&0&-i\Omega_j/2\\ 0&-\gamma-i\delta&i\Omega_j^*/2\\-i\Omega_j^*&i\Omega_j&-2\gamma\end{array}\right).\label{eq:BlochM}\end{equation} We recall that the optical Bloch equation for a single atom reads \cite{carmichael} \begin{equation} \langle\dot{\vec{\sigma_j}}\rangle=M_j\langle\vec{\sigma_j}\rangle+\vec{L}, \quad \vec{L}=(0,0,-2\gamma)^T.\label{eq:Bloch_j}\end{equation} The $9\times 9$ block $M_\times$ describes the evolution of two-atom correlation functions of uncoupled atoms and, hence, reads \begin{equation} M_{\times}=M_1\otimes I_2+I_1\otimes M_2,\end{equation} where $I_j$ is the $3\times 3$ unit matrix. In the following, we will drop the indices of the matrices unless the latter differ for atoms 1 and 2. Accordingly, the non-diagonal $9\times 6$ block matrix $L_\times$ can be written as \begin{equation} L_\times=(\vec{L}\otimes I\;\;I\otimes\vec{L}), \label{eq:L_times}\end{equation} with $\vec{L}$ defined in Eq.~(\ref{eq:Bloch_j}).
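These block formulas translate directly into code. A minimal numerical sketch (with arbitrary illustrative parameters and atomic phases) assembles $A$ from the single-atom blocks via Kronecker products and verifies that, for uncoupled atoms ($V=0$), the steady state of Eq.~(\ref{matr_eq1}) factorizes, $\langle\vec{\sigma}_1\otimes\vec{\sigma}_2\rangle=\langle\vec{\sigma}_1\rangle\otimes\langle\vec{\sigma}_2\rangle$:

```python
import numpy as np

def bloch_matrix(gamma, delta, Om):
    """Single-atom Bloch matrix M_j of Eq. (BlochM)."""
    return np.array([[-gamma + 1j * delta, 0, -1j * Om / 2],
                     [0, -gamma - 1j * delta, 1j * np.conj(Om) / 2],
                     [-1j * np.conj(Om), 1j * Om, -2 * gamma]])

gamma, delta, Omega = 1.0, 0.5, 2.0                    # illustrative values
Om1, Om2 = Omega * np.exp(0.3j), Omega * np.exp(1.1j)  # Omega_j = Omega e^{i k_L . r_j}, phases arbitrary
M1, M2 = bloch_matrix(gamma, delta, Om1), bloch_matrix(gamma, delta, Om2)
L = np.array([0.0, 0.0, -2 * gamma])
I3 = np.eye(3)

# Blocks of A: M_+ = M_2 (+) M_1,  M_x = M_1 (x) I + I (x) M_2,  L_x = (L (x) I, I (x) L),
# with the ordering of Q given by (<sigma_2>, <sigma_1>, <sigma_1 (x) sigma_2>).
M_plus = np.block([[M2, np.zeros((3, 3))], [np.zeros((3, 3)), M1]])
M_cross = np.kron(M1, I3) + np.kron(I3, M2)
L_cross = np.hstack([np.kron(L.reshape(3, 1), I3), np.kron(I3, L.reshape(3, 1))])
A = np.block([[M_plus, np.zeros((6, 9))], [L_cross, M_cross]])
L_plus0 = np.concatenate([L, L, np.zeros(9)])          # free vector (L, L, 0, ..., 0)

# Steady state of the uncoupled system, A <Q> + L_{+,0} = 0 ...
Q = np.linalg.solve(A, -L_plus0)
# ... factorizes for independent atoms: <sigma_1 (x) sigma_2> = <sigma_1> (x) <sigma_2>.
q1, q2 = np.linalg.solve(M1, -L), np.linalg.solve(M2, -L)
print(np.allclose(Q[6:], np.kron(q1, q2)))
```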
Finally, it is appropriate to specify here the vector $\vec{L}_{+,0}$ introduced in Eq.~(\ref{matr_eq1}): \begin{equation} \vec{L}_{+,0}=(\underbrace{\vec{L}^{\,T},\vec{L}^{\,T}}_{\equiv\vec{L}^{\,T}_+},\underbrace{0,\ldots,0}_{9\;\text{zeroes}})^T.\label{eq:vecL_+}\end{equation} To conclude, we have defined all ingredients of the composite system of two non-interacting atoms in terms of single-atom Bloch vectors and their tensor products. We will next do the same for the interacting atoms. \subsubsection{Dipole-dipole interaction: matrix V} \label{smatrixv} We recall that the matrix $V$ is generated by the third line of Eq.~(\ref{meq}). It naturally splits into four components: \begin{equation} V=\underbrace{V_{12}+V_{21}}_{\propto T(x)}+\underbrace{V_{12}^*+V_{21}^*}_{\propto T^*(x)},\label{eq:matrV_1}\end{equation} where the first and second indices in the subscripts coincide, respectively, with the values of the indices $j$ and $k$ in Eq.~(\ref{meq}). The four matrices $V_{12}$, $V^*_{12}$, $V_{21}$ and $V_{21}^*$ describe four elementary excitation-exchange processes between the atoms, which are depicted diagrammatically in Fig.~\ref{fig:processes}. \begin{figure} \includegraphics[width=5cm]{elem_dip_dip.eps} \caption{Diagrammatic representation of the elementary dipole-dipole interaction processes between two atoms (black spots correspond to their electronic states \emph{after} the interaction) with the respective terms of the matrix V, see Eq.~(\ref{eq:matrV_1}). Solid (dashed) arrows depict positive (negative) frequency photons.
(a),(d): excitation is lost (gained) by atom 1 (2); (b),(c): same in reversed order.} \label{fig:processes} \end{figure} Furthermore, we associate terms $\propto T^*(x)$ (or $T(x)$) with positive (or negative) frequency photons, in analogy with the single-atom case, see Eq.~(\ref{eq:Bloch_j}), where $\langle\sigma_j^-\rangle$ (or $\langle\sigma_j^+\rangle$) are coupled to $\langle\sigma_j^z\rangle$ by a coefficient proportional to the positive (or negative) frequency component of the laser field $\Omega_j$ (or $\Omega_j^*$). Likewise, in the present two-atom case, it is easy to show that the expectation values $\langle\sigma_j^-\rangle$ (or $\langle\sigma_j^+\rangle$) are coupled to $T^*(x)\langle\sigma_j^-\sigma_k^z\rangle$ (or $T(x)\langle\sigma_j^+\sigma_k^z\rangle$). Let us proceed with the analysis of the structure of the matrix $V$. Each of the four components of the matrix $V$, Eq.~(\ref{eq:matrV_1}), has the following block structure: \begin{equation} V_{jk}= \left(\begin{array}{cc} 0& (V_{jk})_\urcorner\\ (V_{jk})_\llcorner&(V_{jk})_\times\end{array}\right)\label{eq:V_2},\end{equation} where the dimensions of the blocks $(V_{jk})_\urcorner$, $(V_{jk})_\llcorner$, and $(V_{jk})_\times$ are $6\times 9$, $9\times 6$, and $9\times 9$, respectively. From Eqs.~(\ref{meq}), (\ref{eq:matrV_1}) and (\ref{eq:V_2}), it is straightforward to obtain explicit expressions for the matrices $(V_{jk})_\alpha$ and $(V^*_{jk})_\alpha$ ($j\neq k=1,2$, $\alpha=\urcorner,\llcorner,\times$). We will instead introduce them implicitly -- through their action on probe vectors. Let $\vec{a}_1$ and $\vec{a}_2$ be 3-component column vectors. 
Using these vectors we create two `probe' vectors \begin{equation}\vec{a}_1\otimes\vec{a}_2,\quad \left(\begin{array}{c}\vec{a}_2\\\vec{a}_1\end{array}\right).\label{probe_vectors}\end{equation} The interaction matrices are now characterized by the following identities: \begin{subequations} \begin{eqnarray} (V_{12})_\urcorner(\vec{a}_1\otimes\vec{a}_2)&=&\left(\begin{array}{c}2iT\Delta^{(+)}\vec{a}_2(\vec{a}_1)_2\\\vec{0}\end{array}\right),\label{id-V12-ur}\\ (V_{21})_\urcorner(\vec{a}_1\otimes\vec{a}_2)&=&\left(\begin{array}{c}\vec{0}\\2iT\Delta^{(+)}\vec{a}_1(\vec{a}_2)_2\end{array}\right),\label{id-V21-ur}\\ (V_{12}^*)_\urcorner(\vec{a}_1\otimes\vec{a}_2)&=&\left(\begin{array}{c}\vec{0}\\-2iT^*\Delta^{(-)}\vec{a}_1(\vec{a}_2)_1\end{array}\right),\label{id-V12*-ur}\\ (V_{21}^*)_\urcorner(\vec{a}_1\otimes\vec{a}_2)&=&\left(\begin{array}{c}-2iT^*\Delta^{(-)} \vec{a}_2(\vec{a}_1)_1\\ \vec{0}\end{array}\right),\label{id-V21*-ur}\\ (V_{12})_\times(\vec{a}_1\otimes\vec{a}_2)&=&-2T\Delta^{(-)}\vec{a}_1\otimes\Delta^{(+)}\vec{a}_2,\label{id-V12-t}\\ (V_{21})_\times(\vec{a}_1\otimes\vec{a}_2)&=&-2T\Delta^{(+)}\vec{a}_1\otimes\Delta^{(-)}\vec{a}_2,\label{id-V21-t}\\ (V_{12}^*)_\times(\vec{a}_1\otimes\vec{a}_2)&=&-2T^*\Delta^{(-)}\vec{a}_1\otimes\Delta^{(+)}\vec{a}_2,\label{id-V12*-t}\\ (V_{21}^*)_\times(\vec{a}_1\otimes\vec{a}_2)&=&-2T^*\Delta^{(+)}\vec{a}_1\otimes\Delta^{(-)}\vec{a}_2,\label{id-V21*-t}\\ (V_{12})_\llcorner\left(\begin{array}{c}\vec{a}_2\\\vec{a}_1\end{array}\right)&=&\vec{n}_1\otimes(2iT\Delta^{(+)}\vec{a}_2),\label{id-V12-ll}\\ (V_{21})_\llcorner\left(\begin{array}{c}\vec{a}_2\\\vec{a}_1\end{array}\right)&=&2iT\Delta^{(+)}\vec{a}_1\otimes\vec{n}_1,\label{id-V21-ll}\\ (V_{12}^*)_\llcorner\left(\begin{array}{c}\vec{a}_2\\\vec{a}_1\end{array}\right)&=& (-2iT^*\Delta^{(-)}\vec{a}_1)\otimes\vec{n}_2,\label{id-V12*-ll}\\ 
(V_{21}^*)_\llcorner\left(\begin{array}{c}\vec{a}_2\\\vec{a}_1\end{array}\right)&=&\vec{n}_2\otimes(-2iT^*\Delta^{(-)}\vec{a}_2),\label{id-V21*-ll} \end{eqnarray} \label{eq:all_V} \end{subequations} where $(\vec{a}_1)_i$ and $(\vec{a}_2)_i$ (and similarly for other three-component vectors in the following) refer to the $i$-th component ($i=1,2,3$) in the basis given by the choice of the single-atom Bloch vector $(\langle\sigma^-\rangle,\langle\sigma^+\rangle,\langle\sigma^z\rangle)$. Furthermore, the argument $x$ in the coupling constants is dropped for brevity, $\vec{n}_1=(\frac{1}{2},0,0)^T$, $\vec{n}_2=(0,\frac{1}{2},0)^T$, $\vec{0}=(0,0,0)^T$, and \begin{equation} \Delta^{(-)}=\left(\begin{array}{ccc}0&0&-i/2\\0&0&0\\0&i&0\end{array}\right),\;\; \Delta^{(+)}=\left(\begin{array}{ccc}0&0&0\\0&0&i/2\\-i&0&0\end{array}\right).\label{eq:defDmDp} \end{equation} Note that the matrices $V_\urcorner$, $V_\times$, and $V_\llcorner$ transform the probe vectors, Eq.~(\ref{probe_vectors}), to new vectors which can again be represented in the form of Eq.~(\ref{probe_vectors}). The matrices $\Delta^{(-)}$ and $\Delta^{(+)}$ describe the coupling of the Bloch vector of an individual atom to the other atom -- via the two-atom correlation functions -- and the coupling of the latter back to the Bloch vector of the other atom. Remarkably, exactly the same matrices describe the coupling of a single atom to the classical laser field, see the components proportional to $\Omega_j$ and $\Omega_j^*$ in Eq.~(\ref{eq:BlochM}). This is not a mere coincidence. We will prove below that, in the far-field limit, where the atoms exchange only a single photon, the radiation of one atom onto the other is equivalent to that of a classical field with the corresponding frequency distribution.
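Since the identities (\ref{id-V12-t})--(\ref{id-V21*-t}) hold for arbitrary $\vec{a}_1$, $\vec{a}_2$, the `$\times$' blocks act as scaled Kronecker products of $\Delta^{(-)}$ and $\Delta^{(+)}$, and the identities follow from the mixed-product property $(A\otimes B)(\vec{a}_1\otimes\vec{a}_2)=A\vec{a}_1\otimes B\vec{a}_2$. A minimal numerical sketch (the value of the coupling $T$ below is an arbitrary placeholder):

```python
import numpy as np

# Delta^(-) and Delta^(+) as defined in Eq. (eq:defDmDp)
Dm = np.array([[0, 0, -0.5j], [0, 0, 0], [0, 1j, 0]])
Dp = np.array([[0, 0, 0], [0, 0, 0.5j], [-1j, 0, 0]])

rng = np.random.default_rng(0)
a1 = rng.normal(size=3) + 1j * rng.normal(size=3)
a2 = rng.normal(size=3) + 1j * rng.normal(size=3)
T = 0.3 + 0.1j  # placeholder value of the coupling constant T(x)

# Eq. (id-V12-t) fixes (V_12)_x = -2T (Dm kron Dp) on product vectors
V12x = -2 * T * np.kron(Dm, Dp)
lhs = V12x @ np.kron(a1, a2)
rhs = np.kron(-2 * T * (Dm @ a1), Dp @ a2)
assert np.allclose(lhs, rhs)
```

The same mixed-product argument shows why the probe-vector structure of Eq.~(\ref{probe_vectors}) is preserved under the action of the interaction blocks.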
\section{The elastic component of CBS} \label{sec:elastic} As is obvious from Eqs.~(\ref{eq:el-ladd-1}) and (\ref{eq:el-cros-1}), the elastic component of the spectrum results from the steady state of the two-atom Bloch vector, see Eq.~(\ref{matr_eq1}): \begin{equation} \langle\vec{Q}\rangle=-(A+V)^{-1}\vec{L}_{+,0},\label{eq:StatSol}\end{equation} expanded in different powers of the dipole-dipole interaction $V$ and finally configuration-averaged. \subsection{Recurrence relations} \label{srecurrence} First, let us perform the expansion in powers of $V$. For this purpose, we split the 15-dimensional vector $\langle\vec{Q}\rangle$ into a 6- and a 9-dimensional vector characterizing, respectively, the single-atom Bloch vectors and the correlations between both atoms: \begin{equation} \label{def:x-y} \vec{x}^{\;(n)}\equiv \left(\begin{array}{c}\langle\vec{\sigma}_2\rangle\\ \langle\vec{\sigma}_1\rangle\end{array}\right)^{(n)},\quad \vec{y}^{\;(n)}\equiv\langle\vec{\sigma}_1\otimes\vec{\sigma}_2\rangle^{(n)}, \end{equation} where $n$ corresponds to the power in the series expansion in $V$. By using the block structure of the matrix $A+V$ and applying to it the formula for the inversion of block matrices \cite{horn} (see \ref{sec:inversionA}), one can show that these vectors obey the following system of recurrence relations: \begin{subequations} \label{recurrence} \begin{align} \vec{x}^{\;(n)}&=G_+V_\urcorner\vec{y}^{\;(n-1)}\label{eq:s2+s1-n}\\ \vec{y}^{\;(n)}&=G_\times V_\llcorner\vec{x}^{\;(n-1)} +G_\times V_\times\vec{y}^{\;(n-1)} +G_\times L_\times\vec{x}^{\;(n)}\label{eq:s1*s2-n}, \end{align}\end{subequations} with $G_+=-M_+^{-1}$, $G_\times=-M_\times^{-1}$, $V_\alpha=(V_{12})_\alpha+(V_{21})_\alpha+(V^*_{12})_\alpha+(V^*_{21})_\alpha$, $\alpha=\urcorner$, $\llcorner$, $\times$.
The initial condition reads: \begin{equation} \vec{x}^{\;(0)}=\left(\begin{array}{c}G_2\vec{L}\\G_1\vec{L}\end{array}\right),\;\; {\rm and}\;\; \vec{x}^{\;(n)}=0,\; \vec{y}^{\;(n)}=0,\;\; {\rm for}\;\;n<0, \label{x-0} \end{equation} where $G_j=-M_j^{-1}$. The relations (\ref{recurrence}) represent a specific case of the general recurrence relations for the sub-blocks of the matrix $(z-A-V)^{-1}$, to be considered below in Sec.~\ref{sec:Inel-general}. \subsection{Steady state solutions} \label{sec:elastic1} Using the recurrence relations of Sec.~\ref{srecurrence}, we will now explicitly calculate the steady state for fixed atomic coordinates, before we perform the configuration average in Sec.~\ref{sconfigav}. From Eqs.~(\ref{def:x-y}) and (\ref{x-0}), we obtain the Bloch vectors for two non-interacting atoms ($n=0$):\begin{equation} \langle\vec{\sigma}_2\rangle^{(0)}=G_2\vec{L},\quad \langle\vec{\sigma}_1\rangle^{(0)}=G_1\vec{L}\label{eq:vs_2-0}.\end{equation} The corresponding result for the correlations involves, according to Eq.~(\ref{eq:s1*s2-n}), the matrix $G_\times$, see also Eq.~(\ref{eq:s1*s2-0}). In order to reduce all calculations to the subspaces of atoms 1 and 2, we shall use the following integral representation of the matrix $G_\times$, proven in \ref{sec:int-repr-G}: \begin{equation} G_\times=\int_{-\infty}^{\infty}\frac{d\omega'}{2\pi} G_1(\pm i\omega')\otimes G_2(\mp i\omega')\label{eq:int_repr_G_times},\end{equation} where $G_{1,2}(\pm i\omega')=(\pm i\omega'-M_{1,2})^{-1}$. The $\pm$ ($\mp$) signs in the integrand indicate that the result is independent of this choice, as long as the two factors carry opposite signs.
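Before applying these tools, the expansion generated by Eqs.~(\ref{recurrence}) and (\ref{x-0}) can be checked numerically against the exact steady state (\ref{eq:StatSol}). The sketch below uses generic stand-in matrices with the block dimensions of the text (the actual $M_+$, $M_\times$, $L_\times$ and interaction blocks are not reproduced; the small scaling of the interaction blocks merely ensures convergence of the series):

```python
import numpy as np

rng = np.random.default_rng(1)
def rnd(m, n, s):
    return s * rng.normal(size=(m, n))

# generic stand-ins with the block dimensions of the text
Mp = -(2*np.eye(6) + rnd(6, 6, 0.1))      # stands for M_+
Mx = -(2*np.eye(9) + rnd(9, 9, 0.1))      # stands for M_x
Lx = rnd(9, 6, 0.1)                       # stands for L_x
Vur, Vll, Vxx = rnd(6, 9, 0.05), rnd(9, 6, 0.05), rnd(9, 9, 0.05)
Lp = rng.normal(size=6)                   # stands for L_+

Gp, Gx = -np.linalg.inv(Mp), -np.linalg.inv(Mx)

# recurrence (recurrence) with initial condition (x-0)
x = [Gp @ Lp]
y = [Gx @ Lx @ x[0]]
for n in range(1, 40):
    xn = Gp @ Vur @ y[n-1]
    y.append(Gx @ (Vll @ x[n-1] + Vxx @ y[n-1] + Lx @ xn))
    x.append(xn)

# compare with the direct steady state -(A+V)^{-1} L_{+,0}
AV = np.block([[Mp, Vur], [Lx + Vll, Mx + Vxx]])
Q = -np.linalg.solve(AV, np.concatenate([Lp, np.zeros(9)]))
assert np.allclose(np.concatenate([sum(x), sum(y)]), Q)
```

Summing the orders indeed reproduces the direct block inversion, confirming the structure of the expansion.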
Application of the integral representation Eq.~(\ref{eq:int_repr_G_times}) to Eq.~(\ref{eq:s1*s2-0}) yields: \begin{align} \langle\vec{\sigma}_1\otimes\vec{\sigma}_2\rangle^{(0)} & =\int_{-\infty}^{\infty}\frac{d\omega'}{2\pi} G_1(i\omega')\vec{L}\otimes G_2(-i\omega')G_2\vec{L}\nonumber\\ & +\int_{-\infty}^{\infty}\frac{d\omega'}{2\pi} G_1(i\omega')G_1\vec{L}\otimes G_2(-i\omega')\vec{L}.\label{sum-rule-1} \end{align} Eq.~(\ref{sum-rule-1}) can be evaluated by means of the general {\it sum rule} valid for $\,{\rm Re}\,[z]=0$ (see proof in \ref{sec:sum-rule}): \begin{align} &\int_{-\infty}^{\infty}\frac{d\omega'}{2\pi}\ldots G_1\left(\frac{z}{2}+i\omega'\right)\ldots G_2\left(\frac{z}{2}-i\omega'\right)G_2(z)\ldots\nonumber \\ +&\int_{-\infty}^{\infty}\frac{d\omega'}{2\pi}\ldots G_1\left(\frac{z}{2}+i\omega'\right)G_1\ldots G_2\left(\frac{z}{2}-i\omega'\right)\ldots\nonumber\\ &=\ldots G_1\ldots G_2(z)\ldots,\label{eq:general_sum_rule} \end{align} where $\ldots$ stand for arbitrary expressions which do not depend on $\omega'$ and are identical in all three lines of Eq.~(\ref{eq:general_sum_rule}). Applying this sum rule to Eq.~(\ref{sum-rule-1}) and using Eq.~(\ref{eq:vs_2-0}) yields: \begin{equation}\langle\vec{\sigma}_1\otimes\vec{\sigma}_2\rangle^{(0)}= \langle\vec{\sigma}_1\rangle^{(0)}\otimes\langle\vec{\sigma}_2\rangle^{(0)},\label{vs1*vs2-0}\end{equation} which is the obvious result for two non-interacting atoms. The first-order correction $\langle\vec{\sigma}_2\rangle^{(1)}$ to the Bloch vector of atom 2 can be found from Eqs.~(\ref{recurrence}). Thereby, we arrive at \begin{align} \langle\vec{\sigma}_2\rangle^{(1)}&=2iT_{12}(G_1\vec{L})_2G_2\Delta^{(+)} G_2\vec{L}\nonumber\\ &-2iT_{21}^*(G_1\vec{L})_1G_2\Delta^{(-)} G_2\vec{L},\label{eq:vs_2^1}\end{align} where Eqs.~(\ref{id-V12-ur}), (\ref{id-V21*-ur}), and (\ref{recurrence}) have been used.
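For $z=0$ and the outer structure of Eq.~(\ref{sum-rule-1}), the sum rule (\ref{eq:general_sum_rule}) can be verified by direct numerical integration. The check below uses generic stable stand-in matrices for $M_{1,2}$ and a discretized frequency integral (the cutoff and step are arbitrary numerical choices):

```python
import numpy as np

rng = np.random.default_rng(3)
# generic stable stand-ins for M_1, M_2 (eigenvalues in the left half plane)
M1 = -(np.eye(3) + 0.1 * rng.normal(size=(3, 3)))
M2 = -(np.eye(3) + 0.1 * rng.normal(size=(3, 3)))
G1, G2 = -np.linalg.inv(M1), -np.linalg.inv(M2)
L = rng.normal(size=3)
I = np.eye(3)

# discretized frequency integral of Eq. (sum-rule-1), i.e. the sum rule at z = 0
dw, W = 0.05, 1000.0
acc = np.zeros(9, dtype=complex)
for wp in np.arange(-W, W, dw):
    g1 = np.linalg.inv(1j * wp * I - M1)    # G_1(i w')
    g2 = np.linalg.inv(-1j * wp * I - M2)   # G_2(-i w')
    acc += np.kron(g1 @ L, g2 @ (G2 @ L)) + np.kron(g1 @ (G1 @ L), g2 @ L)
lhs = acc * dw / (2 * np.pi)

# sum rule: the two integrals combine into G_1 L (kron) G_2 L
assert np.allclose(lhs, np.kron(G1 @ L, G2 @ L), atol=1e-2)
```

The result is exactly the factorized form of Eq.~(\ref{vs1*vs2-0}), $\langle\vec{\sigma}_1\rangle^{(0)}\otimes\langle\vec{\sigma}_2\rangle^{(0)}$, up to the discretization error of the frequency integral.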
The first-order correction to the Bloch vector of the other atom reads, accordingly: \begin{align}\langle\vec{\sigma}_1\rangle^{(1)}&=2iT_{21}(G_2\vec{L})_2G_1\Delta^{(+)} G_1\vec{L}\nonumber\\ &-2iT_{12}^*(G_2\vec{L})_1G_1\Delta^{(-)} G_1\vec{L}.\label{eq:vs_1^1}\end{align} In order to be able to associate each term in Eqs.~(\ref{eq:vs_2^1}) and (\ref{eq:vs_1^1}) with the photon exchange processes depicted in Fig.~\ref{fig:processes}, we have supplemented the coupling strength $T$ with the corresponding indices, although their values are identical, i.e. $T_{12}=T_{21}=T$. As evident from Fig.~\ref{fig:processes}, and in agreement with Eqs.~(\ref{eq:vs_2^1}) and (\ref{eq:vs_1^1}), atom 2 is affected by processes (a) and (d), corresponding to $T_{12}$ and $T_{21}^*$, whereas atom 1 is affected by (b) and (c), corresponding to $T_{21}$ and $T_{12}^*$. Similarly, we have derived the first-order correction to the correlation function, $\langle\vec{\sigma}_1\otimes\vec{\sigma}_2\rangle^{(1)}$, and the second-order correction $\langle\vec{\sigma}_2\rangle^{(2)}$ (see \ref{sec:explicit-corrections}). The correction $\langle\vec{\sigma}_1\rangle^{(2)}$ follows from the expression (\ref{eq:vs_2^2}) for $\langle\vec{\sigma}_2\rangle^{(2)}$ by interchanging the atomic indices $1\leftrightarrow 2$. \subsection{Configuration average: ladder and crossed intensity} \label{sconfigav} We now have all ingredients to derive the elastic background and interference contributions $L_{\rm el}$ and $C_{\rm el}$ to the average backscattered intensity given by Eqs.~(\ref{eq:el-ladd-1}) and (\ref{eq:el-cros-1}). Indeed, these ingredients, $\langle\vec{\sigma}_{1,2}\rangle^{(0)}$, $\langle\vec{\sigma}_{1,2}\rangle^{(1)}$, and $\langle\vec{\sigma}_{1,2}\rangle^{(2)}$, are given by Eqs.~(\ref{eq:vs_2-0}), (\ref{eq:vs_2^1}), (\ref{eq:vs_1^1}) and (\ref{eq:vs_2^2}), respectively.
We should finally perform configuration averaging of these expressions over the coordinates of both atoms. First, let us recall that both $L_{\rm el}$ and $C_{\rm el}$ are of second order in the interatomic couplings $T(x)$ and $T^*(x)$. These couplings rapidly oscillate as a function of the distance $r_{12}=x/k$ between the atoms, see Eq.~(\ref{dipdip}). Hence, as already mentioned in Sec.~\ref{sintensity}, only terms proportional to $|T(x)|^2$ survive averaging over the interatomic distance, whereas all terms proportional to $T(x)^2$ or $T^*(x)^2$ vanish. The next step is the average over the interatomic angle variables. This averaging, too, can be performed analytically, thanks to the following observation: the expression to be configuration averaged contains sums of products of matrix elements related to either of the atoms. It turns out (see \ref{sec:phases}) that all these matrix elements coincide, up to a position-dependent phase, with the ones obtained for an atom placed at the coordinate origin. This property reflects the fact that changing atomic positions changes nothing but the phases of the Rabi frequencies at the positions of atoms 1 and 2. Only those terms which do not acquire a phase factor then survive the disorder average; all remaining terms vanish. Using the phase relations of \ref{sec:phases}, we can prove the following simple recipe for identifying the terms that survive the configuration average: recall that, according to Eq.~(\ref{eq:matrV_1}), $T$ and $T^*$ indicate the exchange of negative and positive frequency photons between the atoms. Moreover, the intensity of backscattered light, see Eq.~(\ref{eq:in2}), results from emission of a positive and a negative frequency photon from atom $1$ or $2$, indicated by the terms $\sigma_j^-$ and $\sigma_j^+$, $j=1$ or $2$, respectively.
As we have found, exactly those processes survive the ensemble average in which an atom emitting a positive (or negative) frequency photon has received this photon from the other atom. For this reason, we have labelled in Sec.~\ref{sec:elastic1} the coupling strengths $T_{12}$, $T_{12}^*$, $T_{21}$ and $T_{21}^*$ in correspondence with the four processes depicted in Fig.~\ref{fig:processes}. In particular, the ladder component, which results, according to Eq.~(\ref{eq:el-ladd-1}), from emission of a positive and a negative frequency photon from atom 2, involves only processes (a) and (d) of Fig.~\ref{fig:processes}. The disorder average is hence performed by extracting, among all terms obtained when inserting Eqs.~(\ref{eq:vs_2-0}), (\ref{eq:vs_2^1}) and (\ref{eq:vs_2^2}) into Eq.~(\ref{eq:el-ladd-1}), those which are proportional to $T_{12}T_{21}^*$. The crossed contribution, involving emission of a positive frequency photon from atom 1 and a negative frequency photon from atom 2, consists of all the terms proportional to $T_{12}T_{12}^*$, corresponding to processes (a) and (b) in Fig.~\ref{fig:processes}.
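The suppression of the terms $\propto T(x)^2$ relative to those $\propto|T(x)|^2$ under the distance average is easily illustrated numerically. The far-field scaling $T(x)\sim e^{ix}/x$ used below is only a schematic stand-in for Eq.~(\ref{dipdip}), with all prefactors dropped:

```python
import numpy as np

rng = np.random.default_rng(4)
# schematic far-field coupling T(x) ~ e^{ix}/x, sampled at x = k r_12 >> 1
x = rng.uniform(50.0, 200.0, size=200_000)
T = np.exp(1j * x) / x

mean_T2 = np.mean(T**2)              # carries the phase e^{2ix}: averages out
mean_absT2 = np.mean(np.abs(T)**2)   # phase-free: survives the average
assert abs(mean_T2) < 0.1 * mean_absT2
```

The oscillating phase $e^{2ix}$ suppresses $\langle T^2\rangle$ by more than an order of magnitude relative to $\langle|T|^2\rangle$, in line with the selection rule stated above.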
Introducing notations: \begin{subequations} \label{notations} \begin{align} \langle\Delta\vec{\sigma}(\pm i\omega)\rangle^{(\pm)}&\equiv G(\pm i\omega)\Delta^{(\pm)} G\vec{L} \label{notation-a}\\ \langle\Delta\vec{\sigma}(\omega)\rangle^{(2)}&\equiv G\Delta^{(+)} G(-i\omega)\Delta^{(-)} G\vec{L}\nonumber\\ &+G\Delta^{(-)} G(i\omega)\Delta^{(+)} G\vec{L}\label{notation-b}, \end{align} \end{subequations} and dropping the common prefactor $8T_{12}T_{21}^*$, we arrive at the following final expression for the elastic background intensity \begin{align} L_{\rm el}&=\langle\sigma^+\rangle^{(0)}\langle\sigma^-\rangle^{(0)}\bigl(\langle\Delta\sigma^+(0)\rangle^{(2)}\langle\sigma^-\rangle^{(0)}\bigr.\nonumber\\ &\bigl.+\langle\Delta\sigma^-(0)\rangle^{(2)}\langle\sigma^+\rangle^{(0)}+\langle\Delta\sigma^+(0)\rangle^{(+)}\langle\Delta\sigma^-(0)\rangle^{(-)}\bigr.\nonumber\\ &\bigl.+\langle\Delta\sigma^+(0)\rangle^{(-)}\langle\Delta\sigma^-(0)\rangle^{(+)}\bigr)\nonumber\\ &+\int_{-\infty}^{\infty}\frac{d\omega'}{2\pi} P^{(0)}(\omega')\bigl(\langle\Delta\sigma^+(\omega')\rangle^{(2)}\langle\sigma^-\rangle^{(0)}\bigr.\nonumber\\ &\bigl.+\langle\Delta\sigma^-(\omega')\rangle^{(2)}\langle\sigma^+\rangle^{(0)}\bigr), \label{el-Ladder-final} \end{align} where \begin{equation} P^{(0)}(\omega)=(G(-i\omega)\vec{G}^{(0)}_1)_2+(G(i\omega)\vec{G}^{(0)}_2)_1,\label{Mollow}\end{equation} with \begin{subequations} \label{vec-G10-G20} \begin{eqnarray} \vec{G}_{1}^{(0)}&=&-i\Delta^{(+)} G\vec{L}+\vec{n}_2-(G\vec{L})_1G\vec{L},\label{eq:G1}\\ \vec{G}_{2}^{(0)}&=&+i\Delta^{(-)} G\vec{L}+\vec{n}_1-(G\vec{L})_2G\vec{L},\label{eq:G2} \end{eqnarray} \end{subequations} gives the inelastic spectrum of single-atom resonance fluorescence \cite{mollow69} known also as the Mollow triplet (see also \ref{sec:mollow}). 
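For orientation, the Mollow triplet encoded in $P^{(0)}(\omega)$ can be reproduced from the textbook resonant optical Bloch equations via the quantum regression theorem. The conventions below ($\Gamma=1$, real Rabi frequency $\Omega$, resonant driving) are stand-ins and need not match the basis ordering or normalization of the text:

```python
import numpy as np

# Textbook resonant optical Bloch equations in the basis
# (<sigma^->, <sigma^+>, <sigma^z>); Gamma = 1, real Rabi frequency Omega.
Gamma, Omega = 1.0, 10.0
M = np.array([[-Gamma/2, 0, 1j*Omega/2],
              [0, -Gamma/2, -1j*Omega/2],
              [1j*Omega, -1j*Omega, -Gamma]])
L = np.array([0.0, 0.0, -Gamma])

s = -np.linalg.solve(M, L)      # steady state s = -M^{-1} L
sm, sp, sz = s

# quantum regression: c(tau) = <s(tau) sigma^-(0)> - <s><sigma^-> obeys
# dc/dtau = M c; its initial value follows from the operator products
# sigma^- sigma^- = 0, sigma^+ sigma^- = (1+sigma^z)/2, sigma^z sigma^- = -sigma^-
c0 = np.array([-sm*sm, (1 + sz)/2 - sp*sm, -sm - sz*sm])

def spectrum(nu):
    # inelastic spectrum: 2 Re [(-i nu - M)^{-1} c(0)], sigma^+ component
    return 2*np.real(np.linalg.solve(-1j*nu*np.eye(3) - M, c0)[1])

# Mollow triplet: central peak at nu = 0, sidebands near nu = +/- Omega
assert spectrum(0.0) > spectrum(2.0)
assert spectrum(Omega) > spectrum(Omega - 3.0)
assert spectrum(Omega) > spectrum(Omega + 3.0)
```

The three assertions locate the central peak and the two sidebands of the triplet for strong driving $\Omega\gg\Gamma$.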
For the interference contribution, we obtain similarly: \begin{align} C_{\rm el}&=\langle\sigma^+\rangle^{(0)}\langle\sigma^-\rangle^{(0)}\langle\Delta\sigma^-(0)\rangle^{(-)}\langle\Delta\sigma^+(0)\rangle^{(+)}\nonumber\\ &+\langle\sigma^+\rangle^{(0)}\langle\sigma^+\rangle^{(0)}\langle\Delta\sigma^-(0)\rangle^{(-)}\langle\Delta\sigma^-(0)\rangle^{(+)}\nonumber\\ &+\langle\sigma^-\rangle^{(0)}\langle\sigma^-\rangle^{(0)}\langle\Delta\sigma^+(0)\rangle^{(-)}\langle\Delta\sigma^+(0)\rangle^{(+)}\nonumber\\ &+\langle\sigma^-\rangle^{(0)}\int_{-\infty}^{\infty}\frac{d\omega'}{2\pi}\langle\Delta\sigma^+(-i\omega')\rangle^{(-)}(G\Delta^{(+)} G(i\omega')\vec{G}^{(0)}_1)_2\nonumber\\ &+\langle\sigma^+\rangle^{(0)}\int_{-\infty}^{\infty}\frac{d\omega'}{2\pi}(G\Delta^{(-)} G(-i\omega')\vec{G}^{(0)}_2)_1\langle\Delta\sigma^-(i\omega')\rangle^{(+)}, \label{el-Crossed-final} \end{align} \section{Inelastic spectrum of CBS} \label{sec:inelastic} \subsection{General remarks} \label{sec:Inel-general} In the previous section, we have presented a detailed derivation of the elastic spectrum of CBS backscattered from two laser-driven atoms. The analytical results, Eqs.~(\ref{el-Ladder-final}) and (\ref{el-Crossed-final}), for the elastic ladder and crossed component are manifestly single-atom expressions. While the physical setting leading to such expressions will be discussed below, here we will proceed with the inelastic spectrum. Although we will need essentially the same tools as we used for the derivation of the elastic spectrum, the pathway to the final results is much lengthier for the inelastic case. Henceforth, we will restrict ourselves to a sketch of the derivation. In Sec.~\ref{sec:elastic}, it was shown that the background and interference elastic intensities as expressed by Eqs.~(\ref{eq:el-ladd-1},\ref{eq:el-cros-1}) correspond to processes (a),(d) and (a),(b) of Fig.~\ref{fig:processes}, respectively. 
The expressions given by Eqs.~(\ref{L-inel_lap}) and (\ref{C-inel-lap}) generalize Eqs.~(\ref{eq:el-ladd-1}) and (\ref{eq:el-cros-1}) to account for inelastic scattering, whereas the excitation exchange processes remain the same. In other words, the ladder and crossed inelastic spectra, Eqs.~(\ref{L-inel_lap}) and (\ref{C-inel-lap}), are also bilinear forms in the matrices $V_{12}$, $V_{21}^*$ and $V_{12}$, $V_{12}^*$, respectively. Hence, to obtain the inelastic spectrum of CBS, we will solve Eq.~(\ref{eq:vecSj}) for $j=2$ using Laplace transformation, with the initial conditions being given by Eq.~(\ref{eq:init_Dsj}), perturbatively to second order in the matrix $V$. \subsubsection{Perturbative expansion of the resolvent} The perturbative solution of Eq.~(\ref{eq:vecSj}) reads: \begin{align} \label{eq:R_n-s-m} \Delta\vec{\tilde{s}}^{\;(2)}_2(z)&=R^{(0)}(z)\Delta\vec{s}^{\;(2)}_2(0) +R^{(1)}(z)\Delta\vec{s}^{\;(1)}_2(0)\nonumber\\&+ R^{(2)}(z)\Delta\vec{s}^{\;(0)}_2(0),\end{align} where $R^{(l)}(z)$, $l=0,1,2$, represent subsequent terms in the series expansion of the resolvent matrix \begin{equation} R(z)=(z-A-V)^{-1}\end{equation} in $V$. Obviously, the only difference of the perturbative expansion of $R(z)$ from that of the matrix $(-A-V)^{-1}$ is that, in the latter case, the Green matrices are evaluated at $z=0$, while in the former case at $z=-i\nu$. 
Accordingly, the four sub-blocks of the matrix $R(z)$: \begin{equation} R(z)=\left(\begin{array}{cc}z-M_+&-V_\urcorner\\ -L_\times-V_\llcorner&z-M_\times-V_\times\end{array}\right)^{-1}\end{equation} satisfy the recurrence relations generalizing Eqs.~(\ref{recurrence}): \begin{subequations}\label{recur_R} \begin{align} R^{(n)}_{11}(z)&=G_+(z) V_\urcorner R^{(n-1)}_{21}(z),\\ R^{(n)}_{12}(z)&=G_+(z) V_\urcorner R^{(n-1)}_{22}(z),\\ R^{(n)}_{2k}(z)&=G_\times L_\times R^{(n)}_{1k}(z)+G_\times V_\times R^{(n-1)}_{2k}(z)\nonumber\\ &+G_\times V_\llcorner R^{(n-1)}_{1k}(z),\quad k=1,2,\end{align}\end{subequations} with the initial condition \begin{equation} R^{(0)}_{11}(z)=G_+(z),\quad R^{(0)}_{22}(z)=G_\times(z),\end{equation} and $R^{(n)}_{kl}=0$ for $n<0$. \subsubsection{Initial conditions} We will next show that the vectors of the initial conditions $\Delta\vec{s}^{(2-n)}_2(0)$ on which the sub-blocks of $R^{(n)}(z)$ act have the same structure as the probe vectors introduced in Eq.~(\ref{probe_vectors}). Indeed, from the definition (\ref{eq:init_Dsj}) of $\Delta\vec{s}_2(0)$, one obtains \begin{equation} \Delta\vec{s}^{\;(n)}_2(0)=\langle\sigma^+_2\vec{Q}\rangle^{(n)}-(\langle\sigma_2^+\rangle\langle \vec{Q}\rangle)^{(n)}, \end{equation} which leads to the following expressions: \begin{subequations}\label{init-expand} \begin{align} \Delta\vec{s}^{\;(0)}_2(0)&=\langle\sigma^+_2\vec{Q}\rangle^{(0)}-\langle\sigma_2^+\rangle^{(0)}\langle \vec{Q}\rangle^{(0)},\\ \Delta\vec{s}^{\;(1)}_2(0)&=\langle\sigma^+_2\vec{Q}\rangle^{(1)}-\langle\sigma_2^+\rangle^{(0)}\langle \vec{Q}\rangle^{(1)}\nonumber\\ &-\langle\sigma_2^+\rangle^{(1)}\langle \vec{Q}\rangle^{(0)},\\ \Delta\vec{s}^{\;(2)}_2(0)&=\langle\sigma^+_2\vec{Q}\rangle^{(2)}-\langle\sigma_2^+\rangle^{(2)}\langle \vec{Q}\rangle^{(0)}\nonumber\\ &-\langle\sigma_2^+\rangle^{(0)}\langle\vec{Q}\rangle^{(2)}-\langle\sigma_2^+ \rangle^{(1)}\langle \vec{Q}\rangle^{(1)}.
\end{align} \end{subequations} The vectors $\langle\vec{Q}\rangle^{(n)}$ are manifestly of the form (\ref{probe_vectors}) by virtue of the relations (\ref{eq:all_V}), (\ref{recurrence}), and (\ref{eq:general_sum_rule}). The same holds for the vector $\langle\sigma_2^+\vec{Q}\rangle^{(n)}$ which is defined as \begin{equation} \langle\sigma_2^+\vec{Q}\rangle^{(n)}=\left(\begin{array}{c}i\Delta^{(-)}\langle \vec{\sigma}_2\rangle^{(n)}\\ \langle\vec{\sigma}_1\sigma_2^+\rangle^{(n)}\\ \langle\vec{\sigma}_1\otimes i\Delta^{(-)}\vec{\sigma}_2\rangle^{(n)}\end{array}\right). \label{relation-s2Q}\end{equation} Hence, the relations (\ref{eq:all_V}), (\ref{recurrence}), and (\ref{recur_R}), supplemented by the integral representation (\ref{eq:int_repr_G_times}) together with the identities derived in \ref{sec:sum-rule}, are sufficient to evaluate all terms in Eq.~(\ref{eq:R_n-s-m}). \subsubsection{Configuration averaging} The configuration averaging is reduced to the same procedure as for the elastic intensities. By choosing terms in the ladder and crossed spectra that are bilinear forms in the matrices $V_{12}$, $V_{21}^*$ and $V_{12}$, $V_{12}^*$, respectively, and keeping only terms proportional to $|T|^2$, we automatically select the components surviving the disorder averaging. The averaged expressions for $\langle[\Delta\vec{s}_2^{\;(2)}(-i\nu)]_1\rangle_{\rm conf.}$ and $\langle[\Delta\vec{s}_2^{\;(2)}(-i\nu)]_4e^{i\vec{k}_L\cdot\vec{r}_{12}}\rangle_{\rm conf.}$ then follow from $[\Delta\vec{s}_2^{\;(2)}(-i\nu)]_1$ and $[\Delta\vec{s}_2^{\;(2)}(-i\nu)]_4$, respectively, after dropping the atomic indices. 
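The recurrence (\ref{recur_R}) can again be validated numerically by summing the blocks $R^{(n)}_{kl}(z)$ and comparing with the directly inverted resolvent. As before, the matrices below are generic stand-ins with the block dimensions of the text; note that the block dimensions force $V_\urcorner$ in both first-row relations, for $R^{(n)}_{11}$ and $R^{(n)}_{12}$ alike:

```python
import numpy as np

rng = np.random.default_rng(5)
def rnd(m, n, s):
    return s * rng.normal(size=(m, n))

# generic stand-ins with the block dimensions of the text
Mp = -(2*np.eye(6) + rnd(6, 6, 0.1))
Mx = -(2*np.eye(9) + rnd(9, 9, 0.1))
Lx = rnd(9, 6, 0.1)
Vur, Vll, Vxx = rnd(6, 9, 0.05), rnd(9, 6, 0.05), rnd(9, 9, 0.05)
z = -0.3j   # plays the role of -i*nu

Gp = np.linalg.inv(z*np.eye(6) - Mp)   # G_+(z)
Gx = np.linalg.inv(z*np.eye(9) - Mx)   # G_x(z)

# recurrence (recur_R); dimensions force V_urcorner in both R_11 and R_12
R11, R12 = [Gp], [np.zeros((6, 9), complex)]
R21, R22 = [Gx @ Lx @ Gp], [Gx]
for n in range(1, 30):
    r11, r12 = Gp @ Vur @ R21[-1], Gp @ Vur @ R22[-1]
    R21.append(Gx @ (Lx @ r11 + Vll @ R11[-1] + Vxx @ R21[-1]))
    R22.append(Gx @ (Lx @ r12 + Vll @ R12[-1] + Vxx @ R22[-1]))
    R11.append(r11); R12.append(r12)

Rsum = np.block([[sum(R11), sum(R12)], [sum(R21), sum(R22)]])
AV = np.block([[Mp, Vur], [Lx + Vll, Mx + Vxx]])
Rexact = np.linalg.inv(z*np.eye(15) - AV)
assert np.allclose(Rsum, Rexact)
```

With the interaction blocks small enough for the series to converge, the summed recurrence reproduces the exact resolvent $(z-A-V)^{-1}$ to machine precision.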
\subsection{Inelastic ladder spectrum} \label{sec:inel-ladder} After configuration averaging of both sides of Eq.~(\ref{eq:R_n-s-m}), we obtain \begin{align} \langle[\Delta\vec{\tilde{s}}_2^{\;(2)}(-i\nu)]_1\rangle_{\rm conf.}&=L^{(2;0)}(-i\nu)+L^{(1;1)}(-i\nu)\label{Ladder-inel-split}\\ &+L^{(0;2)}(-i\nu),\nonumber \end{align} where \begin{equation} L^{(2-n;n)}(-i\nu)=\langle [R^{(2-n)}(-i\nu)\Delta\vec{s}^{\;(n)}_2(0)]_1\rangle_{\rm conf.}.\end{equation} In accordance with Eq.~(\ref{L-inel_lap}), the ladder spectrum reads \begin{equation} L_{\rm inel}(\nu)=\frac{2}{\pi}\,{\rm Re}\,(L^{(2;0)}(-i\nu)+L^{(1;1)}(-i\nu)+L^{(0;2)}(-i\nu)).\end{equation} The explicit expressions for the functions $L^{(2;0)}(-i\nu)$, $L^{(1;1)}(-i\nu)$, and $L^{(0;2)}(-i\nu)$ are given in \ref{sec:Lnu-Cnu}. Before we write down the final form of the inelastic ladder spectrum, let us introduce a new spectral function \begin{align} \label{P2_0} P^{(2)}(\omega',\nu)&=2\,{\rm Re}\,\Bigl\{(G(-i\nu)[\vec{G}_2^{(2)}(\omega')\Bigr.\nonumber\\ &+\Delta^{(-)} G(-i\nu+i\omega')\Delta^{(+)} G(-i\nu)\vec{G}^{(0)}_{2}\nonumber\\ &+\Delta^{(-)} G(-i\nu+i\omega')\vec{G}^{(+)}_{2}(i\omega')\nonumber\\ &+\Delta^{(+)} G(-i\nu-i\omega')\Delta^{(-)} G(-i\nu)\vec{G}^{(0)}_{2}\nonumber\\ &+\Bigl.\Delta^{(+)} G(-i\nu-i\omega')\vec{G}^{(-)}_{2}(-i\omega')])_1\Bigr\}.
\end{align} By using the function $P^{(2)}(\omega',\nu)$ and the inelastic single-atom power spectrum $P^{(0)}(\omega')$, see Eq.~(\ref{Mollow}), we can write down the expression for the inelastic ladder spectrum in a compact form: \begin{align} L_{\rm inel}(\nu)&=\frac{1}{2\pi}\int_{-\infty}^{\infty}\frac{d\omega'}{2\pi} P^{(0)}(\omega') P^{(2)}(\omega',\nu)\nonumber\\ &+\frac{1}{2\pi}\langle\sigma^+\rangle^{(0)}\langle\sigma^-\rangle^{(0)}P^{(2)}(0,\nu)\nonumber\\ &+\frac{1}{2\pi}\langle\Delta\sigma^+(i\nu)\rangle^{(-)}\langle\Delta\sigma^-(-i\nu)\rangle^{(+)}P^{(0)}(\nu)\nonumber\\ &+\frac{1}{2\pi}\langle\Delta\sigma^+(i\nu)\rangle^{(+)}\langle\Delta\sigma^-(-i\nu)\rangle^{(-)}P^{(0)}(\nu). \label{Lad-inel-final} \end{align} The function $P^{(2)}(\omega',\nu)$ not only renders the expression (\ref{Lad-inel-final}) for the inelastic background spectrum compact; it also has a clear physical interpretation as the cross section for a weak probe beam of frequency $\omega'$ to be scattered into the frequency $\nu$ by a strongly driven atom. While this issue will be discussed in more detail when introducing the pump-probe approach in Sec.~\ref{sec:bichromatic} below, we will next derive the inelastic interference spectrum.
\subsection{Inelastic crossed spectrum} \label{sec:inel-crossed} Repeating the same steps as taken previously in the derivation of the inelastic background spectrum, we obtain \begin{align} \langle[\Delta\vec{s}_2^{\;(2)}(-i\nu)]_4e^{i\vec{k}_L\cdot\vec{r}_{12}}\rangle_{\rm conf.}&=C^{(2;0)}(-i\nu)+C^{(1;1)}(-i\nu)\nonumber\\ &+C^{(0;2)}(-i\nu),\label{Crossed-inel-split} \end{align} where \begin{equation} C^{(2-n;n)}(-i\nu)=\langle [R^{(2-n)}(-i\nu)\Delta\vec{s}^{\;(n)}_2(0)]_4e^{i\vec{k}_L\cdot\vec{r}_{12}}\rangle_{\rm conf.},\end{equation} from which the inelastic spectrum reads \begin{equation} C_{\rm inel}(\nu)=\frac{2}{\pi}\,{\rm Re}\,(C^{(2;0)}(-i\nu)+C^{(1;1)}(-i\nu)+C^{(0;2)}(-i\nu))\label{Cinel-2},\end{equation} where the explicit expressions for the functions $C^{(2;0)}(-i\nu)$, $C^{(1;1)}(-i\nu)$, and $C^{(0;2)}(-i\nu)$ are presented in \ref{sec:Lnu-Cnu}. The crossed spectrum can conveniently be represented with the help of the frequency correlation functions $P^{(+)}(\omega',\nu)$ and $P^{(-)}(\omega',\nu)=(P^{(+)}(\nu,\omega'))^*$: \begin{align} P^{(+)}(\omega',\nu)&=(G(i\nu)\Delta^{(+)} G(i\nu-i\omega')\vec{G}^{(0)}_{1})_2\nonumber\\ &+(G(i\nu)\vec{G}^{(+)}_{1}(i\omega'))_2\nonumber\\ &+(G(-i\nu+i\omega')\Delta^{(+)} G(-i\nu)\vec{G}^{(0)}_{2})_1\nonumber\\ &+(G(-i\nu+i\omega')\vec{G}^{(+)}_{2}(i\omega'))_1,\label{pplus}\\ P^{(-)}(\omega',\nu)&=(G(-i\nu)\Delta^{(-)} G(-i\nu+i\omega')\vec{G}^{(0)}_{2})_1\nonumber\\ &+(G(-i\nu)\vec{G}^{(-)}_{2}(-i\omega'))_1\nonumber\\ &+(G(i\nu-i\omega')\Delta^{(-)} G(i\nu)\vec{G}^{(0)}_{1})_2\nonumber\\ &+(G(i\nu-i\omega')\vec{G}^{(-)}_{1}(-i\omega'))_2.\label{pminus} \end{align} The physical significance of the functions $P^{(+)}(\omega',\nu)$ and $P^{(-)}(\omega',\nu)$ as spectral correlation functions of a two-level system subjected to a bichromatic driving will be clarified below within the same setting as the function $P^{(2)}(\omega',\nu)$.
With their help, we can write down a concise expression for the crossed inelastic spectrum: \begin{align} C_{\rm inel}(\nu)&=\frac{1}{2\pi}\int_{-\infty}^{\infty}\frac{d\omega'}{2\pi} P^{(+)}(\omega',\nu)P^{(-)}(\nu-\omega',\nu)\nonumber\\ &+\frac{1}{2\pi}\langle\sigma^+\rangle^{(0)}\langle\Delta\sigma^-(-i\nu)\rangle^{(-)}P^{(+)}(0,\nu)\nonumber\\ &+\frac{1}{2\pi}\langle\sigma^-\rangle^{(0)}\langle\Delta\sigma^+(i\nu)\rangle^{(+)}P^{(-)}(0,\nu). \label{Cros-inel-final} \end{align} \section{Pump-probe approach} \label{sec:bichromatic} In Sections \ref{sec:elastic} and \ref{sec:inelastic}, we derived the configuration-averaged double scattering elastic and inelastic spectra of the background and interference contributions to CBS from two atoms driven by a laser field of arbitrary intensity. These quantities, given by Eqs.~(\ref{el-Ladder-final}), (\ref{el-Crossed-final}), (\ref{Lad-inel-final}), and (\ref{Cros-inel-final}), are expressed in terms of single-atom averages and spectral correlation functions of which some, e.g. $P^{(+)}(\omega',\nu)$, $P^{(-)}(\omega',\nu)$, and $P^{(2)}(\omega',\nu)$, have been introduced rather formally. We will next furnish them with a physical meaning. This meaning can be straightforwardly clarified because precisely the same expressions have been obtained by solving optical Bloch-like equations for a single atom subject to bichromatic driving \cite{wellens09,geiger09}. Here, the driving field ${\cal E}(t)$ is composed of the injected laser field and the photons scattered by the other atom: \begin{equation} {\cal E}(t)={\cal E}_Le^{-i\omega_L t} +v^{(+)} e^{-i(\omega+\omega_L) t} +{\cal E}_L^*e^{i\omega_L t}+ v^{(-)} e^{i(\omega+\omega_L) t}, \label{drive} \end{equation} both represented as a classical field with positive and negative frequency components. The laser field amplitude ${\cal E}_L$ is proportional to the Rabi frequency $\Omega$.
In correspondence with the large-distance approximation, the probe fields $v^{(+)}$ and $v^{(-)}$ are assumed to be much weaker than the laser field ${\cal E}_L$. As already argued in the introduction, this approximation enables the modeling of the atomic radiation as a classical field acting on the other atom. We then consider the frequency correlation function \begin{align} C(\omega_1,\omega_2) & = \int_{-\infty}^\infty \frac{dt_1dt_2}{2\pi} e^{-i(\omega_1+\omega_L) t_1+i(\omega_2+\omega_L) t_2}\nonumber\\ & \times \left( \langle\sigma^+(t_1)\sigma^-(t_2)\rangle-\langle\sigma^+(t_1)\rangle\langle\sigma^-(t_2)\rangle\right),\label{dipole} \end{align} representing a positive frequency photon $\omega_1$ and a negative frequency photon $\omega_2$ (both with respect to the laser frequency) emitted by a single atom subject to the bichromatic driving, Eq.~(\ref{drive}), and expand it in powers of the weak probe field amplitudes $v^{(+)}$ and $v^{(-)}$: \begin{subequations} \begin{align} \left.C\right|_{v^{(\pm)}=0} & = \delta(\omega_1-\omega_2)P^{(0)}(\omega_1)\label{p0},\\ \left.\frac{\partial C}{\partial v^{(+)}}\right|_{v^{(\pm)}=0} & = \delta(\omega_2-\omega_1-\omega) P^{(+)}(\omega,\omega_2),\ \ \ \ \ \label{c2} \\ \left.\frac{\partial C}{\partial v^{(-)}}\right|_{v^{(\pm)}=0} & = \delta(\omega_1-\omega_2-\omega)P^{(-)}(\omega,\omega_1)\label{c1},\\ \left.\frac{\partial^2 C}{\partial v^{(+)}\partial v^{(-)}}\right|_{v^{(\pm)}=0} & = \delta(\omega_1-\omega_2)P^{(2)}(\omega,\omega_1)\label{p1}. \end{align} \end{subequations} The $\delta$-functions in Eqs.~(\ref{p0}-\ref{p1}) originate from integrating over $t_+:=t_1+t_2$ in Eq.~(\ref{dipole}), and are thus a consequence of time translation invariance or energy conservation. As we have found, the quantities defined by Eqs.~(\ref{p0}-\ref{p1}) -- i.e.
by the perturbative solution of the single-atom Bloch equations with bichromatic driving -- exactly correspond to the quantities $P^{(0)}$, $P^{(\pm)}$ and $P^{(2)}$ introduced above in Eqs.~(\ref{Mollow},\ref{P2_0},\ref{pplus},\ref{pminus}). This establishes the equivalence between the master equation and the pump-probe approach. Understanding the physical meaning of the quantities $P^{(0)}$, $P^{(\pm)}$ and $P^{(2)}$ becomes easier with the help of diagrammatic language. Let us address a modified version of Fig.~\ref{fig:processes} plotted in Fig.~\ref{fig:ladd-cros}. Namely, we added in Fig.~\ref{fig:ladd-cros} the laser field with which both atoms continuously interact, the positive and negative frequency components of the backscattered field, and combined the excitation-exchange diagrams (a),(d) and (a),(b) of Fig.~\ref{fig:processes} into the processes (a) and (b) of Fig.~\ref{fig:ladd-cros}, respectively. \begin{figure} \includegraphics[width=8cm]{ladder_crossed-m.eps} \caption{Double scattering processes contributing to (a) ladder and (b) crossed spectra of CBS. Thick arrows indicate (positive and negative components of) the laser field driving the atoms (black spots), thin solid (dashed) arrows indicate positive (negative) frequency components of the fields scattered by the atoms. In all expressions, the frequencies $\nu$, $\omega'$ and $\omega''$ are relative with respect to the laser frequency $\omega_L$.} \label{fig:ladd-cros} \end{figure} As discussed in Sec.~\ref{sconfigav}, these are the processes which survive the configuration average, after solving the two-atom master equation (\ref{meq}) perturbatively to second order in the interaction matrix $V$. Now, let us split each of the diagrams (a) and (b) in Fig.~\ref{fig:ladd-cros} into `building blocks', in order to represent the individual atoms together with their incoming and outgoing fields. The resulting 4 diagrams are shown in Fig.~\ref{fig:splits}.
\begin{figure} \includegraphics[width=6.5cm]{4spectral_corr-m.eps} \caption{Building blocks (diagrams) including the individual atoms with their incoming and outgoing fields. The combined diagrams (a-1) and (a-2) give the ladder contribution (diagram (a) in Fig.~\ref{fig:ladd-cros}), while the combined (b-1) and (b-2) diagrams give the crossed contribution (diagram (b) in Fig.~\ref{fig:ladd-cros}).}\label{fig:splits} \end{figure} Diagram (a-1) shows an atom subjected to a laser field. Consequently, its outgoing fields correspond to resonance fluorescence with the frequency distribution given by the function $P^{(0)}(\omega')$ defined in Eq.~(\ref{Mollow}). In diagram (a-2), an atom is subjected to two classical fields: the laser field and a weak field at frequency $\omega'$ scattered by the other atom. The response of the atom to such a bichromatic field, expanded to second order in the weak field amplitude, is described by the function $P^{(2)}(\omega',\nu)$ introduced in Eq.~(\ref{P2_0}). Integrating over the frequency $\omega'$ leads to $\int d\omega' P^{(0)}(\omega')P^{(2)}(\omega',\nu)$, which is the first contribution to the inelastic ladder intensity, Eq.~(\ref{Lad-inel-final}). This contribution arises when both atoms scatter inelastically, since the single-atom response as defined in Eq.~(\ref{dipole}) above includes only inelastic scattering. However, the same analysis can be repeated to include elastic scattering as well, i.e., considering Eq.~(\ref{dipole}) without subtracting the term $\langle\sigma^{(+)}\rangle\langle\sigma^{(-)}\rangle$ on the right-hand side. As it turns out, the elastic response of a single atom to a probe field of frequency $\omega$ is then determined, in first and second order, by the corrections $\langle\Delta\vec{\sigma}(\pm i\omega)\rangle^{(\pm)}$ and $\langle\Delta\vec{\sigma}(\omega)\rangle^{(2)}$ to the Bloch vector defined in Eqs.~(\ref{notation-a},\ref{notation-b}).
This yields the other terms of the inelastic ladder component, Eq.~(\ref{Lad-inel-final}), where one atom scatters inelastically and the other one elastically. In the same spirit, the crossed spectrum is obtained as follows: diagram (b-1) describes a positive frequency component of the field backscattered by the atom subjected to the laser field and to a positive frequency component at the frequency $\omega''$ emitted by the other atom. The atom also emits a negative frequency component at the frequency $\omega'$ towards the other atom. The frequency distribution of these photons is described by the function $P^{(+)}(\nu,\omega')$. The frequency $\omega''$ is then fixed by energy conservation in the scattering processes as $\omega''=\nu-\omega'$. Hence, diagram (b-2) is described by the function $P^{(-)}(\nu-\omega',\nu)$. Integrating over $\omega'$ yields the first term of Eq.~(\ref{Cros-inel-final}), corresponding to inelastic-inelastic scattering, whereas the remaining terms are obtained by including a single elastic event. \section{Conclusion} \label{sec:conclusion} We have studied the spectrum of laser light doubly scattered between two scalar atoms. By solving the two-atom master equation to second order in the inverse distance between the atoms and averaging over the random positions of the atoms, we derived analytical expressions for the ladder (background) and crossed (interference) components of the coherent backscattering signal. The results thereby obtained reduce to single-atom expressions coinciding exactly with the results based on the pump-probe approach to coherent backscattering \cite{wellens09,geiger09}. In the latter, heuristic approach, all correlation functions are calculated by considering from the very beginning a single atom subject to a bichromatic classical driving field.
This does not contradict the well-known fact (see, for instance, \cite{scully,carmichael}) that atomic radiation exhibits photon antibunching and, hence, is nonclassical. Photon antibunching and other nonclassical properties of resonance fluorescence are quantum statistical properties which do not reveal themselves as long as only a single exchange of excitation is involved. Instead, these properties would become relevant for smaller interatomic distances, where several photons are exchanged between both atoms. In the case of two distant atoms, corresponding to the experimentally realistic case of a dilute atomic medium, however, the classical modeling is perfectly adequate, at least for describing the intensity of the atomic radiation. Whether an appropriately extended pump-probe approach could also reproduce intensity-intensity correlations \cite{rist08,skornia01} is a subject for future studies. As shown in this paper, the classical modeling of the fields exchanged between the atoms makes it possible to reduce quantum mechanical calculations from the two-atom Hilbert space to single-atom Hilbert spaces. In combination with multiple scattering theory as already developed for nonlinear classical scatterers \cite{wellens09,wellens09b}, the pump-probe approach thus opens the way to treat coherent backscattering of intense laser light by a dilute cloud consisting of a very large number of atoms. \section{Acknowledgements} Funding by the DFG through an individual grant (V. S.) and through the Forschergruppe 760 is gratefully acknowledged.
\subsection{Problem Statement} Consider a 2D bounded workspace $\mathcal W\subset \mathbb R^2$ containing a set of $n$ objects $\mathcal O=\{o_1, ..., o_n\}$. Each object is assumed to be an upright \emph{generalized cylinder}. A \emph{feasible arrangement} $\mathcal A =\{p_1, ..., p_n\}$, $p_i = (x_i, y_i, \theta_i) \in SE(2)$, is a set of poses for objects in $\mathcal O$, such that (1) each object's footprint is contained in $\mathcal W$, and (2) no two objects collide. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{sodas.png} \caption{[Left] A rearrangement task involving three soda cans. [Right] The corresponding dependency graph $G$.} \label{fig:soda} \end{figure} We consider (overhand) pick-n-place actions to move objects one by one. A pick-n-place action, represented as an ordered pair $(o, p)$, grasps an object $o$ at its current pose and lifts it above all other objects. It then moves the object horizontally and places it at the target pose $p$ within $\mathcal W$. An action is collision-free if and only if both the current and resulting arrangements are feasible. A plan $P$ from a feasible $\mathcal A_s$ to a feasible $\mathcal A_g$ is a sequence of collision-free pick-n-place actions transforming $\mathcal A_s$ into $\mathcal A_g$. We want to compute feasible plans that minimize the number of pick-n-place actions, which leads to increased system throughput. In summary: \begin{problem}[\texttt{TORO}\xspace w/ Internal Buffers (\texttt{TORI}\xspace)] Given feasible arrangements $\mathcal A_s=\{p^s_1, ..., p^s_n\}$ and $\mathcal A_g=\{p^g_1, ..., p^g_n\}$, find a feasible plan $P$ sequentially moving objects from $\mathcal A_s$ to $\mathcal A_g$, which minimizes the number of actions. \end{problem} \subsection{Dependency Graph and Internal Buffers} It is not always possible to move an object directly to its goal pose, which may be occupied by other objects.
This leads to \emph{dependencies} between objects, i.e., when the goal pose $p^g_i$ of $o_i$ collides with $o_j$ at its current pose, then $o_i$ depends on $o_j$, and $o_j$ must be moved before $o_i$ can reach its goal. Dependencies induced by $\mathcal A_s$ and $\mathcal A_g$ give rise to a dependency graph $G$ \cite{van2009centralized, han2018complexity}. Fig.~\ref{fig:soda} provides an example graph. When $G$ is acyclic, the instance is called ``monotone'' and can be solved with at most $n$ actions, moving each object once from its start to its goal pose following the topological order of $G$. Otherwise, some object(s) must be temporarily displaced to break the cycles and complete the task. We refer to these intermediate poses as ``\emph{buffers}'', which may be external (i.e., outside $\mathcal W$) or internal (i.e., contained in $\mathcal W$). If the buffers must be internal, the problem is \texttt{TORI}\xspace. Otherwise, the problem is \texttt{TORO}\xspace with external buffers, denoted as \texttt{TORE}\xspace. Solving \texttt{TORE}\xspace only requires dealing with inherent constraints defined by $G$. For instance, to solve the problem in Fig.~\ref{fig:soda}, we can move the Pepsi to an external buffer to break the cycle, move first the Coke and then the Fanta to their goal locations, and finally bring the Pepsi back into the workspace. With internal buffers, we must find a temporary location for the Pepsi in $\mathcal W$. Due to acquired constraints (as defined in Sec.~\ref{sec:intro}) arising from internal buffer selection, \texttt{TORI}\xspace, the problem we study in this work, is more challenging than \texttt{TORE}\xspace. Intuitively, selecting buffers inside the workspace (\texttt{TORI}\xspace) is much more difficult and constrained than using buffers outside the workspace (\texttt{TORE}\xspace) to store displaced objects.
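To make the construction concrete, the dependency graph and the monotonicity test can be sketched in a few lines of Python. This is a minimal sketch for equal-radius discs, not the implementation used in our experiments; the function names are illustrative.

```python
def collides(p, q, r):
    # Two discs of radius r overlap iff their centers are closer than 2r.
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 < (2 * r) ** 2

def dependency_graph(starts, goals, r):
    # deps[i] holds the objects o_j that must move before o_i can be
    # placed: the goal pose of o_i overlaps the current pose of o_j.
    n = len(starts)
    deps = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(n):
            if i != j and collides(goals[i], starts[j], r):
                deps[i].add(j)
    return deps

def monotone_order(deps):
    # Topological order of the dependency graph if it is acyclic
    # (monotone instance); None if a cycle forces the use of buffers.
    order, state = [], {}          # state: 1 = visiting, 2 = done
    def visit(i):
        if state.get(i) == 2:
            return True
        if state.get(i) == 1:      # back edge: cycle detected
            return False
        state[i] = 1
        if not all(visit(j) for j in deps[i]):
            return False
        state[i] = 2
        order.append(i)
        return True
    if all(visit(i) for i in deps):
        return order               # move objects in this order
    return None                    # cyclic: some object needs a buffer
```

For the three-can cycle of Fig.~\ref{fig:soda}, `monotone_order` returns `None`, signaling that at least one buffer is required; for an acyclic instance it returns an order in which each object moves directly from start to goal.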
Nevertheless, we show here that plans can be efficiently derived from the minimum \emph{total buffer} solution to \texttt{TORE}\xspace \cite{han2018complexity} and the minimum \emph{running buffer} solution to \texttt{TORE}\xspace \cite{gaorunning}, the latter computing the minimum number of \emph{concurrent external buffers} needed to solve a \texttt{TORE}\xspace instance. For both objectives, since \texttt{TORE}\xspace has been shown to be computationally intractable \cite{han2018complexity,gaorunning} and is a special case of \texttt{TORI}\xspace, \texttt{TORI}\xspace is also NP-hard. \subsubsection{Primitive Plan} To compute a \emph{primitive plan}, we assume enough free space is available so that no acquired constraints will be created. This transforms the problem into a \texttt{TORE}\xspace problem, where each object is displaced at most once before it moves to its goal pose. Then, an object $o_i \in \mathcal O$ can have three \emph{primitive} actions: \begin{enumerate} \item $(o_i, s\rightarrow g)$: moving from $p^s_i$ to $p^g_i$; \item $(o_i, s\rightarrow b)$: moving from $p^s_i$ to a buffer; \item $(o_i, b\rightarrow g)$: moving from a buffer to $p^g_i$. \end{enumerate} A primitive plan is a sequence of primitive actions; computing such a plan is similar to finding a linear vertex ordering \cite{adolphson1973optimal, shiloach1979minimum} of the dependency graph. We use dynamic programming based methods \cite{gaorunning} to achieve this, which minimize the number of total buffers or running buffers. \subsubsection{Buffer Allocation} Free space inside the workspace $\mathcal{W}$ is scarce in cluttered settings (e.g., Fig.~\ref{fig:density}) and acquired constraints must be dealt with through the careful allocation of buffers inside $\mathcal{W}$. We apply a greedy strategy to find feasible buffers based on a primitive plan (Algo.~\ref{alg:buffer}).
The general idea is to incrementally add constraints on the buffers until we either find feasible buffers for the whole primitive plan or terminate at a step where no feasible buffers exist. In Algo.~\ref{alg:buffer},~$\mathcal O_s, \mathcal O_g, \mathcal O_b$ are the sets of objects currently at start poses, goal poses, and buffers, respectively. \begin{algorithm} \begin{small} \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \SetKwComment{Comment}{\% }{} \caption{ Buffer Allocation} \label{alg:buffer} \SetAlgoLined \vspace{0.5mm} \Input{$\pi$: a primitive plan; $\mathcal A_s=\{p^s_1,...,p^s_n\}$: start arrangement; $\mathcal A_g=\{p^g_1,...,p^g_n\}$: goal arrangement} \Output{$B$: buffers; TerminatingStep: the action step where buffer generation fails, $\infty$ if Success.} \vspace{0.5mm} $\mathcal O_s$ $\leftarrow$ $\mathcal O$; $\mathcal O_g$, $\mathcal O_b$ $\leftarrow$ $\emptyset$; $B \leftarrow$ RandomPoses($\mathcal O$)\\ \For{ $(o_i, m)\in \pi$}{ \If{$m$ is s $\rightarrow$ b}{ $\mathcal O_b$.add($o_i$)\\ Constraints[$o_i$]$\leftarrow$GetPoses($\mathcal O_s \bigcup \mathcal O_g - \{o_i\}$)\\ } \ElseIf{$m$ is b $\rightarrow$ g}{ \lFor{$o\in \mathcal O_b\backslash\{o_i\}$}{ Constraints[$o$].add($p^g_i$) } } \Else{ \lFor{$o\in \mathcal O_b$}{ Constraints[$o$].add($p^g_i$) } } Success, $B'$ $\leftarrow$ BufferGeneration($\mathcal O_b$, Constraints, $B$)\\ \If{Success}{ $B$ $\leftarrow$ $B'$\\ $\mathcal O_s, \mathcal O_g, \mathcal O_b \leftarrow$ UpdateState($\mathcal O_s, \mathcal O_g, \mathcal O_b$)\\ } \lElse{ \Return $B$, $\pi$.index($(o_i, m)$) } } \vspace{0.5mm} \Return $B$, $\infty$\\ \end{small} \end{algorithm} We start with $\mathcal A_s$, where all the objects are at their start poses, and the buffers are initialized at random poses (line 1). Each action in $\pi$ indicates the object $o_i$ that is manipulated and the move $m$ performed (line 2). If $o_i$ is moved to a buffer (line 3), then we add it into $\mathcal O_b$ (line 4).
The current poses of the other objects in $\mathcal O_s\cup \mathcal O_g$ are treated as fixed obstacles for $o_i$ (line 5). If $o_i$ is leaving a buffer (line 6), then the other objects in $\mathcal O_b$ should avoid the goal pose $p^g_i$ of $o_i$ (line 7). If $o_i$ is moving directly from $p^s_i$ to $p^g_i$ (line 8; the ``else'' branch corresponds to $m$ being $s\to g$, i.e., going directly from start to goal), then all buffers for objects in the current $\mathcal O_b$ need to avoid $p^g_i$ (line 9). After setting up the acquired constraints, we generate new buffers for objects in $\mathcal O_b$ satisfying these constraints, by either sampling or solving an optimization problem (line 10). Old buffers in $B$ satisfying the new constraints are directly adopted. If feasible buffers are found (line 11), then the buffers and object states are updated (lines 12-13). Otherwise, we return the feasible buffers computed so far and record the terminating step of the algorithm (line 14). In the case of a failure, the returned buffers provide a \emph{partial plan}. Fig.~\ref{fig:AlgoExample} illustrates the buffer allocation process via an example. The green, cyan, and transparent discs represent the current poses, goal poses, and allocated buffers, respectively. When we move $o_1$ to a buffer $B_1$ (Fig.~\ref{fig:AlgoExample}(b)), it only needs to avoid collision with $p^s_2$ and $p^s_3$. But as we move $o_3$ to a buffer, $B_1$ needs to avoid $o_3$'s buffer $B_3$ as well. To satisfy the added constraint, $B_1$ is reallocated. Since the new buffers $B_1$ and $B_3$ (Fig.~\ref{fig:AlgoExample}(c)) satisfy the constraints added in the following steps, they need not be reallocated again. Note that the buffer originally selected for $o_1$ but then replaced will not appear in the resulting plan, i.e., $o_1$ moves directly to the new buffer (Fig.~\ref{fig:AlgoExample}(c)-(f)). Algo.~\ref{alg:buffer} works with one strongly connected component of the dependency graph at a time, treating objects in other components as fixed obstacles.
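The sampling-based variant of this loop can be sketched as follows. This is a simplified Python sketch for equal-radius discs, not our actual implementation; `plan` is a primitive plan given as `(object, move)` pairs with moves `'s->b'`, `'b->g'`, and `'s->g'`.

```python
import random

def free_of(p, obstacles, r):
    # A candidate buffer must keep distance >= 2r from every constrained pose.
    return all((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 >= (2 * r) ** 2
               for q in obstacles)

def sample_buffer(obstacles, r, bounds, rng, tries=2000):
    # Rejection-sample a collision-free pose whose disc stays in bounds.
    (xmin, xmax), (ymin, ymax) = bounds
    for _ in range(tries):
        p = (rng.uniform(xmin + r, xmax - r), rng.uniform(ymin + r, ymax - r))
        if free_of(p, obstacles, r):
            return p
    return None

def allocate_buffers(plan, starts, goals, r, bounds, seed=0):
    rng = random.Random(seed)
    at_start = set(range(len(starts)))
    at_goal, buffered = set(), set()
    constraints, buffers = {}, {}
    for step, (obj, move) in enumerate(plan):
        if move == 's->b':                     # object enters a buffer
            at_start.discard(obj)
            buffered.add(obj)
            constraints[obj] = ([starts[j] for j in at_start]
                                + [goals[j] for j in at_goal])
        elif move == 'b->g':                   # object leaves its buffer
            buffered.discard(obj)
            at_goal.add(obj)
            for o in buffered:
                constraints[o].append(goals[obj])
        else:                                  # 's->g': direct move
            at_start.discard(obj)
            at_goal.add(obj)
            for o in buffered:
                constraints[o].append(goals[obj])
        # Keep old buffers that still satisfy all constraints and
        # resample the rest; pending buffers also act as obstacles.
        for o in sorted(buffered):
            others = [buffers[q] for q in buffered if q != o and q in buffers]
            if o not in buffers or \
               not free_of(buffers[o], constraints[o] + others, r):
                b = sample_buffer(constraints[o] + others, r, bounds, rng)
                if b is None:
                    return buffers, step       # failure: partial plan
                buffers[o] = b
    return buffers, None                       # success
```

Running this on a three-object cycle with the primitive plan [($o_1$, s$\rightarrow$b), ($o_3$, s$\rightarrow$g), ($o_2$, s$\rightarrow$g), ($o_1$, b$\rightarrow$g)] returns a buffer for $o_1$ that avoids every start and goal pose it was constrained against.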
Once feasible buffers are found, all the primitive actions can be transformed into feasible pick-n-place actions inside the workspace, and therefore the primitive plan can be transformed into a rearrangement plan moving objects from $\mathcal A_s$ to $\mathcal A_g$. The function BufferGeneration is implemented by either sampling or solving an optimization problem, both of which are discussed below. \begin{figure} \vspace{2mm} \centering \includegraphics[width=\columnwidth]{AlgoExample.pdf} \vspace{-.25in} \caption{A working example with three objects defined in (a). The primitive plan is [($o_1$, s $\rightarrow$ b), ($o_3$, s $\rightarrow$ b), ($o_2$, s $\rightarrow$ g), ($o_1$, b $\rightarrow$ g), ($o_3$, b $\rightarrow$ g)]. Figures (b)-(f) show the steps of Alg. 1 after each action. The transparent discs with dashed line boundaries ($B_i$) represent the buffers satisfying constraints up to each step. For each object $o_i$, the green ($O_i$) and cyan ($G_i$) discs represent the current and goal poses, respectively.} \label{fig:AlgoExample} \end{figure} \paragraph{Sampling}\label{sec:Sampling} Given the object poses that the buffers must avoid so far, feasible buffers can be generated by sampling poses inside the free space. When several objects occupy buffers at the same time, we sample buffers one by one, and previously sampled buffers are treated as obstacles for later ones. \paragraph{Optimization}\label{sec:Optimization} For cylindrical objects $o_i$ at $(x_i,y_i)$ with radius $r_i$ and $o_j$ at $(x_j,y_j)$ with radius $r_j$, the two are collision-free when $(x_i-x_j)^2+(y_i-y_j)^2 \geq (r_i+r_j)^2$ holds. By further restricting the range of the buffer centroids to ensure they are in the workspace, the buffer allocation problem can be transformed into a quadratic optimization problem. For objects with general shapes, collision avoidance cannot be represented by inequalities of object centroids.
We can construct the optimization problem with $\phi$ functions of the objects \cite{chernov2010mathematical} and solve it with gradient-based methods. \subsection{Lazy Buffer Allocation}\label{sec:LazyBufferGeneration} \input{04_01_lazy_buffer_generation} \subsection{Tabletop Rearrangement with Lazy Buffers (\textsc{TRLB}\xspace)}\label{sec:PartialPlan} \input{04_02_partial_plan} \subsection{Preprocessing}\label{sec:Preprocess} \input{04_03_Preprocess} \subsection{Ablation Study for Cylindrical Objects} We first present experiments with cylindrical objects to compare lazy buffer generation algorithms given different options, including: (1) Primitive plan computation: running buffer minimization (RBM), total buffer minimization (TBM), random order (RO); (2) Buffer allocation methods: optimization (OPT), sampling (SP); (3) High level planners: one-shot (OS), forward search tree (ST), bidirectional search tree (BST); and (4) With or without preprocessing (PP). Here, the one-shot (OS) planner uses primitive plans and buffer allocation (Sec.~\ref{sec:LazyBufferGeneration}) without tree search (Sec.~\ref{sec:PartialPlan}). In OS, we attempt to compute a feasible rearrangement plan up to $30|\mathcal O|$ times before announcing a failure. Notice that at least $|\mathcal O|$ actions are required for solving any instance. A full \textsc{TRLB}\xspace algorithm is a combination of components, e.g., RBM-SP-BST stands for using the primitive plans that minimize running buffer size, performing buffer allocation by sampling, maintaining a bidirectional search tree, and doing so without preprocessing. For evaluation, we first compare the primitive plan computation options, using sampling-based buffer allocation, bidirectional tree search, and no preprocessing. TBM and RBM plans are computed using dynamic programming based solvers \cite{gaorunning}. The results are shown in Fig.~\ref{fig:Primitive}.
Even though plans generated by TBM-SP-BST are slightly shorter than those of RBM-SP-BST, TBM-SP-BST is less scalable as either the density level or the number of objects in the workspace increases. Compared to RBM plans, individual RO plans can be generated almost instantaneously, but we do not observe much benefit in computation time for the overall algorithm. The results indicate that RBM should be used for primitive plan computation, as it yields efficient, high-quality solutions. \begin{figure}[h!] \vspace{2mm} \centering \begin{overpic}[width=1\columnwidth]{primitive_03.pdf} \put(-1.5,-22.5){ \includegraphics[width=1\columnwidth]{primitive_05.pdf}} \end{overpic} \vspace{13mm} \caption{Comparison of primitive planners with $10$-$40$ cylinders and density levels $\rho=0.3$ (top), 0.5 (bottom) (left: computation time in seconds; middle: success rate; right: number of actions as multiples of $|\mathcal O|$).} \label{fig:Primitive} \end{figure} In Fig.~\ref{fig:BufferAllocation}, buffer allocation methods are compared using the RBM primitive planner and the OS high-level planner. Optimization-based allocation guarantees completeness and generates high-quality plans, but it is computationally expensive. When $\rho=0.5$, the success rate tends to be low in instances with a small number of objects. That is because, for a given density level, the smaller the number of objects, the larger the object size relative to the environment, and the smaller the free configuration space relative to the environment. Thus, precisely allocating buffer locations with OPT is helpful in these cases. \begin{figure}[h!]
\vspace{2mm} \centering \begin{overpic}[width=1\columnwidth]{buffer_03.pdf} \put(-1.5,-22.5){ \includegraphics[width=1\columnwidth]{buffer_05.pdf}} \end{overpic} \vspace{13mm} \caption{Comparison of buffer allocation methods with $10$-$40$ cylinders at density levels $\rho=0.3$ (top), 0.5 (bottom) (left: computation time in seconds; middle: success rate; right: number of actions as multiples of $|\mathcal O|$).} \label{fig:BufferAllocation} \end{figure} The effectiveness of the high-level planners and of preprocessing is shown in Fig.~\ref{fig:HighLevel}, which suggests that ST, BST, and preprocessing are all effective in increasing the success rate in dense environments. In addition, preprocessing significantly speeds up computation in large scale dense cases, at the price of extra actions to execute the preprocessing. By simplifying the dependency graph with preprocessing, less time is needed to compute a primitive plan. \begin{figure}[h!] \vspace{1mm} \centering \begin{overpic}[width=1\columnwidth]{high_level_03.pdf} \put(-1.5,-22.5){ \includegraphics[width=1\columnwidth]{high_level_05.pdf}} \end{overpic} \vspace{13mm} \caption{Comparison among high-level frameworks and OS with preprocessing. There are 10-40 cylinders in the workspace at density levels $\rho=0.3$ (top), $0.5$ (bottom) (left: computation time in seconds; middle: success rate; right: number of actions as multiples of $|\mathcal O|$).} \label{fig:HighLevel} \end{figure} The robustness of ST and BST is further evaluated with ``dense-small'' instances where a few objects are packed densely (Fig.~\ref{fig:HighLevelDenseSmall}). The bidirectional search tree has a higher success rate in these cases, especially in 5-object instances. In addition to the above evaluations, we also tried integrating the preprocessing into the BST framework (RBM-SP-BST-PP), which speeds up the computation: for $60$-object instances with $\rho=0.5$, only $63\%$ of them can be solved by RBM-SP-BST within 300 seconds.
With preprocessing, however, all of them can be solved, with an average solution time of 0.29 seconds. Similarly to the results in Fig.~\ref{fig:HighLevel}, preprocessing makes the solution plan much longer than necessary (requiring around $30\%$ more actions than RBM-SP-BST). Based on the analysis of computation time, success rate, and solution quality, RBM-SP-BST is the best overall combination, and preprocessing significantly speeds up the solver at the cost of reduced solution quality. \begin{figure}[h!] \vspace{2mm} \centering \includegraphics[width=0.6\columnwidth]{dense_small_example.pdf} \vspace{-0.1in} \caption{Comparison between ST and BST frameworks with ``dense-small'' instances where 5-8 cylinders are packed in the environment at density level $\rho=0.5$. [left] An example with 5 cylinders; [right] success rate of the methods.} \label{fig:HighLevelDenseSmall} \end{figure} \subsection{Comparison with Alternatives for Cylindrical Objects} We compare the proposed method RBM-SP-BST with BiRRT(fmRS) \cite{krontiris2016efficiently} and an MCTS planner \cite{labbe2020monte}, which, to the best of our knowledge, are state-of-the-art planners for \texttt{TORI}\xspace. The MCTS planner is a C++ solver, while the other two methods are implemented in Python. Besides success rate, solution quality, and computation time, we also compare the number of collision checks, which are time-consuming in most planning tasks. In Fig.~\ref{fig:LargeScale}, we compare the methods on large scale problems with $\rho=0.3$. The success rate is $100\%$ for all. Our method, RBM-SP-BST, avoids repeated collision checks thanks to its use of the dependency graph. BiRRT(fmRS), which only uses dependency graphs locally, spends substantial time and performs many collision checks to generate random arrangements. MCTS generates solutions of similar quality, but does so with extensive collision checking, which slows down computation.
We note that a value of $1$ in the right figure (number of actions) is the minimum possible, so both RBM-SP-BST and MCTS compute high-quality solutions, with RBM-SP-BST doing slightly better. To sum up, on sparse large scale instances, RBM-SP-BST is two orders of magnitude faster and conducts far fewer collision checks than the alternatives. \begin{figure}[h!] \vspace{2mm} \centering \begin{overpic}[width=1\columnwidth]{large_scale.pdf} \end{overpic} \vspace{-.1in} \caption{Comparison of algorithms with 20-100 cylinders at density level $\rho=0.3$ (left: computation time in seconds; middle: number of collision checks; right: number of actions as multiples of $|\mathcal O|$).} \label{fig:LargeScale} \end{figure} Next, we compare the methods on ``dense-small'' instances (Fig.~\ref{fig:DenseSmallComaprison}). Here, RBM-SP-BST is the only method that maintains a high success rate in these difficult cases. \begin{figure}[h!] \vspace{2mm} \centering \includegraphics[width=1\columnwidth]{dense_small.pdf} \caption{Comparison of methods on ``dense-small'' instances where 5-8 objects are packed in an environment with $\rho=0.5$ (left: computation time in seconds; middle: number of collision checks; right: success rate).} \label{fig:DenseSmallComaprison} \end{figure} We further compare the performance of RBM-SP-BST and MCTS on lattice rearrangement problems, which have recently been studied in the literature \cite{yurearrangement}. An example with 15 objects is shown in Fig.~\ref{fig:lattice}[left]. In the start and goal arrangements, the gaps between adjacent objects are set to 0.01 object radius, and thus buffer allocation is challenging for sampling-based methods. While MCTS tries all the actions at each node, RBM-SP-BST is able to detect the embedded combinatorial object relationships via the dependency graph and therefore needs fewer buffer allocation calls. As shown in Fig.~\ref{fig:lattice}[right], RBM-SP-BST has a much higher success rate on lattice rearrangement tasks. \begin{figure}[h!]
\vspace{2mm} \centering \includegraphics[width=0.3\textwidth]{lattice_example.pdf}\hspace{2mm} \includegraphics[width=0.3\columnwidth]{lattice_comaprison.pdf} \caption{Comparison among methods on lattice instances with 3-35 objects. [left] A lattice example; [right] success rate.} \label{fig:lattice} \end{figure} \subsection{Cuboid Objects} Because the MCTS solver only supports cylindrical objects, we only compare RBM-SP-BST and BiRRT(fmRS) in the cuboid setup (Fig.~\ref{fig:density}[right]). When $\rho=0.3$, RBM-SP-BST computes high quality solutions efficiently, while BiRRT(fmRS) can only solve instances with up to 20 cuboids. We note that, when $\rho=0.4$, BiRRT(fmRS) cannot solve any instance, while RBM-SP-BST can solve 50-object rearrangement problems in 28.6 seconds on average. \begin{figure}[h!] \vspace{2mm} \centering \includegraphics[width=1\columnwidth]{stick_03.pdf} \caption{Comparison between methods in cuboid instances with $10$-$30$ cuboids and $\rho=0.3$ (left: computation time in seconds; middle: success rate; right: number of actions as multiples of $|\mathcal O|$).} \label{fig:StickComaprison} \end{figure} \subsection{Hardware Demonstration} \begin{figure}[h!] \centering \includegraphics[width=\columnwidth]{hardware.pdf} \caption{Experimental settings in addition to that of Fig.~\ref{fig:ApplicationExample}.} \label{fig:HardwareSetups} \end{figure} We further demonstrate that the plans computed by \textsc{TRLB}\xspace can be readily executed on real robots in a complete vision-planning-control pipeline. Our hardware setup consists of a UR-5e robot arm, an OnRobot VGC 10 vacuum gripper, and an Intel RealSense D435 RGB-D camera. As shown in the accompanying video, \textsc{TRLB}\xspace solves all attempted instances (Fig.~\ref{fig:ApplicationExample} and Fig.~\ref{fig:HardwareSetups}), which involve concave objects, in an apparently natural and efficient manner.
\section{Introduction}\label{sec:intro} \input{01_introduction} \section{Related Work} \input{02_related} \section{Tabletop Rearrangement\\ with Internal Buffers} \input{03_problem_setup} \section{Algorithmic Solutions} \input{04_methods} \section{Experiments} \input{05_experiment} \section{Conclusion and Future Work} \input{09_conclusion} \bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:intro} ``Check. Your. Work. It is okay to make mistakes, but it is not okay to turn in work that is riddled with mistakes that checking your work could have caught.'' These sage words were spoken by my undergraduate mathematics professor, Dr. Kenneth Holladay. Unfortunately, somewhere between Dr. Holladay's calculus classes and my graduate discrete choice studies, checking one's work was demoted from a main actor to a supporting or missing role in the ever-unfolding data analysis drama. The tragedy of this situation is that regardless of whether one is working on undergraduate calculus problems or building predictive models for governments and corporations, there is a need to check one's work. From an econometric perspective, we do not typically have economic theory to guide all aspects of our model specification \citep[p.85]{dagsvik_2017_invariance}. Accordingly, it would be wise to check the implications of our (likely) mis-specified models. And from a broader statistical view, it is often noted that ``all models [including discrete choice models] are false, but some are useful'' \citep{box_science_1976}. Again, we find checking for \textit{correctness} or \textit{truth} of one's model to be hopeless a priori, and the best we can do is check how our models misrepresent reality \citep[pp.734, 800]{gelman_posterior_1996}. For discrete choice models, many such checks can be broadly categorized as checking for ``overfitting'' or ``underfitting.'' Overfitting is the degree to which one's model performs better when evaluated on data used for estimation (i.e., training or in-sample data) as compared to data that was not used for estimation (i.e., testing or out-of-sample data) \citep[p.6]{fariss_enhancing_2017}. Colloquially, severe overfitting is characteristic of models that have ``learned'' the patterns in the in-sample data at the expense of patterns that generalize to out-of-sample data.
Conversely, underfitting is the degree to which one's model has failed to learn patterns in one's in-sample data \citep[p.6]{fariss_enhancing_2017}. Underfit models typically perform poorly on some problem-specific metric for both in-sample and out-of-sample data. See Figure \ref{fig:underfitting} for an illustrative depiction of these two concepts in a binary discrete choice setting. Here, the data generating relationship is shown in dark blue. The underfit model, shown in dashed purple, is overly simple and has failed to capture observed patterns in the data. Conversely, the overfit model is overly complex; in dashed and dotted light orange, the overfit model captures spurious patterns in the data that will not generalize to new samples from the data generating process. \begin{figure} \centering \includegraphics[width=0.75\textwidth]{underfitting_example} \caption{Example of underfitting and overfitting} \label{fig:underfitting} \end{figure} Currently, numerous methods exist for diagnosing overfitting. Such techniques include cross-validation \citep{stone_cross_1974}, resampling methods such as the ``0.632+'' bootstrap \citep{efron_improvements_1997}, and the use of held-out data \citep{steckel_cross_1993}. Importantly, these techniques are highly general. Instead of testing for specific, hypothesized causes of overfitting, these methods check for overfitting's presence and extent. This generality promotes use: researchers can use the methods regardless of the specific analyses\footnote{Of course, one's analytic task can impact the way such checks are implemented. For instance, cross-validation is performed differently when dealing with cross-sectional panel data as compared to purely cross-sectional data.} being performed. As a result, the overall quality of predictive analyses has increased. A mostly standardized tool-box and language for assessing overfitting has developed, enabling researchers to better evaluate predictive models.
Comparatively, checks for underfitting are far less common. For instance, within the discrete choice literature, few checks for the general presence and extent of underfitting have been proposed. As examples, consider two popular discrete choice textbooks: ``Discrete Choice Methods with Simulation, 2nd Edition'' by Kenneth Train \citeyearpar{train_discrete_2009} and ``Discrete Choice Analysis: Theory and Application to Travel Demand'' by \citet*{ben-akiva_discrete_1985}. In \citet{train_discrete_2009}, only 1 of the 90 sections in the book (Section 11.5) discusses model checking in a manner that can be described as a general check for underfitting. Even then, the checks discussed in that section are only applicable to a particular class of models---mixed logit models. Now consider \citet{ben-akiva_discrete_1985}. In that text, only 1 of 78 sections (Section 7.6) describes procedures for directly checking one's model against one's data for lack-of-fit. Unfortunately, this paucity of methods for assessing underfitting goes beyond these two textbooks. It is characteristic of the field. To improve the state of practice, I propose and demonstrate methods for using predictive simulations to perform general checks of underfitting in discrete choice models. In particular, I make the following contributions. Methodologically, I introduce a general and `semi-automatic' algorithm for checking discrete choice models via predictive simulations. Moreover, by combining new graphical displays with existing techniques, I introduce methods for checking one's data against one's model in terms of the model's predicted distributions of choices ($P \left( Y \right)$), choices given explanatory variables ($P \left( Y \mid X \right)$), and explanatory variables given choices ($P \left( X \mid Y \right)$). Empirically, I demonstrate my proposed methods by checking the models from \citet{brownstone_forecasting_1998}. 
Through this case study, I show that my proposed methods can point out lack-of-model-fit in one's models and suggest concrete model improvements that substantively change the results of one's policy analysis. Importantly, these model checking techniques complement rather than replace careful economic and behavioral scrutiny of one's models. By ensuring adherence to sound economic principles while also diagnosing and reducing the amount of underfitting in our models, we increase our chances of creating maximally useful representations of reality.

The remainder of this paper is structured as follows. To ease discussion of this paper's ideas, Section \ref{sec:methods} immediately describes the proposed methodology and provides examples of the graphical displays introduced in this paper. Once readers are familiar with the proposed simulation-based checks for underfitting, Section \ref{sec:lit-review} relates these methods to existing model tests and assessments in the discrete choice and statistics literature. Next, Section \ref{sec:case-study-part2} will continue this paper's case study, showing how the displays from Section \ref{sec:methods} suggest avenues for model improvement, not merely model criticism. Section \ref{sec:conclusion} will then recap the paper's contributions and conclude. Finally, Appendix \ref{sec:freq-questions} contains answers to frequently asked questions about the interpretation and use of predictive simulations for model checking, and Appendix \ref{sec:appendix-b-expanded-results} contains model estimation results that are of secondary importance to the paper but perhaps of interest to some readers.

\section{Methodology: Demonstration and Description} \label{sec:methods}

In this section, I demonstrate and describe this paper's proposed methodology for assessing underfitting in discrete choice models.
It cannot be emphasized strongly enough that the \textbf{\textit{main focus}} of this paper's techniques is to show and quantify the existence and extent of underfitting of one's model. As a result, this section will not contain any information about how one might use the proposed techniques to help suggest model improvements. Such a discussion is of secondary importance to this paper and is deferred to Section \ref{sec:case-study-part2}. Moreover, not all techniques in this section will provide obvious clues to how one can improve one's model. This is to be expected. As an analogy, consider thermometers. They are useful for detecting the presence and severity of a fever, even though they don't suggest strategies for curing the underlying problem. Likewise, this paper's methods are valuable for detecting and measuring underfitting, regardless of the degree to which they suggest ways to fix the underlying causes of the lack-of-fit.

Disclaimers aside, I use a case study of vehicle choice by \citet{brownstone_forecasting_1998} to provide an initial illustration of the proposed techniques. Though published twenty years ago, I chose \citeauthor{brownstone_forecasting_1998}'s paper for three reasons: (1) because the data is freely available for download by the public, (2) because the article is well-known and well-cited within the discrete choice community, and (3) because the article represents standard best practices within the field. As such, by applying this paper's proposed model checking techniques to the vehicle choice models of \citet{brownstone_forecasting_1998}, we gain insight into the types of underfitting that one might find, even in models built by the best of us. Accordingly, Subsection \ref{sec:case1-part1-data} describes the vehicle choice data and model from \citet{brownstone_forecasting_1998}.
Next, Subsections \ref{sec:case1-part1-lp-plots} - \ref{sec:case1-part1-cdfs} show the seven types of plots that will be used in this paper to assess a model's lack-of-fit. These plots are introduced in increasing order of granularity. For example, the first two plots assess whether one's model fits one's data in an aggregate sense; the third and fourth plots assess a model's lack-of-fit for particular groups of observations for a given alternative; and the final three plots assess a model's lack-of-fit with respect to a particular alternative and particular values of a chosen explanatory variable. Finally, once specific examples of the various plots have been introduced, Subsection \ref{sec:case1-part1-discussion} will discuss the results and interpretations of these plots for diagnosing the character and extent of underfitting in \citet{brownstone_forecasting_1998}'s multinomial logit (MNL) model. Subsection \ref{sec:general-methodology} will then abstract from the case study to present this paper's general methodology for checking underfitting in discrete choice models. To make the methods as widely useful as possible, special attention will be paid to how the general method may be adapted for use with `unlabelled alternatives.' Moreover, Subsection \ref{sec:general-methodology} will present a semi-automatic algorithm for applying this paper's methodology, further decreasing the minimum effort needed to apply this paper's techniques.

\subsection{A Case Study of \citet{brownstone_forecasting_1998} (Part 1)} \label{sec:case1-part1}

\subsubsection{Data and modeling context} \label{sec:case1-part1-data}

The data in this case study comes from a stated-preference study of household attitudes and preferences for alternative-fuel vehicles \citep{brownstone_forecasting_1998, brownstone_transactions_1996}. The sampling plan used random telephone-digit dialing followed by a postal survey to collect data from 4,654 households.
The collected data included each household's choice amongst six alternatives (three different fuel types and two body types per fuel type), the attributes of each alternative, and some limited characteristics of the household. For a detailed description of the collected attributes and characteristics, see Table \ref{table:variable-definitions-vehicle-choice}. Together, \citeauthor{brownstone_forecasting_1998} used the collected\footnote{For further information about the data and collection protocol, see \citet{brownstone_forecasting_1998, brownstone_transactions_1996}. To access the raw data, visit the website of the Journal of Applied Econometrics and download the data associated with \citet{mcfadden_mixed_2000}.} data to estimate four vehicle choice models: an MNL model, a probit model, and two mixed logit models.

\begin{table}
\centering
\input{variable_descriptions.tex}
\caption{Variable Definitions}
\label{table:variable-definitions-vehicle-choice}
\end{table}

For part one of this case study, I demonstrate this paper's methods with \citeauthor{brownstone_forecasting_1998}'s MNL model\footnote{However, I return to the assessment of \citeauthor{brownstone_forecasting_1998}'s chosen mixed logit model in Section \ref{sec:case-study-part2}.}. I use this model because it is likely to be familiar to the largest spectrum of readers. Moreover, producing the predictive simulations for \citeauthor{brownstone_forecasting_1998}'s chosen mixed logit model seemed unnecessarily tedious. This would require writing custom code to estimate their model, yet the MNL can be used just as effectively to demonstrate this paper's model checking techniques. Justifications aside, Table \ref{table:vehicle-choice-mnl-results} shows the results of reproducing \citeauthor{brownstone_forecasting_1998}'s MNL model. As expected, the results match those of the original study. Moreover, almost all variables in the model are statistically significant, and no variables have an obviously wrong sign.
\citeauthor{brownstone_forecasting_1998}'s MNL model therefore seems to be a good `working model' that one can begin refining by iteratively: (1) criticizing the working model, (2) positing and estimating a new model that addresses the earlier criticisms, and (3) repeating steps 1 and 2 \citep[p.795]{box_science_1976}.

\begin{table}
\centering
\input{vehicle_choice_mnl_results.tex}
\caption{\citeauthor{brownstone_forecasting_1998}'s MNL model}
\label{table:vehicle-choice-mnl-results}
\end{table}

Before seeing any data, \citeauthor{brownstone_forecasting_1998} were critical of the MNL model because its ``independence from irrelevant alternatives'' (IIA) property implies restrictive substitution patterns among the various alternatives being modeled \citep[pp.109-110]{brownstone_forecasting_1998}. This a-priori criticism led \citeauthor{brownstone_forecasting_1998} to estimate the probit and mixed logit models mentioned above since those models do not have the IIA property. To complement such data-agnostic reasons for model refinement, this paper suggests a data-driven, graphical approach to model criticism by using predictive simulations to check for underfitting. In Subsections \ref{sec:case1-part1-lp-plots} - \ref{sec:case1-part1-cdfs} that follow, I will use \citeauthor{brownstone_forecasting_1998}'s MNL model to showcase seven graphical model-checks that I believe can be widely useful for discrete choice models. Then, Subsection \ref{sec:general-methodology} will generalize these plots by detailing an overarching methodology for model checking via predictive simulations.

\subsubsection{Log-predictive plots} \label{sec:case1-part1-lp-plots}

This paper's first type of graphical model check is the log-predictive plot. This plot compares a scalar summary of the predictive ability of a given model on the observed data against the distribution formed by evaluating the scalar measure on simulated datasets.
Because it evaluates a single number, the observed log-predictive value, this plot is amongst the most aggregate checks of underfitting.

\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{log-predictive-vehicle-choice-mnl}
\caption{Log-predictive plot of \citeauthor{brownstone_forecasting_1998}'s MNL model}
\label{fig:orig-mnl-log-predictive}
\end{figure}

For a concrete example, consider Figure \ref{fig:orig-mnl-log-predictive}. Here, the scalar measure is the log-likelihood, $T \left( X, Y; \beta _{\textrm{MLE}} \right) = \sum _i \sum _{j \in C_i} y_{ij} \ln \left[ P \left( y_{ij} = 1 \mid X_{ij}, \beta _{\textrm{MLE}} \right) \right]$. In this equation, $X$ is the design matrix for the observed data; $Y$ is a vector of outcomes with one element per observation ($i$) per available alternative ($j$); $y_{ij} \in \left\lbrace 0, 1 \right\rbrace$ denotes whether observation $i$ chose alternative $j$; $X_{ij}$ contains the explanatory variables for observation $i$ and alternative $j$; $C_i$ is the choice set for individual $i$; $\beta _{\textrm{MLE}}$ is the maximum likelihood estimate (MLE) displayed in Table \ref{table:vehicle-choice-mnl-results}; and $P \left( y_{ij} = 1 \mid X_{ij}, \beta _{\textrm{MLE}} \right)$ is calculated from \citeauthor{brownstone_forecasting_1998}'s MNL model. The observed log-likelihood, $T \left( X, Y; \beta _{\textrm{MLE}} \right)$, is compared against the distribution formed by computing $T \left( X, Y^r; \beta _{\textrm{MLE}} \right)$ for simulated outcomes $Y^r$, for draws $r \in \left\lbrace 1, 2, ..., R \right\rbrace$. Here, $R$ is the total number of simulated datasets. In this case, we first simulate values of the choice model parameters ($\beta$) from the asymptotic sampling distribution of $\beta _{\textrm{MLE}}$.
In other words, if $\mathcal{MVN} \left( \mu, \Sigma \right)$ is a multivariate normal distribution with mean $\mu$ and covariance matrix $\Sigma$, then I draw $\beta ^r$ from $\mathcal{MVN} \left( \beta _{\textrm{MLE}}, -H^{-1} \right)$ where $-H^{-1}$ is the negative inverse of the Hessian matrix, evaluated at $\beta _{\textrm{MLE}}$. Then for each draw of $\beta ^r$, I compute $P \left( y_{ij} = 1 \mid X_{ij}, \beta ^r \right) \ \forall \ i, \ j \in C_i$. Finally, for each observation $i$, I simulate an outcome $y ^r_i = \left[ y ^r _{i1}, y ^r _{i2}, ..., y ^r _{i, \mid C_i \mid} \right]$ from the categorical distribution with probabilities $P \left( y_{ij} = 1 \mid X_{ij}, \beta ^r \right) \ \forall \ j \in C_i$. Once the simulated values of $T \left( X, Y^r; \beta _{\textrm{MLE}} \right)$ have been calculated, their distribution can be displayed using histograms, kernel density estimates, and/or empirical cumulative distribution functions. In Figure \ref{fig:orig-mnl-log-predictive}, a kernel density estimate was used.

Mechanics aside, the rationale behind the log-predictive plots is as follows. If one's model fits one's data well, then the simulated outcomes $\left\lbrace Y^r \right\rbrace$ from one's model should be similar to one's observed outcomes. The log-predictive plots measure similarity in terms of some log-predictive measure such as the log-likelihood evaluated at the MLE of the observed data. In a Bayesian context, one might instead use the log pointwise-predictive density $T \left( X, Y \right) = \sum _i \sum _{j \in C_i} y_{ij} \ln \left[ \frac{1}{S }\sum _{s=1} ^S P \left( y_{ij} = 1 \mid X_{ij}, \beta _s \right) \right]$ where $S$ is the total number of parameter samples from one's posterior/prior distribution and $\beta _s$ is a single sample \citep[p.169]{gelman_bayesian_2014}.
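As a rough illustration, the simulation procedure just described can be sketched in Python. Everything below (the data, the coefficient vector standing in for $\beta _{\textrm{MLE}}$, and the covariance matrix standing in for $-H^{-1}$) is a hypothetical stand-in; in practice these would come from one's own dataset and estimation output.

```python
import numpy as np

rng = np.random.default_rng(0)

def mnl_probs(X_i, beta):
    """MNL choice probabilities for one observation's alternatives
    (X_i has one row of explanatory variables per alternative)."""
    u = X_i @ beta
    e = np.exp(u - u.max())  # subtract the max for numerical stability
    return e / e.sum()

def log_likelihood(X_list, y, beta):
    """T(X, Y; beta): the summed log probability of the chosen
    alternatives, where y[i] is the index of observation i's choice."""
    return sum(np.log(mnl_probs(X_i, beta)[y_i])
               for X_i, y_i in zip(X_list, y))

# Hypothetical stand-ins: 200 observations, 3 alternatives, 2 variables.
n_obs, n_alts, n_vars = 200, 3, 2
X_list = [rng.normal(size=(n_alts, n_vars)) for _ in range(n_obs)]
beta_mle = np.array([0.8, -0.5])   # stand-in for the fitted MLE
cov_mle = 0.01 * np.eye(n_vars)    # stand-in for -H^{-1}

# "Observed" outcomes (also simulated here, purely for illustration).
y_obs = np.array([rng.choice(n_alts, p=mnl_probs(X_i, beta_mle))
                  for X_i in X_list])
t_obs = log_likelihood(X_list, y_obs, beta_mle)

# Reference distribution: R simulated datasets Y^r.
R = 100
t_sim = np.empty(R)
for r in range(R):
    beta_r = rng.multivariate_normal(beta_mle, cov_mle)  # draw beta^r
    y_r = np.array([rng.choice(n_alts, p=mnl_probs(X_i, beta_r))
                    for X_i in X_list])                  # categorical draws
    t_sim[r] = log_likelihood(X_list, y_r, beta_mle)
# A kernel density estimate of t_sim with a vertical line at t_obs
# reproduces the structure of the log-predictive plot.
```

Note that outcomes are stored here as chosen-alternative indices rather than one-hot vectors $y_{ij}$; the two encodings give the same value of $T$.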
In both Bayesian and frequentist analyses, one expects that, for a well-fitting model, the distribution of log-predictive values formed by $T \left( X, Y^r \right)$ will be symmetric and unimodal with its mode at the observed value $T \left( X, Y \right)$. This would indicate that the log-predictive values of the observed outcomes are very likely, and that one's model is not biased towards producing outcomes that have consistently lower or higher log-predictive values. Deviations from this ideal represent model misfit. As exemplified by Figure \ref{fig:orig-mnl-log-predictive}, checking an estimated model with a log-predictive plot does not always reveal a lack of fit\footnote{On the other hand, log-predictive plots \textbf{\textit{can}} reveal a lack of fit. The expectation of the distribution of simulated log-predictive values is not the observed log-predictive value. This is true despite the fact that the expectation of the simulated parameters is the point-estimate of the model parameters. As is well-known, $ E \left[ f \left( x \right) \right] \neq f \left( E \left[ x \right] \right)$. That is to say, the expectation of a function need not equal the function evaluated at an expectation.}. This is because frequentist analyses typically optimize for the log-likelihood, and even Bayesian analyses revise their prior distributions to concentrate in areas of the parameter space with higher log-likelihoods (and hence high log pointwise-predictive values). However, if one's log-predictive plot of an estimated model displays a marked lack of fit, this means that something is wrong. For example, one's posited model may be woefully inadequate, one's estimation routine may not be converging, or one's computations may be suffering from numeric errors. In this sense, log-predictive plots are basic, graphical sanity-checks. Beyond checking one's estimated models, log-predictive plots are most useful for checking one's prior distributions in a Bayesian analysis.
In this setting, where analysts often choose their prior distributions for convenience rather than to reflect their true beliefs, the log-predictive plots can reveal large discrepancies between one's implied prior beliefs about the outcomes for one's dataset and the actual outcomes. If one notices such a conflict between one's data and one's prior after (1) observing a lack-of-fit between one's posterior distribution and one's data and (2) ruling out a lack-of-fit between one's likelihood and one's data, then one's `convenience prior' should be adjusted to reflect one's true a-priori information and beliefs.

\subsubsection{Market share plots} \label{sec:case1-part1-ms-plots}

Market share plots are a more detailed check for lack-of-fit than log-predictive plots. Instead of using a scalar summary, market share plots judge one's model using a vector of numbers, one for each alternative. In particular, market share plots compare the observed versus predicted numbers (or equivalently, percentages) of observations that choose each alternative. See Figure \ref{fig:orig-mnl-market-share} for an example.

\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{market-share-plot-vehicle-choice-mnl}
\caption{Market share plot of \citeauthor{brownstone_forecasting_1998}'s MNL model}
\label{fig:orig-mnl-market-share}
\end{figure}

Here, the measure of interest is $T \left( Y \right) = \left[ \sum _i y_{i1}, \sum _i y_{i2}, ..., \sum _i y_{iJ} \right]$ where $J$ is the total number of alternatives in one's dataset. As before, $T \left( Y \right)$ is compared against the distribution formed by computing $T \left( Y^r \right)$ for all simulated datasets, indexed by $r$. For this plot, and all others that follow, the simulation procedure is the same as described in Subsection \ref{sec:case1-part1-lp-plots}. In general, one can use many different plots to visualize this model check.
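A minimal sketch of the market share statistic and its reference distribution follows. The choice probabilities below are fixed stand-ins for illustration; in an actual check, each $Y^r$ would be simulated from the fitted model as described for the log-predictive plots.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical outcomes: y_obs[i] is the index of the chosen
# alternative (J = 4 alternatives, 500 observations).
J, n_obs, R = 4, 500, 200
p_stand_in = np.array([0.4, 0.3, 0.2, 0.1])
y_obs = rng.choice(J, size=n_obs, p=p_stand_in)

# Observed statistic: T(Y) = [sum_i y_i1, ..., sum_i y_iJ],
# i.e. the count of observations choosing each alternative.
t_obs = np.bincount(y_obs, minlength=J)

# Reference distribution from R simulated datasets.  In a real check,
# each Y^r would come from the fitted model; fixed probabilities
# stand in for those model-based simulations here.
t_sim = np.array([np.bincount(rng.choice(J, size=n_obs, p=p_stand_in),
                              minlength=J)
                  for _ in range(R)])

# Per-alternative quartiles, the ingredients of a boxplot display.
share_quartiles = np.percentile(t_sim, [25, 50, 75], axis=0)
```

Comparing each element of `t_obs` against the corresponding column of `t_sim` (e.g. with boxplots) gives the market share plot.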
One can iterate through available alternatives, each time plotting the observed number of decision makers that choose the current alternative versus the predicted distribution of the number of decision makers choosing the current alternative. As with the log-predictive plots, the distribution can be visualized using histograms, kernel density estimates, and/or empirical cumulative distribution functions. The various comparisons may also be concisely displayed using boxplots, as in Figure \ref{fig:orig-mnl-market-share}. Here, the alternatives have been taken to be the various fuel types of each vehicle.

Overall, market share plots assess the degree of underfitting in the marginal distribution, $P \left( Y \right)$, implied by one's model. These plots check whether one's model has learned the aggregate pattern of responses in one's data. Note that such patterns may be of great importance for decision making. As an example, it would be counter-productive to use a vehicle choice model for inventory stocking decisions if the model consistently under-predicted the number of electric vehicles by a large amount. Even beyond the decision-making use case, it is important to understand the ways that one's model does not fit one's data. For example, if one wants to understand a particular choice process, then market share plots can alert one to the need for caution when interpreting models that frequently under- or over-predict one's alternatives of interest. As a final caveat, note that a market share plot, as with all the other graphical checks in this paper, is most useful when it ``measures some aspect of the data that might not be accurately fit by the model'' \citep[p.191]{gelman_1996_model}. Practically, this means that if the model being checked ensures that the predicted shares of each alternative match the observed shares, market share plots are not very useful.
The most common such model is an MNL model with $J - 1$ alternative specific constants\footnote{This property is due to the first order conditions of the MNL model with alternative specific constants. See \citet[p.62]{train_discrete_2009} for more details.}. Most other commonly used choice models (e.g. nested logit and mixed logit models) do not ensure that one's predicted market shares match the observed shares \citep{donoso_maximum_2011, klaiber_random_2018}. As a result, market share plots remain a useful tool for checking generalized extreme value (GEV) models, mixed logit models, MNL models without alternative specific constants, and most other models.

\subsubsection{Binned reliability plots} \label{sec:case1-part1-reliability-plots}

Conceptually, market share plots judge a model's ability to predict $P \left( Y \right)$ for the various alternatives. In this subsection, binned reliability plots will instead be used to directly evaluate a model's predictions of $P \left( Y | X \right)$, thereby providing a more detailed assessment of a model's predicted probabilities for each alternative. An example of a binned reliability plot for \citeauthor{brownstone_forecasting_1998}'s MNL model is shown in Figure \ref{fig:orig-mnl-methanol-reliability} for methanol vehicles. At their core, binned reliability plots group (i.e. bin) observations according to their predicted probabilities of choosing a given alternative, and then for each group, the mean predicted probability is plotted on the x-axis while the proportion of observations that choose the alternative is plotted on the y-axis. Hopefully, one's observed reliability curve would lie in the middle of the simulated reliability curves and exhibit the same visual trends as the simulated curves. See the following paragraphs for more information and for an explanation of the many principles, choices, and visual elements that comprise this diagram.
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{reliability-plot-vehicle-choice-mnl-methanol-point}
\caption{Binned reliability plot for methanol vehicles using \citeauthor{brownstone_forecasting_1998}'s MNL model}
\label{fig:orig-mnl-methanol-reliability}
\end{figure}

Starting with its name, a reliability plot tries to display the ``reliability'' of a set of probabilistic predictions, where ``reliability refers to the degree of correspondence between forecast probabilities and observed relative frequencies'' \citep[p.41]{murphy_reliability_1977}. If a model is perfectly reliable, then that model's predicted probabilities should equal the ``true probabilities''. As a result, a plot of true probabilities on the y-axis versus predicted probabilities on the x-axis should yield the straight line $y = x$. Of course, one does not observe the true probabilities; one merely observes the discrete outcomes $y_{ij} \in \left\lbrace 0, 1 \right\rbrace \ \forall i, j$. Since discrete choice models generally yield distinct probability predictions for each observation, the sample average of the outcomes at each predicted probability will typically be a poor approximation of the true probabilities. Some smoothing of the observed responses (i.e. averaging outcomes across different predicted probabilities) will therefore be required to yield useful approximations of the desired reliability plot. On one hand, continuous smooths such as locally weighted scatterplot smoothers \citep{cleveland_bootstrap_1979}, spline smoothers \citep{silverman_spline_1984}, or kernel smoothers \citep{ghosh_kernel_2018} might be preferred since the resulting smoothed curves usually avoid the sharp boundary effects that can occur with ad-hoc grouping. However, I have found such continuous smoothing procedures to be unhelpful for discrete outcome data, especially as the degree of class imbalance increases.
As the proportion of observations choosing a given alternative moves away from 50\%, the continuous smooths often led to reliability diagrams that were horizontal lines. This is due to the fact that if a given alternative is unlikely to be chosen, then unless one's model has very high discriminatory power, observations with $y_{ij} = 1$ are likely surrounded by observations with $y_{ij} = 0$. As a result, local averaging (as with continuous smooths) is likely to always yield values near zero, thus producing a horizontal line.

An alternative method of smoothing that avoids the problems just described is based on binning the predicted probabilities. This is the smoothing method used to create the reliability plots in this paper. Using this method, one decides on a number of partitions of the predicted probabilities. Then, the predicted probabilities are ordered from smallest to largest and divided into partitions of roughly equal size. The mean of each partition's predicted probabilities is then plotted on the x-axis and the mean of the outcomes is plotted on the y-axis.

When determining the number of partitions, it should be noted that this choice is subject to a ``precision/robustness'' or ``bias/variance'' trade-off. As the number of partitions increases, one's assessment of the quality of a particular probability prediction is less biased by predicted probabilities of different magnitudes. However, as the number of partitions increases, the variance of the mean outcome within each partition increases due to the reduction in the number of observations being averaged over. The negative effects of having too many partitions are especially acute in class-imbalanced scenarios. Here, each partition's number of observations with $y_{ij} = 1$ may rapidly decrease to zero as the number of partitions increases, thereby destroying one's ability to accurately estimate the ``true'' probabilities.
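The bin-based smoothing just described amounts to only a few lines of code. A sketch, using simulated predicted probabilities and outcomes as stand-ins for a fitted model's output:

```python
import numpy as np

def binned_reliability(p_pred, y, n_bins=10):
    """Sort observations by predicted probability, split them into
    roughly equal-size partitions, and return each partition's mean
    predicted probability (x-axis) and mean outcome (y-axis)."""
    order = np.argsort(p_pred)
    partitions = np.array_split(order, n_bins)
    mean_pred = np.array([p_pred[idx].mean() for idx in partitions])
    mean_obs = np.array([y[idx].mean() for idx in partitions])
    return mean_pred, mean_obs

rng = np.random.default_rng(2)
p_pred = rng.uniform(0, 1, size=1000)              # hypothetical predictions
y = (rng.uniform(size=1000) < p_pred).astype(int)  # outcomes consistent with them

x_vals, y_vals = binned_reliability(p_pred, y)
# Plotting y_vals against x_vals (plus the analogous curve for each
# simulated Y^r) yields the binned reliability plot.
```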
As a result, one can see that increased precision in one's assessments of $P \left( Y \mid X \right)$ comes at the cost of decreased robustness of those assessments. As a general guide, I have empirically found that using ten partitions seems to be adequate for most visualizations\footnote{For example, Figure \ref{fig:orig-mnl-methanol-reliability} uses ten partitions.}. However, I suggest that one experiment with more and fewer partitions to see whether one's conclusions about the fit of one's model are sensitive to this choice. Aside from the smoothing method and number of partitions for bin-based smooths, construction of a reliability plot also requires one to choose which predictions will be displayed on the x-axis. For ease of interpretation, I recommend choosing a single set of predicted probabilities for binning and plotting on the x-axis. Such a choice will ensure that each bin is associated with a unique and constant set of observations. With this constant association, one will be able to generate a reference distribution for $T \left( X, Y \right) = \left[ \frac{1}{\mid B_1 \mid} \sum _{i \in B_1} y_{ij} , \frac{1}{\mid B_2 \mid} \sum _{i \in B_2} y_{ij}, \dots, \frac{1}{\mid B_K \mid} \sum _{i \in B_K} y_{ij} \right]$ where $K$ is the total number of partitions, $B_k$ is the set of observations in each partition, and $\mid B_k \mid$ is the number of observations in partition $k \in \left\lbrace 1, 2, \dots, K \right\rbrace$. If multiple sets of predicted probabilities were used (such as one set per simulated $\beta_r$), then the bin memberships of the observations might change with each draw, thus complicating the description of what we are simulating the reference distribution of. Now, given that a single set of predicted probabilities is desired, $P \left( Y \mid X, \beta _{\textrm{MLE}} \right)$ is an obvious candidate in a frequentist setting.
In a Bayesian setting, the point estimate that seems most appropriate is the posterior mean $P_{\textrm{post}} \left( Y \mid X \right) = \int P \left( Y \mid X, \beta \right) P \left( \beta \mid X, Y \right) \, d\beta \approx \frac{1}{R} \sum _{r=1} ^R P \left( Y \mid X, \beta_r \right)$ where $\beta_r$ is a sample from one's posterior distribution. Either way, once one has chosen a set of predicted probabilities, one can generate a distribution for $T \left( X, Y \right)$ using the simulated $Y^r$ from one's model. One will then be able to compare $T \left( X, Y \right)$ with this reference distribution using a plot such as Figure \ref{fig:orig-mnl-methanol-reliability}. The `observed' curve, i.e. $T \left( X, Y \right)$, will hopefully (1) lie near the line $y = x$, indicating perfect reliability, (2) lie in the middle of the reference distribution, and (3) exhibit the same visual trends as the reference distribution. Deviations from any of these three ideals represent (1) ways that one's model has failed to capture patterns in the data and (2) systematic over- or under-predictions of choice probabilities for the alternative of interest.

\subsubsection{Binned marginal model plots} \label{sec:case1-part1-mm-plots}

Like the binned reliability plots of the last subsection, binned marginal model plots provide a way to check one's model of $P \left( Y \mid X \right)$ for underfitting. Binned marginal model plots display the binned value of a continuous variable on the x-axis and each bin's average probabilities (both observed and simulated) for a given alternative on the y-axis. As a result, this plot checks the degree to which one's model faithfully replicates the empirical relationships between the variable on the x-axis and the true (but unobserved) $P \left( Y \mid X \right)$.
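The quantities behind a binned marginal model plot can be sketched as follows. The x-axis variable, the logistic form of the predicted probabilities, and all numbers below are hypothetical stand-ins chosen only for illustration, not \citeauthor{brownstone_forecasting_1998}'s model.

```python
import numpy as np

rng = np.random.default_rng(3)
n_obs, n_bins, R = 1000, 10, 50

x = rng.normal(size=n_obs)                  # variable chosen for the x-axis
order = np.argsort(x)
partitions = np.array_split(order, n_bins)  # equal-size bins of x

# Hypothetical predicted probabilities for one alternative, one row per
# parameter draw: p[r, i] = P(y_i = 1 | X_i, beta_r).  A logistic curve
# with draw-specific noise stands in for a real model's output.
p = 1.0 / (1.0 + np.exp(-(0.5 * x + 0.1 * rng.normal(size=(R, n_obs)))))
y_sim = (rng.uniform(size=(R, n_obs)) < p).astype(int)  # simulated choices

# Averages of the binned, predicted probabilities (tightly clustered
# "asymptotic" curves) and of the binned, simulated choices (noisier
# finite-sample curves), one curve per draw r.
binned_pred = np.array([[p[r, idx].mean() for idx in partitions]
                        for r in range(R)])
binned_sim = np.array([[y_sim[r, idx].mean() for idx in partitions]
                       for r in range(R)])
```

The simulated-choice curves vary more across draws than the predicted-probability curves, because they add binomial sampling noise on top of the parameter uncertainty.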
For example, in Figure \ref{fig:orig-mnl-suv-marginal} the binned marginal model plot displays the predicted versus simulated relationship between $P \left( Y \mid X \right)$ and the relative cost of a sports utility vehicle (SUV).

\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{marginal-model-plot-vehicle-choice-mnl-suv}
\caption{Binned marginal model plot for sports utility vehicles using \citeauthor{brownstone_forecasting_1998}'s MNL model}
\label{fig:orig-mnl-suv-marginal}
\end{figure}

As with reliability plots, the fact that the true probabilities are unobserved means that some smoothing is required. The original marginal model plots \citep{pardoe_2002_graphical} used continuous smoothing techniques such as smoothing splines, but for the reasons discussed in Section \ref{sec:case1-part1-reliability-plots}, bin-based smoothing will be used in this paper. One must therefore determine the number of bins for the plot. To avoid repetition, see the previous subsection for a discussion of considerations for choosing the number of bins. In Figure \ref{fig:orig-mnl-suv-marginal}, ten partitions were used. Beyond determining the number of partitions, one must make two additional choices. First, and most importantly, one must decide on the scalar variable to be plotted on the x-axis. Typical candidates to be plotted on the x-axis include the explanatory variables in one's model and variables that are not currently included in one's model. As noted in \citet{pardoe_2002_graphical}, marginal model plots can find variables that add useful information to one's model, even when the p-values of a Wald-test might erroneously suggest such variables' exclusion. Lastly, one must decide whether to display the averages of the binned, predicted probabilities in addition to the simulated probabilities (i.e. the averages of the binned, simulated choices).
To be precise, note that for each sample ($\beta _r$), one first predicts $P \left( Y \mid X, \beta _r \right)$, and then one uses those probabilities to simulate a vector of choices $Y^r$. The averages of the binned, predicted probabilities are the averages of each bin's $P \left( Y \mid X, \beta_r \right)$, and the averages of the binned, simulated choices are the averages of each bin's $Y^r$. Quantitatively, plotting the averages of the binned, predicted probabilities shows the relationships that are to be expected asymptotically if one's model is true. Plotting the simulated probabilities shows the sampling distribution (at one's current sample size) of the empirical probabilities, assuming one's model is true. In practice, I have found that plotting the averages of the binned, predicted probabilities clarifies the relationships that one's model expects asymptotically. For instance, examine Figure \ref{fig:orig-mnl-suv-marginal}. Here, and in general, the averages of the binned, predicted probabilities (shown in light blue) are more tightly clustered than the simulated probabilities (shown in light purple). Quantitatively, this occurs because the averages of the binned, predicted probabilities do not have the inflated variance that the simulated probabilities incur due to the randomness of simulation. Qualitatively, the more tightly clustered curves make it easier to see if one's observed data differs from the (asymptotic) expectations of one's model. These asymptotic judgements can then be weighed qualitatively as one uses the binned, simulated probabilities to provide finite-sample assessments of the discrepancies between one's observed data and data generated from one's model. For instance, in Figure \ref{fig:orig-mnl-suv-marginal}, the observed data (shown in dark blue) clearly differs from the asymptotic expectations of \citeauthor{brownstone_forecasting_1998}'s MNL model.
This judgement reinforces the fact that the model's simulated choices (equal in size to the observed data) show major differences from the observed data. \subsubsection{Simulated histograms} \label{sec:case1-part1-histograms} So far, the discussed model checking methods have all been aggregate checks of underfitting. To check a model's implied distribution for $P \left( Y \right)$, the log-predictive plots checked underfitting at the level of the entire dataset, and market share plots checked underfitting for each alternative. Becoming more disaggregate, binned, reliability plots and binned, marginal model plots checked for underfitting in a model's predictions of $P \left( Y \mid X \right)$ for specific alternatives using observations that had been grouped according to values of a chosen univariate variable. In the reliability plots, observations were grouped according to their predicted values of $P \left( Y \mid X, \beta _{\textrm{MLE}} \right)$ or $P_{\textrm{post}} \left( Y \mid X \right)$. For the binned, marginal model plots, observations were grouped according to whatever variable was chosen for the x-axis. In this subsection (and the following two subsections), I introduce this paper's most granular model checking methods. These techniques judge a model's implied distribution of $P \left( X \mid Y \right)$ for underfitting of specific alternatives at each observed value of a chosen univariate variable. Here, I introduce simulated histograms to assess underfitting of a discrete explanatory variable. Figure \ref{fig:orig-mnl-regcar-histogram} provides a concrete example. In this plot, I assess \citeauthor{brownstone_forecasting_1998}'s MNL model with respect to its ability to predict the number of choice observations that choose regular cars with an operating cost of 2 cents per mile.
From the plot, we see that while 835 observations actually chose a regular car with an operating cost of 2 cents per mile, 96\% of the simulated datasets had fewer than 835 observations choosing regular cars that operate at 2 cents per mile. The observed dataset is therefore unusual with respect to this feature, and the simulated histogram displays the model's lack-of-fit at an operating cost of 2 cents per mile for regular cars. \begin{figure} \centering \includegraphics[width=0.75\textwidth]{histogram-vehicle-choice-mnl-regcar-operating-costs} \caption{Simulated histograms for regular cars using \citeauthor{brownstone_forecasting_1998}'s MNL model} \label{fig:orig-mnl-regcar-histogram} \end{figure} Generalizing beyond this specific example, one constructs a simulated histogram by first choosing a given criterion. The number of observations meeting this criterion will be one's `discrepancy' measure, $T \left( X, Y \right)$. Then, this discrepancy measure is tabulated in both the observed dataset and each of one's simulated datasets. Next, the number of simulated observations meeting the criterion is plotted in a histogram to show the distribution of the discrepancy measure, assuming that one's model is correct. Finally, one draws a vertical line on the plot to indicate the value of the observed discrepancy $T \left( X, Y \right)$. For the simulated histograms in this paper, the discrepancy measure is the number of observations that choose a given alternative when the discrete variable of interest takes on some specified value. As an example, in Figure \ref{fig:orig-mnl-regcar-histogram}, the alternative is regular cars; the variable of interest is the operating cost of the vehicle, and the specified value is 2 cents per mile. \subsubsection{Simulated kernel density estimates} \label{sec:case1-part1-kdes} In the last subsection, simulated histograms were used to judge underfitting for specific alternatives at each observed value of a discrete variable.
This section introduces simulated kernel density estimates\footnote{Note that kernel density estimates are non-parametric estimates of the probability density function of a random variable. For an econometrician's and statistician's introduction to KDEs, see \citet{racine_nonparametric_2008} and \citet{wasserman_all_2006}, respectively.} (KDEs): an analogous model checking method that assesses underfitting for specific alternatives with respect to the individual values of a continuous variable. Figure \ref{fig:orig-mnl-electric-kde} presents an example of simulated KDEs. \begin{figure} \centering \includegraphics[width=0.75\textwidth]{kde-vehicle-choice-mnl-electric-price} \caption{Simulated kernel density estimates for electric cars using \citeauthor{brownstone_forecasting_1998}'s MNL model} \label{fig:orig-mnl-electric-kde} \end{figure} In this example, the alternative of interest is electric vehicles, and the variable of interest is the relative price: the price of the electric vehicle, divided by the natural log of the purchaser's household income. As a result, Figure \ref{fig:orig-mnl-electric-kde} plots the kernel density estimates of the distribution of relative prices for decision makers who were observed or simulated to have chosen an electric vehicle. From the plot, we see that (1) around a relative price of 2, the observed KDE strays outside the band of simulated KDEs; (2) between a relative price of approximately 5 and 6.5, the observed KDE drifts to the bottom edge of the simulated KDEs; and (3) between a relative price of 6.5 and 7, the observed KDE is mostly flat whereas the simulated KDEs maintain a negative slope. As noted in Section \ref{sec:case1-part1-reliability-plots}, one wants the curves generated by the simulated data to envelop and straddle the curve generated by the observed data, and to display the same visual trends as that curve.
All three points mentioned above highlight values of the relative price where the simulated KDEs fail to envelop, straddle, or mimic the observed KDE. That is, the simulated KDE plot reveals multiple ways in which \citeauthor{brownstone_forecasting_1998}'s MNL model fails to capture patterns in the observed relationship between relative price and the choice of electric vehicles. Now, to construct a simulated KDE plot as used in this paper, the procedure is as follows. First, choose a continuous, univariate variable to be plotted on the x-axis. Next, choose the alternative with respect to which the KDEs will be plotted. Given the selected alternative and variable, construct the plot by iterating through each of the simulated vectors of choices ($ Y^r$) and the observed choices ($Y$). For each set of choices, select the observations that choose the alternative of interest, and from those selected observations, plot a KDE of the chosen variable. Note that in plotting the KDE of the chosen variable, one must choose a kernel and bandwidth for the KDE. In practice (and in Figure \ref{fig:orig-mnl-electric-kde}), I have found that the standard normal distribution kernel and Scott's (\citeyear{scott_multivariate_1992}) default bandwidth procedure are typically adequate for visualization purposes. Researchers are of course free, and encouraged, to experiment with alternative kernels and bandwidth selection procedures. \subsubsection{Simulated cumulative distribution functions} \label{sec:case1-part1-cdfs} This paper's final type of plot is a plot of simulated cumulative distribution functions (CDFs). Conceptually, simulated CDFs serve the same role as simulated KDEs: by visualizing $P \left( X \mid Y \right)$, these plots are used to assess underfitting for specific alternatives with respect to the individual values of a continuous variable. Figure \ref{fig:orig-mnl-suv-cdf} gives an example of a simulated CDF plot.
\begin{figure} \centering \includegraphics[width=0.75\textwidth]{cdf-vehicle-choice-mnl-suv-price} \caption{Simulated CDFs for sport utility vehicles using \citeauthor{brownstone_forecasting_1998}'s MNL model} \label{fig:orig-mnl-suv-cdf} \end{figure} As in Section \ref{sec:case1-part1-mm-plots}, the variable of interest is again the relative price, and the alternative of interest is again SUVs. This time, however, observations will not be binned according to their relative price values. Instead, for each vector of choices (both simulated and observed), the observations that choose SUVs will be sorted in ascending order of their relative price values. These relative prices will be plotted on the x-axis. Then, for each relative price (for the current vector of choices), the y-axis value will be the proportion of observations that chose SUVs and have relative prices less than or equal to the x-axis value. As always, we would like the CDF generated by the observed data to lie in the center of the CDFs generated by the simulated data, and we would like the simulated CDFs to display the same visual trends as the observed CDF. In the example given by Figure \ref{fig:orig-mnl-suv-cdf}, the observed CDF is clearly at the bottom edge or outside the band of simulated CDFs for most of the observed relative price values. This indicates that \citeauthor{brownstone_forecasting_1998}'s MNL model is underfit with respect to the relative price variable and SUVs. Now, given that simulated CDFs and simulated KDEs perform the same conceptual role in assessing underfitting, why should both types of plots be used? The answer is that the two types of plots are complementary. For instance, in Section \ref{sec:case1-part1-kdes}, I noted that one has to choose a bandwidth and kernel in order to construct the simulated KDEs. Simulated CDF plots require no such ad-hoc choices. Moreover, while simulated KDEs highlight the exact values where one's model expects more or fewer observations (i.e.
where one's model predicts a higher or lower density) than observed in one's data, it can be difficult to visually identify situations when one's model has consistently but only slightly over- or under-predicted the density for a sequence of values. Such situations are easily identified with a simulated CDF plot. Conversely, while simulated CDFs excel at exposing underfitting, simulated KDEs are potentially more useful for fixing it. Indeed, knowing a variable's density with respect to a given alternative can suggest helpful transformations of that variable in one's systematic utility functions \citep{kay_transformations_1987, bergtold_bernoulli_2009}. Lastly, the particular set of simulated CDFs in Figure \ref{fig:orig-mnl-suv-cdf} may seem redundant given that the relationship between the choice of SUVs and relative price was assessed in Figure \ref{fig:orig-mnl-suv-marginal}. Again, the two plots are complementary. The simulated CDF plot does not bin any observations, so we do not lose any information, and we are able to see that the relationship between relative price and SUVs is poorly captured for all relative prices below about 7.5. Because the simulated curves in Figure \ref{fig:orig-mnl-suv-marginal} do not exhibit the same visual trend as the observed curve, the marginal model plot also indicates the lack-of-fit in the MNL model's estimated relationship between the relative price and the choice of SUVs. However, Figure \ref{fig:orig-mnl-suv-marginal} also shows that the partitions with predicted probabilities between approximately 0.20 and 0.27 are actually predicted accurately. This highlights the fact that at least some of the predicted probabilities of choosing an SUV are trustworthy, even if the overall estimated relationship is not. Depending on one's use case, this may or may not be sufficient.
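Both constructions are straightforward to sketch in code. The snippet below is a minimal Python illustration of one observed curve plus a band of simulated curves for each plot type; the array names and the gamma-distributed stand-in for relative price are hypothetical assumptions, not quantities from the actual dataset. Note that \texttt{scipy.stats.gaussian\_kde} defaults to a Gaussian kernel with Scott's bandwidth rule, matching the choices discussed in Section \ref{sec:case1-part1-kdes}.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Hypothetical data: a relative-price variable plus observed and
# simulated choice indicators (1 = chose the alternative of interest).
N, R = 1000, 50
rel_price = rng.gamma(shape=4.0, scale=1.5, size=N)
y_obs = rng.integers(0, 2, size=N)
y_sim = rng.integers(0, 2, size=(R, N))

# --- Simulated KDEs -------------------------------------------------
# gaussian_kde defaults to a Gaussian kernel with Scott's rule.
grid = np.linspace(rel_price.min(), rel_price.max(), 200)
obs_density = gaussian_kde(rel_price[y_obs == 1])(grid)
sim_densities = np.array(
    [gaussian_kde(rel_price[y_sim[r] == 1])(grid) for r in range(R)]
)

# --- Simulated CDFs -------------------------------------------------
def empirical_cdf(values):
    """Sort the values; pair each with the share of the sample
    at or below it."""
    xs = np.sort(values)
    return xs, np.arange(1, xs.size + 1) / xs.size

obs_x, obs_cdf = empirical_cdf(rel_price[y_obs == 1])
sim_cdfs = [empirical_cdf(rel_price[y_sim[r] == 1]) for r in range(R)]
```

Plotting \texttt{obs\_density} and \texttt{obs\_cdf} on top of the corresponding simulated bands then mirrors the structure of Figures \ref{fig:orig-mnl-electric-kde} and \ref{fig:orig-mnl-suv-cdf}.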
\subsubsection{Discussion} \label{sec:case1-part1-discussion} In Subsections \ref{sec:case1-part1-lp-plots} - \ref{sec:case1-part1-cdfs}, I presented seven graphical checks, five of which (Figures \ref{fig:orig-mnl-methanol-reliability} - \ref{fig:orig-mnl-suv-cdf}) revealed patterns of underfitting in \citeauthor{brownstone_forecasting_1998}'s MNL model. In particular, the reliability plot in Figure \ref{fig:orig-mnl-methanol-reliability} reveals that the MNL model's probability predictions for methanol vehicles are generally unreliable. Analogous plots\footnote{These plots are not shown due to space considerations. However, they are available upon request and at \url{https://github.com/timothyb0912/check-yourself/blob/master/notebooks/_06-tb-Make_introductory_plots.ipynb}.} for the other vehicle fuel and body types show that this pattern of inaccurate probability predictions is consistent across multiple dimensions. The conclusion from these plots is that overall, \citeauthor{brownstone_forecasting_1998}'s MNL model underfits numerous aspects of the observed data. By itself, the fact that the MNL model does not adequately represent the data generating process is unsurprising. Indeed, \citet{brownstone_forecasting_1998} began their paper with the presumption that the MNL model would be overly simple and would be better replaced with models such as the mixed logit and probit models. Instead, the checks in Figures \ref{fig:orig-mnl-suv-marginal} - \ref{fig:orig-mnl-suv-cdf} are most useful for the model-free way that they reveal lack-of-fit in the MNL model. Specifically, without relying on comparisons between two potentially flawed models, Figures \ref{fig:orig-mnl-suv-marginal} - \ref{fig:orig-mnl-suv-cdf} show that \citeauthor{brownstone_forecasting_1998}'s MNL model does not adequately relate individual choices to relative costs or operating costs for the various vehicle fuel and body types. 
Altogether, these checks allow one to directly assess whether key relationships have been captured in one's model. As a result, model assessment is effectively separated from the (arguably less-important) task of model comparison: a task that often reduces to checking whether one potentially flawed model has a predictive score (e.g. log-likelihood) that is `sufficiently' higher than that of another potentially flawed model. \subsection{General Methodology} \label{sec:general-methodology} Now that Section \ref{sec:case1-part1} has introduced specific examples of this paper's model checking techniques, the general methodology can be introduced clearly. In particular, the model checking methodology based on predictive simulations is as follows. First, let the observed data be represented by $\left( X, Y \right)$. Note that not all of the variables in $X$ need be part of one's choice model specification. Next, specify one or more statistics of the data. For each statistic, $T$: \begin{enumerate} \item Calculate $T \left( X, Y \right)$ for the observed data, yielding a value $T_0$. Note that $T_0$ may be a vector, e.g. an empirical cumulative distribution function. \item From some distribution, draw $R$ samples of one's model parameters\footnote{Note, we sample from a distribution of parameters because we are interested in assessing our model, not a particular point estimate of it. Drawing parameter samples accounts for uncertainty about the true parameter value (if one is a bayesian) or about the possible datasets (if one is a frequentist).}. For bayesian analyses, the distribution could be one's prior or posterior distribution (depending on whether one wants to check the prior or posterior). For frequentist analyses, the distribution may be the sampling distribution of one's model. The sampling distribution can be approximated using the asymptotic sampling distribution or by using the paired bootstrap \citep{brownstone_bootstrap_2001}.
\begin{enumerate} \item For each sampled vector of parameters $\left( \beta _r \right)$, calculate the predicted probabilities $\left( P \left[ Y \mid X, \beta_r \right] \right)$ of each available alternative being chosen by the decision makers in one's dataset. \item For each set of predicted probabilities, simulate a choice for each decision maker. \item For each set of simulated choices $\left( Y^r \right)$, calculate $T \left( X, Y^r \right)$. \end{enumerate} \item Compare $T_0$ with the distribution formed by the samples $T \left( X, Y^r \right)$ for all $r \in \left\lbrace 1, 2, \ldots, R \right\rbrace$. \item The more `surprising' $T_0$ is given one's simulated values of $T$, the more one's model underfits one's data (in terms of the statistic $T$). \end{enumerate} As examples, I specified $T$ as a function with vector outputs for the simulated KDEs and CDFs; for the binned, marginal model plots and binned, reliability plots; and for the market share plots. Conversely, I specified $T$ as having scalar outputs for the simulated histograms and log-predictive plots. Regardless of how one specifies $T$, the model-checking procedure described above requires an analyst to choose one or more statistics $T$ to be checked. In general, researchers should always specify statistics that are of substantive importance in their study. For example, if building a bicycle mode choice model using revealed preference data, one might check the observed number of cyclists in different geographic regions of one's study, even if one had not explicitly built a `spatial choice model.' However, despite the admonition to thoughtfully define the statistics that are being checked, I readily acknowledge that analysts may not always have a set of statistics in mind. Moreover, it is useful to know, in many specific but common ways, whether or not one's model fits one's data. To this end, I propose the following new method for ``automatically'' checking the fit of one's model to one's data.
\begin{enumerate} \item In the case of ``labeled'' alternatives (e.g. shopping brand or travel mode choice), choose one or more alternatives of interest. \label{step:labeled-case} \begin{enumerate} \item For each alternative of interest, construct market share plots and binned reliability plots to perform a basic assessment of one's probability predictions. \item For each alternative of interest, cycle through the related, available explanatory variables. For instance, continuing with bicycle mode choice, if one is looking at cyclists, then examine bicycle travel time instead of automobile travel time. \item If the explanatory variable being examined is discrete, iterate through the distinct values of the variable. \item For each value of the discrete explanatory variable being examined, define $T$ to be the number of individuals with the current value of the current explanatory variable that choose the current alternative. \item If the explanatory variable being examined is continuous, then define $T$ to be the empirical cumulative distribution function or a kernel density estimate of the explanatory variable, conditional on choosing the current alternative. \end{enumerate} \item In the case of ``unlabeled'' alternatives (e.g. travel route choice or vehicle choice), if one has a discrete explanatory variable of great importance (e.g. vehicle type in a vehicle choice model), then treat the values of that discrete explanatory variable as one's ``alternative labels'' and proceed as in Step \ref{step:labeled-case}. \item In the case of unlabeled alternatives without a discrete explanatory variable of great importance, one can treat the ``labels'' as ``chosen or not'', and define the alternatives of interest to be the ``chosen'' alternatives. Then proceed as in Step \ref{step:labeled-case}.
\end{enumerate} This proposed method is \textit{nearly} automatic because it merely requires one to define ``proxy labels'' for unlabeled alternatives and to define the variables of interest that are related to each distinct alternative. Note that since random utility models are the most common type of discrete choice model, each alternative's systematic utility function is likely to specify which variables relate to which alternatives. \section{Relations to Existing Literature} \label{sec:lit-review} Section \ref{sec:methods} introduced seven specific model checks and a general method for using predictive simulations to assess underfitting in one's discrete choice model. In this section, I clarify how my proposed methods deliver greater benefits than pre-existing model checking and model comparison techniques. First, Section \ref{sec:review-graphical-checks} shows how my proposed methods continue the progression of graphical model checks in statistics from residual plots to posterior predictive checks. Here, I highlight how my proposed methods facilitate the routine use of checks for underfitting and how they offer new ways to check one's model specifications. Next, Section \ref{sec:review-model-comparison} contrasts my proposed methods with model comparison techniques---the more common class of methods in discrete choice. In this subsection, I explain how my proposed model checking techniques avoid the limitations imposed by model comparison methods when checking for underfitting. \subsection{Graphical Model Checks} \label{sec:review-graphical-checks} Historically, much research has focused on the development of graphical methods for checking predictive models. Dating back to at least the 1960s \citep{anscombe_examination_1963, cox_general_1968}, the most widely known graphical model checks are residual plots for linear regression.
Conceptually, residual plots are similar to the plots described in Section \ref{sec:methods}: they implicitly compare an observed vector of residuals against the vectors that would be expected if one's model were true. Specifically, if one's data are truly generated by one's model, then under common linear regression assumptions\footnote{For example, the assumptions of the Gauss-Markov theorem.}, one's residuals (i.e. the observed data minus one's predictions) are independently and identically distributed according to a normal distribution with mean zero and constant variance. Accordingly, one's residual plots would be expected to look like a series of random draws from a (scaled) standard normal distribution. The simplicity of the residual plot's expected patterns enabled one, without simulation, to visually determine when the observed residuals were markedly different from residuals that could be generated by one's model. Depending on the type of residual plots being examined, marked differences between expected and observed patterns of residuals could then be used to identify when and how one's linear function was misspecified, to identify when transformations of the outcome variable would be useful, to identify when major assumptions such as independence across observations were violated, etc. \citep{anscombe_examination_1963, cox_general_1968}. Following the tremendous success of graphical model checks for linear regression, and encouraged by the unifying framework of generalized linear models, researchers in the 1980s sought to extend graphical methods like residual plots to discrete choice models (e.g. binary and multinomial logistic regression). Unfortunately, such efforts were frustrated by the discrete dependent variables inherent to discrete choice models \citep[pp. 747-748]{albert_bayesian_1995}.
In particular, discrete dependent variables complicate the expected patterns in one's residual plots, thereby precluding easy, implicit assessment of one's model. To cope with these challenges, even the earliest efforts leveraged many modern features of graphical model checks. For example, \citet{landwehr_graphical_1984} is one of the first papers to introduce graphical checks for underfitting in (binary) discrete choice models. As with my proposed methods in Section \ref{sec:methods}, \citet[p. 64]{landwehr_graphical_1984} use predictive simulations to create reference distributions that are explicitly displayed on one's plots. Moreover, \citeauthor{landwehr_graphical_1984} (p. 66) also use smoothing to deal with the discrete nature of the quantities being plotted. Even the idea of using the observed distributions of $P \left( X \mid Y \right)$ to check one's model was already noted in Donald B. Rubin's discussion (pp. 79 - 80) of \citet{landwehr_graphical_1984}. Though my implementation differs from Rubin's, his idea directly inspired my creation of the simulated histograms, KDEs, and CDFs. In summary, the ``classical'' diagnostic plots for discrete choice models compare and contrast with this paper's proposed techniques as follows. As a point of commonality, the classical plots pioneered many of the techniques upon which my proposed methods rest: using predictive simulations to construct reference distributions, using smoothing to plot discrete quantities, and checking $P \left( X \mid Y \right)$ as well as $P \left( Y \mid X \right)$. However, research on classical diagnostic plots was more limited in scope than this paper's proposed techniques. In particular, classical diagnostic plots were produced in a piecemeal fashion: one stroke of scholarly genius at a time. No unifying connections were made between these few distinct sets of graphical checks, and no unifying methodology was proposed for creating tailored graphical checks for one's unique modeling problems.
For example, \citeauthor{murphy_reliability_1977} (\citeyear{murphy_reliability_1977}, \citeyear{murphy_general_1987}) introduced reliability plots to judge one's probability predictions, and a reference distribution based on predictive simulations was developed for these plots in \citet{brocker_increasing_2007}. In the intervening years between these efforts, \citet{landwehr_graphical_1984} introduced empirical probability plots, local mean deviance plots (a graphical form of subgroup analyses), and partial residual plots. Here, empirical probability plots (indirectly) assess the same relationship as reliability plots, but no connection between the two techniques was made\footnote{For completeness, note that reliability plots can be considered graphical representations of the numeric Hosmer-Lemeshow goodness-of-fit statistic \citep{lemeshow_review_1982}. Reliability plots are also similar in spirit to the Horowitz-Louviere test \citep{horowitz_testing_1985, horowitz_testing_1993} in the sense that they directly compare observed choices versus predicted probabilities.}. Finally, model checking graphs such as the ``Separation Plot'' \citep{greenhill_separation_2011} and the ``Heat Map Plot'' \citep{esarey_assessing_2012} have recently been created by researchers in quantitative political science. Of particular interest is the heat map plot, which is based on familiar techniques such as smoothing and creating reference distributions based on predictive simulations. Unfortunately, although this plot is the exact analogue of the reliability plot (simply using continuous smoothing instead of binning), the connection between the two types of plots has been ignored. In contrast to these classical diagnostic plots, a second generation of graphical model checks has emerged since the mid-1990s. Beginning in earnest with the \textit{posterior predictive checks} of \citet{gelman_posterior_1996}, this framework unifies the various model checking plots.
For example, building on the approaches described in \citet{gelman_exploratory_2004} and \citet{buja_statistical_2009}, all the graphs in Section \ref{sec:case1-part1} correspond to different choices of a test statistic $T$ and different ways of plotting the reference distribution that is created by the predictive simulations. Aside from these two choices, graphical model checks basically follow the seven-step procedure detailed in Section \ref{sec:general-methodology}. The clarity and abstractness of the procedure for creating graphical model checks has numerous benefits. For instance, the abstractness of the procedure has allowed bayesians and frequentists to make use of a single methodology, merely substituting their own preferred methods for sampling from their estimated models. Moreover, by appropriately defining $T$, the common methodology makes it easy to define custom checks that are tailored to one's modeling context. Clearly, the model checking techniques proposed in this paper draw heavily upon the posterior predictive checking framework. For instance, \citet[p. 896]{evans_checking_2006} note that \begin{quote} ``[...] the prior induces a probability distribution on the set of possible likelihood functions via the prior predictive probability measure $M$. If the observed likelihood [...] is a surprising value from this distribution, then this would seem to indicate that a prior-data conflict exists.'' \end{quote} The log-predictive plots merely re-purpose this observation by substituting the posterior distribution for the prior distribution. Similarly, the market share plots are simply graphical depictions of the market segment prediction tests of \citet[Section 7.6]{ben-akiva_discrete_1985}, alongside a reference distribution created by predictive simulations.
Continuing through the remaining plots, the binned reliability plots are the same as their classical counterparts, but enhanced with a graphical depiction of the reference distribution created by the predictive simulations from one's model. Marginal model plots are explicit descendants of the posterior predictive checking framework \citep[pp. 266-267]{pardoe_2002_graphical}, and this paper has only modified these plots to use bin-based smoothing instead of continuous smoothing. Lastly, I noted above that as far back as 1984, Donald Rubin [in \citet[pp. 79-80]{landwehr_graphical_1984}] suggested that \begin{quote} ``[...] for diagnostic checking of estimates of $\textrm{Pr} \left( y = 1 \mid x \right)$, my preference is reversed. I believe that we should examine the consequences of a proposed estimate of $\textrm{Pr} \left( y = 1 \mid x \right)$ using the discriminant analysis formulation [i.e. $\textrm{Pr} \left( y = 1 \mid x \right) \propto \textrm{Pr} \left( x \mid y = 1 \right) \textrm{Pr} \left( y = 1 \right)$].'' \end{quote} My proposed plots of simulated histograms, KDEs, and CDFs are merely new applications of Rubin's insight to examine the implied class-conditional distributions $P \left( X \mid Y \right)$, complemented by reference distributions that are simulated from one's estimated model. Given that the plots of Section \ref{sec:case1-part1} are firmly grounded in the posterior predictive framework, some readers may wonder if there is anything truly new in this paper's proposed methodology. There is. Thus far, practitioners who use posterior predictive checks have restricted themselves to developing custom graphical checks for each new choice modeling endeavor. See \citet{gelman_diagnostic_2000} and \citet{carlin_case_2001} for examples from statisticians, and see \citet{jedidi_measuring_2003}, \citet{musalem_coupon_2008}, and \citet{gilbride_posterior_2010} for examples from the discrete choice literature.
Except for \citet{jedidi_measuring_2003}, the checks in these papers are mostly specific to their particular studies/models and would not be immediately applicable to analysts in general. In Section \ref{sec:general-methodology}, however, I proposed a new approach for automatic model checking of discrete choice models. Compared to the existing literature, my model checking recommendations are more broadly applicable and more comprehensive. First, my proposed methods are applicable whenever one has disaggregate choice data available. The applicability of my methods does not depend on the specific choice model being used. Secondly, my proposed checks are more comprehensive than checks provided in most other papers because they assess model fit on multiple dimensions. They allow one to assess the fit of one's model overall (using log-predictive plots), on particular alternatives (using market share plots and binned reliability plots), and on particular variables (using binned marginal model plots, simulated histograms, simulated KDEs, and simulated CDFs). Of course, I am not claiming that my proposed model checks account for all types of checks that one may wish to do. For references to other model checks that may be of use, see the last paragraph of Appendix A. Moreover, I do not make the claim that my proposed methods will always uncover an underfit model, nor that my proposed methods are the best automatic model checking procedures that one can follow. Finally, I do not think there is any problem with using model checking techniques that are tailored to one's problem and chosen modeling technique. On the contrary, I believe that such targeted model checks are complementary to the checks described in this paper. I simply note the lack of commonly-used or agreed-upon checks for discrete choice models, and I propose my methodology as one useful approach to facilitate the routine use of graphical model checks for underfitting. 
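The general procedure of Section \ref{sec:general-methodology} can be compressed into a few lines of code. The sketch below is a minimal illustration rather than the paper's actual implementation: it assumes a fitted binary logit, approximates the sampling distribution of the parameters as normal, and uses placeholder names such as \texttt{predictive\_check} and \texttt{stat}.

```python
import numpy as np

def predictive_check(X, y_obs, beta_mle, cov_mle, stat, n_sims=200, seed=0):
    """Compare a test statistic T on observed data against its reference
    distribution under datasets simulated from a fitted binary logit."""
    rng = np.random.default_rng(seed)
    sim_stats = np.empty(n_sims)
    for r in range(n_sims):
        # 1. Draw parameters from the (asymptotic) sampling distribution.
        beta_r = rng.multivariate_normal(beta_mle, cov_mle)
        # 2. Compute choice probabilities under the drawn parameters.
        probs = 1.0 / (1.0 + np.exp(-X @ beta_r))
        # 3. Simulate an outcome vector from those probabilities.
        y_sim = rng.binomial(1, probs)
        # 4. Evaluate the test statistic on the simulated dataset.
        sim_stats[r] = stat(y_sim)
    # Predictive p-value: share of simulated statistics below the observed.
    obs_stat = stat(y_obs)
    return obs_stat, sim_stats, np.mean(sim_stats < obs_stat)

# Toy usage with T = overall share of y = 1 (a crude "market share").
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ np.array([0.5, -1.0, 0.25]))))
obs, sims, p = predictive_check(X, y, np.array([0.5, -1.0, 0.25]),
                                0.01 * np.eye(3), np.mean)
```

Each of the seven plots then corresponds to a particular choice of \texttt{stat} and a particular way of displaying the simulated statistics against the observed one.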
\subsection{Model Comparison versus Model Checking} \label{sec:review-model-comparison} Thus far, most of the articles cited in this literature review have been examples of graphical model checking from the statistics literature. This is by necessity as opposed to choice. Overall, there have been few examples of graphical model checking within the discrete choice literature. The papers of \citet{dunn_graphical_1987} and \citet{nagel_diagnostics_1992} are notable exceptions. Instead, most model diagnostics within the discrete choice literature are instances of model comparison as opposed to model checking. For concreteness, take ``A Diagnostic Test for the Multinomial Logit Model'' by \citet{tse_diagnostic_1987}. This diagnostic test does not directly compare the observed data with the estimated model to check for lack-of-fit. Instead, Tse proposes ``a Lagrange multiplier test for the multinomial logit model against the dogit model as the alternative hypothesis'' (\citeyear{tse_diagnostic_1987}). In general, discrete choice model checks for the ``independence from irrelevant alternatives'' (IIA) property are all model comparisons. Some, as in \citet{tse_diagnostic_1987}, test the MNL model against discrete choice models that do not have the IIA property. Other tests, such as the Hausman--McFadden test \citep{hausman_specification_1984}, compare a given MNL model against the same MNL model estimated on a subset of alternatives. Likewise, model comparisons are implicitly or explicitly behind Wald, Lagrange Multiplier, and Likelihood Ratio tests for omitted variables or parameter heterogeneity (both systematic and unobserved heterogeneity as embodied in mixed logit models). All of these tests can be viewed as tests of a restricted model against an unrestricted model. The major problem with such comparisons is that both models being compared may be misspecified.
Accordingly, the fact that one's model passes a model comparison test may not mean anything more than the fact that one's entertained model was better than an even worse model. As noted by \citet{zheng_testing_2008}, ``[...] the performance of these parametric tests rely on correct specification of the alternative model. If the alternative is misspecified, those tests may have low or no power.'' Model checks as described in this paper avoid the issue of mis-specifying one's alternative model by directly testing one's entertained model against the observed data. \section{Case Study (Part 2)} \label{sec:case-study-part2} With a greater understanding of how this paper's proposed methodology is positioned with respect to the relevant literature, this section returns to the case study begun earlier. In Section \ref{sec:case1-part1}, I used this paper's proposed model checks to reveal serious underfitting in \citeauthor{brownstone_forecasting_1998}'s MNL model. In this section, I will show how the proposed checks can be used to reduce the observed lack-of-fit. Overall, the model checking procedure of Section \ref{sec:case1-part1} provided two main insights. First, underfitting was revealed by all of the plots involving the relative price variable (i.e. price over log income). This can be seen, for instance, in Figures \ref{fig:orig-mnl-suv-marginal}, \ref{fig:orig-mnl-electric-kde}, and \ref{fig:orig-mnl-suv-cdf}. To partially address this issue, the relative price variable was re-specified as piecewise linear: one term accounted for relative prices less than 3, and the other term accounted for relative prices greater than 3. Three was chosen as the knot location for the piecewise linear specification based on plots such as Figures \ref{fig:orig-mnl-suv-marginal} and \ref{fig:orig-mnl-suv-cdf}. These graphs showed marked underfitting at the relative price value of 3.
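Concretely, the piecewise linear re-specification replaces the single relative price column with two columns that split at the knot. A minimal sketch of this transform (variable names are hypothetical; the replication code may construct the design matrix differently):

```python
import numpy as np

def piecewise_linear(x, knot=3.0):
    """Split a variable into two piecewise linear terms around a knot:
    the first term is active below the knot, the second above it."""
    below = np.minimum(x, knot)        # slope for values <= knot
    above = np.maximum(x - knot, 0.0)  # extra slope for values > knot
    return below, above

rel_price = np.array([1.0, 2.5, 3.0, 4.2, 6.0])
below, above = piecewise_linear(rel_price, knot=3.0)
# below -> [1.0, 2.5, 3.0, 3.0, 3.0]; above -> [0.0, 0.0, 0.0, 1.2, 3.0]
```

Since the two terms sum back to the original variable, the original linear specification is nested within the piecewise linear one (it corresponds to constraining the two coefficients to be equal).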
For a more formal method of model improvement, one could have instead re-specified the relative price variable using techniques such as generalized additive models \citep{hastie_generalized_1986, abe_generalized_1999}. Secondly, the ``automatic'' model checking procedure of Section \ref{sec:general-methodology} revealed that for many variables, the pattern of underfitting often differed by both vehicle fuel type and body type. Such differing patterns indicate that there is heterogeneity in how the original MNL model differs from the true data generating process, and as a result, there is likely heterogeneity in the true relationship between the various explanatory variables and the probability of a given alternative being chosen. To accommodate such heterogeneity, I re-specified the MNL by including an interaction between the body type variable and the piecewise linear price variable, as well as an interaction between the body type and the range, acceleration, top speed, pollution, vehicle size, and operating cost variables. Similarly, I also added an interaction term between the fuel type variable and the piecewise linear price, range, top speed, pollution, vehicle size, and operating cost variables. Acceleration was not interacted with fuel type since the simulated histograms of acceleration by vehicle fuel type did not suggest severe underfitting. In summary, I changed the specification of \citeauthor{brownstone_forecasting_1998}'s MNL model to include a piecewise linear specification of the relative price variable and to include interactions between many of the explanatory variables and the vehicle body type and fuel type. These changes were not exhaustive of all improvements suggested by my proposed model checking procedure. For instance, simulated histograms of the range, top speed, pollution, vehicle size and operating cost variables all suggest that these variables should not be entered using a single linear term in the model.
These variables have few unique values, and given that the patterns of underfitting vary across these values, the variables in question should be represented as categorical (e.g. using dummy variables) rather than continuous. Despite recognizing these issues, I restricted myself to the changes described in this paragraph's opening sentence for simplicity\footnote{Using dummy variables for the vehicle attributes mentioned above would have necessitated the use of regularization to ensure monotonicity in the (ordered) attribute values.}. In particular, my aim in this section is to show that the proposed model checking procedures can suggest concrete model improvements, not to demonstrate how one can make \textit{all possible} improvements and not to demonstrate the best possible way to make model improvements. As a result, the parameter estimates are not of major importance in this study. For space reasons, the typical table of parameter estimates is therefore deferred to Appendix B. \begin{table} \centering \input{mnl_model_comparison_summary.tex} \caption{Model Summary Statistics} \label{table:model-summaries} \end{table} Instead, Table \ref{table:model-summaries} provides general summary statistics about the fits of the original and expanded MNL models. This table's main message is that there are multiple points of corroborating, classical evidence that the additional parameters in the expanded MNL are useful and not simply overfitting. With all the changes described two paragraphs ago, the number of estimated parameters increased from 21 coefficients in \citeauthor{brownstone_forecasting_1998}'s MNL model to 89 in the expanded model. Accordingly, the in-sample log-likelihood of the expanded model increased to -7,311.634---higher than that of the MNL, probit, or mixed logit models estimated in \citet{brownstone_forecasting_1998}.
Even after penalizing the expanded model for its extra parameters, McFadden's $\bar{\rho}^2$ was higher in the expanded MNL (0.113) versus the original MNL model (0.111). Likewise, the Akaike Information Criterion (AIC) was lower in the expanded MNL (14,787) versus the original MNL (14,825). Finally, the out-of-sample log-likelihood\footnote{These values were computed using the average out-of-sample log-likelihood from ten-fold cross-validation. } was greater for the expanded MNL model (-739.7) as compared to the original model (-741.2). Beyond classical measures of fit, the expanded MNL model was subjected to the same graphical model checking procedures as the original model. As expected, and as shown in Figures \ref{fig:new-mnl-methanol-reliability} - \ref{fig:new-mnl-suv-cdf}, many of the observed instances of underfitting from the original MNL model have been resolved. In particular, Figures \ref{fig:new-mnl-suv-marginal}, \ref{fig:new-mnl-electric-kde}, and \ref{fig:new-mnl-suv-cdf} show that the underfitting of the relative price variable has been greatly reduced. For instance, the observed KDE no longer lies outside the simulated KDEs at any point, and the observed CDF now lies in the middle of the simulated CDFs. Likewise, the marginal model plots have improved: in the expanded model, only two points lie outside the blue band of asymptotically expected probabilities, whereas seven points lay outside this band in \citeauthor{brownstone_forecasting_1998}'s MNL model. In terms of the operating cost, Figure \ref{fig:new-mnl-regcar-histogram} shows that the observed data is less extreme in the expanded MNL model as compared to the original: 86\% of predicted samples in the expanded model are less than the observed value as compared to 96\% with the original MNL model.
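The penalized fit measures cited above are simple functions of a model's log-likelihood and number of parameters. The sketch below uses the expanded model's log-likelihood and parameter count from the text, together with a made-up null log-likelihood (the case study's actual null value is not restated here), so the computed numbers are illustrative only:

```python
def aic(log_likelihood, n_params):
    """Akaike Information Criterion: lower values indicate better fit."""
    return 2 * n_params - 2 * log_likelihood

def mcfadden_adj_rho_sq(log_likelihood, null_log_likelihood, n_params):
    """McFadden's adjusted rho-squared, which penalizes extra parameters."""
    return 1 - (log_likelihood - n_params) / null_log_likelihood

# Illustrative inputs: the expanded model's log-likelihood and parameter
# count from the text, plus a made-up null (reference) log-likelihood.
ll_model, ll_null, n_params = -7311.634, -8340.0, 89

expanded_aic = aic(ll_model, n_params)
expanded_rho = mcfadden_adj_rho_sq(ll_model, ll_null, n_params)
```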
Finally, by comparing Figure \ref{fig:new-mnl-methanol-reliability} to Figure \ref{fig:orig-mnl-methanol-reliability}, one can see that the expanded MNL model has better calibrated (but still unreliable) probability predictions. For methanol vehicles, the area between the dark blue curve and the dashed reference line is lower in Figure \ref{fig:new-mnl-methanol-reliability} with the expanded MNL model as compared to Figure \ref{fig:orig-mnl-methanol-reliability} with \citeauthor{brownstone_forecasting_1998}'s MNL model. This finding is also confirmed by the reliability plots (not shown due to space considerations) for the other fuel types. \begin{figure} \centering \includegraphics[width=0.6\textwidth]{reliability-plot-vehicle-choice-interaction-mnl-methanol-point} \caption{Binned reliability plot for methanol vehicles using the `expanded' MNL model} \label{fig:new-mnl-methanol-reliability} \end{figure} \begin{figure} \centering \includegraphics[width=0.6\textwidth]{marginal-model-plot-vehicle-choice-interaction-mnl-suv} \caption{Binned marginal model plot for sports utility vehicles using the `expanded' MNL model} \label{fig:new-mnl-suv-marginal} \end{figure} \begin{figure} \centering \includegraphics[width=0.6\textwidth]{histogram-vehicle-choice-interaction-mnl-regcar-operating-costs} \caption{Simulated histograms for regular cars using the `expanded' MNL model} \label{fig:new-mnl-regcar-histogram} \end{figure} \begin{figure} \centering \includegraphics[width=0.6\textwidth]{kde-vehicle-choice-interaction-mnl-electric-price} \caption{Simulated kernel density estimates for electric cars using the `expanded' MNL model} \label{fig:new-mnl-electric-kde} \end{figure} \begin{figure} \centering \includegraphics[width=0.6\textwidth]{cdf-vehicle-choice-interaction-mnl-suv-price} \caption{Simulated CDFs for sport utility vehicles using the `expanded' MNL model} \label{fig:new-mnl-suv-cdf} \end{figure} Finally, the changes inspired by this paper's techniques do more than 
merely improve the fit of the original MNL model. The changes also lead to substantively different policy implications when compared to \citeauthor{brownstone_forecasting_1998}'s chosen mixed logit model (``Mixed logit B''). In particular, one main policy that was analyzed in \citet{brownstone_forecasting_1998} was a 20\% increase in the price of large gasoline cars. \citeauthor{brownstone_forecasting_1998}'s chosen model predicted that the market share of large gas cars would have a relative decrease of approximately 12\%\footnote{Forecast results for \citeauthor{brownstone_forecasting_1998}'s Mixed logit B were not numerically reported in their paper. The forecast results in this text were manually produced using custom written python code and \citeauthor{brownstone_forecasting_1998}'s estimated parameters.}. Similarly, the expanded MNL model predicted an 8\% relative decrease in the market share of large gas cars. \begin{table} \centering \input{vehicle-choice-forecast-results.tex} \caption{Top-Four Vehicle Categories Forecasted to Increase in Market Share} \label{table:vehicle-forecasts} \end{table} Despite similar predictions for the market share of large gas cars, the two models' forecasts differ most regarding the vehicle classes that are predicted to grow appreciably in market share. These differences are shown in Table \ref{table:vehicle-forecasts}. On one hand, \citeauthor{brownstone_forecasting_1998}'s chosen model predicts an environmentally ``hellish'' scenario where people buy other, even larger gasoline powered vehicles. On the other hand, my expanded MNL model predicts an environmentally ``heavenly'' scenario where individuals buy smaller, electric and compressed natural gas vehicles. Specifically, \citeauthor{brownstone_forecasting_1998}'s chosen model predicts that large gasoline station wagons, trucks, and vans would see relative increases in their market share of approximately 6\%, 8\%, and 6\% respectively.
Moreover, \citeauthor{brownstone_forecasting_1998}'s chosen model does not predict that any alternative fuel vehicle will have a relative market share increase greater than approximately 4\%. In contrast, my expanded MNL model predicts that subcompact electric vans and compact, compressed natural gas station wagons will have the largest relative market share increases, at 3\% each. After this, the expanded MNL model predicts relative market share increases of 2\% for both large gasoline vehicles as well as many alternative fuel vehicles. These differing forecasts between Mixed logit B and the expanded MNL model imply conflicting qualitative lessons for policy makers that may have been considering policies such as environmentally motivated taxes on large gasoline cars. \section{Conclusion} \label{sec:conclusion} In this paper, I introduced and demonstrated a procedure for checking one's discrete choice models using predictive simulations. Conceptually, this method compares features of one's observed data to the distribution of those features in the datasets formed using simulated outcomes. To operationalize these ideas, I introduced seven graphical model checks for underfitting. These seven plots are designed to be of use in routine model checking situations, ideally throughout and at the conclusion of any discrete choice analysis. In addition to these specific plots, I also proposed a general and nearly automatic procedure for designing custom checks for underfitting. To demonstrate the proposed procedures, I used a vehicle choice case study from \citet{brownstone_forecasting_1998}. In this case study, the seven plots and the proposed model checking procedure uncovered serious underfitting in \citeauthor{brownstone_forecasting_1998}'s MNL model---underfitting that was not addressed by their final mixed logit model.
I then demonstrated how the insights derived from my proposed plots lead to an expanded MNL model with greater levels of fit and substantively different forecasts from \citeauthor{brownstone_forecasting_1998}'s final mixed logit model. In particular, this paper's proposed model checks inspired a more detailed representation of the systematic heterogeneity in individual sensitivities to variables such as price, operating cost, etc. Once this systematic heterogeneity was better captured, the resulting model fit better than either \citeauthor{brownstone_forecasting_1998}'s MNL or mixed logit models. Moreover, the new model had aggregate forecasts that were much more hopeful about the ability of pricing policies to push individuals away from large gas cars to smaller, alternative fuel vehicles. Beyond this specific example, the goal of this paper is to introduce a new set of model checking tools to the discrete choice community. The case study of \citet{brownstone_forecasting_1998} was used to show how models estimated by the best of us may still be underfitting our data. In my own work, I have used these techniques to spot errors even more egregious than those detailed in this paper. These errors have been found everywhere: from simple binary logit models to complex machine learning techniques. No model is safe. However, with the model checking methods introduced in this paper, we can better see how our models are misrepresenting reality, thereby allowing us to raise the quality of models used within our papers, our companies, and our governments. \section*{Acknowledgements} Thanks to Albert Yuen, Joan Walker, Andrew Campbell, and two reviewers for providing helpful comments on an earlier draft of this manuscript. Any remaining errors are my own. 
Thanks also to Andrew Campbell for inspiring this research by asking the fundamental question: ``so, how do you check your models after you've estimated them?'' \section*{Replication Materials} All replication data and code for this paper can be found online at \url{https://github.com/timothyb0912/check-yourself}. Additionally, stand-alone software for producing the posterior predictive simulations and plots mentioned in this paper can be found in the ``pylogit.viz'' module of the PyLogit open-source package for the python programming language. This software will work regardless of the software that one initially used for model estimation. One simply needs to be able to store one's data and simulated probability predictions in a software-agnostic format such as a csv file. The package is available through the Python Packaging Index (PyPI) as well as through the Anaconda distribution. See \url{https://github.com/timothyb0912/pylogit} for detailed installation instructions. \newpage \begin{appendices} \section{Frequently Asked Questions} \label{sec:freq-questions} This paper introduced the concept of assessing underfitting in discrete choice models by visually comparing predictive simulations from one's model to one's observed data. This introduction has included concrete examples of such graphical model checks in Section \ref{sec:case1-part1}, a general algorithm for use with labeled and unlabeled alternatives in Section \ref{sec:general-methodology}, relations to the broader literature on model checking in statistics and discrete choice in Section \ref{sec:lit-review}, and examples of how these model checks could be used to guide one's model improvement efforts in Section \ref{sec:case-study-part2}. However, despite all the topics discussed in this introduction, some related issues have still been excluded in order to minimize the article's length. 
This appendix on \textit{Frequently Asked Questions} attempts to briefly address the most common questions that reviewers, friends, and colleagues have had after reading earlier drafts of this paper. \begin{QandA} \item How do predictive simulations help us make decisions about how to improve our models? \begin{answered} Essentially, predictive simulations are like thermometers. Thermometers are useful for assessing a patient's body temperature, and through repeated use, one can assess how a patient's temperature is changing alongside a caretaker's actions. Likewise, predictive simulations are most useful for assessing the discord between one's model and reality, and through repeated use, one can assess how this discord has been altered by changes to one's model. Taking the analogy further, note that thermometers do not tell doctors the cause of a fever or how a fever can be reduced. Similarly, predictive simulations do not tell researchers the cause of the problems they uncover, nor do they suggest particular remedies to those problems. Once aware of the extent to which one's current model misrepresents reality, it is the analyst's responsibility to diagnose their model and brainstorm ways to increase its realism. Afterwards, predictive simulations will be there to show the analyst how well their changes worked. \end{answered} \begin{figure} \centering \includegraphics[width=0.6\textwidth]{example-histogram} \caption{Example histogram where the predictive p-value is misleading} \label{fig:example-histogram} \end{figure} \item Why should we use graphical model checks instead of simply looking at point measures or purely numeric summaries? \begin{answered} In general, solely looking at scalar values (such as p-values) leads to a loss of information. Far from being harmless, this information loss can potentially lead to incorrect assessments of model fit. Imagine one is looking at a simulated histogram such as the one shown in Figure \ref{fig:example-histogram}. 
Here, one's predictive distribution is bimodal and is roughly symmetric about the observed value. As a result, one's p-value may be close to 0.5, even though few simulated datasets in this example have test statistics that are similar to the observed test statistic (the dashed vertical line in the plot). To facilitate correct assessments of model fit and avoid being unnecessarily misled, I recommend using graphical model checks in addition to purely numeric summaries. Beyond the insufficiency of predictive p-values for model checking, creating other scalar surrogates for the graphical check is non-trivial. For instance, even if one's predictive distribution is unimodal, it is not valid to think that the value of a test statistic, evaluated using the probabilities generated at a point estimate of one's parameters, will serve as a useful surrogate for the mean of one's predictive distribution of that test statistic. This is because $ E \left[ f \left( z \right) \right] \neq f \left( E \left[ z \right] \right)$ with respect to a random variable $z$ and arbitrary non-linear function $f \left( \cdot \right)$. In this paper's model checking context, this fact implies that even though $\beta_{\textrm{MLE}}$ is the mean of the asymptotic sampling distribution, and therefore the mean of the simulated parameters $\beta_r$, $P \left( Y \mid X, \beta_{\textrm{MLE}} \right) \neq E \left[ P \left( Y \mid X, \beta_r \right) \right]$. The probabilities at our point estimate of the parameters are not the mean of the simulated probabilities. Given this inequality, even simple graphical checks such as simulated histograms cannot be replaced by scalar comparisons based on point estimates. More complicated graphical checks such as simulated KDEs use an additional layer of non-linear functions in their calculation of the test statistic. 
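This inequality is easy to verify numerically. The sketch below uses made-up numbers for a one-parameter binary logit and shows that the choice probability evaluated at the point estimate differs from the mean of the probabilities evaluated at the simulated parameters:

```python
import numpy as np

rng = np.random.default_rng(42)

def logit_prob(beta, x=1.0):
    """Binary logit choice probability for a single covariate value."""
    return 1.0 / (1.0 + np.exp(-beta * x))

# Made-up point estimate and standard error, purely for illustration.
beta_mle, std_err = 1.0, 2.0
beta_r = rng.normal(beta_mle, std_err, size=100_000)  # simulated parameters

prob_at_mle = logit_prob(beta_mle)         # P(Y | X, beta_MLE)
mean_sim_prob = logit_prob(beta_r).mean()  # E[ P(Y | X, beta_r) ]
# The two quantities differ because the logit transformation is non-linear,
# even though beta_mle is the mean of the simulated beta_r values.
```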
These additional non-linearities rule out the use of a point estimate of the model's parameters for purposes of trivially computing the mean of the predictive distributions of one's test statistics. \end{answered} \item What are some ways that predictive simulations should NOT be used? \begin{answered} Predictive simulations should not be used as hypothesis tests that are used to accept or reject models. A priori, we know all models are wrong. Predictive simulations should therefore be used to understand the ways and extent to which one's model differs from reality. If desired, this understanding can then be used to increase the realism of one's model or to inform the way that the model is used. In some cases, this may mean completely rejecting the use of a particular model. However, in other cases, one may decide to only make inferences from the model when the model performs well on checks related to the inference in question. The decision is up to the ``stakeholders'' of the model (e.g. peer reviewers, company or organization managers, etc.). For example, if a travel mode choice model performs well on underfitting checks related to transit travel cost but not travel time, then perhaps one may use the model to forecast the impact of fare changes but not for forecasting the impact of adding an express service that would reduce individual travel times. \end{answered} \item How do we interpret the proposed graphical model checks? \begin{answered} In general, the proposed graphical model checks are ways to visually assess the similarity between data generated by one's model and one's observed data. As mentioned in the question ``How do predictive simulations help us make decisions about how to improve our models?,'' the proposed checks are to be used like a thermometer. One checks one's temperature, and any suspiciously high temperature measurements must then be investigated for their root cause and effective remedies.
Similarly, any discrepancies uncovered by the proposed model checks have to be investigated for their root cause and for appropriate model improvements. Such investigations may not be easy or straightforward. The visually diagnosed problems may be related to one's overall model specification (e.g. using an MNL model instead of a nested logit model), to omitted variables, or even to misspecification of the relationship between one or more correlated variables and the outcome variable. As usual, an analyst must still make use of their domain knowledge, intuition, and thoroughness to resolve any discovered problems. Similar to the question of interpreting graphical model checks is the question of how one interprets their associated predictive p-values. Mathematically, the predictive p-value is the proportion of simulated datasets for which a given scalar statistic was lower than the observed value of that scalar statistic. These predictive p-values can be helpful (though insufficient, in my opinion) for evaluating the extent to which one's observed data ``looks like'' one's simulated data. In particular, they may be especially helpful for assessing small model changes that are hard to distinguish visually in a graphical model check. \end{answered} \item When should one be concerned by the results of a particular graphical model check? \begin{answered} I do not think that there is a universal answer to this question. It is equivalent to asking ``how good is good enough?'' One's answer to these questions will almost assuredly be driven by problem specific considerations such as the purpose for which the model is being built and the specific statistic being checked. However, there are some general guidelines that one can use. Overall, if one's model is close to the data generating process, then one's observed data should look like a random draw of the data simulated from one's model. 
Accordingly, one should expect the observed statistics to lie near the center of the distribution of simulated statistics. For scalar statistics, I would be concerned if the distribution of simulated statistics was not clustered around a line for the observed statistic. For vector statistics, I would be concerned if the observed vector, at all points, was not near the center of the simulated vectors. Additionally, I would be concerned if the observed vector did not follow the same visual trends as the simulated vectors. (After all, the observed vector should ``look like'' a random draw of the simulated vectors.) How far away from this ideal a model is allowed to exist is up to the ``stakeholders'' of the model to decide and to defend. \end{answered} \item Besides the examples shown in this paper, what are other specific ways that one can use predictive simulations to check our models? \begin{answered} What follows is by no means an exhaustive list, but predictive simulations can also be used in the following four ways to check one's models. First, as demonstrated in \citet{pardoe_2002_graphical}, one can use predictive simulations to determine whether variables should be added or removed from one's model. Secondly, one can use the proposed graphical checks to visualize and understand differences between competing model extensions. For instance, one can check the data generated from using a common specification of one's systematic utilities but different distributions of the random utility (i.e. different link functions), different mixing distributions in a mixed logit model, different class membership models in a latent class model, etc. Third, a closely related use of predictive simulations is to check one's mixed logit models by comparing one's inferences for the distribution of individual tastes with one's assumed mixing distribution of tastes (see \citet{allenby_marketing_1998} and \citet{gilbride_posterior_2010} for examples).
Finally, as demonstrated and explained in \citet{kucukelbir_evaluating_2017}, one could also use predictive simulations to discover groups of observations that one's model fits particularly poorly. \end{answered} \end{QandA} \newpage \section{Expanded MNL Estimation Results} \label{sec:appendix-b-expanded-results} \begin{table}[!htbp] \centering \input{vehicle_choice_expanded_mnl_results.tex} \caption{Expanded MNL model} \label{table:vehicle-choice-expanded-mnl-results} \end{table} \begin{table} \centering \input{vehicle_choice_expanded_mnl_results_part2.tex} \caption{Expanded MNL model (cont'd)} \label{table:vehicle-choice-expanded-mnl-results-2} \end{table} \newpage \section*{\refname}
\section{Introduction} The properties of the scalar particle with mass around 125 GeV discovered at the Large Hadron Collider (LHC) in 2012 \cite{Chatrchyan:2012xdj,Aad:2012tfa} have been extensively studied since its observation. These studies show strong evidence that the couplings of this particle to fermions and vector bosons are compatible within $10-20 \%$ with those of the Higgs boson as predicted in the Standard Model (SM) of elementary particles. The complete identification of the scalar particle discovered in 2012 with the Higgs boson of the SM also requires the study of the Higgs self-interactions that come from the scalar potential part in the SM Lagrangian. In the SM, the Higgs potential in the unitary gauge reads \begin{equation} V(\phu) = \frac{m_{ \sss H}^2}{2} \phu^2 + \lambda_{3} v \phu^3 + \frac{\lambda_4}4 \phu^4 \label{eq:potun} \end{equation} where the Higgs mass ($m_{ \sss H}$) and the trilinear $(\lambda_{3})$ and quartic $(\lambda_{4})$ interactions are linked by the relations $\lambda_{4}^{\rm SM}=\tril^{\rm SM} = \lambda =m_{ \sss H}^2/(2\,v^2)$, where $v=(\sqrt2 \,G_\mu)^{-1/2}$ is the vacuum expectation value, and $\lambda$ is the coefficient of the $(\Phid \Phi)^2$ interaction, $\Phi$ being the Higgs doublet field. The experimental verification of these relations relies on the measurements of double-Higgs and triple-Higgs productions. However, since the cross sections for these processes are quite small, constraining the $\lambda_{3}$ and $\lambda_4$ couplings to within a few times their predicted SM value is already extremely challenging. In the case of double Higgs production, at present only exclusion limits are available. The most stringent result, coming from the ATLAS combination of the $b\bar{b}b\bar{b},\: b\bar{b}\tau \tau$ and $b\bar{b} \gamma \gamma$ channels, allows one to set the bound $-5\, \tril^{\rm SM} < \lambda_{3} < 12\, \tril^{\rm SM}$ at 95\% C.L. \cite{Aad:2019uzh}.
It is not yet clear if double-Higgs production will be observed at the end of the high luminosity (HL) period of the LHC with a collected luminosity of 3000 fb$^{-1}$, or whether just $\mathcal{O}(1)$ bounds on $\lambda_{3}$ are going to be set. Concerning $\lambda_{4}$, given the smallness of the triple-Higgs production cross section (around $0.1$ fb at $\sqrt{s}= 14$ TeV), $\lambda_{4}$ will be only very loosely constrained at the HL-LHC. In order to constrain the trilinear Higgs self-coupling, a complementary strategy based on precise measurements was proposed. In this approach the effects induced at the loop level on various processes by a modified $\lambda_{3}$ coupling are studied. This approach builds on the assumption that New Physics (NP) couples to the SM via the Higgs potential in such a way that the lowest-order Higgs couplings to the other fields of the SM (and in particular to the top quark and vector bosons) are still given by the SM prescriptions or, equivalently, that modifications to these couplings are so small that they do not swamp the loop effects one is considering. This strategy was first applied to $ZH$ production at an $e^+ e^-$ collider in Ref.~\cite{McCullough:2013rea}, later to Higgs production and decay modes at the LHC \cite{Degrassi:2016wml,Gorbahn:2016uoy,Bizon:2016wgr, Maltoni:2017ims,Maltoni:2018ttu,Borowka:2018pxx,Gorbahn:2019lwq} and also to the study of electroweak precision observables \cite{Degrassi:2017ucl,Kribs:2017znd}. Using this strategy, a recent analysis of the ATLAS Collaboration set a more stringent bound, $-2.3\, \tril^{\rm SM} < \lambda_{3} < 10.3\, \tril^{\rm SM}$ at 95\% C.L., by combining the single-Higgs-boson analyses targeting the $\gamma \gamma,\, Z Z^\star,\, W W^\star,\, \tau^{+} \tau^- $ and $b \overline{b}$ decay channels and the double-Higgs-boson analyses in the $b\bar{b}b\bar{b},\: b\bar{b}\tau \tau$ and $b\bar{b} \gamma \gamma$ decay channels, using data collected at $\sqrt{s} = 13$ TeV \cite{ATLAS:2019pbo}. 
The aim of this work is twofold. On one side we continue the program of identifying in single Higgs processes the $\lambda_{3}$-dependent loop contributions by examining the $\phu \to \gamma Z$ decay. On the other side we use this decay to show in detail, for contributions that arise at the two-loop level, the equivalence, at the loop order we are working, of two approaches. In the first one, following Ref.~\cite{Degrassi:2016wml}, the $\lambda_{3}$-dependent loop contributions are studied via the introduction of a rescaling factor multiplying the trilinear SM coupling, $\lambda_{3} = \kappa_{\lambda} \tril^{\rm SM}$, with $\tril^{\rm SM}\equiv G_\mu \,m_{ \sss H}^2/\sqrt{2} $, working in the unitary gauge (UG): this is the so-called $\kappa$-framework or anomalous coupling approach. The second one is based on the modification of the SM scalar potential via the addition of higher-dimensional operators that affect only the Higgs self-coupling, i.e.\ an (in)finite tower of $(\Phid \Phi)^n$ terms $(n >2)$. The latter, when only the term $n=3$ is assumed, corresponds to the SM effective field theory (SMEFT) approach with one single dimension-six operator, the $O_6 = (\Phid \Phi)^3$ one. The calculation of $\lambda_{3}$-dependent loop contributions in the $\phu \to \gamma Z$ decay shares many similarities with the analogous calculation for $\phu \to \gamma \gamma$. For the latter process the results in the literature obtained using the anomalous coupling approach \cite{Degrassi:2016wml} and the SMEFT approach \cite{Gorbahn:2016uoy} are not in agreement. As a by-product of our $\phu \to \gamma Z$ analysis we resolve this discrepancy. The paper is organized as follows. In section \ref{sec:2} we present the general structure of the $\lambda_{3}$-dependent contribution in $\Gamma(\phu \to \gamma Z)$, outlining the (approximate) way this contribution is evaluated. 
In section \ref{sec:3} we discuss the renormalization conditions that should be imposed on the scalar potential of a $(\Phid \Phi)^n$ theory in order to obtain a result for the anomalous trilinear contribution that is identical to the one obtained in the $\kappa$-framework working in the UG where the renormalization of the one-loop contribution is not needed. Section \ref{sec:4} presents our results for the $\lambda_{3}$-dependent contribution in the partial width $\Gamma(\phu \to \gamma Z)$ and its corresponding branching ratio (BR). Finally, in the Conclusions we shortly discuss the contribution due to an anomalous trilinear Higgs coupling in $\Gamma(\phu \to \gamma \gamma)$ confirming the equivalence between the $\kappa$-framework and the SMEFT with the $O_6$ operator. \section{$\lambda_{3}$-dependent contribution in $\Gamma (h \to \gamma Z)$} \label{sec:2} We begin by recalling the structure of the $\lambda_{3}$-dependent contribution in single Higgs processes as discussed in Ref.\cite{Degrassi:2016wml}. This contribution arises from next-to-leading order (NLO) electroweak (EW) corrections and can be organized in two categories: a universal part proportional to $(\kappa_{\lambda})^2$ due to the wave-function renormalization of the external Higgs boson, and a process-dependent part linear in $\kappa_{\lambda}$ that in general depends on the kinematics of the process under consideration. Specializing to the $h \to \gamma Z$ decay we write for the $\lambda_{3}$-dependent contribution in the width \begin{equation} \Gamma_{\lambda_{3}} = Z_H\,\Gamma_{\rm{LO}} \left( 1 + \kappa_{\lambda} C_1 \right)\,, \label{NLOEWSM} \end{equation} with \begin{eqnarray} Z_H &=& \frac1{1- \kappa_{\lambda}^2\, \delta Z_H}, \\ \delta Z_{H} &=& -\frac{9}{16} \, \frac{G_\mu\,m_{ \sss H}^2 }{\sqrt{2}\, \pi^2} \left(\frac{2\pi}{3\sqrt{3}}-1\right)= -1.536 \cdot 10^{-3} \,. 
\label{deltaZH} \end{eqnarray} In eq.(\ref{NLOEWSM}) $C_1$ is the process-dependent contribution that will be presented in this paper, while $\Gamma_{\rm{LO}}$ stands for the LO prediction. Neglecting $\mathcal{O}(\kappa_{\lambda}^3\,\alpha^2)$ terms, the relative corrections induced by an anomalous trilinear Higgs self-coupling can be expressed as \begin{equation} d \Gamma_{\lambda_{3}} \equiv \frac{\Gamma_{\rm{NLO}} - \Gamma^{\rm SM}_{\rm{NLO}}}{\Gamma_{\rm{LO}}} = (\kappa_{\lambda}-1) C_1+(\kappa_{\lambda}^2-1) C_2 \,, \label{corr} \end{equation} where $\Gamma^{\rm SM}_{\rm{NLO}}$ is the NLO SM result\footnote{In the SM NLO result only the contribution proportional to $\tril^{\rm SM}$ is included.} and \begin{equation} C_2 = \frac{\delta Z_H}{(1- \kappa_{\lambda}^2 \delta Z_H)}~. \label{eqC2} \end{equation} The range of validity of eq.(\ref{corr}) can be identified, according to Ref.\cite{Degrassi:2016wml}, with $|\kappa_{\lambda}| \lesssim 20$, where this bound was derived via an estimate of the missing terms in the perturbative calculation of single Higgs processes. Because the photon does not couple directly to neutral particles, the decay process $h \rightarrow \gamma Z$ receives contributions at LO from one-loop diagrams. Hence, the evaluation of the $C_1$ coefficient requires a two-loop calculation. An exact evaluation of the $\lambda_{3}$-dependent two-loop diagrams is not possible. Therefore, we employ the same strategy used in the analogous calculation of $\Gamma( h \to \gamma \gamma)$ \cite{Degrassi:2016wml}, namely the relevant diagrams are computed via a Taylor expansion in the external momenta. Calling $q_1$ and $q_2$ the momenta of the photon and of the $Z$, respectively, we make an expansion in the parameters $q^2_1/(4 m^2)$, $q^2_2/(4 m^2)$ and $(q_1 \cdot q_2)/(4 m^2)$, where $m$ is the mass of any particle running into the loops, i.e. 
$m=m_{t}, \,m_{ \sss H},\, m_{ \sss W},\,m_{ \sss Z}$, and at the end of the computation we set $q_1^2 =0 , \,q^2_2= m_{ \sss Z}^2$ and $(q_1 \cdot q_2)= (m_{ \sss H}^2 - m_{ \sss Z}^2)/2$. We point out that our expansion parameters are all smaller than 1, although not always very small. In order to test the consistency of our small-momentum expansion approach, we first compare the LO contribution computed exactly with its evaluation via a small-momentum expansion in the UG. The $h \to \gamma Z$ decay width can be written as \begin{equation} \Gamma(h \to \gamma Z) = \frac{G_\mu^2 m_{ \sss Z}^2 \alpha ~ m_{ \sss H}^3}{64 \pi^4} \Biggl(1-\frac{m_{ \sss Z}^2}{m_{ \sss H}^2} \Biggr)^3 |\mathcal{F} |^2 \end{equation} with \begin{equation} \mathcal{F} = N_c Q_t ~ (I_{3 t}-2Q_t \sin^2 \theta_W) \mathcal{F}_t + \cos^2 \theta_W \mathcal{F}_W \end{equation} where $\theta_W$ is the weak angle, while $\mathcal{F}_t$ and $\mathcal{F}_W$ are the fermionic and bosonic contributions to the amplitude, respectively. In the former we consider only the dominant top quark contribution, which sets the color factor, the electric charge and the weak isospin to $N_c=3, \,Q_t =2/3,\, I_{3t} =1/2$. At LO the $\mathcal{F}_t$ and $\mathcal{F}_W$ terms can be written as \cite{Cahn:1978nz,Bergstrom:1985hp} \begin{equation} \mathcal{F}_t^{\rm{LO}} = 2 \Bigl[I_1(1/h_{4t}, 1/z_{4t}) - I_2 (1/h_{4t}, 1/z_{4t}) \Bigr] \label{eq:Ft1l_analytic} \end{equation} \begin{eqnarray} \mathcal{F}_W^{\rm{LO}} &=& 4(3 - \tan^2 \theta_W) I_2(1/h_{4w}, 1/z_{4w})\nonumber \\ &+& \Bigl[ (1+ 2 h_{4w} ) \tan^2 \theta_W - (5+2 h_{4w} ) \Bigr] I_1 (1/h_{4w}, 1/z_{4w}) \label{eq:FW1l_analytic} \end{eqnarray} where $h_{4 i} = m_{ \sss H}^2/(4 m_i^2)$ and $z_{4 i} = m_{ \sss Z}^2/(4 m_i^2)$. 
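As a quick numerical orientation (a sketch of ours, not part of the paper's analysis), the LO width formula above can be evaluated with standard inputs; $G_\mu$ and $\alpha$ are assumed PDG-like values, with $\alpha$ taken at zero momentum, and $|\mathcal{F}| = 5.29$ is the LO amplitude quoted in section~4. The result is a few keV, with the exact number sensitive to the scheme chosen for $\alpha$:

```python
import math

# Assumed inputs (masses in GeV); G_mu and alpha are not quoted in this
# section, so their values here are our assumptions.
G_mu  = 1.1663787e-5      # Fermi constant, GeV^-2
alpha = 1 / 137.036       # fine-structure constant at zero momentum
m_Z, m_H = 91.1876, 125.0
F_abs = 5.29              # |F^LO| as quoted in section 4

z_h = m_Z**2 / m_H**2
Gamma = (G_mu**2 * m_Z**2 * alpha * m_H**3 / (64 * math.pi**4)
         * (1 - z_h)**3 * F_abs**2)   # width in GeV
print(f"Gamma(h -> gamma Z) ~ {Gamma * 1e6:.2f} keV")
```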
The functions $I_1$ and $I_2$ are defined as \begin{eqnarray} I_1 (\tau, \lambda) &=& \frac{\tau \lambda}{2(\tau -\lambda)} + \frac{\tau^2 \lambda^2}{2(\tau -\lambda)^2} \Bigl[f(\tau) -f(\lambda) \Bigr] + \frac{\tau^2 \lambda}{(\tau - \lambda)^2} \Bigl[ g(\tau) - g(\lambda) \Bigr], \nonumber \\ && \ \label{Iuno}\\ I_2 (\tau, \lambda) &=& - \frac{\tau \lambda}{2(\tau -\lambda)} \Bigl[f(\tau)- f(\lambda) \Bigr], \end{eqnarray} where (for $x \geq 1$) \begin{equation} f(x) = \arcsin^2 \Biggl(\frac{1}{\sqrt{x}} \Biggr)\, , \qquad g(x) = \sqrt{x -1} \arcsin \Biggl(\frac{1}{\sqrt{x}} \Biggr) . \end{equation} Using the small-momentum expansion we obtain the following expressions for the fermionic and bosonic contributions \begin{eqnarray} \mathcal{F}_t^{\rm{LO}} &=& \frac{2}{3}+\frac{1}{45}\, h_{4t} (7+11 z_h) + \frac{4}{315} \,h_{4t}^2 \left(5+8 z_h+11 z_h^2\right)\nonumber \\ &+& \frac{4}{1575}\, h_{4t}^3 \left(13+21 z_h+29 z_h^2+37 z_h^3 \right) \nonumber \\ &+& \frac{128}{51975}\, h_{4t}^4 \left(8+13 z_h+18 z_h^2+23 z_h^3+28 z_h^4 \right) + \mathcal{O}(h_{4t}^5) \label{Ftexp}\\ \mathcal{F}_W^{\rm{LO}} &= & - 7-\frac{2}{15}\, h_{4w} (11-37 z_h)+ \frac{4}{105}\, h_{4w}^2 \left(-19-z_h+31 z_h^2\right) \nonumber \\ &-& \frac{8}{1575}\, h_{4w}^3 \left(87+19 z_h-39 z_h^2-97 z_h^3\right) \nonumber \\ &-& \frac{16}{17325}\, h_{4w}^4 \left(328+93 z_h-32 z_h^2-157 z_h^3-282 z_h^4\right) + \mathcal{O}(h_{4w}^5) \label{Fwexp} \end{eqnarray} where $z_h= m_{ \sss Z}^2/m_{ \sss H}^2$. We checked that eq.(\ref{Ftexp}) and eq.(\ref{Fwexp}) match exactly the expansion of the expressions in eq.(\ref{eq:Ft1l_analytic}) and eq.(\ref{eq:FW1l_analytic}) in the limit of small $h_{4 i}$ and $z_{4 i}$. Neglecting the last known contribution in eqs.(\ref{Ftexp},\ref{Fwexp}), i.e. $\mathcal{O}(h_{4t}^4)$ and $\mathcal{O}(h_{4w}^4)$, the numerical result for the decay width obtained with the small-momentum expansion differs from the evaluation of the exact expression by 2.6\%. 
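The truncated series in eqs.(\ref{Ftexp})--(\ref{Fwexp}) are straightforward to evaluate term by term; the sketch below (ours) makes the convergence pattern explicit with the masses used in this paper:

```python
m_W, m_Z, m_H, m_t = 80.385, 91.1876, 125.0, 172.5
z   = m_Z**2 / m_H**2          # z_h
h_t = m_H**2 / (4 * m_t**2)    # h_{4t}
h_w = m_H**2 / (4 * m_W**2)    # h_{4w}

# Successive terms of eq. (Ftexp)
Ft_terms = [
    2/3,
    h_t * (7 + 11*z) / 45,
    4/315  * h_t**2 * (5 + 8*z + 11*z**2),
    4/1575 * h_t**3 * (13 + 21*z + 29*z**2 + 37*z**3),
    128/51975 * h_t**4 * (8 + 13*z + 18*z**2 + 23*z**3 + 28*z**4),
]
# Successive terms of eq. (Fwexp)
Fw_terms = [
    -7,
    -2/15   * h_w * (11 - 37*z),
    4/105   * h_w**2 * (-19 - z + 31*z**2),
    -8/1575 * h_w**3 * (87 + 19*z - 39*z**2 - 97*z**3),
    -16/17325 * h_w**4 * (328 + 93*z - 32*z**2 - 157*z**3 - 282*z**4),
]
print("F_t^LO =", sum(Ft_terms), "  F_W^LO =", sum(Fw_terms))
```

Each successive term is smaller in magnitude than the previous one, consistently with the few-percent accuracy estimated in the text.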
Including also this last term, the difference reduces to 1.3\%. Then, we can estimate that the evaluation of the two-loop contribution via a small-momentum expansion including $\mathcal{O}(h_{4t}^3)$ and $\mathcal{O}(h_{4w}^3)$ terms is expected to differ from the exact result by $\mathcal{O}(5\%)$. \begin{figure}[t] \centering \subfloat[]{ \includegraphics[scale=0.20]{2l_uni_ferm.pdf} } \subfloat[]{ \includegraphics[scale=0.20]{2l_uni_bos.pdf} } \caption{Examples of two-loop diagrams contributing to the $h\rightarrow \gamma Z$ amplitude: (a) diagram contributing to $\mathcal{F}_t$; (b) diagram contributing to $\mathcal{F}_W$. } \label{fig:1loop} \end{figure} The $C_1$ coefficient in eq.(\ref{corr}) can be defined as \cite{Degrassi:2016wml} \begin{equation} C_1= \frac{\int d\Phi ~ 2 \text{Re}\Bigl(\mathcal{M}^{0 *} \mathcal{M}^1_{\lambda_3^{\text{SM}}}\Bigr)}{\int d\Phi ~|\mathcal{M}^0|^2} \end{equation} where the integration in $d \Phi$ is over the phase space of the final-state particles, $\mathcal{M}^{0}$ is the Born amplitude and $\mathcal{M}^1_{\lambda_3^{\text{SM}}}$ is the $\tril^{\rm SM}$-linearly-dependent contribution in the loop-corrected amplitude evaluated in the SM. However, since in the $h \to \gamma Z$ case the phase-space integral is just a multiplicative factor and both amplitudes are purely real, $C_1$ can be more easily written as \begin{equation} C_1 = \frac{2 ~ \mathcal{F}^{\rm{NLO}}_{\text{1PI}}}{\mathcal{F}^{\rm{LO}}}\,, \end{equation} where $\mathcal{F}^{\rm{NLO}}_{\text{1PI}}$ represents the one-particle irreducible (1PI) two-loop diagrams containing an $h^3$ interaction. In order to evaluate the $C_1$ coefficient we generated, in the UG, the two-loop diagrams contributing to the $h \to \gamma Z$ amplitude using the Mathematica package \texttt{FeynArts}\cite{Hahn:2000kx}. As in the one-loop case, the diagrams can be assigned to the two categories $\mathcal{F}_t, \, \mathcal{F}_W$, see fig.~\ref{fig:1loop}. 
The diagrams were manipulated using the package \texttt{FeynCalc}\cite{Mertig:1990an,Shtabovenko:2016sxi}, expanded in the external momenta and reduced to scalar integrals using a private code. After the reduction to scalar integrals we were left with the evaluation of two-loop vacuum integrals, which were computed analytically using the results of Ref.\cite{Davydychev:1992mt}. The result for $C_1$ is automatically finite in the unitary gauge, i.e.\ no renormalization is needed, since the LO result does not depend on the trilinear coupling. As expected, the fermionic and bosonic contributions are separately finite. \section{$\lambda_{3}$-dependent contribution in a $(\Phid \Phi)^n$ theory} \label{sec:3} This section is devoted to discussing how the result obtained in the $\kappa$-framework working in the UG can be recovered using a SM Lagrangian with a modified scalar potential of the form \begin{equation} V^{NP} = \sum_{n=1}^N c_{2n} ( \Phid \Phi )^n\,, \qquad\qquad \Phi = \binom{\php}{\frac{1}{\sqrt{2}}( v+ \phu + i \phd )}\,, \label{potential} \end{equation} working in a renormalizable gauge that we choose for simplicity to be the Feynman one (FG). In eq.(\ref{potential}) $N$ can be a finite integer or infinite, and in the latter case we assume the series to be convergent, while the SM potential is recovered setting $N=2$ with $c_2 = - m^2$ and $c_4 = \lambda$, where $-m^2$ is the Higgs mass term in the SM Lagrangian in the unbroken phase. A simplified discussion of the equivalence of the $\kappa$-framework with a $(\Phid \Phi)^n$ theory, when the modification of the trilinear Higgs self-coupling appears at the two-loop level, was presented in Ref.~\cite{Degrassi:2017ucl}. In that reference the renormalization of the scalar potential in eq.(\ref{potential}) was not discussed in detail because it was not needed. 
While the calculation of $\Gamma(h \to \gamma Z)$ in the $\kappa$-framework is automatically finite, the one in a $(\Phid \Phi)^n$ theory requires addressing the renormalization of the scalar potential. It is then natural to try to devise a renormalization procedure such that one obtains automatically the same result in the two approaches. The potential $V^{NP}$ up to quintic interactions can be written as \begin{eqnarray} V_{5\phi}^{NP} &=& v\, \tau\, \phu + \tau \left[\php\phm + \frac12 \phd^2 \right] + \frac12 m_{ \sss H}^2 \,\phu^2 + \left( \frac{M_{ \sss H}^2}{2 v} + v\,d \lambda_{3}\right)\phu^3 + \nonumber\\ && \frac{M_{ \sss H}^2}{ v} \,\phu \left[ \php\phm + \frac12 \phd^2 \right] + \frac{M_{ \sss H}^2}{2\,v^2} \left[\php\phm + \frac12 \phd^2 \right]^2 + \nonumber\\ && \left( \frac{M_{ \sss H}^2}{2\, v^2} + d \lambda_4\right) \frac14 \phu^4 + \left( \frac{M_{ \sss H}^2}{2 v^2} + 3 \,d \lambda_{3} \right) \phu^2 \left[ \php\phm + \frac12 \phd^2 \right] + \nonumber\\ && \frac1v \left\{d\lambda_5 \phu^5 + \left( -3 \,d \lambda_{3} + d \lambda_4 \right) \phu^3 \,\left[\php\phm + \frac12 \phd^2 \right] + \right. \nonumber\\ && ~~~~~~\left. 3 \,d \lambda_{3}\, \phu\,\left[\php\phm + \frac12 \phd^2 \right]^2 \right\}~. \label{PotqP} \end{eqnarray} The condition of the minimum of the potential reads \begin{equation} \left.\frac{d \, V^{NP}}{d \, \phu} \right|_{\phu=0} = v\,\tau = v\,\sum_{n=1}^N c_{2n}\, n \left(\frac{v^2}{2}\right)^{n-1} =0 ~, \label{minV} \end{equation} and \begin{equation} m_{ \sss H}^2 = v^2\,\sum_{n=1}^N c_{2 n}\, n(n-1) \left(\frac{v^2}{2}\right)^{n-2} + \sum_{n=1}^N c_{2 n}\, n \left(\frac{v^2}{2}\right)^{n-1} \equiv M_{ \sss H}^2 + \tau \label{Hmass} \end{equation} so that enforcing the minimum condition implies $m_{ \sss H}^2 = M_{ \sss H}^2$. 
The anomalous contributions in eq.(\ref{PotqP}) are: \begin{eqnarray} d \lambda_{3} &=& \frac13 \sum_{n=3}^N c_{2n} \, n(n-1)(n-2) \left(\frac{v^2}{2}\right)^{n-2} \,, \label{dc3}\\ d \lambda_4 & =& \frac23 \sum_{n=3}^N c_{2n} \, n^2(n-1)(n-2) \left(\frac{v^2}{2}\right)^{n-2} \label{dc4}\\ d \lambda_5 &=& \frac1{30} \sum_{n=3}^N c_{2n} \, n(n-1)(n-2)\left(\frac32-4 \,n+ 2 n^2\right) \left(\frac{v^2}{2}\right)^{n-2} \label{dc5}~. \end{eqnarray} Eqs.(\ref{dc3}--\ref{dc5}) give rise to anomalous trilinear and quadrilinear Higgs self-interactions as well as a quintic one. The $h^5$ interaction proportional to $d \lambda_5$ is not relevant for our discussion, while from the $h^3$ interaction we are going to identify $\kappa_{\lambda} = 1 + 2 v^2/m_{ \sss H}^2 \, d\lambda_{3}$. \begin{figure}[t] \begin{center} \subfloat[]{ \includegraphics[scale=0.27]{1lfeynh3_1.pdf} } \subfloat[]{ \includegraphics[scale=0.27]{1lfeynh3_2.pdf} } \subfloat[]{ \includegraphics[scale=0.27]{1lfeynh3_3.pdf} } \caption{ Examples of one-loop diagrams contributing in the FG to the \mbox{$h \to \gamma Z$} process whose renormalization gives rise to a contribution proportional to an anomalous Higgs self-coupling.} \label{fig:x} \end{center} \end{figure} All the diagrams contributing to $h \to \gamma Z$ in the UG at the one-loop level contain only quantities (i.e. the gauge couplings, the top and $W$ masses) whose one-loop renormalization is not affected by Higgs self-interactions\footnote{The tadpole contribution is assumed to be fully cancelled by the tadpole counterterm.}. Instead, the situation is different in the FG where, at one loop, there are diagrams, see fig.~\ref{fig:x}, that contain a coupling proportional to $M_{ \sss H}^2/v$ whose renormalization is affected by anomalous Higgs self-couplings. Then, the calculation in the FG requires setting up a renormalization procedure for $M_{ \sss H}^2/v$ as well as for the masses of the unphysical scalars. 
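The algebra leading from eq.(\ref{potential}) to the couplings in eq.(\ref{PotqP}) and eqs.(\ref{dc3})--(\ref{dc4}) can be spot-checked mechanically. The following sketch (ours, not part of the original computation) Taylor-expands $V^{NP}$ for the $N=3$ truncation at arbitrary rational inputs, using exact arithmetic:

```python
from fractions import Fraction as Fr

def poly_mul(a, b):
    # product of two polynomials given as coefficient lists
    out = [Fr(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def V_coeffs(c, v):
    """Taylor coefficients in phi of V = sum_n c[n-1] ((v+phi)^2/2)^n."""
    x = [v * v / 2, v, Fr(1, 2)]          # (v+phi)^2/2 as a poly in phi
    V, xn = [Fr(0)], [Fr(1)]
    for cn in c:
        xn = poly_mul(xn, x)
        V += [Fr(0)] * (len(xn) - len(V))
        for i, coef in enumerate(xn):
            V[i] += cn * coef
    return V

v, c2, c4, c6 = Fr(2), Fr(3), Fr(5), Fr(7)   # arbitrary rational inputs
V = V_coeffs([c2, c4, c6], v)

tau = c2 + 2*c4*(v**2/2) + 3*c6*(v**2/2)**2  # eq. (minV), divided by v
MH2 = v**2 * (2*c4 + 6*c6*(v**2/2))          # M_H^2 part of eq. (Hmass)
dl3 = Fr(1, 3) * c6 * 3*2*1 * (v**2/2)       # eq. (dc3), n = 3 term
dl4 = Fr(2, 3) * c6 * 9*2*1 * (v**2/2)       # eq. (dc4), n = 3 term

assert V[1] == v * tau                        # tadpole term of eq. (PotqP)
assert V[2] == (MH2 + tau) / 2                # (1/2) m_H^2, eq. (Hmass)
assert V[3] == MH2/(2*v) + v*dl3              # phi^3 term of eq. (PotqP)
assert V[4] == (MH2/(2*v**2) + dl4) / 4       # phi^4 term of eq. (PotqP)
print("potential coefficients consistent with eqs. (dc3)-(dc4)")
```

The term-by-term ratio of eqs.(\ref{dc4}) and (\ref{dc3}) is $2n$, so for the single $(\Phid \Phi)^3$ operator one has $d\lambda_4 = 6\, d\lambda_{3}$.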
Before discussing the renormalization of $V^{NP}$ we would like to note a few things concerning the one-particle-irreducible (1PI) two-loop diagrams in the FG. The 1PI diagrams contain terms proportional to $(d \lambda_{3})^2$, i.e.~$(\kappa_{\lambda})^2$, see fig.~\ref{fig:y}(a), besides those from the wave-function renormalization of the external Higgs fields. Furthermore, there are 1PI diagrams proportional to $d \lambda_4$ from the quintic $\phu^3 \php \phm$ coupling, see fig.~\ref{fig:y}(b). Neither of these two kinds of contributions is present in the $\kappa$-framework result, and we expect our renormalization procedure to cancel exactly these contributions. \begin{figure}[t] \begin{center} \subfloat[]{ \includegraphics[scale=0.3]{2lfeyn_a_1.pdf} } \subfloat[]{ \includegraphics[scale=0.3]{2lfeyn_b_2.pdf} } \caption{ Examples of 1PI diagrams that contain contributions not present in the $\kappa$-framework result. The dot (square) represents a coupling containing $d \lambda_{3}$ ($d \lambda_4$): a) Contributions proportional to $(d \lambda_{3})^2$. b) Contributions proportional to $d \lambda_4$.} \label{fig:y} \end{center} \end{figure} We are actually interested in defining the one-loop counterterms associated with $M_{ \sss H}^2,\: \tau$ and $v$. Following closely Ref.\cite{Sirlin:1985ux}, we assume $V^{NP}$ to be written in terms of bare quantities that are shifted according to: $c_{2n} \to c_{2n}^r - \delta c_{2n},\: v \to v^r - \delta v$. As a consequence \begin{equation} V^{NP} = V^{NP}_r - \delta V^{NP}~, \label{shiftV} \end{equation} where the renormalized potential up to quintic couplings has exactly the same form given in eq.(\ref{PotqP}) but written in terms of renormalized quantities, while the relevant terms in $ \delta V^{NP}$ that require to be defined at one loop are: $ \tau,\: m_{ \sss H}^2,\: M_{ \sss H}^2,\: v$. 
To identify the vacuum as the minimum of the radiatively corrected potential we set: \begin{equation} \delta( v\, \tau) = v\, \delta \tau = v\,\sum_{n=1}^N \left(\frac{v^2}{2}\right)^{n-1} \left[ \delta c_{2n}\, n + c_{2n}\, n (n-1)\, 2 \frac{\delta v}v \right] = - T \label{eq:tadpole} \end{equation} where $i T$ is the sum of the tadpole diagrams with the external leg extracted. We identify $m_{ \sss H}^2$ in eq.(\ref{PotqP}) as the on-shell Higgs mass, leading to the condition (see eq.(\ref{Hmass})) \begin{equation} \delta m_{ \sss H}^2 = \delta M_{ \sss H}^2 + \delta \tau = {\rm Re}\, \Pi_{hh}(m_{ \sss H}^2) \label{eq:mh} \end{equation} where $ -i\, \Pi_{hh}(q^2)$ is the sum of all 1PI self-energy diagrams, and in eq.(\ref{eq:mh}) no tadpole contribution is present because eq.(\ref{eq:tadpole}) is enforced. Eqs.(\ref{eq:tadpole}-\ref{eq:mh}) imply \begin{equation} \delta M_{ \sss H}^2 = {\rm Re}\, \Pi_{hh}(m_{ \sss H}^2) + \frac{T}v~. \label{eq:Mh} \end{equation} The quantities ${\rm Re}\, \Pi_{hh}(m_{ \sss H}^2)$ and $T$ do contain terms proportional to $d \lambda_{3}$ and $d \lambda_4$ that will be relevant for our discussion. The other quantity we are interested in is $v$, whose renormalization can be set, for example, either from the muon-decay process or from the $W$ mass, and in both cases the corresponding counterterm $\delta v$ does not contain (at one loop) terms proportional to a Higgs anomalous self-coupling and therefore will not enter our discussion. Eq.(\ref{eq:tadpole}) and eq.(\ref{eq:Mh}) are the two definitions needed to address the two-loop calculation in the FG of the effect induced by an anomalous Higgs trilinear coupling on processes that are not sensitive at the tree or one-loop level to an $h^3$ interaction. 
We find for the contribution proportional to $d \lambda_{3}$ and $d \lambda_4$ in $T/v$ and ${\rm Re}\, \Pi_{hh}(m_{ \sss H}^2)$, in units of $\alpha/(4 \pi \sin^2 \theta_{W})\, m_{ \sss H}^2/( 8 \,m_{ \sss W}^2)$: \begin{eqnarray} \frac{T^{d\lambda}}{v} &=& \left(1+ 2 \frac{v^2}{m_{ \sss H}^2} d \lambda_{3} \right) \,3 \, m_{ \sss H}^2 \left( \frac1{\epsilon} +1- \ln \frac{m_{ \sss H}^2}{\mu^2} \right) \label{eq:tad} \\ {\rm Re}\, \Pi^{d \lambda}_{hh}(m_{ \sss H}^2) & = & - \left( 1+ 2 \frac{v^2}{m_{ \sss H}^2} d \lambda_{3}\right)^2\, 9\, m_{ \sss H}^2 \left( \frac1{\epsilon} +2 - \frac{\pi}{\sqrt{3}} - \ln \frac{m_{ \sss H}^2}{\mu^2} \right) \nonumber \\ && - \left(1+ 2 \frac{v^2}{m_{ \sss H}^2} d \lambda_4 \right) \,3 \, m_{ \sss H}^2 \left( \frac1{\epsilon} +1- \ln \frac{m_{ \sss H}^2}{\mu^2} \right) \nonumber \\ && - \left( 1+ 6 \frac{v^2}{m_{ \sss H}^2} d \lambda_{3} \right) \left[ 2\, m_{ \sss W}^2 \left( \frac1{\epsilon} +1-\ln \frac{m_{ \sss W}^2}{\mu^2} \right) \right. + \nonumber \\ && ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \left. m_{ \sss Z}^2 \left( \frac1{\epsilon} +1-\ln \frac{m_{ \sss Z}^2}{\mu^2} \right) \right] \label{eq:Pi} \end{eqnarray} where $\epsilon =(4 - nd)/2$, $nd$ being the dimension of space-time, and $\mu$ is the 't~Hooft mass. From the inspection of eq.(\ref{eq:Pi}) one sees that the counterterm associated with $M_{ \sss H}^2$ (eq.(\ref{eq:Mh})) contains terms proportional to $(d\lambda_{3})^2$ and $d \lambda_4 $. 
It is easy to show that the renormalization of the one-loop diagrams proportional to $M_{ \sss H}^2$ (see fig.~\ref{fig:x}) cancels exactly the 1PI two-loop contribution proportional to $(d \lambda_{3})^2$, see fig.~\ref{fig:y}(a), as well as the one proportional to $d \lambda_4$, see fig.~\ref{fig:y}(b), restoring in the FG the same dependence on the anomalous coupling found in the $\kappa$-framework, i.e.\ linear in $\kappa_{\lambda}$, apart from the quadratic contribution related to the wave-function renormalization, and no dependence on $\lambda_4$. The renormalization condition eq.(\ref{eq:tadpole}) also specifies the part of the mass counterterms for the unphysical scalars that is affected by anomalous Higgs self-couplings, i.e. $\delta \tau$. A detailed discussion of the role of this counterterm in the cancellations in the FG between 1PI and counterterm diagrams can be found in Ref.\cite{Degrassi:2017ucl}. \begin{figure}[t] \begin{center} \includegraphics[width=0.5\textwidth]{2l1PR.pdf} \caption{ Two-loop reducible diagrams that give rise to a contribution proportional to $d \lambda_{3}$ in the FG. The meaning of the dot is as in fig.~\ref{fig:y}.} \label{fig:z} \end{center} \end{figure} To complete the calculation of the contribution of an anomalous trilinear Higgs self-coupling in the FG one has to take into account also the effect of the reducible diagrams shown in fig.~\ref{fig:z}. In a renormalizable gauge, diagrams that contain a $\gamma Z$ self-energy evaluated at vanishing external momentum also contribute to $\mathcal{F}_W$. These diagrams do not contribute in the UG because in this gauge the $\gamma Z$ self-energy evaluated at vanishing external momentum is zero. We have computed the functions ${\cal F}_t$ and ${\cal F}_W$ in the FG and found agreement with the result in the UG within the order of our approximation, as discussed in the next section. 
We remark that with our choice of renormalization conditions the agreement is found in a straightforward way because both results are expressed in terms of physical quantities. As a final remark we notice that the potential in eq.(\ref{potential}) has $N$ coefficients $c_{2n}$, which are assumed to be bare quantities, while we needed to impose only two renormalization conditions, eq.(\ref{eq:tadpole}) and eq.(\ref{eq:mh}). Furthermore, the modifications of the Higgs self-couplings with respect to the SM values, $d \lambda_i$, $i=3,4,5$, involve only coefficients with $n \geq 3$ (see eqs.(\ref{dc3}-\ref{dc5})). Then, on the one side we have constructed a framework such that the limit to the SM case is straightforward. On the other, because we did not need to specify any renormalization condition on the modifications $d \lambda_i$, their renormalization is still free and can be specified via other processes. \section{Results} \label{sec:4} In this section we present the results for the $C_1$ coefficient. In table \ref{tab:uninum1} we give the numerical results for the first four orders of the expansion of $\mathcal{F}_t^{\rm{NLO}}$ and $\mathcal{F}_W^{\rm{NLO}}$, up to and including terms of $\mathcal{O}(h_{4t}^3)$ and $\mathcal{O}(h_{4w}^3)$, respectively. The input parameters used are the same as in Ref.~\cite{Degrassi:2016wml}, namely \begin{equation} m_{ \sss W} = 80.385,\: m_{ \sss Z} = 91.1876,\: m_{ \sss H} =125, \: m_{t} = 172.5 \end{equation} where all the masses are in GeV. 
\begin{table}[t] \centering \begin{tabular}{cccccc} \hline Fermionic & $h_{4t}^0$ & $h_{4t}^1$ & $h_{4t}^2$ & $h_{4t}^3$ & Total \\ \hline $\vphantom{\Bigl[}\mathcal{F}_t^{\rm{NLO}} (10^{-2})$ & 0.197 & 0.086 & 0.016 & 0.003 & 0.303 \\ $\mathcal{F}_t^{\rm{NLO}}$/$h_{4t}^0$ & 1.0 & 43.7\% & 8.1\% & 1.5\% & \\ \hline Bosonic & $h_{4w}^0$ & $h_{4w}^1$ & $h_{4w}^2$ & $h_{4w}^3$ & Total \\ \hline $\vphantom{\Bigl[}\mathcal{F}_W^{\rm{NLO}} (10^{-2})$ & -1.645 & -0.538 & -0.129 & -0.043 & -2.355 \\ $\mathcal{F}_W^{\rm{NLO}}$/$h_{4w}^0$ & 1.0 & 32.7\% & 7.9\% & 2.6\% & \\ \hline \end{tabular} \caption{Small-momentum expansion results for the two-loop $\lambda_3$-dependent contributions to the $h \to \gamma Z$ amplitude. The importance of each term in the expansion with respect to the first one is also shown.} \label{tab:uninum1} \end{table} The table shows that the expansions of both the fermionic and bosonic contributions show good convergence. In particular, the last term in the fermionic expansion, $\mathcal{O} (h_{4t}^3)$, contributes to the total fermionic contribution at the level of 1\%, while in the bosonic case the last term, $\mathcal{O} (h_{4w}^3)$, contributes to the total at the level of 2\%. Given $\mathcal{F}^{\rm{LO}}=-5.29$ we find $C_1 = 0.72 \cdot 10^{-2}$, a value larger than the $C_1$ coefficient for the $h \to \gamma \gamma$ decay ($C_1^{\gamma\gamma}=0.49 \cdot 10^{-2}$) and close to the one of the $h \to ZZ$ decay ($C_1^{ZZ}=0.83 \cdot 10^{-2}$), which is the decay mode most sensitive to an anomalous trilinear coupling. The effect of an anomalous $\lambda_3$ in the partial decay width $\Gamma(h \to \gamma Z)$ and in the corresponding BR is presented in fig.~\ref{fig:deltagammabr} as a function of $\kappa_{\lambda}$. 
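For reference, the size of the correction of eq.(\ref{corr}) as a function of $\kappa_{\lambda}$ can be reproduced with a few lines of code (a sketch of ours; the value of $G_\mu$ is an assumed PDG input, while $C_1$ is the value quoted above):

```python
import math

G_mu, m_H = 1.1663787e-5, 125.0   # assumed Fermi constant (GeV^-2), m_H in GeV

# eq. (deltaZH): universal Higgs wave-function contribution
dZH = (-9/16 * G_mu * m_H**2 / (math.sqrt(2) * math.pi**2)
       * (2*math.pi / (3*math.sqrt(3)) - 1))

C1 = 0.72e-2                      # h -> gamma Z value found in this section

def dGamma(kl):
    """Relative correction of eq. (corr) for kappa_lambda = kl."""
    C2 = dZH / (1 - kl**2 * dZH)  # eq. (eqC2)
    return (kl - 1) * C1 + (kl**2 - 1) * C2

for kl in (-10, -5, 1, 5, 10):
    print(f"kappa_lambda = {kl:4d}:  dGamma = {100 * dGamma(kl):+.1f}%")
```

The correction vanishes identically at $\kappa_{\lambda}=1$ and is asymmetric in $\kappa_{\lambda} \to -\kappa_{\lambda}$ because of the linear $C_1$ term.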
Similarly to the other decay modes of the Higgs boson \cite{Degrassi:2016wml}, the correction to the partial decay width can be substantial even for $-10 \lesssim \kappa_{\lambda} \lesssim 10$, while for the same range of $\kappa_{\lambda}$ values the correction to the BR is much smaller because in the BR the universal quadratic dependence on $\kappa_{\lambda}$ in eq.(\ref{corr}) cancels out. \begin{figure}[t] \centering \subfloat[]{ \includegraphics[scale=0.5]{deltagamma.pdf} } \subfloat[]{ \includegraphics[scale=0.5]{deltaBR.pdf} } \caption{Modification of the $h \to \gamma Z$ decay width (a) and branching ratio (b) due to an anomalous $\lambda_3$.} \label{fig:deltagammabr} \end{figure} Finally, we want to comment on the numerical agreement between the calculation using the $\kappa$-framework working in the UG and the one in a $(\Phid \Phi)^n$ theory working in the FG. While the results of the expansion for the fermionic contribution in the UG and in the FG are equal at the analytic level term by term, the same is not true for the bosonic contribution. The reason is related to the fact that the expansion in the external momenta is, in general, not the same in the two gauges. Indeed, part of the kinematic dependence in the UG is transferred in the FG to couplings, as can be easily understood by looking at the one-loop diagrams. The diagrams in fig.~\ref{fig:x}, which appear in the FG, are proportional to $m_{ \sss H}$ from the coupling $\phu \php\phm$, while in the UG the only dependence on $m_{ \sss H}$ at one loop is of kinematical origin, arising when the external momenta are evaluated on-shell. In this situation we expect that the evaluation of two-loop integrals via an expansion in kinematical variables will not give, at a fixed order, the same number in the UG and FG. However, if the expansion is made correctly, we expect the numerical difference between the two results to be of higher order with respect to the last known term. 
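This expectation can be cross-checked numerically from the per-order entries quoted in table~\ref{tab:relativeUGFG} below, reading the $\Delta \mathcal{F}_W$ row as computed on cumulative partial sums of the expansion (our reading of the table, which reproduces all four rounded entries):

```python
# Per-order bosonic contributions in units of 10^-2, from the table
UG = [-1.645, -0.538, -0.129, -0.043]
FG = [-1.181, -0.727, -0.280, -0.113]

deltas = []
u = f = 0.0
for du, df in zip(UG, FG):
    u, f = u + du, f + df                  # cumulative partial sums
    deltas.append(abs(2 * (u - f) / (u + f)))
print(["%.1f%%" % (100 * d) for d in deltas])
```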
Defining the quantity \begin{equation} \Delta \mathcal{F}_W = \Biggl|\frac{2 (\mathcal{F}_W^\text{UG}-\mathcal{F}_W^\text{FG})}{ (\mathcal{F}_W^\text{UG}+\mathcal{F}_W^\text{FG})}\Biggr| \end{equation} as the relative difference between the results in the UG and the FG, we find $\Delta \mathcal{F}_W =2.6\%$ for the total result, that is well within the expected accuracy of an expanded result up to and including $h_{4w}^3$ terms. In table \ref{tab:relativeUGFG} we present $\Delta \mathcal{F}_W$ order by order in the expansion, to show the nice convergence pattern between the numerical values in the UG and in the FG. \begin{table}[t] \centering \begin{tabular}{cccccc} \hline & $h_{4w}^0$ & $h_{4w}^1$ & $h_{4w}^2$ & $h_{4w}^3$ & Total\\ \hline $\vphantom{\Bigl[}\mathcal{F}_W^\text{UG} (10^{-2})$ & -1.645 & -0.538 & -0.129 & -0.043 & -2.355 \\ $\vphantom{\Bigl[} \mathcal{F}_W^\text{FG} (10^{-2})$ & -1.181 & -0.727 & -0.280 & -0.113 & -2.301\\ $\Delta \mathcal{F}_W$& 32.8\% & 13.4\% & 5.5\% & 2.3\% & \\ \hline \end{tabular} \caption{Relative importance of the difference between the UG and FG results for the bosonic contributions, for different orders of the expansion.} \label{tab:relativeUGFG} \end{table} \section{Conclusions} \label{sec:5} In this work we have discussed the modifications in the partial decay width $\Gamma(h \to \gamma Z)$ and in its BR induced by an anomalous trilinear Higgs self-coupling. The two-loop computation has been performed in the $\kappa$-framework and in a $(\Phid \Phi)^n$ theory. In the latter case we had to address the renormalization of the scalar potential. We showed that the conditions on the minimum of the potential (eq.(\ref{eq:tadpole})) and on the Higgs mass (eq.(\ref{eq:Mh})) are sufficient in order to obtain a finite result in the $(\Phid \Phi)^n$ theory in agreement with the one obtained in the $\kappa$-framework within the order of our approximate calculation. 
It should be remarked that the two above renormalization conditions do not actually specify the Wilson coefficients of operators with dimension larger than 4. We have found that the sensitivity of the $h \to \gamma Z$ process to an anomalous trilinear Higgs self-coupling is very similar to that of the $h \to WW$ mode, which is the second most sensitive mode after the $h \to ZZ$ one. The same renormalization framework employed in this work can be used to discuss the $h \to \gamma \gamma$ decay in a $(\Phid \Phi)^n$ theory or, if only the terms $ n\leq 3$ are considered, in the SMEFT with only the $O_6$ operator. For this decay there are in the literature two results, which are not in agreement: one obtained in the $\kappa$-framework via an expansion in the external momenta up to and including terms $h^3_{4t}$, $h^3_{4w}$ \cite{Degrassi:2016wml}, and one in the SMEFT where the diagrams were evaluated in the limit $m_{ \sss W} \to \infty$ or equivalently $m_{ \sss H} \to 0$ \cite{Gorbahn:2016uoy}. We proved that the result obtained using a $(\Phid \Phi)^n$ theory with the renormalization conditions discussed in section \ref{sec:3} is identical, at the analytic level term by term, to that obtained in the $\kappa$-framework, confirming the result of Ref.\cite{Degrassi:2016wml}. It is not easy to pin down the source of the discrepancy with the result in Ref.\cite{Gorbahn:2016uoy}. We suspect that it could be due to a different way of taking into account the renormalization of $M_{ \sss H}^2$, which gives a contribution also in the limit $m_{ \sss H} \to 0$ taken in that work. \section*{Acknowledgements} The authors thank F.~Maltoni for useful comments. This work was partially supported by the Italian Ministry of Research (MIUR) under grant PRIN 20172LNEEZ. \bibliographystyle{JHEP}
\section{Introduction} In recent years, the study of fluctuations in nonequilibrium systems has drawn considerable attention to a new kind of fluctuation theorem. These fluctuation theorems are asymmetric relations for the distribution functions of work, heat, etc., and may be satisfied even far from equilibrium states or for small systems in which the magnitude of the fluctuations can be large. These fluctuation theorems have been proved for deterministic thermostated systems \cite{ECM93,ES94,GC95} as well as for stochastic systems \cite{K98,LS99}, and have also been discussed in connection with the Onsager-Machlup fluctuation theory \cite{TC07a}. Moreover, experimental confirmations for these theorems have been obtained \cite{CL98,WSM02,ST05,SST05}. It has also been shown that the fluctuation theorems include the fluctuation-dissipation theorem, as well as Onsager's reciprocal relations, near equilibrium states \cite{ECM93,LS99,G96}. In our previous paper \cite{TC07a}, based on a generalization of the Onsager-Machlup theory for fluctuations around equilibrium to those around nonequilibrium steady states using a path integral approach, we discussed fluctuation theorems for a stochastic dynamics described by a Langevin equation. For a Brownian particle driven by a mechanical force $F(x_{s},s)$, the Langevin equation for the particle position $x_{s}$ at time $s$ is of the general form \begin{eqnarray} m\frac{d^{2} x_{s}}{ds^{2}} = - \alpha \frac{d x_{s}}{ds} + F(x_{s},s) + \zeta_{s} \label{LangeEquat0a} \end{eqnarray} with the mass $m$ of the particle, the friction coefficient $\alpha$ and a random noise $\zeta_{s}$. In our previous paper, as a nonequilibrium model we considered a dragged Brownian particle, in which the mechanical force is given by a harmonic force $F(x_{s},s)=-\kappa (x_{s}-vs)$ with the spring constant $\kappa$ and the dragging velocity $v$. Furthermore, we mainly considered this model under the over-damped assumption.
This assumption can be used for dynamics on time scales much longer than the inertial characteristic time $\tau_{m}\equiv m/\alpha$, and the dynamical equation under this assumption is simply given by neglecting the inertial term containing the mass in Eq. (\ref{LangeEquat0a}), i.e. by \begin{eqnarray} \frac{d x_{s}}{ds} = \frac{1}{\alpha} F(x_{s},s) + \frac{1}{\alpha} \zeta_{s} . \label{LangeEquat0b} \end{eqnarray} Equation (\ref{LangeEquat0b}) is much simpler than Eq. (\ref{LangeEquat0a}), but information about the system on time scales shorter than $\tau_{m}$ is lost in Eq. (\ref{LangeEquat0b}). It may be noted that Machlup and Onsager already developed their fluctuation theory around equilibrium not only for the case corresponding to the over-damped case \cite{OM53} but also for the inertial case \cite{MO53}. In our previous paper we also discussed a generalization of the Onsager-Machlup theory for nonequilibrium steady states including the inertial term \cite{TC07a}. However, there we treated only one type of fluctuation theorem, the so-called transient fluctuation theorem \cite{ES94}, which is restricted to equilibrium initial conditions. Another fluctuation theorem, the asymptotic fluctuation theorem \cite{GC95}, which holds for any initial condition (including a nonequilibrium steady state% \footnote{ A fluctuation theorem for a nonequilibrium steady state initial condition has been called the steady state fluctuation theorem (or the Gallavotti-Cohen fluctuation theorem \cite{GC95}), which is a special case of asymptotic fluctuation theorems. } $\!$), was not discussed for inertial cases in Ref. \cite{TC07a}. Unlike the transient fluctuation theorem, which is valid for all times as a mathematical identity \cite{CG99}, the asymptotic fluctuation theorem is satisfied in the long time limit only.
However, as we will discuss in this paper, a variety of interesting inertial effects appear for finite times for a nonequilibrium initial condition, before the asymptotic fluctuation theorem is achieved. Although there are some results for fluctuation theorems for stochastic systems including inertia \cite{ZBC05,DJG06}, the asymptotic fluctuation theorem with inertia has not been discussed fully in connection with the Onsager-Machlup theory so far. The purpose of this paper is therefore to discuss, in the context of the Onsager-Machlup path integral approach, inertial effects in nonequilibrium steady state work fluctuations, including the asymptotic fluctuation theorem. For these discussions we use the Langevin equation (\ref{LangeEquat0a}) for a dragged Brownian particle without the over-damped assumption. The work distribution function is calculated explicitly for any initial condition, and its finite time properties are investigated. As an important inertial effect we show that there is a critical value of the mass above which the work distribution function shows a time-oscillatory behavior. The nonequilibrium work used in this paper is based on the generalized Onsager-Machlup theory, as obtained in our previous paper \cite{TC07a}. In that paper we considered two kinds of work in two different frames: (A) the work $\mathcal{W}_{l}$ done in the laboratory frame ($l$) and (B) the work $\mathcal{W}_{c}$ done in the comoving frame ($c$) where the average velocity of the Brownian particle is zero in a nonequilibrium steady state. A difference between these two works is that $\mathcal{W}_{c}$ includes a d'Alembert-like force, which is absent in $\mathcal{W}_{l}$. In this paper, we show that both the works $\mathcal{W}_{l}$ and $\mathcal{W}_{c}$ satisfy the asymptotic fluctuation theorem. We also discuss dramatic differences between the work distribution functions for $\mathcal{W}_{l}$ and $\mathcal{W}_{c}$ for finite times. The outline of this paper is as follows. In Sec.
\ref{DraggBrown} we introduce a dragged Brownian particle model with inertia, and treat its dynamics using a path integral. In Sec. \ref{WorkInert} we introduce the works done in the laboratory and comoving frames and calculate their distribution functions. In Sec. \ref{AsympFluct} we prove the asymptotic work fluctuation theorem. In Sec. \ref{InertEffec} we discuss inertial effects in the work distribution functions for finite times. Finally, Sec. \ref{ConclRemar} is devoted to a summary and some remarks on this paper. \section{Dragged Brownian Particle with Inertia} \label{DraggBrown} We consider a Brownian particle confined by a harmonic potential, which moves with a constant velocity $v$ through a fluid, as discussed in our previous paper \cite{TC07a}. The dynamics of this particle is described by a Langevin equation \begin{eqnarray} m\frac{d^{2} x_{s}}{ds^{2}} = - \alpha \frac{d x_{s}}{ds} - \kappa \left(x_{s}-vs\right) + \zeta_{s} . \label{LangeEquat1} \end{eqnarray} Here, we assume that $\zeta_{s}$ is a Gaussian white random force whose probability functional $P_{\zeta}(\{\zeta_{s}\})$ for $\{\zeta_{s}\}_{s\in[t_{0},t]}$ is given by \begin{eqnarray} P_{\zeta}(\{\zeta_{s}\}) = C_{\zeta} \exp\left( -\frac{\beta}{4\alpha}\int_{t_{0}}^{t}ds\; \zeta_{s}^{2} \right) \label{NoiseProba1} \end{eqnarray} with the normalization coefficient $C_{\zeta}$ and the inverse temperature $\beta \equiv 1/(k_{B}T)$, where $k_{B}$ is Boltzmann's constant and $T$ is the temperature of the heat reservoir. [Note that the coefficient $C_{\zeta}$ can depend on the initial time $t_{0}$ and the final time $t$, but such time dependences in $C_{\zeta}$, as well as in similar coefficients $C_{x}$ and $C_{\mathcal{E}}$ introduced later, are suppressed.] It follows from Eq.
(\ref{NoiseProba1}) that the first two auto-correlation functions of the random force $\zeta_{s}$ are given by $\langle \zeta_{s} \rangle = 0 $ and $\langle \zeta_{s_{1}}\zeta_{s_{2}} \rangle = (2\alpha/\beta) \delta(s_{1}-s_{2})$ with the notation $\langle \cdots\rangle$ for an initial ensemble average. Now, we consider the probability functional $P_{x}(\{x_{s}\})$ for a path $\{x_{s}\}_{s\in[t_{0},t]}$ of the particle position $x_{s}$. By inserting Eq. (\ref{LangeEquat1}) into Eq. (\ref{NoiseProba1}) and interpreting the probability functional $P_{\zeta}(\{\zeta_{s}\})$ for $\zeta_{s}$ as the probability functional $P_{x}(\{x_{s}\})$ for $x_{s}$, we obtain, apart from a normalization coefficient, \begin{eqnarray} && P_{x}(\{x_{s}\}) \nonumber \\ && = C_{x} \exp\left[ -\frac{1}{4D}\int_{t_{0}}^{t}ds\; \left(\dot{x}_{s} + \frac{x_{s}-vs}{\tau_{r}} + \frac{m}{\alpha}\ddot{x}_{s} \right)^{2}\right] \hspace{0.5cm} \label{PathProba1} \end{eqnarray} with $\dot{x}_{s}\equiv dx_{s}/ds$, $\ddot{x}_{s}\equiv d^{2}x_{s}/ds^{2}$ and the normalization coefficient $C_{x}$. Here, $D\equiv k_{B}T/\alpha$ is the diffusion constant given by the Einstein relation and $\tau_{r}\equiv\alpha/\kappa$ is the relaxation time in the over-damped case. For another derivation of Eq. (\ref{PathProba1}) via a Fokker-Planck equation corresponding to the Langevin equation, see, for example, Ref. \cite{R89}. For systems whose dynamics is expressed by a second-order Langevin equation, like Eq. (\ref{LangeEquat1}), we introduce the path integration of any functional $X(\{x_{s}\})$ as $ \int_{(x_{t_{0}},\dot{x}_{t_{0}})=(x_{i},p_{i}/m)}^{ (x_{t},\dot{x}_{t})=(x_{f},p_{f}/m)} \mathcal{D}x_{s} \;$ $X(\{x_{s}\}) $, with respect to paths $\{x_{s}\}_{s\in(t_{0},t)}$ satisfying the initial ($i$) condition $(x_{t_{0}},\dot{x}_{t_{0}})=(x_{i},p_{i}/m)$ and the final ($f$) condition $(x_{t},\dot{x}_{t})=(x_{f},p_{f}/m)$. 
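As an illustration only (not part of the derivation), the Langevin equation (\ref{LangeEquat1}) can be integrated with a simple Euler-Maruyama scheme; all parameter values below are arbitrary choices made for this sketch:

```python
import numpy as np

# Euler-Maruyama integration of the dragged-particle Langevin equation.
# All parameter values are arbitrary sample choices, not from the paper.
rng = np.random.default_rng(0)
m, alpha, kappa, v, beta = 1.0, 2.0, 1.0, 0.5, 1.0
tau_r = alpha / kappa                     # over-damped relaxation time
dt, nsteps, burn = 1e-3, 400_000, 100_000
# white noise with <zeta_s1 zeta_s2> = (2 alpha / beta) delta(s1 - s2)
zetas = rng.normal(0.0, np.sqrt(2.0 * alpha / (beta * dt)), size=nsteps)
x, p = 0.0, 0.0
ys, ps = [], []
for step in range(nsteps):
    s = step * dt
    p += (-alpha * p / m - kappa * (x - v * s) + zetas[step]) * dt
    x += (p / m) * dt
    if step >= burn:
        ys.append(x - v * s)
        ps.append(p)
mean_y, mean_p = float(np.mean(ys)), float(np.mean(ps))
# nonequilibrium steady state: <x - v s> -> -v tau_r and <p> -> m v
print(mean_y, mean_p)
```

The time averages approach the nonequilibrium steady state values $\langle x_{s}-vs\rangle = -v\tau_{r}$ and $\langle p_{s}\rangle = mv$ that reappear as the means of the steady state distribution in Sec. \ref{InertEffec}.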
Using this notation for the functional integral, the functional average $\pathaveA{X(\{x_{s}\})}$ over all possible paths $\{x_{s}\}_{s\in(t_{0},t)}$, as well as over the initial and final positions and momenta of the particle, is represented by \begin{eqnarray} \pathaveA{X(\{x_{s}\})} &\equiv& \int\!\int dx_{i}dp_{i} \int_{(x_{t_{0}},\dot{x}_{t_{0}})=(x_{i},p_{i}/m)}^{ (x_{t},\dot{x}_{t})=(x_{f},p_{f}/m)}\; \mathcal{D}x_{s} \nonumber \\ &&\hspace{0.5cm}\times \int\!\int dx_{f}dp_{f} \; X(\{x_{s}\}) \nonumber \\ &&\hspace{0.5cm}\times P_{x}(\{x_{s}\}) f(x_{i},p_{i},t_{0}) \label{FunctAvera1} \end{eqnarray} with the initial distribution function $f(x_{i},p_{i},t_{0})$ for the particle position $x_{i}$ and momentum $p_{i}$. The normalization condition to specify the coefficient $C_{x}$ of the distribution functional (\ref{PathProba1}) is given by $\pathaveA{1}=1$ using the notation (\ref{FunctAvera1}) as well as the normalization condition $\int\!\int dx_{i}dp_{i}\; $ $f(x_{i},p_{i},t_{0})=1$ for the initial distribution function $f(x_{i},p_{i},t_{0})$. This finishes the introduction of our model and its dynamics. In the next section \ref{WorkInert} we introduce the work done on this system and calculate its probability distribution. \section{Work Distribution} \label{WorkInert} \subsection{Work to Drag a Brownian Particle and its Distribution} \label{WorkDragParti} In our previous paper \cite{TC07a}, we considered the work $\mathcal{W}$ to move the confining potential with a velocity $v$ in two frames: the laboratory frame using the particle position $x_{s}$ and the comoving frame using the particle position $y_{s}\equiv x_{s}-vs$ at time $s$. Based on a generalized Onsager-Machlup theory, we showed in Ref.
\cite{TC07a} that the work $\mathcal{W}_{l}$ done in the laboratory frame is given by $\int_{t_{0}}^{t}ds\; [-\kappa (x_{s}-vs)] v$, and the work $\mathcal{W}_{c}$ done in the comoving frame is given by $\int_{t_{0}}^{t}ds\; (-\kappa y_{s}-m\ddot{y}_{s}) v$ with $\ddot{y}_{s}\equiv d^{2} y_{s}/ds^{2} = \ddot{x}_{s}$, so that the work $\mathcal{W}$ in these two frames differs by an inertial or d'Alembert-like force $-m\ddot{y}_{s}$. To understand this difference in a concise way, note first that by the energy conservation law, the work $\mathcal{W}$ is given by the heat $Q$ and the energy difference $\Delta E$, namely by $\mathcal{W} = Q+\Delta E$, where the energy difference $\Delta E$ is the sum of the kinetic energy difference $\Delta K$ and the potential energy difference $\Delta U$, i.e. $\Delta E = \Delta U + \Delta K$. Here, the kinetic energy differences $\Delta K = \Delta K_{c}$ and $\Delta K = \Delta K_{l}$ in the comoving frame and the laboratory frame are given by $(m \dot{y}_{t}^{2}/2) - (m \dot{y}_{t_{0}}^{2}/2)$ and $(m \dot{x}_{t}^{2}/2) - (m \dot{x}_{t_{0}}^{2}/2)$, respectively, so that we obtain the relation \begin{eqnarray} \Delta K_{c} = \Delta K_{l} -\int_{t_{0}}^{t} ds\; m\ddot{x}_{s} v . \label{KinetDiffe1} \end{eqnarray} Equation (\ref{KinetDiffe1}) means that the kinetic energy difference $\Delta K$ depends on the frames and its frame-difference is determined by the d'Alembert-like force $-m \ddot{x}_{s}$ as a purely inertial effect. This frame-difference of $\Delta K$ also appears in the work, and leads to the relation $\mathcal{W}_{c} = \mathcal{W}_{l} -\int_{t_{0}}^{t} ds\; m\ddot{x}_{s} v$. A more complete explanation for this frame-dependence of the work is given in Ref. \cite{TC07a}, based on a nonequilibrium generalization of the detailed balance condition.
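In fact, Eq. (\ref{KinetDiffe1}) follows from a one-line expansion, since $\dot{y}_{s}=\dot{x}_{s}-v$ implies that the terms quadratic in $v$ cancel: \begin{eqnarray} \Delta K_{c} = \frac{m}{2}\left(\dot{x}_{t}-v\right)^{2} - \frac{m}{2}\left(\dot{x}_{t_{0}}-v\right)^{2} = \Delta K_{l} - m v \left(\dot{x}_{t}-\dot{x}_{t_{0}}\right) = \Delta K_{l} -\int_{t_{0}}^{t} ds\; m\ddot{x}_{s} v . \nonumber \end{eqnarray}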
To discuss these two different kinds of work done in the laboratory and comoving frames simultaneously in this paper, we consider the work defined in general by \begin{eqnarray} \mathcal{W}(\{x_{s}\}) = \int_{t_{0}}^{t}ds\; \left[-\kappa (x_{s}-vs) -(1-\vartheta) m\ddot{x}_{s}\right] v , \label{Work1} \end{eqnarray} which gives the work $\mathcal{W}_{l}$ done in the laboratory case $(\vartheta=1)$ as well as the work $\mathcal{W}_{c}$ done in the comoving case $(\vartheta=0)$ by changing the value of the parameter $\vartheta$.% \footnote{ The parameter $\vartheta$ in Eq. (\ref{Work1}) is chosen in a way consistent with that in our previous paper \cite{TC07a}. } Using the functional average defined by Eq. (\ref{FunctAvera1}), the probability distribution $P_{w}(W)$ for the dimensionless work $\beta\mathcal{W}(\{x_{s}\})$ is given by \begin{eqnarray} P_{w}(W,t) = \pathaveA{\delta\!\left( W-\beta\mathcal{W}(\{x_{s}\})\right)} . \label{DistriWork1} \end{eqnarray} For later computational convenience, we introduce a Fourier transformation $\mathcal{E}_{w}(i\lambda,t)$ of the work distribution function $P_{w}(W,t)$ through the function $\mathcal{E}_{w}(\lambda,t)$ defined by \begin{eqnarray} \mathcal{E}_{w}(\lambda,t) \equiv \pathaveA{e^{-\lambda \beta \mathcal{W}(\{x_{s}\})}} , \label{EFunctWork1} \end{eqnarray} so that the work distribution function $P_{w}(W)$ can be represented as \begin{eqnarray} P_{w}(W,t) =\frac{1}{2\pi}\int_{-\infty}^{+\infty} d\lambda\; \mathcal{E}_{w}(i\lambda,t) \; e^{i\lambda W} . \label{DistriWork2} \end{eqnarray} The function $\mathcal{E}_{w}(\lambda,t)$ can also be regarded as a generating function for the work $\mathcal{W}(\{x_{s}\})$. By Eq. (\ref{EFunctWork1}) we obtain a useful identity \begin{eqnarray} \mathcal{E}_{w}(0,t) = 1 \label{IdentE1} \end{eqnarray} which will be used to determine a normalization constant later [Eq. (\ref{NormaConst1})].
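As a consistency check on the definition (\ref{Work1}) (an illustrative numerical sketch, not part of the derivation), the difference between the two cases is $\mathcal{W}_{l}-\mathcal{W}_{c}=\int_{t_{0}}^{t}ds\; m\ddot{x}_{s}v = mv(\dot{x}_{t}-\dot{x}_{t_{0}})$, which can be verified on an arbitrary smooth test path:

```python
import numpy as np

# Discretized check of W_l - W_c = m v (xdot_t - xdot_t0) for the two works.
# The path x(s) = sin(s) + v s and all parameters are arbitrary test choices.
m, kappa, v = 0.7, 1.3, 0.5
s = np.linspace(0.0, 4.0, 200_001)
x = np.sin(s) + v * s
xdd = np.gradient(np.gradient(x, s), s)          # numerical second derivative

def trapezoid(f, grid):
    """Composite trapezoidal rule for samples f on the grid."""
    return float(np.sum((f[1:] + f[:-1]) * np.diff(grid)) / 2.0)

W_l = trapezoid(-kappa * (x - v * s) * v, s)              # theta = 1, laboratory
W_c = trapezoid((-kappa * (x - v * s) - m * xdd) * v, s)  # theta = 0, comoving
lhs = W_l - W_c
rhs = m * v * (np.cos(s[-1]) - np.cos(s[0]))     # m v (xdot_t - xdot_t0)
print(lhs, rhs)
```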
\subsection{Path Integral Analysis for Work Distribution} To calculate the function $\mathcal{E}_{w}(\lambda,t)$ from Eq. (\ref{EFunctWork1}), we first note that \begin{eqnarray} \mathcal{E}_{w}(\lambda,t) &=& C_{x}\int\!\int dx_{i}dp_{i} \int_{(x_{t_{0}},\dot{x}_{t_{0}})=(x_{i},p_{i}/m)}^{ (x_{t},\dot{x}_{t})=(x_{f},p_{f}/m)}\; \mathcal{D}x_{s} \nonumber \\ &&\hspace{0.5cm}\times \int\!\int dx_{f}dp_{f} \; f(x_{i},p_{i},t_{0}) \nonumber \\ &&\hspace{0.5cm}\times \exp\left[ \int_{t_{0}}^{t}ds\; L(\ddot{x}_{s},\dot{x}_{s},x_{s},s)\right] \label{EFunctWork2} \end{eqnarray} by Eqs. (\ref{PathProba1}), (\ref{FunctAvera1}), (\ref{Work1}) and (\ref{EFunctWork1}). Here, $ L(\ddot{x}_{s},\dot{x}_{s},x_{s},s)$ is defined by \begin{eqnarray} L(\ddot{x}_{s},\dot{x}_{s},x_{s},s) &\equiv& -\frac{1}{4D}\left(\dot{x}_{s} + \frac{x_{s}-v s}{\tau_{r}} + \frac{m}{\alpha}\ddot{x}_{s} \right)^{2} \nonumber \\ &&\hspace{0.5cm} +\lambda \beta \left[\kappa (x_{s}-v s) +(1-\vartheta) m\ddot{x}_{s}\right] v , \nonumber \\ \label{LagraFunct1} \end{eqnarray} which may be interpreted as a Lagrangian function including a Lagrange multiplier $\lambda$ due to the restriction of the delta function for work in Eq. (\ref{DistriWork1}) \cite{TC07a}% \footnote{ In Ref. \cite{TC07a} we called only the first term on the right-hand side of Eq. (\ref{LagraFunct1}) the Lagrangian function in the Onsager-Machlup theory, which is directly connected to a transition probability. }. Here, as elsewhere in this paper, the dependence of $L(\ddot{x}_{s},\dot{x}_{s},x_{s},s)$ on the parameters $v$, $\vartheta$, etc., has not been explicitly indicated on the left-hand side of Eq. (\ref{LagraFunct1}). The first step to calculate the function $\mathcal{E}_{w}(\lambda,t)$ is to specify the most-contributing path $\{x_{s}^{*}\}_{s\in[t_{0},t]}$ in the path integral involved on the right-hand side of Eq. (\ref{EFunctWork2}). 
Such a special path $\{x_{s}^{*}\}_{s\in[t_{0},t]}$ is introduced as the one satisfying the variational principle \begin{eqnarray} \delta \int_{t_{0}}^{t}ds\; L(\ddot{x}_{s}^{*},\dot{x}_{s}^{*},x_{s}^{*},s) = 0 \label{MinimActio1} \end{eqnarray} with the four boundary conditions $x_{t_{0}}^{*}=x_{i}$, $\dot{x}_{t_{0}}^{*}=p_{i}/m$, $x_{t}^{*}=x_{f}$ and $\dot{x}_{t}^{*}=p_{f}/m$. In a way similar to the derivation of the Euler-Lagrange equation from the minimum action principle in analytical mechanics \cite{LL69}, Eq. (\ref{MinimActio1}) leads to \begin{eqnarray} &&\frac{d^{2}}{ds^{2}} \frac{\partial L(\ddot{x}_{s}^{*},\dot{x}_{s}^{*} ,x_{s}^{*},s) }{\partial \ddot{x}_{s}^{*}} - \frac{d}{ds}\frac{\partial L(\ddot{x}_{s}^{*},\dot{x}_{s}^{*} ,x_{s}^{*},s) }{\partial \dot{x}_{s}^{*}} \nonumber \\ &&\hspace{0.5cm} + \frac{\partial L(\ddot{x}_{s}^{*},\dot{x}_{s}^{*} ,x_{s}^{*},s) }{\partial x_{s}^{*}} = 0 \label{LagraEquat1} \end{eqnarray} for the Lagrangian function (\ref{LagraFunct1}). Inserting Eq. (\ref{LagraFunct1}) into Eq. (\ref{LagraEquat1}) we obtain a fourth-order linear differential equation \begin{eqnarray} \tau_{m}^{2}\frac{d^{4}\tilde{x}_{s}^{*}}{ds^{4}} - \left(1-2\frac{\tau_{m}}{\tau_{r}} \right) \frac{d^{2}\tilde{x}_{s}^{*}}{ds^{2}} + \frac{1}{\tau_{r}^{2}} \tilde{x}_{s}^{*} =0 \label{LagraEquat5} \end{eqnarray} for the function $\tilde{x}_{s}^{*}$ of $s$, which is defined by \begin{eqnarray} \tilde{x}_{s}^{*} \equiv x_{s}^{*} - v s + (1-2\lambda) v\tau_{r} , \label{TildeFunctX1} \end{eqnarray} using the inertial characteristic time $\tau_{m}\equiv m/\alpha$. We consider solutions of Eq. (\ref{LagraEquat5}) of the form $\exp(\nu s)$. Inserting $\tilde{x}_{s}^{*}=\exp(\nu s)$ into Eq.
(\ref{LagraEquat5}) we obtain the quartic equation \begin{eqnarray} &&\tau_{m}^{2}\nu^{4} - \left(1-2\frac{\tau_{m}}{\tau_{r}} \right)\nu^{2} + \frac{1}{\tau_{r}^{2}} \nonumber \\ &&\hspace{0.5cm} = \left(\tau_{m} \nu^{2} + \nu +\frac{1}{\tau_{r}}\right) \left(\tau_{m} \nu^{2} - \nu +\frac{1}{\tau_{r}}\right) \nonumber \\ &&\hspace{0.5cm} = 0 \label{QuartEquat1} \end{eqnarray} for $\nu$. The solutions of Eq. (\ref{QuartEquat1}) are $\nu = \nus{+}, \nus{-}, -\nus{-}, -\nus{+}$ using $\nus{\pm}$ defined by \begin{eqnarray} \nus{\pm} = \frac{1}{2\tau_{m}} \left(1\pm\sqrt{1-4\frac{\tau_{m}}{\tau_{r}}}\;\right) . \label{CoeffNuPM} \end{eqnarray} The general solution of the fourth-order differential equation (\ref{LagraEquat5}) is represented as a superposition of these special solutions $\exp(\nu s)$, $\nu = \nus{+}, \nus{-}, -\nus{-}, -\nus{+}$, namely \begin{eqnarray} \tilde{x}_{s}^{*} = C_{1}e^{\nus{+}s} +C_{2} e^{\nus{-}s} + C_{3}e^{-\nus{-}s} +C_{4} e^{-\nus{+}s} \label{SolutTildeX1} \end{eqnarray} with constants $C_{j}$, $j=1,2,3,4$. Using Eqs. (\ref{TildeFunctX1}) and (\ref{SolutTildeX1}) and introducing the four dimensional vector $\mathbf{C} \equiv (C_{1}\; C_{2}\; C_{3}\; C_{4})^{T}$,% \footnote{ In this paper, $X^{T}$ means the transposed matrix (or vector) of any matrix (or vector) $X$. } we can rewrite \begin{eqnarray} x_{s}^{*} = \mathbf{C}^{T}\mathbf{K}_{s} + v s - (1-2\lambda) v\tau_{r} \label{FunctX1} \end{eqnarray} where the vector $\mathbf{K}_{s}$ is defined by \begin{eqnarray} \mathbf{K}_{s} &\equiv& \left(\begin{array}{c} e^{\nus{+}s} \\ e^{\nus{-}s} \\ e^{-\nus{-}s} \\ e^{-\nus{+}s} \end{array}\right) .
\label{VectoK1} \end{eqnarray} The constant vector $\mathbf{C}$ is determined by the four boundary conditions for $x_{s}^{*}$ and we obtain \begin{eqnarray} \mathbf{C} &=& A_{t}^{-1} \mathbf{B}_{if}^{(1-2\lambda)} \label{ConstC1} \end{eqnarray} where the matrix $A_{t}$ is defined by \begin{eqnarray} A_{t}&\equiv& \left(\begin{array}{cccc} e^{\nus{+}t_{0}} & e^{\nus{-}t_{0}} & e^{-\nus{-}t_{0}} & e^{-\nus{+}t_{0}} \\ \nus{+}e^{\nus{+}t_{0}} & \nus{-}e^{\nus{-}t_{0}} & -\nus{-}e^{-\nus{-}t_{0}} & -\nus{+}e^{-\nus{+}t_{0}} \\ e^{\nus{+}t} & e^{\nus{-}t} & e^{-\nus{-}t} & e^{-\nus{+}t} \\ \nus{+}e^{\nus{+}t} & \nus{-}e^{\nus{-}t} & -\nus{-}e^{-\nus{-}t} & -\nus{+}e^{-\nus{+}t} \end{array}\right) \hspace{0.5cm} \label{MatriA1} \end{eqnarray} and the vector $\mathbf{B}_{if}^{(z)}$ is defined by \begin{eqnarray} \mathbf{B}_{if}^{(z)}&\equiv& \left(\begin{array}{c} x_{i} -vt_{0}\\ p_{i}/m -v\\ x_{f} -vt\\ p_{f}/m -v \end{array}\right) +z v\tau_{r} \left(\begin{array}{c} 1\\ 0\\1\\ 0 \end{array}\right) . \label{VectoBJ} \end{eqnarray} It may be noted that the first component $x_{i} -vt_{0}$ and the second component $(p_{i}/m) -v$ (the third component $x_{f} -vt$ and the fourth component $(p_{f}/m) -v$) of the vector $\mathbf{B}_{if}^{(0)}$ can be regarded as the initial (final) position and velocity of the particle in the comoving frame, respectively. As the next step, we represent a path $\{x_{s}\}_{s\in [t_{0},t]}$ as the sum of the most contributing path $\{x_{s}^{*}\}_{s\in [t_{0},t]}$ given by Eq. (\ref{FunctX1}) and its deviation $\{\Delta x_{s}\}_{s\in [t_{0},t]}$ defined by \begin{eqnarray} \Delta x_{s} \equiv x_{s} - x_{s}^{*} , \label{VariaDeltaX1} \end{eqnarray} where the variable $\Delta x_{s}$ satisfies the four boundary conditions $\Delta x_{t_{0}} = \Delta x_{t} = 0 $ and $\Delta\dot{x}_{t_{0}} = \Delta\dot{x}_{t} = 0$ with $\Delta\dot{x}_{s}\equiv d \Delta x_{s}/ds$.
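The roots (\ref{CoeffNuPM}) are easy to check numerically (an illustrative sketch with arbitrary sample values of $\tau_{m}$ and $\tau_{r}$); note that for $\tau_{m}>\tau_{r}/4$, i.e. $m>\alpha\tau_{r}/4$, the square root in Eq. (\ref{CoeffNuPM}) becomes imaginary and $\nus{\pm}$ form a complex-conjugate pair, the origin of the time-oscillatory behavior mentioned in the introduction:

```python
import cmath

# Roots nu_± of tau_m nu^2 - nu + 1/tau_r = 0; the values of tau_m and
# tau_r are arbitrary samples on either side of the point tau_m = tau_r/4.
def nu_pm(tau_m, tau_r):
    disc = cmath.sqrt(1.0 - 4.0 * tau_m / tau_r)
    return ((1.0 + disc) / (2.0 * tau_m), (1.0 - disc) / (2.0 * tau_m))

tau_r = 1.0
for tau_m in (0.1, 0.5):
    for nu in nu_pm(tau_m, tau_r):
        # each root annihilates one factor of the factorized quartic equation
        quartic = tau_m**2 * nu**4 - (1.0 - 2.0 * tau_m / tau_r) * nu**2 + tau_r**-2
        assert abs(quartic) < 1e-9

real_pair = nu_pm(0.1, tau_r)   # tau_m < tau_r/4: two real decay rates
osc_pair = nu_pm(0.5, tau_r)    # tau_m > tau_r/4: complex-conjugate pair
print(real_pair, osc_pair)
```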
Using this variable $\Delta x_{s}$, the complete time integral $\int_{t_{0}}^{t}ds\; L(\ddot{x}_{s},\dot{x}_{s},x_{s},s)$ of the Lagrangian function can be represented as \begin{eqnarray} &&\int_{t_{0}}^{t}ds\;L(\ddot{x}_{s},\dot{x}_{s},x_{s},s) \nonumber \\ &&\hspace{0.5cm} = \int_{t_{0}}^{t}ds\; L(\ddot{x}_{s}^{*},\dot{x}_{s}^{*},x_{s}^{*},s) \nonumber \\ && \hspace{0.5cm}\spaEq -\frac{1}{4D}\int_{t_{0}}^{t}ds\; \left( \Delta\dot{x}_{s} +\frac{1}{\tau_{r}}\Delta x_{s} + \frac{m}{\alpha}\Delta\ddot{x}_{s} \right)^{2} \label{LagraFunct2} \end{eqnarray} in terms of the two variables $x_{s}^{*}$ and $\Delta x_{s}$. Inserting Eq. (\ref{LagraFunct2}) into Eq. (\ref{EFunctWork2}) we obtain \begin{eqnarray} \mathcal{E}_{w}(\lambda,t) &=& C_{\mathcal{E}}\int\!\int dx_{i}dp_{i} \int\!\int dx_{f}dp_{f} \nonumber \\ &&\hspace{0.5cm}\times f(x_{i},p_{i},t_{0}) \exp\left[ \int_{t_{0}}^{t}ds\; L(\ddot{x}_{s}^{*},\dot{x}_{s}^{*},x_{s}^{*},s)\right] \nonumber \\ \label{EFunctWork3} \end{eqnarray} where $C_{\mathcal{E}}$ is defined by $ C_{\mathcal{E}} \equiv C_{x} \int_{(\Delta x_{t_{0}},\Delta\dot{x}_{t_{0}})=(0,0)}^{ (\Delta x_{t},\Delta\dot{x}_{t})=(0,0)} \mathcal{D}\Delta x_{s}\; $ $ \exp[ - (1/4D) \int_{t_{0}}^{t}ds\; (\Delta\dot{x}_{s} +(1/\tau_{r})\Delta x_{s} + (m/\alpha)\Delta\ddot{x}_{s})^{2}] $ and is independent of $\lambda$. In the expression (\ref{EFunctWork3}), the contributions of the deviations $\Delta x_{s}$ to the path integral in the function $\mathcal{E}_{w}(\lambda,t)$ are included only in the coefficient $C_{\mathcal{E}}$. Next, we calculate the quantity $\int_{t_{0}}^{t}ds\;L(\ddot{x}_{s}^{*},\dot{x}_{s}^{*},x_{s}^{*},s)$ using Eq. (\ref{FunctX1}), and then the function $\mathcal{E}_{w}(\lambda,t)$ given by Eq. (\ref{EFunctWork3}). For such a calculation, using Eq.
(\ref{FunctX1}) we first note that \begin{eqnarray} \frac{d x_{s}^{*}}{ds} &=& \mathbf{C}^{T}\Theta\mathbf{K}_{s} + v , \label{DerivX1} \\ \frac{d^{2}x_{s}^{*}}{ds^{2}} &=& \mathbf{C}^{T}\Theta^{2}\mathbf{K}_{s} \label{DerivX2} \end{eqnarray} where the $4\times 4$ matrix $\Theta$ is defined by % \begin{eqnarray} \Theta \equiv \left(\begin{array}{cccc} \nus{+} &0&0&0 \\ 0& \nus{-} &0&0 \\ 0&0& -\nus{-} &0 \\ 0&0&0& -\nus{+} \end{array}\right) . \label{MatriTheta1} \end{eqnarray} Then, using Eqs. (\ref{FunctX1}), (\ref{DerivX1}) and (\ref{DerivX2}) we obtain \begin{eqnarray} \dot{x}_{s}^{*} + \frac{x_{s}^{*}-vs}{\tau_{r}} + \frac{m}{\alpha}\ddot{x}_{s}^{*} = \mathbf{C}^{T}\Gamma\mathbf{K}_{s} + 2\lambda v \label{LagraCalcu1} \end{eqnarray} where the matrix $\Gamma$ is introduced as \begin{eqnarray} \Gamma &\equiv& \tau_{m}\Theta^{2}+\Theta+\frac{1}{\tau_{r}}\mathcal{I} \nonumber \\ &=& 2 \left(\begin{array}{cccc} \nus{+} & 0 &0&0 \\ 0 & \nus{-}&0&0 \\ 0 & 0 &0&0\\ 0 & 0 &0& 0 \end{array}\right) \label{functGamma1} \end{eqnarray} with the relation $\tau_{m}\nus{\pm}^{2}-\nus{\pm}+\tau_{r}^{-1} = 0$ and $\mathcal{I}$ the $4\times 4$ identity matrix. Using Eqs. (\ref{LagraFunct1}), (\ref{FunctX1}), (\ref{DerivX2}) and (\ref{LagraCalcu1}) we obtain \begin{eqnarray} L(\ddot{x}_{s}^{*},\dot{x}_{s}^{*},x_{s}^{*},s) &=& -\frac{1}{4}\alpha\beta\mathbf{C}^{T} \Gamma\mathbf{K}_{s} \mathbf{K}_{s}^{T} \Gamma\mathbf{C} \nonumber \\ &&\hspace{0.5cm} - \alpha\beta\lambda v \mathbf{C}^{T} \left[\tau_{m}\vartheta\Theta^{2}+\Theta\right]\mathbf{K}_{s} \nonumber \\ &&\hspace{0.5cm} - \lambda(1-\lambda) \alpha\beta v^{2} . \label{LagraFunct4} \end{eqnarray} Noting Eq. 
(\ref{ConstC1}) and that \begin{eqnarray} \int_{t_{0}}^{t}ds\; \mathbf{C}^{T}\Theta\mathbf{K}_{s} &=& \int_{t_{0}}^{t}ds\;\frac{dx_{s}^{*}}{ds} - v(t-t_{0}) \nonumber \\ &=& x_{f}-x_{i} - v(t-t_{0}) , \label{BoundCondi5}\\ \int_{t_{0}}^{t}ds\;\mathbf{C}^{T}\Theta^{2}\mathbf{K}_{s} &=& \int_{t_{0}}^{t}ds\;\frac{d^{2}x_{s}^{*}}{ds^{2}} = \frac{p_{f}-p_{i}}{m} \label{BoundCondi6} \end{eqnarray} by Eqs. (\ref{DerivX1}) and (\ref{DerivX2}), we further obtain \begin{eqnarray} &&\int_{t_{0}}^{t}ds\; L(\ddot{x}_{s}^{*},\dot{x}_{s}^{*},x_{s}^{*},s) \nonumber \\ &&\hspace{0.5cm} = -\frac{1}{4}\alpha\beta \left[\mathbf{B}_{if}^{(1-2\lambda)}\right]^{T} \Lambda_{t} \mathbf{B}_{if}^{(1-2\lambda)} - \alpha\beta\lambda v \mbox{\boldmath $\bf\eta$}^{T}\mathbf{B}_{if}^{(0)} \nonumber \\ &&\hspace{0.5cm}\spaEq - \lambda(1-\lambda) \alpha\beta v^{2} (t-t_{0}) \label{ActioFunct2} \end{eqnarray} where the $4\times 4$ matrix $\Lambda_{t}$ and the vector $\mbox{\boldmath $\bf\eta$}$ are defined by \begin{eqnarray} \Lambda_{t} &\equiv& \left(A_{t}^{-1}\right)^{T} \Gamma\Phi_{t} \Gamma A_{t}^{-1} , \label{MatriLambd1} \\ \mbox{\boldmath $\bf\eta$} &\equiv& \left(\begin{array}{c} -1\\ - \tau_{m}\vartheta \\ 1\\ \tau_{m}\vartheta \end{array}\right), \label{MatriVarpi1} \end{eqnarray} respectively, with the $4\times 4$ matrix $\Phi_{t}$ defined by % \begin{eqnarray} \Phi_{t} &\equiv& \int_{t_{0}}^{t}ds\; \mathbf{K}_{s} \mathbf{K}_{s}^{T} . \label{FunctPhi1} \end{eqnarray} Inserting Eq. (\ref{ActioFunct2}) into Eq. (\ref{EFunctWork3}) we obtain \begin{widetext} \begin{eqnarray} \mathcal{E}_{w}(\lambda,t) &=& C_{\mathcal{E}} e^{ - \lambda(1-\lambda) \alpha\beta v^{2} (t-t_{0})} \int\!\int dx_{i}dp_{i} \int\!\int dx_{f}dp_{f} f(x_{i},p_{i},t_{0}) \nonumber \\ &&\hspace{0.5cm}\times\exp\left\{ -\frac{1}{4}\alpha\beta \left[\mathbf{B}_{if}^{(1-2\lambda)}\right]^{T} \Lambda_{t} \mathbf{B}_{if}^{(1-2\lambda)} - \alpha\beta\lambda v \mbox{\boldmath $\bf\eta$}^{T}\mathbf{B}_{if}^{(0)} \right\} . 
\nonumber\\ \label{EFunctWork4} \end{eqnarray} Equation (\ref{EFunctWork4}) gives a concrete form of the function $\mathcal{E}_{w}(\lambda,t)$ for any initial distribution function $f(x_{i},p_{i},t_{0})$. The $\lambda$-independent normalization coefficient $C_{\mathcal{E}}$ in Eq. (\ref{EFunctWork4}) can be determined from the condition (\ref{IdentE1}), and we obtain \begin{eqnarray} C_{\mathcal{E}} &=& \left\{ \int\!\int dx_{i}dp_{i} \int\!\int dx_{f}dp_{f} f(x_{i},p_{i},t_{0}) \exp\left\{ -\frac{1}{4}\alpha\beta \left[\mathbf{B}_{if}^{(1)} \right]^{T} \Lambda_{t} \mathbf{B}_{if}^{(1)} \right\} \right\}^{-1} . \label{NormaConst1} \end{eqnarray} Note that by using the condition (\ref{IdentE1}) we avoided carrying out explicitly the path integral originally included in the quantity $C_{\mathcal{E}}$ [cf. Eq. (\ref{EFunctWork3})]. Inserting Eq. (\ref{EFunctWork4}) into Eq. (\ref{DistriWork2}), and carrying out the Gaussian integral over $\lambda$ which then appears in Eq. (\ref{DistriWork2}), we obtain \begin{eqnarray} P_{w}(W,t) &=& \frac{ C_{\mathcal{E}}}{\sqrt{4 \pi\alpha\beta v^{2} \left( t-t_{0} -\tau_{r}^{2}\mathbf{J}^{T}\Lambda_{t}\mathbf{J}\right) }} \int\!\int dx_{i}dp_{i} \int\!\int dx_{f}dp_{f} \; f(x_{i},p_{i},t_{0}) \nonumber \\ && \hspace{0.5cm}\times \exp\left\{ -\frac{1}{4}\alpha\beta \left[\mathbf{B}_{if}^{(1)}\right]^{T} \Lambda_{t} \mathbf{B}_{if}^{(1)} - \frac{\left\{W -\alpha\beta v\left[v(t-t_{0}) + \left(\mbox{\boldmath $\bf\eta$}^{T}-\tau_{r}\mathbf{J}^{T}\Lambda_{t}\right) \mathbf{B}_{if}^{(1)}\right] \right\}^{2}} {4\alpha\beta v^{2} \left( t-t_{0} -\tau_{r}^{2}\mathbf{J}^{T}\Lambda_{t}\mathbf{J}\right)} \right\} \label{DistriWork6} \end{eqnarray} \end{widetext} where the 4-dimensional vector $\mathbf{J}$ is defined by \begin{eqnarray} \mathbf{J} \equiv \left(\begin{array}{c} 1 \\ 0 \\ 1 \\ 0 \end{array}\right) \label{VectorJ1} \end{eqnarray} and we used the relation $\mbox{\boldmath $\bf\eta$}^{T}\mathbf{J} = 0$.
Equation (\ref{DistriWork6}) is an explicit form for the work distribution function for all time, and for any initial distribution function $f(x_{i},p_{i},t_{0})$. Using Eq. (\ref{NormaConst1}) for the coefficient $C_{\mathcal{E}}$, the work distribution function (\ref{DistriWork6}) is properly normalized, namely $\int dW\; P_{w}(W,t) =1$, at any time $t$. In the next two sections \ref{AsympFluct} and \ref{InertEffec} we discuss, using the work distribution function (\ref{DistriWork6}), fluctuation properties of the work from the viewpoint of the asymptotic fluctuation theorem for $t\rightarrow +\infty$, as well as for finite times. \section{Asymptotic Fluctuation Theorem} \label{AsympFluct} The matrix $\Lambda_{t}$ defined by Eq. (\ref{MatriLambd1}) satisfies the condition \begin{eqnarray} \lim_{t\rightarrow +\infty}\frac{1}{t-t_{0}} \Lambda_{t} = 0 , \label{AsympCondi1} \end{eqnarray} as shown in Appendix \ref{AsympWork}. Equation (\ref{AsympCondi1}) implies that $v(t-t_{0})+(\mbox{\boldmath $\bf\eta$}^{T}-\tau_{r}\mathbf{J}^{T}\Lambda_{t}) \mathbf{B}_{if}^{(1)} $ $ \stackrel{t\rightarrow +\infty}{\sim} v(t-t_{0})$ and $t-t_{0} -\tau_{r}^{2}\mathbf{J}^{T}\Lambda_{t}\mathbf{J} \stackrel{t\rightarrow +\infty}{\sim} t-t_{0}$ in Eq. (\ref{DistriWork6}), so that the work distribution function $P_{w}(W,t)$ is proportional to the Gaussian function $\exp\{-[W -\alpha\beta v^{2}(t-t_{0})]^{2} /[4\alpha\beta v^{2} (t-t_{0}) ]\}$ in the long time limit $t\rightarrow +\infty$, i.e. \begin{eqnarray} P_{w}(W,t) &\stackrel{t\rightarrow +\infty}{\sim}& \frac{1}{\sqrt{4 \pi\alpha\beta v^{2} \left( t-t_{0}\right) }} \nonumber \\ &&\times \exp\left\{-\frac{\left[W -\alpha\beta v^{2}\left(t-t_{0} \right)\right]^{2}} {4\alpha\beta v^{2} (t-t_{0})} \right\} \hspace{0.5cm} \label{DistriWork5} \end{eqnarray} regardless of the initial distribution function $f(x_{i},p_{i},t_{0})$. 
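The Gaussian form (\ref{DistriWork5}) has mean $\alpha\beta v^{2}(t-t_{0})$ and variance equal to twice the mean, and for such a Gaussian the quadratic terms in $W$ cancel in $\ln[P_{w}(W,t)/P_{w}(-W,t)]$, leaving exactly $W$. A minimal numerical check (illustrative; the value of the mean is an arbitrary choice):

```python
import math

# ln[P(W)/P(-W)] = W for a Gaussian with mean mu and variance 2 mu, as in
# the asymptotic work distribution; mu is an arbitrary sample value.
mu = 3.7

def log_p(w):
    # log of a normalized Gaussian with mean mu and variance 2 mu
    return -((w - mu) ** 2) / (4.0 * mu) - 0.5 * math.log(4.0 * math.pi * mu)

for w in (0.1, 1.0, 5.0, -2.3):
    assert abs((log_p(w) - log_p(-w)) - w) < 1e-12
print("ln[P(W)/P(-W)] = W verified for the sampled values of W")
```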
It is important to note that the work distribution function (\ref{DistriWork5}) in the long time limit $t\rightarrow+\infty$ in the inertial case is the same as in the over-damped case. Physically, this is, of course, due to the finiteness of the inertial characteristic time $\tau_{m}$, which makes inertial effects disappear in the long time limit. Nevertheless, the proof of this equivalence is non-trivial. From Eq. (\ref{DistriWork5}) we immediately derive \begin{eqnarray} \lim_{t\rightarrow +\infty} \frac{P_{w}(W,t)}{P_{w}(-W,t)} = e^{W} \label{AsympFT1} \end{eqnarray} \emph{for any initial distribution function} $f(x_{i},p_{i},t_{0})$. We will call Eq. (\ref{AsympFT1}) the asymptotic fluctuation theorem for work. Equation (\ref{AsympFT1}) is independent of the value of the parameter $\vartheta$, i.e. of the frame of reference (laboratory or comoving), and hence of the contribution of the d'Alembert-like force to the work (\ref{Work1}). \section{Inertial Effects for Finite Times} \label{InertEffec} \subsection{Slope of $\ln [P_{w}(W,t)/P_{w}(-W,t)]$ and the Critical Mass} In contrast to the asymptotic work distribution function (\ref{DistriWork5}), various inertial effects in the work distribution function appear for finite times. In this section we discuss such inertial effects using the function $G(W,t)$ defined by \begin{eqnarray} G(W,t) \equiv \frac{\partial}{\partial W} \ln \frac{P_{w}(W,t)}{P_{w}(-W,t)} . \label{FunctG1} \end{eqnarray} The function $G(W,t)$ gives the slope of the fluctuation function $\ln [P_{w}(W,t)$ $/P_{w}(-W,t)]$ with respect to $W$, and satisfies \begin{eqnarray} \lim_{t\rightarrow+\infty}G(W,t) = 1 \label{AsympFT2} \end{eqnarray} by the asymptotic fluctuation theorem (\ref{AsympFT1}).% \footnote{ A function like $\tilde{G}(W,t) \equiv (1/\langle W\rangle ) \ln [P_{w}(W,t)/P_{w}(-W,t)]$ with the average work $\langle W\rangle$ has been used to characterize fluctuation theorems \cite{ZC03a}.
The function (\ref{FunctG1}) is connected to $\tilde{G}(W,t)$ by $G(W,t) = \langle W\rangle \partial \tilde{G}(W,t) /\partial W$. One of the advantages of using $G(W,t)$ instead of $\tilde{G}(W,t)$ is that, unlike $\tilde{G}(W,t)$, $G(W,t)$ is independent of $W$ when the distribution function $P_{w}(W,t)$ is Gaussian, as shown in Eq. (\ref{DistriWork9}). } The behavior of $G(W,t)$ for finite times depends on the initial condition. To get concrete results, in this section we concentrate on the case of a nonequilibrium steady state initial condition, which can be represented by \begin{widetext} \begin{eqnarray} f(x_{i},p_{i},t_{0}) = \frac{\beta}{2\pi} \sqrt{\frac{\kappa}{m}} \exp\left\{-\beta\left[ \frac{(p_{i}-mv)^{2}}{2m} +\frac{1}{2}\kappa (x_{i}-vt_{0}+v\tau_{r})^{2}\right]\right\} \label{InitiNNSS1} \end{eqnarray} for any frame. The initial distribution function (\ref{InitiNNSS1}) gives a Gaussian distribution for the particle initial position $x_{i}$ and momentum $p_{i}$ around their nonequilibrium steady state average values $vt_{0}-v\tau_{r}$ and $mv$, respectively. Inserting Eq. (\ref{InitiNNSS1}) into Eq.
(\ref{DistriWork6}), the work distribution function is given by \begin{eqnarray} P_{w}(W,t) &=& \sqrt{\frac{1-\Omega_{t}} {4\pi\alpha\beta v^{2} \left( t-t_{0} -\tau_{r}^{2}\mathbf{J}^{T}\Lambda_{t}\mathbf{J}\right)}} \exp\left\{ -\frac{ 1- \Omega_{t} } {4\alpha\beta v^{2} \left( t-t_{0} -\tau_{r}^{2}\mathbf{J}^{T}\Lambda_{t}\mathbf{J}\right)} \left[ W -\alpha\beta v^{2}(t-t_{0}) \right]^{2}\right\} \label{DistriWork9} \end{eqnarray} where $\Omega_{t}$ is defined by \begin{eqnarray} \Omega_{t} &\equiv& \left(\mbox{\boldmath $\bf\eta$}-\tau_{r}\Lambda_{t}\mathbf{J}\right)^{T} \left[ \left(\mbox{\boldmath $\bf\eta$}-\tau_{r}\Lambda_{t}\mathbf{J}\right) \left(\mbox{\boldmath $\bf\eta$}-\tau_{r}\Lambda_{t}\mathbf{J}\right)^{T} +\left( t-t_{0} -\tau_{r}^{2}\mathbf{J}^{T}\Lambda_{t}\mathbf{J}\right) \left( \Lambda^{(0)}+\Lambda_{t} \right) \right]^{-1} \left(\mbox{\boldmath $\bf\eta$}-\tau_{r}\Lambda_{t}\mathbf{J}\right) , \label{FunctOmega1} \end{eqnarray} \end{widetext} with the $4\times 4$ matrix $\Lambda^{(0)}$ defined by \begin{eqnarray} \Lambda^{(0)} \equiv \frac{2}{\alpha} \left(\begin{array}{cccc} \kappa & 0 & 0 & 0 \\ 0 & m & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right) . \label{TildeLambda1} \end{eqnarray} [See Appendix \ref{FinitWork} for a derivation of Eq. (\ref{DistriWork9}).] Note that the work distribution function (\ref{DistriWork9}) is Gaussian with the average work $\langle W \rangle = \alpha\beta v^{2}(t-t_{0})$ at any time because we chose a Gaussian nonequilibrium steady state initial condition (\ref{InitiNNSS1}). Since the work distribution function $P_{w}(W,t)$ is Gaussian, $G(W,t)$ defined by Eq. (\ref{FunctG1}) is independent of $W$, so that we denote it by $G(t) [=G(W,t)]$ from now on. Inserting Eq. (\ref{DistriWork9}) into Eq. (\ref{FunctG1}), we obtain \begin{eqnarray} G(t) = \frac{1-\Omega_{t}} {1 -\frac{\tau_{r}^{2}}{t-t_{0}}\mathbf{J}^{T} \Lambda_{t}\mathbf{J}} \label{FunctG2} \end{eqnarray} as an explicit form of $G(t)$.
One may notice that $G(t)$ in Eq. (\ref{FunctG2}) is independent of the dragging velocity $v$ and the inverse temperature $\beta$. Moreover, $G(t)$ is positive for $t>t_{0}$ because the distribution function $P_{w}(W,t)$ is normalizable, so that the coefficient $(1- \Omega_{t})/[4\alpha\beta v^{2} ( t-t_{0} -\tau_{r}^{2}\mathbf{J}^{T}\Lambda_{t}\mathbf{J})] =G(t)/[4\alpha\beta v^{2}( t-t_{0})]$ in the exponent of the Gaussian distribution function (\ref{DistriWork9}) must be positive. As a first approximation to the asymptotic relaxation of $G(t)$ to its final value (\ref{AsympFT2}), we obtain from Eq. (\ref{FunctG2}) \begin{eqnarray} G(t) \stackrel{t\rightarrow+\infty}{\sim} 1+\frac{\tau_{r}-\tau_{m}\vartheta^{2}} {t-t_{0}-\tau_{r} + \tau_{m}\vartheta^{2}} , \label{FunctGAsym1} \end{eqnarray} meaning that the function $G(t)$ decays to $1$ with a correction inversely proportional to the time in the long time limit $t\rightarrow +\infty$. [See Appendix \ref{AsympFunctG} for a derivation of Eq. (\ref{FunctGAsym1}).] Equation (\ref{FunctGAsym1}) is only a first approximation to the asymptotic form of $G(t)$, but it already includes an important inertial contribution to $G(t)$, as well as an interesting frame dependence of $G(t)$. Actually, the second term on the right-hand side of Eq. (\ref{FunctGAsym1}) depends on the mass $m$ via $\tau_{m}=m/\alpha$ in the laboratory frame $\vartheta=1$, while that term is independent of the mass in the comoving frame $\vartheta=0$. Another interesting property of $G(t)$ expressed by Eq. (\ref{FunctGAsym1}) is that in the laboratory frame $\vartheta=1$ the second term on the right-hand side of Eq. (\ref{FunctGAsym1}), the $t^{-1}$-decay term of $G(t)$, vanishes in the case that $\tau_{r} = \tau_{m}$, i.e. for a special mass value $m=\alpha^{2}/\kappa$.
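The frame dependence of the correction term in Eq. (\ref{FunctGAsym1}) can be checked directly; the sketch below (with illustrative parameter values, not taken from the text) confirms that for $\vartheta=1$ the term vanishes at the special mass $m=\alpha^{2}/\kappa$ (i.e. $\tau_{r}=\tau_{m}$), while for $\vartheta=0$ it does not depend on the mass at all.

```python
# Second term on the right-hand side of Eq. (FunctGAsym1):
# (tau_r - tau_m*vartheta^2) / (t - t0 - tau_r + tau_m*vartheta^2).
# alpha, kappa and t are illustrative values, not taken from the paper.
alpha, kappa, t0, t = 1.0, 1.0, 0.0, 50.0
tau_r = alpha / kappa

def correction(m, vartheta):
    tau_m = m / alpha
    return (tau_r - tau_m * vartheta**2) / (t - t0 - tau_r + tau_m * vartheta**2)

# Laboratory frame: the 1/t term vanishes exactly when tau_r = tau_m.
lab_at_special_mass = correction(alpha**2 / kappa, 1)
# Comoving frame: the term is the same for any mass.
comoving = [correction(m, 0) for m in (0.1, 1.0, 10.0)]
```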
\begin{figure}[!t] \vspfigA \begin{center} \includegraphics[width=0.4\textwidth]{fig1oscilTime.eps} \vspfigB \caption{ Time oscillation period $\mathcal{T}_{m}$ (solid line) in $\tilde{x}_{s}^{*}$ as a function of mass $m$ normalized by the critical mass $m^{*}\equiv \alpha^{2}/(4\kappa)$ for masses $m/m^{*}>1$. There is no time-oscillation for $m/m^{*}<1$ and the minimum of $\mathcal{T}_{m}$ is at $m/m^{*}=2$. Here, we used parameter values $\alpha =\kappa = 1$. We also plotted a time-oscillation period $\mathcal{T}_{m}^{(0)} =\left.\mathcal{T}_{m}\right|_{\alpha=0} (= 2\pi \sqrt{m/\kappa})$ (broken line) for a purely harmonic oscillation in the case without dissipation ($\alpha = 0$). The time-oscillation period $\mathcal{T}_{m}$ approaches $\mathcal{T}_{m}^{(0)}$ in the large mass limit $m/m^{*}\rightarrow +\infty$. } \label{fig1oscilTime} \end{center} \vspace{0.2cm} \end{figure} Perhaps the most interesting implication of Eq. (\ref{FunctG2}) for $G(t)$, although it does not appear explicitly in the asymptotic expression (\ref{FunctGAsym1}) of $G(t)$, is the existence of a critical value of the mass $m=m^{*}$ above which $G(t)$ shows a time-oscillatory behavior. In our theory, this time-oscillation has its origin in the time-dependence of $\tilde{x}_{s}^{*}$ given by Eq. (\ref{SolutTildeX1}) via the exponential terms $\exp(\nus{\pm} t)$, etc., when the coefficient $\nus{\pm}$ given by Eq. (\ref{CoeffNuPM}) has an imaginary part, namely when the condition \begin{eqnarray} m > m^{*}\equiv \frac{\alpha^{2}}{4\kappa} \label{CritiMass1} \end{eqnarray} (derived from the condition $4 \tau_{m}\tau_{r}^{-1}-1>0$) is satisfied. We call the mass $m^{*}$ the critical mass in this paper, since a (smooth) ``dynamical'' phase transition takes place at $m=m^{*}$.
For masses $m>m^{*}$, the position $\tilde{x}_{s}^{*}$ has a time-oscillation with the oscillation period $\mathcal{T}_{m}$ \begin{eqnarray} \mathcal{T}_{m} = 2\pi \sqrt{\frac{m}{\kappa}} \left(1-\frac{m^{*}}{m}\right)^{-1/2} \label{OscilPerio1} \end{eqnarray} corresponding to a frequency $\omega\equiv \sqrt{4\tau_{m}\tau_{r}^{-1}-1}/(2\tau_{m})=|Im\{\nus{\pm}\}|$, using the imaginary part $Im\{\nus{\pm}\}$ of $\nus{\pm}$. In Fig. \ref{fig1oscilTime} the time-oscillation period $\mathcal{T}_{m}$ (solid line) is shown as a function of the scaled mass $m/m^{*}$ for $m/m^{*}>1$. There is no time-oscillation of $\tilde{x}_{s}^{*}$ in the case of $m/m^{*}<1$, and the time-oscillation period diverges when $m/m^{*}\rightarrow 1+0$. The oscillation period $\mathcal{T}_{m}$ decreases rapidly as a function of mass $m$ for $m/m^{*} < 2$, has a minimum at $m/m^{*}=2$, and increases gradually for $m/m^{*}>2$. For comparison, we also plotted in Fig. \ref{fig1oscilTime} the scaled mass dependence of the time-oscillation period $\mathcal{T}_{m}^{(0)} = 2\pi \sqrt{m/\kappa} \; (=\left.\mathcal{T}_{m}\right|_{\alpha=0})$ (broken line) for a purely harmonic oscillator with spring constant $\kappa$. Unlike the time-oscillation period (\ref{OscilPerio1}), the period $\mathcal{T}_{m}^{(0)}$ is defined for all masses, and increases monotonically as the mass increases. The time-oscillation period (\ref{OscilPerio1}) approaches $\mathcal{T}_{m}^{(0)}$ in the large mass limit $m/m^{*}\rightarrow +\infty$. It is useful to regard the critical behavior in the time-oscillation of $G(t)$ as due to the presence of \emph{two} independent time scales appearing in our model: one characterized by $\tau_{r} (=\alpha/\kappa)$ and another by $\tau_{m} (=m/\alpha)$. These time scales $\tau_{m}$ and $\tau_{r}$ are related by $\tau_{r}=4\tau_{m^{*}}$ at the critical mass $m=m^{*}$.
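The mass dependence of the period (\ref{OscilPerio1}) shown in Fig. \ref{fig1oscilTime} is straightforward to reproduce numerically; the following sketch (with $\alpha=\kappa=1$ as in the figure) checks the minimum at $m/m^{*}=2$ and the approach to $\mathcal{T}_{m}^{(0)}$ for large masses.

```python
import math

# Oscillation period T_m of Eq. (OscilPerio1), defined for m > m* = alpha^2/(4 kappa).
alpha, kappa = 1.0, 1.0          # parameter values as in Fig. fig1oscilTime
m_star = alpha**2 / (4.0 * kappa)

def T_m(m):
    return 2.0 * math.pi * math.sqrt(m / kappa) / math.sqrt(1.0 - m_star / m)

def T_m0(m):
    # undamped harmonic period, the broken line in Fig. fig1oscilTime
    return 2.0 * math.pi * math.sqrt(m / kappa)

T_at_min = T_m(2.0 * m_star)          # the minimum of T_m sits at m/m* = 2
neighbors = (T_m(1.9 * m_star), T_m(2.1 * m_star))
large_mass_ratio = T_m(1e6 * m_star) / T_m0(1e6 * m_star)  # -> 1 for m >> m*
```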
Using these two time scales, the time oscillation period $\mathcal{T}_{m}^{(0)}$ for a purely harmonic oscillator is given by $\mathcal{T}_{m}^{(0)}=2\pi \sqrt{\tau_{r} \tau_{m}}$. Introducing the frequencies $\omega^{(0)}\equiv 2\pi / \mathcal{T}_{m}^{(0)}$ and $\omega_{m}\equiv 1 / \tau_{m}$ corresponding to the two time scales $\mathcal{T}_{m}^{(0)}$ and $\tau_{m}$, respectively, the frequency $\omega \equiv 2\pi /\mathcal{T}_{m}$ corresponding to the time-oscillation period (\ref{OscilPerio1}) is represented as $\omega = \sqrt{[\omega^{(0)}]^{2}-\omega_{m}^{2}/4}$. In this expression for the frequency $\omega$, time oscillations occur only when the condition $[\omega^{(0)}]^{2}>\omega_{m}^{2}/4$ is satisfied. The existence of these two time scales $\tau_{m}$ and $\tau_{r}$ is therefore essential for the time-oscillatory behavior with the frequency $\omega$, noting that there is no time-oscillation in the over-damped case containing only $\tau_{r}$. In the next two subsections \ref{Vartheta0} and \ref{Vartheta1}, we investigate properties of $G(t)$ in more detail, including its time-oscillating behavior, for (A) the work done in the laboratory frame ($\vartheta = 1$), and (B) the work done in the comoving frame ($\vartheta = 0$), separately. We will also compare those results with those for the over-damped case. For this purpose, we now calculate $G(t)$ explicitly in the over-damped case.
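The two-time-scale form of the frequency can be verified against the definition $\omega=\sqrt{4\tau_{m}\tau_{r}^{-1}-1}/(2\tau_{m})$ quoted below Eq. (\ref{OscilPerio1}); a short sketch with an illustrative mass above the critical value:

```python
import math

# Check that omega = sqrt([omega0]^2 - omega_m^2/4), with omega0 = sqrt(kappa/m)
# and omega_m = 1/tau_m, coincides with omega = sqrt(4*tau_m/tau_r - 1)/(2*tau_m).
# The mass m is an illustrative value above m* = alpha^2/(4*kappa).
alpha, kappa, m = 1.0, 1.0, 2.0
tau_m, tau_r = m / alpha, alpha / kappa

omega0 = math.sqrt(kappa / m)       # 2*pi / T_m^(0)
omega_m = 1.0 / tau_m
omega_two_scales = math.sqrt(omega0**2 - omega_m**2 / 4.0)
omega_direct = math.sqrt(4.0 * tau_m / tau_r - 1.0) / (2.0 * tau_m)
```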
In our previous paper \cite{TC07a}, we already calculated the work distribution function $P_{w}^{(0)}(W,t)$ for the over-damped case, which is given by \begin{eqnarray} P_{w}^{(0)}(W,t) &=& \frac{1}{\sqrt{4\pi\alpha \beta v^{2} \left[ t-t_{0}-\tau_{r}(1-b_{t}) \right] }} \nonumber \\ &&\hspace{0.5cm}\times \exp\left\{-\frac{\left[W-\alpha\beta v^{2} (t-t_{0})\right]^{2}}{4\alpha\beta v^{2}\left[ t-t_{0} -\tau_{r}(1-b_{t}) \right]}\right\} \nonumber \\ \label{WorkDistr4} \end{eqnarray} with $b_{t}\equiv \exp[-(t-t_{0})/\tau_{r}]$ in the case of a nonequilibrium steady state initial distribution function $f^{(0)}(x_{i},t_{0}) = \sqrt{\beta\kappa/(2\pi)}$ $ \exp[-\beta \kappa (x_{i}-vt_{0}+v\tau_{r})^{2}/2]$ for the particle position $x_{i}$ for the over-damped case at the initial time $t_{0}$.% \footnote{ Note that the work distribution function (\ref{WorkDistr4}) approaches the distribution function (\ref{DistriWork5}) in the long time limit $t\rightarrow +\infty$ because of $t-t_{0}-\tau_{r}(1-b_{t}) \stackrel{t\rightarrow +\infty}{\sim} t-t_{0}$. } Using Eq. (\ref{WorkDistr4}), and defining, [cf. Eq. (\ref{FunctG1})], $G^{(0)}(t) \equiv (\partial/\partial W) \ln [P_{w}^{(0)}(W,t)/P_{w}^{(0)}(-W,t)]$, we have \begin{eqnarray} G^{(0)}(t) = 1+ \frac{\tau_{r}(1-b_{t})}{t-t_{0}-\tau_{r}(1-b_{t})} , \label{FunctOD1} \end{eqnarray} which gives $G(t)$ for the over-damped case \cite{ZC03b}. Note that Eq. (\ref{FunctOD1}) implies $G^{(0)}(t) \stackrel{t\rightarrow +\infty}{\sim} 1 + \tau_{r} /(t-t_{0}-\tau_{r})$, which is consistent with Eq. (\ref{FunctGAsym1}), since $\tau_{m}$ is zero for the over-damped case. \subsection{$G(t)$ in the Laboratory Frame} \label{Vartheta0} In this subsection we consider $G(t)$ given by Eq. (\ref{FunctG2}) $\;-$ which depends on the parameter $\vartheta$ to specify a frame via $\Lambda_{t}$ and $\Omega_{t}$ $-\;$ for the work done in the laboratory frame, i.e. for $\vartheta=1$. 
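The over-damped baseline (\ref{FunctOD1}), against which the inertial curves are compared in this and the next subsection, can be evaluated directly; a minimal numerical sketch (with $\alpha=\kappa=1$ and $t_{0}=0$ as in the figures) checks its monotonic decay toward $1$ and the asymptotic form $1+\tau_{r}/(t-t_{0}-\tau_{r})$.

```python
import math

# Over-damped slope G0(t) of Eq. (FunctOD1), with b_t = exp[-(t-t0)/tau_r].
alpha, kappa, t0 = 1.0, 1.0, 0.0   # parameter values as in the figures
tau_r = alpha / kappa

def G0(t):
    b_t = math.exp(-(t - t0) / tau_r)
    return 1.0 + tau_r * (1.0 - b_t) / (t - t0 - tau_r * (1.0 - b_t))

times = (5.0, 10.0, 50.0, 200.0)
values = [G0(t) for t in times]
# long-time form of G0, with b_t negligible
asymptote = 1.0 + tau_r / (times[-1] - t0 - tau_r)
```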
In this subsection \ref{Vartheta0}, as well as in the next subsection \ref{Vartheta1}, we use the parameter values $\alpha =\kappa = 1$ and set the initial time $t_{0}=0$, i.e. $\tau_{r}=1$ as a time unit and $m/m^{*} = 4\tau_{m}$ as the scaled mass. Figure \ref{fig2slopeG0} shows $G(t)$ given by Eq. (\ref{FunctG2}) as a function of time $t$ for the scaled masses $m/m^{*}=0$ (over-damped case), $0.999$, $2$, $4$, $8$, $20$ and $40$. The graphs of $G(t)$ all converge to 1 in the long time limit $t\rightarrow+\infty$, as required by the asymptotic fluctuation theorem (\ref{AsympFT1}), i.e. by Eq. (\ref{AsympFT2}). \begin{figure}[!t] \vspfigA \begin{center} \includegraphics[width=0.48\textwidth]{fig2slopeG0.eps} \vspfigB \caption{(color online) Graphs of $G(t)=(\partial/\partial W) \ln [P_{w}(W,t)/P_{w}(-W,t)]$ as a function of time $t$ for the work done in the laboratory frame ($\vartheta=1$) in the case of a nonequilibrium steady state initial condition for $t\in [0,60]$. Inset: Graphs of $G(t)$ in a short time period for $t\in [0,3]$. Lines in these graphs correspond to parameter values of the scaled masses $m/m^{*}=0$ (over-damped case), $0.999$, $2$, $4$, $8$, $20$ and $40$ as indicated above this figure, and we used parameter values $\alpha = \kappa =1$ (so that $\tau_{r}=1$ is the time unit and $m/m^{*} = 4\tau_{m}$ the scaled mass, with $m^{*}=1/4$) and $t_{0}=0$. } \label{fig2slopeG0} \end{center} \vspace{0.2cm} \end{figure} We now discuss in some detail the properties of Fig. \ref{fig2slopeG0}. This figure shows that $G(t)$ for nonzero masses is always smaller than in the over-damped case of zero mass. In the over-damped case, $G(t)$ decreases monotonically to the final value $1$ from $+\infty$ at the initial time. A similar behavior is still observed for small masses (e.g. see the graph for $m/m^{*}= 0.999$ in Fig. \ref{fig2slopeG0}).
It may also be noted that for small nonzero masses the relaxation of $G(t)$ to its final value $1$ is faster than in the over-damped case (e.g. see the graphs for $m/m^{*}=2$ and $4$ in Fig. \ref{fig2slopeG0}). This feature can be explained by the second term on the right-hand side of Eq. (\ref{FunctGAsym1}), since for $\vartheta = 1$ the absolute value $|\tau_{r}-\tau_{m}|$ of the numerator of this term is smaller than the corresponding over-damped value $\tau_{r}$ whenever $0<m/m^{*}< 8$, i.e. whenever $|\tau_{r}-\tau_{m}|<\tau_{r}$. Moreover, Fig. \ref{fig2slopeG0} shows that for large masses (e.g. see the graphs for $m/m^{*}>4$ in Fig. \ref{fig2slopeG0}), $G(t)$ is smaller than $1$ for long times, while $G(t)$ is always larger than $1$ in the over-damped case. This is because the second term on the right-hand side of Eq. (\ref{FunctGAsym1}) is negative for $\tau_{r}<\tau_{m}$ (i.e. $m/m^{*}> 4$), when $\vartheta=1$ and $t>t_{0}+\tau_{r}-\tau_{m}$. \begin{figure}[!t] \vspfigA \begin{center} \includegraphics[width=0.48\textwidth]{fig3oscilG0A.eps} \vspfigB \caption{(color online) Linear-log plots of the absolute value $|\Delta G(t)|$ of the function $\Delta G(t) = G(t)-1-(\tau_{r}-\tau_{m})/(t-\tau_{r}+\tau_{m})$ as a function of time $t$ for the work done in the laboratory frame in the case of a nonequilibrium steady state initial condition. Lines in these graphs correspond to parameter values of the scaled masses $m/m^{*}=0$, $0.5$, $0.9$, $0.999$, $1.1$, $2$, $4$ and $20$ as indicated above this figure. The minima of the oscillations of $|\Delta G(t)|$ for $m/m^{*}>1$ are actually zero, which is not indicated in this figure and Figs. \ref{fig4oscilG0B}, \ref{fig6oscilG1A} and \ref{fig7oscilG1B}. We use the same parameter values $\alpha$, $\kappa$ and $t_{0}$ as in Fig. \ref{fig2slopeG0}.
} \label{fig3oscilG0A} \end{center} \vspace{0.2cm} \end{figure} \begin{figure}[!t] \vspfigA \begin{center} \includegraphics[width=0.48\textwidth]{fig4oscilG0B.eps} \vspfigB \caption{(color online) Long time behavior of $|\Delta G(t)|$ as a function of time $t$ as linear-log plots for the work done in the laboratory frame in the case of a nonequilibrium steady state initial condition. Here, we use the same parameter values $\alpha$, $\kappa$ and $t_{0}$ as in Fig. \ref{fig2slopeG0}. Broken, dotted and dash-dotted lines in these graphs correspond to parameter values of the scaled masses $m/m^{*}=100$, $200$ and $1600$, respectively. Solid lines are fits of $|\Delta G(t)|$ to the function (\ref{FitFunct1}) using Table \ref{FittiParam1}, together with the time-oscillation period (\ref{OscilPerio1}), but they are visually indistinguishable from the graphs of $|\Delta G(t)|$ except for short times. } \label{fig4oscilG0B} \end{center} \vspace{0.2cm} \end{figure} A time-oscillatory behavior of $G(t)$ is clearly visible in Fig. \ref{fig2slopeG0} for large masses, i.e. for $m >\!> m^{*}$. To show more clearly the time-oscillatory behavior of $G(t)$ for $m > m^{*}$ as opposed to for $m < m^{*}$, we plotted in Fig. \ref{fig3oscilG0A} the absolute value of the deviation% \footnote{ We note that in this subsection we use the function (\ref{DeltaG1}) for $\vartheta=1$, while in the next subsection \ref{Vartheta1} we use the function (\ref{DeltaG1}) for $\vartheta=0$. } \begin{eqnarray} \Delta G(t) \equiv G(t)-1 -\frac{\tau_{r}-\tau_{m}\vartheta^{2}} {t-t_{0}-\tau_{r} + \tau_{m}\vartheta^{2}} \label{DeltaG1} \end{eqnarray} of $G(t)$ from its asymptotic form (\ref{FunctGAsym1}) as a function of time $t \in [0,25]$ for the cases of $m/m^{*}=0$, $0.5$, $0.9$, $0.999$, $1.1$, $2$, $4$ and $20$. To illustrate the long time behavior of $|\Delta G(t)|$ in more detail, we also show in Fig. 
\ref{fig4oscilG0B} the absolute value $|\Delta G(t)|$ of $\Delta G(t)$ as a function of $t\in[0,1000]$ for the scaled masses $m/m^{*}=100$, $200$ and $1600$ as linear-log plots. The deviation $\Delta G(t)$ goes to zero when $t\rightarrow +\infty$ because of the asymptotic fluctuation theorem (\ref{AsympFT2}). In Figs. \ref{fig3oscilG0A} and \ref{fig4oscilG0B}, it is important to note that there is no time-oscillation of $\Delta G(t)$ for $0\leq m/m^{*}<1$, while we do observe time-oscillations of $\Delta G(t)$ for $m/m^{*}>1$, in agreement with the critical mass (\ref{CritiMass1}), above which $G(t)$ oscillates in time. The decay of $|\Delta G(t)|$ to zero as a function of $t$ is faster for larger masses for $0\leq m/m^{*}<1$ (cf. Fig. \ref{fig3oscilG0A}), but slower for larger masses for $m/m^{*}>1$ (cf. Figs. \ref{fig3oscilG0A} and \ref{fig4oscilG0B}). To check that the time oscillation period $\mathcal{T}_{m}$ given by Eq. (\ref{OscilPerio1}) indeed appears in $G(t)$, we fitted the data for $\Delta G(t)$ to the function \begin{eqnarray} \Delta G(t) \stackrel{t\rightarrow+\infty}{\sim} a e^{-b t}\sin\left(\frac{2\pi}{\mathcal{T}_{m}}t+c\right) \label{FitFunct1} \end{eqnarray} with fitting parameters $a$, $b$ and $c$ in Fig. \ref{fig4oscilG0B}. The values of the fitting parameters $a$, $b$ and $c$ are given in Table \ref{FittiParam1}. The function (\ref{FitFunct1}) is then sufficiently close to $\Delta G(t)$ over many time-oscillation periods (except for short times) to suggest that the time-oscillations of $G(t)$ may well have the same origin as those in the position $\tilde{x}_{s}^{*}$. Similarly, for Fig. \ref{fig3oscilG0A}, using the fitting function (\ref{FitFunct1}) we can also check that the time-oscillation periods of $|\Delta G(t)|$ in this figure are given by Eq. (\ref{OscilPerio1}). We fully realize that Figs. \ref{fig3oscilG0A} and \ref{fig4oscilG0B} are not enough to specify convincingly the functional form of the decay of $\Delta G(t)$. In Eq.
(\ref{FitFunct1}) we assumed an exponential decay by a factor $a \exp(-b t)$, which seems to fit the data in Fig. \ref{fig4oscilG0B} reasonably well. However, the values of the fitting parameters $a$ and $b$ shown in Table \ref{FittiParam1} appear to vary non-negligibly if we fit data including longer time periods than the ones shown in Fig. \ref{fig4oscilG0B}. In this sense, at this stage, the exponential factor in Eq. (\ref{FitFunct1}) should be regarded only as a convenience to check numerically the time oscillation period $\mathcal{T}_{m}$ appearing in $\Delta G(t)$, rather than as a claim of an asymptotic exponential decay of $\Delta G(t)$ of the form (\ref{FitFunct1}). \begin{table*}[!t] \begin{center} \begin{tabular}{c|c|c|c|c|c} \makebox[\widthtableB]{Frame ($\vartheta$)} & \makebox[\widthtableA]{$m/m^{*}$} & \makebox[\widthtableA]{$\mathcal{T}_{m}$} & \makebox[\widthtableA]{a} & \makebox[\widthtableA]{b} & \makebox[\widthtableA]{c} \\ \hline \hline Laboratory (1) & 100 & 31.6 & -0.077 & 0.021 & 4.4 \\ Laboratory (1) & 200 & 44.5 & -0.14 & 0.011 & 4.5 \\ Laboratory (1) & 1600 & 125.7 & -0.32 & 0.0017 & 4.6 \\ \hline Comoving (0) & 100 & 31.6 & 0.0038 & 0.021 & 4.6 \\ Comoving (0) & 200 & 44.5 & 0.0035 & 0.011 & 4.6 \\ Comoving (0) & 1600 & 125.7 & 0.0032 & 0.0024 & 4.7 \\ \hline \end{tabular} \end{center} \caption{ Values of the fitting parameters $a$, $b$ and $c$ for the function (\ref{FitFunct1}) plotted in Figs. \ref{fig4oscilG0B} and \ref{fig7oscilG1B} for the parameter values $\alpha = \kappa =1$. } \label{FittiParam1} \vspace{0.2cm} \end{table*} \subsection{$G(t)$ in the Comoving Frame} \label{Vartheta1} Here we consider $G(t)$ for the work done in the comoving frame, namely the case of $\vartheta=0$, in which the work includes the effects of an inertial or d'Alembert-like force.
\begin{figure}[!t] \vspfigA \begin{center} \includegraphics[width=0.48\textwidth]{fig5slopeG1.eps} \vspfigB \caption{(color online) Graphs of $G(t)=(\partial/\partial W) \ln [P_{w}(W,t)/P_{w}(-W,t)]$ as a function of time $t$ for the work done in the comoving frame ($\vartheta=0$) in the case of a nonequilibrium steady state initial condition for $t\in [0,60]$. Inset: Graphs of $G(t)$ in a short time period for $t\in [0,6]$. Lines in these graphs correspond to parameter values of the scaled masses $m/m^{*}=0$ (over-damped case), $0.999$, $2$, $4$, $8$, $20$ and $40$ as indicated above this figure and we use the same parameter values $\alpha$, $\kappa$ and $t_{0}$ as in Fig. \ref{fig2slopeG0}. } \label{fig5slopeG1} \end{center} \vspace{0.2cm} \end{figure} Figure \ref{fig5slopeG1} shows graphs of $G(t)$ given by Eq. (\ref{FunctG2}) as a function of time $t$. We chose the same masses as in Fig. \ref{fig2slopeG0}, namely $m/m^{*}=0$ (over-damped case), $0.999$, $2$, $4$, $8$, $20$ and $40$ with the critical mass $m^{*}=1/4$. It is clear that in Fig. \ref{fig5slopeG1} the graphs of $G(t)$ approach $1$ as $t\rightarrow+\infty$, confirming the asymptotic fluctuation theorem (\ref{AsympFT1}). Comparing Fig. \ref{fig2slopeG0} with Fig. \ref{fig5slopeG1}, a dramatic difference in the behavior of $G(t)$ in the two frames is clearly visible. First, a striking frame-dependence of $G(t)$ is that for any nonzero mass, $G(t)$ in the comoving frame starts from a finite value at the initial time $t_{0}(=0)$ and is always larger than $1$, in fact going through a maximum before reaching its final value $1$. This is contrary to the laboratory frame, where $G(t)$ diverges for $t\rightarrow t_{0}+0$ and can be smaller than $1$ for large masses and long times, as discussed in Sec. \ref{Vartheta0}. Another remarkable point is that, unlike in the laboratory frame as shown in Fig.
\ref{fig2slopeG0}, $G(t)$ converges to the over-damped line, much before converging to its final value $1$, as shown in Fig. \ref{fig5slopeG1}. This feature can be explained by the asymptotic form (\ref{FunctGAsym1}) of $G(t)$, whose right-hand side is independent of the mass $m$ in the comoving frame ($\vartheta=0$), so the relaxation behavior of $G(t)$ to its final value $1$ in this frame should be close to that of the over-damped case. \begin{figure}[!t] \vspfigA \begin{center} \includegraphics[width=0.48\textwidth]{fig6oscilG1A.eps} \vspfigB \caption{(color online) Linear-log plots of the absolute value $|\Delta G(t)|$ of the function $\Delta G(t) = G(t)-1-\tau_{r}/(t-\tau_{r})$ as a function of time $t$ for the work done in the comoving frame in the case of a nonequilibrium steady state initial condition. Lines in these graphs correspond to parameter values of the scaled masses $m/m^{*}=0$, $0.5$, $0.9$, $0.999$, $1.1$, $2$, $4$ and $20$ as indicated above this figure and we use the same parameter values $\alpha$, $\kappa$ and $t_{0}$ as in Fig. \ref{fig2slopeG0}. } \label{fig6oscilG1A} \end{center} \vspace{0.2cm} \end{figure} \begin{figure}[!t] \vspfigA \begin{center} \includegraphics[width=0.48\textwidth]{fig7oscilG1B.eps} \vspfigB \caption{(color online) Long time behavior of $|\Delta G(t)|$ as a function of time $t$ as linear-log plots for the work done in the comoving frame in the case of a nonequilibrium steady state initial condition. Here, we use the same parameter values $\alpha$, $\kappa$ and $t_{0}$ as in Fig. \ref{fig2slopeG0}. Broken, dotted and dash-dotted lines in these graphs correspond to parameter values of the scaled masses $m/m^{*}=100$, $200$ and $1600$, respectively. Solid lines are fits of $|\Delta G(t)|$ to the function (\ref{FitFunct1}) using Table \ref{FittiParam1}, and they are visually indistinguishable from the graphs of $|\Delta G(t)|$ except for short times.
} \label{fig7oscilG1B} \end{center} \vspace{0.2cm} \end{figure} Now, we discuss the time-oscillatory behavior of $G(t)$ in the comoving frame. We note that in the comoving frame the approach of $G(t)$ to its final value $1$ is via oscillations around the over-damped line, in contrast to the laboratory frame, where this approach is unrelated to the over-damped line. Such time-oscillations are already visible for large masses $m >\!> m^{*}$ in Fig. \ref{fig5slopeG1}, but to show them in a more magnified way, we plotted in Fig. \ref{fig6oscilG1A} the absolute value $|\Delta G(t)|$ of the function $\Delta G(t)$ defined by Eq. (\ref{DeltaG1}) as a function of time $t$ as linear-log plots. Here, we plotted data for the scaled masses $m/m^{*}=0$, $0.5$, $0.9$, $0.999$, $1.1$, $2$, $4$ and $20$ and for the time period $t\in [0,25]$. It is shown in Fig. \ref{fig6oscilG1A} that time-oscillations of $\Delta G(t)$ occur for $m/m^{*}>1$ but not for $0 \leq m/m^{*} <1$. Moreover, with increasing mass, $\Delta G(t)$ decays faster in time for $0 \leq m/m^{*} <1$, while it decays slower for $m/m^{*}>1$. These features are similar to those in the laboratory frame. In Fig. \ref{fig7oscilG1B} we show linear-log plots of $|\Delta G(t)|$ as functions of $t$ for longer times $t\in [0,1000]$ and for larger masses $m/m^{*}= 100, 200$ and $1600$ than in Fig. \ref{fig6oscilG1A}. Comparing this figure in the comoving frame with the corresponding Fig. \ref{fig4oscilG0B} in the laboratory frame, we can see that the time-oscillation amplitudes of the function $\Delta G(t)$ in the comoving frame are much smaller than the corresponding ones in the laboratory frame, except for short times. This should be noted as an important frame-dependence in the behavior of $G(t)$. The time-oscillation periods appearing in Figs. \ref{fig6oscilG1A} and \ref{fig7oscilG1B} can be checked by fitting the data again to the function (\ref{FitFunct1}) with the time-oscillation period (\ref{OscilPerio1}).
We only show such fitting lines for Fig. \ref{fig7oscilG1B}, using the fitting parameters $a$, $b$ and $c$ of Table \ref{FittiParam1}. Like for the fitting lines in Fig. \ref{fig4oscilG0B}, the parameter values of $a$ and $b$ in Table \ref{FittiParam1} in the comoving frame also appear to vary non-negligibly for data over a longer time period than that shown in Fig. \ref{fig7oscilG1B}. Therefore, as for Fig. \ref{fig4oscilG0B}, the fitting lines in Fig. \ref{fig7oscilG1B} should not be regarded as evidence for the exponential decay in the fitting function (\ref{FitFunct1}). However, the fits of the time-oscillation periods of $|\Delta G(t)|$ in Fig. \ref{fig7oscilG1B} to the function (\ref{FitFunct1}) are satisfactory, which suggests again that the time-oscillations of $G(t)$ have the same origin as those in the position $\tilde{x}_{s}^{*}$, like in the laboratory frame.
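A simple way to confirm that the period entering these fits is indeed $\mathcal{T}_{m}$ of Eq. (\ref{OscilPerio1}) is to locate successive zeros of the fitting function (\ref{FitFunct1}): a damped sinusoid changes sign every half period, regardless of $a$ and $b$. The sketch below uses the $m/m^{*}=100$ laboratory-frame row of Table \ref{FittiParam1}:

```python
import math

# Fitting function of Eq. (FitFunct1); a, b, c and T are taken from the
# m/m* = 100 laboratory-frame row of Table FittiParam1.
a, b, c, T = -0.077, 0.021, 4.4, 31.6

def fit(t):
    return a * math.exp(-b * t) * math.sin(2.0 * math.pi * t / T + c)

# Locate sign changes on a fine grid: successive zeros of the damped
# sinusoid are separated by half the oscillation period T/2.
grid = [0.01 * k for k in range(20000)]            # t in [0, 200)
vals = [fit(t) for t in grid]
zeros = [grid[i] for i in range(1, len(grid)) if vals[i - 1] * vals[i] < 0.0]
spacings = [z2 - z1 for z1, z2 in zip(zeros, zeros[1:])]
```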
This was discussed using the quantity $G(t) \equiv (\partial/\partial W) \ln [P_{w}(W,t)/P_{w}(-W,t)]$, which approaches the value $1$ in the long time limit $t\rightarrow +\infty$ by the asymptotic fluctuation theorem. The $G(t)$ for the work $\mathcal{W}_{c}$ done in the comoving frame is larger than 1 at all times and converges to the corresponding over-damped value much before converging to its final value $1$. On the other hand, the $G(t)$ for the work $\mathcal{W}_{l}$ done in the laboratory frame can be smaller than $1$ for sufficiently large times and masses, and the relaxation behavior of $G(t)$ to its final value $1$ is very different from that for the over-damped case, even for long times. As one of the significant effects for finite times, we also discussed the existence of a critical mass $m^{*}$, such that for masses $m > m^{*}$ a time-oscillatory behavior appears in $G(t)$ in both frames. In the remainder of this section, we make some remarks on the contents of the main text of this paper. 1) We have discussed in this paper differences between the works $\mathcal{W}_{l}$ and $\mathcal{W}_{c}$, which originate in a frame dependence of the kinetic energy difference due to the d'Alembert-like force as we discussed in Sec. \ref{WorkDragParti}. In contrast to the work and the kinetic energy difference, the heat (as well as the potential energy difference) is frame-independent even in the inertial case. Note that the two works $\mathcal{W}_{l}$ and $\mathcal{W}_{c}$ have the same average value in the nonequilibrium steady state, because their difference can be represented as a ``boundary term'' \begin{eqnarray} \mathcal{W}_{l} - \mathcal{W}_{c} = m (\dot{x}_{t} -\dot{x}_{t_{0}} ) v \end{eqnarray} depending only on the difference between the two boundary values of $\dot{x}_{s}$ at the final time $s=t$ and the initial time $s=t_{0}$, so that the average of this boundary term $m (\dot{x}_{t} -\dot{x}_{t_{0}}) v$ is zero in the nonequilibrium steady state.
Nevertheless, this difference $m (\dot{x}_{t} -\dot{x}_{t_{0}} ) v$ between $\mathcal{W}_{l}$ and $\mathcal{W}_{c}$ causes dramatic differences in the work fluctuations, as shown in the subsections \ref{Vartheta0} and \ref{Vartheta1} of this paper. \begin{table*}[!t] \begin{center} \begin{tabular}{c|c|c|c|c|c} \makebox[\widthtableD]{Brownian particle} & \makebox[\widthtableC]{$x_{s}$} & \makebox[\widthtableC]{$m$} & \makebox[\widthtableC]{$\alpha$} & \makebox[\widthtableC]{$\kappa$} & \makebox[\widthtableC]{$\kappa v$} \\ \hline Torsion pendulum & $\theta_{s}$ & $I$ & $\nu$ & $C$ & $\mu$ \\ \hline \end{tabular} \end{center} \caption{ Correspondences between the dragged Brownian particle model described by Eq. (\ref{LangeEquat1}) and the torsion pendulum model described by Eq. (\ref{LangeEquat2}). } \label{CorreModel1} \vspace{0.2cm} \end{table*}% 2) In a different nonequilibrium model described by a linear Langevin equation, Ref. \cite{DJG06} considered the motion of a torsion pendulum under an external torque in a fluid. The corresponding Langevin equation for the angular displacement $\theta_{s}$ of the pendulum at time $s$ in this system is then given by \begin{eqnarray} I \frac{d^{2} \theta_{s}}{ds^{2}} = - \nu \frac{d \theta_{s}}{ds} - C \theta_{s} + M_{s} + \zeta_{s} \label{LangeEquat2} \end{eqnarray} where $I$ is the total moment of inertia of the displaced mass, $\nu$ is the viscous damping, $C$ the elastic torsional stiffness of the pendulum, $M_{s}$ the external torque, and $\zeta_{s}$ the Gaussian-white random force. For this model, Ref. \cite{DJG06} considered the case of a linear torque of \begin{eqnarray} M_{s} = \mu s \label{LinearForci1} \end{eqnarray} with a force constant $\mu$. It is important to note that Eq. (\ref{LangeEquat2}) with the force (\ref{LinearForci1}) has mathematically the same form as the Langevin equation (\ref{LangeEquat1}) with the correspondences shown in Table \ref{CorreModel1}. 
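One quantitative consequence of this correspondence can be checked numerically. The following is a minimal sketch with illustrative parameter values (not taken from either paper's data): dropping the noise term in Eq. (\ref{LangeEquat1}) and substituting $e^{-\nu s}$ gives the characteristic equation $m\nu^{2} - \alpha\nu + \kappa = 0$, whose roots, the exponential relaxation rates of the average position, become complex, so that time-oscillations set in, precisely when $m > m^{*} = \alpha^{2}/(4\kappa)$.

```python
import cmath

# Illustrative (hypothetical) parameter values, in reduced units
alpha, kappa = 1.0, 1.0
m_star = alpha**2 / (4.0 * kappa)   # critical mass

def nu_pm(m):
    """Relaxation rates from m*nu^2 - alpha*nu + kappa = 0
    (obtained by substituting exp(-nu*s) into the noise-free
    Langevin equation)."""
    d = cmath.sqrt(alpha**2 - 4.0 * m * kappa)
    return (alpha + d) / (2.0 * m), (alpha - d) / (2.0 * m)

for m in (0.5 * m_star, 2.0 * m_star):
    for nu in nu_pm(m):
        # both roots satisfy the characteristic equation
        assert abs(m * nu * nu - alpha * nu + kappa) < 1e-12
    print(m, nu_pm(m))  # real roots below m_star, complex ones above
```

Below $m^{*}$ both rates are real (monotonic relaxation); above it they acquire imaginary parts, which set the oscillation period. By the correspondences of Table \ref{CorreModel1}, the same computation with $(m,\alpha,\kappa) \to (I,\nu,C)$ gives a critical moment of inertia $I^{*} = \nu^{2}/(4C)$ for the pendulum.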
Based on these correspondences between the two models, for example, there should be a critical value $I^{*}$ of the total moment of inertia above which a similar time-oscillatory behavior occurs in the pendulum model, like above the critical mass $m^{*}$ in the dragged Brownian particle model treated in this paper. For the pendulum system, Ref. \cite{DJG06} considered the work $\mathcal{W}_{p}$ done by the external torque $M_{s}$ on the pendulum ($p$). This work is given there by \begin{eqnarray} \mathcal{W}_{p} = \int_{t_{0}}^{t} ds\; (M_{s}-M_{t_{0}}) \frac{d \theta_{s}}{ds} . \label{WorkTorque1} \end{eqnarray} Using Eq. (\ref{LinearForci1}) and the correspondences in Table \ref{CorreModel1}, this work corresponds to a quantity for our dragged Brownian particle model, viz. \begin{eqnarray} \mathcal{W}_{p} &\longleftrightarrow& \int_{t_{0}}^{t} ds \; \kappa v (s-t_{0}) \frac{d x_{s}}{ds} \nonumber\\ && \hspace{0.5cm} = \mathcal{W}_{l} +\kappa v (t-t_{0}) \left[x_{t} - \frac{1}{2}v(t+t_{0})\right] \hspace{0.5cm} \end{eqnarray} which is clearly different from the works $\mathcal{W}_{l}$ and $\mathcal{W}_{c}$ discussed in this paper. In other words, $\mathcal{W}_{l}$, $\mathcal{W}_{c}$ and $\mathcal{W}_{p}$ give physically different kinds of work in nonequilibrium steady states described by a mathematically identical Langevin equation in a dynamical sense. We note that our $\mathcal{W}_{l}$ and $\mathcal{W}_{c}$ are consequences of the generalized Onsager-Machlup theory in Ref. \cite{TC07a}. We reserve a general discussion on fluctuation theorems for different kinds of work for a future publication. 3) As another nonequilibrium model described by a linear Langevin equation, Ref. \cite{ZCC04} considered electric circuit models. In that case the system is described by a first-order linear Langevin equation, which has the same form as the over-damped Langevin equation for the dragged Brownian particle model. 
As a generalization of these electric circuit models, an inertial effect in the electric circuit can be introduced by including its self-induction. A generalization of the arguments of Ref. \cite{ZCC04} to the case including the self-induction, as well as a discussion of the effects of self-induction on the nonequilibrium work (and heat) fluctuations, will be addressed in a future paper. In particular, it would be interesting to observe whether there is a critical value of the self-induction above which similar oscillatory effects occur, as they do above the critical mass in the inertial case in this paper. 4) The critical mass $m^{*}$ discussed in this paper for work fluctuations also appears in the dynamics of the average position $\langle x_{s}\rangle$. In order to discuss this point, we note that, by taking the average of Eq. (\ref{LangeEquat1}) and using $\langle \zeta_{s}\rangle =0$, the average position $\langle x_{s}\rangle$ of the particle at time $s$ satisfies \begin{eqnarray} m\frac{d^{2} \langle x_{s}\rangle}{ds^{2}} = - \alpha \frac{d \langle x_{s}\rangle}{ds} - \kappa \left(\langle x_{s}\rangle-vs\right) \label{LangeAvera1} \end{eqnarray} Using $\nus{\pm}$ defined by Eq. (\ref{CoeffNuPM}), the solution of Eq.
(\ref{LangeAvera1}) is given by \begin{eqnarray} \langle x_{s}\rangle = v(s-\tau_{r}) + C^{\prime} e^{-\nus{+}s} + C^{\prime\prime} e^{-\nus{-}s} \label{LangeAvera2} \end{eqnarray} where the constants $C^{\prime}$ and $C^{\prime\prime}$ are determined by the average initial conditions $\langle x_{t_{0}} \rangle$ and $\langle \dot{x}_{t_{0}}\rangle$ and are given by \begin{eqnarray} C^{\prime} &=& -\frac{\nus{-}e^{\nus{+}t_{0}}}{\nus{+}-\nus{-}} \left[\langle x_{t_{0}}\rangle - v(t_{0}-\tau_{r}) \right] \nonumber \\ &&\hspace{0.5cm} -\frac{e^{\nus{+}t_{0}}}{\nus{+}-\nus{-}} \left(\langle \dot{x}_{t_{0}}\rangle - v \right) , \\ C^{\prime\prime} &=& \frac{\nus{+}e^{\nus{-}t_{0}}}{\nus{+}-\nus{-}} \left[\langle x_{t_{0}}\rangle - v(t_{0}-\tau_{r}) \right] \nonumber \\ &&\hspace{0.5cm} +\frac{e^{\nus{-}t_{0}}}{\nus{+}-\nus{-}} \left(\langle \dot{x}_{t_{0}}\rangle - v \right) . \end{eqnarray} Since the $\nus{\pm}$ include nonzero imaginary parts for $m>m^{*}$, a time-oscillatory behavior appears in the average position $\langle x_{s}\rangle$ for masses above this critical mass $m^{*}$. This kind of phenomenon was discussed for a damped oscillator model \cite{LL69}, but its effect on fluctuations in a nonequilibrium steady state has not been discussed to the best of our knowledge. In Ref. \cite{TC07a}, we discussed that in the over-damped case, the most probable path, which is a solution of the Euler-Lagrange equation for the Lagrangian function in the Onsager-Machlup theory, is expressed as a combination of forward and backward paths. This is also true in the inertial case, in which the most probable path is given by a solution of the ``Euler-Lagrange'' equation (\ref{LagraEquat1}) for $\lambda=0$. To show this, we note that the exponentially decaying terms $\exp(-\nus{+}s)$ and $\exp(-\nus{-}s)$ on the right-hand side of Eq. (\ref{LangeAvera2}) refer to a forward path. 
We can also introduce the corresponding backward path, as a combination of exponentially divergent terms $\exp(\nus{+}s)$ and $\exp(\nus{-}s)$. A combination of these forward and backward paths then gives the most probable path $\{x_{s}^{*}\}_{s\in [t_{0},t]}$ for $\lambda=0$, i.e., Eq. (\ref{TildeFunctX1}). 5) An analytical discussion of the asymptotic form of $\Delta G(t)$, with the time-oscillations shown in Figs. \ref{fig3oscilG0A}, \ref{fig4oscilG0B}, \ref{fig6oscilG1A} and \ref{fig7oscilG1B}, remains an open question. In this paper we only analyzed $\Delta G(t)$ numerically by fitting it to the function (\ref{FitFunct1}), but in principle, such analytical information on $\Delta G(t)$ is contained in the general form (\ref{FunctG2}) of $G(t)$. 6) We have considered the asymptotic fluctuation theorem for work in this paper. We now address very briefly its connection with other fluctuation theorems. (6a) One of the other fluctuation theorems is the transient fluctuation theorem \cite{ES94}. This fluctuation theorem was already derived and discussed for a dragged Brownian particle model with inertia in Ref. \cite{TC07a}. There, we derived transient fluctuation theorems, not only for the same works as those in this paper, but also for an energy loss by friction. Unlike the work, the distribution function for the energy loss by friction does \emph{not} satisfy an asymptotic fluctuation theorem. (6b) Another important fluctuation theorem is the extended heat fluctuation theorem \cite{ZC03a,BGG06}. In Ref.
\cite{TC07a} we gave a simple derivation of this fluctuation theorem, based on the assumptions that (A) a correlation between the work and the energy difference at time $t$ (as well as a correlation between the energies at the initial time $t_{0}$ and the final time $t$) disappears in the long time limit $t\rightarrow +\infty$, (B) the work satisfies the asymptotic fluctuation theorem, (C) the work distribution function approaches a Gaussian distribution asymptotically in time, and (D) the distribution function $P_{e}(E)$ for energy $E$ is canonical-like, namely $P_{e}(E) \approx \exp(-\beta E)$ for $E>0$. The same derivation could be applied to all models which satisfy these four conditions (A), (B), (C) and (D). In particular, using this derivation, one can derive an analytical expression for the asymptotic heat distribution function itself, as well as the extended heat fluctuation theorem not only for the over-damped case, as was done in Ref. \cite{TC07a}, but also for the inertial case. \section*{Acknowledgements} We gratefully acknowledge financial support of the National Science Foundation, under award PHY-0501315.
\section{Introduction} Atomistic models have established a critical role in material modeling and simulations. In order to study the mechanical responses, boundary conditions (BCs) must be imposed. While specifying the displacement of the atoms at the boundary is straightforward, imposing a traction is much more challenging due to the fact that the range of the atomic interactions typically goes beyond nearest neighbors. In direct contrast to continuum mechanics, where the boundary is of lower dimension (curves or surfaces), the `boundary' in an atomistic model often consists of a few layers of atoms. As a result, there are multiple ways to prescribe a traction BC. For instance, forces can be applied directly to atoms at the boundary in such a way that they add up to the given traction. However, it is unclear how to distribute these forces among the atoms. In particular, boundary layers may develop and create large modeling errors. Meanwhile, many mathematical problems associated with material defects have been formulated as a system under traction. Examples include cracks under mode-I loading \cite{sih1968fracture}, where uniform stress can be specified in the far field, and dislocations under shear stress \cite{hirth1982theory}, which led to the important concept of the Peierls barrier. Problems of this type cannot simply be treated with BCs that prescribe the displacement of the atoms at the boundary. Another possible approach to introduce traction is the Parrinello-Rahman method \cite{parrinello1981polymorphic}, where the stress is created by allowing the shape of the simulation cell to change, which is particularly useful when phase transformation processes occur. But the method is limited to periodic cells, and it cannot treat material defects without introducing artificial images. The purpose of this paper is to formulate a proper BC that represents a traction force along the boundary.
We set up the problem by embedding the computational domain within an infinite molecular system, where the traction in the far-field can be introduced. This is motivated by the observation that molecular simulations are typically conducted within part of the entire sample, due to the heavy computational cost. Mathematically, the extra degrees of freedom in the surrounding region can be eliminated by solving the finite difference equations associated with the molecular statics model. This gives rise to a BC, which is expressed as an extrapolation of the displacement to the atoms outside the boundary, along with a shift vector, which depends on the traction in the far field. We further demonstrate that the typical approach in which external forces are directly applied at the boundary might be incompatible with these BCs and can lead to ill-posed problems. The present approach allows one to simulate a material system with local defects under traction load, which mimics a surrounding elastic medium. Another potential application is to the domain decomposition (DD) method for solving a large-scale molecular system, where the problem is divided into sub-problems, each of which is associated with a sub-domain. In particular, the Dirichlet-Neumann method and Neumann-Neumann method (e.g., see \cite{toselli2005domain}) offer a coupling strategy without creating overlapping regions. Our method can be implemented within the DD framework to facilitate parallelization. The paper is organized as follows: We first consider a one-dimensional system to demonstrate how the BC can be derived. We further analyze the stability of the resulting boundary value problem and the continuum limit. As an application, and a demonstration of such BCs in high dimensions, we consider a dislocation dipole problem in section \ref{sec: 2d}. We close the paper with a summary and some discussion.
\section{A one-dimensional example} To better illustrate the idea, let us first consider a one-dimensional semi-infinite chain of atoms with undeformed position $x_i = i$, where $i \in \mathbb{Z}_+ \cup \{0\}$. We will also use the undeformed position to label the atoms. The deformed position is denoted by $y_i$ with displacement $u_i$. We assume that the atomistic potential has next-nearest-neighbor pairwise interactions. Namely, the total energy in the bulk can be written as, \[ E= \sum_{i \geq 0} \bigl( V(y_{i+2} - y_{i}) + V(y_{i+1} - y_{i}) \bigr).\] The extension to more general potentials and higher dimensions will be discussed later. Intuitively, there are at least two ways to impose a traction at the boundary. For instance, one may apply forces, denoted by $T_0$ and $T_1$, to the first two atoms, located at $y_0$ and $y_1,$ respectively. Alternatively, one can introduce two additional atoms outside the boundary, which, in the present case, have current positions $y_{-2}$ and $y_{-1}$. These additional atoms will be referred to as {\it ghost atoms}, since they play a different role from the atoms in the interior. By specifying $y_{-2}$ and $y_{-1}$ (or $u_{-2}$ and $u_{-1}$), one also creates a traction at the boundary. These two methods are illustrated in Fig.~\ref{fig:chain}. \begin{figure}[ht] \begin{center} \hspace{2cm}\scalebox{0.45}{\includegraphics{chain-eps-converted-to.pdf}} \end{center} \vspace{0.5cm} \begin{center} \scalebox{0.45}{\includegraphics{chain1-eps-converted-to.pdf}} \caption{A one-dimensional chain with next nearest-neighbor interactions. Top: Tractions, in the form of point forces, are applied to the two atoms at the left boundary; Bottom: two ghost atoms are introduced outside the boundary. \label{fig:chain} } \end{center} \end{figure} \begin{remark} At this point, both methods seem plausible, and it is not immediately clear how they are related to each other.
Also notable is that there are {\it two} parameters available to specify {\it one} traction condition. This will be clarified in the next section. \end{remark} We illustrate possible BCs with the following numerical experiment: We consider the one-dimensional chain model with harmonic interactions. The force constants are $\kappa_1=1$ and $\kappa_2=-0.2$ for the nearest neighbor and next nearest neighbor interactions. A traction force needs to be applied at the left boundary, while the atoms at $x_N$ and $x_{N+1}$ are fixed. We choose $N=20.$ Three traction BCs are tested. In the first case, we apply a unit force on the first atom, and in the second case, we apply the same force to the second atom. In the third test, we split the force between the first two atoms ($\frac12$ and $\frac12$). For such a simple setup, one would anticipate that the corresponding continuum model has a simple solution, given by a uniform deformation gradient. These results are shown in Fig. \ref{fig:4bcs}. In all these cases, the solutions develop a boundary layer, and none of them is fully consistent with the continuum solution. As a comparison, we include the result from the traction BC that will be derived later, in which the positions of the first two atoms are determined from the given traction. It is clear that the boundary layer has been eliminated. \begin{figure}[ht] \begin{center} {\includegraphics[width=12cm,height=6cm]{fourbcs-eps-converted-to.pdf}} \caption{Testing several boundary conditions for the one-dimensional model. Left to right: A unit force applied to the first atom; A unit force applied to the second atom; Forces applied to both atoms; Traction boundary condition that will be derived in the next section. \label{fig:4bcs} } \end{center} \end{figure} \subsection{The derivation of the traction BC} To understand the traction BC, we start by embedding the semi-infinite atom chain into an infinite chain.
We recall that $x_i = i$, $i \in \mathbb{Z}$ denote the equilibrium positions. We take the viewpoint that the BCs acting on the atom at $x_0$ should be determined by the interaction of the semi-infinite chain with the atoms on the left ($i<0$), whose degrees of freedom will be implicitly incorporated. In other words, the atoms $x_i$, $i \in \mathbb{Z}_-$ serve as an environment for the system we consider. We will hence distinguish the two groups of atoms by referring to them as system atoms and environment atoms, respectively. This is based on our observation that most atomistic simulations are focused on part of the entire sample due to the small spatial scale associated with molecular models. The problem now is reduced to removing the atoms in the environment. To facilitate the reduction of degrees of freedom from the whole chain to the semi-infinite one, we will take the harmonic approximation. This amounts to assuming that the interaction between the atoms $x_i$ and $x_j$ is harmonic if either $i < 0$ or $j < 0$. In this paper, we focus our attention on static problems, which can be formulated as an energy minimization problem. For the current problem, the potential energy will be divided into several terms in accordance with the partition of the system, \begin{equation} E = E_{\text{sys}} + E_{\text{int}} + E_{\text{env}} \end{equation} where $E_{\text{sys}}$ is the interaction among the atoms in the semi-infinite chain on the right: \begin{equation} E_{\text{sys}} = \sum_{i \geq 0} \bigl( V(y_{i+2} - y_{i}) + V(y_{i+1} - y_{i}) \bigr). \end{equation} Meanwhile, $E_{\text{int}}$ collects the interaction terms between a system atom and an environment atom.
In terms of the displacement $u_j = y_j - x_j$, we have, \begin{equation} E_{\text{int}} = \frac{\kappa_1}{2} (u_0 - u_{-1})^2 + \frac{\kappa_2}{2} (u_0 - u_{-2})^2 + \frac{\kappa_2}{2} (u_1 - u_{-1})^2, \end{equation} where $\kappa_1 = V''(1)$ and $\kappa_2 = V''(2)$ are two stiffness constants for nearest-neighbor and next-nearest-neighbor interactions. Further, $E_{\text{env}}$ denotes the energy for the environment, given by \begin{equation} E_{\text{env}} = \sum_{i < 0} \bigl[ \frac{\kappa_1}{2} (u_i - u_{i-1})^2 + \frac{\kappa_2}{2} (u_i - u_{i-2})^2 \bigr]. \end{equation} Since the interaction is assumed to be next-nearest-neighbor, the environment atoms only interact with atoms with reference positions $x_0$ and $x_1$, but not with other system atoms. Given the displacements $u_0$ and $u_1$, the force balance equations for the environment atoms can be written as \begin{equation}\label{eq:fdhalfchain} \kappa_2 (u_{j+2} - 2 u_j + u_{j-2}) + \kappa_1 (u_{j+1} - 2 u_j + u_{j-1}) = 0, \qquad j \in \mathbb{Z}_-. \end{equation} The general solution of this finite difference equation is given by, \begin{equation} u_j = A + B j + C \lambda^j + D \lambda^{-j}, \end{equation} where $\lambda$ is a root of the characteristic polynomial associated with \eqref{eq:fdhalfchain} (the other roots are $1/\lambda$ and $1$ with multiplicity $2$): \begin{equation}\label{eq:lambda} \lambda = -1 - \dfrac{\kappa_1}{2\kappa_2} \left [1 - \sqrt{1+ \frac{4\kappa_2}{\kappa_1}}\,\right]. \end{equation} We collect some elementary properties of $\lambda$ in the following lemma. \begin{lemma}\label{lem:lambda} Assume $\kappa_1 > 0$ and $\kappa = \kappa_1 + 4 \kappa_2 > 0$. Then $\abs{\lambda} \leq 1$ and $-\kappa_2 \lambda \geq 0$.
\end{lemma} \begin{proof} First notice that \begin{equation*} 1 + \frac{4 \kappa_2}{\kappa_1} \leq 1 + \frac{4 \kappa_2}{\kappa_1} + \frac{4 \kappa_2^2}{\kappa_1^2}= \Bigl( 1 + \frac{2 \kappa_2}{\kappa_1}\Bigr)^2, \end{equation*} which yields \begin{equation*} 1 - \sqrt{1 + \frac{4 \kappa_2}{\kappa_1}} \geq - \frac{2 \kappa_2}{\kappa_1}. \end{equation*} For $\kappa_2 > 0$, we then get \begin{equation*} 0 \leq - \frac{\kappa_1}{2 \kappa_2} \biggl( 1 - \sqrt{1 + \frac{4 \kappa_2}{\kappa_1}} \biggr) \leq 1. \end{equation*} Hence, by definition of $\lambda$ in \eqref{eq:lambda}, we get $\lambda \in [-1, 0]$. For $\kappa_2 < 0$, we have \begin{equation*} - \frac{\kappa_1}{2 \kappa_2} \biggl( 1 - \sqrt{1 + \frac{4 \kappa_2}{\kappa_1}} \biggr) \geq 1. \end{equation*} This yields $\lambda \geq 0$. Since $1 + 4 \kappa_2 / \kappa_1 < 1$, we also have \begin{equation*} \sqrt{1 + \frac{4 \kappa_2}{\kappa_1}} \geq 1 + \frac{4\kappa_2}{\kappa_1} \end{equation*} and hence \begin{equation*} - \frac{\kappa_1}{2 \kappa_2} \biggl( 1 - \sqrt{1 + \frac{4 \kappa_2}{\kappa_1}} \biggr) \leq - \frac{\kappa_1}{2 \kappa_2} \biggl( 1 - 1 - \frac{4 \kappa_2}{\kappa_1} \biggr) = 2. \end{equation*} We conclude that $\lambda \leq 1$. \end{proof} Since $\abs{\lambda} < 1$, $\lambda^{j}$ grows exponentially as $j \to -\infty$ and is hence unphysical. This leads to the requirement that $C = 0$. The positions of the atoms at $x_0$ and $x_1$ further provide two BCs for \eqref{eq:fdhalfchain}. The one remaining degree of freedom is determined by the traction at the boundary, \begin{equation}\label{eq:tractioncondition} - \kappa_1 (u_{0} - u_{-1}) - \kappa_2 (u_{0} - u_{-2}) - \kappa_2 (u_{1} - u_{-1}) = T, \end{equation} where $T$ is the prescribed traction at the boundary (scalar in $1D$). Intuitively, the traction across a material interface is given by the sum of the forces between two atoms that are on different sides of the interface \cite{Admal2010unified,WuLi14}.
This is indeed non-trivial, especially for multi-body interactions. But formulas are available for most empirical potentials \cite{WuLi14}. \begin{remark} We further remark that the traction is conserved since no external force acts on the environment atoms: For $j \in \mathbb{Z}_-$, \begin{multline*} \Bigl(- \kappa_1 (u_{j} - u_{j-1}) - \kappa_2 (u_{j} - u_{j-2}) \Bigr) - \kappa_2 (u_{j+1} - u_{j-1}) \\ \stackrel{\eqref{eq:fdhalfchain}}{=} \Bigl(- \kappa_1 (u_{j+1} - u_{j}) - \kappa_2 (u_{j+2} - u_{j}) \Bigr) - \kappa_2 (u_{j+1} - u_{j-1}) \\ = - \kappa_1 (u_{j+1} - u_{j}) - \kappa_2 (u_{j+1} - u_{j-1}) - \kappa_2 (u_{j+2} - u_{j}). \end{multline*} \end{remark} The solution to \eqref{eq:fdhalfchain} can now be found. In particular, with $C = 0$ as required above, the remaining coefficients are \begin{equation*} A = \frac{-\lambda^{-1} u_0 + u_1 + T / \kappa}{1 - \lambda^{-1}}, \quad B = - T / \kappa, \quad \text{and} \quad D = \frac{u_0 - u_1 - T/ \kappa}{1 - \lambda^{-1}}, \end{equation*} where $\kappa = \kappa_1 + 4 \kappa_2$. As a result, the displacements $u_{-1}$ and $u_{-2}$ are given by \begin{subequations}\label{eq:tbc1d} \begin{align} & u_{-1} = (1+ \lambda) u_0 - \lambda u_1 + (1 - \lambda) T / \kappa; \\ & u_{-2} = (1+ \lambda) u_{-1} - \lambda u_0 + (1 - \lambda) T / \kappa. \end{align} \end{subequations} Notice that $u_{-1}$ and $u_{-2}$ depend linearly on the displacements $u_0$, $u_1$ and the traction $T$. This is a result of the harmonic approximation in the environment. Now that the degrees of freedom associated with the atoms further on the left are removed, we can formulate the boundary value problem for the semi-infinite atom chain $x_i$, $i \in \mathbb{Z}_+ \cup \{0\}$ in terms of ghost atoms $x_{-1}$ and $x_{-2}$ at the boundary. In general, the number of layers of ghost atoms needed is determined by the interaction range. With the BCs, the molecular statics model is complete. We consider a slightly more general problem by allowing body forces $f_i$ to be applied to the system atoms.
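As a sanity check on \eqref{eq:tbc1d}, the experiment of Fig.~\ref{fig:4bcs} can be redone in a few lines. The following is a minimal pure-Python sketch (not part of the paper's code): in the harmonic chain with $\kappa_1 = 1$, $\kappa_2 = -0.2$, $N = 20$, and the atoms at $x_N$, $x_{N+1}$ held fixed, the ghost-atom BC reproduces the continuum boundary strain $-T/\kappa$ up to a correction of order $\lambda^N$, while a point force on the first atom leaves an $O(1)$ boundary layer.

```python
import math

# Parameters of the experiment: kappa_1 = 1, kappa_2 = -0.2, N = 20
k1, k2, N, T = 1.0, -0.2, 20, 1.0
kappa = k1 + 4.0 * k2
lam = -1.0 - (k1 / (2.0 * k2)) * (1.0 - math.sqrt(1.0 + 4.0 * k2 / k1))
# lam is a root of the characteristic polynomial of the half-chain recurrence
assert abs(k2 * (lam**2 - 2.0 + lam**-2) + k1 * (lam - 2.0 + 1.0 / lam)) < 1e-12

def solve(A, b):
    """Gaussian elimination with partial pivoting (pure Python)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= f * M[c][j]
    u = [0.0] * n
    for r in range(n - 1, -1, -1):
        u[r] = (M[r][n] - sum(M[r][j] * u[j] for j in range(r + 1, n))) / M[r][r]
    return u

def chain(ghost):
    """Force balance for u_0..u_{N-1}; atoms x_N and x_{N+1} are held fixed.
    ghost=True : ghost atoms u_{-1}, u_{-2} from the traction BC;
    ghost=False: a unit point force is applied to the first atom instead."""
    c = (1.0 - lam) * T / kappa
    g = {-1: ({0: 1.0 + lam, 1: -lam}, c),
         -2: ({0: (1.0 + lam) ** 2 - lam, 1: -(1.0 + lam) * lam}, (2.0 + lam) * c)}
    A = [[0.0] * N for _ in range(N)]
    b = [0.0] * N
    for row in range(N):
        stencil = {row - 2: -k2, row - 1: -k1, row: 2.0 * (k1 + k2),
                   row + 1: -k1, row + 2: -k2}
        for j, w in stencil.items():
            if 0 <= j < N:
                A[row][j] += w
            elif j < 0 and ghost:        # substitute the ghost displacements
                coeffs, const = g[j]
                for jj, ww in coeffs.items():
                    A[row][jj] += w * ww
                b[row] -= w * const
            elif j < 0:                  # free left end: the bond is absent
                A[row][row] += w
            # j >= N: the fixed atoms have u = 0 and drop out
    if not ghost:
        b[0] += T                        # point force on the first atom
    return solve(A, b)

u_bc, u_pf = chain(True), chain(False)
strain_bc, strain_pf = u_bc[1] - u_bc[0], u_pf[1] - u_pf[0]
print(strain_bc, strain_pf, -T / kappa)
```

With these parameters the far-field strain is $-T/\kappa = -5$; the ghost-atom variant matches it at the boundary to roundoff, whereas the point-force variant misses it by an $O(1)$ amount, which is the boundary layer seen in Fig.~\ref{fig:4bcs}.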
In this case, the force balance equations read as follows, \begin{subequations}\label{eq:forcebalance} \begin{align} & - V'(y_{j+2} - y_j) - V'(y_{j+1} - y_j) + V'(y_j - y_{j-1}) + V'(y_j - y_{j-2}) = f_j, \qquad j \geq 2 \\ & - V'(y_3 - y_1) - V'(y_2 - y_1) + V'(y_1 - y_0) + \kappa_2 (u_1 - u_{-1}) = f_1, \\ & - V'(y_2 - y_0) - V'(y_1 - y_0) + \kappa_1 (u_0 - u_{-1}) + \kappa_2 (u_0 - u_{-2}) = f_0, \end{align} \end{subequations} together with the BCs given by \eqref{eq:tbc1d}. Another observation is that, due to the semi-infinite nature of the chain, \eqref{eq:forcebalance}-\eqref{eq:tbc1d} can only determine $u$ up to a constant. To uniquely fix the arbitrary constant, we choose $u_0 = 0$. In addition, while the solution $u$ can be a linear function that corresponds to a uniformly stretched (or compressed) state, it is natural to exclude those solutions that grow superlinearly at infinity \cite{luskin2013atomistic}. Hence, the complete set of BCs for the semi-infinite chain consists of the traction BC \eqref{eq:tbc1d} and the conditions \begin{subequations}\label{eq:infbc1d} \begin{align} & u_0 = 0; \\ & \limsup_{j \to \infty} \frac{\abs{u_j}}{j} < \infty. \end{align} \end{subequations} We emphasize that the above two BCs are associated with the {\it semi-infinite} chain under consideration, and not with the traction at the left end of the chain. For a finite system with a right boundary, appropriate BCs should be chosen to replace \eqref{eq:infbc1d} according to the physical situation. Our emphasis, however, is on the traction condition at the left boundary. Let us summarize the general framework for our BC construction in one-dimensional systems as follows: \begin{itemize} \item[Step 1.] Supplement the system with a fictitious environment of atoms treated in the harmonic (linear) approximation; \item[Step 2.] Solve for the positions of the environment atoms under the condition of fixed traction; \item[Step 3.] The BC of the atomistic system is then given in terms of the positions of the ghost atoms.
\end{itemize} This procedure can clearly be generalized to one-dimensional atomistic systems with arbitrary short-range interactions. The number of BCs depends on the interaction range. Next, we turn to several properties of the BCs. These will help us better understand the traction BC and also facilitate the extension to higher dimensions. \subsection{The continuum limit}\label{sec:contlimit} For continuum elasticity models, traction BCs are imposed by prescribing the normal component of the stress. In this subsection, we show that the continuum limit of the reduced system \eqref{eq:forcebalance}, together with the BCs \eqref{eq:tbc1d}, leads to Cauchy-Born elasticity with the standard continuum traction BC. Hence, our BCs \eqref{eq:tbc1d} can be viewed as the molecular statics analog of the traction boundary condition in continuum elasticity. To this end, we adopt the natural rescaling of the system such that the distance between nearest-neighbor atoms in equilibrium becomes $\varepsilon$. We will use superscripts to make explicit the dependence on the scaling parameter $\varepsilon$. Hence, the equilibrium positions scale to $x_j^{\varepsilon} = j \varepsilon$, $j \in \mathbb{Z}_+ \cup \{0\}$ and the deformed positions are $y_j^{\varepsilon} = x_j^{\varepsilon} + u^{\varepsilon}(x_j^{\varepsilon})$.
We rewrite the force balance equation and the traction BC accordingly: \begin{subequations}\label{eq:forcebalanceeps} \begin{align} & - V'\Bigl(\frac{y_{j+2}^{\varepsilon} - y_j^{\varepsilon}}{\varepsilon}\Bigr) - V'\Bigl(\frac{y_{j+1}^{\varepsilon} - y_j^{\varepsilon}}{\varepsilon}\Bigr) + V'\Bigl(\frac{y_j^{\varepsilon} - y_{j-1}^{\varepsilon}}{\varepsilon}\Bigr) + V'\Bigl(\frac{y_j^{\varepsilon} - y_{j-2}^{\varepsilon}}{\varepsilon}\Bigr) = \varepsilon f_j, \qquad j \geq 2 \label{eq:atomj}\\ & - V'\Bigl(\frac{y_3^{\varepsilon} - y_1^{\varepsilon}}{\varepsilon}\Bigr) - V'\Bigl(\frac{y_2^{\varepsilon} - y_1^{\varepsilon}}{\varepsilon}\Bigr) + V'\Bigl(\frac{y_1^{\varepsilon} - y_0^{\varepsilon}}{\varepsilon}\Bigr) + \kappa_2 \Bigl(\frac{u_1^{\varepsilon} - u_{-1}^{\varepsilon}}{\varepsilon}\Bigr) = \varepsilon f_1, \label{eq:atom1} \\ & - V'\Bigl(\frac{y_2^{\varepsilon} - y_0^{\varepsilon}}{\varepsilon}\Bigr) - V'\Bigl(\frac{y_1^{\varepsilon} - y_0^{\varepsilon}}{\varepsilon}\Bigr) + \kappa_1 \Bigl(\frac{u_0^{\varepsilon} - u_{-1}^{\varepsilon}}{\varepsilon}\Bigr) + \kappa_2 \Bigl(\frac{u_0^{\varepsilon} - u_{-2}^{\varepsilon}}{\varepsilon}\Bigr) = \varepsilon f_0 \label{eq:atom0} \end{align} \end{subequations} with \begin{subequations}\label{eq:tbc1deps} \begin{align} & u_{-1}^{\varepsilon} = (1+ \lambda) u_0^{\varepsilon} - \lambda u_1^{\varepsilon} + \varepsilon (1 - \lambda) T / \kappa; \\ & u_{-2}^{\varepsilon} = (1+ \lambda) u_{-1}^{\varepsilon} - \lambda u_0^{\varepsilon} + \varepsilon (1 - \lambda) T / \kappa. \end{align} \end{subequations} We now take the continuum limit $\varepsilon \to 0$. To the leading order, the equation \eqref{eq:atomj} becomes \begin{equation}\label{eq:contforcebalance} - \divop \bigl[ V'( y'(x)) + 2 V'(2 y'(x))\bigr] = f(x). 
\end{equation} Note that for the current atomistic interaction potential, the Cauchy-Born stored energy density is given by \cite{EMing:2007a}: \begin{equation} W_{\mathrm{CB}}(A) = V\bigl( I + A \bigr) + V\bigl( 2 (I + A) \bigr). \end{equation} Hence, \eqref{eq:contforcebalance} is exactly the force balance equation for Cauchy-Born elasticity, since \begin{equation} \partial_A W_{\mathrm{CB}}(u') = V'\bigl( 1 + u'(x) \bigr) + 2 V'\bigl( 2 + 2 u'(x)\bigr). \end{equation} Combining \eqref{eq:atom1}, \eqref{eq:atom0} and the BCs \eqref{eq:tbc1deps}, we get \begin{equation} - V'\Bigl(\frac{y_3^{\varepsilon} - y_1^{\varepsilon}}{\varepsilon}\Bigr) - V'\Bigl(\frac{y_2^{\varepsilon} - y_1^{\varepsilon}}{\varepsilon}\Bigr) - V'\Bigl(\frac{y_2^{\varepsilon} - y_0^{\varepsilon}}{\varepsilon}\Bigr) = \varepsilon f_1 + \varepsilon f_0 + T. \end{equation} To the leading order, this yields \begin{equation}\label{eq:contboundarycond} - V'\bigl( 1 + u'(0) \bigr) - 2 V'\bigl( 2 + 2 u'(0)\bigr) = T. \end{equation} As the left-hand side is equal to $n \cdot \partial_A W_{\mathrm{CB}}(u') \big \vert_{x = 0}$, where $n$ is the unit exterior normal, the BC \eqref{eq:contboundarycond} is exactly the traction BC for the elastic energy density $W_{\mathrm{CB}}$. \subsection{Linear stability at the equilibrium} We also observe that the force balance equations \eqref{eq:forcebalanceeps} with the traction BC \eqref{eq:tbc1deps} can be viewed as a finite difference system with BCs. It is then natural to analyze the stability of such a finite difference system, similar in spirit to the analysis in the context of atomistic-to-continuum methods \cite{LuMing:13, LuMing:14}. We also note that the stability is a crucial ingredient for the rigorous proof of the continuum limit in \S\ref{sec:contlimit}. The stability of molecular statics models under periodic and Dirichlet BCs has been analyzed in \cite{EMing:2007a,ehrlacher2013analysis}.
To understand the stability, we linearize the force balance equations \eqref{eq:forcebalance} at the equilibrium (undeformed) state, yielding, \begin{equation}\label{eq:linearforcebalance} - \kappa_2 (u_{j+2} - 2u_j + u_{j-2}) - \kappa_1 (u_{j+1} - 2 u_j + u_{j-1}) = f_j, \qquad j \geq 0, \end{equation} supplemented by the BCs \eqref{eq:tbc1d} and \eqref{eq:infbc1d}. Given $T$, we define the map $H_{T}: l^2(\mathbb{N}) \to l^2(\mathbb{N})$ as \begin{equation} (H_{T} u)_j = - \kappa_2 (u_{j+2} - 2u_j + u_{j-2}) - \kappa_1 (u_{j+1} - 2 u_j + u_{j-1}), \qquad j \geq 0 \end{equation} with $u_{-1}$ and $u_{-2}$ determined by \eqref{eq:tbc1d} (and hence the dependence on $T$). Thus we have \begin{equation} (H_T u)_j - (H_0 u)_j = \begin{cases} - \bigl(\kappa_2 ( 2 + \lambda ) + \kappa_1 \bigr) (1 - \lambda) T / \kappa, & j = 0; \\ - \kappa_2 (1 - \lambda) T / \kappa, & j = 1; \\ 0, & \text{otherwise}. \end{cases} \end{equation} Let us introduce some shorthand notation. First we define the forward difference as \begin{equation} (D u)_j = u_{j+1} - u_j. \end{equation} Moreover, the discrete Laplacian is given by \begin{equation} (\Delta_d u)_j = u_{j+1} - 2 u_j + u_{j-1}. \end{equation} Direct calculations yield, \begin{multline} (\Delta_d \Delta_d u)_j + 4 (\Delta_d u)_j = \bigl( u_{j+2} - 4 u_{j+1} + 6 u_j - 4 u_{j-1} + u_{j-2} \bigr) + 4 \bigl(u_{j+1} - 2 u_j + u_{j-1}\bigr) \\ = ( u_{j+2} - 2 u_j + u_{j-2}).
\end{multline} With these preparations, we calculate the quadratic form $\average{u, H_0 u}$: \begin{equation} \begin{aligned} \average{u, H_0 u} & = - \kappa_2 \sum_{j=0}^{\infty} u_j \bigl( (\Delta_d \Delta_d u)_j + 4 (\Delta_d u)_j \bigr) - \kappa_1 \sum_{j=0}^{\infty} u_j (\Delta_d u)_j \\ & = - \kappa_2 \sum_{j=0}^{\infty} u_j (\Delta_d \Delta_d u)_j - \kappa \sum_{j=0}^{\infty} u_j (\Delta_d u)_j. \end{aligned} \end{equation} Summation by parts gives (recall that $u_0 = 0$) \begin{equation} \sum_{j=0}^{\infty} u_j (\Delta_d u)_j = - \sum_{j=0}^{\infty} \abs{ (Du)_j}^2. \end{equation} For the fourth order term, we get \begin{equation} \sum_{j=0}^{\infty} u_j (\Delta_d \Delta_d u)_j = \sum_{j=0}^{\infty} \abs{ (\Delta_d u)_j }^2 - u_{-1} (u_1 + u_{-1}) = \sum_{j=0}^{\infty} \abs{ (\Delta_d u)_j }^2 + \lambda(1 - \lambda) u_1^2, \end{equation} where in the last equality we used $u_{-1} = - \lambda u_1$ in the case $T = 0$ and $u_0 = 0$. Finally, we have \begin{equation} \average{u, H_0 u} = \kappa \sum_{j=0}^{\infty} \abs{ ( D u)_j}^2 - \kappa_2 \sum_{j=0}^{\infty} \abs{ (\Delta_d u)_j }^2 - \kappa_2 \lambda (1 - \lambda) u_1^2. \end{equation} By Lemma~\ref{lem:lambda}, we have $- \kappa_2 \lambda (1 - \lambda) > 0$ as long as $\kappa_1 > 0$ and $\kappa>0$. Thus, we have \begin{equation} \average{u, H_0 u} \geq \kappa \sum_{j=0}^{\infty} \abs{ ( D u)_j}^2 - \kappa_2 \sum_{j=0}^{\infty} \abs{ (\Delta_d u)_j }^2. \end{equation} Therefore, the scheme is stable as long as the underlying atomistic model is stable. For a general traction $T$, we have \begin{equation} \average{u, H_T u} = \average{u, H_0 u} - \frac{\kappa_2}{\kappa} (1- \lambda) T u_1. \end{equation} Hence, the stability follows from the stability in the case of $T = 0$.
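The cancellations above can be checked numerically. The sketch below uses illustrative spring constants (not taken from any particular potential), with $\lambda$ obtained by factoring $(z-1)^2$ out of the quartic characteristic polynomial, and verifies the final identity for $\average{u,H_0u}$ on a random, finitely supported $u$ with $u_0 = 0$ and ghost value $u_{-1} = -\lambda u_1$:

```python
import numpy as np

# Illustrative spring constants (not from any particular potential):
# kappa_1 > 0, kappa_2 < 0, kappa = kappa_1 + 4*kappa_2 > 0.
k1, k2 = 1.0, -0.1
kappa = k1 + 4 * k2

# lambda: the root of modulus < 1 of the characteristic polynomial;
# factoring (z - 1)^2 out of the quartic leaves k2*z^2 + (2*k2 + k1)*z + k2.
roots = np.roots([k2, 2 * k2 + k1, k2])
lam = float(np.real(roots[np.abs(roots) < 1][0]))

rng = np.random.default_rng(0)
N = 20
u = np.zeros(N + 5)                  # u[j + 2] stores u_j for j = -2, ..., N + 2
u[3:N + 3] = rng.standard_normal(N)  # random u_1, ..., u_N; u_0 = 0, u_j = 0 beyond
u[1] = -lam * u[3]                   # ghost value u_{-1} = -lam * u_1 (T = 0, u_0 = 0)
# u_{-2} enters only the j = 0 equation, which is multiplied by u_0 = 0; leave it 0.

def H0(j):
    # (H_0 u)_j for the physical index j >= 0.
    i = j + 2
    return (-k2 * (u[i + 2] - 2 * u[i] + u[i - 2])
            - k1 * (u[i + 1] - 2 * u[i] + u[i - 1]))

lhs = sum(u[j + 2] * H0(j) for j in range(N + 1))
Du = [u[j + 3] - u[j + 2] for j in range(N + 2)]
Dd = [u[j + 3] - 2 * u[j + 2] + u[j + 1] for j in range(N + 2)]
rhs = (kappa * sum(d * d for d in Du) - k2 * sum(d * d for d in Dd)
       - k2 * lam * (1 - lam) * u[3] ** 2)
print(abs(lhs - rhs))  # agrees to machine precision
```

Since every step in the derivation is an exact algebraic identity for finitely supported $u$, the two sides agree to rounding error.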
\begin{remark} This analysis also shows that if $\lambda$ in \eqref{eq:tbc1d} is replaced by an appropriate approximation, i.e., $\wt{\lambda}\approx \lambda$ satisfying $-\kappa_2\wt{\lambda}(1-\wt{\lambda})>0$, a stable model would also be obtained. \end{remark} \medskip We now show that a careless choice of the BC may lead to instability of the scheme. Instead of \eqref{eq:tbc1d}, let us consider an alternative set of BCs (to distinguish, we use $\wt{u}$ for the displacement) \begin{subequations}\label{eq:tbc1d_unstable} \begin{align} & \wt{u}_{-1} = (1+ \lambda^{-1}) \wt{u}_0 - \lambda^{-1} \wt{u}_1 + (\lambda^{-1} - 1) T / \kappa; \\ & \wt{u}_{-2} = (1+ \lambda^{-1}) \wt{u}_{-1} - \lambda^{-1} \wt{u}_0 + (\lambda^{-1} - 1) T / \kappa. \end{align} \end{subequations} It is straightforward to check that this set of BCs also yields traction $T$ at the boundary \begin{equation} - \kappa_1 (\wt{u}_0 - \wt{u}_{-1}) - \kappa_2 (\wt{u}_0 - \wt{u}_{-2}) - \kappa_2 (\wt{u}_1 - \wt{u}_{-1}) = T, \end{equation} and hence is consistent with the traction BC in continuum elasticity. However, the resulting scheme with the BC \eqref{eq:tbc1d_unstable} is not stable. In fact, we even lose uniqueness: it is easy to check that $\wt{u}_j = \lambda^j - 1$ for $j \geq -2$ satisfies \begin{equation} - \kappa_2 (\wt{u}_{j+2} - 2 \wt{u}_j + \wt{u}_{j-2}) - \kappa_1 (\wt{u}_{j+1} - 2 \wt{u}_j + \wt{u}_{j-1}) = 0, \qquad j \geq 0, \end{equation} and also at the boundary \begin{align} & \wt{u}_{-1} = (1+ \lambda^{-1}) \wt{u}_0 - \lambda^{-1} \wt{u}_1; \\ & \wt{u}_{-2} = (1+ \lambda^{-1}) \wt{u}_{-1} - \lambda^{-1} \wt{u}_0; \\ & \wt{u}_0 = 0; \\ & \limsup_{j \to \infty} \frac{\abs{\wt{u}_j}}{j} = 0. \end{align} \subsection{Connection to BCs with applied forces at the boundary} As we alluded to at the beginning of this section, it is also possible to apply forces ($T_0$ and $T_1$) directly at the boundary to create a traction.
But it is not immediately clear how much force to apply to each of the two atoms at the boundary. Here we will demonstrate the connection to that approach. In particular, this discussion will also shed light on the selection of the forces. If we substitute \eqref{eq:tbc1d} into the force balance equation \eqref{eq:forcebalance}, we get \begin{align*} - V'(y_3 - y_1) - V'(y_2 - y_1) + V'(y_1 - y_0) & = f_1 + T_1; \\ - V'(y_2 - y_0) - V'(y_1 - y_0) & = f_0 + T_0, \end{align*} with \begin{align} T_1 & = \kappa_2 (u_{-1} - u_0) = \kappa_2 \bigl( (1 + \lambda) ( u_0 - u_1) + ( 1 - \lambda) T / \kappa \bigr); \\ T_0 & = \kappa_2 (u_{-2} - u_0) + \kappa_1 (u_{-1} - u_0) \\ \notag & = \lambda \bigl( \kappa_1 + \kappa_2 ( 1 + \lambda) \bigr) (u_0 - u_1) + (1 - \lambda) \bigl( \kappa_1 + \kappa_2 ( 2 + \lambda) \bigr) T / \kappa. \end{align} This provides the formulas for the forces. An important observation, however, is that these forces should depend on the displacement of the atoms at $x_0$ and $x_1$. \subsection{Traction BC from the Green's function} To facilitate the extension of the BC to two-dimensional systems, we take yet another point of view of the traction BC, from the lattice Green's function perspective. Let us define the lattice Green's function associated with the model \eqref{eq:fdhalfchain}, \begin{equation}\label{eq:greens} - \kappa_2 (G_{j+2} - 2 G_j + G_{j-2}) - \kappa_1 (G_{j+1} - 2 G_j + G_{j-1}) = \delta(j), \qquad j \in \mathbb{Z}. \end{equation} In general, the lattice Green's functions are useful analytical tools for studying lattice distortions around defects (see e.g., \cite{Tewary1973}).
A typical route to compute the Green's function is via a Fourier transform, \begin{equation} \wh{G}(\xi) = \sum_{j \in \mathbb{Z}} e^{i \xi j} G_j, \qquad \xi \in [-\pi, \pi) \end{equation} with the inverse given by \begin{equation}\label{eq:FourierInversion} G_j = \frac{1}{2\pi} \int_{-\pi}^{\pi} e^{-i \xi j} \wh{G}(\xi) \,\mathrm{d} \xi. \end{equation} This leads to \begin{equation} - \kappa_2 \bigl( e^{2i \xi} - 2 + e^{-2i \xi} \bigr) \wh{G}(\xi) - \kappa_1 \bigl( e^{i\xi} - 2 + e^{-i\xi} \bigr) \wh{G}(\xi) = 1, \end{equation} and \begin{equation} \wh{G}(\xi) = \frac{1}{4 \kappa_2 \sin^2(\xi) + 4 \kappa_1 \sin^2(\xi / 2)}. \end{equation} However, due to the singularity at $\xi = 0$, the integral \eqref{eq:FourierInversion} with $\wh{G}$ given above is not well defined. A remedy \cite{MaRo02} is to modify \eqref{eq:FourierInversion}: \begin{equation}\label{eq:GreenFunction} \begin{aligned} G_j & = \frac{1}{2\pi} \int_{-\pi}^{\pi} \frac{e^{-i\xi j} - 1}{4 \kappa_2 \sin^2(\xi) + 4 \kappa_1 \sin^2(\xi / 2)} \,\mathrm{d} \xi \\ & = - \frac{1}{2\pi} \int_{-\pi}^{\pi} \frac{2 \sin^2(\xi j/2)}{4 \kappa_2 \sin^2(\xi) + 4 \kappa_1 \sin^2(\xi / 2)} \,\mathrm{d} \xi. \end{aligned} \end{equation} Conceptually, the Green's function \eqref{eq:greens} is only defined up to a constant, and one can fix $G_0 = 0$ by subtracting an (infinite) constant from \eqref{eq:FourierInversion}. As a result, the integral is now well defined as the integrand is regular as $\xi \to 0$. The function $G_j$ defined this way still satisfies the equations \eqref{eq:greens}. We now make the connections to the BCs \eqref{eq:tbc1d}.
\begin{lemma} For $j \leq 0$, \begin{equation}\label{eq:Gidentity} G_{j-1} = (1 + \lambda) G_j - \lambda G_{j+1}. \end{equation} \end{lemma} \begin{proof} Rewriting \eqref{eq:GreenFunction} using the change of variable $z = \exp(i \xi)$ and the characteristic polynomial associated with the denominator, we get \begin{equation*} \begin{aligned} G_j & = \frac{1}{2\pi \kappa_2 i}\int_{\abs{z} = 1} \frac{(z^{-j} - 1)z}{ (z - 1)^2 (z - \lambda) (z - \lambda^{-1}) } \,\mathrm{d} z \\ & = \frac{1}{2\pi \kappa_2 i} \lim_{\epsilon \to 0} \int_{\gamma_{\epsilon}} \frac{(z^{-j} - 1)z}{ (z - 1)^2 (z - \lambda) (z - \lambda^{-1}) } \,\mathrm{d} z, \end{aligned} \end{equation*} where the contour $\gamma_{\epsilon}$ is given by the boundary of $B_1(0) \backslash B_{\epsilon}(1)$ on the complex plane. The second equality uses the fact that the integrand is regular as $z \to 1$. Using this representation, we have \begin{equation*} \begin{aligned} G_{j-1} - (1 + \lambda) G_j + \lambda G_{j+1} & = \frac{1}{2\pi \kappa_2 i} \lim_{\epsilon \to 0} \int_{\gamma_{\epsilon}} \frac{z^{-j} \bigl(z^2 - (1 + \lambda) z + \lambda \bigr)}{ (z - 1)^2 (z - \lambda) (z - \lambda^{-1}) } \,\mathrm{d} z \\ & = \frac{1}{2\pi \kappa_2 i} \lim_{\epsilon \to 0} \int_{\gamma_{\epsilon}} \frac{z^{-j}}{ (z - 1) (z - \lambda^{-1}) } \,\mathrm{d} z. \end{aligned} \end{equation*} As $j \leq 0$ and $\abs{\lambda} < 1$, the integrand is holomorphic in $B_1(0) \backslash B_{\epsilon}(1)$ for any $\epsilon$, and hence the integral vanishes by Cauchy's integral theorem. Therefore, \eqref{eq:Gidentity} holds. \end{proof} The equation \eqref{eq:Gidentity} is exactly of the same form as the BCs \eqref{eq:tbc1d} when $T=0$. This is not surprising, since the Green's function represents a special set of solutions. In particular, $G_j$ satisfies the homogeneous difference equations \eqref{eq:fdhalfchain}.
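As a numerical sanity check (with illustrative spring constants $\kappa_1 = 1$, $\kappa_2 = -0.1$, not tied to any particular potential), the regularized integral \eqref{eq:GreenFunction} can be evaluated by quadrature and tested against the defining equations \eqref{eq:greens}:

```python
import numpy as np
from scipy.integrate import quad

k1, k2 = 1.0, -0.1          # illustrative spring constants
kappa = k1 + 4 * k2

def integrand(xi, j):
    # Integrand of the regularized inversion formula; the singularity
    # at xi = 0 is removable, with limiting value j^2 / (2*kappa).
    if xi == 0.0:
        return j * j / (2 * kappa)
    return 2 * np.sin(xi * j / 2) ** 2 / (
        4 * k2 * np.sin(xi) ** 2 + 4 * k1 * np.sin(xi / 2) ** 2)

def G(j):
    val, _ = quad(integrand, -np.pi, np.pi, args=(j,), limit=200)
    return -val / (2 * np.pi)

# The difference equation is recovered, with a discrete delta at j = 0.
residuals = [(-k2 * (G(j + 2) - 2 * G(j) + G(j - 2))
              - k1 * (G(j + 1) - 2 * G(j) + G(j - 1))) for j in range(3)]
print(residuals)  # approximately [1, 0, 0]
```

The constant shift that fixes $G_0 = 0$ drops out of the second differences, which is why the delta is recovered exactly up to quadrature error.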
Nevertheless, this simple observation can be employed to determine the coefficients in the BCs by using the Green's functions as test functions. This will be implemented for problems in two dimensions, and the implementation will be discussed in the next section. \section{Implementation in two-dimensional models}\label{sec: 2d} Here we demonstrate how the BC can be extended to two-dimensional systems. \subsection{The traction BC and the induced boundary value problem} For multi-dimensional problems, the BC is typically non-local \cite{MeKaLi06}, in that the displacements of all the atoms at the boundary are coupled. It is also possible to consider nonlocal boundary conditions, for example, in the spirit of the boundary element method for molecular statics models by one of the authors \cite{Li:12}. Another alternative is to seek a {\it local} BC, in the sense that the positions of the ghost atoms are determined only by the positions of nearby atoms in the system. To make the dependence local, we would employ a ``local flattening'' of the boundary. Roughly speaking, for an atom at the boundary, the position is determined by a homogeneous approximation of the local atom configuration with the local value of the traction tensor. To better explain the idea, we consider the face-centered cubic (FCC) lattice of Aluminum with the axes aligned in $\langle 11 0\rangle$, $\langle 0 0 1\rangle$ and $\langle 1\bar{1} 0\rangle$ orientations. When projected to the $\langle 1\bar{1} 0\rangle$ plane, the lattice spacings in the horizontal and vertical directions are $\frac{a_0}{\sqrt{2}}$ and $\frac{a_0}{2}$, respectively, which makes it look like a triangular lattice, as shown in Fig. \ref{fig: Al2d}. Again, we introduce ghost atoms outside the boundary in order to achieve the desired traction condition. They are represented by open circles in Fig. \ref{fig: Al2d}.
\begin{figure}[htp] \begin{center} \scalebox{0.4}{\includegraphics{Al2d-eps-converted-to.pdf}} \end{center} \caption{The projected atomic positions of an FCC lattice. Filled circles: Atoms in the interior; Open circles: the ghost atoms introduced outside the boundary. The boxes contain the set of atoms $S_j$ that will be used to determine the displacement of the $j$th atom (see equation \eqref{eq: loc-bc-2d}). } \label{fig: Al2d} \end{figure} Our main goal is to determine the actual position of the ghost atoms based on the displacement of the atoms in the interior and the applied traction $T$, which is a two-dimensional vector. In this case, it is in general cumbersome to obtain the exact boundary condition. Motivated by the one-dimensional traction BC, we seek an approximate BC in the following form, \begin{equation}\label{eq: loc-bc-2d} u_j = \sum_{i\in S_j} B_{ji} u_i + p_j. \end{equation} The shift vector $p$ is similar to the non-homogeneous term in \eqref{eq:tbc1d}, and it will be determined so that the correct traction is obtained. In the case when $p=0$, this boundary condition would coincide with the BCs that model an environment that is at a mechanical equilibrium with zero stress \cite{Li2009b,MeKaLi06}. In principle, an exact BC in this form can be derived, e.g., in \cite{MeKaLi06}, which, mathematically, is a discrete analogue of the Dirichlet-to-Neumann (DtN) map. The exact expression is typically nonlocal, in that the summation is over all the atoms near the boundary. But here we choose a local approximation, and restrict the summation in \eqref{eq: loc-bc-2d} to those atoms that are close to the $j$th atom. These neighbors are collected in the set $S_j.$ Due to the translational symmetry of the lattice, we will use the same set of neighbors when implementing the formula \eqref{eq: loc-bc-2d}, which is also demonstrated in Fig. \ref{fig: Al2d}.
More specifically, we start with the layer of ghost atoms closest to the boundary and apply the BCs \eqref{eq: loc-bc-2d}. Once the displacements of these atoms are updated, we move to the next layer, and these steps will be repeated until the positions of all the ghost atoms are updated. We now discuss how to determine the coefficients $B_{ji}$. Since they are independent of the applied traction, they can be computed for the case $T=0$ and $p=0.$ In this case, these coefficients can be determined using an optimization procedure, developed in \cite{Li2009b}. More specifically, we choose an objective function as follows, \begin{equation} \min h, \quad h= \sum_{k} |G_{j,k} - \sum_i B_{ji} G_{i,k}|^2. \end{equation} Here $G_{j,k}$ is the two-dimensional lattice Green's function \cite{Tewary1973}. The main observation is that the BC should be satisfied by special solutions, especially the lattice Green's functions $G_{i,k}$, which correspond to the solutions of the linearized molecular statics model when a point force is applied to the $k$th atom. This was already observed for the one-dimensional model. Ideally, the objective function would be zero when the BC is exact. Further, we introduce a constraint, \begin{equation} \sum_i B_{ji}=I, \end{equation} so that the constant solutions are admitted. This is also seen in the one-dimensional system: The two coefficients in \eqref{eq:tbc1d} add up to 1. It remains to estimate the vector $p$. In principle, it should be determined by requiring the traction to arrive at a prescribed value. The total traction along the boundary is given by the sum of the forces \cite{Admal2010unified,WuLi14}, \begin{equation} t= \sum_{i\in \Omega, j\notin \Omega} f_{ij}. \end{equation} Here, $f_{ij}$ comes from a force decomposition. Namely, \begin{equation}\label{eq: fij} f_i= \sum_{j\ne i} f_{ij}, \quad f_{ij} = -f_{ji}.
\end{equation} This formula, which was already indicated by \eqref{eq:tractioncondition}, is consistent with the intuition of Cauchy. The explicit expressions for the force decomposition \eqref{eq: fij}, especially for multi-body potentials, can be found in \cite{Chen2006local,WuLi14}. To control the local traction, we divide the region with ghost atoms into blocks, each denoted by $\Omega^c_\alpha,$ $\alpha=1, 2, \cdots, M$. The computational domain is denoted by $\Omega,$ and its intersection with $\Omega^c_\alpha$ is written as $\partial \Omega_\alpha.$ For each block $\Omega^c_\alpha,$ we introduce a shift vector $p_\alpha$. They are chosen so that the traction along $\partial \Omega_\alpha$ agrees with a prescribed value $t_\alpha$. This arrangement is illustrated in Fig. \ref{fig: blocks}. \begin{figure}[htp] \begin{center} \scalebox{0.4}{\includegraphics{blocks-eps-converted-to.pdf}} \end{center} \caption{Imposing tractions along the boundary: $\Omega$ indicates the computational domain. The atoms outside the boundary are grouped into blocks $\Omega_\alpha^c$ with the intersection with the boundary given by $\partial\Omega_\alpha$. The traction on each $\partial\Omega_\alpha$ is prescribed. } \label{fig: blocks} \end{figure} We now put the mathematical model together: \begin{equation}\label{eq: model2d} \left\{ \begin{aligned} & \frac{\partial}{\partial u_i} V=0, &&\forall x_i \in \Omega, \\ & u_j = \sum_{i} B_{ji} u_i + p_\alpha, & &\forall x_j \in \partial \Omega_\alpha,\\ & \sum_{i\in \Omega, j\in \Omega_\alpha^c} f_{ij} = t_\alpha. \end{aligned}\right. \end{equation} The first set of equations represents the force balance in the interior, with potential energy given by $V$. The remaining equations serve as BCs with prescribed tractions $t_\alpha$. The unknowns are the atomic displacements, together with the shift vectors $p_\alpha$.
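As an aside, the constrained least-squares determination of the coefficients $B_{ji}$ described above can be illustrated in a scalar, one-dimensional setting. The sketch below uses synthetic test data constructed to satisfy a relation of the form \eqref{eq:Gidentity} exactly, for an illustrative value of $\lambda$; the fit, with the constraint that the coefficients sum to one, recovers $1+\lambda$ and $-\lambda$:

```python
import numpy as np

lam = 0.127                      # illustrative value, 0 < lam < 1
rng = np.random.default_rng(1)

# Synthetic test data satisfying g_{-1} = (1 + lam) g_0 - lam g_1 exactly,
# playing the role of the Green's function samples G_{i,k}.
K = 12
g0, g1 = rng.standard_normal(K), rng.standard_normal(K)
gm1 = (1 + lam) * g0 - lam * g1

# Minimize sum_k |g_{-1,k} - c0 g_{0,k} - c1 g_{1,k}|^2 subject to c0 + c1 = 1:
# eliminate the constraint via c1 = 1 - c0, leaving a scalar least-squares fit.
d, e = g0 - g1, gm1 - g1
c0 = (d @ e) / (d @ d)
c1 = 1 - c0
print(c0, c1)  # recovers 1 + lam and -lam
```

In two dimensions the same structure carries over, with the scalar coefficients replaced by the matrices $B_{ji}$ and the constraint $\sum_i B_{ji}=I$ handled, e.g., by elimination or by a KKT system.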
The atomic degrees of freedom associated with the atoms outside $\Omega$ have been implicitly taken into account by the second and third equations in \eqref{eq: model2d}. In the next section, we will discuss an implementation method. \subsection{Numerical implementations} Our reduced model \eqref{eq: model2d} consists of a set of nonlinear algebraic equations. It is therefore natural to make use of iterative methods, such as the quasi-Newton method. In general, this requires the approximation of the Jacobian matrix, since the analytical form is usually not available. The convergence is typically slow, especially when the system is not well prepared. To find an alternative, we notice that in the domain $\Omega$, the molecular statics model is associated with an energy. Therefore, for Dirichlet BCs, where the atoms outside the boundary are held fixed, the solution to the molecular statics model corresponds to an energy minimization, which is usually more robust and much more efficient than solving the nonlinear equations. We implement the equations by a domain decomposition approach, and alternate among the three sets of equations in \eqref{eq: model2d}. As an example to explain the idea, we may consider the coupling of the first two sets of equations and assume that $p_\alpha$ is given. We create a few overlapping layers, in which the atoms serve as both ghost atoms and interior atoms. This is illustrated by Fig. \ref{fig: DD2d}. In each iteration, we first update the displacement of all the ghost atoms including those in the overlapping region using \eqref{eq: loc-bc-2d} (or the second equation in \eqref{eq: model2d}). We then turn to the interior atoms, assuming that the other atoms are held fixed (open circles in Fig. \ref{fig: DD2d}). By minimizing the energy, we obtain the updated position of the interior atoms. The numerical implementation has been done with the BFGS package \cite{Liu1989lbfgs}. This iteration can be continued until convergence is reached.
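A toy version of this alternating procedure, on a scalar chain with Dirichlet data (all parameters below are illustrative and not tied to the atomistic model), converges to the exact solution:

```python
import numpy as np

# Discrete Laplace equation on a chain 0..N with u_0 = 0, u_N = 1, solved by
# alternating between two overlapping subdomains [0, m2] and [m1, N].
N, m1, m2 = 20, 8, 12
u = np.zeros(N + 1)
u[N] = 1.0

def subdomain_solve(p, q):
    # Exact solve of u_{j+1} - 2 u_j + u_{j-1} = 0 with the current values
    # at p and q as Dirichlet data: linear interpolation between them.
    u[p:q + 1] = np.linspace(u[p], u[q], q - p + 1)

for _ in range(100):
    subdomain_solve(0, m2)   # left solve, value at m2 held fixed
    subdomain_solve(m1, N)   # right solve, value at m1 held fixed

err = np.max(np.abs(u - np.arange(N + 1) / N))
print(err)  # the iteration converges to the exact linear profile
```

Each sweep solves one subdomain exactly with the latest interface values from the other; the overlap $m_2 - m_1$ is what drives the geometric convergence.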
This is simply the Schwarz iteration \cite{toselli2005domain} between the two models. \begin{figure}[htp] \begin{center} \scalebox{0.35}{\includegraphics{DD2d-eps-converted-to.pdf}} \caption{An overlapping domain decomposition.} \label{fig: DD2d} \end{center} \end{figure} \subsection{Results from numerical experiments} As a test problem, we consider a dislocation dipole under a shear load. The atoms around the two dislocations with opposite Burgers vectors are shown in Fig. \ref{fig: dpl}. The embedded atom model (EAM) \cite{ercolessi1994interatomic} has been used as the interatomic potential. \begin{figure}[htpb] \begin{center} \scalebox{0.45}{\includegraphics{dpl-eps-converted-to.pdf}} \caption{The atoms near the two dislocations.} \label{fig: dpl} \end{center} \end{figure} We first manually control the vector $p=(p_1,0),$ and observe how the traction changes. To mimic quasi-static loading, we increase $p_1$ in small increments and then solve the molecular statics model using the domain decomposition method described in the previous section. For each step, we also compute the traction at the boundary. The history of the total boundary traction is shown in Fig. \ref{fig: thist}. We observe that the traction increases as $p_1$ increases. However, when $p_1$ reaches a certain critical value, a sudden drop is observed. In this case, the two dislocations move to the boundary, and the entire sample undergoes a complete slip. Fig. \ref{fig: positions} shows the atomic positions before and after the slip.
\begin{figure}[hpbt] \begin{center} \scalebox{0.45}{\includegraphics{thist-eps-converted-to.pdf}} \caption{The history of the traction at the boundary.} \label{fig: thist} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \scalebox{0.85}{\includegraphics{positions-eps-converted-to.pdf}} \caption{The position of the atoms before and after the yield stress.} \label{fig: positions} \end{center} \end{figure} In the next experiment, we apply a uniform traction along the upper and lower boundaries. In Fig. \ref{fig: tpvals}, we show the resulting values of $p_1$ along the boundary. While the resulting tractions have reached the prescribed values $(2\times 10^{-4},0)$, it is clear that the values for $p$ are not homogeneous, mainly due to the presence of the dislocations. The displacement is shown in Fig. \ref{fig: u1}, together with a close-up view of the atomic positions. All these results suggest that the atomic positions are not uniform. Compared to the simulations of dislocation dipoles using the Parrinello-Rahman method (e.g., \cite{wang2004calculating}), the current approach does not introduce periodic images of the dislocation dipole. Moreover, the uniform shear stress can be applied without forcing a uniform deformation along the boundaries.
\begin{figure}[htpb] \begin{center} {\includegraphics[width=5in,height=2.2in]{tvals-eps-converted-to.pdf}}\\ {\includegraphics[width=5in,height=2.2in]{pvals-eps-converted-to.pdf}} \caption{The traction (top) and the shift vector $p_1$ (bottom) along the upper boundary.} \label{fig: tpvals} \end{center} \end{figure} \begin{figure}[hbtp] \begin{center} {\includegraphics[scale=0.17]{u1.jpg}} {\includegraphics[scale=0.32]{atoms.jpg}} \caption{The displacement $u_1$ and a close-up view of the atoms near the two dislocations (generated in Ovito \cite{stukowski2010visualization}).} \label{fig: u1} \end{center} \end{figure} \section{Summary and Discussion} We have formulated boundary conditions that impose a traction force on a molecular statics model. These boundary conditions are derived by taking into account the surrounding elastic environment. Hence, the computational domain is part of a much bigger sample, and artificial boundary effects can be eliminated. In the continuum limit, these boundary conditions coincide with the Neumann boundary condition for continuum elasticity models. We have restricted our discussions to static problems. Extension to dynamic problems will be considered in future works. \medskip \noindent\textbf{Acknowledgment.} The work of J.L. was supported in part by the Alfred P. Sloan foundation and National Science Foundation under award DMS-1312659. \bibliographystyle{amsxport}
\section{Introduction} In the past few years, the heavy quark effective theory (HQET) \cite{neubert,georgi,HQET} has enjoyed much success in treating many aspects of the phenomenology of heavy hadrons \cite{baryons,LUKE}. The elegant tensor formalism developed by Georgi, and used extensively by others, has proven to be a very powerful tool for applications of HQET. There remains, however, one area that is yet to be treated by this tensor formalism, namely that of the strong decays of these hadrons. Clearly, since these decays, by their very nature, involve non-perturbative QCD, we do not expect HQET to allow us to calculate absolute decay rates. However, it will be useful in examining ratios of decay rates. The question of the ratios of decay rates has been addressed by Isgur and Wise \cite{IW}. In their article, they noted that amplitudes for strong decays of heavy mesons were proportional to sums of products of four Clebsch-Gordan coefficients that arise from recoupling of angular momenta in the parent and daughter hadrons. In fact, their result can be expressed slightly more compactly as a $6$-$J$ symbol. Their argument makes use of the fact that the heavy quark is a spectator in the decay of the heavy hadron, so that only the light component of the hadron, the so-called brown muck, takes an active part in the decay. At the hadronic level, a heavy hadron of total spin $S$ decays to one of spin $S^\prime$, with a light hadron of total angular momentum $S_h$. The daughter hadrons are in a relative $L$-wave, and conservation of angular momentum gives \begin{equation} {\bf S_h+L=J_h, \,\,\, S_\ell^\prime+S_Q=S^\prime}, \end{equation} with \begin{equation} {\bf S^\prime+J_h=S}, \end{equation} where $S_\ell^\prime$ is the spin of the brown muck in the daughter hadron, and $S_Q$ is the spin of the heavy quark.
This combination of angular momenta may be represented symbolically as \begin{equation} \left[\left[S_hL\right]_{J_h}\left[S_\ell^\prime S_Q\right]_{S^\prime}\right]_S.\nonumber \end{equation} On the other hand, one can regard this process as proceeding entirely at the level of the brown muck, since the heavy quark is a spectator in the process, so that \begin{equation} {\bf S_\ell^\prime+J_h=S_\ell,\,\,\, S_\ell+S_Q=S} \end{equation} or \begin{equation} \left[\left[\left[S_hL\right]_{J_h}S_\ell^\prime\right]_{S_\ell}S_Q\right]_S. \end{equation} The overlap between these two `wave functions' or coupling schemes is \begin{equation} (-1)^{(S_Q+S_\ell^\prime+J_h+S)}\sqrt{(2S^\prime+1)(2S_\ell+1)} \left\{\matrix{S_Q&S_\ell^\prime&S^\prime\cr J_h&S&S_\ell\cr}\right\}. \end{equation} This object is proportional to the strong matrix element, and the constants of proportionality are the same for the four decays that are possible between two different multiplets. That is, there exists a single set of proportionality constants for the four decays \begin{equation} \left(S_\ell\pm 1/2\right) \to \left(S_\ell^\prime\pm 1/2\right) +h. \end{equation} In this article, we show how to use the tensor method to obtain the same information in a manner that we find somewhat more compact than the `spin-counting' method. In addition, the specific forms of the amplitudes can be obtained using this formalism. Furthermore, the full power of the tensor formalism may then be brought to bear on these processes. For instance, it should be possible to treat the $1/m_Q$ corrections, as well as the radiative ones, to the decay amplitudes.
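For illustration, the recoupling coefficients above can be evaluated with standard angular momentum tools. Taking as an example a parent doublet with $S_\ell = 3/2$, $S = 2$ decaying to the ground-state doublet ($S_\ell^\prime = 1/2$) by $D$-wave pion emission ($S_h = 0$, $L = J_h = 2$), the squared overlaps for the two allowed daughter spins $S^\prime = 0, 1$ come out as $2/5$ and $3/5$, and sum to one by the orthogonality of the $6$-$J$ symbols:

```python
from sympy import Rational, simplify
from sympy.physics.wigner import wigner_6j

half = Rational(1, 2)
SQ, Slp, Jh, S, Sl = half, half, 2, 2, Rational(3, 2)  # sample quantum numbers

weights = []
for Sp in (0, 1):  # the two spins of the daughter doublet
    sixj = wigner_6j(SQ, Slp, Sp, Jh, S, Sl)
    weights.append((2 * Sp + 1) * (2 * Sl + 1) * sixj ** 2)
print(weights, simplify(sum(weights)))  # [2/5, 3/5], total 1
```

These are the weights behind the familiar spin-counting prediction that the spin-2 member of the $S_\ell=3/2$ doublet decays to the two ground-state mesons in the ratio $2:3$, up to phase-space factors.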
\section{Tensor Formalism} \subsection{General Formalism} In general, we are interested in a matrix element of the form \begin{equation} {\cal M}=\left<{\cal X}(p) {\cal H}^\prime_Q(v^\prime)\left|{\cal O}_s\right| {\cal H}_Q (v)\right>, \end{equation} where ${\cal X}$ is a light hadron, ${\cal H}^\prime_Q$ and ${\cal H}_Q$ are heavy hadrons, and ${\cal O}_s$ is the operator responsible for the strong decay. The problem in trying to say anything useful about this lies in the fact that ${\cal O}_s$ is, in general, a complicated object that is full of non-perturbative QCD dynamics, and about which we know very little. In general, this operator will involve all of the sub-structure of the hadron in a non-trivial way. Despite this difficulty, we do know that ${\cal O}_s$ must be a Lorentz scalar, as well as flavor singlet in all flavors of quarks. In particular, it is flavor singlet in the heavy quark. For the purposes of this discussion, and without any loss of generality, we can write ${\cal O}_s$ as \begin{equation}\label{operator} {\cal O}_s=\sum_ia_i\overline{Q}\Gamma_i Q L_i, \end{equation} where $\Gamma_i$ is a general Dirac matrix, and is one of $1,\,\gamma_\mu,\, \sigma_{\mu\nu}, \,\gamma_\mu\gamma_5,\,\gamma_5$, and $L_i$ contains all of the dynamics involving the brown muck. The $a_i$ are unknown constants. We have written ${\cal O}_s$ in this form in order to explicitly display the heavy quark part. Each $L_i$ has the same Lorentz structure as the corresponding $\Gamma_i$, to ensure that ${\cal O}_s$ is a Lorentz scalar. Note that the $L_i$ are, in principle, many body operators, as the structure of the brown muck is expected to be complicated. While it may be tempting to associate the $\overline{Q}\gamma_\mu Q$ term of ${\cal O}_s$, for instance, with `one-gluon physics', we refrain from making such identifications. This is because we view eqn. 
(\ref{operator}) simply as a way of parametrizing our ignorance of strong interaction dynamics, and make no interpretations of the physics that could lead to each term. Since we can represent the heavy hadrons as ${\rm heavy \,part}\times{\rm light \,part}$, such as in the Falk representation \cite{falk}, the matrix element of each term in ${\cal O}_s$ factorizes. This means that although the interactions involving the light component are complicated, we can absorb these into a set of unknown form factors (as has been done for weak decays) or unknown coupling constants (for strong decays). All that is left for us to deal with are the heavy quark components, which we know how to treat. Furthermore, we also know how to include corrections due to the finite mass of the heavy quark. However, there are still five terms in ${\cal O}_s$. What helps us further in our treatment of the strong process is the fact that, at leading order in HQET ({\it i. e.}, in the limit that the mass of the quark $Q$ goes to infinity), the heavy quark will act as a spectator in the decay. In particular, its spin indices are unaffected by the decay (this is the same physics contained in the spin-coupling scheme described above). Thus, the only possible form that can contribute is the scalar contribution, $\Gamma_i=1$. Beyond leading order we would expect other terms to contribute. This also means that the light part of the matrix element is simplified, as the operator concerned is a Lorentz-scalar. This identification, that only $\Gamma_i=1$ above can contribute at leading order, is the key to applying the tensor formalism to these decays. All else is now relatively simple, as we know how to `calculate' matrix elements for any arbitrary $\Gamma$, as well as how to include various kinds of corrections. 
We close this section by noting that the coupling constants to which we have alluded, which are essentially the matrix elements of the light part of the decay operator, depend only on the brown muck, and are therefore independent of the mass of the heavy quark. Thus, for instance, the same set of coupling constants would be valid for decays of hadrons containing $c$ quarks and for hadrons containing $b$ quarks. \subsection{Kinematic Questions} HQET, in conjunction with chiral perturbation theory (ChPT), has been applied to the strong single (and double) pion decays of heavy hadrons \cite{goity,falk1,falk2}. In that treatment, the requirement that the pion momentum be small, combined with the `infinite' masses of the parent and daughter hadrons, leads to the fact that the velocity of the heavy daughter hadron is the same as that of the parent. This is in contrast with the weak decays of these states, in which the heavy quark can receive a large momentum change from the emitted virtual $W$: the velocities of the parent and daughter hadrons are different. In the HQET/ChPT formalism for strong decays, there are therefore two independent kinematic variables, $v$, the velocity of the heavy hadrons, and $p$, the momentum of the pion. In the present formalism, we want to be able to treat the decays of a heavy hadron to another heavy hadron, with the emission of a `light' hadron that may be any of the infinite tower of excited states. Thus, if the light daughter is sufficiently excited, it can provide the large impulse required to bring about a velocity change in the heavy hadrons. In fact, note that if we were to consider decays of charmed mesons to the $a_2$, say, the mass of the light daughter hadron is already a sizable fraction (70\%, in the case of decays to the ground state charmed mesons) of the mass of the heavy daughter hadron.
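Even for pion emission the recoil is not negligible. The following estimate uses approximate, rounded masses (illustrative inputs, in GeV) for a $D_2^*\to D\pi$ decay; the pion momentum comes out near half a GeV, so the velocity change of the heavy meson, of order $p/m_D$, is appreciable:

```python
import math

# Approximate, rounded masses in GeV (illustrative inputs).
M, mD, mpi = 2.461, 1.865, 0.140

# Standard two-body decay momentum in the parent rest frame.
p = math.sqrt((M**2 - (mD + mpi)**2) * (M**2 - (mD - mpi)**2)) / (2 * M)
print(p, p / mD)  # roughly 0.51 GeV and 0.27
```

Since the splittings of the heavy spectrum, and hence $p$, are independent of the heavy quark mass, the recoil $|v - v^\prime| \sim p/m_Q$ scales as $1/m_Q$, as discussed next.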
It therefore appears necessary to make use of full momentum conservation through \begin{equation} m_{D^{(j)}}v=m_{D}v^\prime+p, \end{equation} where $v$ is the velocity of the parent, $v^\prime$ is that of the heavy daughter, and $p$ is the momentum of the light daughter. We could therefore use two of the three quantities $v$, $v^\prime$ and $p$ as independent kinematic variables. In this case, since the velocity of the heavy hadron has changed, we are apparently implicitly including $1/m_Q$ (and higher) corrections that are of a purely kinematic nature, and which have no effect on the spin symmetry of HQET. We note, however, that it has become customary to use the physical momentum of the light hadron in examining these decays. This amounts, in essence, to a second choice of kinematics. We note, however, that since the splittings between states of the heavy spectra are independent of the mass of the heavy quark, so too is $p$, the momentum of the light hadron produced in the decay. This means that $v-v^\prime$ must scale as $1/m_Q$. For the purposes of `power counting', it is therefore more convenient to use $p$ as one of the variables, instead of $v-v^\prime$. We have therefore chosen $p$ and $v$ as our kinematic variables. We close by noting that the choice of kinematics will probably become more important for the consideration of $1/m_Q$ corrections. \section{Meson Decays} The starting point for this discussion is the representation of the heavy meson states. For concreteness, let us examine decays of excited $D^{**}$ mesons to ground state $D^{(*)}$ mesons. For these, we use the representations constructed by Falk \cite{falk}. An excited $D$ meson with total angular momentum $J$ will, in general, be represented by an object linear in a polarization tensor, $\eta^{\mu_1\dots\mu_J}(v)$. This polarization tensor is symmetric, transverse and traceless. 
The latter two properties are expressed by \begin{equation} v_{\mu_1}\eta^{\mu_1\dots\mu_J}(v)=0,\,\, g_{\mu_1\mu_2}\eta^{\mu_1\dots\mu_J}(v)=0. \end{equation} For a state consisting of a heavy quark $Q$ and a light component with the quantum numbers of an antiquark, the specific representation of any particular state will depend on the angular momentum $j$ of the light component (antiquark) of the state. It is thus more convenient to refer to $j$ than to $J$, since there will be a degenerate doublet of states with $J=j\pm 1/2$. The full details of the representations can be found in Falk's article \cite{falk}. We illustrate the tensor method for calculating strong decay matrix elements by examining two specific sets of decays: the generalization to other cases should be obvious. We begin by looking at decays involving single pions, so that we are interested in the matrix element \begin{equation} {\cal M}=\left<D(v^\prime)\pi(p)\left|\overline{c} c\right|D^{(j)}(v)\right>. \end{equation} As identified in the previous section, we are taking the heavy quark operator responsible for the decay as $\overline{c} c$ (and a light scalar operator is understood as multiplying $\overline{c} c$). Note, too, that we are explicitly not using the `chiral limit' of soft pions, as we allow $p$ to be large. In other words, the velocity of the heavy daughter meson is not the same as that of the parent, and momentum is conserved explicitly through \begin{equation} m_{D^{(j)}}v=m_{D}v^\prime+p. \end{equation} In terms of the trace formalism, the matrix element of interest is \begin{equation} \left<D(v^\prime)\pi(p)\left|\overline{c}c\right|D^{(j)}(v)\right>=\sqrt{M_DM_{D^{(j)}}} {\rm Tr}\left[\Pi(p){\cal A}_{{\mu_1\dots\mu_k}}\overline{{\cal D}}(v^\prime){\cal M}_{D^{(j)}}^{{\mu_1\dots\mu_k}}(v)\right], \end{equation} where ${\cal D}(v^\prime)$ is the matrix representation of the meson $D$.
The matrix ${\cal A}_{{\mu_1\dots\mu_k}}$ can only have the form \begin{equation} {\cal A}_{{\mu_1\dots\mu_k}}=p_{\mu_1}\dots p_{\mu_k}, \end{equation} while the matrix $\Pi(p)$ must represent the final state pion. The simplest, non-redundant form allowable is \begin{equation} \Pi(p)=a\gamma_5, \end{equation} where the constant $a$ is independent of the mass of the heavy quark, by virtue of our chosen normalization. One could also include a term in $\slash{p}$, but this is redundant. Thus, the matrix element is \begin{equation} \left<D(v^\prime)\pi(p)\left|\overline{c}c\right|D^{(j)}(v)\right>=a\sqrt{M_DM_{D^{(j)}}} {\rm Tr}\left[\gamma_5\overline{{\cal D}}(v^\prime){\cal M}_{D^{(j)}}^{{\mu_1\dots\mu_k}}(v)\right] p_{\mu_1}\dots p_{\mu_k}. \end{equation} Due to the spin symmetry of HQET, the decays to the corresponding vector meson $D^*$ are also described by the same coupling constant $a$, and the corresponding matrix element is \begin{equation} \left<D^*(v^\prime,\varepsilon)\pi(p)\left|\overline{c}c\right|D^{(j)}(v)\right>=a\sqrt{M_{D^*}M_{D^{(j)}}} {\rm Tr}\left[\gamma_5\overline{{\cal D}^*}(v^\prime){\cal M}_{D^{(j)}}^{{\mu_1\dots\mu_k}}(v)\right] p_{\mu_1}\dots p_{\mu_k}. \end{equation} Thus, these four decays are all described in terms of a single, unknown, nonperturbative constant $a$. We now turn to meson decays that are not as simple. We limit our discussion to decays involving light vector mesons ($\rho$, for instance), but the generalization to light hadrons of arbitrary spin should be clear. The matrix element for such a decay (still considering decays to the ground state heavy doublet) is \begin{eqnarray}\label{rho1} \left<D^{(*)}(v^\prime)\rho(p,\epsilon)\left|\overline{c}c\right|D^{(j)}(v)\right>=&&\sqrt{M_{D^{(*)}}M_{D^{(j)}}}\nonumber\\ &&\times{\rm Tr}\left[{\cal R}(p)\epsilon^{*\nu}{\cal A}_{\nu{\mu_1\dots\mu_k}}\overline{{\cal M}}_{D^{(*)}}(v^\prime) {\cal M}_{D^{(j)}}^{{\mu_1\dots\mu_k}}(v)\right].
\end{eqnarray} The most general form for the matrix ${\cal A}_{\nu{\mu_1\dots\mu_k}}$ is \begin{equation} {\cal A}_{\nu{\mu_1\dots\mu_k}}=p_{\mu_1}\dots p_{\mu_{k-1}}\left[av_\nu p_{\mu_k}+ b\gamma_\nu p_{\mu_k}+cg_{\nu\mu_k}\right], \end{equation} while ${\cal R}(p)=1$ is the most general, non-redundant form that represents the $\rho$ meson (the polarization vector of the $\rho$ appears explicitly in eqn. (\ref{rho1})). For a decay in which the parent belongs to one of the $\left(0^-,1^-\right)$ or $\left(0^+,1^+\right)$ multiplets, the term $c g_{\nu\mu_k}$ is absent, as there are then no indices on the matrix representation of the parent hadron. We close this section with a brief discussion of the relationship between the formalism presented here and that of the combined heavy quark effective theory and chiral perturbation theory, for decays involving pions. It is clear that the two approaches are attacking the problem from somewhat different starting points. What we have presented here does not require any constraint on the momentum of the pion produced in the decay, and so may be considered as a formalism that automatically includes all the powers of pion momentum that would arise in a chiral expansion. Note, for instance, that for the $D$-wave decays of the $\left(1^+,2^+\right)$, we would write down the form that we have written above, while in the chiral approach, operators with two powers of the pion momentum must be explicitly constructed \cite{falk1,falk2}. Finally, we note that it is essentially trivial to include two or more pions in any of these decays using this formalism. \section{Baryon Decays} The case of baryon decays may best be subdivided into two separate classes. The first set of decays that we will treat are those in which the heavy daughter hadron is a baryon (such as $\Lambda_b^*\to\Lambda_b\rho$), while in the second class, the heavy daughter hadron will be a meson (such as $\Lambda_b^*\to pB$). 
\subsection{Heavy Daughter Baryons\label{baryons2}} As with the meson decays, our starting point is the representation of the baryon states. We will simply borrow the representations constructed by Falk. We note, however, that we must divide our baryons into two classes, those with `natural' parity, and those with `unnatural' parity. This description is determined by the spin and parity of the brown muck, denoted $j^P$. If $P=(-1)^j$, the baryon is a natural one, while if $P=(-1)^{(j+1)}$, the baryon is unnatural. The need for this division into natural and unnatural baryons will become clear shortly. Consider the decay $\Sigma_b^{(j)}\to\Lambda_b\pi$, which is described by the matrix element \begin{equation} \label{amdj} \left<\Lambda_b(v^\prime)\pi(p)\left|\overline{b}b\right|\Sigma_b^{(j)}(v)\right>=\overline{u}(v^\prime) R_{{\mu_1\dots\mu_j}}(v) {\cal A}^{{\mu_1\dots\mu_j}}, \end{equation} where the spinor-tensor $R_{{\mu_1\dots\mu_j}}(v)$ represents both states of the doublet, and ${\cal A}^{{\mu_1\dots\mu_j}}$ contains all of the strong interaction dynamics. Since there is a pion in the decay, and assuming that the $\Lambda_b$ is the ground state, the quantity ${\cal A}^{{\mu_1\dots\mu_j}}$ must be a pseudo-tensor if the $\Sigma_b^{(j)}$ is a natural baryon, or a tensor if it is unnatural. The forms that can be constructed in the two cases are quite different. Let us now examine some more specific examples. Consider the decay $\Sigma_b^{(*)}\to\Lambda_b\pi$, where $\Sigma_b^{(*)}$ belongs to the $\left(1/2^+,3/2^+\right)$ doublet. The matrix element is \begin{equation} \left<\Lambda_b(v^\prime)\pi(p)\left|\overline{b}b\right|\Sigma_b^{(*)}(v)\right>=\overline{u}(v^\prime) R_{\mu}(v) r^{\mu}, \end{equation} with $r^\mu$ a vector, which can only have the form \begin{equation} r_\mu=ap_\mu.
\end{equation} On the other hand, if the $\Sigma_b^{(*)}$ belongs to the $\left(1/2^-,3/2^-\right)$ doublet, then $r^\mu$ would be a pseudo-vector, which cannot be constructed from the quantities we have at our disposal. Thus \begin{equation} \left<\Lambda_b(v^\prime)\pi(p)\left|\overline{b}b\right|\Sigma_b^{(1/2^-,3/2^-)}(v)\right>=0. \end{equation} The generalization of this to parent hadrons of higher spin is easy, since then ${\cal A}^{{\mu_1\dots\mu_j}}$ of eqn. (\ref{amdj}) becomes \begin{equation} {\cal A}_{{\mu_1\dots\mu_j}}=ap_{\mu_1}\dots p_{\mu_j} \end{equation} for parents of unnatural parity, or \begin{equation} {\cal A}_{{\mu_1\dots\mu_j}}=0 \end{equation} for parents of natural parity. In the case of the heavy baryons of natural parity, the amplitudes for decays to the ground state with the emission of a single pion vanish at leading order, and should first be non-zero at order $1/m_Q$. Thus, if these states have no other open channels into which they can decay, they should be quite narrow. For decays to final states that are not the ground state, such as to the $(1/2^+,3/2^+)$ multiplet, the decay amplitude is \begin{equation} \left<\Lambda_b^{(1/2^+,3/2^+)}(v^\prime)\pi(p)\left|\overline{b}b\right|\Sigma_b^{(j)}(v)\right>= \overline{R}_\nu(v^\prime) R_{{\mu_1\dots\mu_j}}(v) {\cal A}^{\nu{\mu_1\dots\mu_j}}. \end{equation} If the parent hadron has natural parity, then ${\cal A}^{\nu{\mu_1\dots\mu_j}}$ is a tensor (because the daughter has unnatural parity), and takes the form \begin{equation} {\cal A}_{\nu{\mu_1\dots\mu_j}}=p_{\mu_1}\dots p_{\mu_{j-1}}\left[ag_{\nu\mu_j}+bp_\nu p_ {\mu_j}\right]. \end{equation} For parents of unnatural parity, \begin{equation} {\cal A}_{\nu{\mu_1\dots\mu_j}}=ap_{\mu_1}\dots p_{\mu_{j-1}}\varepsilon_{\nu\mu_j\alpha\beta} v^\alpha p^\beta. \end{equation} As final examples of the application of the formalism to this kind of decay we consider decays to $\rho$ mesons. 
The matrix element for decays to the ground state is \begin{equation} \left<\Lambda_b(v^\prime)\rho(p,\epsilon)\left|\overline{b}b\right|\Sigma_b^{(j)}(v)\right>= \overline{u}(v^\prime) R_{{\mu_1\dots\mu_j}}(v)\epsilon_\nu^* {\cal A}^{\nu{\mu_1\dots\mu_j}}. \end{equation} If the parent has natural parity, then ${\cal A}^{\nu{\mu_1\dots\mu_j}}$ is a tensor and takes the form \begin{equation} {\cal A}_{\nu{\mu_1\dots\mu_j}}=p_{\mu_1}\dots p_{\mu_{j-1}}\left[ag_{\nu\mu_j}+bv_\nu p_ {\mu_j}\right]. \end{equation} For a parent of unnatural parity, \begin{equation} {\cal A}_{\nu{\mu_1\dots\mu_j}}=ap_{\mu_1}\dots p_{\mu_{j-1}}\varepsilon_{\nu\mu_j\alpha\beta} v^\alpha p^\beta. \end{equation} For decays to the $(1/2^+,3/2^+)$ multiplet, we obtain \begin{equation} \left<\Lambda_b^{(1/2^+,3/2^+)}(v^\prime)\rho(p,\epsilon)\left|\overline{b}b\right|\Sigma_b^{(j)}(v)\right>= \overline{R}_\nu(v^\prime) R_{{\mu_1\dots\mu_j}}(v)\epsilon_\alpha^* {\cal A}^{\nu\alpha{\mu_1\dots\mu_j}}. \end{equation} For parents of unnatural parity, \begin{eqnarray} {\cal A}_{\nu\alpha{{\mu_1\dots\mu_j}}}&=&p_{\mu_1}\dots p_{\mu_{j-2}} \left\{ ag_{\nu\mu_j}g_{\alpha\mu_{j-1}}\right.\nonumber\\ &+&\left.p_\beta\left[bg_{\nu\alpha}g_{\beta\mu_j}+cg_{\nu\mu_j}g_{\alpha\beta}+dg_{\alpha\mu_j}g_{\nu\beta} +eg_{\beta\mu_j}p_\alpha v_\nu\right]\right\}, \end{eqnarray} while for parents of natural parity, \begin{eqnarray} {\cal A}_{\nu\alpha{\mu_1\dots\mu_j}}&=&p_{\mu_1}\dots p_{\mu_{j-1}}\left\{a \varepsilon_{\mu_j\nu\alpha\beta} p^\beta\right.\nonumber\\ &+&\left.v^\beta p^\rho\left[bp^\alpha\varepsilon_{\mu_j\nu\beta\rho}+ cv^\nu\varepsilon_{\mu_j\alpha\beta\rho}+dp^{\mu_j}\varepsilon_{\nu\alpha\beta\rho}\right] \right\}. \end{eqnarray} \subsection{Light Daughter Baryons} For these decays, as with the decays to heavy baryons, it will again be useful to divide the light baryons into natural and unnatural baryons, with a slight modification of the definition. 
A light baryon of spin $J$ and parity $P$ is considered to be natural if $P=(-1)^{J-1/2}$, unnatural if $P=(-1)^{J+1/2}$. In addition, for states with spin greater than 1/2, we employ the generalized Rarita-Schwinger fields $u_{{\mu_1\dots\mu_n}}(p)$, which satisfy \begin{eqnarray} \slash{p}u_{{\mu_1\dots\mu_n}}(p)&=&m\,u_{{\mu_1\dots\mu_n}}(p),\nonumber\\ \gamma^{\mu_1}u_{{\mu_1\dots\mu_n}}(p)&=&0,\nonumber\\ p^{\mu_1}u_{{\mu_1\dots\mu_n}}(p)&=&0, \end{eqnarray} where $n=J-1/2$, and we remind the reader that this object is symmetric in all of its Lorentz indices. We first consider the decays $\Lambda_b\to B^{(*)} N$, where $N$ is the ground state nucleon, and the parent represents any of the states that belong to one of the $1/2^+$ singlets. The amplitude for the decay is \begin{equation} \left<B^{(*)}(v^\prime)N(p)\left|\overline{b}b\right|\Lambda_b(v)\right>= \overline{u}(p) {\cal A} \overline{{\cal M}}_{B^{(*)}}(v^\prime) u(v), \end{equation} where ${\cal A}$ is the most general scalar matrix that can be constructed, and ${\cal M}_{B^{(*)}}(v^\prime)$ is the matrix representing the $B^{(*)}$ states. Without loss of generality, we can choose ${\cal A}=a$, a constant. We can generalize this for the decay of any excited state, such as $\Lambda_b^{(j)}\to B^{(*)}N$. The amplitude is \begin{equation} \left<B^{(*)}(v^\prime)N(p)\left|\overline{b}b\right|\Lambda_b^{(j)}(v)\right>= \overline{u}(p) {\cal A}^{{\mu_1\dots\mu_j}}\overline{{\cal M}}_{B^{(*)}}(v^\prime) R_{{\mu_1\dots\mu_j}}(v). \end{equation} For parents of natural parity, ${\cal A}^{{\mu_1\dots\mu_j}}$ is a tensor, and takes the form \begin{equation} {\cal A}^{{\mu_1\dots\mu_j}}=p^{\mu_1}\dots p^{\mu_{j-1}}\left[ap^{\mu_j} +b \gamma^{\mu_j}\right]. \end{equation} For parents of unnatural parity, ${\cal A}^{{\mu_1\dots\mu_j}}$ is a pseudotensor, which we can construct very easily (in this case) by using the tensor of the preceding equation, and multiplying it by $\gamma_5$.
Thus, for parents of unnatural parity, \begin{equation} {\cal A}^{{\mu_1\dots\mu_j}}=p^{\mu_1}\dots p^{\mu_{j-1}}\left[ap^{\mu_j} +b \gamma^{\mu_j}\right]\gamma_5. \end{equation} Since the use of the $\gamma_5$ allows us to go from pseudotensor to tensor for these decays, in what follows we will discuss only the decays of parent baryons with natural parity. The last set of decays we consider is $\Lambda_b^{(j)}\to B^{(*)}\Delta$. For the amplitude, we write \begin{equation} \left<B^{(*)}(v^\prime)\Delta(p)\left|\overline{b}b\right|\Lambda_b^{(j)}(v)\right>= \overline{u}_\nu(p) {\cal A}^{\nu{\mu_1\dots\mu_j}} \overline{{\cal M}}_{B^{(*)}}(v^\prime)R_{{\mu_1\dots\mu_j}}(v). \end{equation} Since the $\Delta$ has unnatural parity, ${\cal A}^{\nu{\mu_1\dots\mu_j}}$ must be a pseudotensor (for parents of natural parity), and takes the form \begin{eqnarray} {\cal A}_{\nu{\mu_1\dots\mu_j}}&=&\left\{p_{\mu_1}\dots p_{\mu_{j-1}}\left[av_\nu \gamma_{\mu_j}+bg_{\nu\mu_j}+cv_\nu p_{\mu_j}\right]\right.\nonumber\\ &+&\left.dp_{\mu_1}\dots p_{\mu_{j-2}}g_{\nu\mu_{j-1}}\gamma_{\mu_j}\right\} \gamma_5. \end{eqnarray} \section{Discussion and Conclusion} In the previous sections, we have outlined how the tensor formalism of HQET may be used to examine the strong decays of heavy hadrons. There remain a few points of the formalism that warrant some discussion. First, note that we have not presented any decay rates. Nevertheless, we have examined many cases for these decays, and have found that the ratios of decay rates predicted by Isgur and Wise are indeed obtained. We have not treated the decays of heavy mesons to a heavy baryon and a light antibaryon. However, the formalism for these decays is very similar to that of the last subsection. There is one subtlety involved in some of the matrix elements we have shown. Let us examine the case of the $(1/2^-,3/2^-) \to 1/2^+\rho$, where the $1/2^+$ is the heavy baryon singlet.
In this case, the $6j$ symbol becomes $$\left\{\matrix{1/2&0&1/2\cr J_h&S&1\cr}\right\},$$ which implies that $J_h$ can only have the value 1, regardless of the value of $S$. However, in our formalism, we have used two independent coupling constants, implying two independent amplitudes. The resolution of this apparent contradiction lies in realizing that for this decay, $J_h=1$ can be constructed in two different ways, with $L=0$ or $L=2$ (for the $\rho$, $S_h=1$). Thus, there are indeed two independent amplitudes, corresponding to the two independent partial waves, but the ratio of the two $L=0$ decay amplitudes is the same as that of the two $L=2$ decay amplitudes. In their formalism, Isgur and Wise have pointed out that the total widths for the decays of the two members of a heavy spin multiplet to the two members of another multiplet are identical. In principle, we can obtain this general result in the present formalism, but a proof is beyond the scope of the present article, and is left for possible future work. We note, however, that for all of the cases we have examined explicitly, the sum rule has been found to be valid, as expected. In subsection \ref{baryons2}, we saw that there were some decays that vanished exactly at this order in the $1/m$ expansion. For such amplitudes, the $1/m$ `corrections' are therefore the leading terms, and we believe that these corrections should be studied. In addition, it is important to examine the $1/m$ corrections for the non-vanishing amplitudes, as these may lead to large departures from the leading order predictions. This has been done by Falk \cite{falk1,falk2} for the D-wave decays of the $(1^+,2^+)$ $D^{**}$ mesons to the ground states, in the framework of the combined HQET and chiral perturbation theory. It is of some interest to see the kind of contributions that can arise in the present formalism. In particular, an $S$-wave component is expected for one of these decays.
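The selection rule quoted above, that this $6j$ symbol forces $J_h=1$ whatever the value of $S$, follows from the triangle conditions on the four triads of the symbol alone. A minimal sketch of that check (illustrative only; the value $S=1/2$ below is an arbitrary choice, and the conclusion does not depend on it):

```python
def triangular(a, b, c):
    """Triangle rule |a-b| <= c <= a+b, with the triad summing to an integer."""
    return abs(a - b) <= c <= a + b and (a + b + c) % 1 == 0

def sixj_allowed(a, b, c, d, e, f):
    """Selection rules of {a b c; d e f}: triads (abc), (aef), (dbf), (dec)."""
    return all(triangular(*t) for t in [(a, b, c), (a, e, f), (d, b, f), (d, e, c)])

S = 0.5  # illustrative value; the conclusion is independent of S
allowed = [Jh for Jh in (0, 1, 2, 3) if sixj_allowed(0.5, 0, 0.5, Jh, S, 1)]
# the triad (J_h, 0, 1) forces J_h = 1, so allowed == [1]
```

Only the triad $(J_h,0,1)$ matters here: it requires $J_h=1$.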
The coupling constants we have introduced are all independent of the mass of the heavy quark present in the parent and daughter hadrons. By virtue of the heavy flavor symmetry, these coupling constants are therefore valid both for charm and beauty decays. Thus, knowing some charmed decay rates, we could predict the corresponding beauty decay rates. Alternatively, we could attempt to extend this formalism down to strange hadrons, treating the $s$ quark as heavy, to glean some information about what to expect in charm. In this case one would certainly expect $1/m$ corrections to be very important. Finally, we close on a very speculative note. The key to the formalism presented herein was the identification of the heavy quark current that plays a role in the decay. For the strong decays, this current was identified as being the unit Dirac matrix. It is possible that this idea can be extended to, for example, electromagnetic processes of heavy mesons. In the decay $D^*\to D\gamma$, for instance, it is expected that the photon will couple both to the heavy quark and to the brown muck. We know what to do in the first case, but not in the second. In the second case, however, we may still be able to use the idea that for this part of the current, the heavy quark is a spectator, so that the heavy quark current is again unity, and one is then left with the matrix elements of the light current, which may be parametrized in some way. This has been done, to some extent, by a number of authors \cite{radiative}. Whether this approach leads to any further development remains to be seen. \section*{Acknowledgement} Thanks go to J. Goity and N. Isgur for discussions and comments, and for reading the manuscript. Thanks also go to Institut des Sciences Nucl\'eaires, Grenoble, France, where part of this work was done. 
This work was supported by the National Science Foundation through grant \# PHY 9457892, and by the Department of Energy, through contracts DE-AC05-84ER40150 and DE-FG05-94ER40832. \vskip 0.25in
\section{Conclusions} \label{s:conclusions} We have computed a doubly-localized solution of plane Couette flow, which consists of two symmetrically-opposed $\Lambda$-shaped vortices whose legs swirl in opposite directions and are tilted in both the spanwise and wall-normal directions. The solution roughly resembles in size and internal structure the smallest sustained turbulent spots simulated by \citet{LundbladhJFM91} at similar Reynolds number and the doubly-localized, chaotically wandering edge states of \cite{DuguetPF09} and \cite{SchneiderJFM10}. The streamwise exponential decay and the form of the wall-normal overhang profile are well-approximated by the solution of a linearized equation involving the leading terms of the Navier-Stokes equations. Over a range of Reynolds numbers the solution \revision{lies} on the boundary between states that decay to laminar flow and those that grow to turbulence. \section{Introduction} Since the work of \citet{NagataJFM90} a large number of unstable nonlinear equilibrium, traveling-wave, and periodic-orbit solutions of the Navier-Stokes equations have been computed for a variety of canonical flows including pipe, channel, plane Couette, and square-duct flow. These invariant solutions demonstrate the feasibility and fruitfulness of treating well-resolved direct numerical simulations as very-high-dimensional dynamical systems, and they capture, in a precise and elemental form, a number of important coherent flow structures and dynamical processes. Linear stability analysis shows that these solutions have relatively few unstable modes, and that the solutions and their low-dimensional unstable manifolds impose structure on the dynamics of moderately turbulent flows. See \citet{KawaharaARFM12} for a recent review of this work. 
Most of this work has been done in the context of canonical flows in small computational domains with periodic boundary conditions, resulting in spatially periodic solutions that lie within dynamically invariant periodic subspaces of the same flows on infinite domains. While small periodic `minimal flow units' are useful microcosms for studying turbulence, turbulence in extended domains generally involves large numbers of interacting flow structures, whose dynamic coupling presumably decreases with their separation. Additionally the transition to turbulence in extended domains occurs through the growth of turbulent spots or puffs, consisting of localized patches of unsteady, complex flow within a background of laminar flow \citep{WygnanskiJFM73, TillmarkJFM92,BarkleyPRL05,PhilipPRE11}. These considerations motivate the search for spatially-localized invariant solutions of flows in extended domains. \citet{SchneiderJFM10} found the first known localized solutions, a pair of spanwise-localized, streamwise-periodic equilibrium and traveling-wave solutions of plane Couette flow, further investigated in \citet{SchneiderPRL10}. \citet{AvilaPRL13} found a streamwise-localized relative periodic orbit of pipe flow that closely resembles the transient turbulent puffs of \citet{HofNature06}. \citet{DeguchiJFM13} and \citet{GibsonJFM14} independently found spanwise-localized forms of the periodic EQ7/HVS solution of \citet{ItanoPRL09,GibsonJFM09}. \citet{GibsonJFM14} also presented a number of spanwise-localized and wall-normal-localized traveling waves of channel flow. \cite{KhapkoJFM13} found spanwise-localized relative periodic orbits of the asymptotic suction boundary layer, and \cite{ZammertARX14} found a spanwise- and wall-normal-localized periodic orbit of plane Poiseuille flow. 
This paper presents a span- and streamwise-localized equilibrium solution of plane Couette flow, the first known invariant solution of the Navier-Stokes equations localized in two homogeneous directions. The numerical procedure by which the doubly-localized solution was found is outlined in \refsec{s:numerics}. Properties of the solution are presented in \refsec{s:solutions}, including its exponential localization, its global quadrupolar flow, the geometrical structure of its rolls and streaks, its wall-normal overhang profile, and the role of its instabilities in the transition to turbulence. \section{Computation of doubly-localized solutions} \label{s:numerics} The mathematical formulation and numerical methods are presented in detail in \citet{GibsonJFM14} (GB14); here we present a brief outline. The Reynolds number $\Rey$ for plane Couette flow is defined in terms of half the relative wall speed, the channel half-height, and the kinematic viscosity, so that the walls at $y = \pm 1$ have velocity $\pm 1$ and the laminar flow solution is given by $y \, \be_x$. The total velocity is expressed as a sum of the laminar flow and the deviation from laminar, $\butot = y \, \be_x + \bu$, and henceforth we refer to the deviation $\bu = [u,v,w]$ as velocity. With these assumptions $\bu$ has zero Dirichlet boundary conditions at the walls, and the nondimensionalized Navier-Stokes equations take the form \begin{equation} \frac{\partial \bu}{\partial t} + y \frac{\partial \bu}{\partial x} + v \: \be_x + \bu \cdot \grad \bu = -\nabla p + \frac {1} {\Rey} \nabla^2 \bu, \quad \nabla \cdot \bu = 0. \label{eq:NSE_PCF} \end{equation} The nondimensionalized computational domain is $[-L_x/2, L_x/2] \times [-1, 1] \times [-L_z/2, L_z/2]$ with periodic boundary conditions in the streamwise $x$ and spanwise $z$ directions. Discretization is performed with standard Fourier-Chebyshev spectral methods in space, 3rd-order semi-implicit finite differencing in time, and 2/3-style dealiasing. 
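As a side note on the dealiasing, the 2/3 rule removes aliasing errors from quadratic nonlinear terms by zeroing the upper third of the Fourier spectrum before products are formed. A one-dimensional sketch of the idea (illustrative only; the actual computations use the channelflow code, not this):

```python
import numpy as np

def dealias_23(uhat):
    """Zero Fourier modes with |k| >= N/3 (the 2/3 rule for quadratic terms)."""
    N = uhat.size
    k = np.fft.fftfreq(N, d=1.0 / N)   # integer wavenumbers
    return np.where(np.abs(k) < N / 3.0, uhat, 0.0)

def product_dealiased(u, v):
    """Pointwise product of two periodic fields with 2/3-rule dealiasing."""
    uh = dealias_23(np.fft.fft(u))
    vh = dealias_23(np.fft.fft(v))
    w = np.fft.ifft(uh).real * np.fft.ifft(vh).real
    return np.fft.ifft(dealias_23(np.fft.fft(w))).real

N = 32
x = 2 * np.pi * np.arange(N) / N
# cos(x)*cos(x) = (1 + cos 2x)/2 is recovered exactly: all modes lie below N/3
w = product_dealiased(np.cos(x), np.cos(x))
```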
The computational domain and spatial discretization are specified in terms of $L_x \times L_z$ and the collocation grid $N_x \times N_y \times N_z$. Equilibria are computed as solutions of $\bff^T(\bu) - \bu = 0$, where $\bff^T$ is the time integration of \refeq{eq:NSE_PCF} for a fixed time $T$, and the discretized equations are solved with a Newton-Krylov-hookstep search algorithm \citep{ViswanathJFM07,ViswanathPTRSA09}. \revision{The choice of $T$ is determined by a practical balance in the computational solution of the Newton-step equation: too small a value of $T$ results in weak viscous damping and slow convergence of the iterative GMRES algorithm, but too large a value of $T$ reduces the distance $\| \delta \bu \|$ over which the linearization $\bff^T(\bu + \delta \bu) \approx \bff^T(\bu) + D\bff^T \delta \bu$ is accurate. We have found that $T=O(10)$ is a good balance for a wide variety of flow conditions and Reynolds numbers.} The software \revision{and the numerical data for the doubly-localized solution} are available at {\tt www.channelflow.org} \citep{GibsonJFM08,chflow}. Initial guesses for the doubly-localized solutions were produced by applying streamwise windowing to the spanwise-localized forms of the EQ7 solution from GB14, or two-dimensional windowing to the doubly-periodic EQ7 solution from \citet{GibsonJFM09}. We used the same $\tanh$-based windowing function as in GB14 equation (2.4), replacing $z$ with $x$ for a streamwise windowing function \begin{equation} W(x) = \frac{1}{4} \; \left(1 + \tanh \left(\frac{6(a-x)}{b} + 3\right)\right) \left(1 + \tanh \left(\frac{6(a+x)}{b} + 3\right)\right). \label{eq:window} \end{equation} As noted in GB14, $W(x)$ is even, smooth, and monotonic in $|x|$; it is close to unity for the core region $|x| < a$, transitions smoothly to nearly zero over $a < |x| < a + b$, and approaches zero exponentially as $|x| \rightarrow \infty$.
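For reference, \refeq{eq:window} can be transcribed directly; the sketch below is only an illustration, and the parameter values are arbitrary rather than the ones used in the actual searches:

```python
import numpy as np

def window(x, a, b):
    """tanh-based window: ~1 for |x| < a, decaying to ~0 over a < |x| < a + b."""
    return 0.25 * (1 + np.tanh(6 * (a - x) / b + 3)) * (1 + np.tanh(6 * (a + x) / b + 3))

x = np.linspace(-40, 40, 801)
W = window(x, a=10.0, b=5.0)   # arbitrary core and transition widths
# W is even, close to 1 on the core |x| < a, and nearly 0 beyond |x| = a + b
```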
The nonzero divergence of windowed velocity fields $W(x) \bu(x,y,z)$ is fixed by revising the wall-normal $v$ component to satisfy incompressibility. To create doubly-localized initial guesses from doubly-periodic solutions, we applied the two-dimensional windowing function $W(x)W(z)$ with different length scales for the core and transition regions in the streamwise and spanwise directions. It was considerably more difficult to find doubly-localized solutions from windowed initial guesses than it was to find the spanwise-localized solutions of GB14. Not only do the doubly-localized solutions require doubly-extended domains, but the decay rate of their tails is also slower than for the spanwise-localized solutions (see \refsec{s:xtails}), and thus the computational domains must be larger in both $x$ and $z$. Search results were sensitive both to the wavelengths of the underlying periodic or spanwise-localized solution and to the choice of windowing parameters. The search landscape for doubly-localized solutions is also vastly more complicated and more sensitive to spatial discretization, with many nonzero local minima and many spurious solutions for under-resolved discretizations. Lastly, doubly-localized initial guesses tended to converge to the trivial solution $\bu = 0$ (laminar flow), with the search quickly settling onto streaky flow with very little streamwise variation, and then reducing the magnitude of the streaks to zero. Such streaks are dynamical transients that decay to laminar flow under time evolution, but their decay is slow enough that they nearly satisfy the search equation $\bff^T(\bu) - \bu = 0$ for small $T$, thus attracting nearby guesses to a search path that ultimately leads to $\bu = 0$.
To prevent the search algorithm from being fooled by such transients, we modified the search equation to $(\bff^T(\bu) - \bu)/(\|\bu\|_{3d} - c) = 0$, where $\| \bu \|_{3d}$ is the \revision{energy norm (see \refsec{s:stability})} of the streamwise-varying portion of $\bu$ and $c$ is a parameter set to some fraction of the value of $\| \bu \|_{3d}$ for the initial guess. Our choices for underlying periodicity and windowing parameters were determined by trial and error, guided by the length scales that approximate solutions took on during the search. To mitigate computational costs, we performed trial-and-error calculations in relatively small domains with poor localization ($80 \times 20$ and $O(10^{-1})$ tails at the perimeter) and then extended solutions from successful searches to larger domains where localization is more pronounced ($200 \times 200$ with $O(10^{-3})$ tails), using either continuation in $L_x,L_z$ or simply doubling the computational domain and reapplying windowing. For doubly-localized solutions we found that solutions were reliably robust to changes in discretization when spectral coefficients were retained to $O(10^{-7})$ in $x,z$ and $O(10^{-10})$ in $y$. The solution presented in the following section is the most robust of several we found, in that it converges quickly at higher spatial resolutions and continues smoothly and easily in $L_x,L_z$ and $\Rey$. This solution was found by applying two-dimensional windowing to the doubly-periodic EQ7 solution, refinement to an exact solution by Newton-Krylov-hookstep, and extension to large domains by repeated doubling, windowing, and refinement. 
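The overall structure of such a search, locating zeros of the time-$T$ map residual by Newton iteration, can be illustrated on a toy one-dimensional ODE. This is only a schematic sketch: the actual computations use channelflow's Newton-Krylov-hookstep solver, and the ODE, time step, and initial guess below are invented for illustration.

```python
def flow_map(u, T=1.0, dt=1e-2):
    """Time-T map f^T of the toy ODE du/dt = u - u**3, integrated with RK4."""
    rhs = lambda v: v - v**3
    for _ in range(int(round(T / dt))):
        k1 = rhs(u); k2 = rhs(u + 0.5 * dt * k1)
        k3 = rhs(u + 0.5 * dt * k2); k4 = rhs(u + dt * k3)
        u = u + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return u

def newton_fixed_point(u, tol=1e-12, eps=1e-7):
    """Newton iteration on F(u) = f^T(u) - u, with finite-difference derivative."""
    for _ in range(50):
        F = flow_map(u) - u
        if abs(F) < tol:
            break
        dF = (flow_map(u + eps) - (u + eps) - F) / eps  # forward difference
        u = u - F / dF
    return u

u_eq = newton_fixed_point(0.8)   # converges to the equilibrium u = 1
```

In the full problem the scalar derivative is replaced by Jacobian-vector products estimated from finite differences of $\bff^T$, the linear solve is done by GMRES, and the Newton step is constrained by the hookstep trust region.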
\section{Properties of the doubly-localized solution} \label{s:solutions} \subsection{Global flow} \label{s:global} \begin{figure} {\footnotesize (a)} \hspace{-2mm} \includegraphics[width=0.45\textwidth]{fig1a} {\footnotesize (b)} \hspace{-2mm} \includegraphics[width=0.45\textwidth]{fig1b} \\ {\footnotesize (c)} \hspace{-2mm} \includegraphics[width=0.45\textwidth]{fig1c} {\footnotesize (d)} \hspace{-2mm} \includegraphics[width=0.45\textwidth]{fig1d} \caption{{\bf A doubly-localized equilibrium solution of plane Couette flow} at $\Rey=240$ in a $200 \times 120$ computational domain. (a) Streamwise velocity $u$ in the $y=0$ midplane. The blue-red color axis spans $u \in [-0.5, 0.5]$. (b) Isosurfaces of swirling strength at $s=\pm0.2$ in green/blue, indicating swirling with clockwise/counterclockwise orientation with respect to the positive $x$ axis. The dotted $40 \times 16$ subdomain is shown in detail in \reffig{f:swirling_detail}. (c) Exponential localization, indicated by contours of $y$-integrated energy (see text). Contour levels are set at $10^{-n}$ for $n=0,1,\ldots,5$; the boundaries for $10^{-2}$ through $10^{-5}$ are labeled. (d) Quadrupolar $y$-averaged global flow, shown by a vector plot of $(\bar{u},\bar{w})/\sqrt{\bar{u}^2 + \bar{w}^2}$, where $\bar{u},\bar{w}$ are the $y$ averages of $u,w$. Note that the \revision{vector} spacing in this plot is too coarse to resolve rapid variations in the region surrounding the origin. } \label{f:xzplane_full} \end{figure} \refFig{f:xzplane_full} shows a doubly-localized equilibrium solution of plane Couette flow in a $200 \times 120$ computational domain at $\Rey=240$, discretized with $720 \times 49 \times 1024$ gridpoints. \revision{The solution has the symmetry group $\{e, \sxy, \sz, \sxyz\}$ where \begin{align} \sxy &: [u,v,w](x,y,z) \rightarrow [-u,-v,w](-x,-y,z), \\ \sz &: [u,v,w](x,y,z) \rightarrow [u,v,-w](x,y,-z), \nonumber \end{align} $\sxyz = \sxy \sz$ and $e$ is the identity. 
We use standard angle-bracket notation from group theory to specify groups in terms of their generators, e.g. $\langle \sxy, \sz \rangle = \{e, \sxy, \sz, \sxyz\}$. The doubly-localized solution acquires $\langle \sxy, \sz \rangle$ symmetry from the windowing breaking the symmetries of EQ7 that involve $x$ and $z$ translation, in the same manner as EQ7-2 of GB14.} The streamwise velocity in the $y=0$ midplane, shown in \refFig{f:xzplane_full}(a), is roughly comparable to the dynamically wandering doubly-localized edge state at $\Rey=400$ shown in figure 5 of \citet{SchneiderJFM10}. Both display patterns of wavy streaks that are spanwise narrow and streamwise elongated, and the significant non-laminar structure in both is confined to a roughly $100 \times 20$ subdomain of the flow. \refFig{f:xzplane_full}(c) shows that the solution is exponentially localized in both span- and streamwise directions, via contours of the $y$-integrated energy $e(x,z) = 1/2 \int_{-1}^{1} \bu \cdot \bu \, dy$. The fingers that extend along the $x$ axis are due to small-wavelength, exponentially decaying streaks of streamwise velocity (see \refsec{s:xtails}). The deviation from elliptical contours near the edges \revision{of the computational domain is an artifact of the periodicity of the domain, which, together with the solution symmetries, induces even symmetry of $e$ about $x=\pm L_x/2$ and $z=\pm L_z/2$.} \revision{In larger computational domains we have observed elliptical contours and exponential decay over four orders of magnitude, with comparable decay rates in $x$ and $z$.} \refFig{f:xzplane_full}(d) shows the direction of the $y$-averaged flow $(\bar{u}, \bar{w})$ by a vector plot of $(\bar{u},\bar{w})/\sqrt{\bar{u}^2 + \bar{w}^2}$. Note the quadrupolar character of the $y$-averaged flow, similar to figure 6 of \citet{SchumacherPRE01} and figure 3 of \citet{DuguetPRL13}. 
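Both diagnostics in this figure, the $y$-integrated energy $e(x,z)$ and the $y$-averaged velocities $(\bar{u},\bar{w})$, are simple quadratures over the wall-normal grid. A minimal sketch (our own, not the authors' implementation) using the trapezoidal rule on a possibly nonuniform ascending $y$ grid, e.g. Chebyshev points:

```python
import numpy as np

def _trap_y(f, y):
    """Trapezoidal integral over the wall-normal axis (axis 1).

    f : array of shape (Nx, Ny, Nz); y : ascending grid on [-1, 1],
    possibly nonuniform (e.g. Chebyshev points).
    """
    dy = np.diff(y)
    mid = 0.5 * (f[:, 1:, :] + f[:, :-1, :])   # cell-midpoint values
    return np.einsum("xyz,y->xz", mid, dy)

def y_integrated_energy(u, v, w, y):
    """e(x, z) = 1/2 * integral over y of (u^2 + v^2 + w^2)."""
    return 0.5 * _trap_y(u**2 + v**2 + w**2, y)

def y_average(f, y):
    """Wall-normal average over y in [-1, 1], as used for (ubar, wbar)."""
    return _trap_y(f, y) / 2.0
```

Contours of $\log_{10} e$ then expose the exponential tails directly, as in the localization plot.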
The $\bar{u}, \bar{w}$ flow is streamwise inward along $z = 0$ and spanwise outward along $x = 0$, with a global circulation in each of the four quadrants. The alignment of the $y$-averaged flow with the $x$ and $z$ axes also results from symmetry: $\sz$ symmetry requires that $u$ and $w$ are even and odd in $z$, respectively, about $z=0$, and $\sxy$ symmetry requires that $\bar{u}$ and $\bar{w}$ are odd and even in $x$ about $x=0$. Periodicity in $x$ and $z$ requires the same symmetries about the \revision{$x=\pm L_x/2$ and $z=\pm L_z/2$} edges of the computational domain, so that the $y$-averaged flow aligns with these edges as well \revision{(see \citet{GibsonJFM09})}. \subsection{Internal structure } \label{s:internal} \begin{figure} \begin{tabular}{ccc} \raisebox{22mm}{{\bf z~}} & \raisebox{-0.7mm}{\includegraphics[width=0.0615\textwidth]{fig2a}} & \includegraphics[width=0.85\textwidth]{fig2b} \\ & \raisebox{-2mm}{\bf ~~y} & \hspace{-1.8mm} \raisebox{-6mm}{\includegraphics[width=0.875\textwidth]{fig2c}} \\ & & \raisebox{-2mm}{\bf x} \\ \end{tabular} \caption{{\bf Three-dimensional structure of swirling strength.} Isosurfaces of signed swirling strength at $s=\pm0.2$ in green/blue. The upper-right subfigure is a blow-up of the $x \in [-20, 20]$, $z \in [-8, 8]$ subdomain in \reffig{f:xzplane_full}(b). Dotted lines show the $x$-positions of the streamwise-normal cross sections depicted in \reffig{f:crosssections}, with (a)-(g) at $x = \{0, 2.5, 5, 7.5, 10, 15, 20\}$. Views of the same structure (left) in $y,z$ and (bottom) in $x,y$. } \label{f:swirling_detail} \end{figure} The localized patterns of high- and low-speed streaks shown in \reffig{f:xzplane_full}(a) are generated by the highly localized roll structure illustrated in \reffig{f:xzplane_full}(b). The latter shows isosurfaces of signed swirling strength at $s = \pm 0.2$, about 1/3 of its maximum value. 
\revision{Swirling strength is defined as the magnitude of the imaginary part of the complex eigenvalues of the velocity gradient tensor $\grad\bu$ \citep{ZhouJFM99}. Signed swirling strength includes a $\pm$ sign indicating the orientation of swirling \citep{WuJFM06}, here the sign of the $x$ component of the swirling axis when oriented with the right-hand-rule.} The swirling is highly localized: the magnitude of swirling drops by a factor of roughly ten between the X-shaped isosurfaces and the edges of the dotted box that marks a $40 \times 16$ subdomain. \refFig{f:swirling_detail} shows a detail of the swirling strength in this subdomain with the same plotting conventions as \reffig{f:xzplane_full}(b). The three perspective plots show an overall X-shaped structure composed of two overlapping $\Lambda$-shaped vortices, whose legs swirl in opposite directions and tilt in both the spanwise and wall-normal directions. Small, weaker vortices of opposite sign flank the legs near their ends. \begin{figure} \begin{tabular}{c} {\footnotesize (a)} \includegraphics[width=0.95\textwidth]{fig3a} \\ {\footnotesize (b)} \includegraphics[width=0.95\textwidth]{fig3b} \\ {\footnotesize (c)} \includegraphics[width=0.95\textwidth]{fig3c} \\ {\footnotesize (d)} \includegraphics[width=0.95\textwidth]{fig3d} \\ {\footnotesize (e)} \includegraphics[width=0.95\textwidth]{fig3e} \\ {\footnotesize (f)} \includegraphics[width=0.95\textwidth]{fig3f} \\ {\footnotesize (g)} \includegraphics[width=0.95\textwidth]{fig3g} \\ \hspace{8mm} \includegraphics[width=0.94\textwidth]{fig3h} \end{tabular} \caption{{\bf Streamwise-normal cross sections of velocity.} Arrows indicate $v,w$ velocity, and color indicates streamwise velocity $u$. The scaling of arrow length to $v,w$ magnitude is the same in all graphs. (a-g) show $y,z$ planes at $x = \{0, 2.5, 5, 7.5, 10, 15, 20\}$; the positions of these planes are marked with dotted lines in \reffig{f:swirling_detail}. 
\label{f:crosssections} } \end{figure} \refFig{f:crosssections} further illustrates the three-dimensional structure of the $\Lambda$-shaped vortices and their relation to streamwise streaks. The flow in streamwise-normal cross-sections is shown at $x$ positions indicated by lines marked (a-g) in \reffig{f:swirling_detail}. The $x=0$ plane in (a) shows the $y$-symmetric tips of the two opposed $\Lambda$-shaped vortices, concentrated near $z=0$, each drawing midplane fluid towards the wall to form streaks. As $x$ increases in (b)-(d), the swirling of the legs grows in strength, size, and spacing, and moves from the lower wall towards the upper. By (e) the legs have reached the upper wall and weakened, but the opposite-signed vortices that flank the tips of the legs have grown to their greatest strength. These are positioned at about $z = \pm 3$, and they span the distance between the walls. By $x=20$ in (g) swirling in both the legs and the flanking vortices has died out and all that remains are the streamwise streaks. \subsection{Exponential decay and overhang of the streamwise streaks} \label{s:xtails} \begin{figure} {\footnotesize (a)} \hspace{-2mm} \includegraphics[width=0.45\textwidth]{fig4a} {\footnotesize (b)} \hspace{-2mm} \includegraphics[width=0.45\textwidth]{fig4b} \\ {\footnotesize (c)} \hspace{-2mm} \includegraphics[width=0.45\textwidth]{fig4c} {\footnotesize (d)} \hspace{-2mm} \includegraphics[width=0.45\textwidth]{fig4d} \caption{{\bf Streamwise decay of streaks.} (a) The \revision{streamwise velocity $u$ as a function of spanwise coordinate $z$ at $x=50$ (within the streamwise tail) and at $y=0.5$ (near the peak of the overhang profile).} (b) Coefficient $\mu$ of exponential streamwise decay rate $\exp(\mu x/ \Rey)$ as a function of spanwise wavenumber $\gamma$ for the Airy-function approximation of the tails. 
(c) Exponential decay of streamwise tails in the doubly-localized solution at $\Rey=240$ and $400$ compared to the $\exp(\mu(\gamma)\, x/ \Rey)$ prediction. \revision{$\|u\|_{\infty}$ denotes the maximum magnitude of $u$ over $(y,z)$ as a function of $x$.} (d) Wall-normal overhang profile $u(y)$ at \revision{$x=50$ and $z=0$}, compared to predicted $\hat{u}(y)$ (all curves normalized to unit amplitude). } \label{f:xtails} \end{figure} The long streamwise tails of the doubly-localized solution are dominated by a spanwise-localized band of streaky streamwise velocity. Here we provide a linear analysis that accounts for the exponential streamwise decay rate of the streamwise tails and the form of their wall-normal overhang profile, effects observed in turbulent spots by \citet{LundbladhJFM91} and \citet{DuguetPRL13}. \revision{Direct numerical evaluation} of the magnitudes of different terms in the $u$ component of the Navier-Stokes equation \refeq{eq:NSE_PCF} for the doubly-localized solution shows that the dominant terms in the streamwise tails are \begin{align} y u_x = \Rey^{-1} (u_{yy} + u_{zz}) \label{eq:xtail_pde} \end{align} \revision{where subscripts indicate differentiation.} Although the solution is localized in $z$, a good approximation for the decay rate and overhang profile can be obtained by assuming a $z$-periodic solution of the form $u(x,y,z) = \hu(y) \exp(\ui \gamma z + \mu x/\Rey)$ with $\gamma$ set to match the dominant spanwise wavenumber observed in the streaky tails. Substitution of this ansatz into \refeq{eq:xtail_pde} results in the ODE \begin{align} \hu''(y) - (\gamma^2 + \mu y) \, \hu(y) = 0, \label{eq:xtail_ode} \end{align} which has solutions \begin{align} \hu(y) = a \Ai(\gamma^2 \mu^{-2/3} + \mu^{1/3} y) + b \Bi(\gamma^2 \mu^{-2/3} + \mu^{1/3} y), \label{eq:xtail_airysoln} \end{align} where $\Ai$ and $\Bi$ are Airy functions. 
The boundary conditions $\hu(\pm 1) = 0$ determine $\mu$ and the relative values of $a$ and $b$ as a function of $\gamma$. Nontrivial solutions of \refeq{eq:xtail_airysoln} require \begin{align} \Ai( \gamma^2 \mu^{-2/3} + \mu^{1/3}) \Bi( \gamma^2 \mu^{-2/3} - \mu^{1/3}) - \Ai( \gamma^2 \mu^{-2/3} - \mu^{1/3}) \Bi( \gamma^2\mu^{-2/3} + \mu^{1/3}) = 0 \label{eq:xtail_eigeneqn} \end{align} which we solve numerically for $\mu$ as a function of $\gamma$, choosing the negative solution $\mu(\gamma)$ closest to zero in order to find the solution with slowest streamwise decay. \refFig{f:xtails}(b) shows $\mu$ as a function of $\gamma$. The $\gamma \approx 0$ modes have the slowest exponential decay, so as $x \rightarrow \infty$ we expect the tails to be dominated by the $\gamma=0$ or fundamental $\gamma = 2\upi/L_z$ mode. This behavior is evident for the doubly-localized equilibrium at $\Rey=240$. \refFig{f:xtails}(a) shows that the streamwise velocity at \revision{$x=50$} has a wide slow streak in the region $|z| < 10$ with weaker small-scale $z$ variation. The power spectrum of this function has \revision{its strongest peak by nearly an order of magnitude} at the fundamental wavenumber $\gamma = 2\upi/L_z \approx 0.1$. The decay rate $\mu(\gamma)$ is nearly constant for small $\gamma$, so for $\Rey = 240$ we take $\gamma = 0$ and find $\mu(0) \approx -13$. \refFig{f:xtails}(c) shows good agreement between the predicted decay $\exp(-13x/240)$ and that observed in the doubly-localized solution. \refFig{f:xtails}(d) shows good agreement between the predicted profile $\hat{u}(y)$ from \refeq{eq:xtail_airysoln} and the streamwise velocity profile $u(y)$ of the doubly-localized solution \revision{at $x=50$, $z=0$}. However, the $1/\Rey$ factor in $\exp(\mu x /\Rey)$ means that small $z$-wavelength transients of sufficient magnitude can persist and even dominate the tails for $x \ll \Rey$. 
\revision{\refFig{f:xtails}(a) shows that at $x=50$ the small-wavelength $z$ variation for the $\Rey=400$ solution is much stronger than for $\Rey = 240$. The power spectrum of this $z$ profile has peaks of nearly equal magnitude at the fundamental mode $\gamma \approx 0.1$ and at $\gamma \approx 1.5$. From what follows we determine that the exponential decay in $\|u\|_{\infty}$ for the $z$-localized structure is governed by the slightly less energetic $\gamma \approx 1.5$ mode. Since \revision{$\mu(1.5) \approx -19$} from \reffig{f:xtails}(b), the decay of this mode is $\exp(-19x/400)$, which is only a factor of two smaller than the $\exp(-13x/400)$ decay of the $\gamma=0$ mode over a length $x \approx 50$. The match of this decay rate in \reffig{f:xtails}(c) and the corresponding overhang function in \reffig{f:xtails}(d) to the doubly-localized solution} confirms that at $\Rey=400$ the tails for $x < 80$ are governed by the faster-decaying $\gamma \approx 1.5$ mode. For $x = O(\Rey)$ and larger, beyond the limits of the present computational domain, we expect the large-$\gamma$ transients to die out leaving the tails dominated by more slowly decaying small-$\gamma$ modes. 
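The eigenvalue condition \refeq{eq:xtail_eigeneqn} is straightforward to solve numerically. The sketch below is our own illustration using SciPy's Airy functions and a bracketing root finder; the brackets are assumptions chosen by inspection to isolate the negative root closest to zero for each $\gamma$.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import airy

def eigen_det(mu, gamma):
    """Determinant whose zeros are admissible decay coefficients mu(gamma).

    The real cube root is used for negative mu; airy() returns (Ai, Ai', Bi, Bi').
    """
    m = np.sign(mu) * abs(mu) ** (1.0 / 3.0)   # mu^(1/3)
    shift = gamma**2 / m**2                     # gamma^2 * mu^(-2/3)
    Ai_p, _, Bi_p, _ = airy(shift + m)
    Ai_m, _, Bi_m, _ = airy(shift - m)
    return Ai_p * Bi_m - Ai_m * Bi_p

# Slowest-decaying (closest-to-zero negative) roots; brackets chosen by hand:
mu0 = brentq(eigen_det, -20.0, -5.0, args=(0.0,))    # gamma = 0
mu15 = brentq(eigen_det, -25.0, -10.0, args=(1.5,))  # gamma = 1.5
```

These roots reproduce the values quoted above, $\mu(0) \approx -13$ and $\mu(1.5) \approx -19$, giving tail decay $\exp(\mu x/\Rey)$, e.g. $\exp(-13x/240)$ at $\Rey = 240$.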
\subsection{Stability and the evolution of unstable perturbations} \label{s:stability} \begin{table} \centering \begin{tabular}{lclclclcl} & \multicolumn{2}{c}{$\langle \sxy, \sz \rangle$} & \multicolumn{2}{c}{$\langle \sxy, -\sz \rangle$} & \multicolumn{2}{c}{$\langle -\sxy, \sz \rangle$} & \multicolumn{2}{c}{$\langle -\sxy, -\sz \rangle$}\\ $\Rey \quad$ & pos.\ $\lambda_r$ & max.\ $\lambda_r \quad$ & pos.\ $\lambda_r$ & max.\ $\lambda_r \quad$ & pos.\ $\lambda_r$ & max.\ $\lambda_r \quad$ & pos.\ $\lambda_r$ & max.\ $\lambda_r$\\ 240 & 2 & 0.0370 & 3 & 0.0414 & 3 & 0.0329 & 8 & 0.1283\\ 300 & 4 & 0.0278 & 7 & 0.0314 & 5 & 0.0286 & 14 & 0.1028\\ 340 & 6 & 0.0265 & 9 & 0.0635 & 7 & 0.0636 & 15 & 0.0952\\ 360 & 8 & 0.0750 & 9 & 0.0751 & 7 & 0.0751 & 15 & 0.0920\\ 380 & 8 & 0.0853 & 11 & 0.0854 & 12 & 0.0854 & 19 & 0.0892\\ 400 & 17 & 0.0955 & 12 & 0.0958 & 16 & 0.0958 & 23 & 0.0960\\ \end{tabular} \caption{{\bf Instabilities of the doubly-localized solution.} The number of unstable eigenfunctions (positive real part) and the \revision{real part of the most unstable eigenvalue} are given for each eigenfunction symmetry group and a range of Reynolds numbers. } \label{t:instabilities} \end{table} In minimal flow units, the transition to turbulence is governed by invariant `edge state' solutions whose stable manifolds form separatrices between states that quickly decay towards laminar flow and states that become turbulent \revision{\citep{WangPRL07,SchneiderPRE08}}. Efforts to develop a similar dynamical understanding of transition in extended flows have led to the computation of a number of localized edge states, but to date these have either been invariant states localized in a single homogeneous direction \citep{SchneiderJFM10,AvilaPRL13,KhapkoJFM13,ZammertARX14} or doubly-localized but chaotically wandering states without well-defined stable and unstable manifolds \citep{SchneiderJFM10,DuguetPRL12}. 
The doubly-localized invariant solution in this paper thus provides a potential starting point for addressing spatiotemporal transition of extended flows in dynamical terms. We focus on a $100 \times 30$ domain, large enough to exhibit a range of spatiotemporal behavior \citep{PhilipPRE11}, and $230 \leq \Rey \leq 400$, above the $\Rey \approx 228$ saddle-node bifurcation point of the \revision{doubly-localized} solution. \refTab{t:instabilities} summarizes the properties of the leading unstable eigenfunctions categorized by symmetry group. The eigenfunctions $\bv$ of the linearized dynamics about the doubly-localized solution are either symmetric \revision{($\bv = \sigma \bv$)} or antisymmetric \revision{($\bv = -\sigma \bv$)} for each symmetry of the solution and thus have one of four symmetry groups: $\langle \sxy, \sz \rangle$, $\langle \sxy, -\sz \rangle$, $\langle -\sxy, \sz \rangle$, or $\langle -\sxy, -\sz \rangle$. \begin{figure} {\footnotesize (a)} \hspace{-2mm} \includegraphics[width=0.45\textwidth]{fig5a} \hspace{2mm} {\footnotesize (d)} \hspace{-2mm} \includegraphics[width=0.45\textwidth]{fig5d} \\ {\footnotesize (b)} \hspace{-2mm} \includegraphics[width=0.45\textwidth]{fig5b} \hspace{2mm} {\footnotesize (e)} \hspace{-2mm} \includegraphics[width=0.45\textwidth]{fig5e} \\ {\footnotesize (c)} \hspace{-2mm} \includegraphics[width=0.45\textwidth]{fig5c} \hspace{2mm} {\footnotesize (f)} \hspace{-2mm} \includegraphics[width=0.45\textwidth]{fig5f} \\ \caption{{\bf Evolution of unstable eigenfunctions.} (a,d) show $\|\bu(t) \|$ versus time $t$ for small perturbations of the doubly-localized solution along its most unstable eigenfunctions, at (a) $\Rey = 380$ and (d) $\Rey = 400$. 
Color denotes the symmetries of the eigenfunction perturbations: {\color{red} red} for \revision{$\langle \sxy, \sz \rangle$} symmetric eigenfunctions, black for \revision{$\langle \sxy, -\sz \rangle$}, {\color{blue} blue} for \revision{$\langle -\sxy, \sz \rangle$}, {\color{green} green} for \revision{$\langle -\sxy, -\sz \rangle$}, and {\color{cyan} cyan} for perturbations along combinations of eigenfunctions that break all symmetries. Solid lines indicate the most unstable eigenfunction of each symmetry group, and dashed lines the second-most unstable. The midplane streamwise velocity of the most unstable perturbation at (b) $t=0$ and (c) $t=200$ for $\Rey=380$, and (e,f) the same for $\Rey=400$. } \label{f:instabilities} \end{figure} \refFig{f:instabilities} characterizes the temporal evolution \revision{$\bu(t) = f^t(\bu_0 + \delta \bu)$ of the doubly-localized solution $\bu_0$ perturbed} along its most unstable eigenfunctions and along combinations of eigenfunctions that break all symmetries. Perturbation magnitudes were set to $\| \delta \bu \| /\| \bu_0 \| = 10^{-2}$, where $ \|\bu\|^2 = 1/V \int_V \bu \cdot \bu \; d\bx$, with $V$ the volume of the computational domain. For $\Rey \leq 360$ (not shown) most perturbations \revision{produce a short period ($t < 200$) of transient growth, but in all cases monotonic relaminarization $\| \bu(t) \| \rightarrow 0$ begins by $t \approx 500$. At $\Rey = 380$, several perturbations produce long-lived $(t > 1000)$ turbulent spots, and at $\Rey = 400$ all perturbations do.} \refFig{f:instabilities}(b,c) shows a typical decaying spot at $\Rey = 380$, and \reffig{f:instabilities}(e,f) shows a typical growing spot at $\Rey = 400$. \revision{Unlike edge states, the doubly-localized solution has a stable manifold of co-dimension greater than 1 for all Reynolds numbers (even with $\langle \sxy, \sz \rangle$ symmetry restriction), so the stable manifold cannot divide state space and form a laminar/turbulent boundary by itself. 
However, the fact that some perturbations from the doubly-localized solution lead to laminar flow and some to turbulence demonstrates that the solution lies on the laminar/turbulent boundary, and that portions of its unstable eigenspace lie on either side of the boundary, for the approximate range $360 \lesssim \Rey \lesssim 400$. For $\Rey \leq 360$, the solution lies on the laminar side of the boundary, and for $\Rey \geq 400$, it lies on the turbulent side.} In all cases the perturbations rapidly generate fine-scale structure in the velocity field, which then either decays or grows in a complex, long-term, and perturbation- and Reynolds-dependent manner. Note that oblique perturbations \revision{(with $\sxyz \in \langle -\sxy, -\sz \rangle$ symmetry)} produce turbulent spots with $\sxyz$ symmetry but little noticeable obliqueness (\reffig{f:instabilities}(c,f)), though eventually such spots can grow to fill the computational domain with a pattern of tilted laminar/turbulent bands.
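The construction of the perturbed initial conditions used in these stability experiments, rescaling a disturbance to a fixed fraction of the solution's volume-averaged energy norm, can be sketched as follows. This is an illustrative reconstruction with hypothetical helper names, using uniform-grid quadrature for simplicity:

```python
import numpy as np

def energy_norm(u):
    """||u||, with ||u||^2 = (1/V) * integral of u.u dV, approximated on a
    uniform grid; u has shape (3, Nx, Ny, Nz) for the three components."""
    return np.sqrt(np.sum(u * u) / u[0].size)

def perturb(u0, du, eps=1e-2):
    """Return u0 + delta, where delta is du rescaled so that
    ||delta|| / ||u0|| = eps (eps = 1e-2 in the experiments above)."""
    return u0 + du * (eps * energy_norm(u0) / energy_norm(du))
```

The perturbed field is then integrated forward in time and $\|\bu(t)\|$ monitored to classify relaminarization versus spot growth.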
\section{Introduction} \IEEEPARstart{R}{einforcement} learning (RL) \cite{sutton1988reinforcement} has achieved remarkable results in many applications \cite{mnih2013playing,silver2017mastering}. However, current RL algorithms often suffer from poor generalization ability. One prevailing explanation is that current RL methods lack a general and compact representation (abstraction) to express shared, reusable knowledge among different environments with similar semantics \cite{oh2017zero,zhang2017a}. Natural language, on the other hand, is exactly such a general and interpretable representation. Therefore, a growing body of research has begun to focus on integrating natural language into policy learning, namely language-conditional RL \cite{DBLP:conf/ijcai/LuketinaNFFAGWR19}. The ultimate goal is to empower the agent to extract semantic information from language instructions and to learn generalizable policies \cite{jiang2019language,hu2019hierarchical}. However, current methods can hardly learn reusable policies that adapt to new environments. Instead, the learned policies are usually environment-specific and cannot always be reused. For example, a task that requires the agent to pick up a ball will lead the agent to memorize several successful trajectories instead of extracting the semantic concept of ``ball''. As a consequence, the learned representation is unstable and correlated with the environment, which hinders policy generalization. The main reason for this problem is that language is usually fed into an end-to-end policy learning framework \cite{jiang2019language,hu2019hierarchical,devin2019plan}, which directly maps language instructions together with states to actions without explicitly incentivizing the development of invariant representations. 
An invariant representation is invariant to environment changes that preserve the semantics of the task (e.g., ``pick up the green ball''), and is therefore generalizable and reusable across different environments. Hence, two major issues need to be addressed to learn such representations: (1) an incentive for semantic invariance of the representations, and (2) a carefully designed policy structure that contains invariant components to carry these invariant representations. \begin{figure} \centering \includegraphics[scale = 0.25]{idea.jpg} \caption{We introduce randomness into all elements except the goal, decoupling the goal element from the environment. By making task completion coincide with the extraction of semantic invariants, we obtain reusable representations that are robust to environment changes.} \label{FIG:1} \end{figure} In this paper, we propose a method termed \emph{element randomization} to learn invariant representations. Our main idea is to introduce randomness into environment-specific elements (or components), e.g., the topological structure of the state space or the texture of the objects, so as to push the agent to extract semantic commonalities that survive environment changes. This idea has also been used in other domains, e.g., Domain Randomization (DR) \cite{tobin2017domain,DBLP:conf/iros/GaoYS0020}. The main difference is that we introduce randomness at the \emph{task level}, meaning that anything that does not correlate with the core semantics of the task may be randomized (e.g., in a maze navigation task, not only the texture of the maze but also the structure of the maze itself can be randomized, as long as this does not alter the semantics of the task ``find the exit''), in contrast with prior methods that only perform visual-level randomization. Our approach thus provides the incentive for decoupling the entangled elements and ensures invariance (see Fig. \ref{FIG:1}). 
We also give theoretical justifications, showing that randomizing elements can indeed result in invariant representations. In practice, we design an adaptive model structure to extract semantically invariant representations and to learn the corresponding semantically invariant policy. Concretely, we construct a two-level goal-conditioned hierarchical policy network: a high-level policy receives language instructions and generates subgoals as semantically invariant representations, while a low-level policy executes atomic actions in accordance with the proposed subgoals. Because the subgoals are stable language-conditioned representations, they can be translated back into language form; tasks can thus be explored and recorded as language trajectories. We therefore introduce an external memory to record trajectories of extensible length. With the external memory, the agent can solve a new task of arbitrary length by exploring through subgoal selection and exploiting by replaying successful trajectories. To demonstrate the superiority of our method, we conduct experiments on challenging long-horizon tasks based on the BabyAI platform \cite{chevalier-boisvert2018babyai}. Experimental results validate the efficacy of our method, showing that (1) our low-level policy generalizes reliably across environment changes; (2) our hierarchical policy exhibits extensible generalization to unseen tasks that can be decomposed into sub-tasks solvable by the low level; and (3) by storing and replaying language trajectories with the external memory, the agent can accomplish a task in a one-shot fashion, i.e., once one successful trajectory has been attained. In short, our contributions are as follows: \begin{enumerate} \itemsep=0pt \item We propose a new language-conditional policy learning paradigm, which extracts semantically invariant representations via a novel element randomization method. 
\item We build an adaptive hierarchical network that simultaneously comprehends language and executes tasks. We also add an external memory that records abstracted language trajectories for exploring and solving unseen complex tasks. \item We demonstrate the efficacy of our model both theoretically and empirically. Experimental results on several difficult tasks show the superiority of our agent over several strong baselines. \end{enumerate} \begin{figure*} \centering \includegraphics[scale=.45]{new_exp1.jpg} \caption{The agent learns invariant semantic representations and the corresponding policies. When facing a task, the agent can explore the environment in the low-dimensional subgoal space instead of the original state space. Meanwhile, trajectories can be stored in language form and replayed after the final reward is collected, so the agent can overcome the randomness in the environment.} \label{FIG:2} \end{figure*} \section{Related Work} \subsection{RL Methods with Natural Language Condition} Recently, many researchers have begun to focus on learning frameworks empowered by natural language, such as \cite{hu2019hierarchical,zhong2019rtfm,xiong2018hierarchical,jiang2019language,bahdanau2018learning,hermann2017grounded} and \cite{branavan2011learning}. Language can be used in many ways: as a tool for knowledge transfer \cite{narasimhan2015language,mei2016listen}, as a reward generator \cite{ho2016generative,goyal2019using,wang2019reinforced}, as a representation of the state or action space \cite{yuan2018counting}, or for communicating domain knowledge and task-independent corpora \cite{oh2017zero,narasimhan2018grounding}. The work most closely related to ours uses language instructions to describe tasks, hoping that language can guide the agent to complete the task and generalize well across domains. However, language comprehension is itself a difficult problem. 
These approaches aim to use language to help reinforcement learning gain more cumulative reward. However, when language is used merely as additional information, the agent may exploit it as a trivial encoding of the task \cite{lake2018generalization,liu2020compositional}. The agent may learn to comprehend part of the semantic information, but there is no guarantee that this semantics is the stable and general one we want, let alone that the agent can automatically extract and reuse it. The difference is that our work introduces an invariant subgoal space between language and primitive actions, which is explicitly trained to represent object-oriented sub-policies with semantics. These sub-policies are general and reusable throughout the randomized environment, so our agent can explore rapidly in the low-level subgoal space and generalize to new tasks with high probability. \subsection{Element-Oriented Compositional RL} Some researchers have analyzed RL tasks by decomposing them into elements and constructing corresponding policies \cite{singh1992transfer,meer2020exploiting,garnelo2016towards,zamani2018deep}. Some of them also use symbolic encodings as task representations to further leverage these element-oriented policies. These methods have achieved considerable success on compositional generalization problems. However, to some degree, they focus on task-specific elements (such as a fixed special point in a room), which cannot be reused across all tasks; this limits their generalization ability. Different from these works, our method focuses on elements that are interactive throughout the environment and remain task-agnostic at all times. That is, our sub-policies are object-oriented and can be reused in any task, which guarantees that each sub-policy can be described in language without ambiguity, so that combinations of sub-policies can be recorded as language trajectories and replayed stably. 
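The record-and-replay mechanism built on language trajectories can be sketched as follows. All class and method names here are our own illustration; the paper does not specify interfaces at this level of detail, and the environment interface is a deliberately simplified assumption.

```python
# Illustrative sketch of subgoal-space exploration with an external memory of
# language trajectories; all names and interfaces are hypothetical.
import random

class ExternalMemory:
    """Maps an instruction to successful subgoal trajectories (language form)."""
    def __init__(self):
        self.store = {}

    def record(self, instruction, subgoal_seq):
        self.store.setdefault(instruction, []).append(list(subgoal_seq))

    def recall(self, instruction):
        return self.store.get(instruction, [])

def solve(instruction, subgoal_space, env, memory, max_len=8):
    """Replay a stored trajectory if one exists; otherwise explore in the
    low-dimensional subgoal space and record the first success.

    env.execute(g) runs the low-level policy for subgoal g and returns
    (done, success): an episode-terminal flag and the final sparse reward.
    """
    for traj in memory.recall(instruction):   # one-shot exploitation
        for g in traj:
            env.execute(g)
        return traj
    traj = []                                  # exploration over subgoals
    for _ in range(max_len):
        g = random.choice(subgoal_space)
        traj.append(g)
        done, success = env.execute(g)
        if done:
            if success:
                memory.record(instruction, traj)   # store language trajectory
                return traj
            return None
    return None
```

The key point is that exploration happens over the small, semantically stable subgoal space rather than the raw action space, and that a successful trajectory, being in language form, transfers across randomized environments.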
\subsection{HRL Methods for Generalization} Some hierarchical reinforcement learning (HRL) studies address generalization through multi-task learning or graph representations. For example, \cite{andreas2017modular} builds policy sketches to guide the agent to complete tasks, \cite{frans2017meta} constructs an HRL method with meta-parameters as a high-level abstraction, and \cite{sohn2018hierarchical} gives the agent a structured subtask graph to represent the relationships among tasks. However, to solve complex tasks and generalize to more tasks, these previous works require, to a greater or lesser extent, manually provided prior knowledge about the tasks. For instance, the framework in \cite{andreas2017modular} needs a policy sketch that describes the task precisely, which requires a human to transform the task into a special form that the agent can comprehend. Such methods need not only a representation of the task, but also a specification of how it is executed at the high level. When a new task cannot be expressed by this prior, e.g., when it requires an extra high-level step or is composed of known subtasks in an unseen sequence, their agents can hardly complete it. Different from these methods, our work takes natural language as the task representation, which is a general prior that can describe almost any task. A general representation means that the agent can complete most tasks by following the instructions, even in unseen environments. Hence our method can adapt to new tasks that are even more complex and difficult than the training tasks. \section{Learning Invariant Semantic Representations by Element Randomization} In this section, to verify the effectiveness of our method, we provide a qualitative mathematical analysis showing that introducing randomness into elements decouples the goal element from the others and makes completing tasks equivalent to extracting invariant representations of the goal elements. 
\subsection{Problem Statement} We consider a finite-horizon, goal-conditional \textit{Markov Decision Process} (MDP) with language instructions, defined as $\langle \mathcal{S,G,I,A,P,R}, \rho, \gamma \rangle$, where $\mathcal{S}$ is the state set, $\mathcal{G}$ is the invariant subgoal set, $\mathcal{I}$ is the language instruction set, $\rho (I)$ is the initial distribution of instruction $I \in \mathcal{I}$, $\mathcal{A}$ is the action set, $\mathcal{P:S \times I \times A \times S \rightarrow} [0,1)$ is the state transition function giving the probability of moving from a state, under an instruction and action, to the next state, $\mathcal{R:S \times A \rightarrow} \mathbb{R}$ is the reward function, and $\gamma \in [0,1)$ is the discount factor. The objective of reinforcement learning is to learn a policy $\pi(a_t|s_t, I)$. Every instruction stands for a class of similar tasks, and the RL framework is built on the whole task set $\mathcal{T}$. \subsection{Methodology Analysis of Equivalence} To extract invariant semantic representations, we design tasks with the element randomization method. Our main idea is that the agent should extract the representations by learning on RL tasks. We now prove that maximizing the cumulative return under element randomization is equivalent to maximizing the occurrence probabilities of the goal elements; connecting the invariant subgoals with these elements through the RL policy endows the subgoals with fixed meaning and makes them semantic representations. \begin{mydef} Let task $T \in \mathcal{T}$. The objective is to maximize the expected cumulative return $G$ over all tasks $T$: $$ J = \mathbb{E}_{T \in \mathcal{T}} [G]$$ \end{mydef} Unlike the traditional RL setting, which maximizes the reward of a single task or a few tasks, we define the objective function over a large set of tasks. 
In our setting, an instruction $I$ can represent a class of tasks $T$ with the same goal, and every task can be completed by several correct trajectories $\tau$. With the element-randomization method, we have: \begin{mytheo} Introducing randomness while maximizing the cumulative return $J$ is equivalent to maximizing the occurring probability of the invariable goal element $\epsilon$ under the sparse reward setting. $$ \mathbb{E}_{T \in \mathcal{T}} [G] = \mathbb{E} [\sum \limits_{\epsilon} \rho (I_\epsilon) \pi(\epsilon| I_\epsilon)] $$ \end{mytheo} \begin{proof} Every language instruction $I$ commands the agent to interact with a goal element $\epsilon$, so the objective can be rewritten as: \begin{equation} J = \mathbb{E}_{T \in \mathcal{T}} [\sum \limits_{\epsilon} \rho (I_\epsilon) G(I = I_\epsilon)] \end{equation} Over all tasks $T$ of this instruction, we have: \begin{equation} J = \mathbb{E}_{T \in \mathcal{T}} [\sum \limits_{\epsilon} \rho (I_\epsilon) \sum \limits_{T} P(T|I_\epsilon)G(T|I_\epsilon)] \end{equation} where $P(T|I_\epsilon)$ and $G(T|I_\epsilon)$ are the distribution and the corresponding return of task $T$. The return can be further expanded over all trajectories $\tau \in T$ with policy $\pi(\tau)$: \begin{equation} G(T|I_\epsilon) = \sum \limits_{\tau} \pi(\tau)R(\tau|T,I_\epsilon) \end{equation} In our design, the agent gains reward $1$ only when it interacts with the correct element, and $0$ otherwise: \begin{equation} R(\tau|T,I_\epsilon) = \begin{cases} 1, & \mbox{if } \epsilon \in \tau \\ 0, & \mbox{otherwise} \end{cases} \label{func1} \end{equation} Here $\epsilon \in \tau$ means that the agent correctly interacts with the goal element in this trajectory.
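Under the sparse reward in equation (\ref{func1}), the task return $G(T|I_\epsilon) = \sum_\tau \pi(\tau)R(\tau|T,I_\epsilon)$ collapses to the probability mass of trajectories that contain the goal element. A minimal sketch (the trajectories and probabilities are illustrative, not from the paper's environment):

```python
def sparse_reward(trajectory, goal_element):
    # R(tau | T, I_eps) = 1 if the goal element is interacted with, else 0
    return 1 if goal_element in trajectory else 0

def task_return(trajectory_probs, goal_element):
    # G(T | I_eps) = sum_tau pi(tau) * R(tau | T, I_eps),
    # i.e., the total probability of trajectories containing the goal element
    return sum(p * sparse_reward(tau, goal_element)
               for tau, p in trajectory_probs.items())
```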
So the total objective function can be written as: \begin{equation} J = \mathbb{E}_{T \in \mathcal{T}} [\sum \limits_{\epsilon} \rho (I_\epsilon) \sum \limits_{T} P(T|I_\epsilon)\sum \limits_{\tau} \pi(\tau)R(\tau|T,I_\epsilon)] \label{func2} \end{equation} Substituting equation (\ref{func1}) into (\ref{func2}) and explicitly extracting the invariable element $\epsilon$, we get \begin{align} \begin{split} J = \mathbb{E}_{T \in \mathcal{T}} [\sum \limits_{\epsilon} & \rho (I_\epsilon) \sum \limits_{T} P(T|I_\epsilon) \sum \limits_{\underbrace{\tau : \epsilon \in \tau}_{\mbox{\shortstack[c]{\fontsize{6.8pt}{\baselineskip}\selectfont successful \\ \fontsize{7.0pt}{\baselineskip}\selectfont trajectories }}}}\\ & \prod \limits_{s,a \in \tau} \underbrace{\pi^{L}(a|s, \epsilon)}_{\mbox{\shortstack[c]{\fontsize{6.8pt}{\baselineskip}\selectfont policy from invariants \\ \fontsize{7.0pt}{\baselineskip}\selectfont to action }} } \underbrace{\pi^{H}(\epsilon | T, I_\epsilon)]}_{\mbox{\shortstack[c]{\fontsize{6.8pt}{\baselineskip}\selectfont policy from language \\ \fontsize{7.0pt}{\baselineskip}\selectfont to invariants }} } \end{split} \label{func3} \end{align} where irrelevant elements are eliminated by $R = 0$ given sufficient exploration. When randomness is introduced into the task-agnostic elements, the goal element is decoupled from the others. As a result, summing the probabilities of the trajectories with $\epsilon \in \tau$ equals the total probability of $\epsilon$ occurring, that is: \begin{equation} J = \mathbb{E} [\sum \limits_{\epsilon} \rho (I_\epsilon) \pi(\epsilon| I_\epsilon)] \end{equation} \end{proof} Based on this theorem, we use the subgoal $g \in \mathcal{G}$ to represent the goal element $\epsilon$. As a result, the subgoals represent invariable semantics and can be reused unambiguously in any task. The policies are also built according to the subgoals, so they are general policies across tasks.
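The equivalence above can be checked numerically on a toy distribution (the goal elements, task distributions, and trajectory policies below are made-up illustrations): expanding $J$ over all trajectories with the sparse reward, and marginalizing the probability of trajectories that contain $\epsilon$, give the same value.

```python
def objective_expanded(rho, tasks, policy):
    # J = sum_eps rho(I_eps) sum_T P(T|I_eps) sum_tau pi(tau) R(tau|T,I_eps)
    j = 0.0
    for eps, r_eps in rho.items():
        for task, p_task in tasks[eps].items():
            for tau, p_tau in policy[task].items():
                j += r_eps * p_task * p_tau * (1.0 if eps in tau else 0.0)
    return j

def objective_marginal(rho, tasks, policy):
    # J = sum_eps rho(I_eps) pi(eps|I_eps), where pi(eps|I_eps) is the total
    # probability that a sampled trajectory contains the goal element eps
    j = 0.0
    for eps, r_eps in rho.items():
        p_eps = sum(p_task * sum(p for tau, p in policy[task].items() if eps in tau)
                    for task, p_task in tasks[eps].items())
        j += r_eps * p_eps
    return j
```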
\section{Building Hierarchical Semantical Invariants Learning Network} In this section, we present the main idea of our method and our framework for training a two-level hierarchical policy with an object-oriented subgoal space, guided by equation (\ref{func3}), as follows. \begin{enumerate} \itemsep=0pt \item The high-level policy receives the language instruction and an observation of the environment and chooses a subgoal, acting as $\pi^{H}(\epsilon | T, I_\epsilon)$. \item The low-level policy receives the subgoal and executes the corresponding actions in the stochastic environment, acting as $\pi^{L}(a|s, \epsilon)$. \item To learn an extensible policy with the help of language instructions, we use an augmented memory that records language as abstracted trajectories for replay in unseen new tasks without retraining. \end{enumerate} \subsection{Training Low-Level Stable Object-Oriented Subgoal Executor Policy} \begin{figure} \centering \includegraphics[scale=.32]{structure_low.jpg} \caption{We build the low-level policy with the multi-process A2C algorithm, which receives both the subgoal from the high-level policy and the pixel observation of the environment. We call it the subgoal executor network (SEN).} \label{FIG:3} \end{figure} Here we show how to leverage the element-randomization method to build a stable low-level policy, which we call the subgoal executor network (SEN). Since we set the subgoals as semantical invariants, the corresponding policies must adapt to them; that is, these policies should be task-agnostic and general. We therefore design the low-level policy as a stable object-oriented policy, which focuses on a specific object or attribute and can be reused in any task. However, the RL objective alone cannot provide sufficient motivation to build such a policy. As stated above, we introduce the element-randomization method to make completing tasks equivalent to learning semantical invariants.
Specifically, we use massive simple tasks that can be completed by interacting with only one object. Tasks such as ``opening a door'' or ``picking up a ball'', each with a single goal, are all simple tasks. These simple tasks stand for the basic elements composing the environment. They are quite few, so ergodic sampling over them is acceptable and cheap. \begin{algorithm}[h] \caption{Low-level Subgoal Executor Learning Algorithm}\label{algorithm} \begin{algorithmic}[1] \STATE Initialize multi-process actor parameters $\theta^i_a$ for $ i \in [1,n]$ \STATE Initialize multi-process value parameters $\theta^i_v$ for $ i \in [1,n]$ \FOR {episodes in 1,M} \FOR {$i \in [1,n]$} \STATE Reset gradients: $d\theta^i_a$ and $d\theta^i_v$ \STATE Synchronize thread-specific parameters \STATE Sample subgoal $g\in \mathcal{G}$ from a uniform distribution \REPEAT \STATE Perform $a_t$ according to policy $\pi(a_t|s_t,g)$ \STATE Receive reward $r_t$ and new state $s_{t+1}$ \STATE $t \leftarrow t+1$ \UNTIL {terminal $s_T$ or $t-t_{start}==t_{max}$} \STATE Set $$R = \begin{cases} 0, & \mbox{for terminal state } s_T \\ V(s_t;\theta_v'), & \mbox{otherwise} \end{cases}$$ \FOR {$j \in \{t-1,\dots,t_{start}\}$} \STATE $R \leftarrow r_j + \gamma R$ \STATE Accumulate gradients wrt $\theta'_a$ $$ d\theta_a \leftarrow d\theta_a+\nabla_{\theta_a'}\log\pi(a_j|s_j,g;\theta_a')(R-V(s_j;\theta_v')) $$ \STATE Accumulate gradients wrt $\theta'_v$ $$ d\theta_v \leftarrow d\theta_v + \frac{\partial}{\partial \theta_v}(R-V(s_j;\theta_v'))^2 $$ \ENDFOR \ENDFOR \STATE Synchronize and update parameters \ENDFOR \end{algorithmic} \end{algorithm} Then we introduce other interference objects into these tasks. In each room, the goal object has a random position, and we use an unambiguous one-hot vector as the subgoal to express the final goal, so that every dimension of the subgoal represents one fixed object.
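The backward accumulation $R \leftarrow r_j + \gamma R$ in Algorithm 1, together with the advantage $R - V(s_j)$ that drives the actor gradient, can be sketched as follows. This is a simplified single-process stand-in for the update, not the full multi-process A2C implementation:

```python
def nstep_returns_and_advantages(rewards, values, bootstrap, gamma=0.99):
    # bootstrap is V(s_t) at a non-terminal cut-off, or 0.0 at a terminal state
    R = bootstrap
    returns = []
    for r in reversed(rewards):
        R = r + gamma * R          # R <- r_j + gamma * R (Algorithm 1)
        returns.append(R)
    returns.reverse()
    # advantage at step j: R_j - V(s_j), used to weight the log-policy gradient
    advantages = [g - v for g, v in zip(returns, values)]
    return returns, advantages
```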
Each task has disturbances independent of its goal, for example, task-independent objects. Only when the agent identifies the subgoal correctly, overcomes the random disturbances, and interacts with the correct object will it obtain a sparse reward, which is discounted according to the number of steps used. This task setting imposes a constraint on the learning process beyond the RL motivation, forcing the agent to interact with only the goal object and ignore the disturbances. The agent thus builds a robust object-oriented policy that can be used in any other task and environment consisting of the same elements. Concretely, the structure of the low-level network is shown in Fig. \ref{FIG:3}. The network receives the subgoal and the pixel observation. The observation is fed into a three-layer CNN, whose output is passed to a one-layer LSTM \cite{hochreiter1997long}. The output embedding of the LSTM is concatenated with the subgoal embedding (obtained from a one-layer FC network) and fed into a three-layer FC network, trained with the multi-process A2C \cite{mnih2016asynchronous} algorithm to obtain a stable policy, as shown in Algorithm 1. \begin{figure}[t] \centering \includegraphics[scale=.32]{structure_high.jpg} \caption{We build the high-level policy with DQN, which receives the abstract observation and the language instructions; we call it the language comprehending network (LCN). The learned subgoals can be interpreted into learned language form, stored in the language memory buffer, and reused in different tasks.} \label{FIG:4} \end{figure} \subsection{Building High-Level Language Comprehending Policy} Here we show how to build the abstract high-level policy, which we call the language comprehending network (LCN). After building a stable low-level policy, we connect language with the object-oriented subgoals.
We describe the goal of each simple task with one complete sentence and use the RL method to learn to make a one-step decision in the same tasks as the low level. For simplicity, the high-level observation is an abstracted state, similar in form to the subgoals, of the objects existing in a room. For instance, if the first position, representing a red box, is ``2'', there are two red boxes in the current room. This allows controlling the low level flexibly according to changes in the observation, instead of making a decision at a fixed interval, e.g., giving a subgoal every 5 steps as in traditional HRL methods \cite{vezhnevets2017feudal,nachum2018data}. However, the high-level policy is not just a one-to-one interpreter from language to subgoals; that would limit the generalization abilities of the policy. We instead leverage the fuzzy semantics of language. For instance, our policy also learns fuzzy descriptions such as ``open a door'', which do not accurately express the goal; the ideal result is that the high-level policy assigns the same probability to all existing doors. In short, the high level receives a language instruction and the abstracted observation and outputs subgoals in sequence. The training tasks should lead the agent to comprehend the semantic information according to the observation instead of just interpreting the language instruction, which strongly increases the generalization abilities of the whole policy. In practice, we use \textit{nltk} \cite{bird2006nltk} as a language preprocessing tool and convert every word into a Word2Vec embedding \cite{mikolov2013efficient}. The high-level network processes the language embedding vectors with an LSTM \cite{hochreiter1997long} model. The observation is the abstracted vector described above, representing the objects existing in the original pixel state, and is fed into an FC network.
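A sketch of the abstracted observation just described; the object vocabulary below is an illustrative subset we chose for the example, not the paper's full object list:

```python
OBJECT_VOCAB = ["red box", "red ball", "blue door", "grey key"]  # illustrative subset

def abstract_observation(visible_objects):
    # One count per vocabulary slot; e.g. a 2 in the "red box" slot means
    # two red boxes exist in the current room.
    return [sum(1 for o in visible_objects if o == name) for name in OBJECT_VOCAB]
```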
Then we concatenate the outputs of the two networks as an abstract input and build an FC network trained with DQN \cite{mnih2013playing}, as shown in Algorithm 2. The output is a 24-dimensional one-hot vector representing an object. The high-level policy makes a decision when the abstracted state changes or when the low-level policy exceeds the stated maximum number of steps. During training, the agent obtains the sparse reward 1 only when it correctly interacts with the appointed object, and 0 otherwise. \begin{algorithm}[h] \caption{High-level Task and Language Comprehending Policy Training Algorithm}\label{algorithm2} \begin{algorithmic}[1] \STATE Initialize replay memory $\mathcal{D}$ to capacity $N$ \STATE Initialize action-value $Q$ with random weights \STATE Initialize average success rate $s_r = 0$ \STATE Set expected error rate $\varepsilon$ \WHILE {$1 - s_r > \varepsilon$} \STATE Sample instruction $I\in \mathcal{I}$ from a uniform distribution \STATE With probability $\epsilon$ select a random subgoal \textit{existing in the current observation} \STATE Otherwise select subgoal $g_t = \arg\max_g Q^*(O_t, g|I)$ with observation $O_t$ \STATE Wait for the low level to execute $g_t$ until success or the observation changes \STATE Store transition $(O_t,g_t,r_t,O_{t+1})$ in $\mathcal{D}$ \STATE Sample mini-batch of transitions $(O_j,g_j,r_j,O_{j+1})$ from $\mathcal{D}$ \STATE Set $$y_j = \begin{cases} r_j, & \mbox{for terminal state } s_T \\ r_j+\gamma \max \limits_{g_{j+1}}Q_{\theta^-}(O_{j+1},g_{j+1}), & \mbox{otherwise} \end{cases}$$ \STATE Update network parameters $\theta$ with $$d\theta \leftarrow d\theta + \frac{\partial}{\partial \theta} (y_j-Q(O_j,g_j;\theta))^2$$ \STATE Calculate average success rate $s_r$ every 100 episodes \ENDWHILE \end{algorithmic} \end{algorithm} \subsection{Constructing Extensible Policy With Abstract Language Trajectories} Here we show how to build an extensible compositional policy with an augmented memory for generalization.
By combining the high-level and low-level policies, we obtain a policy that can correctly execute a one-sentence task such as ``pick up the red ball'' despite environmental changes. However, for long-horizon tasks that must be described by a whole paragraph, the language input pattern is unseen and unrecognizable. This is the problem all end-to-end frameworks face, due to their limited generalization abilities \cite{lake2018generalization}. For example, if the agent is trained on tasks described by fewer than three sentences but tested on ones described by more than ten, an end-to-end framework struggles to identify the content. Therefore, we design an additional structure that uses language to avoid this problem. Recall that we have already built stable object-oriented low-level policies. This means not only that language can be correctly executed by these policies, but also that, while executing a policy, the agent's behavior can be described by language unambiguously and correctly. As a result, when the agent explores a new, unseen task by subgoals, what it does can be output as learned language by the agent itself, in the trained pattern. If the agent stores these sentences, it stores trajectories in an abstracted language form that it can also comprehend and reuse. The agent can thus explore a new task rapidly in the low-dimensional subgoal space and memorize the trajectory. Once the agent obtains the final sparse reward, it can solve the task by replaying the language trajectories. If the new task has some randomness, the agent can also explore again guided by the abstracted trajectories. For example, if ``open the red door'' is in the successful trajectory, the agent can heuristically explore by related phrases such as ``pick up a red object'' or ``open a door''. This is a way to generalize in the abstract language space and build an extensible policy.
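A minimal sketch of the augmented memory and the heuristic language exploration. The color list and the single-attribute swap rule are our illustrative assumptions; the paper's agent works over its full learned vocabulary:

```python
COLORS = ["red", "green", "blue", "purple", "yellow", "grey"]

class LanguageTrajectoryMemory:
    """Stores each executed subgoal as one learned-language sentence and
    replays the stored sentences in order once the task succeeds."""

    def __init__(self):
        self.sentences = []

    def record(self, sentence):
        self.sentences.append(sentence)

    def replay(self):
        # Sentences are executed in the order they were recorded
        return list(self.sentences)

def related_instructions(sentence):
    # Heuristic neighbors of a successful instruction: swap the color word
    # while keeping the rest of the sentence fixed.
    words = sentence.split()
    out = set()
    for i, w in enumerate(words):
        if w in COLORS:
            for c in COLORS:
                out.add(" ".join(words[:i] + [c] + words[i + 1:]))
    out.discard(sentence)
    return sorted(out)
```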
In practice, we interpret every subgoal into a one-sentence language vector form and store these sentences in a memory buffer as abstract language trajectories. When exploring a new task after training, the agent explores with a stochastic policy over the subgoal space. When replaying a language trajectory, the sentences are taken out and executed in sequence. \section{Experiments} We design experiments in a stochastic, partially observed environment to show that our method has the following advantages: (i) Our low-level policy can overcome random disturbances and correctly interact with the goal object without redundant actions. With the abstract subgoals and the guidance of the designed tasks, the low-level policy becomes a shared object-oriented policy that can execute the goal correctly. (ii) In several unseen new tasks without instruction guidance (of gradually increasing complexity), our agent can explore with object-oriented policies in the abstracted subgoal space, which is much smaller than the original state space, so that the agent can efficiently attain extremely sparse rewards. (iii) Once the reward is obtained by few-shot exploration, the agent can solve the task by replaying the language trajectory memory and explore heuristically by language. Even new tasks with diversified randomness can be solved with high probability. \subsection{Experiment Setting} We choose BabyAI \cite{chevalier-boisvert2018babyai} as our experiment platform. It contains a large number of object-oriented tasks whose final goals are described by structured synthetic natural language. These tasks are generated with massive randomness, including random positions, attributes, colors, and other random disturbing objects, and they are all partially observed with long time horizons.
They are difficult for traditional RL methods due to frequent bottleneck states \cite{mcgovern2001automatic} combined with sparse rewards, which also force the agent to identify the objects and the corresponding instructions. Based on this platform, we design a series of tasks consisting of one or more $7 \times 7$ rooms. In every room, the agent receives a pixel partial observation of the whole room, a natural-language instruction describing only the final goal of the task, and an abstracted observation, i.e., a vector representing the objects existing in the room. There are many objects in the room, some of which are the goal of the task while others are disturbances. To go to the next room, the agent must open the correct door or pick up the corresponding key to the door. Only when the agent achieves the final goal will it get the sparse reward. We use these tasks to verify the superiority of our method. The experimental settings are as follows: \textbf{One Room Task:} As shown in Fig. \ref{FIG:ONE}, these are training tasks that require interacting with just one object in a single room. The agent must receive and comprehend the instruction given by the environment while overcoming randomness and interacting with the correct object. In this task, we train our low-level policy with the pixel observation and our high-level policy with the abstract observation, respectively. The goals consist of six colors and four shapes of objects, which build a multi-task joint training process. \begin{figure}[h] \centering \includegraphics[scale=.5]{actobj.png} \caption{These tasks are used to train the basic abilities of the agent. Every task has only one goal object.} \label{FIG:ONE} \end{figure} \textbf{Multiple Room Task:} As shown in Figs. \ref{FIG:THREE} and \ref{FIG:5_9}, these are test tasks that require interacting with a series of objects, including collecting keys, opening the right door, and entering the next room until the task is completed.
In these tasks, there is \textit{no language instruction guidance}. We design tasks with 3, 5, and 9 rooms of increasing difficulty. Compared with the training task, these test tasks are all ``out-of-domain''. Every object in a room is fixed, but its position is random, and only the final object carries a reward. Considering that our language memory can change and adapt to different tasks, the baselines are allowed to retrain on these tasks, but ours is not. These hard tasks test the compositional generalization abilities of a method on unseen tasks. \begin{figure}[h] \centering \includegraphics[scale=.5]{platform_32.jpg} \caption{These tasks consist of three rooms with several keys and doors. The final room has a goal object with reward 1.} \label{FIG:THREE} \end{figure} \begin{figure}[t] \centering \includegraphics[scale=.5]{platform_592.jpg} \caption{These tasks consist of many rooms with disturbing objects. They require long-term exploration to obtain the final reward.} \label{FIG:5_9} \end{figure} \textbf{Multiple Room Task with Randomness:} This task is modified from the 9-room task above: the objects are random in addition to the positions. For example, where there was a ``red door'' on the wall last time, there may be a different door next time, and the key that opens the door changes with it. The task tests whether the agent can generalize not only through compositional policies but also heuristically in the language space, since the object changes follow a rule that can be represented by language. This task also \textit{does not provide language instructions}. This extremely hard task highlights the superiority of any method that can complete it. \begin{figure*} \includegraphics[scale=.82]{result_crop.pdf} \caption{Results of the one-room training experiment, divided into 10 classes according to colors and attributes.
They are the colors \textbf{red, green, blue, purple, yellow, grey} and the shapes \textbf{key, box, ball, door}. The variances of three of the shapes are influenced by positional relationships, while the ball's is not, so the result for the ball has lower variance.} \label{FIG:5} \end{figure*} \subsection{Baseline} Here we introduce the baselines. In our experiments, some of the original methods cannot adapt to language input and a stochastic environment. For a fair comparison, we run experiments with both these methods and modified versions adapted to the task setting. The comparative baselines are as follows: \textbf{Option-Critic.} A general and classical end-to-end hierarchical reinforcement learning method for temporally extended tasks \cite{bacon2017the}. It automatically builds options with an actor-critic framework, where an option represents a sub-policy for a subtask. Since the method is originally designed without instructions, we add an LSTM network identical to ours to preprocess the language input with \textit{nltk} as an additional state, ensuring that the information available to other methods equals ours. For a fair comparison, we also modify the network and use the one-room task for pre-training to improve the performance of the baseline. In Table \ref{table}, ``OC4-ORI'' is the original OC4 method, ``OC4-INS'' is OC4 adapted to language input, and ``OC4-PRE'' is OC4 with language and pre-training on the ONE ROOM task. \textbf{HIRO.} A data-efficient and general HRL method for long-horizon complex RL tasks \cite{nachum2018data}. It builds an off-policy model-free RL framework with a correction that re-labels past experience. As with OC, we introduce natural language as complete task information for identifying the final object, and we modify the network to adapt to our one-room pre-training task to improve its performance.
In Table \ref{table}, ``HIRO-ORI'' is the original HIRO method, ``HIRO-INS'' is HIRO adapted to language input, and ``HIRO-PRE'' is HIRO with language and pre-training on the ONE ROOM task. \textbf{Flat.} The low-level policy of our method without language instructions. This baseline shows whether the tasks can be solved by RL \textit{without language}. \textbf{Traditional RL.} Baselines showing the capabilities of traditional RL methods: ``VANILLA-RL'' is the basic AC algorithm, ``STOCHASTIC'' is a stochastic policy, and ``SHAPING'' is a reward-shaping method. Many researchers tackle complex tasks by introducing additional rewards for key objects or ``bottleneck states''. Here we show that in tasks with quite long time scales, although such agents can obtain some reward, the poor sampling efficiency of traditional RL leads to failure to achieve the final goal. \subsection{Comparative Analysis in One-Room Basic Experiment} To learn a stable policy, we use an online 4-process A2C algorithm: our method collects 4 trajectories and updates the network with the averaged gradient, which reduces the variance brought by randomness. For a fair comparison, the baselines also use multi-process training. The result curves are shown in Fig. \ref{FIG:5}. In this experiment, our low-level policy is compared with the HIRO and OC methods. The results show that our low-level policy overcomes the randomness and gradually learns these tasks successfully. OC and HIRO do not adapt well to tasks with massive randomness: OC relies on the number of options, and HIRO needs a correction on states used as subgoals to learn efficiently. When the environment contains stochastic elements, this correction becomes a negative motivation, so their performance on this task is weaker, with a lower success rate.
Our high-level policy is also pre-trained on this task, receiving the abstracted observation and making a one-step decision to give a subgoal; this simplified task teaches identification and decision-making abilities. Since it is easier than the task faced by the low-level policy and the baselines, its curve is not shown in the figure. \begin{table*} \centering \caption{ Success percentage on generalization tasks, where $<x$ means fewer than $x$ successful episodes per 100 episodes. Traditional RL methods are adequately tested over more than 500000 episodes. STEP is the minimum number of steps needed to complete the task under position randomness.} \scalebox{1.2}{ \begin{tabular}{cccccc} \toprule \midrule &TASK & Three Room & Five Room & Nine Room & Random Nine Room \\ \multirow{3}{*}{\mbox{\shortstack[c]{\fontsize{8pt}{\baselineskip}\selectfont TASK \\ \fontsize{8pt}{\baselineskip}\selectfont SETTING }}} & STEPs-ORI-MIN & 20 $\sim$ 38 & 32 $\sim$ 68 & 44 $\sim$ 98 &44 $\sim$ 98 \\ &STEPs-HIGH & 5 & 9 & 13 & 13 \\ &TEST EPISODE & 3000& 8000 & 15000 & 15000 \\ \midrule &&\multicolumn{4}{c}{\centering SUCCESS PERCENTAGE (success / episodes \%)} \\ \cmidrule(r){3-6} \multirow{3}{*}{\mbox{\shortstack[c]{\fontsize{8pt}{\baselineskip}\selectfont TRADITIONAL \\ \fontsize{8pt}{\baselineskip}\selectfont RL METHOD }}}&STOCHASTIC &$< 10^{-3}$& $<10^{-4}$&$<10^{-4}$&$<10^{-4}$ \\ &VANILLA-RL &$< 10^{-3}$& $<10^{-4}$&$<10^{-4}$&$<10^{-4}$ \\ &SHAPING &$< 10^{-3}$& $<10^{-4}$&$<10^{-4}$&$<10^{-4}$ \\ \midrule \multirow{3}{*}{\mbox{\shortstack[c]{\fontsize{8pt}{\baselineskip}\selectfont BASE \\ \fontsize{8pt}{\baselineskip}\selectfont LINE 1 }}}&HIRO-ORI & $<1$ &$<1$ & $<1$ &$<1$ \\ &HIRO-INS & $<1$ & $<1$ & $<1$ &$<1$ \\ &HIRO-PRE & $ 39 \pm 17$ & $ 5 \pm 10$ & $<1$ &$<1$ \\ \midrule \multirow{3}{*}{\mbox{\shortstack[c]{\fontsize{8pt}{\baselineskip}\selectfont BASE \\ \fontsize{8pt}{\baselineskip}\selectfont LINE 2 }}}&OC4-ORI & $<1$ & $<1$ & $<1$ &$<1$ \\ &OC4-INS & $<1$ &
$<1$ & $<1$ &$<1$ \\ &OC4-PRE & $ 22 \pm 11$ & $<1$ & $<1$ & $<1$ \\ \midrule \multirow{2}{*}{\mbox{\shortstack[c]{\fontsize{8pt}{\baselineskip}\selectfont OUR \\ \fontsize{8pt}{\baselineskip}\selectfont METHOD }}}&FLAT & $ 19 \pm 14$ & $ 1 \pm 5$ & $<1$ & $<1$ \\ & \textbf{LCN-SEN} &$ \textbf{79} \pm 14$&$\textbf{35} \pm 15$&$\textbf{ 28} \pm 13 $&$\textbf{ 6} \pm 8 $ \\ \midrule \bottomrule \end{tabular} } \label{table} \end{table*} \begin{figure*} \centering \includegraphics[scale=.5]{visualization3.pdf} \caption{A visualization of the embeddings of our network using the t-SNE method. The points of different classes form ring-like distributions. The clustering result depends on many elements of the environment: although some objects have different shapes, the same color brings them close together. The figure shows that our method indeed learns semantical information through the training process we designed.} \label{FIG:7} \end{figure*} \subsection{Generalization Experiment from Simple Tasks to Complex Tasks} These are all long-horizon tasks, requiring the high-level policy of every method to make more than three decisions. In particular, the 9-room task requires the agent to interact with more than 15 different objects and make more than 15 high-level decisions. These tasks are extremely hard for most current RL methods due to the sparse final rewards and the randomness of the environment. In these tasks, all methods except ours are allowed to retrain. The one-room task serves as a pre-training task, and these tasks can be seen as out-of-domain generalization tasks from the simple task to complex ones. Besides OC and HIRO, we add reward shaping with a traditional deep RL method (i.e., A2C), as well as OC and HIRO without pre-training.
We want to verify that, for complex tasks, even if we design additional shaping rewards, the intricate task structure and the poor sampling efficiency still lead to failure: frequent rewards do not always help solve complex tasks. Because only the final goal carries a reward, the average reward represents the average success rate to some degree. The results show that our method can rapidly explore in the subgoal space and replay the language trajectory to complete the task with high success probability. \subsection{Visualization Experiment} We use t-SNE \cite{maaten2008visualizing} to show the semantical embeddings learned by our training process, as shown in Fig. \ref{FIG:7}. In this experiment, we visualize the embedding output of our network. We can see that our network indeed learns semantical information across many kinds of tasks, and the ring-like clusters carry different semantics. Here we only show the result for different attributes. In fact, all the elements, such as shapes, colors, and relative positions, are coupled, so the labels mix slightly. \subsection{Result of Random Long-Horizon Task} The last experiment demonstrates our strong ability to solve new difficult tasks with the language replay buffer (see Fig. \ref{FIG:2} and Fig. \ref{FIG:5_9}). Facing such a hard task, our method still completes it with non-negligible probability (see Table \ref{table}, the Random Nine Room column). Our agent can explore the subgoal space rapidly even without task instructions and memorize the subgoal trajectory in a small buffer. Once the reward is obtained, a successful trajectory lets the agent handle the task's randomness through heuristic exploration. To the best of our knowledge, no other method can solve such a task with extremely sparse rewards, due to the poor acquisition of positive samples.
\section{Conclusion} In this paper, we propose a new learning paradigm for building an extensible and compositional language policy. Accordingly, we build a hierarchical RL policy with an invariable subgoal setting that represents invariable semantics. These subgoals are stable and can be reused in any task of the environment. The two-level hierarchical model lets the agent explore in a low-dimensional semantical subgoal space. We also build an augmented memory that records the agent's trajectories in an abstract language form, helping the agent generalize to new tasks by replaying language trajectories. However, the subgoal space of our method is fixed, which means we cannot generalize to completely new tasks whose elements are unseen, and our method has further limits caused by adapting to the environment. In future work, we will attempt to overcome these limitations. Moreover, if the agent can learn rich linguistic semantics, we may also teach it to think and make inferences through language, and even to interact with humans. It may then build a general policy across quite different environments, such that one agent solves tasks in 2D and 3D spaces with similar semantics at the same time and can be guided by a human directly. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
\section*{Introduction} Identifying the crystal structure of a given material is important for understanding and predicting its physical properties. For instance, the hardness of industrial steel is strongly influenced by the atomic composition at grain boundaries, which has been studied in numerous theoretical and experimental investigations\cite{herbig2014atomic, meiners2020observations}. Beyond bulk materials, two- (2D) and one-dimensional (1D) systems have far-reaching technological applications, such as solar energy storage, DNA sequencing, cancer therapy, or even space exploration\cite{ferrari2015science, de2013carbon}. To characterize the crystal structure of a given material, one may assign a symmetry label, e.g., the space group. More generally, one may want to find the most similar structure within a list of given known systems. These so-called structural classes are identified by stoichiometry, space group, number of atoms in the unit cell, and location of the atoms in the unit cell (the Wyckoff positions). Methods for automatic crystal-structure recognition are required to analyze the continuously growing amount of geometrical information on crystal structures, from both experimental and computational studies. Millions of crystal structures alongside calculated properties are available in large computational databases such as the NOvel MAterials Discovery (NOMAD) Laboratory \cite{draxl2019nomad}, AFLOW\cite{mehl2017aflow}, the Open Quantum Materials Database (OQMD)\cite{saal2013materials}, Materials Project\cite{jain2011high}, or repositories specialized in 2D materials\cite{haastrup2018computational,mounet2018two}. In scanning transmission electron microscopy (STEM)\cite{pennycook2011scanning}, atomic positions can be reconstructed from atomic-resolution images for specific systems, e.g., graphene \cite{ziatdinov2017deep}. 
Three-dimensional atomic positions are provided by atom probe tomography (APT) \cite{gault2012atom} and atomic electron tomography (AET) experiments\cite{zhou2020atomic}. Still, substantial levels of noise due to experimental limitations and reconstruction errors are present in atomic positions, e.g., distortions beyond a level that can be explained by a physical effect or, in the case of APT, a large number of missing atoms (at least $20\%$, due to the limited detector efficiency\cite{gault2016brief}). Crystal-structure recognition schemes should be able to classify a large number of structural classes (also beyond bulk materials) while at the same time being robust with respect to theoretical or experimental sources of inaccuracy and physically driven deviations from ideal crystal symmetry (e.g., vacancies or thermal vibrations). Given the large amount of data, the classification should be fully automatic and independent of the manual selection of tolerance parameters (which quantify the deviation from an ideal reference structure). Current methods are based either on space-group symmetry or on local structure. For space-group-based approaches (notable examples being Spglib\cite{togo2018texttt} and AFLOW-SYM\cite{hicks2018aflow}), the allowed symmetry operations are calculated directly from the atomic positions to infer a space group label. For local-structure-based approaches, the local atomic neighborhood of each individual atom is classified into a predefined list of reference structures. Examples of these methods are common neighbor analysis (CNA)\cite{honeycutt1987molecular}, adaptive common neighbor analysis (a-CNA)\cite{stukowski2012structure}, bond angle analysis (BAA)\cite{ackland2006applications}, and polyhedral template matching (PTM)\cite{larsen2016robust}. Space-group approaches can treat all space groups but are sensitive to noise, while local-structure methods can be quite robust but only treat a handful of structural classes.
Moreover, none of the available structure recognition schemes can recognize complex nanostructures, e.g., nanotubes. To improve on the current state of the art, we build on recent advances in deep learning, a subfield of machine learning that yields ground-breaking results in many settings, e.g., image and speech recognition\cite{Goodfellow-et-al-2016}. Previous work using machine learning and neural networks (NNs) for crystal-structure recognition\cite{geiger2013neural,reinhart2017machine,dietz2017machine,ziletti2018insightful} could not treat more than a handful of structural classes while remaining robust at the same time. Here, we propose a robust, threshold-independent crystal-structure recognition framework (ARtificial-Intelligence-based Structure Evaluation, short ARISE) to classify a diverse set of 108 structural classes, comprising bulk, 2D, and 1D materials. Bayesian NNs\cite{gal2016dropout,gal2016uncertainty} are used, i.e., a recently developed family of NNs that yields not only a classification but also uncertainty estimates. These estimates are principled in the sense that they approximate those of a well-known probabilistic model (the Gaussian process). This makes it possible to quantify not only prediction uncertainty but also the degree of crystalline order in a material. ARISE's performance is compared with the current state of the art, and the model is then applied to various computational and experimental atomic structures. Crystal characterization and identification of hidden patterns are performed using supervised learning (ARISE) as well as unsupervised analysis (via clustering and dimensionality reduction) of the internal representations of ARISE.
\section*{Results} \begin{figure*} \centering \includegraphics[width=\textwidth]{concept_figure.pdf} \caption{ \textbf{Schematic overview of the single- and polycrystal characterization framework.} \textbf{a-d} Prediction pipeline of the single-crystal classification model ARISE (ARtificial-Intelligence-based Structure Evaluation). In this work we employ the smooth-overlap-of-atomic-positions (SOAP) descriptor. \textbf{e} Examples of crystallographic prototypes included in the training set. \textbf{f-m} The polycrystal classification framework, strided pattern matching (SPM), for slab-like (\textbf{f-j}) and bulk systems (\textbf{k-m}). } \label{fig:single_and_polyc_class_steps} \end{figure*} \subsection*{The input representation} To apply machine learning to condensed-matter and materials science problems, the input coordinates, chemical species, and the lattice periodicity of a given atomic structure are mapped onto a suitable so-called descriptor. Here, the descriptor is a vector that is invariant under rigid translations and rotations of the input structure, as well as under permutations of same-species atoms. The quality and generalization ability of machine-learning models can be significantly increased if physical requirements known to be true are respected by construction (see Supplementary Methods for more details). Most well-known descriptors in physics and materials science incorporate these physical invariants: symmetry functions\cite{behler2011atom}, the smooth-overlap-of-atomic-positions descriptor (SOAP) \cite{bartok2010gaussian,bartok2013representing}, the many-body tensor representation\cite{huo2017unified}, and the moment tensor potential representation\cite{shapeev2016moment}. In this work, SOAP is used as the descriptor (cf. Supplementary Methods).
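The invariance requirements above can be checked numerically. As an illustration only (this is not SOAP, which encodes far richer local-environment information), the following sketch uses a deliberately simple descriptor, a normalized histogram of pairwise distances, and verifies that it is unchanged under a rigid rotation, a translation, and a permutation of the atoms:

```python
import numpy as np

def toy_descriptor(positions, bins=20, r_max=5.0):
    """Toy descriptor: normalized histogram of pairwise distances.

    Invariant under rigid translations/rotations (distances are preserved)
    and under permutations of atoms (the histogram ignores ordering).
    This is NOT SOAP -- it only illustrates the invariance requirements.
    """
    diffs = positions[:, None, :] - positions[None, :, :]
    d = np.linalg.norm(diffs, axis=-1)
    iu = np.triu_indices(len(positions), k=1)   # unique pairs only
    hist, _ = np.histogram(d[iu], bins=bins, range=(0.0, r_max))
    return hist / hist.sum()

rng = np.random.default_rng(0)
pts = rng.uniform(0, 2, size=(8, 3))            # a small random "structure"

# random rotation (orthogonal factor of a Gaussian matrix), translation,
# and permutation of the atom ordering
q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
transformed = pts @ q.T + np.array([1.0, -2.0, 0.5])
transformed = transformed[rng.permutation(len(pts))]

assert np.allclose(toy_descriptor(pts), toy_descriptor(transformed))
```

A full SOAP vector would replace `toy_descriptor` in the actual pipeline; the invariance checks carry over unchanged.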
SOAP has been successfully applied to numerous materials science problems such as interatomic-potential fitting \cite{bartok2015gaussian}, structural similarity quantification\cite{de2016comparing}, or the prediction of grain-boundary characteristics (e.g., energy and mobility)\cite{rosenbrock2017discovering}. Note that any other suitable descriptor that respects the above-mentioned physical requirements can be used as input for our procedure. In particular, we provide the ai4materials code library, into which alternative descriptors can be readily integrated. \subsection*{The Bayesian deep learning model and the training dataset} Once the crystal structures are converted into vectorial descriptors by means of the SOAP mapping, an NN model is used to arrive at a classification decision (cf. Fig. \ref{fig:single_and_polyc_class_steps}c). NNs are nonlinear machine-learning models: they transform the input in a hierarchical fashion by subsequently applying affine and non-linear transformations in a predefined series of layers. The NN learns the optimal transformations that deform the descriptor space so that a robust classification is achieved. In this way, the model is able to learn complex representations which become more abstract from layer to layer\cite{ziletti2018insightful}. This ability to learn representations\cite{bengio2013representation} is one of the key characteristics distinguishing NNs from other machine-learning algorithms. Various NN architectures have been developed in recent years\cite{Goodfellow-et-al-2016}; in this work, a fully connected NN (multilayer perceptron) is employed. A key component of this work is something rarely addressed in machine learning applied to materials science: quantification of model prediction uncertainty (cf. Fig. \ref{fig:single_and_polyc_class_steps}d). Standard NNs are unable to provide reliable model uncertainty \cite{gal2016dropout}.
In a classification setting, there is widespread use of the probability provided by the last layer as an uncertainty estimate. These probabilities are typically obtained by normalizing the sum of output values using the so-called softmax activation function. The class with maximal probability corresponds to the final prediction (here of a specific structural class). One may interpret the classification probability as a quantification of model confidence. However, this strategy is unreliable, as standard NNs tend to erroneously assign unjustified high confidence to points for which a low confidence should be returned instead\cite{gal2016dropout}. The main reason for this behavior is that standard-NN predictions are deterministic, with the softmax output only providing point estimates of the true probability distribution of outputs. In Bayesian NNs, this is addressed by placing distributions over model parameters. This results in probabilistic outputs \textendash\ in contrast to the point estimates from deterministic NNs \textendash\ from which principled uncertainty estimates can be obtained. Gal and Ghahramani\cite{gal2016dropout} showed that high-quality uncertainty estimates (alongside predictions) can be calculated at low cost using stochastic regularization techniques such as dropout\cite{hinton2012improving, srivastava2014dropout} (see Supplementary Methods for more details). After both descriptor and model architecture have been identified, a diverse, comprehensive, and materials-science-relevant training set is constructed. The first \textendash\ and most important \textendash\ step is to define the structural classes which are going to be included in the model: an overview of the structural classes considered in this work is shown in Fig. \ref{fig:single_and_polyc_class_steps}e.
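The Monte-Carlo dropout idea of Gal and Ghahramani can be sketched with a toy two-layer network in pure numpy (layer sizes, weights, and dropout rate below are arbitrary illustration values, not those of ARISE): dropout is kept active at prediction time, the softmax outputs of many stochastic forward passes are averaged, and the mutual information between predictions and model parameters serves as an uncertainty estimate:

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)   # numerical stabilization
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def forward(x, w1, w2, drop=0.1):
    """One stochastic forward pass with dropout kept ON at inference."""
    h = np.maximum(x @ w1, 0.0)                  # ReLU hidden layer
    mask = rng.random(h.shape) > drop            # Bernoulli dropout mask
    h = h * mask / (1.0 - drop)                  # inverted-dropout scaling
    return softmax(h @ w2)

def mc_predict(x, w1, w2, n_samples=200):
    """Monte-Carlo prediction: mean probabilities + mutual information."""
    ps = np.stack([forward(x, w1, w2) for _ in range(n_samples)])
    mean_p = ps.mean(axis=0)
    entropy = lambda p: -(p * np.log(p + 1e-12)).sum(axis=-1)
    # mutual information = predictive entropy - expected entropy;
    # it is non-negative and vanishes when all passes agree
    mi = entropy(mean_p) - entropy(ps).mean(axis=0)
    return mean_p, mi

w1 = rng.normal(size=(4, 16)) * 0.5              # toy random weights
w2 = rng.normal(size=(16, 3)) * 0.5
mean_p, mi = mc_predict(rng.normal(size=(1, 4)), w1, w2)
print(mean_p.round(3), mi)
```

In the real model the forward pass is a trained multilayer perceptron on SOAP inputs; the averaging and mutual-information computation are the same.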
This comprehensive collection of structures includes bulk materials of elemental, binary, ternary, and quaternary composition, as well as 2D materials and carbon nanotubes of chiral, armchair, and zigzag type. In practice, given any database, we extract prototypes, i.e., representative structures that are selected according to some predefined rules. Selection criteria are, for instance, fulfillment of geometrical constraints (number of atoms in the unit cell, number of chemical species) or whether the structures are observed in experiment. For the elemental bulk materials, we extract from AFLOW all experimentally observed structures with up to four atoms in the primitive cell. This yields 27 elemental solids encompassing all Bravais lattices, with the exception of monoclinic and triclinic structures because of their low symmetry. Note that this selection includes not only the most common structures such as face-centered-cubic (fcc), body-centered-cubic (bcc), hexagonal-close-packed (hcp), and diamond (which cover more than $80\%$ of the elemental solids found in nature\cite{ashcroft2011solid}), but also double-hexagonal close-packed, graphite (hexagonal, rhombohedral, buckled), and orthorhombic systems such as black phosphorus. This already goes beyond previous work using NNs for crystal structure recognition\cite{ziletti2018insightful}, where a smaller set of elemental solids is considered. For binaries, we select the ten most common binary compounds according to Pettifor\cite{pettifor1995bonding}, plus the $\text{L1}_{2}$ structure because of its technological relevance \textendash\ for instance, it being the crystal structure of common precipitates in Ni-based superalloys\cite{reed2008superalloys}. This selection also includes non-centrosymmetric structures, i.e., structures without inversion symmetry, such as wurtzite.
To challenge the classification method with an increasing number of chemical species, a small set of ternary and quaternary materials is included as a proof-of-concept. Specifically, six ternary perovskites\cite{castelli2015calculated} (organometal halide cubic and layered perovskites) and six quaternary chalcogenides of $\text{A}_2\text{BCX}_4$ type\cite{pandey2018promising} are included due to their relevance in solar cells and photo-electrochemical water splitting devices, respectively. Going beyond bulk materials, we add an exhaustive set of 46 2D materials, comprising not only the well-known elemental structures such as graphene and phosphorene\cite{novoselov20162d} but also binary semiconductors and insulators (BN, GaN), transition metal dichalcogenides (MoS$_2$), and one example of metal-organic perovskites with six different chemical species. Ternary, quaternary, and 2D materials are taken from the computational materials repository (CMR)\cite{landis2012computational}. To demonstrate the ability of the proposed framework to deal with complex nanostructures, 12 nanotubes of armchair, chiral, and zigzag type are included in the dataset. For each prototype, we calculate the SOAP vector with different parameter settings (see Supplementary Methods for more details) as well as periodic and non-periodic boundary conditions to obtain a comprehensive dataset for training a robust classification model. This results in 39$\,$204 (pristine) structures included in the training set. To optimize the model, the set of pristine structures is split, with $80\%$ being used for training and the remaining $20\%$ for validation. For hyperparameter tuning, we employ Bayesian optimization\cite{10.5555/3042817.3042832}, which allows one to optimize functions whose evaluation is computationally costly, making it particularly attractive for deep-learning models.
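The structure of such a hyperparameter search can be illustrated with a minimal sketch. For simplicity, plain random search is used below as a stand-in for the Bayesian optimization actually employed, and the validation score is a made-up smooth function rather than the result of training a network; the search space (log-uniform learning rate, powers of two for the batch size, a small integer range for the depth) mirrors typical choices:

```python
import numpy as np

rng = np.random.default_rng(2)

def mock_validation_error(lr, batch_size, n_layers):
    """Stand-in for training + validating a network. The true pipeline
    would train the Bayesian NN for each trial and return its validation
    error; here we use a made-up landscape with a noisy optimum."""
    return ((np.log10(lr) + 3.67) ** 2
            + 0.1 * (np.log2(batch_size) - 6) ** 2
            + 0.2 * (n_layers - 3) ** 2
            + 0.01 * rng.normal())

best = None
for _ in range(200):                       # random search over the space
    trial = dict(
        lr=10 ** rng.uniform(-5, -2),              # log-uniform learning rate
        batch_size=int(2 ** rng.integers(4, 9)),   # 16 ... 256
        n_layers=int(rng.integers(1, 6)),          # 1 ... 5 hidden layers
    )
    err = mock_validation_error(**trial)
    if best is None or err < best[0]:
        best = (err, trial)

print(best[1])
```

Bayesian optimization replaces the blind sampling loop with a surrogate model that proposes promising trials, which matters when each evaluation requires a full training run.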
Here, hyperparameters such as the learning rate or the number of layers are optimized in an automatic, reproducible, and computationally efficient manner to maximize the validation accuracy. A list of candidate models is then obtained, from which the optimal model is selected (see the Methods section). We term this model ARISE, and report its architecture in Table \ref{table:mlp_all_data}. \subsection*{Benchmarking} We now compare ARISE's performance on pristine and defective structures with state-of-the-art crystal-structure recognition methods, specifically spglib, CNA, a-CNA, BAA, and PTM (cf. Table \ref{table:accuracy-comparison-single-crystal}). As mentioned in the Introduction, none of the benchmarking methods can treat all the materials shown in Fig. \ref{fig:single_and_polyc_class_steps}e; thus, for fairness, the classification accuracy is only calculated for classes for which the respective methods were designed, implying that most structures are excluded (see \nameref{section:supp_note_1} for more details). The performance on pristine structures is reported in Table \ref{table:accuracy-comparison-single-crystal}. The accuracy in classifying pristine structures is always 100\% as expected, with the only exception being CNA: for this method, the default cutoff only allows correct classification of fcc and bcc but not of hcp structures. For defective structures, the situation is drastically different. Spglib classification accuracy on displaced structures is low, and only slightly improved by using the loose setting (up to $1\%$ displacement). For missing atoms, the accuracy is very low already at the $1\%$ level, regardless of the setting used. Note, however, that this is actually spglib's desired behavior, since the aim of this method is not robust classification.
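The defective structures used throughout the benchmark are generated by uniform random displacements proportional to the nearest-neighbor distance $d_{\rm NN}$ and by randomly removing a fraction of the atoms (cf. the caption of Table \ref{table:accuracy-comparison-single-crystal}). A minimal numpy sketch of such a generator (the brute-force $d_{\rm NN}$ computation is for illustration; a neighbor-list would be used for large systems):

```python
import numpy as np

def make_defective(positions, delta=0.01, eta=0.2, rng=None):
    """Generate a defective structure as in the benchmark: uniform random
    displacements drawn from [-delta*d_NN, +delta*d_NN] per coordinate,
    then removal of (on average) a fraction eta of the atoms."""
    rng = rng or np.random.default_rng()
    # nearest-neighbor distance from the full pairwise distance matrix
    d = np.linalg.norm(positions[:, None] - positions[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    d_nn = d.min()
    noise = rng.uniform(-delta * d_nn, delta * d_nn, size=positions.shape)
    displaced = positions + noise
    keep = rng.random(len(positions)) >= eta     # drop ~eta of the atoms
    return displaced[keep]

# toy input: a 4x4x4 simple cubic lattice with unit spacing
grid = np.stack(np.meshgrid(*[np.arange(4.0)] * 3), axis=-1).reshape(-1, 3)
defective = make_defective(grid, delta=0.01, eta=0.2,
                           rng=np.random.default_rng(3))
print(len(grid), len(defective))
```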
As indicated in the first column of Table \ref{table:accuracy-comparison-single-crystal}, spglib can treat 96 out of the 108 prototypes included in our dataset, with the twelve missing prototypes being carbon nanotubes. Methods based on local atomic environments (PTM, BAA, CNA, a-CNA) perform very well on displaced structures, but they suffer from a substantial accuracy drop for missing-atoms ratios beyond 1\%. Their biggest drawback, however, is that they can treat only a handful of classes: three classes for BAA, CNA, and a-CNA, and twelve classes for PTM. ARISE is very robust with respect to both displacements and missing atoms (even concurrently, cf. Supplementary Table \ref{table:suppl_vac_and_displ}), while being the only method able to treat all 108 classes included in the dataset, including complex systems such as carbon nanotubes. An uncertainty value quantifying model confidence is also returned, which is particularly important when investigating defective structures or inputs that are far outside the training set. We provide a detailed study in \nameref{section:supp_note_3} and Supplementary Fig. \ref{fig:assign_most_sim_proto}, where we challenge ARISE with structures it has not been trained on, i.e., it is forced to fail by construction. We find that ARISE returns non-trivial, physically meaningful predictions, thus making it particularly attractive, e.g., for screening large and structurally diverse databases. Moreover, we analyze predictions and uncertainty of ARISE for continuous structural transformations (cf. \nameref{section:supp_note_2} and Supplementary Fig. \ref{fig:Bain_path_resuts}), where we consider the so-called Bain path that includes transitions between fcc, bcc, and tetragonal structures. We also want to emphasize that, compared to available methods, the classification via ARISE does not require any threshold specifications (e.g., precision parameters as in spglib).
\begin{table*}[ht] \centering
\begin{tabular}{ll}
\hline
Layer type & Specifications \\
\hline \hline
Input Layer & Materials representation \\
+ Dropout & (SOAP descriptor, size: 316)\\
Fully connected layer & Size: 256 \\
+ Dropout + ReLU & \\
Fully connected layer & Size: 512\\
+ Dropout + ReLU & \\
Fully connected layer & Size: 256\\
+ Dropout + ReLU & \\
Fully connected layer & Size: 108 (= \# classes) \\
+ Softmax & \\
\hline
\end{tabular}
\caption{Architecture of the fully connected Bayesian neural network used in this work. Rectified Linear Unit (ReLU) activation functions are used for all hidden layers. The dropout ratio is 3.17\% for all layers. The total number of parameters is 371,820. While the training time was fixed to 300 epochs, hyperopt found a batch size of 64 and a learning rate of 2.16$\cdot10^{-4}$.}
\label{table:mlp_all_data}
\end{table*}
\begin{table*}[ht]
\begin{tabular}{@{}lrrrrrrrrrrrr@{}}
\hline \hline
 & \multicolumn{1}{c}{Pristine} & \multicolumn{1}{l}{} & \multicolumn{5}{c}{Random displacements ($\delta$)} & \multicolumn{1}{l}{} & \multicolumn{4}{c}{Missing atoms ($\eta$)} \\
\cmidrule(lr){4-8} \cmidrule(l){10-13}
 & & & \multicolumn{1}{c}{0.1\%} & \multicolumn{1}{c}{0.6\%} & \multicolumn{1}{c}{1\%} & \multicolumn{1}{c}{2\%} & \multicolumn{1}{l}{4\%} & \multicolumn{1}{l}{} & \multicolumn{1}{c}{1\%} & \multicolumn{1}{c}{5\%} & \multicolumn{1}{c}{10\%} & \multicolumn{1}{c}{20\%}\\
\cmidrule(r){1-8} \cmidrule(l){9-13}
Spglib, loose & 100.00 && 100.00 & 100.00 & 95.26 & 20.00 & 0.00 && 11.23 & 0.00 & 0.00 & 0.00 \\
(96 / 108) & && & & & & && & & & \\
& && & & & & && & & & \\
Spglib, tight & 100.00 && 0.00 & 0.00 & 0.00 & 0.00 & 0.00 && 11.23 & 0.00 & 0.00 & 0.00 \\
(96 / 108) & && & & & & && & & & \\
& && & & & & && & & & \\
PTM & 100.00 && 100.00 & 100.00 & 100.00 & 100.00 & 100.00 && 88.67 & 51.76 & 25.93 & 6.24 \\
(12 / 108) & && & & & & && & & & \\
& && & & & & && & & & \\
CNA & 66.14 && 62.81 & 62.81 & 54.55 & 32.34 & 31.41 && 55.86 & 32.50 & 15.75 & 3.07 \\
(3 / 108) & && & & & & && & & & \\
& && & & & & && & & & \\
a-CNA & 100.00 && 100.00 & 100.00 & 100.00 & 100.00 & 100.00 && 89.25 & 52.81 & 25.92 & 5.37 \\
(3 / 108) & && & & & & && & & & \\
& && & & & & && & & & \\
BAA & 100.00 && 100.00 & 100.00 & 100.00 & 100.00 & 97.85 && 99.71 & 88.78 & 65.21 & 25.38 \\
(3 / 108) & && & & & & && & & & \\
& && & & & & && & & & \\
GNB & 62.63 && 56.50 & 55.94 & 55.56 & 54.98 & 52.72 && 54.51 & 52.94 & 52.67 & 52.09 \\
(108 / 108) & && & & & & && & & & \\
& && & & & & && & & & \\
BNB & 75.76 && 65.56 & 65.19 & 63.61 & 61.58 & 56.58 && 65.49 & 64.00 & 62.43 & 60.48 \\
(108 / 108) & && & & & & && & & & \\
& && & & & & && & & & \\
ARISE & 100.00 && 100.00 & 100.00 & 100.00 & 99.86 & 99.29 && 100.00 & 100.00 & 100.00 & 99.85 \\
(108 / 108) & && & & & & && & & & \\
\hline \hline
\end{tabular}
\caption{\textbf{Accuracy in identifying the parent class of defective crystal structures.} The defective structures are generated by randomly displacing atoms according to a uniform distribution on an interval $\left[ - \delta \cdot d_{\rm NN}, + \delta \cdot d_{\rm NN} \right]$ proportional to the nearest-neighbor distance $d_{\rm NN}$ (central panel), or by removing $\eta$\% of the atoms (right panel). All accuracy values are given in percent. For benchmarking we use Spglib\cite{togo2018texttt} (with two settings for the precision parameters, the so-called loose setting (position/angle tolerance 0.1\,\AA\ / 5$^\circ$) and the tight setting (position/angle tolerance $10^{-4}$ / 1$^\circ$)), polyhedral template matching (PTM)\cite{larsen2016robust}, common neighbor analysis (CNA)\cite{honeycutt1987molecular}, adaptive common neighbor analysis (a-CNA)\cite{stukowski2012structure}, and bond angle analysis (BAA)\cite{ackland2006applications}. The number of classes which can be treated out of the materials pool in Fig. \ref{fig:single_and_polyc_class_steps}e is shown in parentheses for each method.
spglib can assign a space group to all materials except the 12 nanotubes. PTM can only classify 7 elemental and 5 binary materials of those considered in this work. Additional classes are missing for CNA, a-CNA, and BAA as they cannot classify simple cubic (sc) and diamond structures. The approach proposed here can be applied to all classes, and thus the whole dataset is used (see Supplementary Tables \ref{table:prototype_listing_part_I}-\ref{table:prototype_listing_part_III} for a complete list). Moreover, we compare ARISE to a standard Bayesian approach: Naive Bayes (NB). We consider two different variants of NB: Bernoulli NB (BNB) and Gaussian NB (GNB) \textendash\ see the Methods section for more details. ARISE is overwhelmingly more accurate than both NB methods, for both pristine and defective structures. } \label{table:accuracy-comparison-single-crystal} \end{table*} \subsection*{Polycrystal classification} Up to this point, we have discussed only the analysis of single-crystal (mono-crystalline) structures using ARISE. To enable the local characterization of polycrystalline systems, we introduce strided pattern matching (SPM). For slab-like systems (cf. Fig. \ref{fig:single_and_polyc_class_steps}f), a box of predefined size is scanned in-plane across the whole crystal with a given stride; at each step, the atomic structure contained in the box is represented using a suitable descriptor (cf. Fig. \ref{fig:single_and_polyc_class_steps}g-h), and classified (Fig. \ref{fig:single_and_polyc_class_steps}i), yielding a collection of classification probabilities (here: 108) with associated uncertainties. These are arranged in 2D maps (Fig. \ref{fig:single_and_polyc_class_steps}j). The classification probability maps indicate how much a given polycrystalline structure locally resembles a specific crystallographic prototype. The uncertainty maps quantify the statistics of the output probability distribution (cf. Supplementary Methods).
Increased uncertainty indicates that the corresponding local segments deviate from the prototypes known to the model. Thus, these regions are likely to contain defects such as grain boundaries or, more generally, atomic arrangements different from those included in training. For bulk systems (Fig. \ref{fig:single_and_polyc_class_steps}k), the slab analysis depicted in Fig. \ref{fig:single_and_polyc_class_steps}f-j is repeated for multiple slices (Fig. \ref{fig:single_and_polyc_class_steps}l), resulting in 3D classification probability and uncertainty maps (Fig. \ref{fig:single_and_polyc_class_steps}m). SPM extends common approaches such as labeling individual atoms with symmetry labels\cite{stukowski2012structure}, as the striding allows one to discover structural transitions within polycrystals in a smooth way. SPM can be applied to any kind of data providing atomic positions and chemical species. Results obtained via SPM are influenced by the quality of the classification model as well as by box size and stride (see the Methods section for more details). \subsection*{Synthetic polycrystals} First, the classification via SPM combined with ARISE is performed for a slab-like synthetic polycrystal consisting of fcc, bcc, hcp, and diamond grains (cf. Fig. \ref{fig:synthetic_polycrystal}a). Due to the nature of the system, the SPM boxes near the grain boundaries will contain mixtures of different crystal structures. The results are shown in Fig. \ref{fig:synthetic_polycrystal}b and c: the network assigns high classification probability to the correct prototypes. Uncertainty is low within the grains, increasing at grain boundaries and crystal outer borders, in line with physical intuition. The result remains virtually unchanged when introducing atomic displacements (up to $1\%$ of the nearest neighbor distance) while concurrently removing 20\% of the atoms (cf. Supplementary Fig. \ref{fig:supp_four_grains_defective}).
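The box-scanning step at the core of SPM (Fig. \ref{fig:single_and_polyc_class_steps}f-h) can be sketched in a few lines; box size and stride below are illustrative, and in the real pipeline each collected segment would be converted to a SOAP vector and passed through ARISE:

```python
import numpy as np

def strided_boxes(positions, box=3.0, stride=1.0):
    """Strided pattern matching (SPM) scan for a slab-like system:
    slide a box of fixed in-plane size across the xy-plane with a given
    stride and collect the atoms inside each box. Each segment would then
    be converted to a descriptor and classified."""
    lo = positions[:, :2].min(axis=0)
    hi = positions[:, :2].max(axis=0)
    segments = []
    for x0 in np.arange(lo[0], hi[0] - box + 1e-9, stride):
        for y0 in np.arange(lo[1], hi[1] - box + 1e-9, stride):
            inside = ((positions[:, 0] >= x0) & (positions[:, 0] < x0 + box)
                      & (positions[:, 1] >= y0) & (positions[:, 1] < y0 + box))
            segments.append(((x0, y0), positions[inside]))
    return segments

rng = np.random.default_rng(5)
slab = rng.uniform(0, 10, size=(500, 3))   # toy slab, 10x10 in-plane
segs = strided_boxes(slab, box=3.0, stride=1.0)
print(len(segs), len(segs[0][1]))
```

Because the stride is smaller than the box, neighboring segments overlap, which is what makes the resulting classification maps vary smoothly across structural transitions.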
The highest classification probabilities (apart from the top four shown in Fig. \ref{fig:synthetic_polycrystal}b) are shown in Supplementary Fig. \ref{fig:suppl_four_grains}; a discussion on the stride can be found in Supplementary Fig. \ref{fig:low_resolution_four_grains}. Going beyond classification, we show how unsupervised learning can be used to access structural similarity information embedded in ARISE's internal representations, and use it for the characterization of crystal systems. We consider the mono-species polycrystal shown in Fig. \ref{fig:synthetic_polycrystal}a and collect ARISE's representations of the overall 7$\,$968 local boxes. Next, we employ Hierarchical Density-Based Spatial Clustering of Applications with Noise (HDBSCAN)\cite{mcinnes2017accelerated, McInnes2017} to identify clusters in the high-dimensional representation space. HDBSCAN estimates the density underlying a given data set and then constructs a hierarchy of clusters, from which the final clustering can be obtained via an intuitive and tunable parameter (see Methods). The obtained clusters correspond to the four crystalline grains in the structure (Fig. \ref{fig:synthetic_polycrystal}d). Points identified as outliers (marked in orange) coincide with grain-boundary and outer-border regions. Next, the high-dimensional manifold of the NN representations is projected into 2D via Uniform Manifold Approximation and Projection (UMAP)\cite{mcinnes2018umap}. UMAP models the manifold underlying a given dataset and then finds a low-dimensional projection that can capture both global and local distances of the original high-dimensional data. This returns a structure-similarity map (Fig. \ref{fig:synthetic_polycrystal}e), which allows one to visually investigate similarities among structures: points (structures) close to each other in this map are considered to be similar by the algorithm.
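The cluster-then-project workflow can be sketched with simple numpy stand-ins: synthetic "representations" (well-separated Gaussian blobs, one per grain), a minimal k-means in place of HDBSCAN (which, in addition, estimates densities, picks the number of clusters automatically, and flags outliers), and a linear PCA projection in place of the nonlinear UMAP:

```python
import numpy as np

rng = np.random.default_rng(6)

# toy stand-in for ARISE's internal representations: three well-separated
# blobs in a 32-dimensional space (one blob per "grain")
centers = rng.normal(scale=5.0, size=(3, 32))
reps = np.concatenate([c + rng.normal(scale=0.3, size=(100, 32))
                       for c in centers])

def kmeans(x, k=3, iters=50):
    """Minimal k-means clustering -- a stand-in for HDBSCAN."""
    cents = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((x[:, None] - cents[None]) ** 2).sum(-1), axis=1)
        # keep the old centroid if a cluster happens to become empty
        cents = np.stack([x[labels == j].mean(axis=0) if np.any(labels == j)
                          else cents[j] for j in range(k)])
    return labels

def pca_2d(x):
    """Linear 2D projection via PCA -- a stand-in for the nonlinear UMAP."""
    xc = x - x.mean(axis=0)
    _, _, vt = np.linalg.svd(xc, full_matrices=False)
    return xc @ vt[:2].T

labels = kmeans(reps)
embedding = pca_2d(reps)
print(np.bincount(labels), embedding.shape)
```

In the actual analysis, the hdbscan and umap-learn libraries operate on the extracted NN representations; the overall flow (cluster in high dimension, project to 2D for visualization) is the same.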
Structures belonging to the same cluster are in close proximity to each other, and clearly separated from other clusters. Conversely, outlier points are split across different regions of the map. This is physically meaningful: outliers are not a cohesive cluster of similar structures, but rather comprise different types of grain boundaries (e.g., fcc-to-bcc or fcc-to-diamond transitions; cf. Supplementary Fig. \ref{fig:supp_hdbscan_pos_gb}). In this synthetic setting, we can also use the classification prediction to further verify the unsupervised analysis: the results obtained via unsupervised learning indeed match ARISE's predictions (cf. Figs. \ref{fig:synthetic_polycrystal}e and \ref{fig:synthetic_polycrystal}f). Moreover, an analysis of the mutual information (Fig. \ref{fig:synthetic_polycrystal}g) reveals that points at the core of the clusters are associated with low uncertainty, while points closer to the boundaries show increased uncertainty. Similar results are obtained for the other layers (cf. Supplementary Fig. \ref{fig:four_grains_umap_hdbscan_full}). We now move to a more realistic system: a model structure for Ni-based superalloys\cite{reed2008superalloys} (cf. Fig. \ref{fig:synthetic_polycrystal}h). Ni-based superalloys are used in aircraft engines due to their large mechanical strength at high temperatures, which derives from ordered L$1_2$ precipitates ($\gamma^\prime$ phase) embedded in an fcc matrix ($\gamma$ phase). We generate an atomic structure consisting of an fcc matrix in which Al and Ni atoms are randomly distributed. In the center, however, the arrangement of Al and Ni atoms is no longer random, but ordered such that the L$1_2$ phase is created (cf. Fig. \ref{fig:synthetic_polycrystal}h). The cubic shape of this precipitate is in accordance with experimental observations\cite{raabe_exp_superalloy}. The resulting structure comprises 132$\,$127 atoms over a cube of $120\,\text{\AA}$ length.
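A structure of this type can be sketched in a few lines of numpy: an fcc supercell with random Ni/Al occupation, whose sites inside a central cube are reassigned to the ordered L1$_2$ occupation (Al on the cube-corner sublattice, Ni on the face-centered sublattice). Lattice constant, system size, and precipitate size below are illustrative, not the values used for the actual 132$\,$127-atom structure:

```python
import numpy as np

rng = np.random.default_rng(7)
a = 3.52                                   # approximate fcc Ni lattice constant
n = 10                                     # n x n x n conventional cells
# the four fcc sites per conventional cell (fractional coordinates)
basis = np.array([[0, 0, 0], [0.5, 0.5, 0], [0.5, 0, 0.5], [0, 0.5, 0.5]])

cells = np.stack(np.meshgrid(*[np.arange(n)] * 3, indexing="ij"),
                 axis=-1).reshape(-1, 3)
frac = (cells[:, None, :] + basis[None, :, :]).reshape(-1, 3)
pos = frac * a
is_corner = np.tile([True, False, False, False], len(cells))

# gamma matrix: random Ni/Al occupation (roughly 75/25)
species = np.where(rng.random(len(pos)) < 0.25, "Al", "Ni")

# gamma' precipitate: chemically ordered L1_2 (Al on the corner sublattice,
# Ni on the face-centered sublattice) inside a central cube
center = pos.mean(axis=0)
inside = np.all(np.abs(pos - center) < 1.5 * a, axis=1)
species[inside] = np.where(is_corner[inside], "Al", "Ni")

print(len(pos), int(inside.sum()))
```

Outside the central cube the chemical species are uncorrelated with the sublattice; inside, they follow the L1$_2$ ordering, which is exactly the chemical-order contrast the classifier is asked to detect.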
As shown via a section through the center in Fig. \ref{fig:synthetic_polycrystal}i, fcc is correctly assigned to the matrix, and the precipitate is also detected. The uncertainty is increased at the boundary between the random matrix and the precipitate, as well as at the outer borders. Fig. \ref{fig:synthetic_polycrystal}j illustrates the L1$_2$ classification probability in a 3D plot. The precipitate is detected in both pristine and highly-defective structures. This analysis demonstrates that ARISE can distinguish between chemically ordered and chemically disordered structures, a feature that will be exploited in the section ``Application to atomic-electron-tomography data''. Another realistic system is shown in Fig. \ref{fig:synthetic_polycrystal}k, which is the lowest-energy structure obtained from an evolutionary structure search\cite{meiners2020observations}. The structural patterns at the grain boundary are also observed in scanning transmission electron microscopy (STEM) experiments. SPM-ARISE correctly identifies the fcc symmetry within the grains (cf. Fig. \ref{fig:synthetic_polycrystal}l) while assigning double hexagonal close-packed (dhcp) symmetry at the grain boundary. The local boxes at the grain boundary contain partial fcc structures, while changes in stacking and distortions decrease their symmetry (cf. Fig. \ref{fig:synthetic_polycrystal}m). The dhcp phase (ABAC close-packing) also contains fcc (ABC) and a lower-symmetry packing (hcp, AB), thus justifying the assignment. To supplement this study, we investigate several examples from the largest currently available grain-boundary database\cite{zheng2020grain}, including fcc, bcc, hcp, and dhcp symmetry as well as various grain boundary types, which ARISE can classify correctly (cf. Supplementary Fig. \ref{fig:supp_gb_database}). Note that ARISE correctly identifies even the $\alpha$-Sm-type stacking (ABCBCACAB).
No other fully automatic approach offers a comparable sensitivity. \begin{figure*} \includegraphics[width=\textwidth]{synthetic_polycrystals_figure.pdf} \caption{\textbf{Analysis of synthetic polycrystals.} \textbf{a} Mono-species polycrystal consisting of four grains with face-centered cubic (fcc), body-centered cubic (bcc), hexagonal close-packed (hcp), and diamond (dia) symmetry. \textbf{b} Classification probabilities of expected prototypes. \textbf{c} Mutual information map for uncertainty quantification. \textbf{d-g} Unsupervised analysis of internal neural-network representations. \textbf{d} The neural-network representations are extracted for each local segment in \textbf{a} (obtained via SPM). Clustering (via Hierarchical Density-Based Spatial Clustering of Applications with Noise, HDBSCAN) is applied to this high-dimensional space; the polycrystal is marked according to the resulting clusters (see legend in \textbf{e} for the color assignments). \textbf{e-g} Two-dimensional projection (via Uniform Manifold Approximation and Projection, UMAP) of neural-network representations colored by cluster label, ARISE predicted class, and mutual information, respectively. In \textbf{e}, all points for which HDBSCAN does not assign a cluster are labeled as outliers. In \textbf{f}, all points that are not classified as fcc, diamond, hcp, or bcc are labeled as other. Note that while the distances between points are meaningful, the axes merely serve as a bounding window and are not interpretable \textendash\ a situation typically encountered in non-linear methods such as UMAP (cf. section 6\cite{mcinnes2018umap}). \textbf{h-j} Precipitate detection in Ni-based superalloys. \textbf{h} Binary model system (right) and depiction of the two appearing phases (left). \textbf{i} Classification probabilities of expected prototypes and mutual information for a slice through the center of the structure.
\textbf{j} 3D-resolved detection of the precipitate via the L1$_2$ classification probability for the pristine (left) and highly-defective case (right), for which 20\% of the atoms are removed and randomly displaced (up to 5\% of the nearest neighbor distance). \textbf{k} Lowest-energy grain boundary structure (Cu, fcc) predicted from an evolutionary search. The so-called Pearl pattern appears at the grain boundary, which is also observed in experiment\cite{meiners2020observations}. \textbf{l} SPM-ARISE analysis, correctly identifying fcc (ABC close-packing) in the grains, while detecting double hexagonal close-packed (dhcp, ABAC) at the grain boundary. \textbf{m} Exemplary analysis of a local box at the grain boundary, illustrating a change in stacking and increased distortions, which motivates the assignment of dhcp (which contains 50\,\% of both fcc and hcp close-packings). } \label{fig:synthetic_polycrystal} \end{figure*} \subsection*{Application to transmission-electron-microscopy experimental images} We now investigate defective structures originating from a completely different data source, namely STEM experiments, to demonstrate the generalization ability of ARISE and its applicability to experimental data. Moreover, we show how global and local analysis can be combined to analyze crystal structures. STEM experiments are a valuable resource for characterizing material specimens and for studying, for instance, the atomic structures at grain boundaries\cite{meiners2020observations}. Atomic resolution can be achieved in high-angle annular dark field (HAADF) images. The global assignments of ARISE are tested on two experimental HAADF images of graphene shown in Fig. \ref{fig:STEM}a. These images contain a substantial amount of noise, which makes it very challenging to recognize the graphene honeycomb pattern by the naked eye.
The choice of graphene is motivated by its being a flat 2D material; $x$ and $y$ atomic positions obtained from STEM images thus provide the actual crystal structure, and not a mere projection. Approximate atomic positions (i.e., $x$ and $y$ coordinates) from HAADF images are obtained via AtomNet\cite{ziatdinov2017deep}, and shown in Fig. \ref{fig:STEM}b. ARISE is then used to classify the structures following the steps summarized in Fig. \ref{fig:single_and_polyc_class_steps}a-d. The top predictions ranked by classification probability are shown in Fig. \ref{fig:STEM}c, together with the uncertainty of the assignments as quantified by the mutual information. ARISE correctly recognizes both images as graphene, despite the substantial amount of noise present in the images and in the reconstructed atomic positions. For the first image (Fig. \ref{fig:STEM}a, left), graphene is predicted with very high probability ($\sim 99\%$). Indeed, the similarity to graphene is apparent, although evident distortions are present in some regions (e.g., misaligned bonds marked in Fig. \ref{fig:STEM}b). The second candidate structure is C$_3$N, predicted with $\sim 1\%$ probability; in C$_3$N, atoms are arranged in a honeycomb lattice, making this low-probability assignment physically meaningful as well. For the second image (Fig. \ref{fig:STEM}a, right), ARISE also correctly predicts graphene, this time with 79$\%$ probability. The uncertainty is six times larger than in the previous case. Indeed, this structure is much more defective than the previous one: it contains a grain boundary in the lower part, causing evident deviations from the pristine graphene lattice, as illustrated in Fig. \ref{fig:STEM}b (right). The other four candidate structures appearing in the top five predictions (PbSe, MnS$_2$, BN, C$_3$N) are the remaining completely flat monolayers known to the network (out of the 108 structures in the training dataset, only five are flat monolayers).
Note that no explicit information about the dimensionality of the material is given to the model. It is also important to point out that ARISE's robustness well beyond physical levels of noise is essential to achieve the correct classification despite the substantial amount of noise from both the experiment and the atomic-position reconstruction. Besides the separate classification of single images, ARISE also learns meaningful similarities between images (i.e., structures). To demonstrate this, we analyze a library of graphene images with Si defects\cite{ziatdinov2019building} and quantify their similarity using ARISE's internal representations. Fig. \ref{fig:STEM}d investigates a selection of images which contain the mono-species structures of Fig. \ref{fig:STEM}a (right), e, and systems with up to four Si atoms. Atomic positions are determined via AtomNet. Then, the internal representations from ARISE are extracted and the pairwise cosine similarity is calculated. The cross-similarity matrix is depicted in Fig. \ref{fig:STEM}d, revealing a block-matrix form in which the binary and mono-species structures are separated, i.e., more similar to each other, which can be attributed to the number of Si defects. This characteristic reappears for a larger selection of structures (cf. Supplementary Fig. \ref{fig:supp_STEM_sim}), thus confirming the analysis of Fig. \ref{fig:STEM}d. This investigation demonstrates that ARISE learns meaningful similarities, supporting the general applicability of ARISE for similarity quantification. \begin{figure*} \begin{center} \includegraphics[width=\textwidth]{STEM_figure.pdf} \end{center} \caption{\textbf{Analysis of HAADF and HRTEM images via ARISE and SPM.} \textbf{a} Experimental high-angle annular dark field (HAADF) images of two graphene structures. White scale bars in all HAADF images in this figure are positioned in the bottom left and correspond to the typical graphene bond length (1.42\,\AA).
\textbf{b} The atomic positions are reconstructed from the images via AtomNet\cite{ziatdinov2017deep}. \textbf{c} The resulting atomic structures are analyzed using ARISE. The top predicted structures are shown. Mutual information is used to quantify the classification uncertainty. \textbf{d} Similarity quantification of HAADF images via ARISE. The images in \textbf{a} (right) and \textbf{e} are compared to a selection of graphene systems with Si defects\cite{ziatdinov2019building}. For each image, AtomNet is used for reconstruction and the internal representations of ARISE are extracted (here, second hidden layer). Then, the cross-similarity is calculated using the cosine similarity. A block-matrix structure arises that correlates with the number of Si atoms. A similar pattern is observed for a larger selection of structures, cf. Supplementary Fig. \ref{fig:supp_STEM_sim}. \textbf{e} HAADF image and reconstructed atomic positions (analogous to \textbf{a-b}) of a larger sample. Pentagons can be spotted near the grain boundary (see inset). \textbf{f} MnS$_2$ prototype. \textbf{g} Local analysis via strided pattern matching: graphene is the dominant structure. Different prototypes (MnS$_2$) are assigned \textendash\ and with high uncertainty (mutual information) \textendash\ only at the grain boundary. \textbf{h} High-resolution transmission electron microscopy (HRTEM) image of a quasicrystalline structure (icosahedral Al-Cu-Fe, adapted from the original reference\cite{li2016review}, see Methods). While there is an underlying order, the structure is aperiodic (i.e., no translational symmetry is present). As visualized in the zoom, the bright spots align with five-fold symmetry axes and pentagons of different size appear. Based on the reconstruction via AtomNet (bottom right), ARISE (via strided pattern matching) identifies MnS$_2$ as the dominating prototype (\textbf{i}), which similarly to the input structure contains pentagon patterns (\textbf{f}).
} \label{fig:STEM} \end{figure*} While so far we have analyzed HAADF images on a global scale, a local analysis via SPM allows one to zoom into a given structure and locate sub-structural features. This is particularly useful for polycrystalline and/or larger systems (e.g., more than 1$\,$000 atoms). As an illustrative example, we consider the structure in Fig. \ref{fig:STEM}e. The mutual information shown in Fig. \ref{fig:STEM}g (right) clearly reveals the presence of a grain boundary. In Fig. \ref{fig:STEM}g (left), the classification probabilities of graphene and MnS$_2$ (the dominant prototypes) are presented, the latter being assigned at the grain boundary. This assignment can be traced back to pentagon-like patterns appearing near the grain boundary (as highlighted in Fig. \ref{fig:STEM}e), a pattern similar to the one formed by Mn and S atoms in MnS$_2$ (cf. Fig. \ref{fig:STEM}f). Next, we challenge the established procedure for the local analysis of 2D images with data from a completely different resource. We investigate a high-resolution transmission electron microscopy (HRTEM) image of a quasicrystalline structure\cite{levine1984quasicrystals, li2016review}, cf. Fig. \ref{fig:STEM}h. The bright spots are ordered aperiodically, making it a very hard task to identify the underlying order by eye. Via the established procedure, MnS$_2$ is predicted as the most similar prototype (cf. Fig. \ref{fig:STEM}i). MnS$_2$ contains pentagon patterns (cf. Fig. \ref{fig:STEM}f) which can also be seen in the quasicrystal (cf. zoom in Fig. \ref{fig:STEM}h). This result suggests that ARISE and SPM are new tools for the automatic classification of quasicrystalline order \textendash\ a promising yet under-explored area. \subsection*{Application to atomic-electron-tomography data} While HAADF images are a valuable experimental resource, they only provide 2D projections.
3D structural and chemical information can however be obtained from atomic electron tomography (AET), with atomic resolution achieved in recent experiments\cite{miao2016atomic, zhou2020atomic, chen2013three, xu2015three}. Notably, this technique provides 3D atomic positions labeled by chemical species, to which ARISE and SPM can be readily applied. While extensions to other systems such as 2D materials have been reported\cite{tian2020correlating}, metallic nanoparticles have been the main experimental focus so far, specifically FePt systems due to their promise for biomedicine and magnetic data storage\cite{sun2006recent}. First, a FePt nanoparticle\cite{yang2017deciphering} is classified using SPM-ARISE. ARISE's robustness is critical for this application, since the structural information provided by AET experiments is based on reconstruction algorithms that cause visible distortions (cf. Fig. \ref{fig:AET}a). SPM-ARISE primarily detects L1$_2$, L1$_0$, and fcc phases (see Supplementary Fig. \ref{fig:supp_nanoparticle_2017_uncertainty}). This is in line with physical expectations: annealing leads to structural transitions from chemically disordered to ordered fcc (A1 to L1$_2$) or to the tetragonal L1$_0$ phase\cite{sun2006recent, yang2017deciphering}. Besides the expected prototypes, ARISE also finds regions similar to tetragonally distorted, mono-species fcc (cf. Supplementary Fig. \ref{fig:supp_nanoparticle_2017_uncertainty}), which is meaningful given the presence of fcc and the tetragonal phase L1$_0$. To go beyond the information provided by classification and discover hidden patterns and trends in AET data, we conduct an exploratory analysis using unsupervised learning on ARISE's internal representations. While the procedure is similar to the one presented in Fig. \ref{fig:synthetic_polycrystal}d-g, here the analysis is truly exploratory (no ground truth is known), and the data come from experiment.
First, all SPM boxes classified as L1$_0$ are extracted, a choice motivated by the physical relevance of this phase, in particular due to its magnetic properties\cite{sun2006recent}. This reduces the number of data points (boxes) from 43$\,$679 to 5$\,$359 \textendash\ a significant filtering step for which the automatic nature of ARISE is essential. In the representation space of the first hidden layer, HDBSCAN identifies seven clusters (and the outliers). To interpret the cluster assignments, we analyze geometrical characteristics of the atomic structures (i.e., the local boxes) assigned to the different clusters. Specifically, we consider the nearest neighbor distances among Fe atoms and among Pt atoms, $d_{\text{FeFe}}$ and $d_{\text{PtPt}}$, respectively (cf. Supplementary Methods for the definition). For an ideal tetragonal structure, the difference $\Delta d = d_{\text{FeFe}} - d_{\text{PtPt}}$ is zero (cf. Fig. \ref{fig:AET}b, top left); a deviation from this value thus quantifies the level of distortion. Looking at the histograms of the (signed) quantity $\Delta d$ shown in Fig. \ref{fig:AET}b for each cluster, one can observe that each distribution is peaked; moreover, the distribution centers vary from negative to positive $\Delta d$ values across different clusters. The distribution of the outliers is shown for comparison: their $\Delta d$ distribution is very broad, since outlier points do not form a meaningful cluster. While overlap exists, the clusters correspond to subgroups of structures, each distorted in a different way, as quantified by $\Delta d$. Thus, we discovered a clear trend via the cluster assignment that correlates with the level of distortion. The cluster separation can be visualized in 2D via UMAP (cf. Fig. \ref{fig:AET}b). Notably, the clusters do not overlap, even in this highly compressed representation (from 256 to 2 dimensions).
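The $\Delta d$ statistic described above can be sketched in a few lines of numpy. Since the precise definition of $d_{\text{FeFe}}$ and $d_{\text{PtPt}}$ is deferred to the Supplementary Methods, averaging each atom's nearest same-species neighbor distance is a simplifying assumption here, and the function names are purely illustrative:

```python
import numpy as np

def mean_nn_distance(points):
    """Mean distance of each atom to its nearest neighbor within the same species."""
    diff = points[:, None, :] - points[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(dist, np.inf)  # exclude self-distances
    return dist.min(axis=1).mean()

def delta_d(fe_positions, pt_positions):
    """Signed difference d_FeFe - d_PtPt used to quantify tetragonal distortion."""
    return mean_nn_distance(fe_positions) - mean_nn_distance(pt_positions)

# toy example (hypothetical coordinates): two square sublattices with equal spacing
fe = np.array([[0., 0., 0.], [2., 0., 0.], [0., 2., 0.], [2., 2., 0.]])
pt = np.array([[1., 1., 1.], [3., 1., 1.], [1., 3., 1.], [3., 3., 1.]])
print(delta_d(fe, pt))  # equal nearest-neighbor spacing on both sublattices -> 0.0
```

A histogram of `delta_d` over the boxes of each cluster then reproduces the kind of per-cluster distribution discussed above.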
Some of the clusters may also contain further sub-distributions, which seems apparent for instance from the $\Delta d$ distribution of cluster 6. The regions corresponding to the clusters could be hinting at a specific growth mechanism of the L1$_0$ phase during annealing, although further investigations are necessary to support this claim. The present analysis provides a protocol for the machine-learning-driven exploration of structural data: supervised learning is employed to filter out a class of interest (which is not a necessary step, cf. Fig. \ref{fig:synthetic_polycrystal}d-g), then unsupervised learning is applied to the NN representations, revealing regions sharing physically meaningful geometrical characteristics. Finally, we apply ARISE to time-resolved (i.e., four-dimensional) AET data. Specifically, a nanoparticle measured at three different annealing times is investigated\cite{zhou2019observing}. The mutual information as obtained via SPM-ARISE is shown in Fig. \ref{fig:AET}c for five central slices. In regions between the outer shell and the inner core, the mutual information clearly decreases for larger annealing times, indicating that crystalline order increases inside the nanoparticle (see also Supplementary Fig. \ref{fig:AET_annealing_supp} for more details). This analysis confirms that the predictive uncertainty of ARISE, as quantified by the mutual information, directly correlates with crystalline order. The mutual information can therefore be considered an AI-based order parameter, which we anticipate to be useful in future nucleation-dynamics studies. \begin{figure*} \centering \includegraphics[width=\textwidth]{AET_nanoparticle.pdf} \caption{\textbf{Analysis of atomic electron tomography data.} \label{fig:AET} \textbf{a} Side view of a FePt nanoparticle ($\sim$23k atoms), with atomic positions and chemical species from atomic electron tomography (AET) data\cite{yang2017deciphering}.
\textbf{b} Two-dimensional projection (bottom left) of neural-network representations (first hidden layer) via UMAP for regions classified as L1$_0$ by ARISE. The distribution of the difference between the nearest neighbor distances $d_{\text{FeFe}}$ and $d_{\text{PtPt}}$ (highlighted by bonds in the top left part) is shown for each cluster (right), where cluster $i= 0, ..., 6$ is denoted as Ci, while all points for which HDBSCAN does not assign a cluster are labeled as outlier. \textbf{c} Five central slices (mutual information, obtained via strided pattern matching) for three different annealing times (data from a four-dimensional AET experiment\cite{zhou2019observing}). } \end{figure*} \section*{Discussion} In this work, Bayesian deep learning is employed to achieve a flexible, robust, and threshold-independent crystal classification model, which we term ARISE. This approach correctly classifies a comprehensive and diverse set of crystal structures from computations and experiments, including polycrystalline systems (via strided pattern matching). Given an unknown structure, the network assigns \textendash\ in an automatic fashion \textendash\ the most similar prototypes among 108 possible classes (and quantifies the similarity!), which is a very complicated task even for trained materials scientists, in particular in the case of complex and possibly defective 3D structures. ARISE is trained on ideal synthetic systems only and correctly identifies crystal structures in STEM and AET experiments, hence demonstrating strong generalization capabilities. The Bayesian deep-learning model provides classification probabilities, which \textendash\ at variance with standard NNs \textendash\ allow for the quantification of predictive uncertainty via mutual information. The mutual information is found to directly correlate with the degree of crystalline order, as shown by the analysis of time-resolved data from AET experiments.
This demonstrates the correlation of an information-theory concept with physical intuition. The internal NN representations are analyzed via state-of-the-art unsupervised learning. The clusters identified in this high-dimensional internal space make it possible to uncover physically meaningful structural regions. These can be grain boundaries, but also unexpected substructures sharing geometrical properties, as shown for metallic nanoparticles from AET experiments. This illustrates how supervised and unsupervised machine learning can be combined to discover hidden patterns in materials science data. In particular, the physical content learned by the NN model is explained by means of unsupervised learning. Since ARISE is not limited to predicting the space group, systems where the space group does not characterize the crystal structure can be tackled (as demonstrated for carbon nanotubes). More complex systems such as quasicrystals\cite{levine1984quasicrystals}, periodic knots, or weavings\cite{liu2018geometry} could also be considered. Indeed, ARISE can be applied to any data providing Cartesian coordinates labeled by chemical species. Practically, one simply needs to add the new structures of interest to the training set, and re-train or fine-tune (i.e., via transfer learning) the NN with the desired labels. Moreover, the mutual information allows one to quantify the defectiveness of a structure; this could be exploited to automatically evaluate the quality of STEM images, for example by screening for STEM images that are likely to contain structural defects. Applications in active learning\cite{gal2017deep} for materials science are also envisioned, where uncertainty is crucial, for example, when deciding on the inclusion of additional \textendash\ typically computationally costly \textendash\ points in the dataset. \clearpage \section*{Methods} \textbf{Dataset creation.}\ To compute the training set (39$\,$204 data points in total), we include periodic and non-periodic systems.
For the former, no supercells are necessary (as SOAP is supercell-invariant for periodic structures). For the latter, a given structure (or rather its unit cell as obtained from the respective database) is isotropically replicated until at least 100 atoms are contained in the structure. Then this supercell structure and the next two larger isotropic replicas are included. With this choice of system sizes, we focus on slab- and bulk-like systems. Note that the network may not generalize to non-periodic structures outside the chosen supercell range. Practically, if the need to classify much smaller or larger supercells arises, one can include additional replicas in the training set and retrain the model (while for larger supercells it is expected that the network will generalize, see also Supplementary Fig. \ref{fig:supercells_cosine_sim_to_pbc_True}). Retraining is computationally easy due to the fast convergence time. Note that for 2D structures, only in-plane replicas are considered. Elemental solids and binary compounds are selected from the AFLOW library of crystallographic prototypes\cite{mehl2017aflow}. Ternary, quaternary, and 2D materials are taken from the computational materials repository (CMR)\cite{landis2012computational}. Nanotubes are created using the atomic simulation environment (ASE)\cite{larsen2017atomic}, where the chiral numbers (n,m) provide the class labels. We select the chiral indices (n, m) (with the integers n, m taking values in $[0,10]$) for which the diameter lies in the range $[4\,\mathrm{\AA},6\,\mathrm{\AA}]$ (and skip the cases where $n=m=0$ or $n<m$). Then, we increase the length of each nanotube until at least 100 atoms are contained. No additional lengths are included, as it was checked that there is no major change in the SOAP descriptor (via calculating the cosine similarity between descriptors representing nanotubes of different length). For more complex nanotubes (for instance, multi-walled systems), this may change.
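The isotropic-replication rule above (replicate until at least 100 atoms, then keep that supercell and the next two larger replicas) can be sketched in plain numpy; in practice ASE handles the replication, so the helper below is purely illustrative:

```python
import numpy as np

def isotropic_replicas(frac_positions, cell, min_atoms=100, extra=2):
    """Replicate a unit cell isotropically (n x n x n) until it holds at least
    `min_atoms` atoms, then return that supercell and the next `extra` larger ones."""
    n_unit = len(frac_positions)
    n = 1
    while n_unit * n**3 < min_atoms:
        n += 1
    supercells = []
    for k in range(n, n + extra + 1):
        shifts = np.array([[i, j, l] for i in range(k) for j in range(k) for l in range(k)])
        # Cartesian positions of every periodic image in the k x k x k supercell
        pos = (frac_positions[None, :, :] + shifts[:, None, :]).reshape(-1, 3) @ cell
        supercells.append(pos)
    return supercells

# toy example: 4-atom fcc conventional cell (hypothetical lattice constant 3.6 A)
fcc = np.array([[0, 0, 0], [0, .5, .5], [.5, 0, .5], [.5, .5, 0]], dtype=float)
cells = isotropic_replicas(fcc, 3.6 * np.eye(3))
print([len(c) for c in cells])  # [108, 256, 500]: 3x3x3 is the first replica with >= 100 atoms
```

For 2D structures, the same logic would run over in-plane shifts only.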
For the cutoff $R_{\text{C}}$, we select the range $[3.0 \cdot d_{\text{NN}} , 5.0 \cdot d_{\text{NN}}]$ in steps of $0.2 \cdot d_{\text{NN}}$, and for $\sigma$ the values $[0.08 \cdot d_{\text{NN}}, 0.1 \cdot d_{\text{NN}}, 0.12 \cdot d_{\text{NN}}]$. We calculate the SOAP descriptor using the quippy package (\url{https://libatoms.github.io/QUIP}), where we choose $n_{\text{max}}=9$ and $l_{\text{max}}=6$ as limits for the basis-set expansion, resulting in an averaged SOAP vector of length 316. Furthermore, we increase the dataset by varying the so-called extrinsic scaling factor: for a given prototype, the value of $d_{\text{NN}}$ will deviate from the pristine value in the presence of defects. Thus, to inform the network that the computation of $d_{\text{NN}}$ may be erroneous, we scale each pristine prototype not only by $1.0 \cdot d_{\text{NN}}$ but also by $0.95\cdot d_{\text{NN}}$ and $1.05\cdot d_{\text{NN}}$. We term the factors 0.95, 1.0, 1.05 extrinsic scaling factors. One may also see this procedure as a way to increase the training set. How defects (displacements, missing atoms) are introduced to create defective structures is explained in the main text (cf. Table \ref{table:accuracy-comparison-single-crystal}). Note that we use the term missing atoms and not vacancies, since the percentages of removed atoms we consider are well beyond the regimes found in real materials. Also note that displacements as high as 4\% of the nearest neighbor distance might already cause a transition to the liquid phase in some solids. Still, as noted in the Introduction, experimental and computational data often present levels of distortion which are comparable to or even substantially exceed these regimes. We introduce defects for all pristine prototypes included in the training set (specifically, for the supercells \textendash\ for both periodic and non-periodic boundary conditions, while for nanotubes only non-periodic structures are used).
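The random defect creation described above (removing a fraction of atoms and displacing the rest by a fraction of $d_{\text{NN}}$) can be sketched as follows. The function name and the uniform-magnitude displacement model are assumptions for illustration; the exact sampling used in this work is not specified here:

```python
import numpy as np

def make_defective(positions, d_nn, missing=0.05, displacement=0.01, rng=None):
    """Remove a random fraction `missing` of atoms and displace the remaining
    ones by random vectors of length up to `displacement * d_nn` (illustrative)."""
    rng = np.random.default_rng(rng)
    n = len(positions)
    keep = rng.permutation(n)[: n - int(round(missing * n))]
    pos = positions[keep].copy()
    # random directions, uniform magnitudes in [0, displacement * d_nn]
    directions = rng.normal(size=pos.shape)
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    magnitudes = rng.uniform(0.0, displacement * d_nn, size=(len(pos), 1))
    return pos + magnitudes * directions

# toy example: a 5 x 5 x 5 cubic grid with unit spacing (d_nn = 1.0)
grid = np.stack(np.meshgrid(*[np.arange(5.0)] * 3), axis=-1).reshape(-1, 3)
noisy = make_defective(grid, d_nn=1.0, missing=0.2, displacement=0.04, rng=0)
print(len(grid), len(noisy))  # 125 -> 100 atoms after removing 20%
```

Running such a routine 10 times per prototype, as done here, yields the 10 randomized defective copies per defect ratio.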
Since the defects are introduced randomly, we run 10 iterations of defect creation on each prototype. Then we calculate SOAP for all of these defective structures for one specific parameter setting ($R_{\text{C}} = 4.0 \cdot d_{\text{NN}}, \sigma = 0.1 \cdot d_{\text{NN}}$, extrinsic scaling factor $=1.0$), which corresponds to the center of the respective parameter ranges included in the training set. Finally, we obtain 5$\,$880 defective structures for each defect ratio. In total, we compute defective structures for three defect types (missing atoms and displacements, introduced both separately and combined) for eight different defect ratios, giving in total 141$\,$120 defective data points. \textbf{Neural-network architecture and training procedure.}\ At prediction time, we need to fix $T$, the number of forward passes being averaged (cf. Supplementary Methods). We chose $T=10^3$ for all results except Fig. \ref{fig:STEM}c and Supplementary Fig. \ref{fig:assign_most_sim_proto}, for which we increase $T$ to $10^{5}$ in order to obtain stable assignments in case of high uncertainty and very low probability candidates (i.e., $<1.0\%$). Still, the most similar prototypes can be obtained already with $10^3$ iterations. Training is performed using Adam optimization\cite{kingma2014adam}. The multilayer perceptron is implemented in Keras\cite{chollet2015} using Tensorflow\cite{abadi2016tensorflow} as backend. Furthermore, we optimize hyperparameters such as the number of layers using Bayesian optimization, specifically the Tree-structured Parzen estimator (TPE) algorithm as provided by the python library hyperopt\cite{10.5555/3042817.3042832} (cf. Supplementary Methods for more details). The initial training set is split (80/20\% training/validation split of pristine structures, performed using scikit-learn, in stratified fashion, using a random state of 42) and the accuracy on the validation set is used as the performance metric to be optimized via hyperopt (for 50 iterations).
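The averaging over $T$ stochastic forward passes underlies the mutual-information uncertainty used throughout this work. A minimal numpy sketch, assuming the $T$ softmax outputs of the forward passes have already been collected in an array (the actual model is a Keras multilayer perceptron, not reproduced here):

```python
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    """Shannon entropy in nats, with a small eps to avoid log(0)."""
    return -(p * np.log(p + eps)).sum(axis=axis)

def mutual_information(mc_probs):
    """Predictive uncertainty of a Bayesian classifier from Monte Carlo passes.

    mc_probs: array of shape (T, n_classes), one softmax output per pass.
    Returns H(mean prediction) - mean(H(single predictions)); (near) zero when
    all passes agree, large when they disagree."""
    mean_p = mc_probs.mean(axis=0)
    return entropy(mean_p) - entropy(mc_probs).mean()

confident = np.tile([0.98, 0.01, 0.01], (1000, 1))      # all passes agree
split = np.array([[1., 0., 0.], [0., 1., 0.]] * 500)    # passes disagree
print(mutual_information(confident))  # ~0
print(mutual_information(split))      # ~ln 2, maximal two-way disagreement
```

The predicted class is then the argmax of `mean_p`, while the mutual information serves as the uncertainty (and, as argued above, as an AI-based order parameter).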
Fast convergence (followed by oscillations around high accuracy values) or divergence is typically observed, which is why we train for a fixed number of epochs (300) and save only the model with the best performance on the validation set. Training is performed on 1 GPU (Tesla Volta V100 32GB) on the Talos machine-learning cluster in collaboration with the Max Planck Computing and Data Facility (MPCDF). We observe that accuracies around $99\%$ can be reached after few iterations, with individual training runs converging within 20 minutes, depending on model complexity. Practically, strong models are obtained via this procedure, while further fine-tuning can be done to reach perfect accuracies. First, we restrict to one setting of training parameters (see previous section). From a computational-efficiency point of view, this is also the preferred choice, since one has to compute only one descriptor per structure at prediction time. We select $R_\text{C}=4.0\cdot d_{\text{NN}}$ and $\sigma=0.1\cdot d_{\text{NN}}$ as well as an extrinsic scaling factor of 1.0. These choices are at the center of the respective parameter ranges. While the model with the highest validation accuracy (on the whole training set) determined via hyperopt usually gives very strong performance, it is not necessarily the best possible one, especially in terms of generalization ability to defective structures. To find the optimal (i.e., most robust) model, we select some of the best models (e.g., top 15) found via hyperopt and rank them based on their performance on pristine and defective structures (again for one setting of $R_\text{C}, \sigma$). In particular, we restrict to defective points with either $\leq 5\%$ atoms missing or $<1\%$ atomic displacement, which comprises 35$\,$280 data points (six different defect ratios with 5$\,$880 points each). The number of pristine data points is 396.
Using this strategy, we can identify a model with 100\% accuracy on pristine and defective structures, which is reported in the last line of Table \ref{table:accuracy-comparison-single-crystal}. The accuracy on the whole training set comprising 39$\,$204 data points is 99.66\%. We also investigate the performance for higher defect ratios beyond physically reasonable perturbations, since these are typically encountered in atom-probe experiments. In particular, we investigate three defect types (missing atoms, displacements, and both of them) comprising 105$\,$840 data points. The results for missing atoms ($>5\%$) and displacements ($>0.6\%$) can be found in Table \ref{table:accuracy-comparison-single-crystal} and Supplementary Table \ref{table:suppl_high_defects}. Classification accuracies on structures with both missing atoms and displacements are specified in Supplementary Table \ref{table:suppl_vac_and_displ}. Note that training and model selection only on pristine structures can yield robust models, especially if the number of classes is reduced. For instance, training only on binary systems using a pristine set of 4$\,$356 data points (full SOAP parameter range) gives perfect accuracy on both the full training set and 3$\,$960 defective structures (displacements $\leq 0.06\%$ and $\leq5\%$ missing atoms \textendash\ for the setting $R_\text{C}=4.0\cdot d_{\text{NN}}, \sigma = 0.1\cdot d_{\text{NN}}$, extrinsic scaling factor 1.0). Note that in general, if fewer classes are considered (e.g., $\sim$ 20), the training time can be significantly reduced (e.g., to a few minutes). \textbf{Naive Bayes.}\ We employ the implementation provided by scikit-learn (\url{https://scikit-learn.org/stable/modules/naive_bayes.html}), where two assumptions for the likelihood $P(x_i|y)$ of the features $x_i$ given the labels $y$ are tested: a Gaussian distribution (Gaussian Naive Bayes, short GNB) and a multivariate Bernoulli distribution (Bernoulli Naive Bayes, short BNB).
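We rely on scikit-learn's implementations for the baselines; purely for illustration, the Gaussian variant can be sketched in a few lines of numpy (class and variable names are ours, and the small variance-smoothing constant is an assumption):

```python
import numpy as np

class GaussianNaiveBayes:
    """Minimal Gaussian naive Bayes sketch: each feature is modeled per class
    as an independent Gaussian, fitted by per-class mean and variance."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.theta_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        self.var_ = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes_])
        self.prior_ = np.array([(y == c).mean() for c in self.classes_])
        return self

    def predict(self, X):
        # log p(c | x) up to a constant: log p(c) + sum_i log N(x_i; theta_ci, var_ci)
        log_lik = -0.5 * (np.log(2 * np.pi * self.var_[:, None, :])
                          + (X[None, :, :] - self.theta_[:, None, :]) ** 2
                          / self.var_[:, None, :]).sum(axis=-1)
        return self.classes_[np.argmax(np.log(self.prior_)[:, None] + log_lik, axis=0)]

# toy example: two well-separated classes with hypothetical 2D features
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(3, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
clf = GaussianNaiveBayes().fit(X, y)
print((clf.predict(X) == y).mean())  # near-perfect accuracy on this easy toy set
```

The Bernoulli variant replaces the Gaussian likelihood with per-feature Bernoulli probabilities on binarized inputs.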
We observe that the BNB model yields improved results compared to GNB, while both are significantly less accurate than ARISE. \textbf{Unsupervised learning: clustering and dimensionality reduction.}\ HDBSCAN\cite{mcinnes2017accelerated, McInnes2017} is a density-based, hierarchical clustering algorithm (see also the online documentation \url{https://hdbscan.readthedocs.io/en/latest/}). The final (so-called flat) clustering is derived from a hierarchy of clusters. The most influential parameter is the minimum cluster size, which determines the minimum number of data points a cluster has to contain \textendash\ otherwise it will be considered an outlier, i.e., not part of any cluster. Practically, one can test a range of values for the minimum cluster size, in particular very small, intermediate, and large ones \textendash\ for instance, for the results on the synthetic polycrystal in Fig. \ref{fig:synthetic_polycrystal}a, we test the values $\{25, 50, 100, 250, 500, 1\,000\}$. In line with intuition, the number of clusters grows (shrinks) for smaller (larger) values of the minimum cluster size. A coherent picture with 4 clusters and clear boundaries (as indicated by the outliers) arises for minimum cluster size values of around 500, for which we report the results in Fig. \ref{fig:synthetic_polycrystal}d-g and Supplementary Fig. \ref{fig:four_grains_umap_hdbscan_full}. Moreover, we test the influence of the so-called minimum distance parameter in Supplementary Fig. \ref{fig:supp_hdbscan_pos_gb}, where for Fig. \ref{fig:synthetic_polycrystal}e-g, we choose a minimum distance parameter of 0.9. For the nanoparticle data discussed in Fig. \ref{fig:AET}c, we observe that most of the points are considered outliers, since the data contain substantially more distortions.
To address this, we use the soft-clustering feature of HDBSCAN, which allows one to calculate a vector for each data point whose $i$-th component quantifies the probability that the given data point is a member of cluster $i$. Then, we can infer a cluster assignment for points that would normally be considered outliers, by selecting for each point the cluster whose membership probability is maximal (while considering a point an outlier if all probabilities are below a certain threshold, for which we choose 10\,\%). For the minimum cluster size, we find that for values below 10 the number of clusters quickly grows, while it shrinks for larger values. We report the results for a minimum cluster size of 10 and a minimum distance parameter of 0.1 in Fig. \ref{fig:AET}c. To visualize the clustering results, we use the manifold-learning technique UMAP\cite{mcinnes2018umap} (see also the online documentation \url{https://umap-learn.readthedocs.io/en/latest/}). This method uses techniques from Riemannian geometry and algebraic topology to capture both the global and the local structure of a manifold that underlies a given dataset. One of the most important parameters is the number of neighbors that will be considered to construct a topological representation of the data, where a small value takes only the local structure into account, while a large value considers the global relations between data points. We choose values of 500 for Fig. \ref{fig:synthetic_polycrystal}e-g and 50 for Fig. \ref{fig:AET}c, above which the 2D embeddings do not change significantly. \textbf{Synthetic polycrystal generation.}\ The structure in Fig. \ref{fig:synthetic_polycrystal}a is generated via the open-source software Atomsk\cite{hirel2015atomsk}.
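The outlier-rescue rule described above (assign each point to its most probable cluster unless all membership probabilities fall below the 10\,\% threshold) can be sketched as follows; the membership matrix is assumed to come from HDBSCAN's soft-clustering output, and the function name is illustrative:

```python
import numpy as np

def soft_assignments(membership, threshold=0.10):
    """Turn soft cluster-membership vectors (one row per point, e.g. as produced
    by HDBSCAN's soft clustering) into hard labels: each point joins its most
    probable cluster, or is marked as an outlier (-1) if all of its membership
    probabilities fall below `threshold`."""
    membership = np.asarray(membership)
    labels = membership.argmax(axis=1)
    labels[membership.max(axis=1) < threshold] = -1
    return labels

# toy membership matrix for three points and three clusters (hypothetical values)
probs = np.array([
    [0.70, 0.20, 0.10],   # clearly cluster 0
    [0.05, 0.60, 0.35],   # clearly cluster 1
    [0.04, 0.03, 0.08],   # everything below 10% -> outlier
])
print(soft_assignments(probs))  # cluster 0, cluster 1, outlier (-1)
```

With the threshold set to zero, every point would receive a cluster label, recovering the plain argmax assignment.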
\textbf{Strided pattern matching parameters.}\ Two parameters are most important for strided pattern matching analysis: Firstly, the stride defines the resolution and may be chosen arbitrarily small or large to refine or coarsen the visualization of structural features. Note that the sliding allows us to discover smooth transitions, the smoothness being determined by the step size. This way, boundary effects between neighboring local regions are reduced compared to the case of slightly or non-overlapping boxes (e.g., in the simple voxelization case). In particular, a small stride (e.g., 1\,\AA) mitigates boundary effects due to the discretization, which otherwise can influence the final classification and uncertainty maps. SPM is trivially parallel by construction, thus allowing the time-efficient characterization of large systems. Clearly, in a naive implementation, the cost of this procedure scales cubically with the inverse of the stride. Practically, one may choose a large stride (in particular if the structure size would exceed computing capabilities) to obtain low-resolution classification maps, which may suffice to identify regions of interest. Then, one may zoom into these areas and decrease the stride to obtain high-resolution classification maps revealing more intricate features. Secondly, the box size determines the locality, i.e., the amount of structure that is averaged to infer the crystallographic prototype most similar to a given local region. If this parameter is chosen too large, possibly interesting local features may be smoothed out. We recommend using box sizes larger than 10-12\,$\text{\AA}$, as in these cases, the number of contained atoms is typically within the range of the supercells the network is trained on (i.e., at least 100 atoms). The generalization ability to smaller structures depends on the prototype (cf. Supplementary Fig.
\ref{fig:supercells_cosine_sim_to_pbc_True}), and in general, if a smaller box size is desired while using our model, the practical solution is to add smaller supercells to the training set and retrain the network. Note that the shape of the local regions may be chosen to be different from boxes, e.g., spheres or any other shape that fits the application at hand. Moreover, we chose the grid on which the structure is strided to be cubic, while other discretizations are possible. Note that a one-dimensional striding can be applied to rod-like systems such as carbon nanotubes. In this work, we choose the following SPM parameters: For the slab analysis in Fig. \ref{fig:synthetic_polycrystal}a, we choose a $1\,\text{\AA}$ stride and a box size equal to the slab thickness ($16\,\text{\AA}$). For the superalloy model system, we choose the same box size but increase the stride to $3\,\text{\AA}$, since this system is much larger and we want to demonstrate that for such systems, coarser strides still yield reasonable results. For the grain-boundary structure in Fig. \ref{fig:synthetic_polycrystal}k, a stride of $2\,\text{\AA}$ and a box size of $10\,\text{\AA}$ suffice to characterize the system. For the 2D STEM analysis (cf. Fig. \ref{fig:STEM}g), we choose a stride of 4 (in units of pixels, since atoms are reconstructed from images; for typical graphene bond lengths of 1.42\,\AA, the relation 1\,\AA\ $\approx$ 8.5 pixels can be inferred). Moreover, we select a box size of 100 pixels ($\approx 12\,\text{\AA}$). For the quasicrystalline structure in Fig. \ref{fig:STEM}h,i, which has been cropped from the original reference\cite{li2016review} and rescaled to a $1000\times1000$ pixel image (using standard settings in the GIMP image editor), a box size of 100 pixels and a stride of 10 pixels suffice to detect the MnS$_2$ prototype as the dominant pattern. For the nanoparticle analysis, we choose a stride of $1\,\text{\AA}$ and a box size of $12\,\text{\AA}$ for all of Fig.
\ref{fig:AET}, except for the clustering analysis, for which we reduce the stride to $2\,\text{\AA}$ to avoid an overcrowded 2D map. The box size of $16\,\text{\AA}$ (which allowed us to distinguish chemically disordered fcc from ordered L1$_2$, cf. Fig. \ref{fig:synthetic_polycrystal}h-j) yields comparable results (see Supplementary Fig. \ref{fig:supp_nanoparticle_2017_uncertainty}), while finding less L1$_0$ symmetry and more fcc, since a larger amount of structure is averaged. Since L1$_0$ shows special magnetic properties, we are interested in having a larger pool of candidate regions, which is why we choose a box size of $12\,\text{\AA}$ (corresponding to the smallest value such that the average number of atoms in each box is greater than 100). \textbf{Atomic electron tomography.}\ ARISE's predictions are reliable since all the symmetries that typically occur in FePt nanoparticles are included in the training set \textendash\ except the disordered phase, for which the analysis of the Ni-based superalloy model system demonstrated that ARISE is sensitive to chemical ordering. Moreover, a supplementary study reveals that ARISE can analyze structural transformations, in particular similar to the ones taking place in nanoparticles (cf. \nameref{section:supp_note_2} and Supplementary Fig. \ref{fig:Bain_path_resuts}, where the so-called Bain path is investigated). Due to diffusion, the shape of the three nanoparticles (cf. Fig. \ref{fig:AET}c) and thus the number of atoms is changing. Rough alignment of the nanoparticles was checked using point-set registration: Specifically, we employed the coherent point drift algorithm\cite{myronenko2010point} as implemented in the Python package pycpd (\url{https://github.com/siavashk/pycpd}). We extracted only the core of the nanoparticle, which is reported to remain similar during the annealing procedure\cite{zhou2020atomic}.
After applying the algorithm, the remaining mismatch is negligible (3-10$^\circ$ for all three Euler angles). \subsection*{Data availability} The training and test data, trained neural-network model, as well as all relevant geometry files and datasets that are generated in this study have been deposited at Zenodo under accession code \url{https://doi.org/10.5281/zenodo.5526927}. The geometry file of the so-called Pearl structure analyzed in Fig. \ref{fig:synthetic_polycrystal}k-m is available in Edmond (the Open Access Data Repository of the Max Planck Society) under accession code \url{https://edmond.mpdl.mpg.de/imeji/collection/zV4i2cu2bIAI8B}. The experimental HAADF image datasets and trained neural-network models that are employed in this study for reconstructing atomic positions are available under accession codes \url{https://github.com/pycroscopy/AICrystallographer/tree/master/AtomNet} and \url{https://github.com/pycroscopy/AICrystallographer/tree/master/DefectNet}. The HRTEM data used in this study (Fig. \ref{fig:STEM}h) has been adapted (see Methods) from the original publication\cite{li2016review}, where it is published under a Creative Commons Attribution 4.0 International License. The AET data used in this study is available in the Materials Data Bank (MDB) under accession code \url{https://www.materialsdatabank.org/}. \subsection*{Code availability} A Python code library ai4materials containing all the code used in this work is available at \url{https://github.com/angeloziletti/ai4materials}. In more detail, ai4materials provides tools to perform complex analysis of materials science data using machine-learning techniques. Furthermore, functions for pre-processing, saving, and loading of materials science data are provided, with the goal of easing traceability, reproducibility, and prototyping of new models.
An online tutorial to reproduce the main results presented in this work can be found in the NOMAD Analytics-Toolkit at \url{https://analytics-toolkit.nomad-coe.eu/tutorial-ARISE}. \section{Robust recognition and exploratory analysis of crystal structures via Bayesian deep learning - Supplementary Information} \section{Supplementary Methods} \textbf{Isotropic scaling.} To reduce the dependency on lattice parameters, we isotropically scale each prototype according to its nearest-neighbor distance $d_{\text{NN}}$. This way, one degree of freedom is eliminated, implying that all cubic systems are equivalent and thus are correctly classified by construction. To compute $d_{\text{NN}}$, we first calculate the histogram of all nearest-neighbor distances. Since the area of spherical shells grows with the squared radius, we divide the histogram by the squared radial distance. Then, we use the center of the maximally populated bin as the nearest-neighbor distance $d_{\text{NN}}$. Dividing the atomic positions by $d_{\text{NN}}$ yields the final isotropically scaled structure, which is used for calculating the SOAP descriptor. Alternatively, one may use the mean of the nearest-neighbor distances as $d_{\text{NN}}$, which, however, is more prone to defects. In the case of multiple chemical species, we consider all possible substructures formed by the constituting species to calculate the SOAP descriptor (see next paragraph). For each of the substructures, we compute $d_{\text{NN}}$, where we determine the histogram of neighbor distances only from distances between atoms whose chemical species coincide with those of the substructure. For instance, given the substructure ($\alpha$, $\beta$), i.e., the atomic arrangement of atoms with species $\beta$ as seen from the perspective of atoms with species $\alpha$, we consider only $\alpha$-atoms and determine all their distances to $\beta$-atoms.
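The $d_{\text{NN}}$ estimate and the subsequent rescaling can be sketched in a few lines of numpy. This is a simplified version that, for a small finite structure, histograms all pair distances; the number of bins is a hypothetical choice, not the value used in ARISE:

```python
import numpy as np

def isotropic_scale(positions, n_bins=100):
    """Estimate d_NN as the center of the maximally populated bin of the
    pair-distance histogram after a 1/r^2 shell-area correction, then
    scale all positions by d_NN (sketch of the procedure above)."""
    pos = np.asarray(positions, dtype=float)
    # all pairwise distances, excluding self-distances
    diff = pos[:, None, :] - pos[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    dist = dist[dist > 0]
    counts, edges = np.histogram(dist, bins=n_bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    weighted = counts / centers**2       # spherical-shell area grows ~ r^2
    d_nn = centers[np.argmax(weighted)]
    return pos / d_nn, d_nn

# simple cubic 4x4x4 lattice with spacing 2.0 -> d_NN close to 2.0
grid = 2.0 * np.array([[i, j, k] for i in range(4)
                       for j in range(4) for k in range(4)], dtype=float)
scaled, d_nn = isotropic_scale(grid)
```

Note that the $1/r^2$ correction matters here: without it, the more numerous second-neighbor (face-diagonal) pairs of the cubic lattice would dominate the histogram.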
\textbf{SOAP descriptor.} As discussed in the main text, encoding physical requirements we know to be true is crucial for machine-learning applications. For instance, in crystal classification, two atomic structures that differ only by a rotation must have the same classification label. This is not guaranteed if real-space atomic coordinates are used as descriptor (cf. Fig. \ref{fig:single_and_polyc_class_steps}a). As an attempt to fix this, one might include a discrete subset of orientations in the training set, hoping that the model will generalize to unseen rotations. However, there is no theoretical guarantee that the model will learn the rotational symmetry, and if it does not, it will fail to generalize and return different predictions for symmetrically equivalent structures. In contrast, when a rotationally invariant descriptor is employed, only one crystal orientation needs to be included in the training set and the model will generalize to all rotations by construction. This reasoning readily applies to other physical requirements such as translational or permutation invariance (for atoms of the same chemical species). In the following, we provide details on adapting the standard SOAP descriptor such that its number of components is independent of the number of atoms and chemical species. Starting with the simple case of one chemical species, we consider a local atomic environment $\mathscr{X}$, defined by a cutoff region (with radius $R_\text{C}$) around a central atom located at the origin of the reference frame. Each atom within this region is represented by a Gaussian function centered at the atomic position $\mathbf{r}_i$ and with width $\sigma$.
Then, the local atomic density function of $\mathscr{X}$ can be written as\cite{bartok2013representing} \begin{equation} \rho_\mathscr{X}(\mathbf{r})=\sum_{i\in \mathscr{X}} \exp{\left(-\frac{(\mathbf{r}-\mathbf{r}_i)^2}{2\sigma^2}\right)} =\sum_{blm}c_{blm}u_b(r) Y_{lm}(\hat{\mathbf{r}}), \label{equation:local_atomic_density} \end{equation} where in the second step, an expansion in terms of spherical harmonics $Y_{lm}(\hat{\mathbf{r}})$ and a set of radial basis functions $\{ u_b(r)\}$ is performed. One can show that the rotationally invariant power spectrum is given by\cite{bartok2013representing} \begin{equation} p(\mathscr{X})_{b_1b_2l}=\pi\sqrt{\frac{8}{2l+1}}\sum_{m}(c_{b_1 lm})^{\dagger} c_{b_2 lm}. \label{equation:SOAP_power_spectrum} \end{equation} These coefficients can be arranged in a normalized (SOAP) vector $\hat{\mathbf{p}}\ (\mathscr{X})$, describing the local atomic environment $\mathscr{X}$. In total, we obtain as many SOAP vectors as atoms in the structure, which one can average to obtain a materials descriptor independent of the number of atoms $N_{\text{at}}$. Another possibility (the standard setting in the software we use) is to average the coefficients $c_{blm}$ first and then compute Eq. \ref{equation:SOAP_power_spectrum} from the averaged coefficients\cite{mavracic2018similarity}. The cutoff radius $R_\text{C}$ and $\sigma$ (cf. Eq. \ref{equation:local_atomic_density}) are hyperparameters, i.e., supervised learning cannot be used directly to assign values to these parameters, while their specific choice will affect the results. Typically, one would employ cross-validation; here, we take a different route: First, we assess the similarity between SOAP descriptors using the cosine similarity to identify parameter ranges that provide sufficient contrast between the prototypes. Using this experimental approach, we find that values near $\sigma = 0.1 \cdot d_{\text{NN}}$ and $R_{\text{C}} = 4.0 \cdot d_{\text{NN}}$ yield good results.
Then we augment our dataset with SOAP descriptors calculated for different parameter settings. The extension to several chemical species is achieved by considering all possible substructures formed by the constituting atoms: Considering NaCl, we first inspect the lattice of Cl atoms as seen by the Na atoms, which we denote by $(\text{Na}, \text{Cl})$; this means that Na atoms are considered as central atoms in the construction of the local atomic environment, while only Cl atoms are considered as neighbors. A similar construction is made for the remaining substructures $(\text{Na}, \text{Na})$, $(\text{Cl}, \text{Na})$, and $(\text{Cl}, \text{Cl})$, which may be quite similar depending on the atomic structure. For each substructure, we compute the SOAP vectors via Eq. \ref{equation:SOAP_power_spectrum}, obtaining a collection of SOAP vectors. Averaging these gives us four (in the case of NaCl) averaged SOAP vectors. Averaging the latter again yields a materials representation that is independent of the number of atoms and chemical species. Formally, given a structure with $S$ species $\alpha_1, ..., \alpha_S$, we consider all substructures formed by pairs of species $(\alpha_i, \alpha_j)$, $i,j=1,...,S$, resulting in $S^2$ averaged SOAP vectors $\langle\hat{\mathbf{p}}_{\alpha_i\alpha_j}\rangle_{N_{\text{at},\alpha_i}}$, where the bracket represents the average over the number of atoms $N_{\text{at}}$ of species $\alpha_i$. These vectors are averaged over, yielding the final vectorial descriptor $\langle\langle\hat{\mathbf{p}}_{\alpha_i\alpha_j}\rangle\rangle_{\alpha_i\alpha_j}$.
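The power spectrum of Eq. \ref{equation:SOAP_power_spectrum} and the two-level averaging over substructures can be sketched in a few lines of numpy (the coefficient array below is a hypothetical stand-in for the expansion coefficients $c_{blm}$; the array layout is our own choice):

```python
import numpy as np

def power_spectrum(c):
    """Rotationally invariant power spectrum
    p_{b1 b2 l} = pi * sqrt(8/(2l+1)) * sum_m conj(c_{b1 l m}) * c_{b2 l m}.
    `c` has shape (n_b, l_max+1, 2*l_max+1), storing c_{blm} at
    c[b, l, l+m] (for a physical density, entries with |m| > l vanish)."""
    n_b, n_l, _ = c.shape
    p = np.empty((n_b, n_b, n_l), dtype=complex)
    for l in range(n_l):
        cl = c[:, l, :]                  # coefficients for fixed l
        p[:, :, l] = np.pi * np.sqrt(8.0 / (2 * l + 1)) * (cl.conj() @ cl.T)
    return p

def averaged_descriptor(vectors_per_substructure):
    """Two-level average described above: first over the SOAP vectors of
    each substructure (alpha_i, alpha_j), then over all S^2 substructures."""
    per_pair = [np.mean(v, axis=0) for v in vectors_per_substructure]
    return np.mean(per_pair, axis=0)
```

By construction, the power spectrum is Hermitian in $(b_1, b_2)$, i.e., $p_{b_1b_2l} = (p_{b_2b_1l})^*$, which can serve as a quick consistency check of an implementation.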
Note that this construction of SOAP deviates from the previously reported way of treating multiple chemical species in the following way: Usually, for each atom, one constructs the following power spectra\cite{de2016comparing} \begin{equation} p(\mathscr{X})_{b_1b_2l}^{\alpha\beta}=\pi\sqrt{\frac{8}{2l+1}}\sum_{m}(c_{b_1 lm}^{\alpha})^{\dagger} c_{b_2 lm}^\beta, \label{equation:partial_power_spectra} \end{equation} where the coefficients originate from a basis-set expansion as in Eq. \ref{equation:SOAP_power_spectrum}, while the density $\rho$ is constructed separately for each species. For a specific $\alpha$ and $\beta$, the coefficients of Eq. \ref{equation:partial_power_spectra} can be collected into vectors $\mathbf{p}_{\alpha\beta}$. In the case of $\alpha\neq\beta$, cross-correlations, i.e., products of coefficients from different densities, are used to construct the vectors $\mathbf{p}_{\alpha\beta}$; these cross-terms are missing in our version. \textbf{Bayesian deep learning.} As discussed in the main text, one can think of Bayesian neural networks as standard neural networks with distributions placed over the model parameters. This results in probabilistic outputs from which principled uncertainty estimates can be obtained. The major drawback is that training and obtaining predictions from traditional Bayesian neural networks is generally difficult because it requires solving computationally costly high-dimensional integrals. For classification, expensive calculations are required to determine $p(y=c|x, \text{D}_{\text{train}})$, i.e., the probability that class $c$ is assigned, given input $x$ and training data $\text{D}_{\text{train}}$. Then, for a specific input $x$ (in our case the SOAP descriptor), the most likely class $c$, i.e., the one with the largest $p(y=c|x, \text{D}_{\text{train}})$, is the predicted class.
Gal and Ghahramani\cite{gal2016dropout} showed that stochastic regularization techniques such as dropout\cite{hinton2012improving, srivastava2014dropout} can be used to calculate high-quality uncertainty estimates (alongside predictions) at low cost. In dropout, neurons are randomly dropped in each layer before the network is evaluated for a given input. Usually, dropout is only used at training time with the goal of avoiding overfitting by preventing over-specialization of individual units. Keeping this regularization also at test time allows one to quantify the uncertainty. Practically, given a new input, one collects and subsequently aggregates the predictions while using dropout at prediction time. This gives a collection of probabilities denoted as $p(y=c|x, \omega_t)$, which is the probability of predicting class $c$ given the input $x$ at a specific forward-pass $t$, with model parameters $\omega_t$. From this collection of probabilities, one can estimate the actual quantity of interest, $p(y=c|x, \text{D}_{\text{train}})$, by a simple average\cite{gal2016dropout}: \begin{equation} p(y=c|x, \text{D}_{\text{train}}) \approx \frac{1}{T} \sum_{t=1}^{T} p(y=c|x,\omega_t), \label{equation:approximation_class_probability} \end{equation} where $T$ is the number of forward-passes (see Methods section ``Neural network architecture and training procedure'' for details on how we choose this parameter). While the average can be used to infer the class label $c$, additional statistical information, which reflects the predictive uncertainty, is contained in the collected forward-passes, i.e., the probabilities $p(y=c|x,\omega_t)$, which effectively yield a histogram for each class and define, when varying over all possible $c$, a (discrete) probability distribution. For instance, mutual information can be used to quantify the uncertainty from the expressions $p(y=c|x,\omega_t)$.
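The Monte Carlo average of Eq. \ref{equation:approximation_class_probability} and the resulting mutual-information estimate can be sketched in a few lines of numpy (a minimal sketch; array shapes and variable names are ours, not taken from the ARISE code):

```python
import numpy as np

def mc_dropout_stats(probs):
    """Aggregate T stochastic forward-passes.
    `probs` has shape (T, n_classes), probs[t, c] = p(y=c | x, omega_t).
    Returns the predictive distribution (Monte Carlo average over
    forward-passes), the argmax class, and the mutual-information
    uncertainty (predictive entropy minus expected per-pass entropy)."""
    probs = np.asarray(probs, dtype=float)
    p_mean = probs.mean(axis=0)                   # average over forward-passes
    predicted = int(p_mean.argmax())
    eps = 1e-12                                   # guard against log(0)
    predictive_entropy = -np.sum(p_mean * np.log(p_mean + eps))
    expected_entropy = -np.mean(np.sum(probs * np.log(probs + eps), axis=1))
    return p_mean, predicted, predictive_entropy - expected_entropy

# agreeing forward-passes -> mutual information close to zero;
# disagreeing forward-passes -> large mutual information
_, _, mi_low = mc_dropout_stats([[0.9, 0.1], [0.9, 0.1]])
_, _, mi_high = mc_dropout_stats([[1.0, 0.0], [0.0, 1.0]])
```

In the second example the per-pass predictions are individually confident but mutually inconsistent, so the mutual information approaches its maximum of $\log 2$ for two classes.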
Specifically, for a given test point $x$, the mutual information between the predictions and the model posterior $p(\omega|\text{D}_{\text{train}})$ (which captures the most probable parameters given the training data) is defined as\cite{houlsby2011bayesian, gal2016dropout} \begin{equation} \begin{aligned} & \mathbb{I} \left[y, \omega \vert x, \text{D}_{\text{train}}\right] \approx \\ & - \sum_{c} \left( \dfrac{1}{T} \sum_{t} p \left(y=c \vert x, \boldsymbol{\omega}_t \right) \right)\log \left( \dfrac{1}{T} \sum_{t} p \left(y=c \vert x, \boldsymbol{\omega}_t \right) \right) \\ & + \frac{1}{T} \sum_{c}\sum_{t} p \left(y=c \vert x, \boldsymbol{\omega}_t \right) \log p \left(y=c \vert x, \boldsymbol{\omega}_t \right). \label{equation:mutual_information} \end{aligned} \end{equation} \textbf{Hyperparameter optimization.} The Tree-structured Parzen estimator (TPE) algorithm\cite{bergstra2011algorithms, 10.5555/3042817.3042832} is an example of a Bayesian optimization technique. Specifically, one has to define a search space, which can comprise a variety of parameters such as the learning rate or model-size specifics such as the number of layers or neurons. Then, the goal is to minimize a performance metric (in our case, we maximize the accuracy by minimizing its negative). For large search spaces, iterating through each possible combination, i.e., performing a grid search, quickly becomes expensive. Random search is one alternative, while Bayesian methods such as TPE can be more efficient. Approaches such as grid or random search assign uniform probability to each hyperparameter choice, which implies that a long time is spent at settings with low reward. This becomes particularly troublesome if the performance metric is expensive to calculate. In Bayesian methods such as TPE, the objective is replaced by a computationally cheaper surrogate model (for instance, a Gaussian process or a random-forest regressor).
New hyperparameters are selected iteratively in a Bayesian fashion. Specifically, the selection is based on an evaluation function (typically the so-called expected improvement), taking into account the history of hyperparameter selections and thus avoiding corners of the search space with low reward. The search space is chosen in the following way (alongside the chosen hyperopt commands hp.choice or hp.uniform): \begin{itemize} \item Number of layers (2, 3, 4, 5), hp.choice \item Number of neurons in each layer (256 or 512), hp.choice \item Batch size (64 or 128), hp.choice \item Learning rate, range (0.0001, 0.01), hp.uniform \item Dropout rate, range (0.01, 0.05), hp.uniform \end{itemize} \section{Supplementary Notes} \subsection{Supplementary Note 1 \label{section:supp_note_1}} In the following, we provide details on the benchmarking. For spglib, we only include prototypes from AFLOW. The reason for excluding structures from the computational materials repository (CMR) is that we do not always have correct or meaningful labels for all structures. For instance, some 2D materials are specified as P1 in the database, which cannot be used as a correct label. Furthermore, for quaternary chalcogenides, the expected symmetries (as specified in the corresponding reference\cite{pandey2018promising}) cannot be reconstructed, which is most likely due to local optimization effects. Similar observations were made for the ternary perovskites. A more careful choice of precision parameters or additional local optimization may help. Thus, to enable a fair comparison, the benchmarking in the main text only reports results on elemental and binary compounds from AFLOW (where we know the true labels), while the performance on both AFLOW and CMR data is shown in Supplementary Tables \ref{table:accuracy-comparison-single-crystal-with-stars}, \ref{table:suppl_high_defects}, and \ref{table:suppl_vac_and_displ}.
To avoid the impression that spglib is not applicable to ternary, quaternary, and 2D materials, we still provide the label ``96/108'' behind spglib methods in the benchmarking tables. Note that non-periodic structures are excluded for benchmarking (again only in the main table), in particular carbon nanotubes, since these systems cannot be treated by spglib. For the other benchmarking methods, which are common neighbor analysis (CNA, a-CNA), bond angle analysis (BAA), and polyhedral template matching (PTM), we use implementations provided in OVITO\cite{stukowski2009visualization}, where for BAA we apply the Ackland-Jones method. As for spglib, only periodic structures were included. BAA, CNA, and a-CNA all include fcc, bcc, and hcp structures, while PTM contains in addition sc, diamond, hexagonal diamond, graphene, graphitic boron nitride, L1$_0$, L1$_2$, zinc blende, and wurtzite. Each of the frameworks provides one label for each atom, i.e., for a structure with $N$ atoms we obtain $N$ labels. To obtain an accuracy score, we compare these $N$ predictions to $N$ true labels, which correspond to the space group associated with the prototype label (e.g., 194 for hcp). For CNA, we select the standard cutoff (depending on its value, one is able to detect bcc but not hcp and vice versa). Also for BAA (Ackland-Jones) and a-CNA, standard settings are used. For PTM, an RMSD cutoff of 0.1 was used (again the default in OVITO). Note that PTM can also distinguish different sites of the L1$_2$ structure. For simplicity, we did not label the L1$_2$ structure by sites and take this classification into account, but always assign a true label as soon as an atom is assigned to the L1$_2$ class (even if it might not be the correct site). Furthermore, for ARISE, periodic and non-periodic structures are included, while for the benchmarking methods only periodic structures are considered.
While for spglib, translational symmetry is violated by construction, the other methods can in principle be applied to these systems. However, when calculating the accuracy for a given non-periodic structure, we have to choose a label for the boundary atoms. If we select the same label for these atoms as for the central ones (which have a sufficiently larger number of neighbors), these methods will usually predict the class ``None'' and interpreting this as a ``misclassification'' would decrease the total classification accuracy. Therefore, for a fair comparison, we exclude non-periodic structures. \clearpage \begin{table*}[] \begin{tabular}{@{}lrrrrrrrrrrrr@{}} \hline \hline & \multicolumn{1}{c}{Pristine} & \multicolumn{1}{l}{} & \multicolumn{5}{c}{Random displacements ($\delta$)} & \multicolumn{1}{l}{} & \multicolumn{4}{c}{Missing atoms ($\eta$)} \\ \cmidrule(lr){4-8} \cmidrule(l){10-13} & & & \multicolumn{1}{c}{0.1\%} & \multicolumn{1}{c}{0.6\%} & \multicolumn{1}{c}{1\%} & \multicolumn{1}{c}{2\%} & \multicolumn{1}{l}{4\%} & \multicolumn{1}{l}{} & \multicolumn{1}{c}{1\%} & \multicolumn{1}{c}{5\%} & \multicolumn{1}{c}{10\%} & \multicolumn{1}{c}{20\%}\\ \cmidrule(r){1-8} \cmidrule(l){9-13} Spglib (loose) & 100.00&& 100.00 & 100.00 & 95.26 & 0.20 & 0.00 && 11.23 & 0.00 & 0.00 & 0.00 \\ & && & & & & && & & & \\ Spglib* (loose) & 67.71 && 67.71 & 67.71 & 65.83 & 14.51 & 0.00 && 15.73 & 0.03 & 0.00 & 0.00 \\ & && & & & & && & & & \\ Spglib (tight) & 100.00 && 0.00 & 0.00 & 0.00 & 0.00 & 0.00 && 11.23 & 0.00 & 0.00 & 0.00 \\ & && & & & & && & & & \\ Spglib* (tight) & 83.33 && 0.00 & 0.00 & 0.00 & 0.00 & 0.00 && 17.53 & 0.00 & 0.00 & 0.00 \\ & && & & & & && & & & \\ PTM & 100.00 && 100.00 & 100.00 & 100.00 & 100.00 & 100.00 && 88.67 & 51.76 & 25.93 & 6.24 \\ & && & & & & && & & & \\ PTM* & 8.78 && 11.37 & 11.37 & 11.37 & 11.37 & 11.37 && 10.08 & 5.90 & 2.96 & 0.71 \\ & && & & & & && & & & \\ CNA & 66.14 && 62.81 & 62.81 & 54.55 & 32.34 & 31.41 && 55.86 & 32.50 & 15.75 & 
3.07 \\ & && & & & & && & & & \\ CNA* & 1.44 && 1.62 & 1.62 & 1.40 & 0.83 & 0.81 && 1.44 & 0.84 & 0.41 & 0.08 \\ & && & & & & && & & & \\ a-CNA & 100.0 && 100.0 & 100.0 & 100.0 & 100.0 & 100.0 && 89.25 & 52.81 & 25.92 & 5.37 \\ & && & & & & && & & & \\ a-CNA* & 2.49 && 3.08 & 3.08 & 3.08 & 3.08 & 3.08 && 2.75 & 1.64 & 0.81 & 0.17 \\ & && & & & & && & & & \\ BAA & 100.0 && 100.0 & 100.0 & 100.0 & 100.0 & 97.85 && 99.71 & 88.78 & 65.21 & 25.38 \\ & && & & & & && & & & \\ BAA* & 2.49 && 3.08 & 3.08 & 3.08 & 3.08 & 3.03 && 3.08 & 2.74 & 2.02 & 0.81 \\ & && & & & & && & & & \\ GNB &62.63 && 56.50 & 55.94 & 55.56& 54.98 & 52.72 && 54.51 &52.94 & 52.67 & 52.09 \\ & && & & & & && & & & \\ BNB &75.76 && 65.56 & 65.19 & 63.61& 61.58 & 56.58 && 65.49 &64.00 & 62.43 & 60.48 \\ & && & & & & && & & & \\ \textbf{ARISE} (this work) &100.00 && 100.00 & 100.00 & 100.00& 99.86 & 99.29 && 100.00 &100.00 & 100.00 & 99.85 \\ \hline \hline \end{tabular} \caption{Accuracy in identifying the parent class of defective crystal structures. Two lines are shown for each of the methods used for benchmarking (spglib, PTM, CNA, a-CNA, BAA): In rows without stars, the accuracy is calculated only for structures for which the respective method was designed for; for instance, spglib can be applied to every structure of Fig. \ref{fig:single_and_polyc_class_steps}e except the 12 nanotubes (note that we only include prototypes from AFLOW for spglib, cf. Supplementary Note 1). This is also true for the other methods, while additional structures have to be removed for instance for CNA, a-CNA, and BAA as they cannot classify simple cubic and diamond structures. In starred rows, all 108 classes summarized in Fig. \ref{fig:single_and_polyc_class_steps}e are included, leading to the strong decrease in performance. In contrast, the neural network approach proposed here can be applied to all classes, and thus the whole dataset was used. 
Moreover, we compare ARISE to a standard Bayesian approach: Naive Bayes (NB). We consider two different variants of NB: Bernoulli NB (BNB) and Gaussian NB (GNB), where the whole dataset was used \textendash\ see the Methods section for more details. ARISE is overwhelmingly more accurate than both NB methods, for both pristine and defective structures. } \label{table:accuracy-comparison-single-crystal-with-stars} \end{table*} \begin{table*}[] \centering \begin{tabular}{@{}lrrrrrrrrrrrr@{}} \hline \hline & \multicolumn{1}{l}{} & \multicolumn{2}{c}{Random displacements ($\delta$)} & \multicolumn{1}{l}{} & \multicolumn{2}{c}{Missing atoms ($\eta$)} \\ \cmidrule(lr){3-4} \cmidrule(l){5-7} & & \multicolumn{1}{c}{7\%} & \multicolumn{1}{r}{10\%} & \multicolumn{1}{l}{} & \multicolumn{1}{c}{25\%} & \multicolumn{1}{c}{30\%} \\ \cmidrule(r){1-4} \cmidrule(l){5-7} Spglib (loose) && 0.00 & 0.00 && 0.00 & 0.00 \\ && & && & \\ Spglib* (loose) && 0.00 & 0.00 && 0.00 & 0.00 \\ && & && & \\ Spglib (tight) && 0.00 & 0.00 && 0.00 & 0.00 \\ && & && & \\ Spglib* (tight) && 0.00 & 0.00 && 0.00 & 0.00 \\ && & && & \\ PTM && 100.00 & 94.34 && 3.33 & 1.72 \\ && & && & \\ PTM* && 11.37 & 10.71 && 0.38 & 0.19 \\ && & && & \\ CNA && 31.41 & 24.20 && 1.38 & 0.55 \\ && & && & \\ CNA* && 0.81 & 0.62 && 0.04 & 0.01 \\ && & && & \\ a-CNA && 99.99 & 94.55 && 2.60 & 1.03 \\ && & && & \\ a-CNA* && 3.08 & 2.90 && 0.08 & 0.03 \\ && & && & \\ BAA && 87.79 & 69.68 && 14.25 & 7.35 \\ && & && & \\ BAA* && 2.77 & 2.22 && 0.49 & 0.30 \\ && & && & \\ GNB && 50.73 & 48.62 && 51.33 &50.32 \\ && & && & \\ BNB && 48.81 & 43.28 && 59.78 &58.18 \\ && & && & \\ \textbf{ARISE} (this work) && 97.82 & 94.56 && 99.86 &99.76 \\ \hline \hline \end{tabular} \caption{Accuracy in identifying the parent class of defective crystal structures for high displacements (percentage $\delta$) and missing atoms (percentage $\eta$). 
} \label{table:suppl_high_defects} \end{table*} \begin{table*}[] \begin{tabular}{@{}lrrrrrrrrrrrr@{}} \hline \hline & \multicolumn{1}{l}{} && \multicolumn{8}{c}{Missing atoms and displacements ($\eta$, $\delta$)} \\ \cmidrule(lr){3-9} \cmidrule(l){9-12} & \multicolumn{1}{c}{(1\%, 0.1\%)} & \multicolumn{1}{c}{(5\%, 0.6\%)} & \multicolumn{1}{c}{(10\%, 1\%)} & \multicolumn{1}{c}{(15\%,2\%)} & \multicolumn{1}{l}{(20\%,4\%)} & \multicolumn{1}{l}{(25\%,7\%)} & \multicolumn{1}{l}{(30\%,10\%)} \\ \cmidrule(r){1-8} Spglib (loose) & 11.32 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\ & & & & & & & \\ Spglib* (loose) & 15.76 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\ & & & & & & & \\ Spglib (tight) & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\ & & & & & & & \\ Spglib* (tight) & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\ & & & & & & & \\ PTM & 88.68 & 51.78 & 25.60 & 12.75 & 6.41 & 3.19 & 1.46 \\ & & & & & & & \\ PTM* & 10.08 & 5.90 & 2.92 & 1.45 & 0.73 & 0.36 & 0.16 \\ & & & & & & & \\ CNA & 55.77 & 31.95 & 13.83 & 4.41 & 2.03 & 0.79 & 0.19 \\ & & & & & & & \\ CNA* & 1.44 & 0.82 & 0.36 & 0.11 & 0.05 & 0.02 & 0.00 \\ & & & & & & & \\ a-CNA & 89.21 & 52.36 & 26.01 & 12.13 & 6.07 & 2.40 & 0.97 \\ & & & & & & & \\ a-CNA* & 2.75 & 1.62 & 0.81 & 0.38 & 0.19 & 0.08 & 0.03 \\ & & & & & & & \\ BAA & 99.72 & 88.98 & 65.17 & 42.62 & 25.95 & 15.58 & 6.63 \\ & & & & & & & \\ BAA* & 3.07 & 2.75 & 2.02 & 1.34 & 0.82 & 0.50 & 0.22 \\ & & & & & & & \\ GNB & 54.92 & 52.86 & 52.11 & 50.70 & 49.92 & 47.94 & 42.65 \\ & & & & & & & \\ BNB & 64.44 & 61.79 & 58.86 & 56.26 & 52.31 & 45.03 & 40.14 \\ & & & & & & & \\ \textbf{ARISE} (this work) & 100.00 & 100.00 & 100.00 & 99.88 & 99.29 & 97.31 & 92.50 \\ \hline \hline \end{tabular} \caption{Accuracy in identifying the parent class of defective crystal structures, with both missing atoms (percentage $\eta$) and displacements (percentage $\delta$) introduced at the same time. 
The results show that ARISE is also robust for highly defective structures where displacements and missing atoms are present at the same time. } \label{table:suppl_vac_and_displ} \end{table*} \begin{figure*}[!htbp] \begin{center} \includegraphics[width=\textwidth]{bain_path_results_other_format.pdf} \end{center} \caption{Application of ARISE for the characterization of the Bain path. \textbf{a} Structures occurring in the Bain path, obtained by varying $c/a$; increasing $c/a$ from 1.0 (bcc) leads to transitions to tetragonal phases and finally to the fcc structure ($c/a=\sqrt{2}$). \textbf{b} Classification (argmax predictions, left) and uncertainty (mutual information, right) for geometries in the range $c,a \in [3.0\,\text{\AA}, 6.0\,\text{\AA}]$. Geometries included in the training set are marked by stars in \textbf{b,c}. As we isotropically scale the structures, geometries with constant $c/a$ are equivalent, which is indicated by solid lines.} \label{fig:Bain_path_resuts} \end{figure*} \subsection{Supplementary Note 2 \label{section:supp_note_2}} The Bain transformation path describes structural transitions between bcc and fcc symmetries via intermediate tetragonal phases\cite{bain1924nature} of body-centered \textendash\ or equivalently \textendash\ face-centered tetragonal symmetry. Originally investigated for iron\cite{bain1924nature}, the Bain path is relevant in thermo-mechanical processing \textendash\ a central aspect for steel properties\cite{zhao1995continuous} \textendash\ as it serves as a model for temperature-induced transitions between fcc $(\gamma)$ and bcc ($\alpha$) iron\cite{grimvall2012lattice}. The Bain path is also crucial for understanding properties of epitaxial films\cite{scheffler_bain_path, 2nd_most_prominent_bain_path_paper} or metal nanowires\cite{bain_path_dft_metal_nanowires}.
Practically, the structures constituting a Bain path can be obtained by varying the ratio $c/a$ between lattice parameters $a$ and $c$ of a tetragonal structure (cf. Supplementary Fig. \ref{fig:Bain_path_resuts}a); $c/a=1$ corresponds to a cubic structure. We generate tetragonal geometries for lattice parameters $a,c$ taking values in $[3.0\,\text{\AA}, 6.0\,\text{\AA}]$ with steps of 0.05\,\AA, resulting in 3721 crystal structures. These structures are then classified with ARISE, and the results are depicted via classification and uncertainty maps in Supplementary Fig. \ref{fig:Bain_path_resuts}b and c, respectively. Each point in these maps corresponds to a prediction for a specific geometry. We include in the training set fcc, bcc, and tetragonal geometries with structural parameters known experimentally; they are shown as stars in Supplementary Fig. \ref{fig:Bain_path_resuts}b. Specifically, the lattice parameters $(a, c, c/a)$ are $(3.155\,\text{\AA}, 3.155\,\text{\AA}, 1.0)$ for the bcc\cite{davey1925lattice} and $(3.615\,\text{\AA}, 5.112\,\text{\AA}, \sqrt{2})$ for the fcc prototype\cite{straumanis1969lattice}, while two tetragonal structures (assigned one common label ``tetragonal'') are included with $(3.253\,\text{\AA}, 4.946\,\text{\AA}, 1.521)$ in the case of In\cite{deshpande1969anisotropic} and $(3.932\,\text{\AA}, 3.238\,\text{\AA}, 0.824)$ for $\alpha$-Pa\cite{zachariasen1959crystal}. We isotropically scale every geometry to remove one degree of freedom (see the Supplementary Methods section), so that all possible cubic lattices are effectively equivalent; this allows the model to generalize by construction to all cubic lattices regardless of the lattice parameter. The same holds for tetragonal structures (i.e., two degrees of freedom) with constant $c/a$ ratio. As a visual aid, we mark lines of constant $c/a$ in Supplementary Fig. \ref{fig:Bain_path_resuts}b-c starting from the four structures included in the training set.
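For concreteness, the geometry grid described above can be enumerated with a few lines of Python. This is a minimal sketch, assuming a conventional body-centered tetragonal cell with one basis atom at the corner and one at the body center; the function name and data layout are illustrative and not part of the original workflow:

```python
import numpy as np

def make_bain_grid(a_min=3.0, a_max=6.0, step=0.05):
    """Enumerate (a, c) lattice parameters of a conventional body-centered
    tetragonal cell on a regular grid; c/a = 1 gives bcc, c/a = sqrt(2) fcc."""
    values = np.arange(a_min, a_max + step / 2, step)  # 3.0, 3.05, ..., 6.0
    grid = []
    for a in values:
        for c in values:
            grid.append({
                "cell": np.diag([a, a, c]),            # tetragonal cell vectors
                "frac": np.array([[0.0, 0.0, 0.0],     # corner atom
                                  [0.5, 0.5, 0.5]]),   # body-center atom
                "c_over_a": c / a,
            })
    return grid

grid = make_bain_grid()
print(len(grid))  # 61 x 61 = 3721 geometries, matching the text
```

Isotropic rescaling, as described above, would then remove the remaining overall length scale before descriptor computation.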
Note that any path connecting the constant $c/a$ ratios corresponding to fcc and bcc structures constitutes a Bain path. To obtain a classification label, we select the class with the highest classification probability through a so-called argmax operation (i.e., the label $c$ maximizing Eq. \ref{equation:approximation_class_probability}). These predictions are shown in Supplementary Fig. \ref{fig:Bain_path_resuts}. The model is able to detect the bcc and fcc phases in the expected areas, while all prototypes other than fcc, bcc, or tetragonal are correctly labeled as ``Other''. We point out that only four structures \textendash\ corresponding to the points in the plot marked by the four stars \textendash\ are included in the training set, while all other 3717 structures are model (test) predictions. We can also observe that the model correctly predicts the presence of a tetragonal phase between fcc (yellow band) and bcc (green band), even though no tetragonal structures from this region are included in the training set. This transition is smooth, only interrupted by small areas for which other, low-symmetry prototypes are assigned, but with high uncertainty, as quantified by the mutual information, cf. Supplementary Fig. \ref{fig:Bain_path_resuts}c. We provide the classification probabilities of all assigned prototypes in Supplementary Fig. \ref{fig:suppl_bain_path}. In general, increased uncertainty appears at transitions between the assignments of different prototypes. We also note that the classification probabilities evolve smoothly at the transitions between prototypes (cf. Supplementary Fig. \ref{fig:suppl_bain_path}). These results represent a first indication that the network has learned physically meaningful representations.
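The argmax rule and the mutual-information uncertainty referred to above can be sketched as follows, assuming (as in the Methods) that the class probabilities are approximated by averaging $T$ stochastic forward passes of the Bayesian network; the function name is illustrative:

```python
import numpy as np

def argmax_and_mutual_information(prob_samples, eps=1e-12):
    """prob_samples: array of shape (T, n_classes) with softmax outputs
    from T stochastic forward passes (e.g., Monte Carlo dropout).
    Returns the argmax label of the predictive mean and the mutual
    information used as the uncertainty estimate."""
    p_mean = prob_samples.mean(axis=0)
    label = int(np.argmax(p_mean))                        # argmax prediction
    h_mean = -np.sum(p_mean * np.log(p_mean + eps))       # predictive entropy
    h_exp = np.mean(-np.sum(prob_samples * np.log(prob_samples + eps),
                            axis=1))                      # expected entropy
    return label, h_mean - h_exp                          # mutual information

# Two confident, agreeing samples -> label 0, near-zero mutual information.
samples = np.array([[0.98, 0.01, 0.01],
                    [0.97, 0.02, 0.01]])
label, mi = argmax_and_mutual_information(samples)
```

When the stochastic passes disagree (e.g., at transitions between prototypes), the expected entropy stays low while the entropy of the mean grows, so the mutual information increases, matching the behavior seen along the Bain path.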
Surprisingly, for large or small $c/a$ ratios, i.e., points far outside the training set, other (low-symmetry) phases, such as base-centered orthorhombic molecular iodine or face-centered orthorhombic $\gamma$-Pu, appear with small uncertainty. While it may be desirable to avoid overconfident predictions far away from the training set, the assignments could actually be physically justified given the similarities between tetragonal and orthorhombic lattices, the most evident being that all angles in both crystal systems are $90^\circ$. We note that the transition to these prototypes is, also in this case, encompassed by regions of high uncertainty, in agreement with physical intuition. \begin{figure*}[!htb] \centering \includegraphics[width=\textwidth]{assigning_most_sim_proto.pdf} \caption{ Three examples for assigning the most similar prototype(s) (right panel) to structures for which the corresponding structural class is not contained in the training set of ARISE (left panel). For each prediction, space group and classification probabilities of the top predictions are specified together with an uncertainty estimate (mutual information). The space groups are returned via spglib, where we choose the highest symmetry that is found for all combinations of precision parameters $(0.1, 0.01, 0.001, 0.0001)\,[\text{\AA}]$ and angle tolerances $(1, 2, 3, 4, 5)\,[^\circ]$. } \label{fig:assign_most_sim_proto} \end{figure*} \subsection{Supplementary Note 3 \label{section:supp_note_3}} In the following, we investigate scenarios in which the model is forced to fail, i.e., we analyze ARISE's out-of-sample predictions. To assess the physical content learned by the network, we investigate its predictions \textendash\ and thus its generalization ability \textendash\ on structures corresponding to prototypes not included in the training set.
This is of particular relevance if one wants to use ARISE's predictions for applications such as the screening of large databases, or to create low-dimensional maps for a vast collection of materials\cite{isayev2015materials}. Given an unknown structure, the network needs to decide \textendash\ among the classes it has been trained on \textendash\ which one is the most suitable. It will assign the most similar prototypes and quantify the similarity via classification probabilities, providing a ranking of candidate prototypes. The uncertainty in the assignment, as quantified by mutual information, measures the reliability of the prediction. Note that assigning the most similar prototype(s) to a given structure among 108 possible classes (and quantifying the similarity) is a very complicated task even for trained materials scientists, in particular in the case of complex periodic and possibly defective three-dimensional structures. We consider three examples (cf. Supplementary Fig. \ref{fig:assign_most_sim_proto} left): fluorite and tungsten carbide (from AFLOW), for which the correct labels are known, and one structure from the NOMAD encyclopedia (see the last paragraph of this section for details on the provenance), for which the assigned space group is 1, i.e., no symmetry can be identified (via spglib). In all three cases there is no prototype in the dataset which represents a match for any of these structures. This is on purpose: the network will ``fail'' by construction since the correct class is not included in the possible classes the network knows (and needs to choose from). Analyzing how the network fails will give us insight into the physical content of the learned model. This test can also be viewed as discovering ``unexpected similarities'' across materials of different chemical composition and dimensionality. Following the pipeline for single-crystal classification summarized in Fig.
\ref{fig:single_and_polyc_class_steps}, we compute classification probabilities and mutual information, yielding the assignments shown in Supplementary Fig. \ref{fig:assign_most_sim_proto} right. To rationalize the predictions shown in Supplementary Fig. \ref{fig:assign_most_sim_proto} from a physical standpoint, we inspect the substructures formed by the chemical species in both original and assigned structures. This is motivated by our choice of materials representation as an averaged SOAP descriptor of substructures (see Supplementary Methods for more details). The two most similar prototypes to fluorite (Ca$\text{F}_2$) are CsCl and NaCl, both consisting of two inter-penetrating lattices of the same type, two sc lattices for CsCl and two fcc lattices for NaCl. Fluorite contains both sc (F atoms) and fcc (Ca atoms) sublattices, which is likely why CsCl and NaCl are assigned, together with a ternary halide tetragonal perovskite, also containing sc symmetry (via Cs and Sn atoms, respectively). For tungsten carbide (WC), W and C form two hexagonal lattices. In the unit cell of the most similar prototype, FeSi, 60$^\circ$ angles are formed within the substructures of each species (see dashed lines in the unit cell), thus justifying this classification. Furthermore, two quaternary chalcogenides appear as further candidates. This similarity \textendash\ hard to assess by eye \textendash\ originates from the presence of angles close to 60$^\circ$ for S atoms (yellow) in both $\text{Cu}_2\text{CdGeS}_4$ and $\text{Cu}_2\text{ZnSiS}_4$ (marked in the figure for $\text{Cu}_2\text{CdGeS}_4$). Also note that these two quaternary prototypes, $\text{Cu}_2\text{ZnSiS}_4$ and $\text{Cu}_2\text{CdGeS}_4$, are related by substituting Zn and Si with the isoelectronic elements Cd and Ge, which implies that these structures are expected to be similar. This explains why they both appear as candidates for structures similar to tungsten carbide.
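The substructure comparison used in this rationalization can be mimicked with a small sketch: per-species averaging of atomic feature vectors followed by a cosine comparison. Random vectors stand in for the actual SOAP features here, and all names are illustrative:

```python
import numpy as np

def averaged_descriptor(per_atom_features, species, target):
    """Average per-atom feature vectors over all atoms of one species,
    mimicking the per-substructure averaging of the SOAP descriptor."""
    mask = np.array(species) == target
    return per_atom_features[mask].mean(axis=0)

def cosine_similarity(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy example: 4 atoms with 8-dimensional stand-in feature vectors.
rng = np.random.default_rng(0)
feats = rng.random((4, 8))
species = ["Ca", "F", "F", "Ca"]
d_ca = averaged_descriptor(feats, species, "Ca")
d_f = averaged_descriptor(feats, species, "F")
sim = cosine_similarity(d_ca, d_f)   # in (0, 1] for non-negative features
```

In the actual pipeline, the per-atom features would be SOAP vectors computed for each chemical species separately, as described in the Supplementary Methods.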
Finally, for the compound $\text{Tm}_{\text{23}}\text{Se}_{\text{32}}$ from the NOMAD encyclopedia, the model identifies NaCl as the most similar prototype. Looking at the structure from different angles, especially from the top (cf. Supplementary Fig. \ref{fig:assign_most_sim_proto}, left part), a similarity to cubic systems can be identified. The classification method's robustness to missing atoms makes the apparent gaps in the side view negligible, thus rationalizing the NaCl assignment. Regarding the uncertainty quantification (via mutual information), increased uncertainties appear for fluorite and tungsten carbide, since besides the top prediction with more than $70\%$ classification probability, other prototypes are possible candidates for the most similar prototype. For the NOMAD structure $\text{Tm}_{\text{23}}\text{Se}_{\text{32}}$, the network is quite confident, most likely because no other good candidates are present among the binaries included in the 108-class dataset. These results show that the model \textendash\ even when forced to fail by construction \textendash\ returns (highly non-trivial) physically meaningful predictions. This makes ARISE particularly attractive for screening large and structurally diverse databases, in particular for assessing structures for which no symmetry label can be obtained with any of the current state-of-the-art methods. In addition to the analysis in Supplementary Fig. \ref{fig:assign_most_sim_proto}, we present some results for further out-of-sample structures: \begin{itemize} \item Boron nitride (bulk, graphitic, \url{http://aflowlib.org/CrystalDatabase/AB_hP4_194_c_d.html}) classified as hexagonal graphite (probability 63.32\%), mut.inf. 0.7278 \item Cementite (\url{http://aflowlib.org/CrystalDatabase/AB3_oP16_62_c_cd.html}) classified as MnP with probability 49.14\%, mut.inf.
0.7176 \item $\text{CuTi}_3$ (L60 Strukturbericht, \url{http://aflowlib.org/CrystalDatabase/AB3_tP4_123_a_ce.html}) classified as bct $\alpha$-Pa with probability 78.41\%, mut.inf. 0.8539 \item Benzene (\url{http://aflowlib.org/CrystalDatabase/AB_oP48_61_3c_3c.html}) classified as nanotube (chiral indices (n,m)=(5,2)) with probability 68.48\%, mut.inf. 0.6249 \item Rutile (\url{http://aflowlib.org/CrystalDatabase/A2B_tP6_136_f_a.html}) classified as orthorhombic halide perovskite with probability 44.62\%, mut.inf. 0.8733 \item NbO (\url{http://aflowlib.org/CrystalDatabase/AB_cP6_221_c_d.html}), which is NaCl with 25\% ordered vacancies on both the Na and Cl sites, classified as NaCl with probability 99.96\%, mut.inf. 0.0027 \item Moissanite 4H SiC (\url{http://aflowlib.org/CrystalDatabase/AB_hP8_186_ab_ab.html}) classified as wurtzite with probability 99.74\%, mut.inf. 0.0166 \item $\text{K}_2\text{PtCl}_6$ (\url{http://aflowlib.org/CrystalDatabase/A6B2C_cF36_225_e_c_a.html}) classified as NaCl with probability 61.4\%, mut.inf. 0.5402 \end{itemize} The structure taken from the NOMAD encyclopedia has the ID mp-684691 in Materials Project, where further details can be found, e.g., on the experimental origin. \begin{figure*}[!htbp] \includegraphics[width=\textwidth]{bain_path_supplementary.pdf} \caption{Bain path \textendash\ all prototypes with increased classification probability: \textbf{a} Mutual information plot showing the spots of high and low uncertainty for different geometries. \textbf{b} Classification probability maps corresponding to bcc, fcc and two tetragonal phases. \textbf{c} Representative selection of other prototypes showing non-zero classification probabilities. } \label{fig:suppl_bain_path} \end{figure*} \begin{figure*}[!htbp] \includegraphics[width=\textwidth]{four_grains_defective_supplementary.pdf} \caption{Investigation of distorted synthetic polycrystals. \textbf{a} Analysis of the defective version of the mono-species polycrystal shown in Fig.
\ref{fig:synthetic_polycrystal}a. \textbf{b} Analysis of the defective version of the superalloy structure shown in Fig. \ref{fig:synthetic_polycrystal}h. \label{fig:supp_four_grains_defective}} \end{figure*} \begin{figure*}[!htb] \centering \includegraphics[width=\textwidth]{supercells_cosine_sim_to_pbc_True.pdf} \caption{Cosine similarity plots for elemental, binary, ternary, and quaternary compounds as well as 2D materials (for SOAP settings $R_\text{C} = 4.0 \cdot d_{\text{NN}}$, $\sigma = 0.1 \cdot d_{\text{NN}}$, and $\text{exsf} = 1.0$, corresponding to the center of the parameter range used in the training set). Each line corresponds to a particular prototype. The x-axis corresponds to three different (non-periodic) supercells, where supercell ``0'' stands for the smallest isotropic supercell (for instance $4\times4\times4$ repetitions) for which at least 100 atoms are obtained. Supercells ``1'' and ``2'' correspond to the next two larger isotropic replicas (e.g., $5\times5\times5$ and $6\times6\times6$). The y-axis corresponds to the cosine similarity of the respective supercell to the periodic structure, i.e., the case of infinite replicas. One can see a continuous increase of similarity with larger supercell size, where for the largest supercell, the similarity is greater than 0.8 for all prototypes. Thus, it is to be expected that system sizes larger than those included in the training set can be correctly classified by ARISE. For smaller systems, however, the generalization ability will depend on the prototype. Practically, one can include smaller supercells in the training set, which is not a major problem due to the fast convergence time.
} \label{fig:supercells_cosine_sim_to_pbc_True} \end{figure*} \begin{figure*}[!htb] \centering \includegraphics[width=0.9\textwidth]{four_grains_umap_hdbscan_full.pdf} \caption{Unsupervised analysis analogous to Fig. \ref{fig:synthetic_polycrystal}d-g, for all layers (before the ReLU or rather the softmax function) with a minimum distance of 0.1 and a number of neighbors of 500.} \label{fig:four_grains_umap_hdbscan_full} \end{figure*} \begin{figure*}[!htb] \centering \includegraphics[width=\textwidth]{four_grains_suppl.pdf} \caption{Probability maps of the most important prototypes for both the pristine (\textbf{a}) and defective (\textbf{b}) versions of the mono-species polycrystal in Fig. \ref{fig:synthetic_polycrystal}a. } \label{fig:suppl_four_grains} \end{figure*} \begin{figure*}[!htb] \centering \includegraphics[width=\textwidth]{elemental_solid_polycrystal_low_resolution.pdf} \caption{Mono-species elemental polycrystal investigation via strided pattern matching using lower resolution (stride of 3.0\,\AA\ in both $x$ and $y$ directions, as opposed to 1.0\,\AA\ as in Fig. \ref{fig:synthetic_polycrystal}). Choosing the stride is a trade-off between computation time and resolution. For instance, at the grain boundary between diamond and hcp, the transition from diamond to hexagonal diamond to hcp (cf. Supplementary Fig. \ref{fig:suppl_four_grains}) is recognized in Fig. \ref{fig:synthetic_polycrystal}b, while being obscured in the presented low-resolution pictures. } \label{fig:low_resolution_four_grains} \end{figure*} \begin{figure*}[!htbp] \includegraphics[width=\textwidth]{min_dist_supplementary.pdf} \caption{ Connection between UMAP embedding and real space. This figure confirms the observation that ARISE's representations of different spatial regions (crystalline regions but in particular grain boundaries, here: transitions between fcc, bcc, hcp, and diamond) are mapped to different regions in the UMAP projection.
\textbf{a} Influence of the $\text{min}\textunderscore\text{dist}$ parameter in the UMAP projection (number of neighbors fixed to 500). In line with intuition, for larger $\text{min}\textunderscore\text{dist}$, points appear more spread out. In particular, connected subregions appear in the clusters, whose connection to real space is investigated in \textbf{b}: the connected strings of points actually correspond to transitions within and between crystalline regions. This is demonstrated by traversing a circle around the center of the real-space structure (\textbf{c}) and coloring the embedded points according to the angle. \label{fig:supp_hdbscan_pos_gb}} \end{figure*} \begin{figure*}[!htbp] \includegraphics[width=\textwidth]{nanoparticle_2017_supplementary.pdf} \caption{ Comparison of crystal maps (slice through center, most important prototypes and mutual information) for AET nanoparticle data\cite{yang2017deciphering} for two different box sizes. Dashed lines indicate the crystal boundaries in all 2D maps. ARISE allows the detection of the tetragonally distorted fcc prototype (In). For larger box sizes, the fcc assignment increases in the center and also the L1$_2$ classification probability rises. While the central slice of the L1$_0$ prototype for a box size of 16\,\AA\ shows only a weak signal, a slice slightly below reveals higher probability (see bottom, isolated slice), i.e., ARISE does not overlook this physically relevant phase. \label{fig:supp_nanoparticle_2017_uncertainty}} \end{figure*} \begin{figure*}[!htb] \centering \includegraphics[width=\textwidth]{AET_annealing_supplemetary_figure.pdf} \caption{Quantitative study of the mutual information distribution for different annealing times. \textbf{a} Histogram of mutual information values for each annealing time (where the corresponding histograms are normalized via dividing each bin by the total number of boxes).
Only mutual-information values smaller than 0.2 are shown, which correspond to the ``dark'', i.e., low mutual information spots in Fig. \ref{fig:AET}c. \textbf{b} Cumulative distribution calculated for the histogram shown in \textbf{a}. From \textbf{a}, \textbf{b} it is apparent that the number of low-uncertainty boxes increases for larger annealing times. \textbf{c-d} Investigation of the radial distribution of the mutual information. \textbf{c} Histograms of uncertainty (mutual information) obtained via spatially binning the SPM maps of Fig. \ref{fig:AET}c into spherical shells, where the median is computed for each bin. Given a mutual information value, the associated radius is calculated as the distance of the center of the corresponding box (as obtained via SPM) to the center of the most central box. \textbf{d} Each panel shows the difference between the cumulative distributions of two annealing times, where the cumulative distributions are calculated from the histograms shown in \textbf{b}. In addition, the cumulative distributions are normalized in the following way: given the times $t_1, t_2$ with $t_1<t_2$, the difference between the cumulative sums at $t_2$ and $t_1$ is calculated and then divided by the cumulative sum at time $t_1$, such that the fractional change from $t_1$ to $t_2$ is obtained. One can conclude that in \textbf{c} a clear decrease of mutual information can be spotted in specific regions, e.g., for the radial region 15--20\,\AA. The cumulative sums that are used in \textbf{d} allow us to quantify the order more globally in the sense that each bin (of the cumulative sum corresponding to a specific annealing time) is proportional to the spherically averaged integral from radius zero up to the radius corresponding to the bin. Since the particle sizes change over time due to diffusion, the particles differ in size.
Thus, we single out a radius at which to compare the global order: for instance, comparing the bins corresponding to a radius of $r=25$\,\AA, we see that for all three panels, the values are negative and thus the structure that has been annealed longer shows larger global order. } \label{fig:AET_annealing_supp} \end{table*} \clearpage \clearpage \begin{table} \begin{center} \begin{tabular}{lllll} \# & Prototype & Symmetry & Material type & Data source \\ \hline \hline 1.& bcc (W) & 229, cubic & Bulk, Elemental solid & AFLOW / NOMAD \\ 2.& diamond (C) & 227, cubic & Bulk, Elemental solid & AFLOW / NOMAD \\ 3.& fcc (Cu) & 225, cubic & Bulk, Elemental solid & AFLOW / NOMAD \\ 4.& $\alpha$-Po & 221, (simple) cubic & Bulk, Elemental solid & AFLOW / NOMAD \\ 5.& hcp (Mn) & 194, hexagonal & Bulk, Elemental solid & AFLOW / NOMAD \\ 6.& $\alpha$-La (dhcp) & 194, hexagonal & Bulk, Elemental solid & AFLOW / NOMAD \\ 7.& Hexagonal diamond & 194, hexagonal & Bulk, Elemental solid & AFLOW / NOMAD \\ 8.& Hexagonal graphite & 194, hexagonal & Bulk, Elemental solid & AFLOW / NOMAD \\ 9.& Sn & 191, (simple) hexagonal & Bulk, Elemental solid & AFLOW / NOMAD \\ 10.& Buckled graphite & 186, hexagonal & Bulk, Elemental solid & AFLOW / NOMAD \\ 11.& $\alpha$-As & 166, rhombohedral & Bulk, Elemental solid & AFLOW / NOMAD \\ 12.& $\alpha$-Hg & 166, rhombohedral & Bulk, Elemental solid & AFLOW / NOMAD \\ 13.& $\alpha$-Sm & 166, rhombohedral & Bulk, Elemental solid & AFLOW / NOMAD \\ 14.& $\beta$-O & 166, rhombohedral & Bulk, Elemental solid & AFLOW / NOMAD \\ 15.& $\beta$-Po & 166, rhombohedral & Bulk, Elemental solid & AFLOW / NOMAD \\ 16.& $\gamma$-Se & 152, trigonal hexagonal & Bulk, Elemental solid & AFLOW / NOMAD \\ 17.& Rhombohedral graphite & 166, rhombohedral & Bulk, Elemental solid & AFLOW / NOMAD \\ 18.& $\alpha$-Pa & 139, (body-centered) tetragonal & Bulk, Elemental solid & AFLOW / NOMAD \\ 19.& $\beta$-Sn & 141, (body-centered) tetragonal & Bulk, Elemental solid & AFLOW / NOMAD \\
20.& In & 139, (body-centered) tetragonal & Bulk, Elemental solid & AFLOW / NOMAD \\ 21.& $\gamma$-N & 136, (simple) tetragonal & Bulk, Elemental solid & AFLOW / NOMAD \\ 22.& $\beta$-Np & 129, (simple) tetragonal & Bulk, Elemental solid & AFLOW / NOMAD \\ 23.& $\gamma$-Pu & 70, (face-centered) orthorhombic & Bulk, Elemental solid & AFLOW / NOMAD \\ 24.& $\alpha$-Ga & 64, (base-centered) orthorhombic & Bulk, Elemental solid & AFLOW / NOMAD \\ 25.& Black phosphorus & 64, (base-centered) orthorhombic & Bulk, Elemental solid & AFLOW / NOMAD \\ 26.& Molecular iodine & 64, (base-centered) orthorhombic & Bulk, Elemental solid & AFLOW / NOMAD \\ 27.& $\alpha$-U & 63, (base-centered) orthorhombic & Bulk, Elemental solid & AFLOW / NOMAD \\ \end{tabular} \end{center} \caption{Complete list of prototypes (part I) included in the training set of this work. If provided by the respective resources, information on space group, crystal system or Bravais lattice is listed. \label{table:prototype_listing_part_I}} \end{table} \clearpage \clearpage \begin{table} \begin{center} \begin{tabular}{lllll} \# & Prototype & Symmetry & Material type & Data source \\ \hline \hline 28.& NaCl & 225, cubic & Bulk, Binary compound & AFLOW / NOMAD \\ 29.& CsCl & 221, cubic & Bulk, Binary compound & AFLOW / NOMAD \\ 30.& L1$_2$ (Cu$_3$Au) & 221 (simple) cubic & Bulk, Binary compound & AFLOW / NOMAD \\ 31.& Zinc blende (ZnS) & 216, (face-centered) cubic & Bulk, Binary compound & AFLOW / NOMAD \\ 32.& FeSi & 198 (simple) cubic & Bulk, Binary compound & AFLOW / NOMAD \\ 33.& NiAs & 194, hexagonal & Bulk, Binary compound & AFLOW / NOMAD \\ 34.& Wurtzite (ZnS) & 186, hexagonal & Bulk, Binary compound & AFLOW / NOMAD \\ 35.& L1$_0$ (CuAu) & 123, (simple) tetragonal & Bulk, Binary compound & AFLOW / NOMAD \\ 36.& CrB & 63, (base-centered) orthorhombic & Bulk, Binary compound & AFLOW / NOMAD \\ 37.& MnP & 62, (simple) orthorhombic & Bulk, Binary compound & AFLOW / NOMAD \\ 38.& FeB & 62, (simple) 
orthorhombic & Bulk, Binary compound & AFLOW / NOMAD \\ 39.& AgNbO$_3$ & cubic & Bulk, Ternary compound & CMR \\ 40.& CsSnI$_3$ & cubic & Bulk, Ternary compound & CMR \\ 41.& CsSnCl$_3$ & tetragonal & Bulk, Ternary compound & CMR \\ 42.& Cs$_2$WO$_4$ & tetragonal & Bulk, Ternary compound & CMR \\ 43.& Ca$_3$Ge$_2$O$_7$ & tetragonal & Bulk, Ternary compound & CMR \\ 44. & CsSnCl$_3$ & orthorhombic & Bulk, Ternary compound & CMR \\ 45.& Cu$_2$BaGeSe$_4$ & 144 (trigonal) & Bulk, Quaternary compound & CMR \\ 46. & Cu$_2$CdSnS$_4$ & 121 (tetragonal) & Bulk, Quaternary compound & CMR \\ 47.& Cu$_2$ZnSnS$_4$ & 82 (tetragonal) & Bulk, Quaternary compound & CMR \\ 48.& Cu$_2$KVS$_4$ & 40 (orthorhombic) & Bulk, Quaternary compound & CMR \\ 49.& Cu$_2$CdGeS$_4$ & 31 (orthorhombic) & Bulk, Quaternary compound & CMR \\ 50.& Cu$_2$ZnSiS$_4$ & 7 (monoclinic) & Bulk, Quaternary compound & CMR \\ \end{tabular} \end{center} \caption{Complete list of prototypes (part II) included in the training set of this work. \label{table:prototype_listing_part_II}} \end{table} \clearpage \clearpage \begin{table} \begin{center} \begin{tabular}{lllll} \# & Prototype & Symmetry & Material type & Data source \\ \hline \hline 51. & Graphene & 191 (hexagonal) & 2D Materials & CMR \\ 52. & Ti$_3$C$_2$ & 187 (hexagonal) & 2D Materials & CMR \\ 53. & Ti$_3$C$_2$O$_2$ & 187 (hexagonal) & 2D Materials & CMR \\ 54. & MoS$_2$ & 187 (hexagonal) & 2D Materials & CMR \\ 55. & Ti$_3$C$_2$H$_2$O$_2$ & 187 (hexagonal) & 2D Materials & CMR \\ 56. & GaS & 187 (hexagonal) & 2D Materials & CMR \\ 57. & BN & 187 (hexagonal) & 2D Materials & CMR \\ 58. & Ti$_2$CH$_2$O$_2$ & 164 (trigonal) & 2D Materials & CMR \\ 59. & Ti$_2$CO$_2$ & 164 (trigonal) & 2D Materials & CMR \\ 60. & CdI$_2$ & 164 (trigonal) & 2D Materials & CMR \\ 61. & CH & 164 (trigonal) & 2D Materials & CMR \\ 62. & CH$_2$Si & 156 (trigonal) & 2D Materials & CMR \\ 63. & Ti$_4$C$_3$ & 156 (trigonal) & 2D Materials & CMR \\ 64. 
& BiTeI & 156 (trigonal) & 2D Materials & CMR \\ 65. & Ti$_4$C$_3$O$_2$ & 156 (trigonal) & 2D Materials & CMR \\ 66. & GeSe & 156 (trigonal) & 2D Materials & CMR \\ 67. & MoSSe & 156 (trigonal) & 2D Materials & CMR \\ 68. & Ti$_4$C$_3$H$_2$O$_2$ & 156 (trigonal) & 2D Materials & CMR \\ 69. & AgBr$_3$ & 150 (trigonal) & 2D Materials & CMR \\ \end{tabular} \end{center} \caption{Complete list of prototypes (part III) included in the training set of this work. \label{table:prototype_listing_part_III}} \end{table} \begin{table} \begin{center} \begin{tabular}{lllll} \# & Prototype & Symmetry & Material type & Data source \\ \hline \hline 70. & TiCl$_3$ & 150 (trigonal) & 2D Materials & CMR \\ 71. & BiI$_3$ & 147 (trigonal) & 2D Materials & CMR \\ 72. & FeSe & 129 (tetragonal) & 2D Materials & CMR \\ 73. & PbSe & 123 (tetragonal) & 2D Materials & CMR \\ 74. & GeS$_2$ & 115 (tetragonal) & 2D Materials & CMR \\ 75. & C$_3$N & 65 (orthorhombic) & 2D Materials & CMR \\ 76. & FeOCl & 59 (orthorhombic) & 2D Materials & CMR \\ 77. & P & 28 (orthorhombic) & 2D Materials & CMR \\ 78. & PdS$_2$ & 14 (monoclinic) & 2D Materials & CMR \\ 79. & MnS$_2$ & 14 (monoclinic) & 2D Materials & CMR \\ 80. & GaSe & 12 (monoclinic) & 2D Materials & CMR \\ 81. & TiS$_3$ & 11 (monoclinic) & 2D Materials & CMR \\ 82. & WTe$_2$ & 11 (monoclinic) & 2D Materials & CMR \\ 83. & HfBrS & 7 (monoclinic) & 2D Materials & CMR \\ 84. & RhO & 6 (monoclinic) & 2D Materials & CMR \\ 85. & SnS & 6 (monoclinic) & 2D Materials & CMR \\ 86. & NiSe & 6 (monoclinic) & 2D Materials & CMR \\ 87. & AuSe & 6 (monoclinic) & 2D Materials & CMR \\ \end{tabular} \end{center} \caption{Complete list of prototypes (part IV) included in the training set of this work. \label{table:prototype_listing_part_IV}} \end{table} \clearpage \clearpage \begin{table} \begin{center} \begin{tabular}{lllll} \# & Prototype & Symmetry & Material type & Data source \\ \hline \hline 88. & VTe$_3$ & 6 (monoclinic) & 2D Materials & CMR \\ 89.
& ReS$_2$ & 2 (monoclinic) & 2D Materials & CMR \\ 90. & ScPSe$_3$ & 1 (triclinic) & 2D Materials & CMR \\ 91. & PbA$_2$I$_4$ & 1 (triclinic) & 2D Materials & CMR \\ 92. & PbS & 1 (triclinic) & 2D Materials & CMR \\ 93. & CrW$_3$S$_8$ & 1 (triclinic) & 2D Materials & CMR \\ 94. & VPSe$_3$ & 1 (triclinic) & 2D Materials & CMR \\ 95. & CrWS$_4$ & 1 (triclinic) & 2D Materials & CMR \\ 96. & MnPSe$_3$ & 1 (triclinic) & 2D Materials & CMR \\ 97. & Carbon nanotube & armchair, (3,3), $30.0^\circ, 4.07\,$\AA & Nanotubes, mono-species & ASE \\ 98. & Carbon nanotube & armchair, (4,4), $30.0^\circ, 5.42\,$\AA & Nanotubes, mono-species & ASE \\ 99. & Carbon nanotube & chiral, (4,2), $19.11^\circ, 4.14\,$\AA & Nanotubes, mono-species & ASE \\ 100. & Carbon nanotube & chiral, (4,3), $25.28^\circ, 4.76\,$\AA & Nanotubes, mono-species & ASE \\ 101. & Carbon nanotube & chiral, (5,1), $8.95^\circ, 4.36\,$\AA & Nanotubes, mono-species & ASE\\ 102. & Carbon nanotube & chiral, (5,2), $16.1^\circ, 4.89\,$\AA & Nanotubes, mono-species & ASE\\ 103. & Carbon nanotube & chiral, (5,3), $21.79^\circ, 5.48\,$\AA & Nanotubes, mono-species & ASE\\ 104. & Carbon nanotube & chiral, (6,1), $7.59^\circ, 5.13\,$\AA & Nanotubes, mono-species & ASE\\ 105. & Carbon nanotube & chiral, (6,2), $13.9^\circ, 5.65\,$\AA & Nanotubes, mono-species & ASE\\ 106. & Carbon nanotube & chiral, (7,1), $6.59^\circ, 5.91\,$\AA & Nanotubes, mono-species & ASE\\ 107. & Carbon nanotube & zigzag, (6,0), $0.0^\circ, 4.7\,$\AA & Nanotubes, mono-species & ASE\\ 108. & Carbon nanotube & zigzag, (7,0), $0.0^\circ, 5.48\,$\AA & Nanotubes, mono-species & ASE \end{tabular} \end{center} \caption{Complete list of prototypes (part V) included in the training set of this work. For the carbon nanotubes, the symmetry column specifies the configuration type (chiral, zigzag or armchair) together with the corresponding chiral numbers (n,m), the chiral angle $\theta$ and the nanotube diameter. 
\label{table:prototype_listing_part_V}} \end{table} \begin{figure*}[!htbp] \includegraphics[width=\textwidth]{gb_database_supplementary.pdf} \caption{Five representative examples from the grain boundary database (GBDB), which is the largest, currently available database of DFT-computed grain boundary properties\cite{zheng2020grain}. This database can be accessed via Materials Project or \url{http://crystalium.materialsvirtuallab.org/}. For each structure, four lines of information are provided: The first line specifies the information that is required to uniquely describe a grain boundary structure\cite{lejvcek2010grain}, where first the $\Sigma$-parameter is given, followed by rotation angle, rotation axis and grain-boundary plane. The relative orientation of two neighboring grains is described by three degrees of freedom (rotation angle and axis). The two degrees of freedom specified via the grain-boundary plane complete the unique characterization of a grain-boundary structure. The second line specifies the element and the entry number of the polymorph in the database (for a given element, multiple grain boundaries can be available). The third line specifies the materials project ID and the space group. The last line specifies the grain-boundary type (twist, tilt) alongside the dominating crystal structure. The database entries correspond to periodic cells that contain a grain boundary. We replicate this initial cell isotropically (in the plane parallel to the grain boundary) until at least 1000 atoms are contained in the structure. For all examples, the dominating phase and grain boundary regions are correctly detected as shown via the 3D classification probability maps of the most popular assignments according to ARISE. These selected structures illustrate the advantages of ARISE in the following way: \textbf{a} shows that ARISE can detect dhcp symmetry in a polycrystal.
In particular, the close-packing corresponding to dhcp cannot be classified in comparable automatic fashion by any of the available methods. For hcp (\textbf{b}) and fcc (\textbf{c}), the dhcp assignments only appear at the grain boundary. \textbf{d} and \textbf{e} are two different grain boundary types that not only differ in their defining degrees of freedom but are also of tilt (\textbf{d}) and twist (\textbf{e}) type. ARISE distinguishes the local structures at the grain boundary, as indicated by its assignments: while for the twist type (\textbf{e}) hcp is the dominating assignment at the grain boundary, for the tilt type the hcp probability drops to zero at the grain boundary (except for the outer borders). The following SPM parameters are chosen for all examples: a stride of 2\,\AA\ suffices to resolve the main characteristics, and for a box size of 16\,\AA\ at least 100 atoms are contained in the boxes within the grains. \label{fig:supp_gb_database}} \end{figure*} \begin{figure*}[!htbp] \includegraphics[width=\textwidth]{haadf_stem_full.pdf} \caption{Cross similarity matrix for a selection of the defect library\cite{ziatdinov2019building} that is larger than in the main text (Fig. \ref{fig:STEM}d). Specifically, 140 structures as well as the mono-species structures from Fig. \ref{fig:STEM}a (right),e are considered. For the reconstruction of atomic positions, Atomnet is employed; for the structures from the library, a model that can also classify the chemical species is used. We employed the model that is available at \url{https://github.com/pycroscopy/AICrystallographer/tree/master/DefectNet}. \label{fig:supp_STEM_sim}} \end{figure*}
\section{Introduction} In the framework of the new generation of solar system exploration missions, high priority is given to the observation of the so-called Inner Larger Moons of Saturn, namely, Mimas, Enceladus, Tethys and Dione \cite{Vision:2011}. In particular, the geyser-like jets venting water vapor, ammonia, salts, hydrogen and organics observed by Cassini at the south pole of Enceladus in 2005, 2008 and 2015 \cite{Porco:2006, Spencer:2006, Parkinson:2007, McKay:2008, McKay:2014, MacKenzie:2016, Khawaja:2019} have placed this moon among the targets to search for life and habitability features in the outer solar system (see also \cite{Hartogh:2011}). The scientific questions related to the nature of Enceladus' ejecta can only be answered by carrying out dedicated missions capable of extended observations of the key features observed during Cassini's close passages. This raises the need for specialised orbits offering long close-up views of the surface of this moon. The design of science orbits around planetary satellites brings challenges because of the perturbing effect of the planet's gravity. Near-polar orbits around Enceladus are unstable and can only be reached by expensive change-of-plane maneuvers \cite{Scheeres:2001}. Previous studies for planetary probes have managed to identify long-term stable orbits in Saturn-Enceladus Hill's model. For example, Russell \& Lara \cite{Russell:2009a} performed a global grid search in the unaveraged Saturn-Enceladus Hill's model including the spherical lunar gravity terms, and identified long-term stable orbits with altitudes near 200 km and inclinations approaching 65$^{\circ}$. Lara et al. \cite{Lara:2010} computed a higher-order approximation in the averaged Hill's model and applied it to the Saturn-Enceladus system, finding a stable, quasi-circular frozen orbit around the moon with average altitude and inclination of 183 km and 61$^{\circ}$, respectively. 
This contribution presents a systematic design and analysis of orbits around Enceladus in the Saturn-Enceladus Circular Restricted Three-Body Problem (CR3BP). This model accounts for the perturbing effect of the planet's gravity and offers very good approximations to $n$-body solutions, since the influence of the other moons is negligible owing to their small masses and their large distances from Enceladus. Since the goal is to observe the south pole of this moon, the orbits must develop in 3D. Halo orbits, a type of periodic Libration Point Orbits (LPOs) around the Lagrange points $L_1$ and $L_2$ \cite{Richardson:1980, Howell:1984}, are here computed for the Saturn-Enceladus system and employed as departure and end points of transfer trajectories. These connections exhibit a significant out-of-plane component (inherited from their progenitor Halo orbits) and make close approaches to the surface of the moon. Since they shadow heteroclinic transfers, they will be referred to as s-heteroclinics. The computation is based on finding the intersections between the stable and unstable hyperbolic invariant manifolds (HIMs) of the departure and arrival orbits. The reader is referred to the fundamental work of G\'omez et al. \cite{Gomez:2004}, Canalias \& Masdemont \cite{Canalias:2006} and Barrab\'es et al. \cite{Barrabes:2009} for the computation of heteroclinic connections between LPOs in the Sun-Earth and Earth-Moon systems using HIMs, and to Davis et al. \cite{Davis:2018} for the application to the design of a connection between Halo orbits of $L_1$ and $L_2$ in the Saturn-Enceladus system. The present investigation confirms and extends the latter work and identifies more connections.
The existence of maneuver-free transfers and the periodic character of the Halo orbits can be exploited to construct a fuel-efficient exploration tour of the moon made of chains of itineraries in which the departure and arrival Halo orbits are used as parking orbits between consecutive transfers and as gates to reach other moons in the system. The latter concept would extend to three dimensions the low-energy, low-thrust inter-moon connections designed in previous contributions by these and other authors \cite{Russell:2009, Fantino:2016, Fantino:2019}. The s-heteroclinics between Halo orbits at Enceladus are here proposed as science orbits for the observation of the surface features of interest. Hence, an important aspect of the work is the study of the observational performance of these trajectories. Kinematical and geometrical parameters such as transfer times, distances from the surface, speeds relative to an Enceladus-centered inertial frame, times of overflight, instantaneous and cumulative surface coverage parameters and ground tracks are computed and analysed, aiming to assess the suitability of the computed solutions for scientific use. A preliminary stage of this work was presented in Salazar et al. \cite{Salazar:2019}. The paper is organised as follows. Section~\ref{sec:model} summarizes the relevant characteristics of the CR3BP and illustrates the families of Halo orbits around $L_1$ and $L_2$ computed and employed in this work. The methodology adopted for the design of connections between these orbits and the resulting solutions are presented in Sect.~\ref{sec:heteroclinics}. Section~\ref{sec:observ} presents the observational properties of the computed trajectories. Discussion and conclusions follow in Sect.~\ref{sec:concl}.
\section{The CR3BP, Halo orbits and their stable and unstable HIMs} \label{sec:model} The CR3BP models the motion of a massless body (here the spacecraft, S/C) subjected to the gravitational attraction of two primaries of mass $m_1$ (the first primary, here the planet) and $m_2$ (the second primary, here the moon), assumed to move on circular orbits about their center of mass \cite{Szebehely:1967}. The equations of motion of the third body are conveniently expressed in the synodical barycentric reference frame ($O$, $x$, $y$, $z$) with the two primaries stationary on the $x$-axis. The total mass $\left(m_1+m_2\right)$ of the system and the distance $r_0$ between the primaries are adopted as the units of mass and length, respectively. The unit of time is defined by setting the period $T$ of the orbits of the primaries equal to $2\pi$. This corresponds to assigning unit value to their orbital mean motion. As a result, the positions of the primaries in normalised units are $\left(\mu,0,0\right)$ and $\left(\mu-1,0,0\right)$, respectively (Fig.~\ref{fig:CR3BP}), $\mu$ being the normalised mass of the second primary and the mass ratio of the system: $\mu= m_2/ \left(m_1+m_2\right)$. \begin{figure}[h!] \centering \includegraphics[scale=0.3]{CR3BP.png} \caption{The primaries, the third body and the five libration points in the synodical barycentric reference frame of the CR3BP.} \label{fig:CR3BP} \end{figure} For the equations of motion of the S/C in this model, the definition of the Jacobi constant $C_J$ and the existence, location and properties of the five equilibrium points $L_i$ ($i$=1,2,..,5), the reader is referred to fundamental literature, such as \cite{Szebehely:1967}. When $m_2 \ll m_1$, $L_1$ and $L_2$ (see Fig.~\ref{fig:CR3BP}) approximately lie at the intersections of the $x$-axis with the Hill sphere, centered at the smaller primary and having radius \cite{Capderou:2005} \begin{equation} \label{eq:rH} r_H = \left(\frac{m_2}{3m_1}\right)^{1/3} r_0. 
\end{equation} Table~\ref{tab:CR3BP} reports the basic features of the Saturn-Enceladus CR3BP. \begin{table}[h!] \begin{center} \caption{Basic features of the Saturn-Enceladus CR3BP: mass of Saturn ($m_1$), mass ratio ($\mu$), mean physical radius of Enceladus ($R$), distance between Saturn and Enceladus ($r_0$), orbital period of the system ($T$), radius of the Hill sphere (raw data from \cite{Horizon}). \vspace{5mm}} \label{tab:CR3BP} \begin{tabular}{cccccc} \hline\noalign{\smallskip} $m_1$ & $\mu$ & $R$ & $r_0$ & $T$ & $r_H$\\ (kg) & & (km) & (km) & (day) & (km)\\ \noalign{\smallskip}\hline\noalign{\smallskip} $5.68336\cdot10^{26}$ & $1.899309 \cdot 10^{-7}$ & 252.1 & $2.38042 \cdot 10^{5}$ & 1.37 & 948.7\\ \noalign{\smallskip}\hline \end{tabular} \end{center} \end{table} The linear approximation of the equations of motion close to an equilibrium point leads to families of LPOs. Halo orbits around $L_1$ and $L_2$ have been computed and used in this work owing to their out-of-plane component, which offers opportunities for extended coverage of Enceladus' surface. Each libration point admits two symmetric families of Halos, the so-called Northern and Southern Halo orbits \cite{Howell:1984}, the symmetry being across the $xy$-plane: a Southern Halo orbit can be obtained from a Northern Halo orbit through the transformation $z \rightarrow -z$, $\dot{z} \rightarrow -\dot{z}$. Families of Halo orbits have been computed over a wide energy interval and using an energy discretization such that all the families have 100 members at identical values of $C_J$ between 3.000055 and 3.000131. The periods are between 0.6 and 0.7 days. Figure~\ref{fig:Halos} shows the four families of Northern and Southern Halo orbits around $L_1$ and $L_2$. The reference frame with axes $\xi$, $\eta$ and $\zeta$ is synodical and centered at Enceladus, and the units are unnormalised. \begin{figure}[h!] 
\centering \includegraphics[scale = 0.170]{Halos.png} \caption{Families of Northern (left) and Southern (right) Halo orbits around $L_1$ and $L_2$ (Enceladus-centered synodical frame with unnormalised units).} \label{fig:Halos} \end{figure} The desired branches of the stable and unstable HIMs of the Halo orbits have been approximated by applying a suitable small perturbation in the direction of the stable and unstable eigenvector of the monodromy matrix after appropriate time-transformation through the state transition matrix. Then, these states have been propagated backward and forward in time to globalize the respective trajectories \cite{Parker:1989}. Figure~\ref{fig:HIMs} shows the stable (in blue) and unstable (in red) HIMs of Northern Halo orbits around $L_1$ and $L_2$ with $C_J$ = 3.000102. It has been observed that many trajectories impact Enceladus soon after leaving the vicinity of the Halo orbit as a consequence of the large size of the moon relative to its Hill sphere (respective radii of 252 and 949 km), and this complicates the design of transfers between Halo orbits in the system (see also \cite{Davis:2018}). \begin{figure}[h!] \centering \includegraphics[scale=0.22]{HIMs_Halo_L2.jpg} \\ \includegraphics[scale=0.22]{HIMs_Halo_L1.jpg}\\ \caption{Branches of the unstable (red) and stable (blue) HIMs of Northern Halo orbits with $C_J$ = 3.000102 around $L_2$ (top) and $L_1$ (bottom) in the Saturn-Enceladus system (Enceladus-centered synodic frame with unnormalised units).} \label{fig:HIMs} \end{figure} In this exploration, the search for s-heteroclinics has been conducted between orbits with the same Jacobi constant. This facilitates the identification of very low-cost transfers by the method explained below. 
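As a quick numerical check of the normalisation adopted here, the mass of Enceladus and the Hill radius of Table~\ref{tab:CR3BP} can be reproduced from $m_1$, $\mu$ and $r_0$ through Eq.~(\ref{eq:rH}). The following Python sketch is an illustration of ours (variable names are not from the original analysis):

```python
import math

# Saturn-Enceladus CR3BP values from Table 1 (raw data from JPL Horizons)
m1 = 5.68336e26     # mass of Saturn, kg
mu = 1.899309e-7    # mass ratio mu = m2/(m1 + m2)
r0 = 2.38042e5      # Saturn-Enceladus distance, km

m2 = mu * m1 / (1.0 - mu)                        # mass of Enceladus, kg
r_hill = (m2 / (3.0 * m1)) ** (1.0 / 3.0) * r0   # Hill radius, Eq. (1), km

print(f"m2     = {m2:.4e} kg")
print(f"r_Hill = {r_hill:.1f} km")   # ~948.7 km, matching Table 1
```

The printed Hill radius agrees with the tabulated 948.7 km; $L_1$ and $L_2$ lie approximately at this distance from Enceladus along the $x$-axis.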
\section{Computation of s-heteroclinics between Halo orbits} \label{sec:heteroclinics} Intersections between HIMs of different stability character and belonging to different Halo orbits constitute the mechanism here explored to move within the Hill sphere of Enceladus and perform close approaches to the moon. In the spatial CR3BP, six state variables characterize a trajectory in phase space. Hence, the above intersection is not easy to determine. Poincar\'e sections are a way of reducing the dimension of a trajectory, thus facilitating its visualization \cite{Koon:2000, Koon:2011}. In the case at hand, the intersection with a plane in configuration space reduces the dimensionality to five. Adding the Jacobi constant as a relationship among variables removes one more dimension. Hence, the cuts of the HIMs with the plane must eventually be analysed in four dimensions. Different representations have been proposed to visualize four state variables. For example, Haapala \& Howell \cite{Haapala:2013, Haapala:2014} adopted a single segment to represent simultaneously four states: two states are indicated by the coordinates of the segment base-point, and two additional coordinates are represented by the length. Geisel \cite{Geisel:2013} represented $y$, $z$, $\dot{y}$ in a three-dimensional visual environment in which $\dot{z}$ is displayed using color. Paskowitz \& Scheeres \cite{Paskowitz:2006a} and Davis et al. \cite{Davis:2018} chose spherical coordinates to represent the states at the closest approach to the primary (periapsis map). In this work, s-heteroclinic connections are designed by propagating the HIMs until their first crossing with the plane $\Sigma$ defined by $x=\mu-1$. $\Sigma$ is orthogonal to the $x$-axis and passes through the center of Enceladus. Numerical experiments showed that this choice is the most suitable to obtain a transversal cut of the flow.
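For concreteness, the propagation of a manifold trajectory up to its first crossing of $\Sigma$ can be sketched as follows. The snippet is a schematic illustration of ours (fixed-step RK4, linear interpolation at the crossing; function names and step sizes are assumptions, not the code used for this work), written in the normalisation of Sect.~\ref{sec:model}, with the primaries at $(\mu,0,0)$ and $(\mu-1,0,0)$:

```python
import math

MU = 1.899309e-7  # Saturn-Enceladus mass ratio (Table 1)

def crtbp_rhs(s, mu=MU):
    """CR3BP equations of motion in the synodical barycentric frame."""
    x, y, z, vx, vy, vz = s
    r1 = math.sqrt((x - mu) ** 2 + y * y + z * z)        # distance to Saturn
    r2 = math.sqrt((x - mu + 1.0) ** 2 + y * y + z * z)  # distance to Enceladus
    ax = x + 2.0 * vy - (1.0 - mu) * (x - mu) / r1**3 - mu * (x - mu + 1.0) / r2**3
    ay = y - 2.0 * vx - (1.0 - mu) * y / r1**3 - mu * y / r2**3
    az = -(1.0 - mu) * z / r1**3 - mu * z / r2**3
    return [vx, vy, vz, ax, ay, az]

def jacobi(s, mu=MU):
    """Jacobi constant C_J = 2*Omega - v^2 (conserved along the flow)."""
    x, y, z, vx, vy, vz = s
    r1 = math.sqrt((x - mu) ** 2 + y * y + z * z)
    r2 = math.sqrt((x - mu + 1.0) ** 2 + y * y + z * z)
    return x * x + y * y + 2.0 * (1.0 - mu) / r1 + 2.0 * mu / r2 \
        - (vx * vx + vy * vy + vz * vz)

def rk4_step(s, h, mu=MU):
    k1 = crtbp_rhs(s, mu)
    k2 = crtbp_rhs([si + 0.5 * h * ki for si, ki in zip(s, k1)], mu)
    k3 = crtbp_rhs([si + 0.5 * h * ki for si, ki in zip(s, k2)], mu)
    k4 = crtbp_rhs([si + h * ki for si, ki in zip(s, k3)], mu)
    return [si + h * (a + 2 * b + 2 * c + d) / 6.0
            for si, a, b, c, d in zip(s, k1, k2, k3, k4)]

def first_sigma_crossing(s, h=1e-4, max_steps=200000, mu=MU):
    """Propagate until the first crossing of Sigma (x = mu - 1) with vx > 0."""
    for _ in range(max_steps):
        s_new = rk4_step(s, h, mu)
        f_old, f_new = s[0] - (mu - 1.0), s_new[0] - (mu - 1.0)
        if f_old < 0.0 <= f_new and s_new[3] > 0.0:   # sign change, vx > 0
            frac = -f_old / (f_new - f_old)           # linear interpolation
            return [a + frac * (b - a) for a, b in zip(s, s_new)]
        s = s_new
    return None
```

The conservation of the Jacobi constant provides a convenient accuracy check on the integration.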
Given the symmetries of the problem, only transfers for which $\dot{x} > 0$ at the first crossing of the HIMs with $\Sigma$ have been considered. Then, a planar visualization based on drawing a vector to represent $y$, $z$, $\dot{y}$, $\dot{z}$ has been adopted: the origin of the vector defines the $yz$-position, whereas its length and orientation indicate the corresponding velocity projection (Fig.~\ref{fig:sigma}). The remaining component of the state, namely $\dot{x}$, is determined by the value of $C_J$. A zero-cost s-heteroclinic connection exists between two Halo orbits with the same $C_J$ when two vectors from their HIMs coincide in position, magnitude and direction in the Poincar\'e section described above. In practice, the solution is built by identifying the trajectories whose vectors differ the least. Such differences constitute the position and velocity error of the solution. A connection has been accepted only if its distance and velocity errors are below 1 km and 1 m/s, respectively. Additionally, a safety distance of 20 km from the surface of Enceladus has been imposed. As an example, consider the transfer from a Southern Halo orbit around $L_2$ to a Northern Halo orbit around $L_1$ with $C_J$ = 3.000118. The majority of trajectories (about $65\%$) either impact Enceladus or escape from the Hill sphere. Figure~\ref{fig:example_him} (top) illustrates the trajectories that intersect $\Sigma$. The vector representation of the intersections involving this subset of non-impacting orbits is shown in Fig.~\ref{fig:example_him} (bottom). The blue arrows are associated with the $L_1$ stable manifold, whereas the red arrows represent the $L_2$ unstable manifold. The transfer with the lowest position and velocity error (0.26 km and 0.85 m/s, respectively) is shown in Fig.~\ref{fig:Example_Sol}. The time of flight $\Delta T$ from Halo to Halo is 38.4 hours. \begin{figure}[h!]
\centering \includegraphics[scale = 0.25]{Sigma_2.png} \caption{Vector representation of the four state variables $y$, $z$, $\dot{y}$, $\dot{z}$.} \label{fig:sigma} \end{figure} \begin{figure}[h!] \centering \includegraphics[scale = 0.35]{Example_HIMs_anew.png}\\ \includegraphics[scale = 0.266]{Example_HIMs_bnew.png} \caption{Top: non-impacting trajectories of the unstable (red) and stable (blue) HIMs that intersect $\Sigma$ and originate from a Southern Halo orbit around $L_2$ and a Northern Halo orbit around $L_1$, respectively, both with $C_J$ = 3.000118 (Enceladus-centered synodical frame with unnormalised units). Bottom left: vector representation of the intersections with $\Sigma$ (physical units). Bottom right: vectorial representation of the two trajectories that minimize the position and velocity errors.} \label{fig:example_him} \end{figure} \begin{figure}[h!] \centering \includegraphics[scale = 0.225]{Example_Sol.png} \caption{3D view of the s-heteroclinic connection from a Southern Halo orbit around $L_2$ to a Northern Halo orbit around $L_1$ with $C_J$ = 3.000118 (Enceladus-centered synodic reference frame with unnormalised units).} \label{fig:Example_Sol} \end{figure} Owing to the symmetries of the model, four types of connections between Northern and Southern families need to be explored in order to account for all the possible combinations of departure and arrival orbits. These are: Northern $L_1$ to Northern $L_2$ (type A), Southern $L_2$ to Northern $L_1$ (type B), Northern $L_1$ to Southern $L_1$ (type C) and Southern $L_2$ to Northern $L_2$ (type D). The application of the above method to the families of Halo orbits described in Sect.~\ref{sec:model} has led to the identification of four solutions with position and velocity errors below the chosen thresholds: two transfers between $L_1$ and $L_2$ with $C_J$ = 3.000118, and two transfers with $C_J$ = 3.000072 between Northern and Southern orbits around the same libration point.
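The acceptance criterion described above amounts to a nearest-pair search over the Poincar\'e-section cuts. The following Python fragment is a schematic illustration of ours (the data layout and function names are assumptions, not the authors' code); each cut is a tuple $(y, z, \dot{y}, \dot{z})$ in km and km/s:

```python
import math

def best_connection(unstable_cuts, stable_cuts, tol_pos=1.0, tol_vel=1e-3):
    """Return the (unstable, stable) pair of Sigma-cuts with the smallest
    position/velocity mismatch, subject to the acceptance thresholds of
    1 km and 1 m/s (= 1e-3 km/s) quoted in the text."""
    best, best_err = None, None
    for u in unstable_cuts:
        for s in stable_cuts:
            dp = math.hypot(u[0] - s[0], u[1] - s[1])   # position error, km
            dv = math.hypot(u[2] - s[2], u[3] - s[3])   # velocity error, km/s
            if dp < tol_pos and dv < tol_vel:
                err = (dp, dv)
                if best_err is None or err < best_err:
                    best, best_err = (u, s), err
    return best, best_err
```

For the type-B transfer discussed above, the retained pair corresponds to $\Delta p$ = 0.26 km and $\Delta v$ = 0.85 m/s.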
It is worth noting that the solution of type A of this investigation is in good agreement with the result of Davis et al. \cite{Davis:2018}. The time of flight along the solutions ranges from 38.4 to 57.6 hours. Table~\ref{tab:conn} summarizes the basic features of these s-heteroclinics, i.e., connection type, Jacobi constant, time of flight, and distance and velocity errors. An Enceladus-centered reference frame with inertial axes $X$, $Y$, $Z$ has been introduced. $X$, $Y$, $Z$ are assumed to be parallel to $x$, $y$, $z$ at the beginning of a transfer ($t$=0). Figures~\ref{fig:Type_A} to \ref{fig:Type_D} show the four trajectories. Each figure contains six plots: two planar projections and the 3D trajectory in the synodical (left) and in the inertial (right) reference frames centered at Enceladus. Open circles and squares have been drawn at constant intervals of time along the stable and unstable portions of the trajectory, respectively. The representation of the trajectories of Figs.~\ref{fig:Type_A} to \ref{fig:Type_D} in the ($X$, $Y$, $Z$) frame makes it possible to determine the evolution of their osculating orbital elements, i.e., semimajor axis $a$, eccentricity $e$, inclination $i$, argument of periapsis $\omega$ and longitude of the ascending node $\Omega$ (Figs.~\ref{fig:OrbEl_Type_A} to \ref{fig:OrbEl_Type_C}). All elements vary considerably as a result of the perturbing effect of Saturn's gravity. It is interesting to note the wide range of values covered by inclination and eccentricity, the latter reaching values as high as 1, i.e., the escape condition. \begin{table}[h!] \begin{center} \caption{Features of the s-heteroclinic connections between Halo orbits in the Saturn-Enceladus CR3BP: connection type, Jacobi constant, time of flight $\Delta T$ from Halo to Halo, distance error $\Delta p$ and velocity error $\Delta v$ at $\Sigma$.
\vspace{5mm}} \label{tab:conn} \begin{tabular}{ccccc} \hline\noalign{\smallskip} Connection type & $C_J$ & $\Delta T$ & $\Delta p$ & $\Delta v$ \\ & & (hour) & (km) & (m/s) \\ \noalign{\smallskip}\hline\noalign{\smallskip} A: Northern $L_1$ to Northern $L_2$ & 3.000118 & 50.4 & 0.41 & 0.52\\ B: Southern $L_2$ to Northern $L_1$ & 3.000118 & 38.4 & 0.26 & 0.85\\ C: Northern $L_1$ to Southern $L_1$ & 3.000072 & 57.6 & 0.75 & 0.23\\ D: Southern $L_2$ to Northern $L_2$ & 3.000072 & 57.6 & 0.23 & 0.17\\ \noalign{\smallskip}\hline \end{tabular} \end{center} \end{table} \begin{figure}[h!] \centering \includegraphics[scale = 0.125]{Type_A.png} \caption{S-heteroclinic connection of type A with $C_J$ = 3.000118: Enceladus-centered unnormalised synodical frame (left), Enceladus-centered inertial frame (right), planar projections (top and middle), 3D view (bottom).} \label{fig:Type_A} \end{figure} \begin{figure}[h!] \centering \includegraphics[scale = 0.125]{Type_B.png} \caption{S-heteroclinic connection of type B with $C_J$ = 3.000118: Enceladus-centered unnormalised synodical frame (left), Enceladus-centered inertial frame (right), planar projections (top and middle), 3D view (bottom).} \label{fig:Type_B} \end{figure} \begin{figure}[h!] \centering \includegraphics[scale = 0.125]{Type_C.png} \caption{S-heteroclinic connection of type C with $C_J$ = 3.000072: Enceladus-centered unnormalised synodical frame (left), Enceladus-centered inertial frame (right), planar projections (top and middle), 3D view (bottom).} \label{fig:Type_C} \end{figure} \begin{figure}[h!] \centering \includegraphics[scale = 0.125]{Type_D.png} \caption{S-heteroclinic connection of type D with $C_J$ = 3.000072: Enceladus-centered unnormalised synodical frame (left), Enceladus-centered inertial frame (right), planar projections (top and middle), 3D view (bottom).} \label{fig:Type_D} \end{figure} \begin{figure}[h!] 
\centering \includegraphics[scale = 0.23]{OrbEl_Type_A.png} \caption{Evolution of the osculating orbital elements for the trajectory of type A shown in Fig.~\ref{fig:Type_A} (Enceladus-centered reference frame with inertial axes).} \label{fig:OrbEl_Type_A} \end{figure} \begin{figure}[h!] \centering \includegraphics[scale = 0.23]{OrbEl_Type_B.png} \caption{Evolution of the osculating orbital elements for the trajectory of type B shown in Fig.~\ref{fig:Type_B} (Enceladus-centered reference frame with inertial axes).} \label{fig:OrbEl_Type_B} \end{figure} \begin{figure}[h!] \centering \includegraphics[scale = 0.23]{OrbEl_Type_C.png} \caption{Evolution of the osculating orbital elements for the trajectories of type C and D shown in Figs.~\ref{fig:Type_C} and \ref{fig:Type_D} (Enceladus-centered reference frame with inertial axes).} \label{fig:OrbEl_Type_C} \end{figure} \section{Observational performance} \label{sec:observ} Figure~\ref{fig:h_v} shows the time history of the altitude $h$ and the magnitude $v$ of the inertial velocity of the S/C over the s-heteroclinic connections of Figs.~\ref{fig:Type_A} to \ref{fig:Type_D}. The minimum altitude above the lunar surface is 150 km for the solutions with $C_J$ = 3.000118, and approximately 300 km for those with $C_J$ = 3.000072, whereas the maximum altitude is 850 km and 1000 km, respectively. The velocity is always lower than $150$ m/s. \begin{figure}[h!] \centering \includegraphics[scale = 0.21]{h_v.png} \caption{Time history of the altitude $h$ above Enceladus and the inertial velocity $v$ over the solutions of Figs.~\ref{fig:Type_A} (top), \ref{fig:Type_B} (middle), and \ref{fig:Type_C}-\ref{fig:Type_D} (bottom).} \label{fig:h_v} \end{figure} \begin{figure}[h!] 
\centering \includegraphics[scale = 0.45]{v_h_Kepler.png} \caption{Magnitude of the velocity for Keplerian circular orbits around Enceladus as a function of the altitude above the surface.} \label{fig:v_h_Kepler} \end{figure} For the sake of comparison, Fig.~\ref{fig:v_h_Kepler} shows the magnitude of the circular velocity on Keplerian orbits in the same range of altitudes as the solutions here discussed. The instantaneous coverage of the surface of the moon can be quantified by the parameters $\Lambda_1$ and $\Lambda_2$ shown in Fig.~\ref{fig:coverage}a. They represent the limits of the central angle of coverage of amplitude $2\alpha$ and are measured positively northwards from the equator. The angle $\alpha$ depends on the radius $R$ of the moon and the altitude $h$ of the S/C through \begin{equation} \alpha = \cos^{-1}\left(\frac{R}{R+h}\right). \end{equation} If $\phi$ denotes the latitude of the S/C, then $\Lambda_1$ and $\Lambda_2$ are defined as \begin{eqnarray} \Lambda_1 & = & \phi - \alpha, \\ \Lambda_2 & = & \phi + \alpha. \label{eq:lambda12} \end{eqnarray} For example, when $\phi$ = $40^{\circ}$ and $h$ = 500 km, $\alpha$ = $70.4^{\circ}$, $\Lambda_1$ = $-30.4^{\circ}$, $\Lambda_2$ = $110.4^{\circ}$ and the instantaneous coverage extends from below the equator to beyond the north pole (Fig.~\ref{fig:coverage}a). For $\phi$ = $65^{\circ}$ and $h$ = 200 km, $\alpha$ = $56.1^{\circ}$, $\Lambda_1$ = $8.9^{\circ}$, $\Lambda_2$ = $121.1^{\circ}$ and the S/C can see part of the northern hemisphere including the pole but does not see the equator (Fig.~\ref{fig:coverage}b). Finally, for a polar view with $\phi$ = $-90^{\circ}$ and $h$ = 200 km, $\alpha$ = $56.1^{\circ}$, $\Lambda_1$ = $-146.1^{\circ}$, $\Lambda_2$ = $-33.9^{\circ}$ and the visible area is centered around the south pole (Fig.~\ref{fig:coverage}c). Therefore, when the interval [$\Lambda_1$,$\Lambda_2$] includes either $-90^{\circ}$ or $+90^{\circ}$, the S/C has access to the corresponding pole.
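The coverage geometry above is easily scripted. The Python fragment below is an illustration of ours (the Keplerian-speed helper uses our own estimate of Enceladus' gravitational parameter, not a value from the paper); it reproduces the worked examples, implements the pole-access test, and gives circular speeds comparable to Fig.~\ref{fig:v_h_Kepler}:

```python
import math

R_ENC = 252.1                        # mean radius of Enceladus, km (Table 1)
GM_ENC = 6.67430e-11 * 1.0794e20     # gravitational parameter, m^3/s^2 (our estimate)

def coverage_limits(phi_deg, h_km, R=R_ENC):
    """Instantaneous coverage band [Lambda_1, Lambda_2] in degrees for a S/C
    at latitude phi and altitude h, following the alpha/Lambda definitions above."""
    alpha = math.degrees(math.acos(R / (R + h_km)))
    return phi_deg - alpha, phi_deg + alpha

def sees_pole(phi_deg, h_km, south=True):
    """True if the pole lies inside the instantaneous coverage band."""
    lam1, lam2 = coverage_limits(phi_deg, h_km)
    pole = -90.0 if south else 90.0
    return lam1 <= pole <= lam2

def circular_speed(h_km):
    """Keplerian circular speed (m/s) at altitude h above the surface."""
    return math.sqrt(GM_ENC / ((R_ENC + h_km) * 1e3))
```

For instance, `coverage_limits(40.0, 500.0)` recovers the band from about $-30.4^{\circ}$ to $110.4^{\circ}$ of the first example.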
\begin{figure}[h!] \centering \includegraphics[scale = 0.495]{coverage.png} \caption{Definition of the instantaneous coverage parameters $\phi$, $\alpha$, $h$, $\Lambda_1$ and $\Lambda_2$ and their value for three different S/C positions.} \label{fig:coverage} \end{figure} The time history of $\Lambda_1$ and $\Lambda_2$ for the transfers of Figs.~\ref{fig:Type_A} to \ref{fig:Type_D} is visualised in Fig.~\ref{fig:lambda_hist} (this figure contains three plots because the results for Figs.~\ref{fig:Type_C} and \ref{fig:Type_D} are identical). The significant out-of-plane motion of these trajectories allows observation of both polar regions. The yellow areas in Fig.~\ref{fig:lambda_hist} represent the instantaneous amplitude of coverage as a function of time, limited by the curves of $\Lambda_1$ and $\Lambda_2$. The plots also show the altitude of the S/C, whereas the horizontal dashed lines indicate the two poles. The total duration of the observation windows for the south pole amounts to approximately 4 hours per transfer when $C_J = 3.000118$ (type A), just above 6 hours when $C_J = 3.000118$ (type B) and $21$ hours when $C_J = 3.000072$ (types C and D). Additionally, we have computed the total time of overflight $\tau$ (defined as the total access time of a specific surface point) for the entire surface of Enceladus on each transfer. This parameter depends on the visibility of a given point from the S/C and this, in turn, is expressed by the condition $\epsilon \ge 0$, where $\epsilon$ is the elevation of the S/C on the local horizon (see Fig.~\ref{fig:horizon}). 
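In code, the visibility condition can be evaluated through a normalised dot product between the local vertical at G and the vector from G to the S/C. The fragment below is a schematic of ours (positions expressed in km in an Enceladus-centered synodical frame; names are not from the original implementation), with the overflight time accumulated according to the discretisation formalised next in the text:

```python
import math

R_ENC = 252.1  # mean radius of Enceladus, km

def elevation_deg(r_sc, lon_deg, lat_deg, R=R_ENC):
    """Elevation (degrees) of the S/C above the local horizon of the surface
    point G at geographical longitude/latitude (lon, lat)."""
    lam, beta = math.radians(lon_deg), math.radians(lat_deg)
    RG = (R * math.cos(lam) * math.cos(beta),
          R * math.sin(lam) * math.cos(beta),
          R * math.sin(beta))                        # surface point G
    rG = tuple(s - g for s, g in zip(r_sc, RG))      # vector from G to the S/C
    cos_theta = sum(a * b for a, b in zip(RG, rG)) / (
        R * math.sqrt(sum(c * c for c in rG)))       # normalised dot product
    cos_theta = max(-1.0, min(1.0, cos_theta))
    return 90.0 - math.degrees(math.acos(cos_theta))

def overflight_time(positions, dt, lon_deg, lat_deg):
    """Total access time tau of a surface point: sum of the elementary
    intervals dt during which the S/C is above the local horizon."""
    return sum(dt for r in positions
               if elevation_deg(r, lon_deg, lat_deg) >= 0.0)
```

As a sanity check, a S/C 200 km above the south pole sees that pole at $90^{\circ}$ elevation and is invisible from the equator.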
By discretising $\Delta T$ in $N$ intervals of duration $\delta t$ and assigning to each an elementary time of overflight $\delta \tau_i$ ($i$ = 1,2,...,$N$), \begin{equation} \label{eq:deltatau} \delta\tau_i = \begin{cases} \delta t & \text{if } \epsilon_i \geq 0 \\ 0 & \text{otherwise,} \end{cases} \end{equation} yields the total time of overflight $\tau$ at the given location as \begin{equation} \label{eq:4} \tau = \sum_{i=1}^{N}\delta\tau_i. \end{equation} The computation of $\tau$ has been carried out in the Enceladus-centered synodical frame, thereby taking into account the effect of the spin of the moon, which is synchronous with its orbital motion. Thus, point G (Fig.~\ref{fig:horizon}) is stationary and its components are given by \begin{equation} \label{eq:5} {\bf R}_G = R \left(\cos{\lambda}\cos{\beta}, \sin{\lambda}\cos{\beta}, \sin{\beta}\right)^T, \end{equation} with $\lambda$ and $\beta$ the geographical longitude and latitude of $G$. Then, the co-elevation $\theta$ of the S/C from G is obtained through \begin{equation} \label{eq:7} \cos \theta = \frac{{\bf R}_G \cdot {\bf r}_G}{\left\|{\bf R}_G\right\| \left\|{\bf r}_G\right\|}, \qquad {\bf r}_G = {\bf r} - {\bf R}_G, \end{equation} with ${\bf r}$ = $\left(\xi,\eta,\zeta\right)^T$ the Enceladus-centered synodical position vector of the S/C at the given time. Finally, \begin{equation} \label{eq:6} \epsilon = 90^{\circ} - \theta. \end{equation} The geographical maps of $\tau$ reported in Fig.~\ref{fig:tau} have been obtained by discretising the surface of Enceladus at intervals of 0.01 radians in longitude and latitude. \begin{figure}[h!] \centering \includegraphics[scale = 0.23]{lambda_hist.png} \caption{Time history of the instantaneous coverage parameters $\Lambda_1$, $\Lambda_2$ and the altitude $h$ as functions of time over the transfers of Figs.~\ref{fig:Type_A} (top), \ref{fig:Type_B} (middle), and \ref{fig:Type_C}-\ref{fig:Type_D} (bottom).} \label{fig:lambda_hist} \end{figure} \begin{figure}[h!]
\centering \includegraphics[scale = 0.22]{horizon.png} \caption{Definition of local horizon for a point (G) on the surface of Enceladus, and the corresponding elevation $\epsilon$ and co-elevation $\theta$ of the S/C.} \label{fig:horizon} \end{figure} \begin{figure}[h!] \centering \includegraphics[scale = 0.20]{tau.png} \caption{Geographical maps of total time of overflight for the connections of Figs.~\ref{fig:Type_A} to \ref{fig:Type_D}, respectively from top to bottom.} \label{fig:tau} \end{figure} Finally, we have computed the ground track of the S/C, i.e., the projection of the S/C's orbit onto the surface of Enceladus. Figure~\ref{fig:GT} shows the result for the four solutions of this study. \begin{figure}[h!] \centering \includegraphics[scale = 0.22]{GT.png} \caption{Ground tracks for the trajectories of Figs.~\ref{fig:Type_A} to \ref{fig:Type_D}, respectively from top to bottom.} \label{fig:GT} \end{figure} \section{Discussion and conclusions} \label{sec:concl} The discovery of geyser-like jets from the south pole of Enceladus makes this moon remarkably interesting from the scientific point of view. This fact is boosting plans for its robotic exploration. The need for suitable science orbits cannot be satisfied by conventional moon-S/C two-body solutions, given the strong instabilities caused in this case by the gravity of the planet. Following the path traced by other contributions in the framework of the CR3BP and Hill's model, this work proposes the use of periodic solutions around libration points, in particular Halo orbits, to generate low-energy trajectories for scientific purposes. Such trajectories are s-heteroclinic connections between Northern and Southern Halo orbits around $L_1$ and $L_2$ of the CR3BP formed by Saturn, Enceladus and the S/C. The search for these connections has been carried out in a systematic way by varying the energy of the progenitor Halo orbits.
A planar, vectorial visualization of position and velocity components has facilitated the identification of intersections between HIMs originating from two orbits at a suitable Poincar\'e section. At the adopted energy discretisation (100 Halo orbits in each family covering a Jacobi constant range between 3.000055 and 3.000131), four trajectories with negligible position and velocity errors have been identified. Such errors represent a state mismatch at the Poincar\'e section between the stable and the unstable portion, but their magnitude is negligible (well below 1 km in position and less than 1 m/s in velocity) and can be dealt with during navigation. Hence, these solutions can be regarded as maneuver-free. They correspond to Jacobi constant values in the middle of the assumed range, hence they are associated with Halo orbits of intermediate size. Although a refined search could widen the result set around these orbits, they can be considered representative and illustrative of the behaviour and performance of this type of solution. On the other hand, it must be emphasized that HIM trajectories associated with large or small Halo orbits exhibit a high rate of collision with Enceladus or escape from the Hill sphere, and are less likely to provide maneuver-free connections. The solutions of this work are characterized by times of flight in the range from 38 to 58 hours, large fractions of which are spent looping around Enceladus. The LPOs serve as departure and arrival gates for each transfer. Given their periodic character, they can be used as parking orbits between consecutive flights around the moon. The inspection of the evolution of the osculating Keplerian elements has shown large variations of, in particular, the eccentricity and the inclination.
At times, the former reaches escape values, which suggests the need for an appropriate navigation strategy, as expected for a chaotic system like the CR3BP and, even more so, in the light of the above mentioned escapes and collisions. The observed variations in inclination help achieve the objective of the work, i.e., the design of orbits with a significant out-of-plane motion and access to the polar regions of the moon. The distances from the surface vary between 150 and 1000 km. The speeds relative to an Enceladus-centered inertial frame are in the range 0.08 to 150 m/s, in good agreement with the reference values for Keplerian circular orbits in the same altitude interval, implying that the innovative aspect is not the low speed itself (which is however extremely convenient in the framework of an {\it in situ} mission), but the fact that these trajectories take into account the main perturbation acting on two-body orbits, i.e., the gravity of Saturn. In other words, the trajectories have been obtained in a dynamical model that can be considered realistic, hence accurate. An aspect that certainly deserves attention is the perturbation caused by the main harmonics of the gravity field of the two primaries, particularly the $J_2$ zonal term. This is the subject of on-going work. 
The analysis of the observational performance of the proposed trajectories has shown that the complete surface of Enceladus is visible from the S/C and that uninterrupted windows of access to the southern polar region exist and extend over several hours, the specific duration depending on altitude: for example, in the s-heteroclinic transfer from the Northern Halo around $L_1$ to the Northern Halo around $L_2$ (type A) at $C_J$ = 3.000118 the south pole is seen during two hours from below 400 km altitude, whereas in the solution connecting a Northern Halo with a Southern Halo at $L_1$ (type C) with $C_J$ = 3.000072 the south pole is visible for over 20 hours distributed along four windows at different altitudes. The detailed assessment of the time of overflight (defined as the time spent by the S/C above the local horizon) of a regular grid of points over the surface has been expressed in the form of geographical color maps. These maps show that the local cumulative visibility is never shorter than 4 hours (polar regions), with peaks of 20 or even 40 hours for wide equatorial bands (up to $\pm$ 60 degrees latitude). Finally, the ground tracks help to understand the motion of the S/C with respect to the surface. This motion (computed taking into account the spin rate of the moon) exhibits both prograde and retrograde components, a fact that reflects the large variations in the osculating Keplerian elements in this dynamical model. In conclusion, the trajectories designed and studied in this investigation exhibit appealing properties that make them suitable science orbits for a future mission aiming at answering our fundamental questions regarding the origin and nature of the peculiar features detected at Enceladus. \section*{Acknowledgements} This work has been supported by Khalifa University of Science and Technology's internal grants FSU-2018-07 and CIRA-2018-085.
\section{Introduction} In computer science, fingerprinting is a procedure that summarizes the input data by mapping it to a much shorter item \cite{Broder1993}. Similarly to human fingerprints, such a transformation retains the basic information and properties of the original data, so it can be used to identify the data among other samples. In the acoustic field, audio fingerprinting is understood as an algorithm that extracts the main components of the audio taking into account its perceptual characteristics \cite{Cano2005concepts}. Most of the time, these techniques are applied over the spectrogram representation of the signal. Wang \cite{wang2003} developed the idea of a constellation map for Shazam Entertainment in order to implement an audio search algorithm. Over the years, many different techniques have been developed \cite{Cano2005review}; for example, audio fingerprinting has also been used to recognize activities of daily living \cite{Pires2018}. This paper follows our previous work \cite{lopezgarcia2022}, where audio fingerprinting was applied in depth to the detection of music plagiarism. \section{Methodology} With the aim of extracting the fingerprint of a given musical signal $X_t$, we have designed an algorithm that performs a global peak detection over the associated spectrogram to obtain its constellation map. Let $N_{FFT}$ and $N_O$ be the length of the Fast Fourier Transform (FFT) window and the number of elements to overlap between segments, respectively. Given these two parameters, we first compute the spectrogram of the signal ($S_{tfa}$) using a Hamming window, obtaining the (time, frequency, amplitude) vectors. Such a representation contains the spatial amplitude information to analyze. Our search engine determines whether a time-frequency point can be considered locally relevant according to its neighborhood. Then, the detection is processed along the required band.
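As a minimal sketch of this first step (the parameter values below are illustrative and not taken from the paper; the method itself only fixes the Hamming window, $N_{FFT}$ and $N_O$), the spectrogram can be computed with SciPy:

```python
import numpy as np
from scipy.signal import spectrogram

# Illustrative parameter values: a Hamming window of length N_FFT,
# with an overlap of N_O samples between consecutive segments.
fs, N_FFT, N_O = 22050, 512, 64

x = np.random.rand(fs)  # one second of a stand-in signal
f, t, S = spectrogram(x, fs=fs, window='hamming',
                      nperseg=N_FFT, noverlap=N_O)
# S[j, i] holds the amplitude at frequency f[j] (Hz) and time t[i] (s),
# i.e., the (time, frequency, amplitude) representation S_tfa.
```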
Let $\{T_i\}_{i=1}^n$ and $\{F_j\}_{j=1}^m$ be the time and frequency bands of the spectrogram, each carrying the amplitude of the event, so that we can reformulate the spectrogram as $S_{tfa} = (T_i)_{i=1}^n = (F_j)_{j=1}^m$ in terms of its rows and columns, respectively. As part of the search engine, we define two windows $\phi_T^{d_T}$ and $\phi_F^{d_F}$, with respective lengths $d_T$ and $d_F$, to process the local pairwise comparisons; their purpose is to extract a number of elements of the band and return the local maximum. Without loss of generality, we can mathematically describe the time-band window mechanism with a length of $0< d_T\le n$ and structure $T_i = (T_i^1, ..., T_i^n)$ as: \begin{equation} \displaystyle\phi_T^{d_T}(T_i) = \left(\max{\{T_i^k, ..., T_i^{k+d_T}\}}\right)_{1\le k\le n-d_T-1} \hbox{, per each } i\in\{1,...,n\}. \label{eq:peak_search} \end{equation} When we group all the values, we drop elements with equal indices to avoid duplicates. Hence, we can group the window of each band to create the set: \begin{equation} \displaystyle\Phi_T^{d_T} = \{\phi_T^{d_T}(T_i)\}_{i=1}^n. \label{eq:fingerprint} \end{equation} This way, we get the topologically prominent elements of each feature vector. Owing to equation \ref{eq:peak_search}, it is easy to note that even though there are $n-d_T-1$ matches, the window $\phi_T^{d_T}(T_i)$ may contain a smaller number of elements whenever $d_T > 2$. Depending on how restrictive we need to be, we can proceed with just one of the bands or combine them to obtain a more stringent search and greater distortion resistance, since only the peaks that are prominent in both directions are returned. Finally, the algorithm merges all the band-dependent peaks, as shown in equation \ref{eq:fingerprint}, to give us the total set of spatial points that constitutes the so-called audio fingerprint. In \autoref{fig:spectrogram}, we can see a graphical example of an audio fingerprint.
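The band-wise window mechanism of equations \ref{eq:peak_search} and \ref{eq:fingerprint} can be sketched in NumPy as follows (a simplified illustration, not the package implementation; the function names are ours, and the exact window index range may differ from the released code):

```python
import numpy as np

def band_peak_mask(S, d):
    """Mark the local maximum of every sliding window of d+1 consecutive
    elements along each row of S. Collecting the marks in a boolean mask
    drops duplicate indices automatically, as when grouping the values
    of each band."""
    S = np.asarray(S, dtype=float)
    n_rows, n_cols = S.shape
    mask = np.zeros(S.shape, dtype=bool)
    for k in range(n_cols - d):
        win = S[:, k:k + d + 1]
        mask[np.arange(n_rows), k + win.argmax(axis=1)] = True
    return mask

def fingerprint(S, d_T, d_F):
    """Keep only the peaks that are prominent along both the time and the
    frequency bands (the stricter combined search)."""
    time_peaks = band_peak_mask(S, d_T)      # slide within each time band
    freq_peaks = band_peak_mask(S.T, d_F).T  # slide within each frequency band
    return time_peaks & freq_peaks
```

Applied to a spectrogram `S`, `np.argwhere(fingerprint(S, d_T, d_F))` then yields the coordinates of the constellation map.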
\begin{figure}[ht] \centering \includegraphics[width=0.6\linewidth]{spectrogram_peaks.png} \caption{Example of a spectrogram with its fingerprint stacked.} \label{fig:spectrogram} \end{figure} \section{Algorithm} Our search engine, which powers SpectroMap, processes audio signals and returns an output file with the (time, frequency, amplitude) peaks detected in their spectrogram representation. Thus, it can be combined with the Mercury software to perform an in-depth comparison between music excerpts. Figure \ref{fig:algorithm} gives an overview of the operation of SpectroMap. The algorithm processes the files by means of the following steps: \begin{enumerate}[label={Step \arabic*}, align=left] \item Decide the window to use and set the parameters $N_{FFT}$ and $N_O$. \item Read the audio file to get its amplitude vector and its sample rate. \item Compute the spectrogram through the associated Fourier transformations. \item Set a fixed window length ($d_T$, $d_F$ or both) for the pairwise comparisons. \item\label{it:setting} Choose the settings to proceed with the peak detection over a selected band or a combination of both. \item Create an identification matrix consisting of a binary matrix with the same shape as the spectrogram, marking the positions of the highlighted prominences. \item Extract such elements and create a file with the (time, frequency, amplitude) vectors. \end{enumerate} Regarding step~\ref{it:setting}, the authors highly recommend selecting both bands to perform the peak detection, since the output is more filtered and spatially consistent. For the remaining steps, the choice depends on the scope of the research. It is worth mentioning that the limitations of the method depend on the functionality of the Signal module of the SciPy library. Both installation and usage are described in our GitHub repository \cite{spectromap2022}. \begin{figure}[t!]
\centering \includegraphics[width=\linewidth]{peak_search.png} \caption{Flowchart with the inner architecture of the implemented algorithm, which detects the topological peaks of a spectrogram.} \label{fig:algorithm} \end{figure} \section{Case study: Processing of environmental and urban sound events}\label{sec:results} The aim of this section is to present an experiment in which the performance of SpectroMap is analyzed in terms of computational cost. To this end, we have evaluated the speed of our algorithm over two datasets. On the one hand, Urban Sound 8K \citep{Salamon2014} is an audio dataset that contains 8732 labeled sound excerpts. The files are pre-sorted into ten folds in order to help in the reproduction and comparison of machine learning experiments. The samples have a duration of $\approx 4$s and are classified as urban sounds from 10 classes: air conditioner, car horn, children playing, dog bark, drilling, engine idling, gunshot, jackhammer, siren, and street music. On the other hand, ESC-10 \citep{piczak2015} is a labeled collection of 400 environmental audio recordings suitable for benchmarking methods of environmental sound classification. In particular, ESC-10 is a subset of the larger ESC-50 dataset, which contains 2000 audio excerpts with a total size of $\approx 600$MB publicly available (\url{https://github.com/karolpiczak/ESC-50#download}). Both datasets are commonly used for classification tasks via supervised AI models. Robust performance ($94.6\%$ accuracy) has been achieved with CNN architectures \cite{Hassan2019} on the Urban Sound 8K set, which has also found applications in low-cost monitoring devices \cite{Mydlarz2017}. For the ESC-50 dataset, and hence ESC-10, it has been shown that deep architectures such as Transformers \cite{Chen2022} and CNNs \cite{Elizalde2022} can learn with high precision from this kind of audio source, with $97.00\%$ and $96.70\%$ accuracy, respectively.
\autoref{table:results} presents the computational cost associated with the audio fingerprinting extraction task. All times are reported in seconds. For both datasets, the peak detection is computed per folder and per audio sample. The Python script used to obtain \autoref{table:results} is displayed in \autoref{sec:python_implementation}. The experiments were conducted on a computer equipped with an AMD Ryzen 7 3700U with 16GB RAM running Ubuntu 20.04.3 LTS. \begin{table}[ht!] \centering \caption{Computational costs in seconds for the audio fingerprinting task on both the Urban Sound 8K and ESC-10 experiments.} \label{table:results} \begin{tabularx}{\textwidth}{c @{\extracolsep{\fill}} crrrrrr} \toprule & & & \multicolumn{4}{c}{\textbf{Processing times (seconds)}} & \\ \cmidrule{4-7} & Set & Files & Min & \multicolumn{1}{c}{Mean} & Max & Total & it/s \\ \midrule \multirow{10}{*}{\rotatebox[origin=c]{90}{\textbf{Urban Sound 8K}}} & 1 & 874 & 0.0128 & 0.9690 $\pm$ 0.7436 & 3.7271 & 846.9286 & 1.0319 \\ & 2 & 889 & 0.0152 & 0.7567 $\pm$ 0.5106 & 2.8948 & 672.7798 & 1.3213 \\ & 3 & 926 & 0.0179 & 0.5169 $\pm$ 0.3851 & 2.4533 & 478.6649 & 1.9345 \\ & 4 & 991 & 0.0174 & 0.9705 $\pm$ 0.6401 & 2.9550 & 961.7774 & 1.0303 \\ & 5 & 937 & 0.0265 & 0.6943 $\pm$ 0.5281 & 2.6831 & 650.5726 & 1.4402 \\ & 6 & 824 & 0.0282 & 0.6107 $\pm$ 0.4044 & 2.6635 & 503.2451 & 1.6373 \\ & 7 & 839 & 0.0302 & 0.8225 $\pm$ 0.5418 & 2.8535 & 690.0994 & 1.2157 \\ & 8 & 807 & 0.0496 & 0.8512 $\pm$ 0.7100 & 3.6769 & 686.9888 & 1.1746 \\ & 9 & 817 & 0.0324 & 0.8549 $\pm$ 0.5987 & 4.2657 & 698.4591 & 1.1697 \\ & 10 & 838 & 0.0314 & 0.6919 $\pm$ 0.6479 & 2.6639 & 579.8281 & 1.4452 \\ \midrule \multirow{5}{*}{{\rotatebox[origin=c]{90}{\textbf{ESC-10}}}} & 1 & 80 & 0.2669 & 0.4024 $\pm$ 0.1443 & 0.8562 & 32.1973 & 2.4846 \\ & 2 & 80 & 0.1466 & 0.2741 $\pm$ 0.1195 & 0.6906 & 21.9293 & 3.6480 \\ & 3 & 80 & 0.1562 & 0.2895 $\pm$ 0.1356 &
0.6511 & 23.1636 & 3.4536 \\ & 4 & 80 & 0.1341 & 0.2923 $\pm$ 0.1331 & 0.6861 & 23.3877 & 3.4205 \\ & 5 & 80 & 0.1341 & 0.2719 $\pm$ 0.1072 & 0.5526 & 21.7594 & 3.6765 \\ \bottomrule \end{tabularx} \end{table} \section{Python implementation}\label{sec:python_implementation} This section is dedicated to the application of the SpectroMap algorithm to example cases. In particular, the module is designed to process either a raw signal or a spectrogram. For the first case, we make use of the \verb|spectromap| object. For the second case, we apply the \verb|peak_search| function. In addition, the script that reproduces the results shown is displayed at the end for reproducibility purposes. The library was written for Python 3.8 and depends only on the NumPy 1.19 and SciPy 1.6.3 packages. The repository is under the GNU General Public License v3.0. \subsection{Application over a raw signal} \begin{python}
import numpy as np
from spectromap.functions.spectromap import spectromap

# Generate a random signal
y = np.random.rand(44100)
kwargs = {'fs': 22050, 'nfft': 512, 'noverlap': 64}

# Instantiate the SpectroMap object
SMap = spectromap(y, **kwargs)

# Get the spectrogram representation plus its time and frequency bands
f, t, S = SMap.get_spectrogram()

# Extract the topologically prominent elements from the spectrogram:
#   - Coordinates matrix as (time, freq)
#   - Peak matrix
fraction = 0.15  # Fraction of spectrogram to compute local comparisons
condition = 2    # Axis to analyze (0: Time, 1: Frequency, 2: Time+Frequency)
id_peaks, peaks = SMap.peak_matrix(fraction, condition)

# Get the peak coordinates as a (s, Hz, dB)-array.
extraction_t_f_dB = SMap.from_peaks_to_array()
\end{python} \subsection{Application over a given spectrogram} \begin{python}
from spectromap.functions.spectromap import peak_search

fraction = 0.05  # Fraction of spectrogram to compute local comparisons
condition = 2    # Axis to analyze (0: Time, 1: Frequency, 2: Time+Frequency)
id_peaks, peaks = peak_search(spectrogram, fraction, condition)
\end{python} \subsection{Application over a dataset} Here we present the script that reproduces the experimental cases for Urban Sound 8K described in \autoref{sec:results}. \begin{python}
# OS packages
import os
import time

# Parallel computing
from pqdm.processes import pqdm

# Signal processing modules
import numpy as np
import librosa
from spectromap.functions.spectromap import spectromap

NFFT = 1024
FRACTION = 1/3  # Fraction of spectrogram to compute local comparisons
CONDITION = 2   # Axis to analyze (0: Time, 1: Frequency, 2: Time+Frequency)

def spectromap_for_urbansound8K(file):
    # Build the full path to the file
    file = os.path.join(folder_path, file)
    # Read file
    signal, sample_rate = librosa.load(file, mono=True)
    kwargs = {'fs': sample_rate, 'nfft': NFFT}
    # Initialize SpectroMap
    t0 = time.time()
    SMap = spectromap(signal, **kwargs)
    # Get the spectrogram representation plus its time and frequency bands
    f, t, S = SMap.get_spectrogram()
    # Extract the topologically prominent elements from the spectrogram
    id_peaks, peaks = SMap.peak_matrix(FRACTION, CONDITION)
    # Get the peak coordinates as a (s, Hz, dB)-array.
    extraction_t_f_dB = SMap.from_peaks_to_array()
    cost_time = time.time() - t0
    return cost_time

# Path to Urban Sound 8K audio excerpts
path_UrbanSound8K = os.path.join('./', 'UrbanSound8K/audio')
computational_costs = []
FOLDERS = os.listdir(path_UrbanSound8K)
for folder in FOLDERS:
    print('\n\nProcessing: {}'.format(folder))
    folder_path = os.path.join(path_UrbanSound8K, folder)
    # Start process
    t0 = time.time()
    folder_time = pqdm(os.listdir(folder_path),
                       spectromap_for_urbansound8K,
                       n_jobs=os.cpu_count())
    folder_time = np.array(folder_time)
    # End process
    computational_costs.append(folder_time)
np.save('UrbanSound8K_computational_costs.npy', computational_costs)
\end{python} \section{Conclusions} We have introduced SpectroMap, a peak detection algorithm whose main application is the extraction of audio fingerprints. SpectroMap has proven to be an effective and fast algorithm, with averages of 1.340 and 3.336 iterations per second on the datasets presented in the case study (Urban Sound 8K and ESC-10, respectively). The code and Python implementation of the package have been presented in a straightforward manner in order to ease applicability and reproducibility. Although we have not focused here on the underlying application of comparing audio signals, such an application can be found in our previous paper \cite{lopezgarcia2022}.
\section{Introduction} The recent study of many-body systems interacting via long-range potentials, such as gravitating particles or unscreened plasmas (see \cite{draw2002,campa2008,campa2009,dauxois2010} for other examples), has revealed an interesting property of the entropy that went unnoticed for a surprisingly long time, namely, that it can be \textit{nonconcave} as a function of the energy in the thermodynamic limit. It was known before this discovery that the entropy of finite-size systems could be nonconcave because of boundary or surface contributions (see, e.g., \cite{huller1994b,ispolatov2001b,behringer2006} and \cite{binder1984,challa1986,lee1991} for implicit references to this idea in the context of finite-size scaling). But, in almost all cases, it was assumed that this nonconcavity disappears when taking the thermodynamic limit because the ``bulk'' entropy, which is supposedly always concave as a function of energy, dominates over the ``surface'' entropy. We now know that the situation is more complicated and, at the same time, more interesting. If the interaction in a homogeneous many-particle system is \textit{short-range} (see \cite{campa2009} for a definition), then the thermodynamic entropy of this system is essentially always concave as a function of its energy, as has been proved years ago by Ruelle \cite{ruelle1969} (see also Lanford \cite{lanford1973}) using a separation or so-called ``subadditivity'' argument. However, if the interaction is \textit{long-range}, then the subadditivity argument does not work, and the entropy can be either concave or nonconcave \cite{draw2002,campa2008,campa2009}. The latter possibility has for some time been known to arise in mean-field systems, but the crucial and relatively recent input that the study of systems such as gravitating particles has provided is that nonconcave entropies also arise in systems involving physical interactions that are genuinely ``long-range''. 
In this sense, nonconcave entropies cannot be dismissed as an artifact of mean-field approximations---they are ``physical''. In fact, it is by now established that the nonconcavity of the entropy is related to many interesting physical phenomena, including \begin{itemize} \item the existence of energy regions where the heat capacity, defined microcanonically, is negative (something forbidden in the canonical ensemble) \cite{lynden1977,lynden1999,gross2001,draw2002,campa2008,campa2009}; \item the appearance of first-order phase transitions as well as metastable states in the canonical ensemble \cite{bouchet2005,touchette2005a,campa2009}; \item the nonequivalence of the microcanonical and canonical ensembles at the thermodynamic and equilibrium macrostate levels \cite{ellis2000,touchette2004b}; \item a possible ergodicity breaking in microcanonical dynamics \cite{mukamel2005}. \end{itemize} We will not be directly concerned here with any of these phenomena; instead, we will consider a problem of a more technical nature having to do with how entropies that are nonconcave can be calculated in practice. Our starting point is the age-old thermodynamic result stating that the entropy of a thermodynamic system is the Legendre transform of its free energy, and vice versa. This duality property can only be true, obviously, if the entropy is concave, since Legendre transforms only yield concave functions. Hence, if one knows or suspects that the entropy of a system is nonconcave, then this entropy cannot be obtained from the canonical ensemble by calculating the Legendre transform of the free energy. How can the entropy be calculated then? The goal of this paper is to describe and illustrate in the simplest way possible the various methods that have been proposed in the last few years to answer this question. 
Five methods, which encompass the latest studies on the topic of nonconcave entropies, will be covered: \begin{enumerate} \item Microcanonical contraction (Sec.~\ref{secmicro}); \item Metastable branches of the canonical free energy (Sec.~\ref{seccrit}); \item Generalized canonical ensembles, with special emphasis on the Gaussian ensemble and a new ensemble called the \textit{Betrag} ensemble (Sec.~\ref{secgen}); \item Restricted canonical ensemble (Sec.~\ref{secres}); \item Inverse Laplace transform (Sec.~\ref{secinv}). \end{enumerate} Each of these methods will be illustrated with a simple spin system introduced in \cite{touchette2003,touchette2008} as a pedagogical model of equilibrium statistical mechanics having a nonconcave entropy. The model, as will become clear, is not meant to represent any real physical system, but has the advantage of being exactly solvable, which makes it useful for demonstrating how the methods listed above work in practice, and for illustrating along the way many general results about nonconcave entropies. In theory, all of the methods that will be discussed can be used to calculate any nonconcave entropy, but we will see that some may be more effective or more ``tractable'' than others in practice, depending on the system considered. The question of selecting the ``right'' method for a given system will be discussed at the end of the paper, along with some open problems related to generalized canonical ensembles. \section{Canonical ensemble} \label{seccan} Before we start discussing methods that can be used to calculate nonconcave entropies, let us convince ourselves that the Legendre transform of the canonical free energy does not yield the microcanonical entropy when the latter is nonconcave. This will give us the opportunity to introduce the basic notations used in this paper.
Let $H_N(\omega)$ be the Hamiltonian of a classical $N$-particle system, and let $\omega$ denote a configuration or \emph{microstate} of this system, and $\Lambda_N$ its configuration space. We define the \emph{thermodynamic free energy} or \emph{free energy density} of the canonical ensemble by the limit \begin{equation} \varphi(\beta)=\lim_{N\rightarrow\infty}-\frac{1}{N}\ln Z_N(\beta), \label{eqfe1} \end{equation} where \begin{equation} Z_N(\beta)=\int_{\Lambda_N} \mathrm{e}^{-\beta H_N(\omega)}\, \mathrm{d} \omega \end{equation} is the $N$-particle partition function. The problem that concerns us here is to determine whether $\varphi(\beta)$ can be used to obtain the \emph{thermodynamic entropy} or \emph{entropy density} of the microcanonical ensemble, defined by \begin{equation} s(u)=\lim_{N\rightarrow\infty}\frac{1}{N}\ln \Omega_N(u), \label{eqs1} \end{equation} where \begin{equation} \Omega_N(u)=\int_{\omega\in \Lambda_N: h_N(\omega)=u}\mathrm{d}\omega=\int_{\Lambda_N}\delta(h_N(\omega)-u)\, \mathrm{d}\omega \label{eqdens1} \end{equation} is the density of states, which gives the volume (or, more pictorially, the number) of microstates $\omega$ that have a \textit{mean energy} $h_N(\omega)=H_N(\omega)/N$ equal to $u$. As mentioned in the introduction, the common answer to this problem given by most thermodynamics textbooks (see \cite{silhavy1997} for an exception) is that $s(u)$ can \emph{always} be obtained as the Legendre transform of $\varphi(\beta)$, since $\varphi(\beta)$ and $s(u)$ are Legendre transforms of one another. This implies, incidentally, that one can also \emph{always }calculate $\varphi(\beta)$ as the Legendre transform of $s(u)$. But is this really the complete answer? What if $\varphi(\beta)$ is nondifferentiable, as is the case when there is a first-order phase transition in the canonical ensemble? How does one define the Legendre transform for this case? Also, what if $s(u)$ is nonconcave? 
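To make these questions concrete, here is a small numerical sketch (added for illustration, not part of the original argument) that evaluates the transform $\inf_\beta\{\beta u-\varphi(\beta)\}$ on a grid of $\beta$ values for the block-spin model of Example~\ref{ex1} below, whose entropy $s(u)$ and free energy $\varphi(\beta)$ are known in closed form; the grid bounds and resolution are arbitrary choices. The minimization recovers $s(u)$ only where $s$ is concave:

```python
import numpy as np

def s_sigma(v):
    """Entropy of the free spins of the block-spin model (Example 1)."""
    v = np.clip(v, -1 + 1e-12, 1 - 1e-12)  # regularize the endpoints
    return (-(1 - v) / 2 * np.log((1 - v) / 2)
            - (1 + v) / 2 * np.log((1 + v) / 2))

def s(u):
    """Microcanonical entropy s(u) of the block-spin model (scalar u)."""
    return 0.5 * (s_sigma(2 * u + 1) if u < 0 else s_sigma(2 * u - 1))

def phi(beta):
    """Canonical free energy phi(beta) of the block-spin model."""
    return -0.5 * np.abs(beta) - 0.5 * np.log(2 * np.cosh(beta))

def phi_star(u, betas=np.linspace(-30, 30, 20001)):
    """Legendre-Fenchel transform inf_beta {beta*u - phi(beta)},
    evaluated by brute force on a beta grid."""
    return np.min(betas * u - phi(betas))

# In the concave region (|u| >= 1/2) the transform recovers s(u)...
print(phi_star(0.75) - s(0.75))   # ~ 0
# ...but on the nonconcave part it only returns the concave envelope:
print(phi_star(0.0), s(0.0))      # ~ ln(2)/2 = 0.3466... vs ~ 0
```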
The latter question naturally arises when studying long-range systems. If $s(u)$ is nonconcave, then the Legendre transform of $\varphi(\beta)$ cannot yield $s(u)$ simply because Legendre transforms only yield concave functions \cite{rockafellar1970,tiel1984}. By reading a bit about convex analysis, one learns in fact that the Legendre transform of $\varphi(\beta)$, defined as \begin{equation} \varphi^*(u)=\inf_\beta\{\beta u-\varphi(\beta)\}, \label{eqlf1} \end{equation} yields a concave function corresponding in general to the concave envelope of $s(u)$.\footnote{The transformation defined in Eq.~(\ref{eqlf1}) is actually a generalization of the Legendre transform known as the \emph{Legendre-Fenchel} transform, which can be applied to nondifferentiable as well as nonconcave functions; see \cite{rockafellar1970,tiel1984} for more details. In this paper, we refer to this transform as the Legendre transform for simplicity.} Thus, if $s(u)$ is concave, then $s(u)=\varphi^*(u)$, which is to say that $s(u)$ is the Legendre transform of $\varphi(\beta)$. However, if $s(u)$ is nonconcave, then $s(u)\neq\varphi^*(u)$. In this case, only the concave part of $s(u)$ will be recovered from the Legendre transform $\varphi^*(u)$ of $\varphi(\beta)$. These mathematical results are illustrated in the next example using a simple spin model which will stay with us for the rest of the paper. For proofs of these results, the reader should consult the classical book of Rockafellar \cite{rockafellar1970} or the more readable treatise of van Tiel \cite{tiel1984}. \begin{example}[Block-spin model~\cite{touchette2008}] \label{ex1} Consider the following Hamiltonian: \begin{equation} H_{N}=\frac{N}{2}y +\sum_{i=1}^{N/2} \sigma_i, \label{eqh1} \end{equation} where $y$ and $\sigma_i$, $i=1,2,\dots,N/2$, are spin variables taking values in the set $\{-1,+1\}$.
The first term in the Hamiltonian represents the energy of a block of $N/2$ ``frozen'' spins constrained to take the same spin value $y$ ($N$ is assumed to be even). The second term represents the energy of a second block of $N/2$ ``free'' spins which do not interact with each other nor with the first block of spins. The entropy density $s(u)$ of this spin system can easily be calculated from the definition of this quantity, i.e., from Eqs.~(\ref{eqs1}) and (\ref{eqdens1}). This calculation is presented in \cite{touchette2008} with the result \begin{equation} s(u)=\frac{1}{2}\left\{ \begin{array}{lll} s_\sigma(2u+1) & & u\in [-1,0) \\ s_\sigma(2u-1) & & u\in [0,1], \end{array} \right. \label{eqs2} \end{equation} where \begin{equation} s_\sigma(v)=-\left( \frac{1-v}2\right) \ln \left( \frac{1-v}2\right) -\left( \frac{1+v}2\right) \ln \left( \frac{1+v}2\right) \end{equation} is the entropy of the ``free'' spins $\sigma_i$. As is clear from Fig.~\ref{figcan1}, $s(u)$ is a nonconcave function of $u$ as it has more than one maximum\footnote{It is easy to see that a concave function can only have one maximum.} and its derivative is non-monotonic. To verify that this nonconcave entropy cannot be obtained from $\varphi(\beta)$, we proceed to calculate $\varphi^*(u)$. The direct evaluation of $Z_N(\beta)$ yields for this model \begin{equation} Z_{N}(\beta)=(\mathrm{e}^{\beta N/2}+\mathrm{e}^{-\beta N/2})(\mathrm{e}^{\beta}+\mathrm{e}^{-\beta})^{N/2}, \label{eqpart1} \end{equation} so that \begin{equation} \varphi(\beta)=-\frac{1}{2}|\beta|-\frac{1}{2}\ln (2\cosh\beta). \label{eqfr1} \end{equation} From this expression, plotted in Fig.~\ref{figcan1}, we then compute the Legendre transform defined in Eq.~(\ref{eqlf1}). 
This calculation can again be found in \cite{touchette2008}; the result is \begin{equation} \varphi^{*}(u)=\frac{1}{2} \left\{ \begin{array}{lll} s_\sigma(2u+1) & & u\in [-1,-\frac{1}{2})\\ \ln2 & & u\in [-\frac{1}{2},\frac{1}{2}]\\ s_\sigma(2u-1) & & u\in (\frac{1}{2},1]. \end{array} \right. \end{equation} This function is plotted in Fig.~\ref{figcan1}. We see, as announced, that $\varphi^*(u)$ is a concave function corresponding to the concave envelope of $s(u)$. The parts of $s(u)$ that coincide with $\varphi^*(u)$, seen for $u\in [-1,-\frac{1}{2}]\cup[\frac{1}{2},1]$, are called the \emph{concave} parts of $s(u)$, whereas the part such that $s(u)<\varphi^*(u)$, seen for $u\in(-\frac{1}{2},\frac{1}{2})$, is called the \emph{nonconcave} part of $s(u)$. \end{example} \begin{figure*}[t] \centering \includegraphics{canonical1} \caption{Left: Microcanonical entropy $s(u)$ of the block-spin model defined in Example~\ref{ex1}. Center: Canonical free energy $\varphi(\beta)$ of the model. Right: Concave envelope of $s(u)$ (blue) obtained from the Legendre transform of $\varphi(\beta)$.} \label{figcan1} \end{figure*} We can get more insights into the results presented above and illustrated in Fig.~\ref{figcan1} by noting the following extra results of convex analysis: \begin{itemize} \item The entropy $s$ at the point $u$ is equal to the Legendre transform of $\varphi$ if and only if one can place a line above the graph of $s(u)$ that touches $s(u)$ without crossing it. When this is possible, we say that $s$ admits a \emph{supporting line} at $u$. Mathematically, this property is expressed as follows: $s=\varphi^*$ at $u$ if, and only if, there exists $\beta\in\mathbb{R}$ such that \begin{equation} s(v)\leq s(u)+\beta (v-u) \end{equation} for all $v$. See \cite{touchette2004b,touchette2009} for more details on the concept of supporting lines.
\item The concave envelope or \emph{concave hull} of $s(u)$ is obtained by constructing the set of all the supporting lines of $s(u)$; see Fig.~\ref{figsupp1}.\footnote{Mathematically, the concave envelope of $s(u)$ is also given by the smallest concave function that majorizes $s(u)$ or by a geometrical construction known as Maxwell's construction; see Sec.~4 of \cite{ellis2004} or Chap.~3 of \cite{touchette2003}.} \item If $\varphi$ is differentiable at $\beta$, then \begin{equation} s(u_\beta)=\varphi^*(u_\beta)=\beta u_\beta-\varphi(\beta), \label{eqdlf1} \end{equation} where $u_\beta=\varphi'(\beta)$.\footnote{This result is essentially a consequence of the so-called G{\"a}rtner-Ellis Theorem of large deviation theory; see Sec.~5.2 of \cite{touchette2009}.} \end{itemize} The first result provides a useful geometric understanding of the points of the entropy that can or cannot be obtained from the Legendre transform of the free energy. This is illustrated in Fig.~\ref{figsupp1}. As for the third result, it shows that $s(u)$ is correctly given by the Legendre transform of $\varphi(\beta)$ for all points $u$ lying in the image of the derivative of $\varphi(\beta)$, i.e., all points $u$ such that $u=\varphi'(\beta)$ for some $\beta\in\mathbb{R}$. This implies, in particular, that if $\varphi(\beta)$ is everywhere differentiable and the image of $\varphi'$ coincides with the domain of $s$ (i.e., the set of allowed or ``realizable'' values for $H_N/N$), then $s=\varphi^*$ holds globally. We will often come back to this result in the rest of the paper when treating generalized canonical ensembles. \begin{figure*}[t] \centering \includegraphics{supportinglines1} \caption{(Color online) Left: Illustration of the concept of supporting lines: the line in blue is supporting but not the line in red. 
Right: The concave envelope of $s(u)$ is given by the set of all supporting lines.} \label{figsupp1} \end{figure*} \section{Microcanonical contraction} \label{secmicro} The block-spin model that we have studied in the previous example is simple enough that we can obtain its nonconcave entropy $s(u)$ directly from the definition of this quantity. But, of course, for more realistic and hence more complex models, one should not hope to be able to obtain $s(u)$ in this way. How can $s(u)$ be calculated then? One answer finds its inspiration from large deviation theory, and attempts to derive $s(u)$ by maximizing another entropy subject to the energy constraint. The basis and hypotheses behind this method are the following \cite{ellis2000}. Given the Hamiltonian $H_N(\omega)$, one must be able to find a \emph{macrostate} $M_N(\omega)$ such that the following two properties are satisfied: \begin{itemize} \item The mean energy $h_N(\omega)=H_N(\omega)/N$ can be re-written as a function of $M_N(\omega)$ either exactly or asymptotically in the thermodynamic limit $N\rightarrow\infty$. Mathematically, this implies that there exists a function $\tilde h(m)$ such that \begin{equation} |h_N(\omega)-\tilde h(M_N(\omega))|\rightarrow 0 \end{equation} uniformly for all $\omega\in\Lambda_N$ as $N\rightarrow\infty$. The function $\tilde h(m)$ is called the \emph{energy representation function}. \item There exists an entropy function $\tilde s(m)$ for $M_N(\omega)$, which is to say that the following limit exists: \begin{equation} \tilde s(m)=\lim_{N\rightarrow\infty}\frac{1}{N} \ln \Omega_N(M_N=m), \end{equation} where \begin{equation} \Omega_N(m)=\int_{\omega\in \Lambda_N: M_N(\omega)=m}\mathrm{d}\omega=\int_{\Lambda_N}\delta(M_N(\omega)-m)\, \mathrm{d}\omega \end{equation} counts the number of microstates such that $M_N(\omega)=m$. The function $\tilde s(m)$ is called the \emph{macrostate entropy}.
\end{itemize} When both of these conditions are satisfied, it is relatively easy to show (see \cite{touchette2003,ellis2004}) that \begin{equation} s(u)=\sup_{m:\tilde h(m)=u} \tilde s(m). \label{eqcon1} \end{equation} This formula is what we refer to as a \emph{microcanonical contraction}. The word ``contraction'' comes from the fact that this formula can be derived from a result known in large deviation theory as the \emph{contraction principle} \cite{touchette2009}. The same result can also be seen as a form of maximum entropy principle expressing $s(u)$ as the constrained maximization of the macrostate entropy $\tilde s(m)$. It can be shown that the constrained maximizers of $\tilde s(m)$ such that $\tilde h(m)=u$ correspond physically to the equilibrium values of $M_N$ in the microcanonical ensemble with mean energy $u$. By denoting the set of such maximizers by $\mathcal{E}^u$, we can therefore re-express Eq.~(\ref{eqcon1}) as $s(u)=\tilde s(\mathcal{E}^u)$. \begin{example} \label{exmicro1} The calculation of the entropy $s(u)$ of the block-spin model via the contraction formula of Eq.~(\ref{eqcon1}) is presented in \cite{touchette2008}. It is easy to see that a natural choice of macrostate for this model is $m=(y,p)$, where $y$ is the spin value of the ``frozen'' block of spins, and $p$ is the proportion of $+1$ spins in the block of ``free'' spins. In terms of this macrostate, we obviously have \begin{equation} \tilde h(y,p)=\frac{y}{2}+p-\frac{1}{2}. \end{equation} The macrostate entropy for this choice of macrostate is, up to a $1/2$ factor, the Boltzmann-Shannon entropy: \begin{equation} \tilde s(p)=-\frac{1}{2}p\ln p-\frac{1}{2}(1-p)\ln (1-p). \end{equation} This entropy does not depend on $y$ because the block of ``frozen'' spins does not contribute to the entropy of the model. We refer the reader to \cite{touchette2008} again for the calculation of $s(u)$ based on this energy representation function and macrostate entropy. 
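The contraction itself can be transcribed numerically in a few lines: for a given $u$, the constraint $\tilde h(y,p)=u$ fixes $p=u-y/2+1/2$ for each value of $y$, and one maximizes $\tilde s$ over the admissible pairs. The sketch below (ours, not taken from \cite{touchette2008}) confirms that this reproduces Eq.~(\ref{eqs2}) exactly:

```python
from math import log

def s_tilde(p):
    """Macrostate entropy (includes the 1/2 factor)."""
    return -0.5 * sum(w * log(w) for w in (p, 1 - p) if w > 0)

def s_contracted(u):
    """s(u) = sup over (y, p) with h(y, p) = y/2 + p - 1/2 = u of s_tilde(p)."""
    best = None
    for y in (-1, 1):
        p = u - y / 2 + 0.5          # the constraint fixes p for each y
        if 0 <= p <= 1:
            val = s_tilde(p)
            best = val if best is None else max(best, val)
    return best

def s_closed(u):
    """Closed-form entropy, Eq. (eqs2)."""
    v = 2 * u + 1 if u < 0 else 2 * u - 1
    return -0.5 * sum(w * log(w) for w in ((1 - v) / 2, (1 + v) / 2) if w > 0)

# The contraction reproduces Eq. (eqs2) on the whole domain [-1, 1]:
assert all(abs(s_contracted(u) - s_closed(u)) < 1e-12
           for u in (-0.9, -0.5, -0.1, 0.0, 0.3, 0.75, 1.0))
```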
\end{example} Other examples of calculations of entropies based on the microcanonical contraction formula include the mean-field Blume-Emery-Griffiths model \cite{barre2001,ellis2004}, the mean-field Potts model \cite{costeniuc2005a,barre2005} (see also Example 5.4 of \cite{touchette2009}), the mean-field Hamiltonian model \cite{barre2005,campa2006a}, the so-called mean-field $\phi^4$ model \cite{hahn2005,hahn2006}, as well as a variant of this model having a nonconcave entropy $s(u,m)$ as a function of the energy $u$ and magnetization $m$ \cite{campa2007}. From this list and the form of Eq.~(\ref{eqcon1}), one might conclude that this equation is only good for mean-field models, as these are presumably the only models whose Hamiltonian can be re-expressed as a function of some specially chosen macrostates or ``mean-fields''. However, this is not the case. In theory, at least, it is always possible to express the Hamiltonian of any model, including short-range models, as a function of an infinite-dimensional macrostate known as the \emph{empirical process} (see \cite{ellis1985} and Sec.~5.3.4 of \cite{touchette2009}). But given the infinite-dimensional nature of this macrostate, the calculation of $s(u)$ from it is typically impractical if not impossible. For this reason, the microcanonical contraction formula has mostly, if not only, been used in the context of mean-field and long-range models, which are in any case the models for which nonconcave entropies are expected to arise. 
\section{Metastable branches of the free energy} \label{seccrit} The microcanonical contraction formula discussed in the previous section involves a \emph{constrained} maximization problem which can be transformed, following the theory of Lagrange multipliers, into an \emph{unconstrained} maximization by considering the function \begin{equation} G_\beta(m)= \tilde s(m)-\beta \tilde h(m), \end{equation} which involves a Lagrange multiplier $\beta$ associated with the constraint $\tilde h(m)=u$. The question that we ask in this section is: Can we obtain the set of constrained (global) maximizers of $\tilde s(m)$ with $\tilde h(m)=u$, which was denoted in the previous section by $\mathcal{E}^u$, from the set $\mathcal{E}_\beta$ of (global) unconstrained maximizers of the new function $G_\beta(m)$? The answer is, no, at least if $s(u)$ is nonconcave. Using techniques similar to those leading to the microcanonical contraction formula, it can indeed be proved that the canonical free energy $\varphi(\beta)$ is given by the set $\mathcal{E}_\beta$ through the formula \begin{equation} \varphi(\beta)=\inf_{m}\{\beta\tilde h(m)-\tilde s(m)\}=-\sup_{m} G_\beta(m)=-G_\beta(\mathcal{E}_\beta). \label{eqcon2} \end{equation} Therefore, if we were able to obtain $\mathcal{E}^u$ from $\mathcal{E}_\beta$, we would be in a position to obtain $s(u)$ from $\varphi(\beta)$. But we know that this is not possible when $s(u)$ is nonconcave. Hence, $\mathcal{E}_\beta$ cannot be the same as $\mathcal{E}^u$ in general. This result may be surprising but does not contradict the theory of Lagrange multipliers. What this theory actually says is that the global maximizers of $\tilde s(m)$ subject to the constraint $\tilde h(m)=u$ are contained in the set of \emph{critical points} of $G_\beta(m)$, which include the global maximizers of $G_\beta(m)$, but also any local maximizers, minimizers and saddle-points that this function may have. 
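Equation (\ref{eqcon2}) itself is easy to verify for the block-spin model by a brute-force minimization over the macrostates $(y,p)$ of Example~\ref{exmicro1}, using the closed-form $\varphi$ of Eq.~(\ref{eqfr1}) as a reference (a numerical sketch; function names are ours):

```python
from math import log, cosh

def s_tilde(p):
    """Macrostate entropy of Example exmicro1 (with its 1/2 factor)."""
    return -0.5 * sum(w * log(w) for w in (p, 1 - p) if w > 0)

def h_tilde(y, p):
    """Energy representation function of Example exmicro1."""
    return y / 2 + p - 0.5

def phi_from_G(b, n=20001):
    """Eq. (eqcon2): phi(beta) = inf over m of {beta*h(m) - s(m)} = -sup G_beta."""
    ps = [k / (n - 1) for k in range(n)]
    return min(b * h_tilde(y, p) - s_tilde(p) for p in ps for y in (-1, 1))

def phi_exact(b):
    """Closed-form free energy, Eq. (eqfr1)."""
    return -0.5 * abs(b) - 0.5 * log(2 * cosh(b))

for b in (-2.0, -0.5, 0.0, 1.0, 3.0):
    assert abs(phi_from_G(b) - phi_exact(b)) < 1e-6
```

The grid minimization locates only the global maximizer of $G_\beta$, which is precisely why this route returns $\varphi(\beta)$ and nothing more.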
The theory simply does not say what the constrained maximizers of $\tilde s(m)$ correspond to at the level of $G_\beta(m)$. To obtain this information, one must go deeper into the structure of the microcanonical contraction formula, Eq.~(\ref{eqcon1}), and its canonical counterpart, Eq.~(\ref{eqcon2}), to find the following (see \cite{touchette2005a}): \begin{enumerate} \item If $s$ is nonconcave at $u$, then the elements of $\mathcal{E}^u$ correspond either to local maxima of $G_\beta(m)$ or saddle-points of this function depending on the local curvature of $s(u)$. \item If $s$ is concave at $u$, then the elements of $\mathcal{E}^u$ are also elements of $\mathcal{E}_\beta$ for some $\beta\in\mathbb{R}$, which is consistent with the fact that $s=\varphi^*$ in this case. \end{enumerate} We reach two conclusions from these results. The first is that the microcanonical and canonical ensembles are equivalent at the level of thermodynamic properties and equilibrium values of macrostates when $s(u)$ is concave \cite{ellis2000,touchette2003,touchette2004b,touchette2009}. The second is more pragmatic: It is possible to obtain $s(u)$ from the knowledge of the critical points of $G_\beta(m)$, but we must consider all the critical points of this function, not just its global maximizers \cite{bouchet2005}. We must locate, in particular, the local maxima of $G_\beta(m)$, which correspond physically to \emph{metastable} values of $M_N$ in the canonical ensemble, as well as saddle-points of $G_\beta(m)$, which correspond to \emph{unstable} values of $M_N$ in the same ensemble. This conclusion is put to use in the next example.
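Before turning to that example, the critical-point structure of $G_\beta$ can be previewed numerically for the block-spin model, reusing the macrostate $(y,p)$ of Example~\ref{exmicro1}. For fixed $y$, stationarity in $p$ gives $\tilde s'(p)=\beta$, i.e., $p^*=1/(1+\mathrm{e}^{2\beta})$, shared by both $y$-branches; the branch with the smaller value of $G_\beta$ is the metastable one (a sketch, with our own function names):

```python
from math import log, exp

def s_tilde(p):
    return -0.5 * sum(w * log(w) for w in (p, 1 - p) if w > 0)

def h_tilde(y, p):
    return y / 2 + p - 0.5

def s_closed(u):
    """Closed-form entropy, Eq. (eqs2)."""
    v = 2 * u + 1 if u < 0 else 2 * u - 1
    return -0.5 * sum(w * log(w) for w in ((1 - v) / 2, (1 + v) / 2) if w > 0)

def maximizers(b):
    """Stationary points of G_b(y, p) = s_tilde(p) - b*h_tilde(y, p): both
    y-branches share p* = 1/(1 + e^(2b)).  Sorted so that the global maximum
    of G_b (equilibrium) comes first and the metastable maximum second."""
    p = 1 / (1 + exp(2 * b))
    pts = [(s_tilde(p) - b * h_tilde(y, p), y, p) for y in (-1, 1)]
    return sorted(pts, reverse=True)

b = 1.0                          # any beta > 0 gives the same picture
_, y_meta, p_meta = maximizers(b)[1]
u_meta = h_tilde(y_meta, p_meta)
assert 0 < u_meta < 0.5          # metastable energy: inside the nonconcave region
assert abs(s_tilde(p_meta) - s_closed(u_meta)) < 1e-9   # and s(u) is recovered there
```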
\begin{example} \label{exmbfe1} For the block-spin model, the function $G_\beta(m)$ has the simple form \begin{equation} G_\beta(y,p)=\tilde s(p)-\beta\,\tilde h(y,p),\qquad y\in\{-1,1\},\quad p\in[0,1], \end{equation} where $\tilde s(p)$ and $\tilde h(y,p)$ are the macrostate entropy and energy representation function, respectively, introduced in Example~\ref{exmicro1}. The calculation of the critical points of this form of $G_\beta(y,p)$ can be found in \cite{touchette2008} as well as in Sec.~5.1 of \cite{touchette2003}. For the purpose of this section, there are two points to note about this solution: \begin{enumerate} \item $G_\beta(y,p)$ has no saddle-points, but has local maxima, i.e., metastable states, for all $\beta\in\mathbb{R}$; \item If we denote the set of metastable points of $G_\beta(y,p)$ for a given $\beta$ by $\mathcal{M}_\beta$, then $s(u)=\tilde s(\mathcal{M}_\beta)$ for $u=\tilde h(\mathcal{M}_\beta)\in [-\frac{1}{2},\frac{1}{2}]$. \end{enumerate} The last point demonstrates that the metastable states of $G_\beta(y,p)$ can be used, as claimed, to recover nonconcave points of $s(u)$. In fact, for this particular model, $\mathcal{M}_\beta$ recovers the whole nonconcave region of $s(u)$. The set $\mathcal{E}_\beta$ of \emph{stable} or \emph{equilibrium} macrostates recovers only the concave parts of $s(u)$. \end{example} The nonconcave points of $s(u)$ can also be related, at the thermodynamic level, to metastable \emph{branches} of the canonical free energy function $\varphi(\beta)$ rather than metastable \emph{states} of the canonical ensemble, as was done above. This is illustrated next. \begin{figure*}[t] \centering \includegraphics{metastablebranches1} \caption{(Color online) Left: Plot of $\varphi_1(\beta)$ (blue), $\varphi_2(\beta)$ (purple), and $\varphi(\beta)$ (dashed). The branches of $\varphi_1(\beta)$ and $\varphi_2(\beta)$ that lie above $\varphi(\beta)$ are metastable branches of the free energy. 
Right: The complete nonconcave entropy is recovered by taking the Legendre transform of $\varphi_1(\beta)$ and $\varphi_2(\beta)$.} \label{figmeta1} \end{figure*} \begin{example} \label{exmb1} The exact partition function shown in Eq.~(\ref{eqpart1}) can be put in the form \begin{equation} Z_{N}(\beta)=Z_{N}^{(1)}(\beta)+Z_{N}^{(2)}(\beta), \label{eqz2} \end{equation} where \begin{equation} Z_{N}^{(1)}(\beta)=\mathrm{e}^{\beta N/2}(\mathrm{e}^{\beta}+\mathrm{e}^{-\beta})^{N/2},\qquad Z_{N}^{(2)}(\beta)=\mathrm{e}^{-\beta N/2}(\mathrm{e}^{\beta}+\mathrm{e}^{-\beta})^{N/2}. \label{eqz3} \end{equation} From these two partition functions, it is natural to define two free energy functions, $\varphi_1(\beta)$ and $\varphi_2(\beta)$, using the definition of the free energy shown in Eq.~(\ref{eqfe1}): \begin{equation} \varphi_1(\beta)=-\frac{1}{2}\beta-\frac{1}{2}\ln (2\cosh\beta),\qquad \varphi_2(\beta)=\frac{1}{2}\beta-\frac{1}{2}\ln (2\cosh\beta). \label{eqfe2} \end{equation} The relation between these two free energies and $\varphi(\beta)$ is easily found by using the expression of Eq.~(\ref{eqz2}) in the limit defining $\varphi(\beta)$ to find \begin{equation} \varphi(\beta)=\inf\{\varphi_1(\beta),\varphi_2(\beta)\}= \left\{ \begin{array}{lll} \varphi_2(\beta) & & \beta\leq0\\ \varphi_1(\beta) & & \beta>0. \end{array} \right. \end{equation} This result is illustrated in Fig.~\ref{figmeta1}. 
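Both this decomposition and its consequence discussed below can be checked numerically. The sketch that follows (our code; the Legendre transforms are computed as $\varphi_i^*(u)=\inf_\beta\{\beta u-\varphi_i(\beta)\}$ on a grid) verifies that $\varphi$ is the lower envelope of the two branches and that, anticipating the discussion below, the two transforms $\varphi_1^*$ and $\varphi_2^*$ together rebuild the full nonconcave entropy:

```python
from math import log, cosh

def phi1(b): return -0.5 * b - 0.5 * log(2 * cosh(b))
def phi2(b): return  0.5 * b - 0.5 * log(2 * cosh(b))
def phi(b):  return min(phi1(b), phi2(b))   # = -|b|/2 - (1/2) ln(2 cosh b)

def legendre(f, u, bmax=30.0, n=20001):
    """Numerical Legendre transform f*(u) = inf_b {b*u - f(b)}."""
    return min(b * u - f(b)
               for b in (-bmax + 2 * bmax * k / (n - 1) for k in range(n)))

def s_closed(u):
    """Closed-form entropy of the block-spin model, Eq. (eqs2)."""
    v = 2 * u + 1 if u < 0 else 2 * u - 1
    return -0.5 * sum(w * log(w) for w in ((1 - v) / 2, (1 + v) / 2) if w > 0)

# phi is the lower envelope of the two branch free energies ...
assert abs(phi(1.3) - (-0.5 * 1.3 - 0.5 * log(2 * cosh(1.3)))) < 1e-12
# ... and the branch transforms rebuild s(u), nonconcave part included:
for u in (-0.75, -0.25, 0.25, 0.75):
    assert abs(max(legendre(phi1, u), legendre(phi2, u)) - s_closed(u)) < 1e-4
```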
The branches of $\varphi_1(\beta)$ and $\varphi_2(\beta)$ that do not contribute to $\varphi(\beta)$ can be interpreted as \emph{metastable} branches of $\varphi(\beta)$, since they continue, in the sense of analytical continuation, the two `stable' branches of $\varphi(\beta)$ while remaining above the `true' minimal equilibrium free energy $\varphi(\beta)$.\footnote{Recall that equilibrium states of the canonical ensemble correspond, according to Gibbs, to those states minimizing the free energy.} As was the case for the metastable states of $G_\beta(y,p)$ studied in the previous example, these metastable branches of $\varphi(\beta)$ completely determine $s(u)$ by Legendre transform: \begin{equation} s(u)=\sup \{\varphi^*_1(u),\varphi_2^*(u)\}= \left\{ \begin{array}{lll} \varphi_1^*(u) & & u\in [-1,0)\\ \varphi_2^*(u) & & u\in [0,1]. \end{array} \right. \label{eqsm1} \end{equation} This result is also illustrated in Fig.~\ref{figmeta1}, and is proved directly by calculating the Legendre transforms of $\varphi_1(\beta)$ and $\varphi_2(\beta)$. \end{example} The idea of analytically continuing the free energy around a phase transition point to characterize metastable states was studied for some time in the context of short-range models \cite{langer1967,langer1969,newman1977,newman1980}. However, it was somewhat abandoned after it was realized that continued free energies do not provide correct estimates for the lifetime of metastable states. With hindsight, one could argue that these estimates were wrong because metastable states of short-range systems do not persist in the thermodynamic limit; they arise because of surface effects or, more precisely, because of a ``sub-bulk'' nonconcavity of the entropy, which, as mentioned, disappears in the thermodynamic or ``bulk'' limit. For this reason, metastable states of short-range systems cannot be associated with metastable branches of the free energy because, if they were, then the entropy would have to be nonconcave.
For long-range systems, the situation is different, since these can have states that are truly metastable in the thermodynamic sense, and are associated with metastable branches of $\varphi(\beta)$, as illustrated by the previous examples. The same phenomenon has also been studied in the context of gravitating systems; see, e.g., \cite{chavanis2006} for a recent review. Still, one must be careful: It is known that different entropies having the same concave envelope lead, by Legendre transform, to the same $\varphi(\beta)$, so it is not possible to uniquely determine $s(u)$ by analytically continuing $\varphi(\beta)$. The existence of metastable branches of $\varphi(\beta)$ must be determined, ultimately, by calculating $Z_N(\beta)$, as was done in the previous example. \section{Generalized canonical ensembles} \label{secgen} The use of generalized ensembles to obtain nonconcave entropies was extensively discussed in previous publications (see \cite{costeniuc2005,touchette2005a,costeniuc2006,touchette2006b,costeniuc2006b}), so we will be brief here. The idea of this method is to obtain $s(u)$ from the Legendre transform of a modified or \emph{generalized free energy function} having the form \begin{equation} \varphi_g(\beta)=\lim_{N\rightarrow\infty}-\frac{1}{N}\ln Z_{N,g}(\beta), \label{eqgfe1} \end{equation} where \begin{equation} Z_{N,g}(\beta)=\int_{\Lambda_N} \mathrm{e}^{-\beta H_N(\omega)-Ng(H_N(\omega)/N)}\, \mathrm{d} \omega \label{eqgpf1} \end{equation} is the generalized partition function. In these expressions, $g$ is a function of the mean energy $H_N/N$, assumed to be continuous. Different choices for this function determine different generalized canonical ensembles that can be used, under some conditions on $g$ (see below), to obtain $s(u)$ even when this function is nonconcave. 
In the following, we will consider two generalized ensembles corresponding to two choices of $g$, and will show that both of these ensembles recover the nonconcave entropy of the block-spin model, but that one is more effective than the other for this purpose. The general result at play behind these two ensembles was proved in \cite{costeniuc2005,costeniuc2006} and can be stated in a simple form as follows: If, for a given choice of function $g$, $\varphi_g(\beta)$ is differentiable at $\beta$, then \begin{equation} s(u)=\beta u-\varphi_g(\beta)+g(u) \label{eqgenlf1} \end{equation} for $u=\varphi_g'(\beta)$. This modified Legendre transform, which is written in short as $s=\varphi_g^*+g$, generalizes the standard Legendre transform shown in Eq.~(\ref{eqdlf1}) in an obvious way. In particular, if we are able to find a function $g$ such that $\varphi_g(\beta)$ is everywhere differentiable and the range of $H_N/N$ coincides with the image of $\varphi_g'(\beta)$, then $s=\varphi^*_g+g$ for all $u$ in the domain of $s(u)$. We will see next that such a function can be constructed in some appropriate limit. \subsection{Gaussian ensemble} The choice $g(u)=\gamma u^2/2$ with $\gamma\in\mathbb{R}$ in Eq.~(\ref{eqgpf1}) leads to a generalized ensemble known as the \emph{Gaussian ensemble}. This ensemble was first introduced in the context of Monte Carlo simulations by Hetherington \cite{hetherington1987a,hetherington1987}, who also discussed its physical interpretation in terms of finite-size heat baths (see \cite{stump1987,challa1988a,challa1988}), and was later re-investigated in the context of nonconcave entropies in \cite{costeniuc2005,costeniuc2006}. It has been applied so far to obtain the nonconcave entropies of two spin models, namely, the mean-field Potts model \cite{costeniuc2006b} and the mean-field Blume-Emery-Griffiths model \cite{frigori2009}. We apply it next to the block-spin model.
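As a preview of the calculation carried out in the next example, the Gaussian free energy and the quadratic Legendre transform $s(u)=\beta u+\frac{\gamma}{2}u^2-\varphi^G_\gamma(\beta)$ can be evaluated by brute force over the macrostates $(y,p)$ of Example~\ref{exmicro1}. The sketch below is ours; the values $\beta=0.1$ and $\gamma=10$ are merely illustrative:

```python
from math import log

def s_tilde(p):
    return -0.5 * sum(w * log(w) for w in (p, 1 - p) if w > 0)

def h_tilde(y, p):
    return y / 2 + p - 0.5

def s_closed(u):
    """Closed-form entropy, Eq. (eqs2)."""
    v = 2 * u + 1 if u < 0 else 2 * u - 1
    return -0.5 * sum(w * log(w) for w in ((1 - v) / 2, (1 + v) / 2) if w > 0)

def gaussian_min(b, g, n=40001):
    """Grid minimizer of beta*h + (g/2)*h^2 - s_tilde over macrostates (y, p);
    returns (phi_gamma(beta), u_{beta,gamma})."""
    best = None
    for k in range(n):
        p = k / (n - 1)
        for y in (-1, 1):
            h = h_tilde(y, p)
            val = b * h + 0.5 * g * h * h - s_tilde(p)
            if best is None or val < best[0]:
                best = (val, h)
    return best

b, g = 0.1, 10.0
phi_g, u = gaussian_min(b, g)
# The selected energy now falls inside the nonconcave region of s(u) ...
assert -0.5 < u < 0.0
# ... and the quadratic Legendre transform returns s(u) at that point:
assert abs(b * u + 0.5 * g * u * u - phi_g - s_closed(u)) < 1e-9
```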
\begin{example} \label{exgauss1} The first natural step to take in trying to calculate the generalized partition function (\ref{eqgpf1}) with $g(u)=\gamma u^2/2$ is to use the Gaussian integral \begin{equation} \mathrm{e}^{-\gamma u^2/2}=\sqrt{\frac{\gamma}{2\pi}}\int_{-\infty}^\infty \mathrm{e}^{-\gamma t^2/2-\mathrm{i} \gamma u t}\, \mathrm{d} t, \end{equation} valid for $\gamma>0$, to obtain \begin{equation} Z_{N,\gamma}^G(\beta)=\sqrt{\frac{\gamma N}{2\pi}}\int_{-\infty}^\infty\mathrm{d} t\, \mathrm{e}^{-\gamma N t^2/2}\, Z_N(\beta+\mathrm{i}\gamma t) \label{eqgauss1} \end{equation} for the Gaussian partition function. Unfortunately, the resulting integral over $t$ cannot be evaluated in general, and in particular not for the block-spin model, despite the simplicity of this model. This point will be discussed in more detail in the concluding section. A similar integral can be obtained for $\gamma<0$, and this one can actually be evaluated using a saddle-point approximation, but, as will be discussed below, the case $\gamma<0$ is not useful for obtaining nonconcave entropies. It is possible in the end to calculate the Gaussian free energy $\varphi^G_\gamma(\beta)$ of the block-spin model by generalizing the macrostate representation of $\varphi(\beta)$ found in Eq.~(\ref{eqcon2}) to the Gaussian ensemble: \begin{equation} \varphi_\gamma^G(\beta)=\inf_{y,p}\left\{\beta\tilde h(y,p)+\frac{\gamma}{2}\tilde h(y,p)^2-\tilde s(p)\right\}. \end{equation} The minimization problem involved in this expression is easily solved. For $\gamma>0$ and $\beta<0$, the expression between the curly brackets above is globally minimized for $y=1$ and $p$ solving \begin{equation} \beta+\gamma p-\tilde s'(p)=0. \end{equation} For $\gamma>0$ and $\beta>0$, on the other hand, the same expression is globally minimized for $y=-1$ and $p$ solving \begin{equation} \beta+\gamma(p-1)-\tilde s'(p)=0.
\end{equation} Both equations for $p$ are transcendental equations that can be solved numerically to obtain $\varphi_\gamma^G(\beta)$. The result of this numerical calculation is shown in Fig.~\ref{figgauss1}. As can be seen, the Gaussian free energy $\varphi^G_\gamma(\beta)$ obtained for $\gamma>0$ retains the nondifferentiable point of $\varphi(\beta)=\varphi^G_0(\beta)$ at $\beta=0$, but tends to become ``less nondifferentiable'' when $\gamma$ increases, as its left- and right-derivatives approach $0$ for increasing $\gamma$. This behavior of $\varphi^G_\gamma(\beta)$ is illustrated in the lower-left panel of Fig.~\ref{figgauss1}, which shows the plot of $u_{\beta,\gamma}=\partial_\beta\varphi_\gamma^G(\beta)$ for increasing values of $\gamma$. From this plot, we immediately see that the Legendre transform of the Gaussian ensemble, which takes the form \begin{equation} s(u_{\beta,\gamma})=\beta u_{\beta,\gamma}+\frac{\gamma}{2}u_{\beta,\gamma}^2-\varphi_\gamma^G(\beta), \label{eqglf1} \end{equation} should recover more and more nonconcave points of $s(u)$ as $\gamma$ increases. This is confirmed by the right-hand side plot of Fig.~\ref{figgauss1}, which shows the part of the entropy $s(u)$ recovered by Eq.~(\ref{eqglf1}) for $\gamma=0$ (canonical ensemble), $\gamma=5$, $\gamma=10$, and $\gamma=15$. \end{example} \begin{figure*}[t] \centering \includegraphics{gaussian1} \caption{(Color online) Gaussian ensemble. Upper left: Gaussian free energy $\varphi^G_\gamma(\beta)$ of the block-spin model for different values of $\gamma$ (see right). Lower left: $\beta$-derivative of $\varphi^G_\gamma(\beta)$. Right: Quadratic Legendre transform of the Gaussian free energy, which recovers the nonconcave entropy $s(u)$ (dashed line) as $\gamma\rightarrow\infty$. 
The entropy is recovered precisely where $\varphi^G_\gamma$ is differentiable.} \label{figgauss1} \end{figure*} The Gaussian ensemble is an interesting statistical ensemble not only because it can be used to recover nonconcave entropies, as illustrated above, but also because it allows for a natural ``parabolic'' generalization of the concept of supporting lines discussed in Sec.~\ref{seccan}. Because of the quadratic nature of the function $g$ defining this ensemble, it can indeed be proved (see \cite{costeniuc2005,costeniuc2006}) that $s(u)$ is given by the ``quadratic'' Legendre transform of $\varphi^G_\gamma(\beta)$ shown in Eq.~(\ref{eqglf1}) if \begin{equation} s(v)\leq s(u)+\beta(v-u)+\frac{\gamma}{2}(v-u)^2 \end{equation} for all $v$. We say in this case that $s$ admits a \emph{supporting parabola} with curvature $\gamma$ at the point $u$; see Fig.~\ref{figsupp2}. Therefore, the points of $s(u)$ that are recovered by the Gaussian ensemble with parameter $\gamma$ are all (and only) those points that admit a supporting parabola with curvature $\gamma$ or, equivalently, all the points of $s(u)$ coinciding with the \emph{parabolic hull} of this function; see Fig.~\ref{figsupp2}. We see from this result that the entropy of the block-spin model is obtained only in the limit $\gamma\rightarrow\infty$ because the entropy of this model has a cusp or ``corner'' at $u=0$, which can only be ``supported'' by a degenerate parabola of infinite curvature, as shown in the center plot of Fig.~\ref{figsupp2}. The same result also explains why the Gaussian ensemble is able to recover nonconcave points of $s(u)$: By modifying the Legendre transform with an added quadratic term, we are able to ``reach'' with a supporting parabola points of $s(u)$ that cannot be ``reached'' with a supporting line; see Fig.~\ref{figsupp2}.
This implies, naturally, that the Gaussian ensemble with $\gamma>0$ can only recover more points of $s(u)$ as compared with the canonical ensemble, at least if $s(u)$ has a nonconcave region (and is nondegenerate). (Of course, if $s(u)$ is concave, then the Gaussian ensemble with $\gamma>0$ necessarily recovers the whole of $s(u)$ just as the canonical ensemble does.) Conversely, the Gaussian ensemble with $\gamma<0$ must recover \emph{fewer} points of $s(u)$ than the canonical ensemble because points supported by a supporting line may not be supported by a parabola with inverted curvature. This explains our previous observation that the Gaussian ensemble with $\gamma<0$ is not useful for obtaining nonconcave entropies. \begin{figure*}[t] \centering \includegraphics{supportingparabola1} \caption{(Color online) Left: Illustration of the concept of supporting parabola. Center: Supporting parabolas lying at the center of $s(u)$. The $\gamma$ values of these parabolas are those reported in Fig.~\ref{figgauss1}. The entropy at $u=0$ is recovered only in the limit where $\gamma\rightarrow\infty$ because $s(u)$ has a corner at $u=0$. Right: The parabolic envelope of $s(u)$ is given by the set of all supporting parabolas with given curvature $\gamma$.} \label{figsupp2} \end{figure*} \subsection{Betrag ensemble} We now consider a different ensemble defined by the choice $g(u)=\gamma |u|$ with $\gamma\in\mathbb{R}$, which will be referred to as the \emph{Betrag ensemble}.\footnote{This ensemble could also be called the ``absolute value ensemble'', but German seems to provide a better name.} This ensemble was mentioned in \cite{costeniuc2006b}, and is somewhat related to piecewise linear Legendre transforms \cite{ellis1995}, but was never applied before to any equilibrium models. \begin{figure*}[t] \centering \includegraphics{betrag1} \caption{(Color online) Betrag ensemble.
Upper left: Betrag free energy $\varphi^B_\gamma(\beta)$ of the block-spin model for different values of $\gamma$ (see right). Lower left: Derivative of $\varphi^B_\gamma(\beta)$. Right: Deformed Legendre transform of the Betrag free energy, which recovers the nonconcave entropy $s(u)$ (dashed line) as $\gamma\rightarrow\infty$. The entropy is recovered precisely where $\varphi^B_\gamma$ is differentiable.} \label{figbet1} \end{figure*} \begin{example} \label{exbet1} The partition function \begin{equation} Z_N^B(\beta)=\int_{\Lambda_N}\mathrm{e}^{-\beta H_N(\omega)-\gamma |H_N(\omega)|}\, \mathrm{d}\omega \end{equation} associated with the Betrag ensemble can easily be calculated for the block-spin model because the energy of this model is positive when $y=1$ and negative when $y=-1$. The term $|H_N|$ is easily separable, as a result, and we obtain \begin{equation} Z_{N,\gamma}^B(\beta)=Z_{N}^{(1)}(\beta-\gamma)+Z_{N}^{(2)}(\beta+\gamma), \end{equation} where $Z_{N}^{(1)}(\beta)$ and $Z_{N}^{(2)}(\beta)$ are the two canonical partition functions defined in Eq.~(\ref{eqz3}). Given the free energies $\varphi_1(\beta)$ and $\varphi_2(\beta)$ shown in Eq.~(\ref{eqfe2}), we therefore obtain \begin{equation} \varphi^B_\gamma(\beta)=\inf\{\varphi_1(\beta-\gamma),\varphi_2(\beta+\gamma)\} =\left\{ \begin{array}{lll} \varphi_1(\beta-\gamma) & & \beta>0\\ \varphi_2(\beta+\gamma) & & \beta\leq 0 \end{array} \right. \end{equation} for the free energy of the Betrag ensemble. This free energy function is shown in Fig.~\ref{figbet1}. As for the Gaussian free energy, we see that $\varphi^B_\gamma(\beta)$ has a nondifferentiable point at $\beta=0$ for all the values of $\gamma$ considered, but that the image of the derivative of $\varphi_\gamma^B(\beta)$, which we denote by $u_{\beta,\gamma}$ in the lower-left plot of Fig.~\ref{figbet1}, fills more and more points of the interval $[-1,1]$ as $\gamma$ is increased. 
These properties of $\varphi^B_\gamma(\beta)$ were also observed for the Gaussian free energy $\varphi^G_\gamma(\beta)$, and imply that the modified Legendre transform of the Betrag ensemble, given by\footnote{See the general result shown in Eq.~(\ref{eqgenlf1}).} \begin{equation} s(u_{\beta,\gamma})=\beta u_{\beta,\gamma}+\gamma |u_{\beta,\gamma}|-\varphi^B_\gamma(\beta), \end{equation} recovers more and more points of $s(u)$ as $\gamma$ is increased. This is illustrated in the right-hand side plot of Fig.~\ref{figbet1}. As for the Gaussian ensemble, one can show that the Betrag ensemble recovers the full entropy only in the limit $\gamma\rightarrow\infty$. The comparison of Figs.~\ref{figgauss1} and \ref{figbet1} shows, however, that the Betrag ensemble is more efficient at obtaining the full entropy than the Gaussian ensemble. Indeed, both ensembles recover $s(u)$ over an interval of the form $I_\gamma=[-1,-u_\gamma]\cup [u_\gamma,1]$, but $u_\gamma$ converges to $0$ as $\gamma\rightarrow\infty$ faster for the Betrag ensemble than for the Gaussian ensemble; see Fig.~\ref{figaeq1}. This difference in convergence is related to the way the two ensembles achieve equivalence \cite{costeniuc2005}: For the Gaussian ensemble, the limit $\gamma\rightarrow\infty$ is required to recover the whole of $s(u)$ because, as already noted, $s(u)$ has a cusp at $u=0$, whereas for the Betrag ensemble, the limit is needed because $s'(u)$ diverges around its cusp.\footnote{This implies, in particular, that if $s(u)$ has a cusp with finite left- and right-derivatives, then the Betrag ensemble achieves equivalence for a finite value $\gamma>0$, whereas the Gaussian ensemble still requires the limit $\gamma\rightarrow\infty$.} \end{example} \begin{figure}[t] \centering \includegraphics{comparegaussbetrag1} \caption{(Color online). 
Comparison of $u_\gamma$ for the Gaussian ensemble (blue line) and Betrag ensemble (purple line).} \label{figaeq1} \end{figure} The possibility of applying the Betrag ensemble to other models rests on being able to separate the partition function of this ensemble into two sums: one involving microstates having a positive energy, and one involving the complementary set of microstates having a negative energy. Such a separation is easily achieved for the block-spin model because of the structure of its Hamiltonian, but one cannot be so optimistic, of course, as to assume that this sort of partitioning trick can be achieved for more realistic models; it all depends on the form of the Hamiltonian $H_N$ that one considers. In addition to this consideration, it should be clear that, if the nonconcave region of $s(u)$ is located in a region of positive energy, then the Betrag ensemble will not be able to recover or ``express'' any nonconcave points of $s(u)$. In this case, one must replace the function $g(u)=\gamma |u|$ by $g(u)=\gamma |u-u_0|$, where $u_0$ is some fixed value of the mean energy located inside the nonconcave region of $s(u)$. In practice, this means that in order to use the Betrag ensemble in any useful way, one must have some prior information about where $s(u)$ is nonconcave in order to choose the right $u_0$. \section{Restricted canonical ensemble} \label{secres} The concept of restricted canonical ensemble or \emph{restricted partition function} was developed by Penrose and Lebowitz \cite{penrose1979a} not for calculating nonconcave entropies, but as a way to study metastable states in the canonical ensemble. However, from the discussion of these two topics found in Sec.~\ref{seccrit}, one should expect that restricted partition functions may also be useful for obtaining nonconcave entropies. 
The idea behind restricted partition functions is, as the name suggests, to restrict the sum over all microstates $\omega\in\Lambda_N$ defining $Z_N(\beta)$ to a subset of $\Lambda_N$ which will be denoted by $R_N$. Thus, instead of calculating $Z_N(\beta)$, one attempts to calculate \begin{equation} Z^R_N(\beta)=\int_{R_N}\mathrm{e}^{-\beta H_N(\omega)}\, \mathrm{d}\omega. \end{equation} The choice of $R_N$ is determined by the fact that, when $s(u)$ is concave, the sum over $\Lambda_N$ in the standard partition function $Z_N(\beta)$ is dominated in the thermodynamic limit ($N\rightarrow\infty$) by a subset of microstates of $\Lambda_N$ corresponding to the equilibrium states of the canonical ensemble having a fixed energy. However, when $s(u)$ is nonconcave, there is a further important -- yet subdominant -- contribution to the sum of $Z_N(\beta)$ coming from metastable or unstable states of the canonical ensemble. If one chooses $R_N=\Lambda_N$, then only the dominant equilibrium states will contribute to the partition function. But if one selects $R_N$ so as to exclude the dominant states, then $Z^R_N(\beta)$ will be dominated by the metastable or unstable states. In this case one should be able to recover the nonconcave points of $s(u)$, or at least some of them, by taking the Legendre transform of the free energy function $\varphi^R(\beta)$ associated with $Z^R_N(\beta)$. The next example shows how this works in practice. \begin{example} We have seen with the Betrag ensemble that the microstate space $\Lambda_N$ of the block-spin model can be partitioned, with respect to $H_N$, into microstates of positive and negative energy. Let us use this partition to define a restricted partition function $Z_N^+(\beta)$ by summing only over the microstates having a positive energy: \begin{equation} Z_N^+(\beta)=\sum_{y=1,\sigma_1,\ldots,\sigma_N} \mathrm{e}^{-\beta H_{N}}.
\end{equation} Going back to Example~\ref{exmb1}, it is easy to see that $Z_N^+(\beta)$ is nothing but the ``metastable'' partition function $Z_{N}^{(2)}(\beta)$ defined in Eq.~(\ref{eqz3}). Therefore, \begin{equation} \varphi_+^*(u)=\varphi_2^*(u)=\frac{1}{2}s_\sigma(2u-1) \end{equation} for $u\in [0,1]$. A similar result can be derived for $u\in [-1,0]$ by calculating a restricted partition function $Z_N^-(\beta)$ for the microstates having a negative energy. In this case, $Z_N^-(\beta)=Z_N^{(1)}(\beta)$ and so \begin{equation} \varphi_-^*(u)=\varphi_1^*(u)=\frac{1}{2}s_\sigma(2u+1) \end{equation} for $u\in [-1,0]$. Hence, although neither of the restricted partition functions recovers the whole of $s(u)$, their combination does. \end{example} The difficulty of working with restricted partition functions is similar to that of working with the Betrag ensemble: in both cases, one must be able to calculate a partition function over some restricted set of microstates. Whether this can be done in practice depends on the Hamiltonian $H_N$ considered and, more precisely, on the possibility to use symmetries of this Hamiltonian to partition $\Lambda_N$ into easily definable sets of microstates. The block-spin model has such a symmetry, as we have seen, which allows for a straightforward calculation of $Z^+_N(\beta)$ and $Z^-_N(\beta)$, as well as $Z_N^B(\beta)$, the Betrag partition function. In fact, for this model, the functions $Z^+_N(\beta)$ and $Z^-_N(\beta)$ merely re-create $Z_N^B(\beta)$ but in two separate partition functions instead of one. Of course, if $H_N$ admits an energy representation function and macrostate entropy function, then restricted partition functions can be calculated for many conceivable restrictions of $\Lambda_N$ simply by restricting the integrals over the macrostate $M_N$ that result from the macrostate representation.
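To make this combination of restricted free energies concrete, the following Python sketch uses an assumed toy entropy (not the block-spin model itself) built from two concave branches playing the role of $\varphi_-^*$ and $\varphi_+^*$. Grid Legendre transforms show that the full free energy only returns the concave envelope of $s(u)$, while the branch-wise transforms, combined by taking their supremum, recover $s(u)$ itself.

```python
import numpy as np

# Toy nonconcave entropy made of two concave branches (assumed forms,
# standing in for the restricted ensembles Z^-_N and Z^+_N).
u = np.linspace(-1.0, 1.0, 401)
s_minus = -(u + 0.5) ** 2 + 0.25   # "negative-energy" branch
s_plus = -(u - 0.5) ** 2 + 0.25    # "positive-energy" branch
s = np.maximum(s_minus, s_plus)    # nonconcave "true" entropy

beta = np.linspace(-5.0, 5.0, 1001)

def free_energy(branch):
    # phi(beta) = inf_u [beta*u - s(u)], computed on the grid
    return np.min(beta[:, None] * u[None, :] - branch[None, :], axis=1)

def legendre(phi):
    # phi*(u) = inf_beta [beta*u - phi(beta)], computed on the grid
    return np.min(u[:, None] * beta[None, :] - phi[None, :], axis=1)

s_env = legendre(free_energy(s))                      # concave envelope only
s_rec = np.maximum(legendre(free_energy(s_minus)),
                   legendre(free_energy(s_plus)))     # branch combination

i0 = len(u) // 2                     # index of u = 0, the nonconcave dip
# The double transform of the full entropy overshoots at the dip ...
assert s_env[i0] > s[i0] + 0.2
# ... while the sup of the branch transforms recovers s(u) (up to grid error).
assert np.max(np.abs(s_rec - s)) < 0.02
```

The assertions confirm numerically what the example shows analytically: a single Legendre transform of the unrestricted free energy loses the nonconcave dip, whereas the restricted (branch-wise) transforms do not.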
In this context, the restriction method can be seen as a way to locate the critical points of the function $G_\beta(m)$, considered in Sec.~\ref{seccrit}, by restricting the range of values allowed for $M_N$. Finally, note that in the extreme case where the sum-over-states of the standard partition function $Z_N(\beta)$ is restricted only to microstates of constant energy $u$, the resulting restricted free energy function is necessarily equal to the entropy, up to some constant. This follows, of course, because the microcanonical ensemble is a special restricted ensemble that considers only microstates with a constant energy. \section{Inverse Laplace transform} \label{secinv} The last method that we discuss has been introduced recently in \cite{touchette2010}. Its basis is the inverse Laplace transform that expresses the density of states $\Omega_N(u)$ in terms of the partition function $Z_N(\beta)$: \begin{equation} \Omega_N(u)=\frac{1}{2\pi \mathrm{i}}\int_{r-\mathrm{i}\infty}^{r+\mathrm{i}\infty} Z_N(\beta)\,\mathrm{e}^{\beta Nu}\,\mathrm{d}\beta. \label{eqilt1} \end{equation} This integral is a complex integral along the path or contour $\Re(\beta)=r$, often referred to as the \emph{Bromwich contour}. The value of $r$ used to position this contour must be chosen in the region of convergence of $Z_N(\beta)$, but is otherwise arbitrary. Since the inverse Laplace transform expresses the density of states exactly in terms of the partition function, it can be used, obviously, to obtain $s(u)$ from $Z_N(\beta)$ even if the former is nonconcave. In fact, it is known that, since the entropy is a thermodynamic-limit function, one needs to know in general only the asymptotic form of $Z_N(\beta)$ as $N\rightarrow\infty$ to obtain $s(u)$ via the inverse Laplace transform. One has to be careful, however, to retain the dominant \emph{and} subdominant terms of $Z_N(\beta)$ when performing any approximations of the Bromwich integral.
If one retains only the dominant term, then only the concave envelope of $s(u)$ is recovered, in accordance with our discussion of metastable branches of $\varphi(\beta)$ (see Secs.~\ref{seccrit} and \ref{secres}). This point is illustrated next. \begin{example} Given $\varphi(\beta)$, we can approximate the partition function as $Z_N(\beta)\approx\mathrm{e}^{-N\varphi(\beta)}$, plug this approximation into the integral of the inverse Laplace transform of Eq.~(\ref{eqilt1}), and then naively approximate the resulting integral by its saddlepoint (see, e.g., Appendix C.1 of \cite{touchette2009}) to obtain \begin{equation} \Omega_N(u)\approx \mathrm{e}^{N\inf_\beta\{\beta u-\varphi(\beta)\}}=\mathrm{e}^{N\varphi^*(u)} \label{eqapp1} \end{equation} and so $s(u)=\varphi^*(u)$. This result is correct, as we know, if $s(u)$ is concave, but not if $s(u)$ is nonconcave; see Sec.~\ref{seccan} and especially Eq.~(\ref{eqlf1}). A better approximation for $Z_N(\beta)$ is suggested by Eqs.~(\ref{eqz3}) and (\ref{eqfe2}): \begin{equation} Z_N(\beta)\approx \mathrm{e}^{-N\varphi_1(\beta)}+\mathrm{e}^{-N\varphi_2(\beta)}. \end{equation} Using this expression in Eq.~(\ref{eqilt1}), we obtain \begin{equation} \Omega_N(u)\approx \mathrm{e}^{N\varphi_1^*(u)}+\mathrm{e}^{N\varphi_2^*(u)} \label{eqapp2} \end{equation} instead of Eq.~(\ref{eqapp1}), so that \begin{equation} s(u)=\sup \{\varphi^*_1(u),\varphi_2^*(u)\}. \label{eqsm2} \end{equation} We know from Eq.~(\ref{eqsm1}) that this last formula recovers the correct entropy. Therefore, in the case of the block-spin model, the approximation shown in Eq.~(\ref{eqapp2}) is sufficient to obtain $s(u)$. 
\end{example} The previous example can be generalized to any model whose partition function can be put in the form \begin{equation} Z_N(\beta)=\sum_j c_{N,j}(\beta)\, \mathrm{e}^{-N\varphi_j(\beta)}, \end{equation} where $\varphi_j(\beta)$ are concave and smooth functions of $\beta$ that do not depend on $N$, and $c_{N,j}(\beta)$ are functions of $\beta$ that are sub-exponential in $N$. In \cite{touchette2010} it is shown that if these assumptions are satisfied and the coefficients $c_{N,j}(\beta)$ have no poles in the $\beta$-complex plane, then $s(u)$ is given by a direct generalization of Eq.~(\ref{eqsm2}) involving the ``metastable'' free energies $\varphi_j(\beta)$. However, if any of these coefficients have poles in $\beta$, then $s(u)$ is given by a more complicated formula involving the $\varphi_j(\beta)$'s as well as the poles of $c_{N,j}(\beta)$. The surprising effect of these poles is that they determine the presence of linear branches in the graph of $s(u)$, which arise in short-range systems having first-order phase transitions. For more details on these results, the reader is referred again to \cite{touchette2010}. \section{Comments and open problems} \label{secconc} The examples given in the previous sections are very simple, but provide nevertheless a useful guide as to how the different methods that we have covered in this paper can be applied in practice to obtain the nonconcave entropy of more realistic models. They provide, in particular, a good illustration of the properties that one should look for when selecting the right method to use. On the one hand, if the Hamiltonian considered has any symmetries that can be used to partition the microstate space $\Lambda_N$ into easily definable regions having different energies, then it may be possible to obtain $s(u)$ using the Betrag ensemble or the restricted canonical ensemble.
On the other hand, if the Hamiltonian admits a macrostate representation, discussed in Sec.~\ref{secmicro}, then the microcanonical contraction formula or its canonical version involving $G_\beta(m)$ will provide a more direct way to obtain $s(u)$, although all the other methods can also be used in this case, since they all admit a macrostate representation. If none of these cases apply, then the application of any of the methods discussed here is likely to lead to difficult or even intractable calculations. This should hardly come as a surprise: after all, the calculation of the standard partition function $Z_N(\beta)$ is known to be a difficult problem in general, and so must be the calculation of any generalizations of $Z_N(\beta)$. In fact, if one cannot analytically calculate $Z_N(\beta)$ for a given model, then it is very unlikely that one will be able to analytically calculate any of the generalized partition functions described before. In this case, one may have to resort to approximation methods or numerical methods, such as Monte Carlo methods based on generalized ensembles (see, e.g., \cite{hetherington1987a,hetherington1987,stump1987,challa1988a,challa1988,neuhaus2006}). To conclude this paper, we present next a short list of open problems related to the generalized canonical ensembles discussed in Sec.~\ref{secgen}. The first problem is relevant for practical calculations in the Gaussian ensemble, whereas the second is concerned with the numerical implementation of generalized ensembles. The last two problems point to some interesting connections with convex analysis. \begin{itemize} \item \emph{Gaussian integral for the Gaussian ensemble}: Study the integral of Eq.~(\ref{eqgauss1}), which expresses the Gaussian partition function in terms of a complex transform of the standard partition function, in order to see if this integral can be approximated in any useful way.
We have already commented on the fact that this integral cannot be solved for the block-spin model, and is unlikely to be computable in general. The reason for this is that the integrand $\mathrm{e}^{-\gamma N t^2/2}\, Z_N(\beta+\mathrm{i}\gamma t)$ is highly oscillatory when $\gamma>0$, which prevents one from performing any form of saddle-point approximation. Other types of approximation may be possible, however. \item \emph{Generalized canonical ensembles and multicanonical simulation methods}: There is a strong suggestion that the generalized canonical ensembles discussed in Sec.~\ref{secgen} are related to a set of numerical methods known collectively as \emph{multicanonical} methods or \emph{umbrella sampling} methods (see, e.g., \cite{torrie1974,berg1991,berg1992,lee1993,janke1998b,berg2003,lundow2009}). The exact connection, however, has yet to be made explicit. \item \emph{Physical interpretation of generalized ensembles}: We mentioned in Sec.~\ref{secgen} that the Gaussian ensemble can be interpreted physically as a statistical-mechanical ensemble describing a sample system coupled to a finite-size heat bath (as opposed to the canonical ensemble which describes a sample system coupled to an \emph{infinite}-size heat bath). Is there a similar physical interpretation for the Betrag ensemble? Are there other generalized ensembles for which a physical interpretation or ``realization'' can be found or constructed? \item \emph{Supporting functions for generalized canonical ensembles}: We have seen that the concept of supporting lines, which provides a geometrical interpretation of the Legendre transform, is generalized in the Gaussian ensemble to the concept of supporting parabolas. It is not known whether other generalized ensembles admit a similar notion of supporting function. One may ask, for example, whether the supporting function of the Betrag ensemble is the ``absolute value'' function.
More generally, are there other types of supporting functions for other choices of $g(u)$? \item \emph{Moreau transforms}: The quadratic Legendre transform of the Gaussian ensemble, defined in Eq.~(\ref{eqglf1}), appears to be related to a functional transform known in convex analysis as the \emph{Moreau transform} \cite{rockafellar1988}. Are there any known properties of the latter transform that could be used to simplify calculations in the Gaussian ensemble? Moreover, are there generalizations of the Legendre or the Moreau transform that could be used to define other types of generalized canonical ensembles? \end{itemize} \section*{Acknowledgments} I would like to thank Oscar Bandtlow for bits of German offered during tea time, Rosemary J. Harris for reading the manuscript, and an anonymous referee for bringing Ref.~\cite{silhavy1997} to my attention. The hospitality of The Rockefeller University, where most of this paper was written, is also gratefully acknowledged. This work is supported by an RCUK Interdisciplinary Academic Fellowship. \section*{References}
\section{Data} \label{sec:data} In this section, we first explain the ad platform setup in Section~\ref{sec:platform_setup}; we focus specifically on the Yahoo Gemini ad platform, but the underlying hierarchical structure is fairly standard in the advertising industry. This is followed by our method for leveraging the ad platform setup to form ordered pairs of creatives (Section~\ref{sec:pairs_generation}); the ordered pairs of creatives have a crucial role in our proposed methods to solve the tasks outlined in Section~\ref{sec:problem_formulation}. In Section~\ref{sec:kp_extraction}, we cover additional steps to automatically annotate the ad creative pairs with matched keyphrases and identified image tags. Finally, in Section~\ref{sec:insights}, we describe data insights which motivate our approaches. \begin{figure}[!htb] \centering \includegraphics[width=0.7\columnwidth]{cmp_setup.pdf} \caption{Ad campaign setup with multiple ad-groups and ad-ids. Differences in CTRs across ad-ids in the same ad-group can be attributed to differences in ad text and image. Ad-ids $1$ and $2$ form a different-image-same-text (D-I-S-T) pair, while $1$ and $3$ form a different-text-same-image (D-T-S-I) pair.} \label{fig:cmp_setup} \end{figure} \subsection{Ad platform setup} \label{sec:platform_setup} As shown in Figure~\ref{fig:cmp_setup}, an advertiser in the Yahoo Gemini ad platform can create multiple campaigns, and each campaign can have multiple ad-groups. Each ad-group is tied to a pre-specified target audience. For example, if the advertiser is a major telecommunications company, different campaigns may represent different offerings from the company (\emph{e.g.}, mobile phone plans and WiFi routers), whereas examples of ad-group targeting can be \textit{seniors in New York City} and \textit{males in San Francisco}. As shown in Figure~\ref{fig:cmp_setup}, there can be multiple ad-ids in an ad-group; each ad-id has an ad text and image associated with it.
For each qualifying user in the ad-group, one of the ad-ids is shown at random; in other words, if there is a CTR performance difference across the ad-ids, it can be purely attributed to the differences in ad image and text across the ad-ids in the ad-group. For the example shown, the difference in CTRs of ad-ids $1$ and $2$ can be attributed to the difference in the ad image, while for ad-ids $1$ and $3$, the difference can be attributed to the difference in ad text. However, in the case of ad-ids $2$ and $3$, the CTR difference is a result of differences in both the image and text. \subsection{Constructing ad creative pairs}\label{sec:pairs_generation} We use data from ad-groups across multiple advertisers to form two datasets: (i) the different-text-same-image (D-T-S-I) dataset, and (ii) the different-image-same-text (D-I-S-T) dataset, as described below. \subsubsection{D-T-S-I dataset} \label{sec:DTSI} To create this dataset, from each ad-group, we create pairs of ad-ids (creatives) such that in each pair the ad text is different but the ad image is the same. Furthermore, in each such pair, we order the ad-ids as (source, target) where the source CTR is lower than the target CTR. For example, in Figure~\ref{fig:cmp_setup}, (ad-id $1$, ad-id $3$) form such a (source, target) pair in the D-T-S-I dataset. We collect such pairs using ad-groups across multiple advertisers. In case multiple pairs have the same source ad text (but different target ad text), we only retain the pair with the highest CTR difference, and discard the other (duplicate-source) pairs. Finally, we keep the pairs where the relative CTR difference is higher than $\Delta\%$ (design choice). The intuition behind creating such pairs is to provide training examples to an ad text refinement model, \emph{e.g.}, for generating the target ad text given the source ad text (explained in Section~\ref{sec:generation}).
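The D-T-S-I construction above amounts to a simple per-ad-group filtering procedure; the following Python sketch illustrates it (the field names `text`, `image`, `ctr` and the single-ad-group input are illustrative assumptions, and the relative CTR difference is measured against the source CTR).

```python
def build_dtsi_pairs(ad_group, delta=0.10):
    """Form (source, target) D-T-S-I pairs from one ad-group.

    ad_group: list of dicts with illustrative keys 'text', 'image', 'ctr'.
    Keeps, per source text, the pair with the largest relative CTR lift,
    and drops pairs whose lift does not exceed delta.
    """
    best = {}  # source text -> (source text, target text, lift)
    for src in ad_group:
        for tgt in ad_group:
            if (src["image"] == tgt["image"] and src["text"] != tgt["text"]
                    and src["ctr"] < tgt["ctr"]):
                lift = (tgt["ctr"] - src["ctr"]) / src["ctr"]
                if lift > delta and (src["text"] not in best
                                     or lift > best[src["text"]][2]):
                    best[src["text"]] = (src["text"], tgt["text"], lift)
    return [(s, t) for s, t, _ in best.values()]

# Mirroring the ad-group setup described above: two ad-ids share an
# image but differ in text, so they yield one ordered D-T-S-I pair.
ads = [{"text": "T1", "image": "I1", "ctr": 0.010},
       {"text": "T3", "image": "I1", "ctr": 0.020},
       {"text": "T1", "image": "I2", "ctr": 0.015}]
print(build_dtsi_pairs(ads))  # [('T1', 'T3')]
```

The same procedure, with the roles of `text` and `image` swapped, yields the D-I-S-T pairs.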
\subsubsection{D-I-S-T dataset} \label{sec:DIST} To create this dataset, from each ad-group, we create pairs of ad-ids such that the ad image is different but the ad text is the same. As in the D-T-S-I dataset, we order the ad-ids in the pair as (source, target) where the source CTR is lower than the target CTR; in Figure~\ref{fig:cmp_setup}, (ad-id $1$, ad-id $2$) is an example of such a pair. We collect such pairs across ad-groups of multiple advertisers. If there are pairs with the same source image, we retain the pair with the highest CTR difference and discard the other duplicates. Finally, we filter out pairs with relative CTR difference below $\Delta\%$ (design choice). The intuition behind creating such pairs is to provide training examples for refined ad images given source ad text and image. \subsection{Keyphrases and image tags annotation}\label{sec:kp_extraction} For each pair in the D-T-S-I and D-I-S-T datasets, we add metadata in the form of matched keyphrases and image tags (explained below). \paragraph*{Keyphrases} We first form a vocabulary of keyphrases using an unsupervised keyphrase extraction method\footnote{We used the multipartite-rank \cite{multipartite_rank} method implemented in the PKE keyphrase extraction package \cite{pke}. The choice of this method (versus others in PKE, \emph{e.g.}, TF-IDF, and Position-rank \cite{position-rank}) was guided by visual inspection of results on representative advertisers.} on the collective ad text corpus (including both source and target ad text from all pairs). For example, from retail advertisers, typical examples of extracted keyphrases include phrases like \textit{free shipping} and \textit{limited time offers}, while from telecommunication advertisers, examples include \textit{high speed internet} and \textit{bundle deals}. Using the obtained vocabulary of keyphrases, for each pair in the D-T-S-I and D-I-S-T datasets, we add a list of exact matches found in the source and target ad text.
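Once the keyphrase vocabulary is fixed, the annotation step reduces to exact matching. A minimal Python sketch is given below; the lowercasing and punctuation stripping are simplifying assumptions, not the production matching rule.

```python
import re

def match_keyphrases(ad_text, vocab):
    """Return the vocabulary keyphrases appearing verbatim in the ad text."""
    # Lowercase and keep word characters only, so punctuation does not
    # block a match (an assumption about the normalization used).
    tokens = " ".join(re.findall(r"\w+", ad_text.lower()))
    # Pad with spaces so multi-word phrases match only on word boundaries.
    return [kp for kp in vocab if f" {kp.lower()} " in f" {tokens} "]

vocab = ["free shipping", "limited time offer", "bundle deals"]
print(match_keyphrases("Free shipping on all orders, limited time!", vocab))
# ['free shipping']
```

Note that \textit{limited time offer} is not matched above, since only the shorter phrase \textit{limited time} occurs in the text.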
\paragraph*{Image tags} Image tags are the objects detected in an image via the (pre-trained) Inception Resnet v2 object detection model as in the Open Images V2 repository \cite{openimages}. We extract these image tags from the source and target ad images in the D-T-S-I and D-I-S-T datasets. Inception Resnet v2 \cite{openimages} is a convolutional neural network trained by Google on Flickr images in the Open Images V2 dataset. It has about $5000$ classes (possible tags in an image). Each image can have multiple tags, and the model returns a list of inferred tags with confidence scores. We retain all tags with a score above $0.8$. For example, the ad image in ad-id $2$ in Figure~\ref{fig:cmp_setup} has tags \textit{woman, child, face}, whereas the image in ad-id $1$ has the tag \textit{multimedia}. \subsection{Insights from D-I-S-T and D-T-S-I datasets}\label{sec:insights} Based on $5$ months (July--November 2019) of data from the Yahoo Gemini platform, we gathered several insights from the D-T-S-I and D-I-S-T datasets spanning a sample of over $3500$ advertisers. The minimum CTR difference ($\Delta$) in each source-target pair was kept at $10\%$. We highlight key insights below which guided our proposed approaches (additional statistics are covered later in Section~\ref{sec:results}). \paragraph*{High word overlap between source and target text} In the D-T-S-I dataset, the average number of words in both target and source ad text is close to $13$ (sequence length), but there is a $60\%$ overlap between words in source and target. This indicates: (i) the target retains a lot of words from the source (plausibly to preserve context), and (ii) there are word replacements in the source to keep the sequence length roughly the same. Hence, a \textit{copy mechanism} which can selectively copy parts of the source text while introducing new words in the target looks intuitive for the ad text generation task (details in Section~\ref{sec:generation}).
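The overlap statistic quoted above can be computed as follows; defining overlap as the fraction of source words that reappear in the target is our assumption about the exact definition, and the example pair is purely illustrative.

```python
def word_overlap(source, target):
    """Fraction of source words that also appear in the target ad text."""
    src = set(source.lower().split())
    tgt = set(target.lower().split())
    return len(src & tgt) / len(src) if src else 0.0

# Illustrative pair: 3 of the 5 distinct source words survive in the target.
print(word_overlap("great deals on summer boots",
                   "new limited time deals on boots"))  # 0.6
```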
\paragraph*{Discriminative power of keyphrases and image tags} An advertiser category-wise case study using the D-T-S-I dataset revealed that the presence of certain keyphrases in the target ad text (and their absence in the source) consistently led to higher CTR relative to the source. For example, in the case of retail category advertisers, such keyphrases included \textit{free shipping} and \textit{limited time offer}. In a parallel study using the D-I-S-T dataset, we observed analogous results with image tags. For example, for telecommunication advertisers, we found that target images with human elements (\emph{i.e.}, having tags \textit{woman}, \textit{man}, \textit{child}) had higher CTR than source images with just the \textit{multimedia} tag. The above insights motivate the use of a ranking approach for recommending keyphrases and image tags for refining an input ad creative (details in Section~\ref{sec:ranking}).
\section{Discussion} \label{sec:discussion} Our results show the efficacy of using A/B test data across advertisers for both generation and ranking formulations of ad creative refinement. Account teams testing the proposed models requested additional evidence in the form of CTR of similar ads (\emph{i.e.}, with recommended keyphrases and image tags) to convince advertisers to approve tests for refined creatives. Studying the extent of adoption by advertisers and using this feedback to control creative generation is a promising direction for future research.
\section{Introduction} \label{sec:introduction} The image and text used for an online ad (collectively called an ad creative) can be influential in targeting online users on a large scale. \begin{figure}[!htb] \centering \includegraphics[width=\columnwidth]{pull_figure_kdd.pdf} \caption{Ad creative refiner based on parallel A/B tests done by multiple advertisers. Advertiser $1$ may learn in isolation that having human elements leads to better CTR than multimedia images, while advertiser $N$ may learn that using ``limited time'' in ad text works better than ``great''. The proposed refiner collects data across A/B tests to recommend ad text and image refinements for a given input ad creative.} \label{fig:pull_figure} \end{figure} Large businesses (advertisers) typically employ creative strategists to design ad creatives; these creative strategists may conduct market research to see trending themes and also gather insights from past ad campaigns in related product categories. Such advertiser-specific creative customization is mostly a manual, expensive, and time-consuming process.
In contrast, small businesses typically resort to free online tools, \emph{e.g.}, stock image libraries \cite{shutterstock}, and generic creative insights \cite{taboola_trends} to compile ad images and text; such tools can reduce the time to design creatives but tend to be generic (\emph{e.g.}, lacking in business-specific customization). Once the ad creatives are ready, both large and small advertisers need to conduct online A/B tests to validate the effectiveness of their creatives, and subsequently discard low-performing creatives from their ad campaigns. In addition, to reduce the chances of online users getting tired of seeing the same ad repeatedly on a particular website (\emph{i.e.}, ad fatigue \cite{ad_fatigue_schmidt}), advertisers need to frequently go through the design$\rightarrow$A/B test$\rightarrow$refresh ad creatives cycle. Again, such cycles tend to be time-consuming, and there is an emerging need for data-driven approaches to speed up the whole process of designing and refreshing creatives. \par In this paper, we highlight a key observation that accelerates the above creative design process, and can be explained as follows. Advertisers typically test their creatives via A/B tests in ad platforms (\emph{e.g.}, Yahoo Gemini, Facebook Ads), \emph{i.e.}, they try out a set of creatives on online users in a controlled setup such that the click-through-rate (CTR) performance \cite{mappi_CIKM} difference across the creatives can be solely attributed to the ad text and image. However, advertisers conduct and learn from such A/B tests in isolation, as illustrated in Figure~\ref{fig:pull_figure}. As shown, advertiser $1$, who is an internet service provider, may learn via an A/B test that having human elements in the ad image works better than having gadgets in the image (since the ad text is the same across the two creatives in the example, the performance difference can be attributed to the ad images).
Via a separate A/B test, a different advertiser $N$ (selling boots) may learn that using \textit{new} and \textit{limited time} in the ad text works better than using \textit{great}. Our key observation in the illustrated example is that although the advertisers are learning in isolation, the ad platform can learn across advertisers. In fact, most ad platforms are authorized to use performance data across advertisers in an \textit{aggregate manner} to help advertisers perform better; however, using A/B test data across advertisers in a collaborative manner to automate ad creative refinement is a largely unexplored topic. In this paper, we exploit the above observation and use multi-advertiser A/B test data to address several sub-problems in ad creative (text and image) refinement: \begin{enumerate} \item ad text generation: given an input ad creative, the task is to generate refined ad text, \item ad text keyphrase recommendation: given an input ad creative, the task is to recommend keyphrases for inclusion in the refined ad text, and \item ad image tag recommendation: given an input ad creative, the task is to recommend image tags (objects) to guide the selection of a refined ad image. \end{enumerate} Another novelty in our proposed approaches for the above tasks is that they do not depend on intermediate models such as CTR prediction, as required in previous work \cite{microsoft_ad_generation_kdd19,mappi_CIKM}, but rely on pairs of examples of the form (low CTR creative, high CTR creative), where the CTR is based on the same population of users (\emph{i.e.}, targeting is fixed). Both creatives in a pair are sourced from the same advertiser, and at a high level, the task of refining can be seen as \textit{translating} the low CTR creative (source) to the high CTR creative (target). As we discuss in this paper, such pairs can be naturally collected from A/B tests conducted by multiple advertisers in an ad platform. Our main contributions are as follows.
\begin{itemize} \item We solve three tasks around ad creative refinement: (i) ad text generation, (ii) keyphrase recommendation, and (iii) image tag recommendation. \item For ad text generation, we demonstrate that using a copy mechanism to selectively copy parts of the input ad text while introducing new words in the refined (generated) text is significantly better than baselines. \item For keyphrase and image tag recommendation, we demonstrate the efficacy of a deep relevance matching model for ranking keyphrases and image tags. We also show the relative robustness of keyphrase ranking (compared to text generation) in a cold-start scenario with unseen advertisers. We observed an $87\%$ CTR increase via such recommendations for a major advertiser on Yahoo Gemini. \end{itemize} The remainder of the paper is organized as follows. Section~\ref{sec:related} covers related work, and Section~\ref{sec:problem_formulation} covers problem formulation. Section~\ref{sec:data} explains data sources, and the creation of pairs of creatives for training ad refinement models. Section~\ref{sec:method} covers proposed methods, Section~\ref{sec:results} covers experimental results, and Section~\ref{sec:discussion} concludes with a discussion. \section{Generation and ranking models} \label{sec:method} We now describe the proposed solutions for tasks 1--3 (Section~\ref{sec:problem_formulation}). The text generation approach for task $1$ (Section~\ref{sec:task1}) is explained in Section~\ref{sec:generation}. For tasks $2$ (Section~\ref{sec:task2}) and $3$ (Section~\ref{sec:task3}), the proposed keyphrase/image tag ranking model is explained in Section~\ref{sec:ranking}. \subsection{Ad text generation model for task $1$} \label{sec:generation} Task $1$ can be formulated as a sequence-to-sequence (seq2seq) prediction task, where given an input ad text (source sequence of tokens), the predicted output should be a refined version of the input ad text (target sequence) with a higher expected CTR.
The construction of the D-T-S-I dataset (in Section~\ref{sec:DTSI}) is naturally suited for training such a seq2seq model, since in each pair the target ad text has higher CTR than the source ad text (the same ad-group and same-image constraints in each pair eliminate all other confounding factors affecting CTR). Given the D-T-S-I dataset, to solve task $1$, we propose using an encoder-decoder architecture with a mechanism to selectively \textit{copy} words from the source text; the intuition behind the proposal and the underlying architecture details are explained below. \paragraph*{Intuition} We borrow ideas from state-of-the-art models in abstractive summarization \cite{see_pointer_generator} and use them to solve task $1$ as follows. We use an encoder-decoder architecture with attention \cite{bahdanau2014neural}, along with a copy mechanism \cite{see_pointer_generator} as shown in Figure~\ref{fig:copy_mech}. \begin{figure}[!htb] \centering \includegraphics[width=0.8\columnwidth]{copy_atn_model_kapil_www.pdf} \caption{Encoder-decoder with attention and copy mechanism for generating refined (target) ad text given source ad.} \label{fig:copy_mech} \end{figure} In our setup, the motivation for using the copy mechanism is driven by the observation that there is a $60\%$ overlap between source and target words in the D-T-S-I dataset (as mentioned in Section~\ref{sec:insights}). It is plausible that copying some words from the source is good enough to preserve the underlying context, while adding new words in the target can boost the CTR. We describe the underlying model details below. \paragraph*{Model details:} We use a bidirectional LSTM encoder for the source sequence and an LSTM decoder for the target sequence \cite{bahdanau2014neural}.
Following \cite{luong2015attention}, the attention distribution is computed as: \begin{align} e^t_i &= h_i^\top W_\text{att} s_t, \quad a^t = \text{softmax}\left( e^t \right), \end{align} where $h_i$ is the encoder hidden state for the $i$-th input token, $s_t$ is the decoder state at step $t$, $a^t$ is the attention distribution, and $W_\text{att}$ represents the learnable parameters. The attention-weighted sum of all encoder hidden states is used to compute the context vector as: \begin{align} c_t = \sum_i a^t_i h_i. \end{align} The generation probability $p_\text{gen}$ for step $t$ is computed using the context vector ($c_t$), decoder state ($s_t$) and decoder input ($x_t$) as: \begin{align} p_\text{gen} = \sigma\left( w^\top_c c_t + w_s^\top s_t + w_x^\top x_t + b_\text{ptr}\right), \end{align} where $w_c$, $w_s$, $w_x$ are vectors and $b_\text{ptr}$ is a scalar, all of which are learnable; $\sigma(\cdot)$ denotes the sigmoid function. Here, $p_\text{gen}$ is used to \textit{softly} choose between generating a word from the entire vocabulary versus copying a word (token) from the input sequence (via sampling from the attention distribution $a^t$). The vocabulary distribution for generating a new word can be computed as: \begin{align} \mathbb{P}_\text{vocab} = \text{softmax}\left( V' ( V \left [ s_t; c_t \right ] + b ) + b' \right), \end{align} where $V$, $V'$, $b$, and $b'$ are learnable parameters. With $p_\text{gen}$, the effective distribution over the vocabulary can be written as: \begin{align} \mathbb{P}(y) = p_\text{gen} \mathbb{P}_\text{vocab}(y) + (1-p_\text{gen}) \sum_{i:y_i=y} a^t_i, \end{align} where $y$ is a word in the vocabulary. For training, the loss at step $t$ ($\mathcal{L}_t$) is the negative log-likelihood associated with target word $y^\ast_t$, and that of the whole sequence is simply the average: \begin{align} \mathcal{L}_{t} = -\log \left( \mathbb{P}(y^\ast_t)\right), \quad \mathcal{L} = \frac{1}{T} \sum_{t=0}^T \mathcal{L}_{t}.
\end{align} Our implementation of the above model leveraged OpenNMT-Py \cite{opennmt} with: train steps = 200k, optimizer = SGD, and batch size = 128. \subsection{Ranking model for tasks $2$ and $3$} \label{sec:ranking} We consider solving the keyphrase (and image tag) recommendation problem via a ranking model, where the model outputs a list of keyphrases (and image tags) in decreasing order of relevance for a given ad creative. We describe below the model for the keyphrase ranking task; the image tag ranking model is analogous, and we skip its description for brevity. We use the state-of-the-art pairwise deep relevance matching model (DRMM)~\cite{drmm,drmm_topk}, whose architecture for our recommendation setup is shown in Figure~\ref{fig:drmm}. It is worth noting that our pairwise ranking formulation can be changed to accommodate other multi-objective or list-based loss functions. We chose the DRMM model since it is not restricted by the length of the input, as most ranking models are, but relies on capturing local interactions between query and document terms. Given a \emph{(source ad text, target keyphrase)} combination, the model first computes the top-$k$ interactions between the source ad text words and the keyphrase terms. These interactions are passed through a multi-layer perceptron (MLP), and the overall score is aggregated with a query term gate, which is a softmax function over all terms in that query. \begin{comment} For gating, we use the softmax function as follows: \begin{align} g_i = \frac{\exp(w_g {x_i}^{(q)} )}{ \sum_{j=1}^M \exp(w_g {x_j}^{(q)} ) }, \nonumber \end{align} where $w_g$ denotes the weight vector of the term gating network and ${x_i}^{(q)}$ denotes the i-th term in the Wikipedia page. \end{comment} \begin{figure}[!htb] \centering \includegraphics[width=0.7\columnwidth]{drmm_topk.pdf} \caption{DRMM-(top $k$) for keyphrase/image tag ranking.} \label{fig:drmm} \end{figure} DRMM employs a pair-wise ranking loss function as described below.
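As a concrete illustration, the top-$k$ interaction, MLP scoring, and softmax term-gating steps described above, together with the standard pairwise hinge loss that DRMM employs, can be sketched as follows. This is a minimal numpy sketch with hypothetical dimensions and randomly initialized weights, not the production model.

```python
import numpy as np

def drmm_topk_score(query_emb, doc_emb, mlp_w1, mlp_w2, w_gate, k=3):
    """Score a (query, keyphrase) pair in the style of DRMM top-k.

    query_emb: (Q, d) embeddings of query (ad text) terms.
    doc_emb:   (D, d) embeddings of keyphrase terms (assumes D >= k).
    """
    # Cosine similarity between every query term and keyphrase term
    # (the "local interactions").
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    d = doc_emb / np.linalg.norm(doc_emb, axis=1, keepdims=True)
    sim = q @ d.T                          # (Q, D)
    # Keep only the top-k interactions per query term.
    topk = -np.sort(-sim, axis=1)[:, :k]   # (Q, k)
    # Per-term score via a small MLP.
    hidden = np.tanh(topk @ mlp_w1)        # (Q, h)
    term_scores = hidden @ mlp_w2          # (Q,)
    # Softmax term gating over the query terms.
    g = np.exp(w_gate @ query_emb.T)       # (Q,)
    gates = g / g.sum()
    return float(gates @ term_scores)

def pairwise_hinge(s_pos, s_neg):
    """Pairwise ranking (hinge) loss for scores of a relevant and a
    less relevant keyphrase."""
    return max(0.0, 1.0 - s_pos + s_neg)
```

In practice the weights are learned end-to-end from the pairwise loss; the sketch only shows the forward scoring path.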
We denote the source ad text by just $src$ in the following explanation. Given a triple ($src$, $p^+$, $p^-$), where keyphrase $p^+$ is ranked higher than keyphrase $p^-$ with respect to $src$, the loss function is: \begin{align} \mathcal{L} (src,p^+,p^-;\theta)=\max\left(0, 1-s(src,p^+) + s(src,p^-)\right), \end{align} where $s(src, p)$ denotes the predicted matching score for keyphrase $p$ and the source ad text. Metadata in the form of image tags and advertiser category can be introduced as additional query terms. In our implementation, we used the top-$k$ version of DRMM \cite{drmm_topk} in Match-Zoo \cite{matchzoo} with $k=20$ and the ADAM optimizer. \section{Problem Formulation} \label{sec:problem_formulation} We study three tasks around creative refinement as described below. \subsection{Task 1: ad text generation} \label{sec:task1} In this task, the goal is to generate refined ad text (output) given an input ad (text and image). For example, considering the illustration in Figure~\ref{fig:pull_figure} for advertiser $\#N$, if the input ad text is `\textit{great offers on cowboy boots!}', a possible generated output could be `\textit{limited time offer on new cowboy boots!}'. We assume that the input ad image is retained for use with the output ad text. Additional metadata in the form of ad image (tags) and associated advertiser category is also assumed to be available. The output ad text is expected to have at least $\Delta \%$ better CTR performance compared to the input ad text (where $\Delta$ is a design choice), and the output ad text is assumed to be targeted to the same population of users as the input ad. \subsection{Task 2: ad text keyphrase ranking}\label{sec:task2} This is a simpler variant of task $1$, where instead of generating the entire ad text, the task is to recommend keyphrases in the refined ad text. We formulate this as a ranking problem, where one needs to rank keyphrases from a given vocabulary for inclusion in the refined ad text.
For example, in Figure~\ref{fig:pull_figure} for advertiser $\#N$, if the input ad text is `\textit{great offers on cowboy boots!}', a recommended list of keyphrases could have `\textit{limited time}' and `\textit{new}' as the top ranked keyphrases. The motivation here is to study cases when target text generation is hard to achieve, but useful keyphrase recommendations can still be provided. The objective is to recommend keyphrases that would increase the CTR if included in the ad text while keeping all other aspects of the ad (such as the ad image) constant. \subsection{Task 3: ad image tag ranking} \label{sec:task3} In this task, given an input ad image and text, the goal is to recommend image tags (output) to refine the ad image. Image tags essentially correspond to objects in the image, and are sourced from a given vocabulary of tags (explained later in Section~\ref{sec:kp_extraction}). This task is the visual parallel of task $2$, where instead of recommending textual keyphrases, we recommend tags for refining the ad image. The image tags can be used to select an ad image from a pool of images (\emph{e.g.}, via a stock image library \cite{shutterstock}); however, selecting or generating the final ad image is beyond the scope of this paper, and our study is limited to recommending image tags for the refined ad image. For example, in Figure~\ref{fig:pull_figure} for advertiser $\#1$, if the input ad image is the one with multimedia devices, a recommended list of image tags could contain `\textit{human}' as a top ranked image tag. In addition, we assume that the input ad text is retained, and is available as metadata along with the associated advertiser category. The selection of an ad image based on the recommended tags is expected to increase the CTR of the refined creative.
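A recurring element across all three tasks is the requirement that the target outperform the source by at least $\Delta\%$ in CTR under fixed targeting. A minimal sketch of mining such (source, target) pairs from A/B test logs follows; the field names and data layout are hypothetical, and the actual pipeline is described in Section~\ref{sec:data}.

```python
def ctr(clicks, impressions):
    """Click-through-rate = clicks / impressions."""
    return clicks / impressions if impressions else 0.0

def make_pairs(ads, delta=0.10):
    """Emit (source_text, target_text) pairs from ads A/B-tested in
    the same ad group (fixed targeting), keeping pairs where the
    target's CTR beats the source's by at least `delta` (fractional
    lift). Each ad is a dict with hypothetical keys:
    'ad_group', 'text', 'clicks', 'impressions'.
    """
    by_group = {}
    for ad in ads:
        by_group.setdefault(ad['ad_group'], []).append(ad)
    pairs = []
    for group in by_group.values():
        for src in group:
            for tgt in group:
                c_src = ctr(src['clicks'], src['impressions'])
                c_tgt = ctr(tgt['clicks'], tgt['impressions'])
                if c_src > 0 and (c_tgt - c_src) / c_src >= delta:
                    pairs.append((src['text'], tgt['text']))
    return pairs
```

The actual dataset construction additionally holds the other modality fixed within a pair (same image for text pairs, and vice versa) and deduplicates sources, as described in Section~\ref{sec:data}.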
\section{Related Work} \label{sec:related} \subsection{Online advertising} Today, advertisers work with ad platforms \cite{mappi_CIKM,gemx_kdd,Google_FTRL} to launch campaigns that show ads to users on different websites. Advertisers design \emph{one or more creatives} with the help of creative strategists to target relevant online users, and measure the effectiveness of campaigns with metrics such as the click-through-rate (CTR $= \frac{clicks}{impressions}$) associated with the ad creative being tested. It is common for advertisers to do exploratory A/B tests with a large pool of creatives to efficiently learn which creative works best (popularly known as dynamic creative optimization in the industry) \cite{explore_exploit_li}. However, automatically understanding ad creatives (multi-modal in nature due to the presence of text and an image) and leveraging this understanding to create a pool of relevant creatives for A/B testing is emerging as an active area of research, as described below. \begin{comment} The publicly available \textit{creatives dataset} \cite{creative_dataset} has been a key enabler for many recent works on understanding ad creatives. This dataset was introduced in \cite{cvpr_kovashka}, where the authors focused on automatically understanding the content in ad images and videos from a computer vision perspective. The dataset has ad creatives with annotations including topic (category), questions and answers (\emph{e.g.}, reasoning behind the ad, expected user response due the ad). In a followup work~\cite{kovashka_eccv2018}, the focus was on understanding symbolism in ads (via object recognition and image captioning) to match human-generated statements describing actions suggested in the ad.
\end{comment} Understanding content in ad images and videos from a computer vision perspective was first studied in \cite{cvpr_kovashka}, where manual annotations were gathered from crowdsourced workers for: ad category, reasons to buy products advertised in the ad, and expected user response given the ad. Leveraging the dataset from this work, \cite{self_recsys2019} studied recommending keywords for guiding a brand's creative design. However, \cite{self_recsys2019} was limited to only text inputs for a brand (\emph{e.g.}, the brand's Wikipedia page), and the recommendation was limited to single words (keywords). In \cite{www20_joey}, the setup in \cite{self_recsys2019} was extended by including multi-modal information from past ad campaigns, \emph{e.g.}, images, text in the image (OCR), and Wikipedia pages of associated brands. In this paper, we focus on refining existing (input) ad creatives, \emph{i.e.}, the refinement is specific to the input ad creative as opposed to providing recommendations for an input advertiser as in \cite{self_recsys2019,www20_joey}. In addition, the usage of A/B test data across advertisers is another key difference with respect to prior work. Our approaches are limited to consuming only CTR data across advertisers (and not conversions), since in most cases, CTR data is \textit{owned} by the ad platform, which is typically authorized to use aggregated data across advertisers to make system-wide improvements (not biased towards a particular advertiser). \subsection{Relevance matching} One of our goals is to recommend a set of highly relevant keyphrases and image tags for improving an (input) ad creative. This can be modeled as a query-document relevance ranking problem \cite{drmm}, or as a collaborative filtering problem where \textit{user-item} latent representations are used for recommendations \cite{koren_MF,neural_collaborative_filtering}.
However, given the restriction on the number of keyphrases/image tags recommended, and their relevance to the advertiser under consideration, we focus on relevance ranking models (\emph{e.g.}, DRMM \cite{drmm} and variants \cite{drmm_topk}) in our keyphrase and image tag ranking setups. \begin{comment} \subsection{Visual-linguistic embeddings} With an increasing interest in joint vision-language tasks like visual question answering (VQA) \cite{vqa_iccv2015}, and image captioning \cite{}, there has been lot of recent work on visual-linguistic embeddings which are key enablers in the above mentioned tasks. Among the recent embedding \begin{itemize} \item LXMERT \cite{lxmert} \item VILBERT \cite{vilbert} \end{itemize} The work differs from image captioning because of the hidden meaning of ads which usually assume world knowledge on the viewer's part -- USE points from ADVISE paper. \end{comment} \subsection{Text-to-text generation} We formulate ad text generation as a sequence-to-sequence prediction task, which is common in natural language processing problems like machine translation and abstractive summarization. State-of-the-art performance in machine translation is typically obtained with an encoder-decoder neural architecture with attention \cite{luong2015attention}. In abstractive summarization, where both the source and target sequences are in the same language, an additional mechanism to copy input tokens to the output sequence has proven to be beneficial \cite{see_pointer_generator}. In the context of ad text generation, recent work \cite{microsoft_ad_generation_kdd19} explored the use of an encoder-decoder architecture to automatically generate ad text based on an advertiser's webpage. The main differences between our work and \cite{microsoft_ad_generation_kdd19} lie in: (i) studying ad refinement as opposed to generating an ad from scratch, and (ii) the use of A/B test data across advertisers to train refinement models.
\section{Results} \label{sec:results} In this section, we first cover notable statistics of the D-T-S-I and D-I-S-T datasets in Section~\ref{sec:stats}, followed by a description of evaluation metrics in Section~\ref{sec:metrics}. This is followed by results on ad text generation, keyphrase ranking, and image tag ranking. \begin{table}[th] \centering \begin{tabular}{|l||l|l||l|l|} \hline task &\multicolumn{2}{|l|}{ keyphrase ranking} &\multicolumn{2}{|l|}{ image-tag ranking} \\ \hline & source & target & source & target \\ \hline \multicolumn{5}{|l|}{ vanilla-split } \\ \hline \# tokens (1) & 12.24$\pm3.6$ & 12.25$\pm3.6$ & 13.44$\pm3.8$ & 13.44$\pm3.8$ \\ \# tokens (2) & 12.37$\pm3.6$ & 12.32$\pm3.6$ & 13.44$\pm3.7$ & 13.44$\pm3.7$ \\ \# kp/img (1) & & 12.02$\pm4$ & & 7.14$\pm3.8$ \\ \# kp/img (2) & &12.13$\pm4$ & & 7.06 $\pm3.8$ \\ \hline \end{tabular} \caption{Mean ($\pm$std) of attributes of train (1) and test (2) sets: number of words in ad text (\# tokens), number of matched keyphrases (\# kp), and number of image tags (\# img).} \label{table:dataset_stats} \end{table} \begin{table*}[h] \centering \begin{tabular}{|l|c|c|c|c|c|c|c|} \hline model & BLEU & ROUGE-1 F & ROUGE-2 F & ROUGE-L F & kp-P & kp-R & kp-F \\ \hline \hline \textbf{vanilla-split} & & & & & & & \\ \hline \hline baseline (pred=src) & 56.28 & 63.49 & 50.79 & 61.13 & 0.643 & 0.644 & 0.643 \\ ATTN & 50.74 & 57.62 & 47.26 & 56.01 & 0.552 & 0.548 & 0.55 \\ ATTN + CP & 59.38 & 65.61 & 55.13 & 63.79 & 0.661 & 0.648 & 0.655 \\ ATTN + CP + CAT & \textbf{59.45} & \textbf{65.74} & \textbf{55.35} & \textbf{63.91} & \textbf{0.661} & \textbf{0.649} & \textbf{0.655} \\ ATTN + CP + CAT + IMG & 58.37 & 65.63 & 55.18 & 63.82 & 0.663 & 0.646 & 0.654 \\ \hline \hline \textbf{cold-start split} & & & & & & & \\ \hline \hline baseline (pred=src) & 56.01 & 63.69 & 51.02 & 61.57 & 0.643 & 0.637 & 0.64 \\ ATTN &16 & 26.64 & 13.29 & 25.02 & 0.195 & 0.177 & 0.185 \\ ATTN + CP & 34.39 & 45.26 & 30.86 & 42.81 & 0.462 & 
0.422 & 0.441 \\ ATTN + CP + CAT & \textbf{35.91} & \textbf{47.52} & \textbf{32.64} & \textbf{44.69} & \textbf{0.494} & \textbf{0.434} & \textbf{0.462} \\ ATTN + CP + CAT + IMG & 33.42 & 44.33 & 29.53 & 41.76 & 0.422 & 0.37 & 0.394 \\ \hline \end{tabular} \caption{Ad text generation results: ATTN denotes the LSTM encoder-decoder with attention model, CP denotes copy mechanism, CAT denotes adding category, and IMG denotes adding source image tags.} \label{tab:generation_results} \end{table*} \subsection{Dataset statistics} \label{sec:stats} The D-T-S-I and D-I-S-T datasets were built using a sample of $5$ months of data from Yahoo Gemini (July--November 2019). The data consisted of over $3500$ advertisers ($> 8500$ campaigns in English for U.S. audiences, $\sim 100$ categories), and each ad-id considered in the dataset had over $10,000$ impressions. After the filtering process (\emph{i.e.}, keeping only source-target pairs with more than $\Delta=10\%$ CTR difference, and removing duplicate sources), the D-T-S-I dataset consisted of over $20,000$ pairs while the D-I-S-T dataset consisted of over $10,000$ pairs. Each dataset was randomly divided into train, test and validation sets in proportions of $77\%$, $20\%$, and $3\%$ respectively; we will refer to this as a \textit{vanilla} split. In addition to the vanilla split, we created a cold-start split where there was no overlap between advertisers in train, test and validation sets; this presents a much more difficult (versus vanilla split) learning problem with unseen advertisers. Additional dataset statistics are shown in Table~\ref{table:dataset_stats}. \subsection{Evaluation metrics} \label{sec:metrics} For ad text generation, we use standard metrics for text generation problems: (i) BLEU \cite{BLEU}, and (ii) ROUGE scores \cite{ROUGE}.
We introduce metrics to gauge the presence of matched (target) keyphrases in the generated sequence: (i) keyphrase-precision (kp-P), (ii) keyphrase-recall (kp-R), and (iii) keyphrase-F (kp-F). In other words, we compute precision and recall for target keyphrases, considering the list of tokens in the generated text. For both keyphrase and image tag ranking, we use: (i) precision at $k$ ($P@k$), (ii) recall at $k$ ($R@k$), and (iii) normalized discounted cumulative gain at $k$ ($NDCG@k$). \subsection{Ad text generation results} Table~\ref{tab:generation_results} covers generation results for the vanilla and cold-start cases (metrics on the test set). The baseline scores are for the case when the source ad text is considered as the predicted ad text (\emph{i.e.}, no change in input), and compared with the target ad text. In Table~\ref{tab:generation_results}, CAT and IMG denote the addition of category and image tags to the beginning of the input sequence (image tags in alphabetical order). The main observations are as follows. \paragraph*{Copy mechanism works} In both the vanilla and cold-start cases, there is a significant lift in the metrics due to the copy mechanism. In the case of the vanilla split, the copy mechanism is able to beat the baseline (predicted sequence = source sequence) metrics. However, in the cold-start case, it is below the baseline (but is better than the no-copy version). \paragraph*{Category helps} There is a consistent improvement in metrics on using category metadata in the input sequence. As expected, category information provides a relatively higher lift ($4.4\%$ above ATTN+CP in ROUGE-L F) for the cold-start split compared to the vanilla split ($0.2\%$ lift). In comparison, adding image tags to the input sequence (along with category) does not provide any lift (suggesting the need for better ways to incorporate image information).
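The keyphrase-based metrics introduced above admit a simple implementation. The sketch below treats the vocabulary keyphrases found in the generated text as predictions and scores them against the target keyphrases; the exact tokenization and keyphrase-matching rules used in our evaluation may differ.

```python
def kp_metrics(generated_text, target_kps, vocab):
    """kp-P/kp-R/kp-F: precision, recall, and F1 of target keyphrases
    against the keyphrases (from `vocab`) present in the generated
    ad text, using whole-word matching."""
    text = ' ' + ' '.join(generated_text.lower().split()) + ' '
    predicted = {kp.lower() for kp in vocab
                 if ' ' + kp.lower() + ' ' in text}
    target = {kp.lower() for kp in target_kps}
    tp = len(predicted & target)
    p = tp / len(predicted) if predicted else 0.0
    r = tp / len(target) if target else 0.0
    f = 2 * p * r / (p + r) if (p + r) else 0.0
    return p, r, f
```

For instance, a generated text containing `limited time' and `new' against targets \{`limited time', `new'\} yields perfect recall, while any extra vocabulary keyphrases it contains lower the precision.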
\paragraph*{Cold-start is challenging} We computed the histogram of ROUGE-L F scores on the test set using the best model (ATTN+CP+CAT) for the vanilla-split (Figure~\ref{fig:hist_vanilla}) and cold-start split (Figure~\ref{fig:hist_coldstart}) cases; for both splits, the baseline ROUGE-L F is around $61$. As shown, for the vanilla split, the distribution has a significant number of examples above the baseline, while the distribution's mass shifts significantly below the baseline for cold-start. The listed examples of generated text give a sense of how good the generated outputs are in terms of human judgement vis-a-vis ROUGE-L F scores. \begin{figure}[!htb] \centering \includegraphics[width=\columnwidth]{hist_final.pdf} \caption{Histogram of ROUGE-L F scores in the test set for the vanilla split (ATTN + CP + CAT model). Two anonymized examples are also shown with their ROUGE-L F scores.} \label{fig:hist_vanilla} \end{figure} \begin{figure}[!htb] \centering \includegraphics[width=\columnwidth]{hist_coldstart.pdf} \caption{Histogram of ROUGE-L F scores in the test set for the cold-start split (ATTN + CP + CAT model). Two anonymized examples are also shown with their ROUGE-L F scores.} \label{fig:hist_coldstart} \end{figure} The keyphrase based metrics for generation (kp-P,R,F) as reported in Table~\ref{tab:generation_results} are helpful in gauging the extent to which target keyphrases appear in the generated text. For example, in the lower scored generated text in Figure~\ref{fig:hist_vanilla}, \textit{clearance} is correctly introduced, but \textit{price cuts} is incorrectly introduced. Although \textit{price cuts} is incorrect given the target text (which has a guaranteed CTR lift), it remains to be seen if it leads to a lower CTR online (beyond the scope of this paper). \subsection{Keyphrase ranking results} \label{sec:keyphrase_ranking} Table~\ref{tab:ranking_results_text} shows the results for keyphrase ranking.
The baselines included the following methods: (i) EMB-SIM, where keyphrases are ranked by the cosine similarity between the average Glove \cite{glove} word embeddings of the keyphrase and of the source ad text, and (ii) TF-IDF, where the similarity between TF-IDF representations of the source ad text and keyphrases is computed, and keyphrases are ranked in descending order of similarity. \begin{table}[h] \centering \begin{tabular}{|l|c|c|c|c|c|c|} \hline model & P\@5 & P\@10 & R\@5 & R\@10 & ndcg\@5 & ndcg\@10 \\ \hline \hline \textbf{vanilla} & & & & & & \\ \hline \hline EMB-SIM & 0.17 & 0.10 & 0.07 & 0.09 & 0.19 & 0.14 \\ TF-IDF & 0.33 & 0.26 & 0.15 & 0.23 & 0.35 & 0.30 \\ DRMM & 0.50 & 0.39 & 0.25 & 0.38 & 0.53 & 0.47 \\ \hline + CAT & \textbf{0.51} & \textbf{0.40} & \textbf{0.25} & \textbf{0.39} & \textbf{0.53} & \textbf{0.48} \\ + CAT + IMG & 0.41 & 0.32 & 0.21 & 0.32 & 0.43 & 0.39 \\ \hline \hline \textbf{cold st.} & & & & & & \\ \hline \hline EMB-SIM & 0.12 & 0.07 & 0.05 & 0.06 & 0.14 & 0.11 \\ TF-IDF & 0.27 & 0.21 & 0.12 & 0.18 & 0.29 & 0.26 \\ DRMM & 0.38 & 0.29 & 0.22 & 0.32 & 0.41 & 0.37 \\ \hline + CAT & \textbf{0.42} & \textbf{0.32} & \textbf{0.24} & \textbf{0.36} & \textbf{0.45} & \textbf{0.40} \\ + CAT + IMG & 0.34 & 0.26 & 0.20 & 0.30 & 0.36 & 0.33 \\ \hline \end{tabular} \caption{Keyphrase ranking: baselines versus DRMM, and the effect of adding category and image tags as query terms.} \label{tab:ranking_results_text} \end{table} As shown in Table~\ref{tab:ranking_results_text}, for both splits, using DRMM with category features performs the best in terms of all metrics. \begin{table}[h] \centering \begin{tabular}{|l|c|c|c|c|c|c|} \hline split & metric & add-0 & add-1 & add-2 & add-3 & add-10 \\ \hline \hline cold-start & kp-P & 0.50 & 0.50 & 0.49 & 0.46 & 0.35 \\ cold-start & kp-R & 0.43 & 0.45 & 0.46 & 0.47 & 0.53 \\ \hline \end{tabular} \caption{Ranking-aided keyphrase metrics for generation.
Add-0 denotes no assistance, and add-10 denotes adding the top $10$ ranked keyphrases in the generation output.} \label{tab:assisted_kp} \end{table} The best cold-start performance is comparable to the best vanilla-split performance (\emph{e.g.}, a $7\%$ drop in $R@10$, compared to a $33\%$ drop in kp-R in Table~\ref{tab:generation_results} for generation). Hence, keyphrase ranking seems to be more \textit{robust} to unseen advertisers compared to ad text generation. As seen in text generation, naively adding image tags to the input along with category does not generalize well (it mildly hurts performance). We suspect that since image tags represent objects, they provide no additional context for the ranker to select better keyphrases. Most often, keyphrases provide more information about the brand, and image tags that represent objects may not add any complementary information about the brand directly that the ranking model can exploit. In future work, we will explore features that encapsulate information in the image directly \cite{www20_joey} rather than use image tags for keyphrase ranking. We also study the possibility of \textit{assisting} generation results with corresponding ranking results. Table~\ref{tab:assisted_kp} shows the boost in kp-R for the best generation results (ATTN+CP+CAT), when the corresponding (top-$r$) outputs of the DRMM + CAT model are added to the list of matched keyphrases in generated text.
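Concretely, the add-$r$ assistance amounts to taking the union of the keyphrases matched in the generated text with the top-$r$ ranked keyphrases before computing recall over the targets. A minimal sketch (function and argument names are ours):

```python
def assisted_recall(generated_kps, ranked_kps, target_kps, r):
    """Recall over target keyphrases when the top-r ranked keyphrases
    augment those matched in the generated ad text.

    generated_kps: keyphrases matched in the generated text.
    ranked_kps:    keyphrases in decreasing order of ranker score.
    """
    augmented = set(generated_kps) | set(ranked_kps[:r])
    target = set(target_kps)
    return len(augmented & target) / len(target)
```

For example, if generation matched only `clearance' but the ranker placed `limited time' first, add-1 recovers both targets.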
As shown for cold-start, just adding the top ranked keyphrase (add-1) improves the recall ($0.43\rightarrow 0.45$) without affecting the precision ($0.5$). This indicates that ranking results can complement generation results in a helpful manner (illustrative cold-start examples in Figure~\ref{fig:ranking_examples}). \begin{figure}[!htb] \centering \includegraphics[width=\columnwidth]{ranking_sample_outputs.png} \caption{Sample keyphrase ranking results vis-a-vis generated ad text (cold-start); the keyphrase recommendations can cover target keyphrases missed by poor generation.} \label{fig:ranking_examples} \end{figure} \subsection{Ad image tag ranking results} \label{sec:image_tag_ranking} Table~\ref{tab:ranking_results_image} shows the ranking results for image tags (baselines, and CAT + IMG feature additions in DRMM). \begin{table}[h] \centering \begin{tabular}{|l|c|c|c|c|c|c|} \hline model & P\@5 & P\@10 & R\@5 & R\@10 & ndcg\@5 & ndcg\@10 \\ \hline \hline \textbf{vanilla} & & & & & & \\ \hline \hline EMB-SIM & 0.16 & 0.15 & 0.12 & 0.22 & 0.18 & 0.23 \\ TF-IDF & 0.27 & 0.24 & 0.21 & 0.35 & 0.29 & 0.37 \\ DRMM & 0.49 & 0.34 & 0.35 & 0.49 & 0.53 & 0.50 \\ \hline + CAT & 0.50 & 0.35 & 0.36 & 0.49 & 0.54 & 0.51 \\ + CAT + IMG & \textbf{0.51} & \textbf{0.37} & 0.34 & \textbf{0.49} & \textbf{0.55} & \textbf{0.52} \\ \hline \hline \textbf{cold st.} & & & & & & \\ \hline \hline EMB-SIM & 0.16 & 0.15 & 0.11 & 0.20 & 0.18 & 0.22 \\ TF-IDF & 0.28 & 0.24 & 0.20 & 0.33 & 0.29 & 0.36 \\ DRMM & 0.41 & 0.31 & 0.29 & 0.43 & 0.44 & 0.44 \\ \hline + CAT & 0.43 & 0.33 & 0.31 & 0.45 & 0.46 & 0.46 \\ + CAT + IMG & \textbf{0.53} & \textbf{0.37} & \textbf{0.38} & \textbf{0.52} & \textbf{0.58} & \textbf{0.55} \\ \hline \end{tabular} \caption{Image tag ranking: baselines versus DRMM, and the effect of adding category and image tags as query terms.} \label{tab:ranking_results_image} \end{table} Using DRMM with category and image tags performs the best.
The efficacy of image tags (in the source) for predicting relevant tags (in the target) may be linked to their common modality. \begin{table}[h] \centering \begin{tabular}{|l|c|} \hline category & top $5$ ranked image tags \\ \hline apparel & clothing, face, hair, girl, pattern \\ job portals & face, clothing, multimedia, road, man, woman \\ auto & wheel, car, motorcycle, clothing, face \\ real estate & man, woman, mansion, bedroom, kitchen \\ \hline \end{tabular} \caption{Frequent top ranked image tags by category.} \label{tab:image_tag_insights} \end{table} We also report the frequent top ranked image tags for selected categories in Table~\ref{tab:image_tag_insights} using the DRMM + CAT + IMG model. \subsection{Online results} \label{sec:online_results} We deployed the ranking models for tasks $2$ and $3$ (\emph{i.e.}, the keyphrase and image tag rankers) as an internal service for Yahoo Gemini account teams that manage campaigns of major advertisers. To study end-to-end adoption, we partnered with the account team for an Internet service provider. Using their existing creative (text and image) as input, the top keyphrase and top image tag recommendations were considered. The advertiser approved an A/B test for the refined creative (incorporating both image and text refinements together) versus their existing creative. The A/B test was conducted for $2$ weeks via Yahoo Gemini, and the refined creative showed an $87\%$ improvement in CTR, validating the model recommendations. \begin{comment} We chose telecommunication industry to study the end-to-end adoption by partnering with the account team. With their existing creatives (text and image) as input, the model recommendations were mainly centered around (i) adding offer and discount related phrases to the ad creative, and (ii) using different objects in images to attract audience. The advertiser approved the A/B test with the refined creative (incorporating both image and text refinements) versus their existing creative.
-- SHAUNAK -- With their existing creatives (text and image) as input, the model recommendations were centered around (i) adding urgency offer and discount related phrases to the ad creative, and (ii) using different objects in images for higher user engagement. \end{comment}
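A quick way to sanity-check such an online lift is a pooled two-proportion z-test on the click counts of the two arms. The counts below are purely hypothetical (the actual impression and click volumes were not reported); the sketch only illustrates the computation:

```python
import math

def two_proportion_ztest(clicks_a, imps_a, clicks_b, imps_b):
    # pooled two-proportion z-test for a difference in CTR
    p_a, p_b = clicks_a / imps_a, clicks_b / imps_b
    pooled = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / imps_a + 1 / imps_b))
    return (p_b - p_a) / se

# hypothetical counts: 1.00% baseline CTR vs 1.87% refined CTR,
# 50k impressions per arm (illustrative numbers only)
z = two_proportion_ztest(500, 50_000, 935, 50_000)
```

At volumes of this order, an $87\%$ relative lift corresponds to a z-score far beyond conventional significance thresholds.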
\def\tag#1 {\eqno(#1)}
\newcommand\risS[6]{\raisebox{#1pt}[#5pt][#6pt]{\begin{picture}(#4,15)(0,0)
\put(0,0){\includegraphics[width=#4pt]{#2.eps}} #3 \end{picture}}}
\def\red#1{\begin{color}{red}#1\end{color}}
\def\blue#1{\begin{color}{blue}#1\end{color}}
\begin{document}
\maketitle
{\center\small Mathematics Subject Classifications: 05A16, 05C80, 57M15, 60C05, 60F05}
{\center\small Keywords: chord diagrams, random, genus, limit, distribution}
\newtheorem{Theorem}{Theorem}[section]
\newtheorem{Lemma}{Lemma}[section]
\newtheorem{Proposition}{Proposition}[section]
\newtheorem{Corollary}{Corollary}[section]
\numberwithin{equation}{section}
\begin{abstract}
Let $G_n$ be the genus of a two-dimensional surface obtained by gluing, uniformly at random, the sides of an $n$-gon. Recently Linial and Nowik proved, via an enumerational formula due to Harer and Zagier, that the expected value of $G_n$ is asymptotic to $(n - \ln n)/2$ for $n\to\infty$. We prove a local limit theorem for the distribution of $G_n$, which implies that $G_n$ is asymptotically Gaussian, with mean $(n-\ln n)/2$ and variance $(\ln n)/4$.
\end{abstract}
\section{Introduction and main result}
In topology, it is traditional to represent a surface by gluing the sides of a polygon \cite{Ma,ZT}. The information of which side is glued to which can be encoded by a chord diagram.
For example, the classical presentation of the torus by gluing the opposite sides of a square can be given by a chord diagram with two intersecting chords:
$$\risS{-20}{torus}{}{70}{25}{30}\qquad\qquad \risS{-30}{torus-sq}{}{70}{0}{0}\qquad\qquad
\risS{-15}{cd22}{\put(35,34){$b$}\put(35,0){$a$} \put(0,35){$a$}\put(0,-2){$b$}}{40}{0}{0}\ = ``abab"
$$
Combinatorially, a chord diagram with $n$ chords is the same as a word of length $2n$ in which each letter occurs precisely twice; think of the letters written at the ends of the chords, with the word read off the circle counterclockwise starting at the point $(0,1)$. Given a chord diagram $D$, the surface represented by it can be recovered in the following way. First, attaching a lower semi-sphere to the circle of $D$ and thickening the chords of $D$ to narrow bands, we obtain a surface with boundary:
$$D=\risS{-15}{cd34}{}{40}{0}{0}\quad \risS{-1}{totor}{}{30}{0}{0}\quad \risS{-27}{basket-3}{}{60}{40}{25}\ .
$$
Then attaching a disc, a {\it face}, to each boundary component of that surface, we get the desired closed surface. We consider only orientable surfaces here. By the {\em genus} of a chord diagram we understand the genus of this surface. In this paper we are interested in the distribution of the genus $G_n$ of a chord diagram chosen uniformly at random from among all $(2n-1)!!$ such diagrams with $n$ chords.
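The genus just defined is computable combinatorially: encoding a diagram as a fixed-point-free involution $\alpha$ of $\{0,\dots,2n-1\}$, the boundary components of the thickened diagram are the cycles of the permutation $i\mapsto\alpha(i+1 \bmod 2n)$, and the Euler relation gives $G=(n+1-F)/2$, where $F$ is the number of faces. A brute-force sketch in Python (our illustration, not part of the paper):

```python
from collections import Counter

def matchings(points):
    # all perfect matchings (chord diagrams) of a list of circle points
    if not points:
        yield []
        return
    first, rest = points[0], points[1:]
    for i in range(len(rest)):
        for m in matchings(rest[:i] + rest[i + 1:]):
            yield [(first, rest[i])] + m

def genus(pairs, n):
    alpha = {}
    for a, b in pairs:
        alpha[a], alpha[b] = b, a
    # faces = cycles of the permutation i -> alpha(i+1 mod 2n)
    seen, faces = set(), 0
    for start in range(2 * n):
        if start not in seen:
            faces += 1
            i = start
            while i not in seen:
                seen.add(i)
                i = alpha[(i + 1) % (2 * n)]
    # Euler characteristic: 1 - n + F = 2 - 2G
    return (n + 1 - faces) // 2

n = 3
census = Counter(genus(m, n) for m in matchings(list(range(2 * n))))
# census == {0: 5, 1: 10}, out of (2*3-1)!! = 15 diagrams
```

For $n=3$ this census returns 5 diagrams of genus 0 and 10 of genus 1.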
For example, for $n=3$ the genus $G_3$ takes only two values, 0 and 1, according to the picture:
$$\begin{array}{c||@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c}
\mbox{genus 0}&\risS{-15}{cd35}{}{40}{20}{20}& \risS{-15}{cd35-1}{}{40}{0}{0}&\risS{-15}{cd33}{}{40}{0}{0}& \risS{-15}{cd33-1}{}{40}{0}{0}&\risS{-15}{cd33-2}{}{40}{0}{0} \\ \hline
\mbox{genus 1}&\risS{-15}{cd31}{}{40}{30}{20}& \risS{-15}{cd31-1}{}{40}{0}{0}&\risS{-15}{cd31-2}{}{40}{0}{0}& \risS{-15}{cd31-3}{}{40}{0}{0}&\risS{-15}{cd31-4}{}{40}{0}{0} \\
&\risS{-15}{cd31-5}{}{40}{30}{20}& \risS{-15}{cd34}{}{40}{0}{0}&\risS{-15}{cd34-1}{}{40}{0}{0}& \risS{-15}{cd34-2}{}{40}{0}{0}&\risS{-15}{cd32}{}{40}{0}{0} \\
\end{array}
$$
Our work was inspired by a recent paper of Linial and Nowik~\cite{LN}, who estimated the expected value of $G_n$:
$$
\text{E\/}[G_n]\sim \frac{n -\ln n}{2}.
$$
This estimate is implied by a harmonic sum expression for $\text{E\/}[G_n]$ which they derived from the Harer-Zagier formula~\cite{HZ} for the bivariate generating function of $\{c_{n,g}\}$, with $c_{n,g}$ being the number of $n$-chord diagrams of genus $g$. In principle, the Harer-Zagier formula can be used to obtain sharp asymptotics for higher order moments of $n+1-2G_n$. The formulas get progressively messier, which makes a distributional analysis of $G_n$ quite problematic. From the first two moments we deduce that the standard deviation of $G_n$ is of order $(\ln n)^{1/2}$. Using this simple information and a contour integration formula based on the Harer-Zagier formula, we prove a local limit theorem for the distribution of $G_n$, i.e., a sharp asymptotic estimate for the numbers $c_{n,g}$, with $|g-\text{E\/}[G_n]| \ll (\ln n)^{7/10}$. As a corollary, $G_n$ is shown to be asymptotically normal, with mean $(n-\ln n)/2$ and standard deviation $\sqrt{\ln n}/2$. The Harer-Zagier formula was discovered in \cite{HZ} for the purpose of computing the Euler characteristic of the moduli spaces of complex curves.
It is tightly related to matrix models of quantum gravity \cite{tH}. We recommend the remarkable book \cite{LZ} for an excellent exposition of the Harer-Zagier formula and its relation to different areas of mathematics and physics. The topological construction of a surface from a chord diagram above gives a graph with a single vertex and $n$ loops embedded into a surface, dividing the surface into a number $F$ of cells (faces). Then the Euler characteristic of the surface is $1-n+F$. On the other hand, the Euler characteristic of an orientable surface of genus $G$ is equal to $2-2G$. Thus for the number of faces $F$ we have $F=n-2G+1$. By our result, with high probability the genus $G_n$ of a random chord diagram is very close to $(n-\ln n)/2$. Hence the number of faces $F_n$ typically tends to be very small, of order $\ln n$, relative to the number of edges $n$. Geometrically it means that typically there is at least one face with a large number of sides, of order $n/\ln n$. It would be interesting to explore the distribution of the number of sides of individual faces. For example, is there typically just one face with that many sides, or are there several (and how many) such faces? Going out on a limb, we conjecture that, analogously to cycles of a uniformly random permutation on $[n]$, with high probability there exist several faces, with $\Theta(n)$ sides each. To conclude, we should mention that the primary focus of Linial and Nowik~\cite{LN} was a random {\it directed\/} $n$-chord diagram, generating a random oriented surface in a different way, for which a counterpart of the Harer-Zagier formula is unknown. In that case they used an ingenious combinatorial argument to show that
$$
\text{E\/}[G_n]=\frac{n}{2}-\Theta(\ln n).
$$
\bigskip
{\bf Acknowledgment.} We would like to thank Nathan Linial and Tahl Nowik for useful comments on the first version of the paper.
\section{Harer-Zagier formula}
In terms of the distribution $\{p_{n,g}:=\text{ P\/}(G_n=g)\}$, the Harer-Zagier formula~\cite{HZ} is equivalent to
\begin{equation}\label{1}
1+2\sum_{n,g}p_{n,g}x^{n+1}y^{n+1-2g}=\left(\frac{1+x}{1-x}\right)^y,
\end{equation}
see \cite{LN}. To illustrate the power of \eqref{1}, let us compute $p_{n,n/2}=\text{ P\/}(G_n=n/2)$, which is the probability that the random surface has exactly one face, $F_n=1$. (Of course, $p_{n,n/2}=0$ for $n$ odd.) It follows from \eqref{1} that
$$
2\sum_np_{n,n/2}x^{n+1}=[y^1]\left(\frac{1+x}{1-x}\right)^y=\ln\frac{1+x}{1-x}.
$$
Hence, for even $n$,
$$
p_{n,n/2}=[x^{n+1}]\sum_{j\text{ odd}}\frac{x^j}{j}=\frac{1}{n+1}.
$$
Observe that $1/(n+1)$ is the probability that the uniformly random permutation $\omega_{n+1}$ of $[n+1]$ is cyclic. More generally, for $k\ge 1$,
\begin{align}
2\text{ P\/}(F_n=k)=&\,[x^{n+1}]\,\frac{1}{k!}\left[\ln (1+x)-\ln (1-x)\right]^k\notag\\
=&[x^{n+1}]\,\frac{2^k}{k!}\left(\,\sum_{j\text{ odd}}\frac{x^j}{j}\right)^k.\label{more}
\end{align}
Let ${\cal O}_{a,b}$ denote the total number of permutations of $[a]$ consisting of $b$ odd cycles. From \eqref{more} and a standard exponential identity
$$
\sum_{a,b}\frac{x^ay^b}{a!}\,{\cal O}_{a,b}=\exp\left(y\sum_{j\text{ odd}}\frac{x^j}{j}\right),
$$
it follows that
\begin{multline}\label{pF=k}
\text{ P\/}(F_n=k)=2^{k-1}[x^{n+1}]\sum_a\frac{x^a}{a!}{\cal O}_{a,k} =2^{k-1}\frac{{\cal O}_{n+1,k}}{(n+1)!}\\
=2^{k-1}\text{ P\/}\bigl(\omega_{n+1}\text{ consists of }k\text{ odd cycles}\bigr).
\end{multline}
\par\smallskip\noindent
We will show that the equation \eqref{1} can also be used to find sharp asymptotic expressions for the moments $\text{E\/}[G_n^k]$. However, we will use only the first two moments, as obtaining a limiting distribution of $G_n$ via the moments method appears to be quite problematic.
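The identity \eqref{pF=k} can be verified directly for small $n$ by enumerating permutations of $[n+1]$ and counting those consisting of $k$ odd cycles; a quick check (our illustration):

```python
import math
from fractions import Fraction
from itertools import permutations

def cycle_lengths(perm):
    # cycle type of a permutation given in one-line notation
    seen, lengths = set(), []
    for start in range(len(perm)):
        if start not in seen:
            length, i = 0, start
            while i not in seen:
                seen.add(i)
                i = perm[i]
                length += 1
            lengths.append(length)
    return lengths

def p_faces(n, k):
    # P(F_n = k) = 2^{k-1} P(omega_{n+1} consists of k odd cycles)
    hits = 0
    for perm in permutations(range(n + 1)):
        lens = cycle_lengths(perm)
        if len(lens) == k and all(L % 2 == 1 for L in lens):
            hits += 1
    return Fraction(2 ** (k - 1) * hits, math.factorial(n + 1))
```

For instance, $\text{P}(F_4=1)=1/5$, in agreement with $p_{n,n/2}=1/(n+1)$ for $n=4$.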
Besides, we set our sights higher, on a {\it local\/} limit theorem, for which the moments-based techniques are too crude in principle. Our main tool is the contour integration already implicit in the derivation of Theorem 2 in~\cite{HZ} from the equation \eqref{1}. This theorem (see also~\cite[Corollary~3.1.8]{LZ}) states that
\begin{equation}\label{5}
c_{n,g}=(2n-1)!!\, p_{n,g}= \frac{(2n)!}{(n+1)!(n-2g)!}\,\,[t^{2g}]\,\left(\frac{t/2}{\tanh (t/2)}\right)^{n+1},
\end{equation}
where $[t^{2g}](f(t))$ denotes the coefficient of $t^{2g}$ in the Taylor series expansion of the function $f(t)$. The contour integration we mentioned is a simple consequence of \eqref{5}:
\begin{equation}\label{3}
p_{n,g}=\frac{2^n}{2\pi i\,(n-2g)!(n+1)} \oint\limits_{C}\frac{1}{t^{2g+1}}\left(\frac{t/2}{\tanh(t/2)}\right)^{n+1}\,dt\ ;
\end{equation}
here $C$ is a simple closed contour surrounding $0$ such that all the non-zero roots of $\tanh (t/2)=0$ are outside of $C$. Later on we will choose $C$ depending on $n$, which will allow us to get the desired asymptotics of $p_{n,g}$.
\section{Asymptotics of $p_{n,g}$}
The formula \eqref{1} is perfectly tailored for asymptotic evaluation of the {\it factorial\/} moments $\text{E\/}\bigl[(n+1-2G_n)_k\bigr]$, $k\ge 1$, ($(m)_k:=m(m-1)\cdots (m-k+1)$). Indeed, differentiating \eqref{1} $k$ times with respect to $y$, and setting $y=1$, we get
\begin{equation*}
2\sum_n x^{n+1}\text{E\/}\bigl[(n+1-2G_n)_k\bigr]=\frac{1+x}{1-x}\left(\ln\frac{1+x}{1-x}\right)^k.
\end{equation*}
So
\begin{equation*}
\text{E\/}\bigl[(n+1-2G_n)_k\bigr]=[x^{n+1}]\,\frac{1+x}{2(1-x)}\left(\ln\frac{1+x}{1-x}\right)^k.
\end{equation*}
Using an asymptotic formula for $[x^m] (1-x)^{-\alpha}\bigl(x^{-1}\ln (1-x)^{-1}\bigr)^{\beta}$ (Flajolet and Sedgewick~\cite{FS}, Section VI.2), it is straightforward to write down a series-type asymptotic formula for $\text{E\/}\bigl[(n+1-2G_n)_k\bigr]$.
In particular, for $k=1$,
\begin{equation}\label{ex^1}
\text{E\/}[n+1-2G_n]=\ln n +\ln 2 -\Gamma^\prime(1)+O(\ln^{-1}n).
\end{equation}
Equivalently,
\begin{equation}\label{exG=}
\text{E\/}[G_n]=\frac{n}{2}-\frac{\ln n}{2}+\frac{1}{2}(1-\ln 2+\Gamma^\prime(1))+O(\ln^{-1}n).
\end{equation}
This sharp estimate can also be obtained from the harmonic sum--type formula derived in \cite{LN}. Analogously, for all $k\ge 1$,
\begin{equation}\label{ex^k}
\text{E\/}\bigl[(n+1-2G_n)_k\bigr]=\ln^k n +O(\ln^{k-1} n).
\end{equation}
The relation \eqref{ex^k} implies that
$$
\frac{n+1-2G_n}{\ln n}\to 1,
$$
{\it in probability\/}. Moreover, using \eqref{ex^1} and \eqref{ex^k} for $k=2$, we see that
$$
\text{Var\/}[n+1-2G_n]=O(\ln n).
$$
Hence, by Chebyshev's inequality,
\begin{equation}\label{CM}
\text{ P\/}\bigl\{|G_n-\text{E\/}[G_n]|\ge (\ln n)^{1/2+\varepsilon}\bigr\}\le_b (\ln n)^{-2\varepsilon}.
\end{equation}
More expressively, by \eqref{exG=},
\begin{equation}\label{pr,G=}
G_n=\frac{n}{2}-\frac{\ln n}{2}+o_p\bigl((\ln n)^{1/2+\varepsilon}\bigr),\quad\forall\varepsilon>0;
\end{equation}
the $o_p$ notation means that the remainder scaled by $(\ln n)^{1/2+\varepsilon}$ converges to zero in probability. The upshot of \eqref{CM}-\eqref{pr,G=} is that from now on we may, and will, focus on the generic values $g$ of $G_n$ satisfying
\begin{equation}\label{upshot}
\left|g-\frac{n}{2}+\frac{\ln n}{2}\right|\le (\ln n)^{1/2+\varepsilon}.
\end{equation}
At the risk of belaboring the obvious, the equation \eqref{CM} is equivalent to
\begin{equation}\label{risk}
\sum_{g\text{ meets }\eqref{upshot}}\!\!\!\!\!p_{n,g}= 1 - O\bigl((\ln n)^{-2\varepsilon}\bigr).
\end{equation}
Armed with \eqref{risk}, we will determine an asymptotic formula for $p_{n,g}$, with $g$ in the range \eqref{upshot}. By \eqref{3},
\begin{equation}\label{2.1}
p_{n,g}=\frac{2^{-1}}{2\pi i (n-2g)!(n+1)}\oint\limits_{C}\frac{1}{t^{2g+1}}\left(\frac{t}{\tanh(t/2)}\right)^{n+1}\,dt.
\end{equation}
Since
$$
\tanh (t/2)=\frac{e^t-1}{e^t+1},
$$
its roots are $2\pi \nu i$, $\nu=\pm 1,\pm 2,\dots$. Since the integrand in \eqref{2.1} is odd, we seek $C$ symmetric with respect to the origin $t=0$. One would normally consider a circular contour of radius $t(n,g)$, where $t(n,g)$ is an absolute minimum point of
$$
f(t,g)=\frac{1}{t^{2g}}\left(\frac{t}{\tanh(t/2)}\right)^{n+1}, \quad t\ge 0.
$$
However, for $g$ in the range \eqref{upshot}, $t(n,g)$ turns out to be asymptotic to $\ln n$; so a circle of that radius would enclose not only $t=0$, but also lots of imaginary zeroes $2\pi \nu i$, $\nu\neq 0$, of $\tanh(t/2)$. That is, this circle is inadmissible. Instead we will select as $C$ a thin horizontal rectangular contour; its short vertical sides pass through the points $t=\pm \bar t$, with $\bar t\sim \ln n$, and the long horizontal sides pass through the points $t=\pm \pi i$, the zeroes of $\coth(t/2)$ closest to the origin $t=0$.
$$\risS{-5}{contour}{\put(1,30){$-\bar t$}\put(227,30){$\bar t$} \put(114,32){$0$}\put(122,61){$\pi i$}\put(121,14){$-\pi i$}}{250}{60}{0}
$$
Observe that we confine ourselves to the same $\bar t=\bar t(n)$ for all $g$ satisfying \eqref{upshot}. How do we choose $\bar t$? Our guiding intuition is that, for some $\bar g$ in the range \eqref{upshot}, $(\bar t,\bar g)$ is a stationary, saddle-type point of a logarithmically-sharp approximation of $f(t,g)/(n-2g)!$.
\par\smallskip\noindent
Since $n-2g\sim \ln n$ for $g$ in \eqref{upshot}, we have
\begin{equation}\label{stir}
(n-2g)! =\sqrt{2\pi (n-2g)}\left(\frac{n-2g}{e}\right)^{n-2g}\bigl(1+O((\ln n)^{-1})\bigr).
\end{equation}
So we define $u=2g$, and introduce
\begin{multline}\label{Htu=}
H(t,u)=(n-u)\ln\frac{e}{n-u}+(n+1)\ln\frac{t}{\tanh(t/2)}-u\ln t\\
=(n-u)\ln\frac{e}{n-u}+(n+1)\ln\coth (t/2)+(n+1-u)\ln t.
\end{multline}
A stationary point of $H(t,u)$ is a solution of
\begin{align}
H_t(t,u)=&\, -(n+1)\frac{1}{\sinh t}+\frac{n+1-u}{t}=0,\label{H_t=0}\\
H_u(t,u)=&\, \ln(n-u)-\ln t=0.\label{H_u=0}
\end{align}
From \eqref{H_u=0}, $t=n-u$, and \eqref{H_t=0} becomes
\begin{equation}\label{t,root}
\frac{1+t}{t}\,\sinh t=n+1.
\end{equation}
Taking logarithms of both sides of \eqref{t,root}, we easily obtain
\begin{equation}\label{bar t=}
\bar t = \ln (2n) -\frac{1}{\ln (2n)} +O(\ln^{-2} n).
\end{equation}
The corresponding value $\bar g$ is therefore
\begin{equation}\label{bar g=}
\bar g=\frac{\bar u}{2}=\frac{n-\bar t}{2}=\frac{n-\ln n}{2} +O(1).
\end{equation}
And $\bar g$ is well within the target range \eqref{upshot}! Using $\bar u=n-\bar t$ and \eqref{bar t=}, we compute
\begin{multline}\label{Hbartbaru=}
H(\bar t,\bar u)=\bar t+\ln\bar t+(n+1)\ln\frac{e^{\bar t}+1}{e^{\bar t}-1}\\
=\ln(2n)+\ln\ln n+(n+1)\bigl(2e^{-\bar t}+O(e^{-2\bar t})\bigr)\\
=\ln(2n\ln n) +O(\ln^{-1}n).
\end{multline}
Furthermore, for $g=u/2$ satisfying \eqref{upshot}, and an intermediate $\tilde u$,
\begin{multline}\label{Hbartu=}
H(\bar t,u)=H(\bar t,\bar u)+H_u(\bar t,\bar u)(u-\bar u)+\frac{1}{2}H_{uu}(\bar t,\tilde u)(u-\bar u)^2\\
=H(\bar t,\bar u)-\frac{1}{2(n-\bar u)}(u-\bar u)^2+O\bigl(|u-\bar u|^3(n-\bar u)^{-2}\bigr)\\
=H(\bar t,\bar u)-\frac{1}{2\ln n}(u-\bar u)^2+O\bigl((\ln n)^{-\delta}\bigr);
\end{multline}
here $\delta:=1/2-3\varepsilon>0$ if $\varepsilon<1/6$, which we assume from now on.
\par\smallskip\noindent
Putting together \eqref{upshot} and \eqref{stir}-\eqref{Hbartu=}, we transform \eqref{2.1} into
\begin{equation}\label{transpng0}
\begin{aligned}
p_{n,g}=&\,\frac{(\ln n)^{1/2}}{(2\pi)^{3/2}i}\,\exp\left[-\frac{(u-\bar u)^2}{2\ln n}+O\bigl((\ln n)^{-\delta} \bigr)\right]\\
&\times \oint\limits_C\frac{1}{t}\cdot\left(\frac{\coth(t/2)}{\coth(\bar t/2)}\right)^{n+1} \cdot\left(\frac{t}{\bar t}\right)^{n+1-u}\,dt.
\end{aligned}
\end{equation}
Since $u=2g$ is even, the integrand in \eqref{transpng0} is odd, just like the one in \eqref{2.1}. Consequently, the contour integral is twice the contour integral over $C^*=C_1\cup C_2 \cup C_3$; here $C_1=\{t=-\pi i+x,\,0\le x\le\bar t\,\}$, $C_2=\{t=\bar t+iy:\,-\pi\le y\le \pi\}$, and $C_3=\{t=\pi i+x:\,\bar t\ge x\ge 0\}$. Using the main branch of $\ln z$, i.e., with the cut $\{z:\text{Im }z=0,\,\text{Re }z\le 0\}$, the integrand in \eqref{transpng0} for $t\in C^*$ can be written as $t^{-1}e^{h(t,u)}$, where
$$
h(t,u):=(n+1)\ln\frac{\coth(t/2)}{\coth(\bar t/2)}+(n+1-u)\ln\frac{t}{\bar t}.
$$
Let us show that, asymptotically, $u$ can be replaced with $\bar u$, i.e., the contour integral is almost independent of $u$.
\par\smallskip\noindent
On $C_2$, since $\bar t\sim\ln n$,
\begin{align*}
h(t,u)=&\,h(t,\bar u)+(u-\bar u)\ln\frac{\bar t+iy}{\bar t}\\
=&\,h(t,\bar u)+O\bigl(\bar t\,^{-1}|u-\bar u|\bigr)\\
=&\,h(t,\bar u)+O\bigl((\ln n)^{\varepsilon-1/2}\bigr),
\end{align*}
and
\begin{equation*}
|h(t,\bar u)|\le_b ne^{-\bar t}+(n+1-\bar u)\bar t\,^{-1}=O(1).
\end{equation*}
Therefore
\begin{equation}\label{intC2=}
\oint\limits_{C_2}\frac{1}{t}e^{h(t,u)}\,dt=\oint\limits_{C_2}\frac{1}{t}e^{h(t,\bar u)}\,dt +O\bigl((\ln n)^{\varepsilon-3/2}\bigr).
\end{equation}
\par\smallskip\noindent
On $C_3$, since $\bar t=\ln n+O(1)$,
\begin{align}
h(t,u)=&\,(n+1)\ln\frac{e^{x+i\pi}+1}{e^{x+i\pi}-1}-(n+1)\ln\frac{e^{\bar t}+1}{e^{\bar t}-1}\notag\\
&+(n+1-u)\ln\frac{x+i\pi}{\bar t};\label{htuC3}\\
=&\,(n+1)\ln\frac{e^{x}-1}{e^{x}+1}+(n+1-u)\ln\frac{x+i\pi}{\bar t} +O(1).
\end{align}
Consequently
$$
\text{Re }h(t,u)=(n+1)\ln\frac{e^{x}-1}{e^{x}+1}+(n+1-u)\ln\frac{\sqrt{x^2+\pi^2}}{\bar t}+O(1).
$$
The first order derivative of the explicit part on the RHS is
$$
\frac{n+1}{\sinh x}+(n+1-u)\frac{x}{x^2+\pi^2}\ge \frac{n+1}{\sinh \bar t}\ge \frac{1}{2}.
$$ Hence \begin{align*} \text{Re }h(t,u)\le&\,(n+1-u)\ln\frac{\sqrt{\bar t^2+\pi^2}}{\bar t}-\frac{\bar t-x}{2}+O(1)\\ =&\,- \frac{\bar t-x}{2}+O((n+1-u)/\bar t)+O(1)\\ =&\,-\frac{\bar t-x}{2}+O(1), \end{align*} as $n+1-u=O(\ln n)$. Thus $$ \bigl|e^{h(t,u)}\bigr|=e^{\text{Re }h(t,u)}\le_b e^{-(\bar t-x)/2}, $$ and picking $\gamma\in (0,1)$, \begin{equation}\label{xsmall} \left|\,\,\oint\limits_{t\in C_3:\,\bar t -\bar t^{\gamma}\ge x\ge 0}\frac{1}{t}\,e^{h(t,u)}\,dt\, \right|\le_b e^{-\bar t\, ^{\gamma}/2}, \end{equation} for all $u$ in question, including $\bar u$. For $t\in C_3$ with $x=\text{Re }t\ge \bar t-\bar t\,^{\gamma}$, \begin{align*} \text{Re }(h(t,u)-h(t,\bar u))=&\,(\bar u-u)\ln\frac{\sqrt{x^2+\pi^2}}{\bar t}\\ =&\,O\bigl[(\ln n)^{1/2+\varepsilon}\,\bar t\,^{\gamma-1}\bigr]=O\bigl((\ln n)^{-\sigma}\bigr), \end{align*} where $\sigma:=1/2 - \varepsilon-\gamma>0$, if we choose $\gamma\in (0,1/2-\varepsilon)$, which we do! In that case \begin{equation}\label{xlarge} \begin{aligned} \left|\,\,\oint\limits_{t\in C_3:\,\bar t \ge x\ge \bar t -\bar t\,^{\gamma}}\frac{1}{t}\, \bigl[e^{h(t,u)}-e^{h(t,\bar u)}\bigr]\,dt\,\right|\le_b&\,(\ln n)^{-\sigma}\int_{\bar t-\bar t\,^{\gamma}} ^{\bar t }\frac{1}{x}\,e^{-(\bar t-x)/2}\,dx\\ \le_b\,(\ln n)^{-\sigma-1} \end{aligned} \end{equation} Combining \eqref{xsmall} and \eqref{xlarge}, we obtain \begin{align} \oint\limits_{C_3}\frac{1}{t}\,e^{h(t,u)}\,dt-\oint\limits_{C_3}\frac{1}{t}\,e^{h(t,\bar u)}\,dt=&\, O\bigl(e^{-\bar t\,^{\gamma}}+(\ln n)^{-\sigma-1}\bigr)\notag\\ =&\,O\bigl((\ln n)^{-\sigma-1}\bigr).\label{IntC3=} \end{align} The same argument yields \begin{equation}\label{IntC1=} \oint\limits_{C_1}\frac{1}{t}\,\bigl[e^{h(t,u)}-e^{h(t,\bar u)}\bigr]\,dt =\,O\bigl((\ln n)^{-\sigma-1}\bigr)=O\bigl((\ln n)^{-3/2+\varepsilon+\gamma}\bigr). 
\end{equation} Combining \eqref{intC2=}, \eqref{IntC3=} and \eqref{IntC1=}, we conclude that \begin{equation}\label{IntC=} \begin{aligned} \oint\limits_{C}\frac{1}{t}\,e^{h(t,u)}\,dt=&\,2\int\limits_{C^*} \frac{1}{t}\,e^{h(t,u)}\,dt =\,I_n +O\bigl((\ln n)^{-3/2+\varepsilon+\gamma}\bigr);\\ I_n:=&\,2\int\limits_{C^*} \frac{1}{t}\,e^{h(t,\bar u)}\,dt. \end{aligned} \end{equation} (We cannot write $I_n$ as the contour integral over the whole $C$ since $\bar u$ may not be an (even) integer.) \par\bigskip\noindent The rest is short. Using \eqref{IntC=}, we rewrite \eqref{transpng0} as follows: \begin{equation}\label{transpng*} \begin{aligned} p_{n,g}=&\,\frac{(\ln n)^{1/2}}{(2\pi)^{3/2}i}\,\exp\left[-\frac{(u-\bar u)^2}{2\ln n}+O\bigl((\ln n)^{-\delta} \bigr)\right]\\ &\times\bigl[I_n+O\bigl((\ln n)^{-3/2+\varepsilon+\gamma}\bigr)\bigr]. \end{aligned} \end{equation} Summing this expression over $g=u/2$ in the range \eqref{upshot} and using \eqref{risk}, we get \begin{multline}\label{sum*} 1-O\bigl((\ln n)^{-2\varepsilon}\bigr)=\,\left[1+O\bigl((\ln n)^{-\delta}\bigr)\right]\, \frac{(\ln n)^{1/2}}{(2\pi)^{3/2}i}\,I_n\\ \times \sum_{u:\,g=u/2\text{ meets }\eqref{upshot}} \!\!\!\exp\left[-\frac{(u-\bar u)^2}{2\ln n}\right] +O(R_n), \end{multline} where $$ R_n:=(\ln n)^{-1+\varepsilon+\gamma}\sum_{u\in \mathbb Z}\exp\left[-\frac{(u-\bar u)^2}{2\ln n}\right]. $$ Recall also that $$ \varepsilon<1/6,\quad \delta=1/2-3\varepsilon,\quad \varepsilon+\gamma<1/2. $$ Recognizing the sum in the definition of $R_n$ as a Riemann sum for $\int_{\mathbb R}e^{-x^2/2}dx$ times $\sqrt{\ln n}$, we see that \begin{equation}\label{Rn=} R_n\le_b (\ln n)^{-1/2+\varepsilon+\gamma}. \end{equation} Likewise the sum in \eqref{sum*}, (with $u$ running through even integers), equals \begin{multline}\label{starsum=} \frac{\sqrt{\ln n}}{2}\left(\int_{|z|\le 2(\ln n)^{\varepsilon}}e^{-z^2/2}\,dz+O\bigl((\ln n)^{-1/2}\bigr)\right)\\ =\frac{\sqrt{2\pi \ln n}}{2}\left[1+O\bigl((\ln n)^{-1/2}\bigr)\right]. 
\end{multline}
Plugging the estimates \eqref{Rn=} and \eqref{starsum=} into \eqref{sum*}, we arrive at
\begin{equation}\label{In=}
I_n=\frac{4\pi i}{\ln n}\left[1+O\bigl((\ln n)^{-\sigma}\bigr) \right],
\end{equation}
where
$$
\sigma=\min\{2\varepsilon, 1/2-\varepsilon-\gamma,1/2-3\varepsilon\}.
$$
By choosing $\gamma>0$ sufficiently small, we can get $\sigma$ arbitrarily close, from below, to the {\it largest\/} value of $\min\{2\varepsilon,1/2-3\varepsilon\}=1/5$, which is attained at $\varepsilon=1/10$. Combining \eqref{transpng*} and \eqref{In=}, we have proved the following local limit theorem.
\begin{Theorem}\label{thm1}
Let $c_{n,g}$ denote the total number of chord diagrams of genus $g$. Then
\begin{equation}\label{transpng**}
\frac{c_{n,g}}{(2n-1)!!}=p_{n,g}=\frac{1+O((\ln n)^{-1/5+\alpha})}{\sqrt{2\pi (\ln n)/4}}\exp\left[-\frac{(g-\bar g)^2}{2(\ln n)/4}\right],
\end{equation}
uniformly for $g$ satisfying
$$
|g-\bar g|\le (\ln n)^{7/10-\alpha},\quad(\bar g=(n-\ln n)/2+O(1)),
$$
for $\alpha>0$ arbitrarily close to zero. As a corollary, $G_n$ is asymptotically normal with mean $(n-\ln n)/2$ and variance $(\ln n)/4$.
\end{Theorem}
\par\bigskip\noindent
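The limit law is easy to probe numerically: sampling uniform matchings of $2n$ circle points and computing the genus by tracing boundary cycles, the empirical mean and variance of $G_n$ should be close to $(n-\ln n)/2$ and $(\ln n)/4$. A Monte Carlo sketch (our illustration; convergence in $\ln n$ is slow, so only rough agreement is expected at moderate $n$):

```python
import random

def random_genus(n, rng):
    # uniform random chord diagram: shuffle 2n points, pair them consecutively
    pts = list(range(2 * n))
    rng.shuffle(pts)
    alpha = {}
    for i in range(0, 2 * n, 2):
        a, b = pts[i], pts[i + 1]
        alpha[a], alpha[b] = b, a
    # faces = cycles of i -> alpha(i+1 mod 2n); genus G = (n+1-F)/2
    seen, faces = set(), 0
    for start in range(2 * n):
        if start not in seen:
            faces += 1
            i = start
            while i not in seen:
                seen.add(i)
                i = alpha[(i + 1) % (2 * n)]
    return (n + 1 - faces) // 2

rng = random.Random(2012)
n, reps = 2000, 1000
sample = [random_genus(n, rng) for _ in range(reps)]
# the sample mean should be near (n - ln n)/2 and the sample
# variance near (ln n)/4, up to O(1)-size correction terms
```

For $n=2000$ the predicted mean is about $996$ and the predicted variance about $1.9$.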
\section{Introduction}
\label{sec:Introduction}
Since the first detection of a Galactic \ac{DNS} system \citep{hulse1975discovery}, the growing observed population of DNSs continues to provide constraints on their orbital parameters. Precise measurements of Keplerian and post-Keplerian parameters \citep{kramer2006tests} contain valuable information about the progenitors and formation history of \acp{NS} and \acp{DNS}. Additionally, GW170817 \citep{GW170817} became the first gravitational--wave signal detected from a DNS merger. These precise measurements allow us to test our understanding of the massive binary progenitor populations and their formation history \citep[e.g.,][]{bhattacharya1991formation}. \cite{tutukov1993formation} carried out an early rapid population synthesis study of Galactic NSs. The formation and fate of \acp{DNS} has been studied with a similar approach by \cite{portegies1998formation}, who made an analysis of the observed systems and predictions of \ac{GRB} rates, and \cite{belczynski1999effect}, who focused on gravitational--wave merger rates. \cite{voss2003mnras} studied both \ac{GRB} and gravitational--wave merger rates for Galactic \acp{DNS} (and binary black holes). \cite{Oshaughnessy:2005c} used six \acp{DNS} observed in the Galactic disk to constrain population synthesis models. Several binary population synthesis studies have focussed on natal kick distributions \citep[e.g.,][]{brandt1995dns,podsiadlowski2004effects,bray2016}, short \acp{GRB} locations \citep[e.g.,][]{church2011implications}, evolutionary channels \citep[e.g.,][]{andrews2015evolutionary} and merger rates \citep[e.g.,][]{chruslinska2018double}. More recently, \cite{2018arXiv180105433K} used their population synthesis model, calibrated to match the observed Galactic \ac{DNS} population, to predict merger rates in the local Universe.
Using the rapid population synthesis element of the Compact Object Mergers: Population Astrophysics and Statistics (COMPAS) suite \citep{stevenson2017formation}, we use the Galactic \ac{DNS} population as an observational constraint on massive binary evolution, from two zero-age main-sequence (ZAMS) stars to a pair of \acp{NS}. The COMPAS tool simulates isolated binaries: double star systems which evolve without significant interaction with the environment or with other stars. The majority of the confirmed Galactic \acp{DNS} (14 confirmed systems; for details, see Table~\ref{tab:DNS}, as well as \cite{tauris2017formation} and references therein) come from isolated binaries which lie in the Galactic disk. We do not address the two Galactic globular cluster binaries in this work, $\rm~B2127+11C$ \citep{anderson1990discovery} and $\rm J1807-2500B$ \citep[][not a confirmed \ac{DNS}]{lynch2012timing}, since dynamical interactions likely played a key role in their formation \citep{phinney1991ejection,Grindlay2006sGRBs,ivanova2008NSGC}. Our paper explores the impact of physical interactions during various stages of binary evolution on predictions of observables such as orbital parameters of Galactic \acp{DNS} and inferred mass distributions of gravitational--wave events. To do this, we compare models with different underlying assumptions and quantify the difference between their predictions. For each model, we provide \ac{DNS} formation rates and orbital parameters as predictions, to be tested against present-day observations. We compare the predicted \ac{DNS} masses ($\rm m_{1,2}$) and orbital parameters (period $P$, eccentricity $e$) to those of the observed Galactic \acp{DNS}. We find that the natal kicks received by \acp{NS} during formation in a \ac{SN}, together with the mass transfer stability criteria, play a fundamental role in reproducing the Galactic \ac{DNS} population. The paper is structured as follows.
Section \ref{sec:methods} describes population synthesis and presents our \Fiducial~model. Changes made to binary evolution in COMPAS since \cite{stevenson2017formation} are specified. Section \ref{sec:results} presents the results of the \Fiducial~population, with particular emphasis on the formation history of Galactic \acp{DNS}. We discuss the effect of variations, such as mass transfer during the post-helium-burning phase, and compare different natal kick distributions. We conclude with a summary and discussion in Section \ref{sec:discussion}.
\begin{table}
\caption[caption]{Measured parameters of the Galactic \acp{DNS} used as a diagnosis in this study. Notes: $^\dagger$Systems which will merge via gravitational--wave emission in less than $3000~\rm Myr$. $^{\ddagger}$Double pulsar. $^{\star}$Measurements used only for diagnosis in the $P-e$ plane. The masses of the \acp{DNS} are presented as $M_{\rm plsr}$ and $M_{\rm cmpn}$, the mass of the pulsar and the companion respectively. References: $^a$\cite{martinez2015pulsar}. $^b$\cite{kramer2006tests}. $^c$\cite{fonseca2014comprehensive}. $^d$\cite{faulkner2004psr}. $^e$\cite{hulse1975discovery}. $^f$\cite{lazarus2016einstein}. $^g$\cite{cameron2017high}. $^h$\cite{janssen2008multi}. $^i$\cite{corongiu2007binary}. $^j$\cite{champion2004psr}. $^k$\cite{swiggum2015psr}. $^l$\cite{keith2009psr}. $^m$\cite{martinez2017pulsar}.
$^n$\cite{stovall2018palfa}.}
\label{tab:DNS}
\begin{tabular}{lccccc}
\hline
Pulsar & $P$ & $e$ & $M_{\rm plsr}$ & $M_{\rm cmpn}$ & Ref \\
 & $\rm [days]$ & & $\rm [M_{\odot}]$ & $\rm [M_{\odot}]$ & \\
\hline
$\rm J0453+1559$ & 4.072 & 0.113 & 1.559 & 1.174 & a \\
$\rm J0737-3039^{\dagger,\ddagger}$ & 0.102 & 0.088 & 1.338 & 1.249 & b \\
$\rm B1534+12^{\dagger}$ & 0.421 & 0.274 & 1.333 & 1.346 & c \\
$\rm J1756-2251^{\dagger}$ & 0.320 & 0.181 & 1.341 & 1.230 & d \\
$\rm B1913+16^{\dagger}$ & 0.323 & 0.617 & 1.440 & 1.389 & e \\
$\rm J1913+1102^{\dagger}$ & 0.206 & 0.090 & 1.580 & 1.300 & f \\
$\rm J1757-1854^{\dagger}$ & 0.184 & 0.606 & 1.338 & 1.395 & g \\
$\rm J1518+4904^{\star}$ & 8.634 & 0.249 & - & - & h \\
$\rm J1811-1736^{\star}$ & 18.779 & 0.828 & - & - & i \\
$\rm J1829+2456^{\star}$ & 1.176 & 0.139 & - & - & j \\
$\rm J1930-1852^{\star}$ & 45.060 & 0.399 & - & - & k \\
$\rm J1753-2240^{\star}$ & 13.638 & 0.304 & - & - & l \\
$\rm J1411+2551^{\star}$ & 2.616 & 0.169 & - & - & m \\
$\rm J1946+2052^{\star}$ & 0.078 & 0.064 & - & - & n \\
\end{tabular}
\end{table}
\section{Methods}
\label{sec:methods}
\subsection{Population Synthesis}
\label{subsec:pop_synth}
The COMPAS suite includes a rapid population synthesis module designed to simulate isolated binary evolution. Rapid population synthesis aims to simulate the evolution of a binary system in a fraction of a second; that makes it possible to simulate millions of binaries in a few days using a single processor. The population synthesis module of COMPAS explores binary evolution with a Monte Carlo simulation. We stochastically sample the initial distribution of binary masses, separations and eccentricities, in order to generate a population.
Given a mass and a metallicity value at ZAMS, we define the initial conditions and evolution of a star following the single-star evolution (SSE) fitting formulae of \cite{hurley2000comprehensive} to the detailed models calculated in \cite{pols1998stellar}. We use the same nomenclature as \cite{hurley2000comprehensive} to define stellar phases, such as the \ac{HG}, where the \ac{HG} is defined as the phase after the depletion of hydrogen during the \ac{MS} and before the start of \ac{CHeB}. For every binary we follow the centre-of-mass evolution of the system, computing the masses, separation and eccentricity at every time step. We use parameterisations to quantify the effect on the orbit of the physics involving mass loss through stellar winds, mass transfer, \acp{SN} and \ac{CE} events. For \acp{SN} we also use remnant mass distributions which determine the ultimate fate of our stars. Each binary is evolved until the system either merges, becomes unbound or forms a \ac{DCO}. The population generates a set of \acp{DCO}, from which \acp{DNS} are sub-selected into our final distribution of interest. COMPAS population synthesis is similar to the general approach of the codes SeBa \citep{portegies1996SEBA,portegies1998formation,toonen2012SEBA}, BSE \citep{hurley2002evolution}, \texttt{StarTrack} \citep{belczynski2002comprehensive,belczynski2008compact} and binary\_c \citep{izzard2004new,izzard2006population,izzard2009population}, all of which use the SSE fits from \cite{hurley2000comprehensive}. Our approach of proposing an initial model and studying variations around it is similar to the one described in \cite{dominik2012double}. That study uses \texttt{StarTrack} to simulate populations from ZAMS to \ac{DCO} formation and predict merger rates for all compact objects. Their ``Standard" model overlaps with some of our \Fiducial~model assumptions.
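As an aside, the ``merging within $3000~\rm Myr$'' flag in Table~\ref{tab:DNS} can be estimated from the Keplerian parameters alone: Kepler's third law gives the semi-major axis, and the \cite{peters1964gravitational}-type merger timescale then follows. A sketch (our illustration, not COMPAS code, using the common approximate $(1-e^2)^{7/2}$ eccentricity correction):

```python
import math

G, c = 6.674e-11, 2.998e8            # SI units
MSUN, RSUN = 1.989e30, 6.957e8
MYR = 3.156e13                        # seconds per megayear

def semimajor_axis_rsun(m_tot_msun, p_days):
    # Kepler's third law: a^3 = G M P^2 / (4 pi^2)
    P = p_days * 86400.0
    a = (G * m_tot_msun * MSUN * P * P / (4 * math.pi ** 2)) ** (1.0 / 3.0)
    return a / RSUN

def merger_time_myr(m1_msun, m2_msun, a_rsun, e=0.0):
    # Peters (1964) circular-orbit gravitational-wave merger time,
    # scaled by the approximate (1 - e^2)^{7/2} factor for eccentric orbits
    m1, m2, a = m1_msun * MSUN, m2_msun * MSUN, a_rsun * RSUN
    t_circ = 5 * c ** 5 * a ** 4 / (256 * G ** 3 * m1 * m2 * (m1 + m2))
    return t_circ * (1 - e * e) ** 3.5 / MYR

# J0737-3039 (double pulsar): merger time of order 85 Myr
a = semimajor_axis_rsun(1.338 + 1.249, 0.102)
t = merger_time_myr(1.338, 1.249, a, e=0.088)
```

Applied to the double pulsar's parameters from Table~\ref{tab:DNS}, this reproduces the well-known $\sim 85$~Myr coalescence timescale, comfortably inside the $3000$~Myr cut.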
\subsection{\Fiducial~Model} \label{subsec:Fiducial} \subsubsection{Changes since \cite{stevenson2017formation}} \label{subsubsec:} The main changes to binary evolution modelling in COMPAS relative to the default assumptions in \cite{stevenson2017formation}, hereafter referred to as $\rm COMPAS\_\alpha$, are:\\ \begin{enumerate*}[label=(\roman*), itemjoin={\\}, itemjoin*={{\\}}] \item incorporation of the fitting formulae of the binding energy parameter $\lambda_{\textrm{Nanjing}}$ instead of a fixed $\lambda=0.1$, as described in Section~\ref{subsubsec:CE}. \item a bimodal natal kick distribution, where \ac{CCSN} explosions contribute to the high mode ($\sigma_{\rm{high}} = 265~\rm km~s^{-1}$) while \ac{USSN} explosions and \ac{ECSN} explosions constitute the low mode ($\sigma_{\rm{low}} = 30~\rm km~s^{-1}$), as described in Section~\ref{subsubsec:sne}. \item mass transfer stability criteria that make case~BB mass transfer always stable, as described in Section~\ref{subsubsec:CE}. \item the ``optimistic'' \ac{CE} assumption, which allows donors classified as Hertzsprung Gap (HG) stars in the \citet{hurley2000comprehensive} models to engage in and survive a \ac{CE} phase, as described in Section~\ref{subsubsec:CE}. \end{enumerate*} \subsubsection{Initial Distributions} \label{subsubsec:initialDit} To initialise a binary, we sample from initial distributions of mass, separation and eccentricity of the binary at ZAMS. For the mass distribution, we draw the primary mass from a Kroupa initial mass function (IMF) \citep{kroupa2001variation} in the form $dN/dm_1 \propto m_{1}^{-2.3}$ with masses between $5 \leq m_{1}/ \rm{M_{\odot}} \leq 100$. The secondary is drawn from a flat distribution in mass ratio, $0.1 < q_{\rm ZAMS} \equiv m_2/m_1 \leq 1$ \citep{sana2012binary}. The initial separation follows the flat-in-the-log distribution \citep{opik1924statistical,sana2012binary} in the range $0.1 < {a_{\rm ZAMS}}/ \rm{AU} < 1000.0$.
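For readers who wish to reproduce the sampling, the initial distributions above can be drawn by inverse-transform sampling of the power-law IMF, plus uniform draws for the mass ratio and the logarithm of the separation. The sketch below is ours (not COMPAS code) and assumes only the Fiducial ranges quoted above:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_zams(n):
    """Draw n binaries from the Fiducial ZAMS distributions."""
    # Kroupa IMF dN/dm1 ~ m1^-2.3 on [5, 100] Msun via the inverse CDF
    # of a power law: F^-1(u) = (lo^a + u*(hi^a - lo^a))^(1/a), a = -1.3.
    u = rng.random(n)
    lo, hi, a = 5.0, 100.0, -1.3
    m1 = (lo**a + u * (hi**a - lo**a)) ** (1.0 / a)
    # Flat mass ratio q = m2/m1 on (0.1, 1].
    m2 = rng.uniform(0.1, 1.0, n) * m1
    # Flat-in-the-log separation on (0.1, 1000) AU.
    a_zams = 10.0 ** rng.uniform(-1.0, 3.0, n)
    e_zams = np.zeros(n)  # circular at ZAMS
    return m1, m2, a_zams, e_zams
```

The exponent $-1.3$ is the IMF slope $-2.3$ plus one, since the cumulative distribution of $m^{-2.3}$ is proportional to $m^{-1.3}$.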
We assume that all of our binaries are circular at ZAMS, with $e_{\rm ZAMS}=0$. \subsubsection{Supernovae} \label{subsubsec:sne} We differentiate between three \ac{SN} scenarios: \ac{CCSN}, \ac{ECSN} and \ac{USSN}. For the \ac{CCSN} treatment, we apply the ``rapid'' explosion mechanism\footnote{In this text, the term supernova explosion \textit{scenario} refers to the type of explosion, such as ECSN, USSN or CCSN, while the term explosion \textit{mechanism} refers to the numerical treatment of this process in the code. The latter is henceforth also referred to as supernova prescription, or fallback prescription, or remnant mass model.}, as presented in \cite{fryer2012compact}, to determine the compact object remnant mass according to the total and carbon-oxygen (CO) core mass of the progenitor, with a maximum allowed \ac{NS} mass of $m_{\rm{NS,max}} = 2.0~\rm M_{\odot}$. In this explosion mechanism, the collapse does not allow for accretion onto the proto-\ac{NS}, and is able to reproduce the apparent mass gap between \acp{NS} and black~holes \citep[BHs,][]{Ozel:2010,farr2011mass}. There is no consensus yet on whether the mass gap is due to observational selection effects or if it is intrinsic to the explosion mechanism \citep{kreidberg2012mass,wyrzykowski2016black}. Another explosion scenario that some of our binary systems experience is the \ac{USSN} \citep{tauris2013ultra,tauris2015ultra}. In this case, a star becomes stripped when it loses its hydrogen envelope during its evolution; if, during later stages, it manages to lose its helium envelope, it becomes ultra-stripped. In COMPAS, any star which engages in a stable case~BB mass transfer episode with a \ac{NS} as an accretor is considered to be ultra-stripped. We define case~BB as a mass transfer episode which involves a post \ac{HeMS} donor star which has stopped burning helium in the core (helium~Hertzsprung-gap, HeHG).
Ultra-stripped stars are left with an ONeMg core with a thin carbon and helium layer \citep{tauris2013ultra}. The compact object remnant mass of an \ac{USSN} is determined in the same way as for \ac{CCSN}. A single star with mass within $7\lesssim$~$m_{\rm ZAMS}/\rm M_{\odot}\lesssim$~$10$ may undergo an \ac{ECSN}. Early studies by \citet{nomoto1984evolution,nomoto1987evolution} had a higher mass range for single stars between $8\lesssim$~$m_{\rm ZAMS}/\rm M_{\odot}\lesssim$~$10$, while more recent studies propose a lower mass range from $7\lesssim$~$m_{\rm ZAMS}/\rm M_{\odot}\lesssim$~$9$ \citep{woosley2015ECSN}. Note that binary interactions extend this initial mass range: in a simulation with binaries, the ZAMS mass range for ECSNe is broader, because initially less massive stars may accrete mass and initially more massive stars may lose it. We assume the baryonic mass of the degenerate ONeMg core leading to an \ac{ECSN} is $1.38~\rm M_{\odot}$ \citep{nomoto1987evolution}. We approximate the \ac{ECSN} remnant mass as $m_{\rm{ECSN}}=1.26~\rm M_{\odot}$ using the quadratic approximation $m_{\rm{bar}}-m_{\rm{grav}}=0.075m_{\rm{grav}}^2$ \citep{timmes1995neutron}. All natal kicks from \acp{SN} are assumed to be isotropic in the frame of reference of the exploding star, with directions drawn uniformly from the unit sphere. For the magnitude of the natal kicks of the \acp{SN}, we assume a bimodal distribution \citep[e.g.,][]{1975Natur.253..698K,2002ApJ...568..289A}. For \ac{CCSN}, we draw natal kick magnitudes from a Maxwellian velocity distribution with a one-dimensional standard deviation of $\sigma_{\rm{high}} = 265~\rm km~s^{-1}$ following the 3D pulsar velocity distribution derived by \cite{hobbs2005statistical} from a subset of their 2D observations.
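In practice, such a kick can be drawn by sampling three independent Gaussian velocity components: the resulting vector has a Maxwellian magnitude with the chosen one-dimensional standard deviation and an isotropic direction. A minimal sketch (ours, not COMPAS code), using $\sigma_{\rm high}$ for \acp{CCSN} and $\sigma_{\rm low}$ for \acp{ECSN} and \acp{USSN}:

```python
import numpy as np

rng = np.random.default_rng(0)

SIGMA_HIGH = 265.0  # km/s, CCSN mode (Hobbs et al. 2005)
SIGMA_LOW = 30.0    # km/s, ECSN and USSN mode

def draw_natal_kick(sn_type, n=1):
    """Return n isotropic kick vectors (km/s) with Maxwellian magnitudes.

    Three iid N(0, sigma) components give a Maxwellian speed with
    one-dimensional standard deviation sigma and isotropic direction.
    """
    sigma = SIGMA_HIGH if sn_type == "CCSN" else SIGMA_LOW
    return rng.normal(0.0, sigma, size=(n, 3))
```

The mean speed of a Maxwellian with one-dimensional standard deviation $\sigma$ is $2\sigma\sqrt{2/\pi}$, about $423~\rm km~s^{-1}$ for the high mode and $48~\rm km~s^{-1}$ for the low mode.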
\ac{USSN} and \ac{ECSN} natal kick magnitudes are drawn from a Maxwellian velocity distribution with a one-dimensional standard deviation of $\sigma_{\rm{low}} = 30~\rm km~s^{-1}$, following \cite{pfahl2002population} and \cite{podsiadlowski2004effects}. This second component is introduced to match the observed low natal kicks in some Galactic \acp{DNS} \citep{schwab2010further,beniamini2016formation} as well as in isolated pulsars \citep{0004-637X-571-2-906}, as the single-mode isolated pulsar velocity distribution proposed by \cite{hobbs2005statistical} fails to predict the low-velocity pulsar population as discussed by \cite{Verbunt2017bimodal} and \cite{verbunt2017new}. \subsubsection{Mass Transfer} \label{subsubsec:MT} A crucial part of binary evolution is mass transfer, which begins when one or both stars fill their Roche lobe \citep{Eggleton1983rocheLobe}, instigating a \ac{RLOF} event. In our population synthesis approach, mass transfer is treated by determining stability, timescales and conservativeness. Rapid population synthesis oversimplifies the complex hydrodynamics involved in a mass transfer episode. There have been some efforts to provide generalised models \citep[e.g.,][]{deMink:2007SMC,claeys2014theoretical,tauris2015ultra}. In particular, determining whether mass transfer is dynamically stable is challenging \citep[e.g.,][]{pavlovskii2016stability}. To determine dynamical stability during mass transfer episodes, we compare the response of the donor star's radius to adiabatic mass loss, $\zeta_{\textrm{ad}}=(d\log R/d\log M)_{\textrm{ad}}$, to the response of the Roche-lobe radius of the donor, $\zeta_{\textrm{RL}}$, under the same mass exchange conditions. Mass transfer is defined as dynamically stable if $\zeta_{\textrm{ad}}\geq\zeta_{\textrm{RL}}$.
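In code, this criterion reduces to a single comparison per mass transfer episode; a sketch of ours, with illustrative $\zeta_{\rm ad}$ values hard-coded for two donor phases (the \ac{MS} and \ac{HG} numbers are those adopted in the text, following \citealt{ge2015adiabatic}):

```python
# Illustrative adiabatic-response values per donor phase; the MS and
# HG numbers match those adopted in the text (Ge et al. 2015).
ZETA_AD = {"MS": 2.0, "HG": 6.5}

def mass_transfer_is_stable(donor_phase, zeta_rl):
    """Dynamically stable if zeta_ad >= zeta_RL for the donor's phase."""
    return ZETA_AD[donor_phase] >= zeta_rl
```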
We use fixed values of $\zeta_{\textrm{ad,MS}}=2.0$ for \ac{MS} and $\zeta_{\textrm{ad,HG}}=6.5$ for \ac{HG} stars, which are typical for these phases, following models by \cite{ge2015adiabatic}. For later phases in which the stars still possess hydrogen envelopes (such as the phases \ac{CHeB} and early asymptotic giant branch, EAGB) we use a fit to $\zeta_{\textrm{ad}}=\zeta_{\textrm{SPH}}$ for condensed polytrope models of a red giant as provided in \cite{soberman1997stability}. Case BB mass transfer is always stable in the \Fiducial~model, broadly in agreement with \citet{tauris2015ultra}. COMPAS uses fits to equilibrium mass-radius relations \citep{hurley2000comprehensive} to describe stellar evolution. We use these analytic formulae to determine when stable mass transfer is driven by thermal readjustment. If the calculated donor-star radius cannot stay within its Roche lobe during thermally stable mass transfer, then we remove the mass on a thermal timescale, although our stellar evolution recipes do not accurately represent the donor stars during thermal-timescale mass transfer \citep[for more detailed studies, see, e.g.,][]{kippenhahn1967entwicklung, pols1994caseA}. Once the donor's calculated equilibrium radius can again fit within its Roche lobe, we assume that the mass transfer occurs on a nuclear timescale \citep{claeys2014theoretical}. Dynamically stable mass transfer from evolved stars is assumed to always proceed on the thermal timescale until the entire envelope is removed \citep[but see, e.g.,][]{2017A&A...608A..11G}. We approximate the thermal timescale as the Kelvin-Helmholtz timescale of the donor's envelope, $\tau_{\rm KH}=GMM_{\rm env}/(RL)$, where $G$ is the gravitational constant, $M$ is the total mass, $M_{\rm env}$ is the mass of the envelope, $R$ is the radius and $L$ is the luminosity of the star. Conservativeness is defined as the fraction of the mass transferred from the donor that the accretor will accept and retain.
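For a sense of scale, the envelope Kelvin--Helmholtz timescale is straightforward to evaluate in solar units; the donor parameters in the example below are illustrative choices of ours, not COMPAS values:

```python
# SI constants
G = 6.674e-11        # m^3 kg^-1 s^-2
MSUN = 1.989e30      # kg
RSUN = 6.957e8       # m
LSUN = 3.828e26      # W
YR = 3.156e7         # s

def tau_kh(m, m_env, r, lum):
    """Kelvin-Helmholtz timescale of the envelope, tau = G*M*M_env/(R*L).

    m and m_env in Msun, r in Rsun, lum in Lsun; returns years.
    """
    return G * (m * MSUN) * (m_env * MSUN) / ((r * RSUN) * (lum * LSUN)) / YR

# An illustrative 5 Msun giant with a 3 Msun envelope, R = 100 Rsun,
# L = 1000 Lsun: the envelope KH timescale is a few thousand years.
tau = tau_kh(5.0, 3.0, 100.0, 1000.0)
```

As a sanity check, for solar values the same expression gives the familiar $\sim 30$~Myr Kelvin--Helmholtz timescale of the Sun.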
When mass is lost from the system during non-conservative mass transfer, the fraction of mass lost and the specific angular momentum it carries away determine the orbital parameters and subsequent evolution of the system. In the \Fiducial~model, if mass transfer is non-conservative, the non-accreted mass is lost from the vicinity of the accreting star via isotropic re-emission, carrying away the specific orbital angular momentum of the accretor. The conservativeness of a mass transfer episode is limited by the accretor. For non-degenerate accretors we assume a star can accrete at a maximum rate $\dot{M}_{\rm acc}=CM_{\rm acc}/\tau_{\rm KH}$. We use $C=10$ following \cite{hurley2002evolution}. For degenerate accretors, we assume the compact object accretion is limited by the Eddington accretion limit. \subsubsection{Common Envelope} \label{subsubsec:CE} If either of the binary stars begins dynamically unstable mass transfer, the binary may become engulfed in a \ac{CE} phase. The loss of corotation between the binary system and the envelope generates drag forces, which cause the binary to inspiral. The gravitational energy lost from the orbit is deposited in the envelope and may be enough to eject it from the binary. The whole process allows the system to decrease its separation by several orders of magnitude. The classical isolated binary evolutionary scenario for the formation of \acp{DCO} often involves a \ac{CE} phase \citep{paczynski1976common,ivanova2013common,belczynski2016first}. We use the $\alpha\lambda$-formalism, as proposed by \cite{webbink1984double} and \cite{de1990common}, to estimate the effect of the \ac{CE} phase on the orbit of the binary. The value of $\lambda$, which parametrises the envelope's binding energy, is calculated from detailed models of the stellar structure. For our \Fiducial~model, we adopt $\lambda_{\textrm{Nanjing}}$ (originally referred to as $\lambda_{\rm b}$, which includes internal energy) as calculated by \cite{xu2010binding}.
This $\lambda_{\textrm{Nanjing}}$ is also implemented in the \texttt{StarTrack} code \citep{dominik2012double}. The value of $\alpha$, which parametrises the efficiency of converting orbital energy into unbinding the envelope, depends on the orbital parameters, energy sources and energy exchange during the \ac{CE} phase, and is difficult to constrain even with detailed hydrodynamical models \citep{ivanova2013common}. We use $\alpha=1$. We assume that the orbit is always circularised during a \ac{CE} phase. We allow donor stars which engage in a \ac{CE} phase during the \ac{HG} to survive the event and expel the \ac{CE} if allowed by the energy condition. This assumption is labelled ``optimistic'' \ac{CE} in the literature \citep{dominik2012double}, while the alternative, ``pessimistic'' \ac{CE}, always leads to a merger for \ac{HG} donors. \subsection{Model Comparison} \label{subsec:ModelComparison} In order to quantify how well our models match the observed Galactic \ac{DNS} period--eccentricity ($P-e$) distribution, we calculate the likelihood $\mathcal{L}_i$ that observations could have come from the synthesised \ac{DNS} population for each model $i$. We use the $P-e$ distribution because all 14 observed Galactic \acp{DNS} (cf.~Table~\ref{tab:DNS}) have precise measurements of the period and the eccentricity, but only half of them have precise measurements of their individual masses. We do not use any of the mass measurements in the likelihood calculation. We also do not attempt to account for selection biases in the observed $P-e$ distribution. The details of how the likelihoods $\mathcal{L}_i$ are computed are given in Appendix \ref{sec:likelihood}.
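The flavour of such a likelihood evaluation can be illustrated with a simple Gaussian kernel density estimate over the synthesised $(\log_{10} P, e)$ points, evaluated at the observed systems. The sketch below is our illustrative stand-in only: the kernel and bandwidth are arbitrary choices of ours, not the procedure of Appendix \ref{sec:likelihood}:

```python
import numpy as np

def log_likelihood(model_pe, observed_pe, bw=0.25):
    """Sum of log KDE densities of the observed (P, e) points under the
    synthesised population; P in days, worked in (log10 P, e) space."""
    xm = np.column_stack([np.log10(model_pe[:, 0]), model_pe[:, 1]])
    xo = np.column_stack([np.log10(observed_pe[:, 0]), observed_pe[:, 1]])
    total = 0.0
    for point in xo:
        d2 = np.sum((xm - point) ** 2, axis=1) / bw**2
        density = np.mean(np.exp(-0.5 * d2)) / (2.0 * np.pi * bw**2)
        total += np.log(density + 1e-300)  # guard against log(0)
    return total
```

Two models are then compared by differencing such log-likelihoods evaluated on the same observations.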
We quote our results as the ratio of the likelihood of a given model $i$ to the likelihood of the \Fiducial~model (01); that is, we define the Bayes~factor $\mathcal{K}_i$ as: \begin{equation} \log \mathcal{K}_i = \log \mathcal{L}_i - \log \mathcal{L}_{01} \, , \end{equation} where all logarithms in this study are base $e$ unless stated otherwise. A positive log~Bayes~factor $\log~\mathcal{K}>0$ means that the given model is preferred over the \Fiducial~model. On the other hand, a negative log~Bayes~factor means that the \Fiducial~model is preferred over the given model. If all models have equal prior probabilities, the odds ratio is equal to the Bayes~factor. The odds ratio determines how significantly favoured or unfavoured the model is with respect to the \Fiducial~model; for an introduction to Bayesian analysis, see \cite{jaynes2003probability} and \cite{sivia1996data}. For readers unfamiliar with Bayes~factors, we also indicate when odds ratios for these model comparisons exceed $20:1$ (or $1:20$ for disfavoured models), loosely corresponding to the common significance threshold of a $p$-value of $p < 0.05$. Limited sampling of the synthetic distributions leads to uncertainties of order unity on $\log \mathcal{K}_i$, corresponding to a factor of two or three uncertainty in the Bayes factor; this statistical uncertainty can be improved with longer simulations. The Bayes factors calculated for our models are plotted in Figure~\ref{fig:likelihoods} and presented in Table~\ref{tab:models}. \begin{table*} \caption{ We list all simulations computed for this study; for simulations (02) through (19), we state the physical interaction or assumption varied relative to the \texttt{Fiducial} model and the actual parameter varied.
For each simulation, we give the formation rate $\rate$ of \ac{DNS} which will merge in a Hubble time in the Galaxy, its log~Bayes~factor relative to the \texttt{Fiducial} model (see Appendix \ref{sec:likelihood}) given the observed Galactic \ac{DNS} $P-e$ distribution and the fraction $f$ of formed \acp{DNS} that merge within a Hubble time. See Figure~\ref{fig:nineteenpanels} for the predicted $P-e$ distributions for all models. } \label{tab:models} \begin{tabular}{lccccc} \hline Number & Physics & Variation & $\rate$~[\textrm{$\rm Myr^{-1}$}] & $\log(\mathcal{K})$ & $f$ \\ \hline 00 & \texttt{$\rm COMPAS\_\alpha$} & & \formationVarZero & \bayesVarZero & \fractionVarZero \\ 01 & $\rm COMPAS$ \texttt{Fiducial} & & \formationVarOne & \bayesVarOne & \fractionVarOne \\ 02 & Stability & Case BB: unstable & \formationVarTwo & \bayesVarTwo & \fractionVarTwo \\ 03 & SNe & Fryer Delayed & \formationVarThree & \bayesVarThree & \fractionVarThree \\ 04 & SNe & M\"uller & \formationVarFour & \bayesVarFour & \fractionVarFour \\ 05 & SNe & Single Mode & \formationVarFive & \bayesVarFive & \fractionVarFive \\ 06 & SNe & $\sigma_{\textrm{ECSN}}=\sigma_{\textrm{high}}$ & \formationVarSix & \bayesVarSix & \fractionVarSix \\ 07 & SNe & $\sigma_{\textrm{USSN}}=\sigma_{\textrm{high}}$ & \formationVarSeven & \bayesVarSeven & \fractionVarSeven \\ 08 & CE & $\lambda=0.1$ & \formationVarEight & \bayesVarEight & \fractionVarEight \\ 09 & CE & $\lambda_{\textrm{Kruckow}}\propto R^{-5/6}$ & \formationVarNine & \bayesVarNine & \fractionVarNine \\ 10 & CE & $\alpha=0.1$ & \formationVarTen & \bayesVarTen & \fractionVarTen \\ 11 & CE & $\alpha=10.0$ & \formationVarEleven & \bayesVarEleven & \fractionVarEleven \\ 12 & Circularisation & $a_{\rm p}=a(1-e)$ & \formationVarTwelve & \bayesVarTwelve & \fractionVarTwelve \\ 13 & Circularisation & $a_{\rm SR}=a(1-e^2)$ & \formationVarThirteen & \bayesVarThirteen & \fractionVarThirteen \\ 14 & Mass Loss Mode & Jeans & \formationVarFourteen &
\bayesVarFourteen & \fractionVarFourteen \\ 15 & Mass Loss Mode & Circumbinary & \formationVarFifthteen & \bayesVarFifthteen & \fractionVarFifthteen \\ 16 & Distribution & $f_{e}(e)=$ Thermal & \formationVarSixteen & \bayesVarSixteen & \fractionVarSixteen \\ 17 & Metallicity & Z=0.002 & \formationVarSeventeen & \bayesVarSeventeen & \fractionVarSeventeen \\ 18 & Metallicity & Z=0.001 & \formationVarEightteen & \bayesVarEighteen & \fractionVarEightteen \\ 19 & CE & Pessimistic & \formationVarNineteen & \bayesVarNineteen & \fractionVarNineteen \\ \hline \end{tabular} \end{table*} \begin{figure} \includegraphics[trim={0cm 0cm 1cm 4.5cm},clip,angle=90,scale=0.36]{fig1.pdf} \caption{The ratio of the likelihood of each model to the likelihood of the \Fiducial\ model (01). Green (red) bars denote models significantly favoured (disfavoured) by an odds ratio of greater than $20:1$ relative to the \Fiducial\ model.}\label{fig:likelihoods} \end{figure} \section{Results} \label{sec:results} We evolve $10^6$ binaries\footnote{The total mass of evolved binaries is \evolvedMass $~\rm M_{\odot}$ for each simulation; this represents \totalMass $~\rm M_{\odot}$ of total star forming mass under the assumed initial mass distribution.} with initial metallicity $\rm Z_{\odot}=0.0142$. Although Galactic \acp{NS} were born at a range of metallicities, we use solar metallicity values \citep{asplund2009chemical} for bulk composition as a proxy for ongoing star formation metallicity in the Galaxy. We present the detailed results of our \Fiducial~model (01)\footnote{We will label the variations by their number (see Table \ref{tab:models}) in parentheses; e.g.: \Fiducial~model (01) or \texttt{COMPAS$_{\rm \alpha}$} (00).} and some variations to it, all with identical initial parameters (unless stated otherwise). 
The diagnostic tools we use to analyse all of our variations are the $P-e$ distribution (see Figure~\ref{fig:PeWG} and Section \ref{subsec:ModelComparison}, as well as Appendix \ref{sec:likelihood} for details), the remnant \ac{NS} mass distribution (see Figure~\ref{fig:bnskde}) and formation rate estimates (see Table \ref{tab:models}). We quote significant figures according to the statistical simulation uncertainty, i.e., the Monte Carlo uncertainty. We illustrate the plausible distribution of simulated Galactic \acp{DNS} (see Figure~\ref{fig:PeWG} for the \Fiducial~model and Figure~\ref{fig:nineteenpanels} for all models), which shows, in the $P-e$ plane, how systems may evolve from \ac{DNS} formation to a typical observable distribution. To illustrate this, we assign each binary a birth time drawn at random from the last 10 Gyr \citep[a proxy for the age of the Galactic thin disk, see][]{2005A&A...440.1153D}, and then follow their gravitational--wave driven orbital evolution until the present time. Our models predict the mass ratio distribution (Figure~\ref{fig:qCDFs}) and time distributions (Figure~\ref{fig:tdel}). The mass ratio distribution depends on the explosion mechanism of the \acp{SN}. The time distributions describe the formation time ($t_{\rm{form}}$), coalescence time ($t_{\rm c}$) and delay time ($t_{\rm{delay}}$). The formation time is the time it takes a binary to evolve from ZAMS to \ac{DCO} formation. The coalescence time is the time it takes that \ac{DCO} to inspiral until coalescence due to gravitational radiation, following the post-Newtonian approximation as given by \cite{peters1964gravitational}. The delay time is the sum of the formation time and the coalescence time.
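The coalescence time for given orbital elements can be estimated directly from \cite{peters1964gravitational}; the sketch below (ours) combines the circular-orbit result $t_{\rm c}=a^4/(4\beta)$ with the commonly used $(1-e^2)^{7/2}$ eccentricity correction, an approximation to the full Peters integral. For the illustrative system in the caption of Figure~\ref{fig:PeWG} ($P=1.5$~hours, $e=0.69$, $m_1=m_2=1.2~\rm M_{\odot}$) it recovers the quoted $\approx 3$~Myr:

```python
import math

G = 6.674e-11     # m^3 kg^-1 s^-2
C = 2.998e8       # m/s
MSUN = 1.989e30   # kg
MYR = 3.156e13    # s

def coalescence_time_myr(p_hours, e, m1_msun, m2_msun):
    """Approximate GW coalescence time (Peters 1964), in Myr.

    Uses t_c = a^4 / (4 beta) for a circular orbit, scaled by the
    (1 - e^2)^(7/2) eccentricity correction.
    """
    m1, m2 = m1_msun * MSUN, m2_msun * MSUN
    mtot = m1 + m2
    p = p_hours * 3600.0
    a = (G * mtot * p**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)  # Kepler III
    beta = 64.0 / 5.0 * G**3 * m1 * m2 * mtot / C**5
    return a**4 / (4.0 * beta) * (1.0 - e**2) ** 3.5 / MYR

# Example system from the caption of Figure 2: roughly 3 Myr to merger.
t_example = coalescence_time_myr(1.5, 0.69, 1.2, 1.2)
```

The same approximation gives $\sim 300$~Myr for the Hulse--Taylor binary B1913+16 with the parameters of Table~\ref{tab:DNS}, consistent with its well-known coalescence time.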
Given the orbital properties of the population and the estimated time distributions we are able to predict the formation rate $\rate$ of \acp{DNS} which will merge in a Hubble time (assuming $H_0^{-1}=\hubbleTimeGyrs~\textrm{Gyr}$ in a flat $\rm\Lambda CDM$ cosmology; \citealt{ade2016planck}). If a system has a delay time of less than a Hubble time we include it in the formation rate $\rate$. Formation rates are calculated for a galaxy with a continuous star formation rate of $f_{\textrm{SFR}}=2.0~\rm M_{\odot}/yr$ \citep{chomiuk2011toward}, with all systems in our simulated universe born in binaries\footnote{ While our models only include binaries, our orbital period distribution allows wide systems to evolve effectively as single stars. In fact, we find that more than half of our simulated binaries never engage in mass transfer.}. The star formation rate is chosen to mimic the Milky Way value of $f_{\textrm{SFR}}=1.9\pm0.4~\rm M_{\odot}/yr$ \citep{chomiuk2011toward}; any shifts in the chosen value would proportionately shift the quoted DNS formation rate. A summary of all the formation rates and Bayes factors for the different variations is given in Table \ref{tab:models}. \begin{figure} \centering \includegraphics[trim={2.5cm 7.0cm 3.5cm 7.9cm},clip,scale=0.56]{fig2.pdf} \caption{ Predicted $P-e$ distribution of Galactic \acp{DNS} under the \Fiducial~model. \textcolor{gray}{\textbf{Gray}} dots are all \acp{DNS} at \ac{DCO} formation. \ac{DCO} period and eccentricity are evolved forward from birth until present age given gravitational--wave radiation, removing a fraction of the short-lived short-period binaries from the observable population. Coloured dots represent the \ac{DNS} distribution at present age. 
Colour denotes the type of \ac{CE} phase: \ltwo{red} for a single-core, in which only the donor has a clear core-envelope separation, and \lthree{yellow} for a double-core \ac{CE} phase, in which both the donor and the accretor have a clear core-envelope separation. The single-core and double-core \ac{CE} phases can be, in most cases, associated with \textit{Channel~I} and \textit{Channel~II} respectively (see Section \ref{subsubsec:FormationChannels}, Figures \ref{fig:channelI} and \ref{fig:channelII} for more details). \textit{Channel~I} and \textit{Channel~II} are the first and second most common formation channels respectively. \lfour{Purple} diamonds represent the observed Galactic \acp{DNS}; all observed systems have precise $P-e$ measurements with error bars within the thickness of the symbol. The black curve illustrates a gravitational--wave driven $P-e$ evolution from \ac{DCO} formation to merger; this system, with initial $P=1.5~\rm hours$, $e=0.69$ and characteristic \ac{NS} masses $m_{1}=m_{2}=1.2~\rm M_{\odot}$, would merge in $\approx 3$ Myr through gravitational--wave emission. } \label{fig:PeWG} \end{figure} \subsection{On the \Fiducial~Model} \label{subsec:FiducialResults} \subsubsection{Formation Channels} \label{subsubsec:FormationChannels} There are two main ways that DNSs can form in our \Fiducial~model. We call these two dominant channels \textit{Channel~I} and~\textit{II}; some variations on these channels with additional mass transfer episodes or a different sequential order are possible. Below we will explain the crucial steps in these formation channels, mentioning the fraction $f$ of systems that went through different stages of binary evolution. We find that 0.13 per cent of all simulated binaries become \acp{DNS}. \textit{Channel~I}, illustrated in Figure~\ref{fig:channelI}, is responsible for the formation of roughly $\dominantFormationChannel$ per cent of all \acp{DNS}. 
This formation channel is consistent with the canonical channel described by, e.g., \cite{bhattacharya1991formation} and \cite{tauris2006formation}. \textit{Channel~I} involves a single-core \ac{CE} phase in which the primary has already collapsed into a \ac{NS}. A single-core \ac{CE} phase occurs when only one of the stars has a clear core--envelope separation; compact objects, \ac{MS} stars and \ac{HeMS} stars are all assumed to lack such a separation. This channel proceeds as follows:\\ \textit{Channel~I}:\\ \begin{enumerate*}[label=(\roman*), itemjoin={{\\}}, itemjoin*={{\\}}] \item The two stars begin their evolution with the more massive one (the primary) evolving faster than its companion. \item \textit{$\approx$22 per cent of all the initial systems experience stable mass transfer from the primary during the \ac{HG} phase onto a \ac{MS} secondary.} This is because $52$ per cent of the primaries never expand enough to start the mass transfer, and of the ones that do $47$ per cent are stable during this phase (0.48~$\times$~0.47~$\approx$~0.22). \item \textit{$\approx$4 per cent of those $\approx$22 per cent systems contain a primary that experiences a \ac{SN} explosion producing a \ac{NS} and remaining in a bound orbit.} In the mass transfer episode the primary becomes a \ac{HeMS} star. The majority of the \ac{HeMS} stars are either too light to become \acp{NS} or heavy enough to become BHs. Only 30 per cent of them have the mass of a \ac{NS} progenitor. In this first \ac{SN} explosion, there are ten times more \acp{CCSN} than there are \acp{ECSN} but, given the higher natal kick magnitude, their survival rate is only 9 per cent compared to 47 per cent of the \acp{ECSN}. \item \textit{$\approx$25 per cent of those $\approx$4 per cent systems experience and survive a \ac{CE} phase initiated by the post-\ac{MS} secondary.} Only 33 per cent of the secondaries expand enough to engage in \ac{RLOF} mass transfer.
This second mass transfer episode, with a primary \ac{NS} accretor, is usually dynamically unstable and leads to a \ac{CE} phase. 85 per cent of these systems are able to successfully eject their envelope, hardening the binary by two or three orders of magnitude. \item \textit{$\approx$40 per cent of those $\approx$25 per cent systems begin a third mass transfer episode (case~BB) of a \ac{HeHG} secondary onto a \ac{NS} primary.} There the \ac{HeHG} star recycles its \ac{NS} companion while being stripped for a second time, until only a CO core remains (we call this ultra-stripped; see Section~\ref{subsubsec:sne}). Half of those cores are in the right mass range to become a \ac{NS} (lighter cores may form a \ac{NS}--white dwarf binary while heavier cores yield a \ac{NS}--BH binary). \item \textit{$\approx$96 per cent of those $\approx$40 per cent systems will remain bound after the second \ac{SN} explosion and form a \ac{DNS}.} The tight post-\ac{CE} orbit and the reduced natal kicks for \acp{USSN} make it relatively easy for binaries to survive the natal kick and form a \ac{DNS} system. The systems that are disrupted lost enough mass and/or had orbital velocities low enough that even the reduced \ac{USSN} natal kick unbinds them. \end{enumerate*} \\ The secondary formation \textit{Channel~II}, illustrated in Figure~\ref{fig:channelII}, is responsible for forming approximately $\secondFormationChannel$ per cent of \acp{DNS}; it is prevalent for systems with initial mass ratio $q_{\rm ZAMS}\approx1$ and, therefore, similar evolutionary timescales of both stars in the binary. This channel experiences a double-core \ac{CE} phase \citep{brown1995doubleCore,dewi2006double,hwang2015twin}, in which both of the stars have a clear core-envelope separation, before the first \ac{SN}.
\textit{Channel~II} proceeds as follows:\\ \textit{Channel~II}:\\ \begin{enumerate*}[label=(\roman*), itemjoin={{\\}}, itemjoin*={{\\}}] \item Again, the two stars begin their evolution with the primary evolving faster than its companion. \item \textit{$\approx 1$ per cent of the primaries start their first mass transfer episode as either a \ac{CHeB} or an EAGB star with a secondary that is a slightly less evolved \ac{HG} or a \ac{CHeB} star.} Almost all of these systems (90 per cent) initiate a double-core \ac{CE} phase during this mass transfer episode. \item \textit{$\approx$35 per cent of those $\approx 1$ per cent binaries can eject their envelopes.} Only a tiny fraction ($\approx$2 per cent) lose enough mass to become white dwarfs whereas the majority become two \ac{HeMS} stars in a tighter orbit. \item \textit{$\approx$87 per cent of those $\approx$35 per cent systems have primaries that can initiate a second mass transfer episode (case~BB)}. The primaries donate their helium envelope to the secondary \ac{HeMS} star. All these episodes are dynamically stable. \item \textit{$\approx$35 per cent of those $\approx$87 per cent systems have a primary that experiences a \ac{SN} explosion producing a \ac{NS} while the binary remains bound}. As in \textit{Channel~I}, the mass transfer episodes reduce the mass of the primary and only 63 per cent can experience a \ac{SN} explosion. They are all \acp{CCSN} and, although the \ac{CE} phase leaves them in a tight orbit, the higher natal kick magnitude still disrupts 45 per cent of these systems. \item \textit{$\approx$80 per cent of those $\approx$35 per cent systems begin a third mass transfer episode (case~BB) from the secondary to a \ac{NS} accretor}. This mass transfer episode onto the \ac{NS} is defined to always be stable and the secondary now becomes an ultra-stripped CO core.
\item \textit{$\approx$55 per cent of those $\approx$80 per cent systems survive the secondary's \ac{SN} explosion, which leaves behind a second \ac{NS}}. 71 per cent of the CO cores are massive enough to explode as a \ac{SN}, and given the previous episode of mass transfer they are all \acp{USSN}. The lower natal kicks and tighter orbits yield a survival rate of 77 per cent, leaving a \ac{DNS} system behind. \end{enumerate*} \\ All simulated DNS systems are shown in the $P-e$ distributions in Figures~\ref{fig:PeWG} and \ref{fig:PeMain} and in Appendix \ref{sec:likelihood}. Most of the DNS systems that survived a single-core CE phase come from \textit{Channel~I}, while most of those that survived a double-core CE phase come from \textit{Channel~II}. The rest of the \acp{DNS}, about 9 per cent of the total, come from more exotic or fortuitous channels, including non-recycled \acp{DNS} ($\leq1$ per cent of all Galactic-like \acp{DNS}). Non-recycled \ac{DNS} progenitors are systems which never experienced stable mass transfer onto a \ac{NS}, the process which leads to spin-up and recycling \citep{1538-4357-550-2-L183}; all of them experienced \acp{CE} in our models, which we assume to be inefficient at spinning up the \ac{NS} and suppressing its magnetic field \citep{MacLeodRamirezRuiz:2015}. We find that our \Fiducial~model has a formation rate of $\rate=\formationVarOne$ per Milky Way equivalent galaxy per Myr. All of our \acp{DNS} experience and survive at least one \ac{CE} phase, \doubleCoreCEE \ per cent of them in a double-core scenario. \\ \begin{figure*} \includegraphics[trim={6.7cm 2.0cm 6.5cm 2.0cm},clip,angle=90,width=\textwidth]{fig5.pdf} \caption{Mass ratio distribution of merging \acp{DNS} (blue histogram) and its cumulative distribution function (orange curve) for three \ac{SN} fallback and natal kick models: (01) Fryer Rapid [left], (03) Fryer Delayed [middle], (04) M\"uller [right].
See Sections \ref{subsubsec:FiducialMassRatio} and \ref{subsec:massRatio} for a discussion of the evolutionary channels leading to sharp features in the histograms.} \label{fig:qCDFs} \end{figure*} \begin{figure*} \includegraphics[trim={6.3cm 2.5cm 6.7cm 2.5cm},clip,angle=90,width=\textwidth]{fig6.pdf} \caption{Time distributions of merging \acp{DNS} (blue histogram) for our \Fiducial~model (01): formation time $t_{\rm form}$ from ZAMS to \ac{DNS} formation [left], coalescence time $t_{\rm c}$ from \ac{DNS} formation to merger [middle] and total delay time $t_{\rm delay}$ from ZAMS to merger [right]. We show a ${\rm d}\rate/{\rm d}t\propto t_{\rm delay}^{-1}$ curve for comparison with the delay time distribution in the right panel. The apparent gap in the middle panel is a sampling artefact.} \label{fig:tdel} \end{figure*} \begin{figure} \includegraphics[height=0.8\textheight]{fig3.pdf} \caption{ Evolutionary history of formation \textit{Channel~I} (top to bottom); 70 per cent of all \acp{DNS} in our \Fiducial~population were formed through this channel. The numbers in the callout symbols represent the percentage of simulated binaries that end up in that particular stage among those that follow the preceding evolutionary history. For example, $22$ per cent of all simulated binaries experience stable mass transfer from a \ac{HG} primary onto a \ac{MS} secondary; among those $22$ per cent, $4$ per cent of systems will have a primary that undergoes a \ac{SN} producing a \ac{NS} while remaining in a bound orbit; and so on. } \label{fig:channelI} \end{figure} \begin{figure} \includegraphics[height=0.8\textheight]{fig4.pdf} \caption{ Evolutionary history of formation \textit{Channel~II} (top to bottom); 21 per cent of all \acp{DNS} in our \Fiducial~population were formed through this channel. The numbers in the callout symbols represent the percentage of simulated binaries that end up in that particular stage among those that follow the preceding evolutionary history.
For example, $1$ per cent of all simulated binaries initiate mass transfer while both companions are post-\ac{MS} stars; among those $1$ per cent, $35$ per cent enter and survive a double-core \ac{CE} phase; and so on. } \label{fig:channelII} \end{figure} \begin{figure*} \includegraphics[trim={2cm 1cm 2.0cm 2.5cm},clip,width=0.325\textwidth]{fig7left.pdf} \includegraphics[trim={2.0cm 1cm 2.5cm 2.5cm},clip,width=0.325\textwidth]{fig7middle.pdf} \includegraphics[trim={2cm 1cm 2.5cm 2.5cm},clip,width=0.325\textwidth]{fig7right.pdf} \caption{Predicted mass distribution of all \acp{DNS} under three different \ac{SN} fallback and natal kick models: (01) Fryer Rapid [left], (03) Fryer Delayed [center], (04) M\"uller [right]. Primary and secondary masses of the \acp{NS} are shown on the horizontal and vertical axes, respectively. \textcolor{red}{Red} diamonds denote the observed Galactic \acp{DNS} with well-constrained masses (see Table \ref{tab:DNS}), with pulsar and companion \ac{NS} masses shown on the horizontal and vertical axes, respectively. \textcolor{blue}{Blue} dots correspond to the \ac{DNS} masses at \ac{DCO} formation. The density map shows the two-dimensional \ac{DNS} mass probability distribution; the histograms show its one-dimensional linear projections. See Sections \ref{subsubsec:FiducialMassRatio} and \ref{subsec:massRatio} for a discussion of the evolutionary channels leading to sharp features in the histograms.} \label{fig:bnskde} \end{figure*} \subsubsection{Mass Ratio Distribution} \label{subsubsec:FiducialMassRatio} Figure~\ref{fig:bnskde} shows the mass distribution of all the Galactic \acp{DNS} at the moment of birth, while Figure~\ref{fig:qCDFs} shows the distribution of the predicted mass ratio $q_{\rm DCO}$ for the merging Galactic \acp{DNS}. We define $q_{\rm DCO}=m_{\rm{NS,lighter}}/m_{\rm{NS,heavier}}$; the heavier \ac{NS} does not necessarily descend from the initially more massive star at ZAMS.
In the \Fiducial~model, the initially less massive star produces the more massive \ac{NS} in $\heavierSecondaryAtZAMS$ per cent of the systems, due to accretion of mass from the companion, and the resulting core growth, during the early phases of evolution. The mass ratio of these systems lies between $\minMassRatioFiducial \leq q_{\rm DCO} \leq 1$. Among the merging Galactic \acp{DNS}, \qAboveEightyFiducial \ per cent of the systems have $q_{\rm DCO} > 0.8$, \qAboveNinetyFiducial \ per cent have $q_{\rm DCO} > 0.9$ and \qAboveNinetyFiveFiducial \ per cent have $q_{\rm DCO} > 0.95$. There are two significant peaks in this distribution: \begin{enumerate*}[label=(\roman*), itemjoin={{, }}, itemjoin*={{, and }}] \item the first peak, with $\approx16$ per cent of systems, has $q_{\rm DCO}\approx0.88$; most systems close to this mass ratio are formed through \textit{Channel~I}, with the first \ac{NS} formed in an \ac{ECSN} (with a gravitational mass of $1.26~\rm M_{\odot}$) and the second in an \ac{USSN} (with a lower remnant mass of $\minMassRapid~\rm M_{\odot}$) \item the second peak, with $\approx14$ per cent of the total \acp{DNS}, has a mass ratio $q_{\rm DCO}\approx1$, from $q_\mathrm{ZAMS} \approx 1$ systems that evolved through the double-core CE, with a low-mass \ac{CCSN} and an \ac{USSN} (i.e. \textit{Channel~II}). \end{enumerate*} The mass range of NSs in our \Fiducial~population is $[m_{\rm{NS,min}}, m_{\rm{NS,max}}]=[\minMassRapid,\maxMassRapid]~\rm M_{\odot}$. \subsubsection{Time Distributions} \label{subsubsec:FiducialTimeDistribution} We define the following timescales: \begin{enumerate*}[label=(\roman*), itemjoin={{, }}, itemjoin*={{, and }}] \item formation time $t_{\rm form}$ as the time from ZAMS to \ac{DCO} formation \item coalescence time $t_{\rm c}$ as the time from \ac{DCO} formation to merger \item total delay time $t_{\rm delay}$ as the time from ZAMS to merger.
\end{enumerate*} Figure~\ref{fig:tdel} shows the distributions for $t_{\rm form}$, $t_{\rm c}$ and $t_{\rm delay}$ for our \Fiducial~model. The time distributions include only those \acp{DNS} with a merger time shorter than the Hubble time. The extreme ends of the time distributions are systems with $\minFormTimeMyrsFiducial \leq t_{\rm{form}}/\textrm{Myr} \leq \maxFormTimeMyrsFiducial$, $\minCoalTimeyrsFiducial \leq t_{\rm{c}}/\textrm{yr}$ and $\minDelayTimeMyrsFiducial \leq t_{\rm{delay}}/\textrm{Myr}$. Fewer than 0.5 per cent of merging \acp{DNS} have very short coalescence times of less than 10 Myr (see middle panel of Figure~\ref{fig:tdel} and outliers in Figure~\ref{fig:kicks}---note that the apparent gap in the middle panel is a sampling artefact, and does not represent an actual gap in the population). These systems usually experience \acp{CE}, reduce their orbit during case~BB mass transfer and have fortuitous natal kick directions which place them on a low-periapsis orbit at \ac{DCO} formation. Systems with $t_{\rm{c}}>10^{-3}$~Gyr represent the bulk of the population in Figure~\ref{fig:PeWG}; shorter coalescence times are exhibited by outliers with orbital periods of $\lesssim 10^{-2}$~days. \subsubsection{Supernovae} \label{subsubsec:FiducialSupernovae} Of all the \acp{NS} in \ac{DNS} systems, \numberECSN~per cent were formed via \acp{ECSN}. \numberSecondariesUSSN~per cent of the initially less massive secondaries in these \acp{DNS} experienced ultra-stripping before exploding. Only \doubleECSN~per cent of the \ac{DNS} systems had both stars experiencing an \ac{ECSN}. In \ECSNUSSN~per cent of the \acp{DNS}, the primary went through an \ac{ECSN} and was later recycled by case~BB mass transfer from the secondary. The secondary is stripped by this mass transfer and explodes in an \ac{USSN}.
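The link noted above between orbital periods of $\lesssim 10^{-2}$~days and very short coalescence times can be sketched with the circular-orbit result of \cite{peters1964gravitational}. The function below is an illustrative estimate only, not the COMPAS implementation: it neglects eccentricity (which shortens the inspiral) and uses standard constants together with Kepler's third law to convert period to semi-major axis.

```python
import math

G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
C = 2.998e8          # speed of light [m s^-1]
MSUN = 1.989e30      # solar mass [kg]
YR = 3.156e7         # year [s]

def coalescence_time_myr(p_days, m1_msun, m2_msun):
    """Circular-orbit gravitational-wave coalescence time (Peters 1964)
    for a DNS with orbital period p_days and masses in solar masses."""
    m1, m2 = m1_msun * MSUN, m2_msun * MSUN
    m_tot = m1 + m2
    p = p_days * 86400.0
    # Kepler's third law: semi-major axis from period and total mass
    a = (G * m_tot * p**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)
    # Peters (1964) circular-orbit inspiral time, t_c = 5 c^5 a^4 / (256 G^3 m1 m2 M)
    t_c = (5.0 / 256.0) * C**5 * a**4 / (G**3 * m1 * m2 * m_tot)
    return t_c / YR / 1.0e6
```

For J0737-3039-like parameters ($P\approx0.102$~d, $1.34+1.25~\rm M_\odot$) this gives roughly $90$~Myr, close to its known inspiral time, while a $10^{-2}$~day orbit of the same masses coalesces in well under a Myr, as stated above.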
In our single stellar models at $Z=Z_{\rm \odot}$, \ac{ECSN} progenitors have masses at ZAMS of $\minSinglePrimaryECSN \leq m/\rm{M_{\odot}} \leq \maxSinglePrimaryECSN$; more recent detailed models find that the mass range of single-star progenitors at metallicity $Z=0.02$ which explode as an \ac{ECSN} is $7.5 \leq m/\rm{M_{\odot}} \leq 9.25$ \citep{poelarends2008}. Interaction during binary evolution increases this range to $\minBinaryPrimaryECSN \leq m_1/\rm{M_{\odot}} \leq \maxBinaryPrimaryECSN$ for the primary and $\minSecondaryECSN \leq m_2/\rm{M_{\odot}} \leq \maxSecondaryECSN$ for the secondary in our study. Detailed studies of \acp{ECSN} from interacting binary systems find that the mass range for an interacting primary at $Z=0.02$ is $13.5 \leq m/\rm{M_{\odot}} \leq 17.6$ \citep{poelarends2017}, where $17.6~\rm M_{\odot}$ is the highest-mass primary used in that study. Metallicity does not play a strong role in modifying the \ac{NS} mass range. We explore lower-metallicity populations at $Z=0.002$ (17) and $Z=0.001$ (18), and find that, for single stars, the \ac{ECSN} progenitor masses at ZAMS decrease to $7.0 \leq m/\rm{M_{\odot}} \leq 7.2$ and $6.6 \leq m/\rm{M_{\odot}} \leq 6.9$, respectively. However, the remnant mass from an \ac{ECSN} does not vary as a function of metallicity and is fixed at $m_{\rm{ECSN}}=1.26~\rm M_{\odot}$. Furthermore, our minimum and maximum \ac{NS} masses of $[m_{\rm{NS,min}}, m_{\rm{NS,max}}]=[\minMassRapid,\maxMassRapid]~\rm M_{\odot}$ do not change as a function of metallicity. \subsection{Variations} COMPAS is a modular code designed to explore the effects of different treatments of uncertain physical assumptions. Given the complexity of the formation channels, we explore the uncertainties by changing one assumption per variation. This allows us to link all the changes in the population and its formation channels to a specific physical treatment and test the robustness of our \Fiducial~model.
The parameters of the physical interactions may be correlated; since computing these correlations is computationally expensive \citep[see e.g.][]{barrett2017accuracy} we do not consider them here. \subsubsection{On Mass Transfer Stability Criteria} \label{subsubsec:MTstability} Stable case~BB mass transfer leads to orbital periods similar to those of the observed Galactic \ac{DNS} population. Meanwhile, unstable case~BB, leading to a \ac{CE} phase, typically results in sub-hour orbital periods (see right panel of Figure~\ref{fig:PeMain}); such orbital periods yield coalescence times of $\lesssim 10$~Myr. About $\approx 90$ per cent of Galactic \ac{DNS} progenitors in the \Fiducial~model experience case~BB mass transfer. At the onset of the episode, \cassBBsystemsAboveTwo \ per cent of systems have mass ratio $q \geq 0.2$ and \cassBBsystemsAboveFour \ per cent have $q \geq 0.4$. \cite{claeys2014theoretical} assume that mass transfer from \ac{HeHG} donors onto a degenerate accretor will be stable if $q > 0.21$ (see Table 2 of that paper), while \cite{tauris2015ultra} propose to consider mass ratio and orbital period to define stability criteria in order to account for the evolutionary phase of the donor at the onset of \ac{RLOF}; in that study, orbital periods of $P \geq 0.07\rm~days$ at the onset of \ac{RLOF} lead to stable case~BB mass transfer. In our \Fiducial~model, all Galactic \ac{DNS} progenitors have $P \geq 0.07\rm~days$ at the onset of case~BB mass transfer. In $\rm COMPAS$, we probe the extreme cases of either stable or dynamically unstable case~BB mass transfer for a whole population. The formation rates $\rate$ for stable (01) and dynamically unstable (02) case~BB mass transfer agree to within a few per cent, with $\{\rate_{\rm01},\rate_{\rm02}\}=\{\formationVarOne,\formationVarTwo\}$ per Galaxy per Myr.
Nevertheless, the log~Bayes~factor of model (02) relative to model (01) is $\log~\bayesFactor=\bayesVarTwo$, which favours our \Fiducial~model and thus significantly favours stable over unstable mass transfer in this dichotomous comparison. In our \Fiducial~population, the assumption of case~BB mass transfer being always stable is in broad agreement with mass ratio constraints from \cite{claeys2014theoretical}, which would result in more than 90 per cent of these systems experiencing stable mass transfer. If instead we used the stability criteria presented in \cite{tauris2015ultra} \citep[as shown in][]{2018arXiv180105433K}, all of the aforementioned systems would experience stable mass transfer. \begin{figure*} \centering \includegraphics[trim={7.1cm 1.0cm 6.9cm 2.8cm},clip,angle=90,scale=0.7]{fig8.pdf} \caption[]{Predicted $P-e$ distribution of Galactic \acp{DNS} at \ac{DCO} formation: (00) \cite{stevenson2017formation} standard [left], (01) \Fiducial~[middle], (02) variation with unstable case~BB mass transfer [right] (for more details see Table \ref{tab:DNS}). \lfour{Purple} diamonds represent the Galactic \acp{DNS}. Colour denotes the type of \ac{CE} phase: \lone{blue} for no \ac{CE} phase, \ltwo{red} for a single-core and \lthree{yellow} for a double-core \ac{CE} phase. The single-core and double-core \ac{CE} formation are typically associated with \textit{Channel~I} and \textit{Channel~II}, respectively. \lone{Blue} dots on the left panel correspond to double-\acp{ECSN} with $\sigma_{\rm ECSN}=0\rm~km~s^{-1}$ in $\rm COMPAS\_\alpha$.} \label{fig:PeMain} \end{figure*} \subsubsection{On the ``Delayed'' Explosion Scenario} To test the effect of the explosion mechanism on our predictions, we investigate three prescriptions: the first is the ``rapid'' (01) explosion mechanism as presented in our \Fiducial~model (see Sect.~\ref{subsubsec:FiducialSupernovae}).
The second is the ``delayed'' explosion mechanism, applied in model (03) and explained below, while the third is the ``M\"uller'' (04) prescription presented in Sect.~\ref{sec:Muller}. The ``delayed'' explosion mechanism proposed in \cite{fryer2012compact} allows for accretion onto the proto-\ac{NS} before the standing-accretion shock instability or convection become powerful enough to support a neutrino-driven explosion. This accretion removes the mass gap and creates a continuous remnant mass distribution from \ac{NS} to BH. This continuous distribution of compact-object remnant masses requires us to define an arbitrary mass cut to distinguish \acp{NS} from BHs; we follow \cite{fryer2012compact} and set this mass cut to $2.5\rm~M_\odot$, which is higher than the maximum mass of $2.0\rm~M_\odot$ from the other explosion mechanisms used in this study. The formation rate for the ``delayed'' explosion mechanism is $\rate=\formationVarThree$ per Milky Way equivalent galaxy per Myr. The ``delayed'' explosion mechanism, which changes the remnant mass for a given CO core at the moment of a \ac{SN}, produces a slightly different $P-e$ distribution than the ``rapid'' explosion mechanism because of the impact of mass loss at the moment of the explosion on the binary's orbit. The middle panel of Figure~\ref{fig:bnskde} shows that the ``delayed'' explosion mechanism lies close to the observed population and is preferred over the ``rapid'' explosion mechanism in the \Fiducial~model with $\log~\bayesFactor=\bayesVarThree$. The ``delayed'' explosion scenario, which does not have a mass gap between \acp{NS} and BHs, has the largest likelihood of all models. \subsubsection{On the Supernova Kick Distribution and Magnitude} Both mass loss during a \ac{SN} and the natal kick magnitude and direction modify the orbital parameters and determine whether the binary is disrupted.
Low natal kick \acp{ECSN} and \acp{USSN} therefore play a prominent role in \ac{DNS} formation and possible eventual merger, as would low-mass iron-core-collapse \acp{SN} with a reduced natal kick. Our modelling allows for testing a bimodal natal kick distribution, which distinguishes between \acp{CCSN} (high mode, $\sigma_{\rm{high}} = 265~\rm km~s^{-1}$), \acp{ECSN} (low mode, $\sigma_{\rm{low}} = 30~\rm km~s^{-1}$) and \acp{USSN} (low mode). When allowing for a bimodal distribution, but with only \acp{USSN} (06) or \acp{ECSN} (07) contributing to the low component of the Maxwellian distribution, the \ac{DNS} formation rate $\rate$ drops by a factor of $\approx 2$ relative to the \Fiducial~model. We also simulated a single high-mode distribution (05) with high natal kicks for both \acp{USSN} and \acp{ECSN}, which is also the assumption in $\rm COMPAS\_\alpha$ (00). In this case, $\rate$ decreases by a factor of $\approx 3$; this single high-mode variation (05) also fails to create the observed longer period \acp{DNS} with low eccentricities. The formation rates and log~Bayes~factors are $\{\rate_{\textrm{05}},\rate_{\textrm{06}},\rate_{\textrm{07}}\}=\{\formationVarFive,\formationVarSix,\formationVarSeven\}$ per Milky Way equivalent galaxy per Myr and $\log~\{\bayesFactor_{\textrm{05}},\bayesFactor_{\textrm{06}},\bayesFactor_{\textrm{07}}\}=\{\bayesVarFive,\bayesVarSix,\bayesVarSeven\}$ for variations with a single high mode (05), $\sigma_{\rm ECSN}=\sigma_{\rm high}$ (06) and $\sigma_{\rm USSN}=\sigma_{\rm high}$ (07), respectively. Given the log~Bayes~factors, the \Fiducial~model is significantly preferred over single high mode (05) and $\sigma_{\rm USSN}=\sigma_{\rm high}$ (07) variations. It is preferred, but not significantly, over the $\sigma_{\rm ECSN}=\sigma_{\rm high}$ (06) variation.
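The bimodal kick prescription described above can be sketched as follows. A Maxwellian of scale $\sigma$ is the magnitude of an isotropic three-dimensional Gaussian with per-component dispersion $\sigma$, so drawing three normal components and taking the norm reproduces it exactly. The function name and the string labels below are illustrative only, not the COMPAS interface.

```python
import numpy as np

def draw_kicks(n, sn_types, sigma_high=265.0, sigma_low=30.0, seed=0):
    """Draw natal kick magnitudes [km/s] from a bimodal Maxwellian:
    CCSNe use the high mode, ECSNe and USSNe the low mode."""
    rng = np.random.default_rng(seed)
    # Assign the low-mode dispersion to ECSNe/USSNe, the high mode otherwise
    sigma = np.where(np.isin(sn_types, ["ECSN", "USSN"]), sigma_low, sigma_high)
    # Maxwellian = norm of an isotropic 3D Gaussian of scale sigma
    v_xyz = rng.normal(size=(n, 3)) * sigma[:, None]
    return np.linalg.norm(v_xyz, axis=1)
```

The mean of a Maxwellian of scale $\sigma$ is $2\sigma\sqrt{2/\pi}$, i.e.\ $\approx 423~\rm km~s^{-1}$ for the high mode and $\approx 48~\rm km~s^{-1}$ for the low mode, which a large sample from the sketch above recovers.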
\begin{figure*} \includegraphics[trim={7cm 0cm 7cm 0cm},clip,width=\textwidth]{fig9.png} \caption{Masses of merging compact binaries predicted by the \texttt{Fiducial} model at three different metallicities: $\rm Z=0.001$ (left), $\rm Z=0.002$ (center) [cf.~\citealt{stevenson2017formation}] and solar metallicity $\rm Z=0.0142$ (right). Coloured regions correspond to masses matching advanced LIGO detections within the reported 90 per cent credible intervals.} \label{fig:GWs} \end{figure*} \subsubsection{On the M\"uller prescription} \label{sec:Muller} We introduce the ``M\"uller'' (04) explosion prescription as fitting formulae to the detailed models described by \cite{mueller2016simple}. The full description and fitting formulae are provided in Appendix \ref{sec:AppMuller} for use in other population synthesis studies. The ``M\"uller'' prescription maps a CO core mass to a \ac{NS} remnant mass and a natal kick. The remnant and ejecta mass and the explosion energy are obtained semi-analytically and calibrated to numerical models. We update the analytic supernova models of \cite{mueller2016simple} by using a shock radius factor $\alpha_\mathrm{turb}=1.18$ and a compression ratio at the shock $\beta=3.2$, which fit constraints on the progenitor masses of Type~IIP supernovae \citep{smartt2015} slightly better than the original version. The natal kick velocity is obtained from these by assuming a uniform ejecta anisotropy \citep{janka2017}. The natal kick magnitude, with a dominant mode at $v_{\rm{kick}}\approx 100~\rm km~s^{-1}$, is therefore correlated with the \ac{NS} remnant mass, unlike for the other models considered here. The mass range of \acp{NS} in our evolved population, using the ``M\"uller'' explosion mechanism, is $[m_{\rm{NS,min}}, m_{\rm{NS,max}}]=[\minMassMuller,\maxMassMuller]~\rm M_{\odot}$. The formation rate and log~Bayes~factor of model (04) are $\rate=\formationVarFour$ per Milky Way equivalent galaxy per Myr and $\log~\bayesFactor=\bayesVarFour$.
This Bayes factor was calculated using only the $P-e$ distribution. The mass distribution (Figures \ref{fig:bnskde} and \ref{fig:chirpMass}) will play an important role in distinguishing the ``rapid'' (01), ``delayed'' (03) and ``M\"uller'' (04) explosion mechanism variations. \subsubsection{On the Comparison with $\rm COMPAS\_\alpha$} Here we compare our \Fiducial~model to the one described by \citet[][(00), $\rm COMPAS\_\alpha$]{stevenson2017formation}. The latter uses different parameterisations: both \acp{CCSN} and \acp{USSN} natal kicks are drawn from a high-mode Maxwellian distribution and all \acp{ECSN} have $v_{\rm kick}=0~\rm km~s^{-1}$; stability is determined using $\zeta_{\textrm{SPH}}$ for all stellar phases, which often leads to dynamically unstable mass transfer, particularly during case~BB \ac{RLOF}; and the binding energy parameter is $\lambda_{\textrm{fixed}}=0.1$ for all stars in any evolutionary stage. That study was successful in explaining all gravitational--wave events from the first advanced LIGO observing run \citep[GW150914, LVT151012 and GW151226;][]{GW150914, O1:BBH, GW151226} via a single evolutionary scenario: isolated binary evolution. However, the same assumptions fail to reproduce the observed Galactic \ac{DNS} population (see left panel of Figure~\ref{fig:PeMain}). Model (00), which yields a \ac{DNS} formation rate of $\rate_{\rm 00}=\formationVarZero$ per Milky Way equivalent galaxy per Myr, is the least preferred model among our variations, with a log~Bayes~factor of $\log~\bayesFactor=\bayesVarZero$. In particular, the extreme hardening of case~BB binaries through a second \ac{CE} phase in $\rm COMPAS\_\alpha$ leads to a gap in the $P-e$ distribution where systems such as J0737-3039 are observed.
Among the major changes, dynamical stability during case~BB mass transfer and a bimodal natal kick distribution are preferred over the alternatives in the \Fiducial~model (see unstable case~BB mass transfer (02) and single-mode natal kick distribution (05) variations), which are ruled out in our model comparison. On the other hand, the \Fiducial~model is able to explain, consistently with \cite{stevenson2017formation}, the gravitational--wave events from the first advanced LIGO observing run, as well as GW170104 \citep{GW170104}, GW170608 \citep{GW170608}, GW170814 \citep{GW170814} and the \ac{DNS} merger GW170817 \citep{GW170817}, all detected during the second observing run of advanced LIGO and advanced Virgo (see Figure~\ref{fig:GWs}). \subsubsection{On the Circularisation During Mass Transfer} Our \Fiducial~model does not circularise the orbit during a mass transfer episode, except as a consequence of dynamically unstable mass transfer (\ac{CE}). As a variation, we consider circularisation at the onset of \ac{RLOF} (e.g. as a consequence of tidal dissipation prior to mass transfer or during the episode). We allow for two types of circularisation: \begin{enumerate*}[label=(\roman*), itemjoin={{, }}, itemjoin*={{, and }}] \item circularisation to periastron $a_{\rm p}=a(1-e)$, which dissipates both orbital energy and angular momentum (12) \item circularisation to the semilatus rectum $a_{\rm SR}=a(1-e^2)$, which conserves the angular momentum of the orbit (13). \end{enumerate*} The \ac{DNS} formation rates and log~Bayes~factors are $\{\rate_{\textrm{12}},\rate_{\textrm{13}}\}=\{\formationVarTwelve,\formationVarThirteen\}$ per Milky Way equivalent galaxy per Myr and $\log~\{\bayesFactor_{\textrm{12}},\bayesFactor_{\textrm{13}}\}=\{\bayesVarTwelve,\bayesVarThirteen\}$, respectively. Rates decrease by less than a factor of $2$.
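The two circularisation variants amount to a one-line substitution each. The sketch below (a hypothetical helper, not COMPAS code) also makes explicit why variant (13) conserves angular momentum: for fixed masses, $L^2 \propto a(1-e^2)$, which is precisely the semi-major axis assigned to the circularised orbit.

```python
def circularise(a, e, mode):
    """Illustrative post-circularisation semi-major axis at the onset of RLOF.
    'periastron': a_p = a(1-e), dissipates orbital energy and angular momentum.
    'semilatus': a_SR = a(1-e^2), conserves angular momentum, since for fixed
    masses L^2 is proportional to a(1-e^2) and a circular orbit has L^2 ~ a."""
    if mode == "periastron":
        return a * (1.0 - e)
    if mode == "semilatus":
        return a * (1.0 - e**2)
    raise ValueError(f"unknown circularisation mode: {mode}")
```

For example, an orbit with $a=10$ and $e=0.5$ circularises to $a_{\rm p}=5$ under variant (12) but to $a_{\rm SR}=7.5$ under variant (13), so the angular-momentum-conserving option always leaves a wider circular orbit.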
Circularisation to periastron at the onset of mass transfer is slightly preferred over the alternatives, but not enough for us to consider it clearly preferred over the \Fiducial~model. Circularisation which conserves angular momentum is neither favoured nor disfavoured with respect to the \Fiducial~assumption (i.e. no circularisation at all). \subsubsection{On the Angular-Momentum Loss During Non-Conservative Mass Transfer} During a non-conservative mass transfer episode, the specific angular momentum of the removed matter is determined by how mass leaves the system. In our \Fiducial~assumption, any non-accreted mass is removed isotropically in the reference frame of the accretor; this mass loss mode is usually referred to as ``isotropic re-emission'' (01). Another common parameterisation is the ``Jeans'' mode (14), which consists of ejecting the mass isotropically in the reference frame of the donor, similarly to fast winds. The last possibility we take into account is the formation of a circumbinary disk (15), with a radius of $a_{\rm disk}= 2a$ \citep{artymowicz1994dynamics}, from which the mass will be ejected. While isotropic re-emission (01) and the ``Jeans'' mode (14) tend to effectively widen the orbit, that is not the case if mass is lost from a circumbinary disk (15). The formation rates of Galactic-like \acp{DNS} and the log~Bayes~factors are $\{\rate_{\textrm{14}},\rate_{\textrm{15}}\}=\{\formationVarFourteen,\formationVarFifthteen\}$ per Milky Way equivalent galaxy per Myr and $\log~\{\bayesFactor_{\textrm{14}},\bayesFactor_{\textrm{15}}\}=\{\bayesVarFourteen,\bayesVarFifthteen\}$, respectively. The \Fiducial~model is strongly preferred over the ``Jeans'' mode (14) variation; it is also mildly preferred over the circumbinary disk (15) variation. The mass loss mode also affects the future fate of the formed \acp{DNS}.
The fraction of all formed \acp{DNS} that will merge in a Hubble time is $\{f_{01},f_{14},f_{15}\}=\{\fractionVarOne,\fractionVarFourteen,\fractionVarFifthteen\}$ for the ``isotropic re--emission'', ``Jeans'' and ``circumbinary disk'' mode, respectively. \subsubsection{On the Common Envelope Parameters} We consider several variations of the parameters that govern \ac{CE} evolution: $\lambda$, which determines the envelope binding energy, and $\alpha$, which determines the amount of orbital energy needed to expel the envelope. In our \Fiducial~model, all of the \acp{DNS} experience a \ac{CE} phase and therefore varying $\lambda$ and $\alpha$ from the \Fiducial~model choices (i.e. $\lambda_{\textrm{Nanjing}}$ and $\alpha=1$) will affect the final distributions. $\lambda_{\textrm{Nanjing}}$ is a function of core mass, total mass and radius. We use a fixed value $\lambda_{\textrm{fixed}}=0.1$ (08) for comparison with previous population synthesis studies \citep[e.g.,][]{belczynski2002comprehensive}. Recently, \cite{Kruckow:2016tti} found, for several models at different masses and metallicities, that $\lambda$ depends on the radius roughly as a power law $\lambda \propto R^{\beta}$, with $-1\leq\beta\leq-2/3$. We made a fit to Figure~1 of \cite{Kruckow:2016tti} in the form $\lambda_{\textrm{Kruckow}} = 1600\times0.00125^{-\beta}R^{\beta}$, assuming a monotonically decreasing function. For our particular variation, we use the average value $\beta=-5/6$ (09). The formation rates of \acp{DNS} and the log~Bayes~factors for these variations in $\lambda$ are $\{\rate_{\textrm{08}},\rate_{\textrm{09}}\}=\{\formationVarEight,\formationVarNine\}$ per Milky Way equivalent galaxy per Myr and $\log~\{\bayesFactor_{\textrm{08}},\bayesFactor_{\textrm{09}}\}=\{\bayesVarEight,\bayesVarNine\}$ respectively, neither favouring nor disfavouring the $\lambda$ variations with respect to the \Fiducial~model.
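The quoted fit can be written directly as a function. The sketch below assumes $R$ in solar radii (the unit convention is our reading of \citealt{Kruckow:2016tti}, not stated explicitly above) and reproduces the monotonically decreasing power law.

```python
def lambda_kruckow(radius_rsun, beta=-5.0 / 6.0):
    """Power-law fit lambda_Kruckow = 1600 * 0.00125**(-beta) * R**beta
    to Fig. 1 of Kruckow et al. (2016), with beta in [-1, -2/3].
    R is assumed to be in solar radii (our assumption)."""
    return 1600.0 * 0.00125 ** (-beta) * radius_rsun ** beta
```

With the average $\beta=-5/6$ used in variation (09), this gives e.g.\ $\lambda\approx0.13$ at $R=100$, decreasing toward $\approx0.02$ at $R=1000$, i.e.\ more tightly bound envelopes for more extended giants.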
Higher values of $\alpha$ lead to wider post--\ac{CE} orbits than low values of $\alpha$. Without exploring the full and continuous parameter space, we vary $\alpha$ to extreme values of $\alpha_{\rm min}=0.1$ (10) and $\alpha_{\rm max}=10$ (11). Values of $\alpha>1$ presuppose that there are substantial additional energy sources, such as recombination energy and/or nuclear energy \citep{podsiadlowski2010explosive, ivanova2013common}, that contribute to the energy budget for \ac{CE} ejection, in addition to the orbital energy. The extreme value of $\alpha_{\rm max}=10$ is chosen for illustration rather than to mimic a particular physical interaction; such a value could only be explained by a contribution from nuclear energy. The formation rates of \acp{DNS} and the log~Bayes~factors for variations in $\alpha$ are $\{\rate_{\textrm{10}},\rate_{\textrm{11}}\}=\{\formationVarTen,\formationVarEleven\}$ per Milky Way equivalent galaxy per Myr and $\log~\{\bayesFactor_{\textrm{10}},\bayesFactor_{\textrm{11}}\}=\{\bayesVarTen,\bayesVarEleven\}$ respectively, neither clearly favouring nor disfavouring the $\alpha$ variations with respect to the \Fiducial~model. The choice of $\alpha$ influences not only the number of created \acp{DNS}, but also the number of mergers. The fraction of all formed \acp{DNS} that will merge in a Hubble time is $\{f_{01},f_{10},f_{11}\}=\{\fractionVarOne,\fractionVarTen,\fractionVarEleven\}$. We also consider the ``pessimistic'' \ac{CE} assumption (19). This assumption yields a \ac{DNS} population which is a subset of the population under the \Fiducial~model, with binaries that enter the \ac{CE} while the donor is classified as a \ac{HG} star removed, as these are assumed to always lead to merger. The ``pessimistic'' \ac{CE} assumption (19) is therefore expected to decrease \ac{DNS} formation rates.
The formation rates of \acp{DNS} and the log~Bayes~factors for these variations are $\{\rate_{\textrm{01}},\rate_{\textrm{19}}\}=\{\formationVarOne,\formationVarNineteen\}$ per Milky Way equivalent galaxy per Myr and $\log~\{\bayesFactor_{\textrm{01}},\bayesFactor_{\textrm{19}}\}=\{\bayesVarOne,\bayesVarNineteen\}$ respectively. The likelihood of the ``pessimistic'' model (19) is similar to that of the \Fiducial~model, which means the $P-e$ distribution alone is insufficient to distinguish between these models. Additional constraints, such as merger rates, would be needed to determine the preferred model. \subsubsection{On the Effect of Thermal Eccentricity} The only initial distribution we varied in this study was the eccentricity. In order to simulate a population with non-circular binaries at ZAMS we use the thermal eccentricity distribution (16), which has the form $f_{\textrm{e}}(e)= 2e$ \citep{heggie1975binary}. In this variation, the first episode of mass transfer begins once the primary expands to fill its Roche lobe at periastron. This changes the range of initial periods leading to interaction \citep{deMinkBelczysnki2015}. The formation rate and log~Bayes~factor of model (16) are $\rate=\formationVarSixteen$ per Milky Way equivalent galaxy per Myr and $\log~\bayesFactor=\bayesVarSixteen$ respectively. While formation rates drop by a factor of approximately 3, the $P-e$ distribution of forming \acp{DNS} is not significantly affected. The drop in the formation rate is due to enhanced rates of interactions of \ac{MS} stars that only need to fill their Roche lobe at periastron; if that mass transfer episode is unstable, the two \ac{MS} stars merge. \subsection{On Mass Ratio Distributions} \label{subsec:massRatio} Figure~\ref{fig:qCDFs} shows the impact of the choice of the \ac{SN} remnant mass model on the \ac{DNS} mass ratio distributions.
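Returning briefly to the thermal eccentricity distribution $f_{\textrm{e}}(e)=2e$ of variation (16): sampling it is a one-liner via the inverse CDF, since $F(e)=e^2$ implies $e=\sqrt{U}$ with $U$ uniform on $[0,1]$. The sketch below is illustrative, not the COMPAS sampler.

```python
import numpy as np

def sample_thermal_eccentricity(n, seed=0):
    """Inverse-CDF sampling of the thermal distribution f(e) = 2e on [0, 1]:
    the CDF is F(e) = e^2, so e = sqrt(U) with U uniform on [0, 1]."""
    rng = np.random.default_rng(seed)
    return np.sqrt(rng.uniform(size=n))
```

The mean of this distribution is $\int_0^1 2e^2\,{\rm d}e = 2/3$, so a thermal population is strongly weighted toward eccentric binaries, which is why many more primaries fill their Roche lobes at periastron in this variation.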
The \Fiducial~model shows two distinct peaks in the mass ratio distribution, around $q_{\rm DCO}=0.88$ and $q_{\rm DCO}=1$. The two peaks can be explained given the evolution of \textit{Channel~I} and \textit{Channel~II}, respectively. For the full discussion on the characteristics of the mass ratio for the \Fiducial~model, see Section \ref{subsubsec:FiducialMassRatio}. In the ``delayed'' prescription (03), most \ac{USSN} remnant masses increase from $\minMassRapid~\rm M_{\odot}$ to $1.28~\rm M_{\odot}$ with respect to the ``rapid'' mechanism; therefore, the mass ratio of systems where the primary collapsed in an \ac{ECSN} and the secondary in an \ac{USSN} approaches 1, yielding an even more dominant peak at $q_{\rm DCO}=1$ in the overall mass ratio distribution. \textit{Channel~II} leads to the second peak, with mass ratio $q_{\rm DCO}=1$, as in the \Fiducial~model. This results in a cumulative distribution function for the ``delayed'' mechanism (03) with mass ratios $\minMassRatioDelayed \leq q_{\rm DCO} \leq 1$, where \qAboveEightyDelayed \ per cent of the systems have $q_{\rm DCO} > 0.80$, \qAboveNinetyDelayed\ per cent have $q_{\rm DCO} > 0.90$ and \qAboveNinetyFiveDelayed \ per cent have $q_{\rm DCO} > 0.95$. The remnant masses in the M\"uller prescription (04), as shown in Figures~\ref{fig:bnskde} and \ref{fig:Muller}, have a wider spread and vary more at the low-mass end. In this model, there is no significant pile-up. There is more scatter, with \qAboveEightyMuller \ per cent of the systems having $q_{\rm DCO} > 0.8$, \qAboveNinetyMuller \ per cent having $q_{\rm DCO} > 0.9$ and \qAboveNinetyFiveMuller \ per cent having $q_{\rm DCO} > 0.95$. \subsection{On the Chirp Mass Distribution} \label{subsec:chirpMass} Figure~\ref{fig:chirpMass} shows the chirp mass distributions of \acp{DNS} which will merge within a Hubble time.
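For reference, the chirp mass used throughout this subsection is the standard combination $\mathcal{M}_{\rm c}=(m_1 m_2)^{3/5}(m_1+m_2)^{-1/5}$, the mass parameter best measured from a gravitational-wave inspiral; the helper below is a direct transcription.

```python
def chirp_mass(m1, m2):
    """Chirp mass M_c = (m1 * m2)**(3/5) / (m1 + m2)**(1/5),
    with component masses in any consistent unit (here M_sun)."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2
```

For an equal-mass binary, $\mathcal{M}_{\rm c}=2^{-1/5}m$; components of $\approx1.365~\rm M_\odot$ each reproduce GW170817's chirp mass of $1.188~\rm M_{\odot}$ quoted in the caption of Figure~\ref{fig:chirpMass}.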
We compare the prediction of our \Fiducial~model (01), which uses the ``rapid'' explosion mechanism, to the model which uses the ``delayed'' (03) explosion mechanism and to that which uses the ``M\"uller'' (04) prescription. We also show the $\rm COMPAS\_\alpha$ (00) chirp mass distribution, which uses the ``delayed'' mechanism. As expected, the chirp mass distributions show similarities with the mass ratio distributions, reproducing the same sharp features (peaks) explained in Section \ref{subsec:massRatio}. In Figure~\ref{fig:chirpMass} we add all the confirmed \acp{DNS} with an estimated delay time smaller than the Hubble time, as well as GW170817. We find that the ``rapid'' (01) mechanism predicts that most of the \acp{DNS} will have a chirp mass lower than that of J1756-2251, which has the lowest chirp mass among confirmed \acp{DNS} with good mass constraints. In fact, the ``rapid'' \ac{SN} mechanism (01) allows for low-mass \acp{NS} which would be difficult to differentiate from \ac{NS}--white dwarf binaries; there are several non-confirmed \acp{DNS} or poorly constrained \ac{DNS} masses in the region favoured by the ``rapid'' mechanism (01) \citep{Ozel:2010,ozel2016masses}. On the other hand, the seven existing well-constrained mass measurements in this study are inconsistent with the predictions of the \Fiducial~model (01) at a $> 4\sigma$ level. None of these seven measurements fall below a chirp mass of $1.1~\rm M_{\odot}$, while $83$ per cent of \acp{DNS} in the \Fiducial~model have lower chirp masses. This suggests that the ``rapid'' mechanism under-predicts the amount of collapsed mass for the lowest-mass \acp{NS} for both \acp{ECSN} and \acp{USSN}. All other \ac{SN} prescriptions considered here yield \ac{DNS} chirp mass distributions starting above $1.1~\rm M_{\odot}$. Unsurprisingly, the ``delayed'' mechanism (03) has a very similar distribution to $\rm COMPAS\_\alpha$, which uses the same explosion mechanism.
They both predict systems matching all observed chirp masses (see Figure~\ref{fig:chirpMass}), with a peak close to the lowest observed \ac{DNS} chirp masses, J1756-2251 and J0737-3039. The ``M\"uller'' prescription (04) yields a similarly broad chirp mass distribution above 1.1 $\rm M_{\odot}$. The ``delayed'' (03) and ``M\"uller'' (04) \ac{SN} fallback prescriptions cannot be distinguished based on existing mass measurements. However, the separation of $\approx 0.4$ between the predicted chirp mass cumulative distribution functions for these two models suggests that $\sim$ 10 additional chirp mass measurements (whether from radio pulsars or merging \acp{DNS}) would be sufficient to tell these models apart. \begin{figure} \centering \includegraphics[trim={2.2cm 7cm 3.7cm 7.4cm},clip,scale=0.55]{fig10.pdf} \caption[]{ Chirp mass distribution of \acp{DNS} with a delay time smaller than a Hubble time: (00) $\rm COMPAS\_\alpha$ [black dotted], (01) \Fiducial~Fryer Rapid [dark gray solid], (03) Fryer Delayed [gray dashed] and (04) M\"uller [light gray dot-dashed]. Galactic \acp{DNS} with an estimated delay time smaller than a Hubble time are indicated at the top. GW170817, the only GW signal detected from \acp{DNS} to date, is shown as a vertically offset thick green line; its chirp mass ($1.188~\rm M_{\odot}$) is similar to that of J1757-1854, shown in purple. All systems have precise mass measurements, with error bars within the thickness of the line. } \label{fig:chirpMass} \end{figure} \subsection{On Kicks} When binaries survive a \ac{SN} explosion, they may get significant centre-of-mass kicks from both natal \ac{NS} kicks and Blaauw recoil \citep{blaauw1961origin} from mass loss. The resulting \ac{DNS} population should therefore be more broadly spatially distributed in the Galaxy than the regions of massive star formation. We follow a population of \Fiducial~model \acp{DNS} with the predicted total kick distribution in a Galactic potential, starting from birth in the thin disk.
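The Blaauw recoil mentioned above follows from momentum conservation: if the exploding star sheds $\Delta M$ instantaneously and symmetrically in its own frame, the surviving binary's centre of mass recoils at $v_{\rm sys}=\Delta M\,v_{1}/(M_{\rm tot}-\Delta M)$, where $v_{1}$ is the pre-SN orbital speed of the exploding star about the centre of mass. A minimal sketch for a circular orbit, with illustrative parameters (not COMPAS outputs):

```python
import math

G_MSUN = 1.327e11   # G * Msun in km^3 s^-2
RSUN_KM = 6.957e5   # solar radius in km

def blaauw_kick(m1, m2, dm, a_rsun):
    """Centre-of-mass recoil (km/s) from instantaneous symmetric mass loss
    dm (Msun) by star 1 in a circular binary of separation a_rsun (Rsun)."""
    mtot = m1 + m2
    v_rel = math.sqrt(G_MSUN * mtot / (a_rsun * RSUN_KM))  # relative orbital speed
    v1 = v_rel * m2 / mtot          # pre-SN orbital speed of the exploding star
    return dm * v1 / (mtot - dm)    # momentum balance after the mass loss

# e.g. a 3.0 Msun stripped helium star with a 1.4 Msun NS companion at
# 10 Rsun, leaving a 1.4 Msun NS (dm = 1.6 Msun):
print(round(blaauw_kick(3.0, 1.4, 1.6, 10.0), 1))  # ~52.7 km/s
```

Wider pre-SN orbits give smaller recoils, which is why reducing the ejected core mass (as in the ultra-stripping models cited below) would also reduce these kicks.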
While we find that, as expected, kicks broaden the distribution of Galacto-centric distances (see Figure~\ref{fig:kicks} in appendix \ref{sec:AppD}, where the details of this analysis are presented), the deep Galactic potential well means that this broadening is relatively small and challenging to test for. In practice, the spreading of \acp{DNS} away from the thin disk may be even smaller than estimated here, because our simplified case~BB mass loss models imply fairly high remaining core masses, in the range $1.6 \leq m_{\rm CO}/\rm{M_{\odot}} \leq 4.6$, while detailed calculations of ultra-stripping suggest lower core masses, $1.45 \leq m_{\rm CO}/\rm{M_{\odot}} \leq 3.15$ \citep{tauris2015ultra}. Reducing COMPAS core masses in line with \citet{tauris2015ultra} would reduce both Blaauw kicks and \ac{DNS} eccentricities. On the other hand, three quarters of short \acp{GRB} are found outside the effective radius of the host galaxy \citep{FongBerger:2013}, providing a strong constraint on the binary natal kick distribution; \citet{FongBerger:2013} estimate total kicks of $\approx 20$ -- $140$ km s$^{-1}$. \subsection{On Rates} \label{subsec:Rates} \subsubsection{DNS Merger Rates} \ac{DNS} formation and merger rates are constrained by the observed sample of Galactic binary pulsars \citep[e.g.,][]{Kim:2003kkl, OShaughnessyKim:2009}, by observations of short \acp{GRB} \citep{FongBerger:2013}, and will ultimately be measured with gravitational--wave observations \citep[see][for a review]{MandelOShaughnessy:2010}. Rates inferred from Galactic binary pulsars are dominated by a few systems and are sensitive to the imperfectly known pulsar radio luminosity distribution \citep{Kalogera:2004tn}.
Short \acp{GRB} extend the observations beyond the Milky Way to cosmological distances, but inference from these is complicated by the difficulty of measuring jet opening angles and uncertain selection effects, and relies on the additional assumption of a one-to-one correspondence between short \acp{GRB} and \ac{DNS} mergers \citep{2014ARA&A..52...43B}. \cite{ratesdoc} combined the existing observational constraints to suggest that the \ac{DNS} merger rate lies between $1$ and $1000$ $\rm Myr^{-1}$ in a Milky Way equivalent galaxy (approximately $10$ to $10000$ Gpc$^{-3}$ yr$^{-1}$), with a likely value toward the middle of this range. All of the models presented here fall within this range, although we focus on the Milky Way \ac{DNS} population rather than the merger rate, and hence did not consider the convolution of the \ac{DNS} formation rate and delay time distribution over cosmic history. Other recent population synthesis studies give estimates that, like ours, fall in the two lower decades of this range. \citet{chruslinska2018double} use \texttt{StarTrack} to predict a local merger rate density of $48$ Gpc$^{-3}$ yr$^{-1}$ for their standard assumptions and $600_{-300}^{+600}$ Gpc$^{-3}$ yr$^{-1}$ for a very optimistic set of assumptions. \citet{belczynski2017GW170817} also use \texttt{StarTrack} to argue that even these rates are 2 orders of magnitude larger than the contribution from globular or nuclear clusters. \citet{2018arXiv180105433K} use $\rm C_{OM}B_{IN}E$ to predict an upper limit on the local merger rate of $400$ Gpc$^{-3}$ yr$^{-1}$. Meanwhile, \citet{GW170817} estimate a \ac{DNS} merger rate of $1540^{+3200}_{-1220}$ Gpc$^{-3}$ yr$^{-1}$ based on GW170817 alone.
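The sensitivity of such a single-event rate estimate to the assumed prior can be reproduced with a toy Poisson calculation; the sketch below uses an arbitrary sensitive time--volume $VT$ and is not the actual \citet{GW170817} analysis. With one detection, the likelihood is $\propto R\,VT\,e^{-R\,VT}$, so a flat-in-rate prior yields a posterior peaking at $R=1/VT$, while a Jeffreys prior $p(R)\propto 1/\sqrt{R}$ peaks at $R=1/(2VT)$:

```python
import math

# Toy single-detection rate posterior: likelihood ~ (R*VT) * exp(-R*VT).
VT = 1.0  # sensitive time-volume, arbitrary units (illustrative)
grid = [i * 1e-4 for i in range(1, 100000)]

def peak(post):
    return max(grid, key=post)

flat_peak = peak(lambda r: r * math.exp(-r * VT))              # flat prior
jeff_peak = peak(lambda r: math.sqrt(r) * math.exp(-r * VT))   # Jeffreys 1/sqrt(R)

# The flat-prior posterior is Gamma(shape=2, rate=VT); find its median
# by bisecting the CDF 1 - exp(-x)(1 + x).
lo, hi = 0.0, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if 1 - math.exp(-mid * VT) * (1 + mid * VT) < 0.5:
        lo = mid
    else:
        hi = mid
median = 0.5 * (lo + hi)

print(flat_peak, jeff_peak, median)  # ~1.0, ~0.5, ~1.68 (in units of 1/VT)
```

The factor of 2 between the two peaks, and the factor $\approx 1.68$ between the flat-prior posterior median and its peak, quantify how strongly a one-event rate estimate depends on these choices.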
However, given the significant Poisson uncertainty and sensitivity to rate priors from a single observation\footnote{For example, shifting from a flat-in-rate prior to a $p(R) \propto 1/\sqrt{R}$ Jeffreys prior \citep{Jeffreys:1946} would reduce the peak of the posterior by a factor of 2 following one detection. Furthermore, the posterior peak is a factor of $1.67$ lower than the posterior median quoted by \cite{GW170817}.}, the addition of this one (albeit, very special) event to the population of merging Galactic \acp{DNS} and short \acp{GRB} does not significantly shift the observational constraints on the \ac{DNS} merger rate. In fact, given the similarity of the predicted \ac{DNS} formation rates among most models presented here, observational constraints on the rate alone will not be sufficient to distinguish between these models in the near future. \subsubsection{Supernova Rates} We estimate the \ac{SN} rates for our \Fiducial~model (01). Given the ambiguity in \ac{SN} classification, we make simplifying assumptions to convert our models into observational predictions. We consider all progenitors with a hydrogen envelope to lead to hydrogen rich \acp{SN} (type II excluding type IIb) and the rest are considered stripped \acp{SN} (either hydrogen absent type Ib or Ic or hydrogen poor type IIb). Our total rate of \acp{SN} leading to \ac{NS} formation is \CCSNperMsolSF~per~$\rm M_{\odot}$, which includes both \acp{ECSN} and \acp{USSN}. Among these, \typeIItoCCSN~per cent are hydrogen rich and the remaining \typeItoCCSN~per cent are classified as stripped \acp{SN}, including all \acp{USSN}. We predict that \acp{USSN} that follow after case~BB mass transfer onto a \ac{NS} companion should make up \USSNtoTypeI~per cent of all stripped \acp{SN} and \USSNtoCCSN~per cent of all \acp{SN} leading to \ac{NS} formation. 
Our total \ac{SN} rate prediction is consistent with \cite{zapartas2017ccsne}, a population synthesis study which reports \ac{CCSN} rates in binaries between $0.0035$--$0.0253$ per~$\rm M_{\odot}$, depending on the assumed IMF. Our estimates for the fraction of stripped \acp{SN} compare well with observational results. \citet{2013MNRAS.436..774E} find that the fractions of hydrogen rich and stripped \acp{SN} leading to \ac{NS} formation are $61.9$ and $38.1$ per cent respectively; that study was made with \acp{SN} discovered between $1998$ and $2012$ in galaxies with recessional velocities less than $2000~\rm km~s^{-1}$. More recently, \cite{2017PASP..129e4201S} report that $69.6$ per cent of \acp{CCSN} are hydrogen rich (according to the definition above), while the remaining $30.4$ per cent come from stars with stripped envelopes. \section{Discussion \& Conclusions} \label{sec:discussion} We used the COMPAS rapid population synthesis code to follow the evolution of massive stellar binaries and thus generate a population of \acp{DCO}. We quantitatively validated our models by comparing the predicted $P-e$ distribution of \acp{DNS} against the observed Galactic \ac{DNS} distribution, and qualitatively compared the predicted rate and mass distribution of Galactic \acp{DNS} to observations. We considered variations relative to the \Fiducial~model in order to investigate the impact of uncertain evolutionary physics. We find that: \begin{itemize} \item{\it Case BB mass transfer during \ac{DNS} formation must be predominantly stable.} We considered the possibility that \ac{HeHG} of the secondary leads to dynamically unstable mass transfer and a second \ac{CE} phase \citep{dewi2003late} in Variation (02). In fact, this was our initial default model, consistent with $\rm COMPAS\_\alpha$ (00) in this assumption. 
However, the lack of \acp{DNS} with few-hour orbital periods (such as J0737-3039) in this variation (see Figure~\ref{fig:PeMain}), as well as our Bayesian analysis, indicates that most case~BB mass transfer episodes must be stable. This finding is consistent with the detailed models of \cite{tauris2015ultra}. However, some case~BB dynamically unstable systems could exist without being detectable in the observed \ac{DNS} population: the very short orbital periods of \acp{DNS} that were hardened by two \ac{CE} phases would lead them to merge in less than a few hundred thousand years. Our study assumes constant star formation over the history of the Galaxy; in Galactic star formation history models without recent starbursts, such short orbital period \acp{DNS} would be further disfavoured. \item{\it A bimodal \ac{SN} natal kick distribution is preferred over a single mode one.} We find that a bimodal natal kick distribution (with non-zero components) with lower natal kicks for \acp{ECSN} and \acp{USSN} and higher natal kicks for standard \acp{CCSN} is preferred (see variations (05), (06), (07)). If \acp{ECSN} and/or \acp{USSN} are given the high natal kicks consistent with the observed velocities of isolated pulsars \citep{hansen1997pulsar,hobbs2005statistical}, wider binaries are overwhelmingly disrupted by \acp{SN}, and observed wide \acp{DNS} cannot be reproduced in the models. A bimodal \ac{SN} natal kick distribution is consistent with the findings of other population synthesis studies \citep{pfahl2002new,belczynski2002comprehensive}, as well as with comparisons to observations \citep{schwab2010further,beniamini2016formation,Verbunt2017bimodal}, although \citet{Oshaughnessy:2008} did not find evidence for multiple natal kick distributions.
\\ The aforementioned findings in our paper, stability during case~BB mass transfer and a bimodal natal kick distribution, are broadly in agreement with those in \citet{andrews2015evolutionary}, which used a smaller sample of eight Galactic \acp{DNS} instead of the current 14 confirmed systems and carried out population synthesis by mainly varying \ac{CE} parameters and natal kick magnitudes. \citet{andrews2015evolutionary} find that it is likely that short-period low-eccentricity systems went through an evolutionary channel which includes stable case~BB mass transfer. Their study also points out that the cores of \ac{ECSN} progenitors should have relatively low mass, which can be related to lower natal kick magnitudes. \\ \item{\it Predicted \ac{DNS} formation rates across variations are consistent with observations.} The formation rate of \acp{DNS} in the \Fiducial~model is 24 Myr$^{-1}$ in the Milky Way. The Milky Way \ac{DNS} formation rate for all considered variations is 5 -- 31 Myr$^{-1}$. All rates are consistent with observations \citep{ratesdoc}, including the inferred rate from the GW170817 gravitational--wave detection \citep{GW170817}, and cannot be used to differentiate between the models at this point. \end{itemize} We also considered multiple \ac{SN} explosion mechanisms, including varying the fallback mass (Fryer ``rapid'' (01) and Fryer ``delayed'' (03) variations) and a coupled mass--kick model calibrated to numerical simulations (``M\"uller'' (04) prescription). Low-mass iron-core \acp{CCSN} may have reduced natal kicks, but are given standard \ac{CCSN} natal kicks in the Fryer models, including the \Fiducial~model. The mass distribution of observed systems is not consistent with the very low masses predicted by the Fryer ``rapid'' fallback prescription used in the \Fiducial~model (01). Furthermore, observations do not show a peak in the mass distribution around $1.26~\rm M_\odot$, where \acp{ECSN} should fall in our models.
The remnant mass of an \ac{ECSN} depends on the \ac{NS} equation of state; this discrepancy indicates either that \acp{ECSN} are less common in binaries than we expected or that the \ac{ECSN} models should be revisited, as also noted by \cite{2018arXiv180105433K}. With only $\sim 10$ additional \ac{DNS} mass measurements it will be possible to further constrain the \ac{SN} fallback mechanisms, distinguishing between the ``M\"uller'' (04) and Fryer ``delayed'' (03) variants, both of which are consistent with existing observations. Further input on natal kick velocity distributions should come from a better comparison with the observed velocities of isolated pulsars. At the moment, the observed isolated pulsar velocity distribution is used to calibrate the \ac{CCSN} natal kicks in binaries. However, the sample of observed isolated pulsars is contaminated by pulsars from disrupted binaries. Therefore, the approach we used here, which is also used by most population-synthesis codes, is not self-consistent: the observed single-pulsar velocity distribution should be checked for consistency against a model which includes contributions from both single and binary massive stars. In particular, observations should be tested for evidence of the predicted low natal kicks associated with \acp{ECSN}, which may preferentially occur in binaries \citep{podsiadlowski2004effects} that may subsequently be disrupted. We assumed a solar metallicity $\rm Z_{\odot}=0.0142$ for massive stars in the Galaxy. In reality, the Galaxy has a distribution of metallicities at the present day, as well as a history of metallicity evolution over time, which matters since present-day \ac{DNS} systems, and particularly \ac{DNS} mergers, may have formed at earlier times or in lower-metallicity regions \citep[see][for a discussion of Galactic binary black hole formation]{lamberts2018predicting}.
Figure~\ref{fig:GWs} confirms that, for a suitable choice of metallicity and initial conditions, the \Fiducial~model can produce compact binary mergers with masses matching all of the existing gravitational--wave observations; it also demonstrates that metallicity does impact the rate and properties of merging \acp{DNS}. Therefore, the metallicity-specific star formation history of the Milky Way could affect the details of the modelled \ac{DNS} population. We do not account for selection effects in the observed Galactic \ac{DNS} population in this study; see \cite{tauris2017formation} for a detailed discussion. Binaries with very short orbital periods may be selected against because of the orbital acceleration of the pulsar, which changes the apparent spin period; they will also have short merger times, and their location within the Galaxy will be sensitive to the details of recent star formation history. Meanwhile, binaries with extremely long orbital periods may also be challenging to detect, since they are less likely to be recycled during binary evolution, and detectable radio emission from non-recycled pulsars is expected to last for $\lesssim 50$ Myr \citep{2004hpa..book.....L}. The \ac{DNS} formation models presented here can also be tested against observable populations of massive stars during intermediate phases before \ac{DNS} formation. Neutron star Be/X-ray binaries \citep[e.g.,][]{knigge2011two} offer a particularly promising test case; for example, the observed correlation between the orbital period and the \ac{NS} spin, with the latter appearing to be bimodal, could indicate distinct \ac{SN} classes in their evolutionary history \citep{knigge2011two}. Spin distribution predictions could also be compared to observed pulsar spin periods in both isolated pulsars \citep[e.g.][]{Kiel:2008xw} and in \ac{DNS} systems \citep[e.g.][]{2005MNRAS.363L..71D,2011MNRAS.413..461O,tauris2017formation}.
However, determining the \ac{NS} spin-up or spin-down through binary interactions and pulsar evolution requires additional modelling assumptions, and hence spin models were not included in the present study. Meanwhile, more detailed studies of natal kicks in the Galactic potential could lead to additional constraints on natal kick distributions. Moreover, gravitational--wave detections will produce an ever larger catalogue of accurate mass measurements, at least for the chirp mass parameter. Together, these growing observational data sets will enable increasingly accurate tests of the massive stellar binary evolution models described here. \section*{Acknowledgments} AVG acknowledges support from CONACYT and thanks the PHAROS COST Action (CA16214) for partial support. CN and IM acknowledge partial support from the STFC. KB acknowledges support from the Polish National Science Center (NCN) grants Sonata Bis 2 (DEC-2012/07/E/ST9/01360) and OPUS (2015/19/B/ST9/01099). SJ is partially supported by the Strategic Priority Research Program of the Chinese Academy of Sciences ``Multi-waveband Gravitational Wave Universe'' (Grant No.~XDB23040000), and is also grateful to the Chinese Academy of Sciences (President's International Fellowship Initiative grant no. 2011Y2JB07), and the National Natural Science Foundation of China (grant no. 11633005). SdM acknowledges the European Union's Horizon 2020 research and innovation programme for funding from the European Research Council (ERC), Grant agreement No. 715063. SdM and IM acknowledge the hospitality of the Kavli Institute for Theoretical Physics, Santa Barbara, CA. Their stay was supported by the National Science Foundation under Grant No.\ NSF PHY11-25915. BM was supported by the Australian Research Council through ARC Future Fellowship FT160100035. SS was supported by the Australian Research Council Centre of Excellence for Gravitational Wave Discovery (OzGrav), CE170100004.
We thank the Niels Bohr Institute for its hospitality while part of this work was completed, and acknowledge the Kavli Foundation and the DNRF for supporting the 2017 Kavli Summer Program. We also thank Jeff Andrews, Christopher Berry, Ross Church, David Stops, Jason Hessels, Serena Vinciguerra and Manos Zapartas for discussions, suggestions and assistance during the writing of this manuscript. \bibliographystyle{mnras}
\section{Preliminaries} Most definitions and theorems in this section may be found in \cite{D68}. We direct the reader to this source for more information, and details, about projective planes. Let $GF(q^{2})$ be the field with $q^{2}$ elements, where $q=p^{n}$ with $p$ prime (we will always consider $p$ to be odd in this article), and $n\in {\mathbb N}$. Throughout this article we will use \[ GF(q^2) = \{ a+\epsilon b; \ a,b\in GF(q), \ \text{and} \ \epsilon^2=w\in GF(q) \}, \] where $w$ is a non-square in $GF(q)$; in particular, $\overline{\epsilon}=\epsilon^{q}=-\epsilon$. We will write $\overline{x}=x^q$, $T(x) = x+ \overline{x}$ and $N(x)=x\overline{x}$, for all $x\in GF(q^2)$. We let $V$ be a $3$-dimensional vector space over $GF(q^{2})$ and we consider the projective plane $\Pi = PG(2,q^{2})$, defined by letting its points be the $1$-dimensional subspaces of $V$ and its lines be the $2$-dimensional subspaces of $V$. A point $P$ of $\Pi$ will be denoted by \[ P=[a,b,c], \] where $(a,b,c)$ is a vector generating the subspace defining $P$. If $l$ is a line of $\Pi$ then it will be denoted by \[ l=\left[\begin{array}{c} x\\ y\\ z\\ \end{array}\right]=[x,y,z]^{t}, \] where $(x,y,z)$ is a vector that is orthogonal (using the standard dot product) to the $2$-dimensional subspace defining $l$. The incidence in $\Pi$ is given by natural set-theoretic containment. Thus, \[ P\in l \ \ \Longleftrightarrow \ \ \left[\begin{array}{ccc} a & b & c\\ \end{array}\right] \left[\begin{array}{c} x\\ y\\ z\\ \end{array}\right] = 0 \ \ \Longleftrightarrow \ \ ax+by+cz=0. \] It is known that $\Pi$ is a projective plane of order $q^2$. Hence, the following properties hold in $\Pi$: \begin{enumerate} \item Every line of $\Pi$ contains exactly $q^{2}+1$ points. \item Every point of $\Pi$ is on exactly $q^{2}+1$ lines. \item The number of points, and the number of lines, in $\Pi$ is $q^{4}+q^{2}+1$. \end{enumerate} In order to motivate the concept of unital, we define another important object.
\begin{definition} A \emph{blocking set} $\beta$ is a subset of points of $\Pi$ such that every line of $\Pi$ contains at least one point in $\beta$. A \emph{minimal blocking set} is a blocking set such that removing any one of its points yields a set that is no longer a blocking set. \end{definition} \begin{remark} The collection of points on a line $l$ of $\Pi$ is a blocking set. We will say that a blocking set containing all the points on a line is a trivial blocking set. \end{remark} A good summary of the basics on blocking sets may be found in Chapter 13 of \cite{H79}. The following result may be found there. \begin{theorem}\label{boundofblockingset} Blocking sets exist in $\Pi$. Moreover, if $\beta$ is a minimal blocking set of $\Pi$ then $|\beta|\leq q^3 + 1$. \end{theorem} A very special kind of largest possible minimal blocking set of $\Pi$ is the object on which we want to focus our attention from now on. \begin{definition} A \emph{unital} in $\Pi$ is a set $U$ of $q^{3}+1$ points of $\Pi$ such that every line of $\Pi$ intersects $U$ in exactly $1$ or $q+1$ points. Lines of $\Pi$ will be called \emph{tangent} or \emph{secant} to $U$ depending on whether they intersect $U$ in $1$ or $q+1$ points, respectively. \end{definition} \begin{remark} Unitals may be defined in a much more general way but in this article we will focus only on unitals embedded in $\Pi$. So, our definition has been written with this purpose in mind. We refer the reader to \cite{BE08} for a detailed exposition about unitals and for the concepts we use in this article that we may fail to explain in detail. \end{remark} Two standard examples of unitals are \begin{enumerate} \item The set of absolute points of a non-degenerate unitary polarity of $\Pi$: \[ H=\{[x,y,z]\in \Pi; \ N(x)+N(y)+N(z)=0\} \] is a unital in $\Pi$, called \emph{classical}.
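As a quick computational sanity check (a brute-force sketch for $q=3$, taking $w=2$, which is a non-square in $GF(3)$), one can verify that $H$ has $q^{3}+1=28$ points in $PG(2,9)$ and that every line meets $H$ in $1$ or $q+1=4$ points:

```python
# Verify the classical unital H in PG(2, q^2) for q = 3.
# GF(9) elements are pairs (a, b) = a + eps*b with a, b in GF(3), eps^2 = 2.
from itertools import product

q, w = 3, 2                      # w = 2 is a non-square in GF(3)

def add(x, y): return ((x[0] + y[0]) % q, (x[1] + y[1]) % q)
def mul(x, y):                   # (a+eps b)(c+eps d) = ac + w bd + eps(ad+bc)
    a, b = x; c, d = y
    return ((a * c + w * b * d) % q, (a * d + b * c) % q)
def norm(x):                     # N(x) = x * conj(x) = a^2 - w b^2, in GF(3)
    a, b = x
    return (a * a - w * b * b) % q

F = list(product(range(q), repeat=2))        # the 9 elements of GF(9)
ZERO, ONE = (0, 0), (1, 0)

def proj_points():
    # one representative per 1-dim subspace: first nonzero coordinate = 1
    pts = []
    for v in product(F, repeat=3):
        if v == (ZERO, ZERO, ZERO): continue
        i = next(k for k in range(3) if v[k] != ZERO)
        if v[i] == ONE: pts.append(v)
    return pts

pts = proj_points()
H = [p for p in pts if (norm(p[0]) + norm(p[1]) + norm(p[2])) % q == 0]
print(len(pts), len(H))          # 91 points in PG(2,9); |H| = q^3 + 1 = 28

def dot(p, l):
    s = ZERO
    for pc, lc in zip(p, l): s = add(s, mul(pc, lc))
    return s

hits = {sum(1 for p in H if dot(p, l) == ZERO) for l in pts}  # lines ~ points
print(hits)                      # every line meets H in 1 or q+1 = 4 points
```

The same brute-force check can be applied to the orthogonal-Buekenhout-Metz unitals $U_{\alpha,\beta}$ introduced below, once a valid pair $(\alpha,\beta)$ is chosen.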
\item Buekenhout \cite{B76} proved that, for $\alpha, \beta \in GF(q^2)$ such that $4N(\alpha)+(\overline{\beta}-\beta)^{2}$ is a non-square in $GF(q)$, the set \[ U_{\alpha,\beta}=\{[x,\alpha x^{2}+\beta N(x)+r,1]; \ x\in GF(q^{2}), \ r\in GF(q)\}\cup\{P_{\infty} \} \] is a unital (said to be an \emph{orthogonal-Buekenhout-Metz unital}) in $\Pi$, where $P_{\infty} =[0,1,0]$. Moreover, $\alpha=0$ if and only if the unital $U_{\alpha,\beta}$ is classical, and $\beta=\overline{\beta}$ if and only if the unital $U_{\alpha,\beta}$ is a union of conics (see \cite{BE90} or \cite{HS91}, and \cite{DS13}). \end{enumerate} From now on we focus our study on non-classical orthogonal-Buekenhout-Metz unitals $U_{\alpha, \beta}$. So, for the rest of this article we assume $\alpha \neq 0$. Elementary counting shows that if $U_{\alpha, \beta}$ is an orthogonal-Buekenhout-Metz unital in $\Pi$ and $P\in U_{\alpha, \beta}$ then there is exactly one tangent line to $U_{\alpha, \beta}$ through $P$ and there are exactly $q^{2}$ secant lines to $U_{\alpha, \beta}$ through $P$. Similarly, if $P\notin U_{\alpha, \beta}$ then there are exactly $q+1$ lines tangent to $U_{\alpha, \beta}$ through $P$ and there are exactly $q^{2}-q$ secant lines to $U_{\alpha, \beta}$ through $P$. \begin{definition} Let $U_{\alpha, \beta}$ be an orthogonal-Buekenhout-Metz unital and $P$ a point not in $U_{\alpha, \beta}$. Each of the $q+1$ points of $U_{\alpha, \beta}$ that are on a tangent line to $U_{\alpha, \beta}$ through $P$ is said to be a foot of $P$. We will denote the set of feet of $P$ by $\tau_{P}(U_{\alpha, \beta})$ and we will call it the \emph{pedal} of $P$. \end{definition} It is known that $\tau_{P}(U_{\alpha, \beta})$ has the following properties: \begin{enumerate} \item $\tau_{P}(U_{\alpha, \beta})$ is contained in a line of $\Pi$, for all $P\in \Pi\setminus U_{\alpha, \beta}$ if and only if $U_{\alpha, \beta}$ is classical (see Thas \cite{T92}). 
The conditions for this result have been relaxed after Thas's work; see \cite{AE02} for a more recent result on this characterization. \item For $U_{\alpha, \beta}$ non-classical, $\tau_{P}(U_{\alpha, \beta})$ is contained in a line of $\Pi$ if and only if $P\in \ell_{\infty}$. Note that $\ell_{\infty} \cap U_{\alpha, \beta} = \{P_{\infty}\}$ (see, e.g. \cite{BE08}). \\ Note that this result implies that every line, different from $\ell_{\infty}$, through $P_{\infty}$ contains a pedal. \end{enumerate} Finally, there is a group $G \leq P\Gamma L(3,q^{2})$ leaving $U_{\alpha,\beta}$ invariant and fixing $P_{\infty}$ such that \begin{enumerate} \item $G$ is transitive on the set of points of $U_{\alpha,\beta} \setminus \ell_{\infty}$. \item $G$ is transitive on the points of $\ell_{\infty}\setminus \{P_{\infty}\}$. \item $G$ has either one or two orbits on the points of $\Pi\setminus (U_{\alpha,\beta}\cup \ell_{\infty})$. Moreover, these orbits are those of $P_{\lambda}=[0,\lambda \epsilon,1]$, with $\lambda=1$ or $\lambda=w=\epsilon^{2}$. \end{enumerate} \section{Intersections of Lines and Pedals} Our objective is to find geometric properties that can describe the pedals of points $P \notin \ell_{\infty}$ in unitals $U_{\alpha,\beta}$, where $\alpha \neq 0$. In particular, we care about how lines of $\Pi$ intersect these sets. Not much is known about pedals in non-classical unitals, although the work by Kr{\v{c}}adinac and Smoljak is pertinent; in \cite{KS11} they study all possible configurations for pedals in unitals that are embedded in (not-necessarily Desarguesian) projective planes of order $9$ and $16$. Because of the results listed above about the group $G$, we will only study $\tau_{P}(U_{\alpha,\beta})$ for $P = P_{\lambda}=[0,\lambda \epsilon,1]$, with $\lambda=1$ or $\lambda=w$; this decision is justified in the following lemma.
\begin{lemma}\label{lemorbits} If $\sigma\in G$, $A,C\in \tau_{P}(U_{\alpha,\beta})$, $\sigma(A)=B$, $\sigma(C)=D$, $Q=\sigma(P)$, then $\sigma(\tau_{P}(U_{\alpha,\beta})) = \tau_{Q}(U_{\alpha,\beta})$ and \[ | AC \cap \tau_{P}(U_{\alpha,\beta}) | = | BD \cap \tau_{Q}(U_{\alpha,\beta})|. \] \end{lemma} \begin{proof} It is easy to see that $\sigma$ preserves the number of points of intersection between lines and $U_{\alpha,\beta}$, and so $\sigma$ maps tangent lines into tangent lines. It follows that $\sigma(\tau_{P}(U_{\alpha,\beta})) = \tau_{Q}(U_{\alpha,\beta})$. Note that $\sigma(AC)=BD$ and that $B,D \in \tau_{Q}(U_{\alpha,\beta})$. So, if we repeat this argument with $A$ and any other point $E \in AC \cap \tau_{P}(U_{\alpha,\beta})$ we would get another point in $BD \cap \tau_{Q}(U_{\alpha,\beta})$. Hence, since $\sigma$ is injective, we get $| AC \cap \tau_{P}(U_{\alpha,\beta}) | \leq | BD \cap \tau_{Q}(U_{\alpha,\beta})|$. We obtain the reverse inequality by repeating the argument using $\sigma^{-1}$ instead of $\sigma$. \end{proof} We first look at the lines through $P_{\infty}$. It was mentioned earlier that there is a bijection between the set of pedals containing $P_{\infty}$ and the set of lines, different from $\ell_{\infty}$, through this point. We now want to look at how these $q^2$ lines intersect $\tau_{P_{\lambda}}(U_{\alpha,\beta})$. The following remark gives enough information for us to address this issue in the subsequent lemma. \begin{remark} Two distinct pedals can intersect in at most one point: for every point $A\in \tau_{P}(U_{\alpha,\beta})\cap \tau_{Q}(U_{\alpha,\beta})$, the tangent line to $U_{\alpha,\beta}$ at $A$ contains both $P$ and $Q$, so two such points would force the line $PQ$ to be tangent to $U_{\alpha,\beta}$ at two distinct points, which is impossible. Moreover, two distinct pedals intersect if and only if they are the pedals of two points on a line tangent to $U_{\alpha,\beta}$; their intersection is the tangency point. \end{remark} The following lemma is immediate. \begin{lemma}\label{lemlinesthroughPinfinity} Let $\ell \neq \ell_{\infty}$ be a line such that $P_{\infty} \in \ell$.
Then, $\ell$ is either tangent or exterior to all the pedals not contained in $\ell$. \end{lemma} The generalization of this lemma to lines intersecting pedals of points not on $\ell_{\infty}$ does not hold (see Section \ref{secnotPinfty}). However, Lemma \ref{lemlinesthroughPinfinity} implies that the points of $\ell \cap U_{\alpha,\beta}$, for any line $\ell\neq \ell_{\infty}$ through $P_{\infty}$, can be partitioned into singletons, each contained in a distinct pedal. We are able to prove that result for all other lines as well. \begin{lemma} Let $\ell$ be a line that is not tangent to $U_{\alpha,\beta}$. Then, there are $q+1$ distinct pedals intersecting $\ell$ in singletons, creating a partition of the points in $\ell \cap U_{\alpha,\beta}$. \end{lemma} \begin{proof} The case when $P_{\infty} \in \ell$ follows immediately from Lemma \ref{lemlinesthroughPinfinity}. Now suppose $P_{\infty} \notin \ell$. Every point of $U_{\alpha,\beta}$ lies in $q^2$ pedals, and $\ell$ contains $q+1$ points of $U_{\alpha,\beta}$; hence, for each point in $\ell \cap U_{\alpha,\beta}$ there are at least $q^2-(q+1)$ pedals containing that point and no other point of $\ell \cap U_{\alpha,\beta}$. Hence, using that $q\geq 3$ implies that $q^2-(q+1)\geq q+1$, we can choose the pedals to create the desired partition. \end{proof} Now our interest shifts to the intersections of lines, not through $P_{\infty}$, with pedals of points not on $\ell_{\infty}$. \section{Lines Not Containing $P_{\infty}$}\label{secnotPinfty} In this section we will study lines that do not go through $P_{\infty}$. We consider the orthogonal-Buekenhout-Metz unital in $\Pi$ \[ U_{\alpha,\beta}=\{[x,\alpha x^{2}+\beta N(x)+r,1]; \ x\in GF(q^{2}), \ r\in GF(q)\}\cup\{P_{\infty}\}. \] The tangent line to $U_{\alpha,\beta}$ through $[x,\alpha x^2+\beta N(x)+r,1]$ is \[ [-2\alpha x+(\overline{\beta}-\beta)\overline{x},1,\alpha x^{2}-\overline{\beta}N(x)-r]^{t}.
\] In order to study $\tau_{P_{\lambda}}(U_{\alpha,\beta})$ we need to determine all $x\in GF(q^{2})$ and $r\in GF(q)$ such that \[ [0,\lambda \epsilon,1]\in[-2\alpha x+(\overline{\beta}-\beta)\overline{x},1,\alpha x^{2}-\overline{\beta}N(x)-r]^{t} \] which means \begin{align*} [0,\lambda \epsilon,1]\left[\begin{array}{c} -2\alpha x+(\overline{\beta}-\beta)\overline{x}\\ 1\\ \alpha x^{2}-\overline{\beta}N(x)-r \end{array}\right] =0 & \Longleftrightarrow \ \lambda \epsilon+\alpha x^{2}-\overline{\beta}N(x)-r=0\\ &\Longleftrightarrow \ r=\lambda \epsilon +\alpha x^{2}-\overline{\beta}N(x). \end{align*} Since $r\in GF(q)$ we get $\overline{r}=r$. Hence, \[ \overline{\lambda \epsilon +\alpha x^{2}-\overline{\beta}N(x)} \ =\lambda \epsilon +\alpha x^{2}-\overline{\beta}N(x) \] and thus, recalling that $\overline{\epsilon}=\epsilon^{q}=-\epsilon$, \begin{equation}\label{eqtogettrace} 2\lambda \epsilon+\alpha x^{2}-\overline{\alpha}\;\overline{x}^{2}+(\beta-\overline{\beta})N(x)=0. \end{equation} We let \[ M_{\alpha,\beta}=\left[\begin{array}{cc} \alpha & \frac{1}{2}(\beta-\overline{\beta})\\ \frac{1}{2}(\beta-\overline{\beta}) & -\overline{\alpha}\\ \end{array}\right], \] and notice that \[ 2\lambda \epsilon+\alpha x^{2}-\overline{\alpha} \ \overline{x}^{2}+(\beta-\overline{\beta})N(x)=0 \ \Longleftrightarrow \ 2\lambda\epsilon+\left[\begin{array}{cc} x & \overline{x}\\ \end{array}\right]M_{\alpha,\beta}\left[\begin{array}{c} x\\ \overline{x}\\ \end{array}\right]=0. \] Hence, $\tau_{P_{\lambda}}(U_{\alpha,\beta})$ is the set of all the points of the form \[ [x,2\alpha x^{2}+(\beta-\overline{\beta})N(x)+\lambda \epsilon,1], \] where $x\in GF(q^{2})$, and \[ 2\lambda \epsilon+\left[\begin{array}{cc} x & \overline{x}\\ \end{array}\right]M_{\alpha,\beta}\left[\begin{array}{c} x\\ \overline{x}\\ \end{array}\right]=0. \] We can now use Equation (\ref{eqtogettrace}) to find a different way to represent points in $\tau_{P_{\lambda}}(U_{\alpha,\beta})$.
Notice that \begin{align*} T(\alpha x^2) -\lambda \epsilon & = (\alpha x^2) + \overline{(\alpha x^2)}-\lambda \epsilon \\ & = \alpha x^2 + \overline{\alpha} \ \overline{x}^2-\lambda \epsilon \\ & = \alpha x^2 + ( 2\lambda \epsilon+\alpha x^{2}+(\beta-\overline{\beta})N(x) ) -\lambda \epsilon \\ & = 2 \alpha x^2 + \lambda \epsilon + (\beta-\overline{\beta})N(x). \end{align*} Hence, letting $Q_x=[x,T(\alpha x^2) -\lambda \epsilon,1]$ we get \[ \tau_{P_{\lambda}}(U_{\alpha,\beta}) = \left\{Q_x; \ x\in GF(q^{2}), \ 2\lambda \epsilon+\left[\begin{array}{cc} x & \overline{x}\\ \end{array}\right]M_{\alpha,\beta}\left[\begin{array}{c} x\\ \overline{x}\\ \end{array}\right]=0\right\}. \] \begin{remark}\label{remimaginarynorm} Equation (\ref{eqtogettrace}) can be re-written as \[ 2\lambda \epsilon+ 2 \epsilon Im(\alpha x^{2})+(\beta-\overline{\beta})N(x)=0 \] It follows that for two points $Q_x, Q_y \in \tau_{P_{\lambda}}(U_{\alpha,\beta})$ we get $N(x) =N(y)$ if and only if $Im(\alpha x^{2})= Im(\alpha y^{2})$. \end{remark} We now introduce some notation. Let \begin{align*} T_{\lambda} & = \left\{ x\in GF(q^{2}); \ Q_x \in \tau_{P_{\lambda}}(U_{\alpha,\beta}) \right\} \\ & = \left\{x\in GF(q^{2}); \ 2\lambda \epsilon+\left[\begin{array}{cc} x & \overline{x}\\ \end{array}\right]M_{\alpha,\beta}\left[\begin{array}{c} x\\ \overline{x}\\ \end{array}\right]=0\right\}. \end{align*} It is easy to see that $x\in T_{\lambda}$ if and only if $-x\in T_{\lambda}$. Moreover, if $x, -x\in T_{\lambda}$ then they have the same value of $r$ associated to them (in the representation of $Q_x$ and $Q_{-x}$ as points in $U_{\alpha,\beta}$). \begin{lemma}\label{lemmxand-xandythen-y} Let $l_{x,-x}$ be the line through $Q_{x}$ and $Q_{-x}$, where $\pm x\in T_{\lambda}$. If $Q_{y} \in l_{x,-x}$, for some $y\in T_{\lambda}$, then $Q_{-y} \in l_{x,-x}$. 
\end{lemma} \begin{proof} The line passing through $Q_{x}$ and $Q_{-x}$ is given by \[ l_{x,-x}=\left[\begin{array}{c} 0\\ -1\\ T(\alpha x^2) -\lambda \epsilon \end{array}\right]. \] If $Q_y \in l_{x,-x}$ then \[ [y,T(\alpha y^2) -\lambda \epsilon,1] \left[\begin{array}{c} 0\\ -1\\ T(\alpha x^2) -\lambda \epsilon \end{array}\right] = 0, \] which can be simplified to \[ T(\alpha x^2) - T(\alpha y^2) =0. \] On the other hand, \[ [- y,T(\alpha (-y)^2) -\lambda \epsilon,1] \left[\begin{array}{c} 0\\ -1\\ T(\alpha x^2) -\lambda \epsilon \end{array}\right] = T(\alpha x^2) - T(\alpha y^2), \] which is equal to zero. Hence, $Q_{-y} \in l_{x,-x}$. \end{proof} \begin{remark}\label{remlinesthrough100} All lines of the form $l_{x,-x}$ pass through $[1,0,0]$. \end{remark} We want to learn about the conditions under which the line through $Q_{x}$ and $Q_{y}$, for $x, y \in T_{\lambda}$, contains more points of $U_{\alpha,\beta}$ besides $Q_x$ and $Q_y$. \begin{lemma}\label{lemmsametrace} Let $l_{x,y}$ be the line through $Q_{x}$ and $Q_{y}$, for $x\neq y$ in $T_{\lambda}$. If $Q_z \in l_{x,y}$, $z \in T_{\lambda}$, and $T(\alpha x^2) = T(\alpha y^2)$, then $Q_{-z} \in l_{x,y}$. \end{lemma} \begin{proof} The line $l_{x,y}$ is given by: \[ l_{x,y}=\left[\begin{array}{c} T(\alpha x^{2}) - T(\alpha y^{2}) \\ y-x\\ xT(\alpha y^2) - yT(\alpha x^2)+(y-x)\lambda \epsilon \end{array}\right]. \] But, $T(\alpha x^2) = T(\alpha y^2)$ and $x\neq y$, so $l_{x,y}$ is represented by \[ l_{x,y}=\left[\begin{array}{c} 0 \\ -1 \\ T(\alpha x^2) -\lambda \epsilon \end{array}\right]. \] If $Q_z \in l_{x,y}$ then, after routine simplifications, we get \[ - T(\alpha z^2) + T(\alpha x^2) =0. \] It follows that $l_{x,y}$ can be represented by \[ l_{x,y}=\left[\begin{array}{c} 0 \\ -1 \\ T(\alpha z^2) -\lambda \epsilon \end{array}\right], \] which is the line $l_{z,-z}$. \end{proof} \begin{theorem}\label{themonlytwo} Let $U_{\alpha,\beta}$ be an orthogonal-Buekenhout-Metz unital with $\alpha\neq0$. 
Let $Q_x$ and $Q_y$ be two distinct points in $\tau_{P_{\lambda}}(U_{\alpha,\beta})$, and let $l_{x,y}$ be the line through them. If $T(\alpha x^{2})\neq T(\alpha y^{2})$ then \[ l_{x,y}\cap \tau_{P_{\lambda}}(U_{\alpha,\beta})=\{Q_x, Q_y\}. \] \end{theorem} \begin{proof} Suppose that $l_{x,y}$ contains a point $Q_z \in \tau_{P_{\lambda}}(U_{\alpha,\beta})$, different from $Q_x$ and $Q_y$. Then there is a $\mu\in GF(q^{2})\setminus \{0\}$ such that \[ Q_z = Q_x+\mu Q_y. \] Note that $1+\mu\neq0$, otherwise $Q_x+\mu Q_y\notin U_{\alpha,\beta}$. Then, \[ [z,T(\alpha z^{2})-\lambda\epsilon,1] = Q_x+\mu Q_y = \left[\frac{x+\mu y}{1+\mu}, \frac{T(\alpha x^{2}) +\mu T(\alpha y^{2})}{1+\mu} -\lambda\epsilon,1\right]. \] This expression implies \[ T\left(\alpha\left(\frac{x+\mu y}{1+\mu}\right)^{2}\right) = \frac{T(\alpha x^{2}) +\mu T(\alpha y^{2})}{1+\mu}, \] which we re-write as: \[ (1+\mu)T\left(\alpha\left(\frac{x+\mu y}{1+\mu}\right)^{2}\right) = T(\alpha x^{2}) +\mu T(\alpha y^{2}). \] It follows that \begin{equation}\label{eqtraces} T\left(\alpha\left(\frac{x+\mu y}{1+\mu}\right)^{2}-\alpha x^{2}\right)=\mu T\left(\alpha y^{2}-\alpha\left(\frac{x+\mu y}{1+\mu}\right)^{2}\right). \end{equation} If \[ T\left(\alpha\left(\displaystyle\frac{x+\mu y}{1+\mu}\right)^{2}-\alpha x^{2}\right)=T\left(\alpha y^{2}-\alpha\left(\displaystyle\frac{x+\mu y}{1+\mu}\right)^{2}\right)=0 \] then \[ T\left(\alpha\left(\frac{x+\mu y}{1+\mu}\right)^{2}\right)=T(\alpha x^{2}) \ \ \ \text{and} \ \ \ T(\alpha y^{2})=T\left(\alpha\left(\frac{x+\mu y}{1+\mu}\right)^{2}\right) \] and thus $T(\alpha x^{2})=T(\alpha y^{2})$, which contradicts our hypothesis. It follows that Equation (\ref{eqtraces}) implies $\mu\in GF(q)$, and thus $1+\mu \in GF(q)$. 
Since $z \in T_{\lambda}$ and $\displaystyle{z = \frac{x+\mu y}{1+\mu}}$ we get \[ 2\lambda \epsilon+\alpha \left(\displaystyle{\frac{x+\mu y}{1+\mu}}\right)^{2}-\overline{\alpha}\overline{\left(\displaystyle{\frac{x+\mu y}{1+\mu}}\right)}^{2}+(\beta-\overline{\beta}) \left(\displaystyle{\frac{x+\mu y}{1+\mu}}\right)^{q+1} = 0, \] which is equivalent to \[ 2\lambda\epsilon(1+\mu)^{2}+\alpha(x+\mu y)^{2}-\overline{\alpha}\overline{(x+\mu y)}^{2}+(\beta-\overline{\beta})(x+\mu y)^{q+1}=0. \] After some simplifications we get \begin{align}\label{eqnalmostthere} \begin{split} \left(2\lambda\epsilon + \alpha x^2 - \overline{\alpha} \ \overline{x}^2 + (\beta-\overline{\beta})N(x)\right) \\ + \mu^2 \left(2\lambda\epsilon + \alpha y^2 - \overline{\alpha}\ \overline{y}^{2}+ (\beta-\overline{\beta}) N(y)\right) \\ + \mu \left( 4\lambda\epsilon + 2 \alpha xy - 2 \overline{\alpha} \ \overline{x} \ \overline{y} + (\beta-\overline{\beta})(\overline{y}x + \overline{x}y) \right) & = 0. \end{split} \end{align} Since $x, y \in T_{\lambda}$, we know \[ 2\lambda \epsilon+\alpha x^{2}-\overline{\alpha}\;\overline{x}^{2}+(\beta-\overline{\beta})N(x) = 2\lambda \epsilon+\alpha y^{2}-\overline{\alpha}\;\overline{y}^{2}+(\beta-\overline{\beta})N(y) = 0 \] and thus, Equation (\ref{eqnalmostthere}) implies \[ 4\lambda\epsilon + 2 \alpha xy - 2 \overline{\alpha} \ \overline{x} \ \overline{y} + (\beta-\overline{\beta})(\overline{y}x + \overline{x}y) =0, \] as $\mu\neq0$. Since the expression above is zero, independent of the value of $\mu$, every point $Q_x+\mu Q_y$, for $\mu\in GF(q)\setminus \{-1\}$, is in $\tau_{P_{\lambda}}(U_{\alpha,\beta})$. Hence, there are exactly $q$ feet of $P_{\lambda}$ in $U_{\alpha,\beta}$ lying on the same line $\ell$. But since $x\in T_{\lambda}$ implies $- x\in T_{\lambda}$, there is a $y\in T_{\lambda}$ such that $\ell = l_{y,-y}$. 
However, by Lemma \ref{lemmxand-xandythen-y} the number of points on $\ell \cap \tau_{P_{\lambda}}(U_{\alpha,\beta})$ must be even, which contradicts $q$ being odd. \end{proof} So far we have shown that a secant line cannot intersect $\tau_{P_{\lambda}}(U_{\alpha,\beta})$ in $3$ points, and that if the line is not of the form $l_{x,-x}$ then this intersection contains at most $2$ points. Next we obtain a bound for the maximum number of collinear points on $\tau_{P_{\lambda}}(U_{\alpha,\beta})$. \begin{theorem}\label{thmatmostfour} Let $U_{\alpha,\beta}$ be an orthogonal-Buekenhout-Metz unital with $\alpha\neq0$ and let $\ell$ be the line through two distinct points in $\tau_{P_{\lambda}}(U_{\alpha,\beta})$. Then, $\ell$ intersects $\tau_{P_{\lambda}}(U_{\alpha,\beta})$ in at most four points. \end{theorem} \begin{proof} Because of the previous results, the only case to consider is when $\ell$ is of the form $l_{x,-x}$, for some $x\in T_{\lambda}$. Hence, the conditions for $Q_z \in \ell \cap \tau_{P_{\lambda}}(U_{\alpha,\beta})$ are \begin{equation}\label{eqnewtowardsBezout} T(\alpha x^2) - T(\alpha z^2) =0 \hspace{.3in} \text{and} \hspace{.3in} 2\lambda \epsilon+ 2 \epsilon Im(\alpha z^{2})+(\beta-\overline{\beta})N(z)=0. \end{equation} We let $z=z_{1}+z_{2}\epsilon$, $\alpha=\alpha_{1}+\alpha_{2}\epsilon$, and $\beta=\beta_{1}+\beta_{2}\epsilon$, where $z_{1}$, $z_{2}$, $\alpha_{1}$, $\alpha_{2}$, $\beta_1$, $\beta_2 \in GF(q)$. Using these variables we can re-write the equations in (\ref{eqnewtowardsBezout}) as the system \begin{align}\label{eqnpointontangentline4} \begin{split} 2^{-1} T(\alpha x^2) & = \alpha_{1} z_{1}^2 + \alpha_{1} w z_{2}^2 + 2\alpha_{2}w z_1z_2 \\ - \lambda & = (\alpha_{2} + \beta_2) z_{1}^2 + (\alpha_{2} - w \beta_2) z_2^2 + 2\alpha_{1}z_1z_{2} \end{split} \end{align} We define the following elements in $GF(q)$. 
\[ \begin{array}{llll} A = \alpha_1 & B = \alpha_{1}w & C = 2\alpha_{2} w \ \ \ \ \ & D = - 2^{-1} T(\alpha x^2) \\ E = \alpha_{2} + \beta_2 \ \ \ \ \ & F = \alpha_{2} - w \beta_2 \ \ \ \ \ & G = 2\alpha_{1} & H = \lambda. \end{array} \] These elements allow us to re-write System (\ref{eqnpointontangentline4}) as the following system of equations with coefficients in $GF(q)$: \begin{equation}\label{eqnpointontangentline5} A z_{1}^2 + B z_{2}^2 + C z_1z_2 + D =0 \hspace{.4in} \text{and} \hspace{.4in} E z_{1}^2 + F z_2^2 + G z_1z_{2} + H =0 \end{equation} If these equations have a common linear factor then we get three linear equations equal to zero, which is three intersecting lines. This yields one solution or a triplet of coinciding lines, which would imply that each equation in System (\ref{eqnpointontangentline5}) is a multiple of the other. But we know that the equation $T(\alpha x^2) - T(\alpha z^2) =0$ has exactly $2(q+1)$ solutions, implying that System (\ref{eqnpointontangentline5}) has $2(q+1)$ solutions, which is more than the maximum number of points on $\ell \cap \tau_{P_{\lambda}}(U_{\alpha,\beta})$, which is $q+1$. If the equations in System (\ref{eqnpointontangentline5}) have no common factor, we can use B\'ezout's Theorem for the curves given by \begin{align*} p(z_{1},z_{2}) &= Az_1^2 + B z_{2}^2 +Cz_1z_{2} + D \\ q(z_{1},z_{2}) &= Ez_1^2 + F z_{2}^2 +Gz_1z_{2} + H \end{align*} and since both are polynomials in two variables with coefficients in $GF(q)$, and both have degree two, we get that System (\ref{eqnpointontangentline4}) has at most $4=\deg(p)\cdot \deg(q)$ solutions. \end{proof} We summarize our results on the size of the intersections between lines and pedals in the following theorem. \begin{theorem} Let $P\notin \ell_{\infty}$ and $\alpha \neq 0$. Then, \begin{enumerate} \item lines in $\Pi$ intersect $\tau_{P}(U_{\alpha,\beta})$ in exactly $0$, $1$, $2$, or $4$ points. 
\item the points of $\tau_{P}(U_{\alpha,\beta})$ may be partitioned into two arcs. \end{enumerate} \end{theorem} \begin{proof} The first part of the theorem follows from Theorems \ref{themonlytwo} and \ref{thmatmostfour}, and Lemmas \ref{lemorbits}, \ref{lemmxand-xandythen-y} and \ref{lemmsametrace}. For the second part, we use Lemma \ref{lemorbits} to allow ourselves to consider the particular case $P=P_{\lambda}$. Since we know that only the lines of the form $l_{x,-x}$ can intersect $\tau_{P_{\lambda}}(U_{\alpha,\beta})$ in four points, we look at these lines first. Assume that the lines of the form $l_{x,-x}$ intersecting $\tau_{P_{\lambda}}(U_{\alpha,\beta})$ are partitioned as follows: $\ell_1, \ell_2, \ldots, \ell_n$ intersect $\tau_{P_{\lambda}}(U_{\alpha,\beta})$ in exactly two points and $\ell_{n+1}, \ell_{n+2}, \ldots, \ell_t$ intersect $\tau_{P_{\lambda}}(U_{\alpha,\beta})$ in four. We label the points of $\tau_{P_{\lambda}}(U_{\alpha,\beta})$ by $Q_{x}$, where $x$ is one of the following \[ x_{11}, x_{12},\ldots, x_{n1}, x_{n2}, x_{(n+1)1}, x_{(n+1)2}, x_{(n+1)3}, x_{(n+1)4}, \ldots, x_{t1}, x_{t2}, x_{t3}, x_{t4} \] where $Q_{x_{ij}}\in \ell_i$, and $x_{i2}=-x_{i1}$ and $x_{i3}=-x_{i4}$, for all $i$. Then, the points of $\tau_{P_{\lambda}}(U_{\alpha,\beta})$ can be partitioned into the following two arcs \[ A_1 = \{Q_{x}; \ x= x_{ij}, \ i=1,\ldots,t \ \text{and} \ j=1,2 \} \cup \{P_{\lambda}\} \] and \[ A_2 = \{Q_{x}; \ x= x_{ij}, \ i=n+1,\ldots,t \ \text{and} \ j=3,4 \}. \] Note that the partition given is just one of the many possible ones. \end{proof} In the particular case when $\beta = \overline{\beta}$ we can get an even stronger result. \begin{corollary}\label{corarcsbeta} If $\alpha \neq 0$ and $\beta = \overline{\beta}$, then the points of $\tau_{P}(U_{\alpha,\beta})$ are contained in lines or arcs. \end{corollary} \begin{proof} We already know that $\tau_{P}(U_{\alpha,\beta})$ is contained in a line when $P\in \ell_{\infty}$. 
When $P\notin \ell_{\infty}$, we use Lemma \ref{lemorbits} to restrict ourselves to studying the structure of $\tau_{P_{\lambda}}(U_{\alpha,\beta})$. Let $x\neq y$ and let $l_{x,y}$ be the line through $Q_x, Q_y \in \tau_{P_{\lambda}}(U_{\alpha,\beta})$. Since $\beta=\overline{\beta}$ we get that $T(\alpha x^2) = 2 \alpha x^2 + 2 \lambda \epsilon$, for all $x\in T_{\lambda}$ (this follows from the argument before Remark \ref{remimaginarynorm}). Hence, a point $Q_z \in \tau_{P_{\lambda}}(U_{\alpha,\beta})$ now looks like $Q_z=[z,2\alpha z^{2}+\lambda \epsilon,1]$, and the line $l_{x,y}$ is given by \[ \left[\begin{array}{c} - 2\alpha(x+y)\\ 1\\ 2\alpha xy-\lambda \epsilon \end{array}\right]. \] Thus, $Q_z \in l_{x,y}$ if and only if \[ 2\alpha[(x^{2}-y^{2})z-(x-y)z^{2}-xy(x-y)]=0. \] Since $\alpha\neq0$ and $x\neq y$, this equation reduces to \[ z^{2}-(x+y)z+xy=0, \] which can be re-written as \[ (z-x)(z-y)=0. \] The result follows. \end{proof} \section{The Elation Group of $U_{\alpha,\beta}$} Let us consider the collineation group of $U_{\alpha,\beta}$ given by \[ \mathcal{E}= \left\{ E_t: (x,y,z) \mapsto (x,y+tz,z); \ t\in GF(q) \right\}. \] Note that $\mathcal{E}$ is an elation group with center $P_{\infty}$ and axis $\ell_{\infty}$. It is easy to show that lines of the form $AE_t(A)$ must pass through $P_{\infty}$, for all $A\notin \ell_{\infty}$ and $E_t\in \mathcal{E}$. Also, since $\mathcal{E}$ acts semi-regularly on points not on its axis, nothing but the identity in $\mathcal{E}$ stabilizes a $\tau_{P}(U_{\alpha,\beta})$, and if $Q= E_t(P)$, for some $E_t \in \mathcal{E}$ different from the identity, we get that $ \tau_{P}(U_{\alpha,\beta})$ and $\tau_{Q}(U_{\alpha,\beta})$ are disjoint. It has been mentioned before that every line through $P_{\infty}$, except for $\ell_{\infty}$, contains a pedal (of a point on $\ell_{\infty}$). Moreover, it is easy to see that the $q$ points, different from $P_{\infty}$, on each of these pedals form an orbit under the group $\mathcal{E}$. 
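These facts about $\mathcal{E}$ can be confirmed in a small example. The sketch below assumes $q=3$, encodes $GF(9)$ as pairs $a+b\epsilon$ with $\epsilon^{2}=2$, uses hypothetical parameters $\alpha$, $\beta$, and works with fixed homogeneous representatives, $P_{\infty}=[0,1,0]$; it checks that each $E_t$ maps the point set $U_{\alpha,\beta}$ to itself and that the orbit of an affine point lies on a line through $P_{\infty}$.

```python
Q, W = 3, 2  # base field size and a nonsquare in GF(3), so eps^2 = W

def add(x, y):
    return ((x[0] + y[0]) % Q, (x[1] + y[1]) % Q)

def mul(x, y):
    # (a + b*eps)(c + d*eps) = (ac + W*bd) + (ad + bc)*eps
    return ((x[0] * y[0] + W * x[1] * y[1]) % Q, (x[0] * y[1] + x[1] * y[0]) % Q)

def conj(x):
    # x_bar = x^q sends a + b*eps to a - b*eps
    return (x[0], (-x[1]) % Q)

GF9 = [(a, b) for a in range(Q) for b in range(Q)]
ZERO, ONE = (0, 0), (1, 0)
alpha, beta = (1, 1), (0, 1)  # hypothetical parameters

def second_coord(x, r):
    # alpha*x^2 + beta*N(x) + r, with r in GF(q) embedded as (r, 0)
    return add(add(mul(alpha, mul(x, x)), mul(beta, mul(x, conj(x)))), (r, 0))

P_inf = (ZERO, ONE, ZERO)  # P_infinity = [0, 1, 0], fixed representative
U = {(x, second_coord(x, r), ONE) for x in GF9 for r in range(Q)} | {P_inf}

def E(t, P):
    # elation E_t : (x, y, z) -> (x, y + t*z, z), t in GF(q)
    x, y, z = P
    return (x, add(y, mul((t, 0), z)), z)

# each E_t maps U_{alpha,beta} to itself (E is a collineation group of U)
for t in range(Q):
    assert {E(t, P) for P in U} == U

# the orbit of an affine point A under E lies on a line through P_inf:
# all points E_t(A) share the first and third homogeneous coordinates
A = (GF9[5], second_coord(GF9[5], 1), ONE)
orbit = {E(t, A) for t in range(Q)}
assert len(orbit) == Q and all(P[0] == A[0] and P[2] == ONE for P in orbit)
print("E preserves U; affine orbits are collinear with P_inf")
```

The second check reflects that the points $E_t(A)$ all lie on the line $x = A_1 z$, which contains $[0,1,0]$.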
We take this observation as a `suggestion' to take a closer look at the orbits of pedals under $\mathcal{E}$ and to study how lines intersect these sets. From now on, we will use $\mathcal{O}(X)$ to denote the orbit of a set $X$ under the group $\mathcal{E}$. \begin{lemma}\label{lemsomelinesinq+1} Given a pedal $\tau_{P}(U_{\alpha,\beta})$, there is a point $Q\in \ell_{\infty}$ and $q$ lines through $Q$ that partition $\mathcal{O}(\tau_{P}(U_{\alpha,\beta}))$. That is, the intersection of each of these lines with $U_{\alpha,\beta}$ is completely contained in $\mathcal{O}(\tau_{P}(U_{\alpha,\beta}))$. \end{lemma} \begin{proof} Because of Lemma \ref{lemorbits}, it is enough to look at how lines intersect $\tau_{P_{\lambda}}(U_{\alpha,\beta})$. We consider the point $[1,0,0]$ and the lines through it. We know that lines, different from $\ell_{\infty}$, through $[1,0,0]$ look like \[ l_{\gamma}=\left[\begin{array}{c} 0 \\ -1 \\ \gamma \end{array}\right]. \] It is easy to see that the orbit of $P_{\lambda}$ is contained on a line through $P_{\infty}$. So, we let $\mathcal{O}(P_{\lambda})=\{P_1, P_2, \ldots, P_q \}$, where $P_t = E_t(P_{\lambda})$, for all $t\in GF(q)$. Moreover, since \[ E_t(x,y,z)=(x,y+tz,z) \] for all $t\in GF(q)$, and $P_{\lambda}=[0,\lambda \epsilon,1]$, with $\lambda=1$ or $\lambda=w$, we obtain \[ P_t=[0,\lambda \epsilon+t,1] \] Using the arguments at the beginning of Section \ref{secnotPinfty}, we get that \[ \tau_{P_{t}}(U_{\alpha,\beta}) = \left\{R_y; \ y\in GF(q^{2}), \ 2\lambda \epsilon+\left[\begin{array}{cc} y & \overline{y}\\ \end{array}\right]M_{\alpha,\beta}\left[\begin{array}{c} y\\ \overline{y}\\ \end{array}\right]=0\right\} \] where $R_y=[y,T(\alpha y^2) -\lambda \epsilon +t,1]$. 
Now, the points of intersection (if any) of $\tau_{P_{t}}(U_{\alpha,\beta})$ with $l_{\gamma}$ are given by \[ 0 = [y,T(\alpha y^2) -\lambda \epsilon +t,1]\left[\begin{array}{c} 0 \\ -1 \\ \gamma \end{array}\right] = -(T(\alpha y^2) -\lambda \epsilon +t)+ \gamma \] which means \begin{equation}\label{eqalmostdone} t= \gamma + \lambda \epsilon - T(\alpha y^2). \end{equation} It follows that if $y$ and $\gamma$ are given, with $\gamma = s - \lambda \epsilon$ for some $s \in GF(q)$, then we can always find a $t\in GF(q)$ that satisfies Equation (\ref{eqalmostdone}). In this case, given a line $l_{s - \lambda \epsilon}$, for every $y\in GF(q^2)$ such that $R_y \in \tau_{P_{t}}(U_{\alpha,\beta})$ there is a point of intersection between $l_{s - \lambda \epsilon}$ and $\mathcal{O}(\tau_{P_{\lambda}}(U_{\alpha,\beta}))$. Hence, if $\gamma \neq s - \lambda \epsilon$ for all $s\in GF(q)$ then the intersection is empty, and if $\gamma = s - \lambda \epsilon$ for some $s\in GF(q)$ then the intersection contains $q+1$ points. Note that for every $s\in GF(q)$ we are able to choose such a $\gamma$, thus we get $q$ lines through $[1,0,0]$ intersecting $\mathcal{O}(\tau_{P_{\lambda}}(U_{\alpha,\beta}))$ in $q+1$ points each. \end{proof} We would like to close this paper by stating a few open problems. \begin{enumerate} \item Do lines intersecting pedals in at least four points exist if and only if $\beta \neq \overline{\beta}$? Corollary \ref{corarcsbeta} gives us one direction of this conjecture. \item What geometric properties determine when a line of the form $l_{x,-x}$ intersects a given $\tau_{P}(U_{\alpha,\beta})$ in four points? \item When a $\tau_{P}(U_{\alpha,\beta})$ is partitioned into two arcs (or contained in one arc for the case $\beta = \overline{\beta}$), is any of these arcs contained in a conic? \item In how many points does a line intersect the set of points in the orbit of any given $\tau_{P}(U_{\alpha,\beta})$ under $\mathcal{E}$? 
Lemma \ref{lemsomelinesinq+1} gives us a partial answer to this, but there are several other lines that are not considered in this result. \item Is there a combinatorial characterization for the structure formed by the lines of $\Pi$ and the points on $\mathcal{O}(\tau_{P_{\lambda}}(U_{\alpha,\beta}))$? \end{enumerate}
\section{Introduction} We consider a Hamiltonian dynamics property, the symplectic nonsqueezing, of the coupled KdV type system, \begin{equation*} \left\{ \begin{array}{ll}\label{eq:Coupled KdV system}\tag{CKdV} u_t + u_{xxx} + \frac{1}{2}\left(vv\right)_x = 0 \\ v_t + v_{xxx} + \left(uv\right)_x =0 \\ \left. \left(u,v\right) \right|_{t=0}= \left(u_0\left(x\right), v_0\left(x\right)\right), &\left(u_0,v_0\right) \in H^{-1/2}\left(\mathbb{T}\right) \times H^{-1/2}\left(\mathbb{T}\right), \end{array} \right. \end{equation*} where $\left(x,t\right) \in \mathbb{T} \times \mathbb{R} = \left[0,2\pi\right) \times \mathbb{R}$, and $u$, $v$ are real valued functions. (\ref{eq:Coupled KdV system}) is a special version of the coupled KdV system, \begin{equation}\label{eq:Generalized Coupled KdV system} \begin{cases}{} u_t + a_{11} u_{xxx} + a_{12}v_{xxx} + b_1 u u_{x} +b_2 uv_{x}+b_3 u_{x}v +b_4 v v_{x}= 0 \\ v_t + a_{21} u_{xxx} + a_{22}v_{xxx} + b_5 u u_{x} +b_6 uv_{x}+b_7 u_{x}v +b_8 v v_{x}= 0, \end{cases} \end{equation} where $u$, $v$ are real valued functions and $A=\left[\begin{matrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{matrix}\right]$ is self-adjoint and non-singular. By diagonalization of $A$, we can reduce (\ref{eq:Generalized Coupled KdV system}) to \begin{equation}\label{eq:Diagonalize Coupled KdV system} \begin{cases}{} u_t + u_{xxx} + b_1 u u_{x} +b_2 uv_{x}+b_3 u_{x}v +b_4 v v_{x}= 0 \\ v_t + \alpha v_{xxx} + b_5 u u_{x} +b_6 uv_{x}+b_7 u_{x}v +b_8 v v_{x}= 0, \end{cases} \end{equation} where $\alpha \not =0$ on the same domain. There are many examples of systems of this type, such as the Majda-Biello system, \begin{equation}\label{eq:Majda-Biello system} \begin{cases}{} u_t + u_{xxx} + \frac{1}{2}\left(vv\right)_{x}= 0 \\ v_t + \alpha v_{xxx} + \left(uv\right)_{x}= 0. \end{cases} \end{equation} The KdV type equations can be seen as examples of nonlinear dispersive equations or Hamiltonian systems. 
If one considers them on a compact domain, due to the lack of dispersion, it is better to see them as Hamiltonian systems. They have been studied by many authors in both periodic and nonperiodic settings. The well-posedness has been studied via local smoothing estimates and Bourgain's $X^{s,b}$ analysis \cite{KENIG:1993ts, Bourgain:1993cl, KENIG:1996tq}. In the low regularity regime below $L^2$, the global well-posedness is obtained by the I-method \cite{Colliander:2003td}. The Majda-Biello system \eqref{eq:Majda-Biello system} is an example of the coupled KdV system. As an extension of results for the KdV equation, the Cauchy problem for the Majda-Biello system was well studied by Oh \cite{Oh:2009uu, Oh:2009ux, Oh:2009um}. More precisely, the local well-posedness and the almost surely global well-posedness were proved in \cite{Oh:2009ux, Oh:2009um}, respectively. In \cite{Oh:2009uu}, the global well-posedness of (\ref{eq:Majda-Biello system}) in $H^{s}\left(\mathbb{T}\right) \times H^{s}\left(\mathbb{T}\right)$ for $s \ge -1/2$, when $\alpha=1$, was proved (moreover, Oh \cite{Oh:2009uu} obtained the global well-posedness of (\ref{eq:Majda-Biello system}) in $H^{s}\left(\mathbb{T}\right) \times H^{s}\left(\mathbb{T}\right)$, $s \ge \tilde s$, where $\tilde s = \tilde s\left(\alpha \right) \in \left( \left. 5/7,1 \right] \right.$, and $\alpha$ satisfies a certain Diophantine condition). Note that the phase space should be $H^{-1/2}(\mathbb{T}) \times H^{-1/2}(\mathbb{T})$ to consider the solution flow of \eqref{eq:Coupled KdV system} as a symplectic map. Recently, Guo, Simon and Titi \cite{Guo:2013up} proved the unconditional well-posedness of (\ref{eq:Coupled KdV system}) by the differentiation by parts. \\ \\ The purpose of this paper is to show the symplectic nonsqueezing property of the solution flow of \eqref{eq:Coupled KdV system}. The Lebesgue measure is a typical invariant of a symplectic transform. 
In \cite{Gromov:1985ww}, Gromov discovered another invariant of a symplectic transform which is called the Darboux width. Later, Hofer and Zehnder \cite{Hofer:2011vo} developed the theory of the symplectic capacity. Moreover, Kuksin \cite{Kuksin:1995ue} extended the symplectic capacity to Hamiltonian PDEs. The main idea of \cite{Kuksin:1995ue} is that one can approximate the solution flow of a given Hamiltonian PDE by a finite dimensional symplectic map on the phase space. Concrete examples were studied by Bourgain \cite{Bourgain:1994tr} for the 1D cubic nonlinear Schr\"odinger equation (NLS), and Colliander et al. \cite{Colliander:2005vv} for the KdV equation. Recently, Roum\'egoux \cite{Roumegoux:2010sn} also proved nonsqueezing for the BBM equation. Mendelson \cite{Mendelson:2014vh} proved the nonsqueezing property of the Klein-Gordon equation on $\mathbb{T}^3$ via a probabilistic approach.\\ \\ As mentioned above, Bourgain \cite{Bourgain:1994tr} proved the nonsqueezing property of the 1D cubic NLS on $L^2_x\left(\mathbb{T}\right)$, and the basic strategy in \cite{Bourgain:1994tr} was an approximation by a finite dimensional truncated flow. The main step is to approximate the 1D cubic NLS flow, which is the flow of an infinite dimensional Hamiltonian system, by a frequency truncated finite dimensional system. Then, due to Gromov's nonsqueezing theorem for finite dimensional Hamiltonian systems, we have the nonsqueezing property of the truncated flow, and the result is transferred to the infinite dimensional NLS flow. Note that the truncated flow should be a symplectic map. Thus, the main task here is to find such a good frequency truncation. The 1D cubic NLS turns out to be well-behaved under frequency truncations. Indeed, Bourgain used a basic (or sharp) frequency truncation and the $X^{s,b}$ space to apply this argument. Later, this argument was extended by Colliander et al. 
\cite{Colliander:2005vv} for the KdV flow on its phase space $H^{-1/2}_x\left(\mathbb{T}\right)$. In \cite{Colliander:2005vv}, there are two additional ingredients. Firstly, it turned out that the sharp frequency truncation does not work efficiently. They provided a counterexample showing that a sharp truncation does not approximate the original flow. Instead, they used a smooth truncation to resolve this problem. Secondly, they used the Miura transform to change the KdV flow to the mKdV flow. In fact, they proved the approximation by the truncated flow for the mKdV flow and, using the Miura transform and its inverse, concluded the approximation for the KdV flow. \\ The main goal of our work is to show that the second ingredient, the Miura transform, is not necessary but can be replaced by a more general technique, the so-called normal form. In fact, the Miura transform is a special feature of the KdV flow due to integrability, and so it is not widely applicable. Indeed, the system \eqref{eq:Coupled KdV system} does not enjoy the Miura transform. Although we present the proof for (CKdV), the same proof works for the KdV flow, and so we think this provides an alternative simplified proof of the result in \cite{Colliander:2005vv}.\\ \\ The method of the normal form via the differentiation by parts was first introduced by Babin, Ilyin and Titi \cite{Babin:2011wb} for the unconditional well-posedness of the KdV equation on $L^2(\mathbb{T})$, in which the normal form replaces the use of $X^{s,b}$ spaces. This argument was extended to other equations \cite{Kwon-Oh,Guo-Kwon-Oh}. Also, Erdogan and Tzirakis \cite{Erdogan:2012gs} used this method with $X^{s,b}$ multilinear estimates to show the global smoothing for the periodic KdV equation. The method of normal form is a way to detect and cancel out the nonresonant part of the nonlinear term. In general, if the characteristic surface is curved, then from the dispersion relation there is no quadratic resonance. 
Thus, by taking the normal form, the equation is changed to a cubic equation with quadratic boundary terms. See the details in Section~\ref{Section: proof of different flow}. In \cite{Colliander:2005vv}, we observe that the role of the Miura transform is to change the KdV equation to the mKdV equation in order to carry out the analysis for a trilinear nonlinearity. Thus, we thought this could be replaced by the normal form method. Note that in this example, both the Miura transform (in \cite{Colliander:2005vv}) and the method of normal form do not utilize the full information of the \emph{integrability} of the KdV flow. Thus, the method of normal form is more widely applicable to nonintegrable equations. \\ \\ The rest of the paper is organized as follows: In Section \ref{Sec2}, we present theorems for the nonsqueezing property. In Section~\ref{Sec3}, we prove lemmas on bilinear and trilinear estimates in the $X^{s,b}$ space setting. In Section~\ref{Section: proof of different flow}, we apply the differentiation by parts to the equation (\ref{eq:Coupled KdV system}) and prove key theorems using multilinear estimates. \subsection*{Notations} $ $ \\ For each dyadic number $N$, we denote the Littlewood-Paley projection by \begin{align*} \widehat{P_Nu}\left(k\right) := 1_{N \le \left|k\right| <2N}\left(k\right) \hat u_k, \\ \widehat{P_{\le N} u}\left(k \right) := 1_{\left|k\right| \le N}\left(k\right) \hat u_k, \\ \widehat{P_{\ge N} u}\left(k \right) := 1_{\left|k\right| \ge N}\left(k\right) \hat u_k, \end{align*} where $1_{\Omega}$ is the characteristic function of $\Omega$. For positive real numbers $x, y$, $x \lesssim y$ denotes $x \le Cy$ for some $C >0$, and $x \sim y$ means $x \lesssim y$ and $y \lesssim x$. We also denote $f = \mathcal{O}(g)$ by $f \lesssim g$ for positive real valued functions $f$ and $g$. 
Moreover, $x \ll y$ denotes $x \le cy$ for some small positive constant $c$.\\ \\ \section{Setting and Statement}\label{Sec2} We consider (\ref{eq:Coupled KdV system}) for simplicity of the argument. Let $S_{CKdV}\left(t\right)$ denote the nonlinear solution flow of (\ref{eq:Coupled KdV system}). The system (\ref{eq:Coupled KdV system}) enjoys several conservation laws, \begin{align*} E_1 = \int_{\mathbb{T}} u dx, ~ E_2 = \int_{\mathbb{T}} v dx, \label{eq:mean} \\ M\left(u,v\right) = \int_{\mathbb{T}} u^2 +v^2 dx, \end{align*} and \begin{equation}\label{eq:Hamiltonian} H\left(u,v\right) = \frac{1}{2}\int_{\mathbb{T}} u_x^2 + v_x^2 - uv^2 dx. \end{equation} In particular, (\ref{eq:Hamiltonian}) is the Hamiltonian, i.e., the system (\ref{eq:Coupled KdV system}) has a Hamiltonian structure with respect to (\ref{eq:Hamiltonian}). We denote the spatial Fourier transform and the inverse Fourier transform by \begin{align*} \mathcal{F}_x \left(u\right) &= \hat u_k = \int _{\mathbb{T}} e^{-ikx}u\left(x\right) dx, \\ u \left(x \right) &= \int e^{ikx}\hat u\left(k\right) dk:=\frac{1}{2 \pi}\sum_{k \in \mathbb{Z}} \hat u_k e^{ikx}. \end{align*} We use the spatial Sobolev space \begin{equation*}\label{eq:H^s space} \left\|u\right\|_{H^s_x} = \left\|\left<k\right>^s \hat u\right\|_{L^2_k} := \frac{1}{\left(2\pi\right)^{1/2}} \left(\sum_{k \in \mathbb{Z}} \left<k\right>^{2s}\left|\hat u_k\right|^2\right)^{1/2}, \end{equation*} where $s \in \mathbb{R}$ and $\left<k\right> = \left(1+\left|k\right|^2\right)^{1/2}$. Mostly, we work on the mean zero $H^s$ space as follows, \begin{equation*} H^s_0 = \left\{u \in H^s : \frac{1}{2\pi} \int_{\mathbb{T}} u =0 \right\} ~\text{and}~ \left\|u\right\|_{H_0^s} := \left\|\left<k\right>^s \hat u(k)\right\|_{L^2_k}. \end{equation*} Since $E_1$ and $E_2$ are conserved quantities, the function space $H^{s}_0 \times H^s_0$ is well-suited for the solution to (\ref{eq:Coupled KdV system}). 
Note that due to the Galilean transform, one can switch from mean zero solutions to general mean solutions. \\ The equation (CKdV) is a Hamiltonian PDE associated with Hamiltonian (\ref{eq:Hamiltonian}). More precisely, we can write (\ref{eq:Coupled KdV system}) as \begin{equation*} \begin{cases}{} u_t = \nabla_{\omega,u} H\left(u\left(t\right),v\left(t\right)\right) \\ v_t = \nabla_{\omega,v} H\left(u\left(t\right),v\left(t\right)\right), \\ \end{cases} \end{equation*} where \begin{equation*} \begin{cases}{} \omega\left(h,\nabla_{\omega,u}H\left(u\left(t\right),v\left(t\right)\right)\right) := \left. \frac{d}{d\varepsilon}\right|_{\varepsilon=0} H\left(u+\varepsilon h, v\right) \\ \omega\left(h,\nabla_{\omega,v}H\left(u\left(t\right),v\left(t\right)\right)\right) := \left. \frac{d}{d\varepsilon}\right|_{\varepsilon=0} H\left(u, v+\varepsilon h\right), \end{cases} \end{equation*} and \begin{equation*} \omega\left(u,v\right) := \int_{\mathbb{T}} u \partial^{-1}_x v dx. \end{equation*} Thus, we call \begin{equation}\label{eq:symplectic form} \omega_H \left(\left(u,u'\right),\left(v,v'\right)\right)= \omega\left(u,v\right)+ \omega\left(u',v'\right) \end{equation} the symplectic form associated with (\ref{eq:Hamiltonian}). Hence, a solution flow of (\ref{eq:Coupled KdV system}) is the Hamiltonian flow in $(H^{-1/2}_0\left(\mathbb{T}\right) \times H^{-1/2}_0\left(\mathbb{T}\right), \omega_H)$ corresponding to (\ref{eq:Hamiltonian}). Note that the system is globally well-posed on its phase space $H^{-1/2}\left(\mathbb{T}\right) \times H^{-1/2}\left(\mathbb{T}\right)$, and so the solution flow from data $(u_0,v_0)$ to $(u(t),v(t))$ is a symplectic map at any time $t$. \\ \\ Now, we discuss the nonsqueezing theorem. We first recall Gromov's finite dimensional nonsqueezing theorem. \begin{thm}[Finite dimensional nonsqueezing theorem]\label{thm:Finite nonsqueezing} Let $\mathcal{S}$ be a symplectic map on a $2n$-dimensional phase space. 
Let $B_R$ and $C_{k,r}$ denote a ball of radius $R$ and a cylinder of radius $r$ in the $k$-th component, respectively. If \begin{equation*} \mathcal{S}\left(B_R\right) \subseteq C_{k,r}, \end{equation*} then $r \ge R$. \end{thm} Our strategy is to find a frequency-truncated finite dimensional solution flow which is also a Hamiltonian flow and approximates the original flow in a suitable sense. Moreover, once we find the finite dimensional approximation, we can transfer the nonsqueezing theorem to the original flow. \\ A first guess for an approximation of the flow is a sharp frequency truncation, as in \cite{Bourgain:1994tr}. However, this is not a good approximation for (\ref{eq:Coupled KdV system}) (see Remark \ref{rmk:counter example}). Naturally, we next choose a smooth truncation, as in \cite{Colliander:2005vv}. More precisely, let $\phi\left(x\right)$ be a smooth even bump function supported in $\left[-N,N\right]$ which equals $1$ on $\left[-N/2,N/2\right]$, and let $b\left(k\right)$ be the restriction of $\phi$ to the integers. We thus consider the smooth truncated system, \begin{equation}\label{eq:smooth truncated system}\tag{BKdV} \left\{\ \begin{aligned} \partial_t u + \partial_{xxx} u + \frac{1}{2}B\left(\left(vv\right)_x\right) = 0 \\ \partial_t v + \partial_{xxx} v + B\left(\left(uv\right)_x\right) =0, \end{aligned} \right. \end{equation} where \begin{equation*} \widehat{Bu}\left(k\right) = b\left(k\right) \hat u\left(k\right). \end{equation*} Let $S_{BKdV}\left(t\right)$ be the solution flow of (\ref{eq:smooth truncated system}). Clearly, $S_{BKdV}\left(t\right)$ is a finite dimensional solution flow. However, $S_{BKdV}\left(t\right)$ is not a symplectic map, so we need more steps. To construct an appropriate finite dimensional symplectic map with respect to (\ref{eq:symplectic form}), we first consider a modified Hamiltonian.
Let $H_N\left(u,v\right)$ be the Hamiltonian defined by \begin{equation*} H_N\left(u,v\right) := \frac{1}{2} \int u_x^2 + v_x^2 - B\left(u\right)\left(B\left(v\right)\right)^2 dx, \end{equation*} on $P_{\le N} H^{-1/2}_0 \left(\mathbb{T}\right) \times P_{\le N} H^{-1/2}_0 \left(\mathbb{T}\right)$. Then we can obtain the appropriate truncated system from $H_N\left(u,v\right)$. Computing the gradient with respect to (\ref{eq:symplectic form}) as above, \begin{equation*} \begin{aligned} \left.\frac{d}{d\varepsilon} H_N \left(u+\varepsilon w,v\right) \right|_{\varepsilon=0} &= \int \left(u_x w_x-\frac{1}{2}B\left(w\right)\left(B\left(v\right)\right)^2\right) = \omega\left(w,-u_{xxx}-\frac{1}{2}B\left(\left(B\left(v\right)B\left(v\right)\right)_x\right)\right), \\ \left.\frac{d}{d\varepsilon} H_N \left(u,v+\varepsilon w\right) \right|_{\varepsilon=0} &= \int \left(v_x w_x-B\left(u\right)B\left(v\right)B\left(w\right)\right) = \omega\left(w,-v_{xxx}-B\left(\left(B\left(u\right)B\left(v\right)\right)_x\right)\right). \end{aligned} \end{equation*} Hence, we conclude that the smooth truncated system with respect to $H_N\left(u,v\right)$ is given by \begin{equation}\label{eq:symplectic truncated system}\tag{FKdV} \left\{\ \begin{aligned} \partial_t u + \partial_{xxx} u + \frac{1}{2}B\left(\left(B\left(v\right)B\left(v\right)\right)_x\right) = 0, \\ \partial_t v + \partial_{xxx} v + B\left(\left(B\left(u\right)B\left(v\right)\right)_x\right)=0, \end{aligned} \right. \end{equation} for initial data $\left(u_0,v_0\right) \in P_{\le N} H^{-1/2}_0 \left(\mathbb{T}\right) \times P_{\le N} H^{-1/2}_0 \left(\mathbb{T}\right)$. Let $S_{FKdV}\left(t\right)$ be the solution flow of (\ref{eq:symplectic truncated system}). It is a finite dimensional symplectic map at any time $t$ by construction and by global well-posedness. We now consider $S_{FKdV}\left(t\right)$ as a candidate for a good approximation.
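Note that, since $b$ is supported in $\left[-N,N\right]$, the nonlinearity of \eqref{eq:symplectic truncated system} has Fourier support in $\left[-N,N\right]$, so the flow indeed preserves $P_{\le N} H^{-1/2}_0 \left(\mathbb{T}\right) \times P_{\le N} H^{-1/2}_0 \left(\mathbb{T}\right)$. Moreover, since $\omega$ is antisymmetric ($\omega\left(u,v\right) = -\omega\left(v,u\right)$ by integration by parts), $H_N$ is conserved along the flow of \eqref{eq:symplectic truncated system}; a sketch of the standard computation: \begin{equation*} \frac{d}{dt}H_N\left(u\left(t\right),v\left(t\right)\right) = \omega\left(u_t,\nabla_{\omega,u}H_N\right) + \omega\left(v_t,\nabla_{\omega,v}H_N\right) = \omega\left(\nabla_{\omega,u}H_N,\nabla_{\omega,u}H_N\right) + \omega\left(\nabla_{\omega,v}H_N,\nabla_{\omega,v}H_N\right) = 0. \end{equation*}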
\begin{rmk}\label{rmk:relation of flow} We observe the relation between \eqref{eq:smooth truncated system} and \eqref{eq:symplectic truncated system}. First of all, we apply the operator $B$ to both sides of \eqref{eq:symplectic truncated system}. Then we obtain the system, \begin{equation}\label{eq:double truncated system}\tag{BBKdV} \left\{\ \begin{aligned} \partial_t u + \partial_{xxx} u + \frac{1}{2}B^2\left(\left(vv\right)_x\right) = 0 \\ \partial_t v + \partial_{xxx} v + B^2\left(\left( uv\right)_x\right)=0, \end{aligned} \right. \end{equation} for initial data $\left(Bu_0,Bv_0\right) \in P_{\le N} H^{-1/2}_0 \left(\mathbb{T}\right) \times P_{\le N} H^{-1/2}_0 \left(\mathbb{T} \right)$. We let $S_{BBKdV}\left(t\right)$ be the solution flow of \eqref{eq:double truncated system}, and then, by the definitions of $S_{FKdV}\left(t\right)$ and $S_{BBKdV}\left(t\right)$, \begin{equation*} BS_{FKdV}\left(t\right)\left(u_0\right) = S_{BBKdV}\left(t\right)\left(Bu_0\right). \end{equation*} From the definition of $B$, \eqref{eq:double truncated system} is \eqref{eq:smooth truncated system} with $B$ replaced by $B^2$. This relation will be used in the proof of the approximation to the solution flow. \end{rmk} \begin{rmk}\label{rmk:counter example} The sharp truncation (which uses $P_{\le N}$ instead of $B$) turns out not to be a good finite dimensional approximation, due to a counterexample from \cite{Colliander:2005vv}. We consider the initial data \begin{equation*}\label{eq:counter example} u_0 = v_0 = \sigma^3 \cos\left(k_0x\right) + \sigma N^{1/2} \cos(Nx), \end{equation*} and by an iteration argument similar to that in \cite{Colliander:2005vv} (or consecutive substitution), we can show that the sharply truncated coupled KdV flow does not approximate the original coupled KdV flow. Since the coupled KdV flow with the same initial data can be regarded as the KdV flow, the same counterexample as in \cite{Colliander:2005vv} works for (CKdV).
\end{rmk} \begin{rmk} By the same argument as in \cite{Colliander:2005vv, Oh:2009uu} (the I-method), we can show the global well-posedness of \eqref{eq:symplectic truncated system}. See \cite{Oh:2009uu} for details. \end{rmk} So far, we have chosen the appropriate truncation and the function spaces. We now define balls and cylinders in the phase space and state the main theorem, the nonsqueezing property of the coupled KdV type system (\ref{eq:Coupled KdV system}). \begin{defn} Let $B^{N}_r\left(u_*\right)$ be the finite dimensional ball in $P_{\le N}{H^{-1/2}_0}$ of radius $r$ centered at $u_* \in P_{\le N}{H^{-1/2}_0}$. Likewise, $B^{\infty}_r\left(u_*\right)$ is the infinite dimensional ball in ${H^{-1/2}_0}$ of radius $r$ centered at $u_* \in {H^{-1/2}_0}$. That is, \begin{align*} B^{N}_r\left(u_*\right) := \left\{u \in P_{\le N}{H^{-1/2}_0} : \left\|u-u_*\right\|_{{H^{-1/2}_0}} \le r\right\}, \\ B^{\infty}_r\left(u_*\right) := \left\{u \in {H^{-1/2}_0} : \left\|u-u_*\right\|_{{H^{-1/2}_0}} \le r\right\}. \end{align*} For any $k \in \mathbb{Z}\setminus \left\{0\right\}\left(=:\mathbb{Z}^*\right)$, we define $C^{N}_{k,r}\left(z\right)$ to be the finite dimensional cylinder in $P_{\le N}{H^{-1/2}_0}$ of radius $r$ centered at $z \in \mathbb{C}$. Likewise, $C^{\infty}_{k,r}\left(z\right)$ is the infinite dimensional cylinder in ${H^{-1/2}_0}$ of radius $r$ centered at $z \in \mathbb{C}$. That is, \begin{align*} C^{N}_{k,r}\left(z\right) := \left\{u \in P_{\le N}{H^{-1/2}_0} : \left|k\right|^{-1/2}\left|\hat u_k-z\right| \le r\right\}, \\ C^{\infty}_{k,r}\left(z\right) := \left\{u \in {H^{-1/2}_0} : \left|k\right|^{-1/2}\left|\hat u_k-z\right| \le r\right\}. \end{align*} \end{defn} Now we state our main theorem. \begin{thm}\label{thm:nonsqueezing of CKdV(main thm)} Let $k_1, k_2 \in \mathbb{Z}^*$, $r_1<R_1$, $r_2<R_2$ and $T>0$.
In addition, let $\left(u_*,v_*\right) \in H_0^{-\frac{1}{2}} \left(\mathbb{T}\right) \times H_0^{-\frac{1}{2}} \left( \mathbb{T}\right)$ and $\left(z,w\right) \in \mathbb{C}^2$. Then \begin{equation*} S_{CKdV}\left(T\right)\left(B^{\infty}_{R_1}\left(u_*\right) \times B^{\infty}_{R_2}\left(v_*\right)\right) \not \subseteq C^{\infty}_{k_1,r_1}\left(z\right) \times C^{\infty}_{k_2,r_2}\left(w\right). \end{equation*} In other words, there exists a global solution $S_{CKdV}\left(t\right)\left(u_0,v_0\right) \in H^{-\frac{1}{2}}_0 \times H^{-\frac{1}{2}}_0$ to \eqref{eq:Coupled KdV system} such that \begin{equation*} \left\|u_0-u_*\right\|_{H^{-1/2}_0} \le R_1, ~ \left|k_1\right|^{-1/2} \left|\left(S_{CKdV}\left(T\right)u_0\right) \hat{}\left(k_1\right)-z\right| > r_1, \end{equation*} and \begin{equation*} \left\|v_0-v_*\right\|_{H^{-1/2}_0} \le R_2 , ~ \left|k_2\right|^{-1/2} \left|\left(S_{CKdV}\left(T\right)v_0\right) \hat{}\left(k_2\right)-w\right| > r_2, \end{equation*} respectively\footnote{Strictly speaking, $S_{CKdV}(t)$ acts on pairs $(u,v)$. By abuse of notation, we let $S_{CKdV}(t)u_0$ and $S_{CKdV}(t)v_0$ denote the first and the second components of $S_{CKdV}(t)(u_0,v_0)$, respectively. Here and in the sequel, we use this notation for all solution flows.}. \end{thm} Note that no smallness conditions are imposed on $k_i$, $r_i$, $R_i$, $\left(u_*,v_*\right)$ and $\left(z,w\right)$. \\ Our strategy is to construct a truncated solution flow which has the nonsqueezing property and approximates the original solution flow. Hence, we need the nonsqueezing theorem associated with the truncated solution flow (\ref{eq:symplectic truncated system}). \begin{lem}\label{lem:Nonsqueezing property of finite solution flow} Let $k_1, k_2 \in \mathbb{Z}^*$ be such that $\left|k_1\right|, \left|k_2\right| \le N$. Let $r_1<R_1$, $r_2<R_2$ and $T>0$.
Furthermore, let $\left(u_0,v_0\right) \in P_{\le N}H_0^{-\frac{1}{2}} \left(\mathbb{T}\right) \times P_{\le N}H_0^{-\frac{1}{2}} \left( \mathbb{T}\right)$ and $z,w \in \mathbb{C}$. Then \begin{equation*} S_{FKdV}\left(T\right)\left(B^{N}_{R_1}\left(u_0\right) \times B^{N}_{R_2}\left(v_0\right)\right) \not \subseteq C^{N}_{k_1,r_1}\left(z\right) \times C^{N}_{k_2,r_2}\left(w\right). \end{equation*} \end{lem} Since $S_{FKdV}\left(T\right)$ is a finite dimensional symplectic map at time $T$, Lemma \ref{lem:Nonsqueezing property of finite solution flow} is a direct consequence of Theorem \ref{thm:Finite nonsqueezing}. Thus, in the rest of the paper, we prove that the two flows, that of (\ref{eq:Coupled KdV system}) and that of (\ref{eq:symplectic truncated system}), are close for sufficiently large $N$. We show this in two steps. First, we prove that solutions agreeing on low frequency data stay close at frequencies $\le N$. Second, we show that solutions to the truncated flow stay close to the original flow at low frequencies. The first step reads as follows. \begin{thm}\label{thm:est. of different data} Let $T>0$, $\varepsilon>0$, $\left(u_0,v_0\right) \in {H^{{-\frac{1}{2}}}_0} \times {H^{{-\frac{1}{2}}}_0}$ and $\left(u'_0,v'_0\right) \in {H^{{-\frac{1}{2}}}_0} \times {H^{{-\frac{1}{2}}}_0}$. There exists a positive integer \begin{equation*} N_0(T, \varepsilon,\left\|u_0\right\|_{H^{-1/2}_0},\left\|u'_0\right\|_{H^{-1/2}_0},\left\|v_0\right\|_{H^{-1/2}_0},\left\|v'_0\right\|_{H^{-1/2}_0}) \end{equation*} such that for all $N>N_0$ and all data satisfying $P_{\le 2N}\left(u_0,v_0\right) = P_{\le 2N}\left(u'_0,v'_0\right)$, \begin{equation*} \begin{aligned} &\sup_{\left|t\right|\le T}\left\|P_{\le N}\left(S_{CKdV}\left(t\right) u_0-S_{CKdV} \left(t\right)u'_0\right)\right\|_{{H^{-1/2}_0}} \\ &+\sup_{\left|t\right|\le T} \left\|P_{\le N}\left(S_{CKdV}\left(t\right)v_0-S_{CKdV}\left(t\right)v'_0\right)\right\|_{{H^{-1/2}_0}} \\ &\lesssim \varepsilon.
\end{aligned} \end{equation*} \end{thm} We now compare the solutions to the original flow and the truncated flow. The proof of this step is more involved and forms the main analysis of this work. However, we introduce an approach that is somewhat simpler than that of the earlier work \cite{Colliander:2005vv}. In this step, we use the normal form method to convert the flow into one with trilinear nonlinear terms and bilinear boundary terms. See Section \ref{Section: proof of different flow} for details. \begin{thm}[Truncation of the flow]\label{thm:est. of truncated flow} Let $T>0$ and $\varepsilon >0$. There exists a positive integer $N_0(T,\varepsilon,\left\|u_0\right\|_{H^{-1/2}_0},\left\|v_0\right\|_{H^{-1/2}_0})$ such that for all $N> N_0$, \begin{equation}\label{eq:est. of truncated flow} \begin{aligned} &\sup_{\left|t\right|\le T}\left\|P_{\le N^{\frac{1}{2}}}\left(S_{CKdV}\left(t\right)u_0-S_{BKdV}\left(t\right)u_0\right)\right\|_{H_0^{-\frac{1}{2}}} \\ &+ \sup_{\left|t\right|\le T}\left\|P_{\le N^{\frac{1}{2}}}\left(S_{CKdV}\left(t\right)v_0-S_{BKdV}\left(t\right)v_0\right)\right\|_{H_0^{-\frac{1}{2}}} \\ &\lesssim \varepsilon, \end{aligned} \end{equation} where $\left(u_0,v_0\right) \in {H^{{-\frac{1}{2}}}_0} \times {H^{{-\frac{1}{2}}}_0}$ has frequency support in $\left[-N,N\right] \times \left[-N,N\right]$. \end{thm} Note that we consider $S_{BKdV}\left(t\right)$ instead of $S_{FKdV}\left(t\right)$ in Theorem \ref{thm:est. of truncated flow}. However, this is enough to prove the approximation, because $S_{FKdV}\left(t\right)$ can be represented by $S_{BBKdV}\left(t\right)$ thanks to Remark \ref{rmk:relation of flow}, the support of the initial data, and $b\left(k\right)$. Thus, Theorem \ref{thm:est. of truncated flow} is equivalent to the approximation between $S_{CKdV}\left(t\right)$ and $S_{FKdV}\left(t\right)$. Assuming Theorems \ref{thm:est. of different data} and \ref{thm:est. of truncated flow}, we can now reach the approximation lemma.
\begin{lem}[Approximation lemma]\label{lem:Approximation of solution flow} Let $k_1, k_2 \in \mathbb{Z}^*$, $A_1, A_2>0$, $T>0$ and $0< \varepsilon \ll 1$. Then there exists a positive integer $N_0\left(k_1,k_2,A_1,A_2,T,\varepsilon\right) \gg \left|k_1\right|, \left|k_2\right|$ such that \begin{equation*} \left|k_1\right|^{-\frac{1}{2}}\left|\left({S_{CKdV}\left(T\right)u_0}\right)^{\wedge{}} \left(k_1\right)-\left({S_{FKdV}\left(T\right)u_0}\right)^{\wedge{}}\left(k_1\right) \right| \le \varepsilon, \end{equation*} and \begin{equation*} \left|k_2\right|^{-\frac{1}{2}}\left|\left({S_{CKdV}\left(T\right)v_0}\right)^{\wedge{}} \left(k_2\right)-\left({S_{FKdV}\left(T\right)v_0}\right)^{\wedge{}}\left(k_2\right) \right| \le \varepsilon \end{equation*} for $N>N_0\left(k_1,k_2,A_1,A_2,T,\varepsilon\right)$ and all initial data $u_0 \in B^{N}_{A_1}\left(0\right)$ and $v_0 \in B^{N}_{A_2}\left(0\right)$. \end{lem} \begin{proof} For the moment, we assume that Theorems \ref{thm:est. of different data} and \ref{thm:est. of truncated flow} hold. The following equalities follow from the support properties of the operator $B$ and Remark \ref{rmk:relation of flow}: \begin{equation*} \left({S_{FKdV}\left(t\right)u_0}\right)^{\wedge{}}\left(k_1\right) = \left({BS_{FKdV}\left(t\right)u_0}\right)^{\wedge{}}\left(k_1\right) = \left({S_{BBKdV}\left(t\right)Bu_0}\right)^{\wedge{}}\left(k_1\right) \end{equation*} for $\left|k_1\right| \ll N_0$. The constant $\varepsilon$ in Lemma \ref{lem:Approximation of solution flow} differs from the constant $\varepsilon$ in Theorems \ref{thm:est. of different data} and \ref{thm:est. of truncated flow}, so we let $\varepsilon'$ denote the upper bounds in those theorems. We choose $N_0\left(k_1,k_2,A_1,A_2,T,\varepsilon\right)$ sufficiently large such that for all $N>N_0$, the bound `$\lesssim \varepsilon'$' can be replaced by `$\le \frac{1}{2} \varepsilon$' in Theorems \ref{thm:est. of different data} and \ref{thm:est. of truncated flow}.
Thus, we have \begin{equation*} \begin{split} &\left|k_1\right|^{-\frac{1}{2}}\left|\left({S_{CKdV}\left(T\right)u_0}\right)^{\wedge{}} \left(k_1\right)-\left({S_{BBKdV}\left(T\right)Bu_0}\right)^{\wedge{}}\left(k_1\right) \right| \\ &\le \left|k_1\right|^{-\frac{1}{2}}\left|\left({S_{CKdV}\left(T\right)u_0}\right)^{\wedge{}} \left(k_1\right)-\left({S_{CKdV}\left(T\right)Bu_0}\right)^{\wedge{}}\left(k_1\right) \right| \\ &+ \left|k_1\right|^{-\frac{1}{2}}\left|\left({S_{CKdV}\left(T\right)Bu_0}\right)^{\wedge{}} \left(k_1\right)-\left({S_{BBKdV}\left(T\right)Bu_0}\right)^{\wedge{}}\left(k_1\right) \right| \\ &\le \varepsilon, \end{split} \end{equation*} for $N>N_0\left(k_1,k_2,A_1,A_2,T,\varepsilon\right)$ and $\left|k_1\right| \le N^{1/2}$. The first inequality is the triangle inequality. For the second inequality, we apply Theorem \ref{thm:est. of different data} to the first term and Theorem \ref{thm:est. of truncated flow} to the second term. Similarly, we can obtain the estimate with respect to $v_0$. \end{proof} \noindent Assuming Lemma \ref{lem:Approximation of solution flow}, we provide a proof of Theorem \ref{thm:nonsqueezing of CKdV(main thm)}. \begin{proof}[Proof of Theorem \ref{thm:nonsqueezing of CKdV(main thm)}] $ $\\ Let $r_1$, $R_1$, $u_*$, $k_1$, $z$ and $T$ be as in Theorem \ref{thm:nonsqueezing of CKdV(main thm)}. Choose $0 < \varepsilon < \frac{R_1-r_1}{2}$ and $A_1>0$ such that $B^{\infty}_{R_1}\left(u_*\right) \subset B^{\infty}_{A_1}\left(0\right)$. We also choose $N > N_0\left(k_1, A_1, T, \varepsilon\right)$ so large that \begin{equation*} \left\|u_*- P_{\le N} u_*\right\|_{H^{-1/2}_0} \le \varepsilon.
\end{equation*} By Lemma \ref{lem:Nonsqueezing property of finite solution flow}, we can find initial data $u_0 \in P_{\le N} H^{-\frac{1}{2}}_0\left(\mathbb{T}\right)$ satisfying \begin{equation*} \left\|u_0 - P_{\le N}u_*\right\|_{H^{-1/2}_0} \le R_1-\varepsilon \end{equation*} and such that, at time $T$, \begin{equation*} \left|k_1\right|^{-\frac{1}{2}} \left|\left(S_{FKdV}\left(T\right)u_0\right)^{\wedge{}}\left(k_1\right)-z\right| > r_1+\varepsilon. \end{equation*} From the triangle inequality, \begin{equation*} \left\|u_0 - u_*\right\|_{H^{-1/2}_0} \le \left\|u_0 - P_{\le N }u_*\right\|_{H^{-1/2}_0} + \left\|P_{\le N }u_*- u_*\right\|_{H^{-1/2}_0} \le R_1 - \varepsilon + \varepsilon = R_1. \end{equation*} We thus have the claim by the triangle inequality and Lemma \ref{lem:Approximation of solution flow}, \begin{equation*} \begin{split} &\left|k_1\right|^{-\frac{1}{2}} \left|z-\left(S_{CKdV}\left(T\right)u_0\right)^{\wedge{}}\left(k_1\right)\right| \\ &\ge \left|k_1\right|^{-\frac{1}{2}} \left[ \left|z-\left(S_{FKdV}\left(T\right)u_0\right)^{\wedge{}}\left(k_1\right)\right| - \left|\left(S_{FKdV}\left(T\right)u_0\right)^{\wedge{}}\left(k_1\right)-\left(S_{CKdV}\left(T\right)u_0\right)^{\wedge{}}\left(k_1\right)\right|\right] \\ & > r_1+\varepsilon -\varepsilon = r_1. \end{split} \end{equation*} Similarly, we also get the result for $S_{CKdV}\left(t\right)v_0$. \end{proof} \noindent Hence, it remains to prove Theorem~\ref{thm:est. of different data} and Theorem~\ref{thm:est. of truncated flow}. In Section~\ref{Sec3}, we introduce function spaces and prove bilinear and trilinear estimates. The analysis in this part is similar to \cite{Colliander:2004gc, Erdogan:2012gs}. In Section~\ref{Section: proof of different flow}, we use the normal form method via differentiation by parts to convert the system into a cubic system with bilinear boundary terms, and then we apply multilinear estimates to prove Theorem~\ref{thm:est. of different data} and Theorem~\ref{thm:est.
of truncated flow}. \section{Bi- and Trilinear estimates}\label{Sec3} In this section, we state and prove bilinear and trilinear estimates that are used in the proofs of Theorems~\ref{thm:est. of different data} and \ref{thm:est. of truncated flow}. First of all, we define function spaces in which to obtain the multilinear estimates. These function spaces are Fourier restriction spaces, known as Bourgain spaces or $X^{s,b}$-spaces. We slightly modify them to define the spaces $Y^s$ and $Z^s$ for the solutions and the nonlinear terms, respectively. For fixed $s,b \in \mathbb{R}$, and a mean-zero function $u\left(x,t\right)$ on $\mathbb{T} \times \mathbb{R}$, recall \begin{equation*} \left\|u\right\|_{X^{s,b}}:=\left\|\left<k\right>^s \left<\tau-k^3\right>^b \mathcal{F} \left(u\right) \left(k, \tau \right)\right\|_{L^2_k L^2_{\tau}}, \end{equation*} where $\mathcal{F}$ is the space-time Fourier transform, \begin{equation*} \mathcal{F}\left(u\right)\left(k,\tau\right) = \tilde u\left(k,\tau\right)= \int_{\mathbb{T} \times \mathbb{R}} e^{- i \left(xk+t \tau\right)} u\left(x,t\right) dx dt. \end{equation*} However, the $X^{s,1/2}$-space barely fails to control the $L^{\infty}_t H^s_x$ norm on $ \mathbb{T} \times \mathbb{R}$. Hence, we use slightly smaller spaces obtained by adding an additional norm, \begin{equation*} \left\|u\right\|_{Y^s}:=\left\|u\right\|_{X^{s,1/2}}+\left\|\left<k\right>^s \mathcal{F}\left(u\right)\right\|_{L^2_k L^1_{\tau}}, \end{equation*} and the space for the nonlinear terms is \begin{equation*} \left\|u\right\|_{Z^s}:=\left\|u\right\|_{X^{s,-1/2}}+\left\|\frac{\left<k\right>^s}{\left<\tau-k^3\right>} \mathcal{F}\left(u\right)\right\|_{L^2_k L^1_{\tau}}. \end{equation*} Then we have the following embeddings: \begin{equation}\label{eq:space embedding} \begin{aligned} Y^s \subseteq C_tH^s_x \subseteq L^{\infty}_t H^s_x, \\ L^{\infty}_t H^s_x \subseteq L^{2}_t H^s_x \subseteq Z^s \end{aligned} \end{equation} on a compact time interval $[0,T]$.
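The first embedding in \eqref{eq:space embedding} is where the additional $L^2_k L^1_{\tau}$ norm enters; a sketch of the standard argument: since $\hat u\left(k,t\right) = \frac{1}{2\pi}\int_{\mathbb{R}} e^{it\tau} \tilde u\left(k,\tau\right) d\tau$, we have $\left|\hat u\left(k,t\right)\right| \lesssim \left\|\tilde u\left(k,\cdot\right)\right\|_{L^1_{\tau}}$ uniformly in $t$, and hence, by Minkowski's inequality, \begin{equation*} \sup_{t}\left\|u\left(t\right)\right\|_{H^s_x} \lesssim \left\|\left<k\right>^s \mathcal{F}\left(u\right)\right\|_{L^2_k L^1_{\tau}} \le \left\|u\right\|_{Y^s}. \end{equation*} Continuity in $t$ then follows from the dominated convergence theorem.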
\\ \\ We introduce bilinear and trilinear terms that will appear in the normal form analysis. \begin{equation*}\label{eq: bilinear formula} F_2\left(u,v\right) := \mathcal{F}^{-1}\left(\int \sum_{\substack{ k_i \in \mathbb{Z}^*\\k_0+k_1+k_2=0 }} \frac{\tilde u_{k_1}}{k_1} \frac{\tilde v_{k_2}}{k_2} d\Gamma\right), \end{equation*} where $\int \cdots \,d\Gamma$ denotes integration over the hyperplane \begin{equation*} \left\{\left(\tau_0,\tau_1,\tau_2\right) \in \mathbb{R}^3:\tau_0+\tau_1+\tau_2=0\right\}. \end{equation*} In the analysis, we have two types of trilinear terms, namely, resonant and nonresonant terms: \begin{equation}\label{eq:resonant trilinear} F_{r}\left(u,v,w\right) := \mathcal{F}^{-1}\left(\int \tilde u_{-k} \sum_{\substack{k_i \in \mathbb{Z}^* \\k_2+k_3=0} }\tilde v_{k_2} \frac{\tilde w_{k_3}}{k_3} d\Gamma\right), \end{equation} and \begin{equation*} F_{nr}\left(u,v,w\right) := \mathcal{F}^{-1}\left(\int\sum_{\substack{k_i \in \mathbb{Z}^* \\ k_0+k_1+k_2+k_3=0 \\ (k_1+k_2)(k_2+k_3)(k_3+k_1) \not =0}} \tilde u_{k_1} \tilde v_{k_2} \frac{\tilde w_{k_3}}{k_3} d\Gamma\right), \end{equation*} where the integral $ \int \cdots \, d\Gamma $ is taken over the set \begin{equation*} \left\{\left(\tau_0,\tau_1,\tau_2,\tau_3\right) \in \mathbb{R}^4:\tau_0+\tau_1+\tau_2+\tau_3=0\right\}. \end{equation*} For dyadic numbers $ N_i $, we assume $ N_i \sim \left|k_i\right|$. We denote by $n_1, n_2, n_3, n_4 \in \mathbb Z $ the frequencies arranged in decreasing order of magnitude, i.e., \begin{equation*} {|n_{1}|} \ge {|n_{2}|} \ge {|n_{3}|} \ge {|n_{4}|} \quad \text{and} \quad \{n_1,n_2,n_3,n_4\} =\{k_0,k_1,k_2,k_3\}. \end{equation*} Similarly, in the case of three frequencies, let $n_1$, $n_2$ and $n_3$ be the maximum, median and minimum of $k_0$, $k_1$ and $k_2$, respectively; namely, \begin{equation*} {n_{1}} \ge {n_{2}} \ge {n_{3}} ~\text{and}~ \{n_1,n_2,n_3\} =\{k_0,k_1,k_2\}. \end{equation*} \subsection{Bilinear estimate} \begin{lem}\label{lem:bi est.} Let $u,v \in Y^{-1/2}$.
Then \begin{equation}\label{eq: bi est.} \left\|\partial_x^{-1}u\partial_x^{-1}v\right\|_{Y^{-1/2}} \lesssim \left\|u\right\|_{Y^{-1/2}} \left\|v\right\|_{Y^{-1/2}}. \end{equation} \end{lem} \begin{proof} The $X^{-1/2,1/2}$ part of \eqref{eq: bi est.} is a variant of \begin{equation}\label{eq:X^s,1/2 est.} \left\|uv\right\|_{X^{-1/2,1/2}} \lesssim \left\|u\right\|_{Y^{1/2}} \left\|v\right\|_{Y^{1/2}}, \end{equation} which was proved in Section 4 of \cite{Colliander:2004gc}, so that part is done. To prove the $L^2_kL^1_{\tau}$ part, it is enough to show the estimate \begin{equation*} \left\|uv\right\|_{L^2_x L^1_{\tau}} \lesssim \left\|u\right\|_{H^{1/2}_x L^1_{\tau}} \left\|v\right\|_{H^{1/2}_x L^1_{\tau}}. \end{equation*} This can be obtained from the Young, H\"older, and Sobolev inequalities. \end{proof} \subsection{Trilinear estimate} $ $ \\ In this subsection, we prove a trilinear estimate of the following form. \begin{lem}\label{lem:tri est.} Let $u,v$ and $w \in Y^{-1/2}$. Then \begin{equation}\label{eq:tri est.} \left\|\left(uv - P_0\left(uv\right)\right)\partial_x^{-1}w \right\|_{Z^{-1/2}} \lesssim \left\|u\right\|_{Y^{-1/2}} \left\|v\right\|_{Y^{-1/2}}\left\|w\right\|_{Y^{-1/2}}, \end{equation} where $ P_0$ is the projection onto the zero frequency, i.e., $P_0\left(f\right) := \int_{\mathbb{T}} f dx$. \end{lem} As opposed to the bilinear estimate, the trilinear term contains resonant interactions. We decompose it into a resonant part and a nonresonant part. We first consider the resonant part. \begin{lem}\label{lem:tri resonant est} Let $u,v$ and $w \in Y^{-1/2}$. We have \begin{equation*} \left\|F_{r}\left(u,v,w\right)\right\|_{Z^{-1/2}} \lesssim \left\|u\right\|_{Y^{-1/2}}\left\|v\right\|_{Y^{-1/2}}\left\|w\right\|_{Y^{-1/2}}. \end{equation*} \end{lem} \begin{proof} To prove the lemma, we treat the space variable and the time variable in turn.
We first show an estimate for the spatial domain, \begin{equation*} \left\|F_{r}\left(u,v,w\right)\right\|_{H_x^{-1/2}} \lesssim \left\|u\right\|_{H_x^{-1/2}}\left\|v\right\|_{H_x^{-1/2}}\left\|w\right\|_{H_x^{-1/2}}. \end{equation*} By duality and Plancherel's theorem, it suffices to prove the estimate \begin{equation}\label{eq: space domain} \left|\int z u dx\right| \left| \int v W dx\right| \lesssim \left\|z\right\|_{H^{1/2}_x}\left\|u\right\|_{H^{-1/2}_x}\left\|v\right\|_{H^{-1/2}_x}\left\|W\right\|_{H^{1/2}_x}, \end{equation} where $W = \partial_x^{-1} w$. This follows from the H\"older inequality. \\ For the time variable, it is obvious that \begin{equation*} \left\|\left<\tau-k^3\right>^{-1/2}\mathcal{F}\left[F_r\left(u,v,w\right)\right]\right\|_{L^2_{k,\tau}} \lesssim \left\|F_r\left(u,v,w\right)\right\|_{L^2_{x,t}}, \end{equation*} and \begin{equation*} \left\|\left<\tau-k^3\right>^{-1}\mathcal{F}\left(F_r\left(u,v,w\right)\right)\right\|_{L^2_kL^1_{\tau}} \lesssim \left\|F_r\left(u,v,w\right)\right\|_{L^2_{x,t}}. \end{equation*} After applying the temporal frequency translation $e^{-t \partial_x^3}$, the claim is reduced to \begin{equation*} \left\|F_r\left(u,v,w\right)\right\|_{L^2_{t}H^{-1/2}_x} \lesssim \left\|u\right\|_{H^{1/2}_{t}H^{-1/2}_x}\left\|v\right\|_{H^{1/2}_{t}H^{-1/2}_x}\left\|w\right\|_{H^{1/2}_{t}H^{-1/2}_x}, \end{equation*} and then the claim follows from \eqref{eq: space domain}, the H\"older inequality, and the Sobolev inequality. \end{proof} Next, we consider the nonresonant case. We can prove a slightly stronger estimate for the nonresonant part. \begin{lem}\label{lem:local tri. est.} Let $u,v$ and $w \in Y^{-1/2}$ and let $N_0$, $N_1$, $N_2$ and $N_3$ be dyadic numbers.
Then \begin{equation}\label{eq:freq supp nonresonant est.} \left\|P_{N_0}F_{nr}\left(P_{N_1}u,P_{N_2}v,P_{N_3}w\right)\right\|_{Z^{-1/2}} \lesssim \left(\frac{N_i}{{n_{1}}}\right)^{\sigma} {n_{3}}^{-\sigma} \left\|u\right\|_{Y^{-1/2}}\left\|v\right\|_{Y^{-1/2}}\left\|w\right\|_{Y^{-1/2}}, \end{equation} for small enough $\sigma>0$ and $i=1 ~\text{or} ~2$. \end{lem} A part of the proof of the estimate \eqref{eq:freq supp nonresonant est.} relies on \begin{equation}\label{eq:L2 multiest.} \left\|uvw\right\|_{L^2_{x,t}} \lesssim \left\|u\right\|_{X^{0,1/2-\delta}}\left\|v\right\|_{X^{0,1/2-\delta}}\left\|w\right\|_{X^{1/2-\delta,1/2-\delta}}, \end{equation} for some small $0<\delta\ll 1$. For the proof, see Section 7 in \cite{Colliander:2004gc}. \begin{proof} By symmetry, we assume that $i=1$. We first prove the $X^{-1/2,-1/2}$ part, i.e., \begin{align*} \left\|P_{N_0}F_{nr}\left(P_{N_1}u,P_{N_2}v,P_{N_3}w\right)\right\|_{X^{-1/2,-1/2}} \lesssim \left(\frac{N_1}{{n_{1}}}\right)^{\sigma} {n_{3}}^{-\sigma} \left\|u\right\|_{X^{-1/2,1/2}}\left\|v\right\|_{X^{-1/2,1/2}}\left\|w\right\|_{X^{-1/2,1/2}}. \end{align*} Without loss of generality, we may assume that all $\tilde u_i$ are nonnegative. By duality, it is equivalent to \begin{equation}\label{eq:claim of Xsb} \begin{aligned} &\left|\iint u_0F_{nr}\left(u_1u_2u_3\right) dxdt\right| \\ &\lesssim \left(\frac{N_1}{{n_{1}}}\right)^{\sigma} {n_{3}}^{-\sigma} \left\|u_0\right\|_{X^{1/2,1/2}}\left\|u_1\right\|_{X^{-1/2,1/2}}\left\|u_2\right\|_{X^{-1/2,1/2}}\left\|U_3\right\|_{X^{1/2,1/2}}, \end{aligned} \end{equation} where $U_3 = \partial_x^{-1}u_3$ and each $u_i$ has Fourier support in the region $\left|k_i\right| \sim N_i$.
The right hand side of (\ref{eq:claim of Xsb}) is comparable to \begin{equation}\label{eq:RHS of claim of Xsb} \left(\frac{N_1}{{n_{1}}}\right)^{\sigma}{n_{3}}^{-\sigma} \frac{\left|N_0N_3\right|^{1/2}}{\left|N_1N_2\right|^{1/2}} \left\|u_0\right\|_{X^{0,1/2}}\left\|u_1\right\|_{X^{0,1/2}}\left\|u_2\right\|_{X^{0,1/2}}\left\|U_3\right\|_{X^{0,1/2}}. \end{equation} \begin{lem}\label{lem:est. of coeff} In the same notation, we have \begin{equation}\label{eq:est. of coefficient} \left(\frac{N_1}{{n_{1}}}\right)^{\sigma} \frac{\left|N_0N_3\right|^{1/2}}{\left|N_1N_2\right|^{1/2}} \ge \frac{{n_{3}}^{1/2}{n_{4}}^{1/2}}{{n_{1}}}. \end{equation} \end{lem} \begin{proof} If $N_1 \sim {n_{1}}$, then we easily obtain (\ref{eq:est. of coefficient}). Hence we may assume that $N_1 \ll {n_{1}}$. We rewrite (\ref{eq:est. of coefficient}) as \begin{equation*} \frac{{n_{3}}^{1/2}{n_{4}}^{1/2}}{{n_{1}}} \left(\frac{{n_{1}}}{N_1}\right)^{\sigma} \frac{\left|N_1N_2\right|^{1/2}}{\left|N_0N_3\right|^{1/2}} \le 1. \end{equation*} In other words, \begin{equation*} \frac{\left|N_1\right|^{1/2-\sigma}}{{n_{1}}^{1/2-\sigma}} \frac{\left|N_2\right|^{1/2}}{{n_{1}}^{1/2}} \frac{{n_{3}}^{1/2}{n_{4}}^{1/2}}{\left|N_0N_3\right|^{1/2}} \le 1. \end{equation*} Each factor on the left hand side is at most $1$, and so we are done. \end{proof} From Lemma \ref{lem:est. of coeff}, (\ref{eq:RHS of claim of Xsb}) is bounded below by \begin{equation}\label{eq:below bound} \frac{{n_{3}}^{1/2-\sigma}{n_{4}}^{1/2}}{{n_{1}}} \left\|u_0\right\|_{X^{0,1/2}}\left\|u_1\right\|_{X^{0,1/2}}\left\|u_2\right\|_{X^{0,1/2}}\left\|U_3\right\|_{X^{0,1/2}}.
\end{equation} By the resonance identity, \begin{equation*} \sum_{i=0,1,2,3} \left(\tau_i-k^3_i\right) = - \sum_{i=0,1,2,3} k_i^3 = 3\left(k_1+k_2\right)\left(k_2+k_3\right)\left(k_3+k_1\right), \end{equation*} and thus, \begin{equation*}\label{eq:freq identity} \begin{split} \sup_{i=0,1,2,3} L_i &\gtrsim \left|k_1+k_2\right|\left|k_2+k_3\right|\left|k_3+k_1\right|, \end{split} \end{equation*} for $L_i = \left<\tau_i-k_i^3\right>$. Due to the symmetry of the functions in (\ref{eq:below bound}), we only consider the case $\displaystyle L_0 = \sup_{i=0,1,2,3} L_i$, and by Lemma 4.4 of \cite{Colliander:2005vv}, we have \begin{equation*} L_0 \gtrsim {n_{1}}^2{n_{4}}^{-1}. \end{equation*} Therefore, to prove \eqref{eq:claim of Xsb}, it suffices to show that \begin{equation*} \left|\sum_{|\bf k| \in \mathcal{NR} } \int {n_{3}}^{-1/2+\sigma}L_0^{1/2} \tilde u_0 \tilde u_1 \tilde u_2 \tilde U_3 d\Gamma\right| \lesssim \left\|u_0\right\|_{X^{0,1/2}}\left\|u_1\right\|_{X^{0,1/2}}\left\|u_2\right\|_{X^{0,1/2}}\left\|U_3\right\|_{X^{0,1/2}}. \end{equation*} Here, $\int \cdots \, d\Gamma $ is, as before, the integral taken over the set $\{ \tau_0 + \tau_1 + \tau_2 +\tau_3 =0\}$, and we denote $|{\bf k}| =(k_0,k_1,k_2,k_3) $ and $ \mathcal{NR}= \{(k_0,k_1,k_2,k_3) : k_0+k_1+k_2+k_3=0, \,(k_1+k_2)(k_2+k_3)(k_3+k_1) \not =0 \} $. \\ At least one of $k_1$, $k_2$ and $k_3$ is $\mathcal{O}\left(n_3\right)$. By symmetry, let us suppose that it is $k_3$. Then it is enough to show that \begin{equation}\label{X part} \left|\iint u_0 u_1u_2U_3 dxdt\right| \lesssim \left\|u_0\right\|_{L^2_{x,t}}\left\|u_1\right\|_{X^{0,1/2}}\left\|u_2\right\|_{X^{0,1/2}}\left\|U_3\right\|_{X^{1/2-\sigma,1/2}}. \end{equation} From the Cauchy-Schwarz inequality and (\ref{eq:L2 multiest.}), we can get \eqref{X part} for sufficiently small $\sigma$.\\ \\ Next, we prove the $L_k^2 L_{\tau}^1$ part.
From the H\"older inequality, $X^{-1/2,-1/2}$ part and the interpolation, it is enough to prove the estimate \begin{equation*} \left\|\sum_{|\bf k| \in \mathcal{NR} } \int \frac{1}{\left|k_0\right|^{1/2}L_0^{1-\delta}}\tilde u_1\tilde u_2\tilde U_3d\Gamma\right\|_{L^2_{k_0}L^1_{\tau_0}} \lesssim \left\|u_1\right\|_{X^{-1/2,1/2}}\left\|u_2\right\|_{X^{-1/2,1/2}}\left\|U_3\right\|_{X^{1/2,1/2}}, \end{equation*} where $U_3 = \partial_x^{-1} u_3$ and some positive constant $\delta \ll 1$. In other words, we will show that \begin{equation}\label{eq:claim of L2L1} \left\|\sum_{|\bf k| \in \mathcal{NR} } \int \frac{\left|k_1k_2\right|^{1/2}}{\left|k_0k_3\right|^{1/2}L_0^{1-\delta}L^{1/2}_1L^{1/2}_2L^{1/2}_3}\tilde u_1\tilde u_2\tilde U_3d\Gamma\right\|_{L^2_{k_0}L^1_{\tau_0}} \lesssim \left\|u_1\right\|_{L^2_{x,t}}\left\|u_2\right\|_{L^2_{x,t}}\left\|U_3\right\|_{L^2_{x,t}}. \end{equation} From the H\"older inequality, the left hand side of (\ref{eq:claim of L2L1}) is bounded by \begin{equation}\label{eq:i case after holder} \left\|\sum_{|\bf k| \in \mathcal{NR} } \int \frac{\left|k_1k_2\right|^{1/2}}{\left|k_0k_3\right|^{1/2}L_0^{1/2-2\delta}L^{1/2}_1L^{1/2}_2L^{1/2}_3}\tilde u_1\tilde u_2\tilde U_3d\Gamma\right\|_{L^2_{k_0}L^2_{\tau_0}}. \end{equation} Similarly to $X^{-1/2,-1/2}$ case, we have $\displaystyle \sup_{i=0,1,2,3}L_i (=:L_s) \gtrsim {n_{1}}^2{n_{4}}^{-1}$, and so \begin{equation}\label{eq:est of freq} \frac{\left|k_1k_2\right|^{1/2}}{\left|k_0k_3\right|^{1/2}L^{1/2}_s} \lesssim \frac{{n_{1}}}{{k_{0}}^{1/2}{k_{3}}^{1/2}}\cdot \frac{{n_{4}}^{1/2}}{{n_{1}}} \le \frac{1}{{k_{3}}^{1/2}}. \end{equation} We first consider $\displaystyle \sup_{i=1,2,3}L_i= L_1$. 
By combining (\ref{eq:i case after holder}) and (\ref{eq:est of freq}), it suffices to show that \begin{equation}\label{eq:i case claim} \left\|\sum_{|\bf k| \in \mathcal{NR} } \int \frac{1}{{k_{3}}^{1/2}L_0^{1/2-2\delta}L^{1/2}_2L^{1/2}_3}\tilde u_1\tilde u_2\tilde U_3d\Gamma\right\|_{L^2_{k_0}L^2_{\tau_0}} \lesssim \left\|u_1\right\|_{L^2_{x,t}}\left\|u_2\right\|_{L^2_{x,t}}\left\|U_3\right\|_{L^2_{x,t}}. \end{equation} Then by duality, it suffices to prove \begin{equation}\label{the desire} \left|\iint u_0u_1u_2 U_3dxdt\right| \lesssim \left\|u_0\right\|_{X^{0,1/2-2\delta}}\left\|u_1\right\|_{L^2_{x,t}}\left\|u_2\right\|_{X^{0,1/2}}\left\|U_3\right\|_{X^{1/2,1/2}}. \end{equation} We can obtain \eqref{the desire} by the Cauchy-Schwarz inequality and (\ref{eq:L2 multiest.}) for small enough $\delta > 0$. By symmetry, the cases $\displaystyle \sup_{i=0,1,2,3}L_i = L_2$ and $L_3$ are proved in the same way.\\ \\ Finally, we assume that $\displaystyle \sup_{i=0,1,2,3}L_i = L_0 \gtrsim {n_{1}}^2{n_{4}}^{-1} $. From \eqref{eq:est of freq}, the left hand side of (\ref{eq:claim of L2L1}) is bounded by \begin{equation}\label{eq:ii claim} \left\|\sum_{|\bf k| \in \mathcal{NR} } \int \frac{1}{{n_{3}}^{1/2}L_0^{1/2-\delta}L^{1/2}_1L^{1/2}_2L^{1/2}_3}\tilde u_1 \tilde u_2 \tilde U_3d\Gamma\right\|_{L^2_{k_0}L^1_{\tau_0}}. \end{equation} From the assumption $L_0 \gtrsim L_1$, the Fubini theorem and the Cauchy-Schwarz inequality, \begin{equation}\label{eq:step of ii claim} \begin{aligned} \eqref{eq:ii claim} &\le \left\|\sum_{|\bf k| \in \mathcal{NR} } \int \frac{1}{{n_{3}}^{1/2} L_1^{1-\delta} L_2^{1/2}L_3^{1/2}}\tilde u_1 \tilde u_2 \tilde U_3d\Gamma\right\|_{L^2_{k_0}L^1_{\tau_1}} \\ &\lesssim \left\|\sum_{|\bf k| \in \mathcal{NR} } \int \frac{1}{{n_{3}}^{1/2} L_1^{1/2-2\delta} L_2^{1/2}L_3^{1/2}}\tilde u_1 \tilde u_2 \tilde U_3d\Gamma\right\|_{L^2_{k_0}L^2_{\tau_1}} .
\end{aligned} \end{equation} The last term of \eqref{eq:step of ii claim} is similar to the left hand side of (\ref{eq:i case claim}), so we finish the proof. \end{proof} \begin{proof}[Proof of Lemma~\ref{lem:tri est.}] It follows by combining Lemmas \ref{lem:tri resonant est} and \ref{lem:local tri. est.} and summing over the dyadic frequency supports. More precisely, we observe \begin{equation*} \left(uv- P_0\left(uv\right)\right)\partial_x^{-1}w = F_{r}\left(u,v,w\right) + F_{r}\left(v,u,w\right) + F_{nr}\left(u,v,w\right), \end{equation*} and use the fact that the left hand side of (\ref{eq:freq supp nonresonant est.}) vanishes unless $n_1 \sim n_2$ when we sum over dyadic numbers. \end{proof} \begin{rmk}\label{rmk:rescaling est.} In the proof of the main theorems, we will use a rescaling argument. The bilinear and trilinear estimates obtained above can easily be restated with a rescaling parameter. We record the relevant facts here for the reader's convenience. See \cite{Bourgain:1993cl}, \cite{Colliander:2003td}, \cite{Colliander:2004gc} and \cite{university2007well} for more details. We let $\alpha \mathbb{T} = [0, 2 \pi \alpha)$ be the spatial domain. Then the implicit constants in \eqref{eq:X^s,1/2 est.} and \eqref{eq:L2 multiest.} depend on $\alpha$. More precisely, for a $2\pi \alpha $-periodic function $ u$, we define \begin{equation}\label{eq:den of rescaled H^s norm } \left\|u\right\|_{H^s(\alpha \mathbb{T})} := \frac{1}{\left(2\pi \alpha\right)^{1/2}} \left(\sum_{k \in \mathbb{Z}/{\alpha}} \left<k\right>^{2s}\left|\hat u\left(k\right)\right|^2\right)^{1/2}, \quad \hat u(k) = \int_0^{2\pi \alpha} e^{-ikx} u(x) \,dx \end{equation} and \begin{equation*} \left\|u\right\|_{X^{s,b}\left(\alpha \mathbb{T}\right)}:=\left\|\left<k\right>^s \left<\tau-k^3\right>^b \mathcal{F} \left(u\right) \left(k, \tau \right)\right\|_{L^2_k\left(\mathbb{Z}/\alpha\right) L^2_{\tau}}.
\end{equation*} In addition, we can define the $Y^s\left(\alpha \mathbb{T}\right)$ and $Z^s\left(\alpha \mathbb{T}\right)$ norms in the same way. From \cite{Colliander:2004gc}, we have \begin{equation}\label{eq: rescaled X^s,b est.} \left\|uv\right\|_{X^{-1/2,1/2}\left(\alpha \mathbb{T}\right)} \lesssim \alpha^{0+} \left\|u\right\|_{Y^{1/2}\left(\alpha \mathbb{T}\right)} \left\|v\right\|_{Y^{1/2}\left(\alpha \mathbb{T}\right)}. \end{equation} Moreover, the following estimates are well known: \begin{equation}\label{eq:stric est. of recaled} \left\|u\right\|_{L^4_{x,t}\left(\alpha \mathbb{T}\right)} \lesssim C\left(\alpha\right)\left\|u\right\|_{X^{0,1/3}\left(\alpha \mathbb{T}\right)} \end{equation} and \begin{equation}\label{eq:another stric est. of rescaled} \left\|u\right\|_{L^{\infty}_{x,t}\left(\alpha \mathbb{T}\right)} \lesssim C\left(\alpha\right)\left\|u\right\|_{X^{\frac{1}{2}+,\frac{1}{2}+}\left(\alpha \mathbb{T}\right)}, \end{equation} where the implicit constants $C\left(\alpha\right)$ are decreasing functions of $\alpha$. In particular, we have $C\left(\alpha\right) \le C\left(1\right)$ for $\alpha \ge 1$. From \eqref{eq:stric est. of recaled} and \eqref{eq:another stric est. of rescaled}, \begin{equation}\label{eq:tri est. of direct method} \left\|uvw\right\|_{L^2_{x,t}\left(\alpha \mathbb{T}\right)} \lesssim C'\left(\alpha\right) \left\|u\right\|_{X^{0,1/3}\left(\alpha \mathbb{T}\right)}\left\|v\right\|_{X^{0,1/3}\left(\alpha \mathbb{T}\right)}\left\|w\right\|_{X^{\frac{1}{2}+,\frac{1}{2}+}\left(\alpha \mathbb{T}\right)}, \end{equation} where the implicit constant $C'\left(\alpha\right)$ is also a decreasing function of $\alpha$. Moreover, by rescaling \eqref{eq:L2 multiest.}, we can obtain \begin{equation}\label{eq:tri est.
of rescaled} \left\|uvw\right\|_{L^2_{x,t}\left(\alpha \mathbb{T}\right)} \lesssim \alpha^M \left\|u\right\|_{X^{0,1/2-\delta}\left(\alpha \mathbb{T}\right)}\left\|v\right\|_{X^{0,1/2-\delta}\left(\alpha \mathbb{T}\right)}\left\|w\right\|_{X^{1/2-\delta,1/2-\delta}\left(\alpha \mathbb{T}\right)}, \end{equation} for some positive constant $M$. Interpolating \eqref{eq:tri est. of direct method} and \eqref{eq:tri est. of rescaled}, we obtain the $\alpha$-rescaled estimate \begin{equation}\label{eq:rescaled L2 multi est} \left\|uvw\right\|_{L^2_{x,t}\left(\alpha \mathbb{T}\right)} \lesssim \alpha^{0+} \left\|u\right\|_{X^{0,1/2-\sigma}\left(\alpha \mathbb{T}\right)}\left\|v\right\|_{X^{0,1/2-\sigma}\left(\alpha \mathbb{T}\right)}\left\|w\right\|_{X^{1/2-\sigma,1/2-\sigma}\left(\alpha \mathbb{T}\right)}, \end{equation} for some small $0<\sigma \ll1$. Once we have \eqref{eq:den of rescaled H^s norm }-\eqref{eq:stric est. of recaled} and \eqref{eq:rescaled L2 multi est}, it is straightforward to replace \eqref{eq:X^s,1/2 est.} and \eqref{eq:L2 multiest.} in the proofs, and so to conclude \eqref{eq: bi est.} and \eqref{eq:tri est.} with the scaling parameter $\alpha$. \end{rmk} \section{Differentiation by parts and Proofs of Theorems \ref{thm:est. of different data} and \ref{thm:est. of truncated flow}}\label{Section: proof of different flow} In this section, we use the normal form method to prove Theorems \ref{thm:est. of different data} and \ref{thm:est. of truncated flow}. The normal form reduction is carried out via differentiation by parts. Writing the system \eqref{eq:Coupled KdV system} in the interaction representation, we apply differentiation by parts to convert the quadratic nonlinear terms into bilinear boundary terms and trilinear terms. This procedure replaces the use of the Miura transform in the proof of \cite{Colliander:2005vv}.
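At its core, the differentiation by parts is an integration by parts in time against a nonresonant oscillation. Schematically, for a fixed phase $\phi \neq 0$ and a differentiable function $f$, we use the elementary identity \begin{equation*} e^{-i\phi t} f\left(t\right) = \partial_t\left(\frac{e^{-i\phi t}}{-i\phi}\, f\left(t\right)\right) - \frac{e^{-i\phi t}}{-i\phi}\, f'\left(t\right), \end{equation*} applied termwise to the products of Fourier coefficients below: the total derivative produces the boundary (bilinear) terms, while the second term, after the equation is substituted for $f'$, produces the trilinear terms.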
\subsection{Differentiation by parts} To simplify the notation, we denote \begin{align*} u\left(t,x\right)=S_{CKdV}\left(t\right) u_0, \quad v\left(t,x\right)=S_{CKdV}\left(t\right) v_0, \\ u^b\left(t,x\right)=S_{BKdV}\left(t\right) u_0, \quad v^b\left(t,x\right)=S_{BKdV}\left(t\right) v_0. \end{align*} Moreover, denote ${\bf u} = e^{t \partial_x^3}u$, ${\bf v} = e^{t \partial_x^3}v$, ${\bf u}^b = e^{t \partial_x^3}u^b$ and ${\bf v}^b = e^{t \partial_x^3}v^b$. From (\ref{eq:Coupled KdV system}), \begin{align*} \partial_t{\bf u} = e^{t \partial_x^3}\left(\partial_x^3 u+\partial_t u\right) = - \frac{e^{t \partial_x^3}}{2}\partial_x\left(vv\right) = -\frac{e^{t\partial_x^3}}{2} \partial_x\left(e^{-t\partial_x^3 }{\bf v} \cdot e^{-t\partial_x^3 }{\bf v}\right), \\ \partial_t{\bf v} = e^{t \partial_x^3}\left(\partial_x^3 v+\partial_t v\right) = - e^{t \partial_x^3}\partial_x\left(uv\right) = -e^{t\partial_x^3} \partial_x\left(e^{-t\partial_x^3 }{\bf u} \cdot e^{-t\partial_x^3 }{\bf v}\right). \end{align*} We now consider the system in Fourier variables, writing ${\bf u}_k$, ${\bf v}_k$ for the Fourier coefficients, $ k \in \mathbb Z^*$:\footnote{$k_i \not =0$ in the sequel is due to the mean-zero assumption.} \begin{equation*}\label{eq: interaction repre} \begin{aligned} \partial_t {\bf u}_k = -\frac{i}{2} e^{-ik^3t} \sum_{k_1+k_2=k} k e^{ik_1^3 t} {\bf v}_{k_1} e^{ik_2^3 t} {\bf v}_{k_2} = -\frac{i}{2}\sum_{k_1+k_2=k} k e^{-i \phi\left({\rm k}\right) t} {\bf v}_{k_1} {\bf v}_{k_2}, \\ \partial_t{\bf v}_k = -i e^{-ik^3t} \sum_{k_1+k_2=k} k e^{ik_1^3 t} {\bf u}_{k_1} e^{ik_2^3 t} {\bf v}_{k_2} = -i\sum_{k_1+k_2=k} k e^{-i \phi\left({\rm k}\right) t} {\bf u}_{k_1} {\bf v}_{k_2}, \end{aligned} \end{equation*} where $\phi\left({\rm k}\right)= \phi\left(k_1,k_2\right)=3k_1k_2\left(k_1+k_2\right)$.
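Note that $\phi$ is precisely the resonance function of the quadratic interaction: since $k = k_1+k_2$, \begin{equation*} k^3 - k_1^3 - k_2^3 = \left(k_1+k_2\right)^3 - k_1^3 - k_2^3 = 3k_1k_2\left(k_1+k_2\right) = \phi\left(k_1,k_2\right), \end{equation*} so that $e^{-ik^3t}e^{ik_1^3t}e^{ik_2^3t} = e^{-i\phi\left({\rm k}\right)t}$. In particular, by the mean-zero assumption $k_1, k_2 \neq 0$ and $k = k_1+k_2 \neq 0$, we have $\phi\left({\rm k}\right) = 3k_1k_2k \neq 0$, which permits the division by $\phi$ in the next step.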
Differentiating by parts, we write \begin{equation}\label{eq:diff by parts original flow} \begin{aligned} \partial_t{\bf u}_k = &-\frac{i}{2} \left[\partial_t\left\{\sum_{k_1+k_2=k}k \frac{e^{-i \phi\left({\rm k}\right)t}}{-i \phi\left({\rm k}\right)} {\bf v}_{k_1} {\bf v}_{k_2}\right\} \right. \\ &\left. +2 \sum_{k_1+k_3=k}k \frac{e^{-i \phi\left(k_1,k_3\right)t}}{-i \phi\left(k_1,k_3\right)} {\bf v}_{k_3}\sum_{k_{11}+k_{12}=k_1}ik_1 e^{-i \phi\left(k_{11},k_{12}\right)t} {\bf u}_{k_{11}}{\bf v}_{k_{12}}\right] \\ =& \frac{1}{6}\left[\partial_t\left\{\sum_{k_1+k_2=k} \frac{e^{-i\phi\left({\rm k}\right)t}}{k_1k_2}{\bf v}_{k_1}{\bf v}_{k_2}\right\} +2i \sum_{\substack{k_1+k_2+k_3=k \\ k_1+k_2 \not = 0}} \frac{e^{-i\Phi\left({\rm k}\right)t}}{k_3}{\bf u}_{k_1}{\bf v}_{k_2}{\bf v}_{k_3}\right] \\ \partial_t{\bf v}_k = &- i\left[\partial_t\left\{\sum_{k_1+k_2=k}k \frac{e^{-i \phi\left({\rm k}\right)t}}{-i \phi\left({\rm k}\right)} {\bf u}_{k_1} {\bf v}_{k_2}\right\} \right. \\ &\left. + \sum_{k_1+k_3=k}k \frac{e^{-i \phi\left(k_1,k_3\right)t}}{-i \phi\left(k_1,k_3\right)} {\bf u}_{k_1}\sum_{k_{21}+k_{22}=k_3}ik_3 e^{-i \phi\left(k_{21},k_{22}\right)t} {\bf u}_{k_{21}}{\bf v}_{k_{22}}\right. \\ &\left. + \frac{1}{2}\sum_{k_1+k_3=k}k \frac{e^{-i \phi\left(k_1,k_3\right)t}}{-i \phi\left(k_1,k_3\right)} {\bf v}_{k_3}\sum_{k_{11}+k_{12}=k_1}ik_1 e^{-i \phi\left(k_{11},k_{12}\right)t} {\bf v}_{k_{11}}{\bf v}_{k_{12}}\right] \\ =& \frac{1}{3}\left[\partial_t\left\{\sum_{k_1+k_2=k} \frac{e^{-i\phi\left({\rm k}\right)t}}{k_1k_2}{\bf u}_{k_1}{\bf v}_{k_2}\right\} +\sum_{\substack{k_1+k_2+k_3=k \\ k_1+k_2 \not = 0}} \frac{ie^{-i \Phi\left({\rm k}\right)t}}{k_3}\left({\bf v}_{k_1}{\bf u}_{k_2}{\bf u}_{k_3}+\frac{1}{2}{\bf v}_{k_1}{\bf v}_{k_2}{\bf v}_{k_3}\right)\right], \end{aligned} \end{equation} where $\Phi\left({\rm k}\right)=\Phi\left(k_1,k_2,k_3\right)=3\left(k_1+k_2\right)\left(k_2+k_3\right)\left(k_3+k_1\right)$.
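For the reader's convenience, we verify that the cubic phase $\Phi$ is the sum of the two quadratic phases produced in the substitution: relabeling $\left(k_{11},k_{12},k_3\right)$ as $\left(k_1,k_2,k_3\right)$ (and symmetrically for the other substitutions), \begin{equation*} \phi\left(k_1+k_2,k_3\right) + \phi\left(k_1,k_2\right) = 3\left(k_1+k_2\right)\left[k_3\left(k_1+k_2+k_3\right) + k_1k_2\right] = 3\left(k_1+k_2\right)\left(k_2+k_3\right)\left(k_3+k_1\right) = \Phi\left({\rm k}\right), \end{equation*} where we used $k_3\left(k_1+k_2+k_3\right)+k_1k_2 = \left(k_2+k_3\right)\left(k_3+k_1\right)$. The restriction $k_1+k_2 \neq 0$ in the trilinear sums ensures that the divisor $\phi\left(k_1+k_2,k_3\right) = 3\left(k_1+k_2\right)k_3k$ from the previous step does not vanish.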
Similarly, we write a system for (\ref{eq:smooth truncated system}), \begin{equation}\label{eq:diff by parts truncated flow} \begin{aligned} \partial_t{\bf u}^b_k =& \frac{1}{6}\left[\partial_t\left\{\sum_{k_1+k_2=k} \frac{e^{-i\phi\left({\rm k}\right)t}}{k_1k_2}b\left(k\right){\bf v}^b_{k_1}{\bf v}^b_{k_2}\right\} +2i \sum_{\substack{k_1+k_2+k_3=k \\ k_1+k_2 \not = 0}} \frac{e^{-i\Phi\left({\rm k}\right)t}}{k_3}b\left(k\right)b\left(k_1+k_2\right){\bf u}^b_{k_1}{\bf v}^b_{k_2}{\bf v}^b_{k_3}\right] \\ \partial_t{\bf v}^b_k =& \frac{1}{3}\left[\partial_t\left\{\sum_{k_1+k_2=k} \frac{e^{-i\phi\left({\rm k}\right)t}}{k_1k_2}b\left(k\right){\bf v}^b_{k_1}{\bf v}^b_{k_2}\right\} \right.\\ &\left.+\sum_{\substack{k_1+k_2+k_3=k \\ k_1+k_2 \not = 0}} \frac{ie^{-i\Phi\left({\rm k}\right)t}}{k_3}b\left(k\right)b\left(k_1+k_2\right)\left({\bf v}^b_{k_1}{\bf u}^b_{k_2}{\bf u}^b_{k_3}+\frac{1}{2}{\bf v}^b_{k_1}{\bf v}^b_{k_2}{\bf v}^b_{k_3}\right)\right]. \end{aligned} \end{equation} Integrating (\ref{eq:diff by parts original flow}) and (\ref{eq:diff by parts truncated flow}) in time $t$, we have \begin{equation}\label{eq:solution u} \begin{split} {\bf u}_k\left(t\right)=&{\bf u}_k\left(0\right)+\frac{1}{6}\left[\sum_{k_1+k_2=k}\frac{e^{-i\phi\left({\rm k}\right)t}}{k_1k_2}{\bf v}_{k_1}\left(t\right){\bf v}_{k_2}\left(t\right)-\sum_{k_1+k_2=k}\frac{1}{k_1k_2}{\bf v}_{k_1}\left(0\right){\bf v}_{k_2}\left(0\right) \right. 
\\ &\left.+ 2i \int_0^t \sum_{\substack{k_1+k_2+k_3=k \\ k_1+k_2 \not = 0}}\frac{e^{-i\Phi\left({\rm k}\right)s}}{k_3}{\bf u}_{k_1}{\bf v}_{k_2}{\bf v}_{k_3}ds\right] \end{split} \end{equation} \begin{equation}\label{eq:solution v} \begin{split} {\bf v}_k\left(t\right)=&{\bf v}_k\left(0\right)+\frac{1}{3}\left[\sum_{k_1+k_2=k}\frac{e^{-i\phi\left({\rm k}\right)t}}{k_1k_2}{\bf u}_{k_1}\left(t\right){\bf v}_{k_2}\left(t\right)-\sum_{k_1+k_2=k}\frac{1}{k_1k_2}{\bf u}_{k_1}\left(0\right){\bf v}_{k_2}\left(0\right) \right.\\ &\left.+ \int_0^t \sum_{\substack{k_1+k_2+k_3=k \\ k_1+k_2 \not = 0}}\frac{i e^{-i\Phi\left({\rm k}\right)s}}{k_3}\left({\bf v}_{k_1}{\bf u}_{k_2}{\bf u}_{k_3}+\frac{1}{2}{\bf v}_{k_1}{\bf v}_{k_2}{\bf v}_{k_3}\right)ds\right] \end{split} \end{equation} \begin{equation}\label{eq:solution ub} \begin{split} {\bf u}_k^b\left(t\right)=&{\bf u}_k\left(0\right)+\frac{1}{6}\left[\sum_{k_1+k_2=k}\frac{e^{-i\phi\left({\rm k}\right)t}}{k_1k_2}b\left(k\right){\bf v}^b_{k_1}\left(t\right){\bf v}^b_{k_2}\left(t\right)-\sum_{k_1+k_2=k}\frac{1}{k_1k_2}b\left(k\right){\bf v}_{k_1}\left(0\right){\bf v}_{k_2}\left(0\right) \right. 
\\ &\left.+ 2i\int_0^t \sum_{\substack{k_1+k_2+k_3=k \\ k_1+k_2 \not = 0}}\frac{e^{-i\Phi\left({\rm k}\right)s}}{k_3}b\left(k\right)b\left(k_1+k_2\right){\bf u}^b_{k_1}{\bf v}^b_{k_2}{\bf v}^b_{k_3}ds\right] \end{split} \end{equation} \begin{equation}\label{eq:solution vb} \begin{split} {\bf v}_k^b\left(t\right)=&{\bf v}_k\left(0\right)+\frac{1}{3}\left[\sum_{k_1+k_2=k}\frac{e^{-i\phi\left({\rm k}\right)t}}{k_1k_2}b\left(k\right){\bf u}^b_{k_1}\left(t\right){\bf v}^b_{k_2}\left(t\right)-\sum_{k_1+k_2=k}\frac{1}{k_1k_2}b\left(k\right){\bf u}_{k_1}\left(0\right){\bf v}_{k_2}\left(0\right) \right.\\ &\left.+ \int_0^t \sum_{\substack{k_1+k_2+k_3=k \\ k_1+k_2 \not = 0}}\frac{ie^{-i\Phi\left({\rm k}\right)s}}{k_3}b\left(k\right)b\left(k_1+k_2\right)\left({\bf v}^b_{k_1}{\bf u}^b_{k_2}{\bf u}^b_{k_3}+\frac{1}{2}{\bf v}^b_{k_1}{\bf v}^b_{k_2}{\bf v}^b_{k_3}\right)ds\right]. \end{split} \end{equation} Transforming back (\ref{eq:solution u}), we write $u_k\left(t\right)$ as follows, \begin{equation}\label{eq:trans back soln} \begin{split} u_k\left(t\right)=e^{ik^3t}u_{k}\left(0\right)+\frac{e^{ik^3t}}{6}\left[\sum_{k_1+k_2=k}\frac{e^{-i\phi\left({\rm k}\right)t}}{k_1k_2}e^{-it\left(k_1^3+k_2^3\right)}v_{k_1}\left(t\right)v_{k_2}\left(t\right) -\sum_{k_1+k_2=k}\frac{v_{k_1}\left(0\right)v_{k_2}\left(0\right)}{k_1k_2} \right. \\ \left.+ 2i\int_0^t \sum_{\substack{k_1+k_2+k_3=k \\ k_1+k_2 \not = 0}}\frac{e^{-i\Phi\left({\rm k}\right)s}}{k_3}e^{-is\left(k_1^3+k_2^3+k_3^3\right)}u_{k_1}v_{k_2}v_{k_3}ds\right]. \end{split} \end{equation} The solutions $v_k\left(t\right)$, $u^b_k\left(t\right)$ and $v^b_k\left(t\right)$ are written in the same way. As the theorems indicate, we need to investigate the difference between the solutions in detail. Since $v$ and $v^b$ are handled similarly, we mainly consider the solutions $u$ and $u^b$. Note that `$k_1+k_2 \not = 0$' in (\ref{eq:trans back soln}) corresponds to `$-P_0\left(uv\right)\partial_x^{-1}v$' in the spatial domain.
This explains why we require the trilinear form of Lemma \ref{lem:tri est.} in the approximation analysis. \subsection{Proof of Theorem \ref{thm:est. of truncated flow}} \label{sec: pf of thm trun} $ $ \\ We prove the estimate \eqref{eq:est. of truncated flow} for $ u-u^b$. By the same argument, we can obtain \eqref{eq:est. of truncated flow} for $v-v^b$ as well. From \eqref{eq:space embedding} and \eqref{eq:solution u}-\eqref{eq:trans back soln}, the first term of the left hand side of (\ref{eq:est. of truncated flow}) is bounded by \begin{equation}\label{eq:Full est.} \begin{split} &\left\|P_{\le N^{1/2}}\left(u-u^b\right)\right\|_{Y^{-1/2}}\\ & \lesssim \left\|P_{ \le N^{1/2}}\left[ \partial_x^{-1}v\partial_x^{-1}v-\partial_x^{-1}v^b\partial_x^{-1}v^b\right]\right\|_{Y^{-1/2}} \\ &+ \left\|P_{\le N^{1/2}} \left[\left(uv-P_0\left(uv\right)\right)\partial^{-1}_xv-\left(B\left(u^bv^b\right)-P_0\left(B\left(u^bv^b\right)\right)\right)\partial^{-1}_xv^b\right]\right\|_{Z^{-1/2}} \\ &=: \left\|B_2\left(v,v\right)\right\|_{Y^{-1/2}} +\left\|N_3\left(u,v,v\right)\right\|_{Z^{-1/2}}. \end{split} \end{equation} We now use the bilinear and trilinear estimates obtained in Section 3. From the triangle inequality and (\ref{eq: bi est.}), \begin{equation}\label{eq:B_2 est.} \begin{split} \left\|B_2\left(v,v\right)\right\|_{Y^{-1/2}} &\le \left\|P_{ \le N^{1/2}}\left[\partial_x^{-1}v\partial_x^{-1}\left(v - v^b\right)\right]\right\|_{Y^{-1/2}} + \left\|P_{ \le N^{1/2}}\left[\partial_x^{-1}v^b\partial_x^{-1}\left(v - v^b\right)\right]\right\|_{Y^{-1/2}} \\ &\lesssim \left\|P_{ \le N^{1/2}}v\right\|_{Y^{-1/2}} \left\|P_{ \le N^{1/2}}\left(v - v^b\right)\right\|_{Y^{-1/2}} \\ &+ \left\|P_{ \le N^{1/2}}v^b\right\|_{Y^{-1/2}} \left\|P_{ \le N^{1/2}}\left(v - v^b \right)\right\|_{Y^{-1/2}} \\ &+ \text{(remainder terms)$_1$}. \end{split} \end{equation} Here $\text{(remainder terms)$_1$}$ contains \emph{high-high} to \emph{low} frequency interactions.
\\ \noindent The integral terms are also estimated by the triangle inequality, (\ref{eq:tri est.}) and $P_0B = P_0$: \begin{equation}\label{eq:N_3 est.} \begin{split} \left\|N_3\left(u,v,v\right)\right\|_{Z^{-1/2}} &\lesssim \left\|P_{ \le N^{1/2}}u\right\|_{Y^{-1/2}}\left\|P_{ \le N^{1/2}}v\right\|_{Y^{-1/2}} \left\|P_{ \le N^{1/2}}\left(v - v^b\right)\right\|_{Y^{-1/2}} \\ &+ \left\|P_{ \le N^{1/2}}u\right\|_{Y^{-1/2}}\left\|P_{ \le N^{1/2}}v^b\right\|_{Y^{-1/2}} \left\|P_{ \le N^{1/2}}\left(v - v^b\right)\right\|_{Y^{-1/2}} \\ &+ \left\|P_{ \le N^{1/2}}v^b\right\|_{Y^{-1/2}}\left\|P_{ \le N^{1/2}}v^b\right\|_{Y^{-1/2}} \left\|P_{ \le N^{1/2}}\left(u - u^b\right)\right\|_{Y^{-1/2}} \\ &+ \left\|P_{\le N^{1/2}}\mathcal{F}_x^{-1}\left[\sum_{\substack{k_1+k_2+k_3=k \\ k_1+k_2 \not = 0}}\frac{\left(1-b\left(k_1+k_2\right)\right)}{k_3}u^b_{k_1}v^b_{k_2}v^b_{k_3}\right]\right\|_{Z^{-1/2}} \\ &+ \text{(remainder terms)$_2$}. \end{split} \end{equation} We proceed in three steps. We first show that the remainder terms are $ { \scriptstyle\mathcal{O}}_N\left(1\right)$ using the bilinear and trilinear estimates obtained in the previous section. Next, we show that the $Z^{-1/2}$-term of (\ref{eq:N_3 est.}) is ${ \scriptstyle\mathcal{O}}_N\left(1\right)$ as well. Lastly, we show that the terms involving the difference are absorbed into the left hand side of \eqref{eq:Full est.} and $\left\|P_{\le N^{1/2}}\left(v-v^b\right)\right\|_{Y^{-1/2}}$. For this step, we use a rescaling argument to make the factor $\|P_{\le N^{1/2}} u \|_{Y^{-1/2}} $ small on a large domain. \\ \\ {\bf Step 1}. \\ First, we handle \text{(remainder terms)$_1$} in the boundary terms. We take a dyadic decomposition and use Lemma \ref{lem:bi est.}, Lemma \ref{lem:tri resonant est} and Lemma \ref{lem:local tri. est.}. In view of \eqref{eq:Full est.}, \text{(remainder terms)$_1$} contains only \emph{high-high} to \emph{low} interactions.
Namely, it is bounded by \begin{equation*} \left\|P_{\le N^{1/2}}F_2\left(v_{hi},v_{hi}\right)\right\|_{Y^{-1/2}}, \end{equation*} where we denote $v_{low} = P_{\le N^{1/2}}v$ and $v_{hi} = \left(1-P_{\le N^{1/2}}\right)v$. Obviously, $v$ can be replaced by $u$, $u^b$ or $v^b$; these cases are handled in the same way. From Lemma \ref{lem:bi est.} and the global well-posedness, since \begin{equation*} \left\|F_2\left(v,v\right)\right\|_{Y^{-1/2}} \lesssim \|v\|_{Y^{-1/2}} \|v\|_{Y^{-1/2}}, \end{equation*} we have \begin{equation}\label{eq: result of bi est.} \left\|P_{\le N^{1/2}}F_2\left(v_{hi},v_{hi}\right)\right\|_{Y^{-1/2}} \lesssim \|v_{hi}\|_{Y^{-1/2}} \|v_{hi}\|_{Y^{-1/2}} \sim { \scriptstyle\mathcal{O}}_N\left(1\right). \end{equation} \\ Next, we control the integral terms. Similarly, (remainder terms)$_2$ is also a sum of multilinear terms, but it contains both the resonant form $P_{\le N^{1/2}} F_r \left(u,v,v\right)$ and the nonresonant form $P_{\le N^{1/2}} F_{nr} \left(u,v,v\right)$. The resonant case can be controlled in the same way as the boundary terms. More precisely, we write $P_{\le N^{1/2}} F_{r}\left(u,v,v\right)$ as follows, \begin{equation}\label{eq:truncated resonant} \sum_{N_0,N_1,N_2,N_3} P_{N_0} P_{\le N^{1/2}} F_{r}\left(P_{N_1}u,P_{N_2}v,P_{N_3}v\right). \end{equation} Likewise, $u$ and $v$ in the sequel can be replaced by $u^b$ or $v^b$. As before, in view of \eqref{eq:N_3 est.}, (remainder terms)$_2$ does not contain trilinear terms whose factors all come from the low frequency piece. We thus have $n_1 > N^{1/2}$ and from \eqref{eq:resonant trilinear}, we can write the form \eqref{eq:truncated resonant} in (remainder terms)$_2$ as follows, \begin{equation}\label{eq: low-hi-hi tri} \sum_{N_0,N_2} P_{N_0} P_{\le N^{1/2 }}F_{r}\left(P_{N_0}u_{low}, P_{N_2}v_{hi}, P_{N_2} v_{hi}\right) = P_{\le N^{1/2}}F_r\left(u_{low}, v_{hi}, v_{hi}\right).
\end{equation} Similarly to the boundary case, we have the following estimates by Lemma~\ref{lem:tri resonant est} and the global well-posedness, \begin{equation*} \left\|F_{r}\left(u_{low}, v, v\right)\right\|_{Z^{-1/2}} \lesssim 1, \end{equation*} and \begin{equation}\label{eq: high freq resonant est.} \left\|P_{\le N^{1/2}}F_r\left(u_{low}, v_{hi}, v_{hi}\right)\right\|_{Z^{-1/2}}\lesssim \|u\|_{Y^{-1/2}}\|v_{hi}\|_{Y^{-1/2}} \|v_{hi}\|_{Y^{-1/2}} \sim { \scriptstyle\mathcal{O}}_N\left(1\right). \end{equation} In other words, the resonant part of (remainder terms)$_2$ is bounded by ${ \scriptstyle\mathcal{O}}_N\left(1\right)$.\\ \\ The integral terms associated with the nonresonant case require a bit more work since (\ref{eq:freq supp nonresonant est.}) has $N_i$ and $n_3$ as its coefficients. We can divide the frequency interval $\left[N^{1/2},2N^{1/2}\right]$ into $O(N^{1/4})$ intervals of equal length, and then, by orthogonality and the pigeonhole principle, there exists at least one interval of the form $[M,M+N^{1/4}]$ such that \begin{equation*} \left\|\left(P_{\le M+N^{1/4}}-P_{\le M}\right)u\right\|_{Y^{-1/2}} \lesssim N^{-\sigma}. \end{equation*} Fixing this $M$, we let $u_{low} = P_{\le M} u$, $u_{med} = \left(P_{\le M+N^{1/4}}-P_{\le M}\right)u$ and $u_{hi} = \left(1-P_{\le M+N^{1/4}}\right)u$. Then by Lemma \ref{lem:tri est.}, (remainder terms)$_2$ is bounded by $\mathcal{O}\left(N^{-\sigma}\right)$ if it contains a $u_{med}$ factor. Moreover, as before, the terms consisting only of low frequency factors are not included in (remainder terms)$_2$. Hence, we consider terms which have at least one $u_{hi}$. The worst case of this situation is $P_{\le M}F_{nr}\left(u_{hi},v_{low},v_{low}\right)$. As with the boundary terms, we split the solutions into the dyadic pieces, \begin{equation*} \sum_{N_0,N_1,N_2,N_3} P_{N_0} P_{\le M} F_{nr}\left(P_{N_1}u_{hi},P_{N_2}v_{low},P_{N_3}v_{low}\right).
\end{equation*} Using the frequency relations, we have $\left|k_1+k_2+k_3\right| =\left|k_0\right| \le M$, $\left|k_1\right| \ge M+N^{1/4}$ and $\left|k_2\right|, \left|k_3\right| \le M$. We thus have $n_3 \gtrsim N^{1/4}$, and from Lemma \ref{lem:local tri. est.}, \begin{equation}\label{eq: result of trun. trilinear} \left\|\sum_{N_0,N_1,N_2,N_3} P_{N_0} P_{\le M} F_{nr}\left(P_{N_1}u_{hi},P_{N_2}v_{low},P_{N_3}v_{low}\right)\right\|_{Z^{-1/2}} \lesssim N^{-\sigma}. \end{equation} Therefore, (remainder terms)$_2$ is bounded by ${ \scriptstyle\mathcal{O}}_N\left(1\right)$.\\ \\ {\bf Step 2}. \\ The argument is based on the mean value theorem. Indeed, we use the smooth truncation instead of the sharp truncation in order to apply the mean value theorem. Our claim is \begin{equation}\label{eq:goal of MVT} \begin{aligned} \left\|P_{\le N^{1/2}}\mathcal{F}^{-1}\left[\sum_{\substack{k_1+k_2+k_3=k \\ k_1+k_2 \not = 0}}\frac{\left(1-b\left(k_1+k_2\right)\right)}{k_3}v^b_{k_1}u^b_{k_2}u^b_{k_3}\right]\right\|_{Z^{-1/2}} \\ \le C\left(T,\left\|u^b_0\right\|_{H^{-1/2}_0},\left\|v^b_0\right\|_{H^{-1/2}_0}\right) { \scriptstyle\mathcal{O}}_N\left(1\right). \end{aligned} \end{equation} To prove \eqref{eq:goal of MVT}, we inspect the supports of the indices. By the sharp truncation $P_{\le N^{1/2}}$ and the smooth truncation $b\left(k_1+k_2\right)$, we have $\left|k_1+k_2+k_3\right| =\left|k\right| \lesssim N^{1/2}$, $\left|k_1+k_2\right| \gtrsim N$ and then $\left|k_3\right| \gtrsim N$, and therefore, $n_1 \gtrsim N$. If $\left(k_1,k_2,k_3\right)$ is nonresonant, then (\ref{eq:goal of MVT}) is directly obtained by Lemma \ref{lem:local tri. est.} and the global well-posedness. The remaining case is the resonant one. As mentioned above, we have $\left|k_1+k_2\right| \gtrsim N$ and $\left|k_3\right| \gtrsim N$, so there are only two cases, namely $\left(k_1,k_2,k_3\right) = \left(k,-k_3,k_3\right)$ or $\left(-k_3, k, k_3\right)$.
For each fixed time, we have \begin{equation*} \left|\sum_{k \in \mathbb{Z}^*} \frac{1}{k} v_{-k}u_{k}\right| \lesssim \left\|v\right\|_{H^{-1/2}_0}\left\|u\right\|_{H^{-1/2}_0} \lesssim 1 \end{equation*} by the Cauchy-Schwarz inequality and the global well-posedness, and then \begin{equation*} \frac{1}{2} \left|\sum_{k \in \mathbb{Z}^*} \frac{v_{-k}u_{k} - v_{k}u_{-k}}{k} \right| = \left|\sum_{k \in \mathbb{Z}^*} \frac{1}{k} v_{-k}u_{k}\right| \lesssim 1. \end{equation*} Hence, we may define $e_{k}$ such that \begin{equation}\label{eq:error sum} e_{k} = \frac{1}{k} v_{-k}u_{k} - \frac{1}{k} v_{k}u_{-k} \quad\text{and}\quad \sum_{k \gtrsim N} e_{k} = { \scriptstyle\mathcal{O}}_N\left(1\right). \end{equation} We can now rewrite a piece of the resonant case as follows, \begin{equation*} \begin{aligned} &\sum_{\left|k_3\right| \gtrsim N}1_{\left[-N^{1/2},N^{1/2}\right]}\left(k\right)\frac{1-b\left(k-k_3\right)}{k_3}v^b_{k}u^b_{-k_3}u^b_{k_3} \\ &+\sum_{\left|k_3\right| \gtrsim N}1_{\left[-N^{1/2},N^{1/2}\right]}\left(k\right)\frac{1-b\left(k-k_3\right)}{k_3}v^b_{-k_3}u^b_{k}u^b_{k_3} \\ &=\sum_{k_3 \gtrsim N}1_{\left[-N^{1/2},N^{1/2}\right]}\left(k\right)\frac{b\left(k+k_3\right)-b\left(k-k_3\right)}{k_3}v^b_{k}u^b_{-k_3}u^b_{k_3} \\ &+\sum_{k_3 \gtrsim N}1_{\left[-N^{1/2},N^{1/2}\right]}\left(k\right)\left[\frac{1-b\left(k-k_3\right)}{k_3}v^b_{-k_3}u^b_{k}u^b_{k_3} - \frac{1-b\left(k+k_3\right)}{k_3}v^b_{k_3}u^b_{k}u^b_{-k_3}\right] \\ &=\sum_{k_3 \gtrsim N}1_{\left[-N^{1/2},N^{1/2}\right]}\left(k\right)\frac{b\left(k+k_3\right)-b\left(k-k_3\right)}{k_3}v^b_{k}u^b_{-k_3}u^b_{k_3} \\ &+\sum_{k_3 \gtrsim N}1_{\left[-N^{1/2},N^{1/2}\right]}\left(k\right)\left[\frac{b\left(k+k_3\right)-b\left(k-k_3\right)}{k_3}v^b_{k_3}u^b_{k}u^b_{-k_3} + \left(1-b\left(k-k_3\right)\right)e_{k_3}u_k^b\right].
\end{aligned} \end{equation*} Since $b\left(k\right)$ is even, the mean value theorem gives \begin{equation}\label{eq:mean value thm of B(k)} \left|b\left(k+k_3\right)-b\left(k-k_3\right)\right|= \left|b\left(k+k_3\right)-b\left(k_3-k\right)\right| = \mathcal{O}\left(\frac{\left|k\right|}{N}\right) = \mathcal{O}\left(N^{-\sigma}\right), \end{equation} for $\left|k\right| \lesssim N^{1/2}$. In addition, we have \begin{equation}\label{eq:last bound1} \left|\sum_{\left|k_3\right| \gtrsim N} \frac{1}{k_3} u_{-k_3}v_{k_3}\right| \le \left\|u\right\|_{H^{-1/2}_0}\left\|v\right\|_{H^{-1/2}_0}, \end{equation} and \begin{equation}\label{eq:last bound2} \left\|u\right\|_{L^{\infty}_tH^{-1/2}_x} \lesssim \left\|u\right\|_{Y^{-1/2}}, \end{equation} by the Cauchy-Schwarz inequality and the global well-posedness. Thus, (\ref{eq:goal of MVT}) is proved by \eqref{eq:error sum}-\eqref{eq:last bound2}. From (\ref{eq:space embedding}), we conclude that the $Z^{-1/2}$-term of (\ref{eq:N_3 est.}) is ${ \scriptstyle\mathcal{O}}_N\left(1\right)$. \\ \\ {\bf Step 3}. \\ To complete the proof of Theorem \ref{thm:est. of truncated flow}, we need to use a rescaling argument. Our claim is that \eqref{eq:est. of truncated flow} holds on the time interval $\left[0,T\right]$. In fact, this claim is equivalent to showing that the corresponding estimate for the $\alpha$-scaled solution \begin{equation}\label{scaling} u_\alpha\left(x,t\right) = \alpha^{-2}u\left(\frac{x}{\alpha},\frac{t}{\alpha^3}\right) \end{equation} holds on the interval $\left[0,\alpha^3T\right]$ and on the domain $ \alpha\mathbb{T} $.
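Note that the scaling \eqref{scaling} is compatible with the system: each term of \eqref{eq:Coupled KdV system} picks up the same factor $\alpha^{-5}$. For instance, for the first equation, \begin{equation*} \partial_t u_\alpha + \partial_x^3 u_\alpha + \frac{1}{2}\partial_x\left(v_\alpha^2\right) = \alpha^{-5}\left[\partial_t u + \partial_x^3 u + \frac{1}{2}\partial_x\left(v^2\right)\right]\left(\frac{x}{\alpha},\frac{t}{\alpha^3}\right) = 0, \end{equation*} and similarly for the equation for $v_\alpha$, so that $\left(u_\alpha, v_\alpha\right)$ solves the same system on $\alpha\mathbb{T}$ up to time $\alpha^3 T$.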
Roughly speaking, if we can show that $\left\|P_{ \le N^{1/2}}u_{\alpha}\right\|_{Y^{-1/2}}$, $\left\|P_{ \le N^{1/2}}v_{\alpha}\right\|_{Y^{-1/2}}$, $\left\|P_{ \le N^{1/2}}u^b_{\alpha}\right\|_{Y^{-1/2}}$ and $\left\|P_{ \le N^{1/2}}v^b_{\alpha}\right\|_{Y^{-1/2}}$ are sufficiently small in $\alpha \mathbb{T} $, then we are done by putting all the previous steps together.\\ \\ Although the implicit constants depend on $\alpha$, we can obtain (\ref{eq:B_2 est.}) and (\ref{eq:N_3 est.}) on $\alpha \mathbb{T}$ by Remark \ref{rmk:rescaling est.}. More precisely, by Step 1, Step 2 and Remark \ref{rmk:rescaling est.}, we have \begin{equation}\label{eq: full est. rescaled} \begin{split} &\left\|P_{\le N^{1/2}}\left(u_{\alpha}-u_{\alpha}^b\right)\right\|_{Y^{-1/2}\left(\alpha \mathbb{T}\right)}\\ &\lesssim \alpha^{0+}\left[ \left\|P_{ \le N^{1/2}}v_{\alpha}\right\|_{Y^{-1/2}\left(\alpha \mathbb{T}\right)} \left\|P_{ \le N^{1/2}}\left(v_{\alpha} - v_{\alpha}^b\right)\right\|_{Y^{-1/2}\left(\alpha \mathbb{T}\right)} \right.\\ &+ \left\|P_{ \le N^{1/2}}v_{\alpha}^b\right\|_{Y^{-1/2}\left(\alpha \mathbb{T}\right)} \left\|P_{ \le N^{1/2}}\left(v_{\alpha} - v_{\alpha}^b \right)\right\|_{Y^{-1/2}\left(\alpha \mathbb{T}\right)} \\ &+ \left\|P_{ \le N^{1/2}}u_{\alpha}\right\|_{Y^{-1/2}\left(\alpha \mathbb{T}\right)}\left\|P_{ \le N^{1/2}}v_{\alpha}\right\|_{Y^{-1/2}\left(\alpha \mathbb{T}\right)} \left\|P_{ \le N^{1/2}}\left(v_{\alpha} - v_{\alpha}^b\right)\right\|_{Y^{-1/2}\left(\alpha \mathbb{T}\right)} \\ &+ \left\|P_{ \le N^{1/2}}u_{\alpha}\right\|_{Y^{-1/2}\left(\alpha \mathbb{T}\right)}\left\|P_{ \le N^{1/2}}v_{\alpha}^b\right\|_{Y^{-1/2}\left(\alpha \mathbb{T}\right)} \left\|P_{ \le N^{1/2}}\left(v_{\alpha} - v_{\alpha}^b\right)\right\|_{Y^{-1/2}\left(\alpha \mathbb{T}\right)} \\ &+ \left.\left\|P_{ \le N^{1/2}}v_{\alpha}^b\right\|_{Y^{-1/2}\left(\alpha \mathbb{T}\right)}\left\|P_{ \le N^{1/2}}v_{\alpha}^b\right\|_{Y^{-1/2}\left(\alpha \mathbb{T}\right)} \left\|P_{ \le
N^{1/2}}\left(u_{\alpha} - u_{\alpha}^b\right)\right\|_{Y^{-1/2}\left(\alpha \mathbb{T}\right)}\right] \\ &+{ \scriptstyle\mathcal{O}}_N\left(1\right). \end{split} \end{equation} Similarly, we also have the corresponding estimate for $\left\|P_{\le N^{1/2}}\left(v_{\alpha}-v_{\alpha}^b\right)\right\|_{Y^{-1/2}\left(\alpha \mathbb{T}\right)}$. Hence, if we can prove \begin{equation}\label{eq: smallness of rescaled} \left\|P_{\le N^{1/2}}u^{(b)}_{\alpha}\left(t,x\right)\right\|_{Y^{-1/2}\left(\alpha \mathbb{T}\right)} + \left\|P_{\le N^{1/2}}v^{(b)}_{\alpha}\left(t,x\right)\right\|_{Y^{-1/2}\left(\alpha \mathbb{T}\right)} \ll 1, \end{equation} for $N > N_0(T, \varepsilon, \left\|u_{0}\right\|_{H^{-1/2}_0\left(\mathbb{T}\right)}, \left\|v_{0}\right\|_{H^{-1/2}_0\left(\mathbb{T}\right)})$, then all terms of the right hand side of \eqref{eq: full est. rescaled} except for ${\scriptstyle\mathcal{O}}_N\left(1\right)$ are absorbed into the left hand side. Now we prove \eqref{eq: smallness of rescaled}. By the global bound of the solutions for $ t\in [0,T]$, and scaling back via \eqref{scaling}, \begin{equation*} \left\|P_{\le N^{1/2}}u_{\alpha}\left(t,x\right)\right\|_{Y^{-1/2}\left(\alpha \mathbb{T}\right)} \lesssim \left\|P_{\le N^{1/2}}u_{{\alpha},0}\left(x\right)\right\|_{{H^{-1/2}_0}\left(\alpha \mathbb{T}\right)} = \alpha^{-1}\| P_{\le \alpha N^{1/2}} u_0 \|_{H_0^{-1/2}(\mathbb{T})}. \end{equation*} We first choose $\alpha$ sufficiently large so that the terms involving the difference are absorbed into the left hand side. In the estimates of the remainder terms in the previous steps, the $ {\scriptstyle\mathcal{O}}_N\left(1\right) $ bounds depend on $\alpha $, too; but after fixing $\alpha $, we choose $N$ sufficiently large so that $ {\scriptstyle\mathcal{O}}_N\left(1\right) $ is small. We handle $v_{\alpha}$, $u^b_{\alpha}$ and $v^b_{\alpha}$ similarly. Consequently, we finish the proof of Theorem~\ref{thm:est. of truncated flow} by the embedding $Y^{s} \subset C_{t}H^s$.
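The absorption scheme in Step 3 is elementary: writing $A = \left\|P_{\le N^{1/2}}\left(u_{\alpha}-u_{\alpha}^b\right)\right\|_{Y^{-1/2}\left(\alpha \mathbb{T}\right)}$ and $B = \left\|P_{\le N^{1/2}}\left(v_{\alpha}-v_{\alpha}^b\right)\right\|_{Y^{-1/2}\left(\alpha \mathbb{T}\right)}$, the estimate \eqref{eq: full est. rescaled} and its counterpart for $B$ take, once \eqref{eq: smallness of rescaled} holds with smallness parameter $\varepsilon$, the schematic form \begin{equation*} A \le C\varepsilon\left(A+B\right) + { \scriptstyle\mathcal{O}}_N\left(1\right), \qquad B \le C\varepsilon\left(A+B\right) + { \scriptstyle\mathcal{O}}_N\left(1\right), \end{equation*} so that summing the two estimates and choosing $\varepsilon$ with $2C\varepsilon \le \frac{1}{2}$ gives $A+B \le \frac{1}{2}\left(A+B\right)+{ \scriptstyle\mathcal{O}}_N\left(1\right)$, and hence $A+B = { \scriptstyle\mathcal{O}}_N\left(1\right)$.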
\subsection{Proof of Theorem \ref{thm:est. of different data}} $ $\\ The argument to prove Theorem \ref{thm:est. of different data} closely follows \cite{Colliander:2005vv}. We mainly prove the following local-in-time estimate. \begin{lem}\label{lem:local est. of different data} Let $N' \ge 1$, $\left(u_0,v_0\right) \in {H^{{-\frac{1}{2}}}_0} \times {H^{{-\frac{1}{2}}}_0}$ and $\left(u'_0,v'_0\right) \in {H^{{-\frac{1}{2}}}_0} \times {H^{{-\frac{1}{2}}}_0}$ such that $P_{\le N'}\left(u_0,v_0\right) = P_{\le N'}\left(u'_0,v'_0\right)$. If $T'$ is sufficiently small depending on $\left\|u_0\right\|_{H^{-1/2}_0}$, $\left\|u'_0\right\|_{H^{-1/2}_0}$, $\left\|v_0\right\|_{H^{-1/2}_0}$, and $\left\|v'_0\right\|_{H^{-1/2}_0}$, then we have \begin{align*} &\sup_{\left|t\right|\le T'}\left\|P_{ \le N'-(N')^{1/2}}\left(S_{CKdV}\left(t\right) u_0-S_{CKdV} \left(t\right)u'_0\right)\right\|_{{H^{-1/2}_0}} \\ &+\sup_{\left|t\right|\le T'} \left\|P_{ \le N'-(N')^{1/2}}\left(S_{CKdV}\left(t\right)v_0-S_{CKdV}\left(t\right)v'_0\right)\right\|_{{H^{-1/2}_0}}\\ &\le C\left(\left\|u_0\right\|_{H^{-1/2}_0}, \left\|u'_0\right\|_{H^{-1/2}_0}, \left\|v_0\right\|_{H^{-1/2}_0}, \left\|v'_0\right\|_{H^{-1/2}_0}\right) { \scriptstyle\mathcal{O}}_{N'}\left(1\right). \end{align*} \end{lem} Theorem \ref{thm:est. of different data} can be proved by using Lemma~\ref{lem:local est. of different data}. Roughly speaking, we divide the given time interval $\left[-T,T\right]$ into intervals of length $T'$ and apply Lemma~\ref{lem:local est. of different data} repeatedly. We refer to Section 5 in \cite{Colliander:2005vv} for the details of this argument. \\ \begin{proof}[Proof of Lemma \ref{lem:local est. of different data}] We only consider the difference between $u$ and $u'$, as the case of $v$ and $v'$ is handled in the same way.
From the local well-posedness for \eqref{eq:Coupled KdV system}, we have the local estimates \begin{equation}\label{eq:small} \left\|u\right\|_{Y^{-1/2}} + \left\|u'\right\|_{Y^{-1/2}} \lesssim C ~\text{and}~\left\|v\right\|_{Y^{-1/2}} + \left\|v'\right\|_{Y^{-1/2}} \lesssim C, \end{equation} by choosing the time $T'$ sufficiently small, depending on the $H_0^{-1/2}$-norms of $u_0$, $u'_0$, $v_0$ and $v'_0$. We apply $P_{\le M}$ in \eqref{eq:diff by parts original flow} to get \begin{equation}\label{eq:low freq eq} \begin{aligned} \partial_t 1_{\left[-M,M\right]}\left(k\right) {\bf u}_k &= \partial_t\left\{\frac{1_{\left[-M,M\right]}\left(k\right)}{6} \sum_{k_1+k_2=k} \frac{e^{-i\phi\left({\rm k}\right)t}}{k_1k_2}{\bf v}_{k_1}{\bf v}_{k_2}\right\} \\ &+2i~\frac{1_{\left[-M,M\right]}\left(k\right)}{6} \sum_{\substack{k_1+k_2+k_3=k \\ k_1+k_2 \not = 0}}1_{[-M,M]}\left(k_1+k_2\right) \frac{e^{-i\Phi\left({\rm k}\right)t}}{k_3}{\bf u}_{k_1}{\bf v}_{k_2}{\bf v}_{k_3}. \end{aligned} \end{equation} Undoing the linear propagator, we can rewrite the right hand side of \eqref{eq:low freq eq} as follows: \begin{equation*} \begin{aligned} &\partial_t \mathcal{F}_x\left[P_{\le M}B'_2\left({\bf v},{\bf v}\right)\right]+\mathcal{F}_x\left[ P_{\le M} N'_3\left({\bf u},{\bf v},{\bf v}\right)\right] \\ &=\partial_t\left\{\frac{1_{\left[-M,M\right]}\left(k\right)}{6} \sum_{k_1+k_2=k} \frac{e^{-i\phi\left({\rm k}\right)t}}{k_1k_2}e^{-it\left(k_1^3+k_2^3\right)} v_{k_1}v_{k_2}\right\} \\ &+2i~\frac{1_{\left[-M,M\right]}\left(k\right)}{6} \sum_{\substack{k_1+k_2+k_3=k \\ k_1+k_2 \not = 0}}1_{[-M,M]}\left(k_1+k_2\right) \frac{e^{-i\Phi\left({\rm k}\right)t}}{k_3}e^{-it\left(k_1^3+k_2^3+k_3^3\right)}u_{k_1}v_{k_2}v_{k_3} \\ &=:\partial_t \mathcal{F}_x\left[P_{\le M}B'_2\left(v,v\right)\right]+ \mathcal{F}_x\left[P_{\le M} N'_3\left(u,v,v\right)\right]. \end{aligned} \end{equation*} The constant $M$ will be the low frequency cut-off and will be chosen later.
In order to show that $ \|P_{\le N'-(N')^{1/2}}\left(u(t) -u'(t)\right) \|_{Y^{-1/2}}$ is small for a short time $T'$, we analyse the nonlinear terms and show that the contribution from high frequency pieces is small and can thus be regarded as remainder terms of size ${ \scriptstyle\mathcal{O}}_{N'}\left(1\right) $. In addition, for the contribution from all low frequency pieces, we use the local stability theory. \\ \\ We first consider the trilinear term $N'_3(u,v,v)$. This part is also similar to Step 1 of Subsection \ref{sec: pf of thm trun}. In order to control $N'_3(u,v,v)$, we define $(\mbox{\emph{error terms}})_3$, whose $Z^{-\frac{1}{2}}$-norm is $ { \scriptstyle\mathcal{O}}_{N'}\left(1\right)$. The solutions $u$, $u'$, $v$ and $v'$ are decomposed into three pieces as follows. By the global well-posedness and the pigeon-hole principle, we may find an interval $\left[M,M+(N')^{1/4}\right] \subseteq [N'-\left(N'\right)^{\frac12},N']$\footnote{ The constant $M$ may differ from that in Subsection \ref{sec: pf of thm trun}.} such that \begin{equation}\label{eq:sub-small} \begin{split} &\left\|(P_{\le M+\left(N'\right)^{1/4}}-P_{\le M})u\right\|_{Y^{-1/2}} + \left\|(P_{\le M+\left(N'\right)^{1/4}}-P_{\le M})u'\right\|_{Y^{-1/2}} \\ &+\left\|(P_{\le M+\left(N'\right)^{1/4}}-P_{\le M})v\right\|_{Y^{-1/2}} + \left\|(P_{\le M+\left(N'\right)^{1/4}}-P_{\le M})v'\right\|_{Y^{-1/2}} \lesssim \left(N'\right)^{-\sigma}. \end{split} \end{equation} We fix such $M$ with $ N'-N'^{1/2} \le M \le N'$ and decompose $u$ as \begin{equation*} u = u_{low} + u_{med}+u_{hi}, \end{equation*} where \begin{equation*} u_{low} := P_{\le M}u, \hspace{1em} u_{med} := (P_{\le M+\left(N'\right)^{1/4}}-P_{\le M})u, \hspace{1em} u_{hi}:= (1-P_{\le M+\left(N'\right)^{1/4}})u.
\end{equation*} From \eqref{eq:small} and \eqref{eq:sub-small}, we have \begin{equation}\label{eq:split} \left\|u_{low}\right\|_{Y^{-1/2}},\left\|u_{hi}\right\|_{Y^{-1/2}} \le C \hspace{1em} \mbox{and} \hspace{1em} \left\|u_{med}\right\|_{Y^{-1/2}} \lesssim \left(N'\right)^{-\sigma}. \end{equation} We also do the same decomposition for $u'$, $v$ and $v'$, and obtain estimates analogous to \eqref{eq:split}. Moreover, we denote \begin{equation*} P_{\le M}N'_3(u, v, v) = P_{\le M}N'_3(u_{low},v_{low}, v_{low}) + \text{(remainder terms)}_3. \end{equation*} First of all, from Lemmas \ref{lem:tri resonant est} and \ref{lem:local tri. est.}, any term in $\text{(remainder terms)}_3$ involving $u_{med}$, $u'_{med}$, $v_{med}$ or $v'_{med}$ contributes $\mathcal{O}\left(\left(N'\right)^{-\sigma}\right)$ to $(\mbox{\emph{error terms}})_3$. We now consider terms involving $v_{hi}$. As before, $\text{(remainder terms)}_3$ is split into the resonant case and the nonresonant case. From \eqref{eq: low-hi-hi tri}, the typical term of the resonant case is $P_{\le M}N'_3(u_{low}, v_{hi}, v_{hi})$ and therefore we have \begin{equation*} \left\|P_{\le M}N'_3(u_{low}, v_{hi}, v_{hi})\right\|_{Z^{-1/2}} \sim { \scriptstyle\mathcal{O}}_{N'}\left(1\right), \end{equation*} by \eqref{eq: high freq resonant est.}. For the nonresonant case, we argue as in Step 1 of Subsection \ref{sec: pf of thm trun}. Since we have $n_3 \gtrsim (N')^{1/4}$, Lemma \ref{lem:local tri. est.}, and the estimate of \eqref{eq: result of trun. trilinear} in Subsection~\ref{sec: pf of thm trun}, the nonresonant case of $\text{(remainder terms)}_3$ is bounded by $\mathcal{O}\left(\left(N'\right)^{-\sigma}\right)$. In other words, $P_{\le M}N'_3(u,v,v)$ can be written as $P_{\le M}N'_3(u_{low}, v_{low},v_{low}) + (\mbox{\emph{error terms}})_3$ with $ \| (\mbox{\emph{error terms}})_3 \|_{Z^{-1/2}} = {\scriptstyle\mathcal{O}}_{N'}\left(1\right) $.
Due to $P_{\le M}e^{\pm t \partial^3_x} = e^{\pm t\partial^3_x}P_{\le M}$, we have \begin{equation*} P_{\le M}N'_3(u,v,v) = P_{\le M}N'_3({\bf u}_{low}, {\bf v}_{low}, {\bf v}_{low}) + (\mbox{\emph{error terms}})_3, \end{equation*} where ${\bf u}_{low} = P_{\le M}{\bf u}$ and ${\bf v}_{low} = P_{\le M}{\bf v}$.\\ \\ The bilinear term $B'_2(v,v)$ can be analysed in a similar way. Let $(\mbox{\emph{error terms}})_4$ denote terms whose $Y^{-\frac{1}{2}}$-norm is ${ \scriptstyle\mathcal{O}}_{N'}\left(1\right)$. We choose the same constant $M$ as in the $N_3'(u,v,v)$ case and split the solutions $v$ and $v'$ as follows: \begin{equation*} v = v_{low} + v_{hi}, \end{equation*} where \begin{equation*} v_{low} = P_{ \le M}v \hspace{1em} \text{and} \hspace{1em} v_{hi} = \left(1-P_{ \le M}\right)v. \end{equation*} We denote \begin{equation}\label{eq:truncated bilinear} P_{\le M}B'_2(v, v) = P_{\le M}B'_2(v_{low}, v_{low}) + \text{(remainder terms)}_4. \end{equation} From \eqref{eq:truncated bilinear}, $\text{(remainder terms)}_4$ contains only terms with $v_{hi}$. Hence, $\text{(remainder terms)}_4$ is bounded by ${ \scriptstyle\mathcal{O}}_{N'}\left(1\right)$ from \eqref{eq: result of bi est.} and $M\in [N'-\left(N'\right)^{\frac12},N']$. Therefore, the bilinear term $B'_2(v, v)$ can be written as $B'_2(v_{low}, v_{low}) + (\mbox{\emph{error terms}})_4$, and so \begin{equation*} B'_2(v, v) = B'_2({\bf v}_{low}, {\bf v}_{low}) + (\mbox{\emph{error terms}})_4, \end{equation*} by $P_{\le M}e^{\pm t \partial^3_x} = e^{\pm t\partial^3_x}P_{\le M}$. \\ \\ Consequently, ${\bf u}_{low}$ obeys the equation \begin{equation}\label{eq:low equation1} \partial_t{\bf u}_{low} = \partial_t P_{\le M}B'_2({\bf v}_{low}, {\bf v}_{low})+P_{\le M}N'_3({\bf u}_{low},{\bf v}_{low},{\bf v}_{low}) + \sum_{i=3,4}(\mbox{\emph{error terms}})_{i}.
\end{equation} In the same manner, the function ${\bf u}'_{low}$ obeys the equation \begin{equation}\label{eq:low equation2} \partial_t{\bf u}'_{low} = \partial_t P_{\le M}B'_2({\bf v}'_{low}, {\bf v}'_{low})+P_{\le M}N_3'({\bf u}'_{low},{\bf v}'_{low},{\bf v}'_{low}) + \sum_{i=3,4}(\mbox{\emph{error terms}})_{i}. \end{equation} From the local well-posedness for (\ref{eq:low equation1}) and \eqref{eq:low equation2}, transforming back, the identity $u_{low}\left(0\right) = u'_{low}\left(0\right)$, and the fact that, by the rescaling argument of Subsection \ref{sec: pf of thm trun}, we may assume the initial data to be small in $Y^{-1/2}$, we have \begin{equation*} \left\|u_{low} - u'_{low}\right\|_{Y^{-1/2}} \lesssim { \scriptstyle\mathcal{O}}_{N'}\left(1\right). \end{equation*} We also get \begin{equation*} \left\|v_{low} - v'_{low}\right\|_{Y^{-1/2}} \lesssim { \scriptstyle\mathcal{O}}_{N'}\left(1\right), \end{equation*} by a similar argument, and we thus finish the proof, since $Y^{s} \subset C_{t}H^s$. \end{proof} \begin{rmk} Although we provide the proof for \eqref{eq:Majda-Biello system}, the same proof works in the more general case \eqref{eq:Diagonalize Coupled KdV system} if there is global control of the solutions in $C_tH_0^{-\frac{1}{2}}([0,T]\times\mathbb{T})$. \end{rmk}
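The sharp frequency projection $P_{\le N}$ and the mean-zero Sobolev norm $H^s_0$ used throughout can be made concrete on a discretised torus. The following Python sketch (the grid size, the test signal, and the discrete normalisation are illustrative assumptions, not taken from the paper) checks two properties used implicitly above: $P_{\le N}$ is idempotent, and it does not increase the $H^{-1/2}_0$ norm.

```python
import cmath

def fourier_coeffs(u):
    """Discrete Fourier coefficients u_k, k = -n/2, ..., n/2 - 1, of a grid function on the torus."""
    n = len(u)
    return {k: sum(u[j] * cmath.exp(-2j * cmath.pi * k * j / n) for j in range(n)) / n
            for k in range(-n // 2, n // 2)}

def project(u, N):
    """Sharp Dirichlet projection P_{<=N}: keep only the modes with |k| <= N."""
    n = len(u)
    c = fourier_coeffs(u)
    return [sum(c[k] * cmath.exp(2j * cmath.pi * k * j / n) for k in c if abs(k) <= N)
            for j in range(n)]

def h_norm(u, s):
    """Discrete H^s_0 norm: drop the mean mode k = 0 and weight mode k by |k|^s."""
    c = fourier_coeffs(u)
    return sum(abs(k) ** (2 * s) * abs(c[k]) ** 2 for k in c if k != 0) ** 0.5
```

On a grid of $n$ points this is an $O(n^2)$ toy transform; it is meant only to make the projection and the norm tangible, not to be efficient.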
\section{Introduction} \label{sec:in} Let $G$ be a connected graph of order $n$ with vertex set $V(G)$. The {\em average distance} $\overline{\sigma}(v,G)$ of a vertex $v$ of $G$ is defined as the arithmetic mean of the distances from $v$ to all other vertices of $G$, i.e., \[ \overline{\sigma}(v,G) = \frac{1}{n-1} \sum_{u \in V(G)} d(v,u), \] where $d(v,u)$ is the usual shortest path distance between vertices $v$ and $u$. The {\em proximity} and the {\em remoteness} of $G$, denoted by $\pi(G)$ and $\rho(G)$, are, respectively, the smallest and the largest average distance among the vertices of $G$. The average distance of a vertex $v$ is closely related to its {\em total distance} $\sigma(v,G)$, defined as the sum of the distances from $v$ to all other vertices. Clearly, $\sigma(v,G) = (n-1) \overline{\sigma}(v,G)$. The names {\em status} and {\em transmission} have also been used in the literature. The proximity of a graph is closely related to its {\em minimum status}, defined as the smallest total distance among the vertices of the graph. If a graph $G$ represents a network in which we want to place a facility which should be close, on average, to vertices of $G$, then the proximity of $G$ is an indicator for how good the best location in the network is, and the remoteness is an indicator for how good the worst location is. Both the proximity and the remoteness of a connected graph on at least two vertices are at least $1$. Sharp upper bounds in terms of order alone were given by Zelinka \cite{Zel1968} and later, independently, by Aouchiche and Hansen \cite{AouHan2011}, who introduced the names proximity and remoteness.
They proved that for every connected graph of order $n$, \begin{equation} \label{eq:remoteness-in-terms-of-order} \rho(G) \leq \frac{n}{2}, \end{equation} with equality if and only if $G$ is a path, and also that \begin{equation} \label{eq:proximity-in-terms-of-order} \pi(G) \leq \left\{ \begin{array}{cc} \frac{n+1}{4} & \textrm{if $n$ is odd,} \\ \frac{n+1}{4} + \frac{1}{4(n-1)} & \textrm{if $n$ is even,} \end{array} \right. \end{equation} with equality if and only if $G$ is a path or a cycle. There are several results in the literature on relations between proximity or remoteness and other distance measures. For the {\em diameter} (defined as the largest of the distances between the vertices), these were first investigated by Aouchiche and Hansen \cite{AouHan2011}, who determined sharp upper bounds on the difference between diameter and proximity and on the difference between diameter and remoteness in terms of order. Improved bounds, which also take the minimum degree into account, were given in \cite{Dan2016} and \cite{DanMaf-manu}. Also bounds involving the {\em radius} (defined as the smallest of the eccentricities of the vertices of $G$, where the eccentricity of a vertex $v$ is the distance from $v$ to a vertex farthest from $v$) have been explored. A sharp upper bound on the difference between radius and proximity of a graph of given order was given in \cite{AouHan2011}, and improved bounds that also take the minimum degree into account can be found in \cite{Dan2016} and \cite{DanMaf-manu}. A conjecture in \cite{AouHan2011} on the maximum value of the difference between remoteness and radius for graphs of given order was proved independently by Wu and Zhang \cite{WuZha2014} and Hua, Chen and Das \cite{HuaCheDas2015}.
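Both order-only bounds, together with their equality cases, are easy to verify numerically on small graphs. The following Python sketch (the adjacency-list encoding of graphs is an illustrative choice) computes $\overline{\sigma}(v,G)$, $\pi(G)$ and $\rho(G)$ by breadth-first search in exact rational arithmetic:

```python
from collections import deque
from fractions import Fraction

def avg_distance(adj, v):
    """Average distance of v: BFS distances to all other vertices, exact arithmetic."""
    dist = {v: 0}
    queue = deque([v])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    return Fraction(sum(dist.values()), len(adj) - 1)

def proximity(adj):
    return min(avg_distance(adj, v) for v in adj)

def remoteness(adj):
    return max(avg_distance(adj, v) for v in adj)

def path_graph(n):
    """The path P_n on vertices 0, ..., n-1."""
    return {i: [j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)}

def cycle_graph(n):
    """The cycle C_n on vertices 0, ..., n-1."""
    return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
```

For instance, `remoteness(path_graph(9))` returns $9/2$, and both `proximity(path_graph(10))` and `proximity(cycle_graph(10))` return $25/9 = \frac{11}{4} + \frac{1}{36}$, matching the two displayed bounds and their equality cases.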
Ma, Wu and Zhang \cite{MaWuZha2012} showed that the difference between {\em average eccentricity} (defined as the arithmetic mean of the eccentricities of the vertices) and proximity is maximised by the path, thus proving a conjecture from \cite{AouHan2011}. Another conjecture in \cite{AouHan2011}, which states that the difference between average eccentricity and remoteness cannot exceed that of the cycle, was shown by Sedlar \cite{Sed2013} to hold for trees. The maximum value of the difference between remoteness and {\em average distance} (defined as the arithmetic mean of the distances between all vertices of the graph) and the difference between average distance and proximity were determined by Wu and Zhang \cite{WuZha2014} and by Sedlar \cite{Sed2013}, respectively, thus confirming two conjectures from \cite{AouHan2011}. The minimum value of the ratio of proximity to average distance for graphs of given order was determined by Hua and Das \cite{HuaDas2014}. Proximity and remoteness have been studied for graphs from various classes. Among other results, Barefoot, Entringer and Sz\'{e}kely \cite{BarEntSze1997} determined the maximum value of the ratio $\rho(T)/\pi(T)$ for a tree $T$ of given order. They also determined lower bounds on the ratios of average distance to proximity and average distance to remoteness for trees of given order. The trees that maximise proximity among trees of given order and diameter were determined by Peng and Zhou \cite{PenZho2021}. The same authors also gave bounds on proximity in terms of order and either the number of end-vertices, the number of vertices of odd degree, or the number of vertices of degree $2$. The proximity of series-reduced trees, i.e., trees with no vertex of degree $2$, was studied by Cheng, Lin and Zhou \cite{CheLinZho2021}. For maximal planar graphs, bounds on remoteness and proximity were given by Czabarka, Dankelmann, Olsen and Sz\'{e}kely in \cite{CzaDanOlsSze2019} and \cite{CzaDanOlsSze2020}, respectively.
The study of proximity and remoteness in digraphs was initiated by Ai, Gerke, Gutin and Mafunda \cite{AiGerGutMaf2021}. This paper is concerned with bounds on proximity and remoteness that take into account vertex degrees. The {\em degree} of a vertex $v$ is defined by ${\rm deg}_G(v) =|N_G(v)|$, where $N_G(v)$ is the {\em neighbourhood} of $v$, i.e., the set of vertices adjacent to $v$. By $\delta(G)$ and $\Delta(G)$ we denote the {\em minimum degree} and the {\em maximum degree} of $G$, i.e., the smallest and the largest degree of a vertex in $G$. The upper bounds in \eqref{eq:remoteness-in-terms-of-order} and \eqref{eq:proximity-in-terms-of-order} are attained by paths, which have minimum degree $1$. For graphs of larger minimum degree $\delta$, the following improved bounds were given in \cite{Dan2015}. \begin{equation} \label{eq:remoteness-vs-minimum-degree} \rho(G) \leq \frac{3n}{2(\delta+1)} +\frac{7}{2}, \end{equation} \begin{equation} \label{eq:proximity-vs-minimum-degree} \pi(G) \leq \frac{3n}{4(\delta+1)} +3, \end{equation} and further improvements for graphs not containing a $3$-cycle or a $4$-cycle as a subgraph were given in \cite{DanJonMaf2021}. The graphs constructed in \cite{Dan2015} to show that \eqref{eq:remoteness-vs-minimum-degree} and \eqref{eq:proximity-vs-minimum-degree} are sharp apart from an additive constant are close to regular. Hence it is natural to ask if one can find improved bounds for graphs containing a vertex of large degree. That this is indeed the case for trees was shown by Tsai, Shang and Zhang \cite{LinTsaShaZha2012} and Rissner and Burkhard \cite{RisBur2014}, who determined the trees of given order and maximum degree that maximise proximity and remoteness. In this paper we answer the above question in the affirmative and improve the bounds \eqref{eq:remoteness-vs-minimum-degree} and \eqref{eq:proximity-vs-minimum-degree} for graphs of given maximum degree.
We prove the following bounds, which are sharp apart from an additive constant: \[ \pi(G) \leq \left\{ \begin{array}{cc} \frac{3(n-\Delta)^2}{2(n-1)(\delta+1)} + \frac{13}{2} & \textrm{if $\Delta > \frac{n}{2}-1$,} \\[1mm] \frac{3n^2 - 6\Delta^2}{4(n-1)(\delta+1)} + \frac{35}{4} & \textrm{if $\Delta \leq \frac{n}{2}-1$}, \end{array} \right. \] and \[ \rho(G) \leq \frac{3(n^2 -\Delta^2)}{2(n-1)(\delta+1)} + 7, \] where $\Delta$ denotes the maximum degree of $G$. Our bounds show a certain analogy to results in \cite{AloDan2021}, where it was shown that bounds on the average distance of graphs in terms of order and minimum degree can be improved significantly for graphs with large maximum degree. This paper is organised as follows. In Section \ref{section:Proximity and remoteness of vertex-weighted graphs} we consider graphs with a weight function on the vertex set. We define the weighted distance of a vertex and prove bounds on the weighted distance. These bounds are used in Section \ref{section:Proximity and remoteness of graphs of given minimum degree and maximum degree} to prove our main results, bounds on proximity and remoteness in terms of order, minimum degree and maximum degree. Graphs that show that these bounds are sharp apart from an additive constant are constructed in Section \ref{section:A sharpness example}. \section{Proximity and remoteness of weighted graphs} \label{section:Proximity and remoteness of vertex-weighted graphs} In this section we consider graphs with a weight function on the vertex set. Taking an approach similar to that in \cite{Dan2015}, we first define the weighted distance of a vertex $v$. \begin{definition} Let $G$ be a connected graph and $c: V(G) \longrightarrow \reals^{\geq 0}$ be a nonnegative weight function on the vertices of $G$. Let $v$ be a vertex of $G$. Then the weighted distance of $v$ with respect to $c$ is defined as \[ \sigma_c(v) = \sum_{w \in V(G) -\{v\}} c(w) d(v,w). 
\] A vertex whose weighted distance is minimum among all vertices of $G$ is a $c$-median vertex, and the $c$-median of $G$ is the set of all $c$-median vertices of $G$. \end{definition} If $c(v)=1$ for every vertex $v$ of $G$, then the $c$-median is exactly the median of $G$. It is well-known (see for example \cite{Zel1968}) that in a tree $T$ of order $n$, the median vertices are exactly the vertices whose branch weight is not more than $\frac{n}{2}$, where the branch weight of a vertex $v$ in $T$ is defined as the maximum order among all components of $T-v$. A more general result for weighted trees was given by Kariv and Hakimi \cite{KarHak1979}. \begin{definition} Let $T$ be a tree and $c: V(T) \longrightarrow \reals^{\geq 0}$ be a nonnegative weight function on the vertices of $T$. Let $v$ be a vertex of $T$. Then the $c$-branch weight ${\rm bw}_c(v,T)$ of $v$ is the largest weight of a component of $T-v$, where the weight of a component is the sum of the weights of its vertices. \end{definition} If $G$ is a graph with a weight function $c$ on the vertex set, then for a set $A$ of vertices of $G$ we write $c(A)$ for $\sum_{v\in A} c(v)$. If $H$ is a subgraph of $G$, then we write $c(H)$ for $c(V(H))$. \begin{proposition}[\cite{KarHak1979}] \label{prop:median-branchweight-weighted} Let $T$ be a tree and $c: V(T) \longrightarrow \reals^{\geq 0}$ be a nonnegative weight function on the vertices of $T$. Let $v$ be a vertex of $T$. Then $v$ is a $c$-median vertex of $T$ if and only if ${\rm bw}_c(v,T) \leq \frac{c(T)}{2}$. \end{proposition} A key result on weighted graphs in \cite{Dan2015} gives a bound on the weighted distance of a $c$-median vertex of a weighted graph with given total weight but no restriction on the order, in which the weight of every vertex is at least a prescribed value $k$. 
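Proposition \ref{prop:median-branchweight-weighted} lends itself to a direct computational check: on a weighted tree, the vertices whose $c$-branch weight is at most $c(T)/2$ coincide with the minimisers of the weighted distance $\sigma_c$. The Python sketch below (the particular tree and weight function are an arbitrary small example, not taken from the paper) compares the two characterisations.

```python
from collections import deque

def branch_weights(adj, weights, v):
    """Total weight of each component of T - v (one component per neighbour of v)."""
    out = []
    for u in adj[v]:
        seen, queue, total = {v, u}, deque([u]), weights[u]
        while queue:
            x = queue.popleft()
            for y in adj[x]:
                if y not in seen:
                    seen.add(y)
                    total += weights[y]
                    queue.append(y)
        out.append(total)
    return out

def weighted_distance(adj, weights, v):
    """sigma_c(v) = sum over w of c(w) d(v, w), with distances computed by BFS."""
    dist = {v: 0}
    queue = deque([v])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    return sum(weights[w] * d for w, d in dist.items())

# A small example tree (a star K_{1,3} with one subdivided leg) and weights:
tree = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0, 4], 4: [3]}
c = {0: 1, 1: 1, 2: 1, 3: 2, 4: 3}
half = sum(c.values()) / 2
by_criterion = {v for v in tree if max(branch_weights(tree, c, v)) <= half}
best = min(weighted_distance(tree, c, v) for v in tree)
by_distance = {v for v in tree if weighted_distance(tree, c, v) == best}
```

Here the two sets agree: vertex $3$ is the unique vertex of $c$-branch weight at most $c(T)/2 = 4$, and it is also the unique minimiser of $\sigma_c$.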
In the following lemma, which can be viewed as an extension of the result in \cite{Dan2015}, the graph satisfies the additional condition that the weight of at least one vertex is not less than a prescribed large value $L$. Its proof is significantly more involved than that of the corresponding result in \cite{Dan2015}. \begin{lemma} \label{la:proximity-of-weighted} Let $G$ be a connected graph and $k,L \in \reals$ with $0 < k < L$. Let $c: V(G) \longrightarrow \reals^{\geq 0}$ be a weight function with total weight $N$. Assume that $c(u) \geq k$ for every vertex $u \in V(G)$, and that $G$ contains a vertex of weight at least $L$. Assume further that $N-L$ is an integer multiple of $k$. If $v$ is a $c$-median vertex of $G$, then \begin{equation} \label{eq:statement-of-main-lemma} \sigma_c(v,G) \leq \left\{ \begin{array}{cc} \frac{(N-L)(N-L+k)}{2k} & \textrm{if $L > \frac{N}{2}$,} \\[1mm] \frac{N^2- 2L^2}{4k} + \frac{N+L}{2} & \textrm{if $L \leq \frac{N}{2}$.} \end{array} \right. \end{equation} \end{lemma} {\bf Proof:} Assume that $N$, $k$ and $L$ are given. Let $T$ be a spanning tree of $G$ that preserves the distances from a $c$-median vertex $v$. Clearly, $v$ is a $c$-median vertex of $T$ and $\sigma_c(v,T)=\sigma_c(v,G)$, so it suffices to prove the lemma for $T$. We may assume that $T$ and $c$ are such that the weighted distance of a $c$-median vertex of $T$ is maximum among all trees and weight functions satisfying the hypothesis of the lemma. Our proof strategy is as follows. We prove a sequence of claims from which it follows that $T$ is a path in which one end-vertex has weight $L$, and all other vertices have weight $k$. Evaluating the weighted distance of a $c$-median vertex of this path then yields the inequality \eqref{eq:statement-of-main-lemma}. We assume that $v$ is a $c$-median vertex of $T$ that is also an internal vertex, if possible. Let $u_1, u_2,\ldots, u_d$ be the neighbours of $v$. For $i \in \{1,2,\ldots
,d\}$ denote the component of $T-v$ containing $u_i$ by $T_i$, and its total weight by $C_i$. We may assume that $C_1 \geq C_2 \geq \ldots \geq C_d$. It follows from Proposition \ref{prop:median-branchweight-weighted} that $C_1 = {\rm bw}_c(v,T) \leq \frac{N}{2}$. \\[1mm] {\sc Claim 1:} $C_i + C_j > \frac{N}{2}$ for all distinct $i,j \in \{1,2,\ldots,d\}$. \\ Suppose to the contrary that $C_i + C_j \leq \frac{N}{2}$ for some distinct $i,j \in \{1,2, \ldots, d\}$. We consider the tree $H =T-vu_i + u_iu_j$ with the same weight function $c$. Clearly, the weights of the vertices of $H$ satisfy the hypothesis of the lemma. Furthermore, $v$ is also a $c$-median vertex of $H$ since the new branch has weight $C_i + C_j \leq \frac{N}{2}$, which implies that the $c$-branch weight of $v$ in $H$ is still at most $\frac{N}{2}$. Since the distance between the vertices of $T_i$ and $v$ has increased by $1$, we have \[ \sigma_c(v,H) = \sigma_c(v,T) + \sum_{x \in V(T_i)} c(x) = \sigma_c(v,T) + C_i > \sigma_c(v,T), \] contradicting our choice of $T$ and $v$. This proves Claim 1. \\[1mm] {\sc Claim 2:} $T-v$ has at most $3$ components, i.e., $d \leq 3$. \\[1mm] Suppose to the contrary that $T-v$ has at least $4$ components. Since by Claim 1 the total weight of any two of these is greater than $\frac{N}{2}$, the total weight of these four components would exceed $N$. This contradiction proves Claim 2. \\[1mm] {\sc Claim 3:} $T_i$ is a path and $v$ is adjacent to one of the end-vertices of $T_i$ for each $i \in \{1,2,\ldots,d\}$. \\[1mm] It suffices to prove that no vertex $w$ of $T_i$ has two neighbours that are farther by $1$ from $v$ than $w$. Suppose to the contrary that some vertex $w$ of $T_i$ has two neighbours, say, $x_1$ and $x_2$ with $d(v,x_1) = d(v,x_2) = d(v,w)+1$. Consider the tree $H = T- wx_1 +x_1x_2$ with the same weight function $c$. The distance between $x_1$ and $v$ has increased by $1$, and no distance between $v$ and another vertex has decreased. 
Furthermore, $v$ is a $c$-median vertex of $H$ since its $c$-branch weight has not changed and is thus still at most $\frac{N}{2}$. Hence $\sigma_c(v,H) > \sigma_c(v,T)$. This contradiction to our choice of $T$ proves Claim 3. \\[-3mm] For the remainder of the proof we use the following notation. For $i \in \{1,2,\ldots,d\}$ denote the end-vertex of $T_i$ that is farthest from $v$ by $z_i$. Let $y$ be a vertex of maximum weight in $T$, so $c(y) \geq L > k$. If $v$ is among the vertices of maximum weight, then choose $y = v$.\\[1mm] {\sc Claim 4:} Let $w$ be an internal vertex of $T$. Then $c(w) = k$, unless $w = v = y$ and $c(w) = L$. \\ Assume that $T$ contains an internal vertex $w$ with $c(w) > k$. It suffices to show that $w = v$, $w = y$ and $c(y)=L$. First suppose that $w \neq v$. Then $w \in V(T_i)$ for some $i \in \{1,2, \ldots, d\}$. We obtain a weight function $c'$ from $c$ by reducing the weight of vertex $w$ to $k$ and adding the difference $c(w)-k$ to the weight of $z_i$. Then $v$ is also a $c'$-median vertex since its $c'$-branch weight equals its $c$-branch weight, which is at most $\frac{N}{2}$. The conditions on the weight function are satisfied by $c'$. Indeed, every vertex has weight at least $k$, and either $w \neq y$, in which case $c'(y) \geq L$, or $w=y$, in which case $c'(z_i) = c(z_i)+c(y)-k \geq k+L-k =L$, so there exists a vertex of weight at least $L$. Clearly, we have \[ \sigma_{c'}(v,T) = \sigma_c(v,T) + d_T(w,z_i) \big(c(w)-k\big) > \sigma_c(v,T), \] contradicting our choice of $T$ and $c$. This proves that $w=v$. Now suppose that $w \neq y$ or that $w = y$ and $c(w) > L$. Since $v = w$ and $w$ is an internal vertex, $T-v$ has at least two components. Note that $C_1 + C_2 \leq N-c(v) < N-k$, thus we get that $C_2 \leq \frac{N}{2} - \frac{k}{2}$. 
We now obtain the weight function $c'$ by reducing the weight of $w$ by $\min\{ \frac{k}{2}, c(w)-k\}$ if $w \neq y$, and by $\min\{ \frac{k}{2}, c(w)-L\}$ if $w = y$ and increasing the weight of $z_2$ by the same amount. Then $v$ is also a $c'$-median vertex, $c'$ satisfies the hypothesis of the lemma, but $\sigma_{c'}(v,T) > \sigma_c(v,T)$, a contradiction to the choice of $T$ and $c$. We conclude that $w = y$ and $c(y)=L$, so Claim 4 follows. \\[1mm] {\sc Claim 5:} The lemma holds if $L > \frac{N}{2}$. \\ Assume that $L > \frac{N}{2}$. Then $y$ is a $c$-median vertex since ${\rm bw}_c(y,T) \leq N- c(y) \leq \frac{N}{2}$. It is easy to see that $y$ is the only $c$-median vertex of $T$, so $y=v$. Now $v$ is an end-vertex of $T$. Indeed, if $v$ is an internal vertex, then $T-v$ has at least two components, and by Claim 1 their combined weight is more than $\frac{N}{2}$. Since $L>\frac{N}{2}$, this would imply that the total weight of $T$ is greater than $N$, a contradiction. Thus $v$ is an end-vertex. By Claim 3 and Claim 4 it follows that $T$ is a path with all internal vertices having weight $k$. We now show that $c(v)=L$. Indeed, if $c(v)>L$, then we obtain a weight function $c'$ from $c$ by reducing the weight of $v$ to $L$ and adding the excess weight $c(v)-L$ to $z_1$. Then, as above, $c'$ satisfies the hypothesis of the lemma and $v$ is a $c'$-median vertex of $T$, and we have \[ \sigma_{c'}(v,T) = \sigma_{c}(v,T) + d(v,z_1)(c(v)-L) > \sigma_{c}(v,T), \] a contradiction to the maximality of $\sigma_c(v,T)$. We now show that $c(z_1)=k$. Suppose to the contrary that $c(z_1)>k$. Since $N-L$ is an integer multiple of $k$ and $c(v)=L$, it follows that $c(z_1)$ is an integer multiple of $k$, which implies that $c(z_1) \geq 2k$.
Extending the path $T$ by adding a new vertex $z_1'$ adjacent to $z_1$, and moving $c(z_1)-k$ weight units from $z_1$ to $z_1'$ yields a tree $H$ with a weight function $c'$ that satisfies the hypothesis of the lemma. Then $v$ is a $c'$-median vertex of $H$ and $\sigma_{c'}(v,H)> \sigma_c(v,T)$, a contradiction. We have shown that $T$ is a path, the $c$-median vertex $v$ has weight $L$ and is an end-vertex of $T$, and all other vertices have weight $k$. So there are $\frac{N-L}{k}$ vertices of weight $k$ at distance $1,2,\ldots, \frac{N-L}{k}$ from $v$. Hence, \[ \sigma_{c}(v,T) = k\big(1 + 2 + \cdots + \frac{N-L}{k} \big) = \frac{(N-L)(N-L+k)}{2k}, \] which proves \eqref{eq:statement-of-main-lemma} for the case $L>\frac{N}{2}$. \\[1mm] {\sc Claim 6:} If $L \leq \frac{N}{2}$, then $v$ is an internal vertex of $T$, $T$ is a path, and $y$ is an end-vertex of $T$. \\ We first prove that $v$ is an internal vertex of $T$. Suppose to the contrary that $v$ is an end-vertex. Then $T-v$ has only one component, and ${\rm bw}_c(v,T) = N- c(v)$. Since ${\rm bw}_c(v,T) \leq \frac{N}{2}$ it follows that $c(v) \geq \frac{N}{2}$. If this inequality is strict, i.e., if $c(v) > \frac{N}{2}$, then transferring $c(v)-\frac{N}{2}$ weight units from $v$ to its neighbour $u_1$ yields a weight function $c'$ which satisfies the hypothesis of the lemma and for which $v$ is a $c'$-median vertex. But $\sigma_{c'}(v,T) > \sigma_c(v,T)$, a contradiction to the maximality of $\sigma_c(v,T)$. Hence $c(v) = \frac{N}{2}$. Now consider vertex $u_1$. Clearly, the component of $T-u_1$ containing only $v$ has the maximum weight among all components of $T-u_1$, so ${\rm bw}_c(u_1,T) = \frac{N}{2}$, hence $u_1$ is also a $c$-median vertex. Hence $T$ has an internal vertex that is a $c$-median vertex. This contradicts the choice of $v$ as a $c$-median vertex that is also internal, if possible.
It follows that our initial assumption that $v$ is an end-vertex is false, and so $v$ is an internal vertex of $T$. We now show that $T$ is a path. Suppose not. Since by Claim 3 each component of $T-v$ is a path where $v$ is adjacent to one of its ends, it follows that $T-v$ has at least three components. Since $T-v$ has at most three components by Claim 2, it follows that $T-v$ has exactly three components. There are at least two components of $T-v$, $T_{i'}$ and $T_{i''}$ say, that do not contain $y$. Denote the third component by $T_i$. Recall that $z_{i'}$ and $z_{i''}$ are the end-vertices of $T$ in $T_{i'}$ and $T_{i''}$, respectively. We may assume that $d_T(v,z_{i'}) \geq d_T(v,z_{i''})$. Note that $c(z_{i'}) < 2k$ (and similarly, $c(z_{i''}) < 2k$) since otherwise, if $c(z_{i'})\geq 2k$, then we obtain a new graph by adding a new vertex of weight $k$, joining it to $z_{i'}$ and reducing the weight of $z_{i'}$ by $k$; this increases the weighted distance of $v$ and thus contradicts our choice of $T$ and $c$. We now bound $C_{i'}$. Clearly, $C_{i'} = N - C_i - C_{i''}-c(v)$. Since $C_{i} + C_{i''} > \frac{N}{2}$ by Claim 1, and $c(v) \geq k$, it follows that $C_{i'} < \frac{N}{2} -k$. We may assume that $c(z_{i''})=k$. Indeed, if $c(z_{i''}) > k$, then let $r=c(z_{i''})-k$. As shown above, we have $r < k$. Consider the weight function $c'$ obtained from $c$ by shifting the extra weight $r$ from $z_{i''}$ to $z_{i'}$. This does not increase the total weight of $T_{i'}$ beyond $\frac{N}{2}$, so $v$ is also a $c'$-median vertex of $T$, and the weighted distance of $v$ has not decreased since $d_T(v,z_{i'}) \geq d_T(v,z_{i''})$. Hence, from now on we may assume that $c(z_{i''})=k$. Let $x$ be the neighbour of $z_{i''}$ in $T$. Consider the tree $H=T-z_{i''}x + z_{i''} z_{i'}$, so vertex $z_{i''}$ is transferred from $T_{i''}$ to $T_{i'}$.
Since $C_{i'} < \frac{N}{2}-k$, vertex $v$ has $c$-branch weight at most $\frac{N}{2}$ in $H$ and is thus a $c$-median vertex of $H$. Moreover, \[ \sigma_c(v,H) = \sigma_c(v,T) + k(d_T(v,z_{i'}) + 1 - d_T(v,z_{i''})) > \sigma_c(v,T), \] a contradiction to the choice of $T$. Hence $T$ is a path. Now we complete the proof of Claim 6 by showing that $y$ is an end-vertex of $T$. It suffices to show that $y \neq v$ since then by Claim 4 all internal vertices of $T$ have weight $k$, so $y$ is an end-vertex. Suppose to the contrary that $y=v$. Since $v$ is an internal vertex, it follows from Claim 4 that $c(v)=L$. Let $c'$ be the weight function obtained from $c$ by moving $L-k$ weight units from $v$ to $u_2$, i.e., let $c'(v)=k$, $c'(u_2) = c(u_2) + L-k$ and $c'(x) = c(x)$ for all $x \in V(T)-\{ v,u_2\}$. Clearly, $c'$ satisfies the hypothesis of the lemma. We have either $C_1 > \frac{N}{2}-k$ or $C_1 \leq \frac{N}{2}-k$. If $C_1 > \frac{N}{2}-k$, then $v$ is also a $c'$-median vertex of $T$. Indeed, the weights with respect to $c'$ of the two branches of $T-v$ are $C_1$ and $N-C_1-k < \frac{N}{2}$. Clearly, $\sigma_{c'}(v,T) = \sigma_c(v,T)+L-k > \sigma_c(v,T)$, a contradiction to the maximality of $\sigma_c(v,T)$. If $C_1 \leq \frac{N}{2}-k$, then $u_2$ is a $c'$-median vertex of $T$. Indeed, the total weights with respect to $c'$ of the two branches of $T-u_2$ are $C_1+k$ and $C_2 - c(u_2)$, and both terms are clearly not more than $\frac{N}{2}$. Clearly, $\sigma_{c'}(u_2,T) = \sigma_{c}(v,T) + C_1 + k - C_2 > \sigma_c(v,T)$, again a contradiction to the maximality of $\sigma_c(v,T)$. Hence Claim 6 holds. \\[1mm] {\sc Claim 7:} If $L \leq \frac{N}{2}$, then the lemma holds. \\ Assume that $L \leq \frac{N}{2}$. Then by Claim 6, we have that $T$ is a path, $v$ is an internal vertex, and $y$ is one of the two end-vertices of $T$. Denote the other end-vertex of $T$ by $x$. 
Moreover, denote the components of $T-v$ that contain the vertices $x$ and $y$ by $T_x$ and $T_y$, respectively. Note that $c(w)=k$ for all $w \in V(T)-\{x,y\}$ by Claim 4. Let $C_x$ and $C_y$ be the total weights of the components of $T-v$ that contain the vertices $x$ and $y$, respectively. Let $r_x=c(x)-k$ and $r_y=c(y)-L$. We prove that \begin{equation} \label{eq:rx+ry} 0 \leq r_x, r_y <k \quad \textrm{and} \quad r_x + r_y \in \{0,k\}. \end{equation} We first show that $0 \leq r_x < k$. Clearly, $0 \leq r_x$. If $r_x \geq k$, then we obtain a contradiction by adding a new vertex $x'$ and joining it to $x$ to obtain a new tree $H$, and defining a new weight function $c'$ with $c'(x)=k$, $c'(x')=r_x$, while the remaining vertices keep the same weight as under $c$. Then $v$ is a $c'$-median vertex of $H$ but $\sigma_{c'}(v,H) > \sigma_c(v,T)$, a contradiction. Similarly we show that $0 \leq r_y < k$. Since $N-L$ is an integer multiple of $k$, and since all vertices except possibly $x$ and $y$ have weight $k$, it follows that $r_x+r_y$ is also an integer multiple of $k$. From $0 \leq r_x, r_y <k$ we conclude that $r_x+r_y$ equals either $0$ or $k$. This proves \eqref{eq:rx+ry}. We now consider three cases, depending on the values of $c(y)$ and ${\rm bw}_c(v,T)$. \\[1mm] {\sc Case 1:} $c(y)=L$. \\ Since $c(y)=L$ we have $r_y=0$ and thus $c(x)=k$ by \eqref{eq:rx+ry}. Hence $T$ is a path $v_0, v_1,\ldots,v_{(N-L)/k}$, where $v_0$ has weight $L$ and the other vertices have weight $k$. In order to determine a $c$-median vertex of $T$, let $a \in \{1,2,\ldots, \frac{N-L}{k}\}$ be the largest value for which the component of $T-v_a$ containing $v_0$ has a total weight of not more than $\frac{N}{2}$, so $a = \lfloor \frac{N/2 - L + k}{k} \rfloor$. Then ${\rm bw}_c(v_a,T) \leq \frac{N}{2}$, and so $v_a$ is a $c$-median vertex of $T$.
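The choice of $a$ in Case 1 can be verified mechanically on the path $v_0,v_1,\ldots,v_{(N-L)/k}$ (illustrative triples with $L \leq \frac{N}{2}$):

```python
import math

# For the Case 1 path (c(v_0) = L, all other vertices of weight k), check that
# a = floor((N/2 - L + k)/k) gives branch weight at most N/2 on both sides.
for (N, L, k) in [(20, 8, 2), (24, 6, 3), (18, 9, 1)]:
    assert (N - L) % k == 0 and L <= N / 2
    m = (N - L) // k                      # index of the last vertex of the path
    a = math.floor((N / 2 - L + k) / k)
    left = L + (a - 1) * k                # weight of the component containing v_0
    right = (m - a) * k                   # weight of the other component of T - v_a
    assert 1 <= a <= m and left <= N / 2 and right <= N / 2
```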
Apart from $v_a$ and $y$, $T$ has $a-1$ vertices of weight $k$ at distance $1,2,\ldots,a-1$ from $v_a$ in the component of $T-v_a$ containing $v_0$, and $\frac{N-L - ak}{k}$ vertices of weight $k$ at distance $1,2,\ldots, \frac{N-L - ak}{k}$ from $v_a$ in the other component of $T-v_a$. Thus, \begin{eqnarray*} \sigma_{c}(v_a,T) & = & aL + k\big( 1 + 2+ \ldots + (a-1)\big) + k \Big( 1 + 2 + \ldots + \frac{N-L-ak}{k} \Big) \\ & = & aL + \frac{ka(a-1)}{2} + \frac{(N-L-ak)(N-L-ak+k)}{2k}. \end{eqnarray*} Now $\frac{N-2L}{2k} < a \leq \frac{N-2L+2k}{2k}$ and so $N-L-ak < \frac{N}{2}$. Hence \begin{eqnarray*} \sigma_{c}(v_a,T) & < & \frac{N-2L+2k}{2k}L + \frac{k}{2} \frac{N-2L+2k}{2k}\frac{N-2L}{2k} + \frac{1}{2k} \frac{N}{2} \Big( \frac{N}{2}+k\Big) \\ & = & \frac{N^2-2L^2}{4k} + \frac{N+L}{2}, \end{eqnarray*} and \eqref{eq:statement-of-main-lemma} holds in this case. \\[1mm] {\sc Case 2:} $c(y) > L$ and ${\rm bw}_c(v,T) = \frac{N}{2}$. \\ Since ${\rm bw}_c(v,T) = \frac{N}{2}$, there exists a component of $T-v$ with total weight $\frac{N}{2}$. Let $v'$ be the neighbour of $v$ in this component. Then $T-vv'$ has two components, each of total weight $\frac{N}{2}$. It follows that ${\rm bw}_c(v',T) = \frac{N}{2}$, and so $v'$ is also a $c$-median vertex. We assume that $x$ is in the same component of $T-vv'$ as $v$, and $y$ is in the same component as $v'$ (otherwise the proof is analogous). The weights of the two components of $T-vv'$ are $k d_T(v,x) + c(x)$ and $k d_T(v',y) + c(y)$. Since these weights are equal, and since $c(y) \geq c(x)$, we have $k d_T(v,x) \geq k d_T(v',y)$ and thus $d_T(v,x) \geq d_T(v',y)$. Since $v$ and $v'$ are adjacent, this implies that \begin{equation} \label{eq:d(v,y) <= d(v,x)+1} d_T(v,y) \leq d_T(v,x)+1. \end{equation} Now we obtain a new tree $H$ by adding a new vertex $x'$ and joining it to $x$. We define a new weight function $c'$ by letting $c'(y)=L$, $c'(x)=k$, $c'(x')=r_x+r_y$, and as in $c$, all other vertices have weight $k$. 
By \eqref{eq:rx+ry} and $r_y>0$ we have that $c'(x')=k$. Note that $v$ is a $c'$-median vertex. Indeed, the components of $T-v$ containing $y$ and $x$ have (with respect to $c$) total weight $\frac{N}{2}$ and $\frac{N}{2}-k$, respectively, so the components of $H-v$ containing $y$ and $x$ have (with respect to $c'$) total weight $\frac{N}{2}-r_y$ and $\frac{N}{2}-k+r_y$, respectively, and both are less than $\frac{N}{2}$. From \eqref{eq:d(v,y) <= d(v,x)+1} we obtain that \[ \sigma_{c'}(v,H) = \sigma_c(v,T) + r_x + \big(d_T(v,x)+1 - d_T(v,y)\big)r_y > \sigma_c(v,T), \] contradicting our choice of $v$, $T$ and $c$. Hence Case 2 cannot occur. \\[1mm] {\sc Case 3:} $c(y) > L$ and ${\rm bw}_c(v,T) < \frac{N}{2}$. \\ We first show that $d_T(v,x)=d_T(v,y)$. Suppose not. Then we have either $d_T(v,x)<d_T(v,y)$ or $d_T(v,x)>d_T(v,y)$. First assume that $d_T(v,x) < d_T(v,y)$. Define $\varepsilon := \min\{\frac{N}{2} - {\rm bw}_c(v,T), c(y)-L, c(x)-k\}$. Since ${\rm bw}_c(v,T) < \frac{N}{2}$, since $c(y)>L$, and since $0<r_y<k$ forces $r_x=k-r_y>0$ by \eqref{eq:rx+ry}, we have that $\varepsilon >0$. We define a new weight function $c'$ by moving $\varepsilon$ weight units from $x$ to $y$. Clearly, $c'$ satisfies the hypothesis of the lemma and $v$ is a $c'$-median vertex of $T$. Then $\sigma_{c'}(v,T) > \sigma_c(v,T)$, a contradiction. We obtain a similar contradiction if $d_T(v,x)>d_T(v,y)$. This proves that $d_T(v,x)=d_T(v,y)$. If $C_x \leq \frac{N}{2}-k$, then we obtain a new tree $H$ by adding a new vertex $x'$ and joining it to $x$, and a new weight function $c'$ with $c'(x')=r_x+r_y=k$, $c'(y)=L$ and all remaining vertices have weight $k$. Clearly, $v$ is a $c'$-median vertex since its $c'$-branch weight does not exceed $\frac{N}{2}$. Also, $\sigma_{c'}(v,H) > \sigma_c(v,T)$, a contradiction. If $\frac{N}{2}-k < C_x < \frac{N}{2}$, then either $C_x \leq \frac{N}{2}-r_y$ or $C_x > \frac{N}{2}-r_y$.
If $C_x \leq \frac{N}{2}-r_y$, then we consider the weight function $c'$ obtained from $c$ by decreasing the weight of $y$ by $r_y$ and increasing the weight of $x$ by $r_y$. Now $d_T(v,x) = d_T(v,y)$ implies that $\sigma_{c'}(v,T)=\sigma_c(v,T)$. Since $c'(y)=L$, we can now apply Case 1 to $c'$. If $C_x > \frac{N}{2}-r_y$, then let $\varepsilon := \frac{N}{2}- C_x$. Consider the weight function $c'$ obtained from $c$ by decreasing the weight of $y$ by $\varepsilon$ and increasing the weight of $x$ by $\varepsilon$. As above we have $\sigma_{c'}(v,T)=\sigma_c(v,T)$. Since ${\rm bw}_{c'}(v,T)= \frac{N}{2}$, we can now apply Case 2 to $c'$. In both cases, Claim 7 follows. This completes the proof of Lemma \ref{la:proximity-of-weighted}. \hfill $\Box$ \\ \begin{lemma} \label{la:remoteness-of-weighted} Let $G$ be a connected graph and $k,L \in \reals$ with $0 < k < L$. Let $c: V(G) \longrightarrow \reals^{\geq 0}$ be a weight function with total weight $N$. Assume that $c(u) \geq k$ for every vertex $u \in V(G)$, and that $G$ contains a vertex of weight at least $L$. Assume further that $N-L$ is an integer multiple of $k$. If $v$ is a vertex of $G$, then \begin{equation} \label{eq:statement-of-main-lemma-remoteness} \sigma_c(v,G) \leq \frac{(N-L)(N+L-k)}{2k}. \end{equation} \end{lemma} {\bf Proof:} We give only an outline of the proof of Lemma \ref{la:remoteness-of-weighted} since it uses arguments very similar to those in the proof of Lemma \ref{la:proximity-of-weighted}, but is much less elaborate. Let $N,k,L$ be as in the hypothesis of the lemma. Let $G$, $v$ and $c$ be such that $\sigma_c(v,G)$ is maximised among all graphs, vertices and weight functions satisfying the hypothesis of the lemma. We may assume that $G$ is a tree. Then $v$ is an end-vertex of $G$ since otherwise, if $u_1$ and $u_2$ are two neighbours of $v$, the graph $H=G-vu_2+u_1u_2$ satisfies $\sigma_c(v,H) > \sigma_c(v,G)$, a contradiction. A similar argument shows that $G$ is a path.
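The extremal configuration behind Lemma \ref{la:remoteness-of-weighted} can again be checked numerically (a Python sketch with illustrative parameters):

```python
# A path with v at one end, (N-L)/k - 1 internal vertices of weight k, and
# weight L at the far end; its weighted distance matches the bound of the lemma.
def sigma_remote_path(N, L, k):
    m = (N - L) // k                      # distance from v to the heavy end
    return sum(k * i for i in range(1, m)) + m * L

for (N, L, k) in [(20, 12, 2), (15, 9, 3), (14, 8, 2)]:
    assert (N - L) % k == 0
    assert sigma_remote_path(N, L, k) == (N - L) * (N + L - k) // (2 * k)
```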
As in the proof of Lemma \ref{la:proximity-of-weighted} we show that $v$ and all internal vertices of the path $G$ have weight equal to $k$, and that the other end-vertex of $G$ has weight $L$. Thus $G$ has $\frac{N-L}{k}-1$ vertices at distance $1,2,\ldots,\frac{N-L-k}{k}$ from $v$, all of weight $k$, and one vertex at distance $\frac{N-L}{k}$ from $v$ which has weight $L$. Hence \[ \sigma_c(v,G) = k\Big(1+2+\cdots +\frac{N-L-k}{k}\Big) + \frac{N-L}{k}L = \frac{(N-L)(N+L-k)}{2k}, \] which proves Lemma \ref{la:remoteness-of-weighted}. \hfill $\Box$ \section{Proximity and remoteness of graphs of given minimum degree and maximum degree} \label{section:Proximity and remoteness of graphs of given minimum degree and maximum degree} In this section we present our main results. We give a bound on the proximity of a graph in terms of order, minimum degree and maximum degree, and we construct graphs to show that our bound is sharp apart from an additive constant. If $v$ is a vertex of $G$, then by $S(v)$ we mean the subgraph with vertex set $N_G(v)\cup \{v\}$ in which $v$ is adjacent to each vertex of $N_G(v)$, and no other edges are present. We define the distance between a vertex $v$ and a set $B \subseteq V(G)$ as $\min_{u \in B} d(v,u)$. \begin{theorem} \label{theo:proximity-in-terms-of-mindegree-maxdegree} Let $G$ be a connected graph of order $n$, minimum degree $\delta$ and maximum degree $\Delta$. \\ If $\Delta > \frac{n}{2}-1$, then \[ \pi(G) \leq \frac{3(n-\Delta)^2}{2(n-1)(\delta+1)} + \frac{13}{2}. \] If $\Delta \leq \frac{n}{2}-1$, then \[ \pi(G) \leq \frac{3(n^2 - 2\Delta^2)}{4(n-1)(\delta+1)} + \frac{35}{4}. \] \end{theorem} {\bf Proof:} We first construct a spanning tree $T$ of $G$ as follows. Let $b_0$ be a vertex of $G$ of maximum degree. Let $B_0 = \{b_0\}$ and $T_0 = S(b_0)$. Let $b_1$ be a vertex at distance exactly $3$ from $B_0$, if one exists. Then there exists an edge $e_1$ joining some vertex of $T_0$ to some vertex of $S(b_1)$. 
Let $T_1$ be the tree obtained from $T_0 \cup S(b_1)$ by adding the edge $e_1$ and let $B_1 = B_0 \cup \{b_1\}$. Let $b_2$ be a vertex at distance exactly $3$ from $B_1$, if one exists. Then there exists an edge $e_2$ joining some vertex of $T_1$ to some vertex of $S(b_2)$. Let $T_2$ be the tree obtained from $T_1 \cup S(b_2)$ by adding the edge $e_2$ and let $B_2 = B_1 \cup \{b_2\}$. Generally, for given $B_j$ and $T_j$, choose a vertex $b_{j+1}$ at distance exactly $3$ from $B_j$, if one exists, let $e_{j+1}$ be an edge joining some vertex of $T_j$ to some vertex of $S(b_{j+1})$, let $T_{j+1}$ be the tree obtained from $T_j \cup S(b_{j+1})$ by adding the edge $e_{j+1}$ and let $B_{j+1} = B_j \cup \{b_{j+1}\}$. Repeat this procedure, say, for $r$ steps, until each vertex of $G$ is at distance at most $2$ from some vertex in $B_r$. Let $T'=T_r$ and $B = B_r =\{b_0, b_1,\ldots,b_r\}$. Then all vertices of $G$ are within distance at most $2$ from $B$ in $G$, and thus adjacent to some vertex in $T'$. Now we obtain a spanning tree $T$ of $G$ by joining every vertex that is not in $T'$ to a neighbour in $T'$. Note that $T$ has the same maximum degree as $G$ since ${\rm deg}_G(b_0) = {\rm deg}_T(b_0)$. Consider $T$ as a weighted tree in which every vertex of $T$ has weight $1$. Define a new weight function on the vertices of $T$ by moving each weight to the nearest vertex in $B$. More precisely, for every vertex $v$ of $T$ let $v_B$ be a vertex in $B$ closest to $v$ in $T$. We now move the weight of $v$ to $v_B$, that is, we define the weight function $c: V(T) \longrightarrow \reals^{\geq 0}$ by \[ c(w) = | \{v \in V(T) \ | \ v_B=w\} | \] for $w\in V(T)$. Every vertex of $T$ is within distance $2$ of some vertex in $B$, hence we have $d(v,v_B) \leq 2$ for all $v\in V(T)-B$ and $d(v,v_B)=0$ for all $v\in B$.
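The greedy construction of $B$ and of the weight function $c$ can be sketched as follows; this is a simplified version that works with distances in $G$ only and omits the spanning tree $T$, and the example graph (a path on $8$ vertices) is merely illustrative.

```python
from collections import deque

def bfs_dist(adj, sources):
    # multi-source BFS distances from the vertex set `sources`
    d, q = {s: 0 for s in sources}, deque(sources)
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in d:
                d[w] = d[u] + 1
                q.append(w)
    return d

def build_B_and_c(adj):
    b0 = max(adj, key=lambda u: len(adj[u]))     # a vertex of maximum degree
    B = [b0]
    while True:                                  # add vertices at distance exactly 3 from B
        far = [u for u in adj if bfs_dist(adj, B)[u] == 3]
        if not far:
            break
        B.append(far[0])
    assert max(bfs_dist(adj, B).values()) <= 2   # every vertex within distance 2 of B
    c = {b: 0 for b in B}                        # move each unit weight to a nearest B-vertex
    for u in adj:
        c[min(B, key=lambda b: bfs_dist(adj, [b])[u])] += 1
    return B, c

# Example: the path 0-1-2-...-7.
P8 = {i: {j for j in (i - 1, i + 1) if 0 <= j <= 7} for i in range(8)}
B, c = build_B_and_c(P8)
assert sum(c.values()) == len(P8)
assert all(c[b] >= len(P8[b]) + 1 for b in B)    # c(w) >= deg(w) + 1 for w in B
```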
Hence, \begin{eqnarray*} \sigma(w,T) & = & \sum_{v \in V(T)} d_T(w,v) \\ & \leq & \sum_{v \in V(T)} (d_T(w,v_B) + d_T(v,v_B) ) \\ & \leq & \sum_{v \in V(T)} c(v) d_T(w,v) + 2(n-|B|) \\ & \leq & \sum_{v \in V(T)} c(v) d_T(w,v) + 2(n-1). \end{eqnarray*} Thus, for all $w \in V(T)$, \begin{equation} \label{eq:sigma(w)-vs-sigma_c(w)} \sigma(w,T) \leq \sigma_c(w,T) + 2(n-1). \end{equation} Since the weight of the vertices of $T$ is concentrated entirely in $B$, we have \[ \sigma_c(w,T) = \sum_{v \in B} c(v) d(w,v). \] Denote by $T^3$ the graph with vertex set $V(T)$ in which two vertices $u,v$ are adjacent if and only if $d_T(u,v)\leq 3$. Let $F$ be the subgraph of $T^3$ induced by $B$. It follows from the construction of $T$ and $B$ that for each vertex $b_i \in B$ with $i\geq 1$ there exists some vertex $b_j \in B$ with $j<i$ such that $d_T(b_i,b_j)=3$. This implies that for every $i \geq 1$ there exists a $(b_i, b_0)$-path in $F$, and so $F$ is connected. We now consider $F$ with the weight function $c$. (Note that we use the notation $c$ also for the restriction of $c$ to $B$.) Fix a $c$-median vertex $w_0$ of $F$. Since $d_T(w_0,v) \leq 3 d_F(w_0,v)$ for all $v \in B$, we have \begin{equation} \label{eq:T-vs-F} \sigma_c(w_0,T) \leq \sum_{v \in B} 3c(v) d_F(w_0,v) = 3\sigma_c(w_0,F). \end{equation} Our aim is to bound the weighted distance $\sigma_c(w_0,F)$ using Lemma \ref{la:proximity-of-weighted}. If $w \in B$, then every vertex $v \in N_G(w) \cup \{w\}$ satisfies $v_B=w$, hence we have $c(w) \geq {\rm deg}_G(w)+1$. This implies that every vertex of $B$ has weight at least $\delta+1$, and that $F$ contains a vertex, viz.\ $b_0$, of weight at least $\Delta+1$. Also, $c(B)=c(T)=n$. However, $n-(\Delta+1)$ is not necessarily a multiple of $\delta+1$, so we apply Lemma \ref{la:proximity-of-weighted} not to $c$ but to a slightly modified weight function $c'$. There exists $q \in \reals$ with $0 \leq q \leq \delta$ such that $n-(\Delta+1)+q$ is a multiple of $\delta+1$.
Let $c'$ be the weight function obtained from $c$ by increasing the weight of $w_0$ by $q$ and leaving the other weights unchanged. Then $c'(F)=n+q$, each vertex of $F$ has weight at least $\delta+1$, and $F$ contains a vertex of weight at least $\Delta+1$. Furthermore, $c'(F)-(\Delta+1)$ is a multiple of $\delta+1$. Clearly, $\sigma_c(w_0,F) \leq \sigma_{c'}(w_0,F)$ and $w_0$ is a $c'$-median vertex of $F$. For $N,L,k \in \reals^{\geq 0}$ let $f_1(N,L,k) = \frac{(N-L)(N-L+k)}{2k}$ and $f_2(N,L,k) = \frac{N^2 - 2L^2}{4k} + \frac{N+L}{2}$. By Lemma \ref{la:proximity-of-weighted} we have \[ \sigma_{c}(w_0,F) \leq \sigma_{c'}(w_0,F) \leq \left\{ \begin{array}{cc} f_1(n+q,\Delta+1, \delta+1) & \textrm{if $\Delta+1 > \frac{n+q}{2}$,} \\ f_2(n+q,\Delta+1, \delta+1) & \textrm{if $\Delta+1 \leq \frac{n+q}{2}$.} \end{array} \right. \] In order to eliminate $q$, we observe that if $\Delta+1 \leq \frac{n}{2}$, then $\Delta+1 \leq \frac{n+q}{2}$, and if $\Delta+1 \geq \frac{n+\delta+1}{2}$, then $\Delta+1 > \frac{n+q}{2}$. If $\frac{n}{2} < \Delta+1 < \frac{n+\delta+1}{2}$, then clearly $\sigma_{c'}(w_0,F) \leq \max\{ f_1(n+q,\Delta+1, \delta+1), f_2(n+q,\Delta+1, \delta+1) \}$. It is easy to verify that $f_2(N,L,k) \geq f_1(N,L,k)$ if and only if $2L - 2\sqrt{kL} \leq N \leq 2L + 2\sqrt{kL}$. For $N=n+q$, $L=\Delta+1$ and $k=\delta+1$, this condition is satisfied if $\frac{n}{2} < \Delta+1 < \frac{n+\delta+1}{2}$. Hence we have $\sigma_{c'}(w_0,F) \leq f_2(n+q, \Delta+1, \delta+1)$ if $\frac{n}{2} < \Delta+1 < \frac{n+\delta+1}{2}$. It follows that \[ \sigma_{c}(w_0,F) \leq \left\{ \begin{array}{cc} f_1(n+q,\Delta+1, \delta+1) & \textrm{if $\Delta+1 > \frac{n}{2}$,} \\ f_2(n+q,\Delta+1, \delta+1) & \textrm{if $\Delta+1 \leq \frac{n}{2}$.} \end{array} \right.
\] Since $f_1(N,L,k)$ and $f_2(N,L,k)$ are increasing in $N$, and since $n+q \leq n+\delta$, we obtain, after evaluating $f_1(n+\delta,\Delta+1, \delta+1)$ and $f_2(n+\delta,\Delta+1, \delta+1)$, \begin{equation} \label{eq:bound-on-sigma(w0)-both-cases} \sigma_{c}(w_0,F) \leq \left\{ \begin{array}{cc} \frac{(n+\delta-\Delta-1)(n+2\delta-\Delta)}{2(\delta+1)} & \textrm{if $\Delta+1 > \frac{n}{2}$,} \\ \frac{(n+\delta)^2 - 2(\Delta+1)^2}{4(\delta+1)} + \frac{n+\delta + \Delta+1}{2} & \textrm{if $\Delta+1 \leq \frac{n}{2}$.} \end{array} \right. \end{equation} {\sc Case 1:} $\Delta > \frac{n}{2}-1$. \\ Consider the right hand side of \eqref{eq:bound-on-sigma(w0)-both-cases}. We have $(n+\delta-\Delta-1)(n+2\delta-\Delta) = (n-\Delta)^2 + (n-\Delta)(3\delta-1) + 2\delta(\delta-1)$. Bounding $3\delta-1$ by $3(\delta+1)$ and $2\delta(\delta-1)$ by $3\delta(\delta+1)$ we obtain that $(n-\Delta)(3\delta-1) + 2\delta(\delta-1) \leq 3(\delta+1)(n-\Delta+\delta) \leq 3(\delta+1)(n-1)$, and so \begin{equation} \label{eq:bound-on-sigma(w0)-case1} \sigma_{c}(w_0,F) \leq \frac{(n-\Delta)^2}{2(\delta+1)} + \frac{3}{2}(n-1). \end{equation} Combining \eqref{eq:sigma(w)-vs-sigma_c(w)} for $w=w_0$, \eqref{eq:T-vs-F} and \eqref{eq:bound-on-sigma(w0)-case1} we obtain \begin{eqnarray*} \sigma(w_0,T) & \leq & \sigma_{c}(w_0,T) + 2(n-1) \\ & \leq & 3\sigma_{c}(w_0,F) + 2(n-1) \\ & \leq & \frac{3(n-\Delta)^2}{2(\delta+1)} + \frac{13}{2}(n-1). \end{eqnarray*} Division by $n-1$ now yields the bound in the theorem. \\[1mm] {\sc Case 2:} $\Delta \leq \frac{n}{2}-1$. \\ Consider the right hand side of \eqref{eq:bound-on-sigma(w0)-both-cases}. We have $(n+\delta)^2 - 2(\Delta+1)^2 = n^2-2\Delta^2 +2\delta n + \delta^2 - 4\Delta -2$. Bounding $2\delta n$ by $2(\delta+1)(n-1)$ and $\delta^2 - 4\Delta -2$ by $(\delta+1)(\delta-1)$, we obtain that $2\delta n + \delta^2 - 4\Delta -2 \leq (\delta+1)(2(n-1)+\delta-1)$. 
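The comparison of $f_1$ and $f_2$ used above to eliminate $q$ rests on the identity $f_1(N,L,k)-f_2(N,L,k)=\frac{(N-2L)^2-4kL}{4k}$, which can be sanity-checked numerically:

```python
# f1 - f2 = ((N-2L)^2 - 4kL)/(4k), so f2 >= f1 precisely when
# 2L - 2*sqrt(kL) <= N <= 2L + 2*sqrt(kL).  Illustrative values of (N, L, k).
def f1(N, L, k): return (N - L) * (N - L + k) / (2 * k)
def f2(N, L, k): return (N * N - 2 * L * L) / (4 * k) + (N + L) / 2

for (N, L, k) in [(30, 10, 4), (30, 14, 4), (25, 13, 5), (40, 11, 3)]:
    diff = f1(N, L, k) - f2(N, L, k)
    assert abs(diff - ((N - 2 * L) ** 2 - 4 * k * L) / (4 * k)) < 1e-9
```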
We thus obtain \begin{eqnarray} \sigma_{c}(w_0,F) & \leq & \frac{n^2 - 2\Delta^2}{4(\delta+1)} + \frac{2n + \Delta+\frac{3}{2}\delta- \frac{1}{2}}{2} \nonumber \\ &\leq & \frac{n^2 - 2\Delta^2}{4(\delta+1)} + \frac{9}{4}(n-1), \label{eq:bound-on-sigma(w0)-case2} \end{eqnarray} where in the last step we bounded $\Delta \leq n-1$ and $\delta \leq n-2$. Combining \eqref{eq:sigma(w)-vs-sigma_c(w)} for $w=w_0$, \eqref{eq:T-vs-F} and \eqref{eq:bound-on-sigma(w0)-case2} we obtain \begin{eqnarray*} \sigma(w_0,T) & \leq & \sigma_{c}(w_0,T) + 2(n-1) \\ & \leq & 3\sigma_{c}(w_0,F) + 2(n-1) \\ & \leq & \frac{3(n^2 - 2\Delta^2)}{4(\delta+1)} + \frac{35}{4}(n-1). \end{eqnarray*} Since $\pi(G) \leq \pi(T) \leq \frac{1}{n-1} \sigma(w_0,T)$, division by $n-1$ now yields the desired bound. \hfill $\Box$ \\ \begin{theorem} \label{theo:remoteness-in-terms-of-mindegree-maxdegree} Let $G$ be a connected graph of order $n$, minimum degree $\delta$ and maximum degree $\Delta$. Then there exists a spanning tree $T$ of $G$ with \[ \rho(T) \leq \frac{3(n^2 -\Delta^2)}{2(n-1)(\delta+1)} + 7. \] \end{theorem} {\bf Proof:} Let $B$, $T$, $F$, $c$, $q$ and $c'$ be as in the proof of Theorem \ref{theo:proximity-in-terms-of-mindegree-maxdegree}. Let $u$ be a vertex of maximum average distance in $T$, i.e., $\overline{\sigma}(u,T) = \rho(T)$. By the construction of $T$ there exists a vertex $u_B \in B$ with $d_T(u,u_B) \leq 2$. Hence \begin{equation} \label{eq:remoteness-T-1} \sigma(u,T) \leq \sigma(u_B,T) + 2(n-1). \end{equation} By \eqref{eq:sigma(w)-vs-sigma_c(w)}, and as in \eqref{eq:T-vs-F} we have \begin{equation} \label{eq:remotness-T-2} \sigma(u_B,T) \leq \sigma_c(u_B,T) + 2(n-1) \leq 3 \sigma_c(u_B,F) + 2(n-1).
\end{equation} Using the same arguments as in the proof of Theorem \ref{theo:proximity-in-terms-of-mindegree-maxdegree}, we bound $\sigma_c(u_B,F)$ with the help of the weight function $c'$, to which we apply Lemma \ref{la:remoteness-of-weighted} (with $N=n+q$, $L=\Delta+1$ and $k=\delta+1$) and use the fact that $q \leq \delta$ to obtain that \begin{eqnarray*} \sigma_c(u_B,F) & \leq & \sigma_{c'}(u_B,F) \\ & \leq & \frac{(n+q-\Delta-1)(n+q+\Delta-\delta)}{2(\delta+1)} \\ & \leq & \frac{(n+\delta-\Delta-1)(n+\Delta)}{2(\delta+1)}. \end{eqnarray*} Now $(n+\delta-\Delta-1)(n+\Delta) = (n-\Delta)(n+\Delta) + (\delta-1)(n+\Delta) < n^2-\Delta^2 + (\delta+1)(2n-2)$, and so \begin{equation} \sigma_c(u_B,F) < \frac{(n-\Delta)(n+\Delta)}{2(\delta+1)} + n-1. \label{eq:remoteness-F} \end{equation} Combining \eqref{eq:remoteness-T-1}, \eqref{eq:remotness-T-2} and \eqref{eq:remoteness-F} yields \begin{eqnarray*} \sigma(u,T) & \leq & \sigma(u_B,T) + 2(n-1) \\ & \leq & 3 \sigma_c(u_B,F) + 4(n-1) \\ & \leq & \frac{3(n^2-\Delta^2)}{2(\delta+1)} + 7(n-1). \end{eqnarray*} Since $\rho(T) = \frac{1}{n-1} \sigma(u,T)$, division by $n-1$ now yields the theorem. \hfill $\Box$ \\ Since $\rho(G) \leq \rho(T)$ for every spanning tree $T$ of a connected graph $G$, we have the following corollary. \begin{corollary} \label{coro:remoteness-in-terms-of-mindegree-maxdegree} If $G$ is a connected graph of order $n$, minimum degree $\delta$ and maximum degree $\Delta$, then \[ \rho(G) \leq \frac{3(n^2 -\Delta^2)}{2(n-1)(\delta+1)} + 7.
\] \end{corollary} \section{A sharpness example} \label{section:A sharpness example} We now construct a graph that shows that for fixed $\delta \geq 3$, and any given values of $n$ and $\Delta$ with $\delta< \Delta < n$, there are graphs of order $n$, minimum degree $\delta$ and maximum degree $\Delta$, whose proximity and remoteness are within a constant of the bounds in Theorems \ref{theo:proximity-in-terms-of-mindegree-maxdegree} and \ref{theo:remoteness-in-terms-of-mindegree-maxdegree}, respectively. If $G_1, G_2,\ldots,G_k$ are graphs, then we define the {\em sequential sum} $G_1+G_2+\cdots + G_k$ to be the graph obtained from the disjoint union of the graphs $G_1, G_2,\ldots,G_k$ by joining every vertex of $G_i$ to every vertex of $G_{i+1}$ for $i=1,2,\ldots,k-1$. By $K_n$ we mean the complete graph on $n$ vertices. Let $n, \Delta, \delta \in \naturals$ be given with $3 \leq \delta < \Delta <n$. For the following construction we assume that $n-\Delta$ is a multiple of $\delta+1$, but it is not hard to modify the construction to work without this additional assumption. Let $k := \frac{n-\Delta}{\delta+1}$. Define the graph $G_{n,\Delta, \delta}$ by \[ G_{n,\Delta,\delta} = K_{\delta} + K_1 + [K_1 + K_{\delta-1} + K_1]^{k-1} + K_1 +K_{\Delta-1}, \] where $[K_1 + K_{\delta-1} + K_1]^{k-1}$ stands for $k-1$ repetitions of the pattern $K_1 + K_{\delta-1} + K_1$. Clearly, $G_{n, \Delta, \delta}$ has order $n$, minimum degree $\delta$ and maximum degree $\Delta$. We first bound the proximity of $G_{n,\Delta,\delta}$ from below. Define $V_1$ to be the set of $\delta+1$ vertices that belong to the first (counted from the left) or second complete graph in the sequential sum, i.e., to $K_{\delta}\cup K_1$. For $i=1,2,\ldots,k-1$ let $V_{i+1}$ be the set of $\delta+1$ vertices that belong to the $i$th repetition of the pattern $K_1 + K_{\delta-1} + K_1$ in the definition of $G_{n,\Delta,\delta}$.
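The construction can be checked mechanically; the sketch below (with the illustrative parameters $\delta=3$, $\Delta=7$, $n=23$) builds the sequential sum and verifies its order, minimum degree and maximum degree.

```python
# A sketch of the sequential-sum construction G_{n,Delta,delta} using
# adjacency sets; the parameters delta = 3, Delta = 7, n = 23 are illustrative
# (chosen so that n - Delta is a multiple of delta + 1, here k = 4).
def sequential_sum(sizes):
    adj, blocks, v = {}, [], 0
    for s in sizes:
        blk = list(range(v, v + s)); v += s
        blocks.append(blk)
        for u in blk:
            adj[u] = set(blk) - {u}              # a clique inside each block
    for b1, b2 in zip(blocks, blocks[1:]):       # join consecutive blocks completely
        for u in b1:
            for w in b2:
                adj[u].add(w); adj[w].add(u)
    return adj

delta, Delta, n = 3, 7, 23
k = (n - Delta) // (delta + 1)
sizes = [delta, 1] + [1, delta - 1, 1] * (k - 1) + [1, Delta - 1]
G = sequential_sum(sizes)
degs = [len(G[u]) for u in G]
assert len(G) == n and min(degs) == delta and max(degs) == Delta
```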
Define $V_{k+1}$ to be the set of $\Delta$ vertices belonging to one of the rightmost two complete graphs, i.e., to $K_1 +K_{\Delta-1}$ in the definition of $G_{n,\Delta,\delta}$. We make use of the fact that, whenever $x$ and $y$ are two vertices with $x\in V_i$ and $y \in V_j$, then $d(x,y) \geq 3|i-j|-2$. \\[1mm] {\sc Case 1:} $\Delta \leq \frac{n}{2}$. \\ Let $a:=\lceil \frac{n/2}{\delta+1} \rceil$. Then it is easy to see (for example using the fact that both components of $G_{n,\Delta,\delta} - V_a$ contain not more than $\frac{n}{2}$ vertices), that $V_a$ contains a median vertex, $w$ say. We bound $\sigma(w,G_{n,\Delta,\delta})$ from below by taking into account only the distances to vertices not in $V_a$. \begin{eqnarray*} \sigma(w,G_{n,\Delta,\delta}) & > & \sum_{i=1}^{a-1} \sum_{v \in V_i} d(w,v) + \sum_{i=a+1}^{k} \sum_{v \in V_i} d(w,v) + \sum_{v \in V_{k+1}} d(w,v) \\ & \geq & \sum_{i=1}^{a-1} (\delta+1) [3(a-i)-2] + \sum_{i=a+1}^{k} (\delta+1) [3(i-a)-2] \\ & & + \Delta[3(k-a+1)-2] \\ & = & \frac{3}{2}(\delta+1) a (a-1) - 2(\delta+1)(a-1) + \frac{3}{2}(\delta+1) (k-a+1) (k-a) \\ & & - 2(\delta+1)(k-a) + 3\Delta (k-a+1) - 2\Delta \\ & = & \frac{3}{2}(\delta+1) [a^2 + (k-a+1)^2] + 3\Delta(k-a+1) \\ & & - \big[(\delta+1)(\frac{7}{2}k-\frac{1}{2})+2\Delta\big]. \end{eqnarray*} Using the inequalities $a=\lceil \frac{n/2}{\delta+1} \rceil \geq \frac{n/2}{\delta+1}$, $k-a+1 = k - \lceil \frac{n/2}{\delta+1} \rceil + 1 > k - \frac{n/2}{\delta+1} = \frac{n - 2\Delta}{2(\delta+1)}$, and $(\delta+1)(\frac{7}{2}k-\frac{1}{2})+2\Delta < \frac{7}{2}(n-1)$ we obtain that \begin{eqnarray*} \sigma(w,G_{n,\Delta,\delta}) & > & \frac{3}{2} (\delta+1) \big[ \frac{n^2}{4(\delta+1)^2} + \frac{(n-2\Delta)^2}{4(\delta+1)^2} \big] + 3\Delta \frac{n-2\Delta}{2(\delta+1)} - \frac{7}{2}(n-1) \\ & = & \frac{3(n^2-2\Delta^2)}{4(\delta+1)} - \frac{7}{2}(n-1). 
\end{eqnarray*} Division by $n-1$ yields that \[ \pi(G_{n,\Delta,\delta}) = \overline{\sigma}(w,G_{n,\Delta,\delta}) > \frac{3(n^2-2\Delta^2)}{4(n-1)(\delta+1)} - \frac{7}{2}, \] and so $\pi(G_{n,\Delta,\delta})$ differs from the bound in Theorem \ref{theo:proximity-in-terms-of-mindegree-maxdegree} by less than $\frac{49}{4}$. \\[1mm] {\sc Case 2:} $\Delta \geq \frac{n}{2}$. \\ Clearly, $V_{k+1}$ contains a median vertex, $w$ say. We bound $\sigma(w, G_{n,\Delta,\delta})$ from below by taking into account only the distances to vertices not in $V_{k+1}$. \begin{eqnarray*} \sigma(w, G_{n,\Delta,\delta}) & > & \sum_{i=1}^{k} \sum_{v \in V_i} d(w,v) \\ & \geq & \sum_{i=1}^{k} (\delta+1) [3(k+1-i)-2] \\ & = & \frac{3}{2}(\delta+1) k (k+1) - 2(\delta+1)k \\ & = & \frac{3}{2}(\delta+1) k^2 - \frac{1}{2} (\delta+1)k. \end{eqnarray*} Substituting $k= \frac{n-\Delta}{\delta+1}$, bounding $\frac{1}{2} (\delta+1)k = \frac{1}{2}(n-\Delta) \leq \frac{n-1}{2}$ and dividing by $n-1$ we obtain \[ \pi(G_{n,\Delta,\delta}) = \overline{\sigma}(w, G_{n,\Delta,\delta}) > \frac{3(n-\Delta)^2}{2(n-1)(\delta+1)} - \frac{1}{2}, \] and so $\pi(G_{n,\Delta,\delta})$ differs from the bound in Theorem \ref{theo:proximity-in-terms-of-mindegree-maxdegree} by less than $6 \delta+\frac{5}{2}$, which for fixed $\delta$ is a constant. \\ We now bound the remoteness of $G_{n,\Delta,\delta}$. Let $u$ be a vertex of the graph $K_{\delta}$ in the representation of $G_{n,\Delta,\delta}$ as a sequential sum. We have \begin{eqnarray*} \sigma(u, G_{n,\Delta,\delta}) & = & \sum_{v\in V_1} d(u,v) + \sum_{i=2}^{k} \sum_{v \in V_i} d(u,v) + \sum_{v\in V_{k+1}} d(u,v) \\ & = & \delta + \sum_{i=2}^{k} (\delta+1) [3i-3] + 3k\Delta -1 \\ & = & 3(\delta+1) \frac{k(k-1)}{2} + 3\Delta k + \delta-1.
\end{eqnarray*} Substituting $k = \frac{n-\Delta}{\delta+1}$ yields after simplification that \[ \sigma(u, G_{n,\Delta,\delta}) = \frac{3(n^2 -\Delta^2)}{2(\delta+1)} - \frac{3}{2}\Big(n- \Delta - \frac{2}{3}(\delta-1)\Big) > \frac{3(n^2 -\Delta^2)}{2(\delta+1)} - \frac{3}{2}(n-1). \] Division by $n-1$ yields that \[ \rho(G_{n,\Delta,\delta}) \geq \overline{\sigma}(u, G_{n,\Delta,\delta}) > \frac{3}{2} \frac{n^2 -\Delta^2}{(n-1)(\delta+1)} - \frac{3}{2}, \] and so $\rho(G_{n,\Delta,\delta})$ differs from the bound in Corollary \ref{coro:remoteness-in-terms-of-mindegree-maxdegree} by not more than $\frac{17}{2}$.
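Finally, a brute-force computation (with small illustrative parameters) confirms that the proximity and remoteness of $G_{n,\Delta,\delta}$ indeed lie below the bounds of Theorem \ref{theo:proximity-in-terms-of-mindegree-maxdegree} and Corollary \ref{coro:remoteness-in-terms-of-mindegree-maxdegree}:

```python
from collections import deque

# Brute-force check (small illustrative parameters) that proximity and
# remoteness of the construction lie below the bounds proved above.
def sequential_sum(sizes):
    adj, blocks, v = {}, [], 0
    for s in sizes:
        blk = list(range(v, v + s)); v += s
        blocks.append(blk)
        for u in blk:
            adj[u] = set(blk) - {u}              # a clique inside each block
    for b1, b2 in zip(blocks, blocks[1:]):       # join consecutive blocks completely
        for u in b1:
            for w in b2:
                adj[u].add(w); adj[w].add(u)
    return adj

def avg_dist(adj, src):
    d, q = {src: 0}, deque([src])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in d:
                d[w] = d[u] + 1; q.append(w)
    return sum(d.values()) / (len(adj) - 1)

delta, Delta, n = 3, 8, 24                       # here Delta <= n/2 - 1
k = (n - Delta) // (delta + 1)
sizes = [delta, 1] + [1, delta - 1, 1] * (k - 1) + [1, Delta - 1]
G = sequential_sum(sizes)
pi = min(avg_dist(G, u) for u in G)              # proximity of G
rho = max(avg_dist(G, u) for u in G)             # remoteness of G
bound_pi = 3 * (n * n - 2 * Delta * Delta) / (4 * (n - 1) * (delta + 1)) + 35 / 4
bound_rho = 3 * (n * n - Delta * Delta) / (2 * (n - 1) * (delta + 1)) + 7
assert pi <= bound_pi and rho <= bound_rho
```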
\section{Introduction} Assume that on the probability space $\left(\Omega, \mathscr F, \bb P\right)$ we are given a sequence of random variables $\left(X_n\right)_{n\geqslant 1}$ with values in a measurable space $\bb X.$ Let $f$ be a real function on $\bb X.$ Suppose that the random walk $S_n = f(X_1)+\cdots + f(X_n),$ $n\geqslant 1$ has zero drift. For a starting point $y\in \bb R$ denote by $\tau_y$ the time at which $\left(y+S_n\right)_{n\geqslant 1}$ first passes into the interval $(-\infty,0].$ We are interested in determining the asymptotic behaviour of the probability $\bb P (\tau_y>n)$ and of the conditional law of $ \frac{y+S_n}{\sqrt{n}}$ given the event $\left\{ \tau_y >n \right\}=\left\{ S_1>0,\dots,S_n>0 \right\}$ as $n\to +\infty.$ The case when $f$ is the identity function and $\left(X_n\right)_{n\geqslant 1}$ are i.i.d.\ in $\bb X = \bb R$ has been extensively studied in the literature. We refer to Spitzer \cite{spitzer_principles_2013}, Iglehart \cite{iglehart_functional_1974,iglehart_random_1974}, Bolthausen \cite{bolthausen_functional_1976}, Doney \cite{doney_asymptotic_1989}, Bertoin and Doney \cite{bertoin_conditioning_1994}, Borovkov \cite{borovkov_asymptotic_2004,borovkov_asymptotic_2004-1}, Caravenna \cite{caravenna_local_2005}, Vatutin and Wachtel \cite{vatutin_local_2008} to cite only a few. Recent progress has been made for random walks with independent increments in $\bb X = \bb R^d$, see Eichelbacher and K\"onig \cite{eichelsbacher_ordered_2008}, Denisov and Wachtel \cite{denisov_random_2015, denisov_conditional_2010} and Duraj \cite{duraj_random_2014}. However, to the best of our knowledge, the case of the Markov chains has been treated only in some special cases. 
Upper and lower bounds for $\bb P (\tau_y>n)$ have been obtained in Varapoulos \cite{varopoulos_potential_1999}, \cite{varopoulos_potential_2000} for Markov chains with bounded jumps and in Dembo, Ding and Gao \cite{dembo_persistence_2013} for integrated random walks based on independent increments. An approximation of $\bb P\left( \tau_y >n \right)$ by the survival probability of the Brownian motion for Markov walk under moment conditions is given in Varopoulos \cite{varopoulos_potential_2001}. Exact asymptotics are obtained in Presman \cite{presman_1967,presman_1969} in the case of sums of random variables defined on a finite Markov chain under the additional assumption that the distributions have an absolute continuous component and in Denisov and Wachtel \cite{denisov_exit_2015} for integrated random walks. The case of products of i.i.d.\ random matrices which reduces to the study of a particular Markov chain defined on a merely compact state space was considered in \cite{grama_conditioned_2016} and the case of affine walks in $\bb R$ has been treated in \cite{GLLP_affine_2016}. In this paper we give the asymptotics of the probability of the exit time $\tau_y$ and of the law of $y+S_n$ conditioned to stay positive for a Markov chain under the assumption that its transition operator has a spectral gap. In particular our results cover the case of Markov chains with compact state spaces, the affine random walks in $\bb R$ (see \cite{GLLP_affine_2016}) and $\bb R^d$ (see Gao, Guivarc'h and Le Page \cite{gao_stable_2015}). Our results apply also to the case of sums of i.i.d.\ random variables. 
To present briefly the main results of the paper denote by $\bb P_x$ and $\bb E_x$ the probability and the corresponding expectation generated by the trajectories of a Markov chain $\left(X_n\right)_{n\geqslant 1}$ with the initial state $X_0=x \in \bb X.$ Let $\mathbf Q$ be the transition operator of the Markov chain $\left(X_n, y+S_n\right)_{n\geqslant 1}$ and let $\mathbf Q_{+}$ be the restriction of $\mathbf Q$ on $\bb X \times \bb R^{+}_{*}$. We show that under appropriate assumptions, there exists a $\mathbf Q_+$-harmonic function, say $V$, which is positive on a domain $\mathscr D_+(V) \subseteq \bb X \times \bb R$ and $0$ on its complement such that, for any $(x,y) \in \mathscr D_+(V)$, \begin{equation} \label{intro-001} \bb P_x \left( \tau_y > n \right) \underset{n\to +\infty}{\sim} \frac{2V(x,y)}{\sqrt{2\pi n}\sigma} \end{equation} and \[ \bb P_x \left( \sachant{\frac{y+S_n}{ \sigma \sqrt{n}} \leqslant t }{\tau_y >n} \right) \underset{n\to+\infty}{\longrightarrow} \mathbf \Phi^+(t), \] where $\mathbf \Phi^+(t) = 1-\e^{-\frac{t^2}{2}}$ is the Rayleigh distribution function and $\sigma$ is a positive real. On the complement of $\mathscr D_+(V)$ we find that \begin{equation} \bb P_x \left( \tau_y > n \right) \leqslant c_x e^{-c n}, \label{intro-002} \end{equation} where $c_x$ depends on $x$ and $c$ is a constant. Moreover, we obtain uniform versions of \eqref{intro-001} and \eqref{intro-002}. We give an example of a Markov chain for which the bound \eqref{intro-002} is attained. This is different from the case of sums of i.i.d.\ random variables where on the complement of $\mathscr D_+(V)$ it holds $\bb P_x \left( \tau_y > n \right)=0.$ For details we refer to Section \ref{sec-not-res}. The asymptotics of the probability of the exit time $\bb P (\tau_y>n)$ for walks in $\bb R$ are usually obtained by the Wiener-Hopf factorization (see Feller \cite{feller_introduction_1971}). 
Eichelsbacher and K\"onig \cite{eichelsbacher_ordered_2008} and Denisov and Wachtel \cite{denisov_random_2015} have developed an alternative approach for obtaining the asymptotics of $\bb P (\tau_y>n)$ for random walks with independent increments in $\bb R^d.$ To study the case of Markov chains we mainly rely upon the developments made in \cite{eichelsbacher_ordered_2008}, \cite{denisov_random_2015} for the independent case and in the works of the authors \cite{grama_conditioned_2016} and \cite{GLLP_affine_2016} for two particular cases of Markov chains. We also make use of the strong approximation result for Markov chains obtained in \cite{ion_grama_rate_2014}, with explicit constants depending on the properties of the transition operator of the Markov chain and on its initial state. To carry over the approach developed in \cite{denisov_random_2015} from the independent case to the case of a Markov chain, it is necessary to refine it substantially by taking into account the dependence on the initial state $x \in \bb X$; this is one of the major difficulties of this paper. We assume that the transition operator of the Markov chain satisfies a spectral gap condition on some associated Banach space, which implies that for any function $g$ in this space we have, for any $x \in \bb X,$ \begin{equation} \label{ideedecexp} \bb E_x \left( \abs{ g \left( X_n \right)} \right) \leqslant c + e^{-cn} N(x), \end{equation} where $N(x)$ is a function carrying the dependence on the initial state $x$ (see Section \ref{sec-not-res} for details). The relation \eqref{ideedecexp} ensures that the dependence on the initial state decreases exponentially fast. In the present paper it is essentially supposed that the property \eqref{ideedecexp} extends to certain functions beyond the Banach space (see Hypothesis \ref{Momdec} in Section \ref{sec-not-res}).
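For a finite state space, the spectral gap decomposition behind \eqref{ideedecexp} can be made completely explicit. The following Python sketch is a toy illustration with an arbitrarily chosen two-state transition matrix (not taken from the paper); it exhibits the exponential loss of memory of the initial state:

```python
import numpy as np

# A two-state chain; its second eigenvalue is 1 - 0.3 - 0.4 = 0.3, so in the
# decomposition P = Pi + Q the remainder Q^n contracts like 0.3^n.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
nu = np.array([4 / 7, 3 / 7])   # stationary law: nu P = nu
g = np.array([5.0, -2.0])       # an arbitrary observable on the two states

for n in (1, 5, 10, 20):
    Eg = np.linalg.matrix_power(P, n) @ g   # vector of E_x g(X_n), both x
    # |E_x g(X_n) - nu(g)| decays like 0.3^n: exponential loss of memory of x
    print(n, np.max(np.abs(Eg - nu @ g)))
```

Here the centred part of $g$ is an eigenvector for the eigenvalue $0.3$, so the printed deviations are exactly $4\cdot 0.3^n$.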
In Section \ref{Applications} we show that the conditions we impose are verified for a stochastic recursion in $\bb R^d$ and for Markov chains with compact state spaces. The paper is organized as follows. In Section \ref{Mart Approx} we approximate the walk by an appropriate martingale and state some properties of this martingale and of the associated exit times. In Section \ref{CMWI} we prove that the martingale killed at a special exit time has a uniformly bounded expectation. This result implies in particular that the sequence $( \bb E_x ((y+S_n) \mathbbm 1_{\{ \tau_y >n \}} ) )_{n\geqslant 0}$ is bounded. Using the results of Sections \ref{Mart Approx} and \ref{CMWI}, we establish in Section \ref{Sec Harm Func} the existence of a $\mathbf Q_+$-harmonic function and prove in Section \ref{PosHaFun} that this function is not identically zero. In Section \ref{AsExTi}, we determine the asymptotic behaviour of the probability $\bb P_x ( \tau_y > n )$ and in Section \ref{AsCondMarkWalk} we prove that the conditional law of $(y+S_n)/(\sigma\sqrt{n})$ given the event $\{\tau_y > n\}$ converges to the Rayleigh distribution. We end this section by agreeing on some basic notations. For the rest of the paper the symbol $c$ denotes a positive constant depending only on the previously introduced constants. Sometimes, to stress the dependence of the constants on some parameters $\alpha,\beta,\dots$ we shall use the notations $ c_{\alpha}, c_{\alpha,\beta},\dots$. All these constants may change their values from one occurrence to another. For any real numbers $u$ and $v$, denote by $u \wedge v=\min(u,v)$ the minimum of $u$ and $v$. The indicator of an event $A$ is denoted by $\mathbbm 1_A$. For any bounded measurable function $f$ on $\bb X$, random variable $X$ in $\bb X$ and event $A$, the integral $\int_{\bb X} f(x) \bb P (X \in \dd x, A)$ means the expectation $\bb E\left( f(X); A\right)=\bb E \left(f(X) \mathbbm 1_A\right)$.
\section{Notations and results} \label{sec-not-res} On the probability space $\left(\Omega, \mathscr F, \bb P\right)$ consider a Markov chain $(X_n)_{n\geqslant 0}$ taking values in the measurable state space $(\bb X, \mathscr{X})$. For any given $x \in \bb X,$ denote by $\mathbf P (x,\cdot)$ its transition probability, to which we associate the transition operator \[ \mathbf Pg (x) = \int_{\bb X} g(x') \mathbf P (x,\dd x'), \] for any bounded measurable complex-valued function $g$ on $\bb X$. Denote by $\bb P_x$ and $\bb E_x$ the probability and the corresponding expectation generated by the finite dimensional distributions of the Markov chain $(X_n)_{n\geqslant 0}$ starting at $X_0=x.$ We remark that $\mathbf{P} g \left( x\right) = \bb E_x \left(g\left( X_1 \right)\right)$ and $\mathbf{P}^n g \left( x\right) = \bb E_x \left(g\left( X_n \right)\right)$ for any bounded measurable complex-valued function $g$, $x \in \bb X$ and $n\geqslant 1.$ Let $f$ be a real-valued function defined on the state space $\bb X$ and let $\mathscr{B}$ be a Banach space of complex-valued functions on $\bb X$ endowed with the norm $\norm{\cdot}_{\mathscr{B}}$. Let $\norm{\cdot}_{\mathscr{B} \to \mathscr{B}}$ be the operator norm on $\mathscr{B}$ and let $\mathscr{B}'=\mathscr{L}\left( \mathscr{B},\bb C \right)$ be the topological dual of $\mathscr{B}$ endowed with the norm $\norm{\varphi}_{\mathscr{B}'}=\sup_{h \in \mathscr{B}} \frac{\abs{\varphi(h)}}{\norm{h}_{\mathscr{B}}}$, for any $\varphi \in\mathscr{B}'$. Denote by $e$ the unit function on $\bb X$: $e(x) = 1$, for any $x\in \bb X$ and by $\bs \delta_x$ the Dirac measure at $x\in \bb X$: $\bs \delta_x(g) = g(x)$, for any $g \in \mathscr{B}$. Following \cite{ion_grama_rate_2014}, we assume the following hypotheses. \begin{hypothesis}[Banach space]\ \label{BASP} \begin{enumerate}[ref=\arabic*, leftmargin=*, label=\arabic*.] \item \label{BASP001} The unit function $e$ belongs to $\mathscr{B}$.
\item \label{BASP002} For any $x\in \bb X$, the Dirac measure $\bs \delta_x$ belongs to $\mathscr{B}'$. \item \label{BASP003} The Banach space $\mathscr{B}$ is included in $L^1\left( \mathbf{P}(x,\cdot) \right)$, for any $x\in \bb X$. \item \label{BASP004} There exists a constant $\ee_0 \in (0,1)$ such that for any $g\in \mathscr{B}$, the function $\e^{itf}g$ is in $\mathscr{B}$ for any $t$ satisfying $\abs{t} \leqslant \ee_0$. \end{enumerate} \end{hypothesis} Under point \ref{BASP003} of Hypothesis \ref{BASP}, $\mathbf{P}g(x)$ exists for any $g\in \mathscr{B}$ and $x\in \bb X$. \begin{hypothesis}[Spectral gap]\ \label{SPGA} \begin{enumerate}[ref=\arabic*, leftmargin=*, label=\arabic*.] \item \label{SPGA001} The map $g \mapsto \mathbf{P}g$ is a bounded operator on $\mathscr{B}$. \item \label{SPGA002} There exist constants $C_Q >0$ and $\kappa \in (0,1)$ such that \[ \mathbf{P} = \Pi+Q, \] where $\Pi$ is a one-dimensional projector and $Q$ is an operator on $\mathscr{B}$ satisfying $\Pi Q=Q\Pi=0$ and, for any $n\geqslant 1$, \[ \norm{Q^n}_{\mathscr{B}\to\mathscr{B}} \leqslant C_Q \kappa^n. \] \end{enumerate} \end{hypothesis} Since $\Pi$ is a one-dimensional projector and $e$ is an eigenvector of $\mathbf{P}$, there exists a linear form $\bs \nu \in \mathscr{B}'$ such that, for any $g \in \mathscr{B}$, \begin{equation} \Pi g = \bs \nu(g) e. \label{linear form} \end{equation} When Hypotheses \ref{BASP} and \ref{SPGA} hold, we set $\mathbf{P}_tg := \mathbf{P}\left( \e^{itf}g \right)$ for any $g \in \mathscr{B}$ and $t \in [-\ee_0,\ee_0]$. In particular $\mathbf{P}_0=\mathbf{P}$. \begin{hypothesis}[Perturbed transition operator]\ \label{PETO} \begin{enumerate}[ref=\arabic*, leftmargin=*, label=\arabic*.] \item \label{PETO001} For any $\abs{t} \leqslant \ee_0$, the map $g \mapsto \mathbf{P}_tg$ is a bounded operator on $\mathscr{B}$.
\item \label{PETO002} There exists a constant $C_{\mathbf{P}} >0$ such that, for any $n\geqslant 1$ and $\abs{t} \leqslant \ee_0$, \[ \norm{\mathbf{P}_t^n}_{\mathscr{B} \to \mathscr{B}} \leqslant C_{\mathbf{P}}. \] \end{enumerate} \end{hypothesis} To control the dependence on $x$ of the Markov chain $\left( X_n \right)_{n\geqslant 0}$, the following hypothesis is a little more demanding than the one of \cite{ion_grama_rate_2014}. \begin{hypothesis}[Local integrability]\ \label{Momdec} The Banach space $\mathscr{B}$ contains a sequence of real non-negative functions $N, N_1, N_2, \dots $ such that: \begin{enumerate}[ref=\arabic*, leftmargin=*, label=\arabic*.] \item \label{Momdec001} There exist $\alpha > 2$ and $\gamma > 0$ such that, for any $x\in \bb X$ and $n\geqslant 1$, \[ \max \left\{ \abs{f(x)}^{1+\gamma}, \norm{\bs \delta_x}_{\mathscr{B}'}, \bb E_x^{1/\alpha} \left( N\left( X_n \right)^{\alpha} \right) \right\} \leqslant c \left( 1+N(x) \right) \] and \[ N(x) \mathbbm 1_{\{ N(x) > l\}} \leqslant N_l(x), \quad \text{for any} \quad l\geqslant 1. \] \item \label{Momdec002} There exists $c > 0$ such that, for any $l\geqslant 1$, \[ \norm{N_l}_{\mathscr{B}} \leqslant c. \] \item \label{Momdec003} There exist $\beta>0$ and $c > 0$ such that, for any $l\geqslant 1$, \[ \abs{\bs \nu \left( N_l \right)} \leqslant \frac{c}{l^{1+\beta}}. \] \end{enumerate} \end{hypothesis} Under Hypotheses \ref{BASP}, \ref{SPGA} and \ref{Momdec}, we have, for any $x\in \bb X$ and $n\geqslant 0$, \begin{align} \bb E_x \left( N(X_n) \right) &= \bs \nu(N) + Q^n N(x) \nonumber\\ &\leqslant \abs{\bs \nu(N)} + \norm{Q^n}_{\mathscr{B}\to\mathscr{B}} \norm{N}_{\mathscr{B}} \norm{\bs \delta_x}_{\mathscr{B}'} \nonumber\\ &\leqslant c + \e^{-cn} N(x) \label{decexpN} \end{align} and, in the same way, for any $x\in \bb X$, $l\geqslant 1$ and $n \geqslant 0$, \begin{equation} \label{decexpNl} \bb E_x \left( N_l \left( X_n \right) \right) \leqslant \frac{c}{l^{1+\beta}} + \e^{-c n} \left( 1+N(x) \right).
\end{equation} Remark that in Hypothesis \ref{Momdec} the function $f$ need not belong to the Banach space $\mathscr B$. A consequence of Hypotheses \ref{BASP}-\ref{Momdec} is the following proposition (cf.\ \cite{ion_grama_rate_2014}). For any $x \in \bb X,$ set $\mu_\alpha(x)=\sup_{n\geqslant 1}\bb E_x^{1/\alpha} \left( \abs{f\left( X_n \right)}^{\alpha} \right)$. \begin{proposition} \label{MomAs} Assume that the Markov chain $\left(X_n\right)_{n\geqslant 0}$ and the function $f$ satisfy Hypotheses \ref{BASP}-\ref{Momdec}. \begin{enumerate}[ref=\arabic*, leftmargin=*, label=\arabic*.] \item \label{MomAs001} There exists a constant $\mu$ such that, for any $x\in \bb X$ and $n \geqslant 0$, \[ \abs{ \bb E_x \left( f(X_n) \right) - \mu } \leqslant \e^{-c n} \left( 1+\mu_{\alpha}(x)^{1+\gamma} + \norm{\bs \delta_x}_{\mathscr{B}'} \right). \] \item \label{MomAs002} There exists a constant $\sigma \geqslant 0$ such that, for any $x\in \bb X$ and $n \geqslant 1$, \[ \underset{m\geqslant 0}{\sup} \abs{ \Var_{x} \left( \sum_{k={m+1}}^{m+n} f(X_k) \right) - n \sigma^2 } \leqslant c \left( 1+\mu_{\alpha}(x)^{2+2\gamma} + \norm{\bs \delta_x}_{\mathscr{B}'} \right), \] where $\Var_x$ is the variance under $\bb P_x$. \end{enumerate} \end{proposition} We do not assume the existence of a stationary probability measure. If a stationary probability measure $\bs \nu'$ satisfying $\bs \nu' \left( N^2 \right) < +\infty$ exists then, under Hypotheses \ref{BASP}-\ref{Momdec}, it is necessarily unique, it coincides with $\bs \nu$ and it holds (see \cite{ion_grama_rate_2014}) \begin{equation} \label{mu-sigma001} \bs \nu(f) = \mu \quad\text{and}\quad \sigma^2 = \int_{\bb X} f^2(x) \bs \nu(\dd x) + 2\sum_{n=1}^{+\infty} \int_{\bb X} f(x) \mathbf{P}^n f(x) \bs \nu(\dd x). \end{equation} \begin{hypothesis}[Centring and non-degeneracy]\ \label{CECO} We suppose that the constants $\mu$ and $\sigma$ defined in Proposition \ref{MomAs} satisfy $\mu=0$ and $\sigma > 0$.
\end{hypothesis} Using this assumption and point \ref{Momdec001} of Hypothesis \ref{Momdec}, we have $\mu_{\alpha}(x) \leqslant c\left(1+N(x)^{\frac{1}{1+\gamma}} \right)$ and therefore, for any $x\in \bb X$ and $n \geqslant 0$, \begin{equation} \label{bound_EfXn} \abs{ \bb E_x \left( f(X_n) \right) } \leqslant \e^{-c n} \left( 1+ N(x) \right). \end{equation} Let $y \in \bb R$ be a starting point and $(y+S_n)_{n\geqslant 0}$ be the Markov walk defined by $S_n := \sum_{k=1}^n f\left( X_k \right)$, $n\geqslant 1$, with $S_0=0$. Denote by $\tau_y$ the first moment when $y+S_n$ becomes non-positive: \[ \tau_y := \inf \left\{ k\geqslant 1, \, y+S_k \leqslant 0 \right\}. \] It is shown in Corollary \ref{Exitfinit} that for any $y \in \bb R$ and $x\in \bb X$, the stopping time $\tau_y$ is $\bb P_x$-a.s.\ finite. The asymptotic behaviour of the probability $\bb P_x \left( \tau_y >n \right)$ is determined by a harmonic function, which we now introduce. For any $(x,y) \in \bb X \times \bb R$, denote by $\mathbf Q(x,y,\cdot)$ the transition probability of the Markov chain $(X_n,y+S_n)_{n\geqslant 0}$. The restriction of the measure $\mathbf Q(x,y,\cdot)$ to $\bb X \times \mathbb R^*_+$ is defined by \[ \mathbf{Q}_+(x,y,B) = \mathbf{Q}(x,y,B) \] for any measurable set $B$ on $\bb X \times \bb R_+^*$ and for any $(x,y) \in \bb X \times \bb R$. For any bounded measurable function $\varphi: \bb X \times \bb R \to \bb R$ set $\mathbf{Q}_+\varphi (x,y)=\int_{\bb X \times \bb R_+^*} \varphi(x',y') \mathbf{Q}_+(x,y,\dd x' \times \dd y')$. A function $V: \bb X \times \bb R \to \bb R$ is said to be $\mathbf{Q}_+$-harmonic if \[ \mathbf{Q}_+V (x,y) = V(x,y), \qquad \text{for any } (x,y) \in \bb X \times \bb R. \] In the sequel, we deal only with non-negative harmonic functions.
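For the symmetric Bernoulli walk, a degenerate case in which the chain component plays no role, the operator $\mathbf Q_+$ and a harmonic function are explicit: $V(y)=y$ satisfies $\mathbf Q_+ V = V$ at every integer $y \geqslant 1$. A quick numerical check of the definition above (purely illustrative, not part of the arguments of the paper):

```python
# Q_+ for the symmetric Bernoulli walk: the chain component is trivial and
# Q_+ V(y) = E(V(y + X_1); y + X_1 > 0) with X_1 = +-1 equally likely.
def V(y):
    return y if y > 0 else 0.0

def Q_plus_V(y):
    return 0.5 * sum(V(y + s) for s in (-1, 1) if y + s > 0)

for y in (1, 2, 3, 10):
    assert Q_plus_V(y) == V(y)   # V is Q_+-harmonic at integer y >= 1
print("V(y) = y is Q_+-harmonic for the Bernoulli walk")
```

Note the boundary effect at $y=1$: the term $V(y-1)$ is killed by the restriction to $\bb R_+^*$, and harmonicity holds precisely because $V(0)=0$.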
For any non-negative function $V$, denote by $\mathscr{D}_+(V)$ the set where $V$ is positive, \[ \mathscr{D}_+(V) := \{ (x,y) \in \bb X \times \bb R, V(x,y)> 0 \} \] and by $\mathscr{D}_+(V)^c$ its complement, \textit{i.e.}\ the set where $V$ is $0$. For any $\gamma >0$, introduce the following set \[ \mathscr{D}_{\gamma} := \left\{ (x,y) \in \bb X \times \bb R, \; \exists n_0 \geqslant 1,\; \bb P_x \left( y+S_{n_0} > \gamma \left( 1+N \left( X_{n_0} \right) \right) \,,\, \tau_y > n_0 \right) > 0 \right\}. \] The following assertion proves the existence of a non-identically zero harmonic function. \begin{theorem} \label{thonV} Assume Hypotheses \ref{BASP}-\ref{CECO}. \begin{enumerate}[ref=\arabic*, leftmargin=*, label=\arabic*.] \item \label{thonV001} For any $x\in \bb X$, $y\in \bb R$, the sequence $\left( \bb E_x \left( y+S_n \,;\, \tau_y > n \right) \right)_{n\geqslant 0}$ converges to a real number $V(x,y)$: \[ \bb E_x \left( y+S_n \,;\, \tau_y > n \right) \underset{n\to +\infty}{\longrightarrow} V(x,y). \] \item \label{thonV002} The function $V$: $\bb X \times \bb R \to \bb R$, defined in the previous point is $\mathbf{Q}_+$-harmonic, \textit{i.e.}\ for any $x\in \bb X$, $y\in \bb R,$ \[ \mathbf{Q}_+ V(x,y) = \bb E_x \left( V\left( X_1, y+S_1 \right) \,;\, \tau_y > 1 \right) = V(x,y). \] \item \label{thonV003} For any $x\in \bb X$, the function $V(x,\cdot)$ is non-negative and non-decreasing on $\bb R$ and \[ \underset{y\to+\infty}{\lim} \frac{V(x,y)}{y} = 1. \] Moreover, for any $\delta > 0$, $x \in \bb X$ and $y \in \bb R$, \[ \left( 1-\delta \right) \max(y,0) - c_{\delta} \left( 1+N(x) \right) \leqslant V(x,y) \leqslant \left( 1+\delta \right) \max(y,0) + c_{\delta} \left( 1+N(x) \right). \] \item \label{thonV004} There exists $\gamma_0>0$ such that, for any $\gamma \geqslant \gamma_0$, \[ \mathscr{D}_+(V) = \mathscr{D}_{\gamma}. 
\] \end{enumerate} \end{theorem} The following result gives the asymptotic behaviour of the exit probability for fixed $(x,y) \in \bb X \times \bb R$. \begin{theorem} \label{thontau} Assume Hypotheses \ref{BASP}-\ref{CECO}. \begin{enumerate}[ref=\arabic*, leftmargin=*, label=\arabic*.] \item \label{thontau001} For any $(x,y) \in \mathscr{D}_+(V)$, \[ \bb P_x \left( \tau_y > n \right) \underset{n\to +\infty}{\sim} \frac{2V(x,y)}{\sqrt{2\pi n} \sigma}. \] \item \label{thontau002} For any $(x,y) \in \mathscr{D}_+(V)^c$ and $n\geqslant 1$, \[ \bb P_x \left( \tau_y > n \right) \leqslant e^{-c n} \left( 1+N(x) \right). \] \end{enumerate} \end{theorem} We now complement point \ref{thontau001} of the previous theorem with some quantitative estimates. \begin{theorem} \label{thontau2} Assume Hypotheses \ref{BASP}-\ref{CECO}. \begin{enumerate}[ref=\arabic*, leftmargin=*, label=\arabic*.] \item \label{thontau001bis} There exists $\ee_0 >0$ such that, for any $\ee \in (0,\ee_0)$, $n\geqslant 1$ and $(x,y)\in \bb X\times \bb R$, \[ \abs{\bb P_x \left( \tau_y > n \right) - \frac{2V(x,y)}{\sqrt{2\pi n} \sigma}} \leqslant c_{\ee}\frac{\max(y,0) + \left( 1+ y\mathbbm 1_{\{y> n^{1/2-\ee}\}} +N(x) \right)^2}{n^{1/2+\ee/16}}. \] \item \label{thontau003} Moreover, for any $(x,y) \in \bb X \times \bb R$ and $n\geqslant 1$, \[ \bb P_x \left( \tau_y > n \right) \leqslant c\frac{ 1 + \max(y,0) + N(x) }{\sqrt{n}}. \] \end{enumerate} \end{theorem} Finally, we give the asymptotic behaviour of the conditional law of $y+S_n$. \begin{theorem} \label{loideRayleigh} Assume Hypotheses \ref{BASP}-\ref{CECO}. \begin{enumerate}[ref=\arabic*, leftmargin=*, label=\arabic*.] \item \label{AAA001} For any $(x,y) \in \mathscr{D}_+(V)$ and $t\geqslant 0$, \[ \bb P_x \left( \sachant{\frac{y+S_n}{\sigma \sqrt{n}} \leqslant t }{\tau_y >n} \right) \underset{n\to+\infty}{\longrightarrow} \mathbf \Phi^+(t), \] where $\mathbf \Phi^+(t) = 1-\e^{-\frac{t^2}{2}}$ is the Rayleigh distribution function.
\item \label{AAA002} Moreover, there exists $\ee_0 >0$ such that, for any $\ee \in (0,\ee_0)$, $n\geqslant 1$, $t_0 > 0$, $t\in [ 0, t_0 ]$ and $(x,y)\in \bb X \times \bb R$, \begin{align*} &\abs{\bb P_x \left( y+S_n \leqslant t \sqrt{n} \,,\, \tau_y > n \right) -\frac{2V(x,y)}{\sqrt{2\pi n}\sigma} \mathbf \Phi^+\left(\frac{t}{\sigma}\right)} \\ &\hspace{5cm} \leqslant c_{\ee,t_0}\frac{\max(y,0) + \left( 1+ y\mathbbm 1_{\{y> n^{1/2-\ee}\}} +N(x) \right)^2}{n^{1/2+\ee/16}}. \end{align*} \end{enumerate} \end{theorem} We now comment on Theorems \ref{thonV} and \ref{thontau}. \begin{remark} \label{Demboité} The sets $(\mathscr{D}_{\gamma})_{\gamma>0}$ are nested: for any $\gamma_1 \leqslant \gamma_2$, we have $\mathscr{D}_{\gamma_1} \supseteq \mathscr{D}_{\gamma_2}.$ Moreover, by point \ref{thonV004} of Theorem \ref{thonV}, the sets $\mathscr{D}_{\gamma}$ are equal to $\mathscr{D}_+(V)$ for all $\gamma$ large enough. \end{remark} \begin{remark} \label{dominclus} The set $\mathscr{D}_+(V)$ is not empty. More precisely, there exists $\gamma_1 > 0$ such that \[ \{ (x,y) \in \bb X \times \bb R,\; y > \gamma_1 \left( 1+N(x) \right) \} \subseteq \mathscr{D}_+(V). \] Example \ref{recstoacst} and Figure \ref{Domdepos} illustrate this property. \end{remark} \begin{remark} When $( X_n )_{n\geqslant 1}$ are i.i.d., it is well known that $\bb P_x \left( \tau_y > n \right) = 0$ for any $(x,y) \in \mathscr{D}_+(V)^c$. When the sequence $( X_n )_{n\geqslant 1}$ forms a Markov chain, instead of this property we have the bound of point \ref{thontau002} of Theorem \ref{thontau}. Moreover, there exist Markov walks for which this exponential bound is attained. This remark is developed in Example \ref{seulexemple}.
\end{remark} \begin{example}[Random walks in $\bb R$] When $( X_n )_{n\geqslant 1}$ are i.i.d.\ real random variables of mean $0$ and positive variance with finite absolute moments of order $p>2$, one can take $N=0$ and therefore \[ \mathscr{D}_{\gamma} := \left\{ y \in \bb R, \; \exists n_0 \geqslant 1, \; \bb P \left( y+S_{n_0} > \gamma \,,\, \tau_y > n_0 \right) > 0 \right\}. \] Since the walk is allowed to increase at each step with positive probability, it follows that $\bb P \left( y+S_{n_0} > \gamma \,,\, \tau_y > n_0 \right) > 0$ for some $n_0 \geqslant 1$ if and only if $\bb P \left( \tau_y > 1 \right)= \bb P \left( y+X_1 >0 \right) > 0$. Thus, $[0,+\infty) \subseteq ( -\max \supp( \bs \mu ) , +\infty ) = \mathscr{D}_{\gamma} = \mathscr{D}_+(V)$ for every $\gamma >0$, where $\bs \mu$ is the common law of $X_n$ and $\supp ( \bs \mu )$ is its support. \end{example} The following example is intended to illustrate Remark \ref{dominclus}. \begin{example} \label{recstoacst} Consider the following special case of the one-dimensional stochastic recursion: $X_{n+1} = a_{n+1} X_n + b_{n+1}$, where $(a_i)_{i\geqslant 1}$ and $(b_i)_{i\geqslant 1}$ are two independent sequences of i.i.d.\ random variables. In this example we assume that the law of $a_i$ is $\frac{1}{2} \bs \delta_{\{-1/2\}}+\frac{1}{2} \bs \delta_{\{1/2\}}$ and that of $b_i$ is uniform on $[-1,1]$. The state space $\bb X$ is $\bb R$ and the function $N$ is given by $N(x) = \abs{x}^{1+\ee}$ for some $\ee >0$ (see \cite{GLLP_affine_2016} or Section \ref{MarcheaffineRd} for a construction of an appropriate Banach space and for the proof that Hypotheses \ref{BASP}-\ref{CECO} are verified for the stochastic recursion). One can verify that the domain of positivity of the function $V$ is $\mathscr{D}_+ (V) = \{ (x,y) \in \bb R^2,\; y > -\frac{\abs{x}}{2} -1 \} = \mathscr{D}_{\gamma}$, for all $\gamma > 0$.
Obviously, $\{ (x,y) \in \bb X \times \bb R,\; y > \frac{1}{2} \left( 1+\abs{x}^{1+\ee} \right) \} \subseteq \mathscr{D}_+(V),$ see Figure \ref{Domdepos}. \end{example} \begin{figure}[ht] \begin{center} \begin{tikzpicture}[scale=0.8] \fill[black!10] (-5, 1.5) -- (0,-1) -- (5, 1.5) -- (5, 5) -- (-5, 5) -- cycle; \fill[black!40] plot [domain=-5:5] (\x,{exp(1.2*ln(abs(\x)))/2+1/2}) -- (5,5) -- (-5,5) -- cycle; \draw[thick][domain=-5:5] plot(\x,{exp(1.2*ln(abs(\x)))/2+1/2}); \draw(-2.5,4) node{$y>\frac{1}{2}(\abs{x}^{1+\ee}+1)$}; \draw(-3,1.3) node{$\mathscr{D}_+(V)$}; \draw(-3,-1) node{$\mathscr{D}_+(V)^c$}; \draw[very thick,->](-5,0)--(5,0)node[below left]{$x$}; \draw[very thick,->](0,-2.5)--(0,5)node[below left]{$y$}; \draw(0,0) node[below left]{$0$}; \draw[thick](-5,1.5)--(0,-1); \draw[thick](0,-1)--(5,1.5); \draw[thick](-5,-0.1)--(-5,0.1); \draw[thick](-4,-0.1)--(-4,0.1); \draw[thick](-3,-0.1)--(-3,0.1); \draw[thick](-2,-0.1)--(-2,0.1); \draw[thick](-1,-0.1)--(-1,0.1); \draw[thick](1,-0.1)--(1,0.1); \draw[thick](2,-0.1)--(2,0.1); \draw[thick](3,-0.1)--(3,0.1); \draw[thick](4,-0.1)--(4,0.1); \draw[thick](-0.1,-2)--(0.1,-2); \draw[thick](-0.1,-1)--(0.1,-1); \draw[thick](-0.1,1)--(0.1,1); \draw[thick](-0.1,2)--(0.1,2); \draw[thick](-0.1,3)--(0.1,3); \draw[thick](-0.1,4)--(0.1,4); \end{tikzpicture} \caption{The domain of positivity $\mathscr{D}_+(V)$ for the stochastic recursion of Example \ref{recstoacst}.\label{Domdepos}} \end{center} \end{figure} The next example is intended to show that the bound of point \ref{thontau002} of Theorem \ref{thontau} is attained. \begin{example} \label{seulexemple} Consider the Markov chain $(X_n)_{n\geqslant 0}$ living on the finite state space $\bb X :=\{-1\,;\,1\,;\,-3\,;\,7/6\}$ with the transition probabilities given in Figure \ref{graphedemarkov}.
\begin{figure}[ht] \begin{center} \begin{tikzpicture}[scale=0.8] \tikzset{ville/.style={draw,minimum width=1cm,circle,very thick,fill=black!25}, chemin/.style={very thick,->,>=latex}} \node[ville] (P-1) at (0,0) {$-1$}; \node[ville] (P1) at (4,0) {$1$}; \node[ville] (P-3) at (2,-3) {$-3$}; \node[ville] (Pa) at (8,-3) {$\frac{7}{6}$}; \coordinate (n) at (10,-3); \node at (10.5,-3) {$1/2$}; \draw[chemin] (P-1) to [bend left] node[midway,above]{$1/2$} (P1); \draw[chemin] (P-1) to [bend right] node[midway,left]{$1/2$} (P-3); \draw[chemin] (P1) to [bend left] node[midway,below]{$1/2$} (P-1); \draw[chemin] (P1) to [bend left] node[midway,right]{$1/2$} (P-3); \draw[chemin] (P-3) to [bend right] node[midway,below]{$1$} (Pa); \draw[chemin] (Pa.north east) to [out=45,in=90] (n) to [out=-90,in=-45] (Pa.south east); \draw[chemin] (Pa) to [bend right] node[midway,above]{$1/2$} (P1); \end{tikzpicture} \end{center} \caption{Transition probabilities of the Markov chain of Example \ref{seulexemple}.\label{graphedemarkov}} \end{figure} Suppose that $f$ is the identity function on $\bb X$. It is easy to see that the assumptions stated in Remark \ref{casfini} of Section \ref{Compact2} are satisfied and thereby so are Hypotheses \ref{BASP}-\ref{CECO}. Now, when $x=1$ and $y\in (1,3]$ or when $x =-1$ and $y \in (-1,2]$, one can check that the Markov walk $y+S_n$ stays positive if and only if the values of the variables $X_i$ alternate between $1$ and $-1$ and therefore, for such starting points $(x,y)$, we have $\bb P_x \left( \tau_y > n \right) = \left( \frac{1}{2} \right)^n$. This shows that, when the random variables $\left( X_n \right)_{n\geqslant 1}$ form a Markov chain, the survival probability $\bb P_x \left( \tau_y > n \right)$ has an asymptotic behaviour different from that in the independent case, where it can be either equivalent to $\frac{c_{x,y}}{\sqrt{n}}$ or equal to $0$. In this example we can make explicit the set $\mathscr{D}_+(V)$.
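The identity $\bb P_x \left( \tau_y > n \right) = \left( 1/2 \right)^n$ stated above for these starting points can be confirmed by exact dynamic programming over the finite state space. The following Python sketch (illustrative only, with the transition probabilities of Figure \ref{graphedemarkov} hard-coded) uses exact rational arithmetic:

```python
from fractions import Fraction as F

# Transition probabilities of the chain on {-1, 1, -3, 7/6}.
trans = {
    F(-1): [(F(1), F(1, 2)), (F(-3), F(1, 2))],
    F(1):  [(F(-1), F(1, 2)), (F(-3), F(1, 2))],
    F(-3): [(F(7, 6), F(1))],
    F(7, 6): [(F(7, 6), F(1, 2)), (F(1), F(1, 2))],
}

def survival(x, y, n):
    """Exact P_x(tau_y > n) with f the identity: S_k = X_1 + ... + X_k."""
    # mass over (current state, current value of y + S_k), surviving paths only
    mass = {(x, F(y)): F(1)}
    for _ in range(n):
        new = {}
        for (state, val), p in mass.items():
            for nxt, q in trans[state]:
                w = val + nxt          # y + S_{k+1}
                if w > 0:              # tau_y not yet reached
                    new[(nxt, w)] = new.get((nxt, w), F(0)) + p * q
        mass = new
    return sum(mass.values())

for n in range(1, 11):
    assert survival(F(1), 2, n) == F(1, 2) ** n
print("P_1(tau_2 > n) = (1/2)^n, exactly")
```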
Since $N=0$, we notice that the function $V$ is positive if and only if there exists an integer $n\geqslant 1$ such that $\bb P_x \left( y+S_n > \gamma \,,\, \tau_y > n \right)>0$ for a $\gamma$ large enough. This is possible only if the chain can reach the state $X_n = 7/6$ within a trajectory of $\left( y+S_k \right)_{n \geqslant k\geqslant 1}$ which stays positive, \textit{i.e.}\ $\bb P_x \left( X_n = 7/6 \,,\, \tau_y > n \right) > 0$. Consequently \begin{align*} \mathscr{D}_+(V) &= \{ -1 \} \times (2,+\infty) \cup \{ 1 \} \times (3,+\infty) \cup \{ -3,7/6 \} \times (-7/6,+\infty) \\ &= \mathscr{D}_3 = \left\{ (x,y) \in \bb X \times \bb R, \; \exists n \geqslant 1, \; \bb P_x \left( y+S_n > 3 \,,\, \tau_y > n \right) > 0 \right\}. \end{align*} To sum up, this model presents the three possible asymptotic behaviours of $\bb P_x \left( \tau_y >n \right)$: for any $(x,y) \in \mathscr{D}_+(V)= \{ -1 \} \times (2,+\infty) \cup \{ 1 \} \times (3,+\infty) \cup \{ -3,7/6 \} \times (-7/6,+\infty)$, \[ \bb P_x \left( \tau_y >n \right) \underset{n\to+\infty}{\sim} \frac{2V(x,y)}{\sqrt{2\pi n}\sigma}, \] for any $(x,y) \in \{ -1 \} \times (-1,2] \cup \{ 1 \} \times (1,3]$ and $n\geqslant 1$, \[ \bb P_x \left( \tau_y >n \right) = \left(\frac{1}{2}\right)^n, \] for any $(x,y) \in \{ -1 \} \times (-\infty,-1] \cup \{ 1 \} \times (-\infty,1] \cup \{ -3,7/6 \} \times (-\infty,-7/6]$ and $n\geqslant 1$, \[ \bb P_x \left( \tau_y >n \right) = 0. \] \end{example} \section{Applications} \label{Applications} We illustrate the results of Section \ref{sec-not-res} by considering three particular models. \subsection{Affine random walk in \texorpdfstring{$\bb R^d$}{} conditioned to stay in a half-space} \label{MarcheaffineRd} Let $d\geqslant 1$ be an integer and $( g_n )_{n\geqslant 1} = ( A_n, B_n )_{n\geqslant 1}$ be a sequence of i.i.d.\ random elements in $\GL\left( d, \bb R \right) \times \bb R^d$ following the same distribution $\bs \mu$. 
Let $( X_n )_{n \geqslant 0}$ be the Markov chain on $\bb R^d$ defined by \[ X_0 = x \in \bb R^d, \qquad \qquad X_{n+1} = A_{n+1} X_n + B_{n+1}, \quad n \geqslant 0. \] Set $S_n = \sum_{k=1}^n f\left( X_k \right)$, $n\geqslant 1,$ where the function $f(x) = \scal{u}{x}$ is the projection of the vector $x \in \bb R^d$ on the direction defined by the vector $u\in \bb R^d \smallsetminus \{0\}$. For any $y\in \bb R$, consider the first time when the random walk $\left(y+S_n\right)_{n\geqslant 1}$ becomes non-positive: \[ \tau_y = \inf \{ k \geqslant 1, \; y+S_k \leqslant 0 \}. \] This stopping time coincides with the entry time of the affine walk $\left( \sum_{k=1}^n X_k \right) _{n \geqslant 0}$ in the closed half-space $\{ s \in \bb R^d, \scal{u}{s} \leqslant -y \}$. Introduce the following hypothesis. \begin{hypothesisH}\ \label{hypoH} \begin{enumerate}[ref=\arabic*, leftmargin=*, label=\arabic*.] \item \label{H1} There exists a constant $\delta>0$ such that \[ \bb E \left( \abs{A_1}^{2+2\delta} \right) < +\infty, \quad \bb E \left( \abs{B_1}^{2+2\delta} \right) < +\infty \] and \[ k(\delta) = \underset{n\to+\infty}{\lim} \bb E^{1/n} \left( \abs{A_nA_{n-1} \dots A_1}^{2+2\delta} \right) < 1. \] \item \label{H2} There is no proper affine subspace of $\bb R^d$ which is invariant with respect to all the elements of the support of $\bs \mu$. \item \label{H3} For any vector $v_0 \in \bb R^d \smallsetminus \{0\}$, \[ \bb P \left( {}^t\!A_1^{-1} v_0 = {}^t\!A_2^{-1} v_0 \right) < 1. \] \item \label{H4} The vector $B_1$ is centred: $\bb E \left( B_1 \right) = 0$. \end{enumerate} \end{hypothesisH} \begin{proposition} \label{PP001} Under Hypothesis \ref{hypoH}, Theorems \ref{thonV}, \ref{thontau}, \ref{thontau2} and \ref{loideRayleigh} hold true for the affine random walk conditioned to stay in a half-space.
\end{proposition} Proposition \ref{PP001} is proved in Appendix \ref{proof-rec-sto-Mat}, where we construct an appropriate Banach space $\mathscr{B}$ and show that Hypotheses \ref{BASP}-\ref{CECO} are satisfied with $N(x) = \abs{x}^{1+\ee}$, for some $\ee > 0$. \begin{remark} The set $\mathscr{D}_+(V)$ depends on the law of $(A_i,B_i)$. In the case when $A_i$ are independent of $B_i$ and the support of the law of $\scal{u}{B_i}$ contains a sequence converging to $+\infty$, one can verify that $\mathscr{D}_+(V) = \bb R^d \times \bb R$. \end{remark} \subsection{Two-component Markov chains in compact sets under the Doeblin-Fortet condition} \label{Compact1} Let $(X,d_{X})$ be a compact metric space, $\mathscr{C}\left( X \right)$ and $\mathscr{L}\left( X \right)$ be the spaces of continuous, respectively Lipschitz, complex-valued functions on $X$. Define \[ \abs{h}_{\infty} = \sup_{x \in X} \abs{h(x)}, \quad \forall h \in \mathscr{C}\left( X \right) \] and \[ \left[ h \right]_{X} = \sup_{\substack{x,y \in X \\ x\neq y}} \frac{\abs{h(x)-h(y)}}{d_{X}(x,y)}, \quad \forall h \in \mathscr{L}\left( X \right). \] We endow $\mathscr{C}\left( X \right)$ with the uniform norm $\abs{\cdot}_{\infty}$ and $\mathscr{L}\left( X \right)$ with the norm $\abs{\cdot}_{\mathscr{L}} = \abs{\cdot}_{\infty} + \left[ \cdot \right]_{X}$, respectively. Consider the space $\bb X := X \times X$ with the metric $d_{\bb X}$ on $\bb X$ defined by $d_{\bb X} ( (x_1, x_2) , (y_1,y_2) ) = d_{X} ( x_1,y_1 ) + d_{X} ( x_2,y_2 ),$ for any $(x_1, x_2)$ and $(y_1,y_2)$ in $\bb X$.
Denote by $\mathscr{L}\left( \bb X \right)$ the space of Lipschitz complex-valued functions on $\bb X$ endowed with the norm $\norm{\cdot}_{\mathscr{L}} = \norm{\cdot}_{\infty} + \left[ \cdot \right]_{\bb X}$, where \[ \norm{h}_{\infty} = \sup_{x \in \bb X} \abs{h(x)}, \quad \forall h \in \mathscr{C}\left( \bb X \right) \] and \[ \left[ h \right]_{ \bb X} = \sup_{\substack{x,y \in \bb X \\ x\neq y}} \frac{\abs{h(x)-h(y)}}{d_{\bb X}(x,y)}, \quad \forall h \in \mathscr{L}\left( \bb X \right). \] Following Guivarc'h and Hardy \cite{guivarch_theoremes_1988}, consider a Markov chain $\left( \chi_n \right)_{n\geqslant 0}$ on $X$ with transition probability $P$. Let $\left( X_n \right)_{n\geqslant 0}$ be the Markov chain on $\bb X$ defined by $X_n = \left( \chi_{n-1}, \chi_{n} \right)$, $n\geqslant 1$ and $X_0=\left( 0, \chi_0 \right)$: its transition probability is given by \[ \mathbf P((x_1,x_2), \dd y_1 \times \dd y_2) = \bs \delta_{x_2} \left( \dd y_1 \right) P\left( x_2, \dd y_2 \right). \] For a fixed real function $f$ on $\bb X$, let $S_n := \sum_{k=1}^n f\left( X_k \right)$ be the associated Markov walk and, for any $y\in \bb R$, let $\tau_y := \inf \left\{ n \geqslant 1, \, y+S_n \leqslant 0 \right\}$ be the associated exit time. In order to apply the results stated in Section \ref{sec-not-res}, we need some hypotheses on the function $f$ and the operator $P$ on $\mathscr{C}(X)$ defined by $Ph(x) = \int_{X} h(y) P(x,\dd y)$ for any $x\in X$ and any $h \in \mathscr{C}(X)$. \begin{hypothesisH}\ \label{hypoHcompactI} \begin{enumerate}[ref=\arabic*, leftmargin=*, label=\arabic*.] \item \label{H1compactI} For any $h$ in $\mathscr{C} \left( X \right),$ respectively in $\mathscr{L} \left( X \right)$, the function $Ph$ is an element of $\mathscr{C} (X),$ respectively of $\mathscr{L} (X)$.
\item \label{H2compactI} There exist constants $n_0 \geqslant 1$, $0 < \rho < 1$ and $C>0$ such that, for any function $h \in \mathscr{L}(X)$, we have \[ \abs{P^{n_0}h}_{\mathscr{L}} \leqslant \rho \abs{h}_{\mathscr{L}} + C \abs{h}_{\infty}. \] \item \label{H3compactI} The unique eigenvalue of $P$ of modulus $1$ is $1$ and the associated eigenspace is generated by the function $e$: $x\mapsto 1$, \textit{i.e.}\ if there exist $\theta \in \bb R$ and $h \in \mathscr{L}(X)$ such that $Ph = \e^{i\theta} h$, then $h$ is constant and $\e^{i\theta} = 1$. \end{enumerate} \end{hypothesisH} Under Hypothesis \ref{hypoHcompactI}, one can check that conditions (a), (b), (c) and (d) of Chapter 3 in Norman \cite{norman1972markov} hold true and we can apply the theorem of Ionescu Tulcea and Marinescu \cite{tulcea_theorie_1950} (see also \cite{guivarch_theoremes_1988}). Combining this theorem with point \ref{H3compactI} of Hypothesis \ref{hypoHcompactI}, we obtain the following proposition. \begin{proposition}\ \label{TroupourPpasgras} \begin{enumerate}[ref=\arabic*, leftmargin=*, label=\arabic*.] \item There exists a unique $P$-invariant probability $\nu$ on $X$. \item For any $n\geqslant 1$ and $h \in \mathscr{L}(X)$, \[ P^nh = \nu(h) e + R^nh, \] where $R$ is an operator on $\mathscr{L}(X)$ with spectral radius $r(R)<1$. \end{enumerate} \end{proposition} Suppose that $f$ and $\nu$ satisfy the following hypothesis. \begin{hypothesisH}\ \label{hypoHcompactII} \begin{enumerate}[ref=\arabic*, leftmargin=*, label=\arabic*.] \item \label{H1compactII} The function $f$ belongs to $\mathscr{L}(\bb X)$. \item \label{H2compactII} The function $f$ is centred, in the sense that \[ \int_{\bb X} f(x,y) P(x,\dd y) \nu(\dd x) = 0. \] \item \label{H3compactII} The function $f$ is non-degenerate, meaning that there is no function $h\in \mathscr{L}(X)$ such that \[ f(x,y) = h(x) - h(y), \] for $P_{\nu}$-almost all $(x,y),$ where $P_{\nu}(\dd x \times \dd y) = P(x,\dd y)\nu(\dd x)$.
\end{enumerate} \end{hypothesisH} Assuming Hypotheses \ref{hypoHcompactI} and \ref{hypoHcompactII}, Guivarc'h and Hardy \cite{guivarch_theoremes_1988} have established that the sequence $\left( S_n / \sqrt{n} \right)_{n\geqslant 1}$ converges weakly to a centred Gaussian random variable of variance $\sigma^2 >0$, under the probability $\bb P_x$ generated by the finite dimensional distributions of the Markov chain $(X_n)_{n\geqslant 0}$ starting at $X_0=x$, for any $x\in X$. Moreover, under the same hypotheses, we show in Appendix \ref{proof-cas-compact} that Hypotheses \ref{BASP}-\ref{CECO} are satisfied, thereby proving the following assertion. \begin{proposition} \label{PP002} Under Hypotheses \ref{hypoHcompactI} and \ref{hypoHcompactII}, Theorems \ref{thonV}, \ref{thontau}, \ref{thontau2} and \ref{loideRayleigh} hold true for the Markov chain $(X_n)_{n\geqslant 1}$, the function $f$ and the Banach space $\mathscr{L}(\bb X)$. \end{proposition} \subsection{Markov chains in compact sets under spectral gap assumptions} \label{Compact2} In Section \ref{Compact1}, we considered a Markov chain with two components satisfying the Doeblin--Fortet condition and proved, inter alia, that this chain has a spectral gap (Hypothesis \ref{SPGA}). In this section, for Markov chains with values in a compact set, we give more general conditions which ensure the applicability of the results of the previous section. Let $\left( \bb X, d \right)$ be a compact metric space and $\left( X_n \right)_{n\geqslant 0}$ be a Markov chain living in $\bb X$. Denote by $\mathbf P$ the transition probability of $\left( X_n \right)_{n\geqslant 0}$ and by $\mathscr{C}(\bb X)$ the Banach algebra of continuous complex-valued functions on $\bb X$ endowed with the uniform norm \[ \abs{h}_{\infty} = \underset{x \in \bb X}{\sup} \abs{h(x)}, \qquad h \in \mathscr{C}(\bb X).
\] Consider a real function $f$ defined on $\bb X$, the transition operator $\mathbf P$ on $\mathscr{C}(\bb X)$ associated with the transition probability of $\left( X_n \right)_{n\geqslant 0}$ and the unit function $e$ defined on $\bb X$ by $e(x) =1$, for any $x \in \bb X$. \begin{hypothesisH}\ \label{hypoHcompactIII} \begin{enumerate}[ref=\arabic*, leftmargin=*, label=\arabic*.] \item \label{H1compactIII} For any $h\in \mathscr{C}(\bb X)$, the function $\mathbf Ph$ is an element of $\mathscr{C}(\bb X)$. \item \label{H2compactIII} The operator $\mathbf P$ has a unique invariant probability $\bs \nu$. \item \label{H3compactIII} For any $n\geqslant 1$, \[ \mathbf P^n = \Pi + Q^n, \] where $\Pi$ is the one-dimensional projector on $\mathscr{C}(\bb X)$ defined by $\Pi(h) = \bs \nu(h) e$, for any $h \in \mathscr{C}(\bb X)$, and $Q$ is an operator on $\mathscr{C}(\bb X)$ of spectral radius $r(Q) < 1$ satisfying $\Pi Q= Q \Pi = 0$. \item \label{H4compactIII} The function $f$ belongs to $\mathscr{C}(\bb X)$ and is $\bs \nu$-centred, \textit{i.e.}\ $\bs \nu(f) = 0$. \item \label{H5compactIII} The function $f$ is non-degenerate, that is, there is no function $h\in \mathscr{C}(\bb X)$ such that \[ f(X_1) = h(X_0) - h(X_1), \qquad \bb P_{\bs \nu}\text{-a.s.,} \] where $\bb P_{\bs \nu}$ is the probability generated by the finite dimensional distributions of the Markov chain $(X_n)_{n\geqslant 0}$ when the initial law of $X_0$ is $\bs \nu$. \end{enumerate} \end{hypothesisH} Consider the Markov walk $S_n = \sum_{k=1}^n f(X_k)$. It is well known that, under Hypothesis \ref{hypoHcompactIII}, the normalized sum $S_n/\sqrt{n}$ converges in law to a centred normal distribution of variance $\sigma^2 > 0$ with respect to the probability $\bb P_x$ generated by the finite dimensional distributions of the Markov chain $(X_n)_{n\geqslant 0}$ starting at $X_0=x$, for any $x\in \bb X$.
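For instance (a toy illustration, with all numerical choices made purely for concreteness), take $\bb X = \{1,2\}$ and \[ \mathbf P = \begin{pmatrix} 1-a & a \\ b & 1-b \end{pmatrix}, \qquad a,b \in (0,1). \] The unique invariant probability is $\bs \nu = \left( \frac{b}{a+b}, \frac{a}{a+b} \right)$ and a direct computation gives $\mathbf P - \Pi = (1-a-b) \left( I - \Pi \right)$, where $\Pi$ is the matrix whose two rows are equal to $\bs \nu$. Since $\mathbf P \Pi = \Pi \mathbf P = \Pi^2 = \Pi$, an induction yields \[ \mathbf P^n = \Pi + Q^n, \qquad n \geqslant 1, \qquad \text{with} \qquad Q = (1-a-b)\left( I - \Pi \right), \] so that $\Pi Q = Q \Pi = 0$ and $r(Q) = \abs{1-a-b} < 1$: conditions \ref{H1compactIII}-\ref{H3compactIII} are satisfied. Moreover, any function $f \not\equiv 0$ with $\bs \nu(f) = 0$ satisfies conditions \ref{H4compactIII} and \ref{H5compactIII}: since all transitions have positive probability, the relation $f(X_1) = h(X_0) - h(X_1)$ $\bb P_{\bs \nu}$-a.s.\ would force $h$ to be constant and hence $f \equiv 0$.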
\begin{proposition} \label{PP003} Under Hypothesis \ref{hypoHcompactIII}, Theorems \ref{thonV}, \ref{thontau}, \ref{thontau2} and \ref{loideRayleigh} hold true for the Markov chain $(X_n)_{n\geqslant 1}$, the function $f$ and the Banach space $\mathscr{C}(\bb X)$. \end{proposition} All the elements of the proof are contained in the proof of Proposition \ref{PP002} (see Appendix \ref{proof-cas-compact}); the details are therefore left to the reader. \begin{remark} \label{casfini} As a special example of the compact case, consider the Markov chain $(X_n)_{n\geqslant 1}$ taking values in a finite space $\bb X.$ Assume that $(X_n)_{n\geqslant 1}$ is aperiodic and irreducible with transition matrix $\mathbf P$. Let $f$ be a real function on $\bb X$. We shall verify Hypothesis \ref{hypoHcompactIII}. Since $\bb X$ is finite, the Banach space $\mathscr{C}(\bb X)$ consists of all complex functions on $\bb X,$ therefore condition \ref{H1compactIII} is obvious. Moreover, there is a unique invariant probability $\bs \nu,$ which proves condition \ref{H2compactIII}. According to the Perron--Frobenius theorem, the transition matrix $\mathbf P$ admits $1$ as its only eigenvalue of modulus $1$ and this eigenvalue is simple, which implies condition \ref{H3compactIII}. Assume in addition that $\bs \nu(f)=0$ and that there exists a path $x_0,\dots,x_n$ in $\bb X$ such that $\mathbf P(x_0,x_1)>0,\dots,\mathbf P(x_{n-1},x_n)>0,\mathbf P(x_n,x_0)>0$ and $f(x_0)+\dots +f(x_n)\not= 0$ (which ensures conditions \ref{H4compactIII} and \ref{H5compactIII}, respectively). As a consequence of Proposition \ref{PP003} it follows that the asymptotics as $n\to+\infty$ of the probability $\bb P_x \left( \tau_y >n \right)$ and of the conditional law $\bb P_x \left( \sachant{y+S_n \leqslant \cdot \sqrt{n}}{\tau_y >n} \right)$ are given by Theorems \ref{thontau} and \ref{loideRayleigh}, respectively. \end{remark} \section{Martingale approximation} \label{Mart Approx} Throughout Sections \ref{Mart Approx}-\ref{AsCondMarkWalk} we assume Hypotheses \ref{BASP}-\ref{CECO}.
The aim of Sections \ref{Mart Approx}-\ref{PosHaFun} is to prove the existence and the positivity of the harmonic function claimed in Theorem \ref{thonV}. To summarize the approach, we approximate the walk $(y+S_n)_{n\geqslant 1}$ by a martingale $(z+M_n)_{n\geqslant 1}$ in Section \ref{Mart Approx} and prove a delicate result on the uniform boundedness in $n$ of the expectation $\bb E_x \left( y+S_n \,;\, \tau_y > n \right)$ in Section \ref{CMWI}. The key for doing this is the introduction of two stopping times $T_{z}$ and $\hat{T}_{z}$ defined by \eqref{twostopingtimes001}. The existence and the positivity of the harmonic function on a non-empty set are proved in Sections \ref{Sec Harm Func} and \ref{PosHaFun}, respectively. It is well known that the Poisson equation \[ \Theta - \mathbf P \Theta = f \] admits as a solution the real-valued function $\Theta$ defined for any $x\in \bb X$ by \[ \Theta(x) = f(x) + \sum_{k=1}^{+\infty} \mathbf P^k f(x). \] For any $x\in \bb X$, let \[ r(x) = \mathbf P \Theta(x) = \Theta(x) - f(x) = \sum_{k=1}^{+\infty} \mathbf P^k f(x). \] From \eqref{bound_EfXn} we deduce the following assertion. \begin{lemma} \label{MTR} The functions $\Theta$ and $r$ are well defined on $\bb X$ and, for any $x \in \bb X$, \[ \abs{\Theta(x)} \leqslant c \left( 1+N(x) \right) \qquad \text{and} \qquad \abs{r(x)} \leqslant c \left( 1+N(x) \right). \] \end{lemma} Following Gordin \cite{gordin_central_1969}, define the process $\left( M_n \right)_{n\geqslant 0}$ by setting $M_0=0$ and, for any $n\geqslant 1$, \[ M_n = \sum_{k=1}^{n} \left[ \Theta \left(X_k\right) - \mathbf P \Theta \left(X_{k-1}\right) \right] = \sum_{k=1}^{n} \left[ \Theta \left(X_k\right) - r \left(X_{k-1}\right) \right].
\] It is easy to see that, for any $x \in \bb X,$ the process $(M_n)_{n\geqslant 0}$ is a zero mean $\bb P_x$-martingale for the natural filtration $\left(\mathscr{F}_n\right)_{n \geqslant 0}$, where $\mathscr{F}_n$ is the $\sigma$-algebra generated by $X_1,\,X_2,\dots,\,X_n$ and $\mathscr{F}_0$ is the trivial $\sigma$-algebra. Denote by $\xi_n$ the increments of the martingale $(M_n)_{n\geqslant 0}$: for any $n\geqslant 1$, \[ \xi_n := \Theta\left(X_n\right) - r\left(X_{n-1}\right). \] In the sequel it will be convenient to consider the martingale $(z+M_n)_{n\geqslant 1}$ starting at \[ z=y+r(x). \] The reason for this is the following approximation, which is an easy consequence of the definition of the martingale $(z+M_n)_{n\geqslant 1}$: for any $x \in \bb X$ and $y\in \bb R,$ \begin{equation} \label{decMSX} z+M_n = y+r(x)+\sum_{k=1}^n \left[ r \left( X_k \right) + f \left( X_k \right) - r \left( X_{k-1} \right) \right] = y + S_n + r \left( X_n \right). \end{equation} \begin{lemma}\ \label{majmart} \begin{enumerate}[ref=\arabic*, leftmargin=*, label=\arabic*.] \item \label{majmart001} For any $p \in [1,\alpha]$, $x\in \bb X$ and $n \geqslant 1$, \[ \bb E_x^{1/p} \left( \abs{M_n}^p \right) \leqslant c_p \sqrt{n} \left( 1 + N(x) \right). \] \item \label{majmart002} For any $x\in \bb X$ and $n \geqslant 1$, \[ \bb E_x \left( \abs{M_n} \right) \leqslant c \left( \sqrt{n} + N(x) \right). \] \end{enumerate} \end{lemma} \begin{proof} First we control the increments $\xi_n$. By Lemma \ref{MTR}, for any $n\geqslant 1$, \begin{equation} \label{majdesxi000} \abs{\xi_n} \leqslant c\left( 1 + N\left( X_n \right) + N\left( X_{n-1} \right) \right).
\end{equation} So, using the point \ref{Momdec001} of Hypothesis \ref{Momdec} and \eqref{decexpN}, for any $n\geqslant 1$, \begin{align} \label{Majdesxi001} \bb E_x^{1/p} \left( \abs{\xi_n}^p \right) &\leqslant c_p \left( 1+N(x) \right) \qquad \forall p \in [1,\alpha],\\ \label{Majdesxi002} \bb E_x \left( \abs{\xi_n} \right) &\leqslant c + \e^{-c n} N(x). \end{align} \textit{Proof of the claim \ref{majmart001}}. By Burkholder's inequality, for $2< p \leqslant \alpha$, \[ \bb E_x^{1/p} \left( \abs{M_n}^p \right) \leqslant c_p \norm{ \left( \sum_{k=1}^n \xi_k^2 \right)^{1/2} }_p = c_p \bb E_x^{1/p} \left( \left( \sum_{k=1}^n \xi_k^2 \right)^{p/2} \right). \] Using H\"older's inequality with the exponents $u=p/2>1$ and $v=\frac{p}{p-2}$, we obtain \[ \bb E_x^{1/p} \left( \abs{M_n}^p \right) \leqslant c_p \bb E_x^{1/p} \left[ \left( \sum_{k=1}^n \xi_k^{2u} \right)^{\frac{p}{2u}} n^{\frac{p}{2v}} \right] = c_p n^{\frac{p-2}{2p}} \left(\sum_{k=1}^n \bb E_x \left[ \abs{\xi_k}^{p} \right] \right)^{1/p}. \] From \eqref{Majdesxi001}, for any $p\in (2,\alpha]$, \begin{equation} \label{PMM} \bb E_x^{1/p} \left( \abs{M_n}^p \right) \leqslant c_p n^{\frac{p-2}{2p}} \left( \sum_{k=1}^n c_p \left( 1+N(x) \right)^p \right)^{1/p} \leqslant c_p \sqrt{n} \left( 1+N(x) \right). \end{equation} Using the Jensen inequality for $p\in [1,2]$, we obtain the claim \ref{majmart001}. \textit{Proof of the claim \ref{majmart002}}. Consider $\ee \in (0,1/2)$. By \eqref{Majdesxi002}, \begin{align*} \bb E_x \left( \abs{M_n} \right) &\leqslant \sum_{k=1}^{\pent{n^\ee}} \bb E_x \left( \abs{\xi_k} \right) + \bb E_x \left( \abs{M_n-M_{\pent{n^\ee}}} \right)\\ &\leqslant c n^\ee + c N(x) + \bb E_x \left( \abs{M_n-M_{\pent{n^\ee}}} \right). 
\end{align*} Since $(X_n, M_n)_{n\geqslant 0}$ is a Markov chain, by the Markov property, the claim \ref{majmart001} and \eqref{decexpN}, \begin{align*} \bb E_x \left( \abs{M_n} \right) &\leqslant c n^\ee + c N(x) + \bb E_x \left( \bb E\left( \sachant{\abs{M_n-M_{\pent{n^\ee}}}}{\mathscr{F}_{\pent{n^\ee}}} \right) \right) \\ &\leqslant c n^\ee + c N(x) + \bb E_x \left[ c \left(n-\pent{n^\ee}\right)^{1/2} \left( 1+N \left( X_{\pent{n^\ee}} \right) \right) \right] \\ &\leqslant c \sqrt{n} + c_{\ee} N(x). \end{align*} \end{proof} A key point in the proof of the existence and of the positivity of the harmonic function is the introduction of two stopping times. The first one is the first time when the martingale $(z+M_n)_{n\geqslant 1}$ becomes non-positive, say $T_z$, and the second one is the first time, after the time $\tau_y,$ when the martingale $(z+M_n)_{n\geqslant 1}$ becomes non-positive, say $\hat{T}_z$. Precisely, for any $x\in \bb X$, $z\in \bb R$ and $y=z-r(x)$, set \begin{equation} T_z := \inf \left\{ k \geqslant 1,\, z+M_k \leqslant 0 \right\} \quad \text{and} \quad \hat{T}_z := \inf \left\{ k \geqslant \tau_y,\, z+M_k \leqslant 0 \right\}. \label{twostopingtimes001} \end{equation} The following lemmas will be useful in the next sections. \begin{lemma} \label{Mnsubmartingale} For any $x\in \bb X$, $z \in \bb R$, the sequence $\left( (z+M_n) \mathbbm 1_{\left\{ \hat{T}_z > n \right\}} \right)_{n\geqslant 0}$ is a $\bb P_x$-submartingale. \end{lemma} \begin{proof} For any $n\geqslant 0$, \begin{align*} \bb E_x &\left( \sachant{z+M_{n+1} \,;\, \hat{T}_z > n+1}{\mathscr{F}_n} \right) \\ &\quad= \bb E_x \left( \sachant{z+M_{n+1} \,;\, \hat{T}_z > n}{\mathscr{F}_n} \right) - \bb E_x \left( \sachant{z+M_{n+1} \,;\, \hat{T}_z = n+1}{\mathscr{F}_n} \right)\\ &\quad= \left( z+M_n \right) \mathbbm 1_{\left\{ \hat{T}_z > n \right\}} - \bb E_x \left( \sachant{z+M_{\hat{T}_z} \,;\, \hat{T}_z = n+1}{\mathscr{F}_n} \right). 
\end{align*} By the definition of $\hat{T}_z$ we have $z+M_{\hat{T}_z} \leqslant 0$ a.s.\ and the result follows. \end{proof} The proof of the following lemma shows how to apply the Markov property with the event $\left\{\hat{T}_z >n\right\}.$ The same approach will be used repeatedly for more complicated functionals, such as $\bb E_x \left( z+M_n \,;\, \hat{T}_z > n \right)$, without repeating the details. \begin{lemma} \label{MarkovpropforTz} For any $x\in \bb X$, $z \in \bb R$, $n\geqslant 0$, $k\leqslant n$ and $y=z-r(x)$, \begin{align*} \bb P_x &\left( \hat{T}_z > n \right) = \int_{\bb X \times \bb R} \bb P_{x'} \left( \hat{T}_{z'} > n-k \right) \bb P_x \left( X_k \in \dd x' \,,\, z+M_k \in \dd z' \,,\, \tau_y > k \right) \\ &+ \int_{\bb X \times \bb R} \bb P_{x'} \left( T_{z'} > n-k \right) \bb P_x \left( X_k \in \dd x' \,,\, z+M_k \in \dd z' \,,\, \tau_y \leqslant k \,,\, \hat{T}_{z} > k \right). \end{align*} \end{lemma} \begin{proof} For any $k\leqslant n$, we have \[ \bb P_x \left( \hat{T}_z > n \right) = \bb P_x \left( \tau_y > n \right) + \sum_{i=1}^{n-k} \bb P_x \left( \tau_y =i+k \,,\, \hat{T}_z > n \right) + \bb P_x \left( \tau_y \leqslant k \,,\, \hat{T}_z > n \right). \] By the Markov property and \eqref{decMSX}, with $y'=z'-r(x')$, \begin{align*} \bb P_x \left( \hat{T}_z > n \right) =\;& \int_{\bb X \times \bb R} \bb P_{x'} \left( \tau_{y'} > n-k \right) \bb P_x \left( X_k \in \dd x' \,,\, z+M_k \in \dd z' \,,\, \tau_y > k \right) \\ &+ \sum_{i=1}^{n-k} \int_{\bb X \times \bb R} \bb P_{x'} \left( \tau_{y'} =i \,,\, z'+M_{i} > 0 \,,\, \dots \,,\, z'+M_{n-k} > 0 \right) \\ &\hspace{2cm} \times\bb P_x \left( X_k \in \dd x' \,,\, z+M_k \in \dd z' \,,\, \tau_y > k \right) \\ &+ \int_{\bb X \times \bb R} \bb P_{x'} \left( T_{z'} > n-k \right) \bb P_x \left( X_k \in \dd x' \,,\, z+M_k \in \dd z' \,,\, \tau_{y} \leqslant k \,,\, \right. \\ &\hspace{6cm} \left. z+M_{\tau_y} > 0 \,,\, \dots \,,\, z+M_k > 0 \right).
\end{align*} Putting together the first two terms we get the result. \end{proof} \section{Integrability of the killed martingale and of the killed Markov walk} \label{CMWI} First we give a bound of order $n^{1/2-2\ee}$ for the expectation of the martingale $(z+M_n)_{n\geqslant 0}$ killed at $T_z$ and a similar bound when the martingale is killed at $\hat{T}_z$. From these two results we will deduce a bound, uniform in $n$, for the second expectation, \textit{i.e.}\ for the expectation of the martingale $(z+M_n)_{n\geqslant 0}$ killed at $\hat{T}_z$. We will conclude the section by showing that the expectation of the Markov walk $(y+S_n)_{n\geqslant 0}$ killed at $\tau_y$ is also bounded uniformly in $n$. \begin{lemma} \label{firstupperboundforMnTz} There exists $\ee_0>0$ such that, for any $\ee \in (0,\ee_0)$, $x\in \bb X$, $z\in \bb R$ and $n \in \bb N$, it holds \[ \bb E_x \left( z+M_n \,;\, T_z > n \right) \leqslant \max(z,0) + c_{\ee} \left( n^{1/2-2\ee} + N(x) \right). \] \end{lemma} \begin{proof} Using the fact that $( M_n )_{n\geqslant 0}$ is a zero mean martingale and the optional stopping theorem, \[ \bb E_x \left( z+M_n \,;\, T_z > n \right) = z - \bb E_x \left( z + M_n \,;\, T_z \leqslant n \right) = z - \bb E_x \left( z + M_{T_z} \,;\, T_z \leqslant n \right). \] By the definition of $T_z$, on the event $\left\{T_z > 1\right\}$, we have \[ \xi_{T_z} = z + M_{T_z} - \left( z+M_{T_z-1} \right) < z + M_{T_z} \leqslant 0. \] Using this inequality and \eqref{majdesxi000}, we obtain \begin{align} \bb E_x \left( z+M_n \,;\, T_z > n \right) &\leqslant z \bb P_x \left( T_z > 1 \right) + \bb E_x \left( \abs{\xi_1} \,;\, T_z=1 \right) + \bb E_x \left( \abs{\xi_{T_z}} \,;\, 1 < T_z \leqslant n \right) \nonumber\\ &\leqslant \max(z,0) + c\bb E_x \left( 1 + N\left( X_{T_z} \right) + N\left( X_{T_z-1} \right) \,;\, T_z \leqslant n \right). \label{xitoN} \end{align} We bound $\bb E_x \left( N\left( X_{T_z} \right) \,;\, T_z \leqslant n \right)$ as follows.
Let $\ee$ be a real number in $(0,1/6)$ and set $l=\pent{n^{1/2-2\ee}}$. Using the point \ref{Momdec001} of Hypothesis \ref{Momdec} we write \begin{align*} \bb E_x \left( N\left( X_{T_z} \right) \,;\, T_z \leqslant n \right) \leqslant n^{1/2-2\ee} &+ \bb E_x \left( N\left( X_{T_z} \right) \,;\, N\left( X_{T_z} \right) > n^{1/2-2\ee} \,,\, T_z \leqslant n \right) \\ \leqslant n^{1/2-2\ee} &+ \sum_{k=1}^{\pent{n^{\ee}}} \bb E_x \left( N\left( X_k \right) \right) + \sum_{k=\pent{n^{\ee}}+1}^n \bb E_x \left( N_l \left( X_k \right) \right). \end{align*} By \eqref{decexpN} and \eqref{decexpNl}, \[ \bb E_x \left( N\left( X_{T_z} \right) \,;\, T_z \leqslant n \right) \leqslant c n^{1/2-2\ee} + c N(x) + \frac{cn}{l^{1+\beta}} + \e^{-cn^{\ee}} \left( 1+N(x) \right). \] Choosing $\ee < \min(\frac{\beta}{4(2+\beta)},\frac{1}{6})$, we find that \begin{equation} \label{NXT} \bb E_x \left( N\left( X_{T_z} \right) \,;\, T_z \leqslant n \right) \leqslant c_{\ee} n^{1/2-2\ee}+ c_{\ee} N(x). \end{equation} In the same manner, we obtain that $\bb E_x \left( N\left( X_{T_z-1} \right) \,;\, T_z \leqslant n \right) \leqslant c_{\ee} n^{1/2-2\ee}+ c_{\ee} N(x)$. Consequently, from \eqref{NXT} and \eqref{xitoN}, we conclude the assertion of the lemma. \end{proof} \begin{lemma} \label{firstupperbound} There exists $\ee_0 >0$ such that, for any $\ee \in (0,\ee_0)$, $x\in \bb X$, $z\in \bb R$ and $n \in \bb N$, we have \[ \bb E_x\left( z+M_n \,;\, \hat{T}_z>n \right) \leqslant \max(z,0) + c_{\ee} \left( n^{1/2-2\ee} + n^{2\ee} N(x) \right). \] \end{lemma} \begin{proof} Let $\ee$ be a real number in $(0,1/4)$. Denoting $z_+ := z + n^{1/2-2\ee}$ we have, \begin{align} \bb E_x\left( z+M_n \,;\, \hat{T}_z>n \right) &= \underbrace{\bb E_x\left( z+M_n \,;\, T_{z_+} \leqslant n \,,\, \hat{T}_z>n \right)}_{=:J_1} \nonumber\\ &\qquad + \underbrace{\bb E_x\left( z+M_n \,;\, T_{z_+} > n \,,\, \hat{T}_z>n \right)}_{=:J_2}. \label{decmartkillpremmaj} \end{align} \textit{Bound of $J_1$.} Let $y=z-r(x)$. 
Using the fact that $\bb P_x \left( \tau_y \leqslant k \,,\, \hat{T}_z > k \,,\, T_{z_+} =k \right) = 0$ and the Markov property, in the same way as in the proof of Lemma \ref{MarkovpropforTz}, \begin{align*} J_1 &= \sum_{k=1}^n \int_{\bb X \times \bb R} \bb E_{x'} \left( z'+ M_{n-k} \,;\, \hat{T}_{z'} > n-k \right) \\ &\hspace{5cm} \times \bb P_x \left( X_k \in \dd x' \,,\, z+M_k \in \dd z' \,,\, \tau_y > k \,,\, T_{z_+} =k \right). \end{align*} Since $z+M_{T_{z_+}} < 0$, using the point \ref{majmart002} of Lemma \ref{majmart}, we have \[ J_1 \leqslant c \bb E_x \left( \sqrt{n} + N\left(X_{T_{z_+}}\right) \,;\, \tau_y > T_{z_+} \,,\, T_{z_+} \leqslant n \right). \] By the approximation \eqref{decMSX}, on the event $\{ \tau_y > T_{z_+} \},$ it holds \[ r \left( X_{T_{z_+}} \right) = z + M_{T_{z_+}} - \left( y + S_{T_{z_+}} \right) < -n^{1/2-2\ee}. \] Therefore, by Lemma \ref{MTR}, \begin{align*} J_1 &\leqslant c n^{2\ee} \bb E_x \left( \abs{r \left( X_{T_{z_+}} \right)} + N\left(X_{T_{z_+}}\right) \,;\, \abs{r \left( X_{T_{z_+}} \right)} > n^{1/2-2\ee} \,,\, T_{z_+} \leqslant n \right)\\ &\leqslant c n^{2\ee} + c n^{2\ee} \bb E_x \left( N\left(X_{T_{z_+}}\right) \,;\, T_{z_+} \leqslant n \right). \end{align*} Choosing $\ee$ small enough, by \eqref{NXT}, \begin{equation} \label{MajorJ1} J_1 \leqslant c n^{2\ee} + c_{\ee} n^{2\ee} \left( n^{1/2-4\ee} + N(x) \right) \leqslant c_{\ee} n^{1/2-2\ee} + c_{\ee} n^{2\ee} N(x). \end{equation} \textit{Bound of $J_2$.} By Lemma \ref{firstupperboundforMnTz}, there exists $\ee_0>0$ such that, for any $\ee\in (0,\ee_0)$, \[ J_2 \leqslant \bb E_x\left( z_+ + M_n \,;\, T_{z_+} > n \right) \leqslant \max(z,0) + c_{\ee} n^{1/2-2\ee} + c_{\ee} N(x). \] Inserting this bound and \eqref{MajorJ1} into \eqref{decmartkillpremmaj}, for any $\ee\in (0,\ee_0)$, we deduce the assertion of the lemma. 
\end{proof} Let $\nu_n$ be the first time when the martingale $(z+M_n)_{n\geqslant 1}$ exceeds $n^{1/2-\ee}$: for any $n \geqslant 1$, $\ee \in (0,1/2)$ and $z\in \bb R$, \[ \nu_n = \nu_{n,\ee,z} := \min \left\{ k \geqslant 1,\, z+M_k > n^{1/2-\ee} \right\}. \] The control on the joint law of $\nu_n$ and $\hat{T}_z$ is given by the following lemma. \begin{lemma} \label{concentnu} There exists $\ee_0>0$ such that, for any $\ee \in (0,\ee_0)$, $\delta>0$, $x \in \bb X$, $z \in \bb R$ and $n \in \bb N$, \[ \bb P_x \left( \nu_n > \delta n^{1-\ee} \,,\, \hat{T}_z > \delta n^{1-\ee} \right) \leqslant \e^{-c_{\ee,\delta} n^{\ee}} \left( 1 + N \left( x \right) \right). \] \end{lemma} \begin{proof} Let $\ee \in (0,1/4)$ and $K:= \pent{n^\ee/2}$. We split the interval $[1,\delta n^{1-\ee}]$ into subintervals of length $l:= \pent{\delta n^{1-2\ee}}$. Introduce the event $A_{k,z} := \{ \max_{1 \leqslant k' \leqslant k} \left( z+M_{k'l} \right) \leqslant n^{1/2-\ee} \}$. Then \begin{equation} \bb P_x \left( \nu_n > \delta n^{1-\ee} \,,\, \hat{T}_z > \delta n^{1-\ee} \right) \leqslant \bb P_x \left( A_{2K,z} \,,\, \hat{T}_z > 2Kl \right). \label{Probnun} \end{equation} By the Markov property, as in the proof of Lemma \ref{MarkovpropforTz}, with $y=z-r(x)$, we have \begin{align} \bb P_x &\left( A_{2K,z} \,,\, \hat{T}_z > 2Kl \right) \nonumber\\ &= \int_{\bb X \times \bb R} \bb P_{x'} \left( A_{2,z'} \,,\, \hat{T}_{z'} > 2l \right) \bb P_x \left( X_{2(K-1)l} \in \dd x' \,,\, z+M_{2(K-1)l} \in \dd z' \,,\, \right. \nonumber\\ &\hspace{8cm} \left. A_{2(K-1),z} \,,\, \tau_y > 2(K-1)l \right) \nonumber\\ &\quad + \int_{\bb X \times \bb R} \bb P_{x'} \left( A_{2,z'} \,,\, T_{z'} > 2l \right) \bb P_x \left( X_{2(K-1)l} \in \dd x' \,,\, z+M_{2(K-1)l} \in \dd z'\,,\, \right. \nonumber\\ &\qquad \hspace{4cm}\left. A_{2(K-1),z} \,,\, \tau_y \leqslant 2(K-1)l \,,\, \hat{T}_z > 2(K-1)l \right).
\label{PA00} \end{align} \textit{Bound of $\bb P_{x'} \left( A_{2,z'} \,,\, \hat{T}_{z'} > 2l \right)$.} With $y'=z'-r(x')$, we write \begin{align} \bb P_{x'} &\left( A_{2,z'} \,,\, \hat{T}_{z'} > 2l \right) \nonumber\\ &= \int_{\bb X \times \bb R} \bb P_{x''} \left( A_{1,z''} \,,\, \hat{T}_{z''} > l \right) \bb P_{x'} \left( X_l \in \dd x'' \,,\, z'+M_l \in \dd z'' \,,\, A_{1,z'} \,,\, \tau_{y'} > l \right) \nonumber\\ \label{PA1} &\hspace{1cm} + \int_{\bb X \times \bb R} \bb P_{x''} \left( A_{1,z''} \,,\, T_{z''} > l \right) \\ &\hspace{3cm} \times \bb P_{x'} \left( X_l \in \dd x'' \,,\, z'+M_l \in \dd z'' \,,\, A_{1,z'} \,,\, \tau_{y'} \leqslant l \,,\, \hat{T}_{z'} > l \right). \nonumber \end{align} \textit{Bound of $\bb P_{x''} \left( A_{1,z''} \,,\, \hat{T}_{z''} > l \right)$.} Note that on the event $\{ \tau_{y'} > l \}$ we have $y' + S_{l} > 0$. So it is enough to consider that $y''=z''-r(x'')>0$. Using \eqref{decMSX} we have, \begin{align*} \bb P_{x''} \left( A_{1,z''} \,,\, \hat{T}_{z''} > l \right) &\leqslant \bb P_{x''} \left( y''+S_l \leqslant 2 n^{1/2-\ee} \,,\, \abs{r \left( X_l \right)} \leqslant n^{1/2-\ee} \right) \\ &\hspace{6cm} + \bb P_{x''} \left( \abs{r \left( X_l \right)} > n^{1/2-\ee} \right). \end{align*} Therefore, there exists a constant $c_{\ee,\delta}$ such that \[ \bb P_{x''} \left( A_{1,z''} \,,\, \hat{T}_{z''} > l \right) \leqslant \bb P_{x''} \left( \frac{S_l}{\sqrt{l}} \leqslant c_{\ee,\delta} \right) + \bb E_{x''} \left( \frac{\abs{r \left( X_l \right)}}{n^{1/2-\ee}} \right). \] Using Corollary \ref{BerEss} and Lemma \ref{MTR}, there exists $\ee_0 \in (0,1/4)$, such that, for any $\ee \in (0,\ee_0)$, \[ \bb P_{x''} \left( A_{1,z''} \,,\, \hat{T}_{z''} > l \right) \leqslant \int_{-\infty}^{c_{\ee,\delta}} \e^{-\frac{u^2}{2\sigma^2}} \frac{\dd u}{\sqrt{2\pi} \sigma} + \frac{c_{\ee}}{l^\ee} \left(1+N(x'')\right) + \frac{c}{n^{1/2-\ee}} \bb E_{x''} \left( 1+N \left( X_l \right) \right). 
\] Using the point \ref{Momdec001} of Hypothesis \ref{Momdec} and the fact that $l^\ee \geqslant n^{\ee/2}/c_{\ee,\delta}$ for $\ee < 1/4$, we have, \begin{equation} \bb P_{x''} \left( A_{1,z''} \,,\, \hat{T}_{z''} > l \right) \leqslant q_{\ee,\delta} + \frac{c_{\ee,\delta}}{n^{\ee/2}} \left(1+N(x'')\right), \label{majq001} \end{equation} with $q_{\ee,\delta} := \int_{-\infty}^{c_{\ee,\delta}} \e^{-\frac{u^2}{2\sigma^2}} \frac{\dd u}{\sqrt{2\pi} \sigma} < 1$. \textit{Bound of $\bb P_{x''} \left( A_{1,z''} \,,\, T_{z''} > l \right)$.} On the event $\{ T_{z''} > l \}$ we have $z''+M_l > 0$. Using \eqref{decMSX} and Corollary \ref{BerEss}, in the same way as in the proof of the bound \eqref{majq001}, we obtain \begin{align} \bb P_{x''} \left( A_{1,z''} \,,\, T_{z''} > l \right) &\leqslant \bb P_{x''} \left( 0 < z''+M_l \leqslant n^{1/2-\ee} \right) \nonumber \\ &\leqslant \int_{\frac{-y''}{\sqrt{l}}-c_{\ee,\delta}}^{\frac{-y''}{\sqrt{l}}+c_{\ee,\delta}} \e^{-\frac{u^2}{2\sigma^2}} \frac{\dd u}{\sqrt{2\pi} \sigma} + \frac{c_{\ee,\delta}}{n^{\ee/2}} \left( 1+N(x'') \right) \nonumber\\ &\leqslant q_{\ee,\delta} + \frac{c_{\ee,\delta}}{n^{\ee/2}} \left(1+N(x'')\right). \label{majq002} \end{align} Inserting \eqref{majq001} and \eqref{majq002} into \eqref{PA1} and using \eqref{decexpN}, we have \begin{align} \bb P_{x'} \left( A_{2,z'} \,,\, \hat{T}_{z'} > 2l \right) &\leqslant q_{\ee,\delta} + \frac{c_{\ee,\delta}}{n^{\ee/2}} + \frac{c_{\ee,\delta}}{n^{\ee/2}} \bb E_{x'} \left( N \left( X_l \right) \right) \nonumber\\ &\leqslant q_{\ee,\delta} + \frac{c_{\ee,\delta}}{n^{\ee/2}} + \e^{-c_{\ee,\delta} n^{1-2\ee}} N(x'). 
\label{majq003} \end{align} \textit{Bound of $\bb P_{x'} \left( A_{2,z'} \,,\, T_{z'} > 2l \right)$.} By the Markov property, \begin{align*} \bb P_{x'} \left( A_{2,z'} \,,\, T_{z'} > 2l \right) &= \int_{\bb X \times \bb R} \bb P_{x''} \left( A_{1,z''} \,,\, T_{z''} > l \right) \\ &\qquad \times \bb P_{x'} \left( X_l \in \dd x'' \,,\, z'+M_{l} \in \dd z'' \,,\, A_{1,z'} \,,\, T_{z'} > l \right). \end{align*} Using \eqref{majq002} to bound the probability inside the integral, we get \begin{equation} \label{majq004} \bb P_{x'} \left( A_{2,z'} \,,\, T_{z'} > 2l \right) \leqslant q_{\ee,\delta} + \frac{c_{\ee,\delta}}{n^{\ee/2}} + \e^{-c_{\ee,\delta} n^{1-2\ee}} N(x'). \end{equation} Inserting the bounds \eqref{majq003} and \eqref{majq004} into \eqref{PA00}, we find that \begin{align*} \bb P_x &\left( A_{2K,z} \,,\, \hat{T}_z > 2Kl \right) \\ &\leqslant \left( q_{\ee,\delta} + \frac{c_{\ee,\delta}}{n^{\ee/2}} \right) \bb P_x \left( A_{2(K-1),z} \,,\, \hat{T}_z > 2(K-1)l \right) + \e^{-c_{\ee,\delta} n^{1-2\ee}} \left( 1+N(x) \right). \end{align*} Iterating this inequality, we get \[ \bb P_x \left( A_{2K,z} \,,\, \hat{T}_z > 2Kl \right) \leqslant \left( q_{\ee,\delta} + \frac{c_{\ee,\delta}}{n^{\ee/2}} \right)^K + \e^{-c_{\ee,\delta} n^{1-2\ee}} \left( 1+N(x) \right) \sum_{k=0}^{K-1} \left( q_{\ee,\delta} + \frac{c_{\ee,\delta}}{n^{\ee/2}} \right)^k. \] As $K=\pent{n^\ee/2}$ and $q_{\ee,\delta} < 1$ it follows that, for $n$ large enough, $\left( q_{\ee,\delta} + \frac{c_{\ee,\delta}}{n^{\ee/2}} \right)^K \leqslant \e^{-c_{\ee,\delta} n^\ee}$, which, in turn, implies \[ \bb P_x \left( A_{2K,z} \,,\, \hat{T}_z > 2Kl \right) \leqslant \e^{-c_{\ee,\delta} n^{\ee}} \left( 1+N(x) \right). 
\] \end{proof} \begin{lemma} \label{intofthemartcond} There exists $\ee_0 > 0$ such that, for any $\ee \in (0,\ee_0)$, $x\in \bb X$, $z \in \bb R$, $n\geqslant 2$ and any integer $n_f \in \{2, \dots, n \}$, \[ \bb E_x \left( z+M_n \,;\, \hat{T}_z > n \right) \leqslant \left( 1 + \frac{c_{\ee}}{n_f^\ee} \right) \left( \max(z,0) + cN(x) \right) + c_{\ee} n_f^{1/2} + \e^{-c_{\ee} n_f^{\ee}} N(x). \] \end{lemma} \begin{proof} Let $\ee \in (0,1/2)$ and $y=z-r(x)$. Considering the stopping time \[ \nu_n^\ee:= \nu_n + \pent{n^\ee}, \] we have, \begin{align} \bb E_x \left( z+M_n \,;\, \hat{T}_z > n \right) \leqslant\;& \underbrace{\bb E_x \left( z+M_n \,;\, \hat{T}_z > n \,,\, \nu_n^\ee > \pent{n^{1-\ee}} \right)}_{=:J_1} \nonumber\\ &+ \underbrace{\bb E_x \left( z+M_n \,;\, \hat{T}_z > n \,,\, \nu_n^\ee \leqslant \pent{n^{1-\ee}} \right)}_{=:J_2}. \label{UBJ1ET2} \end{align} \textit{Bound of $J_1$.} Set $m_{\ee} = \pent{n^{1-\ee}}-\pent{n^\ee}$. Using the fact that $\{ \nu_n^\ee > \pent{n^{1-\ee}} \} = \{ \nu_n > m_\ee \}$ and the Markov property as in the proof of Lemma \ref{MarkovpropforTz}, \begin{align*} J_1 =\; &\int_{\bb X \times \bb R} \bb E_{x'} \left( z'+M_{n-m_\ee} \,;\, \hat{T}_{z'} > n-m_\ee \right) \\ &\qquad \times \bb P_x \left( X_{m_\ee} \in \dd x' \,,\, z+M_{m_\ee} \in \dd z' \,,\, \tau_y > m_\ee \,,\, \nu_n > m_\ee \right)\\ &+\int_{\bb X \times \bb R} \bb E_{x'} \left( z'+M_{n-m_\ee} \,;\, T_{z'} > n-m_\ee \right) \\ &\qquad \times \bb P_x \left( X_{m_\ee} \in \dd x' \,,\, z+M_{m_\ee} \in \dd z' \,,\, \tau_y \leqslant m_\ee \,,\, \hat{T}_z > m_\ee \,,\, \nu_n > m_\ee \right). \end{align*} On the event $\{ \nu_n > m_\ee \}$, we have $z'=z+M_{m_\ee} \leqslant n^{1/2-\ee}$. Moreover by the point \ref{majmart002} of Lemma \ref{majmart}, $\bb E_{x'} \left( \abs{M_{n-m_\ee}} \right) \leqslant c n^{1/2} + c N(x')$. Therefore, \[ J_1 \leqslant c \bb E_x \left( n^{1/2} + N\left(X_{m_\ee}\right) \,;\, \hat{T}_z > m_\ee \,,\, \nu_n > m_\ee \right). 
\] Set $m_{\ee}'=m_{\ee}-\pent{n^{\ee}} = \pent{n^{1-\ee}} - 2\pent{n^{\ee}}$. Using the Markov property and \eqref{decexpN}, \begin{align*} J_1 &\leqslant c \int_{\bb X} \left( n^{1/2} +\bb E_{x'} \left( N\left( X_{\pent{n^{\ee}}} \right) \right) \right) \bb P_x\left( X_{m_{\ee}'} \in \dd x' \,,\, \hat{T}_z > m_\ee' \,,\, \nu_n > m_\ee' \right) \\ &\leqslant c n^{1/2} \bb P_x\left( \hat{T}_z > m_\ee' \,,\, \nu_n > m_\ee' \right) + \e^{-c n^\ee} \bb E_x \left( N\left( X_{m_\ee'} \right) \right). \end{align*} By Lemma \ref{concentnu} and the point \ref{Momdec001} of Hypothesis \ref{Momdec}, \begin{equation} \label{MajJ1} J_1 \leqslant c n^{1/2} \e^{-c_{\ee} n^\ee} \left( 1 + N(x) \right) + \e^{-c n^\ee} \left( 1+N(x) \right) \leqslant \e^{-c_\ee n^\ee} \left( 1+N(x) \right). \end{equation} \textit{Bound of $J_2$.} By the Markov property, as in the proof of Lemma \ref{MarkovpropforTz}, we have \begin{align*} J_2 = \sum_{k=1}^{\pent{n^{1-\ee}}} &\int_{\bb X \times \bb R} \bb E_{x'} \left( z'+M_{n-k} \,;\, \hat{T}_{z'} > n-k \right) \\ &\qquad \times \bb P_x \left( X_k \in \dd x' \,,\, z+M_k \in \dd z' \,,\, \tau_y > k \,,\, \nu_n^\ee = k \right) \\ &+ \int_{\bb X \times \bb R} \bb E_{x'} \left( z'+M_{n-k} \,;\, T_{z'} > n-k \right) \\ &\qquad \times \bb P_x \left( X_k \in \dd x' \,,\, z+M_k \in \dd z' \,,\, \tau_y \leqslant k \,,\, \hat{T}_{z} > k \,,\, \nu_n^\ee = k \right). \end{align*} By Lemmas \ref{firstupperbound} and \ref{firstupperboundforMnTz}, \begin{align} J_2 &\leqslant \underbrace{c_\ee \bb E_x \left( n^{1/2-2\ee} + n^{2\ee} N\left( X_{\nu_n^\ee} \right) \,;\, \hat{T}_{z} > \nu_n^\ee \,,\, \nu_n^\ee \leqslant \pent{n^{1-\ee}} \right)}_{=:J_{21}} \nonumber\\ &\qquad+ \underbrace{\bb E_x \left( \max \left( z+M_{\nu_n^\ee} , 0 \right) \,;\, \hat{T}_{z} > \nu_n^\ee \,,\, \nu_n^\ee \leqslant \pent{n^{1-\ee}} \right)}_{=:J_{22}}.
\label{decJ2} \end{align} \textit{Bound of $J_{21}$.} Using the Markov property and \eqref{decexpN}, \begin{align} J_{21} &\leqslant c_\ee \int_{\bb X} \bb E_{x'} \left( n^{1/2-2\ee} + n^{2\ee} N\left( X_{\pent{n^\ee}} \right) \right) \bb P_x \left( X_{\nu_n} \in \dd x' \,,\, \hat{T}_{z} > \nu_n \,,\, \nu_n \leqslant \pent{n^{1-\ee}} \right) \nonumber\\ &\leqslant c_{\ee} \bb E_x \left( n^{1/2-2\ee} + \e^{-c_{\ee} n^{\ee}} N\left( X_{\nu_n} \right) \,;\, \hat{T}_{z} > \nu_n \,,\, \nu_n \leqslant \pent{n^{1-\ee}} \right) \nonumber\\ &\leqslant \underbrace{c_{\ee} \bb E_x \left( n^{1/2-2\ee} \,;\, \hat{T}_{z} > \nu_n \,,\, \nu_n \leqslant \pent{n^{1-\ee}} \right)}_{=:J_{21}'} + \e^{-c_{\ee} n^{\ee}} n^{1-\ee} \left( 1 + N(x) \right). \label{decJ21} \end{align} By the definition of $\nu_n$, we have $n^{1/2-2\ee} < \frac{z+M_{\nu_n}}{n^\ee}$. So \[ J_{21}' \leqslant \frac{c_{\ee}}{n^\ee} \bb E_x \left( z+M_{\nu_n} \,;\, \hat{T}_{z} > \nu_n \,,\, \nu_n \leqslant \pent{n^{1-\ee}} \right). \] Using Lemma \ref{Mnsubmartingale}, \begin{align} J_{21}' \leqslant\;& \frac{c_{\ee}}{n^\ee} \bb E_x \left( z+M_{\pent{n^{1-\ee}}} \,;\, \hat{T}_{z} > \pent{n^{1-\ee}} \right) \nonumber\\ &- \frac{c_{\ee}}{n^\ee}\underbrace{ \bb E_x \left( z+M_{\pent{n^{1-\ee}}} \,;\, \hat{T}_{z} > \pent{n^{1-\ee}} \,,\, \nu_n > \pent{n^{1-\ee}} \right)}_{=:J_{21}''}. \label{decJ21'} \end{align} On the event $\{ \tau_y > \pent{n^{1-\ee}} \}$, by \eqref{decMSX}, it holds $z+M_{\pent{n^{1-\ee}}} > r \left( X_{\pent{n^{1-\ee}}} \right)$ and on the event $\{\tau_{y} \leqslant \pent{n^{1-\ee}} \,,\, \hat{T}_{z} > \pent{n^{1-\ee}} \}$ it holds $z+M_{\pent{n^{1-\ee}}} >0$. 
So, by the definition of $\hat{T}_z$, \begin{align*} -J_{21}'' \leqslant\;& -\bb E_x \left( r \left( X_{\pent{n^{1-\ee}}} \right) \,;\, \tau_{y} > \pent{n^{1-\ee}} \,,\, \nu_n > \pent{n^{1-\ee}} \right)\\ \leqslant\;& c\bb E_x \left( 1+N \left( X_{\pent{n^{1-\ee}}} \right) \,;\, \hat{T}_{z} > \pent{n^{1-\ee}} \,,\, \nu_n > \pent{n^{1-\ee}} \right). \end{align*} Denoting $m_{\ee} = \pent{n^{1-\ee}} - \pent{n^\ee}$ and using the Markov property and \eqref{decexpN}, \begin{align*} -J_{21}'' &\leqslant c \bb E_x \left( 1+\e^{-c_{\ee} n^{\ee}} N\left( X_{m_{\ee}} \right) \,;\, \hat{T}_{z} > m_{\ee} \,,\, \nu_n > m_{\ee} \right) \\ &\leqslant c \bb P_x \left( \nu_n > m_{\ee} \,,\, \hat{T}_{z} > m_{\ee} \right) + \e^{-c_{\ee} n^{\ee}} \left( 1+N(x) \right). \end{align*} By Lemma \ref{concentnu}, \begin{equation} \label{MajJ21''} -J_{21}'' \leqslant \e^{-c_{\ee} n^{\ee}} \left( 1+N(x) \right). \end{equation} Putting together \eqref{MajJ21''} and \eqref{decJ21'}, \begin{equation} \label{MajJ21'} J_{21}' \leqslant \frac{c_{\ee}}{n^\ee} \bb E_x \left( z+M_{\pent{n^{1-\ee}}} \,;\, \hat{T}_{z} > \pent{n^{1-\ee}} \right) + \e^{-c_{\ee} n^{\ee}} \left( 1+N(x) \right). \end{equation} So, using \eqref{decJ21}, \begin{equation} \label{MajJ21} J_{21} \leqslant \frac{c_{\ee}}{n^\ee} \bb E_x \left( z+M_{\pent{n^{1-\ee}}} \,;\, \hat{T}_{z} > \pent{n^{1-\ee}} \right) + \e^{-c_{\ee} n^{\ee}} \left( 1+N(x) \right). \end{equation} \textit{Bound of $J_{22}$.} On the event $\{ \hat{T}_{z} > \nu_n^\ee \,,\, \tau_y \leqslant \nu_n^\ee \}$ we have $z+M_{\nu_n^\ee} > 0$. Consequently, \begin{align*} J_{22} =\;& \bb E_x \left( z+M_{\nu_n^\ee} \,;\, \hat{T}_{z} > \nu_n^\ee \,,\, \nu_n^\ee \leqslant \pent{n^{1-\ee}} \right) \\ &+ \bb E_x \left( \max \left( z+M_{\nu_n^\ee} , 0 \right) - \left(z+M_{\nu_n^\ee}\right) \,;\, \tau_y > \nu_n^\ee \,,\, \nu_n^\ee \leqslant \pent{n^{1-\ee}} \right).
\end{align*} By Lemma \ref{Mnsubmartingale}, \begin{align} J_{22} \leqslant\;& \bb E_x \left( z+M_{\pent{n^{1-\ee}}} \,;\, \hat{T}_{z} > \pent{n^{1-\ee}} \right) \nonumber\\ \label{decJ22} &-\underbrace{\bb E_x \left( z+M_{\pent{n^{1-\ee}}} \,;\, \hat{T}_{z} > \pent{n^{1-\ee}} \,,\, \nu_n^\ee > \pent{n^{1-\ee}} \right)}_{=:J_{22}''}\\ &\underbrace{- \bb E_x \left( z+M_{\nu_n^\ee} \,;\, z+M_{\nu_n^\ee} < 0 \,,\, \tau_y > \nu_n^\ee \,,\, \nu_n^\ee \leqslant \pent{n^{1-\ee}} \right)}_{=:J_{22}'}. \nonumber \end{align} In the same way as in the proof of the bound of $J_{21}''$, replacing $\nu_n$ by $\nu_n^{\ee}$, one can prove that \begin{equation} \label{MajJ22''} -J_{22}'' \leqslant \e^{-c_{\ee} n^{\ee}} \left( 1+N(x) \right). \end{equation} Moreover, using \eqref{decMSX}, on the event $\{ \tau_y > \nu_n^\ee \}$, we have $-(z+M_{\nu_n^\ee}) < -r \left( X_{\nu_n^\ee} \right)$. So, by Lemma \ref{MTR}, the Markov property and \eqref{decexpN}, \begin{align} J_{22}' &\leqslant \bb E_x \left( \abs{ r \left( X_{\nu_n^\ee} \right) } \,;\, \hat{T}_{z} > \nu_n^\ee \,,\, \nu_n^\ee \leqslant \pent{n^{1-\ee}} \right) \nonumber\\ \label{MajJ22'} &\leqslant c \bb E_x \left( 1+\e^{-c n^\ee} N \left( X_{\nu_n} \right) \,;\, \hat{T}_{z} > \nu_n \,,\, \nu_n \leqslant \pent{n^{1-\ee}} \right) \\ &\leqslant J_{21}' + \e^{-c_{\ee} n^\ee} \left( 1+N(x) \right). \nonumber \end{align} With \eqref{MajJ21'}, \eqref{decJ22}, \eqref{MajJ22''} and \eqref{MajJ22'}, we obtain \begin{align} \label{MajJ22} J_{22} &\leqslant \left( 1 + \frac{c_{\ee}}{n^\ee} \right) \bb E_x \left( z+M_{\pent{n^{1-\ee}}} \,;\, \hat{T}_{z} > \pent{n^{1-\ee}} \right) + \e^{-c_{\ee} n^\ee} \left( 1+N(x) \right). \end{align} Inserting \eqref{MajJ22} and \eqref{MajJ21} into \eqref{decJ2}, \begin{equation} \label{MajJ2} J_2 \leqslant \left( 1 + \frac{c_{\ee}}{n^\ee} \right) \bb E_x \left( z+M_{\pent{n^{1-\ee}}} \,;\, \hat{T}_{z} > \pent{n^{1-\ee}} \right) + \e^{-c_{\ee} n^{\ee}} \left( 1 + N(x) \right).
\end{equation} Now, inserting \eqref{MajJ1} and \eqref{MajJ2} into \eqref{UBJ1ET2}, we find that \begin{align} \bb E_x \left( z+M_n \,;\, \hat{T}_z > n \right) &\leqslant \left( 1 + \frac{c_{\ee}}{n^\ee} \right) \bb E_x \left( z+M_{\pent{n^{1-\ee}}} \,;\, \hat{T}_{z} > \pent{n^{1-\ee}} \right) \nonumber\\ &\qquad+ \e^{-c_{\ee} n^{\ee}} \left( 1 + N(x) \right). \label{REC001} \end{align} By Lemma \ref{Mnsubmartingale}, the sequence $( \bb E_x ( z+M_n \,;\, \hat{T}_z > n ) )_{n\geqslant 1}$ is non-decreasing. Therefore, using Lemma \ref{lemanalyse}, we obtain that for any $n\geqslant 2$ and $n_f \in \{ 2, \dots, n \}$, \[ \bb E_x \left( z+M_n \,;\, \hat{T}_z > n \right) \leqslant \left( 1 + \frac{c_{\ee}}{n_f^\ee} \right) \bb E_x \left( z+M_{n_f} \,;\, \hat{T}_z > n_f \right) + \e^{-c_{\ee} n_f^{\ee}} \left( 1 + N(x) \right). \] Finally, by the point \ref{majmart002} of Lemma \ref{majmart}, \[ \bb E_x \left( z+M_n \,;\, \hat{T}_z > n \right) \leqslant \left( 1 + \frac{c_{\ee}}{n_f^\ee} \right) \left( \max(z,0) + cN(x) \right) + c_{\ee} n_f^{1/2} + \e^{-c_{\ee} n_f^{\ee}} N(x). \] \end{proof} \begin{corollary} \label{unifmajdelafoncinv} There exists $\ee_0 > 0$ such that, for any $\ee \in (0,\ee_0)$, $x\in \bb X$, $y \in \bb R$, $n\geqslant 2$ and any integer $n_f \in \{2, \dots, n \}$, \[ \bb E_x \left( y+S_n \,;\, \tau_y > n \right) \leqslant \left( 1 + \frac{c_{\ee}}{n_f^\ee} \right) \left( \max(y,0) + c N(x) \right) + c_{\ee} n_f^{1/2} + \e^{-c_{\ee} n_f^{\ee}} N(x).
\] \end{corollary} \begin{proof} First, using the definition of $\hat{T}_z$ and Lemma \ref{intofthemartcond}, with $z=y+r(x)$, \begin{align} \bb E_x \left( z+M_n \,;\, \tau_y > n \right) &= \bb E_x \left( z+M_n \,;\, \hat{T}_z>n \right) - \bb E_x \left( z+M_n \,;\, \tau_y \leqslant n \,,\, \hat{T}_z>n \right) \nonumber\\ \label{VnPPWn} &\leqslant \bb E_x \left( z+M_n \,;\, \hat{T}_z>n \right) \\ \label{VnPPWn2} &\leqslant \left( 1 + \frac{c_{\ee}}{n_f^\ee} \right) \left( \max(z,0) + cN(x) \right) + c_{\ee} n_f^{1/2} + \e^{-c_{\ee} n_f^{\ee}} N(x). \end{align} Now, using \eqref{decMSX}, Lemma \ref{MTR} and \eqref{decexpN}, \begin{align*} \bb E_x \left( y+S_n \,;\, \tau_y > n \right) &= \bb E_x \left( z+M_n \,;\, \tau_y > n \right) - \bb E_x \left( r \left(X_n\right) \,;\, \tau_y > n \right) \\ &\leqslant \bb E_x \left( z+M_n \,;\, \tau_y > n \right) + c \left( 1+\e^{-cn} N(x) \right)\\ &\leqslant \left( 1 + \frac{c_{\ee}}{n_f^\ee} \right) \left( \max(z,0) + cN(x) \right) + c_{\ee} n_f^{1/2} + \e^{-c_{\ee} n_f^{\ee}} N(x). \end{align*} Since $z=y+r(x)$ and, by Lemma \ref{MTR}, $\abs{r(x)} \leqslant c \left( 1+N(x) \right)$, we have $\max(z,0) \leqslant \max(y,0) + c \left( 1+N(x) \right)$, which concludes the proof. \end{proof} \section{Existence of the harmonic function} \label{Sec Harm Func} In this section, we rely on the results of Sections \ref{Mart Approx} and \ref{CMWI} to construct a non-trivial harmonic function $V$ (Corollary \ref{ExofVW}) and state some of its properties (Proposition \ref{IFP}). The idea is to establish the existence of the limit, as $n\to +\infty$, of the expectation $-\bb E_x \left( M_{\tau_y} \,;\, \tau_y \leqslant n \right)$ by means of the Lebesgue dominated convergence theorem. To this end, we state the following lemma.
\begin{lemma} \label{ExitfinitTmanuz} For any $x \in \bb X$ and $z \in \bb R$, \[ \hat{T}_z < + \infty \quad \text{$\bb P_x$-a.s.} \] \end{lemma} \begin{proof} In order to apply Lemmas \ref{Exitfinit} and \ref{ExitfinitTz}, we write, with $y=z-r(x)$, \begin{align*} \bb P_x \left( \hat{T}_z > n \right) &\leqslant \bb P_x \left( \tau_y > \pent{n/2} \right) + \int_{\bb X \times \bb R} \bb P_{x'} \left( T_{z'} > n - \pent{n/2} \right) \bb P_x \left( X_{\pent{n/2}} \in \dd x' \,,\, \right. \\ &\hspace{0.5cm} \left. z+M_{\pent{n/2}} \in \dd z' \,,\, \tau_y \leqslant \pent{n/2} \,,\, \hat{T}_{z} > \pent{n/2} \right). \end{align*} Using \eqref{tauyto0}, \eqref{Tzto0} and the definition of $y$, we have \begin{align*} \bb P_x \left( \hat{T}_z > n \right) \leqslant\;& \frac{c_{\ee}}{n^{\ee}} \left( 1+\max(y,0)+N(x) \right) \\ &+ \frac{c_{\ee}}{n^{\ee}} \bb E_x \left( 1+z+M_{\pent{n/2}} + N \left( X_{\pent{n/2}} \right) \,;\, \tau_y \leqslant \pent{n/2} \,,\, \hat{T}_z > \pent{n/2} \right). \end{align*} By the point \ref{Momdec001} of Hypothesis \ref{Momdec}, \begin{align*} \bb P_x \left( \hat{T}_z > n \right) \leqslant\;& \frac{c_{\ee}}{n^{\ee}} \left( 1+\max(y,0)+N(x) \right) + \frac{c_{\ee}}{n^{\ee}} \bb E_x \left( z+M_{\pent{n/2}} \,;\, \hat{T}_z > \pent{n/2} \right) \\ &- \frac{c_{\ee}}{n^{\ee}} \bb E_x \left( z+M_{\pent{n/2}} \,;\, \tau_y > \pent{n/2} \right). \end{align*} Using \eqref{decMSX}, we see that on the event $\{ \tau_y > \pent{n/2} \}$ we have $z+M_{\pent{n/2}} > r \left( X_{\pent{n/2}} \right)$. Then, by Lemma \ref{MTR} and the point \ref{Momdec001} of Hypothesis \ref{Momdec}, \[ \bb P_x \left( \hat{T}_z > n \right) \leqslant \frac{c_{\ee}}{n^{\ee}} \left( 1+\max(y,0)+N(x) \right) + \frac{c_{\ee}}{n^{\ee}} \bb E_x \left( z+M_{\pent{n/2}} \,;\, \hat{T}_z > \pent{n/2} \right). \] Using Lemma \ref{intofthemartcond}, we have \[ \bb P_x \left( \hat{T}_z > n \right) \leqslant \frac{c_{\ee}}{n^{\ee}} \left( 1+\max(y,0)+N(x) \right). 
\] Finally, we conclude that \[ \bb P_x \left( \hat{T}_z = + \infty \right) = \underset{n\to +\infty}{\lim} \bb P_x \left( \hat{T}_z > n \right) = 0. \] \end{proof} \begin{lemma} \label{intdeMtau} Let $x \in \bb X,$ $y \in \bb R$ and $z=y+r(x)$. The random variables $M_{\hat{T}_z}$, $M_{T_z}$ and $M_{\tau_y}$ are integrable and \[ \max \left\{ \bb E_x \left( \abs{M_{\hat{T}_z}} \right), \bb E_x \left( \abs{M_{T_z}} \right), \bb E_x \left( \abs{M_{\tau_y}} \right) \right\} \leqslant c \left( 1+ \abs{z} + N(x) \right) < + \infty. \] \end{lemma} \begin{proof} The stopping times $\tau_y \wedge n$, $T_z \wedge n$ and $\hat{T}_z \wedge n$ are bounded and satisfy $\tau_y \wedge n \leqslant \hat{T}_z \wedge n$ and $T_z \wedge n \leqslant \hat{T}_z \wedge n.$ Since $(\abs{M_n})_{n\geqslant 0}$ is a submartingale, we have \begin{equation} \label{MtauinfnleqMTinfn} \max \left\{ \bb E_x \left( \abs{M_{\tau_y \wedge n}} \right), \bb E_x \left( \abs{M_{T_z \wedge n}} \right) \right\} \leqslant \bb E_x \left( \abs{M_{\hat{T}_z \wedge n}} \right). \end{equation} Using the optional stopping theorem, \begin{align*} \bb E_x \left( \abs{M_{\hat{T}_z \wedge n}} \right) \leqslant\;& -\bb E_x \left( z+M_{\hat{T}_z} \,;\, \hat{T}_z \leqslant n \right) + \bb E_x \left( \abs{z+M_{n}} \,;\, \tau_y > n \right) \\ & + \bb E_x \left( z+M_{n} \,;\, \tau_y \leqslant n \,,\, \hat{T}_z > n \right) + \abs{z} \\ =\;& -\bb E_x \left( z+M_{n} \,;\, \hat{T}_z \leqslant n \right) - 2\bb E_x \left( z+M_{n} \,;\, z+M_{n} \leqslant 0 \,,\, \tau_y > n \right) \\ & + \bb E_x \left( z+M_{n} \,;\, \tau_y > n \right) + \bb E_x \left( z+M_{n} \,;\, \tau_y \leqslant n \,,\, \hat{T}_z > n \right) + \abs{z}\\ =\;& -z + 2\bb E_x \left( z+M_{n} \,;\, \hat{T}_z > n \right) \\ &- 2\bb E_x \left( z+M_{n} \,;\, z+M_{n} \leqslant 0 \,,\, \tau_y > n \right) + \abs{z}. \end{align*} On the event $\{ z+M_{n} \leqslant 0 \,,\, \tau_y > n \}$, by \eqref{decMSX}, it holds $\abs{z+M_n} \leqslant \abs{r \left( X_n \right)}$. 
Therefore, by Lemma \ref{MTR} and the point \ref{Momdec001} of Hypothesis \ref{Momdec}, we have \[ - 2\bb E_x \left( z+M_{n} \,;\, z+M_{n} \leqslant 0 \,,\, \tau_y > n \right) \leqslant c \left( 1+N(x) \right). \] Using Lemma \ref{intofthemartcond}, \begin{equation} \label{majMTzinfn} \bb E_x \left( \abs{M_{\hat{T}_z}} \,;\, \hat{T}_z \leqslant n \right) \leqslant \bb E_x \left( \abs{M_{\hat{T}_z \wedge n}} \right) \leqslant c \left( 1+ \abs{z} + N(x) \right). \end{equation} By the Lebesgue monotone convergence theorem and the fact that $\hat{T}_z < + \infty$ $\bb P_x$-a.s.\ (Lemma \ref{ExitfinitTmanuz}), we deduce that $M_{\hat{T}_z}$ is $\bb P_x$-integrable and \[ \bb E_x \left( \abs{M_{\hat{T}_z}} \right) \leqslant c \left( 1+ \abs{z} + N(x) \right). \] In the same manner, using \eqref{MtauinfnleqMTinfn}, \eqref{majMTzinfn} and Lemmas \ref{Exitfinit} and \ref{ExitfinitTz}, we conclude that $M_{\tau_y}$ and $M_{T_z}$ are $\bb P_x$-integrable and \[ \max \left\{ \bb E_x \left( \abs{M_{\tau_y}} \right), \bb E_x \left( \abs{M_{T_z}} \right) \right\} \leqslant c \left( 1+ \abs{z} + N(x) \right). \] The assertion of the lemma follows from the last two inequalities. \end{proof} \begin{corollary} \label{ExofVW} For any $x \in \bb X$, $y \in \bb R$ and $z \in \bb R$, the following functions are well defined: \[ W(x,z) := -\bb E_x \left( M_{T_z} \right), \quad \hat{W}(x,z) := -\bb E_x \left( M_{\hat{T}_z} \right) \quad \text{and} \quad V(x,y) := -\bb E_x \left( M_{\tau_y} \right). \] \end{corollary} \begin{proposition}\ \label{IFP} \begin{enumerate}[ref=\arabic*, leftmargin=*, label=\arabic*.] \item \label{IFP001} Let $x\in \bb X,$ $y \in \bb R$ and $z=y+r(x)$.
Then \[ V(x,y) = \underset{n\to +\infty}{\lim} \bb E_x \left( z+M_n \,;\, \tau_y > n \right) = \underset{n\to +\infty}{\lim} \bb E_x \left( y+S_n \,;\, \tau_y > n \right) \] and \begin{align*} W(x,z) &= \underset{n\to +\infty}{\lim} \bb E_x \left( z+M_n \,;\, T_z > n \right),\\ \hat{W}(x,z) &= \underset{n\to +\infty}{\lim} \bb E_x \left( z+M_n \,;\, \hat{T}_z > n \right). \end{align*} \item \label{IFP002} For any $x \in \bb X$, the functions $y\mapsto V(x,y)$, $z\mapsto W(x,z)$ and $z\mapsto \hat{W}(x,z)$ are non-decreasing on $\bb R$. \item \label{IFP003} There exists $\ee_0>0$ such that, for any $\ee \in (0,\ee_0)$, $x\in \bb X$, $z\in \bb R$ and any integer $n_f \geqslant 2$, \begin{equation} \label{WLZX} \hat{W}(x,z) \leqslant \left( 1 + \frac{c_{\ee}}{n_f^\ee} \right) \left( \max(z,0) + cN(x) \right) + c_{\ee} n_f^{1/2} + \e^{-c_{\ee} n_f^{\ee}} N(x) \end{equation} and, for any $x\in \bb X$, $y\in \bb R$ and $z=y+r(x)$, \begin{equation} \label{VWppthW} 0 \leqslant \min \left\{ V(x,y), W(x,z) \right\} \leqslant \max \left\{ V(x,y), W(x,z) \right\} \leqslant \hat{W}(x,z). \end{equation} In particular, for any $x\in \bb X$ and $y\in \bb R$, \begin{equation} \label{encadrV} 0 \leqslant V(x,y) \leqslant c \left( 1 + \max(y,0) + N(x) \right). \end{equation} \item \label{IFP004} For any $x\in \bb X$ and $y\in \bb R$, \[ V(x,y) = \bb E_x \left( V(X_1,y+S_1) \,;\, \tau_y > 1 \right) \] and $\left( V\left( X_n, y+S_n \right) \mathbbm 1_{\left\{ \tau_y > n \right\}} \right)_{n\geqslant 0}$ is a $\bb P_x$-martingale. \end{enumerate} \end{proposition} \begin{proof} \textit{Proof of the claim \ref{IFP001}.} Let $\mathfrak{T}$ be any of the stopping times $\tau_y$, $T_z$ or $\hat{T}_z$. By the optional stopping theorem, \[ \bb E_x \left( z+M_n \,;\, \mathfrak{T} > n \right) = z\bb P_x \left( \mathfrak{T} > n \right) - \bb E_x \left( M_{\mathfrak{T}} \,;\, \mathfrak{T} \leqslant n \right).
\] Using Lemmas \ref{Exitfinit}, \ref{ExitfinitTz}, \ref{ExitfinitTmanuz}, \ref{intdeMtau} and the Lebesgue dominated convergence theorem, \[ \underset{n\to +\infty}{\lim} \bb E_x \left( z+M_n \,;\, \mathfrak{T} > n \right) = -\bb E_x \left( M_{\mathfrak{T}} \right). \] Moreover, by \eqref{decMSX}, \[ \bb E_x \left( y+S_n \,;\, \tau_y > n \right) = \bb E_x \left( z+M_n \,;\, \tau_y > n \right) - \bb E_x \left( r\left(X_n\right) \,;\, \tau_y > n \right). \] Since, by the Cauchy--Schwarz inequality, Lemma \ref{MTR}, the point \ref{Momdec001} of Hypothesis \ref{Momdec} and Lemma \ref{Exitfinit}, we have \begin{align} \abs{\bb E_x \left( r\left(X_n\right) \,;\, \tau_y > n \right)} &\leqslant c\bb E_x^{1/2} \left( \left( 1+N \left( X_n \right) \right)^2 \right) \bb P_x^{1/2} \left( \tau_y > n \right) \nonumber\\ &\leqslant c \left( 1+N(x) \right) \bb P_x^{1/2} \left( \tau_y > n \right) \underset{n\to +\infty}{\longrightarrow} 0, \label{intrXn} \end{align} the claim \ref{IFP001} follows. \textit{Proof of the claim \ref{IFP002}.} Let $x\in \bb X$. For any $y' \leqslant y$, we obviously have $\tau_{y'} \leqslant \tau_y$. Therefore, \[ \bb E_x \left( y'+S_n \,;\, \tau_{y'} > n \right) \leqslant \bb E_x \left( y+S_n \,;\, \tau_{y'} > n \right) \leqslant \bb E_x \left( y+S_n \,;\, \tau_{y} > n \right). \] Taking the limit as $n\to +\infty$ and using the claim \ref{IFP001}, it follows that $V(x,y')\leqslant V(x,y)$. In the same way $W(x,z') \leqslant W(x,z)$ for $z'\leqslant z$. To prove the monotonicity of $\hat{W}$, we note that, for any $z' \leqslant z,$ $y'=z'-r(x)$ and $y=z-r(x)$, we have $\hat{T}_{z'} = \min \{ k \geqslant \tau_{y'},\, z'+M_k \leqslant 0 \} \leqslant \min \{ k \geqslant \tau_y,\, z'+M_k \leqslant 0 \} \leqslant \hat{T}_z$.
So \begin{align*} \bb E_x \left( z'+M_n \,;\, \hat{T}_{z'} > n \right) \leqslant\;& \bb E_x \left( z+M_n \,;\, \hat{T}_{z'} > n \,,\, \hat{T}_z > n \right)\\ \leqslant\;& \bb E_x \left( y+S_n \,;\, \tau_y > n \right) + \bb E_x \left( \abs{r\left( X_n \right)} \,;\, \tau_y > n \right) \\ &+ \bb E_x \left( z+M_n \,;\, \tau_y \leqslant n \,,\, \hat{T}_z > n \right) \\ \leqslant\;& \bb E_x \left( z+M_n \,;\, \hat{T}_z>n \right) + 2\bb E_x \left( \abs{r\left( X_n \right)} \,;\, \tau_y > n \right). \end{align*} As in \eqref{intrXn}, taking the limit as $n\to +\infty$, by the claim \ref{IFP001}, we have $\hat{W}(x,z')\leqslant \hat{W}(x,z)$. \textit{Proof of the claim \ref{IFP003}.} The inequality \eqref{WLZX} is a direct consequence of the claim \ref{IFP001} and Lemma \ref{intofthemartcond}. Moreover, taking the limit as $n\to \infty$ in \eqref{VnPPWn}, we get $V(x,y) \leqslant \hat{W}(x,z)$. To bound $W$, we write \begin{align*} \bb E_x \left( z+M_n \,;\, T_z > n \right) &\leqslant \bb E_x \left( z+M_n \,;\, \tau_y \leqslant n \,,\, \hat{T}_z > n \,,\, T_z > n \right) \\ &\qquad \qquad + \bb E_x \left( z+M_n \,;\, z+M_n > 0 \,,\, \tau_y > n \,,\, T_z > n \right). \end{align*} Since $z+M_n >0$ on the event $\{ \tau_y \leqslant n \,,\, \hat{T}_z > n \}$, \begin{align*} \bb E_x \left( z+M_n \,;\, T_z > n \right) &\leqslant \bb E_x \left( z+M_n \,;\, \tau_y \leqslant n \,,\, \hat{T}_z > n \right) \\ &\qquad \qquad+ \bb E_x \left( z+M_n \,;\, z+M_n > 0 \,,\, \tau_y > n \right) \\ &= \bb E_x \left( z+M_n \,;\, \hat{T}_z > n \right) \\ &\qquad \qquad- \bb E_x \left( z+M_n \,;\, z+M_n \leqslant 0 \,,\, \tau_y > n \right). \end{align*} Using the approximation \eqref{decMSX}, \begin{equation} \label{MetMchap} \bb E_x \left( z+M_n \,;\, T_z > n \right) \leqslant \bb E_x \left( z+M_n \,;\, \hat{T}_z > n \right) + \bb E_x \left( \abs{r\left(X_n\right)} \,;\, \tau_y > n \right). \end{equation} As in \eqref{intrXn}, using the claim \ref{IFP001}, \[ W(x,z) \leqslant \hat{W}(x,z). 
\] Now, since $y+S_n$ is positive on the event $\{ \tau_y > n \}$, by the claim \ref{IFP001}, we see that $V(x,y) \geqslant 0$ and in the same way $W(x,z) \geqslant 0$. This proves \eqref{VWppthW}. Inequality \eqref{encadrV} follows from \eqref{WLZX} and \eqref{VWppthW}. \textit{Proof of the claim \ref{IFP004}.} By the Markov property, \begin{align*} \bb E_x \left( y+S_{n+1} \,;\, \tau_y > n+1 \right) &= \int_{\bb X \times \bb R} \bb E_{x'} \left( y'+S_n \,;\, \tau_{y'} > n \right) \\ &\qquad \qquad \times \bb P_x \left( X_1 \in \dd x' \,,\, y+S_1 \in \dd y' \,,\, \tau_y > 1 \right), \end{align*} where, by Corollary \ref{unifmajdelafoncinv}, $\bb E_{x'} \left( y'+S_n \,;\, \tau_{y'} > n \right) \leqslant c \left( 1+\abs{y'} + N\left(x'\right) \right)$ and by the point \ref{Momdec001} of Hypothesis \ref{Momdec}, \[ c\bb E_x \left( 1+\abs{y+S_1} + N\left(X_1\right) \right) \leqslant c \left( 1+\abs{y}+N(x) \right) < +\infty. \] The Lebesgue dominated convergence theorem implies that \[ V(x,y) = \bb E_x \left( V \left( X_1, y+S_1 \right) \,;\, \tau_y > 1 \right). \] \end{proof} \section{Positivity of the harmonic function} \label{PosHaFun} The aim of this section is to prove that the harmonic function $V$ is non-identically zero and to specify its domain of positivity $\mathscr{D}_{+}(V)$. For any $x\in \bb X$, $z\in \bb R$ and $n\in \bb N$, denote, for brevity, \[ \hat{W}_n(x,z) = \hat{W} \left( X_n, z+M_n \right) \mathbbm 1_{\left\{\hat{T}_z > n \right\}}. \] Although it is easy to verify that $\hat{W}(x,z) \geqslant z$ (see Lemma \ref{prophW}), which, in turn, ensures that $\hat{W}(x,z)>0$ for any $z>0$, there is no straightforward way to bound the function $V$ from below. We take advantage of the lower bound $V(x,y) \geqslant \lim_{n} \bb E_x ( \hat{W}_n(x,z) \,;\, \tau_y > n )$, where $z=y+r(x)$ (Lemma \ref{minpos001}), and of the fact that $( \hat{W}_n(x,z) \mathbbm 1_{\left\{ \tau_y > n \right\}} )_{n\geqslant 0}$ is a $\bb P_x$-supermartingale (Lemma \ref{prophW}).
By an iterative procedure, we obtain a lower bound for $V$ (Lemma \ref{PosdeV}), which is subsequently used to prove the positivity of $V$ (Lemma \ref{posdeVsurDgamma}). \begin{lemma}\ \label{prophW} \begin{enumerate}[ref=\arabic*, leftmargin=*, label=\arabic*.] \item \label{prophW001} For any $x\in \bb X$ and $z\in \bb R$, \[ \hat{W}(x,z) \geqslant z. \] \item \label{prophW001bis} For any $x\in \bb X$, \[ \underset{z\to +\infty}{\lim} \frac{\hat{W}(x,z)}{z} = 1. \] \item \label{prophW002} For any $x\in \bb X$, $z\in \bb R$ and $n\in \bb N$, \[ \bb E_x \left( \hat{W}_n(x,z) \right) \geqslant \hat{W}(x,z). \] \item \label{prophW003} For any $x\in \bb X$, $z\in \bb R$ and $y=z-r(x)$, $\left( \hat{W}_n(x,z) \mathbbm 1_{\left\{ \tau_y > n \right\}} \right)_{n\geqslant 0}$ is a $\bb P_x$-supermartingale. \end{enumerate} \end{lemma} \begin{proof} \textit{Proof of the claim \ref{prophW001}.} By the Doob optional stopping theorem and the definition of $\hat{T}_z$, \[ \bb E_x \left( z+M_n \,;\, \hat{T}_z > n \right) = z - \bb E_x \left( z+M_{\hat{T}_z} \,;\, \hat{T}_z \leqslant n \right) \geqslant z. \] Taking the limit as $n\to+\infty$ and using the point \ref{IFP001} of Proposition \ref{IFP} proves the claim \ref{prophW001}. \textit{Proof of the claim \ref{prophW001bis}.} By the claim \ref{prophW001}, $\liminf_{z\to +\infty} \hat{W}(x,z)/z \geqslant 1$. Moreover, by \eqref{WLZX}, for any $n_f \geqslant 2$, \[ \underset{z\to +\infty}{\limsup} \frac{\hat{W}(x,z)}{z} \leqslant \left( 1 + \frac{c_{\ee}}{n_f^\ee} \right). \] Taking the limit as $n_f \to +\infty$, the claim follows. \textit{Proof of the claim \ref{prophW002}.} Let $y=z-r(x)$.
Using the Markov property, as in the proof of Lemma \ref{MarkovpropforTz}, \begin{align} \bb E_x \left( z+M_{n+k} \,;\, \hat{T}_z > n+k \right) =\;& \int_{\bb X \times \bb R} \bb E_{x'} \left( z'+M_n \,;\, \hat{T}_{z'} > n \right) \nonumber\\ &\quad \times \bb P_x \left( X_k \in \dd x' \,,\, z+M_k \in \dd z' \,,\, \tau_y > k \right) \nonumber\\ \label{decint} &+\int_{\bb X \times \bb R} \bb E_{x'} \left( z'+M_n \,;\, T_{z'} > n \right) \\ &\quad \times \bb P_x \left( X_k \in \dd x' \,,\, z+M_k \in \dd z' \,,\, \tau_y \leqslant k \,,\, \hat{T}_z > k \right). \nonumber \end{align} We now determine the limits, as $n \to +\infty$, of the two terms on the right-hand side. By Lemmas \ref{intofthemartcond} and \ref{MTR}, $\bb E_{x'} \left( z'+M_n \,;\, \hat{T}_{z'} > n \right) \leqslant c \left( 1+\abs{y'} + N\left(x'\right) \right)$, with $y'=z'-r(x')$. Moreover, by the point \ref{Momdec001} of Hypothesis \ref{Momdec}, $\bb E_x \left( 1+\abs{y+S_k} + N\left(X_k\right) \right) \leqslant ck \left( 1+\abs{y}+N(x) \right) < +\infty$. So, by the Lebesgue dominated convergence theorem and the point \ref{IFP001} of Proposition \ref{IFP}, \begin{align} \int_{\bb X \times \bb R} \bb E_{x'} \left( z'+M_n \,;\, \hat{T}_{z'} > n \right) &\bb P_x \left( X_k \in \dd x' \,,\, z+M_k \in \dd z' \,,\, \tau_y > k \right) \nonumber\\ &\hspace{2cm} \underset{n\to+\infty}{\longrightarrow} \bb E_x \left( \hat{W} \left( X_k, z+M_k \right) \,;\, \tau_y > k \right). \label{limint001} \end{align} Moreover, using \eqref{MetMchap}, Lemmas \ref{intofthemartcond} and \ref{MTR} and the point \ref{Momdec001} of Hypothesis \ref{Momdec}, \[ \bb E_{x'} \left( z'+M_n \,;\, T_{z'} > n \right) \leqslant c \left( 1+\abs{z'} + N\left(x'\right) \right).
\] Again, by the Lebesgue dominated convergence theorem and the point \ref{IFP001} of Proposition \ref{IFP}, we have \begin{align} &\int_{\bb X \times \bb R} \bb E_{x'} \left( z'+M_n \,;\, T_{z'} > n \right) \bb P_x \left( X_k \in \dd x' \,,\, z+M_k \in \dd z' \,,\, \tau_y \leqslant k \,,\, \hat{T}_z > k \right) \nonumber\\ &\hspace{5cm} \underset{n\to+\infty}{\longrightarrow} \bb E_x \left( W \left( X_k, z+M_k \right) \,;\, \tau_y \leqslant k \,,\, \hat{T}_z > k \right). \label{limint002} \end{align} Putting together \eqref{decint}, \eqref{limint001}, \eqref{limint002} and using the point \ref{IFP001} of Proposition \ref{IFP}, \begin{align} \hat{W}(x,z) =\;& \bb E_x \left( \hat{W} \left( X_k, z+M_k \right) \,;\, \tau_y > k \right) \nonumber\\ &+ \bb E_x \left( W \left( X_k, z+M_k \right) \,;\, \tau_y \leqslant k \,,\, \hat{T}_z > k \right). \label{marthW} \end{align} Now, taking into account \eqref{VWppthW} and the identity $\{ \tau_y > k \} = \{ \tau_y > k,\; \hat{T}_z > k \}$, we obtain the claim \ref{prophW002}. \textit{Proof of the claim \ref{prophW003}.} By the point \ref{IFP003} of Proposition \ref{IFP}, $W$ is a non-negative function. Therefore, using \eqref{marthW}, \[ \hat{W}(x,z) \geqslant \bb E_x \left( \hat{W} \left( X_1, z+M_1 \right) \,;\, \tau_y > 1 \right), \] which implies that $\left( \hat{W}_n(x,z) \mathbbm 1_{\left\{ \tau_y > n \right\}} \right)_{n\geqslant 0}$ is a supermartingale. \end{proof} \begin{lemma} \label{minpos001} For any $x\in \bb X,$ $y \in \bb R$ and $z=y+r(x)$, \[ V(x,y) = \underset{n\to+\infty}{\lim}\, \bb E_x \left( \hat{W}_n(x,z) \,;\, \tau_y > n \right). \] \end{lemma} \begin{proof} For any $n\in \bb N$, $x\in \bb X,$ $y \in \bb R$ and $z=y+r(x)$, \[ \bb E_x \left( z+M_n \,;\, \tau_y > n \right) = \bb E_x \left( z+M_n \,;\, \hat{T}_z > n \right) - \bb E_x \left( z+M_n \,;\, \tau_y \leqslant n \,,\, \hat{T}_z > n \right). 
\] By the point \ref{prophW001} of Lemma \ref{prophW}, $z+M_n \leqslant \hat{W}_n(x,z)$ and therefore \begin{align} \bb E_x \left( z+M_n \,;\, \tau_y > n \right) \geqslant \bb E_x \left( z+M_n \,;\, \hat{T}_z > n \right) &- \bb E_x \left( \hat{W}_n(x,z) \right) \nonumber\\ \label{minorzMn001} &+ \bb E_x \left( \hat{W}_n(x,z) \,;\, \tau_y > n \right). \end{align} Moreover, by \eqref{WLZX}, for any $\delta >0$, \begin{align*} \bb E_x \left( \hat{W}_n(x,z) \right) \leqslant\;& \left( 1+\delta \right) \bb E_x \left( z+M_n \,;\, \hat{T}_z > n \right) + c_{\delta} \bb E_x \left( 1+N\left(X_n\right) \,;\, \hat{T}_z > n \right) \\ & - (1+\delta) \bb E_x \left( z+M_n \,;\, z+M_n < 0 \,,\, \tau_y > n \right). \end{align*} On the event $\{ z+M_n < 0 \,,\, \tau_y > n \}$, by \eqref{decMSX}, it holds $r\left( X_n \right) < z+M_n < 0$. Therefore, using Lemma \ref{MTR}, \[ \bb E_x \left( \hat{W}_n(x,z) \right) \leqslant \left( 1+\delta \right) \bb E_x \left( z+M_n \,;\, \hat{T}_z > n \right) + c_{\delta} \bb E_x\left( 1+N\left(X_n\right) \,;\, \hat{T}_z > n \right). \] By the Markov property and \eqref{decexpN}, \begin{align*} c_{\delta} \bb E_x\left( 1+N\left(X_n\right) \,;\, \hat{T}_z > n \right) &\leqslant c_{\delta} \bb E_x \left( 1+\e^{-cn/2} N\left( X_{\pent{n/2}} \right) \,;\, \hat{T}_z > \pent{n/2} \right) \\ &\leqslant c_{\delta} \bb P_x \left( \hat{T}_z > \pent{n/2} \right) + \e^{-c_{\delta}n} \left( 1+N(x) \right). \end{align*} By Lemma \ref{ExitfinitTmanuz} and the point \ref{IFP001} of Proposition \ref{IFP}, \begin{equation} \label{limdeWn} \underset{n\to+\infty}{\lim} \bb E_x \left( \hat{W}_n(x,z) \right) \leqslant \left( 1+\delta \right) \hat{W}(x,z). \end{equation} Taking the limit as $n\to +\infty$ in \eqref{minorzMn001} and using the previous bound, we obtain that \[ V(x,y) \geqslant -\delta \hat{W}(x,z) + \underset{n\to+\infty}{\lim}\, \bb E_x \left( \hat{W}_n(x,z) \,;\, \tau_y > n \right).
\] Since this inequality holds true for any $\delta >0$ small enough, we obtain the bound \begin{equation} \label{VppquelimWn} \underset{n\to+\infty}{\lim}\, \bb E_x \left( \hat{W}_n(x,z) \,;\, \tau_y > n \right) \leqslant V(x,y). \end{equation} Now, by the point \ref{prophW001} of Lemma \ref{prophW}, \[ \bb E_x \left( z+M_n \,;\, \tau_y > n \right) \leqslant \bb E_x \left( \hat{W}(X_n,z+M_n) \,;\, \tau_y > n \right). \] Taking the limit as $n\to +\infty$ and using the point \ref{IFP001} of Proposition \ref{IFP}, we obtain that \[ V(x,y) \leqslant \underset{n\to+\infty}{\lim}\, \bb E_x \left( \hat{W}_n(x,z) \,;\, \tau_y > n \right). \] Together with \eqref{VppquelimWn}, this concludes the proof. \end{proof} \begin{remark} Taking the limit in the point \ref{prophW002} of Lemma \ref{prophW}, we can deduce that $\lim_{n\to+\infty} \bb E_x \left( \hat{W}_n(x,z) \right) \geqslant \hat{W}(x,z)$. Coupling this result with \eqref{limdeWn}, it follows that \[ \underset{n\to+\infty}{\lim} \bb E_x \left( \hat{W}_n(x,z) \right) = \hat{W}(x,z). \] \end{remark} \begin{lemma} \label{mWtau} There exists $\ee_0>0$ such that, for any $\ee \in (0,\ee_0)$, $n\in \bb N$, $x\in \bb X,$ $z\in \bb R$ and $y=z-r(x)$, we have \[ \bb E_x \left( \hat{W}_n(x,z) \,;\, \tau_y > n \right) \geqslant \hat{W}(x,z) + c \min(z,0) -c_{\ee} \left( n^{1/2-2\ee} + n^{2\ee} N(x) \right). \] \end{lemma} \begin{proof} Using the point \ref{prophW002} of Lemma \ref{prophW}, the upper bound for $\hat{W}(x,z)$ given by \eqref{WLZX} and the point \ref{Momdec001} of Hypothesis \ref{Momdec}, \begin{align*} \bb E_x \left( \hat{W}_n(x,z) \,;\, \tau_y > n \right) =\;& \bb E_x \left( \hat{W}_n(x,z) \right) - \bb E_x \left( \hat{W}_n(x,z) \,;\, \tau_y \leqslant n \right) \\ \geqslant\;& \hat{W}(x,z) - c \bb E_x \left( z+M_n \,;\, \tau_y \leqslant n \,,\, \hat{T}_z > n \right) - c \left( 1+N\left(x\right) \right).
\end{align*} By the point \ref{Momdec001} of Hypothesis \ref{Momdec}, Lemma \ref{firstupperbound} and the Doob optional stopping theorem, \begin{align*} \bb E_x \left( \hat{W}_n(x,z) \,;\, \tau_y > n \right) \geqslant \hat{W}(x,z) &- c \left[ \bb E_x \left( z+M_n \,;\, \hat{T}_z > n \right) - \bb E_x \left( z+M_n \,;\, \tau_y > n \right) \right]\\ &- c \left( 1+N\left(x\right) \right) \\ \geqslant \hat{W}(x,z) &- c \left[ \max(z,0) - z + \bb E_x \left( z+M_{\tau_y} \,;\, \tau_y \leqslant n \right) \right] \\ &-c_{\ee} \left( n^{1/2-2\ee} + n^{2\ee} N(x) \right) - c \left( 1+N\left(x\right) \right). \end{align*} By \eqref{decMSX}, $z+M_{\tau_y} \leqslant r\left( X_{\tau_y} \right)$. Therefore, in the same way as in the proof of \eqref{NXT}, \[ \bb E_x \left( z+M_{\tau_y} \,;\, \tau_y \leqslant n \right) \leqslant c\bb E_x \left( 1+N \left( X_{\tau_y} \right) \,;\, \tau_y \leqslant n \right) \leqslant c_{\ee} n^{1/2-2\ee} + c_{\ee} N(x). \] Together with the previous bound, this implies that \[ \bb E_x \left( \hat{W}_n(x,z) \,;\, \tau_y > n \right) \geqslant \hat{W}(x,z) + c \min(z,0) -c_{\ee} \left( n^{1/2-2\ee} + n^{2\ee} N(x) \right). \] \end{proof} \begin{lemma} \label{mWtau2} There exists $\ee_0>0$ such that, for any $\ee \in (0,\ee_0)$, $n\geqslant 2$, $n_f \in \{2, \dots, n\}$, $x\in \bb X$ and $z\in \bb R$, with $y=z-r(x)$, we have \[ \bb E_x \left( \hat{W}_n(x,z) \,;\, \tau_y > n \right) \geqslant \bb E_x \left( \hat{W}_{n_f}(x,z) \,;\, \tau_y > n_f \right) - \frac{c_{\ee}}{n_f^\ee} \left( \max(z,0) + 1 + N(x) \right). \] \end{lemma} \begin{proof} Let $\ee \in (0,1).$ Consider the stopping time $\nu_n^\ee = \nu_n + \pent{n^\ee}$.
By the Markov property, with $y'=z'-r(x')$, \begin{align*} \bb E_x \left( \hat{W}_n(x,z) \,;\, \tau_y > n \right) &\geqslant \bb E_x \left( \hat{W}_n(x,z) \,;\, \tau_y > n \,,\, \nu_n^\ee \leqslant \pent{n^{1-\ee}} \right) \\ &= \sum_{k=\pent{n^{\ee}}+1}^{\pent{n^{1-\ee}}} \int_{\bb X \times \bb R} \bb E_{x'} \left( \hat{W}_{n-k}(x',z') \,;\, \tau_{y'} > n-k \right) \\ &\hspace{2cm} \times \bb P_x \left( X_k \in \dd x' \,,\, z+M_k \in \dd z' \,,\, \tau_y > k \,,\, \nu_n^\ee =k \right). \end{align*} Using Lemma \ref{mWtau}, we obtain \begin{align*} \bb E_x \left( \hat{W}_n(x,z) \,;\, \tau_y > n \right) \geqslant\;& \bb E_x \left( \hat{W}_{\nu_n^\ee}(x,z) \,;\, \tau_y > \nu_n^\ee \,,\, \nu_n^\ee \leqslant \pent{n^{1-\ee}} \right)\\ &+ c \bb E_x \left( \min \left(z+M_{\nu_n^\ee},0\right) \,;\, \tau_y > \nu_n^\ee \,,\, \nu_n^\ee \leqslant \pent{n^{1-\ee}} \right) \\ &- c_{\ee} \bb E_x \left( n^{1/2-2\ee} + n^{2\ee} N\left( X_{\nu_n^\ee} \right) \,;\, \tau_y > \nu_n^\ee \,,\, \nu_n^\ee \leqslant \pent{n^{1-\ee}} \right). \end{align*} On the event $\{z+M_{\nu_n^\ee} \leqslant 0 \,,\, \tau_y > \nu_n^\ee \}$, by \eqref{decMSX}, we have $0 \geqslant z+M_{\nu_n^\ee} \geqslant r\left( X_{\nu_n^\ee} \right)$. Therefore, by Lemma \ref{MTR}, \begin{align*} &\bb E_x \left( \min\left(z+M_{\nu_n^\ee},0\right) \,;\, \tau_y > \nu_n^\ee \,,\, \nu_n^\ee \leqslant \pent{n^{1-\ee}} \right) \\ &\hspace{6cm} \geqslant -c\bb E_x \left( 1+N\left( X_{\nu_n^\ee} \right) \,;\, \tau_y > \nu_n^\ee \,,\, \nu_n^\ee \leqslant \pent{n^{1-\ee}} \right).
\end{align*} Consequently, using the point \ref{prophW003} of Lemma \ref{prophW} and \eqref{decexpN}, \begin{align} \bb E_x \left( \hat{W}_n(x,z) \,;\, \tau_y > n \right) \geqslant\;& \bb E_x \left( \hat{W}_{\pent{n^{1-\ee}}}(x,z) \,;\, \tau_y > \pent{n^{1-\ee}} \,,\, \nu_n^\ee \leqslant \pent{n^{1-\ee}} \right) \nonumber\\ &- c_{\ee} \bb E_x \left( n^{1/2-2\ee} + \e^{-c_{\ee}n^\ee} N\left( X_{\nu_n} \right) \,;\, \tau_y > \nu_n \,,\, \nu_n \leqslant \pent{n^{1-\ee}} \right) \nonumber\\ \geqslant\;& \bb E_x \left( \hat{W}_{\pent{n^{1-\ee}}}(x,z) \,;\, \tau_y > \pent{n^{1-\ee}} \right) - \e^{-c_{\ee} n^\ee} \left( 1+N(x) \right) \nonumber\\ \label{minordec} &- \frac{c_{\ee}}{n^\ee} \underbrace{\bb E_x \left( z+M_{\nu_n} \,;\, \tau_y > \nu_n \,,\, \nu_n \leqslant \pent{n^{1-\ee}} \right)}_{=:I_1} \\ &-\underbrace{\bb E_x \left( \hat{W}_{\pent{n^{1-\ee}}}(x,z) \,;\, \tau_y > \pent{n^{1-\ee}} \,,\, \nu_n^\ee > \pent{n^{1-\ee}} \right)}_{=:I_2}. \nonumber \end{align} \textit{Bound of $I_1$.} Using the fact that $\{ \tau_y > \nu_n \} \subseteq \{ \hat{T}_z > \nu_n \}$ combined with the positivity of $z+M_{\nu_n}$ and using Lemma \ref{Mnsubmartingale}, we have \begin{align*} I_1 &\leqslant \bb E_x \left( z+M_{\pent{n^{1-\ee}}} \,;\, \hat{T}_z > \pent{n^{1-\ee}} \,,\, \nu_n \leqslant \pent{n^{1-\ee}} \right) \\ &\leqslant \bb E_x \left( z+M_{\pent{n^{1-\ee}}} \,;\, \hat{T}_z > \pent{n^{1-\ee}} \right) - J_{21}'', \end{align*} where $J_{21}''$ is defined in \eqref{decJ21'}. Now, it follows from Lemma \ref{Mnsubmartingale} and the point \ref{IFP001} of Proposition \ref{IFP}, that $( \bb E_x ( z+M_{\pent{n^{1-\ee}}} \,;\, \hat{T}_z > \pent{n^{1-\ee}} ) )_{n\in \bb N}$ is a non-decreasing sequence which converges to $\hat{W}(x,z)$ and so $\bb E_x ( z+M_{\pent{n^{1-\ee}}} \,;\, \hat{T}_z > \pent{n^{1-\ee}} ) \leqslant \hat{W}(x,z)$. 
Using \eqref{MajJ21''}, we find that \begin{equation} \label{minorWntau001} I_1 \leqslant \hat{W}(x,z) + \e^{-c_{\ee} n^{\ee}} \left( 1+N(x) \right). \end{equation} \textit{Bound of $I_2$.} By \eqref{WLZX}, \begin{align*} I_2 &\leqslant c \bb E_x \left( z+M_{\pent{n^{1-\ee}}} \left( 1- \mathbbm 1_{\left\{ z+M_{\pent{n^{1-\ee}}} < 0 \right\}} \right) \,;\, \hat{T}_z > \pent{n^{1-\ee}} \,,\, \nu_n^\ee > \pent{n^{1-\ee}} \right) \\ &\hspace{1.5cm}+ c \bb E_x \left( 1+N \left( X_{\pent{n^{1-\ee}}} \right) \,;\, \hat{T}_z > \pent{n^{1-\ee}} \,,\, \nu_n^\ee > \pent{n^{1-\ee}} \right). \end{align*} On the event $\{ z+M_{\pent{n^{1-\ee}}} < 0 \,,\, \hat{T}_z > \pent{n^{1-\ee}} \} = \{z+M_{\pent{n^{1-\ee}}} < 0\,,\, \tau_y > \pent{n^{1-\ee}} \}$, it holds $z+M_{\pent{n^{1-\ee}}} > r \left( X_{\pent{n^{1-\ee}}} \right)$. Therefore, using Lemma \ref{MTR}, \[ I_2 \leqslant c \bb E_x \left( z+M_{\pent{n^{1-\ee}}} + 1+N \left( X_{\pent{n^{1-\ee}}} \right) \,;\, \hat{T}_z > \pent{n^{1-\ee}} \,,\, \nu_n^\ee > \pent{n^{1-\ee}} \right). \] By Lemma \ref{Mnsubmartingale}, $\bb E_x \left( z+M_{\pent{n^{1-\ee}}} \,;\, \hat{T}_z > \pent{n^{1-\ee}} \,,\, \nu_n^\ee > \pent{n^{1-\ee}} \right) \leqslant J_1$, where $J_1$ is defined in \eqref{UBJ1ET2}. Using inequalities \eqref{MajJ1}, \eqref{decexpN} and Lemma \ref{concentnu}, with $m_{\ee} = \pent{n^{1-\ee}}-\pent{n^\ee}$, we obtain \begin{align} I_2 &\leqslant \e^{-c_\ee n^\ee} \left( 1+N(x) \right) + c \bb E_x \left( 1+ \e^{-cn^{\ee}} N\left( X_{m_{\ee}} \right) \,;\, \hat{T}_z > m_{\ee} \,,\, \nu_n > m_{\ee} \right) \nonumber\\ &\leqslant \e^{-c_\ee n^\ee} \left( 1+N(x) \right). 
\label{minorWntau002} \end{align} Putting together \eqref{minorWntau002}, \eqref{minorWntau001} and \eqref{minordec} and using \eqref{WLZX}, we have \begin{align*} \bb E_x \left( \hat{W}_n(x,z) \,;\, \tau_y > n \right) \geqslant\;& \bb E_x \left( \hat{W}_{\pent{n^{1-\ee}}}(x,z) \,;\, \tau_y > \pent{n^{1-\ee}} \right) \\ &- \frac{c_{\ee}}{n^\ee} \left( \max(z,0) + 1 + N(x) \right). \end{align*} By the point \ref{prophW003} of Lemma \ref{prophW}, $( \bb E_x ( \hat{W}_n(x,z) \,;\, \tau_y > n ) )_{n\in \bb N}$ is non-increasing. So using Lemma \ref{lemanalyse2} we conclude that, for any $n\geqslant 2$ and $n_f \in \{ 2, \dots, n \}$, \[ \bb E_x \left( \hat{W}_n(x,z) \,;\, \tau_y > n \right) \geqslant \bb E_x \left( \hat{W}_{n_f}(x,z) \,;\, \tau_y > n_f \right) - \frac{c_{\ee}}{n_f^\ee} \left( \max(z,0) + 1 + N(x) \right). \] \end{proof} \begin{proposition}\ \label{PosdeV} \begin{enumerate}[ref=\arabic*, leftmargin=*, label=\arabic*.] \item \label{PosdeV001} For any $\delta \in (0,1)$, $x\in \bb X$ and $y >0$, \[ V(x,y) \geqslant \left( 1- \delta \right) y - c_{\delta} \left( 1 + N(x) \right). \] \item \label{PosdeV002} For any $x\in \bb X$, \[ \underset{y\to +\infty}{\lim} \frac{V(x,y)}{y} = 1. \] \end{enumerate} \end{proposition} \begin{proof} \textit{Claim \ref{PosdeV001}.} By Lemmas \ref{mWtau2} and \ref{minpos001}, we immediately have, with $z=y+r(x)$, \[ V(x,y) \geqslant \bb E_x \left( \hat{W}_{n_f}(x,z) \,;\, \tau_y > n_f \right) - \frac{c_{\ee}}{n_f^\ee} \left( \max(z,0) + 1 + N(x) \right). \] Using the point \ref{prophW001} of Lemma \ref{prophW} and the point \ref{majmart002} of Lemma \ref{majmart}, \begin{align*} V(x,y) &\geqslant \bb E_x \left( z+M_{n_f} \,;\, \tau_y > n_f \right) - \frac{c_{\ee}}{n_f^\ee} \left( \max(z,0) + 1 + N(x) \right) \\ &\geqslant z \bb P_x \left( \tau_y > n_f \right) - c \left( \sqrt{n_f} + N(x) \right) - \frac{c_{\ee}}{n_f^\ee} \left( \max(z,0) + 1 + N(x) \right). 
\end{align*} Since, by the Markov inequality, \[ \bb P_x \left( \tau_y > n_f \right) \geqslant \bb P_x \left( \underset{1\leqslant k \leqslant n_f}{\max} \abs{f\left( X_k \right)} < \frac{y}{n_f} \right) \geqslant 1-\frac{ cn_f^2 \left( 1+N(x) \right)}{y}, \] we obtain, by the definition of $z$, \begin{equation} \label{PosdeV001ter} V(x,y) \geqslant \left( 1-\frac{c_{\ee}}{n_f^\ee} \right) y - c_{\ee} n_f^2 \left( 1 + N(x) \right). \end{equation} Let $\delta \in (0,1)$. Taking $n_f$ large enough, we obtain the desired inequality. \textit{Claim \ref{PosdeV002}.} By the claim \ref{PosdeV001}, for any $\delta \in (0,1)$ and $x\in \bb X$, we have that $\liminf_{y\to +\infty} V(x,y)/y \geqslant 1-\delta$. Taking the limit as $\delta \to 0$, we obtain the lower bound. Now, by \eqref{VWppthW} and \eqref{WLZX}, for any integer $n_f \geqslant 2$, $y\in \bb R$ and $z=y+r(x)$, \[ V(x,y) \leqslant \hat{W}(x,z) \leqslant \left( 1 + \frac{c_{\ee}}{n_f^\ee} \right) \left( \max(z,0) + cN(x) \right) + c_{\ee} n_f^{1/2} + \e^{-c_{\ee} n_f^{\ee}} N(x). \] Using the definition of $z$, we conclude that \[ \underset{y\to +\infty}{\limsup} \frac{V(x,y)}{y} \leqslant \underset{n_f \to +\infty}{\lim} \left( 1 + \frac{c_{\ee}}{n_f^\ee} \right) = 1. \] \end{proof} Now, for any $\gamma > 0$, consider the stopping time \[ \zeta_{\gamma} := \inf\left\{ k \geqslant 1, \, \abs{y+S_k} > \gamma \left( 1+N \left( X_k \right) \right) \right\}. \] The control on the tail of $\zeta_{\gamma}$ is given by the following lemma. \begin{lemma} \label{concentmu} For any $\gamma>0$, $x \in \bb X$, $y \in \bb R$ and $n \geqslant 1$, \[ \bb P_x \left( \zeta_\gamma > n \right) \leqslant \e^{-c_{\gamma} n} \left( 1 + N \left( x \right) \right). \] \end{lemma} \begin{proof} The reasoning is very close to the proof of Lemma \ref{concentnu}. Let $\gamma > 0$. Consider an integer $l\geqslant 1$, to be chosen later.
Define $K:= \pent{\frac{n}{2l}}$ and introduce the event $A_{k,y}^{\gamma} := \underset{k' \in \{ 1, \dots, k\}}{\bigcap}\left\{ \abs{y+S_{k'l}} \leqslant \gamma \left( 1+N \left( X_{k'l} \right) \right) \right\}$. We have \[ \bb P_x \left( \zeta_\gamma > n \right) \leqslant \bb P_x \left( A_{2K,y}^{\gamma} \right). \] By the Markov property, \begin{align} \bb P_x \left( A_{2K,y}^{\gamma} \right) = \int_{\bb X \times \bb R} &\int_{\bb X \times \bb R} \bb P_{x''} \left( A_{1,y''}^{\gamma} \right) \bb P_{x'} \left( X_l \in \dd x'' \,,\, y'+S_l \in \dd y'' \,,\, A_{1,y'}^{\gamma} \right) \nonumber\\ &\times \bb P_x \left( X_{2(K-1)l} \in \dd x' \,,\, y+S_{2(K-1)l} \in \dd y' \,,\, A_{2(K-1),y}^{\gamma} \right). \label{PAg00} \end{align} We write \begin{align*} \bb P_{x''} \left( A_{1,y''}^{\gamma} \right) \leqslant\;& \bb P_{x''} \left( \abs{y''+S_l} \leqslant 2\gamma \sqrt{l} \right) + \bb P_{x''} \left( N \left( X_l \right) > \sqrt{l} \right)\\ \leqslant\;& \bb P_{x''} \left( \frac{-y''}{\sqrt{l}} - 2\gamma \leqslant \frac{S_l}{\sqrt{l}} \leqslant \frac{-y''}{\sqrt{l}} + 2\gamma \right) + \bb E_{x''} \left( \frac{N \left( X_l \right)}{\sqrt{l}} \right). \end{align*} By Corollary \ref{BerEss} and the point \ref{Momdec001} of Hypothesis \ref{Momdec}, there exists $\ee_0 \in (0,1/4)$ such that, for any $\ee \in (0,\ee_0)$, \begin{align*} \bb P_{x''} \left( A_{1,y''}^{\gamma} \right) &\leqslant \int_{\frac{-y''}{\sqrt{l}} - 2\gamma}^{\frac{-y''}{\sqrt{l}} + 2\gamma} \e^{-\frac{u^2}{2\sigma^2}} \frac{\dd u}{\sqrt{2\pi} \sigma} + \frac{2c_{\ee}}{l^\ee} \left(1+N(x'')\right) + \frac{c}{\sqrt{l}} \left( 1+N \left( x'' \right) \right). \end{align*} Set $q_{\gamma} := \int_{- 2\gamma}^{2\gamma} \e^{-\frac{u^2}{2\sigma^2}} \frac{\dd u}{\sqrt{2\pi} \sigma}<1$. 
From \eqref{PAg00}, we obtain \begin{align*} \bb P_x \left( A_{2K,y}^{\gamma} \right) &\leqslant \int_{\bb X \times \bb R} \left( q_{\gamma} + \frac{c_{\ee}}{l^{\ee}} + \frac{c_{\ee}}{l^{\ee}} \bb E_{x'} \left( N \left( X_l \right) \right) \right) \nonumber\\ &\qquad \times \bb P_x \left( X_{2(K-1)l} \in \dd x' \,,\, y+S_{2(K-1)l} \in \dd y' \,,\, A_{2(K-1),y}^{\gamma} \right) \\ &\leqslant \left( q_{\gamma} + \frac{c_{\ee}}{l^{\ee}} \right) \bb P_x \left( A_{2(K-1),y}^{\gamma} \right) + \e^{-c_{\ee}l} \bb E_x \left( N\left( X_{2(K-1)l} \right) \,;\, A_{2(K-1),y}^{\gamma} \right). \end{align*} For brevity, set $p_K= \bb P_x \left( A_{2K,y}^{\gamma} \right)$ and $E_{K} = \bb E_x \left( N\left( X_{2Kl} \right) \,;\, A_{2K,y}^{\gamma} \right)$. Then, the previous inequality can be rewritten as \begin{equation} \label{ineqpK} p_K \leqslant \left( q_{\gamma} + \frac{c_{\ee}}{l^{\ee}} \right) p_{K-1} + \e^{-c_{\ee}l} E_{K-1}. \end{equation} Moreover, from \eqref{decexpN}, we have \begin{equation} \label{ineqpK002} E_{K} \leqslant c p_{K-1} + \e^{-c 2l} E_{K-1}. \end{equation} Using \eqref{ineqpK} and \eqref{ineqpK002}, we write that \begin{equation} \label{recdouble} \begin{pmatrix} p_K \\ E_K \end{pmatrix} \leqslant A_l \begin{pmatrix} p_{K-1} \\ E_{K-1} \end{pmatrix} \end{equation} where \[ A_l := \begin{pmatrix} q_{\gamma} + \frac{c_{\ee}}{l^{\ee}} & \e^{-c_{\ee}l} \\ c & \e^{-c l} \end{pmatrix} \underset{l\to+\infty}{\longrightarrow} A = \begin{pmatrix} q_{\gamma} & 0 \\ c & 0 \end{pmatrix}. \] Since the spectral radius $q_{\gamma}$ of $A$ is less than $1$, we can choose $l=l(\ee,\gamma)$ large enough such that the spectral radius $\rho_{\ee,\gamma}$ of $A_l$ is less than $1$. Iterating \eqref{recdouble}, we get \[ p_K \leqslant c \rho_{\ee,\gamma}^K \max \left( p_1, E_1 \right) \leqslant c \rho_{\ee,\gamma}^K \left( 1+N(x) \right). 
\] Taking into account that $K \geqslant c_{\ee,\gamma} n$, we obtain \[ \bb P_x \left( A_{2K,y}^{\gamma} \right) \leqslant \e^{-c_{\gamma} n} \left( 1+N(x) \right). \] \end{proof} Now we shall establish some properties of the set $\mathscr{D}_{\gamma}$ introduced in Section \ref{sec-not-res}. It is easy to see that, for any $\gamma > 0$, \[ \mathscr{D}_{\gamma} = \left\{ (x,y) \in \bb X \times \bb R, \; \exists n_0 \geqslant 1, \bb P_x \left( \zeta_{\gamma} \leqslant n_0 \,,\, \tau_y > n_0 \right) > 0 \right\}. \] \begin{proposition} \label{posdeVsurDgamma}\ \begin{enumerate}[ref=\arabic*, leftmargin=*, label=\arabic*.] \item \label{posdeVsurDgamma001} For any $\gamma_1 \leqslant \gamma_2$, it holds $\mathscr{D}_{\gamma_1} \supseteq \mathscr{D}_{\gamma_2}$. \item \label{posdeVsurDgamma002} For any $\gamma >0$, there exists $c_{\gamma}>0$ such that \[ \mathscr{D}_{\gamma}^c \subseteq \left\{ (x,y) \in \bb X \times \bb R, \; \bb P_x \left( \tau_y > n \right) \leqslant \e^{-c_{\gamma} n} \left( 1+N(x) \right) \right\}. \] \item \label{posdeVsurDgamma003} For any $\gamma > 0$, the domain of positivity of the function $V$ is included in $\mathscr{D}_{\gamma}$: \[ \mathscr{D}_+(V) = \left\{ (x,y),\; V(x,y) >0 \right\} \subseteq \mathscr{D}_{\gamma}. \] \item \label{posdeVsurDgamma004} There exists $\gamma_0 > 0$ such that for any $\gamma \geqslant \gamma_0$, \[ \mathscr{D}_+(V) = \mathscr{D}_{\gamma}. \] Moreover, \[ \left\{ (x,y) \in \bb X \times \bb R_+^*,\; y > \frac{\gamma_0}{2} \left( 1+N(x) \right) \right\} \subseteq \mathscr{D}_+(V). \] \end{enumerate} \end{proposition} \begin{proof} \textit{Claim \ref{posdeVsurDgamma001}.} For any $\gamma_1 \leqslant \gamma_2$, we have $\zeta_{\gamma_1} \leqslant \zeta_{\gamma_2}$ and the claim \ref{posdeVsurDgamma001} follows. \textit{Claim \ref{posdeVsurDgamma002}.} Fix $\gamma > 0$. 
By the definition of $\mathscr{D}_{\gamma}$, for any $(x,y) \in \mathscr{D}_{\gamma}^c$ and $n\geqslant 1$, \[ 0 = \bb P_x \left( \zeta_{\gamma} \leqslant n \,,\, \tau_y > n \right) = \bb P_x \left( \tau_y > n \right) - \bb P_x \left( \zeta_{\gamma} > n \,,\, \tau_y > n \right). \] From this, using Lemma \ref{concentmu}, we obtain \[ \bb P_x \left( \tau_y > n \right)= \bb P_x \left( \zeta_{\gamma} > n \,,\, \tau_y > n \right) \leqslant \bb P_x \left( \zeta_{\gamma} > n \right) \leqslant \e^{-c_{\gamma} n} \left( 1 + N \left( x \right) \right). \] \textit{Claim \ref{posdeVsurDgamma003}.} Fix $\gamma >0$. Using the claim \ref{posdeVsurDgamma002} and Lemma \ref{majmart}, we have, for any $(x,y) \in \mathscr{D}_{\gamma}^c$ and $z=y+r(x)$, \begin{align*} \bb E_x \left( z+M_n \,;\, \tau_y > n \right) &\leqslant \abs{z} \bb P_x \left( \tau_y > n \right) + \bb E_x^{1/2} \left( \abs{M_n}^2 \right) \bb P_x^{1/2} \left( \tau_y > n \right)\\ &\leqslant \abs{z} \left( 1 + N \left( x \right) \right) \e^{-c_{\gamma} n} + c \sqrt{n} \left( 1+N(x) \right)^{3/2} \e^{-c_{\gamma} n}. \end{align*} Letting $n\to +\infty$, by the point \ref{IFP001} of Proposition \ref{IFP}, we get \[ V(x,y) = 0, \] and we conclude that $\mathscr{D}_{\gamma}^c \subseteq \mathscr{D}_+(V)^c$. \textit{Claim \ref{posdeVsurDgamma004}.} By the point \ref{PosdeV001} of Proposition \ref{PosdeV}, taking $\delta = 1/2$, there exists $\gamma_0 > 0$ such that, for any $x\in \bb X$ and $y>0$, \begin{equation} \label{PosdeV001bis} V(x,y) \geqslant \frac{y}{2} - \frac{\gamma_0}{4} \left( 1 + N(x) \right). \end{equation} Now, fix $(x,y) \in \mathscr{D}_{\gamma_0}$ and let $n_0\geqslant 1$ be an integer such that $\bb P_x \left( \zeta_{\gamma_0} \leqslant n_0 \,,\, \tau_y > n_0 \right) > 0$.
By the point \ref{IFP004} of Proposition \ref{IFP}, \begin{align*} V(x,y) &= \bb E_x \left( V\left( X_{n_0}, y+S_{n_0} \right) \,;\, \tau_y > n_0 \right) \\ &\geqslant \bb E_x \left( V\left( X_{n_0}, y+S_{n_0} \right) \,;\, \tau_y > n_0 \,,\, \zeta_{\gamma_0} \leqslant n_0 \right). \end{align*} By the Doob optional stopping theorem, \eqref{PosdeV001bis} and the definition of $\zeta_{\gamma_0}$, \begin{align*} V(x,y) &\geqslant \bb E_x \left( V\left( X_{\zeta_{\gamma_0}}, y+S_{\zeta_{\gamma_0}} \right) \,;\, \tau_y > \zeta_{\gamma_0} \,,\, \zeta_{\gamma_0} \leqslant n_0 \right) \\ &\geqslant \frac{1}{2} \bb E_x \left( y+S_{\zeta_{\gamma_0}} - \frac{\gamma_0}{2} \left( 1+ N\left( X_{\zeta_{\gamma_0}} \right) \right) \,;\, \tau_y > \zeta_{\gamma_0} \,,\, \zeta_{\gamma_0} \leqslant n_0 \right)\\ &\geqslant \frac{1}{2} \bb E_x \left( \frac{\gamma_0}{2} \left( 1+ N\left( X_{\zeta_{\gamma_0}} \right) \right) \,;\, \tau_y > \zeta_{\gamma_0} \,,\, \zeta_{\gamma_0} \leqslant n_0 \right)\\ &\geqslant \frac{\gamma_0}{4} \bb P_x \left( \tau_y > n_0 \,,\, \zeta_{\gamma_0} \leqslant n_0 \right). \end{align*} Now, since $n_0$ has been chosen such that the last probability is strictly positive, we get that $V(x,y) > 0$. This proves that $\mathscr{D}_{\gamma_0} \subseteq \mathscr{D}_+(V)$. Using the claims \ref{posdeVsurDgamma001} and \ref{posdeVsurDgamma003}, for any $\gamma \geqslant \gamma_0$, we obtain that $\mathscr{D}_{\gamma} \subseteq \mathscr{D}_{\gamma_0} \subseteq \mathscr{D}_+(V) \subseteq \mathscr{D}_{\gamma}$ and so $\mathscr{D}_{\gamma} = \mathscr{D}_{\gamma_0} = \mathscr{D}_+(V)$. Using \eqref{PosdeV001bis} proves the second assertion of the claim \ref{posdeVsurDgamma004}. 
\end{proof} \textbf{Proof of Theorem \ref{thonV}.} Claim \ref{thonV001} follows from the point \ref{IFP001} of Proposition \ref{IFP}; claim \ref{thonV002} follows from the point \ref{IFP004} of Proposition \ref{IFP}; claim \ref{thonV003} follows from the points \ref{IFP002} and \ref{IFP003} of Proposition \ref{IFP} together with Proposition \ref{PosdeV}; claim \ref{thonV004} follows from the point \ref{posdeVsurDgamma004} of Proposition \ref{posdeVsurDgamma}. \section{Asymptotic for the exit time} \label{AsExTi} \subsection{Preliminary results} \begin{lemma} \label{SurE1etE2} There exists $\ee_0 >0$ such that, for any $\ee\in (0,\ee_0)$, $x\in \bb X$, $y\in \bb R$ and $z=y+r(x)$, \begin{align*} E_1 &:= \bb E_x \left( z+M_{\nu_n} \,;\, \tau_y > \nu_n \,,\, \nu_n \leqslant \pent{n^{1-\ee}} \right) \leqslant c_{\ee} \left( 1+\max(y,0)+N(x) \right), \quad \forall n \geqslant 1, \\ E_2 &:= \bb E_x \left( z+M_{\nu_n^{\ee^2}} \,;\, \tau_y > \nu_n^{\ee^2} \,,\, \nu_n^{\ee^2} \leqslant \pent{n^{1-\ee}} \right) \underset{n\to \infty}{\longrightarrow} V(x,y). \end{align*} Moreover, for any $n\geqslant 1$, $\ee\in (0,\ee_0)$, $x\in \bb X$ and $y\in \bb R$, \[ \abs{E_2 - V(x,y)} \leqslant \frac{c_{\ee}}{n^{\ee/8}} \left( 1+\max(y,0)+N(x) \right). \] \end{lemma} \begin{proof} Using the fact that $\{ \tau_y > \nu_n \} \subseteq \{ \hat{T}_z > \nu_n \}$ and Lemma \ref{Mnsubmartingale}, \[ E_1 \leqslant \bb E_x \left( z+M_{\pent{n^{1-\ee}}} \,;\, \hat{T}_z > \pent{n^{1-\ee}} \right) - J_{21}'', \] where $J_{21}''$ is defined in \eqref{decJ21'} and, by \eqref{MajJ21''}, the quantity $-J_{21}''$ does not exceed $\e^{-c_{\ee}n^{\ee}} ( 1+N(x) )$. Again, by Lemma \ref{Mnsubmartingale} and the point \ref{IFP001} of Proposition \ref{IFP}, we have that $( \bb E_x ( z+M_{n} \,;\, \hat{T}_z > n ) )_{n\geqslant 0}$ is a non-decreasing sequence which converges to $\hat{W}(x,z)$.
So, using the point \ref{IFP003} of Proposition \ref{IFP} and the fact that $z=y+r(x)$, \begin{equation} \label{MajdeE1} E_1 \leqslant \hat{W}(x,z) + \e^{-c_{\ee} n^{\ee}} \left( 1+N(x) \right) \leqslant c_{\ee} \left( 1+\max(y,0)+N(x) \right). \end{equation} By the point \ref{IFP004} of Proposition \ref{IFP}, we have \begin{align*} V(x,y) =\;& \bb E_x \left( V \left( X_n, y+S_n \right) \,;\, \tau_y > n \,,\, \nu_n^{\ee^2} \leqslant \pent{n^{1-\ee}} \right) \\ &+ \bb E_x \left( V \left( X_n, y+S_n \right) \,;\, \tau_y > n \,,\, \nu_n^{\ee^2} > \pent{n^{1-\ee}} \right). \end{align*} Using the point \ref{IFP003} of Proposition \ref{IFP}, for any $n_f \geqslant 2$, \begin{align*} V(x,y) \leqslant\;& \bb E_x \left( V \left( X_{\nu_n^{\ee^2}}, y+S_{\nu_n^{\ee^2}} \right) \,;\, \tau_y > \nu_n^{\ee^2} \,,\, \nu_n^{\ee^2} \leqslant \pent{n^{1-\ee}} \right) \\ &+ c\bb E_x \left( \max\left( z+M_n, 0 \right) + 1 + N\left( X_n \right) \,;\, \tau_y > n \,,\, \nu_n^{\ee^2} > \pent{n^{1-\ee}} \right) \\ \leqslant\;& \left( 1+\frac{c_{\ee}}{n_f^{\ee}} \right) E_2 + c_{\ee} \bb E_x \left( \sqrt{n_f}+N \left( X_{\nu_n^{\ee^2}} \right) \,;\, \tau_y > \nu_n^{\ee^2} \,,\, \nu_n^{\ee^2} \leqslant \pent{n^{1-\ee}} \right) \\ &\underbrace{-c_{\ee} \bb E_x \left( z+M_{\nu_n^{\ee^2}} \,;\, z+M_{\nu_n^{\ee^2}} < 0 \,,\, \tau_y > \nu_n^{\ee^2} \,,\, \nu_n^{\ee^2} \leqslant \pent{n^{1-\ee}} \right)}_{=J_{22}'(\ee^2)} \\ &+ c\bb E_x \left( z+M_n + \abs{r\left( X_n \right)} +1+N\left( X_n \right) \,;\, \tau_y > n \,,\, \nu_n^{\ee^2} > \pent{n^{1-\ee}} \right). 
\end{align*} From the previous bound, using the Markov property, the bound \eqref{decexpN} and the approximation \eqref{decMSX}, we get \begin{align*} V(x,y) \leqslant\;& \left( 1+\frac{c_{\ee}}{n_f^{\ee}} \right) E_2 + J_{22}'(\ee^2) + c\underbrace{\bb E_x \left( z+M_n \,;\, \hat{T}_z > n \,,\, \nu_n^{\ee^2} > \pent{n^{1-\ee}} \right)}_{=J_1(\ee^2)} \\ &+ c_{\ee} \bb E_x \left( \sqrt{n_f}+e^{-cn^{\ee^2}} N \left( X_{\nu_n} \right) \,;\, \tau_y > \nu_n \,,\, \nu_n \leqslant \pent{n^{1-\ee}} \right) \\ &+ c\bb E_x \left( 1+e^{-c_{\ee}n} N\left( X_{\pent{n^{1-\ee}}} \right) \,;\, \tau_y > \pent{n^{1-\ee}} \,,\, \nu_n^{\ee^2} > \pent{n^{1-\ee}} \right). \end{align*} Proceeding in the same way as for the bound \eqref{MajJ22'}, \begin{align*} J_{22}'(\ee^2) &\leqslant c_{\ee} \bb E_x \left( 1+ \e^{-c n^{\ee^2}} N \left( X_{\nu_n} \right) \,;\, \tau_y > \nu_n \,,\, \nu_n \leqslant \pent{n^{1-\ee}} \right) \\ &\leqslant \frac{c_{\ee}}{n^{1/2-\ee}} E_1 + \e^{-c_{\ee} n^{\ee^2}} \left( 1+N(x) \right). \end{align*} Moreover, similarly as for the bound \eqref{MajJ1}, we have \[ J_1(\ee^2) \leqslant \e^{-c_{\ee}n^{\ee^2}} \left( 1+N(x) \right). \] Taking into account these bounds and using Lemma \ref{concentnu}, \begin{equation} \label{MajorantE2} V(x,y) \leqslant \left( 1+\frac{c_{\ee}}{n_f^{\ee}} \right) E_2 + \frac{c_{\ee} \sqrt{n_f}}{n^{1/2-\ee}} E_1 + \e^{-c_{\ee} n^{\ee^2}} \left( 1+N(x) \right). 
\end{equation} Analogously, by \eqref{PosdeV001ter} and \eqref{decMSX}, we have the lower bound \begin{align} V(x,y) \geqslant\;& \bb E_x \left( V \left( X_{\nu_n^{\ee^2}}, y+S_{\nu_n^{\ee^2}} \right) \,;\, \tau_y > \nu_n^{\ee^2} \,,\, \nu_n^{\ee^2} \leqslant \pent{n^{1-\ee}} \right) \nonumber\\ \geqslant\;& \left( 1-\frac{c_{\ee}}{n_f^{\ee}} \right) E_2 - c_{\ee} n_f^2 \bb E_x \left( 1+N \left( X_{\nu_n^{\ee^2}} \right) \,;\, \tau_y > \nu_n^{\ee^2} \,,\, \nu_n^{\ee^2} \leqslant \pent{n^{1-\ee}} \right) \nonumber\\ \label{MinorantE2} \geqslant\;& \left( 1-\frac{c_{\ee}}{n_f^{\ee}} \right) E_2 - \frac{c_{\ee}n_f^2}{n^{1/2-\ee}} E_1 - n_f^2 \e^{-c_{\ee} n^{\ee^2}} \left( 1+N(x) \right). \end{align} Taking $n_f=n^{1/4-\ee}$ in \eqref{MinorantE2} and \eqref{MajorantE2}, we conclude that, for any $\ee \in (0,1/8)$, \[ \abs{V(x,y) - E_2} \leqslant \frac{c_{\ee}}{n^{\ee/8}} E_2 + \frac{c_{\ee}}{n^{\ee}} \left( E_1+1+N(x) \right). \] Again, using \eqref{MinorantE2}, \[ \abs{V(x,y) - E_2} \leqslant \frac{c_{\ee}}{n^{\ee/8}} V(x,y) + \frac{c_{\ee}}{n^{\ee}} \left( E_1+1+N(x) \right). \] Finally, employing \eqref{MajdeE1} and \eqref{encadrV}, \[ \abs{V(x,y) - E_2} \leqslant \frac{c_{\ee}}{n^{\ee/8}} \left( 1+\max(y,0)+N(x) \right). \] \end{proof} \begin{lemma} \label{taupptrn} There exists $\ee_0 > 0$ such that, for any $\ee \in (0,\ee_0)$, $x\in \bb X$, $y\in \bb R$ and $n\geqslant 1$, \[ \bb P_x \left( \tau_y > n \right) \leqslant \frac{c_\ee}{n^{1/2-\ee}} \left( 1+\max(y,0)+N(x) \right). \] Moreover, summing this bound, for any $\ee \in (0,\ee_0)$, $x\in \bb X$, $y\in \bb R$ and $n\geqslant 1$, we have \[ \sum_{k=1}^{\pent{n^{1-\ee}}} \bb P_x \left( \tau_y > k \right) \leqslant c_\ee \left( 1+\max(y,0)+N(x) \right) n^{1/2+\ee}.
\] \end{lemma} \begin{proof} Using Lemma \ref{concentnu} and Lemma \ref{SurE1etE2}, with $z=y+r(x)$, \begin{align*} \bb P_x \left( \tau_y > n \right) &\leqslant \bb P_x \left( \tau_y > n \,,\, \nu_n \leqslant \pent{n^{1-\ee}} \right) + \bb P_x \left( \hat{T}_z > n \,,\, \nu_n > \pent{n^{1-\ee}} \right) \\ &\leqslant \bb E_x \left( \frac{z+M_{\nu_n}}{n^{1/2-\ee}} \,;\, \tau_y > n \,,\, \nu_n \leqslant \pent{n^{1-\ee}} \right) + \e^{-c_\ee n^\ee} \left( 1+N(x) \right) \\ &\leqslant \frac{c_{\ee}}{n^{1/2-\ee}} \left( 1+\max(y,0)+N(x) \right). \end{align*} \end{proof} \begin{lemma} \label{MajE3} There exists $\ee_0 > 0$ such that, for any $\ee \in (0,\ee_0)$, $x\in \bb X$, $y\in \bb R$ and $z=y+r(x)$, \[ E_3 := \bb E_x \left( z+M_{\nu_n} \,;\, z+M_{\nu_n} > n^{1/2-\ee/2} \,,\, \tau_y > \nu_n \,,\, \nu_n \leqslant \pent{n^{1-\ee}} \right) \underset{n\to +\infty}{\longrightarrow} 0. \] More precisely, for any $n\geqslant 1$, $\ee \in (0,\ee_0)$, $x\in \bb X$, $y\in \bb R$ and $z=y+r(x)$, \[ E_3 \leqslant c_{\ee}\frac{\max(y,0) + \left( 1+ y\mathbbm 1_{\{y> n^{1/2-2\ee}\}} +N(x) \right)^2}{n^{\ee}}. \] \end{lemma} \begin{proof} Notice that when $\nu_n \neq 1$ the following inclusion holds: $\{ z+M_{\nu_n} > n^{1/2-\ee/2} \} \subseteq \{ \xi_{\nu_n} > n^{1/2-\ee/2} - n^{1/2-\ee} \geqslant c_{\ee} n^{1/2-\ee/2} \}$. Therefore, \begin{align} E_3 \leqslant\;& \underbrace{\bb E_x \left( z+M_{\nu_n} \,;\, \nu_n \leqslant 2\pent{n^{\ee}} \right)}_{=:E_{30}} \nonumber\\ \label{decdeE3} &+ \underbrace{\sum_{k=2\pent{n^{\ee}}+1}^{\pent{n^{1-\ee}}} \bb E_x \left( z+M_k \,;\, \xi_k > c_{\ee} n^{1/2-\ee/2} \,,\, \tau_y > k \,,\, \nu_n=k \right)}_{=:E_{31}}. 
\end{align} \textit{Bound of $E_{30}$.} For $y\leqslant n^{1/2-2\ee}$, by the Markov inequality and Lemma \ref{majmart}, \begin{align*} \bb P_x \left( \nu_n \leqslant 2 \pent{n^{\ee}} \right) &\leqslant \sum_{k=1}^{2\pent{n^{\ee}}} \bb P_x \left( r(x)+M_k > c_{\ee} n^{1/2-\ee} \right) \leqslant \frac{c_{\ee} \left( 1+N(x) \right)}{n^{1/2-3\ee}}. \end{align*} For $y > n^{1/2-2\ee}$, in the same way, we have $\bb P_x \left( \nu_n \leqslant 2 \pent{n^{\ee}} \right) \leqslant \frac{c_{\ee} \left( 1+y+N(x) \right)}{n^{1/2-3\ee}}$. Putting together these bounds, we get, for any $y \in \bb R$, \begin{equation} \label{nupetit} \bb P_x \left( \nu_n \leqslant 2 \pent{n^{\ee}} \right) \leqslant \frac{c_{\ee} \left( 1+ y\mathbbm 1_{\{y> n^{1/2-2\ee}\}}+N(x) \right)}{n^{1/2-3\ee}}. \end{equation} Using Lemma \ref{majmart}, \begin{align} E_{30} &\leqslant z \bb P_x \left( \nu_n \leqslant 2 \pent{n^{\ee}} \right) + \sum_{k=1}^{2\pent{n^{\ee}}} \bb E_x^{1/2} \left( \abs{M_k}^2 \right) \bb P_x^{1/2} \left( \nu_n \leqslant 2\pent{n^\ee} \right) \nonumber\\ \label{E30to0} &\leqslant \frac{c_{\ee} \left( 1+ y\mathbbm 1_{\{y> n^{1/2-2\ee}\}} +N(x) \right)^2}{n^{\ee}}. 
\end{align} \textit{Bound of $E_{31}$.} Changing the index of summation ($j=k-\pent{n^\ee}$) and using the Markov property, \begin{align} E_{31} \leqslant\;& \sum_{j=\pent{n^{\ee}}+1}^{\pent{n^{1-\ee}}} \int_{\bb X \times \bb R} \max(z',0) \bb P_{x'} \left( \xi_{\pent{n^\ee}} > c_{\ee} n^{1/2-\ee/2} \right) \nonumber\\ &\underbrace{\hspace{3cm} \times \bb P_x \left( X_j \in \dd x' \,,\, z+M_j \in \dd z' \,,\, \tau_y > j \right)}_{=:E_{32}} \nonumber\\ \label{decdeE31} &+\sum_{j=\pent{n^{\ee}}+1}^{\pent{n^{1-\ee}}} \int_{\bb X \times \bb R} \bb E_{x'}^{1/2} \left( \abs{M_{\pent{n^\ee}}}^2 \right) \bb P_{x'}^{1/2} \left( \xi_{\pent{n^\ee}} > c_{\ee} n^{1/2-\ee/2} \right) \\ &\underbrace{\hspace{3cm} \times \bb P_x \left( X_j \in \dd x' \,,\, z+M_j \in \dd z' \,,\, \tau_y > j \right).}_{=:E_{33}} \nonumber \end{align} \textit{Bound of $E_{32}$.} Using \eqref{majdesxi000}, the Markov inequality and \eqref{decexpNl} with $l=\pent{c_{\ee} n^{1/2-\ee/2}}$, \begin{align*} \bb P_{x'} \left( \xi_{\pent{n^\ee}} > c_{\ee} n^{1/2-\ee/2} \right) \leqslant\;& \bb P_{x'} \left( N \left( X_{\pent{n^\ee}} \right) > c_{\ee} n^{1/2-\ee/2} \right) \\ &+ \bb P_{x'} \left( N \left( X_{\pent{n^\ee}-1} \right) > c_{\ee} n^{1/2-\ee/2} \right) \\ \leqslant\;& \frac{1}{l} \bb E_{x'} \left( N_l \left( X_{\pent{n^\ee}} \right) \right) + \frac{1}{l} \bb E_{x'} \left( N_l \left( X_{\pent{n^\ee}-1} \right) \right) \\ \leqslant\;& \frac{c}{l^{2+\beta}} + \frac{1}{l} \e^{-c n^{\ee}} \left( 1+N(x') \right). \end{align*} Choosing $\ee>0$ small enough we find that \begin{equation} \label{queuedexi} \bb P_{x'} \left( \xi_{\pent{n^\ee}} > c_{\ee} n^{1/2-\ee/2} \right) \leqslant \frac{c_{\ee}}{n^{1+\beta/4}} + \e^{-c_{\ee} n^{\ee}} N(x'). 
\end{equation} By the definition of $E_{32}$ in \eqref{decdeE31}, \begin{align*} E_{32} \leqslant\;& \frac{c_{\ee}}{n^{1+\beta/4}} \sum_{j=\pent{n^{\ee}}+1}^{\pent{n^{1-\ee}}} \left[ \bb E_x \left( z+M_j \,;\, \tau_y > j \right) + \bb E_x \left( \abs{r\left( X_j \right)} \right) \right] \\ &+ \e^{-c_{\ee} n^{\ee}} \sum_{j=\pent{n^{\ee}}+1}^{\pent{n^{1-\ee}}} \left[ \max(z,0) \bb E_x \left( N \left( X_j \right) \right) + \bb E_x^{1/2} \left( \abs{M_j}^2 \right) \bb E_x^{1/2} \left( N \left( X_j \right)^2 \right) \right]. \end{align*} Using \eqref{VnPPWn2}, Lemma \ref{majmart} and the point \ref{Momdec001} of Hypothesis \ref{Momdec}, we find that \begin{equation} \label{E32to0} E_{32} \leqslant c_{\ee}\frac{\max(y,0) + \left( 1+ y\mathbbm 1_{\{y> n^{1/2-2\ee}\}} +N(x) \right) \left( 1+N(x) \right)}{n^{\beta/4}}. \end{equation} \textit{Bound of $E_{33}$.} Using \eqref{queuedexi} and Lemma \ref{majmart}, we have \[ E_{33} \leqslant \sum_{j=\pent{n^{\ee}}+1}^{\pent{n^{1-\ee}}} \bb E_x \left( n^{\ee/2} \left( 1+N \left( X_j \right) \right) \left( \frac{c_{\ee}}{n^{1/2+\beta/8}} + \e^{-c_{\ee} n^{\ee}} N\left( X_j \right)^{1/2} \right) \,;\, \tau_y >j \right). \] By the Markov property, \[ E_{33} \leqslant \e^{-c_{\ee} n^{\ee}} \left( 1+ N(x) \right)^{3/2} + \frac{c_{\ee}}{n^{1/2+\beta/8-\ee/2}} \sum_{j=1}^{\pent{n^{1-\ee}}} \bb E_x \left( 1+ \e^{-c n^\ee} N \left( X_j \right) \,;\, \tau_y > j \right). \] Using Lemma \ref{taupptrn}, \begin{equation} \label{E33to0} E_{33} \leqslant c_{\ee} \frac{\max(y,0)+ \left( 1+N(x) \right)^{3/2}}{n^{\beta/8-3\ee/2}}. \end{equation} With \eqref{E33to0}, \eqref{E32to0} and \eqref{decdeE31}, for $\ee > 0$ small enough, we find that \[ E_{31} \leqslant c_{\ee} \frac{\max(y,0) + \left( 1+ y\mathbbm 1_{\{y> n^{1/2-2\ee}\}} +N(x) \right) \left( 1+N(x) \right)}{n^{\ee}}. \] This bound, together with \eqref{E30to0} and \eqref{decdeE3}, proves the lemma. 
\end{proof} \begin{lemma} \label{SurE4} There exists $\ee_0 > 0$ such that, for any $\ee \in (0,\ee_0)$, $x\in \bb X$, $y\in \bb R$ and $z=y+r(x)$, \[ E_4 := \bb E_x \left( z+M_{\nu_n^{\ee^2}} \,;\, z+M_{\nu_n^{\ee^2}} > n^{1/2-\ee/4} \,,\, \tau_y > \nu_n^{\ee^2} \,,\, \nu_n^{\ee^2} \leqslant \pent{n^{1-\ee}} \right) \underset{n\to +\infty}{\longrightarrow} 0. \] More precisely, for any $n\geqslant 1$, $\ee \in (0,\ee_0)$, $x\in \bb X$, $y\in \bb R$ and $z=y+r(x)$, \[ E_4 \leqslant c_{\ee}\frac{\max(y,0) + \left( 1+ y\mathbbm 1_{\{y> n^{1/2-2\ee}\}} +N(x) \right)^2}{n^{\ee/2}}. \] \end{lemma} \begin{proof} We will apply Lemma \ref{MajE3}. For this we write \begin{align} E_4 =\;& \bb E_x \left( z+M_{\nu_n^{\ee^2}} \,;\, z+M_{\nu_n^{\ee^2}} > n^{1/2-\ee/4} \,,\, z+M_{\nu_n} > n^{1/2-\ee/2} \,,\, \right. \nonumber\\ &\underbrace{\hspace{8cm} \left. \tau_y > \nu_n^{\ee^2} \,,\, \nu_n^{\ee^2} \leqslant \pent{n^{1-\ee}} \right)}_{=:E_{41}} \nonumber\\ \label{decdeE4} +& \bb E_x \left( z+M_{\nu_n^{\ee^2}} \,;\, z+M_{\nu_n^{\ee^2}} > n^{1/2-\ee/4} \,,\, z+M_{\nu_n} \leqslant n^{1/2-\ee/2} \,,\, \right. \\ &\underbrace{\hspace{8cm} \left. \tau_y > \nu_n^{\ee^2} \,,\, \nu_n^{\ee^2} \leqslant \pent{n^{1-\ee}} \right)}_{=:E_{42}}. \nonumber \end{align} \textit{Bound of $E_{41}$.} By the Markov property, \begin{align*} E_{41} &= \sum_{k=1}^{\pent{n^{1-\ee}}-\pent{n^{\ee^2}}} \int_{\bb X \times \bb R} \bb E_{x'} \left( z' + M_{\pent{n^{\ee^2}}} \,;\, z' + M_{\pent{n^{\ee^2}}} > n^{1/2-\ee/4} \,,\, \tau_{y'} > \pent{n^{\ee^2}} \right) \\ &\qquad \times \bb P_x \left( X_k \in \dd x' \,,\, z+M_k \in \dd z' \,,\, z+M_k > n^{1/2-\ee/2} \,,\, \tau_y > k \,,\, \nu_n = k \right), \end{align*} where $y'=z'-r(x')$. 
Moreover, for any $x' \in \bb X$, $z' \in \bb R$, using \eqref{VnPPWn2}, we have \begin{align*} \bb E_{x'} &\left( z' + M_{\pent{n^{\ee^2}}} \,;\, z' + M_{\pent{n^{\ee^2}}} > n^{1/2-\ee/4} \,,\, \tau_{y'} > \pent{n^{\ee^2}} \right) \\ &\qquad \leqslant \bb E_{x'} \left( z' + M_{\pent{n^{\ee^2}}} \,;\, z' + M_{\pent{n^{\ee^2}}} > 0 \,,\, \tau_{y'} > \pent{n^{\ee^2}} \right) \\ &\qquad \leqslant \bb E_{x'} \left( z' + M_{\pent{n^{\ee^2}}} \,;\, \tau_{y'} > \pent{n^{\ee^2}} \right) + \bb E_{x'} \left( \abs{r\left( X_{n^{\ee^2}} \right)} \right) \\ &\qquad \leqslant c_{\ee} \max(z',0) + c_{\ee} \left( 1+N(x') \right). \end{align*} Consequently, \begin{align} E_{41} &\leqslant c_{\ee} E_3 + c_{\ee} \bb E_x \left( 1 + N \left( X_{\nu_n} \right) \,;\, z+M_{\nu_n} > n^{1/2-\ee/2} \,,\, \tau_y > \nu_n \,,\, \nu_n \leqslant \pent{n^{1-\ee}} \right) \nonumber\\ &\leqslant 2c_{\ee} E_3 + c_{\ee} \bb E_x \left( N \left( X_{\nu_n} \right) \,;\, N \left( X_{\nu_n} \right) > n^{1/2-\ee} \,,\, \tau_y > \nu_n \,,\, \nu_n \leqslant \pent{n^{1-\ee}} \right) \nonumber\\ &\qquad +c_{\ee} \bb E_x \left( n^{1/2-\ee} \,;\, N \left( X_{\nu_n} \right) \leqslant n^{1/2-\ee} \,,\, z+M_{\nu_n} > n^{1/2-\ee/2} \,,\, \right. \nonumber\\ &\hspace{8cm} \left. \tau_y > \nu_n \,,\, \nu_n \leqslant \pent{n^{1-\ee}} \right) \nonumber\\ \label{decdeE41} &\leqslant 3c_{\ee} E_3 + c_{\ee} \underbrace{\bb E_x \left( N \left( X_{\nu_n} \right) \,;\, N \left( X_{\nu_n} \right) > n^{1/2-\ee} \,,\, \tau_y > \nu_n \,,\, \nu_n \leqslant \pent{n^{1-\ee}} \right)}_{=:E_{41}'}. 
\end{align} Denoting $l=\pent{n^{1/2-\ee}}$ and using the point \ref{Momdec001} of \ref{Momdec} and \eqref{decexpNl}, we have \begin{align*} E_{41}' \leqslant\;& \bb E_x \left( \frac{N \left( X_{\nu_n} \right)^2}{n^{1/2-\ee}} \,;\, \nu_n \leqslant \pent{n^{\ee}} \right) + \sum_{k=\pent{n^\ee}+1}^{\pent{n^{1-\ee}}} \bb E_x \left( N_l \left( X_k \right) \,;\, \tau_y > k \,,\, \nu_n=k \right) \\ \leqslant\;& \frac{c n^{\ee} \left( 1+N(x) \right)^2}{n^{1/2-\ee}} + \sum_{k=1}^{\pent{n^{1-\ee}}} \left[\frac{c}{l^{1+\beta}}\bb P_x \left( \tau_y > k \right) + \e^{-cn^{\ee}} \bb E_x \left( 1+N \left( X_k \right) \right) \right]. \end{align*} Using Lemma \ref{taupptrn} and taking $\ee>0$ small enough, \begin{equation} \label{MajE41'} E_{41}' \leqslant c_{\ee}\frac{\max(y,0) + \left( 1+N(x) \right)^2}{n^{\min(1,\beta)/4}}. \end{equation} In conjunction with Lemma \ref{MajE3}, from \eqref{decdeE41} we obtain that, for some $\ee >0$, \begin{equation} \label{E41to0} E_{41} \leqslant c_{\ee}\frac{\max(y,0) + \left( 1+ y\mathbbm 1_{\{y> n^{1/2-2\ee}\}} +N(x) \right)^2}{n^{\ee}}. \end{equation} \textit{Bound of $E_{42}$.} For any $z' \in (0, n^{1/2-\ee/2}]$, we have \[ \left(z'+M_{\pent{n^{\ee^2}}} \right) \bb P_{x'} ( z'+M_{\pent{n^{\ee^2}}} > n^{1/2-\ee/4} ) \leqslant z' \bb P_{x'} ( M_{\pent{n^{\ee^2}}} > c_{\ee} n^{1/2-\ee/4} ) + \abs{M_{\pent{n^{\ee^2}}}}. \] Therefore, by the Markov property, \begin{align} E_{42} \leqslant\;& \int_{\bb X \times \bb R} z' \bb P_{x'} \left( M_{\pent{n^{\ee^2}}} > c_{\ee} n^{1/2-\ee/4} \right) \bb P_x \left( X_{\nu_n} \in \dd x' \,,\, z+M_{\nu_n} \in \dd z' \,,\, \right. \nonumber\\ &\underbrace{\hspace{4cm} \left. z+M_{\nu_n} \leqslant n^{1/2-\ee/2} \,,\, \tau_y > \nu_n \,,\, \nu_n \leqslant \pent{n^{1-\ee}} \right)}_{=:E_{43}} \nonumber\\ \label{decdeE42} &+ \int_{\bb X \times \bb R} \bb E_{x'} \left( \abs{M_{\pent{n^{\ee^2}}}} \right) \bb P_x \left( X_{\nu_n} \in \dd x' \,,\, z+M_{\nu_n} \in \dd z' \,,\, \right. 
\\ &\underbrace{\hspace{4cm} \left. z+M_{\nu_n} \leqslant n^{1/2-\ee/2} \,,\, \tau_y > \nu_n \,,\, \nu_n \leqslant \pent{n^{1-\ee}} \right)}_{=:E_{44}}. \nonumber \end{align} \textit{Bound of $E_{43}$.} Using Lemma \ref{majmart}, \[ \bb P_{x'} \left( M_{\pent{n^{\ee^2}}} > c_{\ee} n^{1/2-\ee/4} \right) \leqslant \frac{c_{\ee} n^{\ee^2} \left( 1+N(x') \right)}{n^{1/2-\ee/4}}. \] Therefore, we have \begin{align*} E_{43} \leqslant\;& \bb E_x \left( \frac{c_{\ee}}{n^{3\ee/4-\ee^2}}\left( z+M_{\nu_n} \right) \mathbbm 1_{\left\{ N \left( X_{\nu_n} \right) \leqslant n^{1/2-\ee} \right\}} + \frac{c_{\ee}}{n^{\ee/4-\ee^2}} N \left( X_{\nu_n} \right) \mathbbm 1_{\left\{ N \left( X_{\nu_n} \right) > n^{1/2-\ee} \right\}} \,;\, \right. \\ &\hspace{6cm} \left. z+M_{\nu_n} \leqslant n^{1/2-\ee/2} \,,\, \tau_y > \nu_n \,,\, \nu_n \leqslant \pent{n^{1-\ee}} \right) \\ \leqslant\;& \frac{c_{\ee}}{n^{3\ee/4-\ee^2}} E_1 + \frac{c_{\ee}}{n^{\ee/4-\ee^2}} E_{41}'. \end{align*} By Lemma \ref{SurE1etE2} and \eqref{MajE41'}, we obtain for some small $\ee>0$, \begin{equation} \label{E43to0} E_{43} \leqslant c_{\ee}\frac{ \max(y,0) + \left( 1+N(x) \right)^2}{n^{\ee/2}}. \end{equation} \textit{Bound of $E_{44}$.} Again by Lemma \ref{majmart}, $\bb E_{x'} \left( \abs{M_{\pent{n^{\ee^2}}}} \right) \leqslant n^{\ee^2} \left( 1+N(x') \right)$. Consequently, \begin{align*} E_{44} \leqslant\;& \frac{c_{\ee}}{n^{\ee-\ee^2}} \bb E_x \left( z+M_{\nu_n} \,;\, N \left( X_{\nu_n} \right) \leqslant n^{1/2-2\ee} \,,\, \tau_y > \nu_n \,,\, \nu_n \leqslant \pent{n^{1-\ee}} \right) \\ &+ c_{\ee}n^{\ee^2} \bb E_x \left( N \left( X_{\nu_n} \right) \,;\, N \left( X_{\nu_n} \right) > n^{1/2-2\ee} \,,\, \tau_y > \nu_n \,,\, \nu_n \leqslant \pent{n^{1-\ee}} \right). \end{align*} Proceeding exactly as in the proof of the bound of $E_{41}'$ but with $l=\pent{n^{1/2-2\ee}}$, we obtain, by Lemma \ref{SurE1etE2}, \[ E_{44} \leqslant c_{\ee}\frac{ \max(y,0) + \left( 1+N(x) \right)^2}{n^{\ee/2}}. 
\] Putting together this bound with \eqref{E43to0} and \eqref{decdeE42}, we find that \[ E_{42} \leqslant c_{\ee}\frac{ \max(y,0) + \left( 1+N(x) \right)^2}{n^{\ee/2}}. \] So, using \eqref{decdeE4} and \eqref{E41to0}, we obtain the second assertion. The first one is an easy consequence of the second one. \end{proof} The following results are similar to those provided by Lemmas \ref{SurE1etE2} and \ref{SurE4} (see $E_2$ and $E_4$, respectively). \begin{lemma} \label{SurF2etF4} There exists $\ee_0 >0$ such that, for any $\ee\in (0,\ee_0)$, $x\in \bb X$ and $y\in \bb R$, \begin{align*} F_2 &:= \bb E_x \left( y+S_{\nu_n^{\ee^2}} \,;\, \tau_y > \nu_n^{\ee^2} \,,\, \nu_n^{\ee^2} \leqslant \pent{n^{1-\ee}} \right) \underset{n\to \infty}{\longrightarrow} V(x,y), \\ F_4 &:= \bb E_x \left( y+S_{\nu_n^{\ee^2}} \,;\, y+S_{\nu_n^{\ee^2}} > n^{1/2-\ee/8} \,,\, \tau_y > \nu_n^{\ee^2} \,,\, \nu_n^{\ee^2} \leqslant \pent{n^{1-\ee}} \right) \underset{n\to +\infty}{\longrightarrow} 0. \end{align*} More precisely, for any $n\geqslant 1$, $\ee \in (0,\ee_0)$, $x\in \bb X$ and $y\in \bb R$, \[ \abs{F_2 - V(x,y)} \leqslant \frac{c_{\ee}}{n^{\ee/8}} \left( 1+\max(y,0)+N(x) \right) \] and \[ F_4 \leqslant c_{\ee}\frac{\max(y,0) + \left( 1+ y\mathbbm 1_{\{y> n^{1/2-2\ee}\}} +N(x) \right)^2}{n^{\ee/2}}. \] \end{lemma} \begin{proof} By \eqref{decMSX}, \[ \abs{F_2 - E_2} \leqslant \underbrace{\bb E_x \left( \abs{r\left( X_{\nu_n^{\ee^2}} \right)} \,;\, \tau_y > \nu_n^{\ee^2} \,,\, \nu_n^{\ee^2} \leqslant \pent{n^{1-\ee}} \right)}_{=:F_2'}. \] Using the Markov property, the definition of $\nu_n$ and Lemma \ref{SurE1etE2}, \begin{align} F_2' &\leqslant c \bb E_x \left( 1 + \e^{-cn^{\ee^2}} N\left( X_{\nu_n} \right) \,;\, \tau_y > \nu_n \,,\, \nu_n \leqslant \pent{n^{1-\ee}} \right) \nonumber\\ &\leqslant \frac{c}{n^{1/2-\ee}} E_1 + \e^{-cn^{\ee^2}} \left( 1+N(x) \right) \nonumber\\ \label{MajF2'} &\leqslant \frac{c_{\ee}}{n^{1/2-\ee}} \left( 1+\max(y,0)+N(x) \right).
\end{align} Therefore, by Lemma \ref{SurE1etE2}, \[ \abs{F_2 - V(x,y)} \leqslant \abs{E_2 - V(x,y)} + F_2' \leqslant \frac{c_{\ee}}{n^{\ee/8}} \left( 1+\max(y,0)+N(x) \right). \] To bound $F_4$, set $z=y+r(x)$. By \eqref{decMSX}, on the event $\left\{ z+M_{\nu_n^{\ee^2}} \leqslant n^{1/2-\ee/4} \right\} \cap \left\{ y+S_{\nu_n^{\ee^2}} > n^{1/2-\ee/8} \right\}$ we have $\abs{r\left( X_{\nu_n^{\ee^2}} \right)} > c_{\ee} n^{1/2-\ee/8}.$ Therefore, $$y+S_{\nu_n^{\ee^2}} \leqslant n^{1/2-\ee/4} - r\left( X_{\nu_n^{\ee^2}} \right) \leqslant \left( \frac{c_{\ee}}{n^{\ee/8}}+1 \right) \abs{r\left( X_{\nu_n^{\ee^2}} \right)},$$ which implies that \[ F_4 \leqslant \bb E_x \left( y+S_{\nu_n^{\ee^2}} \,;\, z+M_{\nu_n^{\ee^2}} > n^{1/2-\ee/4} \,,\, \tau_y > \nu_n^{\ee^2} \,,\, \nu_n^{\ee^2} \leqslant \pent{n^{1-\ee}} \right) + c_{\ee} F_2'. \] By \eqref{decMSX}, Lemma \ref{SurE4} and \eqref{MajF2'}, we conclude that \[ F_4 \leqslant E_4 + F_2' + c_{\ee} F_2' \leqslant c_{\ee}\frac{\max(y,0) + \left( 1+ y\mathbbm 1_{\{y> n^{1/2-2\ee}\}} +N(x) \right)^2}{n^{\ee/2}}. \] \end{proof} \subsection{Proof of Theorem \ref{thontau}} Assume that $(x,y) \in \bb X \times \bb R$. Let $\left( B_t \right)_{t \geqslant 0}$ be the Brownian motion defined by Proposition \ref{majdeA_k}. Consider the event \begin{equation} \label{defdeAk} A_k = \{ \sup_{0\leqslant t \leqslant 1} \abs{S_{\pent{tk}} - \sigma B_{tk}} \leqslant k^{1/2-2\ee} \} \end{equation} and denote by $\overline{A}_k$ its complement. Using these notations, we write \begin{align} \bb P_x \left( \tau_y >n \right) =\;& \bb P_x \left( \tau_y >n \,,\, \nu_n^{\ee^2} > \pent{n^{1-\ee}} \right) \nonumber\\ +&\sum_{k=\pent{n^{\ee^2}}+1}^{\pent{n^{1-\ee}}} \int_{\bb X \times \bb R} \bb P_{x'} \left( \tau_{y'} > n-k \,,\, \overline{A}_{n-k} \right) \bb P_x \left( X_k \in \dd x' \,,\, y+S_k \in \dd y' \,,\, \right. \nonumber\\ &\underbrace{\hspace{8.5cm} \left. 
\tau_y > k \,,\, \nu_n^{\ee^2} = k \right)}_{=:J_1} \nonumber\\ \label{taudecJ0} +&\sum_{k=\pent{n^{\ee^2}}+1}^{\pent{n^{1-\ee}}} \int_{\bb X \times \bb R} \bb P_{x'} \left( \tau_{y'} > n-k \,,\, A_{n-k} \right) \bb P_x \left( X_k \in \dd x' \,,\, y+S_k \in \dd y' \,,\, \right. \\ &\underbrace{\hspace{8.5cm} \left. \tau_y > k \,,\, \nu_n^{\ee^2} = k \right)}_{=:J_2} \nonumber. \end{align} \textit{Bound of $J_1$.} Since $n-k \geqslant c_{\ee} n$, for any $k\leqslant \pent{n^{1-\ee}}$, by Proposition \ref{majdeA_k}, we have $\bb P_{x'} \left( \tau_{y'} > n-k \,,\, \overline{A}_{n-k} \right) \leqslant \frac{c_{\ee} \left( 1+N(x') \right)}{n^{2\ee}}$. So, using the fact that $n^{1/2-\ee} \leqslant z+M_{\nu_n}$ and Lemma \ref{SurE1etE2}, \begin{align} J_1 &\leqslant \frac{c_{\ee}}{n^{2\ee}} \bb E_x \left( 1+\e^{-c n^{\ee^2}} N\left( X_{\nu_n} \right) \,;\, \tau_y > \nu_n \,,\, \nu_n \leqslant \pent{n^{1-\ee}} \right) \nonumber\\ &\leqslant \frac{c_{\ee}}{n^{1/2+\ee}} E_1 + \e^{-c_{\ee} n^{\ee^2}} \left( 1+N(x) \right) \nonumber\\ &\leqslant \frac{c_{\ee} \left( 1+\max(y,0)+N(x) \right)}{n^{1/2+\ee}}. \label{taumajJ1} \end{align} \textit{Bound of $J_2$.} We split $J_2$ into two terms: \begin{align} J_2 =\;& \sum_{k=\pent{n^{\ee^2}}+1}^{\pent{n^{1-\ee}}} \int_{\bb X \times \bb R} \bb P_{x'} \left( \tau_{y'} > n-k \,,\, A_{n-k} \right) \nonumber\\ &\underbrace{\quad \times \bb P_x \left( X_k \in \dd x' \,,\, y+S_k \in \dd y' \,,\, y+S_k > n^{1/2-\ee/8} \,,\, \tau_y > k \,,\, \nu_n^{\ee^2} = k \right)}_{=:J_3} \nonumber\\ \label{taudecJ2} &+\sum_{k=\pent{n^{\ee^2}}+1}^{\pent{n^{1-\ee}}} \int_{\bb X \times \bb R} \bb P_{x'} \left( \tau_{y'} > n-k \,,\, A_{n-k} \right) \\ &\underbrace{\quad \times \bb P_x \left( X_k \in \dd x' \,,\, y+S_k \in \dd y' \,,\, y+S_k \leqslant n^{1/2-\ee/8} \,,\, \tau_y > k \,,\, \nu_n^{\ee^2} = k \right)}_{=:J_4}. 
\nonumber \end{align} \textit{Bound of $J_3$.} With $y'_+ = y'+(n-k)^{1/2-2\ee}$, we have \begin{equation} \label{tautotaubm} \bb P_{x'} \left( \tau_{y'} > n-k \,,\, A_{n-k} \right) \leqslant \bb P_{x'} \left( \tau_{y_+'}^{bm} > n-k \right), \end{equation} where $\tau_y^{bm}$ is defined in \eqref{defdetaubm}. By the point \ref{exittimeforB001} of Proposition \ref{exittimeforB} and Lemma \ref{SurF2etF4}, \begin{align} J_3 &\leqslant \frac{c_{\ee}}{\sqrt{n}} \bb E_x \left( y+S_{\nu_n^{\ee^2}} +n^{1/2-2\ee} \,;\, y+S_{\nu_n^{\ee^2}} > n^{1/2-\ee/8} \,,\, \tau_y > \nu_n^{\ee^2} \,,\, \nu_n^{\ee^2} \leqslant \pent{n^{1-\ee}} \right) \nonumber\\ &\leqslant \frac{2c_{\ee}}{\sqrt{n}} F_4 \nonumber\\ \label{taumajJ3} &\leqslant c_{\ee}\frac{\max(y,0) + \left( 1+ y\mathbbm 1_{\{y> n^{1/2-2\ee}\}} +N(x) \right)^2}{n^{1/2+\ee/2}}. \end{align} \textit{Upper bound of $J_4$.} For $y' \leqslant n^{1/2-\ee/8}$ and any $k\leqslant \pent{n^{1-\ee}}$, it holds $y_+' \leqslant 2n^{1/2-\ee/8} \leqslant c_{\ee} (n-k)^{1/2-\ee/8}$. Therefore, by \eqref{tautotaubm} and the point \ref{exittimeforB002} of Proposition \ref{exittimeforB} with $\theta_m = c_{\ee} m^{-\ee/8}$ and $m=n-k$, we have \begin{align*} J_4 &\leqslant \sum_{k=\pent{n^{\ee^2}}+1}^{\pent{n^{1-\ee}}} \int_{\bb X \times \bb R} \frac{2\left( 1+ \theta_{n-k}^2 \right)}{\sqrt{2\pi (n-k)}\sigma} \bb E_x \left( y + S_k +(n-k)^{1/2-2\ee} \,;\, \right. \\ &\hspace{7cm} \left. y+S_k \leqslant n^{1/2-\ee/8} \,,\, \tau_y > k \,,\, \nu_n^{\ee^2} = k \right). \end{align*} Since $\frac{2\left( 1+ \theta_{n-k}^2 \right)}{\sqrt{2\pi (n-k)}\sigma} \leqslant \frac{2}{\sqrt{2\pi n}\sigma} \left( 1+\frac{c_{\ee}}{n^{\ee/4}} \right)$ and $n^{1/2-\ee} \leqslant z+M_{\nu_n}$, we get \begin{align*} J_4 &\leqslant \frac{2}{\sqrt{2\pi n}\sigma} \left( 1+\frac{c_{\ee}}{n^{\ee/4}} \right) \bb E_x \left( y+S_{\nu_n^{\ee^2}} +n^{1/2-2\ee} \,;\, y+S_{\nu_n^{\ee^2}} \leqslant n^{1/2-\ee/8} \,,\, \right.\\ &\hspace{9cm} \left. 
\tau_y > \nu_n^{\ee^2} \,,\, \nu_n^{\ee^2} \leqslant \pent{n^{1-\ee}} \right)\\ &\leqslant \frac{2}{\sqrt{2\pi n}\sigma} \left( 1+\frac{c_{\ee}}{n^{\ee/4}} \right) F_2 + \frac{c_{\ee}}{n^{1/2+\ee}} E_1. \end{align*} By Lemmas \ref{SurE1etE2}, \ref{SurF2etF4} and \eqref{encadrV}, \begin{equation} \label{taumajJ4001} J_4 \leqslant \frac{2V(x,y)}{\sqrt{2\pi n}\sigma} + \frac{c_{\ee} \left( 1+\max(y,0)+N(x) \right) }{n^{1/2+\ee/8}}. \end{equation} \textit{Lower bound of $J_4$.} With $y_-'=y'-(n-k)^{1/2-2\ee}$, we have $\bb P_{x'} \left( \tau_{y'} > n-k \,,\, A_{n-k} \right) \geqslant \bb P_{x'} \left( \tau_{y_-'}^{bm} > n-k \right) - \bb P_{x'} \left( \overline{A}_{n-k} \right)$. Considering the event $\{ y+S_k > (n-k)^{1/2-2\ee} \}$ and repeating the arguments used to bound $J_1$ (see \eqref{taumajJ1}), we obtain \begin{align*} J_4 &\geqslant \sum_{k=\pent{n^{\ee^2}}+1}^{\pent{n^{1-\ee}}} \int_{\bb X \times \bb R} \bb P_{x'} \left( \tau_{y_-'}^{bm} > n-k \right) \bb P_x \left( X_k \in \dd x' \,,\, y+S_k \in \dd y' \,,\, \right. \\ &\hspace{3cm} \left. y+S_k \leqslant n^{1/2-\ee/8} \,,\, y+S_k > (n-k)^{1/2-2\ee} \,,\, \tau_y > k \,,\, \nu_n^{\ee^2} = k \right) \\ &\quad-\frac{c_{\ee}\left(1+\max(y,0)+N(x)\right)}{n^{1/2+\ee}}. \end{align*} Using the point \ref{exittimeforB002} of Proposition \ref{exittimeforB} and Proposition \ref{majdeA_k}, \begin{align*} J_4 \geqslant\;& \frac{2}{\sqrt{2\pi n}\sigma} \left( 1-\frac{c_{\ee}}{n^{\ee/4}} \right) \bb E_x \left( y+S_{\nu_n^{\ee^2}} - (n-\nu_n^{\ee^2})^{1/2-2\ee} \,;\, \right.\\ &\qquad \left. 
y+S_{\nu_n^{\ee^2}} > (n-\nu_n^{\ee^2})^{1/2-2\ee} \,,\, y+S_{\nu_n^{\ee^2}} \leqslant n^{1/2-\ee/8} \,,\, \tau_y > \nu_n^{\ee^2} \,,\, \nu_n^{\ee^2} \leqslant \pent{n^{1-\ee}} \right)\\ &-\frac{c_{\ee}\left(1+\max(y,0)+N(x)\right)}{n^{1/2+\ee}} \\ \geqslant\;& \frac{2}{\sqrt{2\pi n}\sigma} \left( 1-\frac{c_{\ee}}{n^{\ee/4}} \right) F_2 - \frac{c_{\ee}}{\sqrt{n}} F_4 - \frac{c_{\ee}}{n^{1/2+\ee}} E_1 -\frac{c_{\ee}\left(1+\max(y,0)+N(x)\right)}{n^{1/2+\ee}}. \end{align*} By Lemmas \ref{SurE1etE2}, \ref{SurF2etF4} and \eqref{encadrV}, \begin{equation} \label{taumajJ4002} J_4 \geqslant \frac{2V(x,y)}{\sqrt{2\pi n}\sigma} -c_{\ee}\frac{\max(y,0) + \left( 1+ y\mathbbm 1_{\{y> n^{1/2-2\ee}\}} +N(x) \right)^2}{n^{1/2+\ee/8}}. \end{equation} Putting together \eqref{taumajJ4002}, \eqref{taumajJ4001}, \eqref{taumajJ3} and \eqref{taudecJ2}, \[ \abs{J_2 - \frac{2V(x,y)}{\sqrt{2\pi n}\sigma}} \leqslant c_{\ee}\frac{\max(y,0) + \left( 1+ y\mathbbm 1_{\{y> n^{1/2-2\ee}\}} +N(x) \right)^2}{n^{1/2+\ee/8}}. \] Taking into account \eqref{taumajJ1}, \eqref{taudecJ0} and Lemma \ref{concentnu}, we conclude that, for any $(x,y) \in \bb X \times \bb R$, \begin{equation} \abs{\bb P_x \left( \tau_y > n \right) - \frac{2V(x,y)}{\sqrt{2\pi n}\sigma}} \leqslant c_{\ee}\frac{\max(y,0) + \left( 1+ y\mathbbm 1_{\{y> n^{1/2-2\ee}\}} +N(x) \right)^2}{n^{1/2+\ee/8}}. \label{taumajJ4003} \end{equation} Taking the limit as $n\to +\infty$ in \eqref{taumajJ4003}, we obtain the point \ref{thontau001} of Theorem \ref{thontau}. The point \ref{thontau002} of Theorem \ref{thontau} is an immediate consequence of the points \ref{posdeVsurDgamma002} and \ref{posdeVsurDgamma004} of Proposition \ref{posdeVsurDgamma}. \subsection{Proof of Theorem \ref{thontau2}} The point \ref{thontau001bis} of Theorem \ref{thontau2} is exactly \eqref{taumajJ4003}. 
In order to prove the point \ref{thontau003} of Theorem \ref{thontau2}, we will first establish a bound for $\bb P_x \left( \tau_y > n \right)$ when $z=y+r(x)\geqslant n^{1/2-\ee}$. Set $m_{\ee} = n -\pent{n^{\ee}}$. By the Markov property, \begin{align} \bb P_x \left( \tau_y > n \right) = \int_{\bb X \times \bb R} &\bb P_{x'} \left( \tau_{y'} > m_{\ee} \right) \nonumber\\ \label{majtauygrand} &\qquad \times \bb P_x \left( X_{\pent{n^{\ee}}} \in \dd x' \,,\, y+S_{\pent{n^{\ee}}} \in \dd y' \,,\, \tau_y > \pent{n^{\ee}} \right). \end{align} For any $x' \in \bb X$ and $y' >0$, using $A_{m_{\ee}}$ defined by \eqref{defdeAk}, we have \[ \bb P_{x'} \left( \tau_{y'} > m_{\ee} \right) \leqslant \bb P_{x'} \left( \tau_{y_+'}^{bm} > m_{\ee} \right) + \bb P_{x'} \left( \overline{A}_{m_{\ee}} \right), \] where $\tau_{y_+'}^{bm}$ is defined by \eqref{defdetaubm} and $y_+' = y'+ m_{\ee}^{1/2-2\ee}$. By the point \ref{exittimeforB001} of Proposition \ref{exittimeforB} and Proposition \ref{majdeA_k}, \[ \bb P_{x'} \left( \tau_{y'} > m_{\ee} \right) \leqslant \frac{c y_+'}{\sqrt{m_{\ee}}} + \frac{c_{\ee}}{m_{\ee}^{2\ee}} \left( 1+N(x') \right) \leqslant \frac{c_{\ee} y'}{\sqrt{n}} + \frac{c_{\ee}}{n^{2\ee}} + \frac{c_{\ee}}{n^{2\ee}} N(x'). \] Introducing this bound in \eqref{majtauygrand}, we get \[ \bb P_x \left( \tau_y > n \right) \leqslant \frac{c_{\ee}}{\sqrt{n}} \bb E_x \left( y+S_{\pent{n^{\ee}}} \,;\, \tau_y > \pent{n^{\ee}} \right) + \frac{c_{\ee}}{n^{2\ee}} + \frac{c_{\ee}}{n^{2\ee}} \bb E_x \left( N \left( X_{\pent{n^{\ee}}} \right) \right). \] Using Corollary \ref{unifmajdelafoncinv}, the inequality \eqref{decexpN} and the fact that $n^{1/2-\ee} \leqslant z$, we find \begin{equation} \label{point2th22} \bb P_x \left( \tau_y > n \right) \leqslant \frac{c_{\ee} \left( z+N(x) \right)}{\sqrt{n}}.
\end{equation} Now, for any $x\in \bb X$, $z \in \bb R$ and $y=z-r(x)$, using the Markov property, \eqref{point2th22} and the fact that $\sqrt{n-\nu_n} \geqslant c_{\ee} \sqrt{n}$ on the event $\{ \nu_n \leqslant \pent{n^{1-\ee}} \}$, we have \begin{align*} \bb P_x \left( \tau_y > n \right) \leqslant\;& \frac{c_{\ee}}{\sqrt{n}} \bb E_x \left( z+M_{\nu_n} +N \left( X_{\nu_n} \right) \,;\, \tau_y >\nu_n \,,\, \nu_n \leqslant \pent{n^{1-\ee}} \right) \\ &+ \bb P_x \left( \tau_y > n \,,\, \nu_n > \pent{n^{1-\ee}} \right). \end{align*} Using Lemma \ref{concentnu} and the fact that $N \left( X_{\nu_n} \right) \leqslant z+M_{\nu_n}$ on the event $\{ N \left( X_{\nu_n} \right) \leqslant n^{1/2-\ee} \}$, with $l=\pent{n^{1/2-\ee}}$, it holds \begin{align*} \bb P_x \left( \tau_y > n \right) \leqslant\;& \frac{c_{\ee}}{\sqrt{n}} \bb E_x \left( \left( z+M_{\nu_n} \right) \left( 1 + \mathbbm 1_{\left\{ N \left( X_{\nu_n} \right) \leqslant n^{1/2-\ee} \right\}} \right) \,;\, \tau_y >\nu_n \,,\, \nu_n \leqslant \pent{n^{1-\ee}} \right) \\ &+ \frac{c_{\ee}}{\sqrt{n}} \bb E_x \left( N_l \left( X_{\nu_n} \right) \,;\, \tau_y >\nu_n \,,\, \nu_n \leqslant \pent{n^{1-\ee}} \right) + \e^{-c_{\ee} n^{\ee}} \left( 1+N(x) \right) \\ \leqslant\;& \frac{2c_{\ee}}{\sqrt{n}} E_1 + \frac{c_{\ee}}{\sqrt{n}} \sum_{k=1}^{\pent{n^{\ee}}} \bb E_x \left( N_l \left( X_k \right) \right) \\ &+ \frac{c_{\ee}}{\sqrt{n}} \sum_{k=\pent{n^{\ee}}+1}^{\pent{n^{1-\ee}}} \bb E_x \left( N_l \left( X_k \right) \,;\, \tau_y > k \right) + \e^{-c_{\ee} n^{\ee}} \left( 1+N(x) \right). 
\end{align*} By \eqref{decexpNl} and the Markov property, \begin{align*} \bb P_x \left( \tau_y > n \right) \leqslant\;& \frac{c_{\ee}}{\sqrt{n}} E_1 + \frac{c_{\ee}}{\sqrt{n}} \left( \frac{c n^{\ee}}{l^{1+\beta}} + \left( 1+N(x) \right) \right) + \e^{-c_{\ee} n^{\ee}} \left( 1+N(x) \right) \\ &+ \frac{c_{\ee}}{\sqrt{n}} \sum_{j=1}^{\pent{n^{1-\ee}}-\pent{n^{\ee}}} \left[ \frac{c}{l^{1+\beta}} \bb P_x \left( \tau_y > j \right) + \e^{-cn^{\ee}} \bb E_x \left( 1+N\left( X_j \right) \right) \right] \\ \leqslant\;& \frac{c_{\ee}}{\sqrt{n}} E_1 + \frac{c_{\ee} \left( 1+N(x) \right)}{\sqrt{n}} + \frac{c_{\ee}}{\sqrt{n}} \frac{c}{l^{1+\beta}} \sum_{j=1}^{\pent{n^{1-\ee}}} \bb P_x \left( \tau_y > j \right). \end{align*} Using Lemmas \ref{SurE1etE2} and \ref{taupptrn}, we deduce the point \ref{thontau003} of Theorem \ref{thontau2}. \section{Asymptotic for the conditioned Markov walk} \label{AsCondMarkWalk} In this section, we prove Theorem \ref{loideRayleigh}. The arguments are similar to those given in Section \ref{AsExTi}. We also keep the same notation. Let $(x,y) \in \bb X \times \bb R$ and fix a positive real number $t_0$.
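Let us sketch how the bounds of this section will yield Theorem \ref{loideRayleigh}: for fixed $(x,y)$ with $V(x,y)>0$, dividing the estimate obtained at the end of the section by the asymptotic of $\bb P_x \left( \tau_y > n \right)$ from Theorem \ref{thontau} gives, roughly,
\[
\bb P_x \left( \frac{y+S_n}{\sqrt{n}} \leqslant t \,\Big|\, \tau_y > n \right) = \frac{\frac{2V(x,y)}{\sqrt{2\pi n}\sigma} \left( 1-\e^{-\frac{t^2}{2\sigma^2}} \right) + O \left( n^{-1/2-\ee/8} \right)}{\frac{2V(x,y)}{\sqrt{2\pi n}\sigma} + O \left( n^{-1/2-\ee/8} \right)} \underset{n\to +\infty}{\longrightarrow} 1-\e^{-\frac{t^2}{2\sigma^2}},
\]
the Rayleigh-type limiting distribution function.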
For any $t\in \left[ 0 , t_0 \right]$, we write \begin{align} \bb P_x &\left( y+S_n \leqslant t\sqrt{n} \,,\, \tau_y > n \right) \nonumber\\ =\;& \bb P_x \left( y+S_n \leqslant t\sqrt{n} \,,\, \tau_y > n \,,\, \nu_n^{\ee^2} > \pent{n^{1-\ee}} \right) \nonumber\\ &+ \sum_{k=\pent{n^{\ee^2}}+1}^{\pent{n^{1-\ee}}} \int_{\bb X \times \bb R} \bb P_{x'} \left( y'+S_{n-k} \leqslant t\sqrt{n} \,,\, \tau_{y'} > n-k \,,\, \overline{A}_{n-k} \right) \nonumber\\ &\underbrace{\hspace{3cm} \times \bb P_x \left( X_k \in \dd x' \,,\, y+S_k \in \dd y' \,,\, \tau_y > k \,,\, \nu_n^{\ee^2} = k \right)}_{=:L_1} \nonumber\\ \label{decL0} &+ \sum_{k=\pent{n^{\ee^2}}+1}^{\pent{n^{1-\ee}}} \int_{\bb X \times \bb R} \bb P_{x'} \left( y'+S_{n-k} \leqslant t\sqrt{n} \,,\, \tau_{y'} > n-k \,,\, A_{n-k} \right) \\ &\underbrace{\hspace{3cm} \times \bb P_x \left( X_k \in \dd x' \,,\, y+S_k \in \dd y' \,,\, \tau_y > k \,,\, \nu_n^{\ee^2} = k \right)}_{=:L_2}. \nonumber \end{align} \textit{Bound of $L_1$.} With $J_1$ defined in \eqref{taudecJ0} and with the bound \eqref{taumajJ1}, we have \begin{equation} \label{MajL1} L_1 \leqslant J_1 \leqslant \frac{c_{\ee} \left( 1+\max(y,0)+N(x) \right)}{n^{1/2+\ee}}.
\end{equation} \textit{Bound of $L_2$.} According to whether $y+S_k \leqslant n^{1/2-\ee/8}$ or not, we write \begin{align} L_2 =\;& \sum_{k=\pent{n^{\ee^2}} +1}^{\pent{n^{1-\ee}}} \int_{\bb X \times \bb R} \bb P_{x'} \left( y'+S_{n-k} \leqslant t \sqrt{n} \,,\, \tau_{y'} > n-k \,,\, A_{n-k} \right) \nonumber\\ &\underbrace{\quad \times \bb P_x \left( X_k \in \dd x' \,,\, y+S_k \in \dd y' \,,\, y+S_k > n^{1/2-\ee/8} \,,\, \tau_y > k \,,\, \nu_n^{\ee^2} = k \right)}_{=:L_3} \nonumber\\ \label{decL2} &+\sum_{k=\pent{n^{\ee^2}}+1}^{\pent{n^{1-\ee}}} \int_{\bb X \times \bb R} \bb P_{x'} \left( y'+S_{n-k} \leqslant t \sqrt{n} \,,\, \tau_{y'} > n-k \,,\, A_{n-k} \right) \\ &\underbrace{\quad \times \bb P_x \left( X_k \in \dd x' \,,\, y+S_k \in \dd y' \,,\, y+S_k \leqslant n^{1/2-\ee/8} \,,\, \tau_y > k \,,\, \nu_n^{\ee^2} = k \right)}_{=:L_4}. \nonumber \end{align} \textit{Bound of $L_3$.} With $J_3$ defined in \eqref{taudecJ2} and with the bound \eqref{taumajJ3}, we have \begin{equation} \label{MajL3} L_3 \leqslant J_3 \leqslant c_{\ee}\frac{\max(y,0) + \left( 1+ y\mathbbm 1_{\{y> n^{1/2-2\ee}\}} +N(x) \right)^2}{n^{1/2+\ee/2}}. \end{equation} \textit{Bound of $L_4$.} We start with the upper bound. Set $y_+' = y' + (n-k)^{1/2-2\ee}$ and $t_+ = t+\frac{2}{n^{2\ee}}$. Note that on the event $\{ y'+S_{n-k} \leqslant t\sqrt{n} \,,\, \tau_{y'} > n-k \,,\, A_{n-k} \}$ we have $y_+' + \sigma B_{n-k} \leqslant t_+\sqrt{n}$ and $\tau_{y_+'}^{bm} > n-k$. Therefore, by Proposition \ref{intBro}, \begin{align*} &\bb P_{x'} \left( y'+S_{n-k} \leqslant t\sqrt{n} \,,\, \tau_{y'} > n-k \,,\, A_{n-k} \right) \\ &\hspace{4cm} \leqslant \frac{2}{\sqrt{2\pi}} \int_0^{\frac{t_+ \sqrt{n}}{\sigma \sqrt{n-k}}} \e^{-s^2/2} \sh \left( s \frac{y'_+}{\sqrt{n-k}\sigma} \right) \dd s. 
\end{align*} We shall use the following bounds: \begin{align*} &\sh(u) \leqslant u \left( 1+\frac{u^2}{6} \ch(u) \right), \qquad &&\text{for } u\geqslant 0, \\ &\frac{y_+'}{\sigma\sqrt{n-k}} \leqslant \frac{y_+'}{\sigma\sqrt{n}} \left( 1+ \frac{c_{\ee}}{n^{\ee}} \right) \leqslant \frac{c_{\ee}}{n^{\ee/8}}, \qquad &&\text{for } y' \leqslant n^{1/2-\ee/8} \text{ and } k\leqslant \pent{n^{1-\ee}},\\ &\frac{t_+ \sqrt{n}}{\sigma \sqrt{n-k}} \leqslant \frac{t}{\sigma} + \frac{c_{\ee,t_0}}{n^{\ee}} \leqslant c_{\ee,t_0}, \qquad &&\text{for } k\leqslant \pent{n^{1-\ee}}. \end{align*} Consequently, \begin{align*} &\bb P_{x'} \left( y'+S_{n-k} \leqslant t\sqrt{n} \,,\, \tau_{y'} > n-k \,,\, A_{n-k} \right) \\ &\hspace{2cm} \leqslant \frac{2y_+'}{\sqrt{2\pi n}\sigma} \left( 1+ \frac{c_{\ee}}{n^{\ee}} \right) \int_0^{\frac{t_+ \sqrt{n}}{\sigma \sqrt{n-k}}} s \e^{-s^2/2} \left( 1 + \frac{c_{\ee} s^2}{n^{\ee/4}}\ch \left( c_{\ee} s \right) \right) \dd s \\ &\hspace{2cm} \leqslant \frac{2y_+'}{\sqrt{2\pi n}\sigma} \left( 1+ \frac{c_{\ee}}{n^{\ee}} \right) \left( 1 + \frac{c_{\ee,t_0}}{n^{\ee/4}} \right) \left( \int_0^{\frac{t}{\sigma}} s \e^{-s^2/2} \dd s + \int_{\frac{t}{\sigma}}^{\frac{t_+ \sqrt{n}}{\sigma \sqrt{n-k}}} s \e^{-s^2/2} \dd s \right) \\ &\hspace{2cm} \leqslant \frac{2y_+'}{\sqrt{2\pi n}\sigma} \left( 1+\frac{c_{\ee,t_0}}{n^{\ee/4}} \right) \left( 1-\e^{-\frac{t^2}{2\sigma^2}} + \frac{c_{\ee,t_0}}{n^{\ee}} \right). \end{align*} This implies the upper bound (with $F_2$ and $E_1$ from Lemmas \ref{SurF2etF4} and \ref{SurE1etE2}, respectively) \begin{align*} L_4 &\leqslant \frac{2}{\sqrt{2\pi n}\sigma} \left( 1+\frac{c_{\ee,t_0}}{n^{\ee/4}} \right) \left( 1-\e^{-\frac{t^2}{2\sigma^2}} + \frac{c_{\ee,t_0}}{n^{\ee}} \right) F_2 + \frac{c_{\ee,t_0}}{n^{1/2+\ee}} E_1 \\ &\leqslant \frac{2V(x,y)}{\sqrt{2\pi n}\sigma} \left( 1-\e^{-\frac{t^2}{2\sigma^2}} \right) + \frac{c_{\ee,t_0} \left( 1+\max(y,0)+N(x) \right)}{n^{1/2+\ee/8}}. 
\end{align*} The proof of the following lower bound of $L_4$, being similar, is left to the reader: \[ L_4 \geqslant \frac{2V(x,y)}{\sqrt{2\pi n}\sigma} \left( 1-\e^{-\frac{t^2}{2\sigma^2}} \right) - c_{\ee,t_0}\frac{\max(y,0) + \left( 1+ y\mathbbm 1_{\{y> n^{1/2-2\ee}\}} +N(x) \right)^2}{n^{1/2+\ee/8}}. \] Combining the upper and the lower bounds of $L_4$ and \eqref{MajL3} with \eqref{decL2}, we obtain an asymptotic expansion of $L_2$. Inserting this expansion and the bound \eqref{MajL1} into \eqref{decL0} and using Lemma \ref{concentnu}, we conclude that \begin{align*} &\abs{\bb P_x \left( y+S_n \leqslant t \sqrt{n} \,,\, \tau_y > n \right) - \frac{2V(x,y)}{\sqrt{2\pi n}\sigma} \left( 1-\e^{-\frac{t^2}{2\sigma^2}} \right)} \\ &\hspace{6cm} \leqslant c_{\ee,t_0}\frac{\max(y,0) + \left( 1+ y\mathbbm 1_{\{y> n^{1/2-2\ee}\}} +N(x) \right)^2}{n^{1/2+\ee/8}}. \end{align*} Using the asymptotic of $\bb P_x (\tau_y>n)$ provided by Theorem \ref{thontau} finishes the proof of Theorem \ref{loideRayleigh}. \section{Appendix} \label{Appendix} \subsection{Convergence of recursively bounded monotonic sequences} We recall two lemmas from \cite{GLLP_affine_2016} which give sufficient conditions for a monotonic sequence to be bounded. \begin{lemma} \label{lemanalyse} Let $(u_n)_{n\geqslant 1}$ be a non-decreasing sequence of reals such that there exist $\ee \in (0,1)$ and $\alpha, \beta, \gamma, \delta \geqslant 0$ such that, for any $n\geqslant 2$, \[ u_n \leqslant \left( 1+ \frac{\alpha}{n^\ee} \right) u_{\pent{n^{1-\ee}}} + \frac{\beta}{n^\ee} + \gamma \e^{-\delta n^\ee}. \] Then, for any $n\geqslant 2$ and any integer $n_f \in \{2, \dots, n \}$, \[ u_n \leqslant \left( 1+ \frac{c_{\alpha,\ee}}{n_f^\ee} \right) u_{n_f} + \beta \frac{c_{\alpha,\ee}}{n_f^\ee} + \gamma \e^{-c_{\alpha,\delta,\ee} n_f^\ee}. \] In particular, choosing $n_f$ constant, it follows that $(u_n)_{n\geqslant 1}$ is bounded.
\end{lemma} \begin{lemma} \label{lemanalyse2} Let $(u_n)_{n\geqslant 1}$ be a non-increasing sequence of reals such that there exist $\ee \in (0,1)$ and $\beta \geqslant 0$ such that, for any $n\geqslant 2$, \[ u_n \geqslant u_{\pent{n^{1-\ee}}} - \frac{\beta}{n^\ee}. \] Then, for any $n\geqslant 2$ and any integer $n_f \in \{2, \dots, n \}$, \[ u_n \geqslant u_{n_f} - c_\ee \frac{\beta}{n_f^\ee}. \] In particular, choosing $n_f$ constant, it follows that $(u_n)_{n\geqslant 1}$ is bounded. \end{lemma} \subsection{Brownian motion and strong approximation} \label{Strong Approx} We consider the standard Brownian motion $\left( B_t \right)_{t\geqslant 0}$ with values in $\bb R$ living on the probability space $\left( \Omega, \mathscr F, \bb P \right)$. Define the exit time \begin{equation} \label{defdetaubm} \tau_y^{bm} = \inf \{ t\geqslant 0, \, y+\sigma B_t \leqslant 0 \}, \end{equation} where $\sigma>0$. The following statements are due to L\'evy \cite{levy_theorie_1937}. \begin{proposition} \label{intBro} For any $y>0$, $0\leqslant a \leqslant b$ and $n \geqslant 1$, \[ \bb P \left( \tau_y^{bm} > n \,,\, y+\sigma B_n \in [a,b] \right) = \frac{1}{\sqrt{2\pi n} \sigma} \int_a^b \left( \e^{-\frac{(s-y)^2}{2n\sigma^2}} - \e^{-\frac{(s+y)^2}{2n\sigma^2}} \right) \dd s. \] \end{proposition} \begin{proposition}\ \label{exittimeforB} \begin{enumerate}[ref=\arabic*, leftmargin=*, label=\arabic*.] \item \label{exittimeforB001} For any $y>0$, \[ \bb P \left( \tau_y^{bm}>n \right) \leqslant c\frac{y}{\sqrt{n}}. \] \item \label{exittimeforB002} For any sequence of real numbers $(\theta_n)_{n\geqslant 0}$ such that $\theta_n \underset{n\to +\infty}{\longrightarrow} 0$, \[ \underset{y\in [0, \theta_n \sqrt{n}]}{\sup} \left( \frac{\bb P \left( \tau_y^{bm}>n \right)}{\frac{2y}{\sqrt{2\pi n}\sigma}} - 1 \right) = O(\theta_n^2).
\] \end{enumerate} \end{proposition} Moreover, under Hypotheses \ref{BASP}-\ref{CECO}, it is proved in \cite{ion_grama_rate_2014} that there is a version of the Markov walk $(S_n)_{n\geqslant 0}$ and of the standard Brownian motion $(B_t)_{t\geqslant 0}$ living on the same probability space, which are close in the following sense: \begin{proposition} \label{majdeA_k} Assume that the Markov chain $\left(X_n\right)_{n\geqslant 0}$ and the function $f$ satisfy Hypotheses \ref{BASP}-\ref{CECO}. There exists $\ee_0 >0$ such that, for any $\ee \in (0,\ee_0]$, $x\in \bb X$ and $n\geqslant 1$, without loss of generality (on an extension of the initial probability space) one can reconstruct the sequence $(S_n)_{n\geqslant 0}$ with a continuous-time Brownian motion $(B_t)_{t\in \bb R_{+} }$, such that \begin{equation} \bb P_x \left( \underset{0 \leqslant t \leqslant 1}{\sup} \abs{S_{\pent{tn}}-\sigma B_{tn}} > n^{1/2-\ee} \right) \leqslant \frac{c_{\ee}}{n^{\ee}} ( 1+N(x) ), \label{KMT001} \end{equation} where $\sigma$ is defined in the point \ref{MomAs002} of Proposition \ref{MomAs}. \end{proposition} In the original result the right-hand side in \eqref{KMT001} is $c_\ee n^{-\ee} (1+N(x))^{\alpha}$ with $\alpha>2$. To obtain the result of Proposition \ref{majdeA_k}, it suffices to take the power $1/\alpha$ on both sides, to use the inequality $p\leqslant p^{1/\alpha}$ for $p \in [0,1]$, and to rename $\ee/\alpha$ as $\ee$. Using this proposition, we deduce the following result. \begin{corollary} \label{BerEss} There exists $\ee_0 >0$ such that, for any $\ee \in (0,\ee_0)$, $x\in \bb X$ and $n\geqslant 1$, \[ \underset{t\in \bb R}{\sup} \, \abs{ \bb P_x \left( \frac{S_n}{\sqrt{n}} \leqslant t \right) - \int_{-\infty}^{t} \e^{-\frac{u^2}{2\sigma^2}} \frac{\dd u}{\sqrt{2\pi} \sigma} } \leqslant \frac{c_{\ee}}{n^\ee} \left(1+N(x)\right).
\] \end{corollary} \begin{proof} Using Proposition \ref{majdeA_k}, \begin{align*} \bb P_x \left( \frac{S_n}{\sqrt{n}} \leqslant t \right) \leqslant \; &\bb P_x \left( \abs{S_n-\sigma B_n} > n^{1/2-\ee} \right) + \bb P_x \left( \frac{\sigma B_n}{\sqrt{n}} \leqslant t + \frac{1}{n^\ee} \right) \\ \leqslant \; & \frac{c_{\ee}}{n^\ee} \left(1+N(x)\right) + \int_{-\infty}^{t+\frac{1}{n^\ee}} \e^{-\frac{u^2}{2\sigma^2}} \frac{\dd u}{\sqrt{2\pi} \sigma}. \end{align*} Therefore, \[ \bb P_x \left( \frac{S_n}{\sqrt{n}} \leqslant t \right) - \int_{-\infty}^{t} \e^{-\frac{u^2}{2\sigma^2}} \frac{\dd u}{\sqrt{2\pi} \sigma} \leqslant \frac{c_{\ee}}{n^\ee} \left(1+N(x)\right). \] In the same way, \[ \bb P_x \left( \frac{S_n}{\sqrt{n}} \leqslant t \right) \geqslant \int_{-\infty}^{t-\frac{1}{n^\ee}} \e^{-\frac{u^2}{2\sigma^2}} \frac{\dd u}{\sqrt{2\pi} \sigma} - \bb P_x \left( \frac{\abs{S_n-\sigma B_n}}{\sqrt{n}} > \frac{1}{n^\ee} \right) \] and the result follows. \end{proof} \subsection{Finiteness of the exit times \texorpdfstring{$\tau_y$}{} and \texorpdfstring{$T_z$}{}} \label{secExitfinit} \begin{lemma} \label{Exitfinit} For any $x\in \bb X$ and $y \in \bb R$, \[ \tau_y < +\infty \quad \bb P_x\text{-a.s.} \] \end{lemma} \begin{proof} Let $x \in \bb X.$ Assume first that $y > 0$. Since $\{ \tau_y > n \}$ is a non-increasing sequence of events, \[ \bb P_x \left( \tau_y = + \infty \right) = \underset{n\to +\infty}{\lim} \bb P_x \left( \tau_y > n \right) = \underset{n\to +\infty}{\lim} \bb P_x \left( y+S_k > 0, \, \forall k \leqslant n \right). \] Using Proposition \ref{majdeA_k}, \[ \bb P_x \left( y+S_k > 0, \, \forall k \leqslant n \right) \leqslant \frac{c_{\ee}}{n^\ee} \left(1+N(x)\right) + \bb P \left( \tau_{y+n^{1/2-\ee}}^{bm} > n \right).
\] Thus, by the point \ref{exittimeforB001} of Proposition \ref{exittimeforB}, \begin{equation} \bb P_x \left( \tau_y > n \right) \leqslant \frac{c_{\ee}}{n^\ee} \left(1+N(x)\right) + c\frac{y+n^{1/2-\ee}}{\sqrt{n}} \leqslant \frac{c_{\ee}}{n^\ee} \left(1+y+N(x)\right). \label{Pourypos} \end{equation} When $y \leqslant 0$, we have, for any $y' > 0$, $\bb P_x \left( \tau_y > n \right) \leqslant \bb P_x \left( \tau_{y'} > n \right)$. Taking the limit as $y'\to 0$, we obtain that \begin{equation} \bb P_x \left( \tau_y > n \right) \leqslant \frac{c_{\ee}}{n^\ee} \left(1+N(x)\right). \label{Pouryneg} \end{equation} From \eqref{Pourypos} and \eqref{Pouryneg} it follows that, for any $y \in \bb R$, \begin{equation} \label{tauyto0} \bb P_x \left( \tau_y > n \right) \leqslant \frac{c_{\ee}}{n^{\ee}} \left(1+\max(y,0)+N(x)\right). \end{equation} Taking the limit as $n\to +\infty$, we conclude that $\tau_y < +\infty$ $\bb P_x$-a.s. \end{proof} The same result can be obtained for the exit time $T_z$ of the martingale $(z+M_n)_{n\geqslant 0}$. \begin{lemma} \label{ExitfinitTz} For any $x \in \bb X$ and $z \in \bb R$, \[ T_z < + \infty \quad \bb P_x\text{-a.s.} \] \end{lemma} \begin{proof} Let $x \in \bb X,$ $z\in \bb R$ and $y=z-r(x)$. Assume first that $y>0$. Following the proof of Lemma \ref{Exitfinit}, \[ \bb P_x \left( T_z = + \infty \right) = \underset{n\to +\infty}{\lim} \bb P_x \left( z+M_k > 0 , \, \forall k \leqslant n \right). \] By \eqref{decMSX} the martingale $(z+M_n)_{n\geqslant 0}$ is related to the Markov walk $(y+S_n)_{n\geqslant 0}$, which gives \begin{align} \bb P_x \left( z+M_k > 0,\, \forall k \leqslant n \right) \leqslant\;& \bb P_x \left( y+S_k > -n^{1/2-\ee},\, \forall k \leqslant n \right) \nonumber\\ \label{decpourTzfinit} &\hspace{2cm} + \bb P_x \left( \underset{1 \leqslant k \leqslant n}{\max} \abs{r\left( X_k \right)} > n^{1/2-\ee} \right).
\end{align} On the one hand, in the same way as in the proof of Lemma \ref{Exitfinit}, \begin{equation} \label{decpourTzfinit001} \bb P_x \left( y+S_k > -n^{1/2-\ee},\, \forall k \leqslant n \right) \leqslant \frac{c_{\ee}}{n^\ee} \left(1+N(x)\right) + \bb P_x \left( \tau_{y+2n^{1/2-\ee}}^{bm} > n \right). \end{equation} On the other hand, using Lemma \ref{MTR}, for $n$ large enough, \[ \bb P_x \left( \underset{1 \leqslant k \leqslant n}{\max} \abs{r\left( X_k \right)} > n^{1/2-\ee} \right) \leqslant \sum_{k=1}^{\pent{n^{\ee}}} \bb E_x \left( \frac{c N\left( X_k \right)}{n^{1/2-\ee}} \right) + \sum_{k=\pent{n^{\ee}}+1}^{n} \bb E_x \left( \frac{c N_l \left( X_k \right)}{n^{1/2-\ee}} \right), \] where $l = c n^{1/2-\ee}$. So, using \eqref{decexpNl} and taking $\ee \leqslant \min \left( \frac{1}{6}, \frac{\beta}{2(3+\beta)} \right)$, we obtain \begin{equation} \label{decpourTzfinit002} \bb P_x \left( \underset{1 \leqslant k \leqslant n}{\max} \abs{r\left( X_k \right)} > n^{1/2-\ee} \right) \leqslant \frac{c_{\ee}}{n^{\ee}} \left( 1+N(x) \right). \end{equation} Putting together \eqref{decpourTzfinit}, \eqref{decpourTzfinit001} and \eqref{decpourTzfinit002} and using the point \ref{exittimeforB001} of Proposition \ref{exittimeforB}, we have, for $z>r(x),$ \[ \bb P_x \left( T_z > n \right) \leqslant \frac{c_{\ee}}{n^\ee} \left(1+N(x)\right) + c\frac{y+2n^{1/2-\ee}}{\sqrt{n}} \leqslant \frac{c_{\ee}}{n^\ee} \left(1+\max(z,0)+N(x)\right). \] Since $z \mapsto T_z$ is non-decreasing, we obtain the same bound for any $z \in \bb R$, \begin{equation} \label{Tzto0} \bb P_x \left( T_z > n \right) \leqslant \frac{c_{\ee}}{n^{\ee}} \left(1+\max(z,0)+N(x)\right). \end{equation} Taking the limit as $n\to +\infty$ we conclude that $T_z < +\infty$ $\bb P_x$-a.s. 
\end{proof} \subsection{Proof of Proposition \ref{PP001}} \label{proof-rec-sto-Mat} In this section, for the affine random walk in $\bb R^d$ conditioned to stay in a half-space, we verify that Hypotheses \ref{BASP}-\ref{CECO} hold true on an appropriate Banach space, which we now introduce. Let $\delta>0$ be the constant from Hypothesis \ref{hypoH}. Denote by $\mathscr{C}( \bb R^d )$ the space of continuous complex valued functions on $\bb R^d$. Let $\ee$ and $\theta$ be two positive numbers satisfying \[ 1+\ee < \theta < 2 < 2+2\ee < 2+2\delta. \] For any function $h\in \mathscr{C}( \bb R^d )$ introduce the norm $\norm{h}_{\theta,\ee} = \abs{h}_{\theta} + \left[h\right]_{\ee}$, where \[ \abs{h}_{\theta} = \underset{x \in \bb R^d}{\sup} \frac{\abs{h(x)}}{\left( 1+\abs{x} \right)^{\theta}}, \quad \left[h\right]_{\ee} = \underset{x\neq y}{\sup} \frac{\abs{h(x)-h(y)}}{\abs{x-y}^{\ee}\left( 1+\abs{x} \right)\left( 1+\abs{y} \right)} \] and consider the Banach space \[ \mathscr{B} = \mathscr{L}_{\theta,\ee} = \left\{ h \in \mathscr{C} \left( \bb R^d \right),\; \norm{h}_{\theta,\ee} < +\infty \right\}. \] \textit{Proof of \ref{BASP}.} Conditions \ref{BASP001}, \ref{BASP002} and \ref{BASP003} of \ref{BASP} can be easily verified using the point \ref{H1} of Hypothesis \ref{hypoH} and the fact that $\theta < 2+2\delta$ and $\norm{\bs \delta_x}_{\mathscr{B}'} \leqslant \left( 1+\abs{x} \right)^{\theta}$, for any $x \in \bb R^d$. We verify the point \ref{BASP004} of Hypothesis \ref{BASP}. For any $(x,y) \in \bb R^d \times \bb R^d$ and $t \in \bb R$, we have $\abs{\e^{itf(x)} - \e^{itf(y)}} \leqslant \abs{t} \abs{f(x) - f(y)} \leqslant \abs{t} \abs{u} \abs{x-y}$ and $\abs{\e^{itf(x)} - \e^{itf(y)}} \leqslant 2$. Therefore, we write \[ \abs{\e^{itf(x)} - \e^{itf(y)}} \leqslant 2^{1-\ee} \abs{t}^{\ee} \abs{u}^{\ee} \abs{x-y}^{\ee}.
\] Supposing that $\abs{x} \leqslant \abs{y}$, we obtain, for any $h \in \mathscr{L}_{\theta,\ee}$, \[ \abs{\e^{itf(x)}h(x) - \e^{itf(y)}h(y)} \leqslant \abs{\e^{itf(x)} - \e^{itf(y)}} \abs{h}_{\theta} \left( 1+\abs{x} \right)^{\theta} +\abs{h(x) - h(y)}. \] Since $\theta < 2$, we have $\left[ \e^{itf}h \right]_{\ee} \leqslant 2^{1-\ee} \abs{t}^{\ee} \abs{u}^{\ee} \abs{h}_{\theta} + \left[h\right]_{\ee}$. Consequently, $\norm{\e^{itf}h}_{\theta,\ee} \leqslant \left( 1+2^{1-\ee}\abs{t}^{\ee}\abs{u}^{\ee} \right) \norm{h}_{\theta,\ee}$ and the point \ref{BASP004} is verified. \textit{Proof of \ref{SPGA} and \ref{PETO}.} We shall verify that the conditions of the theorem of Ionescu-Tulcea and Marinescu are satisfied (see \cite{norman1972markov} and \cite{tulcea_theorie_1950}). We start by establishing two lemmas. \begin{samepage} \begin{lemma} Assume Hypothesis \ref{hypoH}. \label{opPt} \begin{enumerate}[ref=\arabic*, leftmargin=*, label=\arabic*.] \item \label{opPt001} There exists a constant $c>0$ such that, for any $t \in \bb R$, $n\geqslant 1$, and $h \in \mathscr{L}_{\theta,\ee}$, \[ \abs{\mathbf{P}_t^n h}_{\theta} \leqslant c \abs{h}_{\theta}. \] \item \label{opPt002} There exist constants $c_1$, $c_2$ and $\rho<1$ such that, for any $n \geqslant 1$, $h \in \mathscr{L}_{\theta,\ee}$ and $t\in \bb R$, \[ \left[ \mathbf{P}_t^n h \right]_{\ee} \leqslant c_1 \rho^n \left[h\right]_{\ee} + c_2 \abs{t}^{\ee} \abs{h}_{\theta}. \] \item \label{opPt003} For any $t\in \bb R,$ the operator $\mathbf{P}_t$ is compact from $(\mathscr{B},\norm{\cdot}_{\theta,\ee})$ to $(\mathscr{C}\left( \bb R^d \right),\abs{\cdot}_{\theta})$.
\end{enumerate} \end{lemma} \end{samepage} \begin{proof} \textit{Proof of the claim \ref{opPt001}.} For any $x \in \bb R^d$, \[ \abs{\mathbf{P}_t^n h(x)} = \abs{\bb E_x \left( \e^{itS_n} h \left( X_n \right) \right)} \leqslant 3^{\theta} \abs{h}_{\theta} \left( 1+\bb E \left( \abs{\Pi_n}^{\theta} \right) \abs{x}^{\theta}+\bb E \left( \abs{X_n^0}^{\theta} \right) \right), \] with $\Pi_n = A_nA_{n-1} \dots A_1$ and $X_n^0 = g_n \dots g_1 \cdot 0 = \sum_{k=1}^n A_n \dots A_{k+1}B_k$. By the point \ref{H1} of Hypothesis \ref{hypoH}, there exist $c(\delta)> 0$ and $0<\rho(\delta)<1$ such that, for any $n\geqslant 1$, \[ \bb E^{\frac{2+2\delta}{\theta}} \left( \abs{\Pi_n}^{\theta} \right) \leqslant \bb E \left( \abs{\Pi_n}^{2+2\delta} \right) \leqslant c(\delta) \rho(\delta)^{n} \underset{n\to +\infty}{\longrightarrow} 0, \] from which it follows that \[ \bb E \left( \abs{X_n^0}^{\theta} \right) \leqslant \left( \sum_{k=1}^n \bb E^{1/\theta} \left( \abs{\Pi_{n-k}}^{\theta} \right) \bb E^{1/\theta} \left( \abs{B_1}^{\theta} \right) \right)^{\theta} < +\infty. \] This proves the claim \ref{opPt001}. \textit{Proof of the claim \ref{opPt002}.} For any $x\neq y \in \bb R^d$, with $\abs{x} \leqslant \abs{y}$, we have \begin{align*} &\abs{\mathbf{P}_t^n h(x) - \mathbf{P}_t^n h(y)} \\ &\quad \leqslant \bb E \left( 2^{1-\ee} \abs{t}^{\ee} \abs{u}^{\ee} \left( \sum_{k=1}^n \abs{\Pi_k} \right)^{\ee} \abs{x-y}^{\ee} \abs{h}_{\theta} \left( 1+ \abs{\Pi_n} \abs{x} + \abs{X_n^0} \right)^{\theta} \right)\\ &\quad \qquad+ \bb E \left( \left[h\right]_{\ee} \abs{\Pi_n}^{\ee} \abs{x-y}^{\ee} \left( 1+\abs{\Pi_n}\abs{x}+\abs{X_n^0} \right)\left( 1+\abs{\Pi_n}\abs{y}+\abs{X_n^0} \right) \right).
\end{align*} Since $\theta < 2$, we obtain that \[ \left[\mathbf{P}_t^n h\right]_{\ee} \leqslant 2^{1-\ee} \abs{t}^{\ee} \abs{u}^{\ee} C_2(n) \abs{h}_{\theta} + C_1(n) \left[h\right]_{\ee}, \] where \[ C_1(n) = \bb E \left( \abs{\Pi_n}^{\ee} \left( 1+\abs{\Pi_n}+\abs{X_n^0} \right)^2 \right) \] and \[ C_2(n) = \bb E \left( \left( \sum_{k=1}^n \abs{\Pi_k} \right)^{\ee} \left( 1+ \abs{\Pi_n} + \abs{X_n^0} \right)^{\theta} \right). \] Since $2+2\ee < 2+2\delta=p$, by the H\"older inequality, \begin{align*} C_1(n) &\leqslant \bb E^{\frac{\ee}{1+\ee}} \left( \abs{\Pi_n}^{1+\ee} \right) \bb E^{\frac{1}{1+\ee}} \left( \left( 1+ \abs{\Pi_n} + \abs{X_n^0} \right)^{2+2\ee} \right) \\ &\leqslant c(\delta)^{\frac{\ee}{p}} \rho(\delta)^{\frac{n\ee}{p}} 3^2 \left( 1+c(\delta)^{\frac{2}{p}}+\left( \frac{c(\delta)^{\frac{1}{p}} \bb E^{\frac{1}{p}} \left( \abs{B_1}^{p} \right)}{1-\rho(\delta)^{\frac{1}{p}}} \right)^2 \right), \end{align*} which shows that $C_1(n)$ converges exponentially fast to $0.$ In the same way, taking into account that $\theta < 2$, we show that $C_2(n)$ is bounded: \begin{align*} C_2(n) &\leqslant \left( \sum_{k=1}^n \bb E^{\frac{1}{1+\ee}} \left( \abs{\Pi_k}^{1+\ee} \right) \right)^{\ee} \bb E^{\frac{1}{1+\ee}} \left( \left( 1+ \abs{\Pi_n} + \abs{X_n^0} \right)^{2+2\ee} \right) \\ &\leqslant \left( \frac{c(\delta)^{\frac{1}{p}}}{1-\rho(\delta)^{\frac{1}{p}}} \right)^{\ee} 3^2 \left( 1+c(\delta)^{\frac{2}{p}}+\left( \frac{c(\delta)^{\frac{1}{p}} \bb E^{\frac{1}{p}} \left( \abs{B_1}^{p} \right)}{1-\rho(\delta)^{\frac{1}{p}}} \right)^2 \right). \end{align*} \textit{Proof of the claim \ref{opPt003}.} Let $B$ be a bounded subset of $\mathscr{B}$, $(h_n)_{n\geqslant 0}$ be a sequence in $B$ and $K$ be a compact subset of $\bb R^d$.
Using the claim \ref{opPt001}, it follows that, for any $x\in K$ and $n\geqslant 0$, \[ \abs{\mathbf{P}_th_n(x)} \leqslant c\abs{h_n}_{\theta} \left( 1+\abs{x} \right)^{\theta} \leqslant c_{K}, \] which implies that the set $\mathscr A=\{ \mathbf{P}_th_n,\, n\geqslant 0\}$ is uniformly bounded in $(\mathscr{C}\left( K \right),\abs{\cdot}_{\infty})$, where $\abs{\cdot}_{\infty}$ is the supremum norm. By the claims \ref{opPt001} and \ref{opPt002}, we have that, for any $x,y \in K$ and $n\geqslant 0$, \[ \abs{\mathbf{P}_t h_n (x) - \mathbf{P}_t h_n (y)} \leqslant \left[ \mathbf{P}_t h_n \right]_{\ee} \abs{x-y}^{\ee} \left(1+\abs{x} \right)^{\theta} \left(1+\abs{y} \right)^{\theta} \leqslant c_{K} \norm{h_n}_{\mathscr{B}} \abs{x-y}^{\ee} \] and, thereby, the set $\mathscr A$ is uniformly equicontinuous. By the theorem of Arzel\`a-Ascoli, we conclude that $\mathscr A$ is relatively compact in $(\mathscr{C}\left( K \right),\abs{\cdot}_{\infty})$. Using a diagonal extraction argument, we deduce that there exist a subsequence $(n_k)_{k\geqslant 1}$ and a function $\varphi \in \mathscr{C} ( \bb R^d )$ such that, for any compact $K \subset \bb R^d$, \[ \sup_{x\in K} \abs{\mathbf{P}_t h_{n_k}(x) - \varphi(x)} \underset{k\to+\infty}{\longrightarrow} 0. \] Moreover, by the claims \ref{opPt001} and \ref{opPt002}, for any $n \geqslant 1$ and $x\in \bb R^d$, \[ \abs{\mathbf{P}_t h_n (x)} \leqslant \abs{\mathbf{P}_t h_n (0)} + \left[\mathbf{P}_t h_n \right]_{\ee} \abs{x}^{\ee} \left( 1+\abs{x} \right) \leqslant c \abs{h_n}_{\theta} + c \norm{h_n}_{\mathscr{B}} \abs{x}^{\ee} \left( 1+\abs{x} \right). \] Since $B$ is bounded, we have $\abs{\mathbf{P}_t h_n (x)} \leqslant c ( 1+\abs{x} )^{1+\ee}$, for any $x\in \bb R^d,$ as well as $\abs{\varphi(x)} \leqslant c ( 1+\abs{x} )^{1+\ee}$, for any $x\in \bb R^d$.
Consequently, for any $k \geqslant 1$ and $A > 0$, \[ \sup_{x\in \bb R^d} \frac{\abs{\mathbf{P}_t h_{n_k} (x) - \varphi(x)}}{\left(1+\abs{x}\right)^{\theta}} \leqslant \sup_{\abs{x} \leqslant A} \abs{\mathbf{P}_t h_{n_k} (x) - \varphi(x)} + 2c \sup_{\abs{x} > A} \frac{\left(1+\abs{x}\right)^{1+\ee}}{\left(1+\abs{x}\right)^{\theta}}. \] Taking the limit as $k\to +\infty$ and then the limit as $A\to +\infty$, we can conclude that $\lim_{k\to+\infty} \abs{\mathbf{P}_t h_{n_k} - \varphi}_{\theta} = 0$. \end{proof} \begin{samepage} \begin{lemma} Assume Hypothesis \ref{hypoH}. \nobreak \label{uneseulevp} \begin{enumerate}[ref=\arabic*, leftmargin=*, label=\arabic*.] \item \label{uneseulevp001} The operator $\mathbf P$ has a unique invariant probability $\bs \nu$ which coincides with the distribution of the $\bb P$-a.s.\ convergent series $Z:= \sum_{k=1}^{+\infty} A_1 \dots A_{k-1} B_k$. Moreover, the unique eigenvalue of modulus $1$ of the operator $\mathbf{P}$ on $\mathscr{B}$ is $1$ and the associated eigenspace is generated by the function $e$: $x \mapsto 1$. \item \label{uneseulevp002} Let $t \in \bb R^*$. If $h \in \mathscr{B}$ and $z\in \bb C$ of modulus $1$ are such that \[ \mathbf P_t h(x) = z h(x), \qquad x \in \supp(\bs \nu), \] then $h=0$ on $\supp(\bs \nu)$. \end{enumerate} \end{lemma} \end{samepage} \begin{proof} We proceed as in Guivarc'h and Le Page \cite{guivarch_spectral_2008} and Buraczewski, Damek and Guivarc'h \cite{buraczewski_convergence_2009}. For any $g=(A,B) \in \GL\left( d, \bb R \right) \times \bb R^d$ and $x\in \bb R^d$, we set $g \cdot x = Ax+B$. \textit{Proof of claim \ref{uneseulevp001}.} Since $k(\delta) < 1$, the series $\sum_{k} \bb E^{\frac{1}{2+2\delta}} ( \abs{A_1 \dots A_{k-1} B_k}^{2+2\delta} )$ converges and so the sequence $g_1 \dots g_n \cdot x = A_1 \dots A_n x + \sum_{k=1}^n A_1 \dots A_{k-1} B_k$ converges almost surely to $Z=\sum_{k=1}^{+\infty} A_1 \dots A_{k-1} B_k$ as $n\to+\infty$.
Therefore, for any $\varphi \in \mathscr{B}$, the sequence $\varphi(g_1 \dots g_n \cdot x)$ converges to $\varphi(Z)$ almost surely as $n\to+\infty$. Moreover, since $\abs{\varphi(x)} \leqslant \abs{\varphi}_{\theta} \left( 1+\abs{x} \right)^{\theta}$ and $\theta < 2 +2\delta$, the sequence $(\varphi(g_1 \dots g_n \cdot x))_{n\geqslant 1}$ is uniformly integrable. So $\mathbf{P}^n\varphi (x)$ converges to $\bb E(\varphi(Z))$ as $n\to+\infty$. This proves that the distribution $\bs \nu$ of $Z$ is the only invariant probability of $\mathbf P$. Fix $z \in \bb C$ such that $\abs{z}=1$ and let $h \in \mathscr{B}$, $h \neq 0$, be an eigenfunction of $\mathbf{P}$, so that $\mathbf{P}h = zh$. From the previous argument, it follows that, for any $x \in \bb R^d$, \[ z^n h(x) = \mathbf{P}^nh(x) \underset{n\to +\infty}{\longrightarrow} \bs \nu(h). \] Since there exists $x \in \bb R^d$ such that $h(x) \neq 0$, the sequence $(z^n)_{n\geqslant 1}$ must converge, which is possible only if $z=1$. From this, we deduce that for any $x\in \bb R^d$, $h(x) = \bb E (h(Z))$, which implies that $h$ is constant. \textit{Proof of the claim \ref{uneseulevp002}.} Our argument is by contradiction. Let $t \in \bb R^*$, $h \in \mathscr B$ and $z\in \bb C$ of modulus $1$ be such that $\mathbf P_t h (x) = z h(x),$ for any $x \in \supp (\bs\nu)$, and suppose that there exists $x_0\in \supp(\bs \nu)$ such that $h(x_0)\not=0$. First we establish that $\abs{h}$ is constant on the support of the distribution $\bs \nu$. Since $\bs \nu$ is $\bs \mu$-invariant, for any $(g,x) \in \supp(\bs \mu) \times \supp(\bs \nu)$ we have $g\cdot x \in \supp(\bs \nu).$ From this fact it follows that $\mathbf{P}_t^n h(x)=z^n h(x)$, for any $n\geqslant 1$ and $x\in \supp(\bs \nu)$. This implies that $\abs{h}(x) \leqslant \mathbf{P}^n \abs{h}(x),$ for any $x\in \supp(\bs \nu)$. Note also that $\abs{h}$ belongs to $\mathscr{B}$.
Therefore, as we have seen in the proof of the first claim, we have $\lim_{n\to +\infty} \mathbf{P}^n \abs{h}(x) = \bs \nu( \abs{h} ) = \bb E( \abs{h} (Z) ) <+\infty$, for any $x \in \supp (\bs \nu)$. So $\abs{h}(x) \leqslant \int_{x' \in \bb R^d} \abs{h}(x') \bs \nu( \dd x')$, for any $x\in \supp (\bs \nu)$. Since $\abs{h}$ is continuous, this implies that $\abs{h}$ is constant on the support of $\bs \nu$. In particular, this means that $h(x)\not=0$ for any $x\in \supp (\bs \nu)$. Since the support of $\bs \nu$ is stable under all the elements of the support of $\bs \mu$, we deduce that the random variable $\xi_n(x) =\exp( it \scal{u}{\sum_{k=1}^n g_k \dots g_1 \cdot x}) h( g_n \dots g_1 \cdot x)$ takes values on the sphere $\bb S_{\bs \nu(\abs{h})} = \{ a \in \bb C: \abs{a} = \bs \nu(\abs{h}) \}$, for all $x$ in the support of $\bs \nu$. Moreover, the mean $z^n h(x)$ of $\xi_n(x)$ is also on $\bb S_{\bs \nu(\abs{h})}$, which is possible only if $\xi_n(x)$ is constant, for any $x\in \supp (\bs \nu)$. Consequently, for any pair $x,y \in \supp (\bs \nu)$, there exists an event $\Omega_{x,y}$ of $\bb P$-probability one such that on $\Omega_{x,y}$ it holds, for any $n\geqslant 1$, \[ \exp \left( it \scal{u}{\sum_{k=1}^n g_k \dots g_1 \cdot v} \right) h\left( g_n \dots g_1 \cdot v \right) = z^n h(v), \] with $v \in \{x,y\}$, from which we get \begin{equation} \label{hpresqueconst002} \frac{h\left( g_n \dots g_1 \cdot y \right)}{h\left( g_n \dots g_1 \cdot x \right)} = \frac{h(y)}{h(x)} \exp\left(it \scal{\sum_{k=1}^n {}^t\!A_1 \dots {}^t\!A_k u}{ x-y } \right). \end{equation} In addition, for any $n \geqslant 1$, \[ \bb E \left( \abs{ \frac{h\left( g_n \dots g_1 \cdot y \right)}{h\left( g_n \dots g_1 \cdot x \right)} -1 } \right) = \bb E \left( \abs{ \frac{h\left( g_1 \dots g_n \cdot y \right)}{h\left( g_1 \dots g_n \cdot x \right)} -1 } \right).
\] Since, for $v\in\{x,y\}$, the sequence $h(g_1 \dots g_n \cdot v)$ converges a.s.\ to $h(Z)$ and since $h$ is bounded with a constant modulus, we have by \eqref{hpresqueconst002}, \begin{align*} 0 &= \underset{n\to+\infty}{\lim} \bb E \left( \abs{ \frac{h\left( g_n \dots g_1 \cdot y \right)}{h\left( g_n \dots g_1 \cdot x \right)} -1 } \right) \\ &= \underset{n\to+\infty}{\lim} \bb E \left( \abs{ \frac{h(y)}{h(x)} \exp\left(it \scal{\sum_{k=1}^n {}^t\!A_1 \dots {}^t\!A_k u}{ x-y } \right) -1 } \right). \end{align*} Taking into account that the series $\sum_{k=1}^{+\infty} {}^t\!A_1 \dots {}^t\!A_k$ converges a.s.\ to a random variable $Z'$, we have for any $x,y \in \supp(\bs \nu)$, \begin{equation} \label{hpresqueconst003} \bb E \left( \abs{ \frac{h(y)}{h(x)} \e^{it \scal{Z' u}{ x-y }} -1 } \right) = 0. \end{equation} Since the support of $\bs \nu$ is invariant under all the elements of the support of $\bs \mu$, by the point \ref{H2} of Hypothesis \ref{hypoH}, we deduce that the support of $\bs \nu$ is not contained in a proper affine subspace of $\bb R^d$, \textit{i.e.}\ for any $1 \leqslant j \leqslant d$, there exist $x_j,y_j \in \supp(\bs \nu)$ such that the family $(v_j)_{1\leqslant j \leqslant d} = (x_j-y_j)_{1\leqslant j \leqslant d}$ generates $\bb R^d$. From \eqref{hpresqueconst003}, we conclude that for any $1\leqslant j \leqslant d$, \[ \frac{h(y_j)}{h(x_j)} \e^{it \scal{Z' u}{ v_j }} = 1, \qquad \bb P\text{-a.s.} \] Let $\theta_j$ be such that $\frac{h(x_j)}{h(y_j)} = \e^{i\theta_j}$. Denoting by $\bs \eta_u$ the distribution of $Z'u$, we obtain that $\scal{Z'u}{v_j} \in \frac{\theta_j +2\pi \bb Z}{t}$ $\bb P$-a.s.\ and so the support of $\bs \eta_u$ is discrete.
Moreover, the measure $\bs \eta_u$ is invariant for the Markov chain $X_{n+1}' = {}^t\!A_{n+1} ( X_n' + u )$ and so, for any Borel set $B$ of $\bb R^d$, \begin{equation} \label{etaB} \bs \eta_u \left( B \right) = \bb E \left( \int_{v \in \bb R^d} \mathbbm 1_{B} \left( {}^t\!A_1 \left( v +u \right) \right) \bs \eta_u(\dd v) \right). \end{equation} Since $\bs \eta_u$ is discrete, the set $E_{max} = \{ x\in \bb R^d: \bs \eta_u \left( \{x\} \right) = \max_{y\in \bb R^d} \bs \eta_u \left( \{y\} \right) \}$ is non-empty and finite. Moreover, using \eqref{etaB} with $B=\left\{ x\right\}$ and $x\in E_{max},$ we can see that the image ${}^tA_1^{-1} x-u$ belongs to $E_{max}$ $\bb P$-a.s. Denoting by $v_0$ the barycentre of $E_{max}$, we find that \[ \bb P \left( {}^tA_1^{-1} v_0-u = v_0 \right) = 1. \] The fact that $u \neq 0$ implies that $v_0\neq 0$. The latter implies that ${}^tA_1^{-1} v_0 = v_0 +u = {}^tA_2^{-1} v_0$ almost surely, which contradicts the point \ref{H3} of Hypothesis \ref{hypoH}. \end{proof} The conditions (b), (c) and (d) of the theorem of Ionescu-Tulcea and Marinescu as stated in Chapter 3 of Norman \cite{norman1972markov} follow from points \ref{opPt001}-\ref{opPt003} of Lemma \ref{opPt} respectively. It remains to verify condition (a). Let $\left( h_n \right)_{n\geqslant 0}$ be a sequence in $\mathscr{L}_{\theta,\ee}$ satisfying $\norm{h_n}_{\theta,\ee} \leqslant K$, for any $n\geqslant 0$ and some constant $K$, and suppose that there exists $h\in \mathscr{C}( \bb R^d )$ such that $\lim_{n\to+\infty} \abs{h_n-h}_{\theta} = 0$. For any $x,y,z \in \bb R^{d}$ and $n\geqslant 0,$ \begin{eqnarray*} && \frac{\abs{h(x)-h(y)}}{\abs{x-y}^{\ee} (1+\abs{x})(1+\abs{y})} + \frac{\abs{h(z)}}{(1+\abs{z})^{\theta}}\\ &&\leqslant \abs{h_n-h}_{\theta} \left( \frac{(1+\abs{x})^{\theta} + (1+\abs{y})^{\theta}}{\abs{x-y}^{\ee} (1+\abs{x})(1+\abs{y})} +1 \right) + \left[ h_n \right]_{\ee} + \abs{ h_n }_{\theta}.
\label{IT001} \end{eqnarray*} Taking the limit as $n \to +\infty$ shows that $h \in \mathscr{L}_{\theta,\ee}$ and $\norm{h}_{\theta,\ee} \leqslant K$. The theorem of Ionescu-Tulcea and Marinescu and the uniqueness of the one-dimen\-sional projector proved in the point \ref{uneseulevp001} of Lemma \ref{uneseulevp} imply Hypothesis \ref{SPGA}. Hypothesis \ref{PETO} is obtained easily from Lemma \ref{opPt}. The point \ref{uneseulevp002} of Lemma \ref{uneseulevp} will be used later to prove that $\sigma^2 > 0.$ \textit{Proof of \ref{Momdec}.} By the hypothesis $\alpha=\frac{2+2\delta}{1+\ee}>2$. Consider the function $N$: $\bb R^d \to \bb R_{+}$ defined by $N(x) = \abs{x}^{1+\ee}$. For any $x,y \in \bb R^d$ satisfying $\abs{x} \leqslant \abs{y}$, \[ \abs{N(x) - N(y)} \leqslant (1+\ee) \abs{y}^{\ee} \abs{x-y}. \] Using the fact that $\abs{N(x)-N(y)} \leqslant 2 \abs{y}^{1+\ee}$, we have \[ \abs{N(x) - N(y)} \leqslant (1+\ee)^{\ee} 2^{1-\ee} \abs{y}^{\ee^2+(1+\ee)(1-\ee)} \abs{x-y}^{\ee} = c_{\ee} \abs{y} \abs{x-y}^{\ee}. \] Together with $\abs{N}_{\theta} < +\infty$, this proves that the function $N$ is in $\mathscr{B}=\mathscr{L}_{\theta,\ee}$. Obviously $\abs{f(x)}^{1+\ee} = \abs{ \scal{u}{x} }^{1+\ee} \leqslant \abs{u}^{1+\ee} \left( 1+N(x) \right)$. Moreover, for any $h\in \mathscr{L}_{\theta,\ee}$, \[ \abs{h(x)} \leqslant \left[ h \right]_{\ee} \abs{x}^{\ee} \left( 1+\abs{x} \right) + \abs{h(0)} \leqslant 2 \norm{h}_{\theta,\ee} \left( 1+N(x) \right) \] and so $\norm{\bs \delta_x}_{\mathscr B'} \leqslant 2 \left( 1+N(x) \right)$. Note that for any $p\in [1,\alpha]$, \[ \bb E^{1/p} \left( N\left( g_n \dots g_1 \cdot x \right)^{p} \right) \leqslant 2^{1+\ee} \left( \bb E^{1/p} \left( \abs{\Pi_n}^{p(1+\ee)} \right)N(x) + \bb E^{1/p} \left( \abs{g_n \dots g_1 \cdot 0}^{p(1+\ee)} \right) \right). \] Since $p(1+\ee) \leqslant 2+2\delta$, the previous inequality proves that $\bb E_x^{1/p} \left( N\left( X_n \right)^p \right) \leqslant c \left( 1+N(x) \right)$.
Thus, we proved the first inequality of the point \ref{Momdec001} of \ref{Momdec}. For any $l \geqslant 1$, we consider the function $\phi_l$ on $\bb R_+$ defined by: \[ \phi_l(t) = \left\{ \begin{array}{ll} 0 & \text{ if } t\leqslant l^{\frac{1}{1+\ee}} - 1, \\ t- \left( l^{\frac{1}{1+\ee}} - 1 \right) & \text{ if } t\in \left[ l^{\frac{1}{1+\ee}} - 1, l^{\frac{1}{1+\ee}} \right], \\ 1 & \text{ if } t\geqslant l^{\frac{1}{1+\ee}}. \end{array} \right. \] Define $N_l$ on $\bb R^d$ by $N_l(x) = \phi_l(\abs{x})N(x)$. For any $x\in \bb R^d$, we have $N(x) \mathbbm 1_{\left\{ N(x) > l \right\}} \leqslant N_l(x) \leqslant N(x)$, which implies that $\abs{N_l}_{\theta} \leqslant \abs{N}_{\theta} < +\infty$. Moreover, for any $x,y \in \bb R^d$ satisfying $\abs{x} \leqslant \abs{y}$, we have \[ \abs{\phi_l(\abs{y}) - \phi_l(\abs{x})} \leqslant \min\left( \abs{y} - \abs{x}, 1 \right). \] So \[ \abs{N_l(y) - N_l(x)} \leqslant \left[ N \right]_{\ee} \abs{x-y}^{\ee} \left( 1+\abs{x} \right) \left( 1+\abs{y} \right) + \abs{x}^{1+\ee} \abs{y-x}^{\ee}. \] Since $\abs{x} \leqslant \abs{y}$, we obtain that $\left[ N_l \right]_{\ee} \leqslant \left[ N \right]_{\ee} + 1 <+\infty$. Therefore, the function $N_l$ belongs to $\mathscr{B} = \mathscr{L}_{\theta,\ee}$, which finishes the proof of the point \ref{Momdec001} of \ref{Momdec}. Moreover, $\norm{N_l}_{\theta,\ee} \leqslant \norm{N}_{\theta,\ee}+1$ and so the point \ref{Momdec002} of \ref{Momdec} is also established. Since $\int_{\bb X} \abs{x}^p \bs \nu(\dd x) < +\infty$, for any $p\leqslant 2+2\delta$, we find that \[ \bs \nu \left( N_l \right) \leqslant \int_{\bb X} \abs{x}^{1+\ee} \mathbbm 1_{\left\{\abs{x} \geqslant l^{\frac{1}{1+\ee}}-1 \right\}} \bs \nu(\dd x) \leqslant \frac{\int_{\bb X} \abs{x}^{2+2\delta} \bs \nu(\dd x)}{\left( l^{\frac{1}{1+\ee}}-1 \right)^{2+2\delta-(1+\ee)}}. \] Choosing $\beta = \alpha - 2 > 0$, we obtain the point \ref{Momdec003} of \ref{Momdec}.
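The uniform moment bound $\bb E_x^{1/p}(N(X_n)^p) \leqslant c(1+N(x))$ established above can be illustrated numerically. The following Python sketch is an illustration only, not part of the proof: it uses a scalar affine recursion with hypothetical choices $A \sim \mathrm{Uniform}[-1/2,1/2]$, $B \sim \mathcal{N}(0,1)$ and $\ee = 0.1$ as toy stand-ins for the random matrices and vectors satisfying Hypothesis \ref{hypoH}, and checks by Monte Carlo that $\bb E_x[N(X_n)]/(1+N(x))$ stays bounded uniformly in $n$ and in the starting point.

```python
import random

def moment_bound_ratio(x0, eps=0.1, n_steps=50, n_paths=5000, seed=7):
    """Monte Carlo estimate of max_{1<=n<=n_steps} E_x[N(X_n)] / (1 + N(x0))
    for the scalar affine recursion X_{k+1} = A_{k+1} X_k + B_{k+1},
    with N(x) = |x|^(1+eps), A ~ Uniform[-1/2, 1/2] and B ~ N(0, 1)
    (hypothetical toy stand-ins for the random matrices and vectors)."""
    rng = random.Random(seed)
    sums = [0.0] * n_steps          # running sums of N(X_n) over the paths
    for _ in range(n_paths):
        x = x0
        for n in range(n_steps):
            x = rng.uniform(-0.5, 0.5) * x + rng.gauss(0.0, 1.0)
            sums[n] += abs(x) ** (1.0 + eps)
    denom = 1.0 + abs(x0) ** (1.0 + eps)
    return max(s / n_paths / denom for s in sums)

# The ratio stays below a constant c independent of n and of the starting
# point, as the bound E_x[N(X_n)] <= c (1 + N(x)) predicts.
for x0 in (0.0, 10.0, 100.0):
    print(x0, moment_bound_ratio(x0))
```

In this toy model the contraction of the products $\Pi_n$ is immediate since $\abs{A} \leqslant 1/2$; the point of the sketch is only that the ratio does not grow with $n$ or with $\abs{x_0}$.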
\textit{Proof of \ref{CECO}.} Using \eqref{mu-sigma001} and the point \ref{H4} of Hypothesis \ref{hypoH}, \begin{equation} \label{muestnul} \mu = \int_{\bb R^d} \scal{u}{x} \bs \nu(\dd x) = \scal{u}{\bb E\left( \sum_{k=1}^{+\infty} A_1 \dots A_{k-1} B_k \right)} = 0. \end{equation} Now we prove that $\sigma^2 > 0$. For this, suppose the contrary: $\sigma^2 = 0$. One can easily check that the function $f$ belongs to $\mathscr{B}$. Using \ref{SPGA} and the fact that $\nu (f) = \mu = 0$, we deduce that $\sum_{n\geqslant 0} \norm{\mathbf{P}^n f}_{\theta,\ee} = \sum_{n\geqslant 0} \norm{Q^n f}_{\theta,\ee} < +\infty$ and therefore the series $\sum_{n\geqslant 0} \mathbf{P}^n f$ converges in $\left(\mathscr{B},\norm{\cdot}_{\theta,\ee}\right)$. We denote by $\Theta \in \mathscr{B}$ its limit and notice that the function $\Theta$ satisfies the Poisson equation: $\Theta - \mathbf{P}\Theta = f$. Using the bound \eqref{bound_EfXn}, we have $\abs{\sum_{n=1}^N f(x) \mathbf{P}^n f(x)} \leqslant c \left( 1+ N(x) \right)^2$. By the Lebesgue dominated convergence theorem, from \eqref{mu-sigma001}, we obtain \begin{align*} \sigma^2 &= \int_{\bb R^d} f(x) \left( 2\Theta(x) - f(x) \right) \bs \nu(\dd x) \\ &= \int_{\bb R^d} \left( \Theta^2(x) - \left( \mathbf{P}\Theta \right)^2(x) \right) \bs \nu(\dd x) \\ &= \int_{\GL\left( d, \bb R \right) \times \bb R^d \times \bb R^d} \left( \Theta(g_1 \cdot x) - \mathbf{P}\Theta(x) \right)^2 \bs \mu(\dd g_1) \bs \nu(\dd x). \end{align*} As $\sigma^2=0$, we have $\Theta(g_1 \cdot x) = \mathbf{P}\Theta(x)$, \textit{i.e.}\ $f(g_1 \cdot x) = \mathbf{P}\Theta(x) - \mathbf{P}\Theta(g_1 \cdot x)$, $\bs \mu \times \bs \nu$-a.s. Consequently, there exists a Borel subset $B_0$ of $\bb R^d$ such that $\bs \nu(B_0)=1$ and for any $t\in \bb R$ and $x\in B_0$, \[ \int_{\GL\left( d, \bb R \right) \times \bb R^d} \e^{it\scal{u}{g_1 \cdot x}} \e^{it \mathbf{P}\Theta(g_1 \cdot x)} \bs \mu (\dd g_1) = \e^{it \mathbf{P}\Theta(x)}.
\] Since the functions on both sides are continuous, this equality holds for every $x\in \supp(\bs \nu)$. Since $\Theta \in \mathscr{L}_{\theta,\ee}$, the function $x \mapsto \e^{it \mathbf{P}\Theta(x)}$ belongs to $\mathscr{L}_{\theta,\ee} \smallsetminus \{0\}$. This contradicts the point \ref{uneseulevp002} of Lemma \ref{uneseulevp} and we conclude that $\sigma^2>0$ and so \ref{CECO} holds true. \subsection{Proof of Proposition \ref{PP002}} \label{proof-cas-compact} We show that \ref{BASP}-\ref{CECO} hold true for the Markov chain $(X_n)_{n\geqslant 1}$, the function $f$ and the Banach space $\mathscr{L}(\bb X)$ given in Section \ref{Compact1}. \textit{Proof of \ref{BASP}.} Obviously the Dirac measure belongs to $\mathscr{L}(\bb X)'$ and $\norm{\bs \delta_x}_{\mathscr{L}(\bb X)'} \leqslant 1$ for any $x\in \bb X$. For any $h \in \mathscr{L}(\bb X)$ and $t\in \bb R$ the function $\e^{itf}h$ belongs to $\mathscr{L}(\bb X)$ and \begin{equation} \label{opxhitborne} \norm{\e^{itf}h}_{\mathscr{L}} \leqslant \abs{t} \left[ f \right]_{\bb X} \norm{h}_{\infty} + \norm{h}_{\mathscr{L}} \leqslant \left( \abs{t} \left[ f \right]_{\bb X} + 1 \right) \norm{h}_{\mathscr{L}}. \end{equation} \textit{Proof of \ref{SPGA}.} Let $(x_1,x_2)$ and $(y_1,y_2)$ be two elements of $\bb X$ and $h \in \mathscr{L}(\bb X)$. Since \[ \mathbf Ph(x_1,x_2) = \int_{X} h(x_2,x') P(x_2, \dd x'), \] we have $\norm{\mathbf Ph}_{\infty} \leqslant \norm{h}_{\infty}$.
Denote by $h_{x_2}$ the function $z \mapsto h(x_2,z)$, which is an element of $\mathscr{L}(X).$ Since $\left[ h_{x_2} \right]_{X} \leqslant \left[ h \right]_{\bb X}$ and $\abs{h_{x_2}}_{\infty} \leqslant \norm{h}_{\infty}$, we obtain also that \begin{align*} \abs{\mathbf Ph(x_1,x_2) - \mathbf Ph(y_1,y_2)} &= \abs{Ph_{x_2}(x_2) - Ph_{y_2}(y_2)} \\ &\leqslant \left[ Ph_{x_2} \right]_{X}d_{X}(x_2,y_2) + \left[ h \right]_{\bb X} d_{X}(x_2,y_2) \\ &\leqslant \left( \abs{P}_{\mathscr{L} \to \mathscr{L}}\norm{h}_{\mathscr{L}} + \left[ h \right]_{\bb X} \right) d_{X}(x_2,y_2), \end{align*} where $\abs{P}_{\mathscr{L} \to \mathscr{L}}$ is the norm of the operator $P$: $\mathscr{L}(X) \to \mathscr{L}(X)$. Therefore $\mathbf P$ is a bounded operator on $\mathscr{L}(\bb X)$ and $\norm{\mathbf P}_{\mathscr{L} \to \mathscr{L}} \leqslant \left( 1+\abs{P}_{\mathscr{L} \to \mathscr{L}} \right).$ Now, for any $h \in \mathscr{L}(\bb X)$, we define the function $F_h$ by \[ F_h (x_2) := \int_{X} h(x_2,x') P(x_2,\dd x') = \mathbf Ph(x_1,x_2). \] Notice that $F_h$ belongs to $\mathscr{L}(X)$ and $\abs{F_h}_{\mathscr{L}} \leqslant \norm{\mathbf Ph}_{\mathscr{L}}$. So by Proposition \ref{TroupourPpasgras}, for any $n\geqslant 2$, $(x_1,x_2) \in \bb X$ and $h \in \mathscr{L}(\bb X)$, \[ \mathbf P^n h(x_1,x_2) = P^{n-1}F_h (x_2) = \nu(F_h) + R^{n-1}F_h (x_2) = \bs \nu(h) e(x_1,x_2) + Q^{n}h(x_1,x_2), \] where the probability $\bs \nu$ is defined on $\bb X$ by \[ \bs \nu(h) = \nu( F_h ) = \int_{X\times X} h(x',x'') P(x',\dd x'') \nu(\dd x'), \] the function $e$ is the unit function on $\bb X$, $e(x_1,x_2) = 1$, $\forall (x_1,x_2) \in \bb X$, and $Q$ is the linear operator on $\mathscr{L}(\bb X)$ defined by $Qh=R(F_h)=\mathbf P h - \bs \nu(h) e$.
By Proposition \ref{TroupourPpasgras}, the operator $Q$ is bounded and for any $n\geqslant 1$, $\norm{Q^n}_{\mathscr{L} \to \mathscr{L}} \leqslant \abs{R^{n-1}}_{\mathscr{L} \to \mathscr{L}} \norm{\mathbf P}_{\mathscr{L} \to \mathscr{L}} \leqslant C_{Q} \kappa^n$. Since $\nu$ is invariant under $P$, one can easily verify that $\Pi Q = Q \Pi = 0$, where $\Pi$ is the one-dimensional projector defined on $\mathscr{L}(\bb X)$ by $\Pi h = \nu(h) e$. \textit{Proof of \ref{PETO}.} For any $t\in \bb R$, $h\in \mathscr{L}(\bb X)$ and $(x_1,x_2) \in \bb X$, \[ \mathbf P_th(x_1,x_2) = \int_X \e^{itf(x_2,x')} h(x_2,x') P(x_2,\dd x') = \sum_{n=0}^{+\infty} \frac{i^n t^n}{n!} L_n(h)(x_1,x_2), \] where $L_n(h) = \mathbf P (f^n h)$. Since $\left( \mathscr{L}(\bb X), \norm{\cdot}_{\mathscr{L}} \right)$ is a Banach algebra, it follows that $L_n$ is a bounded operator on $\mathscr{L}(\bb X)$ and $\norm{L_n}_{\mathscr{L} \to \mathscr{L}} \leqslant \norm{\mathbf P}_{\mathscr{L} \to \mathscr{L}} \norm{f}_{\mathscr{L}}^n$. Consequently, the map $t \mapsto \mathbf P_t$ is analytic on $\bb R$ and so, by the analytic perturbation theory of linear operators (see \cite{kato_perturbation_1976}), there exists $\ee_0 > 0$ such that, for any $\abs{t} \leqslant \ee_0$, \[ \mathbf P_t^n = \lambda_t^n \Pi_t + Q_t^n, \] where $\lambda_t$ is an eigenvalue of $\mathbf P_t$, $\Pi_t$ is the projector on the one-dimensional eigenspace of $\lambda_t$ and $Q_t$ is an operator of spectral radius $r(Q_t) < \abs{\lambda_t}$ such that $\Pi_t Q_t = Q_t \Pi_t = 0$. The functions $t \mapsto \lambda_t$, $t\mapsto \Pi_t$ and $t \mapsto Q_t$ are analytic on $[-\ee_0,\ee_0]$. Furthermore, for any $h\in \mathscr{L}(\bb X)$ and $(x_1,x_2) \in \bb X$, \[ \abs{\mathbf P_t h}(x_1,x_2) = \abs{\int_X \e^{itf(x_2,x')} h(x_2,x') P(x_2,\dd x')} \leqslant \norm{h}_{\infty} \] and necessarily $\abs{\lambda_t} \leqslant 1$, for any $\abs{t} \leqslant \ee_0$.
Consequently \[ \sup_{\abs{t} \leqslant \ee_0, n \geqslant 1} \norm{\mathbf P_t^n}_{\mathscr{L} \to \mathscr{L}} \leqslant c. \] \textit{Proof of \ref{Momdec} and \ref{CECO}.} Since for any $x\in \bb X$, $\abs{f(x)} \leqslant \abs{f}_{\infty}$ and $\norm{\bs \delta_x}_{\mathscr{L}(\bb X)'} \leqslant 1$, we can choose $N=0$ and $N_l = 0$ for any $l\geqslant 1$ and Hypothesis \ref{Momdec} is obviously satisfied. Finally, Hypothesis \ref{hypoHcompactII} ensures that \ref{CECO} holds true. \bibliographystyle{plain}
\subsection{Brane-Induced Gravity} The principal motivation for modifying General Relativity (GR) at ultra-large distances is the cosmological constant problem~\cite{weinberg}. (See~\cite{bhuvreview} for a recent review of cosmological tests of gravity.) Vacuum energy is the zero-momentum component of stress energy and hence its backreaction depends sensitively on the nature of gravity in the far infrared. A compelling approach is {\it degravitation}, in which gravity acts as a high-pass filter~\cite{dilute,ADGG,degrav}: the cosmological term is in fact large, in accordance with field theory expectations, but gravitates very weakly. A perennial challenge in devising consistent IR-modified theories of gravity is quantum stability, {\it i.e.} avoidance of ghost-like (negative energy) instabilities. Giving the graviton a hard mass \`a la Pauli-Fierz~\cite{FP}, for instance, unavoidably leads to instabilities~\cite{BoulwareDeser,nima,Creminellipaper}. A more promising approach is brane-induced gravity~\cite{DGP, DGP2,DGP3,DGP4,DGP5,cascade1,cascade2,claudiareview,nonFP,intersecting,cascade3,ghostfree,gigashif,aux1,aux2,nishant}, which relies on branes and extra dimensions. The most widely known example is the Dvali-Gabadadze-Porrati (DGP) brane-world model~\cite{DGP,DGP2,DGP3}. The normal branch of the DGP model is perturbatively ghost free, in contrast to the self-accelerating branch~\cite{luty,nicolis,dgpghost}, and thus represents a perturbatively consistent IR modification of gravity. The Cascading Gravity framework~\cite{cascade1,cascade2,intersecting,cascade3,ghostfree} extends the DGP model to $D\geq 6$ space-time dimensions. In the simplest version with a $6D$ bulk space-time, our 3-brane is embedded in a 4-brane, each with their own induced gravity terms. The upshot of this generalization is twofold.
First, the soft mass term for the graviton is a more slowly varying function of momentum than in DGP, which is a necessary condition for degravitation~\cite{degrav}. Thus Cascading Gravity is a promising framework for realizing this phenomenon~\cite{cascade3}. Furthermore, the cascading graviton mass term results in an expansion history that closely resembles $\Lambda$CDM cosmology and is therefore less constrained by observations~\cite{claudiaandrew,niayeshghazal}. What about perturbative stability? Perturbing around $6D$ Minkowski space with empty branes reveals a ghost scalar mode. Early work~\cite{cascade1} showed, however, that the ghost is excised by including a sufficiently large tension on the 3-brane or, alternatively, by considering a higher-dimensional Einstein-Hilbert term localized on the brane~\cite{cascade2,gigashif,massimo1,massimo2}. While the original derivation of~\cite{cascade1} was restricted to a particular decoupling limit of the theory, recently the absence of perturbative ghosts has been proven rigorously by perturbing the full $6D$ solution in the presence of brane tension~\cite{ghostfree}. These results establish the Cascading Gravity framework as a perturbatively consistent IR modification of gravity. \subsection{Summary of Results} In this paper we show that the observational bulk flow anomaly is alleviated in brane-induced gravity theories. The bulk flow enhancement depends on the number $D$ of bulk space-time dimensions and the cross-over scale $r_c$ beyond which gravity on the brane becomes higher-dimensional.
Our bulk flow results can be summarized in a fitting formula for the rms of the 1-dimensional velocity on $50h^{-1}$~Mpc scales, which is valid for $D \geq 4$ and $r_c \lesssim 1.5 H_0^{-1}$: \begin{equation} \label{vfitintro} v_{1-{\rm dim}}^{\rm G}\simeq 102 \times \frac{3 \gamma}{2} \left( \frac{9}{5} r_c H_0 \right)^{-\left(\gamma-\frac{2}{3}\right)}{\rm km/s}\,, \end{equation} where \begin{equation} \gamma \equiv \frac{1}{6}\left(-1+\sqrt{\frac{49D-146}{D-2}}\right)\,. \label{gam} \end{equation} The superscript ``G'' indicates the use of a Gaussian window function. In particular, for $D=4$ ({\it i.e.}, $\gamma = 2/3$), this matches the $\Lambda$CDM prediction: $v_{1-{\rm dim}}^{\rm G} = 102$~km/s for our fiducial parameter choices. In Sec.~\ref{de} we generalize~(\ref{vfitintro}) to a fitting formula valid on a range of scales --- see~(\ref{vfitintro2}). Our fiducial cosmology is consistent with WMAP 7-year data~\cite{Larson:2010gs} and assumes a spatially-flat universe with $\Omega_{\rm m} = 0.24$, $\Omega_{\rm b}=0.042$, a primordial power spectrum with tilt $n_s = 0.96$, and a primordial amplitude chosen to yield a present-day normalization of $\sigma_8 = 0.8$ for a $\Lambda$CDM growth history. The bulk flow enhancement~(\ref{vfitintro}) grows with increasing $D$, since more extra dimensions imply more scalar fields on the brane contributing to the gravitational attraction. It also grows with decreasing $r_c$, since smaller $r_c$ implies that departures from standard $4D$ gravity turn on at earlier times. Because bulk flows measure today's evolution, higher-$D$ models generate bigger departures from $\Lambda$CDM for fixed late-time normalization.
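For readers who wish to evaluate~(\ref{vfitintro})--(\ref{gam}) directly, a minimal sketch (the function names are ours; velocities in km/s, with $r_c$ supplied in Hubble units):

```python
import math

def gamma_growth(D):
    """Growing-mode exponent gamma(D) of eq. (gam): delta ~ t^gamma."""
    return (-1.0 + math.sqrt((49.0 * D - 146.0) / (D - 2.0))) / 6.0

def v_50mpc(D, rc_H0):
    """Fitting formula (vfitintro): 1-dim rms velocity on 50/h Mpc scales,
    in km/s, with rc_H0 = r_c * H_0 the cross-over scale in Hubble units."""
    g = gamma_growth(D)
    return 102.0 * (1.5 * g) * (1.8 * rc_H0) ** (-(g - 2.0 / 3.0))
```

For $D=4$ the exponent vanishes and the prefactor $3\gamma/2$ equals unity, so the formula returns the $\Lambda$CDM value of 102~km/s for any $r_c$; at fixed $r_c < \frac{5}{9}H_0^{-1}$ the velocity then grows monotonically with $D$.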
\begin{figure}[htbp] \centering \includegraphics[width=0.5\textwidth]{allMaxima.pdf} \caption{The 1-dimensional rms velocity as a function of scale for standard gravity (black line) and brane-induced gravity, derived from linear analysis. The dashed (blue), dash-dotted (red), dotted (green), and top dashed (orange) lines show the maximum velocity allowed in our model with $D=5$, 6, 7, and 10, respectively, for late-time power spectra with $\sigma_8 \simeq 0.88$, consistent with observational constraints. The expansion history and primordial normalization are identical in all cases.} \label{maxima} \end{figure} The enhanced gravitational attraction in cascading gravity also boosts the amplitude of density perturbations, resulting in a larger $\sigma_8$ at late times versus standard gravity for fixed primordial normalization. The late time $\sigma_8$ is constrained by various observables, such as cluster abundance~\cite{Vikhlinin:2008ym, Lueker:2009rx}, the weak lensing power spectrum~\cite{Fu:2007qq} and galaxy clustering~\cite{urosbias}. We discuss these constraints in Sec.~\ref{ampcons}. The tightest constraint comes from X-ray galaxy cluster counts from the ROSAT All-Sky Survey~\cite{Vikhlinin:2008ym}, which for $\Lambda$CDM growth history imposes $\sigma_8 \lesssim 0.88$ at the 95\% C.L. We therefore require that the late-time amplitude in our models approximately satisfies this bound. For a given $D$, this gives a lower bound on the allowed $r_c$. On one hand, this is conservative: because our additional scalar force turns off in high-density environments, like galaxy clusters, the amplitude of the late-time {\it linear} power spectrum, when compared with data using $\Lambda$CDM methods, overestimates the cluster abundance~\cite{marcos}.
On the other hand, the additional scalar force, if not completely screened, may increase the dynamical mass of any given cluster; this could increase the expected number of clusters at fixed dynamical mass, thereby tightening constraints. See~\cite{Schmidt:2010jr} for a careful study of dynamical effects in a variety of models. A similar treatment in our class of theories is work in progress~\cite{marcos}. Figure~\ref{maxima} shows the 1-dimensional rms velocity $v_{1-{\rm dim}}^{\rm G}$ (the square root of the variance about zero mean of the theoretical distribution) as a function of scale for standard gravity and brane-induced/Cascading Gravity with $D = 5,6,7$ and 10, derived from linear theory using a Gaussian window function. For each $D$, we choose the minimum allowed value of $r_c$, so each of the modified gravity curves has a similar late-time $\sigma_8$. To compare with data from the peculiar velocity surveys analyzed by Watkins {\it et al.}~\cite{hudson1}, we use a window function that includes observational effects but is only valid on $50h^{-1}$~Mpc scales. Velocities obtained with this window function are denoted with superscript ``W''. In practice, this results in a $\sim 1$\% increase compared to the Gaussian window function, through the inclusion of some higher-$k$ modes. Over the range $5 \leq D < \infty$, we find \begin{equation} 220 < v_{3-{\rm dim}}^{\rm W} < 237~{\rm km/s} \,. \label{vrange} \end{equation} This is compared with the $\Lambda$CDM value, $179$~km/s. The upper end of this range is nearly consistent at the $2\sigma$ level with the observed bulk flow of $407 \pm 81$~km/s~\cite{hudson1}. While we have used linear theory to derive these results, we also perform N-body simulations of the non-linear evolution; these are in excellent agreement with the linear analysis for the scales of interest ({\it cf.} \S \ref{numerics}).
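As a rough consistency check on the quoted range, one can combine the fitting formula~(\ref{vfitintro}) with the minimum allowed cross-over scales found later ($r_c = 1665$~Mpc for $D=5$ and $2750$~Mpc for $D=10$), a factor of $\sqrt{3}$ for isotropy, and the $\sim 1$\% window boost mentioned above. The sketch below assumes the fiducial $h = 0.73$; under those assumptions it reproduces the quoted $220$--$237$~km/s range to within a few km/s:

```python
import math

H0_INV_MPC = 2997.9 / 0.73  # Hubble radius c/H0 in Mpc, for h = 0.73

def gamma_growth(D):
    # growing-mode exponent, eq. (gam)
    return (-1.0 + math.sqrt((49.0 * D - 146.0) / (D - 2.0))) / 6.0

def v3d_W(D, rc_mpc):
    """3-dim rms velocity (km/s) with the survey window: eq. (vfitintro)
    times sqrt(3) for isotropy and a ~1% boost for the 'W' window."""
    g = gamma_growth(D)
    rc_H0 = rc_mpc / H0_INV_MPC
    v1 = 102.0 * (1.5 * g) * (1.8 * rc_H0) ** (-(g - 2.0 / 3.0))
    return v1 * math.sqrt(3.0) * 1.01
```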
The stronger effective gravitational attraction in our models also makes the occurrence of a high-velocity merging system like the Bullet Cluster much more probable. Assuming that the majority of the infall velocity is caused by the $\simeq 10^{15} M_\odot$ main cluster, we treat the clusters as point particles released from rest from a large initial separation ($30$~Mpc) and calculate the resulting velocity at $5$~Mpc, the initial separation for gas collision simulations. To include the rest of large scale structure, we add a bulk flow component to the infall velocity in quadrature. We find that the resulting velocity at $5$~Mpc separation is 14\% ($D=5$) to 27\% ($D=10$) larger in our theories, where we again take the minimum allowed $r_c$ for each $D$. For $D = 10$ (and $r_c = 2750$~Mpc), an initial velocity of $3000$ km/s at $z=0$ is a 4.8$\sigma$ event, as compared with 6.5$\sigma$ in $\Lambda$CDM; or, in terms of probability, $6.6 \times10^{-7}$ versus $3.3\times 10^{-11}$, a boost of over $10^4$. Meanwhile, at $z=0.5$, closer to the actual merger redshift of $z=0.296$, the modified gravity result is 3.9$\sigma$ from the mean, versus 5.8$\sigma$ in standard gravity. The probability of achieving that velocity is thus increased to $5.1\times10^{-5}$, compared to $3.6\times 10^{-9}$, again a $\sim10^4$ probability enhancement. If the required initial velocity is reduced to 2000~km/s, the probability of such an occurrence in our model becomes $\sim 0.8$\% at $z=0$ and 14.2\% at $z=0.5$, respectively 257 and 65 times more likely than the $\Lambda$CDM expectation (that is, respectively 2.4 and 1$\sigma$ in modified gravity, versus 4 and 2.9$\sigma$ in standard gravity). \subsection{Modeling Cascading Cosmology} Extracting exact cosmological predictions from a higher-dimensional set-up, such as Cascading Gravity, is technically very challenging; even the precise form of the modified Friedmann equation is not yet known. 
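The probability boosts quoted above for the Bullet-like merger follow from one-sided Gaussian tail areas. A minimal sketch (using the rounded significances, which recovers the quoted probabilities at the order-of-magnitude level; the exact numbers in the text correspond to the unrounded values):

```python
import math

def p_tail(nsigma):
    """One-sided Gaussian tail probability of an event at >= nsigma."""
    return 0.5 * math.erfc(nsigma / math.sqrt(2.0))

# z = 0 significances for a 3000 km/s initial velocity (D = 10, r_c = 2750 Mpc):
p_mod = p_tail(4.8)   # modified gravity
p_lcdm = p_tail(6.5)  # standard gravity
boost = p_mod / p_lcdm  # probability enhancement, > 10^4
```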
Hence our results must rely on some phenomenological input. Fortunately, the scales of interest for this study are well within the Newtonian regime. In this regime, the theory admits a local description on the 3-brane, which arises through a certain decoupling limit. In this limit, the complexities of the full higher dimensional theory that are irrelevant to $4D$ physics are left out. The result is an effective theory in $4D$ with new degrees of freedom. The relevant degrees of freedom are weak-field gravity and $D-4$ scalar fields coupled to the trace of the matter stress-tensor, describing brane-bending along each extra dimension. Whereas gravity is weakly coupled in this limit, the scalars have non-linear derivative interactions that are responsible for the Vainshtein effect, a phenomenon generic to this kind of theory that leads to the approximate recovery of standard gravity in high density regions. While the precise form of the decoupling theory for Cascading Gravity is not known, we draw upon known results in DGP to infer the non-linear interactions of the scalars. For the background, meanwhile, we assume a $\Lambda$CDM expansion history. As mentioned earlier, the form of the modified graviton propagator in Cascading Gravity suggests that brane-world corrections to the Friedmann equation~\cite{cedricbrane,cline,branereview,renjie} are more slowly-varying functions of $Hr_c$ than in standard DGP~\cite{cedric}; they should closely resemble vacuum energy contributions. Furthermore, by assuming a $\Lambda$CDM expansion history, our analysis isolates the effects of the modified growth history~\cite{bhuvpenjie}. We describe this approach in detail in Sec.~\ref{massgrav}. Finally, we wish to emphasize that, although our analysis does not constitute a fully rigorous derivation of cosmological predictions in Cascading Gravity, our results should most likely capture the essence of the predictions of the complete theory. 
Because the observables of interest are in the Newtonian regime, the description in terms of conformally-coupled scalar fields with non-linear interactions should be accurate. The details of the Vainshtein screening mechanism may vary, but the corrections to our predictions --- which lie on scales where linear theory is valid --- are expected to be small. In short, our approach is not merely a toy model of cascading cosmology, but a first iteration in extracting predictions from a broad class of IR-modified gravity theories. \section{Massive/Resonance Gravity} \label{massgrav} The defining feature of theories with infinite-volume extra dimensions, such as DGP and Cascading Gravity, is that 4$D$ gravity is mediated by a resonance graviton --- a continuum of massive states --- with general propagator \begin{equation} \frac{1}{k^2 +m^2(k)}\,. \label{prop} \end{equation} In DGP, for instance, $m^2(k) = r_c^{-1}k$. In real space, this gives a gravitational potential that interpolates from the $4D$ scaling, $1/r$, at short distances to the $5D$ scaling, $1/r^2$, at large distances, with a cross-over scale set by $r_c$. In Cascading Gravity, the soft mass term $m^2(k)$ is a more complicated function of $k$, involving multiple cross-over scales~\cite{cascade1,cascade2}. For simplicity, we shall assume that all cross-over scales are comparable and denote this common scale by $r_c$. Because $4D$ gravity is massive, each graviton has 5 polarizations: the usual 2 helicity-2 states of GR, 2 helicity-1 states and 1 helicity-0 degree of freedom. At distances $r\ll r_c$, only the helicity-2 and helicity-0 degrees of freedom are relevant --- the helicity-1 states are very weakly coupled to matter and can be safely ignored. In the DGP model, the helicity-0 mode has a nice geometrical interpretation. It measures the extrinsic curvature of the brane in the extra dimension and is thus referred to as a brane-bending mode. 
Cascading Gravity theories have $D-5$ additional helicity-0 or scalar modes, accounting for the higher number of extra dimensions in which the brane can bend. This counting of degrees of freedom can alternatively be understood from a decomposition of the $D$-dimensional massless spin-2 representation~\cite{cascade1,nonFP}. These scalar modes couple to the trace of the stress-energy tensor of matter on the brane and, combined with the helicity-2 states, result in a one-graviton exchange amplitude between conserved sources having the tensor structure of $D$-dimensional massless gravity~\cite{cascade2}: \begin{equation} {\cal A} \sim T^{\mu\nu} \cdot \frac{1}{k^2}\cdot \tilde{T}_{\mu\nu} - \frac{1}{D-2}T^\mu_{\; \mu} \cdot \frac{1}{k^2}\cdot \tilde{T}^\nu_{\; \nu}\,, \label{amp} \end{equation} where we have neglected the graviton mass term for the scales of interest~\cite{footnote1}. For non-relativistic sources, this corresponds to the modified Poisson equation \begin{equation} k^2\Psi_{\rm dyn} = -4\pi G \left(1 + \frac{D-4}{D-2}\right) \rho \,. \label{poissonflat} \end{equation} In other words, the gravitational attraction is a factor of $1+\frac{D-4}{D-2}$ stronger than in standard Newtonian gravity. If~(\ref{amp}) were valid in the solar system, the theory would already be ruled out by post-Newtonian constraints, for arbitrarily small $m$ --- this is the famous van Dam-Veltman-Zakharov (VDVZ) discontinuity~\cite{vDVZ}. The resolution, first proposed by Vainshtein for massive gravity~\cite{vainshtein}, is that the weak-field/linear assumption implicit in~(\ref{amp}) is actually a poor approximation for the scalar modes in the vicinity of astrophysical bodies or in the early universe. Instead, as shown explicitly in the DGP model~\cite{resolve,resolveothers}, non-linearities in these modes in high-density regions result in them decoupling from matter, leading to an approximate recovery of Einstein gravity in, {\it e.g.}, the solar system.
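The linear-regime force enhancement in~(\ref{poissonflat}) is a simple function of the bulk dimension; a one-line sketch (naming ours):

```python
def newton_boost(D):
    """Enhancement of the Poisson source in eq. (poissonflat)."""
    return 1.0 + (D - 4.0) / (D - 2.0)
```

It interpolates from the GR value of 1 at $D=4$, through $4/3$ at $D=5$ (the familiar DGP scalar-tensor value), toward 2 as $D\to\infty$.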
Let us start with the DGP case. On scales $\ll H^{-1}$ (and thus $\ll r_c$), we can neglect time derivatives relative to gradients of the graviton helicity-0 mode. This mode, denoted by $\chi$, satisfies the approximate equation~\cite{lue,Fabian,Roman} \begin{eqnarray} \nonumber & & \nabla^2 \chi + \frac{r_c^2}{3\beta_{\rm DGP}a^2} [ (\nabla^2\chi)^2 - (\nabla_i\nabla_j\chi)(\nabla^i\nabla^j\chi) ] \\ && \qquad \qquad\qquad\qquad= \frac{8\pi Ga^2\delta\rho}{3\beta_{\rm DGP}} \,, \label{pieqn} \end{eqnarray} where the $\nabla$'s denote spatial derivatives, and where \begin{equation} \beta_{\rm DGP} \equiv 1 + 2 Hr_c\left(1+\frac{\dot{H}}{3H^2}\right)\,. \label{betaDGP} \end{equation} The overdot denotes a derivative with respect to proper time. The Vainshtein effect appears in two guises in the above. Firstly, in the early universe when $Hr_c \gg 1$, the coupling to matter density becomes vanishingly small since $\beta_{\rm DGP} \gg 1$. Secondly, even at late times when $Hr_c \lesssim 1$ and $\beta_{\rm DGP} \approx1$, sufficiently large overdensities trigger non-linearities in $\chi$ and result in its decoupling. The analogue of~(\ref{pieqn}) is not yet known for Cascading Gravity. There are multiple scalars in this case, each of which is expected to exhibit Vainshtein screening. There have not yet been any successful calculations that keep the non-linearities of all scalar modes. For the purpose of this paper, we shall take a phenomenological approach and assume that all cascading scalar degrees of freedom obey an equation of the form~(\ref{pieqn}). This is consistent with our earlier assumption of a single cross-over scale $r_c$.
In other words, we collectively denote the scalars by $\chi$ and assume that they satisfy \begin{eqnarray} \nonumber & & \nabla^2 \chi + \frac{r_c^2}{3\beta a^2} [ (\nabla^2\chi)^2 - (\nabla_i\nabla_j\chi)(\nabla^i\nabla^j\chi) ] \\ && \qquad \qquad\qquad\qquad= \frac{8\pi G a^2\delta\rho}{3\beta}\,. \label{pieqn2} \end{eqnarray} Using the scaling of the resulting deviation from GR in the solar system~\cite{degrav}, we can infer that the cosmological deviation scales as $\beta \sim H^2r_c^2$. We then generalize~(\ref{betaDGP}) to \begin{equation} \beta \equiv 1 + 2 H^2r_c^2\left(1+\frac{\dot{H}}{3H^2}\right)\,. \label{beta} \end{equation} This modification makes a sharper transition in time than~(\ref{betaDGP}). This is related to the fact that gravity rapidly weakens on long length scales in models that can successfully degravitate the cosmological constant. This also means that whatever replaces~(\ref{pieqn2}) in Cascading Gravity will likely exhibit even more efficient Vainshtein screening within collapsed objects. The effect of the $\chi$ field on matter is through its contribution to the actual potential $\Psi_{\rm dyn}$ that moves particles: \begin{equation} \Psi_{\rm dyn} = \Psi_{\rm N} + \frac{3}{2} \left(\frac{D-4}{D-2} \right)\chi\,, \label{dyn} \end{equation} where $\Psi_{\rm N}$ is the usual Newtonian potential. The $D$-dependence here is chosen to recover~(\ref{poissonflat}) in the linearized limit. When performing N-body simulations, we solve~(\ref{pieqn2}) using a multi-grid relaxation scheme similar to the one described in~\cite{Fabian}. Our N-body code is an updated version of the one used in previous work~\cite{markw}, revised to solve~(\ref{pieqn2}) exactly and without resorting to the approximation of spherical symmetry. Meanwhile, we solve for $\Psi_{\rm N}$ in the usual way through Fourier transforms on a particle mesh. The code and further numerical results will be described in more detail elsewhere~\cite{marcos}.
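The sharper transition of~(\ref{beta}) relative to~(\ref{betaDGP}) is easy to see numerically. For the assumed flat $\Lambda$CDM background one has $\dot{H}/H^2 = -\frac{3}{2}\Omega_{\rm m}(a)$, so both $\beta$'s can be sketched as follows (fiducial $\Omega_{\rm m}=0.24$; function names ours):

```python
import math

OM = 0.24  # fiducial Omega_m, flat LCDM

def E(a):
    """H(a)/H0 for the assumed LCDM expansion history."""
    return math.sqrt(OM / a ** 3 + (1.0 - OM))

def om_a(a):
    return (OM / a ** 3) / E(a) ** 2

def beta_dgp(a, rc_H0):
    # eq. (betaDGP), using Hdot/H^2 = -(3/2) Omega_m(a)
    return 1.0 + 2.0 * E(a) * rc_H0 * (1.0 - 0.5 * om_a(a))

def beta_cascade(a, rc_H0):
    # eq. (beta): one extra power of H r_c, hence a sharper transition
    return 1.0 + 2.0 * (E(a) * rc_H0) ** 2 * (1.0 - 0.5 * om_a(a))
```

At early times ($Hr_c \gg 1$) the extra power of $Hr_c$ makes $\beta$ much larger than $\beta_{\rm DGP}$, {\it i.e.} the scalars decouple more strongly; by $a=1$ the two are comparable.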
Most of our results are derived within linear theory, since this is a valid approximation for bulk flows over the scales of interest. In this regime, we can obtain a modified evolution equation for the density perturbations $\delta\equiv \delta\rho/\bar{\rho}$. Using the fact that the energy-momentum tensor on the brane is covariantly conserved, density and velocity perturbations evolve as usual via \begin{equation} \dot{\delta} = -\frac{k}{a}v\;;\qquad \dot{v} + Hv = \frac{k}{a}\Psi_{\rm dyn}\,. \label{emomcons} \end{equation} These can be combined with the linearized version of~(\ref{pieqn2}) to yield \begin{equation} \ddot{\delta} + 2H\dot{\delta} = 4\pi G \bar{\rho} \left(1+\beta^{-1}\frac{D-4}{D-2}\right)\delta\,. \label{deltaevolve} \end{equation} Note that this is consistent with~(\ref{poissonflat}) in the flat space limit $\beta\rightarrow 1$. As another quick check, letting $\beta\rightarrow\beta_{\rm DGP}$ and setting $D=5$, this agrees with explicit cosmological results in DGP~\cite{lue}. In this paper, we will solve for peculiar velocities using linear theory and cross-check the results with simulations. \section{Review of bulk flow formalism} \label{bulkrev} Since matter responds simply to gravitational gradients at the linear level, it is straightforward to estimate the expected bulk flows on length scales where matter overdensities are in the linear regime. See \S II.14 of~\cite{Peebles} for an introductory exposition. The root-mean-square of the one-dimensional (1-dim) velocity, $\sigma_v^2 \equiv \langle v_{\rm 1-dim}^2\rangle$, on a scale $R$ is given by~\cite{Watkins:1997tq} \begin{equation} \label{sigv} \sigma_v^2(R) = \frac{1}{3} \cdot \frac{H_0^2 }{2 \pi^2} \int_0^\infty P(k) W^2(k,R)f^2(k) {\rm d}k\,, \end{equation} where $P(k)$ is the power spectrum of matter density fluctuations, and $W(k)$ is a window function with scale $R$ (to get the three-dimensional answer, we simply remove the $1/3$).
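The linear growth history entering $f(k)$ and $P(k)$ follows from integrating~(\ref{deltaevolve}). A minimal pure-Python RK4 sketch in $x=\ln a$, assuming the $\Lambda$CDM relations $\Omega_{\rm m}(a)=\Omega_{\rm m}/(\Omega_{\rm m}+\Omega_\Lambda a^3)$ and $\dot{H}/H^2=-\frac{3}{2}\Omega_{\rm m}(a)$ (the step count and starting epoch are illustrative choices):

```python
import math

OM = 0.24  # fiducial Omega_m, flat LCDM background

def growth(D, rc_H0, a_init=1e-3, n=4000):
    """Integrate eq. (deltaevolve) in x = ln a with RK4.
    Returns (delta(a=1)/delta(a_init), f = dln delta / dln a today)."""
    def rhs(x, y):
        a = math.exp(x)
        om = OM / (OM + (1.0 - OM) * a ** 3)     # Omega_m(a)
        E2 = OM / a ** 3 + (1.0 - OM)            # (H/H0)^2
        beta = 1.0 + 2.0 * E2 * rc_H0 ** 2 * (1.0 - 0.5 * om)  # eq. (beta)
        src = 1.5 * om * (1.0 + (D - 4.0) / ((D - 2.0) * beta))
        return [y[1], src * y[0] - (2.0 - 1.5 * om) * y[1]]

    x, h = math.log(a_init), -math.log(a_init) / n
    y = [1.0, 1.0]  # matter-era growing mode: delta ~ a
    for _ in range(n):
        k1 = rhs(x, y)
        k2 = rhs(x + h / 2, [y[i] + h / 2 * k1[i] for i in range(2)])
        k3 = rhs(x + h / 2, [y[i] + h / 2 * k2[i] for i in range(2)])
        k4 = rhs(x + h, [y[i] + h * k3[i] for i in range(2)])
        y = [y[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
             for i in range(2)]
        x += h
    return y[0], y[1] / y[0]
```

Setting $D=4$ switches the scalar source off and recovers the standard growth rate $f \simeq \Omega_{\rm m}^{\,0.55} \approx 0.46$ today; larger $D$ and smaller $r_c$ both increase $f$ and the integrated growth.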
Similarly to~\cite{Watkins:1997tq}, we choose the window function to be a Gaussian for the majority of our analysis: \begin{equation} W^{\rm G}(k) = \exp(-k^2 R^2)\,. \label{gauss} \end{equation} This is designed to capture only the small-$k$/long-distance behavior of the power spectrum, {\it i.e.} the bulk flow. Note that the window function used for determining the bulk flow from peculiar velocity surveys \cite{hudson1} is slightly different from this, including some higher-$k$/smaller-$R$ modes. We present this observational window function in Sec.~\ref{data} and use it when comparing with the results of~\cite{hudson1}. In practice, however, it gives very similar results to the Gaussian window function. See Fig.~\ref{wind} for a sneak preview of these window functions. The velocity also depends on the growth rate $f(k)$ of perturbations, \begin{equation} f(k) \equiv \left. \frac{{\rm d}\ln g(a,k)}{{\rm d}\ln a}\right\vert_{a = a_0}\,, \label{growthrate} \end{equation} where $g$ is the growth function, and $a_0$ is the present scale factor. In the gravity theories of interest (as well as in theories with clustering dark energy) both $g$ and $f$ are generically scale-dependent. This formalism is similar to that used to set the normalization of the matter power spectrum. The parameter most commonly used for this purpose is $\sigma_8$, the rms matter fluctuation within spheres of radius $8 h^{-1}$~Mpc: \begin{equation} \sigma_8^2 \equiv \frac{1}{2\pi^2} \int_0^\infty k^2 \, P(k) W^2_8(k) {\rm d}k\,, \end{equation} where $W_8(k) = 3j_1(kR_8)/kR_8$, with $R_8 = 8 h^{-1}$ Mpc and $j_1$ a spherical Bessel function. Note that the window function for $\sigma_8$ is a shallower function of $k$ than that for $\sigma_v$. Hence, $\sigma_8$ folds in more higher-$k$ modes, whereas peculiar velocities are a direct probe of low-$k$ power.
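The quadrature in~(\ref{sigv}) is straightforward; the sketch below uses a crude toy spectrum ($P\propto k$ on large scales with a turnover at an illustrative $k_{\rm eq}$, {\it not} the paper's $\Lambda$CDM spectrum) and a constant growth rate, just to exhibit the mechanics and the monotonic decrease of $\sigma_v$ with smoothing scale:

```python
import math

def sigma_v(R, f=0.47, n_pts=2000):
    """Toy evaluation of eq. (sigv) with the Gaussian window, eq. (gauss).
    R in Mpc/h; returns km/s in an arbitrary normalization (toy P(k))."""
    k_eq = 0.01  # h/Mpc, illustrative turnover scale

    def P(k):
        return k / (1.0 + (k / k_eq) ** 2) ** 2

    lo, hi = math.log(1e-4), math.log(10.0)
    h = (hi - lo) / n_pts
    total = 0.0
    for i in range(n_pts + 1):          # trapezoid in ln k: dk = k dln k
        k = math.exp(lo + i * h)
        w = 0.5 if i in (0, n_pts) else 1.0
        total += w * P(k) * math.exp(-2.0 * k * k * R * R) * k * h
    return math.sqrt(total * (100.0 * f) ** 2 / (6.0 * math.pi ** 2))
```

With $k$ in $h$/Mpc and $P$ in $({\rm Mpc}/h)^3$, the $h$'s cancel against $H_0 = 100h$~km/s/Mpc, which is why the prefactor is $100f$.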
The long-distance modifications of gravity of interest boost peculiar velocities in two ways: \begin{itemize} \item Faster development of structure at late times, encoded in the growth rate $f$; \item Greater build-up of structure due to the integrated influence of stronger gravity, resulting in a larger amplitude of the power spectrum ({\it i.e.}, larger $\sigma_8$). \end{itemize} As mentioned earlier, however, the amplitude of the late-time matter power spectrum cannot be too drastically altered, as it is constrained by large-scale structure observations. These will be taken into account in Sec.~\ref{ampcons} to constrain our model parameters. Nevertheless, the boost in the bulk flow can be substantial because of the growth rate effect. In other words, whereas the power spectrum is the integrated result of the entire growth history, peculiar velocities are also sensitive to the present growth rate. Bulk flows are often discussed in a variety of different ways, which can be confusing. The bulk flows that are measured using peculiar velocity surveys and other techniques are reconstructions of a full three-dimensional bulk flow of our region of space, but they are based on measurements that are separately only one-dimensional --- our line-of-sight measurements always project the velocity of each object onto the radial direction. What is measured, then, is a 1-dimensional velocity for each object in the survey. By collecting and averaging over a large number of objects, however, we are able to reconstruct the full 3D bulk flow~\cite{hudson1}, albeit with a small residual contribution from some smaller scales. This can be seen in the difference between our window function~(\ref{gauss}) and the ideal experimental window function for surveys~(\ref{winfit}), shown in Fig. \ref{wind}, which includes a ``bump'' at around $k\sim 0.05h$~Mpc$^{-1}$. Present observational techniques only allow reliable measurement of our local bulk flow.
Thus, we have three independent measurements to compare with the expected variance in the local velocity calculated in~(\ref{sigv}). After reviewing the results of our determination of the variance in Secs.~\ref{est}-\ref{numerics}, we compare those theoretical expectations with the current best local bulk flow data in Sec.~\ref{data}. Although it is an effect on a different length scale, we also discuss how our model changes expectations for Bullet cluster initial velocities in light of~\cite{Lee:2010hj} in Sec.~\ref{data}. \section{Analytical Estimates} \label{est} In this Section, we estimate the expected bulk flows in Cascading Gravity, first through analytical methods assuming matter-dominated cosmology (Sec.~\ref{md}) and through fitting formulas of linear theory integration for $\Lambda$CDM expansion history (Sec.~\ref{de}). \subsection{Enhancement in Matter-Dominated Cosmology} \label{md} In the linear regime and on scales much smaller than $r_c$, density perturbations evolve according to~(\ref{deltaevolve}). To get a rough estimate of the expected bulk flow enhancement analytically, we first ignore dark energy and consider an Einstein-de Sitter ($\Omega_{\rm m} =1$) universe. At early times ($Hr_c \gg 1$), the cosmological Vainshtein effect results in $\beta \gg 1$, and density perturbations evolve as in standard gravity, with growing mode solution $\delta\sim a$. Once $Hr_c \lesssim 1$, however, the extra scalar modes of Cascading Gravity become effective and $\beta \approx 1$. Equation~(\ref{deltaevolve}) then reduces to \begin{equation} \ddot{\delta} + \frac{4}{3t}\dot{\delta} -\frac{2}{3t^2}\left(1+\frac{D-4}{D-2}\right)\delta = 0\,, \end{equation} with growing mode solution \begin{equation} \delta \sim t^\gamma\,, \label{delgam} \end{equation} where $\gamma$ is given by~(\ref{gam}).
As a quick check, since $\gamma = 2/3$ for $D=4$, this reproduces the usual matter-dominated growth $\delta \sim a$. Since $H\sim t^{-1}$ in matter-dominated cosmology, the excess growth from the onset of the modified gravity phase (when $H = 1/r_c$) until the present time is \begin{equation} \delta = \delta^{\rm std\;grav} \times (H_0r_c)^{-\gamma+\frac{2}{3}}\,. \label{deltaenhance} \end{equation} Through the continuity equation,~(\ref{delgam}) translates into $v\sim \dot{\delta} = \gamma \delta/t$ for the peculiar velocity. The enhancement relative to standard gravity is thus \begin{equation} v = v^{\rm std\;grav} \times \frac{3\gamma}{2} \cdot (H_0r_c)^{-\gamma+\frac{2}{3}}\,. \label{vpecenhance} \end{equation} This expression neatly captures the two contributions to the bulk flow excess described in Sec.~\ref{bulkrev}: the larger growth rate, through the $3\gamma/2$ factor, and the boost in the power spectrum amplitude through~(\ref{deltaenhance}). The above derivation, which crudely assumes a sharp onset of the modified growth when $H = r_c^{-1}$, actually agrees to within a few \% with the exact integration of~(\ref{deltaevolve}) for matter-dominated cosmology. This illustrates the rapid turn-on of the scalar force for Cascading Gravity,~(\ref{beta}). \subsection{Corrections from including dark energy} \label{de} The above analytic estimate, while useful in guiding our thinking, does not capture the weakening of growth triggered by the onset of cosmic acceleration. Using a $\Lambda$CDM expansion history in~(\ref{deltaevolve}) is straightforward but requires numerical integration. The resulting growth rate, $f(a,k)$, and power spectrum, $P(k)$, are then substituted back into~(\ref{sigv}) to obtain the velocity dispersion $\sigma_v$ as a function of scale $R$. 
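The closed form~(\ref{gam}) can be checked directly: substituting $\delta \propto t^{\gamma}$ into~(\ref{deltaevolve}) with the Einstein-de Sitter relations $H = 2/3t$ and $4\pi G\bar{\rho} = 2/3t^{2}$ (and $\beta \approx 1$) gives the characteristic equation $\gamma(\gamma-1) + \frac{4}{3}\gamma = \frac{2}{3}\left(1+\frac{D-4}{D-2}\right)$, whose growing root is~(\ref{gam}). A quick numerical verification:

```python
import math

def gamma_growth(D):
    # growing root of the EdS characteristic equation, eq. (gam)
    return (-1.0 + math.sqrt((49.0 * D - 146.0) / (D - 2.0))) / 6.0

def char_residual(D):
    """gamma(gamma-1) + (4/3)gamma - (2/3)(1 + (D-4)/(D-2)); must vanish."""
    g = gamma_growth(D)
    return g * (g - 1.0) + 4.0 / 3.0 * g \
        - 2.0 / 3.0 * (1.0 + (D - 4.0) / (D - 2.0))
```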
In comparing modified and standard gravity, we keep other cosmological parameters fixed at their fiducial values: $\Omega_{\rm m}=0.24$, $\Omega_{\rm b}=0.042$, $h=0.73$, $n_s=0.96$, and a primordial amplitude chosen to yield $\sigma_8=0.80$ for $\Lambda$CDM growth history, consistent with WMAP 7-year data. In a similar spirit to Peebles' famous expression for the growth rate in the presence of dark energy, $f_{\Lambda {\rm CDM}} \simeq \Omega_{\rm m}^{\;0.6}$, we use the results of numerically solving the linear perturbation equations to derive a fitting formula for the effect of our modifications to gravity on the amplitude of clustering today. For $D = 5,\ldots,10$ and over the range $500 < r_c < 7000$~Mpc, we find that a reasonable fit is given by \begin{equation} \label{fit} \sigma_8 \simeq \sigma_8^{\rm std\;grav} \times \left(\frac{2}{3}H_0 r_c\right)^{0.71\left(-\gamma+\frac{2}{3}\right)}\,. \end{equation} We can also use a fitting technique to extend the expression for the peculiar velocity on $50h^{-1}$~Mpc scales given by~(\ref{vfitintro}) to a formula valid over a range of scales $R \ll r_c$: \begin{eqnarray} \label{vfitintro2} &v_{1-{\rm dim}}^{\rm G}(R) \simeq 102\times \frac{3 \gamma}{2} \left( \frac{9}{5} r_c H_0 \right)^{-\left(\gamma-\frac{2}{3}\right)} \times \nonumber \\ &\quad \quad e^{-23 \left ( \left( \frac{50h^{-1} {\rm Mpc}}{R\;[h^{-1}{\rm Mpc}]}\right)^{0.1}\hspace{-0.05in}- 1\right )}\hspace{-0.05in} \left( \frac{50h^{-1} {\rm Mpc}}{R\;[h^{-1}{\rm Mpc}]}\right)^{4/5} \hspace{-0.12in}{\rm km/s} \,, \end{eqnarray} with $\gamma$ defined in~(\ref{gam}). \section{Constraints on the Amplitude} \label{ampcons} The boost in the amplitude of density perturbations translates into a larger value for $\sigma_8$ compared to standard gravity, for fixed primordial amplitude.
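Both fitting formulas, (\ref{fit}) and (\ref{vfitintro2}), are elementary to evaluate; a sketch (names ours; by construction $v(R)$ reduces to~(\ref{vfitintro}) at $R = 50h^{-1}$~Mpc, and the amplitude boost disappears at $D=4$):

```python
import math

def gamma_growth(D):
    # growing-mode exponent, eq. (gam)
    return (-1.0 + math.sqrt((49.0 * D - 146.0) / (D - 2.0))) / 6.0

def sigma8_ratio(D, rc_H0):
    """Late-time amplitude boost over standard gravity, eq. (fit)."""
    return (2.0 / 3.0 * rc_H0) ** (0.71 * (2.0 / 3.0 - gamma_growth(D)))

def v_of_R(D, rc_H0, R):
    """eq. (vfitintro2): 1-dim rms velocity (km/s) at scale R (Mpc/h)."""
    g = gamma_growth(D)
    amp = 102.0 * 1.5 * g * (1.8 * rc_H0) ** (-(g - 2.0 / 3.0))
    x = 50.0 / R
    return amp * math.exp(-23.0 * (x ** 0.1 - 1.0)) * x ** 0.8
```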
It is important to emphasize that the precise, early-time WMAP limit on $\sigma_8$ applies only to the primordial amplitude as evolved to today using standard growth history; our choice of this amplitude, $\sigma_8 =0.8$, is consistent with WMAP 7-year results \cite{Larson:2010gs}. The amplitude of the late-time power spectrum is measured by various observations discussed below, constraining our models. \begin{figure}[htbp] \centering \includegraphics[width=0.5 \textwidth]{fudgefactor.pdf} \caption{The shaded region of the $D-r_c$ parameter space shows the values allowed by observational constraints on the power spectrum amplitude. See also Table~\ref{tab1}.} \label{fudge} \end{figure} Notable omissions in our discussion are large-scale tests, such as the cosmic microwave background temperature anisotropy and the Integrated Sachs-Wolfe galaxy cross-correlation. In DGP, these observables are the most constraining for $r_c$~\cite{lamdgpobs}. However, analysis of these effects requires evolving perturbations on horizon scales; this is beyond our Newtonian treatment. Though our decoupling-limit arguments (as in Sec. \ref{massgrav}) should give robust predictions on Newtonian scales, horizon-scale observables require a relativistic treatment of cosmological perturbation theory in Cascading Gravity. \begin{table}[htbp] \centering \begin{tabular}{|c|c|} \hline $D$ & $r_c$ (Mpc) \\ \hline 5 & 1665 \\ \hline 6 & 2250 \\ \hline 7 & 2486 \\ \hline 8 & 2614 \\ \hline 9 & 2694 \\ \hline 10 & 2750 \\ \hline \end{tabular} \caption{Minimum values for $r_c$ in Mpc allowed by observational constraints on the power spectrum amplitude. This contains the same information as Fig.~\ref{fudge}.
} \label{tab1} \end{table} \subsection{X-ray Clusters} The tightest constraint comes from cluster counts using X-ray observations of the ROSAT All-Sky Survey~\cite{Vikhlinin:2008ym}: \begin{equation} \sigma_8 \left( \frac{\Omega_{\rm m}}{0.24} \right)^{0.47} = 0.829 \pm 0.0275 \quad \mbox{(Clusters)}\,, \label{clusters} \end{equation} where the error bar includes a 9\% systematic uncertainty in the mass calibration~\cite{Vikhlinin:2008ym}. For our fiducial $\Omega_{\rm m} = 0.24$, this implies $\sigma_8 \lesssim 0.88$ at the 95\% C.L. The translation from cluster abundance observations to $\sigma_8$ assumes standard gravity, while in our theories the dependence of the mass function on the linear power spectrum amplitude is modified. We will come back to this point shortly. In this work we take the $\sigma_8$ constraint at face value and, with our fiducial choice $\sigma_8^{\rm std\;grav} = 0.8$, impose $\sigma_8/\sigma_8^{\rm std\;grav} \lesssim 1.1$. Using~(\ref{fit}), this translates into a constraint on $r_c$ and $D$: \begin{equation} \left(\frac{2}{3}H_0 r_c\right)^{0.71\left(-\gamma+\frac{2}{3}\right)} \lesssim 1.1\,. \label{sig8bound} \end{equation} The allowed region of the $D-r_c$ parameter space is shown in Fig.~\ref{fudge}. Note that for a given number of space-time dimensions $D$, this translates into a lower bound on $r_c$. In particular, we find $r_c \gtrsim 1665$ and 2750~Mpc for $D=5$ and 10, respectively. The above constraint is, on the one hand, conservative.
Because our scalar force turns off in regions of high density, such as galaxy clusters, we expect fewer and smaller non-linear structures in our model as compared with a $\Lambda$CDM model with identical present-day power spectrum normalization. On the other hand, the additional scalar force in our model leads to systematically higher dynamical masses in clusters \cite{Schmidt:2010jr}. This implies an increase in the number of clusters of fixed dynamical mass as compared with $\Lambda$CDM with the same late-time $\sigma_8$. The bound we use~(\ref{sig8bound}) assumes that these opposing effects approximately cancel. These considerations have been studied in detail by~\cite{Schmidt:2009am} for chameleon/$f(R)$ cosmology. A similar treatment in Cascading Gravity is work in progress~\cite{marcos}. Substituting the {\it minimum} allowed value of $r_c$ for each $D$ in~(\ref{vfitintro2}), we obtain the maximum allowed bulk flow as a function of scale. The result is shown in Fig.~\ref{maxima} for $D = 5,6,7$ and 10. On scales $50h^{-1}$~Mpc probed by~\cite{hudson1}, using the appropriate observational window function (see Sec.~\ref{data}), we find the range~(\ref{vrange}): \begin{equation} 220 < v_{3-{\rm dim}}^{\rm W} < 237~{\rm km/s} \,. \label{vrange2} \end{equation} This should be contrasted with the $\Lambda$CDM prediction of $179$~km/s. The upper range of~(\ref{vrange2}) is therefore almost consistent at the $2\sigma$ level with the observed bulk flow of $407 \pm 81$~km/s~\cite{hudson1}. In Sec.~\ref{data} we present a more careful comparison with the data. \subsection{Weak Lensing} The weak lensing analysis of the Canada-France-Hawaii Telescope Legacy Survey (CFHTLS)~\cite{Fu:2007qq} finds \begin{equation} \sigma_8 \( \frac{\Omega_{\rm m}}{0.24} \)^{0.53} = 0.855 \pm 0.086 \quad \mbox{(Weak Lensing)}\,. \label{WLcons} \end{equation} This is a much weaker constraint than~(\ref{clusters}) and is automatically satisfied for the range of late-time values for $\sigma_8$ considered here. 
As a side remark, note that the constraint on our modified gravity theories from weak lensing is actually weaker than implied by~(\ref{WLcons}) because of the cosmological Vainshtein effect. The lensing kernel for CFHT sources spans the redshift range $0.25 \leq z \leq 1$, where the effect of modifications to gravity is somewhat muted: for $D=10$ and $r_c = 2750$ Mpc, the case that yields the largest deviation from standard gravity, the difference from standard gravity varies from $\mathrel{\mathstrut\smash{\ooalign{\raise2.5pt\hbox{$<$}\cr\lower2.5pt\hbox{$\sim$}}}} 5$\% at $z=1$ to $\approx$7.5\% at $z=0.5$. Hence the average enhancement of $\sigma_8$ for observations made in this redshift range is less than the $z=0$ result suggests. \subsection{Galaxy Clustering} The amplitude of the late time matter power spectrum is also constrained by the clustering of galaxies measured by redshift surveys, such as the Sloan Digital Sky Survey (SDSS)~\cite{SDSS} and the 2 Degree Field Galaxy Redshift Survey (2dFGRS)~\cite{2dF}. However, an immediate comparison of these observations is not straightforward due to the uncertainty in the galaxy bias. Over the mass range of interest, though, preliminary results using N-body simulations show little difference in halo bias in our model compared with the standard gravity prediction~\cite{marcos}. Marginalizing over bias, Seljak {\it et al.}~\cite{urosbias} found \begin{equation} \sigma_8 = 0.88 \pm 0.06 \quad \mbox{(Galaxies)}\,. \label{galax} \end{equation} Again this is a much less restrictive result than the X-ray cluster observations. It is worth noting that the value of $\sigma_8$ that we use in our modified gravity results, which is in considerably better agreement with the peculiar velocity data than $\Lambda$CDM, coincides with the central value of~(\ref{galax}). 
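All of the amplitude constraints quoted in this section are phrased in terms of $\sigma_8$, the rms linear density fluctuation in spheres of radius $8h^{-1}$~Mpc. As a hedged illustration (not part of the original analysis), $\sigma_R$ can be computed from any linear power spectrum $P(k)$ by a top-hat-filtered integral; the spectrum used below is a toy power law with a small-scale turnover and arbitrary normalization, chosen only to show the mechanics:

```python
import math

def tophat_window(x):
    """Fourier transform of a real-space top-hat filter: W(x) = 3 (sin x - x cos x) / x^3."""
    if x < 1e-3:
        # small-x Taylor expansion, W(x) ~ 1 - x^2/10, avoids 0/0
        return 1.0 - x * x / 10.0
    return 3.0 * (math.sin(x) - x * math.cos(x)) / x ** 3

def sigma_R(power, R, kmin=1e-4, kmax=1e2, n=4000):
    """rms fluctuation in spheres of radius R (h^-1 Mpc):
    sigma^2(R) = (1 / 2 pi^2) \int dk k^2 P(k) W^2(kR), done in log k (trapezoid)."""
    lkmin, lkmax = math.log(kmin), math.log(kmax)
    dlk = (lkmax - lkmin) / n
    total = 0.0
    for i in range(n + 1):
        k = math.exp(lkmin + i * dlk)
        w = 0.5 if i in (0, n) else 1.0
        # extra factor of k converts dk to d(ln k)
        total += w * k ** 3 * power(k) * tophat_window(k * R) ** 2
    return math.sqrt(total * dlk / (2.0 * math.pi ** 2))

# Toy spectrum (hypothetical shape, arbitrary units): P ~ k on large scales,
# turning over near k ~ 0.02 h/Mpc.
toy_P = lambda k: k / (1.0 + (k / 0.02) ** 2) ** 1.7

s8, s16 = sigma_R(toy_P, 8.0), sigma_R(toy_P, 16.0)  # heavier smoothing lowers sigma
```

In practice the normalization of $P(k)$ would be fixed by the primordial amplitude and the growth history, which is precisely where the modified-gravity enhancement enters.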
\subsection{$E_g$ Parameter} The expectation value of the ratio of galaxy-galaxy lensing to galaxy-velocity cross-correlations has been proposed as an observational test of gravity~\cite{rachelEg}. In linear theory, this combination, denoted by $E_g$, is independent of bias and initial power spectrum normalization. Using SDSS luminous red galaxies at $z=0.32$,~\cite{urosEg} recently obtained \begin{equation} E_g = 0.392 \pm 0.065\,. \label{Egobs} \end{equation} In both $\Lambda$CDM cosmology and Cascading Gravity, this parameter is given by \begin{equation} E_g = \frac{\Omega_{\rm m}}{f}\,, \end{equation} where $\Omega_{\rm m}$ is the present-day matter density, and $f = {\rm d}\ln g/{\rm d}\ln a$ is the growth rate at the redshift of observation. With our fiducial value of $\Omega_{\rm m} = 0.24$, the $\Lambda$CDM result is $E_g = 0.387$. Our IR-modified gravity theories predict $0.333 \mathrel{\mathstrut\smash{\ooalign{\raise2.5pt\hbox{$<$}\cr\lower2.5pt\hbox{$\sim$}}}} E_g \mathrel{\mathstrut\smash{\ooalign{\raise2.5pt\hbox{$<$}\cr\lower2.5pt\hbox{$\sim$}}}} 0.346$ for $10\geq D \geq 5$, assuming as before the minimum allowed $r_c$ for each $D$. Values in this range are consistent with~(\ref{Egobs}). \begin{figure}[htbp] \centering \includegraphics[width=0.5 \textwidth]{Flows_for_rc1000.pdf} \caption{The expected 1-dimensional peculiar velocity as a function of scale, using the Gaussian window function~(\ref{gauss}). The curves are from linear theory whereas points (with bootstrap-estimated error bars) are from N-body simulations. The black curve/points are the standard gravity results. The long dashed (blue), dash-dotted (red) and dotted (green) curves/points are respectively the $D=5,6$ and $7$ results, keeping $r_c =1000$~Mpc fixed. The points at a given scale are slightly offset from one another for readability. 
For this comparison, $\sigma_8[\Lambda\mbox{CDM}]=0.85$.} \label{peclin1000} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=0.5 \textwidth]{Flows_for_5D.pdf} \caption{The expected 1-dimensional peculiar velocity as a function of scale, using the Gaussian window function~(\ref{gauss}), for $D=5$ and, from top to bottom, $r_c = 500$, 1000, 1500 and 3000~Mpc respectively in long-dashed (blue), dash-dotted (red), dotted (green), and short dashed (orange) curves (linear theory), and circle, diamond, upright triangle, and inverted triangle points (simulations, with bootstrap-estimated error bars). The standard gravity results are plotted as a solid (black) curve and square points. The points at a given scale are slightly offset from one another for readability. For this comparison, $\sigma_8[\Lambda\mbox{CDM}]=0.85$.} \label{fiveD} \end{figure} \section{Numerical simulations} \label{numerics} While the linear theory analysis should be applicable to calculate bulk flows on scales $\mathrel{\mathstrut\smash{\ooalign{\raise2.5pt\hbox{$>$}\cr\lower2.5pt\hbox{$\sim$}}}} 50h^{-1}$~Mpc, in this Section we explicitly check this against N-body simulations to include non-linear effects. Note that for the sole purpose of this comparison, we use a slightly higher primordial normalization for which $\sigma_8[\Lambda\mbox{CDM}]=0.85$. The N-body results presented here are obtained by integrating the $\chi$ equation of motion~(\ref{pieqn}) exactly, using a particle mesh approach. This is a notable improvement over earlier simulations presented in~\cite{markw}, where a spherical approximation was employed. 
Although the exact evolution agrees with the approximate results reported in~\cite{markw} to within $\mathrel{\mathstrut\smash{\ooalign{\raise2.5pt\hbox{$<$}\cr\lower2.5pt\hbox{$\sim$}}}} 5$\% for the power spectrum over the relevant scales, we nonetheless use the full code in the present work; a detailed comparison of the two approaches will be discussed elsewhere~\cite{marcos}. We performed a series of particle-mesh simulations of $400h^{-1}$~Mpc boxes on a $512^3$ grid with $512^3$ particles. To determine bulk flow statistics, we perform a real-space average of velocities over a large number of spheres centered on points randomly placed throughout our simulation output, randomizing and rerunning each $400h^{-1}$~Mpc box 2 or 3 times for each choice of parameters to gather better statistics. However, it is worth noting that each box gives results highly consistent with the other boxes and with the ensemble. We also use identical initial conditions across different parameter choices to isolate the effects of the new gravitational physics from random fluctuations. For each sphere, we calculate a Gaussian-weighted average velocity, \begin{equation} \langle v(R)[\mbox{one sphere}] \rangle = \frac{1}{N} \sum_i v_i \; \exp\left[ -(r_i/R)^2\right]\,, \end{equation} where $r_i$ is the distance of the $i$th particle from the randomly selected center, and $N = \sum_i \; \exp\left[ -(r_i/R)^2 \right]$ is a normalization factor. This gives 3 independent 1-dimensional velocities for each sphere. Altogether, we average over $1200-4500$ different velocities for each parameter pair. We then calculate the standard deviation of all the sphere-averaged 1-dimensional velocities to obtain the rms velocity. Figures~\ref{peclin1000} and~\ref{fiveD} compare linear theory integration (lines) with N-body results (points). Figure~\ref{peclin1000} explores the sensitivity to the number of extra dimensions, keeping $r_c$ fixed at 1000~Mpc. 
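The Gaussian-weighted sphere average just described can be sketched as follows; the random positions and velocities below are of course placeholders for actual simulation output, not our measured catalogues:

```python
import math
import random

def weighted_bulk_velocity(positions, velocities, center, R):
    """Gaussian-weighted mean velocity about `center`, per component:
    <v> = sum_i v_i exp[-(r_i/R)^2] / sum_i exp[-(r_i/R)^2]."""
    num = [0.0, 0.0, 0.0]
    norm = 0.0
    for pos, vel in zip(positions, velocities):
        r2 = sum((p - c) ** 2 for p, c in zip(pos, center))
        w = math.exp(-r2 / R ** 2)
        norm += w
        for a in range(3):
            num[a] += w * vel[a]
    return [n / norm for n in num]

# Placeholder catalogue: uniform positions in a 400 h^-1 Mpc box, Gaussian velocities.
random.seed(0)
pos = [[random.uniform(0.0, 400.0) for _ in range(3)] for _ in range(5000)]
vel = [[random.gauss(0.0, 300.0) for _ in range(3)] for _ in range(5000)]

# Three 1-dimensional bulk velocities for one sphere of radius R = 50 h^-1 Mpc:
v3 = weighted_bulk_velocity(pos, vel, center=[200.0, 200.0, 200.0], R=50.0)
```

The rms bulk flow is then the standard deviation of such components accumulated over many randomly centered spheres.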
(For $r_c = 1000$~Mpc, only the $D=5$ case satisfies the observational constraints discussed in Sec.~\ref{ampcons}; these plots are made only to illustrate the sensitivity to $D$.) Figure~\ref{fiveD} shows the dependence on $r_c$, fixing the number of dimensions at $D=5$. In each case, we performed the calculations for $R= 10,25,$ and $50 h^{-1}$ Mpc. We generated the error bars by bootstrap resampling subsets of the measured velocities, computing the rms for each subset, and using this as a dataset for constructing an error estimate. The upshot is that these figures show excellent agreement between linear theory and N-body simulations over the scales of interest. For bulk flows, the effects of non-linearities are basically absent, as hoped. This is in contrast with other velocity-related phenomena studied in~\cite{Percival:2008sh}, where the effects of non-linearities persist to scales $\sim 50 h^{-1}$ Mpc. \section{Comparison with Data} \label{data} \subsection{Bulk Flows} Figure~\ref{vdata} compares the range of expected 1-dimensional velocity variances with the three local flow components reported in~\cite{hudson1}. Our task in this Section is to quantify the extent to which these data are more likely in Cascading Gravity as compared to GR. We focus on the largest scales ($50h^{-1}$~Mpc) to minimize the influence of non-linear structures. Moreover, the data on different scales are correlated and would require a careful treatment of covariances. Note that all cosmological parameters are kept fixed in this analysis. We leave a comprehensive parameter likelihood analysis to future study. As mentioned earlier, for the comparison with data on $50h^{-1}$~Mpc scales we use the same window function as determined in~\cite{hudson1} to analyze peculiar velocity surveys. 
This window function, shown in Fig.~\ref{wind}, is well-fitted by \begin{widetext} \begin{equation} \label{winfit} W^{2,{\rm W}}(k) \simeq 3 \cdot \frac{1.86\cos(93.5 \, k)+0.0004\cosh(114.1\,k)-11.58\sin(2 \,k)+1.35\sinh(46\,k)}{4.89\cos(0.0012\,k)+0.73\,\cosh(121.3\,k)+1.45\sinh(66.6\,k)} \,. \end{equation} \end{widetext} Figure~\ref{wind} compares this fitting function with the actual window function of~\cite{hudson1} and with our Gaussian window function. (In Fig.~\ref{vdata}, we instead use the Gaussian window function, since~(\ref{winfit}) only applies on $50h^{-1}$~Mpc scales.) \begin{figure}[htbp] \centering \includegraphics[width=0.5 \textwidth]{Versus3Data.pdf} \caption{Comparison of the data from~\cite{hudson1} (points) with the range of bulk flows achievable in Cascading Gravity models (shaded region, using Gaussian window function~(\ref{gauss})). The heavy curve defining the bottom of the shaded region is the result for standard gravity for our fiducial cosmology. Note that the $20h^{-1}$ and $50h^{-1}$~Mpc data points are not independent. Using only the $50h^{-1}$ Mpc data points and comparing their fit to standard gravity to the fit to the maximum allowed velocity (which is reached for $D=10$, $r_c=2750$ Mpc), we find a $\Delta \chi^2 \simeq -4.6$.} \label{vdata} \end{figure} The local bulk flow has three velocity components $v_i$, $i\in(1,2,3)$, measured by~\cite{hudson1}. Their measurements have observational uncertainties $\sigma_i$, which we treat as independent Gaussian error bars. Although the three components are in truth related by a covariance matrix, this simplifying assumption has only a small effect on the statistics \cite{mikeprivate}. We compare these measurements with the expected distribution of 1-dimensional velocities, which has vanishing mean and rms $v_{1-{\rm dim}}^{\rm W}$. 
Assuming a Gaussian probability distribution, we can compare the different likelihoods through a simple $\chi^2$ statistic: \begin{equation} \chi^2 \simeq \sum_{i=1}^3 \frac{v_i^2}{\langle v_{1-{\rm dim}}^2 \rangle + \sigma_i^2} \,. \end{equation} Recall that for our fiducial cosmology the standard gravity peculiar velocity rms is $v_{1-{\rm dim}}^{\rm W}[\rm{std\;grav}] = 104$~km/s. We wish to compare this with the maximum allowed rms in our cascading gravity models, $v_{1-{\rm dim}}^{\rm W}[\rm{mod\;grav}] = 137$~km/s, achieved with $D=10$ and $r_c = 2750$~Mpc. Comparing the $\chi^2$ values, we obtain \begin{equation} \Delta \chi^2_{\rm{eff}} \simeq -4.6 \end{equation} in favor of the Cascading Gravity model, or $-1.54$ per degree of freedom. \begin{figure}[htbp] \centering \includegraphics[width=0.5 \textwidth]{Windows.pdf} \caption{Comparison of the Watkins {\it et al.} window function from~\cite{hudson1} (squares), the fitting function given in~(\ref{winfit}) (solid line) and the Gaussian window function (dashed line) with $R=50 h^{-1}$~Mpc, normalized such that their value is unity for $k \rightarrow 0$. The difference between the results for the two different window functions is very small, of order $\sim 1$\%.} \label{wind} \end{figure} Although appreciable, this improvement in the fit does not by itself warrant the inclusion of two new parameters ($r_c$ and $D$) in a strict sense. However, such parameter counting is often misleading. The underlying goal of long-distance modification of gravity is to relate these parameters to an existing one, $\Lambda$. In a self-consistent analysis of cosmological predictions in Cascading Gravity, it is conceivable that these parameters are in fact related to one another. For the purpose of this work, we can state conservatively that current peculiar velocity data tantalizingly hint at gravitational physics beyond $\Lambda$CDM but do not require it. 
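The $\chi^2$ comparison can be reproduced schematically as follows; the three bulk-flow components and error bars below are hypothetical stand-ins, NOT the published measurements of~\cite{hudson1}, while the two rms values are those quoted in the text:

```python
def chi2(components, errors, velocity_variance):
    """chi^2 = sum_i v_i^2 / (<v_1d^2> + sigma_i^2), for a zero-mean Gaussian model."""
    return sum(v * v / (velocity_variance + s * s)
               for v, s in zip(components, errors))

v_1d_std = 104.0   # km/s, standard gravity rms (fiducial cosmology)
v_1d_mod = 137.0   # km/s, maximum allowed in the modified model (D=10, r_c=2750 Mpc)

# Hypothetical bulk-flow components and 1-sigma errors in km/s (illustrative only):
v_obs = [230.0, 190.0, 250.0]
sigma = [60.0, 55.0, 70.0]

delta_chi2 = chi2(v_obs, sigma, v_1d_mod ** 2) - chi2(v_obs, sigma, v_1d_std ** 2)
# A larger model variance better accommodates large observed flows, so delta_chi2 < 0.
```

With the actual measured components the same bookkeeping yields the $\Delta\chi^2 \simeq -4.6$ quoted in the text.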
Meanwhile, on the very large, $300 h^{-1}$ Mpc scales relevant to the result of~\cite{kash1}, our modifications to gravity cannot account for the observed bulk flow. For our $D=10$ and $r_c = 2750$~Mpc model, we find $v_{1-{\rm dim}}^{\rm G}[{\rm mod\;grav}] = 29$~km/s, versus 24~km/s for $\Lambda$CDM. \subsection{Bullet Cluster} As mentioned in the Introduction, recent hydrodynamical simulations of the Bullet Cluster have shown that an initial velocity of $v\simeq 3000$ km/s is required when the cluster and subcluster are $\sim5$ Mpc apart to best reproduce X-ray observations. Lee and Komatsu~\cite{Lee:2010hj} have recently estimated that the probability of such an initial velocity in the standard $\Lambda$CDM framework is between $3.3\times10^{-11}$ and $3.6\times10^{-9}$ --- respectively 6.5 and 5.8$\sigma$ from the mean of cluster velocities --- where the uncertainty comes from the evolution of velocities with redshift. Most of the infall velocity is caused by the gravitational attraction of the main cluster, which is estimated to have a mass of $10^{15} M_{\odot}$ (that this is a good approximation for such massive clusters is established in~\cite{Lee:2010hj}). We therefore compute the infall velocity in our models by treating the clusters as point particles released from rest at a large initial separation of $\sim30$~Mpc and integrating their dynamics down to a separation of 5~Mpc as in~\cite{Lee:2010hj}. For simplicity, we assume a head-on collision. The gravitational potential due to the large cluster is obtained by solving~(\ref{pieqn2}) for $\chi$ and the standard Poisson equation for $\Psi_{\rm N}$ for a point particle of mass $10^{15} M_{\odot}$. The motion of the subcluster is then obtained from the full gravitational potential given by~(\ref{dyn}). We also include in quadrature the effect of the enhanced bulk flow on 5~Mpc scales; however, the final velocity is almost entirely determined by the infall calculation. 
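As a rough order-of-magnitude check (ours, not part of the analysis of~\cite{Lee:2010hj}): for a test particle released from rest in the Newtonian field of a $10^{15}\,M_\odot$ point mass, ignoring the scalar force, the Hubble flow, and any extended mass profile, energy conservation fixes the speed gained during infall:

```python
import math

G = 4.301e-9  # Newton's constant in Mpc (km/s)^2 / M_sun

def newtonian_infall_speed(mass, r_initial, r_final):
    """Speed at r_final for a particle released from rest at r_initial:
    v^2 = 2 G M (1 / r_final - 1 / r_initial)."""
    return math.sqrt(2.0 * G * mass * (1.0 / r_final - 1.0 / r_initial))

# Released at 30 Mpc, evaluated at the 5 Mpc separation used in the text;
# this comes out near 1200 km/s, well below the ~3000 km/s the merger requires
# in standard gravity, illustrating why such high velocities are so improbable.
v_infall = newtonian_infall_speed(1.0e15, 30.0, 5.0)
```

The full calculation in the text adds the scalar-force contribution to the potential and the enhanced bulk flow, which is what lifts the probabilities quoted below.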
We should expect the enhancement to scale as $G_{\rm eff}^{1/2}$, where $G_{\rm eff}$ is the effective Newton's constant enhanced by the additional scalar forces in our models --- see~(\ref{deltaevolve}). In the last stages of infall, however, the enhancement is somewhat tamed by the Vainshtein mechanism. A related point is that the enhanced force in modified gravity does not substantially reduce the velocity at 5~Mpc necessary to explain the merger velocity. Because of Vainshtein screening, the final velocity of a particle falling from 5~Mpc to the center of a $10^{15} M_{\odot}$ cluster differs by at most 1\% compared with the infall in standard gravity. We find that initial velocities are 14\% to 27\% larger in our framework, with the smallest difference coming from $D=5$ and the largest for $D=10$, again assuming the minimal value of $r_c$ allowed by constraints on $\sigma_8$: $r_c= 1665$ and $2750$~Mpc, respectively. These enhancements shift the mean of the probability density function for initial velocities calculated in~\cite{Lee:2010hj}, which is a function of $\log v$. The larger bulk flow component at 5~Mpc also slightly widens the variance of the distribution. The end result is that the required initial velocities for the Bullet Cluster merger are much more likely in our model. Focusing on the $D=10$, $r_c= 2750$~Mpc case, the probability of an initial velocity of $3000$ km/s at $z=0$ is increased to $6.6\times10^{-7}$ --- 2.0$\times 10^4$ times more likely than the $\Lambda$CDM value; in other words, a change from a 6.5$\sigma$ to a 4.8$\sigma$ event. The probability of that initial velocity at $z=0.5$, closer to the actual merger redshift of $z=0.296$, is increased to $5.1\times10^{-5}$, again more than $10^4$ times as probable as the $\Lambda$CDM result --- a shift from 5.8$\sigma$ to 3.9$\sigma$. 
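The $\sigma$ equivalents quoted above follow from the one-sided Gaussian tail probability; a small stdlib-only converter (bisecting on the complementary error function) reproduces them to the stated precision:

```python
import math

def tail_probability(z):
    """One-sided Gaussian tail, P(Z > z)."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def sigma_of(p, lo=0.0, hi=40.0):
    """Invert the one-sided tail probability for z by bisection
    (tail_probability is monotonically decreasing in z)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if tail_probability(mid) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Probabilities quoted in the text map onto roughly the stated significances:
sig_z0 = sigma_of(6.6e-7)   # ~4.8 sigma at z=0
sig_z05 = sigma_of(5.1e-5)  # ~3.9 sigma at z=0.5
```

The same conversion applied to the $\Lambda$CDM probabilities $3.3\times10^{-11}$ and $3.6\times10^{-9}$ recovers the 6.5 and 5.8$\sigma$ figures.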
If we follow~\cite{Lee:2010hj} and also consider the probability of finding an initial velocity of 2000~km/s, the probability of such an occurrence in our model becomes 0.8\% at $z=0$ and 14.2\% at $z=0.5$, respectively 257 and 65 times more likely than the $\Lambda$CDM expectation (that is, respectively 2.4 and 1$\sigma$ in modified gravity, versus 4 and 2.9$\sigma$ in standard gravity). This significant increase in probability is another tantalizing hint of new gravitational physics. \section{Conclusions} In this paper, we have explored how peculiar velocities are affected in a broad class of IR-modified gravity theories called brane-induced gravity. On the scales of interest, these theories admit a local $4D$ description in terms of weak-field gravity plus $D-4$ scalar fields coupled to the trace of the matter stress-tensor. These scalar degrees of freedom effectively strengthen the gravitational attraction at late times and speed up structure formation. As a result, peculiar velocities are systematically larger than those expected in standard gravity. Comparisons between N-body simulations and linear theory calculations show that linear theory gives an excellent description of the physics of bulk flows. We have found that large-scale bulk flows can be enhanced up to $\sim 40\%$ relative to $\Lambda$CDM cosmology. The enhancement is limited by observational constraints on $\sigma_8$, the tightest limit coming from X-ray cluster abundance. The predicted peculiar velocities alleviate the tension with recent observations of the bulk flow on $50h^{-1}$~Mpc scales by Watkins {\it et al.}~\cite{hudson1,hudson2}, from a $\approx 3\sigma$ discrepancy in $\Lambda$CDM gravity to a $\approx 2\sigma$ difference in Cascading Gravity. The agreement between theory and data is improved by $\Delta \chi^2 \simeq -4.6$ in our model. Although modest, this improvement offers further motivation for more accurate bulk flow observations. 
Peculiar velocities are also enhanced on much larger scales ($\sim 300h^{-1}$~Mpc) probed by kinetic Sunyaev-Zel'dovich observations, but this is insufficient to explain the enormous bulk flow inferred by~\cite{kash1,kash2}. Violent merging systems, such as the Bullet Cluster, are much more probable in Cascading Gravity. Drawing on the recent analysis of~\cite{Lee:2010hj}, we have found that the occurrence of the Bullet Cluster in our theories is $\gtrsim 10^4$ times more likely than in standard gravity. We are pursuing various parallel tracks to improve upon the preliminary analysis presented here. As mentioned above, the tightest constraint on our models comes from cluster counts. Through N-body simulations and various semi-analytical techniques, we are currently determining the halo mass function in Cascading Gravity as a function of $D$ and $r_c$ \cite{marcos}. This will allow a more accurate comparison with X-ray data. With regards to the Bullet Cluster, we can use similar simulations to calculate the distribution of initial velocities for subclusters, following the standard gravity analysis of~\cite{Lee:2010hj}. On the theoretical side, the calculations presented here rely on some phenomenological input. While we are confident that our results capture the essence of Cascading cosmology predictions, they do not derive from a rigorous cosmological analysis of the complete higher-dimensional theory. An important first step in this direction would be to obtain the decoupling limit of Cascading Gravity while keeping the non-linearities in all scalar degrees of freedom. This would allow a derivation of the modified Friedmann equation, as in~\cite{galnic} for DGP, as well as the perturbation equations in the Newtonian regime. This is work in progress~\cite{kurtmarkdan}. The observations of large bulk flows and violent cluster mergers offer tantalizing evidence that structure is evolving more rapidly than predicted by $\Lambda$CDM cosmology. 
As we extend our measurements of large-scale bulk flows beyond our local region and discover an increasing number of merging systems, it will become clear whether these are statistical flukes or the first indication of a new realm of gravitational physics on cosmological distances in the late universe. \vspace{0.5in} \section*{Acknowledgements} \vspace{-0.2in} The authors would like to thank N.~Afshordi, M.~Hudson, L.~Hui, B.~Jain, E.~Komatsu, M.~Lima, R.~Sheth, C.~Springob, M.~Trodden, W.~Percival and R.~Watkins for helpful discussions, F.~Schmidt for helpful comments and coding advice, and especially H.~Feldman for providing his window function and for invaluable help with peculiar velocity calculations. The work of M.W. was supported by the Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Research \& Innovation. M.W. is grateful to the Center for Particle Cosmology for hospitality while part of this work was completed. The work of J.K. was supported by NSERC of Canada and funds from the University of Pennsylvania.
\section{INTRODUCTION} \label{sec:intro} With its extremely large effective area (4 m$^2$ at 2~keV, 8 m$^2$ at 5~keV, 10 m$^2$ at 8~keV, 1 m$^2$ at 30~keV), the good energy resolution of its Silicon Drift Detectors (SDDs \cite{vacchi1991,campana2011,zampa2011,evangelista2012}), and a relatively fast detector response time ($<$ 5 $\mu$s), the LAD \cite{zane2012} onboard LOFT \cite{feroci2012} could perform high-time-resolution observations of compact objects, for the study of strong-field gravity, black hole masses and spins, and the equation of state of ultradense matter \cite{feroci2010}.\\ In particular, the LOFT mission will allow the study of ultradense matter and neutron star structure, via accurate measurements of spin pulsations and of neutron star masses and radii, and through the asteroseismological study of crustal oscillations following intense flares from Soft Gamma Repeaters. Understanding the properties of ultradense matter and determining its equation of state (EOS) is one of the most challenging problems in contemporary physics. At densities exceeding that of atomic nuclei, exotic states of matter such as Bose condensates or hyperons may appear; at even higher densities a phase transition to strange quark matter may take place. 
The zero-temperature densities relevant to these transitions can be investigated only in neutron stars.\\ The other main objective of the mission is the study of strong-field gravity close to black holes and neutron stars, via measurements of black hole masses and spins from time variability and spectroscopy, general relativistic precession, epicyclic motion, the evolution of quasi-periodic oscillations (QPOs), and Fe line reverberation studies in bright AGNs.\\ To achieve these goals \cite{kennedy2012}, the absolute calibration accuracy of the LAD must be 10--15\% (which corresponds to a 5--7\% error on the neutron star radius determined from the study of the cooling phase of a type I burst).\\ Dead time is also relevant for all sources where we want to perform an accurate characterization of aperiodic phenomena, particularly at high frequency (1/t$_{dead}$). We need to be able to calibrate the dead time process to an accuracy better than the precision afforded by the count rates. Moreover, the dead time should be less than 0.5--1\% at 1 Crab, because too large a dead time makes it hard to reduce this error.\\ Strong-field gravity and black hole masses and spins will be addressed via the study of the Fe line of AGNs; hence an accurate energy calibration of the LAD is required. \section{Calibration activities} To meet these requirements, extensive ground and in-flight calibration campaigns are foreseen. The main activities are described in the following sections.\\ The FEE ASICs of the LAD will be equipped with an electronic calibration system, consisting of a test capacitor which can be connected through a gate to the inputs of the analog chains for the electronic calibration. A calibration charge is injected into the test capacitor during electronic calibrations. The electronic calibration procedure consists of a set of calibration trains, each train with a specific calibration charge. 
The calibration curve will be obtained from the comparison of the injected charge with the response of each analog chain of the FEE ASIC (see \cite{pacciani2007} for details of a similar case).\\ The threshold scan is a more sophisticated procedure, comprising an electronic calibration run for each threshold step (see \cite{pacciani2008} for details of a similar case). The electronic calibration procedure is a fundamental step of the calibration campaigns. \subsection{Ground Energy calibration, effective area calibration} In the following, we briefly summarize the calibration activities that we plan to perform on ground for the LAD. \begin{itemize} \item pedestal run, electronic calibrations, threshold scan with electronic calibrators of the FEE PCB prior to the detector tile integration (at room, but controlled, temperature); a dedicated setup is foreseen. Purpose: functionality, and comparison with the same tests after integration with the detector \item pedestal run, electronic calibrations, threshold scan with electronic calibrators of the FEE PCB after the detector tile integration (at room, but controlled, temperature); a dedicated setup is foreseen. Purpose: functionality, identification of unbonded strips \item pedestal run, electronic calibrations, threshold scan with electronic calibrators (and with radioactive sources), radioactive source run of the FEE PCB after the integration of the detector plane of a module (at room, but controlled, temperature); a dedicated setup is foreseen. Purpose: functionality (bonding verification), and comparison with similar calibrations before the integration with the detector; we will obtain the ratio C$_{feedback}$/C$_{detector}$ and the equalization of the test capacitances and of the test pulse devices (through comparison with radioactive source measurements). 
\item pedestal run, electronic calibrations, threshold scan with electronic calibrators of the FEE PCB after the integration with the detector plane of a module (at least 2 temperatures in the on-orbit operating range of the LAD); a dedicated setup is foreseen. \item calibration with radioactive sources at detector level, prior to the collimator mounting (at least 2 temperatures in the on-orbit operating range of the LAD) \item threshold scan with radioactive sources. Purpose: high-statistics energy calibration, and study of the temperature dependence of the system. \item pedestal run, electronic calibration, threshold scan, radioactive source acquisition (and source scan along the tile) after mounting of the collimator plane on the module, at room temperature. Purpose: functionality, rough test of the collimator mounting (effective area study through the radioactive source scan). \item pedestal run, electronic calibration, threshold scan, radioactive source acquisition (and source scan along the tile) after mounting of the collimator plane on the module; runs will be done at least at 2 temperatures in the on-orbit range of the LAD. Purpose: functionality, rough effective-area study of the collimator mounting (through the radioactive source scan). \item test at an X-ray facility (with a long guide tube to assure an almost parallel X-ray beam, and high X-ray throughput) at 2--3 temperatures (+ pedestal run, electronic calibration, threshold scan with electronic calibrator, threshold scan with a monochromatic line from the facility test beam). Purpose: energy calibration, on- and off-axis effective area. Tests will be performed at panel level at the Marshall XRCF, and at module level at facilities with smaller experimental halls. 
\end{itemize} \subsection{In-flight Energy calibration, Effective area calibration} Calibration activities will be performed in flight, both with electronic calibrations and with observations of astrophysical sources: \begin{itemize} \item pedestal run, electronic calibrations, threshold scan with electronic calibrator (at different temperature conditions). Purpose: functionality, energy calibration, comparison with ground-based calibrations \item Cas A Fe line (+ pedestal run, electronic calibrations, threshold scan with electronic calibrator). Purpose: energy response, on- and off-axis effective area \item Pb fluorescence lines during observations of empty fields, L$\alpha$ (10.5~keV) and L$\beta$ (12.6~keV) lines (+ pedestal run, electronic calibrations, threshold scan with electronic calibrator). Purpose: energy response \item Crab scan (+ pedestal run, electronic calibrations, threshold scan with electronic calibrator). Purpose: effective area, and off-axis behaviour in energy bins (comparison with the Wide Field Monitor--WFM\cite{donarumma2012} to compensate for fine Crab variability). Due to the flat field of view of the WFM, and the use of the same detector technology (the SDD), we expect that using the WFM for a relative normalization of the LAD on-/off-axis effective area will not introduce systematics (the WFM single camera reaches S/N=250 in one day on the Crab). \end{itemize} A possible strategy for the absolute effective-area calibration is two-fold: 1) on ground, detector tile by detector tile, with calibration sources and at X-ray facilities; 2) in flight, relative inter-calibration and alignment studies of the detector tiles with the Crab raster scan. \subsection{Ground/In-flight Dead Time study} We are currently studying the strategy and the hardware implementation of the data acquisition system, and the relevant telemetry involved, with the aim of reducing and simplifying the problems related to dead time correction during scientific analysis. 
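As background to these requirements (the simple model below is our own illustration, not a statement of the actual LAD electronics design): for an ideal non-paralyzable detector with per-event dead time $\tau$, the measured rate $m$ relates to the true rate $n$ by $m = n/(1+n\tau)$, so the dead-time fraction and the corrected rate can be estimated as:

```python
def measured_rate(true_rate, tau):
    """Non-paralyzable dead time model: m = n / (1 + n * tau)."""
    return true_rate / (1.0 + true_rate * tau)

def dead_time_fraction(true_rate, tau):
    """Fraction of events lost: n * tau / (1 + n * tau)."""
    return true_rate * tau / (1.0 + true_rate * tau)

def corrected_rate(meas, tau):
    """Invert the relation: n = m / (1 - m * tau)."""
    return meas / (1.0 - meas * tau)

# Hypothetical numbers, for illustration only: a per-channel event rate and a
# microsecond-scale dead time chosen arbitrarily (the real LAD values are set
# by the actual FEE/MBEE chain and must be measured by the activities below).
n, tau = 1.0e3, 5.0e-6          # events/s, s
m = measured_rate(n, tau)
loss = dead_time_fraction(n, tau)  # here ~0.5%, i.e. within the 0.5--1% goal
```

The point of the calibration activities listed below is precisely to measure the effective $\tau$ (and validate the model) so that such a correction can be applied with an accuracy better than the statistical precision of the count rates.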
The dead time study can be addressed both on ground and in flight. In the following we give a short list of activities related to this task: \begin{itemize} \item dead time study moving a source away from the detector plane, before integration with the collimator (this exercises only a subsystem, FEE+MBEE). Purpose: dead time study of FEE+MBEE \item dead time study of the whole system through an intensive electronic calibration, on ground (at room but controlled temperature), and in-flight. Purpose: dead time study of the whole system. \item dead time study in flight, from the study of the flux of bright astrophysical sources (e.g., Sco X-1, Cyg X-1). A dedicated observing mode is foreseen: a single module with only a small number of strips enabled, to measure the source flux with a minor dead time effect, while the remaining LAD modules operate in the nominal configuration, with the full dead time effect. \end{itemize} \pagebreak \section{Ground calibrations} \subsection{Facilities} We are currently studying the feasibility of ground calibrations at the Marshall-XRCF X-ray facility, or at European facilities with long guide-tubes, to assure an almost parallel X-ray beam, and a high throughput.\\ The X-ray Calibration Facility (XRCF)\footnote{http://optics.nasa.gov/facilities/xraycal.html} at Marshall Space Flight Center is a very attractive facility for the LAD calibrations, with its guide-tube which is 518 m long, and with a diameter of 1.5 m at the exit. The experimental chamber has a diameter of 7 m and a length of 23 m, and it is thermally controlled in the temperature range $-40\,^\circ$C -- $+70\,^\circ$C with a maximal spatial variation of $\pm$1.1$^\circ$C.\\ There are two X-ray sources: 1) The Electron Impact Point Source (EIPS), which is a Henke-type source which produces X-rays by focusing an electron beam onto a pure or composite metal target. The energy range of EIPS is 0.09--10~keV; the EIPS flux intensity on the detector ranges from 1 to 500 photons/s/cm$^2$, depending on the target. 
2) The Rotating Anode Source (RAS), which is an 18 kW Rigaku electron impact X-ray generator operating in the 6--40 kV range, for a current ranging from 10 to 450 mA. There are three different anode materials and three different cathode configurations. A Double Crystal Monochromator (DCM) is attached to the RAS. The DCM provides a line in the 1--10~keV range.\\ The MPE X-ray test facility PANTER\footnote{http://www2011.mpe.mpg.de/heg/www/Projects/PANTER/main.html} \cite{freyberg2006} at the Max-Planck-Institut f\"{u}r extraterrestrische Physik has a guide tube 123.6 m long, with a 1 m diameter. The experimental chamber has a diameter of 3.5 m, and a length of 12 m. An open X-ray source with two filter wheels, and a target wheel with 16 different targets, provides 1--10$\times$10$^3$~ph/cm$^2$/s. A commercial X-ray source provides a couple of lines between 4.5 and 22~keV, and a bremsstrahlung continuum up to 50~keV. A double crystal monochromator provides a line in the range 1.5--25~keV (including second order). The X-ray facility at the Ferrara University department \cite{loffredo2004} works at 6--60~keV and 15--150~keV; the beam size at 1~m from the source can be selected in the range 0.3--1.0~mm$^2$, with a count rate in the experimental hall of 10$^3$ cts/s. A double crystal monochromator (double crystal diffractometer), tunable from 7 to 120~keV, provides the X-ray calibration line. An upgrade is foreseen with a tunnel of $\sim$100 m.\\ The Palermo X-ray Calibration and Testing (XACT) facility \cite{barbera2006} has a 35 m long vacuum line, with a diameter at the beam exit of 1.5 m, to allow for the illumination of an 80~cm sample. The X-ray source is an electron impact multi-anode micro-focus Manson Model 5, with 6 anodes, and 4 filters. It produces up to 10$^5$ photons/cm$^2$/s at 20 m distance, operating in the 0.1--20~keV range. 
A fixed exit double crystal monochromator, tunable in the range 1--30~keV (producing a beam of size 60$\times$30~cm$^2$ at the test chamber), provides the calibration X-ray line. \\ Due to the beam divergence, the effective area calibration will be performed single module by single module, or detector tile by detector tile, depending on the chosen facility. A scan on the beam angle is foreseen to calibrate the on- and off-axis effective area. A movement system is needed to tilt the detector system with respect to the beam axis. During a run, each detector tile will see the beam inclined with a different offset angle with respect to the other detector tiles. To obtain a coherent set of data, in the scanning for the off-axis effective area (but only in the central part of the field of view), we have to choose a scanning step according to the offset angle of the detector tiles (1--2 arcmin). In this way we can study the top of the effective area for each detector tile (or for each module, depending on the chosen facility) with respect to the others. A coarser scan step is foreseen for the outer positions of the LAD field of view. The alignment of the module within LAD will be reconstructed via the optical study of the alignment of each module under test with respect to the nominal beam axis. The comparison of the optical alignment of the whole LAD with the optical alignment in the X-ray test facility will allow us to determine the overall effective area of the scientific instrument. A further alignment study will be performed in-flight with the Crab raster scan. \subsection{Simulations} In the following we perform some calculations to evaluate the feasibility of ground calibrations at an X-ray facility. We consider here the case of the Marshall-XRCF.\\ The double crystal monochromator system gives a non-uniform beam\cite{swartz1998}, therefore its usage for the LAD effective area study could be difficult. 
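The per-tile offset angles quoted above follow from simple geometry: with a point-like source at finite distance, a tile displaced transversely from the beam axis sees the beam at a slightly different incidence angle. A minimal Python sketch (the 518 m distance is the XRCF guide-tube length; the 0.15--0.30 m tile offsets are assumptions for illustration, not values from the text):

```python
import math

def offset_angle_arcmin(transverse_offset_m, source_distance_m):
    """Incidence-angle offset of a detector tile displaced from the
    beam axis, for a point-like X-ray source at finite distance."""
    return math.degrees(math.atan2(transverse_offset_m, source_distance_m)) * 60.0

# Marshall-XRCF: source-to-experiment distance ~518 m.
for dx in (0.15, 0.30):  # assumed tile-to-tile transverse offsets (m)
    print(f"offset {dx:.2f} m -> {offset_angle_arcmin(dx, 518.0):.1f} arcmin")
```

Offsets of this size reproduce the 1--2 arcmin scan step quoted above.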
In the following we investigate the usage of the C-continuum mode for the study of both the energy calibration and the effective area calibration of the LAD.\\ In our evaluations, we refer here mainly to the paper reporting the Chandra HRMA ground-calibration activities \cite{zhao1998}. They used the EIPS X-ray source with a Carbon anode, which produces a continuum spectrum with contamination from the Si-K$\alpha$, W-M$\alpha$, W-M$\beta$ multiplet (1.78~keV), Ca-K$\alpha$ (3.69~keV), Ti-K$\alpha$ (4.51~keV), V-K$\alpha$ (4.95~keV), Fe-K$\alpha$ (6.40~keV), W-L$\alpha$ (8.38~keV). Zhao and collaborators \cite{zhao1998} made use of these X-ray lines for the energy calibration of the solid state detectors (SSD, Canberra 30~mm ultra low energy Germanium detector \cite{camberra_bege}).\\ To perform the evaluation of the feasibility of calibrations at this X-ray facility, we need a rough estimation of the Fe-line photon flux, and of the continuum flux at the experimental hall: from that paper (their Fig.~3 and Fig.~4, referring to RUN-ID~116414, for the flat field evaluation), we can roughly evaluate the photon flux from the Fe line at the experimental hall with the following formula: \begin{equation} \label{eq:flux} F_{Fe\ line} = \frac{peak\_rate_{mon}}{dE_{bin}}\cdot\sqrt{2\pi}\cdot \frac{FWHM_{SSD}}{2.355}\cdot\frac{1}{\epsilon_{ssd}}\cdot\frac{1}{S_{aper}}\cdot\left(\frac{d_{mon}}{d_{exp\ hall}}\right)^2\, , \end{equation} where:\\ $peak\_rate_{mon}$ is the Fe line peak rate (continuum subtracted) measured at the monitor system located at a distance of 38 m from the X-ray source (shown in Fig.~3 of Zhao and collaborators\cite{zhao1998}; it corresponds to $\sim$0.03~c/s);\\ $dE_{bin}$ is the bin size of the spectrum shown (1 channel corresponds to $\sim$0.005~keV);\\ $FWHM_{SSD}$ is the SSD typical energy resolution ($\sim$ 0.16~keV at 6.4~keV)\cite{xrcf_phase1};\\ $\epsilon_{ssd}$ is the typical efficiency of the SSD at 6.4~keV ($\sim 1)$\cite{mcdermott1997};\\ 
$S_{aper}$ is the area of the aperture used for the SSD at the monitor position ($\sim$0.03~cm$^2$);\\ $d_{mon}$ is the distance of the monitor system from the X-ray source ($\sim$38 m);\\ $d_{exp\ hall}$ is the distance of the experiment from the X-ray source ($\sim$538 m).\\ With this method, we evaluated that the Fe-K$\alpha$ line flux at the experimental hall is $\sim$0.17 photons/cm$^2$/s.\\ Similarly, we evaluated that the X-ray flux of the continuum component of the spectrum at the experimental hall is $\sim$2.3 photons/cm$^2$/s/keV at 6.4~keV (the measured rate of the continuum component of the spectrum measured with the SSD at the monitor station is $\sim$0.07 c/s at 6.4~keV).\\ Alternatively, we can roughly evaluate the Fe-K$\alpha$ line flux at the experimental hall from RUN ID 110542, which was used for the effective area calibration of shell 6 of the HRMA. In this case we have the following formula for the Fe-K$\alpha$ line flux at the experimental hall: \begin{equation} \label{eq:flux_bis} F_{Fe\ line} = peak\_rate\_density_{HRMA\ focus}\cdot\sqrt{2\pi}\cdot\frac{FWHM_{SSD}}{2.355}\cdot\frac{1}{\epsilon_{ssd}}\cdot\frac{1}{A_{eff\ shell6}}\, , \end{equation} where:\\ $peak\_rate\_density_{HRMA\ focus}$ is the Fe line peak rate (continuum subtracted) divided by the energy bin of the histogram, measured with the SSD placed at the focus of shell 6 (shown in fig.~15 of Zhao and collaborators\cite{zhao1998}; it corresponds to $\sim$80~c/s/keV);\\ $A_{eff\ shell6}$ is the effective area of shell 6, which is reported in the same figure of the paper of Zhao and collaborators\cite{zhao1998} ($\sim$70~cm$^2$ at 6.4~keV).\\ With this other method, we evaluated that the Fe-K$\alpha$ line flux at the experimental hall is $\sim$0.19 photons/cm$^2$/s, which is in quite good agreement with the other method, considering the rough estimations of the Fe line peak fluxes, and of the other quantities. 
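The two estimates can be reproduced numerically; the following Python sketch evaluates Eqs.~(\ref{eq:flux}) and~(\ref{eq:flux_bis}) with the input values quoted in the text:

```python
import math

FWHM_TO_SIGMA = 1.0 / 2.355   # Gaussian FWHM -> standard deviation

def flux_from_monitor(peak_rate, dE_bin, fwhm_ssd, eps_ssd, s_aper, d_mon, d_hall):
    """Fe-line flux at the experimental hall from the monitor-station
    SSD spectrum (rates in c/s, energies in keV, areas in cm^2,
    distances in m); this is the first formula (eq:flux)."""
    return (peak_rate / dE_bin) * math.sqrt(2 * math.pi) * fwhm_ssd * FWHM_TO_SIGMA \
        / eps_ssd / s_aper * (d_mon / d_hall) ** 2

def flux_from_hrma_focus(peak_rate_density, fwhm_ssd, eps_ssd, a_eff):
    """Same quantity from the SSD at the HRMA shell-6 focus
    (peak rate density in c/s/keV, effective area in cm^2);
    this is the second formula (eq:flux_bis)."""
    return peak_rate_density * math.sqrt(2 * math.pi) * fwhm_ssd * FWHM_TO_SIGMA \
        / eps_ssd / a_eff

# Input values read off Zhao et al. (1998), as quoted in the text.
f1 = flux_from_monitor(0.03, 0.005, 0.16, 1.0, 0.03, 38.0, 538.0)
f2 = flux_from_hrma_focus(80.0, 0.16, 1.0, 70.0)
print(f"monitor method:    {f1:.2f} ph/cm^2/s")   # ~0.17
print(f"HRMA-focus method: {f2:.2f} ph/cm^2/s")   # ~0.19
```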
Similarly, the flux of the continuum component of the spectrum at the experimental hall can be evaluated to be $\sim$1.9 photons/cm$^2$/s/keV (the measured rate of the continuum component measured with the SSD in the focus of shell 6 of the HRMA is $\sim$130 c/s/keV at 6.4~keV). \\ For the case of the LAD ground calibrations, we simulated a parallel X-ray beam, with a C-continuum-like spectrum. We made use of the spectrum that Zhao and collaborators reported for such a beam \cite{zhao1998}; for the particular case of the Fe-K$\alpha$ line, we rescaled the line peak to continuum counts/keV by the normalization factor $FWHM_{SDD}/FWHM_{Ge\ SSD}$, where FWHM$_{SDD}$ is the energy resolution (FWHM) of the SDD detector (0.2~keV). In Fig.~\ref{fig:singlestrip_marshall_spectrum} we show the simulated spectrum for one of the SDD channels. \begin{figure} \begin{center} \begin{tabular}{c} \includegraphics[width=9cm]{sim1.ps} \end{tabular} \end{center} \caption { \label{fig:singlestrip_marshall_spectrum} Simulated spectrum for a single SDD channel. Energies are reported in units of the standard deviation of the energy resolution.} \end{figure} The results of the fit are reported in Fig.~\ref{fig:singlestrip_marshall_fit}.\\ \begin{figure} \begin{center} \begin{tabular}{c} \includegraphics[width=12cm]{sim2.ps} \end{tabular} \end{center} \caption { \label{fig:singlestrip_marshall_fit} Single-strip results of the fit for the energy calibration assuming an exposure of 113~ks. Top-left panel: distribution of the fitted value of the line peak position (units are reported in standard deviations). Top-right panel: 1-sigma errors in the fit of the line peak position (units are reported in standard deviations). Middle panel: relative error in the counts collected from the X-ray line alone (continuum subtracted). 
Bottom panel: $\chi ^2$ of the fit for each of the simulated spectra.} \end{figure} To obtain a stable fit, with an error of $\sim$0.1 standard deviations (of the energy resolution), we need at least 980 counts for each strip. Assuming a fraction of single-strip clusters of 0.4, an efficiency of 0.8 at 6.4~keV, an open fraction of 0.5, and an SDD strip area of 0.3~cm$^2$, we need an exposure of $\sim$113~ks to perform the energy calibration (strip by strip) for 1/3 of a LAD panel (the spectrum also contains the other X-ray lines that we need for the energy calibration procedure). At Marshall-XRCF, we need 3 runs to illuminate a panel, and 18 runs to illuminate the whole LAD. Hence we need 3.5 weeks of data taking to assure an energy calibration with an error budget of 1/10 of the LAD energy resolution.\\ On- and off-axis effective area calibrations will be less demanding in terms of exposure: following the procedure reported in \cite{zhao1998}, we will collect 2500 counts (in an energy bin of 0.1~keV around the Fe line, using only single-strip clusters) on each detector tile in 880 s, corresponding to a statistical uncertainty of 1\% per module. Hence we will calibrate the on-axis effective area (and the effective area for single-strip/multiple-strip cluster event topologies) of the whole LAD with an exposure of 14000 s.\\ Additional sources of error in the measurements come from the evaluation of the quantum efficiency of the monitor system, and from the error in the knowledge of the aperture area of the monitor system (0.8\% for the Chandra HRMA calibrations). We note, however, that these evaluations are performed assuming the set-up of the Marshall-XRCF used for the HRMA calibrations. We can act on the EIPS source set-up to assure a higher X-ray flux, and shorter calibration exposures. 
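The exposure budget above follows from a simple count-rate estimate; a Python sketch (the Fe-line flux of 0.18 ph/cm$^2$/s is an assumption of ours, taken between the two estimates of $\sim$0.17 and $\sim$0.19 derived earlier):

```python
def exposure_for_counts(counts_needed, line_flux, strip_area,
                        efficiency, open_fraction, single_strip_fraction):
    """Exposure (s) needed to accumulate `counts_needed` single-strip
    line counts on one SDD strip (flux in ph/cm^2/s, area in cm^2)."""
    rate = line_flux * strip_area * efficiency * open_fraction * single_strip_fraction
    return counts_needed / rate

# 980 counts/strip, assumed Fe-line flux 0.18 ph/cm^2/s, strip area
# 0.3 cm^2, efficiency 0.8, open fraction 0.5, single-strip fraction 0.4.
t = exposure_for_counts(980, 0.18, 0.3, 0.8, 0.5, 0.4)
print(f"exposure per run: {t / 1e3:.0f} ks")   # ~113 ks
```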
To further reduce the facility usage, the energy calibrations could be evaluated making use of the data from the runs for the on- and off-axis effective area calibrations (using the runs for the scan of the central part of the field of view).\\ We stress here that the beam uniformity at the experimental hall must be investigated for the particular case of the LAD calibrations, which require a very large and uniform X-ray beam.\\ The usage of a shorter X-ray facility, e.g., PANTER, will give similar exposures to calibrate the effective area at the level of the detector tile, because the shorter distance of the experimental hall from the X-ray source will allow for higher X-ray fluxes. In this case, for each run, each detector tile will see the source with a different incident angle (with a step of $\sim$1 arcmin). Hence the effective area at a given angle at the level of a module will be determined comparing different runs, to construct the effective area with the detector tiles aligned with the X-ray beam. \pagebreak \section{In-flight calibrations} \subsection{Cas-A SNR} A good target for the in-flight calibrations of LAD is the Cas-A SNR, which is a very bright source, with an X-ray flux of 80 mCrab (1.5--10~keV) \cite{maccarone1999,maccarone2001}. For our study here, we approximate the spectrum with a power law of photon index 3.6, with a superimposed Fe line whose intensity is 7.3$\times 10^{-3}$ ph/cm$^2$/s. The simulated spectrum for a single strip is reported in Fig.~\ref{fig:singlestrip_casa_spectrum}, where we assume a non-Earth-occulted fraction of 0.8, resulting in a total observing time of 640~ks. 
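These numbers can be cross-checked with an order-of-magnitude count-rate estimate; a Python sketch, carrying over the per-strip parameters (area, efficiency, open fraction, single-strip fraction) assumed in the ground-calibration section, so the exact count depends on which of those cuts actually apply:

```python
def line_counts_per_strip(line_flux, strip_area, efficiency,
                          open_fraction, single_strip_fraction,
                          elapsed_s, visible_fraction):
    """Fe-line counts accumulated on one SDD strip during an
    observation of `elapsed_s` seconds, with Earth-occultation
    losses folded in through `visible_fraction`."""
    rate = (line_flux * strip_area * efficiency
            * open_fraction * single_strip_fraction)
    return rate * elapsed_s * visible_fraction

# Cas A Fe line: 7.3e-3 ph/cm^2/s, 640 ks elapsed, 80% visibility.
c = line_counts_per_strip(7.3e-3, 0.3, 0.8, 0.5, 0.4, 640e3, 0.8)
print(f"Cas A Fe-line counts per strip: {c:.0f}")  # ~180, same order as the ~230 required
```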
\begin{figure} \begin{center} \begin{tabular}{c} \includegraphics[width=9cm]{sim_pow1.ps} \end{tabular} \end{center} \caption { \label{fig:singlestrip_casa_spectrum} Simulated spectrum for a single SDD channel for a simplified model of Cas A, including a power law and a Fe line only.} \end{figure} The distribution of the peak fit for each strip is reported in Fig.~\ref{fig:singlestrip_casa_fit}. \begin{figure} \begin{center} \begin{tabular}{c} \includegraphics[width=12cm]{sim_pow2.ps} \end{tabular} \end{center} \caption { \label{fig:singlestrip_casa_fit} Single-strip results of the fit for the energy calibration assuming an exposure of 640~ks. Top panel: distribution of the fitted value of the line peak position. Bottom panel: $\chi ^2$ of the fit for each of the simulated spectra.} \end{figure} For a good fit we need at least 230 cts/strip from the Fe line, which will be collected in 640~ks of observation (including occultation time). \subsection{Pb-L$\alpha$ and Pb-L$\beta$ lines from background spectrum} The Pb-L$\alpha$ and Pb-L$\beta$ lines will be a bright source for the LAD in-flight energy calibration\cite{campana2012}: we expect a joint Pb-L$\alpha$ and Pb-L$\beta$ line intensity of 4.6$\times$10$^{-3}$ counts/cm$^2$/s. We simulated the background spectrum corresponding to 680~ks of exposure (Fig.~\ref{fig:singlestrip_pbl_spectrum}). \begin{figure} \begin{center} \begin{tabular}{c} \includegraphics[width=9cm]{sim_2pow1.ps} \end{tabular} \end{center} \caption { \label{fig:singlestrip_pbl_spectrum} Simulated spectrum for a single SDD channel for the background spectrum, including the Pb-L$\alpha$ and Pb-L$\beta$ fluorescence lines.} \end{figure} In the simulation all the events are considered single-anode events. The distribution of the peak fit for each strip is reported in Fig.~\ref{fig:singlestrip_pbl_fit}. 
\begin{figure} \begin{center} \begin{tabular}{c} \includegraphics[width=12cm]{sim_2pow2.ps} \end{tabular} \end{center} \caption { \label{fig:singlestrip_pbl_fit} Single-strip distribution of the peak fit of the Pb-L$\alpha$ and Pb-L$\beta$ fluorescence lines in the simulated background spectrum.} \end{figure} \section{Conclusions} We have performed preliminary evaluations of the feasibility of ground and in-flight calibrations. This preliminary study shows that the calibrations of the LAD instrument on board LOFT are an affordable task, provided that ground calibration activities are performed at facilities such as the Marshall-XRCF or PANTER-MPE, with time scales of 1 month of facility usage (with the X-ray source set-up and intensity used for the Chandra HRMA calibrations).\\ The uniformity of the X-ray beam has not been investigated here, but this aspect needs a detailed study, because of the very large area to be calibrated. Also the possibility of raising the X-ray source intensity has not been covered here, and it surely demands a dedicated effort, with the aim to reduce the facility usage (and its impact on the costs and on the schedule of the LAD), or to obtain higher statistics.\\ Moreover, a dedicated plan of observations of astrophysical sources, such as Cas-A, and a Crab raster scan are foreseen. Further in-flight energy calibration activities will make use of the fluorescence Pb-L$\alpha$ and Pb-L$\beta$ lines produced in the collimation system of the LAD. The in-flight calibration strategy will allow for an independent and complete set of data for the LAD energy and on-/off-axis area calibrations.
\section{Introduction} \noindent \emph{Knowledge graphs} are an extremely versatile and flexible data model. They allow knowledge to be encoded without a predefined format and they are extremely robust in the face of missing data. This versatility comes at a price. For a given knowledge graph, it can be difficult to see the forest for the trees: how is the graph structured at the lowest level? What kind of things can I ask of what types of entities? What are small, recurring patterns that might represent a novel insight into the data? Answering these questions could benefit problem domains like graph simplification, graph navigation and schema induction. In the domain of unlabeled simple graphs, \emph{network motifs} \cite{milo2002network} were introduced as a tool to provide insight into local graph structure. Network motifs are small subgraphs whose frequency in the graph is unexpected with respect to a \emph{null model}. \begin{figure*}[bth] \centering \includegraphics[width=\textwidth]{example.pdf} \caption{An example of the principle behind our motif code. a) A basic knowledge graph. We consider only the integer indices of the nodes and relations. Labels are included only for readability. b) A motif that occurs frequently. c) A compressed representation; we remove all edges that are part of an occurrence of the motif and store separately which nodes match the motif. Together with the motif, this allows us to reconstruct the data.} \label{figure:example} \label{figure:codes} \end{figure*} Unfortunately, estimating this probability usually requires repeating the subgraph count on many samples from the null model. To avoid this costly operation, \cite{bloem2017large} introduces an alternative method, using \emph{compression} as a heuristic for motif relevance: the better a motif compresses the data, the more likely it is to be meaningful. In this paper, we extend this compression-based motif analysis to \emph{knowledge graphs}. 
For the purposes of this research we define knowledge graphs as labeled, directed multigraphs. Nodes are uniquely labeled with entity identifiers, and links are non-uniquely labeled with relations. We extend the definition of a motif to that of a \emph{basic graph pattern}: a small graph labeled with both variables and explicit entities and relations. A pattern matches if the variables can be replaced with specific values from the graph so that the pattern becomes a subgraph. The intuition behind our method is that we can use graph patterns to \emph{compress the graph}: we store the pattern, its instances, and the remainder of the graph. The better this representation compresses the graph, the more relevant the pattern. Figure~\ref{figure:example} illustrates the principle. In Section~\ref{section:preliminaries}, we justify this intuition more formally. We perform several experiments to show that our method returns meaningful subgraphs. First we test the intuition that a random graph should contain no motifs. We also show that when we artificially insert motifs into a random graph, we can then detect these as motifs. Finally, we show the results of motif analysis on three real-world knowledge graphs, compared to the baseline of selecting the most frequent graph patterns. All code and datasets used in this paper are available online.\footnotemark \footnotetext{\url{https://github.com/MaestroGraph/motive-rdf}} \paragraph{Related Work} Network motifs for unlabeled simple graphs were introduced in \cite{milo2002network}. A more comprehensive overview of the related literature can be found in \cite{bloem2017large}. In \cite{bloem2017large}, the principle of Minimum Description Length (MDL) was first connected to motif analysis. However, the idea had earlier been exploited for detecting meaningful subgraphs in the SUBDUE algorithm \cite{cook1994substructure}. 
A few other methods have been proposed for inducing the structure of a given knowledge graph in terms of subgraphs. In \cite{pham2015deriving}, the authors use the principle of characteristic sets to characterize a knowledge graph in terms of the star patterns it contains. In \cite{pham2016exploiting}, they show that the majority of the LOD cloud can be efficiently described using such principles, showing the highly tabular structure of many knowledge graphs. In \cite{volker2011statistical}, association rule mining is used to induce basic patterns in the graph. To the best of our knowledge, ours is the first method presented that can potentially induce any basic graph pattern. \subsection{Preliminaries} \label{section:preliminaries} \paragraph{Minimum Description Length} Our method is based on the MDL principle: we should favour models that compress the data. We will show briefly how this intuition can be made mathematically precise. For more details, we refer the reader to \cite{grunwald2007minimum} for MDL in general, and to \cite{bloem2018tutorial} for a more extensive discussion of these principles in the domain of graph analysis. Let ${\mathbb B}$ be the set of all finite-length binary strings. We use $|b|$ to represent the length of $b \in {\mathbb B}$. Let $\log(x) = \log_2(x)$. A \emph{code} for a set of objects $\cal X$ is an injective function $f: {\cal X} \to {\mathbb B}$. All codes in this paper are \emph{prefix-free}: no code word is the prefix of another. We will denote a \emph{codelength function} with the letter $L$, i.e., $\rc{L}(x) = |f(x)|$. We commonly compute $\rc{L}(x)$ directly, without first computing $f(x)$. There is a strong relation between codes and probability distributions: for each probability distribution $\rc{p}$ on $\cal X$, there exists a prefix-free code $\rc{L}$ such that for all $x \in \cal X$: $- \log \rc{p}(x) \leq \rc{L}(x) < -\log \rc{p}(x) + 1$. 
Inversely, for every prefix-free code $L$ for $\cal X$, there exists a probability distribution $\rc{p}$ such that for all $x \in \cal X$: $\rc{p}(x) = 2^{-\rc{L}(x)}$. For proofs, see \cite[Section~3.2.1]{grunwald2007minimum} or \cite[Theorem~5.2.1]{cover2006elements}. \paragraph{Relevance testing} We will use the MDL principle to perform a hypothesis test. Assume we have some data $x \in {\mathbb B}$ and a null hypothesis stating that it was sampled from distribution $\rc{p^\text{null}}$ (with corresponding code $\rc{L^\text{null}}$). A simple but crucial result, known as the \emph{no-hypercompression inequality} \cite[p103]{grunwald2007minimum}, tells us that the probability of sampling from $\rc{p^\text{null}}$ any data $x$ that can be described, \emph{using \bc{any code}}, in $\rc{L^\text{null}}(x) - k$ bits or fewer is less than $2^{-k}$. Thus, we can reject the hypothesis that the data was sampled from $\rc{p^\text{null}}$ by designing \bc{an alternative code} $\bc{L^\text{alt}}$ which compresses the data better than $\rc{L^\text{null}}$ by, say, 10 bits ($\rc{L^\text{null}}(x) - \bc{L^\text{alt}}(x) \geq 10$) and rejecting the null hypothesis with confidence $2^{-10}$. For a longer, more intuitive explanation of this principle in pattern induction, we refer the reader to \cite{bloem2018tutorial}. Note that when we use this procedure to find motifs, we are not providing statistical evidence for the hypothesis that the motif is ``correct'' \cite[Section~6.1]{bloem2017large}. We are simply using the principle of hypothesis testing as a \emph{heuristic} for pattern mining. The only assertion we are proving (in a statistical sense) is that the data did not come from the null model. \paragraph{Common codes} In the construction of our graph codes, we require some simpler codes as building blocks. 
First, when we store any positive integer $n$, we do so with the code corresponding to the distribution $p^{\mathbb N}(n) = 1/(n(n+1))$, and denote it $L^{\mathbb N}(n)$. For nonnegative numbers we add 1 to the argument. For the full range of integers ($L^{\mathbb Z}$), we add an extra bit for the sign, and then use the first code for negative integers and the second for positive ones. We will often need to encode \emph{sequences} of integers as well. These will be highly skewed, with only a subset of integers occurring frequently, and others occurring infrequently or not at all. As noted in \cite{de2016names}, a code based on the Pitman-Yor model \cite{pitman1997two} is very effective in such situations. Let $\rc{S} = \langle \gc{S_1}, ..., \gc{S_n}\rangle$ be a sequence of integers of length $n$. We first store the set of its members $m(\rc{S})$ (the vocabulary) in the order in which they occur: we store $n$ and the first member using $L^{\mathbb N}$ and then store each subsequent member by encoding the distance to the previous member using $L^{\mathbb Z}$. Having encoded the members of $\rc{S}$, we can store the sequence itself using the Pitman-Yor model as follows. Let $f(\gc{A}, \rc{B})$ be the frequency of symbol $\gc{A}$ in sequence $\gc{B}$. We then store the complete sequence using the code corresponding to the following distribution: \begin{align*} p(\rc{S}) &= \prod_{i \in [1,n]} p(\gc{S_i}\mid \rc{S_{1:i-1}})\\ & \text{with}\;\; p(\gc{A} \mid \rc{B}) = \begin{cases} \frac{\bc{\alpha} + \bc{d}\, |m(\rc{B})|}{|\rc{B}| + \bc{\alpha}} & \text{if } f(\gc{A}, \rc{B}) = 0 \\ \frac{f(\gc{A}, \rc{B}) - \bc{d}}{|\rc{B}| + \bc{\alpha}} & \text{otherwise} \end{cases} \end{align*} See \cite{de2016names} for a more intuitive explanation. In all experiments we use $\bc{\alpha} = 0.5$, $\bc{d}=0.1$. We will refer to the total resulting codelength as $L^{PY}(S)$. \section{Method} We will first give a precise definition of a knowledge graph as used in this paper. 
We will then describe the null model, which is used both as a point of comparison in our hypothesis test, and within the motif code to compress the remainder of the graph. Next, we describe how to compress a graph using a given motif, and a set of instances. Finally, we will describe how to search for likely motifs using simulated annealing. We analyse the \emph{structure} of knowledge graphs only, ignoring any meaning in relation to other graphs, encoded in the content of names or literals, or derived from ontology languages. Specifically, we model a knowledge graph as a multigraph with nodes and edges labeled with integers that map to entities and relations. This mapping is stored, but only the integer-labeled graph is modelled.\footnotemark \footnotetext{For practitioners this restriction is not noticeable, as the indices can simply be mapped back to the original strings when the found motifs are presented.} A \textbf{knowledge graph} $G$, is a tuple $G = (\gc{v}_G, \rc{r}_G, \bc{E}_G)$. $\gc{v}_G \in {\mathbb N}$ is the number of nodes in the graph, and $\rc{r}_G \in {\mathbb N}$ is the number of relations. We define the nodeset of $G$ as $\gc{V}_G = \{0, \ldots, \gc{v}_G-1\}$ and the relation-set as $\rc{R}_G = \{0, \ldots, \rc{r}_G-1\}$. The \emph{tripleset} $\bc{E}_G \subseteq \gc{V}_G \times \rc{R}_G \times \gc{V}_G$ determines the edges of the graph and their labels: each triple $(\gc{s}, \rc{p}, \gc{o}) \in \bc{E}_G$ encodes the subject node $\gc{s}$, the object node $\gc{o}$ and the \emph{predicate} or \emph{relation} $\rc{p}$ of an edge in the graph. This definition is compatible with RDF data. We interpret literals as nodes, considered the same node if they are expressed by the same string. A \textbf{pattern} $M$ for graph $G$ is a tuple $(\gc{V}_M, \rc{R}_M, G, \bc{E}_M)$. 
Let $\gc{v'}_M$ and $\rc{r'}_M$ indicate the number of variable nodes and variable links in $M$ respectively; then $\gc{V}_M \subseteq \{-\gc{v'}_M, \ldots, \gc{v}_G-1\}$ and $\rc{R}_M \subseteq \{-(\rc{r'}_M+\gc{v'}_M), \ldots, -(\gc{v'}_M+1)\} \cup \{0,\ldots, \rc{r}_G-1\}$, with $\bc{E}_M \subset \gc{V}_M \times \rc{R}_M \times \gc{V}_M$ representing the edges as before. That is; nodes in a pattern can be labeled either with nonnegative integers referring to $G$'s nodes or with negative integers representing a variable node, and similar for relations. The negative integers are always contiguous within a single pattern, with the highest representing the node labels and the lowest representing the edge labels. An \textbf{instance} for pattern $M$ in graph $G$ is a pair of sequences of integers: $I = (\gc{I^n}, \rc{I^r})$. $\gc{I^n}$ is a sequence of distinct integers of length $\gc{v'}_M$. $\rc{I^r}$ is a sequence of non-distinct integers of length $\rc{r'}_M$. For each edge $(\gc{s}, \rc{p}, \gc{o}) \in \bc{E}_M$ with any or all of $\gc{s}$, $\rc{p}$ and $\gc{o}$ negative, there is a corresponding link in $\bc{E}_G$ with a negative $\gc{s}$ replaced by $\gc{I}^\gc{n}_{-\gc{s}}$, a negative $\gc{o}$ replaced by $\gc{I}^\gc{n}_{-\gc{o}}$, and a negative $\rc{p}$ replaced by $\rc{I}^\rc{r}_{-\rc{p} - \gc{v'}_M}$. Put simply: variable links marked with the same negative integer must map to the same relation in order for the pattern to match, but variable links labeled with different negative integers \emph{may} map to the same relation. Variable nodes are always labeled distinctly and may never map to the same node in $G$. An instance describes a subgraph of $G$ that \emph{matches} the pattern $M$. Each edge in the motif may only match one edge in the graph. In other words, the occurrence of the motif in the graph must have as many edges as the motif itself.\footnotemark \footnotetext{In this aspect our definition differs from the SPARQL Basic Graph Pattern. 
Patterns for which this distinction is relevant are rare, and patterns returned by our method are still compatible with SPARQL.} \subsection{Null model} For a proper hypothesis test, we must compare the compression achieved by our motif code to the compression under a general model for knowledge graphs: a null model. The most common null model in classical motif analysis is the degree-sequence model (also known as the configuration model \cite{newman2010networks}): a uniform distribution over all graphs with a particular degree sequence. We extend this to knowledge graphs by also including the degree of each relation: that is, the degree of a relation is the frequency with which it occurs in the tripleset. Let a \emph{degree sequence} $D$ of length $n$ be a triple of integer sequences: $(\gc{D^\text{in}}, \rc{D^\text{rel}}, \gc{D^\text{out}})$. If $D$ is the degree sequence of a graph, then node $i$ has $\gc{D}^\gc{\text{in}}_i$ incoming links, $\gc{D}^\gc{\text{out}}_i$ outgoing links and for each relation $r$, there are $\rc{D}^\rc{\text{rel}}_r$ triples labeled with $r$. Let $\bc{{\cal G}}_D$ be the set of all graphs with degree sequence $D$. Then the degree-sequence model can be expressed simply as \[ p^\text{DS}(G) = \frac{1}{|{\cal G}_D|} \] for any $G$ that satisfies $D$ and $p(G) = 0$ otherwise. Unfortunately, there is no efficient way to compute $|{\cal G}_D|$ and even approximations tend to be costly for large graphs. Following the approach in \cite{bloem2017large}, we define a fast approximation to the configuration model, which works well in practice for motif detection. We can describe a knowledge graph by three length-$m$ integer sequences: $\gc{S}$, $\rc{P}$, $\gc{O}$, such that $\{(\gc{S}_j, \rc{P}_j, \gc{O}_j)\}_j$ is the graph's tripleset. 
If the graph satisfies degree sequence $D$, then we know that $\gc{S}$ should contain node $j$ $\gc{D}^\gc{\text{out}}_j$ times, $\rc{P}$ should contain relation $r$ $\rc{D}^\rc{\text{rel}}_r$ times and $\gc{O}$ should contain node $j$ $\gc{D}^\gc{\text{in}}_j$ times. Let ${\cal S}_D$ be the set of all such triples of integer sequences satisfying $D$. We have \[ |{\cal S}_D| = {m \choose {\gc{D}_1^\gc{\text{out}}, \ldots, \gc{D}_n^\gc{\text{out}}} } {m \choose {\rc{D}_1^\rc{\text{rel}}, \ldots, \rc{D}_{|R_G|}^\rc{\text{rel}}} } {m \choose {\gc{D}_1^\gc{\text{in}}, \ldots, \gc{D}_n^\gc{\text{in}}} } \text{.} \] While every member of $\bc{\cal S}_D$ represents a valid graph satisfying $D$, many graphs are represented multiple times. Firstly, many elements of $\bc{\cal S}_D$ contain the same link multiple times. We call the set without these elements $\bc{\cal S'}_D \subset \bc{\cal S}_D$. Secondly, the links of the graph are listed in arbitrary order; if we apply the same permutation to all three lists $\gc{S}$, $\rc{P}$ and $\gc{O}$, we get a new representation of the same graph. Since we know that any element in $\bc{\cal S}'_D$ contains only unique triples, we know that each graph is present exactly $m!$ times. This gives us \[ |\bc{{\cal G}}_D| = |\bc{\cal S}'_D| \frac{1}{m!} \leq |\bc{\cal S}_D| \frac{1}{m!} \text{.} \] We can thus use \[ p^\text{EL}_D(G) = \frac{m!}{|\bc{\cal S}_D|} \leq p^\text{DS}(G) \text{.} \] Filling in the definition of the multinomial coefficient and rewriting, we get a codelength of \begin{align*} - \log p^\text{EL}_D(G) =&\;\; 2 \log(m!) - \sum_i \log(\gc{D}_i^\gc{\text{in}}!)\\ &\;- \sum_i \log(\rc{D}_i^\rc{\text{rel}}!) - \sum_i \log(\gc{D}_i^\gc{\text{out}}!) \end{align*} as an approximation for the DS model. We call this the edgelist (EL) model. It gives a probability that always lower-bounds the configuration model, since it affords some probability mass to graphs that cannot exist.
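The EL codelength is cheap to evaluate. As an illustration (our own sketch, not code from the published implementation), the following Python computes $-\log p^\text{EL}_D(G)$ in bits directly from the three degree sequences, using \texttt{math.lgamma} for the log-factorials so that large graphs pose no numerical problem.

```python
import math

def log2_factorial(n):
    # lgamma(n + 1) = ln(n!); divide by ln(2) to measure codelengths in bits
    return math.lgamma(n + 1) / math.log(2)

def el_codelength(d_in, d_rel, d_out):
    """-log2 p^EL_D(G): the edgelist codelength for a graph with the given
    in-degree, relation-degree and out-degree sequences."""
    m = sum(d_in)  # number of triples; equals sum(d_out) and sum(d_rel)
    return (2 * log2_factorial(m)
            - sum(log2_factorial(d) for d in d_in)
            - sum(log2_factorial(d) for d in d_rel)
            - sum(log2_factorial(d) for d in d_out))
```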
Experiments in the classical motif setting have shown that the EL model is an acceptable proxy for the DS model \cite{bloem2017large}, especially considering the extra scalability it affords. \paragraph{Encoding D} In order to encode a graph with $L^\text{EL}_D$, we must first encode $D$.\footnotemark~For each of the three sequences $D'$ in $D$ we use the following model: \[ p(D') = \prod_i q^{{\mathbb N}}(D'_i)\;\;\;\;\; L(D') = - \sum_i \log q^{{\mathbb N}}(D'_i) \] \footnotetext{Or, equivalently, to make $p^\text{EL}$ a complete distribution on all graphs, we must provide it with a prior on $D$.} where $q^{{\mathbb N}}$ is any distribution on the natural numbers. This is an optimal encoding for $D$ assuming that its members are independently drawn from $q^{{\mathbb N}}$. When we use $p^\text{EL}$ as the null model, we use the data distribution for $q^{{\mathbb N}}$ to ensure that we have a lower bound to the optimal codelength (in essence, we cheat in favor of the null model, giving it a slightly lower than optimal codelength). When we use $p^\text{EL}$ as part of the motif code, we must use a fair encoding, so we use the Pitman-Yor code to store each sequence in $D$. In the design of our method, we will constantly aim to find a trade-off between completeness and efficiency that allows the method to scale to very large graphs. Specifically, when we economize, we will only do so in a way that makes the hypothesis test \emph{more conservative}. \subsection{Motif code} Having defined our representation of a knowledge graph, and a general null model for compressing such knowledge graphs, we can now define how we use a given pattern (together with its instances) to compress a dataset. We will assume that a target pattern $M$ is given for the data $G$ and that we have a set of instances ${\cal I}$ of $M$ in $G$.
Moreover, we require that all instances in ${\cal I}$ are mutually disjoint: no two subgraphs defined by a member of ${\cal I}$ may share an edge, but nodes may be shared. Given this information, we will define a motif code that will help us determine whether or not $M$ is a likely motif for $G$. In Section~\ref{section:search}, we detail a method to search for pairs $(M, {\cal I})$ to pass to the motif code. As described above, we can perform our relevance test with any compression method which exploits the pattern $M$ and its instances ${\cal I}$ to store the graph efficiently. The better our method, the more motifs we will find. Note that there is no need for our code to be optimal in any sense. We know that we will not find all motifs that exist, and we will not use them optimally to represent the graph, but the test is still valid. This also means that we are free to trade off compression performance against efficiency of computation. We store the graph by encoding various aspects, one after the other. The information in all of these together is sufficient to reconstruct the graph. Note that everything is stored using prefix-free codes, so that we can simply concatenate the codewords we get for each aspect, to get a codeword for the whole graph. We also assume that we are given a code $L^\text{base}$ for generic knowledge graphs (in practice, this will be the null model, although the motif code is valid for any base code). We store, in order: \begin{description} \item[the graph dimensions] We first store $\gc{v}_G$, $\rc{r}_G$ and $|\bc{E}_G|$ using the generic code $L^{{\mathbb N}}(\cdot)$. \item[the pattern] We store the structure of the pattern using the base code, and its labels as a sequence using the Pitman-Yor code. \item[the template] This is the graph, minus all links occurring in instances of $M$. Let $\bc{E}_G'$ be $\bc{E}_G$ minus any link occurring in any member of $\cal I$.
We then store $(\gc{v}_G, \rc{r}_G, \bc{E'}_G)$ using $L^\text{base}(\cdot)$. \item[the instances] To store the instances, we view the connections between the nodes made by motifs as a hypergraph, and we extend the EL code to store it. The details are given below. \end{description} The precise computation of the codelength is given in Algorithm~\ref{algorithm:motif-code}. \begin{pseudo}[tb] \caption{The motif code $L^\text{motif}(G ; M, {\cal I}, L^\text{base})$. Note that the nodes and relations of the graph are integers.} \label{algorithm:motif-code} { \small \textbf{function} $\text{codelength}(G; M, {\cal I}, L^\text{base})$:\\ \hspace*{5mm}\tab a graph $G$, a pattern $M$\\ \hspace*{5mm}\tab instances $\cal I$ of $M$ in $G$, a code $L^\text{base}$.\\ \\ $b_\text{dim} \leftarrow L^{\mathbb N}(\gc{v}_G) + L^{\mathbb N}(\rc{r}_G) + L^{\mathbb N}(|\bc{E}_G|)$ \\ ---\emph{Turn the pattern into a normal knowledge graph}\\ $\bc{E}_{M'} \leftarrow$ the edges of $M$ with positive integer labels \\ $M' \leftarrow (\gc{v}_{M}, \rc{r}_{M}, \bc{E}_{M'})$ \\ $S_M \leftarrow$ the labels of $M$ in canonical order \\ $b_\text{pattern} \leftarrow L^\text{base}(M') + L^{PY}(S_M) $\\ ---\emph{Store the template graph}\\ $\bc{E'}_G \leftarrow \bc{E}_G - \cup_{I \in {\cal I}} \text{triples}(I)$ \\ $b_\text{template} \leftarrow L^\text{base}((\gc{v}_G, \rc{r}_G, \bc{E'}_G))$\\ $b_\text{instances} \leftarrow -\log p_M({\cal I}) + \sum_{D \in D^{{\cal I}}} L^{PY}(D)$\\ \textbf{return} $b_\text{dim} + b_\text{pattern} + b_\text{template} + b_\text{instances}$\\ } \end{pseudo} \paragraph{Encoding motif instances} To encode a list of instances ${\cal I}$ of a given pattern $M$, we generalize the idea of the edgelist model described above.
To generalize this notion to arbitrary patterns, defined relative to a given template graph, we define the \emph{degree constraint} $D^{\cal I}$ of a list of instances for a given pattern as follows: for each variable node $\gc{i}$ in the pattern, the degree constraint provides an integer sequence $\gc{D^i}$ of length $\gc{v}_G$, indicating how often each node in the completed knowledge graph takes that position in the pattern. Similarly, for each variable edge $\rc{j}$ in the pattern, the degree constraint provides an integer sequence $\rc{C^j}$ of length $\rc{r}_G$ indicating for each relation how often it takes that position in the pattern. We store these sequences in the same manner as the degree sequence of the template graph, using the Pitman-Yor code for each. Given this information, all we need to do is describe which of the possible sequences of matches for this pattern satisfying the given degree constraint we are encoding. As with the configuration model, the ideal is a uniform code over all possible configurations, for which we will define an approximation. Given $\gc{w}$ variable nodes in a pattern and $\rc{l}$ variable edges, we can define such a collection of instances using $\gc{w}+\rc{l}$ integer sequences: $\gc{N^1}, \ldots, \gc{N^w}, \rc{L^1}, \ldots, \rc{L^l}$, with the $t$-th instance defined by the integer tuple $(\gc{N}^\gc{1}_t, \ldots, \gc{N}^\gc{w}_t, \rc{L}^\rc{1}_t, \ldots, \rc{L}^\rc{l}_t)$. If this set of sequences satisfies the degree constraint, we know that node $q$ must occur $\gc{D}^\gc{i}_q$ times in sequence $\gc{N}^\gc{i}$, and similarly for the variable links. Let ${\cal S}_{\cal I}$ be the set of all such integer sequences satisfying the constraint. We follow the same logic as for the EL model. Let $k$ be the number of matches of the pattern.
We have: \begin{align*} |{\cal S}_{\cal I}| = &{k \choose \gc{D}^\gc{1}_1, \ldots, \gc{D}^\gc{1}_v}\times \ldots \times{k \choose \gc{D}^\gc{w}_1, \ldots, \gc{D}^\gc{w}_v}\times \\ &{k \choose \rc{C}^\rc{1}_1, \ldots, \rc{C}^\rc{1}_r}\times \ldots \times{k \choose \rc{C}^\rc{l}_1, \ldots, \rc{C}^\rc{l}_r} \end{align*} As before, this set is larger than the set we are interested in. First, each set of pattern matches is contained multiple times (once for each permutation) and second, not all elements are valid pattern matches (in some, a single triple may be represented by multiple instances). Let ${\cal S}_{\cal I}'$ be the subset representing only valid matches, and let ${\cal G}_{\cal I}$ be the set of valid instances with permutations removed. As before, we have \[ |{\cal G}_{\cal I}| = |{\cal S}'_{\cal I}| \frac{1}{k!} \leq |{\cal S}_{\cal I}| \frac{1}{k!} \text{.} \] This gives us the distribution \[ p_M({\cal I}) = \frac{k!}{|{\cal S}_{\cal I}|} \leq \frac{1}{|{\cal G}_{\cal I}|} \text{,} \] with $-\log p_M({\cal I})$ as a code to store the instances. Rewriting as before gives us a codelength of \begin{align*} - \log p_M({\cal I}) =&\;\; (\gc{w}+\rc{l}-1)\log(k!) \\ &\;-\sum_{\gc{j}\in [1,\gc{w}], i} \log(\gc{D}^\gc{j}_i!) -\sum_{\rc{j}\in [1,\rc{l}], i} \log(\rc{C} ^\rc{j}_i!) \end{align*} Note that if we store a graph with the pattern \texttt{?n1 ?rel ?n2} we obtain an empty template graph, and this code reduces to the EL code, achieving the same codelength as the edgelist model, up to a small constant amount for storing the pattern. For a given graph and pattern, we can simply find the complete list of instances using a graph pattern search. Since we require slightly different semantics than standard graph pattern matchers, we adapt the DualIso algorithm \cite{saltz2014dualiso} for knowledge graph matching.
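The instance codelength above can likewise be evaluated directly from the degree constraints. The Python sketch below is our own illustration (the function names are hypothetical); it mirrors the formula term by term, with one constraint sequence per variable node and per variable relation.

```python
import math

def log2_factorial(n):
    # lgamma(n + 1) = ln(n!), converted to bits
    return math.lgamma(n + 1) / math.log(2)

def instance_codelength(node_constraints, rel_constraints, k):
    """-log2 p_M(I) in bits.

    node_constraints: one sequence D^j (length v_G) per variable node;
    rel_constraints:  one sequence C^j (length r_G) per variable relation;
    k: the number of instances."""
    w, l = len(node_constraints), len(rel_constraints)
    bits = (w + l - 1) * log2_factorial(k)
    for seq in list(node_constraints) + list(rel_constraints):
        bits -= sum(log2_factorial(d) for d in seq)
    return bits
```

For the pattern \texttt{?n1 ?rel ?n2} with $k=m$, $\gc{w}=2$ and $\rc{l}=1$, the expression becomes $2\log(m!)$ minus the factorial sums of the degree sequences, i.e. the EL codelength, in line with the remark above.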
Before computing the motif code, we prune the list of instances provided by this search, iterating over the instances and removing any instance that produces a triple also produced by an earlier instance. To guard against rare patterns that produce long-running searches, we terminate all searches after 5 seconds, returning only those matches that were found within the time limit. We express the strength of a motif by its log-factor: \[ L^\text{null}(G) - L^\text{motif}(G; M, {\cal I}, L^\text{base})\;\text{.} \] If this value is positive, the motif code compresses the graph better than the null model. If the log-factor is greater than 10 bits, it corresponds to a rejection of the null model at $p < 0.001$. \subsection{Motif search} \begin{figure}[tb] \centering \includegraphics[width=\linewidth]{synthetic-plot.pdf} \caption{The result of the random graph experiment. We sort the motifs by their score in the $k=75$ experiment and plot their frequency and log-factor.} \label{figure:synthetic} \end{figure} \label{section:search} Ultimately, we want to find any patterns that have a high log-factor for a given graph $G$. Since we can readily compute the log-factor for any given pattern, any black-box optimization algorithm can be used to search the space of all possible motifs. For the sake of simplicity, we will use basic simulated annealing: we start with a given pattern, and iterate by modifying the pattern in one of seven ways, chosen randomly. At each iteration, we search for instances of the pattern (limiting the time per search to 5 seconds) and compute the log-factor. If the log-factor is better, we move to the new pattern; if it is worse, we move to the new pattern with probability 0.5. The starting pattern is always a single random triple from the graph, with its relation made a variable. We define seven possible transitions from one pattern to another: \\ \textbf{Extend} Choose an instance of the pattern and an adjacent triple not part of the instance.
Add the triple to the pattern.\\ \textbf{Make a node a variable} Choose a random constant node, and turn it into a variable node.\\ \textbf{Make an edge a variable} Choose a random constant edge label, and turn it into a variable (always introducing a new variable).\\ \textbf{Make a variable node constant} Choose a random variable node and turn it into a constant. Take the value from a random instance.\\ \textbf{Make a variable edge constant} Choose a random variable edge and turn it into a constant. Take the value from a random instance.\\ \textbf{Remove an edge} Remove a random edge from the pattern, ensuring that it stays connected.\\ \textbf{Couple} Take two distinct edge variables which, for at least one instance, hold the same value and turn them into a single variable. All transitions are equally likely. If the transition cannot be made (for instance, there are no constant nodes to make variable) or if the resultant pattern is in some way invalid, we sample a new transition. Once a new pattern has been sampled, we compare its codelength under the motif model to that of the previous sample. If the codelength is lower, we continue with the new pattern. If the codelength is higher, we continue with the new sample with probability $\alpha$ or return to the previous pattern otherwise. We use $\alpha = 0.5$ in all experiments. We store all encountered patterns and their scores. In order to exploit all available processor cores, we run several searches in parallel. We take the top 1000 patterns from each and sort them by motif codelength. Variables are re-ordered to a canonical ordering using the Nauty algorithm \cite{mckay1981practical}, so that isomorphic patterns are not tested twice. \begin{figure}[tb] \centering \includegraphics[width=0.9\linewidth]{synthrep-plot.pdf} \caption{The result of the repeated random graph experiment. Color and size show the number of matches of the pattern after pruning.
Plot titles show the graph dimensions before adding instances.} \label{figure:synthrep} \end{figure} \section{Experiments} \paragraph{Random graphs} To validate the method, we first test it on random graphs. The aim is to test two requirements of a successful pattern mining method: \begin{itemize} \item In a fully random graph, there should be no motifs, and we do not expect a motif code to outperform the null model. \item If we insert a small number of instances of a particular pattern into the graph manually, these should be recognized as motifs. \end{itemize} We sample a directed graph with a given number of nodes $n$ and edges $m$, with no self-connections or multiple edges (that is, we sample from the $G(n, m)$ Erd\H{o}s-Renyi model). We then label the links uniformly at random with one of the relations in $0, \ldots, \rc{r}$. To make the dimensions realistic, we base them on those of the MUTAG dataset used in the next section. We then take one randomly chosen pattern, and insert $\bc{k}=75$ instances of the pattern into the graph. We run a search for 100\,000 iterations and collect the 10 motifs with the best log-factor. We then sample two other graphs by the same method: one with $\bc{k}=0$ and one with $\bc{k}=150$. We also test each of the 10 motifs found on these two graphs. The results are shown in Figure~\ref{figure:synthetic}. For $k=0$, as expected, we find no patterns with positive compression. We also ran a full search on this graph to verify that no motifs can be found unless they are explicitly added to the graph. As in \cite{bloem2017large}, we find that the inserted motif is recovered, even at a low frequency, but many other subgraphs that share structural properties with the inserted pattern are also marked as motifs. We can recognize the inserted motif as the one with the highest log-factor, but we see that many of these ``partial motifs'' will be included in the resulting list of patterns with a positive log-factor.
This experiment only tests a single pattern. To see the effect of multiple random patterns, we repeat the experiment many times, sampling both the pattern and the random graph. To sample the pattern, we first sample a random number of nodes $n$ from $U(3, 6)$, the uniform distribution over the integer range $(3, 6)$ (including both endpoints). We then sample a random number of links $m$ from $U(n, n^2 - n)$, and sample a random directed graph from $G(n, m)$. We make $U(0, n)$ nodes and $U(0, m)$ links into variables, choosing constants for the rest uniformly from the data. If the pattern is disconnected, we reject and sample again. We sample a random graph as in the previous experiments, using the dimensions from the three real-world datasets used later. We then add $k$ instances of the motif to the graph and compute the log-factor of the sampled pattern (we do not use simulated annealing here). We let $k$ range from 0 to 200, and repeat the experiment 25 times for each $k$, sampling a new graph and pattern each time. The results are shown in Figure~\ref{figure:synthrep}. We observe first that under this ad-hoc sampling regime, we produce some patterns that create only very few instances in the graph after overlapping instances are pruned. Since it is no surprise that these don't allow significant compression, we plot these as small points so that they don't obscure the other points. We see that most of the other patterns---those that generate enough non-overlapping instances---result in high positive log-factors, allowing them to be retrieved as motifs.
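The ad-hoc sampler described above can be sketched in a few lines of Python (our own illustration; we assume a directed $G(n, m)$ sampler without self-loops or duplicate edges, and omit the relation labeling, variable selection and connectedness check, which follow the text).

```python
import random

def sample_pattern_dims(rng):
    """Sample the dimensions and edge set of a random pattern as
    described above; labeling and variable selection are omitted."""
    n = rng.randint(3, 6)              # n ~ U(3, 6), endpoints included
    m = rng.randint(n, n * n - n)      # m ~ U(n, n^2 - n)
    # all directed edges without self-loops; sampling m of them without
    # replacement yields a uniform draw from G(n, m)
    possible = [(i, j) for i in range(n) for j in range(n) if i != j]
    edges = rng.sample(possible, m)
    return n, edges
```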
\begin{table}[p] \centering \tiny \begin{tabular}{r r p{0.5\linewidth}} \hline log-factor & frequency & \\ \hline \multicolumn{3}{c}{Dogfood, top 5 by log-factor ($>100$ positive)}\\ \hline 361495.0 & 10475 & \makecell{\texttt{?n1 dc:creator ?n2.} \\\texttt{?n1 foaf:maker ?n2.} \\\texttt{?n2 foaf:made ?n1.} } \\ \hdashline 244579.5 & 7674 & \makecell{\texttt{?n1 dc:creator ?n2.} \\\texttt{?n1 foaf:maker ?n2.} \\\texttt{?n1 swrs:author ?n2.} } \\ \hdashline 220360.2 & 12138 & \makecell{\texttt{?n1 foaf:maker ?n2.} \\\texttt{?n2 foaf:made ?n1.} } \\ \hdashline 189627.3 & 9888 & \makecell{\texttt{?n1 foaf:member ?n2.} \\\texttt{?n2 swrs:affiliation ?n1.} } \\ \hdashline 187972.9 & 10475 & \makecell{\texttt{?n1 dc:creator ?n2.} \\\texttt{?n2 foaf:made ?n1.} } \\ \hline \multicolumn{3}{c}{Dogfood, top 3 by frequency}\\ \hline -3076.2 & 134853 & \makecell{\texttt{?n1 rdf:\_1 ?n2.} \\\texttt{?n1 rdf:\_2 ?n4.} \\\texttt{?n1 rdf:\_3 ?n3.} } \\ \hdashline -3435.0 & 116074 & \makecell{\texttt{?n1 swc:heldBy ?n3.} \\\texttt{?n1 swc:heldBy ?n2.} } \\ \hdashline -2379.9 & 110461 & \makecell{\texttt{?n1 rdf:type owl:Thing.} \\\texttt{?n2 rdf:type owl:Thing.} } \\ \hline \multicolumn{3}{c}{AIFB, top 5 by log-factor ($>100$ positive)}\\ \hline 79234.0 & 7549 & \makecell{\texttt{?n1 ?p3 ?n2.} \\\texttt{?n2 ?p4 ?n1.} } \\ \hdashline 61310.4 & 4154 & \makecell{\texttt{?n1 swrs:publication ?n2.} \\\texttt{?n2 ?p3 ?n1.} } \\ \hdashline 57641.1 & 3965 & \makecell{\texttt{?n1 swrs:publication ?n2.} \\\texttt{?n2 swrs:author ?n1.} } \\ \hdashline 57603.1 & 3965 & \makecell{\texttt{?n1 swrs:author ?n2.} \\\texttt{?n2 ?p3 ?n1.} } \\ \hdashline 33168.0 & 7930 & \makecell{\texttt{?n1 swrs:publication ?n2.} \\\texttt{?n2 rdf:type ?n3.} \\\texttt{?n2 swrs:author ?n1.} } \\ \hline \multicolumn{3}{c}{AIFB, top 3 by frequency}\\ \hline -908.2 & 181246 & \makecell{\texttt{?n1 swrs:year ?n3.} \\\texttt{?n2 swrs:year ?n3.} } \\ \hdashline -1524.3 & 173059 & \makecell{\texttt{?n1 swrs:publication
?n3.} \\\texttt{?n1 swrs:publication ?n2.} } \\ \hdashline -1667.9 & 103434 & \makecell{\texttt{?n1 swrs:member ?n2.} \\\texttt{?n3 ?p5 ?n1.} \\\texttt{?n4 swrs:author ?n2.} } \\ \hline \multicolumn{3}{c}{Mutag, top 5 by log-factor ($87$ positive)}\\ \hline 178304.4 & 18634 & \makecell{\texttt{?n1 mtg:\_hasAtom ?n3.} \\\texttt{?n1 mtg:\_hasBond ?n2.} \\\texttt{?n2 mtg:\_inBond ?n3.} } \\ \hdashline 97237.8 & 9189 & \makecell{\texttt{?n1 mtg:\_hasAtom ?n2.} \\\texttt{?n2 mtg:\_charge ?n3.} } \\ \hdashline 93819.3 & 8924 & \makecell{\texttt{?n2 rdf:type ?n3.} \\\texttt{?n2 mtg:\_charge ?n1.} } \\ \hdashline 90447.5 & 18634 & \makecell{\texttt{?n1 mtg:\_hasBond ?n2.} \\\texttt{?n2 mtg:\_inBond ?n4.} \\\texttt{?n2 mtg:\_inBond ?n3.} } \\ \hdashline 79027.5 & 8924 & \makecell{\texttt{?n1 mtg:\_hasAtom ?n2.} \\\texttt{?n2 rdf:type ?n3.} } \\ \hline \multicolumn{3}{c}{Mutag, top 3 by frequency}\\ \hline -2040.6 & 68514 & \makecell{\texttt{?n1 rdfs:subClassOf ?n2.} \\\texttt{?n3 rdf:type owl:Class.} \\\texttt{?n4 rdf:type owl:Class.} \\\texttt{?n4 rdfs:subClassOf ?n2.} } \\ \hdashline -2077.8 & 60832 & \makecell{\texttt{?n1 ?p5 owl:Class.} \\\texttt{?n3 rdfs:subClassOf ?n2.} \\\texttt{?n4 ?p5 owl:Class.} \\\texttt{?n4 rdfs:subClassOf ?n2.} } \\ \hdashline -1532.6 & 32009 & \makecell{\texttt{?n1 mtg:\_cytogen\_sce "true".} \\\texttt{?n1 mtg:\_salmonella ?n3.} \\\texttt{?n2 mtg:\_amesTestPositive ?n3.} } \\ \hline \end{tabular} \vspace{2mm} \caption{Results of the experiment on real-world data. For each experiment we also report the number of motifs found with a positive log-factor.} \label{table:topk} \end{table} \paragraph{Real data} Finally, we will test our method on real data, to confirm that the motifs found coincide with our intuition. 
We test three datasets: The Semantic Web dogfood dataset \cite{moller2007recipes} ($n=7611, m=242256, r=170$) describing researchers and publications in the Semantic Web domain, the AIFB dataset \cite{bloehdorn2007kernel} ($n=8285, m=29226, r=47$) describing the structure of the AIFB institute, and the Mutag RDF dataset\footnotemark~ ($n=23644, m=74567, r=24$), describing a set of carcinogenic and non-carcinogenic molecules both in structure and properties. \footnotetext{Originally distributed as an example dataset with the DL-Learner framework \cite{lehmann2009dl}.} For all datasets, we run 32 parallel searches, with 3125 iterations per search. Table~\ref{table:topk} reports the top 5 motifs by log-factor, and the top 3 motifs by frequency. We provide the top 100 motifs under both criteria online. \footnotemark \footnotetext{\url{https://github.com/MaestroGraph/motive-rdf}} The method provides many positives. To see that these are not just random noise, consider those patterns that have high frequency, but a negative log-factor. For instance, the most frequent pattern in the AIFB data describes two entities having the same ``year'' property. Clearly, such a pattern can be matched often, and in many different ways, but it does not provide a satisfying explanation of the structure of the graph. Much of what the motif code picks up on is redundancy in the original data. For instance, in the AIFB data both the \texttt{swrs:publication} relation and its inverse \texttt{swrs:author} are always included. Extracting these into a motif is a simple way of achieving compression. In fact, the AIFB data contains so many of these relation pairs that the two-node loop with variable labels is the highest scoring motif. In the Dogfood data, we see similar patterns emerge. Table~\ref{table:selected} shows some interesting motifs from the top 100 for each dataset. We see, for instance, that the assertions that something is true or false are both motifs.
While these are single triples with only one variable, they occur often enough that encoding them separately provides a positive compression. The example from the AIFB data shows a typical ``star'' pattern likely to emerge from relational data: a single entity, surrounded by a set of attributes. \section{Discussion} We have presented a new method for mining graph patterns from knowledge graphs. To our knowledge, this is the first method presented that can potentially find arbitrary basic graph patterns to describe the innate structure of a knowledge graph. \begin{table}[tb] \centering\small \begin{tabular}{r r c r} \hline log-factor & freq. & & \\ \hline 220360.2 & 12138 & \makecell{\texttt{?n1 foaf:maker ?n2.} \\\texttt{?n2 foaf:made ?n1.}} & D \\ \hdashline 3157.0 & 1011 & \makecell{\texttt{?n1 ?p2 "false".}} & M \\ \hdashline 3150.2 & 985 & \makecell{\texttt{?n1 ?p2 "true".}} & M \\ \hdashline 12871.8 & 8308 & \makecell{\texttt{?n1 rdf:type ?n2.} \\\texttt{?n1 swrs:year ?n3.} \\\texttt{?n4 swrs:publication ?n1.} }& A \\ \hdashline \end{tabular} \vspace{2mm} \caption{Selected motifs. The frequency is the number of matches found within the set time limit. The last column indicates the dataset (Dogfood, MUTAG and AIFB, respectively).} \label{table:selected} \end{table} \paragraph{Limitations and future work} Currently, the greatest limitation of this method is scalability. We note that this limitation only exists when motifs need to be \emph{found}. To \emph{test} whether a given pattern is a motif, the most expensive step required is simply to find instances of the pattern in the graph (as many as is feasible). However, the search space of all patterns is large and complex, making a search for motifs an expensive task. In \cite{bloem2017large}, the original method on which this method is based was shown to scale to graphs with billions of links.
However, this scalability does not translate directly to knowledge graphs: the random walk sampling used there to generate likely motifs fails in the face of common knowledge graph topologies with many very strong hubs. In such cases, the subgraphs that have a positive log-factor are so unlikely to be sampled that none are ever put to the test. For now, we have resorted to black-box optimization for search. If a faster search algorithm can be designed specifically for this code, the problem of scaling may be overcome. One option is to replace the random walk used in \cite{bloem2017large} by a biased random walk more suited to the topology of knowledge graphs. Our method currently produces a large number of motifs. We can show that worthwhile motifs are included, and that it performs better than a frequency baseline, but it still takes some manual effort to sort through the suggestions to find the kind of motifs that fit a particular use case. This is not surprising; it is the nature of knowledge graphs that many different and overlapping substructures can be seen as natural or meaningful. One promising avenue to reduce this manual effort is to search for a \emph{set} of motifs which together compress well, each motif claiming a certain part of the knowledge graph to represent. \paragraph{Acknowledgements} This publication was supported by the Amsterdam Academic Alliance Data Science (AAA-DS) Program Award to the UvA and VU Universities.
\section{The canonical isomorphism algorithm for patterns} \tiny \subsection{Dogfood, top 100 by log factor} \begin{longtable}{ r r p{10cm} } log factor & frequency & \\ \hline\endhead \input{results/dogfood/motifs-byscore.latex} \hline \end{longtable} \subsection{Dogfood, top 100 by frequency} \begin{longtable}{ r r p{10cm} } log factor & frequency & \\ \hline\endhead \input{results/dogfood/motifs-byfreq.latex} \hline \end{longtable} \subsection{AIFB, top 100 by log factor} \begin{longtable}{ r r p{10cm} } log factor & frequency & \\ \hline\endhead \input{results/aifb/motifs-byscore.latex} \hline \end{longtable} \subsection{AIFB, top 100 by frequency} \begin{longtable}{ r r p{10cm} } log factor & frequency & \\ \hline\endhead \input{results/aifb/motifs-byfreq.latex} \hline \end{longtable} \subsection{Mutag, top 100 by log factor} \begin{longtable}{ r r p{10cm} } log factor & frequency & \\ \hline\endhead \input{results/mutag/motifs-byscore.latex} \hline \end{longtable} \subsection{Mutag, top 100 by frequency} \begin{longtable}{ r r p{10cm} } log factor & frequency & \\ \hline\endhead \input{results/mutag/motifs-byfreq.latex} \hline \end{longtable} \end{document}
1,314,259,995,562
arxiv
\section{Introduction} The origin of dust on cosmic scales is of great astrophysical interest. The cosmic importance of dust is illustrated by the fact that about $50\%$ of starlight is reprocessed by dust \citep{2006Dole}. Thus, dust dominates how we see galaxies. Dust is very important for the thermal balance of the interstellar medium (ISM) of galaxies and accordingly for star formation \citep[][and references therein]{2016McKinnon}. Large amounts of dust have been observed in high redshift galaxies \citep[$z\gtrapprox6$, e.g. ][]{2017Riechers, 2017Laporte}, where low- and intermediate-mass stars did not have time to evolve and thus to inject dust into the ISM. It is widely accepted now that dust must arise from rapidly evolving (1--10 Myr) massive stars, particularly in the ejecta of core-collapse supernovae (CC-SNe) \citep{2006Sugerman}. In fact, large amounts of dust ($0.1-1\,M_{\odot}$) were postulated by \citet{2001TF} and \citet{2003Nozawa}, under the assumption of favourable conditions in the SN ejecta. Nevertheless, for nearby CC-SNe, the dust masses inferred from mid-infrared observations are in general of the order of $10^{-4}-10^{-2}\,M_{\odot}$ \citep[e.g.][]{2009Kotak,2011Fox}. Dust masses of the order of $10^{-1}\,M_{\odot}$ were reported from sub-millimetre observations of SN1987A \citep[e.g.][]{2014Indebetouw, 2015Matsuura}, suggesting that perhaps cold dust emission is not captured by mid-infrared observations. However, such estimates are difficult to compare because they are often based on different observational constraints and assumptions. It is also expected that only a fraction of the newly formed dust reaches the ISM, as it will be heavily processed and destroyed by the reverse shock \citep[see][]{2018Micelotta, 2018Gall}. 
Dust can also grow in dense ISM from existing dust grains \citep[e.g.][]{2009Draine}, although this is still not sufficient, together with the contribution from SNe, to explain dust yields in galaxies \citep[e.g.][]{2019Tamura}. The topic is also controversial with respect to the nearby Magellanic Clouds (MCs). The MCs are two irregular dwarf galaxies with metallicities 0.5 and 0.25 times Solar for the Large Magellanic Cloud (LMC) and the Small Magellanic Cloud (SMC), respectively. Their relative proximity to the Milky Way (MW), accurate distance measurements and low line-of-sight extinction make them an ideal laboratory, resembling the physical conditions of galaxies at the peak of star formation. In the past decades, several efforts were made to estimate dust yields in these galaxies and determine the primary dust sources at low metallicities. With the advent of \emph{Herschel} this became possible. Particularly the HERITAGE \emph{Herschel} key project \citep{2013Meixner}, observing in five bands from 100 to 500$\,\rm\mu m$, offered a plethora of data that advanced our understanding of the dust budget problem. Recent estimates report integrated dust masses of $(7.3\pm1.7)\times10^{5}$ and $(8.3\pm2.1)\times10^{4}\,M_{\odot}$ for the LMC and SMC, respectively \citep{2014Gordon}, although later \citet{2017Chastenet} derived dust masses about a factor of two smaller. Several teams tried to identify the main sources of dust in such galaxies. \citet{2009Srinivasan}, \citet{2009Matsuura}, \citet{2012Riebel}, \citet{2012Boyer} and \citet{2016Srinivasan} studied the dust production rate from low- and intermediate-mass stars in the MCs using mid-infrared observations. \citet{2015Jones} identified some tens of new evolved stellar objects from HERITAGE. 
These works generally agree that the global dust production from the numerous population of asymptotic giant branch (AGB) and red supergiant (RSG) stars cannot account for the ISM dust reported from far-infrared and sub-millimetre observations. Another result is that the majority of dust input from low- and intermediate-mass stellar sources comes from ``extreme'' AGB stars \citep{2009Srinivasan, 2012Riebel, 2012Boyer}. \emph{Spitzer} observations from the SAGE legacy survey \citep{2006Meixner} were used by \citet{2009Bonanos,2010Bonanos} to investigate the infrared properties of massive stars. They found an excess of emission at 24$\,\rm \mu m$ due to dust from a handful of luminous blue variable (LBV) stars. Some of these, and other LBVs with known extended nebulae detected with {\it{HST\/}} \citep{2003Weis}, were then followed up and large masses of dust were measured \citep{2014Niyogi,2017AgliozzoA, 2017AgliozzoB, 2019Agliozzo}. In the classical scenario, LBVs are transitional objects from the main sequence of O, B stars to the Wolf-Rayet (WR) phase \citep{1984Conti}. This represents a short phase of evolution of the most massive ($M\geq25\,M_{\odot}$) stars, ranging in duration from $10^{4}$ to $10^{5}\,\rm yr$ \citep[e.g.][]{1989Maeder, 2007Massey, 2014Smith}. Observationally, LBVs are blue supergiants (BSGs) that experience instabilities, observed in the form of spectroscopic and photometric variabilities, usually in the optical and near-infrared. These instabilities are poorly understood but, on the basis of their observational properties, are phenomenologically divided into three main groups: giant eruptions, S~Doradus-type variability, and micro-variabilities \citep{1994HD,vanGenderen2001}. Although the physical mechanisms underlying these instabilities are possibly different \citep[e.g.
the vicinity to the Eddington limit, sub-photospheric instabilities, bistability jump, envelope inflation, wind-envelope interaction, fast rotation, binarity, stellar merger;][]{1994HD,1989Gallagher,1999Vink,2009Groh,2012A&A...538A..40G,2016Portegies,2017Owocki, 2021Grassitelli}, observations suggest increases in the mass-loss rates by about a factor of three in S~Doradus-type variability (e.g. \citealt{2002Vink}) or by several orders of magnitude in giant eruptions, like that of $\eta$~Carinae (e.g. \citealt{1994HD}). During the quiescent state, LBV mass-loss rates range between $10^{-7}$ and $10^{-5}\,M_{\odot}\,\rm yr^{-1}$, typical of BSGs as explained by the radiatively-driven stellar-wind theory \citep{1975CAK, 2019A&A...632A.126S}. Particularly enigmatic are the rare giant eruptions, initially often mistaken for CC-SNe, because of their brightness, and subsequently named SN impostors \citep[e.g.][]{2012ApJ...746..179V, 2013Mauerhan, 2016ApJ...823L..23T, 2017Pastorello, 2018NElias, 2019Reguitti}. Famous examples in our Galaxy are $\eta$~Car and P~Cygni. Observationally, mass-loss rates exceeding $10^{-3}-10^{-2}\,M_{\odot}\,\rm yr^{-1}$ are estimated \citep[e.g.][]{2014Smith}. In the case of $\eta$ Car, more than $40 \,M_{\odot}$ \citep{2017Morris} were released during the Great Eruption in the XIX~century, which lasted about 20 years, forming the Homunculus nebula. In eruptions of SN impostors, nebular masses of the order of $0.1\,M_{\odot}$ are observed \citep[e.g.][]{2014Margutti}. Several observational and theoretical works suggest that some LBVs could be the immediate progenitors of SNe \citep[e.g. ][]{2006Kotak,2008Trundle,2009Natur.458..865G,2007ApJ...666.1116S,2008ApJ...686..467S,2013A&A...550L...7G,groh14b,boian18}. Luminous blue variable stars are often surrounded by dusty circumstellar nebulae. 
In the Milky Way, typical dust masses ranging between $10^{-3}$ and $10^{-1}\,M_{\odot}$ are reported \citep{1997Hutsemekers,2012Umana,2013Vamvatira,2015Vamvatira,2014Agliozzo,2014Lau,2018Arneson}. Furthermore, several tens of candidate LBVs were identified on the basis of the detection of infrared circumstellar nebulae \citep{2002Egan, 2003Clark, 2010Gvaramadze} and of an infrared excess usually peaking between 24 and $70~\mu$m. The central star, which is a BSG during quiescence, is very hot, and one does not expect dust condensation in its wind. However, the physical conditions in the optically thick and cool ``pseudo-photospheres'' formed during eruptions are favourable for dust condensation and growth \citep{2011Kochanek}. \citeauthor{2011Kochanek} showed analytically that the required temperature and particle density for dust formation are met for mass-loss rates larger than $10^{-2.5}\,M_{\odot}\,\rm yr^{-1}$, which are typical of giant eruptions. Grain growth and dust evolution may be observed in S~Doradus variables (which have mass-loss rates that are a couple of orders of magnitude lower), as suggested by the disappearance of the silicate bump around $10-13\,\rm \mu m$ during the most recent S~Dor outburst of the Magellanic LBV RMC~71 \citep{2017Mehner}. Another way to form dust around a massive star is in the colliding winds of close binaries, as in some Wolf-Rayet stars. In these systems dust seems to form episodically, near the periastron passage, or persistently, creating pinwheel-like nebulae \citep[e.g.][]{2007Crowther,2020Lau}. This mechanism of dust formation is presumably at work in the Galactic LBV binary HR~Car \citep{2016Boffin}, for which the clumpy and dusty inner nebula has been modelled as an Archimedean spiral centred on the binary system \citep[alternative to the expanding bipolar lobe model, see][ and references therein]{2017Buemi}.
Episodic dust formation also potentially occurs in the shocked regions of $\eta$~Car's binary colliding winds \citep{2010SmithN}. \begin{figure*} \centering \includegraphics[width=1\linewidth]{wholeLMC-ID} \caption{Cutout of the SAGE MIPS24 image \citep{2006Meixner} of the LMC and positions of the candidate and confirmed LBVs analysed in this work. The numbers are the IDs in Table~\ref{tab:sample}, and the red circles have a radius of 30\arcsec. North is towards the top right. } \label{fig:wholeLMC} \end{figure*} \begin{figure} \centering \includegraphics[trim={1cm 1cm 1cm 1cm},clip,width=1\linewidth]{wholeSMC-ID} \caption{Cutout of the SAGE MIPS24 image \citep{2006Meixner} of the SMC and positions of the candidate and confirmed LBVs analysed in this work. The numbers are the IDs in Table~\ref{tab:sample}, and the red circles have a radius of 30\arcsec. North is up and east is left. } \label{fig:wholeSMC} \end{figure} In the MCs only up to one third of the LBV sample appears in studies of the primary dust sources (see above), and the dust masses from these objects remain unconstrained. Here we aim to expand these studies to the full population of LBVs in the MCs. Figs.~\ref{fig:wholeLMC} and \ref{fig:wholeSMC} show the locations of the candidate and confirmed LBVs in the LMC and SMC that are analysed in this work. We have built a maximally complete multi-wavelength dataset of 31 LBVs mostly from archival space telescope images and catalogues, but also have included ground-based mid-infrared observations of three sources. In addition, we derived new photometric values for sources missing in catalogues or with low-accuracy photometry (Sects.~\ref{sec:dataset} and \ref{sec:photometry}). The goals of this work are: \begin{enumerate} \item Review the status of the MC LBV sample by means of their infrared properties, combined with literature information (Sect.\ \ref{sec:review}). 
\item Estimate the dust masses of LBVs in the MCs by modelling individual sources (Sect.~\ref{sec:modelling:grey}) and modelling photometry from stacked images of the LBVs in the LMC (Sect.~\ref{sec:stacking}). \item Evaluate the importance of LBVs in producing dust by comparing the derived dust mass with that inferred from the stacking of lower-mass stars and with literature values of ISM dust yields (Sects.\ \ref{sec:stackingAGBs} and \ref{sec:discussion}). \end{enumerate} \section{The multiwavelength dataset} \label{sec:dataset} \begin{table*} \caption{Sources, celestial coordinates \citep{2018yCat.1345....0G}, LBV status (previous: \citealt{2018RNAAS...2c.121R}; new: this work), and SED-based classification proposed in this work. } \label{tab:sample} \centering \begin{tabular}{rllccccc} \hline\hline ID & Name 1 & Name 2 & RA (ICRS, J2000) & Dec (ICRS, J2000) & \multicolumn{2}{c}{LBV status} & SED \\ & & & & & Previous&New & Class\tablefootmark{a} \\ \hline 1 & RMC$\,$143 & CPD$\,-69\, 463$ & 05 38 51.617 & $-69$ $08$ $07.31$ & LBV & LBV & 1 \\ 2 & RMC$\,$127 & HD$\,$269858 & 05 36 43.694 & $-69$ $29$ $47.45$ & LBV & LBV & 1 \\ 3 & S$\,$Doradus & HD$\,$35343 & 05 18 14.357 & $-69$ $15$ $01.15$ & LBV & LBV & 1 \\ 4 & RMC$\,$110 & HD$\,$269662 & 05 30 51.476 & $-69$ $02$ $58.59$ & LBV & LBV & 1 \\ 5 & HD$\,$269582 & LHA$\,$120-S$\,$83 & 05 27 52.662 & $-68$ $59$ $08.49$ & LBV & LBV & 2 \\ 6 & RMC$\,$116 & HD$\,$269700 & 05 31 52.282 & $-68$ $32$ $38.86$ & LBV & LBV & 2 \\ 7 & HD$\,$269216 & LHA$\,$120-S$\,$88 & 05 13 30.781 & $-69$ $32$ $23.65$ & LBV & LBV & 1 \\ 8 & RMC$\,$71 & HD$\,$269006 & 05 02 07.394 & $-71$ $20$ $13.12$ & LBV & LBV & 1 \\ 9 & RMC$\,$85 & HD$\,$269321 & 05 17 56.074 & $-69$ $16$ $03.81$ & LBV & LBV & --- \\ 10 & RMC$\,$123 & HD$\,$37836 & 05 35 16.633 & $-69$ $40$ $38.44$ & cLBV& cLBV & 2 \\ 11 & RMC$\,$99 & HD$\,$269445 & 05 22 59.785 & $-68$ $01$ $46.63$ & cLBV& cLBV & 2 \\ 12 & Sk$\,-69\, 279$ & --- & 05 41 44.656 & $-69$ $35$ $14.90$ &
cLBV& cLBV & 1 \\ 13 & LHA$\,$120-S$\,$119 & HD$\,$269687 & 05 31 25.525 & $-69$ $05$ $38.56$ & cLBV& cLBV & 1 \\ 14 & LHA$\,$120-S$\,$61 & AL$\,$418 & 05 45 51.939 & $-67$ $14$ $25.94$ & cLBV& cLBV & 1 \\ 15 & RMC$\,$74 & HD$\,$268939 & 05 04 14.909 & $-67$ $15$ $05.25$ & cLBV& cLBV & 2 \\ 16 & RMC$\,$78 & HD$\,$269050 & 05 07 20.422 & $-68$ $32$ $08.57$ & cLBV& cLBV & 1 \\ 17 & RMC$\,$81 & HD$\,$269128 & 05 10 22.789 & $-68$ $46$ $23.82$ & cLBV& cLBV & 1 \\ 18 & LHA$\,$120-S$\,$18 & Sk$\,-68\,42$ & 05 05 53.981 & $-68$ $10$ $50.54$ & cLBV& cLBV & 1 \\ 19 & RMC$\,$66 & HD$\,$268835 & 04 56 47.080 & $-69$ $50$ $24.77$ & cLBV&sgB[e]\tablefootmark{b} & 3 \\ 20 & RMC$\,$84 & HD$\,$269227 & 05 13 54.280 & $-69$ $31$ $46.66$ & cLBV&sgB[e]\tablefootmark{c} & 3 \\ 21 & HD$\,$34664 & Sk$\,-67\,64$ & 05 13 52.994 & $-67$ $26$ $54.82$ & cLBV&sgB[e]\tablefootmark{b} & 3 \\ 22 & HD$\,$38489 & Sk$\,-69\,259$ & 05 40 13.321 & $-69$ $22$ $46.49$ & cLBV&sgB[e]\tablefootmark{b} & 3 \\ 23 & RMC$\,$126 & HD$\,$37974 & 05 36 25.854 & $-69$ $22$ $55.79$ & cLBV&sgB[e]\tablefootmark{b} & 3 \\ 24 & Sk$\,-69\,271$ & CPD$\,-69\,500$ & 05 41 20.126 & $-69$ $36$ $22.89$ & cLBV&YSG\tablefootmark{d} & 4 \\ 25 & RMC$\,$128 & HD$\,$269859 & 05 36 47.188 & $-69$ $29$ $52.09$ & cLBV&B2Ia\tablefootmark{e} & 4 \\ 26 & RMC$\,$149 & Sk$\,-69\,257$ & 05 39 58.745 & $-69$ $44$ $04.07$ & cLBV&O8.5 II((f))\tablefootmark{f} & 4 \\ 27 & HD$\,$269604 & Sk$\,-68\,93$ & 05 28 31.367 & $-68$ $53$ $55.75$ & cLBV&A1 Ia-0\tablefootmark{g} & 4 \\ \hline 28 & RMC$\,$14 & HD$\,$5980 & 00 59 26.585 & $-72$ $09$ $53.93$ & LBV & LBV & 2 \\ 29 & RMC$\,$40 & HD$\,$6884 & 01 07 18.218 & $-72$ $28$ $03.66$ & LBV& LBV & 1 \\ 30 & RMC$\,$4 & LHA$\,$115-S$\,$6 & 00 46 55.030 & $-73$ $08$ $34.14$ & cLBV& cLBV & 3 \\ 31 & LHA$\,$115-S$\,$18 & --- & 00 54 09.542 & $-72$ $41$ $43.29$ & cLBV& cLBV & 3 \\ \hline \end{tabular} \tablefoot{The sample includes confirmed LBV and candidate LBV (cLBV) stars in the MCs. 
\tablefoottext{a}{Class~1: sources with a warm/cool dusty nebula peaking in the mid- to far-infrared. Class~2: sources displaying from 1 to $\sim24\,\rm \mu m$ only free-free emission from the stellar wind in addition to the stellar photosphere. Class~3: sources with an SED dominated at all wavelengths $\gtrsim 2\,\rm \mu m$ by a hot dusty component. Class~4: featureless sources only displaying the stellar photosphere.} \tablefoottext{b}{\citet{1986Zickgraf}.}\tablefoottext{c}{\citet{1984Stahl}.}\tablefoottext{d}{\citet{2012Neugent}.} \tablefoottext{e}{\citet{1991Fitzpatrick}.} \tablefoottext{f}{\citet{2009Farigna}.} \tablefoottext{g}{\citet{1973Osmer}.} } \end{table*} The sample was extracted from the most recent census of LBVs in the Local Group \citep{2018RNAAS...2c.121R}, which contains 31 objects in the MCs. Table~\ref{tab:sample} includes the names and celestial coordinates of all the stars included in this work. More information on individual sources is provided in Sect.~\ref{sec:review}. \subsection{VISIR observations} \begin{figure*} \centering \includegraphics[width=16cm]{r71-visir} \medskip \includegraphics[width=16cm]{calib-visir} \caption{VISIR images of RMC$\,$71 (top panels), and the standard calibrators observed on the same nights (bottom panels, with identifications given in the lower right corners). North is up and east is left. The colour-scale is in arbitrary units. } \label{visirR71} \end{figure*} \begin{table*} \caption{Journal of VISIR observations.
} \label{tab:obsVISIR} \centering \begin{tabular}{llccccc} \hline\hline Source & Filter & UT start time & Airmass & IQ at $\lambda$ & Exposure & {$F_\nu$} \\ & & & & (arcsec) & (s) & (mJy) \\ \hline\\ RMC~71 & J8.9 & 2017-10-07 07:47 & 1.47 & N/A & 90 & $136\pm19$\\ & J8.9 & 2017-10-11 07:30 & 1.47 & 0.42 & 90 & $122\pm12$\\ & PAH2\_2 & 2016-08-14 08:57 & 1.63 & 0.35 & 1892 & $408\pm 41$ \\ & PAH2\_2 & 2016-08-18 08:31 & 1.65 & 0.36 & 1892 & $482\pm 48$ \\ & PAH2\_2 & 2016-09-05 08:11 & 1.57 & 0.35 & 1892 & $408\pm 41$ \\ & Q1 & 2016-08-17 08:58 & 1.64 & 0.46 & 588 & $3261\pm326$ \\ & Q1 & 2017-10-07 07:56 & 1.46 & 0.46 & 1980 & $3207\pm321$ \\ & Q1 & 2017-10-11 07:35 & 1.46 & 0.47 & 1980 & $3490\pm349$\\ \\ HD~34664 & J8.9 & 2017-10-08 07:49 & 1.38 & 0.29 & 90 & $1051\pm105$\\ & PAH2$\_$2 & 2016-09-05 08:54 & 1.43 & 0.38 & 1260 & $944\pm94$\\ & PAH2$\_$2 & 2016-09-08 09:02 & 1.40 & 0.38 & 1260 & $903\pm90$\\ & Q1 & 2017-10-08 07:55 & 1.37 & 0.46 & 1980 & $580\pm58$\\ \\ HD~38489 & J8.9 & 2017-10-27 07:33 & 1.41 & N/A & 90 & $905\pm129$ \\ & PAH2$\_$2 & 2016-09-22 09:05 & 1.42 & 0.32 & 990 & $781\pm78$\\ & Q1 & 2017-10-27 07:54 & 1.41 & 0.46 & 1980 & $561\pm56$\\ \hline \end{tabular} \tablefoot{ IQ at $\lambda$ gives the approximate image quality at the observing wavelength at the zenith as measured from the flux standard stars observed before or after the science targets. Filter centre wavelengths are J8.9: $8.7\,\mu{\rm m}$, PAH2$\_$2: $11.88\,\mu{\rm m}$, Q1: $17.65\,\mu{\rm m}$. } \end{table*} RMC$\,$71, HD$\,$38489, and HD$\,$34664 were observed in 2016 and 2017 (ESO program IDs: 097.D-0612(A,B) and 0100.D-0469(A,B)) with the upgraded VLT Imager and Spectrometer for the mid-InfraRed (VISIR; \citealt{2004Lagage,2015Kaufl,2016Kerber}) on the Very Large Telescope \citep[VLT, ][]{1991Enard}. Some of the data of RMC$\,$71 from program 097.D-0612 were previously published in \citet{2017Mehner}.
The images were taken in standard imaging mode through filters J8.9 ($8.7 \pm0.37\,\mu$m), PAH2\_2 ($11.88 \pm0.19\,\mu$m) and Q1 ($17.65 \pm0.42\,\mu$m) with perpendicular nodding and a chop/nod throw of $20\arcsec$ in 097.D-0612 and $10\arcsec$ in 0100.D-0469, respectively. The individual observations are listed in Table~\ref{tab:obsVISIR}. J8.9 was used for target acquisition only (hence the short exposure times). For flux calibration and point spread function (PSF) reference, the science observations were preceded or followed by the observation of mid-infrared standard stars \citep{1999Cohen}, which provide a systematic flux uncertainty of $\le10\%$. The data were reduced with the custom Python tool VISIR and ISAAC Pipeline Environment\footnote{\url{https://github.com/danielasmus/vipe}} (VIPE). Fluxes and source sizes were measured with Gaussian-fitting photometry performed as part of the VIPE reduction. Both epochs of the PAH2\_2 observations of HD$\,$34664 were affected by a chopping-induced PSF instability at the VLT, leading to an artificial elongation of the source. However, in the J8.9 and Q1 filters, the object is consistent with being unresolved (FWHM $\le 0.4\arcsec$ in J8.9 and $\le0.5\arcsec$ in Q1). The PAH2\_2 observation of HD$\,$38489 suffers from the same PSF-instability problem while, during the Q1 observation, the source was offset such that it was on the detector in only half of the chopping positions. The latter fact has been taken into account during the data reduction. Like HD$\,$34664, HD$\,$38489 is not obviously resolved, with upper limits in FWHM of $\le 0.37\arcsec$ in J8.9 and $\le0.5\arcsec$ in Q1, respectively. RMC$\,$71 appears resolved in all images, with the following FWHM dimensions: $\sim 0.46\arcsec \times 0.42\arcsec$ in J8.9 (PA$\sim 126$\,deg), $\sim 0.96\arcsec \times 0.83\arcsec$ in PAH2\_2 (PA$\sim110$\,deg), and $\sim 0.87\arcsec$ round in Q1. 
These are to be regarded as upper limits because none of the observations was made under diffraction-limited conditions (also because of the low declination of the object). Thus the differences between the two axes in each filter are considered insignificant. However, the extended nature of this source is robust, because of the consistent appearance at both epochs, including the position angle, and the significantly larger FWHM than found for the calibrator stars. The circumstellar environment of RMC$\,$71 is consistent with a broad circular shell and an asymmetric core, which is offset from the centre by $\sim0.2\arcsec$ toward the north-west (Fig.~\ref{visirR71}). \subsection{Archival data} \label{sec:archive} We consulted the infrared catalogues with the CDS VizieR \citep{2000Ochsenbein} and IRSA Gator\footnote{ \url{https://irsa.ipac.caltech.edu/applications/Gator/}} tools and retrieved photometry from the 2MASS Point Source Catalogue \citep{2003Cutri}, the \textsl{Spitzer} SAGE legacy survey \citep[Data Release 3/final]{2006Meixner}, the ``Optically bright post-AGB population of LMC'' \citep{2011vanAarle} and ``\textsl{Spitzer} Atlas of Stellar Spectra'' \citep{2010Ardila} catalogues, the \textsl{AKARI} IRC all-sky survey and LMC point source catalogues \citep[][]{2010aIshihara,2012Kato}, \textsl{WISE} surveys \citep{2012Cutri}, the MSX Point Source Catalogue \citep{2003Egan}, and the \textsl{Herschel} HERITAGE survey \citep{2013Meixner}. We evaluated all stars for possible contamination by neighbouring sources by visually inspecting the infrared images when available (for each source, these are shown in Figs.~\ref{fig:group1}, \ref{fig:group1b}, \ref{fig:group2}, \ref{fig:group3}, \ref{fig:group4} of Appendix \ref{appendix:imagesSEDs}). We checked for ALMA observations of the sample. ALMA Band~7 (343 GHz) observations of RMC\,127, RMC\,143 and LHA$\,$120-S$\,$61 were previously reported in \citet{2017AgliozzoA,2017AgliozzoB,2019Agliozzo}.
Unpublished ALMA/ACA data of a mosaic region containing RMC\,14 are available in the archive, in Bands 3 ($\sim\!93\,\rm GHz$; project ID: 2018.A.00049.S) and 6 ($\sim\!220\,\rm GHz$; project ID: 2017.A.00054.S). RMC\,14 is not detected, with $3~\sigma$ upper limits of $1.41\,\rm mJy$ at $93\,\rm GHz$ and $6.0\,\rm mJy$ at $220\,\rm GHz$. We also made use of the SPT+\emph{Planck} maps of the MCs at 1.4~mm \citep{2016PlanckMCs} for the stacking analysis in Sect.~\ref{sec:stacking}. None of the sources is detected in these maps obtained with the large beam of 1.5\arcmin. Finally, we used the available Infrared Spectrograph \citep[IRS;][]{2004Houck} spectra from the Combined Atlas of Sources with \emph{Spitzer} IRS Spectra \citep[CASSIS;][]{2011CASSIS-LR,2015CASSIS-HR} or from the IRSA data archive (as SSC Enhanced Products). RMC$\,$127, LHA$\,$120-S$\,$61, and RMC~84 were observed under Program Name/Id: STELLARATLAS/485 and published by \citet{2010Ardila}. The RMC$\,$66, HD$\,$38489, HD$\,$34664, and LHA$\,$115-S$\,$18 spectra were published in \citet{2010Kastner}, although we found that the spectrum of LHA$\,$115-S$\,$18 is not on source. The spectra of RMC$\,$14 and RMC$\,$40 were discussed in \citet{2015Ruffle}, and those of RMC$\,$110 and S~Dor in \citet{2017Jones}. Data of RMC$\,$99 and Sk$\,-69\,279$ are available in the archive: we are not aware of any previous publications. The IRS spectra are shown with grey continuous lines in the flux density distribution plots in Appendix \ref{appendix:imagesSEDs}. RMC$\,$71 and RMC$\,$66 also have \emph{Infrared Space Observatory (ISO)} data\footnote{\url{https://irsa.ipac.caltech.edu/data/SWS/} or \url{https://www.cosmos.esa.int/web/iso/access-the-archive}} and low resolution Multiband Imaging Photometer for \emph{Spitzer} \citep[MIPS;][]{2004Rieke} SED-mode spectra \citep[presented in][which also include RMC~126]{2010Kemper,2010vanLoon} but they are not considered in this work.
The IRS spectra are mainly used to guide the interpretation of the infrared photometry when no measurement at 24~$\rm \mu$m is possible due to confusion. \section{Data analysis} \label{sec:photometry} \subsection{Photometry from {\it Spitzer} MIPS24/MIPS70} \begin{table*} \caption{MIPS-24 aperture photometry.} \label{tab:photMIPS24} \centering \begin{tabular}{llcccccl} \hline\hline ID &Source & Ap. radius & Sky annulus & Ap. corr. & Measured $F_\nu$ & Catalogue $F_\nu$ & Comment \\ & & (\arcsec) & (\arcsec) & & (mJy) & (mJy) & \\ \hline 1&RMC$\,$143 & --- & --- & --- & $\mathbf{482.2\pm 96.4}$ & --- & from \citet{2019Agliozzo} \\ 2&RMC$\,$127 & 20 & 20--32 & 1.15 & $1255\pm10$ & $1180\pm60$ & \\ 3&S$\,$Doradus & 20 & 20--32 & 1.15 & $848\pm90$ & $898\pm22$ & \\ 4&RMC$\,$110 & 7 & 7--13 & 2.05 & $\mathbf{10\pm2}$ & $7.39\pm0.14$ & small aperture due to \\ & & & & & && confusion\\ 5&HD$\,$269582 & 7 & 7--13 & 2.05 & $7.7\pm0.4$ & $7.783\pm0.225$ & \\ 6&RMC$\,$116 & 7 & 7--13 & 2.05 & $1.4\pm0.8$ & $1.293\pm0.155$ & marginally detected \\ 7&HD$\,$269216 & 20 & 20--32 & 1.15 & $\mathbf{21\pm5}$ & $6.952\pm0.134$ & marginally resolved \\ 8&RMC$\,$71 & --- & --- & --- & --- & $>4100\pm410$ & saturated \\ 9&RMC$\,$85 & --- & --- & --- & --- & --- & not detected/confusion \\ 10&RMC$\,$123 & 7 & 7--13 &2.05 & $15.0\pm1.2$ & $16.13\pm0.28$ & small aperture due to \\ & & & & & && confusion\\ 11&RMC$\,$99 & --- & --- & --- & --- & $49.4\pm0.5$ & not detected/confusion \\ 12&Sk$\,-69\,279$ & 20 & 20--32 & 1.15 & $\mathbf{22\pm5}$ & --- & bubble \\ 13&LHA$\,$120-S$\,119$ & 35 & 40--50 & 1.08 & $\mathbf{703\pm23}$ & $537.1\pm5.3$ & extended \\ 14&LHA$\,$120-S$\,$61& 20 & 20--32 & 1.15 & $1474\pm10$ & $1470\pm74$ & extended \\ 15&RMC$\,$74 & 7 & 7--13 & 2.05 & $3.5\pm0.3$ & $3.074\pm0.145$ & \\ 16&RMC$\,$78 & 20 & 20--32 & 1.15 & $\mathbf{59\pm19}$ & --- & double source \\ 17&RMC$\,$81 & 35 & 40--50 & 1.08 & $\mathbf{108\pm29}$ & --- & extended \\ 18&LHA$\,$120-S$\,$18 & 7 & 7--13
& 2.05 & $\mathbf{11.5\pm1.5}$ & $7.882\pm0.101$ & smaller aperture \\ & & & & & && due to confusion,\\ & & & & & && lower limit?\\ 19&RMC$\,$66 & 20 & 20--32 & 1.15 & $856\pm8$ & $813\pm6$ & \\ 20&RMC$\,$84 & 20 & 20--32 & 1.15 & $104\pm4$ & $99.3\pm0.9$ & \\ 21&HD$\,$34664 & 7 & 7--13 & 2.05 & $389\pm19$ & $411\pm3$ & point source in the bright\\ & & & & & && ring of another star \\ 22&HD$\,$38489 & 20 & 20--32 & 1.15 & $490\pm42$ & $498\pm4$ & \\ 23 & RMC$\,$126 & 20 &20--32 & 1.15 & $1098\pm15$& $1120\pm10$ & \\ 24 & Sk$\,-69\,271$ & 20 &20--32 & 1.15 & $<23$& --- & \\ 25&RMC$\,$128 & --- & --- & --- & --- & --- & not detected \\ & & & & & & & in the vicinity of LBV [2] \\ 26&RMC$\,$149 & --- & --- & --- & --- & --- & confusion \\ 27&HD$\,$269604 & 7 & 7--13 & 2.05 & $\mathbf{0.6\pm0.2}$ & $0.737$ & \\ \hline 28&RMC$\,$14 & --- & --- & --- & --- & $1.634\pm0.131$ & \\ 29&RMC$\,$40 & 20 & 20--32 & 1.15 & $29.4\pm1$ & $27.0\pm0.4$ & \\ 30&RMC$\,$4 & 20 & 20--32 & 1.15 & $\mathbf{67.3\pm8.8}$ & $45.4\pm0.5$ & \\ 31&LHA$\,$115-S$\,$18 & 20 & 20--32 & 1.15 & $82.1\pm2.6$ & $87.0\pm0.6$ & \\ \hline \end{tabular} \tablefoot{ The numbers highlighted in bold are new measurements with no counterpart in the catalogue or with revised photometry, which are adopted in the analysis. } \end{table*} \begin{table*} \caption{MIPS-70 aperture photometry and 3$\sigma$ upper limits.} \label{tab:photMIPS70} \centering \begin{tabular}{llcccccl} \hline\hline ID &Source & Aperture radius & Sky annulus & Ap. corr.
& Measured $F_\nu$ & Catalogue $F_\nu$ & Comment \\ & & (\arcsec) & (\arcsec) & & (mJy) & (mJy) & \\ \hline 1&RMC$\,$143 & 16 & 18--39 & 2.07 & $<14551$ & --- & confusion \\ 2&RMC$\,$127 & 16 & 18--39 & 2.07 & $1021.3\pm54.4$ & $987\pm27$ & \\ 3&S$\,$Doradus & 16 & 18--39 & 2.07 & $<2090$ & --- & confusion \\ 4&RMC$\,$110 & 16 & 18--39 & 2.07 & $<1243$ & --- & confusion \\ 5&HD$\,$269582 & 16 & 18--39 & 2.07 & $<155$ & --- & confusion\\ 6&RMC$\,$116 & 16 & 18--39 & 2.07 & $<663$ & --- & confusion \\ 7&HD$\,$269216 & 16 & 18--39 & 2.07 & $\mathbf{128\pm71}$ & --- & detected \\ & & & & & && but confused\\ 8&RMC$\,$71 & 16 & 18--39 & 2.07 & $2029.4\pm49.6$ & $1858\pm24$ & \\ 9&RMC$\,$85 & 16 & 18--39 & 2.07 & $<3864$ & --- & confusion \\ 10&RMC$\,$123 & 16 & 18--39 & 2.07 & $<477$ & --- & confusion \\ 11&RMC$\,$99 & 16 & 18--39 & 2.07 & $<867$ & --- & confusion \\ 12&Sk$\,-69\,279$ & 16 & 18--39 & 2.07 & $<214$ & --- & confusion \\ 13&LHA$\,$120-S$\,119$ & 16 & 18--39 & 2.07 & $447.2\pm51.4$ & $428.3\pm15.1$ & marginally \\ & & & & & && resolved\\ 14&LHA$\,$120-S$\,$61 & 16 & 18--39 & 2.07 & $635.2\pm22.1$ & $579.3\pm8.8$ & \\ 15&RMC$\,$74 & 16 & 18--39 & 2.07 & $<58$ & --- & noise \\ 16&RMC$\,$78 & 16 & 18--39 & 2.07 & $<350$ & --- & confusion\\ 17&RMC$\,$81 & 16 & 18--39 & 2.07 & $\mathbf{150\pm56}$ & --- & possible detection \\ & & & & & && but confused\\ 18&LHA$\,$120-S$\,$18 & 16 & 18--39 & 2.07 & $<330$ & --- & confusion \\ 19&RMC$\,$66 & 35 & 39--65 & 1.24 & $531.5\pm56.2$ & $528\pm9$ & \\ 20&RMC$\,$84 & 16 & 18--39 & 2.07 & $\mathbf{104\pm29}$ & --- & possible detection \\ & & & & & && but confused\\ 21&HD$\,$34664 & 16 & 18--39 & 2.07 & $<4484$ & --- & contaminating \\ & & & & & &&nearby source \\ 22&HD$\,$38489 & 16 & 18--39 & 2.07 & $<769$ & --- & confusion \\ 23 & RMC$\,$126 & 16 & 18--39 & 2.07 & $299\pm126$& $380\pm24$ & inhomogeneous \\ & & & & & &&background \\ 24 & Sk$\,-69\,271$ & 16 & 18--39 & 2.07 & $<298$ & --- & confusion \\ 25&RMC$\,$128 & 16 &
18--39 & 2.07 & $<218$ & --- & contaminating \\ & & & & & &&nearby source \\ 26&RMC$\,$149 & 16 & 18--39 & 2.07 & $<16700$ & --- & confusion \\ 27&HD$\,$269604 & 16 & 18--39 & 2.07 & $<53$ & --- & marginal detection \\ \hline 28&RMC$\,$14 & 16 & 18--39 & 2.07 & $<1612$ & --- & confusion \\ 29&RMC$\,$40 & 16 & 18--39 & 2.07 & $\mathbf{58.7\pm8.0}$ & --- & \\ 30&RMC$\,$4 & 16 & 18--39 & 2.07 & $<338$ & --- & confusion \\ 31&LHA$\,$115-S$\,$18 & 16 & 18--39 & 2.07 & $<120$ & --- & confusion \\ \hline \end{tabular} \tablefoot{ The numbers highlighted in bold are new measurements with no counterpart in the catalogue or with revised photometry, which are adopted in the analysis. } \end{table*} \begin{table*} \caption{PACS-100 point source extraction and 3$\sigma$ upper limits. } \label{tab:photPACS100} \centering \begin{tabular}{llcccl} \hline\hline ID &Source & Measured $F_\nu$ & Catalogue $F_\nu$ & Comment \\ & & (mJy) & (mJy) & \\ \hline 1&RMC$\,$143 & --- & $636\pm254$ & \\ 2&RMC$\,$127 & $535\pm65$ & $493\pm43$ & & \\ 3&S$\,$Doradus & $<1100$ & --- & not detected \\ &&&& due to confusion \\ 4&RMC$\,$110 & $<350$ & --- & \\ 5&HD$\,$269582 & $<270$ & --- & \\ 6&RMC$\,$116 & $<560$ & --- & \\ 7&HD$\,$269216 & $<283$ & --- & \\ 8&RMC$\,$71 & $669.0\pm68.5$& $706\pm48$ & & \\ \\ 9&RMC$\,$85 & $<1850$ & --- & \\ 10&RMC$\,$123 & $<206$ & --- & \\ 11&RMC$\,$99 & $<699$ & --- & \\ 12&Sk$\,-69\,279$ & $<137$ & --- & \\ 13&LHA$\,$120-S$\,119$ & $\mathbf{204.7 \pm 33.6}$& --- & marginally resolved\\ 14&LHA$\,$120-S$\,$61 & $292.8\pm37.6$ & $305\pm31$ & & \\ 15&RMC$\,$74 & $<38$ & --- & \\ 16&RMC$\,$78 & $<195$ & --- & \\ 17&RMC$\,$81 & $<105$ & --- & \\ 18&LHA$\,$120-S$\,$18 & $<132$ & --- & \\ 19&RMC$\,$66 & $ 318\pm33 $& $320\pm26$ & & \\ 20&RMC$\,$84 & $<59$ & --- & \\ 21&HD$\,$34664 & $<1805$ & --- & \\ 22&HD$\,$38489 & $\mathbf{123.3\pm 90.8}$&--- & marginal detection\\ &&&& due to confusion \\ 23 & RMC$\,$126 & $\mathbf{199\pm52}$& $136\pm29$ & \\ 24 & Sk$\,-69\,271$ & $<151$
& --- & \\ 25&RMC$\,$128 & $<94$ & --- & \\ 26&RMC$\,$149 & $<34800$ & --- & \\ 27&HD$\,$269604 & $<37$ & --- & \\ 28&RMC$\,$14 & $<440$ & --- & \\ 29&RMC$\,$40 & $<40$ & --- & \\ 30&RMC$\,$4 & $<434$ & --- & \\ 31&LHA$\,$115-S$\,$18 & $<62$ & --- & \\ \hline \end{tabular} \tablefoot{ The numbers highlighted in bold are new measurements with no counterpart in the catalogue or with revised photometry, which are adopted in the analysis. } \end{table*} A small number of sources are not included in the {\it Spitzer} SAGE catalogues, or they have poor catalogue photometry, most likely because of the strict requirements adopted to identify point-sources \citep{2006Meixner}. Indeed, we find that these sources are usually extended or embedded in a confused region, requiring manual extraction of the flux measurement. The most remarkable case is Sk$\,-69\,279$, which appears as a well-detected extended bubble in the MIPS24 map, but has no catalogue photometry. We performed aperture photometry for all sources that were either lacking measurements or are extended in the images. As a cross-check, we also measured all targets with catalogue values. Following this verification, we chose to use the existing catalogue photometry for the non-extended sources, unless indicated otherwise. Our aperture photometry was performed with the astropy-affiliated package ``photutils''. The aperture radii, sky annuli, and aperture corrections, as recommended in the MIPS instrument handbook\footnote{\url{https://irsa.ipac.caltech.edu/data/SPITZER/docs/mips/mipsinstrumenthandbook/50/}}, are listed in Tables \ref{tab:photMIPS24} and \ref{tab:photMIPS70}, together with the measured flux densities, comparison with the catalogue value when available, and relevant comments on the sources and their environments.
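The sky-subtracted aperture photometry described above can be sketched as follows. This is a pure-numpy illustration with pixel-centre masks; the actual measurements used photutils, which also handles fractional pixel overlap:

```python
import numpy as np

def aperture_photometry_simple(image, x0, y0, r_ap, r_in, r_out, ap_corr=1.0):
    """Circular-aperture photometry with median-sky subtraction (sketch).

    image        : 2D array in flux units per pixel
    x0, y0       : source position [pix]
    r_ap         : aperture radius [pix]
    r_in, r_out  : sky annulus radii [pix]
    ap_corr      : aperture correction (e.g. from the MIPS handbook)
    """
    yy, xx = np.indices(image.shape)
    r = np.hypot(xx - x0, yy - y0)
    in_ap = r <= r_ap
    in_sky = (r >= r_in) & (r <= r_out)

    sky_vals = image[in_sky]
    sky_per_pix = np.median(sky_vals)
    n_ap = in_ap.sum()

    flux = (image[in_ap].sum() - sky_per_pix * n_ap) * ap_corr
    # Background-dominated error: std of the sky annulus integrated
    # over the aperture area
    err = np.std(sky_vals) * np.sqrt(n_ap) * ap_corr
    return flux, err
```

For example, a point source of amplitude 100 on a flat background of 2, measured with the 7\arcsec/7--13\arcsec/2.05 configuration used for the confused MIPS24 sources, returns a sky-subtracted, aperture-corrected flux of $100\times2.05$.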
The reported flux density errors only account for the uncertainty in determining the background (defined as the standard deviation in the background annulus integrated over the aperture), which generally dominates over the absolute flux calibration error and statistical noise in the source aperture. For non-detected sources, we provide $3\sigma$~upper limits. We checked the quality of our photometry against the catalogues (see Tables \ref{tab:photMIPS24} and \ref{tab:photMIPS70}): the results are consistent with the published values. In bold we highlight our measurements that are used in the later analysis instead of the catalogue values. \subsection{Photometry and upper limits from {\it Herschel}} Only a few sources were well detected with the Photodetector Array Camera and Spectrometer \citep[PACS, ][]{2010PACS} at 100 and 160\,$\mu$m, and their flux densities were taken from the HERITAGE catalogue \citep{2013Meixner}. Additionally, we report in Table \ref{tab:photPACS100} new PACS100 photometry for LHA$\,$120-S$\,$119 and HD$\,$38489 (although the latter is only a marginal detection). To maximise the signal-to-noise ratio, we adopted a procedure equivalent to Gaussian fitting. Local peaks were searched for in boxes with size equivalent to the beam. The regions used to evaluate the sky contribution were separated by one or two beams. The $1\,\sigma$ total uncertainty is determined by summing in quadrature the standard deviation of the background and the flux calibration uncertainty of 10\%. Because of contamination of S~Dor by a strong nearby nebular region, we reduced the background spacing to one beam. Nevertheless, the source was not unambiguously detected. For the sources with no detection (either in this work or in the catalogue), we report the $3\,\sigma$ upper limit at $100\,\mu\rm m$ determined as described above. Finally, in PACS160 we only find a single new detection, LHA$\,$120-S$\,$61, which has a flux density of $60\pm16\,\rm mJy$.
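The error budget just described (background scatter combined in quadrature with a fractional calibration term) and the $3\sigma$ upper limits for non-detections reduce to two short helpers; the default 10\% calibration fraction is the PACS value quoted above, and the function names are ours:

```python
import numpy as np

def total_uncertainty(flux, sigma_bkg, cal_frac=0.10):
    """1-sigma total uncertainty: background standard deviation and a
    fractional flux-calibration term (e.g. 10% for PACS) in quadrature."""
    return float(np.hypot(sigma_bkg, cal_frac * flux))

def upper_limit(sigma_bkg, n_sigma=3.0):
    """n-sigma upper limit quoted for non-detections."""
    return n_sigma * sigma_bkg
```

For a 600 mJy source with a 30 mJy background scatter, the calibration term (60 mJy) dominates and the quoted error is $\sqrt{30^2+60^2}\approx67\,\rm mJy$.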
We do not report any new detection obtained at longer wavelengths with the Spectral and Photometric Imaging Receiver (SPIRE) instrument \citep{2010Griffin}. The upper limits at $160\,\rm \mu m$ and longer wavelengths are not provided as they do not add useful information owing to the larger beam sizes and high interstellar dust brightness at such wavelengths. \section{The LBV SED classification scheme and colour-magnitude diagram} \label{sec:review} \label{sec:sedreview} \begin{figure*} \begin{minipage}{.45\textwidth} \centering \includegraphics[width=1\linewidth]{sed-S_Doradus-ex} \includegraphics[width=1\linewidth]{sed-RMC_123-ex} \end{minipage} \hfill \begin{minipage}{.45\textwidth} \centering \includegraphics[width=1\linewidth]{sed-LHA_120-S_119-ex} \includegraphics[width=1\linewidth]{sed-HD_34664-ex} \end{minipage} \begin{minipage}{1\textwidth} \centering \includegraphics[width=.45\linewidth]{sed-HD_269604-ex} \end{minipage} \caption{Examples of each of the four LBV SED classes defined in Sect.~\ref{sec:review}. } \label{fig:example_sed} \end{figure*} \begin{figure*} \centering \includegraphics[scale=0.5]{cmdiagram3} \caption{Mid-infrared colour-magnitude diagram for stars in the LMC. The LBVs are shown with filled circles in different colours, depending on the SED class defined in this work. Several types of supergiant stars (RSGs, sgB[e], A and B supergiants, six LBVs and other massive stars) as in \citet{2009Bonanos} are displayed with triangles of different colours as identified in the legend. Dusty evolved stars from \citet{2015Jones} are shown with diamond symbols (these include post-AGBs, planetary nebulae, sgB[e] stars, one LBV, one WR and two SN remnants).
Pixel symbols indicate the AGBs and RSGs from \citet{2012Riebel} and point symbols are confirmed and candidate YSOs from \citet{2008Whitney}.} \label{fig:cmd-LMC} \end{figure*} We first need to ensure that our sample is limited to LBV stars, and to find a classification criterion that relies not only on the spectroscopic and photometric variations, but also on the properties of the stellar ejecta. Table \ref{tab:sample} summarises the relevant information on our sample, including the stellar classification from the literature. Stellar parameters, extinction, and nebular sizes, when known, are summarised in Table \ref{tab:stellparam} of Appendix \ref{appendix:stellarparam}. Unfortunately, we cannot provide information on the binary status, as to our knowledge this is known only for RMC~14 in the SMC and RMC~81 in the LMC. The complete set of photometry from this work and from the catalogues is given in Tables~\ref{tab:PhotometryNearIR}, \ref{tab:PhotometryMidIR} and \ref{tab:PhotometryFarIR} of Appendix \ref{appendix:allphoto}. For all sources, the flux density distributions from $\rm 1~\mu$m onwards are shown in Figs.~\ref{fig:group1}, \ref{fig:group1b}, \ref{fig:group2}, \ref{fig:ungrouped}, \ref{fig:group3}, \ref{fig:group4} of Appendix~\ref{appendix:imagesSEDs}, where, in addition to the photometric points, the stellar continuum (based on the stellar parameters as in Table~\ref{tab:stellparam}) and the IRS spectra are also included when available. In their sample of 1750 massive stars in the LMC, \citet{2009Bonanos} included six LBVs (S~Dor, RMC~127, RMC~71, RMC~110, RMC~85, HD~269582) and concluded that these stars do not represent a homogeneous class of objects on the basis of their mid-infrared colours. We identify four classes based on the individual infrared spectral energy distributions (SEDs): \begin{enumerate} \item Sources with a dusty nebula peaking in the mid- to far-infrared (Class~1).
These can be further divided into two groups: with or without a significant free-free excess above the stellar photosphere (Classes 1a and 1b, respectively). An example can be found in the top-right panel of Fig.~\ref{fig:example_sed}, where the object shows a moderate free-free continuum up to $\rm 10~\mu$m and, at longer wavelengths, a dominant infrared excess due to dust in the extended nebula, previously imaged by \citet{2003Weis-s119} and \citet{2012Agliozzo}. By contrast, the top-left panel shows an example of strong free-free emission. \item Sources with only a free-free emission excess (Class~2). An example is given in the middle-left panel of Fig.~\ref{fig:example_sed}. For these sources the MIPS24 flux density is consistent with a power law describing the free-free continuum. \item Sources dominated at all wavelengths $\gtrsim2\,\mu\rm m$ by a hot dusty component, suggestive of a disc close to the star (Class~3). In the literature, these are all classified as B[e] supergiants (see Table~\ref{tab:stellparam}). An example of a Class~3 SED is HD~34664, shown in the middle-right panel of Fig.~\ref{fig:example_sed}. \item Featureless sources with a single component consistent with the stellar photosphere at all wavelengths (Class~4). An example is presented in the bottom panel of Fig.~\ref{fig:example_sed}. \end{enumerate} Table~\ref{tab:sample} summarises the classification of the sources in the scheme proposed here. Among them, only one object cannot be classified, RMC~85. This is an active LBV \citep{1998vG}. The star is embedded in a very crowded region. Because of the absence of reliable photometry longward of $\sim20\,\mu{\rm m}$ \citep[confirming][]{2009Bonanos}, we are not able to distinguish whether it is SED Class~1 or Class~2 (see Fig.~\ref{fig:ungrouped}).
The dust composition of sources in Class~1 typically comprises amorphous silicates, although a mixed chemistry is sometimes reported, as in the case of RMC~71 in the LMC \citep{2014Niyogi} or HD~168625 in the Milky Way \citep{2010Umana}. Crystalline silicates are also found in LBVs, albeit rarely \citep[e.g. RMC~71 and HR~Car, ][respectively]{1999Voors,2009Umana}, while they seem to be a common feature of sgB[e] stars \citep[e.g.][]{2006Kastner, 2010Kastner,2014deWit}. In Fig. \ref{fig:cmd-LMC} we present the [8.0] versus [8.0]-[24] colour-magnitude diagram (CMD) of stars in the LMC, depicting the position of the LBV sample analysed in this work (filled circles of various colours, representing the different LBV SED classes as described in the legend); early and late supergiants as in \citet{2009Bonanos} (downward-pointing triangles); low- and intermediate-mass stars, together with some high-mass stars, planetary nebulae and SN remnants from \citet{2015Jones} (grey diamonds); AGBs and RSGs from \citet{2012Riebel}; and Young Stellar Objects (YSOs) from \citet{2008Whitney}. Stars with SED Classes 2, 3 and 4 occupy the same region of the diagram as RSGs and other types of massive stars, which Bonanos et al. found indistinguishable from AGBs and young stellar objects, highlighting the importance of using wide-baseline photometry. However, we notice some differences: Class 2 objects tend to be fainter than RSGs at 8$\,\rm \mu m$, because they lack a hot dusty envelope; Class 3 objects are usually brighter than RSGs, owing to their hot dusty discs; the only Class 4 object with MIPS24 photometry is the faintest in this region of the diagram, consistent with no excess above the photosphere. For stars in SED Class~1, the [8.0]-[24] colours coincide with those of the sample of stars in Jones et al. However, the nature of 7 out of their 35 objects was not spectroscopically confirmed. These could all be sources with a dusty gaseous nebula (e.g. SN remnants, planetary nebulae, LBV nebulae).
For stars in the SMC we refer to the colour-magnitude diagram by \citet{2010Bonanos}, who included all the LBVs in the census \citep{2018RNAAS...2c.121R}. Bonanos et al.\ found that LBVs are among the most luminous stars at $24\,\rm \mu m$, but they are not clearly separated from other types of stars. \subsection{Class~1: sources with dusty nebulae peaking in the mid- to far-infrared} This class contains well-known extended LBV nebulae (LBVNe) in the MCs, extensively studied in the literature. Some of them are associated with very active LBVs \citep{2017Walborn}. Some stars are classified as candidate LBVs: these are stars that do not satisfy all LBV classification criteria (both observed spectroscopic and photometric variability; \citealt{1994HD}); however, the presence of a circumstellar nebula around some of them suggests that they are ex/dormant LBVs. Depending on whether or not they have an excess of near-infrared emission due to the ionised stellar wind, stars in this class can be further divided into Class 1a (objects that present free-free emission from the wind) and Class 1b (objects with no detected free-free emission from the wind). For two objects, Sk$\,-69\,279$ and RMC\,78, we report the first identification of a dusty circumstellar nebula. The dust component in these sources peaks between 24 and 100~$\rm \mu$m, similarly to several Galactic objects with extended nebulae \citep[typically with sizes of the order of $0.1-1$~pc, e.g. G79.29+0.46, Pistol Star, AG Car, AFGL2298, HR Car; ][and references therein]{2014Agliozzo,2014Lau, 2015Vamvatira,2010Buemi, 2017Buemi}. Among the sources in this class, five (RMC$\,$127, RMC$\,$71, S~Dor, LHA$\,$120-S$\,$61, and RMC$\,$40) have an IRS spectrum, and all of them show, together with the thermal continuum, the silicate bump around $10-12\,\rm \mu m$.
The objects classified as belonging to Class~1 are briefly introduced below (see Figs.~\ref{fig:group1} and \ref{fig:group1b} in the Appendix): \\ \textbf{S~Dor} is the prototype of the most common LBV variability \citep[a summary of the pioneering works can be found in][]{1989Wolf}. The infrared excess from a dusty nebula was reported in \citet{2009Bonanos} from mid-infrared photometry and in \citet{2017Jones} from the IRS spectrum. The power-law behaviour in the IRS data below $10~\rm \mu m$ is more consistent with strong free-free emission in the ionised stellar wind than with the stellar photosphere. The bright surrounding environment does not allow the nebula to be disentangled at $70~\rm \mu m$ and beyond. The SED Class of this object is 1a.\\ \textbf{RMC\,127}: this star belongs to the group of active LBVs \citep{2008Walborn,2017Walborn}. This is another object with SED Class 1a. See \citet{2017AgliozzoB} for a multiwavelength study of its nebula.\\ \textbf{LHA\,120-S\,61}: candidate LBV \citep{1994HD} with SED Class 1a; its dusty nebula was modelled by \citet{2017AgliozzoA}.\\ \textbf{Sk$\mathbf{\,-69\,279}$}: candidate LBV with SED Class 1a; the nebula was previously imaged by {\it{HST\/}} \citep{2002Weis}. We find that at $24\,\rm \mu m$ it appears as a bubble, resembling the morphology revealed in the optical. To our knowledge, this is the first report of dust in this object. \\ \textbf{LHA\,120-S\,119}: candidate LBV \citep{vanGenderen2001} with a known nebula (SED Class 1a) that was previously imaged at different wavelengths \citep[][and references therein]{2012Agliozzo}. In the infrared, a thermal component from dust is detected up to $100\,\rm \mu m$. The nebula is marginally resolved at $24\,\rm \mu m$, revealing a marked asymmetry similar to the optical and radio images. This asymmetry was explained as a bow-shock, given the run-away nature of its associated star \citep{2001D&C}.
\\ \textbf{RMC\,81}: infrared emission from cool dust around this candidate LBV \citep{vanGenderen2001} was first reported from an {\it{IRAS}} detection at $60\,\mu\rm m$ \citep{1988iras....1.....B,2002Tubb}. However, from inspection of the MIPS70 image, comparison with the MIPS photometry, the lack of $25$ or $12\,\mu\rm m$ \emph{IRAS} detections, and the $1\sigma$ position offset, it appears that this flux was greatly elevated by ISM confusion. The object is detected in both MIPS bands, even at $70\,\rm \mu m$ where the surrounding environment is bright. The star is an eclipsing binary \citep{1987StahlR81}. The SED Class of this object is 1a. \\ \textbf{LHA$\,$120-S$\,$18}: at $24\,\rm \mu m$ and beyond, the environment around this candidate LBV \citep{vanGenderen2001} is bright and the object is not distinguishable from the background. The SED Class of this object is 1a.\\ \textbf{RMC$\,$143}: the dusty and massive nebula (SED Class 1b) was extensively studied by \citet{2019Agliozzo}, who also confirmed the LBV status of this star. \\ \textbf{RMC$\,$71}: this star also belongs to the group of very active LBVs \citep{2013Mehner, 2017Walborn}, with SED Class 1b. It is possible to observe variability in the infrared SED (compare the mid-infrared photometry with the IRS spectrum), as the data were acquired at different epochs during the most recent stellar outburst \citep{2017Mehner}. Here we show a similar dataset, but extended to longer wavelengths. The entire SED was previously modelled by \citet{2014Niyogi}. \\ \textbf{RMC$\,$78}: at $24\,\rm \mu m$ the object \citep[a candidate LBV,][]{1999vanG} appears as a double source (SED Class 1b). \\ \textbf{RMC$\,$40}: this star belongs to the group of confirmed LBVs \citep{1993Szeifert} in the SMC. The first detection at $24\,\rm \mu m$ was reported by \citet{2010Bonanos}. Here we add a new clear detection at $70\,\rm \mu m$. The SED Class of this object is 1b.
An IRS spectrum is available and suggests that the dust emission peaks between the two MIPS bands.\\ \textbf{HD$\,$269216}: a confirmed LBV \citep{1991Prinja}, this object is well detected and marginally resolved in MIPS~24. We add a new measurement, as the catalogue PSF fitting may have missed a large fraction of its flux density. The SED Class of this object is 1b. \\ \textbf{RMC$\,$110}: this is a confirmed LBV that has manifested the eruptive phenomenon a few times in the past decades \citep{1990Stahl,2018Campagnolo}. Because of confusion at wavelengths beyond $\sim20\,\rm \mu m$, it is not possible to obtain reliable photometry of dust in its ejecta (SED Class 1b). \subsection{Class~2: sources with free-free excess only} This class consists of the following objects (see Fig.~\ref{fig:group2}): \\ \textbf{RMC\,116} is a strongly active S~Dor variable \citep{vanGenderen2001}. Its SED shows an ionised stellar wind in the form of a free-free excess above the stellar photosphere detected up to $24\,\rm \mu$m. \\ \textbf{RMC\,123} and \textbf{RMC\,99} belong to a sub-group proposed by \citet{1984Stahl} and \citet{1987S&W}, characterised by an excess of free-free emission from a gaseous disc, rich in forbidden lines, but with no dust features. The free-free emission of RMC\,123 is detected up to $24\,\rm \mu$m. At longer wavelengths, the SED of RMC\,99 is dominated by contamination from neighbouring sources, so that the presence of an infrared excess is questionable. Both stars are catalogued as weakly active S~Dor variables in \citet{vanGenderen2001} and are considered candidate LBVs. RMC~99 was observed with IRS, but the spectrum is likely dominated by interstellar gas and dust.\\ \textbf{HD\,269582} is a very active LBV \citep{2017Walborn} with a strong ionised stellar wind detected up to $24\,\rm \mu$m. Differences between 2MASS and IRAC data suggest that the star was active (highly variable) in the years of the infrared observations.
\\ \textbf{RMC\,74} is a candidate LBV \citep{vanGenderen2001} with a SED similar to that of RMC~116. \\ \textbf{RMC\,14}, a well-known multiple-system containing an active LBV in the SMC \citep[e.g.][]{1996Barba}, shows an SED similar to HD~269582, with a strong free-free excess above the continuum \citep[a similar SED can also be found in][]{2010Bonanos}. This is also seen in two other active LBVs \citep[RMC\,127,][and S~Dor]{2017AgliozzoA}. The system was observed with IRS, but, because of the large slit width, the spectrum is likely dominated by interstellar gas and dust.\\ \subsection{Class~3: sources dominated by a hot dusty disc} \textbf{ RMC\,66, RMC\,84, HD\,38489, HD\,34664, RMC\,126, RMC\,4, and LHA\,115-S\,18}: These stars are all known in the literature as supergiant B[e] stars \citep[e.g.][]{1996Zickgraf,1999vanG,2012Aret, 2009Bonanos, 2010Bonanos,2006Kastner, 2010Kastner}, but sometimes they are also listed in the sample of candidate LBVs because of other observational commonalities \citep[e.g. variability of emission lines and P~Cyg profiles,][]{vanGenderen2001}. Their SEDs are suggestive of a different dust formation mechanism. Dust in these stars forms in the equatorial plane (perhaps as a consequence of fast rotation or binarity), and resides in a circumstellar disc-like configuration, resembling the mid-infrared spectra of T~Tauri and disc-enshrouded Herbig Ae/Be stars \citep{2006Kastner, 2010Kastner}. Furthermore, observations suggest the silicates to be crystalline, which requires long-lived dust particles permanently close to the star, as opposed to outflowing material like in the case of LBV eruptions \citep[for a review, see][]{2014deWit}. The IRS spectrum, and IRAC and MIPS images of HD~38489, RMC~66, LHA~115-S~18, RMC~126 and HD~34664 first appeared in \citet{2006Kastner,2010Kastner}. 
\subsection{Class~4: sources with no evident free-free or dust excess} This class comprises the following objects: \\ \textbf{HD\,269604} was classified as A1~Ia-0 by \citet{1973Osmer}. It appears in several works as a candidate LBV, but \citet{vanGenderen2001} flagged it as a non-candidate. We do not find evidence of free-free and/or dust infrared excess.\\ \textbf{RMC\,128} \citep[B2~Ia with N weak, ][]{1991Fitzpatrick} was identified as a non-candidate by van~Genderen, but it is mentioned as a candidate LBV in several other works. RMC~128 is only $20\arcsec$ away from the much more luminous RMC~127 and is therefore hard to discern in the mid-infrared. In the near-infrared, there is no evidence of free-free and dust excess. We do not find any observational argument to keep the star in the list of candidate LBVs, although its spatial proximity to the very active RMC~127 certainly makes it an interesting target (e.g. if the two stars are co-eval). \\ \textbf{RMC\,149} was characterised by \citet{vanGenderen2001} as a weakly active S~Dor variable (with low-amplitude variations over decades). However, \citet{2009Farigna} re-classified it as O8.5 II((f)). We do not find typical signatures of LBV nebular features (ionised gas and/or dust excess). The infrared images reveal that the source is embedded in a crowded environment. \\ \textbf{Sk}$\mathbf{\,-69\,271}$ is a yellow supergiant (YSG) \citep{2012Neugent}. It appeared in the list of candidate LBVs because of nearby nebular material resembling a half shell, most likely of interstellar origin \citep{1997Weis}. This source is already particularly faint at $8\,\rm \mu m$ and does not show any features at longer wavelengths. In the absence of other evidence to keep these stars in the LBV sample, we discard them from further analysis and also suggest their removal from the LBV census \citep[the most recent compilation being][]{2018RNAAS...2c.121R}.
\section{Modelling the infrared spectral energy distributions with grey-body fitting} \subsection{Individual stars} \label{sec:modelling:grey} \begin{table*} \caption{Summary of estimated dust properties.} \label{tab:dustproperties} \centering \begin{tabular}{lcccr} \hline\hline Source & Dust Mass & Dust $T$ & $\beta$& Ref. \\ & ($10^{-2}\,\rm M_\odot$)& (K) & & \\ \hline LHA~120-S~119& $1.2^{+2.2}_{-0.7}$& $108^{+9}_{-10}$&$1.03\pm0.045$&this work\\ RMC~127\xspace& $0.2-2$& 71$-$90&1.5$-$2.0&[1]\\ LHA~120-S~61 &$0.5-3$& 105$-$145&0.55$-$1.5&[2]\\ RMC$\,$143\xspace &$5.5\pm1.8$ & $62-80$ & $1.0-1.6$ & [3] \\ RMC$\,$71 & 1& $107^{+10}_{-7}$& N/A&[4], [5]\\ & $0.9^{+0.3}_{-0.2}$& $100\pm10$&$1.8\pm0.2$&this work\\ \hline HD~269216 & $0.8\pm0.7$& $62\pm6$& 1.3 (fixed) &this work\\ RMC$\,$81 & $0.4\pm0.2$& $81\pm8$& 1.3 (fixed)&this work\\ RMC$\,$40 & $0.31\pm0.07$& $75\pm2$& 1.3 (fixed)&this work\\ \hline \end{tabular} \tablebib{ [1] \citet{2017AgliozzoB}; [2] \citet{2017AgliozzoA}; [3] \citet{2019Agliozzo}; [4] \citet{2014Niyogi}; [5] \citet{2010vanLoon}.} \end{table*} \begin{figure} \centering \includegraphics[trim={1cm 0.5cm 2cm 2cm},clip,width =1.0\columnwidth]{sed-LHA_120-S_119fit} \caption{Example of single grey-body fit of the {\it{2MASS}, \it{WISE}, \it{Spitzer}}, and {\it{Herschel}} data, similarly to \citet{2017AgliozzoA,2017AgliozzoB,2019Agliozzo}.} \label{fig:s119fit} \end{figure} Several dusty LBV nebulae are extended. Optical instruments detected H$_{\alpha}$ emission, almost unextinguished by the dust grains \citep{2012Agliozzo}. Indeed the extinction A$_{\rm H_{\alpha}}$ was mapped in three nebulae, revealing relatively low values and thus suggesting an optically thin configuration for the dust \citep{2017AgliozzoA,2017AgliozzoB,2019Agliozzo}. For such sources we can assume that radiative transfer effects are negligible. 
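Under this optically thin assumption, the grey-body model used throughout this section, and its inversion for the dust mass, reduce to a few lines. The following sketch (SI units; the $\kappa_{\rm 850\mu m}$ value and LMC distance are those quoted in the text, while the example parameters are merely illustrative) is not the actual fitting code:

```python
import numpy as np

H = 6.62607015e-34    # Planck constant [J s]
KB = 1.380649e-23     # Boltzmann constant [J/K]
C = 2.99792458e8      # speed of light [m/s]

def planck_nu(nu, T):
    """Planck function B_nu(T) in W m^-2 Hz^-1 sr^-1."""
    return 2.0 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * T))

def greybody_flux(lam_um, M_dust, T_dust, beta, D,
                  kappa850=0.17):  # 1.7 cm^2 g^-1 -> 0.17 m^2 kg^-1
    """Optically thin grey-body: F_nu = M_d kappa_nu B_nu(T_d) / D^2,
    with kappa_nu = kappa_850 (nu / nu_850)^beta.  SI units throughout."""
    nu = C / (lam_um * 1e-6)
    kappa = kappa850 * (nu / (C / 850e-6)) ** beta
    return M_dust * kappa * planck_nu(nu, T_dust) / D**2

def dust_mass(F_nu, lam_um, T_dust, beta, D, kappa850=0.17):
    """Invert the grey-body for the dust mass (kg)."""
    return F_nu / greybody_flux(lam_um, 1.0, T_dust, beta, D, kappa850)

# hypothetical example with LHA 120-S 119-like parameters
D_LMC = 49.97e3 * 3.0857e16         # 49.97 kpc in metres
M_d = 0.012 * 1.989e30              # 1.2e-2 Msun in kg
F100 = greybody_flux(100.0, M_d, 108.0, 1.03, D_LMC)
```

Fitting `greybody_flux` to the far-infrared photometry with a non-linear least-squares routine would then yield the temperature, dust mass, and $\beta$ values of the kind listed in Table~\ref{tab:dustproperties}; when only two bands constrain the fit, $\beta$ must be held fixed, as described below.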
Among the stars classified as Class~1, five have a detection in at least one PACS filter, namely RMC~71, RMC~127, RMC~143, LHA~120-S~61, and LHA~120-S~119. With the exception of LHA~120-S~119, the dust mass for these sources has been previously determined in the literature (see Table~\ref{tab:dustproperties}). For LHA~120-S~119, we thus fit the infrared SED with a greybody to derive the characteristic temperature, dust mass and $\beta$ parameter (Fig.~\ref{fig:s119fit}), using an absorption coefficient value $\kappa_{\rm 850\mu m}$ of $\rm 1.7\, cm^{2}\, g^{-1}$ \citep{1987Sodroski}, the same as \citet{2017AgliozzoA,2017AgliozzoB,2019Agliozzo}. We assume a distance of $49.97\,\rm kpc$ for the LMC \citep{2013Pietrzynski} for all sources. The distances to individual sources will vary around this nominal value due to the inclined geometry and finite thickness of the LMC, but the errors due to this single distance assumption are expected to be less than 2~kpc or 4\% \citep[][and considering a maximum separation of any source from the LMC centre of $2^\circ$]{2001AJ....122.1807V}. This uncertainty ($\sigma_{D}\sim1\,\rm kpc\sim2\%$) is an almost negligible contribution to the dust mass uncertainty compared to the systematic uncertainty in $\kappa$, and the typical measurement uncertainties of the far-infrared photometry. Radiative transfer modelling was adopted to fit the data of RMC~71 by \citet{2014Niyogi}. For comparison, we also fit the data of RMC~71 with a grey-body and find a dust mass which agrees with the literature value. Three other sources (RMC~40, RMC~81, HD~269216) are detected in both the $24$ and $70\,\rm \mu m$ MIPS bands but not in the PACS bands. Their measurements suggest that the thermal emission peaks between 24 and 70$\rm \mu m$. We fit these data by fixing the parameter $\beta$=1.3 (the median of the $\beta$ values determined for the five LMC LBVs above, see also Table~\ref{tab:dustproperties}). 
The distance assumed for the SMC LBV RMC~40 is $62.44\, \rm kpc$ \citep{2020Graczyk}. We note that the uncertainty of HD~269216 is nearly 100$\%$ due to the marginal detection in MIPS70. For the other Class~1 sources, the lack of detection longward of $24\,\rm \mu m$ does not allow us to constrain the dust parameters. The results of this analysis are summarised in Table~\ref{tab:dustproperties}. \subsection{Stack analysis of LBVs in the LMC} \label{sec:stacking} \begin{table*} \caption{Flux densities from the stacked data.} \label{tab:stackedphoto} \centering \begin{tabular}{lcccc} \hline\hline Filter &$\lambda_{\rm eff}$ & LBVs & LBVs & x-AGBs \\ & & & (w/o RMC143)& \\ &($\rm \mu m$)&(Jy) &(Jy) & (Jy)\\ \hline $J$\tablefootmark{a} & 1.235 & 3.430$\pm$0.068 & 3.330$\pm$0.066 & 5.951 $\pm$ 0.595 \\ $H$\tablefootmark{a} & 1.662 & 2.761$\pm$0.070 & 2.680$\pm$0.068 & 16.128 $\pm$ 1.613 \\ $K_{s}$\tablefootmark{a} & 2.159 & 2.430$\pm$0.050 & 2.370$\pm$0.050 & 32.604 $\pm$ 3.260 \\ IRAC3.6\tablefootmark{a} & 3.550 & 0.776$\pm$0.024 & 0.735$\pm$0.023 & 83.965 $\pm$ 8.397 \\ IRAC4.5\tablefootmark{a} & 4.493 & 0.604$\pm$0.017 & 0.574$\pm$0.016 & 93.59 $\pm$ 9.36 \\ IRAC5.8\tablefootmark{a} & 5.731 & 0.470$\pm$0.013 & 0.447$\pm$0.012 & 104.07$\pm$10.42 \\ IRAC8.0\tablefootmark{a} & 7.872 & 0.541$\pm$0.018 & 0.500$\pm$0.020 & 104.18$\pm$10.42 \\ WISE3\tablefootmark{a} & 11.56 & 0.996$\pm$ 0.082 & 0.795$\pm$0.080 & -- \\ MIPS24\tablefootmark{b} & 23.675 & 12.4$\pm$1.7 & 11.7$\pm$1.5 & 44.95 $\pm$4.64 \\ MIPS70\tablefootmark{b} & 71.42 & 4.3$\pm$2.6 & 4.1$\pm$0.8 & 7.2 $\pm$1.6 \\ PACS100\tablefootmark{b} & 100.0 & 2.34$\pm$0.31 & 1.66$\pm$0.220 & 4.1 $\pm$1.4 \\ PACS160\tablefootmark{b} & 160.0 & 0.82$\pm$ 0.40 & $>0.41\pm0.16$\tablefootmark{c} & $<4.26$\tablefootmark{d} \\ SPIRE250\tablefootmark{b} & 250.0 & $>0.122\pm0.037$\tablefootmark{c}& --- & --- \\ \hline \end{tabular} \tablefoot{\centering \tablefoottext{a}{Summed photometry from catalogues.}\tablefoottext{b}{Photometry
of point-source at the centre of stacked images.}\tablefoottext{c}{Lower limit fluxes are measured fluxes from stacks of subsets of the least confused or brightest sources (see text).}\tablefoottext{d}{$3\sigma$ statistical upper limit from the noise in the stacked image.}} \end{table*} \begin{figure*}[!htb] \begin{minipage}{.5\linewidth} \centerline{\sf \large Stack of 18 LBVs in the LMC\hspace{3em}} \end{minipage}% \begin{minipage}{.5\linewidth} \centerline{\sf \large Stack of 17 LBVs in the LMC (w/o RMC143)\hspace{1.5em}} \end{minipage} \vspace{5pt} \includegraphics[scale=0.31]{WISE3-stack-resid-with-R143} \vspace{1pt} \includegraphics[scale=0.31]{WISE3-stack-resid-without-R143} \vspace{1pt} \includegraphics[scale=0.31]{Mips24-stack-resid-withR143} \vspace{1pt} \includegraphics[scale=0.31]{Mips24-stack-resid} \vspace{1pt} \includegraphics[scale=0.31]{Mips70-stack-resid-withR143} \vspace{1pt} \includegraphics[scale=0.31]{Mips70-stack-resid-without-RMC143} \vspace{1pt} \includegraphics[scale=0.31]{PACS100-stack-resid-withR143} \vspace{1pt} \includegraphics[scale=0.31]{PACS100-stack-resid-withoutR143} \vspace{1pt} \includegraphics[scale=0.31]{PACS160-stack-resid-withR143} \hfill \includegraphics[scale=0.31]{PACS160-stack-resid-fewer13-lowlimit} \caption{Left column: stacked images of LMC LBVs (SED Classes 1 and 2 in Table~\ref{tab:sample}) and residuals after Gaussian fitting. Right column: same as before but without RMC143. The field of view corresponds approximately to $70\arcsec \times 70\arcsec$. The stack image at 160$\rm \mu m$ on the bottom right was derived with a smaller number of sources (IDs: 2, 5, 6, 7, 8, 10, 12, 13, 14, 15, 16, 17, 18) and used as a lower limit. 
} \label{fig:residuals-LMC} \end{figure*} \begin{figure} \centering \includegraphics[scale=0.38]{sedtot-commonscale} \includegraphics[scale=0.38]{sedtot-withoutR143-commonscale} \caption{Flux density distribution of stacks of the LBVs in the LMC (SED Classes 1 and 2 in Table~\ref{tab:sample}), with (top) and without (bottom) RMC~143. 2MASS and IRAC values are the sum of the individual source fluxes, while WISE-W3, MIPS and PACS values are from photometry performed on the stacked images (Fig.~\ref{fig:residuals-LMC}). Parameters from the fitting procedure are shown ($\alpha$ and $c$ are the spectral index and offset of the best-fit power-law describing the near-infrared data; $T_{\rm d}$, $M_{\rm d}$ and $\beta$ are the dust temperature, mass and opacity power law index of the best-fit greybody describing the thermal emission from dust). The lower limits (red points and arrow symbols) for MIPS70 and SPIRE250 in the upper plot and PACS160 in the lower plot are from stacks of subsets of sources (see text for the subsets in each case), as there is not a reliable detection in the stacked images of the whole sample. } \label{fig:sedstot} \end{figure} \begin{figure} \centering \includegraphics[scale=0.38]{sedtot-x-AGBs-commonscale} \caption{Flux density distribution of the 1342 extreme x-AGBs in the LMC from the catalogue by \citet{2012Riebel}, constructed from the Gaussian fitting of the central detection in the stacked images (at wavelengths shorter than $\sim10~\rm \mu$m the value is just the sum of the catalogue measurements). Blue points: observed photometry; green points: de-reddened photometry; red arrow: $3\sigma$ PACS160 upper limit.
} \label{fig:sedstot-xAGBs} \end{figure} \begin{figure} \centerline{\sf \large Stack of 1342 extreme AGBs in the LMC\hspace{2em}} \vspace{5pt} \includegraphics[scale=0.31]{xAGBs-MIPS24-stack-resid} \vspace{1pt} \includegraphics[scale=0.31]{xAGBs-Mips70-stack-resid} \vspace{1pt} \includegraphics[scale=0.31]{xAGBs-PACS100-stack-resid} \vspace{1pt} \includegraphics[scale=0.31]{xAGBs-PACS160-stack-resid} \vfill \caption{Stacked images of the extreme-AGBs from \citet{2012Riebel} and residuals after Gaussian source extraction. The field of view corresponds approximately to $70\arcsec \times 70\arcsec$. Ten out of 1352 sources were excluded (see text), because the emission in their field is dominated by bright large-scale structures or because they are another type of stellar object (e.g. YSOs). } \label{fig:residuals-xAGBs} \end{figure} Given the low detection rates of individual sources at $70\,\mu\rm m$ and longer, largely due to confusion, we have also carried out a stacking analysis to provide a statistical measure of the total dust content. The stacking analysis adopted here consists of co-adding the images, at a given wavelength, of individual sources with known coordinates, in order to reduce the noise by averaging, resulting in an aggregate flux density measurement. This technique is particularly useful when the target sources are not detected. The stacking procedure is described in \citet{2010Kurczynski}, and references therein. Here, we do not adopt any of the deblending methods described by the authors, as the confusion in our case is dominated by diffuse ISM emission, for which a priori positional information is difficult to obtain. For the stacking analysis we considered all confirmed and candidate LBVs (SED Classes 1 and 2), thus excluding sgB[e] stars and the sources discarded in Sect.~\ref{sec:review}. The LMC stack sample thus comprises the first 18 entries in Table~\ref{tab:sample}.
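Schematically, the co-addition step amounts to summing equal-sized cutouts centred on each catalogue position (plain NumPy; converting the sky coordinates of Table~\ref{tab:sample} to pixel positions via the map WCS, e.g. with astropy.wcs, is assumed to have been done beforehand):

```python
import numpy as np

def cutout(mosaic, x, y, half):
    """Square cutout centred, to the nearest pixel, on pixel position (x, y)."""
    xi, yi = int(round(x)), int(round(y))
    return mosaic[yi - half:yi + half + 1, xi - half:xi + half + 1]

def stack_cutouts(mosaic, pix_coords, half=17):
    """Sum same-sized cutouts: the noise averages down, while the aggregate
    flux of the (individually undetected) sources adds coherently."""
    acc = np.zeros((2 * half + 1, 2 * half + 1))
    for x, y in pix_coords:
        acc += cutout(mosaic, x, y, half)
    return acc

# toy demonstration: two unit point sources on an empty mosaic
mosaic = np.zeros((100, 100))
mosaic[30, 40] = 1.0
mosaic[70, 60] = 1.0
stacked = stack_cutouts(mosaic, [(40.0, 30.0), (60.0, 70.0)], half=5)
```
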
In the SMC there are only two sources to consider (ID numbers 28 and 29 in the same table), of which only one clearly shows a dust component (number 29). For this reason, we perform the stacking analysis on the LBVs (Classes 1 and 2 in Table~\ref{tab:sample}) of the LMC only. To obtain stacked images, for each source we created cutouts of the SAGE and HERITAGE mosaics of the LMC centred, to the nearest pixel, at the sky coordinates as in Table~\ref{tab:sample}, after cross-checking the coordinates with the actual detections in the map. As a \emph{Wide-Field Infrared Survey Explorer (WISE)} image of the whole LMC galaxy is not available, in this case we queried the WISE image service on the IRSA website and downloaded images centred at the sky coordinates of each source. For none of the instruments was it necessary to re-sample the images, as they originate from the same grid. Images of the following bands were stacked: WISE~W3, MIPS24, MIPS70, PACS100, PACS160, and SPIRE250. For each band we summed the cutouts of all sources. The resulting stacked images are shown in Fig.~\ref{fig:residuals-LMC}, while the associated photometry is summarised in Table~\ref{tab:stackedphoto}. In the same figure, we also show the same stack images without RMC$\,$143, which, although hosting the most massive LBV nebula recorded so far in the LMC \citep{2019Agliozzo}, is embedded in a very bright and crowded region, just south of the 30~Doradus star-forming complex. The comparison of the resulting photometry allows us to assess the relative contribution of the other stars to the infrared emission. Following the same method, we also stacked cutout images extracted from the LMC $1.4\,{\rm mm}$ SPT+\emph{Planck} map of \citet{2016PlanckMCs}. However, we did not obtain any detection above the noise ($\sim\!250\,{\rm mJy/beam}$) and so do not show the resulting image here. We then determined the aggregate flux density of the combined source by performing a 2-D Gaussian fitting in each image.
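A minimal version of this fitting step, using the SciPy least-squares machinery named in the text, might look as follows; for simplicity the sketch fixes a circular beam width (as adopted for the MIPS70 and PACS stacks, whereas the WISE~W3 and MIPS24 fits left the widths and a position angle free), and the conversion of the fitted Gaussian volume into Jy, which depends on the map units, is omitted:

```python
import numpy as np
from scipy.optimize import least_squares

def gauss2d(p, xx, yy, sigma):
    """Circular 2-D Gaussian of fixed width sigma on a constant background."""
    amp, x0, y0, bkg = p
    return bkg + amp * np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2)
                              / (2.0 * sigma ** 2))

def fit_stack(image, sigma_beam):
    """Fit peak height, centre and background (widths fixed to the beam)."""
    yy, xx = np.indices(image.shape)
    p0 = [image.max() - np.median(image),           # peak-height guess
          image.shape[1] / 2.0, image.shape[0] / 2.0,
          np.median(image)]                         # background guess
    res = least_squares(
        lambda p: (gauss2d(p, xx, yy, sigma_beam) - image).ravel(), p0)
    amp, x0, y0, bkg = res.x
    flux = amp * 2.0 * np.pi * sigma_beam ** 2      # volume under the Gaussian
    resid = image - gauss2d(res.x, xx, yy, sigma_beam)
    return flux, (x0, y0), bkg, resid

# toy demonstration on a noiseless synthetic "stacked" image
yy, xx = np.indices((41, 41))
model = gauss2d([10.0, 20.3, 19.6, 2.0], xx, yy, 3.0)
flux, centre, bkg, resid = fit_stack(model, sigma_beam=3.0)
```

The residual image returned here corresponds to the residual panels shown in Fig.~\ref{fig:residuals-LMC}.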
Peak height, $x$ and $y$ centre positions and background level are fitted parameters; the width parameters $\sigma_a$ and $\sigma_b$ were fixed to the instrumental beam $\sigma$ for MIPS70, PACS100, and PACS160 images, while they were left as free parameters (along with a position angle) for the WISE W3 and MIPS24 fitting because at these wavelengths sources were slightly resolved. We used the non-linear least-squares method of the \texttt{Scipy Optimize} package \citep{2020Scipy} for minimising the fitting function residuals. To account for systematic errors in this procedure (e.g. missing flux in sidelobes), we corrected the values by a factor determined in each filter by comparing our own estimates of individual test sources with catalogue measurements. Figure~\ref{fig:residuals-LMC} shows the stacked images and their residuals after subtraction of the 2-D Gaussian. We do not include SPIRE~250 because there is no source detected, owing to source confusion over the large beam. For the stack analysis at the IRAC bands, we only summed the photometry from the catalogue, as all the individual sources were well detected. We expect the amount of flux density missed by the small catalogue aperture to be negligible since the IRAC images do not show hints of resolved nebular emission at these wavelengths. For each passband, the resulting flux densities from the stack analysis are plotted in the SEDs in Fig.~\ref{fig:sedstot} as blue circles. The top figure visualises the sum of all 18 candidate and confirmed LBVs in the LMC, while the bottom shows the same without RMC~143. In the top figure the pink symbols represent lower limits. The $70~\mu$m limit is given by the measurement in the stacked image without RMC~143, where the central source is better detected without the contamination by the 30~Dor region (as is evident from the comparison of the $70~\mu$m error bars in the top and bottom panels). 
The $250~\mu$m limit is due to a source detected at the central position of a stack image built from the only four sources well detected in this filter (namely RMC~127, LHA~120-S~119, LHA~120-S~61, and RMC~71). In the lower plot, the $160~\mu$m lower limit is set based on the detection in the stack image obtained from the 13 sources (IDs: 2, 5, 6--8, 10, 12--18) that are less affected by crowding. The total SED from about 1 to $250~\mu$m resembles that of LBV stars with dense ionised stellar winds, surrounded by extended dusty nebulae (Class~1a). We initially fit the 2MASS $J$, $H$, $K_{\rm s}$ and IRAC~$3.6$, $4.5$ and $5.8\,\mu{\rm m}$ data with a power law representing the sum of stellar winds and photospheres \citep[power-law free-free emission from ionised stellar winds, ][and $F_\nu\propto\lambda^{-2}$ Rayleigh-Jeans photosphere emission]{1975PF}, and then we added the resulting best fit and uncertainties to the grey-body function built to fit the WISE~W3, MIPS24, MIPS70, PACS100, and PACS160 measurements. In the figure, the best-fit total model is given by a black continuous line. The power-law fit of the wind component is the blue dashed line. The grey shaded region represents all the grey-body components that fit the data with a $\chi^2$ up to twice the minimum value of the best fit. Because of the large uncertainty of the $70~\mu$m measurement produced by the crowded region around RMC~143, we set the photometric measurement at $70~\mu$m without RMC~143 as a lower limit to constrain the range of possible models. We also introduce another constraint at longer wavelengths by using the lower limit at $250~\mu$m. The resulting dust temperature, mass, and parameter $\beta$ are listed in the figures. The dust mass of the aggregate source is computed by assuming the flux density extracted from the best fit at $\rm 850\,\mu$m. The adopted absorption coefficient $\kappa_{\rm 850\mu m}$ is $\rm 1.7\, cm^{2}\, g^{-1}$ \citep{1987Sodroski}.
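The last step, converting a flux density into an optically thin dust mass via $M_{\rm d}=F_\nu d^2/(\kappa_\nu B_\nu(T_{\rm d}))$, can be sketched as below. The distance ($\sim$50 kpc to the LMC) and $\kappa_{850\mu\rm m}=1.7\,\rm cm^{2}\,g^{-1}$ follow the text, while the flux density and dust temperature are hypothetical placeholders, not the fitted values.

```python
import numpy as np

# physical constants in cgs units
h, c, k_B = 6.626e-27, 2.998e10, 1.381e-16
M_SUN = 1.989e33                      # g
D_LMC = 50e3 * 3.086e18               # ~50 kpc in cm

def planck_nu(nu, T):
    """Planck function B_nu(T) in erg s^-1 cm^-2 Hz^-1 sr^-1."""
    return 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k_B * T))

def dust_mass_msun(F_nu_jy, lam_um, d_cm, kappa_cgs, T_dust):
    """Optically thin dust mass M = F_nu d^2 / (kappa B_nu(T)), in Msun."""
    F_cgs = F_nu_jy * 1e-23           # Jy -> erg s^-1 cm^-2 Hz^-1
    nu = c / (lam_um * 1e-4)          # wavelength in microns -> Hz
    return F_cgs * d_cm**2 / (kappa_cgs * planck_nu(nu, T_dust)) / M_SUN

# hypothetical 850-um flux density (10 mJy) and temperature (80 K)
m_dust = dust_mass_msun(F_nu_jy=0.01, lam_um=850.0,
                        d_cm=D_LMC, kappa_cgs=1.7, T_dust=80.0)
```

The mass scales linearly with $F_\nu$ and inversely with $\kappa_\nu$, which is why the choice of opacity (SD87 versus GD14 later in the text) shifts the result directly.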
\subsection{Stack analysis of other evolved stars in the LMC} \label{sec:stackingAGBs} The same stacking analysis was performed for the sample of RSGs and AGBs in the catalogue of \citet{2012Riebel}, which contains $33\,501$ sources ($6\,709$ carbon-rich AGB, $19\,566$ oxygen-rich AGB, $5\,874$ RSG and $1\,352$ extreme ``x-AGB'' stars). Previously, \citet{dharmawardena} stacked the PACS100 images of the sources in Riebel et al., split up by mass-loss rate bin and chemical class, and found some detections from carbon-rich and oxygen-rich AGB stars with the highest mass-loss rates. The wavelengths covered here are those of the four IRAC filters at 3.6, 4.5, 5.8 and 8~$\rm \mu$m, the two MIPS filters at 24 and 70~$\rm \mu$m, and the PACS filters at 100 and 160~$\rm \mu$m. For completeness, we also considered the SPIRE image at 250~$\rm \mu$m, although with low expectation of obtaining a detection because of source confusion at those wavelengths. We cross-matched the list of LBVs and the catalogues of Riebel et al. and found two common sources (RMC~66 and RMC~84), which we removed from their sample (they were catalogued as RSGs). Various attempts were made to obtain a detection at wavelengths longer than $24\,\rm \mu$m. Initially, the images of individual categories of stars were stacked separately, with and without applying a confusion threshold on the basis of the standard deviation evaluated from all the pixels in each field (measuring about 50$\times$50 pixels). Finally, all the $33\,501$ sources were summed together. In the $70\,\mu{\rm m}$ and longer-wavelength stacked images, we did not obtain any statistically meaningful detection for the RSGs, the C- and O-rich AGBs, or the whole sample. In the case of the ``extreme'' AGBs, which are considered the main dust producers among all AGBs and RSGs in the MCs \citep{2012Boyer,2012Riebel}, we evaluated possible detections at 24, 100, 160, 250, 350 and $500\,\rm \mu$m.
We noticed that at far-infrared wavelengths the stacked images are dominated by confusion when the following fields are included: J053929.34-694614.8, J053911.41-690824.7, J054515.83-694648.1, J051913.89-693818.3, J045506.54-691708.6. We also found that five other sources have been identified as YSOs or other types of stars \citep[J053840.77-690603.3, J053856.58-690417.3, J053839.68-690537.9, J045400.16-691155.4, J053238.60-682522.1, ][]{2011Evans,2013WalbornBarba,2020vanGelder,2019Oliveira}. We thus excluded these ten sources from further analysis. The final sample of extreme AGB stars from \citet{2012Riebel} that we stacked comprises 1342 objects. We obtained significant detections up to $100\,\mu\rm m$, and a marginal detection at $160\,\mu\rm m$, as shown in Fig.~\ref{fig:residuals-xAGBs}. To produce the composite continuum spectrum in the figure, no weighting based on the stellar luminosity or normalisation was applied. The spectrum simply comprises the aggregated flux densities measured from the stacked images at each wavelength. We extracted the photometry as described in the preceding subsection. At $24\,\rm \mu$m we compared our photometry extracted from the stack image with the stacked photometry \citep[obtained by summing the photometry in][]{2012Riebel} and found a consistent value. We fitted the data with two components: a black body of fixed temperature (3000~K) describing the stellar photosphere, and a grey body with dust spectral index $\beta=0$. Because the near-infrared data of AGBs are extremely reddened by the optically thick dust, we also fitted the extinction by using the extinction curve of \citet{1999PASP..111...63F}, with selective parameter $R_{V} = 3.1$. Fig.~\ref{fig:sedstot-xAGBs} shows the photometry and resulting fit with and without extinction correction. For comparison, the integrated flux density at 100~$\mu$m from 18 LBVs is of the same order of magnitude as that from 1342 extreme-AGB stars (see Table~\ref{tab:stackedphoto}).
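The two-component model used here (a black body at a fixed 3000~K for the photosphere plus a grey body with $\beta=0$) can be sketched as follows. The pivot frequency and the two amplitudes are arbitrary normalisations for illustration, not fitted values, and the extinction correction is omitted.

```python
import numpy as np

h, c, k_B = 6.626e-27, 2.998e10, 1.381e-16  # cgs constants

def bb_nu(nu, T):
    """Planck function B_nu(T) in cgs units."""
    return 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k_B * T))

def model_fnu(lam_um, a_star, a_dust, T_dust, beta=0.0, T_star=3000.0):
    """Photosphere (black body at a fixed 3000 K) plus a grey body of
    spectral index beta; a_star and a_dust are free scaling amplitudes
    that absorb solid angle and emissivity."""
    nu = c / (lam_um * 1e-4)
    grey = (nu / 1e12)**beta * bb_nu(nu, T_dust)   # 1e12 Hz is an arbitrary pivot
    return a_star * bb_nu(nu, T_star) + a_dust * grey
```

For $\beta=0$ the dust term reduces to a scaled black body; the stellar term dominates in the near-infrared and the grey body in the far-infrared, which is what the fit in Fig.~\ref{fig:sedstot-xAGBs} exploits.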
The dust mass derived from this fitting assumes that the dust is optically thin (at the wavelength at which $\kappa$ is defined), and thus must be considered a lower limit. \section{Discussion} \label{sec:discussion} \subsection{LBV dust masses versus stellar luminosity and nebular size} \begin{figure} \centering \includegraphics[width=.9\linewidth]{mdustvsL} \caption{Estimated dust masses for individual stars as a function of the stellar bolometric luminosity. Red circles: LMC LBVs (this work, see Table~\ref{tab:dustproperties}). Blue circles: Galactic LBVs, specifically AG~Car \citep{2015Vamvatira}; G24.73+0.69, G25.5+0.2, HD~168625, Hen~3-519, AFGL~2298 \citep[][and references therein]{2003Clark}; G26.47+0.02 \citep{2012Umana}; G79.49+0.26 \citep{2014Agliozzo}; HR Car \citep{2002Machado,2017Buemi}; Pistol star \citep{2014Lau}; Wra 751 \citep{2013Vamvatira}. Error bars are shown when available from the literature.} \label{fig:mdvL} \end{figure} The uncertainty on the distance is only $\sim\!2\%$ for the Magellanic sources (see \S\ref{sec:modelling:grey}), and thus their stellar luminosity is well constrained ($\sim\!4\%$ uncertainty due to the distance). This offers a great opportunity to investigate any possible dependence of the dust mass on the stellar parameters and on the nebular size. Fig.~\ref{fig:mdvL} shows the dust masses estimated in Sect.~\ref{sec:modelling:grey} for 7 Class 1 LBVs in the LMC and RMC\,40 in the SMC versus the stellar bolometric luminosity as reported in the literature (Table~\ref{tab:stellparam}). In the same plot, dust masses for some well-known nebulae around candidate and confirmed LBVs in the Milky Way are displayed. Dust production in these stars does not seem correlated with the stellar properties. For five objects the nebular size is known from the literature and one has been determined in this work. Sizes span from 0.3 to 2.2~pc (Table~\ref{tab:stellparam}). We do not observe a systematic trend in the dust mass versus the nebular size.
\subsection{Total dust mass in the LMC} \begin{table} \centering \caption{Summary of estimated LBV dust masses in the LMC. } \label{tab:dustmasses} \resizebox{0.475\textwidth}{!}{ \begin{tabular}{lcc} \hline\hline Source & \multicolumn{2}{c}{Grey-body fitting} \\ &($\kappa_{\lambda}$ as SD87) & $(\kappa_{\lambda}$ as GD14) \\ \hline Stack 18& $9^{+33}_{-6}\times 10^{-2}\,M_{\odot}$&$11^{+6}_{-3}\times10^{-2}\,M_{\odot}$ \\ Mean per source& $0.5^{+1.8}_{-0.3}\times10^{-2}\,M_{\odot}$&$0.6^{+0.3}_{-0.2}\times10^{-2}\,M_{\odot}$ \\ \hline \end{tabular} } \tablefoot{Results are given for two different opacity values, as in \citealt[][]{1987Sodroski} (SD87) and \citealt[][]{2014Gordon} (GD14).} \end{table} Since the ultimate goal of this work is to compare the LBV dust masses with the entire LMC dust yield, we also fit the stack data with the simple grey-body model that \citet{2014Gordon} used to derive the global dust properties of the ISM in the LMC and SMC. In particular, we adopt their absorption coefficient $\kappa_{\lambda}$ (calibrated with ultraviolet and optical gas-phase absorption measurements in the diffuse ISM of the Galaxy, which provide the expected amount of dust and are independent of the far-infrared and submillimetre dust emission). At $160\,\rm \mu m$, Gordon et al. derived $\kappa_{160} = 9.6\pm0.4 \pm 2.5\,\rm cm^{2}\,g^{-1}$, where the first error is statistical and the second is the systematic uncertainty due to the assumption that the dust in the MCs is like that in the Galaxy. With this $\kappa_{160}$, our stack PACS160 flux, and our stack SED fit temperature (top panel in Fig. \ref{fig:sedstot}), we obtain for the 18 LMC (Class~1 and 2) LBVs $M_{\rm dust}=0.11^{+0.06}_{-0.03}\, M _{\odot}$, and an average of $\langle M_{\rm dust}\rangle=0.006^{+0.003}_{-0.002}\, M _{\odot}$ per source. The uncertainty of about $25\%$ in $\kappa_{\lambda}$ due to the systematic error is not taken into account.
Table~\ref{tab:dustmasses} summarises the dust masses derived from grey-body fitting of the stack data. We notice that the integrated dust mass from 18 sources is of the same order of magnitude as the dust mass estimated in the Homunculus nebula of $\eta$~Carinae \citep{2017Morris}. This difference between a whole population and a single Galactic object may be the result of different metallicities. It may also suggest that several stars have still not experienced the required mass loss to form dust (giant eruptions) or that they do not have colliding winds within a close binary. It should be noted that only a handful of sources contribute significantly to the stacked dust mass (compare Tables~\ref{tab:dustproperties} and \ref{tab:dustmasses}). Additionally, the dust masses estimated here could be a lower limit, as cooler dust may have escaped the observations. Sensitive sub-millimetre observations with ALMA of all Class~1 and 2 objects would be an asset for further studies. For the LMC, \citet{2014Gordon} derived an integrated dust mass of $(7.3\pm1.7)\times 10^{5}\,M_{\odot}$ (averaging the results from three different models, or $(8.1\pm2.8)\times 10^{5}\,M_{\odot}$ with the simple grey-body model only). A more recent analysis based on the same dataset reports estimates about a factor of two lower \citep{2017Chastenet}. In the classical scenario, the LBV phase is estimated to last around $10^{4}-10^{5}\,\rm yr$, which is a small fraction compared to the age of the LMC \citep[about 15~Gyr,][]{2009H&Z}. To evaluate the potential contribution of LBVs to the ISM dust yield, we must account for the entire population of LBVs over the lifetime of the LMC. This is not an easy task, because stellar evolution models are not able to predict how many BSGs become LBVs (or manifest the LBV phenomenon). Ideally, we would like to compare the ratio of LBVs and BSGs of similar luminosity, but finding a list close to completeness is challenging.
\citet{2017Urbaneja} analysed 90 LMC BSGs to study the gravity-luminosity relationship, but this number is probably a lower limit. In fact, in \citet{2009Bonanos} we count 133 stars earlier than B4 and of luminosity classes Ia and Iab in the LMC, excluding four objects that overlap with the LBV sample. However, \citet{2009Bonanos} could not claim completeness, as they targeted O, B supergiant stars with both accurate coordinates and spectral classification. We queried Simbad for stars with spectral types earlier than B4 and of luminosity class I, within 2.5 deg from the LMC centre, and found around 230 BSGs \citep[several of these objects appear in][]{2002Massey}. \citet{2010Massey} estimated $\sim6100$ unevolved massive stars with $M>20\,M_{\odot}$, and $\sim700$ with $M>50\,M_{\odot}$. \subsubsection{Simple case: Constant number of LBVs over time} The simplest estimate we can make of the total dust mass produced by LBVs in the LMC over all time is by assuming that the number of LBVs has on average been the same as today. This simple assumption avoids having to consider the OB to LBV rate, but it does depend directly on the assumed LBV phase duration. For the range of accepted LBV phase durations of $10^4$--$10^5$~yr, the LMC age of 15~Gyr, and the mass produced by the current LBV sample of $0.11^{+0.06}_{-0.03}\,M_\odot$, the range of total dust mass is from $1.2\times10^4\,M_\odot$ to $1.7\times10^5\,M_\odot$, with a nominal value of $6.6^{+10.4}_{-5.4}\times10^4\,M_\odot$ for an assumed LBV phase duration of $25\,{\rm kyr}$. \subsubsection{Accounting for star formation history and initial mass function} We assume that the LBV population closely followed the fluctuations in the star formation history (SFH) of the LMC. The latter was computed by \citet{2009H&Z}, who derived star formation rates (SFR) in about 1400 LMC sub-regions in time bins of $\Delta \log(t) = 0.2$.
We integrate the SFR over these sub-regions, in order to obtain the total SFH for the LMC as in Fig.~11 of \citet{2009H&Z}. For the Initial Mass Function (IMF) we adopt the power law slope derived by \citet{2018Schneider}, who found an excess of massive stars by $30\%$ in the 30~Dor star forming region. We also use the Salpeter IMF as a possible lower limit in the LMC. Finally, we derive the total dust yield by LBVs as \begin{multline*} \frac{\left(\displaystyle\sum_{i} {\rm SFR}(t_{i})\Delta t_i\right) \times f_{\rm LBV, MS}\times \displaystyle\int_{25M_\odot}^{100M_\odot} \xi(m) \,dm}{\displaystyle\int_{0.1M_\odot}^{100M_\odot} m \xi(m) \,dm} \times \langle M_{\rm dust}\rangle, \end{multline*} where $\xi(m)=\xi_{0}\times m^{\gamma}$ is the IMF, ${\rm SFR}(t_{i})$ is the SFR in the time bin $i$, $\langle M_{\rm dust}\rangle$ is the mean dust mass produced per LBV, and $f_{\rm LBV, MS}$ is the fraction of stars in the mass range $25$--$100M_\odot$ experiencing the LBV phenomenon. The IMF is normalised to the total mass of stars with masses between 0.1 and 100$\,M_{\odot}$. Assuming the slope $\gamma=-1.9$ from \citet{2018Schneider} (Case 1) and $\gamma=-2.35$ from Salpeter (Case 2), we find the following ranges of dust mass: $ 17.1^{+14.0}_{-3.7}\times10^{4}\times f_{\rm LBV, MS} \,M_{\odot}$ (Case 1) and $ 4.4^{+3.6}_{-1.0}\times10^{4}\times f_{\rm LBV, MS} \,M_{\odot}$ (Case 2). The error bars include the uncertainty on the dust mass and on the SFR. A major uncertainty in this relation lies in the fraction of massive main-sequence stars that will become LBV stars ($f_{\rm{LBV, MS}}$). Obviously, the number cannot exceed unity. Because the conditions leading to the LBV phenomenon are unknown, it is unconstrained by evolutionary models.
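Both population estimates above can be checked numerically. The durations, dust masses, and IMF mass limits follow the text; the total formed stellar mass in the IMF-weighted case is a hypothetical placeholder standing in for the integrated SFH of \citet{2009H&Z}.

```python
from scipy.integrate import quad

# --- Simple case: constant number of LBVs over time (values from the text) ---
AGE_LMC = 15e9      # yr, age of the LMC
M_NOW = 0.11        # Msun, present dust mass of the 18-LBV stack

def total_dust_constant(duration_yr, m_now=M_NOW):
    # number of successive LBV "generations" times the mass per generation
    return AGE_LMC / duration_yr * m_now

lo = total_dust_constant(1e5, m_now=0.11 - 0.03)   # longest phase, lower mass bound
hi = total_dust_constant(1e4)                      # shortest phase
nominal = total_dust_constant(25e3)                # nominal 25 kyr duration

# --- IMF-weighted yield, following the displayed expression ---
def lbv_dust_yield(m_formed, gamma, f_lbv, m_dust_mean):
    """m_formed stands for sum_i SFR(t_i)*dt_i, the total stellar mass
    ever formed (Msun); xi(m) ~ m**gamma (xi_0 cancels in the ratio)."""
    n_massive, _ = quad(lambda m: m**gamma, 25.0, 100.0)    # number of 25-100 Msun stars
    m_norm, _ = quad(lambda m: m**(1 + gamma), 0.1, 100.0)  # mass normalisation
    return m_formed * f_lbv * n_massive / m_norm * m_dust_mean

# hypothetical total formed mass of 5e9 Msun, f_LBV = 1 for illustration
y_schneider = lbv_dust_yield(5e9, -1.9, 1.0, 0.006)   # Case 1 slope
y_salpeter = lbv_dust_yield(5e9, -2.35, 1.0, 0.006)   # Case 2 slope
```

The SFR and $f_{\rm LBV,MS}$ cancel in the ratio of the two IMF cases, which comes out near 4, roughly consistent with the quoted $17.1\times10^4$ versus $4.4\times10^4\,M_\odot$ results.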
The observed fraction with respect to BSGs, $f_{\rm{LBV, obs}} = n_{\rm LBV, obs}/n_{\rm BSG, obs}$, needs to be corrected for incompleteness and, more importantly, for the ratio of the duration of the BSG and LBV phases, $t_{\rm BSG} / t_{\rm LBV}$. Evolutionary models suggest $t_{\rm BSG} \leq 10^{6}\,\rm yr$ for stars more massive than $25\,M_{\odot}$ in the He-burning phase, and $t_{\rm MS} \leq 10^{7}\,\rm yr$ for stars in the H-burning phase \citep{1989M&M}. While the number of stars in the LMC currently exhibiting the LBV phenomenon should be reasonably correct ($n_{\rm LBV, obs}$), it is subject to fluctuations, and some current LBVs might be inactive. The number of BSGs in the LMC that may become LBVs ($n_{\rm BSG, obs} \sim 230$, see above) is also uncertain. The number of MS O,B stars (6700) may be more reliable \citep{2002Massey, 2010Massey}. From the currently available information, it seems plausible that $f_{\rm LBV,MS}$ ranges between about $30\%$\footnote{Comparing the LBVs with the MS OBs, taking $10^7{\rm yr}$ as the MS lifetime and an upper limit of $10^5{\rm yr}$ for the LBV phase duration gives,\\ $f_{\rm LBV} = (18\,{\rm LBVs}/6700\,{\rm MS\,OBs})\times(10^7{\rm yr}/10^5{\rm yr})=27\%$.} and $100\%$\footnote{Comparing the LBVs with the BSGs, taking $10^6{\rm yr}$ as the BSG phase duration and $10^5{\rm yr}$ for the LBV phase duration gives,\\ $(18\,{\rm LBVs}/230\,{\rm BSGs})\times(10^6{\rm yr}/10^5{\rm yr})=78\%$.\\ As the LBV phase may be an order of magnitude shorter than this, it suggests that the number of 230 BSGs may be incomplete.}. We repeat a similar calculation for the AGB/RSG stars. We use the average dust mass per star determined from the stack of all extreme-AGBs ($9^{+2.2}_{-3.0}\times10^{-8}\,M_{\odot}$) and we integrate over stellar masses between 1 and 25~$M_\odot$ to also include RSGs.
We find a total dust mass of $136^{+56}_{-35}\,M_{\odot}$ (Case 1) or $144^{+59}_{-37}\,M_{\odot}$ (Case 2), which are much lower values than for LBVs, but perhaps consistent with the long dust injection timescales of AGBs \citep[$\sim2\times10^{9}\,\rm yr$,][]{2018Micelotta}. Moreover, the dust lifetime against processing and destruction by supernova shocks in the ISM is shorter, around $2\times10^{7}\,\rm yr$ \citep{2015Temim}. The inefficiency of existing AGB stars to produce the dust mass observed in the ISM was already pointed out by \citet{2009Matsuura} for the LMC and by \citet{2012Boyer} and \citet{2013Matsuura} for the SMC. Dust injection timescales from LBVs (and subsequent SNe) are much shorter ($\leq10-100\,\rm Myr$). In their study of the total dust input from stars in the SMC, \citet{2012Boyer} estimated a dust production rate of $\sim10^{-6}\,M_{\odot}\,\rm yr^{-1}$ from LBVs, matching the input from all cool stars, but concluded that the grains would not survive the subsequent supernova explosion. \subsection{Potential dust survival from SN explosions} Now we evaluate how likely this LBV dust is to survive the SN forward shock (blast wave) which will follow the LBV or WR evolutionary stage. The case of ISM grain processing by the forward shock was analysed by \citet{1978Barlow}, \citet{2006Nozawa}, and \citet{2014Bocchio}. The processes of interest are generally sputtering and shattering. There are a few arguments to mention in favour of dust survival in the SN blast wave in LBVNe. 1) LBV eruptions are a source of large grains ($>0.1\,\rm \mu m$) due to the favourable conditions for particle growth \citep{2011Kochanek}. The relatively low $\beta$ values found in this work are also consistent with large grains. However, they could also be low due to a temperature gradient over the nebula. 2) The material in LBVNe seems often distributed in a clumpy configuration, spread at large distances from the star. 3) Such nebulae can be asymmetric (e.g.
RMC~143, LHA$\,$120-S$\,$119), subtending solid angles of less than $4\pi$ so that some of the SN energy will leak out without interacting with the nebula. Such properties reduce the importance of the destructive action by the shock, as for example evidenced in studies of the newly formed dust in SN ejecta impacted by the reverse shock. In particular, survival rates vary between 40 and $98\%$ \citep{2008Nath, 2010Silvia, 2012Silvia, 2016Biscaro, 2018Gall} for dust reached by the reverse shock, and around $85\%$ for large grains in the ISM \citep[e.g.][]{2006Nozawa}. More results are summarised in Table~2 of \citet{2018Micelotta}. Some studies also show that shocks can induce dust (re)formation and growth \citep[e.g.][]{2011KochanekSN, 2019Matsuura}. Recent 3-D hydrodynamical simulations report that the wind-driven bubbles created during the entire life of a massive star create the conditions for the dust to largely survive \citep{2019MartinezGonzalez}, or that under favourable ambient conditions implantation and trapping of ions in dust grains can counteract grain destruction by sputtering \citep{2020Kirchschlager}. Dust destruction is particularly negligible if the pre-existing shell is massive \citep{2019MartinezGonzalez}, which is the case of LBV nebulae. From the observational point of view, a great example is SN1987A, the progenitor of which may have been an LBV \citep[as speculated by][]{2007Smith}. Along with the discovery of dust formation in the SN ejecta a few years after the explosion \citep{1989Danziger}, followed up by far-infrared and sub-millimetre observations about three decades later \citep[which revealed large amounts $\sim 0.5\,M_{\odot}$ of cold dust, ][]{2011Matsuura,2014Indebetouw,2015Wesson}, another dust component distributed in an equatorial ring lost by the progenitor $20-40\,\rm kyr$ ago was found \citep{1989Arnett, 1993McCray, 2006Bouchet}. 
The fastest part of the forward shock has passed the ring \citep{2015Fransson}, offering unique insights into the resulting dust destructive processing. The silicate grains in the ring are collisionally heated by the SN blast wave \citep{2006Bouchet}. \citet{2016Arendt} proposed that some dust grains were destroyed by sputtering, supporting theoretical predictions \citep{2008Dwek,2010Dwek}. Subsequently, \citet{2019Matsuura} reported a $30-70\,\rm \mu m$ excess at day $\sim10\,000$ on top of the earlier modelled hot and warm component (in the ring) and cold component (in the SN ejecta), and a dust mass about a factor of 10 larger than previous estimates at day $\sim8\,000$. They identified two possible mechanisms behind this infrared excess: dust re-formation (dust grains formed from the gas phase) and dust growth (survived grains accreting atoms from the gas phase). Dust re-formation was also suggested in the cool dense shell created between the forward and reverse shocks as the SN ejecta ploughs into the existing circumstellar material. Several studies of type IIn SNe (which, in some cases, may have an LBV as progenitor) are a good example \citep[e.g.][]{2008SmithFoleyFilippenko, 2009Fox, 2014Gall,2016Andrews,2018Chugai}. Grain growth may also be a viable mechanism as explained by \citet{2020Kirchschlager}. \section{Summary and conclusions} We have analysed the infrared images of Magellanic LBVs available in the archives, adding for a few sources new photometry that either was missing or not accurate because of extended objects or crowded fields. We have revised the Magellanic LBV sample on the basis of a new infrared SED-based classification scheme and literature information. The sources can be divided into four classes. All strongly active LBVs show either a cool dust excess peaking in the mid-infrared (Class 1a and 1b) or an excess of near-infrared emission due to free-free emission from the ionised stellar wind (Class 2), or both (Class 1a).
These features are also often present in several candidate LBVs, which appear in the literature as ex-/dormant LBVs. Four sources can be removed from our initial sample, as we find that in past studies these stars did not satisfy the LBV requirements and, in addition, they do not present any of the features mentioned above (Class 4). Six other sources are known as sgB[e] (Class 3): their flux density distributions do not resemble those of confirmed LBVs, consistent with a different mechanism of dust formation, as suggested in the literature. For this reason we excluded them from further analysis. We have employed a simple grey-body fitting method to model the infrared SED of individual sources. In the LMC, which has the most numerous list of LBVs, large amounts of dust are observed ($\sim\!10^{-3}$--$10^{-2}\,M_{\odot}$), similar to Galactic LBVNe. We stacked the infrared images of the LMC LBVs and extracted the photometry of the resulting source, detected up to $160$--$250\,\rm \mu m$. The integrated SED from the stacks resembles that of LBVs with a strong ionised stellar wind and an extended dusty nebula. The SED can be fitted with only two components: a power-law describing the free-free spectrum of ionised stellar winds and the stellar photosphere, and a single-component grey-body for the dust. For the grey-body we adopted two different values of the $\kappa$ parameter, including the value determined by \citet{2014Gordon} to fit the integrated ISM SED of the LMC. A significant contribution to the stack SED comes from a few sources, the most important being RMC~143, which was already identified as hosting a massive nebula \citep{2019Agliozzo}. We obtain an integrated present dust mass of $0.11^{+0.06}_{-0.03}\,M_{\odot}$. We have repeated a similar analysis on the sample of AGBs and RSGs by \citet{2012Riebel}. We obtain a detection in the stacked images only when considering the ``extreme''-AGBs.
We find that the integrated $160\,\mu{\rm m}$ emission of 1342 extreme-AGBs is of the same order of magnitude as that of 18 LBVs. The integrated dust mass from these sources is $1.2^{+0.3}_{-0.4}\times10^{-4}\,M_{\odot}$. We do not find any correlation between the dust masses and the stellar luminosities. This could be due to the fact that such stars have different evolutionary histories or that the dust production mechanism does not depend on the initial mass of the star. Most likely we are also unable to detect the lowest-mass nebulae. To estimate the total dust mass produced by LBVs in the LMC during its full lifetime, we consider two cases: a constant number of LBVs over time, and a case accounting for the IMF and SFH. The uncertainty on the duration of the LBV phase in the first case, or on the population of LBVs in the second case, adds significant uncertainty to the total mass produced by LBVs. The range of total dust masses produced by LBVs over the lifetime of the LMC spans between $10^{4}$ and $10^{5}\,M_{\odot}$, suggesting that LBVs are potentially significant dust factories in the LMC. The LBV dust masses per source of $10^{-3}-10^{-2}\,M_{\odot}$ are at least one order of magnitude smaller than the amount of dust ($0.1-1\,M_{\odot}$) postulated in SN ejecta by \citet{2001TF} and \citet{2003Nozawa}. However, those works adopted ideal assumptions for the seed material condensing into dust grains. Mid-infrared observations of CC-SNe suggest smaller dust masses \citep[$10^{-4}-10^{-2}\,M_{\odot}$,][although estimates from single epoch observations of the rapidly evolving SN ejecta may not be representative of the total dust produced or mid-infrared observations may have not captured the emission from colder dust]{2009Kotak,2011Fox}.
On the other hand, ALMA sub-millimetre observations revealed larger amounts of dust in both SN1987A \citep[$0.2\,M_{\odot}$, ][]{2014Indebetouw} and the LBV RMC~143 \citep[$0.055\,M_{\odot}$, ][]{2019Agliozzo}; however, these works assume different opacities at $850\,\rm \mu m$, making the comparison difficult. Still, the LBV phenomenon has the potential to be the second most important source of dust, ahead of low- and intermediate-mass stars. This was also the conclusion by \citet{2012Boyer} in the SMC. It should be noted that, while the LBV dust mass remains uncertain even after analysing a relatively large sample, the CC-SN dust mass is also very uncertain and is based on observations of only a handful of objects. Destruction of dust in LBVNe by a possible subsequent SN blast wave remains to be determined. A significant fraction of this dust mass may survive thanks to several properties of these nebulae, such as large nebular masses, big grains, clumpiness and asymmetry. Dust grain (re)formation or growth may also occur, like in the case of SN1987A \citep{2019Matsuura}. \begin{acknowledgements} D.A. acknowledges funding through the European Union’s Horizon 2020 research and innovation programme under the Marie Sk{\l}odowska-Curie grant agreement no. 793499 (DUSTDEVILS). F.K.~is supported by the Ministry of Science and Technology (MoST) of Taiwan, under grant number MOST107-2119-M-001-031-MY3, and also by Academia Sinica, in the form of Investigator Award AS-IA-106-MO3. GP is supported by ANID – Millennium Science Initiative – ICN12$\_$009. This paper makes use of the following ALMA data: ADS/JAO.ALMA\#2017.A.00054.S and ADS/JAO.ALMA\#2018.A.00049.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada) and NSC and ASIAA (Taiwan) and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ.
This paper also includes data collected at the European Organisation for Astronomical Research in the Southern Hemisphere under ESO programmes 096.D-0047(A), 097.D-0612(A,B), and 0100.D-0469(A,B). This work made use of PyAstronomy. This research has made use of the International Variable Star Index (VSX) database, operated at AAVSO, Cambridge, Massachusetts, USA, and the VizieR catalogue access tool, CDS, Strasbourg, France. The original description of the VizieR service was published in A\&AS 143, 23. This research has made use of the NASA/IPAC Infrared Science Archive, which is funded by the National Aeronautics and Space Administration and operated by the California Institute of Technology. This research made use of Photutils, an Astropy package for detection and photometry of astronomical sources (Bradley et al. 2021). The Cornell Atlas of Spitzer/IRS Sources (CASSIS) is a product of the Infrared Science Center at Cornell University, supported by NASA and JPL. \end{acknowledgements} \bibliographystyle{aa}